Control charts are graphical plots used in production control to determine whether quality and manufacturing processes are being controlled under stable conditions. (ISO 7870-1)
The hourly status is plotted on the graph, and the occurrence of abnormalities is judged from the presence of data that differ from the conventional trend or that fall outside the control limit line.
Control charts are classified into the Shewhart individuals control chart (ISO 7870-2) and the CUSUM (cumulative sum) control chart (ISO 7870-4).
Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts, are a statistical process control tool used to determine if a manufacturing or business process is in a state of control. It is more appropriate to say that control charts are a graphical device for statistical process monitoring (SPM). Traditional control charts are mostly designed to monitor process parameters when the underlying form of the process distribution is known. However, more advanced techniques are available in the 21st century whereby incoming data streams can be monitored even without any knowledge of the underlying process distributions. Distribution-free control charts are becoming increasingly popular.
== Overview ==
If analysis of the control chart indicates that the process is currently under control (i.e., is stable, with variation only coming from sources common to the process), then no corrections or changes to process control parameters are needed or desired. In addition, data from the process can be used to predict the future performance of the process. If the chart indicates that the monitored process is not in control, analysis of the chart can help determine the sources of variation, as this will result in degraded process performance. A process that is stable but operating outside desired (specification) limits (e.g., scrap rates may be in statistical control but above desired limits) needs to be improved through a deliberate effort to understand the causes of current performance and fundamentally improve the process.
The control chart is one of the seven basic tools of quality control. Typically, control charts are used for time-series data, also known as continuous data or variable data. They can also be used for data that have logical comparability (e.g., samples that were all taken at the same time, or the performance of different individuals); however, the type of chart used for this requires consideration.
== History ==
The control chart was invented by Walter A. Shewhart working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a stronger business need to reduce the frequency of failures and repairs. By 1920, the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common- and special-causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it set forth all of the essential principles and considerations which are involved in what we know today as process quality control." Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.
Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes typically produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.
In 1924, or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander for the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.
Bonnie Small worked in an Allentown plant in the 1950s, after the invention of the transistor, and used Shewhart's methods to improve plant performance in quality control, creating up to 5,000 control charts. In 1958, the Western Electric Statistical Quality Control Handbook appeared, based on her writings, and led to wider use at AT&T.
== Chart details ==
A control chart consists of:
Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times (i.e., the data)
The mean of this statistic is calculated using all the samples (e.g., the mean of the means, the mean of the ranges, the mean of the proportions), or using a reference period against which change can be assessed. A median can similarly be used instead.
A centre line is drawn at the value of the mean or median of the statistic
The standard deviation of the statistic (e.g., the square root of the variance of the mean) is calculated using all the samples, or again for a reference period against which change can be assessed. In the case of XmR charts, this is strictly an approximation of the standard deviation, one that does not make the assumption of homogeneity of the process over time that the standard deviation makes.
Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely' and are drawn typically at 3 standard deviations from the center line
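As a minimal sketch of these components (with hypothetical data, and using the sample standard deviation of the plotted statistic rather than the range-based estimators discussed under "Calculation of standard deviation" below):

```python
import statistics

# Plotted statistic for each sample (hypothetical subgroup means)
points = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.2, 10.1]

center = statistics.mean(points)   # centre line
sigma = statistics.stdev(points)   # crude estimate of the statistic's standard deviation
ucl = center + 3 * sigma           # upper control limit
lcl = center - 3 * sigma           # lower control limit
out_of_control = [p for p in points if p > ucl or p < lcl]
```

With this data every point falls inside the limits, so `out_of_control` is empty.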
The chart may have other optional features, including:
More restrictive upper and lower warning or control limits, drawn as separate lines, typically two standard deviations above and below the center line. This is regularly used when a process needs tighter controls on variability.
Division into zones, with the addition of rules governing frequencies of observations in each zone
Annotation with events of interest, as determined by the Quality Engineer in charge of the process' quality
Action on special causes
(n.b., there are several rule sets for detection of signal; this is just one set. The rule set should be clearly stated.)
Any point outside the control limits
A run of 7 points all above or all below the central line: stop the production, quarantine and 100% check, adjust the process, then check 5 consecutive samples and continue the process
A run of 7 points up or down: same instructions as above
=== Chart usage ===
If the process is in control (and the process statistic is normal), 99.7300% of all the points will fall between the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart "signaling" the presence of a special-cause requires immediate investigation.
This makes the control limits very important decision aids. The control limits provide information about the process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the centre line) may not coincide with the specified value (or target) of the quality characteristic because the process design simply cannot deliver the process characteristic at the desired level.
Control charts deliberately omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural centre is not the same as the target perform to target specification increases process variability and increases costs significantly, and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.
The purpose of control charts is to allow simple detection of events that are indicative of an increase in process variability. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When change is detected and considered good its cause should be identified and possibly become the new way of working, where the change is bad then its cause should be identified and eliminated.
The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it is clear that the process is truly in control. Note that with three-sigma limits, common-cause variations result in signals less than once out of every twenty-two points for skewed processes and about once out of every three hundred seventy (1/370.4) points for normally distributed processes. The two-sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points in normally distributed data. (For example, the means of sufficiently large samples drawn from practically any underlying distribution whose variance exists are normally distributed, according to the Central Limit Theorem.)
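For normally distributed data, the signal rates quoted above follow directly from standard normal tail probabilities; a quick sketch using Python's standard library:

```python
from statistics import NormalDist

z = NormalDist()
p3 = 2 * (1 - z.cdf(3))  # two-sided tail probability beyond 3-sigma limits
p2 = 2 * (1 - z.cdf(2))  # two-sided tail probability beyond 2-sigma warning limits
print(round(1 / p3, 1))  # ≈ 370.4 points per false alarm
print(round(1 / p2, 2))  # ≈ 21.98 points per warning
```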
=== Choice of limits ===
Shewhart set 3-sigma (3-standard deviation) limits on the following basis.
The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k².
The finer result of the Vysochanskii–Petunin inequality, that for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²).
In the Normal distribution, a very common probability distribution, 99.7% of the observations occur within three standard deviations of the mean (see Normal distribution).
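The three bounds can be compared numerically at k = 3 standard deviations:

```python
from statistics import NormalDist

k = 3
chebyshev = 1 / k**2                    # any distribution: at most 1/9 ≈ 0.111
vysochanskii_petunin = 4 / (9 * k**2)   # any unimodal distribution: at most 4/81 ≈ 0.049
normal = 2 * (1 - NormalDist().cdf(k))  # normal distribution: ≈ 0.0027
```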
Shewhart summarized the conclusions by saying:
... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating.
Although he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:
Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.
The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman–Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process ...under a wide range of unknowable circumstances, future and past.... He claimed that, under such conditions, 3-sigma limits provided ... a rational and economic guide to minimum economic loss... from the two errors:
Ascribe a variation or a mistake to a special cause (assignable cause) when in fact the cause belongs to the system (common cause). (Also known as a Type I error or False Positive)
Ascribe a variation or a mistake to the system (common causes) when in fact the cause was a special cause (assignable cause). (Also known as a Type II error or False Negative)
=== Calculation of standard deviation ===
As for the calculation of control limits, the standard deviation (error) required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used as this estimates the total squared-error loss from both common- and special-causes of variation.
An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, as an estimator which tends to be less influenced by the extreme observations which typify special-causes.
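For an XmR (individuals) chart this idea takes a simple form: the average moving range divided by the bias-correction constant d2 ≈ 1.128 for subgroups of size two. A sketch with hypothetical data (the constant is a standard table value, not taken from this text):

```python
import statistics

values = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.2, 10.1]  # hypothetical
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = statistics.mean(moving_ranges)  # average moving range
sigma_hat = mr_bar / 1.128               # d2 constant for subgroups of size 2
```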
== Rules for detecting signals ==
The most common rule sets are:
The Western Electric rules
The Wheeler rules (equivalent to the Western Electric zone tests)
The Nelson rules
There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 6, 7, 8 and 9 all being advocated by various writers.
The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.
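As an illustration, a run rule of this kind is straightforward to implement; the sketch below flags a run of 8 points (one of the lengths debated above) all on one side of the centre line:

```python
def run_signal(points, center, run_length=8):
    """Return True if `run_length` consecutive points fall on the same side of `center`."""
    streak, last_side = 0, 0
    for p in points:
        side = 1 if p > center else (-1 if p < center else 0)
        # Points exactly on the centre line break the run.
        streak = streak + 1 if (side != 0 and side == last_side) else (1 if side != 0 else 0)
        last_side = side
        if streak >= run_length:
            return True
    return False
```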
== Alternative bases ==
In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This move continues to be represented by John Oakland and others but has been widely deprecated by writers in the Shewhart–Deming tradition.
== Performance of control charts ==
When a point falls outside the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, it is appropriate to determine if the results with the special cause are better than or worse than results from common causes alone. If worse, then that cause should be eliminated if possible. If better, it may be appropriate to intentionally retain the special cause within the system producing the results.
Even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. So, even an in control process plotted on a properly constructed control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.
Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.
It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart, the CUSUM chart and the real-time contrasts chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.
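This behaviour can be quantified by treating the out-of-control ARL as the reciprocal of the per-point signal probability after a given mean shift (a textbook-style sketch, assuming independent, normally distributed points):

```python
from statistics import NormalDist

z = NormalDist()

def arl(shift, k=3.0):
    # Probability that a single point falls outside the +/- k sigma limits
    # after a `shift`-sigma change in the process mean; ARL is its reciprocal.
    p = (1 - z.cdf(k - shift)) + z.cdf(-k - shift)
    return 1 / p

print(round(arl(0.0), 1))  # 370.4: in-control ARL
print(round(arl(1.0), 1))  # 43.9: a 1-sigma shift takes ~44 points to detect
print(round(arl(3.0), 1))  # 2.0: a 3-sigma shift is detected almost immediately
```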
Many control charts work best for numeric data with Gaussian assumptions. The real-time contrasts chart was proposed to monitor processes with complex characteristics, e.g. high-dimensional data, mixed numerical and categorical variables, missing values, non-Gaussian distributions, and non-linear relationships.
== Criticisms ==
Several authors have criticised the control chart on the grounds that it violates the likelihood principle. However, the principle is itself controversial and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.
Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, whose high variability makes comparisons difficult.
Some authors have criticized that most control charts focus on numeric data. Nowadays, process data can be much more complex, e.g. non-Gaussian, mixed numerical and categorical, or missing-valued.
== Types of charts ==
†Some practitioners also recommend the use of Individuals charts for attribute data, particularly when the assumptions of either binomially distributed data (p- and np-charts) or Poisson-distributed data (u- and c-charts) are violated. Two primary justifications are given for this practice. First, normality is not necessary for statistical control, so the Individuals chart may be used with non-normal data. Second, attribute charts derive the measure of dispersion directly from the mean proportion (by assuming a probability distribution), while Individuals charts derive the measure of dispersion from the data, independent of the mean, making Individuals charts more robust than attributes charts to violations of the assumptions about the distribution of the underlying population. It is sometimes noted that the substitution of the Individuals chart works best for large counts, when the binomial and Poisson distributions approximate a normal distribution. i.e. when the number of trials n > 1000 for p- and np-charts or λ > 500 for u- and c-charts.
Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data is neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use.
== See also ==
Analytic and enumerative statistical studies
Common cause and special cause
Process capability
Seven Basic Tools of Quality
Six Sigma
Statistical process control
Total quality management
== References ==
== Bibliography ==
Deming, W. E. (1975). "On probability as a basis for action". The American Statistician. 29 (4): 146–152. CiteSeerX 10.1.1.470.9636. doi:10.2307/2683482. JSTOR 2683482.
Deming, W. E. (1982). Out of the Crisis: Quality, Productivity and Competitive Position. ISBN 978-0-521-30553-2.
Deng, H.; Runger, G.; Tuv, Eugene (2012). "System monitoring with real-time contrasts". Journal of Quality Technology. 44 (1): 9–27. doi:10.1080/00224065.2012.11917878. S2CID 119835984.
Mandel, B. J. (1969). "The Regression Control Chart". Journal of Quality Technology. 1 (1): 1–9. doi:10.1080/00224065.1969.11980341.
Oakland, J. (2002). Statistical Process Control. ISBN 978-0-7506-5766-2.
Shewhart, W. A. (1931). Economic Control of Quality of Manufactured Product. American Society for Quality Control. ISBN 978-0-87389-076-2. {{cite book}}: ISBN / Date incompatibility (help)
Shewhart, W. A. (1939). Statistical Method from the Viewpoint of Quality Control. Courier Corporation. ISBN 978-0-486-65232-0. {{cite book}}: ISBN / Date incompatibility (help)
Wheeler, D. J. (2000). Normality and the Process-Behaviour Chart. SPC Press. ISBN 978-0-945320-56-2.
Wheeler, D. J.; Chambers, D. S. (1992). Understanding Statistical Process Control. SPC Press. ISBN 978-0-945320-13-5.
Wheeler, Donald J. (1999). Understanding Variation: The Key to Managing Chaos (2nd ed.). SPC Press. ISBN 978-0-945320-53-1.
== External links ==
NIST/SEMATECH e-Handbook of Statistical Methods
Monitoring and Control with Control Charts
In mathematics, the upper and lower incomplete gamma functions are types of special functions which arise as solutions to various mathematical problems such as certain integrals.
Their respective names stem from their integral definitions, which are defined similarly to the gamma function but with different or "incomplete" integral limits. The gamma function is defined as an integral from zero to infinity. This contrasts with the lower incomplete gamma function, which is defined as an integral from zero to a variable upper limit. Similarly, the upper incomplete gamma function is defined as an integral from a variable lower limit to infinity.
== Definition ==
The upper incomplete gamma function is defined as:
{\displaystyle \Gamma (s,x)=\int _{x}^{\infty }t^{s-1}\,e^{-t}\,dt,}
whereas the lower incomplete gamma function is defined as:
{\displaystyle \gamma (s,x)=\int _{0}^{x}t^{s-1}\,e^{-t}\,dt.}
In both cases s is a complex parameter, such that the real part of s is positive.
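Both definitions, together with the complementarity relation γ(s, x) + Γ(s, x) = Γ(s) stated below, can be checked numerically; a sketch using simple midpoint-rule quadrature (the truncation point t = 40 for the upper integral is an arbitrary choice):

```python
import math

def inc_gamma_integral(s, a, b, n=200000):
    """Midpoint-rule approximation of the integral of t^(s-1) e^(-t) over [a, b]."""
    h = (b - a) / n
    return h * sum((a + (k + 0.5) * h) ** (s - 1) * math.exp(-(a + (k + 0.5) * h))
                   for k in range(n))

s, x = 2.5, 1.7
lower = inc_gamma_integral(s, 0.0, x)   # gamma(s, x)
upper = inc_gamma_integral(s, x, 40.0)  # Gamma(s, x), tail truncated at t = 40
print(abs(lower + upper - math.gamma(s)) < 1e-6)
```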
== Properties ==
By integration by parts we find the recurrence relations
{\displaystyle \Gamma (s+1,x)=s\Gamma (s,x)+x^{s}e^{-x}}
and
{\displaystyle \gamma (s+1,x)=s\gamma (s,x)-x^{s}e^{-x}.}
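Both recurrences can be verified numerically; a sketch using midpoint-rule quadrature (the truncation point t = 50 is an arbitrary choice):

```python
import math

def upper_inc_gamma(s, x, t_max=50.0, n=200000):
    """Midpoint-rule approximation of Gamma(s, x), with the tail truncated at t_max."""
    h = (t_max - x) / n
    return h * sum((x + (k + 0.5) * h) ** (s - 1) * math.exp(-(x + (k + 0.5) * h))
                   for k in range(n))

s, x = 1.5, 2.0
lhs = upper_inc_gamma(s + 1, x)
rhs = s * upper_inc_gamma(s, x) + x ** s * math.exp(-x)
print(abs(lhs - rhs) < 1e-5)
```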
Since the ordinary gamma function is defined as
{\displaystyle \Gamma (s)=\int _{0}^{\infty }t^{s-1}\,e^{-t}\,dt}
we have
{\displaystyle \Gamma (s)=\Gamma (s,0)=\lim _{x\to \infty }\gamma (s,x)}
and
{\displaystyle \gamma (s,x)+\Gamma (s,x)=\Gamma (s).}
=== Continuation to complex values ===
The lower incomplete gamma and the upper incomplete gamma function, as defined above for real positive s and x, can be developed into holomorphic functions, with respect both to x and s, defined for almost all combinations of complex x and s. Complex analysis shows how properties of the real incomplete gamma functions extend to their holomorphic counterparts.
==== Lower incomplete gamma function ====
===== Holomorphic extension =====
Repeated application of the recurrence relation for the lower incomplete gamma function leads to the power series expansion:
{\displaystyle \gamma (s,x)=\sum _{k=0}^{\infty }{\frac {x^{s}e^{-x}x^{k}}{s(s+1)\cdots (s+k)}}=x^{s}\,\Gamma (s)\,e^{-x}\sum _{k=0}^{\infty }{\frac {x^{k}}{\Gamma (s+k+1)}}.}
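The series can be checked against direct numerical integration of the defining integral; a sketch (term count and test values chosen arbitrarily):

```python
import math

def lower_via_series(s, x, terms=80):
    # gamma(s, x) = x^s e^(-x) * sum_k x^k / (s (s+1) ... (s+k))
    total, denom = 0.0, s
    for k in range(terms):
        total += x ** k / denom
        denom *= s + k + 1
    return x ** s * math.exp(-x) * total

def lower_via_quadrature(s, x, n=200000):
    # Midpoint rule for the integral of t^(s-1) e^(-t) over [0, x]
    h = x / n
    return h * sum(((k + 0.5) * h) ** (s - 1) * math.exp(-(k + 0.5) * h) for k in range(n))

s, x = 2.5, 1.7
print(abs(lower_via_series(s, x) - lower_via_quadrature(s, x)) < 1e-6)
```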
Given the rapid growth in absolute value of Γ(z + k) when k → ∞, and the fact that the reciprocal of Γ(z) is an entire function, the coefficients in the rightmost sum are well-defined, and locally the sum converges uniformly for all complex s and x. By a theorem of Weierstrass, the limiting function, sometimes denoted as
{\displaystyle \gamma ^{*}},
{\displaystyle \gamma ^{*}(s,z):=e^{-z}\sum _{k=0}^{\infty }{\frac {z^{k}}{\Gamma (s+k+1)}}}
is entire with respect to both z (for fixed s) and s (for fixed z), and, thus, holomorphic on C × C by Hartogs's theorem. Hence, the following decomposition
{\displaystyle \gamma (s,z)=z^{s}\,\Gamma (s)\,\gamma ^{*}(s,z),}
extends the real lower incomplete gamma function as a holomorphic function, both jointly and separately in z and s. It follows from the properties of
{\displaystyle z^{s}}
and the Γ-function, that the first two factors capture the singularities of
{\displaystyle \gamma (s,z)}
(at z = 0 or s a non-positive integer), whereas the last factor contributes to its zeros.
===== Multi-valuedness =====
The complex logarithm log z = log |z| + i arg z is determined up to a multiple of 2πi only, which renders it multi-valued. Functions involving the complex logarithm typically inherit this property. Among these are the complex power, and, since zs appears in its decomposition, the γ-function, too.
The indeterminacy of multi-valued functions introduces complications, since it must be stated how to select a value. Strategies to handle this are:
(the most general way) replace the domain C of multi-valued functions by a suitable manifold in C × C called a Riemann surface. While this removes multi-valuedness, one has to know the theory behind it;
restrict the domain such that a multi-valued function decomposes into separate single-valued branches, which can be handled individually.
The following set of rules can be used to interpret formulas in this section correctly. If not mentioned otherwise, the following is assumed:
====== Sectors ======
Sectors in C having their vertex at z = 0 often prove to be appropriate domains for complex expressions. A sector D consists of all complex z fulfilling z ≠ 0 and α − δ < arg z < α + δ with some α and 0 < δ ≤ π. Often, α can be arbitrarily chosen and is not specified then. If δ is not given, it is assumed to be π, and the sector is in fact the whole plane C, with the exception of a half-line originating at z = 0 and pointing into the direction of −α, usually serving as a branch cut. Note: In many applications and texts, α is silently taken to be 0, which centers the sector around the positive real axis.
====== Branches ======
In particular, a single-valued and holomorphic logarithm exists on any such sector D having its imaginary part bound to the range (α − δ, α + δ). Based on such a restricted logarithm, zs and the incomplete gamma functions in turn collapse to single-valued, holomorphic functions on D (or C×D), called branches of their multi-valued counterparts on D. Adding a multiple of 2π to α yields a different set of correlated branches on the same set D. However, in any given context here, α is assumed fixed and all branches involved are associated to it. If |α| < δ, the branches are called principal, because they equal their real analogues on the positive real axis. Note: In many applications and texts, formulas hold only for principal branches.
====== Relation between branches ======
The values of different branches of both the complex power function and the lower incomplete gamma function can be derived from each other by multiplication of
{\displaystyle e^{2\pi iks}}
, for k a suitable integer.
===== Behavior near branch point =====
The decomposition above further shows that γ behaves near z = 0 asymptotically like:
{\displaystyle \gamma (s,z)\asymp z^{s}\,\Gamma (s)\,\gamma ^{*}(s,0)=z^{s}\,\Gamma (s)/\Gamma (s+1)=z^{s}/s.}
For positive real x, y and s, x^y/y → 0, when (x, y) → (0, s). This seems to justify setting γ(s, 0) = 0 for real s > 0. However, matters are somewhat different in the complex realm. Only if (a) the real part of s is positive, and (b) values u^v are taken from just a finite set of branches, are they guaranteed to converge to zero as (u, v) → (0, s), and so does γ(u, v). On a single branch of γ, (b) is naturally fulfilled, so there γ(s, 0) = 0 for s with positive real part is a continuous limit. Also note that such a continuation is by no means an analytic one.
===== Algebraic relations =====
All algebraic relations and differential equations observed by the real γ(s, z) hold for its holomorphic counterpart as well. This is a consequence of the identity theorem, stating that equations between holomorphic functions valid on a real interval hold everywhere. In particular, the recurrence relation and ∂γ(s, z)/∂z = z^{s−1}e^{−z} are preserved on corresponding branches.
===== Integral representation =====
The last relation tells us that, for fixed s, γ is a primitive or antiderivative of the holomorphic function z^{s−1}e^{−z}. Consequently, for any complex u, v ≠ 0,
{\displaystyle \int _{u}^{v}t^{s-1}\,e^{-t}\,dt=\gamma (s,v)-\gamma (s,u)}
holds, as long as the path of integration is entirely contained in the domain of a branch of the integrand. If, additionally, the real part of s is positive, then the limit γ(s, u) → 0 for u → 0 applies, finally arriving at the complex integral definition of γ
{\displaystyle \gamma (s,z)=\int _{0}^{z}t^{s-1}\,e^{-t}\,dt,\,\Re (s)>0.}
Any path of integration containing 0 only at its beginning, otherwise restricted to the domain of a branch of the integrand, is valid here, for example, the straight line connecting 0 and z.
===== Limit for z → +∞ =====
====== Real values ======
Given the integral representation of a principal branch of γ, the following equation holds for all positive real s, x:
{\displaystyle \Gamma (s)=\int _{0}^{\infty }t^{s-1}\,e^{-t}\,dt=\lim _{x\to \infty }\gamma (s,x)}
====== s complex ======
This result extends to complex s. Assume first 1 ≤ Re(s) ≤ 2 and 1 < a < b. Then
{\displaystyle \left|\gamma (s,b)-\gamma (s,a)\right|\leq \int _{a}^{b}\left|t^{s-1}\right|e^{-t}\,dt=\int _{a}^{b}t^{\Re s-1}e^{-t}\,dt\leq \int _{a}^{b}te^{-t}\,dt}
where
{\displaystyle \left|z^{s}\right|=\left|z\right|^{\Re s}\,e^{-\Im s\arg z}}
has been used in the middle. Since the final integral becomes arbitrarily small if only a is large enough, γ(s, x) converges uniformly for x → ∞ on the strip 1 ≤ Re(s) ≤ 2 towards a holomorphic function, which must be Γ(s) because of the identity theorem. Taking the limit in the recurrence relation γ(s, x) = (s − 1)γ(s − 1, x) − x^{s−1}e^{−x}, and noting that lim x^{n}e^{−x} = 0 for x → ∞ and all n, shows that γ(s, x) converges outside the strip, too, towards a function obeying the recurrence relation of the Γ-function. It follows
{\displaystyle \Gamma (s)=\lim _{x\to \infty }\gamma (s,x)}
for all complex s not a non-positive integer, x real and γ principal.
====== Sectorwise convergence ======
Now let u be from the sector |arg z| < δ < π/2 with some fixed δ (α = 0), γ be the principal branch on this sector, and look at
{\displaystyle \Gamma (s)-\gamma (s,u)=\Gamma (s)-\gamma (s,|u|)+\gamma (s,|u|)-\gamma (s,u).}
As shown above, the first difference can be made arbitrarily small if |u| is sufficiently large. The second difference allows for the following estimation:
{\displaystyle \left|\gamma (s,|u|)-\gamma (s,u)\right|\leq \int _{u}^{|u|}\left|z^{s-1}e^{-z}\right|dz=\int _{u}^{|u|}\left|z\right|^{\Re s-1}\,e^{-\Im s\,\arg z}\,e^{-\Re z}\,dz,}
where we made use of the integral representation of γ and the formula about |zs| above. If we integrate along the arc with radius R = |u| around 0 connecting u and |u|, then the last integral is
{\displaystyle \leq R\left|\arg u\right|R^{\Re s-1}\,e^{\Im s\,|\arg u|}\,e^{-R\cos \arg u}\leq \delta \,R^{\Re s}\,e^{\Im s\,\delta }\,e^{-R\cos \delta }=M\,(R\,\cos \delta )^{\Re s}\,e^{-R\cos \delta }}
where M = δ(cos δ)−Re s eIm sδ is a constant independent of u or R. Again referring to the behavior of xn e−x for large x, we see that the last expression approaches 0 as R increases towards ∞.
In total we now have:
{\displaystyle \Gamma (s)=\lim _{|z|\to \infty }\gamma (s,z),\quad \left|\arg z\right|<\pi /2-\epsilon ,}
if s is not a non-positive integer, 0 < ε < π/2 is arbitrarily small, but fixed, and γ denotes the principal branch on this domain.
===== Overview =====
{\displaystyle \gamma (s,z)} is:
entire in z for fixed, positive integer s;
multi-valued holomorphic in z for fixed s not an integer, with a branch point at z = 0;
on each branch meromorphic in s for fixed z ≠ 0, with simple poles at non-positive integers s.
==== Upper incomplete gamma function ====
As for the upper incomplete gamma function, a holomorphic extension, with respect to z or s, is given by
{\displaystyle \Gamma (s,z)=\Gamma (s)-\gamma (s,z)}
at points (s, z), where the right hand side exists. Since
{\displaystyle \gamma } is multi-valued, the same holds for {\displaystyle \Gamma }, but a restriction to principal values only yields the single-valued principal branch of {\displaystyle \Gamma }.
When s is a non-positive integer in the above equation, neither part of the difference is defined, and a limiting process, here developed for s → 0, fills in the missing values. Complex analysis guarantees holomorphicity, because {\displaystyle \Gamma (s,z)} proves to be bounded in a neighbourhood of that limit for a fixed z.
To determine the limit, the power series of {\displaystyle \gamma ^{*}} at z = 0 is useful. When replacing {\displaystyle e^{-x}} by its power series in the integral definition of {\displaystyle \gamma }, one obtains (assume x, s positive reals for now):
{\displaystyle \gamma (s,x)=\int _{0}^{x}t^{s-1}e^{-t}\,dt=\int _{0}^{x}\sum _{k=0}^{\infty }\left(-1\right)^{k}\,{\frac {t^{s+k-1}}{k!}}\,dt=\sum _{k=0}^{\infty }\left(-1\right)^{k}\,{\frac {x^{s+k}}{k!(s+k)}}=x^{s}\,\sum _{k=0}^{\infty }{\frac {(-x)^{k}}{k!(s+k)}}}
or
{\displaystyle \gamma ^{*}(s,x)=\sum _{k=0}^{\infty }{\frac {(-x)^{k}}{k!\,\Gamma (s)(s+k)}},}
which, as a series representation of the entire {\displaystyle \gamma ^{*}} function, converges for all complex x (and all complex s not a non-positive integer).
With its restriction to real values lifted, the series allows the expansion:
{\displaystyle \gamma (s,z)-{\frac {1}{s}}=-{\frac {1}{s}}+z^{s}\,\sum _{k=0}^{\infty }{\frac {(-z)^{k}}{k!(s+k)}}={\frac {z^{s}-1}{s}}+z^{s}\,\sum _{k=1}^{\infty }{\frac {\left(-z\right)^{k}}{k!(s+k)}},\quad \Re (s)>-1,\,s\neq 0.}
When s → 0:
{\displaystyle {\frac {z^{s}-1}{s}}\to \ln(z),\quad \Gamma (s)-{\frac {1}{s}}={\frac {1}{s}}-\gamma +O(s)-{\frac {1}{s}}\to -\gamma ,}
({\displaystyle \gamma } is the Euler–Mascheroni constant here), hence,
{\displaystyle \Gamma (0,z)=\lim _{s\to 0}\left(\Gamma (s)-{\tfrac {1}{s}}-(\gamma (s,z)-{\tfrac {1}{s}})\right)=-\gamma -\ln(z)-\sum _{k=1}^{\infty }{\frac {(-z)^{k}}{k\,(k!)}}}
is the limiting function to the upper incomplete gamma function as s → 0, also known as the exponential integral {\displaystyle E_{1}(z)}.
By way of the recurrence relation, values of {\displaystyle \Gamma (-n,z)} for positive integers n can be derived from this result,
{\displaystyle \Gamma (-n,z)={\frac {1}{n!}}\left({\frac {e^{-z}}{z^{n}}}\sum _{k=0}^{n-1}(-1)^{k}(n-k-1)!\,z^{k}+\left(-1\right)^{n}\Gamma (0,z)\right)}
so the upper incomplete gamma function proves to exist and be holomorphic, with respect both to z and s, for all s and z ≠ 0.
{\displaystyle \Gamma (s,z)} is:
entire in z for fixed, positive integral s;
multi-valued holomorphic in z for fixed s non zero and not a positive integer, with a branch point at z = 0;
equal to {\displaystyle \Gamma (s)} for s with positive real part and z = 0 (the limit when {\displaystyle (s_{i},z_{i})\to (s,0)}), but this is a continuous extension, not an analytic one (it does not hold for real s < 0!);
on each branch entire in s for fixed z ≠ 0.
=== Special values ===
{\displaystyle \Gamma (s+1,1)={\frac {\lfloor es!\rfloor }{e}}} if s is a positive integer,
{\displaystyle \Gamma (s,x)=(s-1)!\,e^{-x}\sum _{k=0}^{s-1}{\frac {x^{k}}{k!}}} if s is a positive integer,
{\displaystyle \Gamma (s,0)=\Gamma (s),\Re (s)>0},
{\displaystyle \Gamma (1,x)=e^{-x}},
{\displaystyle \gamma (1,x)=1-e^{-x}},
{\displaystyle \Gamma (0,x)=-\operatorname {Ei} (-x)} for {\displaystyle x>0},
{\displaystyle \Gamma (s,x)=x^{s}\operatorname {E} _{1-s}(x)},
{\displaystyle \Gamma \left({\tfrac {1}{2}},x\right)={\sqrt {\pi }}\operatorname {erfc} \left({\sqrt {x}}\right)},
{\displaystyle \gamma \left({\tfrac {1}{2}},x\right)={\sqrt {\pi }}\operatorname {erf} \left({\sqrt {x}}\right)}.
Here, {\displaystyle \operatorname {Ei} } is the exponential integral, {\displaystyle \operatorname {E} _{n}} is the generalized exponential integral, {\displaystyle \operatorname {erf} } is the error function, and {\displaystyle \operatorname {erfc} } is the complementary error function, with {\displaystyle \operatorname {erfc} (x)=1-\operatorname {erf} (x)}.
=== Asymptotic behavior ===
{\displaystyle {\frac {\gamma (s,x)}{x^{s}}}\to {\frac {1}{s}}} as {\displaystyle x\to 0},
{\displaystyle {\frac {\Gamma (s,x)}{x^{s}}}\to -{\frac {1}{s}}} as {\displaystyle x\to 0} and {\displaystyle \Re (s)<0} (for real s, the error of Γ(s, x) ~ −x^s / s is on the order of O(x^min{s + 1, 0}) if s ≠ −1 and O(ln(x)) if s = −1),
{\displaystyle \Gamma (s,x)\sim \Gamma (s)-\sum _{n=0}^{\infty }(-1)^{n}{\frac {x^{s+n}}{n!(s+n)}}} as an asymptotic series where {\displaystyle x\to 0^{+}} and {\displaystyle s\neq 0,-1,-2,\dots },
{\displaystyle \Gamma (-N,x)\sim C_{N}+{\frac {(-1)^{N+1}}{N!}}\ln x-\sum _{n=0,n\neq N}^{\infty }(-1)^{n}{\frac {x^{n-N}}{n!(n-N)}}} as an asymptotic series where {\displaystyle x\to 0^{+}} and {\displaystyle N=1,2,\dots }, with {\textstyle C_{N}={\frac {(-1)^{N+1}}{N!}}\left(\gamma -\displaystyle \sum _{n=1}^{N}{\frac {1}{n}}\right)}, where {\displaystyle \gamma } is the Euler–Mascheroni constant,
{\displaystyle \gamma (s,x)\to \Gamma (s)} as {\displaystyle x\to \infty },
{\displaystyle {\frac {\Gamma (s,x)}{x^{s-1}e^{-x}}}\to 1} as {\displaystyle x\to \infty },
{\displaystyle \Gamma (s,z)\sim z^{s-1}e^{-z}\sum _{k=0}{\frac {\Gamma (s)}{\Gamma (s-k)}}z^{-k}} as an asymptotic series where {\displaystyle |z|\to \infty } and {\displaystyle \left|\arg z\right|<{\tfrac {3}{2}}\pi }.
== Evaluation formulae ==
The lower gamma function can be evaluated using the power series expansion:
{\displaystyle \gamma (s,z)=\sum _{k=0}^{\infty }{\frac {z^{s}e^{-z}z^{k}}{s(s+1)\dots (s+k)}}=z^{s}e^{-z}\sum _{k=0}^{\infty }{\dfrac {z^{k}}{s^{\overline {k+1}}}}}
where {\displaystyle s^{\overline {k+1}}} is the Pochhammer symbol (rising factorial).
An alternative expansion is
{\displaystyle \gamma (s,z)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{k!}}{\frac {z^{s+k}}{s+k}}={\frac {z^{s}}{s}}M(s,s+1,-z),}
where M is Kummer's confluent hypergeometric function.
=== Connection with Kummer's confluent hypergeometric function ===
When the real part of z is positive,
{\displaystyle \gamma (s,z)=s^{-1}z^{s}e^{-z}M(1,s+1,z)}
where
{\displaystyle M(1,s+1,z)=1+{\frac {z}{(s+1)}}+{\frac {z^{2}}{(s+1)(s+2)}}+{\frac {z^{3}}{(s+1)(s+2)(s+3)}}+\cdots }
has an infinite radius of convergence.
Again with confluent hypergeometric functions and employing Kummer's identity,
{\displaystyle {\begin{aligned}\Gamma (s,z)&=e^{-z}U(1-s,1-s,z)={\frac {z^{s}e^{-z}}{\Gamma (1-s)}}\int _{0}^{\infty }{\frac {e^{-u}}{u^{s}(z+u)}}du\\&=e^{-z}z^{s}U(1,1+s,z)=e^{-z}\int _{0}^{\infty }e^{-u}(z+u)^{s-1}du=e^{-z}z^{s}\int _{0}^{\infty }e^{-zu}(1+u)^{s-1}du.\end{aligned}}}
For the actual computation of numerical values, Gauss's continued fraction provides a useful expansion:
{\displaystyle \gamma (s,z)={\cfrac {z^{s}e^{-z}}{s-{\cfrac {sz}{s+1+{\cfrac {z}{s+2-{\cfrac {(s+1)z}{s+3+{\cfrac {2z}{s+4-{\cfrac {(s+2)z}{s+5+{\cfrac {3z}{s+6-\ddots }}}}}}}}}}}}}}.}
This continued fraction converges for all complex z, provided only that s is not a negative integer.
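As an illustrative numerical sketch (not library code; the function name and truncation depth are choices of this example), the fraction above can be evaluated bottom-up after truncating at a fixed depth. The partial numerators follow the pattern visible in the formula: −(s + k)z at odd positions and kz at even positions, with denominators s + n.

```python
import math

def lower_incomplete_gamma_cf(s, z, depth=200):
    # Partial numerators a_n and denominators b_n of Gauss's continued fraction:
    #   b_0 = s, b_n = s + n, a_{2k+1} = -(s + k) z, a_{2k} = k z.
    def a(n):
        k, odd = divmod(n, 2)
        return -(s + k) * z if odd else k * z

    t = 0.0
    for n in range(depth, 0, -1):  # evaluate the truncated fraction bottom-up
        t = a(n) / (s + n + t)
    return (z ** s) * math.exp(-z) / (s + t)

# Cross-check against the closed form gamma(2, x) = 1 - (1 + x) e^{-x}:
assert math.isclose(lower_incomplete_gamma_cf(2.0, 1.0), 1 - 2 * math.exp(-1.0), rel_tol=1e-9)
```

For real positive arguments the truncated fraction settles quickly; production code would instead use a library routine with proper convergence control.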
The upper gamma function has the continued fraction
{\displaystyle \Gamma (s,z)={\cfrac {z^{s}e^{-z}}{z+{\cfrac {1-s}{1+{\cfrac {1}{z+{\cfrac {2-s}{1+{\cfrac {2}{z+{\cfrac {3-s}{1+\ddots }}}}}}}}}}}}}
and
{\displaystyle \Gamma (s,z)={\cfrac {z^{s}e^{-z}}{1+z-s+{\cfrac {s-1}{3+z-s+{\cfrac {2(s-2)}{5+z-s+{\cfrac {3(s-3)}{7+z-s+{\cfrac {4(s-4)}{9+z-s+\ddots }}}}}}}}}}}
=== Multiplication theorem ===
The following multiplication theorem holds true:
{\displaystyle \Gamma (s,z)={\frac {1}{t^{s}}}\sum _{i=0}^{\infty }{\frac {\left(1-{\frac {1}{t}}\right)^{i}}{i!}}\Gamma (s+i,tz)=\Gamma (s,tz)-(tz)^{s}e^{-tz}\sum _{i=1}^{\infty }{\frac {\left({\frac {1}{t}}-1\right)^{i}}{i}}L_{i-1}^{(s-i)}(tz).}
=== Software implementation ===
The incomplete gamma functions are available in a number of computer algebra systems.
Even where they are not available directly, however, incomplete gamma function values can be calculated using functions commonly included in spreadsheets (and computer algebra packages). In Excel, for example, these can be calculated using the gamma function combined with the gamma distribution function.
The lower incomplete function:
{\displaystyle \gamma (s,x)} = EXP(GAMMALN(s))*GAMMA.DIST(x,s,1,TRUE).
The upper incomplete function:
{\displaystyle \Gamma (s,x)} = EXP(GAMMALN(s))*(1-GAMMA.DIST(x,s,1,TRUE)).
These follow from the definition of the gamma distribution's cumulative distribution function.
In Python, the SciPy library provides implementations of the incomplete gamma functions under scipy.special; however, it does not support negative values for the first argument. The function gammainc from the mpmath library supports all complex arguments.
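Where no library routine is at hand, the power-series expansion given in the evaluation formulae above can be coded directly. The sketch below (the function name and term count are choices of this example, and it is only meant for real s > 0 and moderate x, not production use) sums γ(s, x) = x^s e^(−x) Σ x^k / (s(s+1)⋯(s+k)):

```python
import math

def lower_incomplete_gamma(s, x, terms=200):
    # gamma(s, x) = x^s e^{-x} * sum_{k>=0} x^k / (s (s+1) ... (s+k)),
    # summed term by term; valid sketch for real s > 0 and x >= 0.
    total = 0.0
    term = 1.0 / s  # k = 0 term is 1/s
    for k in range(terms):
        total += term
        term *= x / (s + k + 1)  # extend the rising factorial and the power of x
    return (x ** s) * math.exp(-x) * total

# Check against the special values gamma(1, x) = 1 - e^{-x}
# and gamma(1/2, x) = sqrt(pi) * erf(sqrt(x)):
assert math.isclose(lower_incomplete_gamma(1.0, 2.0), 1 - math.exp(-2.0))
assert math.isclose(lower_incomplete_gamma(0.5, 2.0),
                    math.sqrt(math.pi) * math.erf(math.sqrt(2.0)))
```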
== Regularized gamma functions and Poisson random variables ==
Two related functions are the regularized gamma functions:
{\displaystyle {\begin{aligned}P(s,x)&={\frac {\gamma (s,x)}{\Gamma (s)}},\\[1ex]Q(s,x)&={\frac {\Gamma (s,x)}{\Gamma (s)}}=1-P(s,x).\end{aligned}}}
{\displaystyle P(s,x)} is the cumulative distribution function for gamma random variables with shape parameter {\displaystyle s} and scale parameter 1.
When {\displaystyle s} is an integer, {\displaystyle Q(s+1,\lambda )} is the cumulative distribution function for Poisson random variables: if {\displaystyle X} is a {\displaystyle \mathrm {Poi} (\lambda )} random variable, then
{\displaystyle \Pr(X\leq s)=\sum _{i\leq s}e^{-\lambda }{\frac {\lambda ^{i}}{i!}}={\frac {\Gamma (s+1,\lambda )}{\Gamma (s+1)}}=Q(s+1,\lambda ).}
This formula can be derived by repeated integration by parts.
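The identity is easy to verify numerically using the finite-sum special value of Γ(s, x) for positive integer s listed earlier (the function names below are illustrative, not a standard API):

```python
import math

def poisson_cdf(s, lam):
    # Pr(X <= s) for X ~ Poisson(lam), summed directly.
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(s + 1))

def regularized_upper_gamma_int(s, lam):
    # For integer s, Gamma(s, x) = (s-1)! e^{-x} sum_{k=0}^{s-1} x^k / k!,
    # so Q(s, x) = Gamma(s, x) / Gamma(s) = e^{-x} sum_{k=0}^{s-1} x^k / k!.
    return math.exp(-lam) * sum(lam**k / math.factorial(k) for k in range(s))

# Pr(X <= s) = Q(s + 1, lam) for a Poisson(lam) variable X:
assert math.isclose(poisson_cdf(4, 2.5), regularized_upper_gamma_int(5, 2.5))
```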
In the context of the stable count distribution, the {\displaystyle s} parameter can be regarded as the inverse of Lévy's stability parameter {\displaystyle \alpha }:
{\displaystyle Q(s,x)=\int _{0}^{\infty }e^{\left(-{x^{s}}/{\nu }\right)}\,{\mathfrak {N}}_{{1}/{s}}\left(\nu \right)\,d\nu ,\quad (s>1)}
where {\displaystyle {\mathfrak {N}}_{\alpha }(\nu )} is a standard stable count distribution of shape {\displaystyle \alpha =1/s<1}.
{\displaystyle P(s,x)} and {\displaystyle Q(s,x)} are implemented as gammainc and gammaincc in SciPy.
== Derivatives ==
Using the integral representation above, the derivative of the upper incomplete gamma function {\displaystyle \Gamma (s,x)} with respect to x is
{\displaystyle {\frac {\partial \Gamma (s,x)}{\partial x}}=-x^{s-1}e^{-x}}
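A quick numerical sanity check of this derivative: compare a central finite difference of the integer-s closed form Γ(s, x) = (s−1)! e^(−x) Σ x^k/k! against −x^(s−1) e^(−x). The helper name and step size below are choices of this example:

```python
import math

def upper_gamma_int(s, x):
    # Gamma(s, x) = (s-1)! e^{-x} sum_{k=0}^{s-1} x^k / k!  for positive integer s
    return math.factorial(s - 1) * math.exp(-x) * sum(
        x**k / math.factorial(k) for k in range(s))

s, x, h = 3, 1.5, 1e-6
numeric = (upper_gamma_int(s, x + h) - upper_gamma_int(s, x - h)) / (2 * h)
analytic = -(x ** (s - 1)) * math.exp(-x)
assert math.isclose(numeric, analytic, rel_tol=1e-6)
```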
The derivative with respect to its first argument {\displaystyle s} is given by
{\displaystyle {\frac {\partial \Gamma (s,x)}{\partial s}}=\ln x\Gamma (s,x)+x\,T(3,s,x)}
and the second derivative by
{\displaystyle {\frac {\partial ^{2}\Gamma (s,x)}{\partial s^{2}}}=\ln ^{2}x\Gamma (s,x)+2x\left[\ln x\,T(3,s,x)+T(4,s,x)\right]}
where the function {\displaystyle T(m,s,x)} is a special case of the Meijer G-function
{\displaystyle T(m,s,x)=G_{m-1,\,m}^{\,m,\,0}\!\left(\left.{\begin{matrix}0,0,\dots ,0\\s-1,-1,\dots ,-1\end{matrix}}\;\right|\,x\right).}
This particular special case has internal closure properties of its own because it can be used to express all successive derivatives. In general,
{\displaystyle {\frac {\partial ^{m}\Gamma (s,x)}{\partial s^{m}}}=\ln ^{m}x\Gamma (s,x)+mx\,\sum _{n=0}^{m-1}P_{n}^{m-1}\ln ^{m-n-1}x\,T(3+n,s,x)}
where {\displaystyle P_{j}^{n}} is the permutation defined by the Pochhammer symbol:
{\displaystyle P_{j}^{n}={\binom {n}{j}}j!={\frac {n!}{(n-j)!}}.}
All such derivatives can be generated in succession from:
{\displaystyle {\frac {\partial T(m,s,x)}{\partial s}}=\ln x~T(m,s,x)+(m-1)T(m+1,s,x)}
and
{\displaystyle {\frac {\partial T(m,s,x)}{\partial x}}=-{\frac {T(m-1,s,x)+T(m,s,x)}{x}}}
This function {\displaystyle T(m,s,x)} can be computed from its series representation valid for {\displaystyle |z|<1},
{\displaystyle T(m,s,z)=-{\frac {\left(-1\right)^{m-1}}{(m-2)!}}\left.{\frac {d^{m-2}}{dt^{m-2}}}\left[\Gamma (s-t)z^{t-1}\right]\right|_{t=0}+\sum _{n=0}^{\infty }{\frac {\left(-1\right)^{n}z^{s-1+n}}{n!\left(-s-n\right)^{m-1}}}}
with the understanding that s is not a negative integer or zero. In such a case, one must use a limit. Results for
{\displaystyle |z|\geq 1}
can be obtained by analytic continuation. Some special cases of this function can be simplified. For example,
{\displaystyle T(2,s,x)=\Gamma (s,x)/x}, {\displaystyle x\,T(3,1,x)=\mathrm {E} _{1}(x)}, where {\displaystyle \mathrm {E} _{1}(x)}
is the Exponential integral. These derivatives and the function
{\displaystyle T(m,s,x)}
provide exact solutions to a number of integrals by repeated differentiation of the integral definition of the upper incomplete gamma function.
For example,
{\displaystyle \int _{x}^{\infty }{\frac {t^{s-1}\ln ^{m}t}{e^{t}}}dt={\frac {\partial ^{m}}{\partial s^{m}}}\int _{x}^{\infty }{\frac {t^{s-1}}{e^{t}}}dt={\frac {\partial ^{m}}{\partial s^{m}}}\Gamma (s,x)}
This formula can be further generalized to a large class of Laplace transforms and Mellin transforms. When combined with a computer algebra system, the exploitation of special functions provides a powerful method for solving definite integrals, in particular those encountered by practical engineering applications (see Symbolic integration for more details).
== Indefinite and definite integrals ==
The following indefinite integrals are readily obtained using integration by parts (with the constant of integration omitted in both cases):
{\displaystyle {\begin{aligned}\int x^{b-1}\gamma (s,x)\,dx&={\frac {1}{b}}\left(x^{b}\gamma (s,x)-\gamma (s+b,x)\right),\\[1ex]\int x^{b-1}\Gamma (s,x)\,dx&={\frac {1}{b}}\left(x^{b}\Gamma (s,x)-\Gamma (s+b,x)\right).\end{aligned}}}
The lower and the upper incomplete gamma function are connected via the Fourier transform:
{\displaystyle \int _{-\infty }^{\infty }{\frac {\gamma \left({\frac {s}{2}},z^{2}\pi \right)}{(z^{2}\pi )^{\frac {s}{2}}}}e^{-2\pi ikz}dz={\frac {\Gamma \left({\frac {1-s}{2}},k^{2}\pi \right)}{(k^{2}\pi )^{\frac {1-s}{2}}}}.}
This follows, for example, by suitable specialization of (Gradshteyn et al. 2015, §7.642).
== Notes ==
== References ==
== External links ==
{\displaystyle P(a,x)} — Regularized Lower Incomplete Gamma Function Calculator
{\displaystyle Q(a,x)} — Regularized Upper Incomplete Gamma Function Calculator
{\displaystyle \gamma (a,x)} — Lower Incomplete Gamma Function Calculator
{\displaystyle \Gamma (a,x)} — Upper Incomplete Gamma Function Calculator
formulas and identities of the Incomplete Gamma Function functions.wolfram.com | Wikipedia/Incomplete_gamma_function |
In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L.
The set of rational functions over a field K is a field, the field of fractions of the ring of the polynomial functions over K.
== Definitions ==
A function {\displaystyle f} is called a rational function if it can be written in the form
{\displaystyle f(x)={\frac {P(x)}{Q(x)}}}
where
{\displaystyle P} and {\displaystyle Q} are polynomial functions of {\displaystyle x} and {\displaystyle Q} is not the zero function. The domain of {\displaystyle f} is the set of all values of {\displaystyle x} for which the denominator {\displaystyle Q(x)} is not zero.
However, if {\displaystyle P} and {\displaystyle Q} have a non-constant polynomial greatest common divisor {\displaystyle R}, then setting {\displaystyle P=P_{1}R} and {\displaystyle Q=Q_{1}R} produces a rational function
{\displaystyle f_{1}(x)={\frac {P_{1}(x)}{Q_{1}(x)}},}
which may have a larger domain than {\displaystyle f}, and is equal to {\displaystyle f} on the domain of {\displaystyle f}. It is a common usage to identify {\displaystyle f} and {\displaystyle f_{1}}, that is, to extend "by continuity" the domain of {\displaystyle f} to that of {\displaystyle f_{1}}.
Indeed, one can define a rational fraction as an equivalence class of fractions of polynomials, where two fractions {\displaystyle {\frac {A(x)}{B(x)}}} and {\displaystyle {\frac {C(x)}{D(x)}}} are considered equivalent if {\displaystyle A(x)D(x)=B(x)C(x)}. In this case {\displaystyle {\frac {P(x)}{Q(x)}}} is equivalent to {\displaystyle {\frac {P_{1}(x)}{Q_{1}(x)}}.}
A proper rational function is a rational function in which the degree of {\displaystyle P(x)} is less than the degree of {\displaystyle Q(x)} and both are real polynomials, named by analogy to a proper fraction in {\displaystyle \mathbb {Q} .}
=== Complex rational functions ===
In complex analysis, a rational function
{\displaystyle f(z)={\frac {P(z)}{Q(z)}}}
is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids f taking the indeterminate value 0/0).
The domain of f is the set of complex numbers such that {\displaystyle Q(z)\neq 0}.
Every rational function can be naturally extended to a function whose domain and range are the whole Riemann sphere (complex projective line).
A complex rational function with degree one is a Möbius transformation.
Rational functions are representative examples of meromorphic functions.
Iteration of rational functions on the Riemann sphere (i.e. a rational mapping) creates discrete dynamical systems.
=== Degree ===
There are several non-equivalent definitions of the degree of a rational function.
Most commonly, the degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q, when the fraction is reduced to lowest terms. If the degree of f is d, then the equation
{\displaystyle f(z)=w\,}
has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide or where some solution is rejected at infinity (that is, when the degree of the equation decreases after having cleared the denominator).
The degree of the graph of a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator.
In some contexts, such as in asymptotic analysis, the degree of a rational function is the difference between the degrees of the numerator and the denominator.
In network synthesis and network analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called a biquadratic function.
== Examples ==
The rational function
{\displaystyle f(x)={\frac {x^{3}-2x}{2(x^{2}-5)}}}
is not defined at {\displaystyle x^{2}=5\Leftrightarrow x=\pm {\sqrt {5}}.} It is asymptotic to {\displaystyle {\tfrac {x}{2}}} as {\displaystyle x\to \infty .}
The rational function
{\displaystyle f(x)={\frac {x^{2}+2}{x^{2}+1}}}
is defined for all real numbers, but not for all complex numbers, since if x were a square root of {\displaystyle -1} (i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero:
{\displaystyle f(i)={\frac {i^{2}+2}{i^{2}+1}}={\frac {-1+2}{-1+1}}={\frac {1}{0}},}
which is undefined.
A constant function such as f(x) = π is a rational function since constants are polynomials. The function itself is rational, even though the value of f(x) is irrational for all x.
Every polynomial function {\displaystyle f(x)=P(x)} is a rational function with {\displaystyle Q(x)=1.} A function that cannot be written in this form, such as {\displaystyle f(x)=\sin(x),} is not a rational function. However, the adjective "irrational" is not generally used for functions.
Every Laurent polynomial can be written as a rational function while the converse is not necessarily true, i.e., the ring of Laurent polynomials is a subring of the rational functions.
The rational function {\displaystyle f(x)={\tfrac {x}{x}}}
is equal to 1 for all x except 0, where there is a removable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, since x/x is equivalent to 1/1.
== Taylor series ==
The coefficients of a Taylor series of any rational function satisfy a linear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collecting like terms after clearing the denominator.
For example,
{\displaystyle {\frac {1}{x^{2}-x+2}}=\sum _{k=0}^{\infty }a_{k}x^{k}.}
Multiplying through by the denominator and distributing,
{\displaystyle 1=(x^{2}-x+2)\sum _{k=0}^{\infty }a_{k}x^{k}}
{\displaystyle 1=\sum _{k=0}^{\infty }a_{k}x^{k+2}-\sum _{k=0}^{\infty }a_{k}x^{k+1}+2\sum _{k=0}^{\infty }a_{k}x^{k}.}
After adjusting the indices of the sums to get the same powers of x, we get
{\displaystyle 1=\sum _{k=2}^{\infty }a_{k-2}x^{k}-\sum _{k=1}^{\infty }a_{k-1}x^{k}+2\sum _{k=0}^{\infty }a_{k}x^{k}.}
Combining like terms gives
{\displaystyle 1=2a_{0}+(2a_{1}-a_{0})x+\sum _{k=2}^{\infty }(a_{k-2}-a_{k-1}+2a_{k})x^{k}.}
Since this holds true for all x in the radius of convergence of the original Taylor series, we can compute as follows. Since the constant term on the left must equal the constant term on the right it follows that
{\displaystyle a_{0}={\frac {1}{2}}.}
Then, since there are no powers of x on the left, all of the coefficients on the right must be zero, from which it follows that
{\displaystyle a_{1}={\frac {1}{4}}}
{\displaystyle a_{k}={\frac {1}{2}}(a_{k-1}-a_{k-2})\quad {\text{for}}\ k\geq 2.}
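The recurrence can be checked mechanically. The sketch below (function name chosen for this example) generates the coefficients with exact rational arithmetic and verifies that multiplying back by x² − x + 2 leaves only the constant term 1:

```python
from fractions import Fraction

def taylor_coeffs(n):
    # a_0 = 1/2, a_1 = 1/4, a_k = (a_{k-1} - a_{k-2}) / 2 for k >= 2
    a = [Fraction(1, 2), Fraction(1, 4)]
    for k in range(2, n):
        a.append((a[k - 1] - a[k - 2]) / 2)
    return a[:n]

a = taylor_coeffs(10)
# The product (x^2 - x + 2) * sum a_k x^k must equal 1 + O(x^10):
assert 2 * a[0] == 1                                        # constant term
assert 2 * a[1] - a[0] == 0                                 # coefficient of x
assert all(a[k - 2] - a[k - 1] + 2 * a[k] == 0 for k in range(2, 10))
```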
Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by using partial fraction decomposition we can write any proper rational function as a sum of factors of the form 1 / (ax + b) and expand these as geometric series, giving an explicit formula for the Taylor coefficients; this is the method of generating functions.
== Abstract algebra ==
In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from any field. In this setting, given a field F and some indeterminate X, a rational expression (also known as a rational fraction or, in algebraic geometry, a rational function) is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation isn't unique. P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. However, since F[X] is a unique factorization domain, there is a unique representation for any rational expression P/Q with P and Q polynomials of lowest degree and Q chosen to be monic. This is similar to how a fraction of integers can always be written uniquely in lowest terms by canceling out common factors.
The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X.
=== Notion of a rational function on an algebraic variety ===
Like polynomials, rational expressions can also be generalized to n indeterminates X1,..., Xn, by taking the field of fractions of F[X1,..., Xn], which is denoted by F(X1,..., Xn).
An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more accurately said, of a Zariski-dense affine open set in V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and also may be seen as morphisms to the projective line.
== Applications ==
Rational functions are used in numerical analysis for interpolation and approximation of functions, for example the Padé approximants introduced by Henri Padé. Approximations in terms of rational functions are well suited for computer algebra systems and other numerical software. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials.
Rational functions are used to approximate or model more complex equations in science and engineering including fields and forces in physics, spectroscopy in analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo, wave functions for atoms and molecules, optics and photography to improve image resolution, and acoustics and sound.
In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of commonly-used linear time-invariant systems (filters) with infinite impulse response are rational functions over complex numbers.
== See also ==
Partial fraction decomposition
Partial fractions in integration
Function field of an algebraic variety
Algebraic fractions – a generalization of rational functions that allows taking integer roots
== References ==
== Further reading ==
"Rational function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007), "Section 3.4. Rational Function Interpolation and Extrapolation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8
== External links ==
Dynamic visualization of rational functions with JSXGraph
REDUCE is a general-purpose computer algebra system originally geared towards applications in physics.
The development of REDUCE was started in 1963 by Anthony C. Hearn; since then, many scientists from all over the world have contributed to its development. REDUCE was open-sourced in December 2008 and is available for free under a modified BSD license on SourceForge. Previously it had cost $695.
REDUCE is written entirely in its own Lisp dialect called Standard Lisp, expressed in an ALGOL-like syntax called RLISP that is also used as the basis for REDUCE's user-level language.
Implementations of REDUCE are available on most variants of Unix, Linux, Microsoft Windows, or Apple Macintosh systems by using an underlying Portable Standard Lisp (PSL) or Codemist Standard Lisp (CSL) implementation. CSL REDUCE offers a graphical user interface. REDUCE can also be built on other Lisps, such as Common Lisp.
== Features ==
arbitrary precision integer, rational, complex and floating-point arithmetic
expressions and functions involving one or more variables
algorithms for polynomials, rational and transcendental functions
facilities for the solution of a variety of algebraic equations
automatic and user-controlled simplification of expressions
substitutions and pattern matching in a wide variety of forms
symbolic differentiation, indefinite and definite integration
solution of ordinary differential equations
computations with a wide variety of special functions
general matrix and non-commutative algebra
plotting in 2 and 3 dimensions of
graphs of functions
arbitrary points, lines and curves
Dirac matrix calculations of interest in high energy physics
quantifier elimination and decision for interpreted first-order logic
powerful intuitive user-level programming language.
== Syntax ==
The REDUCE language is a high-level structured programming language based on ALGOL 60 (but with Standard Lisp semantics), although it does not support all ALGOL 60 syntax. It is similar to Pascal, which evolved from ALGOL 60, and Modula, which evolved from Pascal.
REDUCE is a free-form language, meaning that spacing and line breaks are not significant, but consequently input statements must be separated from each other and all input must be terminated with either a semi-colon (;) or a dollar sign ($). The difference is that if the input results in a useful (non-nil) value then it will be output if the separator is a semi-colon (;) but hidden if it is a dollar sign ($). The assignment operator is colon-equal (:=), which in its simplest usage assigns to the variable on its left the value of the expression on its right. However, a REDUCE variable can have no value, in which case it is displayed as its name, in order to allow mathematical expressions involving indeterminates to be constructed and manipulated. The simplest way to use REDUCE is interactively: type input after the last input prompt, terminate it with semi-colon and press the Return or Enter key; REDUCE processes the input and displays the result. This is illustrated in the screenshot.
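As a minimal sketch of these conventions (the variable names are chosen purely for illustration):

```reduce
x := (a + b)^2;   % terminated by ";", so the resulting value is displayed
y := x - 2*a*b$   % terminated by "$", so the value is computed but hidden
```

Both statements perform assignments with :=; only the separator determines whether the result is echoed.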
=== Identifiers and strings ===
Programming languages use identifiers to name constructs such as variables and functions, and strings to store text. A REDUCE identifier must begin with a letter and can be followed by letters, digits and underscore characters (_). A REDUCE identifier can also include any character anywhere if it is input preceded by an exclamation mark (!). A REDUCE string is any sequence of characters delimited when input by double quote characters ("). A double quote can be included in a string by entering two double quotes; no other escape mechanism is implemented within strings. An identifier can be used instead of a string in most situations in REDUCE, such as to represent a file name.
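A short sketch of these rules (identifier and string contents are illustrative):

```reduce
my_value_1 := 42;          % an ordinary identifier: letters, digits, underscores
odd!-name!* := 1;          % "!" lets otherwise-special characters appear in an identifier
s := "He said ""hello""";  % a doubled double quote embeds one quote mark in a string
```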
REDUCE source code was originally written in all upper-case letters, as was typical of programming languages in the 1960s. (Hence, the name REDUCE is normally written in all upper-case.) However, modern REDUCE is case-insensitive (by default), which means that it ignores the case of letters, and it is normally written in lower-case. (The REDUCE source code has been converted to lower case.) The exceptions to this rule are that case is preserved within strings and when letters in identifiers are preceded by an exclamation mark (!). Hence, it is conventional to use snake-case (e.g. long_name) rather than camel-case (e.g. longName) for REDUCE identifiers, because camel-case gets lost without also using exclamation marks.
=== Hello World programs ===
Below is a REDUCE "Hello, World!" program, which is almost as short as such a program could possibly be!
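The listing itself appears to have been lost in extraction; a plausible reconstruction, using REDUCE's write statement, is:

```reduce
write "Hello, World!";
```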
REDUCE displays the output
Another REDUCE "Hello, World!" program, which is slightly longer than the version above, uses an identifier as follows
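This listing is also missing; a plausible reconstruction, writing the text as an identifier with the comma, space, exclamation mark and capital letters escaped by exclamation marks, is:

```reduce
write !Hello!,! !World!!;
```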
CSL REDUCE displays the same output as shown above. (Other REDUCE GUIs may italicise this output on the grounds that it is an identifier rather than a string.)
=== Statements and expressions ===
Because REDUCE inherits Lisp semantics, all programming constructs have values. Therefore, the only distinction between statements and expressions is that the value of an expression is used but the value of a statement is not. The terms statement and expression are interchangeable, although a few constructs always return the Lisp value nil and so are always used as statements.
There are two ways to group several statements or expressions into a single unit that is syntactically equivalent to a single statement or expression, which is necessary to facilitate structured programming. One is the begin...end construct inherited from ALGOL 60, which is called a block or compound statement. Its value is the value of the expression following the (optional) keyword return. The other uses the bracketing syntax <<...>>, which is called a group statement. Its value is the value of the last (unterminated) expression in it. Both are illustrated in the procedural programming example below.
=== Structured programming ===
REDUCE supports conditional and repetition statements, some of which are controlled by a boolean expression, which is any expression whose value can be either true or false, such as x > 0. (The REDUCE user-level language does not explicitly support constants representing true or false although, as in C and related languages, 0 has the boolean value false, whereas 1 and many other non-zero values have the boolean value true.)
==== Conditional statements: if ... then ... else ====
The conditional statement has the form
if boolean expression then statement
which can optionally be followed by
else statement
For example, the following conditional statement ensures that the value of n, assumed to be numerical, is positive. (It effectively implements the absolute value function.)
The following conditional statement, used as an expression, avoids an error that would be caused by dividing by 0.
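The two example statements were likely images in the original; plausible reconstructions (the variable names are illustrative) are:

```reduce
if n < 0 then n := -n;                 % ensure n is positive (absolute value)

result := if x neq 0 then 1/x else 0;  % used as an expression: avoids dividing by 0
```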
==== Repetition statements: for ... ====
The for statement is a flexible loop construct that executes statement repeatedly a number of times that must be known in advance. One version has the form
for variable := initial step increment until final do statement
where variable names a variable whose value can be used within statement, and initial, increment and final are numbers (preferably integers). The value of variable is initialized to initial and statement is executed, then the value of variable is repeatedly increased by increment and the statement executed again, provided the value of variable is not greater than final. The common special case "initial step 1 until final" can be abbreviated as "initial : final".
The following for statement computes the value of n! as the value of the variable fac.
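A sketch of such a loop, assuming n already holds a non-negative integer:

```reduce
fac := 1$
for i := 1 step 1 until n do fac := fac * i;
fac;
```

The control range could equally be abbreviated as for i := 1:n.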
Another version of the for statement iterates over a list, and the keyword do can be replaced by product, sum, collect or join, in which case the for statement becomes an expression and the controlled statement is treated as an expression. With product, the value is the product of the values of the controlled statement; with sum, the value is the sum of the values of the controlled statement; with collect, the value is the values of the controlled statement collected into a list; with join, the value is the values of the controlled statement, which must be lists, joined into one list.
The following for statement computes the value of n! much more succinctly and elegantly than the previous example.
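A plausible reconstruction of the missing listing, again assuming n holds a non-negative integer:

```reduce
fac := for i := 1:n product i;
```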
==== Repetition statements: while ... do; repeat ... until ====
The two loop statements
while boolean expression do statement
repeat statement until boolean expression
are closely related to the conditional statement and execute statement repeatedly a number of times that need not be known in advance. Their difference is that while repetition stops when boolean expression becomes false whereas repeat repetition stops when boolean expression becomes true. Also, repeat always executes statement at least once and it can be used to initialize boolean expression, whereas when using while boolean expression must be initialized before entering the loop.
The following while statement computes the value of n! as the value of the variable fac. Note that this code treats the assignment n := n - 1 as an expression and uses its value.
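A sketch of such a loop, assuming n holds an integer greater than or equal to 1 (note that it destroys the original value of n):

```reduce
fac := n;
while n > 1 do fac := fac * (n := n - 1);  % the assignment's value is used as a factor
```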
=== Comments ===
REDUCE has three comment conventions. It inherits the comment statement from ALGOL 60, which looks like this:
comment This is a multi-line comment
that ends at the next separator,
so it cannot contain separators;
Comment statements mostly appear in older code.
It inherits the %... comment from Standard Lisp, which looks like this:
% This is a single-line comment that ends at the end of the line.
% It can appear on a line after code and
% can contain the separators ";" and "$".
%... comments are analogous to C++ //... comments and are the most commonly used form of comment.
REDUCE also supports a C-style /*...*/ comment that looks like this:
/* This is a multi-line comment that can
appear anywhere a space could and
can contain the separators ";" and "$".
*/
== Programming paradigms ==
REDUCE's user-level language supports several programming paradigms, as illustrated in the algebraic programming examples below.
Since it is based on Lisp, which is a functional programming language, REDUCE supports functional programming and all statements have values (although they are not always useful). REDUCE also supports procedural programming by ignoring statement values. Algebraic computation usually proceeds by transforming a mathematical expression into an equivalent but different form. This is called simplification, even though the result might be much longer. (The name REDUCE is a pun on this problem of intermediate expression swell!) In REDUCE, simplification occurs automatically when an expression is entered or computed, controlled by simplification rules and switches. In this way, REDUCE supports rule-based programming, which is the classic REDUCE programming paradigm. In early versions of REDUCE, rules and switches could only be set globally, but modern REDUCE also supports local setting of rules and switches, meaning that they control the simplification of only one expression. REDUCE programs often contain a mix of programming paradigms.
== Algebraic programming examples ==
The screenshot shows simple interactive use.
As a simple programming example, consider the problem of computing the nth Taylor polynomial of the function f(x) about the point x = a, which is given by the formula
∑_{r=0}^{n} f^{(r)}(a)/r! · (x − a)^r.
Here, f^{(r)} denotes the rth derivative of f evaluated at the point a and r! denotes the factorial of r. (However, note that REDUCE includes sophisticated facilities for power-series expansion.)
As an example of functional programming in REDUCE, here is an easy way to compute the 5th Taylor polynomial of sin x about 0. In the following code, the control variable r takes values from 0 through 5 in steps of 1, df is the REDUCE differentiation operator and the operator sub performs substitution of its first argument into its second. Note that this code is very similar to the mathematical formula above (with n = 5 and a = 0).
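The code itself was likely an image in the original; a plausible reconstruction matching the description is:

```reduce
for r := 0:5 sum sub(x = 0, df(sin(x), x, r)) * x^r / factorial(r);
```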
produces by default the output
This is correct, but it doesn't look much like a Taylor series. That can be fixed by changing a few output-control switches and then evaluating the special variable ws, which stands for workspace and holds the last non-empty output expression:
As an example of procedural programming in REDUCE, here is a procedure to compute the general Taylor polynomial, which works for functions that are well-behaved at the expansion point a.
The procedure is called my_taylor because REDUCE already includes an operator called taylor. All the text following a % sign up to the end of the line is a comment. The keyword scalar introduces and initializes two local variables, result and mul. The keywords begin and end delimit a block of code that may include local variables and may return a value, whereas the symbols << and >> delimit a group of statements without introducing local variables.
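The listing itself is missing; a sketch matching the description above (local variables result and mul, a begin...end block and a << ... >> group) might look like this:

```reduce
procedure my_taylor(f, x, a, n);
   % Compute the nth Taylor polynomial of f in x about x = a.
   begin scalar result, mul;
      result := sub(x = a, f);   % the r = 0 term
      mul := 1;
      for r := 1:n do <<
         mul := mul * (x - a) / r;                      % mul is now (x - a)^r / r!
         result := result + sub(x = a, df(f, x, r)) * mul
      >>;
      return result
   end;
```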
The procedure may be called as follows to compute the same Taylor polynomial as above.
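Given the procedure described above, the missing call was presumably something like:

```reduce
my_taylor(sin(x), x, 0, 5);
```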
== File and package handling ==
REDUCE GUIs provide menu support for some or all of the file and package handling described below.
=== File handling ===
In order to develop non-trivial computations, it is convenient to store source code in a file and have REDUCE read it instead of interactive input. REDUCE input should be plain text (not rich text as produced by word-processing applications). REDUCE filenames are arbitrary. The REDUCE source code uses the filename extension .red for the main source code and .tst for the test files, and for that reason REDUCE GUIs such as CSL REDUCE normally offer to input files with those extensions by default, but on platforms such as Microsoft Windows the extension .txt may be more convenient. It is recommended to end a REDUCE input file with the line
;end;
as an end-of-file marker. This is something of a historical quirk but it avoids potential warning messages. Apart from that, an input file can contain whatever might be entered interactively into REDUCE. The command
in file1, file2, ...
inputs each of the named files in succession into REDUCE, essentially as if their contents had been entered interactively, after which REDUCE waits for further interactive input. If the separator used to terminate this command is a semi-colon (;) then the file content is echoed as output; if the separator is a dollar sign ($) then the file content is not echoed.
REDUCE filenames can be either absolute or relative to the current directory; when using a REDUCE GUI absolute filenames are safer because it is not obvious what the current directory is! Filenames can be specified as either strings or identifiers; strings (in double quotes) are usually more convenient because otherwise filename elements such as directory separators and dots must be escaped with an exclamation mark (!). Note that the Microsoft Windows directory or folder separator, backslash (\), does not need to be doubled in REDUCE strings because backslash is not an escape character in REDUCE, but REDUCE on Microsoft Windows also accepts forward slash (/) as the directory separator.
REDUCE output can be directed to a file instead of the interactive display by executing the command
out file;
Output redirection can be terminated permanently by executing the command
shut file;
or temporarily by executing the command
out t;
There are similar mechanisms for directing a compiled version of the REDUCE input to a file and loading compiled code, which is the basis for building REDUCE and can be used to extend it.
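Putting the redirection commands together, a typical session might look like this (the filename is illustrative):

```reduce
out "results.txt";     % subsequent output goes to results.txt
df(x^3, x);            % this result is written to the file
shut "results.txt";    % close the file; output returns to the display
```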
=== Loading packages ===
REDUCE is composed of a number of packages; some are pre-loaded, some are auto-loaded when needed, and some must be explicitly loaded before they can be used. The command
load_package package1, package2, ...
loads each of the named packages in succession into REDUCE. Package names are not filenames; they are simple identifiers that do not need any exclamation marks, so they are normally input as identifiers, although they can be input as strings. A package consists of one or more files of compiled Lisp code, and the load_package command ensures that the right files are loaded in the right order. The precise filenames and locations depend on the version of Lisp on which REDUCE is built, but the package names are always the same.
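For example, specfn (special functions) and odesolve (ordinary differential equation solving) are standard REDUCE packages:

```reduce
load_package specfn, odesolve;
```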
== Types and variable scope ==
REDUCE inherits dynamic scoping from Lisp, which means that data have types but variables themselves do not: the type of a variable is the type of the data assigned to it. The simplest REDUCE data types are Standard Lisp atomic types such as identifiers, machine numbers (i.e. "small" integers and floating-point numbers supported directly by the computer hardware), and strings. Most other REDUCE data types are represented internally as Lisp lists whose first element (car) indicates the data type. For example, the REDUCE input
produces the display
and the internal representation of this matrix is the Lisp list
(mat (1 2) (3 4))
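The input and its display are missing above; reconstructed from this internal representation, the input was presumably:

```reduce
m := mat((1, 2),
         (3, 4));
```

which REDUCE displays as a 2 × 2 matrix.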
The main algebraic objects used in REDUCE are quotients of two possibly-multivariate polynomials, the indeterminates of which, called kernels, may in fact be functions of one or more variables, e.g. the input
produces the display
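The missing input, reconstructed from the prefix form quoted below, was presumably:

```reduce
operator f;               % declare f as an (unbound) operator
z := (x + y^2)/f(x, y);
```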
REDUCE uses two representations for such algebraic objects. One is called prefix form, which is just the Standard Lisp code for the expression and is convenient for operations such as input and output; e.g. for z it is
(quotient (plus x (expt y 2)) (f x y))
The other is called standard quotient form, which is better for performing algebraic manipulations such as addition; e.g. for z it is
(!*sq ((((x . 1) . 1) ((y . 2) . 1)) (((f x y) . 1) . 1)) t)
REDUCE converts between these two representations as necessary, but tries to retain standard quotient form as much as possible to avoid the conversion overhead.
Because variables have no types there are no variable type declarations in REDUCE, but there are variable scope declarations. The scope of a variable refers to the range of a program throughout which it has the same significance. By default, REDUCE variables are automatically global in scope, meaning that they have the same significance everywhere, i.e. once a variable has been assigned a value, it will evaluate to that same value everywhere. Variables can be declared to have scope limited to a particular block of code by delimiting that block of code by the keywords begin and end, and declaring the variables scalar at the start of the block, using the following syntax (as illustrated in the algebraic programming examples above):
begin scalar variable1, variable2, ...;
statements
end
Each variable so declared can optionally be followed by an assignment operator (:=) and an initial value. The keyword scalar should be read as meaning local. (The reason for the name scalar is buried in the history of REDUCE, but it was probably chosen to distinguish local variables from the relativistic 4-vectors and Dirac gamma matrices defined in the high-energy physics package, which was the original core of REDUCE.)
The scalar keyword can be replaced by integer or real. The difference is that integer variables are initialized by default to 0, whereas scalar and real variables are initialized by default to the Lisp value nil (which has the algebraic value 0 anyway). This distinction is more significant in the REDUCE implementation language, RLISP, also known as symbolic or lisp mode. Otherwise, it is useful as documentation of the intended use of local variables.
There are two other variable declarations that are used only in the implementation of REDUCE, i.e. in symbolic mode. The REDUCE begin...end block described above is translated into a Standard Lisp prog form by the REDUCE parser, and all Standard Lisp variables should either be bound in prog forms, or declared global or fluid. In RLISP, these declarations look like this:
fluid '(variable1 variable2 ...)
global '(variable1 variable2 ...)
A global variable cannot be rebound in a prog form, whereas a fluid variable can. This distinction is normally only significant to a Lisp compiler and is used to maximize efficiency; in interpreted code these declarations can be skipped and undeclared variables are effectively fluid.
== Graphics ==
REDUCE supports graphical display via gnuplot, which is an independent portable open-source graphics package that is included in all REDUCE binary distributions. The REDUCE GNUPLOT package supports the display of curves or surfaces defined by formulas and/or data sets via the command plot(...). This command exposes some, but not all, of the capabilities of gnuplot. The REDUCE TURTLE and LOGOTURTLE packages are built on the REDUCE GNUPLOT package and support turtle graphics in two dimensions; the LOGOTURTLE package also exposes additional capabilities of gnuplot, such as control of colour and line thickness, filling and text annotations.
== Available implementations and supported platforms ==
REDUCE is available from SourceForge. Binary distributions are released a few times a year, with no fixed schedule, as snapshots of the Subversion repository; compressed archive snapshots of the full source code are also provided. SourceForge can be set up to notify users when a new release is available. In 2024, binary distributions were released for 64-bit versions of macOS, Linux (Debian and Red Hat based systems) and Microsoft Windows. The installers either include or are available for both CSL- and PSL-REDUCE, and may include the REDUCE source code. REDUCE can be built from the source code on a larger range of platforms and on other Lisp systems, such as Common Lisp.
== Other software that uses REDUCE ==
The following projects use REDUCE:
ALLTYPES (ALgebraic Language and TYPe System) is a computer algebra type system with particular emphasis on differential algebra and differential equations;
DAISY (Differential Algebra for Identifiability of SYstems) is a software tool to perform structural identifiability analysis for linear and nonlinear dynamic models described by polynomial or rational ODEs;
MTT (Model Transformation Tools) is a set of tools for modeling dynamic physical systems using the bond-graph methodology;
Reduce.jl is a symbolic parser for Julia language term rewriting using REDUCE algebra;
Redlog (REDUCE Logic System) provides more than 100 functions on first-order formulas and was originally independent but is now available as a REDUCE package;
Pure is a functional programming language that provides bindings for REDUCE.
== See also ==
List of computer algebra systems
ALTRAN
REDUCE Meets CAMAL - J. P. Fitch [1]
== References ==
== External links ==
REDUCE on SourceForge
Anthony C. Hearn et al., REDUCE User's Manual [ HTML | PDF ].
Anthony C. Hearn, "REDUCE: The First Forty Years". Invited paper presented at the A3L Conference in Honor of the 60th Birthday of Volker Weispfenning, April 2005.
Andrey Grozin, "TeXmacs-Reduce interface", April 2012.
In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalized version of the familiar arithmetic technique called long division. It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method).
Polynomial long division is an algorithm that implements the Euclidean division of polynomials, which starting from two polynomials A (the dividend) and B (the divisor) produces, if B is not zero, a quotient Q and a remainder R such that
A = BQ + R,
and either R = 0 or the degree of R is lower than the degree of B. These conditions uniquely define Q and R, which means that Q and R do not depend on the method used to compute them.
The result R = 0 occurs if and only if the polynomial A has B as a factor. Thus long division is a means for testing whether one polynomial has another as a factor, and, if it does, for factoring it out. For example, if a root r of A is known, it can be factored out by dividing A by (x – r).
== Example ==
=== Polynomial long division ===
Find the quotient and the remainder of the division of (x3 − 2x2 − 4), the dividend, by (x − 3), the divisor.
The dividend is first rewritten like this:
x3 − 2x2 + 0x − 4.
The quotient and remainder can then be determined as follows:
Divide the first term of the dividend by the highest term of the divisor (meaning the one with the highest power of x, which in this case is x). Place the result above the bar (x3 ÷ x = x2).
              x2
x − 3 ) x3 − 2x2 + 0x − 4
Multiply the divisor by the result just obtained (the first term of the eventual quotient). Write the result under the first two terms of the dividend (x2 · (x − 3) = x3 − 3x2).
              x2
x − 3 ) x3 − 2x2 + 0x − 4
        x3 − 3x2
Subtract the product just obtained from the appropriate terms of the original dividend (being careful that subtracting something having a minus sign is equivalent to adding something having a plus sign), and write the result underneath (x3 − 2x2) − (x3 − 3x2) = −2x2 + 3x2 = x2
Then, "bring down" the next term from the dividend.
              x2
x − 3 ) x3 − 2x2 + 0x − 4
        x3 − 3x2
        ────────
             x2 + 0x
Repeat the previous three steps, except this time use the two terms that have just been written as the dividend.
             x2 +  x
x − 3 ) x3 − 2x2 + 0x − 4
        x3 − 3x2
        ────────
           + x2 + 0x − 4
           + x2 − 3x
           ─────────────
                + 3x − 4
Repeat step 4. This time, there is nothing to "bring down".
             x2 +  x + 3
x − 3 ) x3 − 2x2 + 0x − 4
        x3 − 3x2
        ────────
           + x2 + 0x − 4
           + x2 − 3x
           ─────────────
                + 3x − 4
                + 3x − 9
                ────────
                     + 5
The polynomial above the bar is the quotient q(x), and the number left over (5) is the remainder r(x).
x3 − 2x2 − 4 = (x − 3) · (x2 + x + 3) + 5,  with q(x) = x2 + x + 3 and r(x) = 5.
The long division algorithm for arithmetic is very similar to the above algorithm, in which the variable x is replaced (in base 10) by the specific number 10.
=== Polynomial short division ===
Blomqvist's method is an abbreviated version of the long division above. This pen-and-paper method uses the same algorithm as polynomial long division, but mental calculation is used to determine remainders. This requires less writing, and can therefore be a faster method once mastered.
The division is at first written in a similar way as long multiplication with the dividend at the top, and the divisor below it. The quotient is to be written below the bar from left to right.
    x3 − 2x2 + 0x − 4
÷               x − 3
─────────────────────
Divide the first term of the dividend by the highest term of the divisor (x3 ÷ x = x2). Place the result below the bar. x3 has been divided leaving no remainder, and can therefore be marked as used by crossing it out. The result x2 is then multiplied by the second term in the divisor, −3, giving −3x2. Determine the partial remainder by subtracting: −2x2 − (−3x2) = x2. Mark −2x2 as used and place the new remainder x2 above it.
           x2
  [x3] [−2x2] + 0x − 4      (terms in brackets have been crossed out as used)
÷                x − 3
──────────────────────
x2
Divide the highest term of the remainder by the highest term of the divisor (x2 ÷ x = x). Place the result (+x) below the bar. x2 has been divided leaving no remainder, and can therefore be marked as used. The result x is then multiplied by the second term in the divisor, −3, giving −3x. Determine the partial remainder by subtracting: 0x − (−3x) = 3x. Mark 0x as used and place the new remainder 3x above it.
          [x2]  3x
  [x3] [−2x2] [0x] − 4
÷                x − 3
──────────────────────
x2 + x
Divide the highest term of the remainder by the highest term of the divisor (3x ÷ x = 3). Place the result (+3) below the bar. 3x has been divided leaving no remainder, and can therefore be marked as used. The result 3 is then multiplied by the second term in the divisor, −3, giving −9. Determine the partial remainder by subtracting: −4 − (−9) = 5. Mark −4 as used and place the new remainder 5 above it.
          [x2] [3x]  5
  [x3] [−2x2] [0x] [−4]
÷                 x − 3
───────────────────────
x2 + x + 3
The polynomial below the bar is the quotient q(x), and the number left over (5) is the remainder r(x).
== Pseudocode ==
The algorithm can be represented in pseudocode as follows, where +, −, and × represent polynomial arithmetic, and lead(r) / lead(d) represents the polynomial obtained by dividing the two leading terms:
function n / d is
    require d ≠ 0
    q ← 0
    r ← n             // At each step n = d × q + r
    while r ≠ 0 and degree(r) ≥ degree(d) do
        t ← lead(r) / lead(d)   // Divide the leading terms
        q ← q + t
        r ← r − t × d
    return (q, r)
This works equally well when degree(n) < degree(d); in that case the result is just the trivial (0, n).
This algorithm describes exactly the above paper and pencil method: d is written on the left of the ")"; q is written, term after term, above the horizontal line, the last term being the value of t; the region under the horizontal line is used to compute and write down the successive values of r.
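The pseudocode translates directly into a short program. The sketch below (names and coefficient convention are my own) represents a polynomial by its list of coefficients in order of decreasing degree:

```python
def poly_divmod(n, d):
    """Polynomial long division: return (q, r) with n = d*q + r.

    Polynomials are coefficient lists, highest degree first,
    e.g. x^3 - 2x^2 - 4 is [1, -2, 0, -4].
    """
    if not any(d):
        raise ZeroDivisionError("polynomial division by zero")
    q = [0] * max(len(n) - len(d) + 1, 1)
    r = list(n)
    while len(r) >= len(d) and any(r):
        # Divide the leading terms: t = lead(r) / lead(d)
        t = r[0] / d[0]
        shift = len(r) - len(d)
        q[len(q) - 1 - shift] += t
        # r <- r - t*d: the leading term cancels exactly
        for i, c in enumerate(d):
            r[i] -= t * c
        r.pop(0)  # drop the now-zero leading coefficient
    return q, r

# (x^3 - 2x^2 - 4) / (x - 3) = x^2 + x + 3 remainder 5
print(poly_divmod([1, -2, 0, -4], [1, -3]))  # ([1.0, 1.0, 3.0], [5.0])
```

Reproducing the worked example above, the function returns the quotient x² + x + 3 and remainder 5.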
== Euclidean division ==
For every pair of polynomials (A, B) such that B ≠ 0, polynomial division provides a quotient Q and a remainder R such that
{\displaystyle A=BQ+R,}
and either R=0 or degree(R) < degree(B). Moreover (Q, R) is the unique pair of polynomials having this property.
The process of getting the uniquely defined polynomials Q and R from A and B is called Euclidean division (sometimes division transformation). Polynomial long division is thus an algorithm for Euclidean division.
== Applications ==
=== Factoring polynomials ===
Sometimes one or more roots of a polynomial are known, perhaps having been found using the rational root theorem. If one root r of a polynomial P(x) of degree n is known then polynomial long division can be used to factor P(x) into the form (x − r)Q(x) where Q(x) is a polynomial of degree n − 1. Q(x) is simply the quotient obtained from the division process; since r is known to be a root of P(x), it is known that the remainder must be zero.
Likewise, if several roots r, s, . . . of P(x) are known, a linear factor (x − r) can be divided out to obtain Q(x), and then (x − s) can be divided out of Q(x), etc. Alternatively, the quadratic factor
{\displaystyle (x-r)(x-s)=x^{2}-(r{+}s)x+rs}
can be divided out of P(x) to obtain a quotient of degree n − 2.
This method is especially useful for cubic polynomials, and sometimes all the roots of a higher-degree polynomial can be obtained. For example, if the rational root theorem produces a single (rational) root of a quintic polynomial, it can be factored out to obtain a quartic (fourth degree) quotient; the explicit formula for the roots of a quartic polynomial can then be used to find the other four roots of the quintic. There is, however, no general way to solve a quintic by purely algebraic methods; see the Abel–Ruffini theorem.
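Dividing out a known root can be illustrated with NumPy's `numpy.polydiv` (coefficients in decreasing degree); the cubic below is my own example, not one from the text. Because r is a root, the remainder is zero and the quotient has degree one lower:

```python
import numpy as np

# P(x) = x^3 - 6x^2 + 11x - 6 has the known root r = 1,
# so dividing by (x - 1) must leave remainder 0.
P = [1, -6, 11, -6]
q, r = np.polydiv(P, [1, -1])
print(q)  # quotient Q(x) = x^2 - 5x + 6 = (x - 2)(x - 3)
print(r)  # zero remainder, confirming r = 1 is a root
```

The quotient x² − 5x + 6 then factors further, recovering the remaining roots 2 and 3.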
=== Finding tangents to polynomial functions ===
Polynomial long division can be used to find the equation of the line that is tangent to the graph of the function defined by the polynomial P(x) at a particular point x = r. If R(x) is the remainder of the division of P(x) by (x − r)2, then the equation of the tangent line at x = r to the graph of the function y = P(x) is y = R(x), regardless of whether or not r is a root of the polynomial.
==== Example ====
Find the equation of the line that is tangent to the following curve
{\displaystyle y=(x^{3}-12x^{2}-42)}
at:
{\displaystyle x=1}
Begin by dividing the polynomial by:
{\displaystyle (x-1)^{2}=(x^{2}-2x+1)}
{\displaystyle {\begin{array}{r}x-10\\x^{2}-2x+1\ {\overline {)\ x^{3}-12x^{2}+0x-42}}\\{\underline {x^{3}-{\color {White}0}2x^{2}+{\color {White}1}x}}{\color {White}{}-42}\\-10x^{2}-{\color {White}01}x-42\\{\underline {-10x^{2}+20x-10}}\\-21x-32\end{array}}}
The tangent line is
{\displaystyle y=(-21x-32)}
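The tangent line can be checked numerically: the remainder of P(x) = x³ − 12x² − 42 on division by (x − 1)² should be exactly −21x − 32. A quick sketch using `numpy.polydiv` (coefficient lists in decreasing degree):

```python
import numpy as np

P = [1, -12, 0, -42]               # x^3 - 12x^2 - 42
D = np.polymul([1, -1], [1, -1])   # (x - 1)^2 = x^2 - 2x + 1
q, r = np.polydiv(P, D)
print(r)  # coefficients of the tangent line: -21x - 32
```

The quotient x − 10 is discarded; only the remainder matters for the tangent.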
=== Cyclic redundancy check ===
A cyclic redundancy check uses the remainder of polynomial division to detect errors in transmitted messages.
== See also ==
Polynomial remainder theorem
Synthetic division, a more concise method of performing Euclidean polynomial division
Ruffini's rule
Euclidean domain
Gröbner basis
Greatest common divisor of two polynomials
== References == | Wikipedia/Polynomial_division_algorithm |
In mathematics, Liouville's theorem, originally formulated by French mathematician Joseph Liouville in 1833 to 1841, places an important restriction on antiderivatives that can be expressed as elementary functions.
The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions. These are called nonelementary antiderivatives. A standard example of such a function is
{\displaystyle e^{-x^{2}},}
whose antiderivative is (up to a multiplicative constant) the error function, familiar from statistics. Other examples include the functions
{\displaystyle {\frac {\sin(x)}{x}}}
and
{\displaystyle x^{x}.}
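A computer algebra system makes the phenomenon concrete: asking SymPy for these antiderivatives returns the non-elementary special functions erf and Si rather than elementary expressions (a quick illustration, not part of the original text):

```python
import sympy as sp

x = sp.symbols('x')
# Both integrals exist, but only in terms of non-elementary special functions
print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2
print(sp.integrate(sp.sin(x)/x, x))    # Si(x)
```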
Liouville's theorem states that elementary antiderivatives, if they exist, are in the same differential field as the function, plus possibly a finite number of applications of the logarithm function.
== Definitions ==
For any differential field F, the constants of F form the subfield
{\displaystyle \operatorname {Con} (F)=\{f\in F:Df=0\}.}
Given two differential fields F and G, G is called a logarithmic extension of F if G is a simple transcendental extension of F (that is, G = F(t) for some transcendental t) such that
{\displaystyle Dt={\frac {Ds}{s}}\quad {\text{ for some }}s\in F.}
This has the form of a logarithmic derivative. Intuitively, one may think of t as the logarithm of some element s of F, in which case this condition is analogous to the ordinary chain rule. However, F is not necessarily equipped with a unique logarithm; one might adjoin many "logarithm-like" extensions to F.
Similarly, an exponential extension is a simple transcendental extension that satisfies
{\displaystyle {\frac {Dt}{t}}=Ds\quad {\text{ for some }}s\in F.}
With the above caveat in mind, this element may be thought of as an exponential of an element s of F.
Finally, G is called an elementary differential extension of F if there is a finite chain of subfields from F to G where each extension in the chain is either algebraic, logarithmic, or exponential.
== Basic theorem ==
Suppose F and G are differential fields with Con(F) = Con(G), and that G is an elementary differential extension of F. Suppose f ∈ F and g ∈ G satisfy Dg = f (in words, suppose that G contains an antiderivative of f).
Then there exist c1, …, cn ∈ Con(F) and f1, …, fn, s ∈ F such that
{\displaystyle f=c_{1}{\frac {Df_{1}}{f_{1}}}+\dotsb +c_{n}{\frac {Df_{n}}{f_{n}}}+Ds.}
In other words, the only functions that have "elementary antiderivatives" (that is, antiderivatives living in, at worst, an elementary differential extension of F) are those with this form. Thus, on an intuitive level, the theorem states that the only elementary antiderivatives are the "simple" functions plus a finite number of logarithms of "simple" functions.
A proof of Liouville's theorem can be found in section 12.4 of Geddes, et al. See Lützen's scientific bibliography for a sketch of Liouville's original proof (Chapter IX. Integration in Finite Terms), its modern exposition and algebraic treatment (ibid. §61).
== Examples ==
As an example, the field F := C(x) of rational functions in a single variable has a derivation given by the standard derivative with respect to that variable. The constants of this field are just the complex numbers C; that is, Con(C(x)) = C.
The function f := 1/x, which exists in C(x), does not have an antiderivative in C(x).
Its antiderivatives ln x + C do, however, exist in the logarithmic extension C(x, ln x).
Likewise, the function 1/(x^2 + 1) does not have an antiderivative in C(x).
Its antiderivatives tan−1(x) + C do not seem to satisfy the requirements of the theorem, since they are not (apparently) sums of rational functions and logarithms of rational functions. However, a calculation with Euler's formula
{\displaystyle e^{i\theta }=\cos \theta +i\sin \theta }
shows that in fact the antiderivatives can be written in the required manner (as logarithms of rational functions).
{\displaystyle {\begin{aligned}e^{2i\theta }&={\frac {e^{i\theta }}{e^{-i\theta }}}={\frac {\cos \theta +i\sin \theta }{\cos \theta -i\sin \theta }}={\frac {1+i\tan \theta }{1-i\tan \theta }}\\\theta &={\frac {1}{2i}}\ln \left({\frac {1+i\tan \theta }{1-i\tan \theta }}\right)\\\tan ^{-1}x&={\frac {1}{2i}}\ln \left({\frac {1+ix}{1-ix}}\right)\end{aligned}}}
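The final identity can be spot-checked numerically for real x with Python's standard library (a sanity check added here, not part of the original derivation):

```python
import cmath
import math

x = 0.5
lhs = math.atan(x)
# tan^{-1}(x) = (1/2i) * ln((1 + ix)/(1 - ix))
rhs = cmath.log((1 + 1j*x) / (1 - 1j*x)) / (2j)
print(lhs, rhs)  # both approximately 0.4636; rhs has negligible imaginary part
```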
== Relationship with differential Galois theory ==
Liouville's theorem is sometimes presented as a theorem in differential Galois theory, but this is not strictly true. The theorem can be proved without any use of Galois theory. Furthermore, the Galois group of a simple antiderivative is either trivial (if no field extension is required to express it), or is simply the additive group of the constants (corresponding to the constant of integration). Thus, an antiderivative's differential Galois group does not encode enough information to determine if it can be expressed using elementary functions, the major condition of Liouville's theorem.
== See also ==
== Notes ==
== References ==
Bertrand, D. (1996), "Review of "Lectures on differential Galois theory"" (PDF), Bulletin of the American Mathematical Society, 33 (2), doi:10.1090/s0273-0979-96-00652-0, ISSN 0002-9904
Geddes, Keith O.; Czapor, Stephen R.; Labahn, George (1992). Algorithms for Computer Algebra. Kluwer Academic Publishers. ISBN 0-7923-9259-0.
Liouville, Joseph (1833a). "Premier mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 124–148.
Liouville, Joseph (1833b). "Second mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 149–193.
Liouville, Joseph (1833c). "Note sur la détermination des intégrales dont la valeur est algébrique". Journal für die reine und angewandte Mathematik. 10: 347–359.
Magid, Andy R. (1994), Lectures on differential Galois theory, University Lecture Series, vol. 7, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-7004-4, MR 1301076
Magid, Andy R. (1999), "Differential Galois theory" (PDF), Notices of the American Mathematical Society, 46 (9): 1041–1049, ISSN 0002-9920, MR 1710665
van der Put, Marius; Singer, Michael F. (2003), Galois theory of linear differential equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 328, Berlin, New York: Springer-Verlag, ISBN 978-3-540-44228-8, MR 1960772
== External links ==
Weisstein, Eric W. "Liouville's Principle". MathWorld. | Wikipedia/Liouville's_theorem_(differential_algebra) |
In the mathematical study of partial differential equations, the Bateman transform is a method for solving the Laplace equation in four dimensions and wave equation in three by using a line integral of a holomorphic function in three complex variables. It is named after the mathematician Harry Bateman, who first published the result in (Bateman 1904).
The formula asserts that if ƒ is a holomorphic function of three complex variables, then
{\displaystyle \phi (w,x,y,z)=\oint _{\gamma }f{\big (}(w+ix)+(iy+z)\zeta ,(iy-z)+(w-ix)\zeta ,\zeta {\big )}\,d\zeta }
is a solution of the Laplace equation, which follows by differentiation under the integral. Furthermore, Bateman asserted that the most general solution of the Laplace equation arises in this way.
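That the integrand is harmonic for every fixed ζ — so that differentiating under the integral sign gives Δφ = 0 — can be verified symbolically. The particular holomorphic f below is an arbitrary choice for illustration:

```python
import sympy as sp

w, x, y, z, zeta = sp.symbols('w x y z zeta')
p = (w + sp.I*x) + (sp.I*y + z)*zeta       # first argument of f
q = (sp.I*y - z) + (w - sp.I*x)*zeta       # second argument of f
u = p**2*q + zeta*sp.exp(p)                # some holomorphic f(p, q, zeta)
# Laplacian in the four real variables w, x, y, z
laplacian = sum(sp.diff(u, v, 2) for v in (w, x, y, z))
print(sp.simplify(laplacian))  # 0
```

The cancellation happens because the squared gradients of both arguments, and their dot product, vanish identically in ζ.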
== References ==
Bateman, Harry (1904), "The solution of partial differential equations by means of definite integrals", Proceedings of the London Mathematical Society, 1 (1): 451–458, doi:10.1112/plms/s2-1.1.451, archived from the original on 2013-04-15.
Eastwood, Michael (2002), Bateman's formula (PDF), MSRI. | Wikipedia/Bateman_transform |
In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions.
== Definition ==
=== Standard definition ===
Definition 1. Let Ω be an open set in the Euclidean space R^n and f : Ω → C be a Lebesgue measurable function. If f is such that
{\displaystyle \int _{K}|f|\,\mathrm {d} x<+\infty ,}
i.e. its Lebesgue integral is finite on all compact subsets K of Ω, then f is called locally integrable. The set of all such functions is denoted by L1,loc(Ω):
{\displaystyle L_{1,\mathrm {loc} }(\Omega )={\bigl \{}f\colon \Omega \to \mathbb {C} {\text{ measurable}}:f|_{K}\in L_{1}(K)\ \forall \,K\subset \Omega ,\,K{\text{ compact}}{\bigr \}},}
where f|K denotes the restriction of f to the set K.
The classical definition of a locally integrable function involves only measure-theoretic and topological concepts, and can be carried over abstractly to complex-valued functions on a topological measure space (X, Σ, μ). However, since the most common application of such functions is to distribution theory on Euclidean spaces, all the definitions in this and the following sections deal explicitly only with this important case.
=== An alternative definition ===
Definition 2. Let Ω be an open set in the Euclidean space R^n. Then a function f : Ω → C such that
{\displaystyle \int _{\Omega }|f\varphi |\,\mathrm {d} x<+\infty ,}
for each test function φ ∈ C ∞c (Ω) is called locally integrable, and the set of such functions is denoted by L1,loc(Ω). Here C ∞c (Ω) denotes the set of all infinitely differentiable functions φ : Ω → R with compact support contained in Ω.
This definition has its roots in the approach to measure and integration theory based on the concept of continuous linear functional on a topological vector space, developed by the Nicolas Bourbaki school: it is also the one adopted by Strichartz (2003) and by Maz'ya & Shaposhnikova (2009, p. 34). This "distribution theoretic" definition is equivalent to the standard one, as the following lemma proves:
Lemma 1. A given function f : Ω → C is locally integrable according to Definition 1 if and only if it is locally integrable according to Definition 2, i.e.
{\displaystyle \int _{K}|f|\,\mathrm {d} x<+\infty \quad \forall \,K\subset \Omega ,\,K{\text{ compact}}\quad \Longleftrightarrow \quad \int _{\Omega }|f\varphi |\,\mathrm {d} x<+\infty \quad \forall \,\varphi \in C_{\mathrm {c} }^{\infty }(\Omega ).}
=== Proof of Lemma 1 ===
If part: Let φ ∈ C ∞c (Ω) be a test function. It is bounded by its supremum norm ||φ||∞, measurable, and has compact support, say K. Hence
{\displaystyle \int _{\Omega }|f\varphi |\,\mathrm {d} x=\int _{K}|f|\,|\varphi |\,\mathrm {d} x\leq \|\varphi \|_{\infty }\int _{K}|f|\,\mathrm {d} x<\infty }
by Definition 1.
Only if part: Let K be a compact subset of the open set Ω. We will first construct a test function φK ∈ C ∞c (Ω) which majorises the indicator function χK of K.
The usual set distance between K and the boundary ∂Ω is strictly greater than zero, i.e.
{\displaystyle \Delta :=d(K,\partial \Omega )>0,}
hence it is possible to choose a real number δ such that Δ > 2δ > 0 (if ∂Ω is the empty set, take Δ = ∞). Let Kδ and K2δ denote the closed δ-neighborhood and 2δ-neighborhood of K, respectively. They are likewise compact and satisfy
{\displaystyle K\subset K_{\delta }\subset K_{2\delta }\subset \Omega ,\qquad d(K_{\delta },\partial \Omega )=\Delta -\delta >\delta >0.}
Now use convolution to define the function φK : Ω → R by
{\displaystyle \varphi _{K}(x)={\chi _{K_{\delta }}\ast \varphi _{\delta }(x)}=\int _{\mathbb {R} ^{n}}\chi _{K_{\delta }}(y)\,\varphi _{\delta }(x-y)\,\mathrm {d} y,}
where φδ is a mollifier constructed by using the standard positive symmetric one. Obviously φK is non-negative in the sense that φK ≥ 0, infinitely differentiable, and its support is contained in K2δ, in particular it is a test function. Since φK(x) = 1 for all x ∈ K, we have that χK ≤ φK.
Let f be a locally integrable function according to Definition 2. Then
{\displaystyle \int _{K}|f|\,\mathrm {d} x=\int _{\Omega }|f|\chi _{K}\,\mathrm {d} x\leq \int _{\Omega }|f|\varphi _{K}\,\mathrm {d} x<\infty .}
Since this holds for every compact subset K of Ω, the function f is locally integrable according to Definition 1. □
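The construction of φK in the proof can be illustrated numerically in one dimension (the grid, the radii, and the bump profile below are my own choices): convolving the indicator of Kδ with a standard mollifier of radius δ yields a smooth function equal to 1 on K and vanishing outside K2δ.

```python
import numpy as np

h = 1e-3
grid = np.arange(-2.0, 2.0, h)
delta = 0.3                      # K = [-0.3, 0.3], K_delta = [-0.6, 0.6]

# Standard bump: phi_delta(x) proportional to exp(-1/(delta^2 - x^2)) on |x| < delta
bump = np.zeros_like(grid)
mask = np.abs(grid) < delta
bump[mask] = np.exp(-1.0 / (delta**2 - grid[mask]**2))
bump /= bump.sum() * h           # normalize to unit integral

chi = (np.abs(grid) <= 2*delta).astype(float)    # indicator of K_delta
phi_K = np.convolve(chi, bump, mode='same') * h  # phi_K = chi_{K_delta} * phi_delta

i0 = np.argmin(np.abs(grid))         # x = 0, a point of K
i1 = np.argmin(np.abs(grid - 1.0))   # x = 1, outside K_{2delta}
print(phi_K[i0], phi_K[i1])  # approximately 1.0 and 0.0
```

In particular phi_K majorises the indicator of K, which is exactly the property used in the estimate above.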
=== Generalization: locally p-integrable functions ===
Definition 3. Let Ω be an open set in the Euclidean space R^n and f : Ω → C be a Lebesgue measurable function. If, for a given p with 1 ≤ p ≤ +∞, f satisfies
{\displaystyle \int _{K}|f|^{p}\,\mathrm {d} x<+\infty ,}
i.e., it belongs to Lp(K) for all compact subsets K of Ω, then f is called locally p-integrable or also p-locally integrable. The set of all such functions is denoted by Lp,loc(Ω):
{\displaystyle L_{p,\mathrm {loc} }(\Omega )=\left\{f:\Omega \to \mathbb {C} {\text{ measurable }}\left|\ f|_{K}\in L_{p}(K),\ \forall \,K\subset \Omega ,K{\text{ compact}}\right.\right\}.}
An alternative definition, completely analogous to the one given for locally integrable functions, can also be given for locally p-integrable functions: it can also be proven equivalent to the one in this section. Despite their apparent higher generality, locally p-integrable functions form a subset of locally integrable functions for every p such that 1 < p ≤ +∞.
=== Notation ===
Apart from the different glyphs which may be used for the uppercase "L", there are a few variants for the notation of the set of locally integrable functions:
{\displaystyle L_{\mathrm {loc} }^{p}(\Omega ),} adopted by (Hörmander 1990, p. 37), (Strichartz 2003, pp. 12–13) and (Vladimirov 2002, p. 3).
{\displaystyle L_{p,\mathrm {loc} }(\Omega ),} adopted by (Maz'ya & Poborchi 1997, p. 4) and Maz'ya & Shaposhnikova (2009, p. 44).
{\displaystyle L_{p}(\Omega ,\mathrm {loc} ),} adopted by (Maz'ja 1985, p. 6) and (Maz'ya 2011, p. 2).
== Properties ==
=== Lp,loc is a complete metric space for all p ≥ 1 ===
Theorem 1. Lp,loc is a complete metrizable space: its topology can be generated by the following metric:
{\displaystyle d(u,v)=\sum _{k\geq 1}{\frac {1}{2^{k}}}{\frac {\Vert u-v\Vert _{p,\omega _{k}}}{1+\Vert u-v\Vert _{p,\omega _{k}}}}\qquad u,v\in L_{p,\mathrm {loc} }(\Omega ),}
where {ωk}k≥1 is a family of non-empty open sets such that
ωk ⊂⊂ ωk+1, meaning that ωk is compactly included in ωk+1, i.e. it is a set having compact closure strictly included in the set of higher index.
∪kωk = Ω.
{\displaystyle \Vert \cdot \Vert _{p,\omega _{k}}\colon L_{p,\mathrm {loc} }(\Omega )\to \mathbb {R} ^{+}}, k ∈ N, is an indexed family of seminorms, defined as
{\displaystyle {\Vert u\Vert _{p,\omega _{k}}}=\left(\int _{\omega _{k}}|u(x)|^{p}\,\mathrm {d} x\right)^{1/p}\qquad \forall \,u\in L_{p,\mathrm {loc} }(\Omega ).}
In references (Gilbarg & Trudinger 2001, p. 147), (Maz'ya & Poborchi 1997, p. 5), (Maz'ja 1985, p. 6) and (Maz'ya 2011, p. 2), this theorem is stated but not proved on a formal basis: a complete proof of a more general result, which includes it, is found in (Meise & Vogt 1997, p. 40).
=== Lp is a subspace of L1,loc for all p ≥ 1 ===
Theorem 2. Every function f belonging to Lp(Ω), 1 ≤ p ≤ +∞, where Ω is an open subset of R^n, is locally integrable.
Proof. The case p = 1 is trivial, therefore in the sequel of the proof it is assumed that 1 < p ≤ +∞. Consider the characteristic function χK of a compact subset K of Ω: then, for p ≤ +∞,
{\displaystyle \left|{\int _{\Omega }|\chi _{K}|^{q}\,\mathrm {d} x}\right|^{1/q}=\left|{\int _{K}\mathrm {d} x}\right|^{1/q}=|K|^{1/q}<+\infty ,}
where
q is a positive number such that 1/p + 1/q = 1 for a given 1 ≤ p ≤ +∞
|K| is the Lebesgue measure of the compact set K
Then for any f belonging to Lp(Ω), by Hölder's inequality, the product fχK is integrable i.e. belongs to L1(Ω) and
{\displaystyle {\int _{K}|f|\,\mathrm {d} x}={\int _{\Omega }|f\chi _{K}|\,\mathrm {d} x}\leq \left|{\int _{\Omega }|f|^{p}\,\mathrm {d} x}\right|^{1/p}\left|{\int _{K}\mathrm {d} x}\right|^{1/q}=\|f\|_{p}|K|^{1/q}<+\infty ,}
therefore
{\displaystyle f\in L_{1,\mathrm {loc} }(\Omega ).}
Note that since the following inequality is true
{\displaystyle {\int _{K}|f|\,\mathrm {d} x}={\int _{\Omega }|f\chi _{K}|\,\mathrm {d} x}\leq \left|{\int _{K}|f|^{p}\,\mathrm {d} x}\right|^{1/p}\left|{\int _{K}\mathrm {d} x}\right|^{1/q}=\|f\chi _{K}\|_{p}|K|^{1/q}<+\infty ,}
the theorem is true also for functions f belonging only to the space of locally p-integrable functions, therefore the theorem implies also the following result.
Corollary 1. Every function f in Lp,loc(Ω), 1 < p ≤ ∞, is locally integrable, i.e. belongs to L1,loc(Ω).
Note: If Ω is an open subset of R^n that is also bounded, then one has the standard inclusion Lp(Ω) ⊂ L1(Ω), which makes sense given the above inclusion L1(Ω) ⊂ L1,loc(Ω). But the first of these statements is not true if Ω is not bounded; then it is still true that Lp(Ω) ⊂ L1,loc(Ω) for any p, but not that Lp(Ω) ⊂ L1(Ω). To see this, one typically considers the function u(x) = 1, which is in L∞(R^n) but not in Lp(R^n) for any finite p.
=== L1,loc is the space of densities of absolutely continuous measures ===
Theorem 3. A function f is the density of an absolutely continuous measure if and only if f ∈ L1,loc.
The proof of this result is sketched by (Schwartz 1998, p. 18). Rephrasing its statement, this theorem asserts that every locally integrable function defines an absolutely continuous measure, and conversely that every absolutely continuous measure defines a locally integrable function: this is also, in the abstract measure theory framework, the form of the important Radon–Nikodym theorem given by Stanisław Saks in his treatise.
== Examples ==
The constant function 1 defined on the real line is locally integrable but not globally integrable since the real line has infinite measure. More generally, constants, continuous functions and integrable functions are locally integrable.
The function f(x) = 1/x for x ∈ (0, 1) is locally but not globally integrable on (0, 1). It is locally integrable since any compact set K ⊆ (0, 1) has positive distance from 0, and f is hence bounded on K. This example underpins the initial claim that locally integrable functions do not need to satisfy growth conditions near the boundary of bounded domains.
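This can be made concrete symbolically: on a compact interval [a, 1] ⊂ (0, 1) the integral of 1/x is finite, but it diverges as a → 0+ (a quick illustration added here, not part of the original text):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
# Integral over the compact set K = [a, 1]: finite for every a in (0, 1)
on_compact = sp.integrate(1/x, (x, a, 1))
print(on_compact)  # -log(a)
# ...but it blows up as a -> 0+, so 1/x is not globally integrable on (0, 1)
print(sp.limit(on_compact, a, 0, '+'))  # oo
```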
The function
{\displaystyle f(x)={\begin{cases}1/x&x\neq 0,\\0&x=0,\end{cases}}\quad x\in \mathbb {R} }
is not locally integrable at x = 0: it is indeed locally integrable near this point, since its integral over every compact set not including it is finite. Formally speaking, 1/x ∈ L1,loc(R ∖ {0}): however, this function can be extended to a distribution on the whole R as a Cauchy principal value.
The preceding example raises a question: does every function which is locally integrable in Ω ⊊ R admit an extension to the whole R as a distribution? The answer is negative, and a counterexample is provided by the following function:
{\displaystyle f(x)={\begin{cases}e^{1/x}&x\neq 0,\\0&x=0,\end{cases}}}
which does not define any distribution on R.
The following example, similar to the preceding one, is a function belonging to L1,loc(R ∖ {0}) which serves as an elementary counterexample in the application of the theory of distributions to differential operators with irregular singular coefficients:
{\displaystyle f(x)={\begin{cases}k_{1}e^{1/x^{2}}&x>0,\\0&x=0,\\k_{2}e^{1/x^{2}}&x<0,\end{cases}}}
where k1 and k2 are complex constants, is a general solution of the following elementary non-Fuchsian differential equation of first order
{\displaystyle x^{3}{\frac {\mathrm {d} f}{\mathrm {d} x}}+2f=0.}
Again f does not define any distribution on the whole R if k1 or k2 are not zero: the only distributional global solution of such an equation is therefore the zero distribution, and this shows how, in this branch of the theory of differential equations, the methods of the theory of distributions cannot be expected to have the same success achieved in other branches of the same theory, notably in the theory of linear differential equations with constant coefficients.
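That each branch k·e^(1/x²) solves the equation away from the origin is a one-line symbolic check (an illustration added here):

```python
import sympy as sp

x = sp.symbols('x', nonzero=True)
f = sp.exp(1/x**2)   # each branch k*exp(1/x^2) solves the ODE for x != 0
# Substitute into x^3 f' + 2 f and verify the residual vanishes
residual = x**3 * sp.diff(f, x) + 2*f
print(sp.simplify(residual))  # 0
```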
== Applications ==
Locally integrable functions play a prominent role in distribution theory and they occur in the definition of various classes of functions and function spaces, like functions of bounded variation. Moreover, they appear in the Radon–Nikodym theorem by characterizing the absolutely continuous part of every measure.
== See also ==
Compact set
Distribution (mathematics)
Lebesgue's density theorem
Lebesgue differentiation theorem
Lebesgue integral
Lp space
== Notes ==
== References ==
Cafiero, Federico (1959), Misura e integrazione, Monografie matematiche del Consiglio Nazionale delle Ricerche (in Italian), vol. 5, Roma: Edizioni Cremonese, pp. VII+451, MR 0215954, Zbl 0171.01503. Measure and integration (as the English translation of the title reads) is a definitive monograph on integration and measure theory: the treatment of the limiting behavior of the integral of various kind of sequences of measure-related structures (measurable functions, measurable sets, measures and their combinations) is somewhat conclusive.
Gel'fand, I. M.; Shilov, G. E. (1964) [1958], Generalized functions. Vol. I: Properties and operations, New York–London: Academic Press, pp. xviii+423, ISBN 978-0-12-279501-5, MR 0166596, Zbl 0115.33101 {{citation}}: ISBN / Date incompatibility (help). Translated from the original 1958 Russian edition by Eugene Saletan, this is an important monograph on the theory of generalized functions, dealing both with distributions and analytic functionals.
Gilbarg, David; Trudinger, Neil S. (2001) [1998], Elliptic partial differential equations of second order, Classics in Mathematics (Revised 3rd printing of 2nd ed.), Berlin – Heidelberg – New York: Springer Verlag, pp. xiv+517, ISBN 3-540-41160-7, MR 1814364, Zbl 1042.35002.
Hörmander, Lars (1990), The analysis of linear partial differential operators I, Grundlehren der Mathematischen Wissenschaft, vol. 256 (2nd ed.), Berlin-Heidelberg-New York City: Springer-Verlag, pp. xii+440, ISBN 0-387-52343-X, MR 1065136, Zbl 0712.35001 (available also as ISBN 3-540-52343-X).
Maz'ja, Vladimir G. (1985), Sobolev Spaces, Berlin–Heidelberg–New York: Springer-Verlag, pp. xix+486, ISBN 3-540-13589-8, MR 0817985, Zbl 0692.46023 (available also as ISBN 0-387-13589-8).
Maz'ya, Vladimir G. (2011) [1985], Sobolev Spaces. With Applications to Elliptic Partial Differential Equations., Grundlehren der Mathematischen Wissenschaften, vol. 342 (2nd revised and augmented ed.), Berlin–Heidelberg–New York: Springer Verlag, pp. xxviii+866, ISBN 978-3-642-15563-5, MR 2777530, Zbl 1217.46002.
Maz'ya, Vladimir G.; Poborchi, Sergei V. (1997), Differentiable Functions on Bad Domains, Singapore–New Jersey–London–Hong Kong: World Scientific, pp. xx+481, ISBN 981-02-2767-1, MR 1643072, Zbl 0918.46033.
Maz'ya, Vladimir G.; Shaposhnikova, Tatyana O. (2009), Theory of Sobolev multipliers. With applications to differential and integral operators, Grundlehren der Mathematischen Wissenschaft, vol. 337, Heidelberg: Springer-Verlag, pp. xiii+609, ISBN 978-3-540-69490-8, MR 2457601, Zbl 1157.46001.
Meise, Reinhold; Vogt, Dietmar (1997), Introduction to Functional Analysis, Oxford Graduate Texts in Mathematics, vol. 2, Oxford: Clarendon Press, pp. x+437, ISBN 0-19-851485-9, MR 1483073, Zbl 0924.46002.
Saks, Stanisław (1937), Theory of the Integral, Monografie Matematyczne, vol. 7 (2nd ed.), Warsaw-Lwów: G.E. Stechert & Co., pp. VI+347, JFM 63.0183.05, MR 0167578, Zbl 0017.30004. English translation by Laurence Chisholm Young, with two additional notes by Stefan Banach: the Mathematical Reviews number refers to the Dover Publications 1964 edition, which is basically a reprint.
Schwartz, Laurent (1998) [1966], Théorie des distributions, Publications de l'Institut de Mathématique de l'Université de Strasbourg (in French) (Nouvelle ed.), Paris: Hermann Éditeurs, pp. xiii+420, ISBN 2-7056-5551-4, MR 0209834, Zbl 0149.09501.
Strichartz, Robert S. (2003), A Guide to Distribution Theory and Fourier Transforms (2nd printing ed.), River Edge, NJ: World Scientific Publishers, pp. x+226, ISBN 981-238-430-8, MR 2000535, Zbl 1029.46039.
Vladimirov, V. S. (2002), Methods of the theory of generalized functions, Analytical Methods and Special Functions, vol. 6, London–New York: Taylor & Francis, pp. XII+353, ISBN 0-415-27356-0, MR 2012831, Zbl 1078.46029. A monograph on the theory of generalized functions written with an eye towards their applications to several complex variables and mathematical physics, as is customary for the Author.
== External links ==
Rowland, Todd. "Locally integrable". MathWorld.
Vinogradova, I.A. (2001) [1994], "Locally integrable function", Encyclopedia of Mathematics, EMS Press
This article incorporates material from Locally integrable function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Locally_integrable_function |
In mathematics, the Radon transform is the integral transform which takes a function f defined on the plane to a function Rf defined on the (two-dimensional) space of lines in the plane, whose value at a particular line is equal to the line integral of the function over that line. The transform was introduced in 1917 by Johann Radon, who also provided a formula for the inverse transform. Radon further included formulas for the transform in three dimensions, in which the integral is taken over planes (integrating over lines is known as the X-ray transform). It was later generalized to higher-dimensional Euclidean spaces and more broadly in the context of integral geometry. The complex analogue of the Radon transform is known as the Penrose transform. The Radon transform is widely applicable to tomography, the creation of an image from the projection data associated with cross-sectional scans of an object.
== Explanation ==
If a function {\displaystyle f} represents an unknown density, then the Radon transform represents the projection data obtained as the output of a tomographic scan. Hence the inverse of the Radon transform can be used to reconstruct the original density from the projection data; it thus forms the mathematical underpinning for tomographic reconstruction, of which iterative reconstruction is one family of methods.
The Radon transform data is often called a sinogram because the Radon transform of an off-center point source is a sinusoid. Consequently, the Radon transform of a number of small objects appears graphically as a number of blurred sine waves with different amplitudes and phases.
The Radon transform is useful in computed axial tomography (CAT scan), barcode scanners, electron microscopy of macromolecular assemblies like viruses and protein complexes, reflection seismology and in the solution of hyperbolic partial differential equations.
== Definition ==
Let {\displaystyle f({\textbf {x}})=f(x,y)} be a function that satisfies the three regularity conditions:
{\displaystyle f({\textbf {x}})} is continuous;
the double integral {\displaystyle \iint {\dfrac {\vert f({\textbf {x}})\vert }{\sqrt {x^{2}+y^{2}}}}\,dx\,dy}, extending over the whole plane, converges;
for any arbitrary point {\displaystyle (x,y)} on the plane it holds that {\displaystyle \lim _{r\to \infty }\int _{0}^{2\pi }f(x+r\cos \varphi ,y+r\sin \varphi )\,d\varphi =0.}
The Radon transform, {\displaystyle Rf}, is a function defined on the space of straight lines {\displaystyle L\subset \mathbb {R} ^{2}} by the line integral along each such line as:
{\displaystyle Rf(L)=\int _{L}f(\mathbf {x} )\vert d\mathbf {x} \vert .}
Concretely, the parametrization of any straight line {\displaystyle L} with respect to arc length {\displaystyle z} can always be written:
{\displaystyle (x(z),y(z))={\Big (}(z\sin \alpha +s\cos \alpha ),(-z\cos \alpha +s\sin \alpha ){\Big )}\,}
where {\displaystyle s} is the distance of {\displaystyle L} from the origin and {\displaystyle \alpha } is the angle the normal vector to {\displaystyle L} makes with the {\displaystyle X}-axis. It follows that the quantities {\displaystyle (\alpha ,s)} can be considered as coordinates on the space of all lines in {\displaystyle \mathbb {R} ^{2}}, and the Radon transform can be expressed in these coordinates by:
{\displaystyle {\begin{aligned}Rf(\alpha ,s)&=\int _{-\infty }^{\infty }f(x(z),y(z))\,dz\\&=\int _{-\infty }^{\infty }f{\big (}(z\sin \alpha +s\cos \alpha ),(-z\cos \alpha +s\sin \alpha ){\big )}\,dz.\end{aligned}}}
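This parametrized line integral is easy to evaluate numerically. The sketch below (Python with NumPy, assuming it is available; grid sizes are illustrative) approximates Rf(α, s) by sampling f along the parametrized line and summing. For the indicator function of the unit disk, the Radon transform is 2√(1 − s²) for |s| < 1, independent of the angle:

```python
import numpy as np

def radon_line_integral(f, alpha, s, z_max=2.0, n=4001):
    # Approximate Rf(alpha, s) = ∫ f(z sin(alpha) + s cos(alpha), -z cos(alpha) + s sin(alpha)) dz
    # by a Riemann sum over the arc-length parameter z.
    z = np.linspace(-z_max, z_max, n)
    x = z * np.sin(alpha) + s * np.cos(alpha)
    y = -z * np.cos(alpha) + s * np.sin(alpha)
    return f(x, y).sum() * (z[1] - z[0])

# Indicator of the unit disk; its Radon transform is 2*sqrt(1 - s^2) for |s| < 1.
disk = lambda x, y: (x**2 + y**2 <= 1.0).astype(float)
approx = radon_line_integral(disk, alpha=0.7, s=0.5)
exact = 2.0 * np.sqrt(1.0 - 0.5**2)
print(approx, exact)  # both close to 1.732
```

Note that the value does not depend on the chosen angle, reflecting the rotational symmetry of the disk.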
More generally, in the {\displaystyle n}-dimensional Euclidean space {\displaystyle \mathbb {R} ^{n}}, the Radon transform of a function {\displaystyle f} satisfying the regularity conditions is a function {\displaystyle Rf} on the space {\displaystyle \Sigma _{n}} of all hyperplanes in {\displaystyle \mathbb {R} ^{n}}. It is defined by:
{\displaystyle Rf(\xi )=\int _{\xi }f(\mathbf {x} )\,d\sigma (\mathbf {x} ),\quad \forall \xi \in \Sigma _{n}}
where the integral is taken with respect to the natural hypersurface measure, {\displaystyle d\sigma } (generalizing the {\displaystyle \vert d\mathbf {x} \vert } term from the two-dimensional case). Observe that any element of {\displaystyle \Sigma _{n}} is characterized as the solution locus of an equation {\displaystyle \mathbf {x} \cdot \alpha =s}, where {\displaystyle \alpha \in S^{n-1}} is a unit vector and {\displaystyle s\in \mathbb {R} }. Thus the {\displaystyle n}-dimensional Radon transform may be rewritten as a function on {\displaystyle S^{n-1}\times \mathbb {R} } via:
{\displaystyle Rf(\alpha ,s)=\int _{\mathbf {x} \cdot \alpha =s}f(\mathbf {x} )\,d\sigma (\mathbf {x} ).}
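A small numerical sanity check of this hyperplane form (a Python/NumPy sketch; the particular unit vector and basis construction below are just convenient choices): for the Gaussian f(x) = exp(−π|x|²) in R³, integrating over the plane x·α = s gives exp(−π s²) for every unit vector α, since the restriction of f to the plane factors into exp(−π s²) times a two-dimensional Gaussian of total integral one.

```python
import numpy as np

# Radon transform over the plane x . alpha = s in R^3, for f(x) = exp(-pi |x|^2).
alpha = np.array([1.0, 2.0, 2.0])
alpha /= np.linalg.norm(alpha)

# Orthonormal basis (e1, e2) spanning the plane through s*alpha.
e1 = np.cross(alpha, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(alpha, e1)

s = 0.4
u = np.linspace(-4.0, 4.0, 801)
du = u[1] - u[0]
U, V = np.meshgrid(u, u)
pts = s * alpha + U[..., None] * e1 + V[..., None] * e2
approx = np.exp(-np.pi * (pts**2).sum(axis=-1)).sum() * du * du
print(approx, np.exp(-np.pi * s**2))  # both close to 0.6049
```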
It is also possible to generalize the Radon transform still further by integrating instead over {\displaystyle k}-dimensional affine subspaces of {\displaystyle \mathbb {R} ^{n}}.
. The X-ray transform is the most widely used special case of this construction, and is obtained by integrating over straight lines.
== Relationship with the Fourier transform ==
The Radon transform is closely related to the Fourier transform. We define the univariate Fourier transform here as:
{\displaystyle {\hat {f}}(\omega )=\int _{-\infty }^{\infty }f(x)e^{-2\pi ix\omega }\,dx.}
For a function of a 2-vector {\displaystyle \mathbf {x} =(x,y)}, the two-dimensional Fourier transform is:
{\displaystyle {\hat {f}}(\mathbf {w} )=\iint _{\mathbb {R} ^{2}}f(\mathbf {x} )e^{-2\pi i\mathbf {x} \cdot \mathbf {w} }\,dx\,dy.}
For convenience, denote {\displaystyle {\mathcal {R}}_{\alpha }[f](s)={\mathcal {R}}[f](\alpha ,s)}. The Fourier slice theorem then states:
{\displaystyle {\widehat {{\mathcal {R}}_{\alpha }[f]}}(\sigma )={\hat {f}}(\sigma \mathbf {n} (\alpha ))}
where
{\displaystyle \mathbf {n} (\alpha )=(\cos \alpha ,\sin \alpha ).}
Thus the two-dimensional Fourier transform of the initial function along a line at the inclination angle {\displaystyle \alpha } is the one-variable Fourier transform of the Radon transform (acquired at angle {\displaystyle \alpha }) of that function. This fact can be used to compute both the Radon transform and its inverse. The result can be generalized into n dimensions:
{\displaystyle {\hat {f}}(r\alpha )=\int _{\mathbb {R} }{\mathcal {R}}f(\alpha ,s)e^{-2\pi isr}\,ds.}
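In the discrete setting the slice theorem is exact: summing a sampled image along one axis and taking a 1-D DFT of the result gives the corresponding central row of the 2-D DFT. A minimal NumPy check (the array size and random test image are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))

proj = f.sum(axis=0)             # projection along y (angle alpha = 0)
lhs = np.fft.fft(proj)           # 1-D Fourier transform of the projection
rhs = np.fft.fft2(f)[0, :]       # the k_y = 0 slice of the 2-D Fourier transform

print(np.allclose(lhs, rhs))     # True
```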
== Dual transform ==
The dual Radon transform is a kind of adjoint to the Radon transform. Beginning with a function g on the space {\displaystyle \Sigma _{n}}, the dual Radon transform is the function {\displaystyle {\mathcal {R}}^{*}g} on {\displaystyle \mathbb {R} ^{n}} defined by:
{\displaystyle {\mathcal {R}}^{*}g(\mathbf {x} )=\int _{\mathbf {x} \in \xi }g(\xi )\,d\mu (\xi ).}
The integral here is taken over the set of all hyperplanes incident with the point {\displaystyle {\textbf {x}}\in \mathbb {R} ^{n}}, and the measure {\displaystyle d\mu } is the unique probability measure on the set {\displaystyle \{\xi |\mathbf {x} \in \xi \}} invariant under rotations about the point {\displaystyle \mathbf {x} }.
Concretely, for the two-dimensional Radon transform, the dual transform is given by:
{\displaystyle {\mathcal {R}}^{*}g(\mathbf {x} )={\frac {1}{2\pi }}\int _{\alpha =0}^{2\pi }g(\alpha ,\mathbf {n} (\alpha )\cdot \mathbf {x} )\,d\alpha .}
In the context of image processing, the dual transform is commonly called back-projection as it takes a function defined on each line in the plane and 'smears' or projects it back over the line to produce an image.
=== Intertwining property ===
Let {\displaystyle \Delta } denote the Laplacian on {\displaystyle \mathbb {R} ^{n}} defined by:
{\displaystyle \Delta ={\frac {\partial ^{2}}{\partial x_{1}^{2}}}+\cdots +{\frac {\partial ^{2}}{\partial x_{n}^{2}}}}
This is a natural rotationally invariant second-order differential operator. On {\displaystyle \Sigma _{n}}, the "radial" second derivative {\displaystyle Lf(\alpha ,s)\equiv {\frac {\partial ^{2}}{\partial s^{2}}}f(\alpha ,s)} is also rotationally invariant. The Radon transform and its dual are intertwining operators for these two differential operators in the sense that:
{\displaystyle {\mathcal {R}}(\Delta f)=L({\mathcal {R}}f),\quad {\mathcal {R}}^{*}(Lg)=\Delta ({\mathcal {R}}^{*}g).}
In analysing the solutions to the wave equation in multiple spatial dimensions, the intertwining property leads to the translational representation of Lax and Phillips. In imaging and numerical analysis this is exploited to reduce multi-dimensional problems into single-dimensional ones, as a dimensional splitting method.
== Reconstruction approaches ==
The process of reconstruction produces the image (or function {\displaystyle f} in the previous section) from its projection data. Reconstruction is an inverse problem.
=== Radon inversion formula ===
In the two-dimensional case, the most commonly used analytical formula to recover {\displaystyle f} from its Radon transform is the Filtered Back-projection Formula or Radon Inversion Formula:
{\displaystyle f(\mathbf {x} )=\int _{0}^{\pi }({\mathcal {R}}f(\cdot ,\theta )*h)(\left\langle \mathbf {x} ,\mathbf {n} _{\theta }\right\rangle )\,d\theta }
where {\displaystyle h} is such that {\displaystyle {\hat {h}}(k)=|k|}. The convolution kernel {\displaystyle h} is referred to as the ramp filter in some literature.
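The formula above can be turned into a short numerical procedure. The sketch below (Python/NumPy; the grid sizes and the FFT-based ramp filter are illustrative choices, not a production implementation) filters each projection with |k| in the Fourier domain and then back-projects. Feeding it the analytic sinogram of the unit disk, Rf(θ, s) = 2√(1 − s²), approximately recovers the disk's indicator function:

```python
import numpy as np

def filtered_back_projection(sinogram, thetas, s_grid, xy_grid):
    # Ramp-filter each projection Rf(theta, .) in the Fourier domain: multiply by |k|.
    k = np.fft.fftfreq(s_grid.size, d=s_grid[1] - s_grid[0])
    filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(k), axis=1).real
    # Back-project: accumulate (Rf(., theta) * h)(<x, n_theta>) over theta in [0, pi).
    X, Y = np.meshgrid(xy_grid, xy_grid)
    image = np.zeros_like(X)
    for theta, row in zip(thetas, filtered):
        s = X * np.cos(theta) + Y * np.sin(theta)   # <x, n_theta>
        image += np.interp(s, s_grid, row)
    return image * (np.pi / len(thetas))            # d(theta) weight for the integral

# Analytic sinogram of the unit disk: 2*sqrt(1 - s^2), independent of theta.
s_grid = np.linspace(-4.0, 4.0, 801)
thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
proj = 2.0 * np.sqrt(np.clip(1.0 - s_grid**2, 0.0, None))
sinogram = np.tile(proj, (thetas.size, 1))

xy_grid = np.linspace(-1.5, 1.5, 101)
rec = filtered_back_projection(sinogram, thetas, s_grid, xy_grid)
center = rec[50, 50]   # point (0, 0), inside the disk: should be near 1
corner = rec[0, 0]     # point (-1.5, -1.5), outside the disk: should be near 0
print(center, corner)
```

The discrete ramp filter is only an approximation of the continuous kernel h, so the reconstruction carries a small bias and some ringing near the disk edge.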
=== Ill-posedness ===
Intuitively, in the filtered back-projection formula, by analogy with differentiation, for which {\textstyle \left({\widehat {{\frac {d}{dx}}f}}\right)\!(k)=ik{\widehat {f}}(k)}, we see that the filter performs an operation similar to a derivative. Roughly speaking, then, the filter makes objects more singular. A quantitative statement of the ill-posedness of Radon inversion goes as follows:
{\displaystyle {\widehat {{\mathcal {R}}^{*}{\mathcal {R}}[g]}}(\mathbf {k} )={\frac {1}{\|\mathbf {k} \|}}{\hat {g}}(\mathbf {k} )}
where {\displaystyle {\mathcal {R}}^{*}} is the previously defined adjoint to the Radon transform. Thus for {\displaystyle g(\mathbf {x} )=e^{i\left\langle \mathbf {k} _{0},\mathbf {x} \right\rangle }}, we have:
{\displaystyle {\mathcal {R}}^{*}{\mathcal {R}}[g](\mathbf {x} )={\frac {1}{\|\mathbf {k_{0}} \|}}e^{i\left\langle \mathbf {k} _{0},\mathbf {x} \right\rangle }}
The complex exponential {\displaystyle e^{i\left\langle \mathbf {k} _{0},\mathbf {x} \right\rangle }} is thus an eigenfunction of {\displaystyle {\mathcal {R}}^{*}{\mathcal {R}}} with eigenvalue {\textstyle {\frac {1}{\|\mathbf {k} _{0}\|}}}. Thus the singular values of {\displaystyle {\mathcal {R}}} are {\textstyle {\frac {1}{\sqrt {\|\mathbf {k} \|}}}}. Since these singular values tend to {\displaystyle 0}, {\displaystyle {\mathcal {R}}^{-1}} is unbounded.
=== Iterative reconstruction methods ===
Compared with the filtered back-projection method, iterative reconstruction requires considerable computation time, which limits its practical use. However, because of the ill-posedness of Radon inversion, the filtered back-projection method may be infeasible in the presence of discontinuities or noise. Iterative reconstruction methods (e.g. iterative sparse asymptotic minimum variance) can provide metal artefact reduction, noise reduction, and dose reduction in the reconstructed result, and these advantages have attracted much research interest around the world.
== Inversion formulas ==
Explicit and computationally efficient inversion formulas for the Radon transform and its dual are available. The Radon transform in {\displaystyle n} dimensions can be inverted by the formula:
{\displaystyle c_{n}f=(-\Delta )^{(n-1)/2}R^{*}Rf\,}
where {\displaystyle c_{n}=(4\pi )^{(n-1)/2}{\frac {\Gamma (n/2)}{\Gamma (1/2)}}}, and the power of the Laplacian {\displaystyle (-\Delta )^{(n-1)/2}} is defined as a pseudo-differential operator if necessary by the Fourier transform:
{\displaystyle \left[{\mathcal {F}}(-\Delta )^{(n-1)/2}\varphi \right](\xi )=|2\pi \xi |^{n-1}({\mathcal {F}}\varphi )(\xi ).}
For computational purposes, the power of the Laplacian is commuted with the dual transform {\displaystyle R^{*}} to give:
{\displaystyle c_{n}f={\begin{cases}R^{*}{\frac {d^{n-1}}{ds^{n-1}}}Rf&n{\text{ odd}}\\R^{*}{\mathcal {H}}_{s}{\frac {d^{n-1}}{ds^{n-1}}}Rf&n{\text{ even}}\end{cases}}}
where {\displaystyle {\mathcal {H}}_{s}} is the Hilbert transform with respect to the s variable. In two dimensions, the operator {\displaystyle {\mathcal {H}}_{s}{\frac {d}{ds}}} appears in image processing as a ramp filter. One can prove directly from the Fourier slice theorem and change of variables for integration that for a compactly supported continuous function {\displaystyle f} of two variables:
{\displaystyle f={\frac {1}{2}}R^{*}{\mathcal {H}}_{s}{\frac {d}{ds}}Rf.}
Thus in an image processing context the original image {\displaystyle f} can be recovered from the 'sinogram' data {\displaystyle Rf} by applying a ramp filter (in the {\displaystyle s} variable) and then back-projecting.
variable) and then back-projecting. As the filtering step can be performed efficiently (for example using digital signal processing techniques) and the back projection step is simply an accumulation of values in the pixels of the image, this results in a highly efficient, and hence widely used, algorithm.
Explicitly, the inversion formula obtained by the latter method is:
{\displaystyle f(x)={\begin{cases}\displaystyle -\imath 2\pi (2\pi )^{-n}(-1)^{n/2}\int _{S^{n-1}}{\frac {\partial ^{n-1}}{2\partial s^{n-1}}}Rf(\alpha ,\alpha \cdot x)\,d\alpha &n{\text{ odd}}\\\displaystyle (2\pi )^{-n}(-1)^{n/2}\iint _{\mathbb {R} \times S^{n-1}}{\frac {\partial ^{n-1}}{q\partial s^{n-1}}}Rf(\alpha ,\alpha \cdot x+q)\,d\alpha \,dq&n{\text{ even}}\\\end{cases}}}
The dual transform can also be inverted by an analogous formula:
{\displaystyle c_{n}g=(-L)^{(n-1)/2}R(R^{*}g).\,}
== Radon transform in algebraic geometry ==
In algebraic geometry, a Radon transform (also known as the Brylinski–Radon transform) is constructed as follows.
Write {\displaystyle \mathbf {P} ^{d}\,{\stackrel {p_{1}}{\gets }}\,H\,{\stackrel {p_{2}}{\to }}\,\mathbf {P} ^{\vee ,d}} for the universal hyperplane, i.e., H consists of pairs (x, h) where x is a point in d-dimensional projective space {\displaystyle \mathbf {P} ^{d}} and h is a point in the dual projective space (in other words, x is a line through the origin in (d+1)-dimensional affine space, and h is a hyperplane in that space) such that x is contained in h.
Then the Brylinski–Radon transform is the functor between appropriate derived categories of étale sheaves
{\displaystyle \operatorname {Rad} :=Rp_{2,*}p_{1}^{*}:D(\mathbf {P} ^{d})\to D(\mathbf {P} ^{\vee ,d}).}
The main theorem about this transform is that it induces an equivalence of the categories of perverse sheaves on the projective space and its dual projective space, up to constant sheaves.
== See also ==
Periodogram
Matched filter
Deconvolution
X-ray transform
Funk transform
The Hough transform, when written in a continuous form, is very similar, if not equivalent, to the Radon transform.
Cauchy–Crofton theorem is a closely related formula for computing the length of curves in space.
Fast Fourier transform
== Notes ==
== References ==
== Further reading ==
Lokenath Debnath; Dambaru Bhatta (19 April 2016). Integral Transforms and Their Applications. CRC Press. ISBN 978-1-4200-1091-6.
Deans, Stanley R. (1983), The Radon Transform and Some of Its Applications, New York: John Wiley & Sons
Helgason, Sigurdur (2008), Geometric analysis on symmetric spaces, Mathematical Surveys and Monographs, vol. 39 (2nd ed.), Providence, R.I.: American Mathematical Society, doi:10.1090/surv/039, ISBN 978-0-8218-4530-1, MR 2463854
Herman, Gabor T. (2009), Fundamentals of Computerized Tomography: Image Reconstruction from Projections (2nd ed.), Springer, ISBN 978-1-85233-617-2
Minlos, R.A. (2001) [1994], "Radon transform", Encyclopedia of Mathematics, EMS Press
Natterer, Frank (June 2001), The Mathematics of Computerized Tomography, Classics in Applied Mathematics, vol. 32, Society for Industrial and Applied Mathematics, ISBN 0-89871-493-1
Natterer, Frank; Wübbeling, Frank (2001), Mathematical Methods in Image Reconstruction, Society for Industrial and Applied Mathematics, Bibcode:2001mmir.book.....N, ISBN 0-89871-472-9
== External links ==
Weisstein, Eric W. "Radon Transform". MathWorld.
Analytical projection (the Radon transform) (video). Part of the "Computed Tomography and the ASTRA Toolbox" course. University of Antwerp. September 10, 2015. | Wikipedia/Radon_transform |
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function {\displaystyle 1/(\pi t)}
(see § Definition). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see § Relationship with the Fourier transform). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions.
== Definition ==
The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/πt, known as the Cauchy kernel. Because 1/t is not integrable across t = 0, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by
{\displaystyle \operatorname {H} (u)(t)={\frac {1}{\pi }}\,\operatorname {p.v.} \int _{-\infty }^{+\infty }{\frac {u(\tau )}{t-\tau }}\,\mathrm {d} \tau ,}
provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/πt. Alternatively, by changing variables, the principal-value integral can be written explicitly as
{\displaystyle \operatorname {H} (u)(t)={\frac {2}{\pi }}\,\lim _{\varepsilon \to 0}\int _{\varepsilon }^{\infty }{\frac {u(t-\tau )-u(t+\tau )}{2\tau }}\,\mathrm {d} \tau .}
When the Hilbert transform is applied twice in succession to a function u, the result is
{\displaystyle \operatorname {H} {\bigl (}\operatorname {H} (u){\bigr )}(t)=-u(t),}
provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is {\displaystyle -\operatorname {H} }. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see § Relationship with the Fourier transform below).
For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if f(z) is analytic in the upper half complex plane {z : Im{z} > 0}, and u(t) = Re{f (t + 0·i)}, then Im{f(t + 0·i)} = H(u)(t) up to an additive constant, provided this Hilbert transform exists.
=== Notation ===
In signal processing the Hilbert transform of u(t) is commonly denoted by {\displaystyle {\hat {u}}(t)}. However, in mathematics, this notation is already extensively used to denote the Fourier transform of u(t). Occasionally, the Hilbert transform may be denoted by {\displaystyle {\tilde {u}}(t)}. Furthermore, many sources define the Hilbert transform as the negative of the one defined here.
== History ==
The Hilbert transform arose in Hilbert's 1905 work on a problem Riemann posed concerning analytic functions, which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle. Some of his earlier work related to the discrete Hilbert transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation. Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case. These results were restricted to the spaces L2 and ℓ2. In 1928, Marcel Riesz proved that the Hilbert transform can be defined for u in {\displaystyle L^{p}(\mathbb {R} )} (Lp space) for 1 < p < ∞, that the Hilbert transform is a bounded operator on {\displaystyle L^{p}(\mathbb {R} )} for 1 < p < ∞, and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform. The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals. Their investigations have played a fundamental role in modern harmonic analysis. Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms, are still active areas of research today.
== Relationship with the Fourier transform ==
The Hilbert transform is a multiplier operator. The multiplier of H is σH(ω) = −i sgn(ω), where sgn is the signum function. Therefore:
{\displaystyle {\mathcal {F}}{\bigl (}\operatorname {H} (u){\bigr )}(\omega )=-i\operatorname {sgn}(\omega )\cdot {\mathcal {F}}(u)(\omega ),}
where {\displaystyle {\mathcal {F}}} denotes the Fourier transform. Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of {\displaystyle {\mathcal {F}}}.
By Euler's formula,
{\displaystyle \sigma _{\operatorname {H} }(\omega )={\begin{cases}~~i=e^{+i\pi /2}&{\text{if }}\omega <0\\~~0&{\text{if }}\omega =0\\-i=e^{-i\pi /2}&{\text{if }}\omega >0\end{cases}}}
Therefore, H(u)(t) has the effect of shifting the phase of the negative frequency components of u(t) by +90° (π⁄2 radians) and the phase of the positive frequency components by −90°, and i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation (i.e., a multiplication by −1).
When the Hilbert transform is applied twice, the phase of the negative and positive frequency components of u(t) are respectively shifted by +180° and −180°, which are equivalent amounts. The signal is negated; i.e., H(H(u)) = −u, because
{\displaystyle \left(\sigma _{\operatorname {H} }(\omega )\right)^{2}=e^{\pm i\pi }=-1\quad {\text{for }}\omega \neq 0.}
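These multiplier identities are easy to check numerically. The sketch below (Python/NumPy; the FFT-based discrete transform is a standard but approximate stand-in for the continuous one, and is exact here because the test signal is a pure tone on a periodic grid) applies −i·sgn(ω) in the frequency domain:

```python
import numpy as np

def hilbert_fft(u):
    # Discrete Hilbert transform: apply the multiplier -i*sgn(omega) in the DFT domain.
    w = np.fft.fftfreq(u.size)
    return np.fft.ifft(-1j * np.sign(w) * np.fft.fft(u)).real

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
u = np.cos(2 * np.pi * 8 * t)

v = hilbert_fft(u)
print(np.allclose(v, np.sin(2 * np.pi * 8 * t)))   # True: H(cos) = sin
print(np.allclose(hilbert_fft(v), -u))             # True: H(H(u)) = -u
```

The cosine's positive-frequency component is shifted by −90° and its negative-frequency component by +90°, turning it into a sine, and a second application negates the signal, as the identity (σH(ω))² = −1 predicts.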
== Table of selected Hilbert transforms ==
In the following table, the frequency parameter {\displaystyle \omega } is real.
Notes
An extensive table of Hilbert transforms is available.
Note that the Hilbert transform of a constant is zero.
== Domain of definition ==
It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense. However, the Hilbert transform is well-defined for a broad class of functions, namely those in {\displaystyle L^{p}(\mathbb {R} )} for 1 < p < ∞.
More precisely, if u is in {\displaystyle L^{p}(\mathbb {R} )} for 1 < p < ∞, then the limit defining the improper integral
{\displaystyle \operatorname {H} (u)(t)={\frac {2}{\pi }}\lim _{\varepsilon \to 0}\int _{\varepsilon }^{\infty }{\frac {u(t-\tau )-u(t+\tau )}{2\tau }}\,d\tau }
exists for almost every t. The limit function is also in {\displaystyle L^{p}(\mathbb {R} )} and is in fact the limit in the mean of the improper integral as well. That is,
{\displaystyle {\frac {2}{\pi }}\int _{\varepsilon }^{\infty }{\frac {u(t-\tau )-u(t+\tau )}{2\tau }}\,\mathrm {d} \tau \to \operatorname {H} (u)(t)}
as ε → 0 in the Lp norm, as well as pointwise almost everywhere, by the Titchmarsh theorem.
In the case p = 1, the Hilbert transform still converges pointwise almost everywhere, but may itself fail to be integrable, even locally. In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an L1 function does converge, however, in L1-weak, and the Hilbert transform is a bounded operator from L1 to L1,w. (In particular, since the Hilbert transform is also a multiplier operator on L2, Marcinkiewicz interpolation and a duality argument furnish an alternative proof that H is bounded on Lp.)
== Properties ==
=== Boundedness ===
If 1 < p < ∞, then the Hilbert transform on {\displaystyle L^{p}(\mathbb {R} )} is a bounded linear operator, meaning that there exists a constant Cp such that
{\displaystyle \left\|\operatorname {H} u\right\|_{p}\leq C_{p}\left\|u\right\|_{p}}
for all {\displaystyle u\in L^{p}(\mathbb {R} )}.
The best constant {\displaystyle C_{p}} is given by
{\displaystyle C_{p}={\begin{cases}\tan {\frac {\pi }{2p}}&{\text{if}}~1<p\leq 2\\[4pt]\cot {\frac {\pi }{2p}}&{\text{if}}~2<p<\infty \end{cases}}}
An easy way to find the best {\displaystyle C_{p}} for {\displaystyle p} a power of 2 is through the so-called Cotlar's identity,
{\displaystyle (\operatorname {H} f)^{2}=f^{2}+2\operatorname {H} (f\operatorname {H} f)}
valid for all real-valued f. The same best constants hold for the periodic Hilbert transform.
The boundedness of the Hilbert transform implies the {\displaystyle L^{p}(\mathbb {R} )} convergence of the symmetric partial sum operator
{\displaystyle S_{R}f=\int _{-R}^{R}{\hat {f}}(\xi )e^{2\pi ix\xi }\,\mathrm {d} \xi }
to f in {\displaystyle L^{p}(\mathbb {R} )}.
=== Anti-self adjointness ===
The Hilbert transform is an anti-self-adjoint operator relative to the duality pairing between {\displaystyle L^{p}(\mathbb {R} )} and the dual space {\displaystyle L^{q}(\mathbb {R} )}, where p and q are Hölder conjugates and 1 < p, q < ∞. Symbolically,
{\displaystyle \langle \operatorname {H} u,v\rangle =\langle u,-\operatorname {H} v\rangle }
for {\displaystyle u\in L^{p}(\mathbb {R} )} and {\displaystyle v\in L^{q}(\mathbb {R} )}.
=== Inverse transform ===
The Hilbert transform is an anti-involution, meaning that
{\displaystyle \operatorname {H} {\bigl (}\operatorname {H} \left(u\right){\bigr )}=-u}
provided each transform is well-defined. Since H preserves the space {\displaystyle L^{p}(\mathbb {R} )}, this implies in particular that the Hilbert transform is invertible on {\displaystyle L^{p}(\mathbb {R} )}, and that
{\displaystyle \operatorname {H} ^{-1}=-\operatorname {H} }
=== Complex structure ===
Because H2 = −I ("I" is the identity operator) on the real Banach space of real-valued functions in {\displaystyle L^{p}(\mathbb {R} )}, the Hilbert transform defines a linear complex structure on this Banach space. In particular, when p = 2, the Hilbert transform gives the Hilbert space of real-valued functions in {\displaystyle L^{2}(\mathbb {R} )} the structure of a complex Hilbert space.
The (complex) eigenstates of the Hilbert transform admit representations as holomorphic functions in the upper and lower half-planes in the Hardy space H2 by the Paley–Wiener theorem.
=== Differentiation ===
Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute:
{\displaystyle \operatorname {H} \left({\frac {\mathrm {d} u}{\mathrm {d} t}}\right)={\frac {\mathrm {d} }{\mathrm {d} t}}\operatorname {H} (u)}
Iterating this identity,
{\displaystyle \operatorname {H} \left({\frac {\mathrm {d} ^{k}u}{\mathrm {d} t^{k}}}\right)={\frac {\mathrm {d} ^{k}}{\mathrm {d} t^{k}}}\operatorname {H} (u)}
This is rigorously true as stated provided u and its first k derivatives belong to L^p(ℝ). One can check this easily in the frequency domain, where differentiation becomes multiplication by iω, which commutes with the Fourier multiplier −i sgn(ω) of the Hilbert transform.
=== Convolutions ===
The Hilbert transform can formally be realized as a convolution with the tempered distribution
{\displaystyle h(t)=\operatorname {p.v.} {\frac {1}{\pi \,t}}}
Thus formally,
{\displaystyle \operatorname {H} (u)=h*u}
However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions a fortiori) are dense in Lp. Alternatively, one may use the fact that h(t) is the distributional derivative of the function log|t|/π; to wit
{\displaystyle \operatorname {H} (u)(t)={\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {1}{\pi }}\left(u*\log {\bigl |}\cdot {\bigr |}\right)(t)\right)}
For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform applied to either one of the factors:
{\displaystyle \operatorname {H} (u*v)=\operatorname {H} (u)*v=u*\operatorname {H} (v)}
This is rigorously true if u and v are compactly supported distributions since, in that case,
{\displaystyle h*(u*v)=(h*u)*v=u*(h*v)}
By passing to an appropriate limit, it is thus also true if u ∈ Lp and v ∈ Lq provided that
{\displaystyle 1<{\frac {1}{p}}+{\frac {1}{q}}}
from a theorem due to Titchmarsh.
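A discrete, circular analogue of this identity is easy to verify with the FFT (a numerical sketch; the periodic setting sidesteps the integrability conditions above, since every multiplier argument becomes exact):

```python
import numpy as np

n = 512
rng = np.random.default_rng(0)
u = rng.standard_normal(n)
v = rng.standard_normal(n)

w = np.fft.fftfreq(n)
mult = -1j * np.sign(w)                       # Hilbert multiplier -i*sgn(omega)

def H(x):
    """Discrete circular Hilbert transform."""
    return np.real(np.fft.ifft(np.fft.fft(x) * mult))

def circ_conv(a, b):
    """Circular convolution via the convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

lhs = H(circ_conv(u, v))
print(np.allclose(lhs, circ_conv(H(u), v)))   # True
print(np.allclose(lhs, circ_conv(u, H(v))))   # True
```

In the frequency domain all three expressions reduce to the same product −i sgn(ω)·U(ω)·V(ω), which is why the multiplier can be attached to either factor.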
=== Invariance ===
The Hilbert transform has the following invariance properties on L²(ℝ).
It commutes with translations. That is, it commutes with the operators Ta f(x) = f(x + a) for all a in ℝ.
It commutes with positive dilations. That is, it commutes with the operators Mλ f(x) = f(λx) for all λ > 0.
It anticommutes with the reflection R f (x) = f (−x).
Up to a multiplicative constant, the Hilbert transform is the only bounded operator on L2 with these properties.
In fact there is a wider set of operators that commute with the Hilbert transform. The group SL(2, ℝ) acts by unitary operators Ug on the space L²(ℝ) by the formula
{\displaystyle \operatorname {U} _{g}^{-1}f(x)={\frac {1}{cx+d}}\,f\left({\frac {ax+b}{cx+d}}\right)\,,\qquad g={\begin{bmatrix}a&b\\c&d\end{bmatrix}}~,\qquad {\text{ for }}~ad-bc=\pm 1.}
This unitary representation is an example of a principal series representation of SL(2, ℝ). In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, Hardy space H²(ℝ) and its conjugate. These are the spaces of L² boundary values of holomorphic functions on the upper and lower halfplanes.
H²(ℝ) and its conjugate consist of exactly those L² functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to H = −i(2P − I), with P being the orthogonal projection from L²(ℝ) onto H²(ℝ) and I the identity operator, it follows that H²(ℝ) and its orthogonal complement are eigenspaces of H for the eigenvalues ±i. In other words, H commutes with the operators Ug. The restrictions of the operators Ug to H²(ℝ) and its conjugate give irreducible representations of SL(2, ℝ) – the so-called limit of discrete series representations.
== Extending the domain of definition ==
=== Hilbert transform of distributions ===
It is further possible to extend the Hilbert transform to certain spaces of distributions (Pandey 1996, Chapter 3). Since the Hilbert transform commutes with differentiation, and is a bounded operator on Lp, H restricts to give a continuous transform on the inverse limit of Sobolev spaces:
{\displaystyle {\mathcal {D}}_{L^{p}}={\underset {n\to \infty }{\underset {\longleftarrow }{\lim }}}W^{n,p}(\mathbb {R} )}
The Hilbert transform can then be defined on the dual space of D_{L^p}, denoted D′_{L^p}, consisting of L^p distributions. This is accomplished by the duality pairing: for u ∈ D′_{L^p}, define
{\displaystyle \operatorname {H} (u)\in {\mathcal {D}}'_{L^{p}},\quad \langle \operatorname {H} u,v\rangle \ \triangleq \ \langle u,-\operatorname {H} v\rangle ,\quad {\text{for all}}\ v\in {\mathcal {D}}_{L^{p}}.}
It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand and Shilov, but considerably more care is needed because of the singularity in the integral.
=== Hilbert transform of bounded functions ===
The Hilbert transform can be defined for functions in L^∞(ℝ) as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps L^∞(ℝ) to the Banach space of bounded mean oscillation (BMO) classes.
Interpreted naïvely, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with u = sgn(x), the integral defining H(u) diverges almost everywhere to ±∞. To alleviate such difficulties, the Hilbert transform of an L∞ function is therefore defined by the following regularized form of the integral
{\displaystyle \operatorname {H} (u)(t)=\operatorname {p.v.} \int _{-\infty }^{\infty }u(\tau )\left\{h(t-\tau )-h_{0}(-\tau )\right\}\,\mathrm {d} \tau }
where as above h(x) = 1/(πx) and
{\displaystyle h_{0}(x)={\begin{cases}0&{\text{if}}~|x|<1\\{\frac {1}{\pi \,x}}&{\text{if}}~|x|\geq 1\end{cases}}}
The modified transform H agrees with the original transform up to an additive constant on functions of compact support from a general result by Calderón and Zygmund. Furthermore, the resulting integral converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation.
A deep result of Fefferman's work is that a function is of bounded mean oscillation if and only if it has the form f + H(g) for some f, g ∈ L^∞(ℝ).
== Conjugate functions ==
The Hilbert transform can be understood in terms of a pair of functions f(x) and g(x) such that the function
{\displaystyle F(x)=f(x)+i\,g(x)}
is the boundary value of a holomorphic function F(z) in the upper half-plane. Under these circumstances, if f and g are sufficiently integrable, then one is the Hilbert transform of the other.
Suppose that f ∈ L^p(ℝ). Then, by the theory of the Poisson integral, f admits a unique harmonic extension into the upper half-plane, and this extension is given by
{\displaystyle u(x+iy)=u(x,y)={\frac {1}{\pi }}\int _{-\infty }^{\infty }f(s)\;{\frac {y}{(x-s)^{2}+y^{2}}}\;\mathrm {d} s}
which is the convolution of f with the Poisson kernel
{\displaystyle P(x,y)={\frac {y}{\pi \,\left(x^{2}+y^{2}\right)}}}
Furthermore, there is a unique harmonic function v defined in the upper half-plane such that F(z) = u(z) + i v(z) is holomorphic and
{\displaystyle \lim _{y\to \infty }v\,(x+i\,y)=0}
This harmonic function is obtained from f by taking a convolution with the conjugate Poisson kernel
{\displaystyle Q(x,y)={\frac {x}{\pi \,\left(x^{2}+y^{2}\right)}}.}
Thus
{\displaystyle v(x,y)={\frac {1}{\pi }}\int _{-\infty }^{\infty }f(s)\;{\frac {x-s}{\,(x-s)^{2}+y^{2}\,}}\;\mathrm {d} s.}
Indeed, the real and imaginary parts of the Cauchy kernel are
{\displaystyle {\frac {i}{\pi \,z}}=P(x,y)+i\,Q(x,y)}
so that F = u + i v is holomorphic by Cauchy's integral formula.
The function v obtained from u in this way is called the harmonic conjugate of u. The (non-tangential) boundary limit of v(x,y) as y → 0 is the Hilbert transform of f. Thus, succinctly,
{\displaystyle \operatorname {H} (f)=\lim _{y\to 0}Q(-,y)\star f}
=== Titchmarsh's theorem ===
Titchmarsh's theorem (named for E. C. Titchmarsh who included it in his 1937 work) makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform. It gives necessary and sufficient conditions for a complex-valued square-integrable function F(x) on the real line to be the boundary value of a function in the Hardy space H2(U) of holomorphic functions in the upper half-plane U.
The theorem states that the following conditions for a complex-valued square-integrable function F : ℝ → ℂ are equivalent:
F(x) is the limit as z → x of a holomorphic function F(z) in the upper half-plane such that
{\displaystyle \int _{-\infty }^{\infty }|F(x+i\,y)|^{2}\;\mathrm {d} x<K}
The real and imaginary parts of F(x) are Hilbert transforms of each other.
The Fourier transform 𝓕(F)(x) vanishes for x < 0.
A weaker result is true for functions of class Lp for p > 1. Specifically, if F(z) is a holomorphic function such that
{\displaystyle \int _{-\infty }^{\infty }|F(x+i\,y)|^{p}\;\mathrm {d} x<K}
for all y, then there is a complex-valued function F(x) in L^p(ℝ) such that F(x + iy) → F(x) in the L^p norm as y → 0 (as well as holding pointwise almost everywhere). Furthermore,
{\displaystyle F(x)=f(x)+i\,g(x)}
where f is a real-valued function in L^p(ℝ) and g is the Hilbert transform (of class L^p) of f.
This is not true in the case p = 1. In fact, the Hilbert transform of an L1 function f need not converge in the mean to another L1 function. Nevertheless, the Hilbert transform of f does converge almost everywhere to a finite function g such that
{\displaystyle \int _{-\infty }^{\infty }{\frac {|g(x)|^{p}}{1+x^{2}}}\;\mathrm {d} x<\infty }
This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc. Although usually called Titchmarsh's theorem, the result aggregates much work of others, including Hardy, Paley and Wiener (see Paley–Wiener theorem), as well as work by Riesz, Hille, and Tamarkin.
=== Riemann–Hilbert problem ===
One form of the Riemann–Hilbert problem seeks to identify pairs of functions F+ and F− such that F+ is holomorphic on the upper half-plane and F− is holomorphic on the lower half-plane, such that for x along the real axis,
{\displaystyle F_{+}(x)-F_{-}(x)=f(x)}
where f(x) is some given real-valued function of x ∈ ℝ. The left-hand side of this equation may be understood either as the difference of the limits of F± from the appropriate half-planes, or as a hyperfunction distribution. Two functions of this form are a solution of the Riemann–Hilbert problem.
Formally, if F± solve the Riemann–Hilbert problem
{\displaystyle f(x)=F_{+}(x)-F_{-}(x)}
then the Hilbert transform of f(x) is given by
{\displaystyle H(f)(x)=-i{\bigl (}F_{+}(x)+F_{-}(x){\bigr )}.}
== Hilbert transform on the circle ==
For a periodic function f, the circular Hilbert transform is defined as:
{\displaystyle {\tilde {f}}(x)\triangleq {\frac {1}{2\pi }}\operatorname {p.v.} \int _{0}^{2\pi }f(t)\,\cot \left({\frac {x-t}{2}}\right)\,\mathrm {d} t}
The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. The kernel cot((x − t)/2) is known as the Hilbert kernel, since it was in this form that the Hilbert transform was originally studied.
The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel 1⁄x periodic. More precisely, for x ≠ 0
{\displaystyle {\frac {1}{\,2\,}}\cot \left({\frac {x}{2}}\right)={\frac {1}{x}}+\sum _{n=1}^{\infty }\left({\frac {1}{x+2n\pi }}+{\frac {1}{\,x-2n\pi \,}}\right)}
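This partial-fraction expansion can be checked numerically by truncating the series at a large cutoff (a sketch; the choice x = 1.3 and the cutoff are arbitrary, and the paired terms decay like 1/n², so the truncation error is O(1/N)):

```python
import numpy as np

x = 1.3                                  # any x that is not a multiple of 2*pi
lhs = 0.5 / np.tan(x / 2)                # (1/2) cot(x/2)

n = np.arange(1, 200001)                 # truncate the series at N = 200000
rhs = 1 / x + np.sum(1 / (x + 2 * np.pi * n) + 1 / (x - 2 * np.pi * n))

print(abs(lhs - rhs))                    # small truncation error
```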
Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform from this correspondence.
Another more direct connection is provided by the Cayley transform C(x) = (x – i) / (x + i), which carries the real line onto the circle and the upper half plane onto the unit disk. It induces a unitary map
{\displaystyle U\,f(x)={\frac {1}{(x+i)\,{\sqrt {\pi }}}}\,f\left(C\left(x\right)\right)}
of L²(T) onto L²(ℝ). The operator U carries the Hardy space H²(T) onto the Hardy space H²(ℝ).
== Hilbert transform in signal processing ==
=== Bedrosian's theorem ===
Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or
{\displaystyle \operatorname {H} \left(f_{\text{LP}}(t)\cdot f_{\text{HP}}(t)\right)=f_{\text{LP}}(t)\cdot \operatorname {H} \left(f_{\text{HP}}(t)\right),}
where fLP and fHP are the low- and high-pass signals respectively. A category of communication signals to which this applies is called the narrowband signal model. A member of that category is amplitude modulation of a high-frequency sinusoidal "carrier":
{\displaystyle u(t)=u_{m}(t)\cdot \cos(\omega t+\varphi ),}
where um(t) is the narrow bandwidth "message" waveform, such as voice or music. Then by Bedrosian's theorem:
{\displaystyle \operatorname {H} (u)(t)={\begin{cases}+u_{m}(t)\cdot \sin(\omega t+\varphi )&{\text{if }}\omega >0\\-u_{m}(t)\cdot \sin(\omega t+\varphi )&{\text{if }}\omega <0\end{cases}}}
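The theorem can be illustrated numerically with an FFT-based discrete Hilbert transform (a sketch; the 2 Hz message and 100 Hz carrier are arbitrary choices whose integer frequencies make the one-second grid exactly periodic):

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)                    # one-second grid, integer-Hz tones
u_m = np.cos(2 * np.pi * 2 * t)                # slow "message" (2 Hz)
u = u_m * np.cos(2 * np.pi * 100 * t)          # modulated carrier (100 Hz)

w = np.fft.fftfreq(len(u), 1 / fs)
Hu = np.real(np.fft.ifft(np.fft.fft(u) * (-1j) * np.sign(w)))

predicted = u_m * np.sin(2 * np.pi * 100 * t)  # Bedrosian's prediction
print(np.max(np.abs(Hu - predicted)))          # ~0 (roundoff): spectra do not overlap
```

Because the message occupies 98–102 Hz after modulation, well away from 0 Hz, the low-pass factor passes through the transform untouched, exactly as the theorem states.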
=== Analytic representation ===
A specific type of conjugate function is:
{\displaystyle u_{a}(t)\triangleq u(t)+i\cdot H(u)(t),}
known as the analytic representation of u(t).
The name reflects its mathematical tractability, due largely to Euler's formula. Applying Bedrosian's theorem to the narrowband model, the analytic representation is:
{\displaystyle u_{a}(t)=u_{m}(t)\cdot \left[\cos(\omega t+\varphi )+i\cdot \sin(\omega t+\varphi )\right]=u_{m}(t)\cdot e^{i(\omega t+\varphi )},\qquad \omega >0.}
A Fourier transform property indicates that this complex heterodyne operation can shift all the negative frequency components of um(t) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms.
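The one-sided spectrum of the analytic representation can be demonstrated numerically (a sketch using an arbitrary two-tone test signal and an FFT-based discrete Hilbert transform):

```python
import numpy as np

n = 256
t = np.arange(n) / n
u = np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)  # test signal

w = np.fft.fftfreq(n)
Hu = np.real(np.fft.ifft(np.fft.fft(u) * (-1j) * np.sign(w)))     # discrete H(u)

ua = u + 1j * Hu                         # analytic representation u_a
Ua = np.fft.fft(ua)
neg = Ua[w < 0]                          # negative-frequency bins
print(np.max(np.abs(neg)) / np.max(np.abs(Ua)))  # ~0: one-sided spectrum
```

Adding i·H(u) doubles each positive-frequency component and cancels its negative-frequency mirror, which is the frequency-domain picture behind the analytic signal.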
=== Angle (phase/frequency) modulation ===
The form:
{\displaystyle u(t)=A\cdot \cos(\omega t+\varphi _{m}(t))}
is called angle modulation, which includes both phase modulation and frequency modulation. The instantaneous frequency is ω + φm′(t).
For sufficiently large ω, compared to φm′:
{\displaystyle \operatorname {H} (u)(t)\approx A\cdot \sin(\omega t+\varphi _{m}(t))}
and:
{\displaystyle u_{a}(t)\approx A\cdot e^{i(\omega t+\varphi _{m}(t))}.}
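A numerical check of this approximation (a sketch; the 100 Hz carrier, 2 Hz phase modulation, and modulation index 0.5 are arbitrary, chosen so that the carrier frequency greatly exceeds |φm′| and the grid is exactly periodic):

```python
import numpy as np

fs = 1000
t = np.arange(0, 1, 1 / fs)
phi_m = 0.5 * np.sin(2 * np.pi * 2 * t)        # slow phase modulation
omega = 2 * np.pi * 100                        # fast carrier
u = np.cos(omega * t + phi_m)

w = np.fft.fftfreq(len(u), 1 / fs)
Hu = np.real(np.fft.ifft(np.fft.fft(u) * (-1j) * np.sign(w)))

approx = np.sin(omega * t + phi_m)
print(np.max(np.abs(Hu - approx)))             # tiny, since omega >> |phi_m'|
```

Here the Bessel-function sidebands of the angle-modulated signal all sit at positive frequencies, so the Hilbert transform simply shifts the carrier phase by −90°.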
=== Single sideband modulation (SSB) ===
When um(t) in Eq.1 is also an analytic representation (of a message waveform), that is:
{\displaystyle u_{m}(t)=m(t)+i\cdot {\widehat {m}}(t)}
the result is single-sideband modulation:
{\displaystyle u_{a}(t)=(m(t)+i\cdot {\widehat {m}}(t))\cdot e^{i(\omega t+\varphi )}}
whose transmitted component is:
{\displaystyle {\begin{aligned}u(t)&=\operatorname {Re} \{u_{a}(t)\}\\&=m(t)\cdot \cos(\omega t+\varphi )-{\widehat {m}}(t)\cdot \sin(\omega t+\varphi )\end{aligned}}}
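That the transmitted component occupies only the upper sideband can be verified numerically (a sketch; the two-tone message, the 500 Hz carrier, and the 1 Hz-per-bin grid are arbitrary choices):

```python
import numpy as np

n, fs = 4096, 4096                      # one-second grid, 1 Hz per FFT bin
t = np.arange(n) / fs
m = np.cos(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)  # message

w = np.fft.fftfreq(n, 1 / fs)
m_hat = np.real(np.fft.ifft(np.fft.fft(m) * (-1j) * np.sign(w)))   # H(m)

fc = 500.0                              # carrier frequency (Hz)
u = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)

U = np.abs(np.fft.fft(u))
print(U[530] > 1000, U[570] > 500)      # upper sideband (fc + 30, fc + 70) present
print(U[470] < 1e-6, U[430] < 1e-6)     # lower sideband (fc - 30, fc - 70) suppressed
```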
=== Causality ===
The function h(t) = 1/(πt) presents two causality-based challenges to practical implementation in a convolution (in addition to its undefined value at 0):
Its duration is infinite (technically infinite support). Finite-length windowing reduces the effective frequency range of the transform; shorter windows result in greater losses at low and high frequencies. See also quadrature filter.
It is a non-causal filter. So a delayed version, h(t − τ), is required. The corresponding output is subsequently delayed by τ. When creating the imaginary part of an analytic signal, the source (real part) must also be delayed by τ.
== Discrete Hilbert transform ==
For a discrete function u[n], with discrete-time Fourier transform (DTFT) U(ω) and discrete Hilbert transform û[n], the DTFT of û[n] in the region −π < ω < π is given by:
{\displaystyle \operatorname {DTFT} ({\widehat {u}})=U(\omega )\cdot (-i\cdot \operatorname {sgn}(\omega )).}
The inverse DTFT, using the convolution theorem, is:
{\displaystyle {\begin{aligned}{\widehat {u}}[n]&={\scriptstyle \mathrm {DTFT} ^{-1}}(U(\omega ))\ *\ {\scriptstyle \mathrm {DTFT} ^{-1}}(-i\cdot \operatorname {sgn}(\omega ))\\&=u[n]\ *\ {\frac {1}{2\pi }}\int _{-\pi }^{\pi }(-i\cdot \operatorname {sgn}(\omega ))\cdot e^{i\omega n}\,\mathrm {d} \omega \\&=u[n]\ *\ \underbrace {{\frac {1}{2\pi }}\left[\int _{-\pi }^{0}i\cdot e^{i\omega n}\,\mathrm {d} \omega -\int _{0}^{\pi }i\cdot e^{i\omega n}\,\mathrm {d} \omega \right]} _{h[n]},\end{aligned}}}
where
{\displaystyle h[n]\ \triangleq \ {\begin{cases}0,&{\text{if }}n{\text{ even}}\\{\frac {2}{\pi n}}&{\text{if }}n{\text{ odd}}\end{cases}}}
which is an infinite impulse response (IIR).
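A truncated (FIR) version of this kernel already approximates the transform well in mid-band, at the cost of some ripple (a sketch; the truncation length, signal length, and test frequency are arbitrary, and no tapering window is applied here):

```python
import numpy as np

def h(n):
    """Ideal discrete Hilbert kernel: 0 for even n, 2/(pi*n) for odd n."""
    n = np.asarray(n)
    safe = np.where(n == 0, 1, n)              # avoid 0-division; n = 0 is even anyway
    return np.where(n % 2 == 0, 0.0, 2.0 / (np.pi * safe))

M = 200                                        # truncate to an FIR of 2M+1 taps
taps = h(np.arange(-M, M + 1))

t = np.arange(2048)
u = np.cos(2 * np.pi * 0.05 * t)
v = np.convolve(u, taps, mode="same")          # FIR approximation of H(u)

err = np.abs(v[M:-M] - np.sin(2 * np.pi * 0.05 * t)[M:-M])
print(err.max())                               # order-of-a-percent ripple (Gibbs)
```

The residual error is the Gibbs ripple of the abruptly truncated kernel; the windowing discussed below reduces it further.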
Practical considerations
Method 1: Direct convolution of streaming u[n] data with an FIR approximation of h[n], which we will designate by h̃[n].
Examples of truncated h[n] are shown in figures 1 and 2. Fig 1 has an odd number of anti-symmetric coefficients and is called Type III. This type inherently exhibits responses of zero magnitude at frequencies 0 and Nyquist, resulting in a bandpass filter shape. A Type IV design (even number of anti-symmetric coefficients) is shown in Fig 2. It has a highpass frequency response. Type III is the usual choice, for these reasons:
A typical (i.e. properly filtered and sampled) u[n] sequence has no useful components at the Nyquist frequency.
The Type IV impulse response requires a 1⁄2-sample shift in the h[n] sequence. That causes the zero-valued coefficients to become non-zero, as seen in Figure 2. So a Type III design is potentially twice as efficient as Type IV.
The group delay of a Type III design is an integer number of samples, which facilitates aligning û[n] with u[n] to create an analytic signal. The group delay of Type IV is halfway between two samples.
The abrupt truncation of h[n] creates a rippling (Gibbs effect) of the flat frequency response. That can be mitigated by use of a window function to taper h̃[n] to zero.
Method 2: Piecewise convolution. Direct convolution is computationally much more intensive than methods like overlap-save, which give access to the efficiencies of the fast Fourier transform via the convolution theorem. Specifically, the discrete Fourier transform (DFT) of a segment of u[n] is multiplied pointwise with a DFT of the h̃[n] sequence. An inverse DFT is done on the product, and the transient artifacts at the leading and trailing edges of the segment are discarded. Overlapping input segments prevent gaps in the output stream. An equivalent time-domain description is that segments of length N (an arbitrary parameter) are convolved with the periodic function:
{\displaystyle {\tilde {h}}_{N}[n]\ \triangleq \sum _{m=-\infty }^{\infty }{\tilde {h}}[n-mN].}
When the duration of non-zero values of h̃[n] is M < N, the output sequence includes N − M + 1 samples of û. M − 1 outputs are discarded from each block of N, and the input blocks are overlapped by that amount to prevent gaps.
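The bookkeeping of method 2 can be sketched as an overlap-save loop (a minimal illustration; the FIR length M = 31, block size N = 128, Hamming taper, and test tone are all arbitrary choices, not values from the text):

```python
import numpy as np

M = 31                                         # Type III FIR length (odd)
k = np.arange(M) - M // 2
safe = np.where(k == 0, 1, k)
taps = np.where(k % 2 == 0, 0.0, 2.0 / (np.pi * safe)) * np.hamming(M)

N = 128                                        # DFT block size (N > M)
Hfir = np.fft.fft(taps, N)                     # zero-padded DFT of the FIR

x = np.cos(2 * np.pi * 0.1 * np.arange(1000))
padded = np.concatenate([np.zeros(M - 1), x])  # prime the first overlap
step = N - M + 1                               # valid samples per block
out = []
for start in range(0, len(x), step):
    block = padded[start:start + N]
    block = np.pad(block, (0, N - len(block))) # zero-pad the final block
    y = np.real(np.fft.ifft(np.fft.fft(block) * Hfir))
    out.append(y[M - 1:])                      # discard the M-1 transient samples
y = np.concatenate(out)[:len(x)]

# The pieced-together result equals one direct convolution over the whole input.
print(np.allclose(y, np.convolve(x, taps)[:len(x)]))   # True
```

Discarding exactly M − 1 samples per block and overlapping the input by the same amount is what makes the circular (DFT) convolutions stitch together into a gap-free linear convolution.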
Method 3: Same as method 2, except the DFT of h̃[n] is replaced by samples of the −i sgn(ω) distribution (whose real and imaginary components are all just 0 or ±1). That convolves u[n] with a periodic summation:
{\displaystyle h_{N}[n]\ \triangleq \sum _{m=-\infty }^{\infty }h[n-mN],}
for some arbitrary parameter N.
h[n] is not an FIR, so the edge effects extend throughout the entire transform. Deciding what to delete and the corresponding amount of overlap is an application-dependent design issue.
Fig 3 depicts the difference between methods 2 and 3. Only half of the antisymmetric impulse response is shown, and only the non-zero coefficients. The blue graph corresponds to method 2, where h[n] is truncated by a rectangular window function rather than tapered. It is generated by the MATLAB function hilb(65). Its transient effects are exactly known and readily discarded. The frequency response, which is determined by the function argument, is the only application-dependent design issue.
The red graph is h₅₁₂[n], corresponding to method 3. It is the inverse DFT of the −i sgn(ω) distribution. Specifically, it is the function that is convolved with a segment of u[n] by the MATLAB function hilbert(u,512). The real part of the output sequence is the original input sequence, so the complex output is an analytic representation of u[n].
When the input is a segment of a pure cosine, the resulting convolution for two different values of N is depicted in Fig 4 (red and blue plots). Edge effects prevent the result from being a pure sine function (green plot). Since h_N[n] is not an FIR sequence, the theoretical extent of the effects is the entire output sequence. But the differences from a sine function diminish with distance from the edges. Parameter N is the output sequence length. If it exceeds the length of the input sequence, the input is modified by appending zero-valued elements. In most cases, that reduces the magnitude of the edge distortions. But their duration is dominated by the inherent rise and fall times of the h[n] impulse response.
Fig 5 is an example of piecewise convolution, using both methods 2 (in blue) and 3 (red dots). A sine function is created by computing the discrete Hilbert transform of a cosine function, which was processed in four overlapping segments and pieced back together. As the FIR result (blue) shows, the distortions apparent in the IIR result (red) are not caused by the difference between h[n] and h_N[n] (green and red in Fig 3). The fact that h_N[n] is tapered (windowed) is actually helpful in this context. The real problem is that it is not windowed enough. Effectively, M = N, whereas the overlap-save method needs M < N.
== Number-theoretic Hilbert transform ==
The number-theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo an appropriate prime number. In this it follows the generalization of the discrete Fourier transform to number-theoretic transforms. The number-theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences.
== See also ==
Analytic signal
Harmonic conjugate
Hilbert spectroscopy
Hilbert transform in the complex plane
Hilbert–Huang transform
Kramers–Kronig relations
Riesz transform
Single-sideband modulation
Singular integral operators of convolution type
== Notes ==
== Page citations ==
== References ==
== Further reading ==
== External links ==
Derivation of the boundedness of the Hilbert transform
Mathworld Hilbert transform — Contains a table of transforms
Weisstein, Eric W. "Titchmarsh theorem". MathWorld.
"GS256 Lecture 3: Hilbert Transformation" (PDF). Archived from the original (PDF) on 2012-02-27. An entry-level introduction to the Hilbert transformation.
In mathematics, the Legendre transform is an integral transform named after the mathematician Adrien-Marie Legendre, which uses Legendre polynomials P_n(x) as kernels of the transform. The Legendre transform is a special case of the Jacobi transform.
The Legendre transform of a function f(x) is
{\displaystyle {\mathcal {J}}_{n}\{f(x)\}={\tilde {f}}(n)=\int _{-1}^{1}P_{n}(x)\ f(x)\ dx}
The inverse Legendre transform is given by
{\displaystyle {\mathcal {J}}_{n}^{-1}\{{\tilde {f}}(n)\}=f(x)=\sum _{n=0}^{\infty }{\frac {2n+1}{2}}{\tilde {f}}(n)P_{n}(x)}
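A numerical round trip illustrates the transform pair (a sketch using NumPy's Legendre utilities; the test function e^x, the 64-point quadrature, and the series cutoff N = 20 are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import legendre

x, wts = legendre.leggauss(64)         # Gauss-Legendre nodes/weights on [-1, 1]
f = np.exp(x)                          # test function

# Forward transform: f~(n) = integral of P_n(x) f(x) over [-1, 1]
N = 20
eye = np.eye(N + 1)                    # eye[n] = coefficients of P_n
ftilde = np.array([np.sum(wts * legendre.legval(x, eye[n]) * f)
                   for n in range(N + 1)])

# Inverse: f(x) = sum over n of (2n+1)/2 * f~(n) * P_n(x)
f_rec = legendre.legval(x, (2 * np.arange(N + 1) + 1) / 2 * ftilde)
print(np.max(np.abs(f_rec - f)))       # tiny: the series converges rapidly for e^x
```

The factor (2n+1)/2 is the reciprocal of the L² norm squared of P_n on [−1, 1], which is what makes the finite sum reproduce f.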
== Associated Legendre transform ==
The associated Legendre transform is defined as
{\displaystyle {\mathcal {J}}_{n,m}\{f(x)\}={\tilde {f}}(n,m)=\int _{-1}^{1}(1-x^{2})^{-m/2}P_{n}^{m}(x)\ f(x)\ dx}
The inverse associated Legendre transform is given by
{\displaystyle {\mathcal {J}}_{n,m}^{-1}\{{\tilde {f}}(n,m)\}=f(x)=\sum _{n=0}^{\infty }{\frac {2n+1}{2}}{\frac {(n-m)!}{(n+m)!}}{\tilde {f}}(n,m)(1-x^{2})^{m/2}P_{n}^{m}(x)}
== Some Legendre transform pairs ==
== References == | Wikipedia/Legendre_transform_(integral_transform) |
In mathematics, the Hankel transform expresses any given function f(r) as the weighted sum of an infinite number of Bessel functions of the first kind Jν(kr). The Bessel functions in the sum are all of the same order ν, but differ in a scaling factor k along the r axis. The necessary coefficient Fν of each Bessel function in the sum, as a function of the scaling factor k, constitutes the transformed function. The Hankel transform is an integral transform and was first developed by the mathematician Hermann Hankel. It is also known as the Fourier–Bessel transform. Just as the Fourier transform for an infinite interval is related to the Fourier series over a finite interval, so the Hankel transform over an infinite interval is related to the Fourier–Bessel series over a finite interval.
== Definition ==
The Hankel transform of order ν of a function f(r) is given by
{\displaystyle F_{\nu }(k)=\int _{0}^{\infty }f(r)J_{\nu }(kr)\,r\,\mathrm {d} r,}
where J_ν is the Bessel function of the first kind of order ν with ν ≥ −1/2. The inverse Hankel transform of F_ν(k) is defined as
{\displaystyle f(r)=\int _{0}^{\infty }F_{\nu }(k)J_{\nu }(kr)\,k\,\mathrm {d} k,}
which can be readily verified using the orthogonality relationship described below.
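A numerical check against a classical transform pair (a sketch; J₀ is evaluated here from its integral representation as a NumPy-only stand-in for a library Bessel routine, and the grids and test values are arbitrary — the order-0 Hankel transform of exp(−r²/2) is exp(−k²/2), a standard self-reciprocal example):

```python
import numpy as np

def trap(y, x):
    """Trapezoid rule on a uniform grid (along the last axis)."""
    dx = x[1] - x[0]
    return dx * (np.sum(y, axis=-1) - 0.5 * (y[..., 0] + y[..., -1]))

def j0(x):
    """Bessel J0 via J0(x) = (1/pi) * integral of cos(x sin t), t in [0, pi]."""
    t = np.linspace(0.0, np.pi, 4001)
    return trap(np.cos(np.outer(x, np.sin(t))), t) / np.pi

# Self-reciprocal pair: the order-0 Hankel transform of exp(-r^2/2) is exp(-k^2/2).
r = np.linspace(0.0, 10.0, 2001)       # Gaussian decay makes r = 10 ample
f = np.exp(-r ** 2 / 2)
for k in (0.5, 1.0, 2.0):
    Fk = trap(f * j0(k * r) * r, r)
    print(k, Fk, np.exp(-k ** 2 / 2))  # the last two columns agree closely
```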
=== Domain of definition ===
Inverting a Hankel transform of a function f(r) is valid at every point at which f(r) is continuous, provided that the function is defined in (0, ∞), is piecewise continuous and of bounded variation in every finite subinterval in (0, ∞), and
{\displaystyle \int _{0}^{\infty }|f(r)|\,r^{\frac {1}{2}}\,\mathrm {d} r<\infty .}
However, like the Fourier transform, the domain can be extended by a density argument to include some functions whose above integral is not finite, for example f(r) = (1 + r)^{−3/2}.
=== Alternative definition ===
An alternative definition says that the Hankel transform of g(r) is
{\displaystyle h_{\nu }(k)=\int _{0}^{\infty }g(r)J_{\nu }(kr)\,{\sqrt {kr}}\,\mathrm {d} r.}
The two definitions are related:
If g(r) = f(r)√r, then h_ν(k) = F_ν(k)√k.
This means that, as with the previous definition, the Hankel transform defined this way is also its own inverse:
{\displaystyle g(r)=\int _{0}^{\infty }h_{\nu }(k)J_{\nu }(kr)\,{\sqrt {kr}}\,\mathrm {d} k.}
The obvious domain now has the condition
{\displaystyle \int _{0}^{\infty }|g(r)|\,\mathrm {d} r<\infty ,}
but this can be extended. According to the reference given above, we can take the integral as the limit as the upper limit goes to infinity (an improper integral rather than a Lebesgue integral), and in this way the Hankel transform and its inverse work for all functions in L2(0, ∞).
== Transforming Laplace's equation ==
The Hankel transform can be used to transform and solve Laplace's equation expressed in cylindrical coordinates. Under the Hankel transform, the Bessel operator becomes a multiplication by
{\displaystyle -k^{2}}. In the axisymmetric case, the partial differential equation is transformed as
{\displaystyle {\mathcal {H}}_{0}\left\{{\frac {\partial ^{2}u}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial u}{\partial r}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right\}=-k^{2}U+{\frac {\partial ^{2}}{\partial z^{2}}}U,}
where {\displaystyle U={\mathcal {H}}_{0}u}. Therefore, the Laplacian in cylindrical coordinates becomes an ordinary differential equation in the transformed function {\displaystyle U}.
== Orthogonality ==
The Bessel functions form an orthogonal basis with respect to the weighting factor r:
{\displaystyle \int _{0}^{\infty }J_{\nu }(kr)J_{\nu }(k'r)\,r\,\mathrm {d} r={\frac {\delta (k-k')}{k}},\quad k,k'>0.}
== The Plancherel theorem and Parseval's theorem ==
If f(r) and g(r) are such that their Hankel transforms Fν(k) and Gν(k) are well defined, then the Plancherel theorem states
{\displaystyle \int _{0}^{\infty }f(r)g(r)\,r\,\mathrm {d} r=\int _{0}^{\infty }F_{\nu }(k)G_{\nu }(k)\,k\,\mathrm {d} k.}
Parseval's theorem, which states
{\displaystyle \int _{0}^{\infty }|f(r)|^{2}\,r\,\mathrm {d} r=\int _{0}^{\infty }|F_{\nu }(k)|^{2}\,k\,\mathrm {d} k,}
is a special case of the Plancherel theorem. These theorems can be proven using the orthogonality property.
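Parseval's theorem can be checked numerically on a concrete pair. The sketch below (an illustration, not from the source) uses the standard order-zero pair f(r) = e^{−r} ↔ F₀(k) = (1 + k²)^{−3/2}, for which both sides of the theorem equal 1/4:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule on a 1-D grid."""
    dx = np.diff(x)
    return float(np.sum(dx * (y[1:] + y[:-1]) / 2.0))

# f(r) = e^{-r} and its known order-0 Hankel transform F_0(k) = (1+k^2)^{-3/2}
r = np.linspace(0.0, 60.0, 60001)
k = np.linspace(0.0, 2000.0, 2000001)

lhs = trapezoid(np.exp(-r) ** 2 * r, r)        # integral of |f(r)|^2 r dr
rhs = trapezoid((1.0 + k ** 2) ** -3.0 * k, k) # integral of |F_0(k)|^2 k dk
```

Both quadratures come out very close to the exact common value 1/4, confirming the identity for this pair.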
== Relation to the multidimensional Fourier transform ==
The Hankel transform appears when one writes the multidimensional Fourier transform in hyperspherical coordinates, which is the reason why the Hankel transform often appears in physical problems with cylindrical or spherical symmetry.
Consider a function {\displaystyle f(\mathbf {r} )} of a {\textstyle d}-dimensional vector r. Its {\textstyle d}-dimensional Fourier transform is defined as
{\displaystyle F(\mathbf {k} )=\int _{\mathbb {R} ^{d}}f(\mathbf {r} )e^{-i\mathbf {k} \cdot \mathbf {r} }\,\mathrm {d} \mathbf {r} .}
To rewrite it in hyperspherical coordinates, we can use the decomposition of a plane wave into
{\textstyle d}-dimensional hyperspherical harmonics {\displaystyle Y_{l,m}}:
{\displaystyle e^{-i\mathbf {k} \cdot \mathbf {r} }=(2\pi )^{d/2}(kr)^{1-d/2}\sum _{l=0}^{+\infty }(-i)^{l}J_{d/2-1+l}(kr)\sum _{m}Y_{l,m}(\Omega _{\mathbf {k} })Y_{l,m}^{*}(\Omega _{\mathbf {r} }),}
where {\textstyle \Omega _{\mathbf {r} }} and {\textstyle \Omega _{\mathbf {k} }} are the sets of all hyperspherical angles in the {\displaystyle \mathbf {r} }-space and {\displaystyle \mathbf {k} }-space. This gives the following expression for the {\textstyle d}-dimensional Fourier transform in hyperspherical coordinates:
{\displaystyle F(\mathbf {k} )=(2\pi )^{d/2}k^{1-d/2}\sum _{l=0}^{+\infty }(-i)^{l}\sum _{m}Y_{l,m}(\Omega _{\mathbf {k} })\int _{0}^{+\infty }J_{d/2-1+l}(kr)r^{d/2}\mathrm {d} r\int f(\mathbf {r} )Y_{l,m}^{*}(\Omega _{\mathbf {r} })\mathrm {d} \Omega _{\mathbf {r} }.}
If we expand {\displaystyle f(\mathbf {r} )} and {\displaystyle F(\mathbf {k} )} in hyperspherical harmonics:
{\displaystyle f(\mathbf {r} )=\sum _{l=0}^{+\infty }\sum _{m}f_{l,m}(r)Y_{l,m}(\Omega _{\mathbf {r} }),\quad F(\mathbf {k} )=\sum _{l=0}^{+\infty }\sum _{m}F_{l,m}(k)Y_{l,m}(\Omega _{\mathbf {k} }),}
the Fourier transform in hyperspherical coordinates simplifies to
{\displaystyle k^{d/2-1}F_{l,m}(k)=(2\pi )^{d/2}(-i)^{l}\int _{0}^{+\infty }r^{d/2-1}f_{l,m}(r)J_{d/2-1+l}(kr)r\,\mathrm {d} r.}
This means that functions with angular dependence in form of a hyperspherical harmonic retain it upon the multidimensional Fourier transform, while the radial part undergoes the Hankel transform (up to some extra factors like
{\textstyle r^{d/2-1}}).
=== Special cases ===
==== Fourier transform in two dimensions ====
If a two-dimensional function f(r) is expanded in a multipole series,
{\displaystyle f(r,\theta )=\sum _{m=-\infty }^{\infty }f_{m}(r)e^{im\theta _{\mathbf {r} }},}
then its two-dimensional Fourier transform is given by
{\displaystyle F(\mathbf {k} )=2\pi \sum _{m}i^{-m}e^{im\theta _{\mathbf {k} }}F_{m}(k),}
where
{\displaystyle F_{m}(k)=\int _{0}^{\infty }f_{m}(r)J_{m}(kr)\,r\,\mathrm {d} r}
is the {\textstyle m}-th order Hankel transform of {\displaystyle f_{m}(r)} (in this case {\textstyle m} plays the role of the angular momentum, which was denoted by {\textstyle l} in the previous section).
==== Fourier transform in three dimensions ====
If a three-dimensional function f(r) is expanded in a multipole series over spherical harmonics,
{\displaystyle f(r,\theta _{\mathbf {r} },\varphi _{\mathbf {r} })=\sum _{l=0}^{+\infty }\sum _{m=-l}^{+l}f_{l,m}(r)Y_{l,m}(\theta _{\mathbf {r} },\varphi _{\mathbf {r} }),}
then its three-dimensional Fourier transform is given by
{\displaystyle F(k,\theta _{\mathbf {k} },\varphi _{\mathbf {k} })=(2\pi )^{3/2}\sum _{l=0}^{+\infty }(-i)^{l}\sum _{m=-l}^{+l}F_{l,m}(k)Y_{l,m}(\theta _{\mathbf {k} },\varphi _{\mathbf {k} }),}
where
{\displaystyle {\sqrt {k}}F_{l,m}(k)=\int _{0}^{+\infty }{\sqrt {r}}f_{l,m}(r)J_{l+1/2}(kr)r\,\mathrm {d} r}
is the Hankel transform of
{\displaystyle {\sqrt {r}}f_{l,m}(r)} of order {\textstyle (l+1/2)}.
This kind of Hankel transform of half-integer order is also known as the spherical Bessel transform.
==== Fourier transform in d dimensions (radially symmetric case) ====
If a d-dimensional function f(r) does not depend on angular coordinates, then its d-dimensional Fourier transform F(k) also does not depend on angular coordinates and is given by
{\displaystyle k^{d/2-1}F(k)=(2\pi )^{d/2}\int _{0}^{+\infty }r^{d/2-1}f(r)J_{d/2-1}(kr)r\,\mathrm {d} r,}
which is the Hankel transform of
{\displaystyle r^{d/2-1}f(r)} of order {\textstyle (d/2-1)}, up to a factor of {\displaystyle (2\pi )^{d/2}}.
==== 2D functions inside a limited radius ====
If a two-dimensional function f(r) is expanded in a multipole series and the expansion coefficients fm are sufficiently smooth near the origin and zero outside a radius R, the radial part fm(r)/r^m may be expanded into a power series of 1 − (r/R)^2:
{\displaystyle f_{m}(r)=r^{m}\sum _{t\geq 0}f_{m,t}\left(1-\left({\tfrac {r}{R}}\right)^{2}\right)^{t},\quad 0\leq r\leq R,}
such that the two-dimensional Fourier transform of f(r) becomes
{\displaystyle {\begin{aligned}F(\mathbf {k} )&=2\pi \sum _{m}i^{-m}e^{im\theta _{k}}\sum _{t}f_{m,t}\int _{0}^{R}r^{m}\left(1-\left({\tfrac {r}{R}}\right)^{2}\right)^{t}J_{m}(kr)r\,\mathrm {d} r&&\\&=2\pi \sum _{m}i^{-m}e^{im\theta _{k}}R^{m+2}\sum _{t}f_{m,t}\int _{0}^{1}x^{m+1}(1-x^{2})^{t}J_{m}(kxR)\,\mathrm {d} x&&(x={\tfrac {r}{R}})\\&=2\pi \sum _{m}i^{-m}e^{im\theta _{k}}R^{m+2}\sum _{t}f_{m,t}{\frac {t!2^{t}}{(kR)^{1+t}}}J_{m+t+1}(kR),\end{aligned}}}
where the last equality follows from a standard tabulated Bessel integral (§6.567.1). The expansion coefficients fm,t are accessible with discrete Fourier transform techniques: if the radial distance is scaled with
{\displaystyle r/R\equiv \sin \theta ,\quad 1-(r/R)^{2}=\cos ^{2}\theta ,}
the Fourier-Chebyshev series coefficients g emerge as
{\displaystyle f(r)\equiv r^{m}\sum _{j}g_{m,j}\cos(j\theta )=r^{m}\sum _{j}g_{m,j}T_{j}(\cos \theta ).}
Using the re-expansion
{\displaystyle \cos(j\theta )=2^{j-1}\cos ^{j}\theta -{\frac {j}{1}}2^{j-3}\cos ^{j-2}\theta +{\frac {j}{2}}{\binom {j-3}{1}}2^{j-5}\cos ^{j-4}\theta -{\frac {j}{3}}{\binom {j-4}{2}}2^{j-7}\cos ^{j-6}\theta +\cdots }
yields fm,t expressed as sums of gm,j.
This is one flavor of fast Hankel transform techniques.
== Relation to the Fourier and Abel transforms ==
The Hankel transform is one member of the FHA cycle of integral operators. In two dimensions, if we define A as the Abel transform operator, F as the Fourier transform operator, and H as the zeroth-order Hankel transform operator, then the special case of the projection-slice theorem for circularly symmetric functions states that
{\displaystyle FA=H.}
In other words, applying the Abel transform to a 1-dimensional function and then applying the Fourier transform to that result is the same as applying the Hankel transform to that function. This concept can be extended to higher dimensions.
== Numerical evaluation ==
A simple and efficient approach to the numerical evaluation of the Hankel transform is based on the observation that it can be cast in the form of a convolution by a logarithmic change of variables
{\displaystyle r=r_{0}e^{-\rho },\quad k=k_{0}\,e^{\kappa }.}
In these new variables, the Hankel transform reads
{\displaystyle {\tilde {F}}_{\nu }(\kappa )=\int _{-\infty }^{\infty }{\tilde {f}}(\rho ){\tilde {J}}_{\nu }(\kappa -\rho )\,\mathrm {d} \rho ,}
where
{\displaystyle {\tilde {f}}(\rho )=\left(r_{0}\,e^{-\rho }\right)^{1-n}\,f(r_{0}e^{-\rho }),}
{\displaystyle {\tilde {F}}_{\nu }(\kappa )=\left(k_{0}\,e^{\kappa }\right)^{1+n}\,F_{\nu }(k_{0}e^{\kappa }),}
{\displaystyle {\tilde {J}}_{\nu }(\kappa -\rho )=\left(k_{0}\,r_{0}\,e^{\kappa -\rho }\right)^{1+n}\,J_{\nu }(k_{0}r_{0}e^{\kappa -\rho }).}
Now the integral can be calculated numerically with
{\textstyle O(N\log N)}
complexity using fast Fourier transform. The algorithm can be further simplified by using a known analytical expression for the Fourier transform of
{\displaystyle {\tilde {J}}_{\nu }}:
{\displaystyle \int _{-\infty }^{+\infty }{\tilde {J}}_{\nu }(x)e^{-iqx}\,\mathrm {d} x={\frac {\Gamma \left({\frac {\nu +1+n-iq}{2}}\right)}{\Gamma \left({\frac {\nu +1-n+iq}{2}}\right)}}\,2^{n-iq}e^{iq\ln(k_{0}r_{0})}.}
The optimal choice of parameters {\displaystyle r_{0},k_{0},n} depends on the properties of {\displaystyle f(r),} in particular its asymptotic behavior at {\displaystyle r\to 0} and {\displaystyle r\to \infty .}
This algorithm is known as the "quasi-fast Hankel transform", or simply "fast Hankel transform".
Since it is based on fast Fourier transform in logarithmic variables,
{\displaystyle f(r)}
has to be defined on a logarithmic grid. For functions defined on a uniform grid, a number of other algorithms exist, including straightforward quadrature, methods based on the projection-slice theorem, and methods using the asymptotic expansion of Bessel functions.
== Some Hankel transform pairs ==
Kn(z) is a modified Bessel function of the second kind.
K(z) is the complete elliptic integral of the first kind.
The expression
{\displaystyle {\frac {\,\mathrm {d} ^{2}F_{0}\,}{\mathrm {d} k^{2}}}+{\frac {1}{k}}{\frac {\,\mathrm {d} F_{0}\,}{\mathrm {d} k}}}
coincides with the expression for the Laplace operator in polar coordinates ( k, θ ) applied to a circularly symmetric function F0(k).
The Hankel transforms of Zernike polynomials are essentially Bessel functions (Noll 1976):
{\displaystyle R_{n}^{m}(r)=(-1)^{\frac {n-m}{2}}\int _{0}^{\infty }J_{n+1}(k)J_{m}(kr)\,\mathrm {d} k}
for even n − m ≥ 0.
== See also ==
Fourier transform
Integral transform
Abel transform
Fourier–Bessel series
Neumann polynomial
Y and H transforms
== References == | Wikipedia/Hankel_transform |
In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods use linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map; in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines may be infinite-dimensional, but by the representer theorem only a finite-dimensional matrix built from the data is required. Without parallel processing, kernel machines can be slow to compute for datasets larger than a couple of thousand examples.
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick". Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.
Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).
== Motivation and informal explanation ==
Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the {\displaystyle i}-th training example {\displaystyle (\mathbf {x} _{i},y_{i})} and learn for it a corresponding weight {\displaystyle w_{i}}. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function {\displaystyle k}, called a kernel, between the unlabeled input {\displaystyle \mathbf {x'} } and each of the training inputs {\displaystyle \mathbf {x} _{i}}. For instance, a kernelized binary classifier typically computes a weighted sum of similarities
{\displaystyle {\hat {y}}=\operatorname {sgn} \sum _{i=1}^{n}w_{i}y_{i}k(\mathbf {x} _{i},\mathbf {x'} ),}
where
{\displaystyle {\hat {y}}\in \{-1,+1\}} is the kernelized binary classifier's predicted label for the unlabeled input {\displaystyle \mathbf {x'} } whose hidden true label {\displaystyle y} is of interest;
{\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is the kernel function that measures similarity between any pair of inputs {\displaystyle \mathbf {x} ,\mathbf {x'} \in {\mathcal {X}}};
the sum ranges over the n labeled examples {\displaystyle \{(\mathbf {x} _{i},y_{i})\}_{i=1}^{n}} in the classifier's training set, with {\displaystyle y_{i}\in \{-1,+1\}};
the {\displaystyle w_{i}\in \mathbb {R} } are the weights for the training examples, as determined by the learning algorithm;
the sign function {\displaystyle \operatorname {sgn} } determines whether the predicted classification {\displaystyle {\hat {y}}} comes out positive or negative.
Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron. They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
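The weighted-sum decision rule above can be sketched as a minimal kernel perceptron. This is an illustrative toy (not from the source): the RBF kernel, the XOR-style dataset, and the update rule (increment w_i whenever example i is misclassified) are standard but arbitrary choices here.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # k(x, z) = exp(-gamma * ||x - z||^2), a common similarity function
    return np.exp(-gamma * np.sum((x - z) ** 2))

def train_kernel_perceptron(X, y, kernel, epochs=20):
    """Learn one weight w_i per training example (all start at zero);
    w_i is incremented each time example i is misclassified."""
    n = len(X)
    w = np.zeros(n)
    for _ in range(epochs):
        for j in range(n):
            s = sum(w[i] * y[i] * kernel(X[i], X[j]) for i in range(n))
            if np.sign(s) != y[j]:
                w[j] += 1.0
    return w

def predict(X, y, w, kernel, x_new):
    # y_hat = sgn( sum_i w_i y_i k(x_i, x') ) -- the rule from the text
    s = sum(w[i] * y[i] * kernel(X[i], x_new) for i in range(len(X)))
    return 1 if s >= 0 else -1

# XOR-like data: not linearly separable in the input space,
# but separable through the implicit RBF feature space.
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([-1, -1, 1, 1])
w = train_kernel_perceptron(X, y, rbf_kernel)
preds = [predict(X, y, w, rbf_kernel, x) for x in X]
```

The perceptron never touches explicit feature coordinates; all geometry enters through kernel evaluations, which is the point of the weighted-sum formulation.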
== Mathematics: the kernel trick ==
The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all {\displaystyle \mathbf {x} } and {\displaystyle \mathbf {x'} } in the input space {\displaystyle {\mathcal {X}}}, certain functions {\displaystyle k(\mathbf {x} ,\mathbf {x'} )} can be expressed as an inner product in another space {\displaystyle {\mathcal {V}}}. The function {\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} } is often referred to as a kernel or a kernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.
Certain problems in machine learning have more structure than an arbitrary weighting function {\displaystyle k}. The computation is made much simpler if the kernel can be written in the form of a "feature map" {\displaystyle \varphi \colon {\mathcal {X}}\to {\mathcal {V}}} which satisfies
{\displaystyle k(\mathbf {x} ,\mathbf {x'} )=\langle \varphi (\mathbf {x} ),\varphi (\mathbf {x'} )\rangle _{\mathcal {V}}.}
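A concrete instance of this identity (an illustration, not from the source) is the homogeneous quadratic kernel on R²: k(x, z) = (x·z)² equals the inner product of the explicit feature map φ(x) = (x₁², x₂², √2 x₁x₂) in R³.

```python
import numpy as np

def k_poly(x, z):
    # Quadratic kernel evaluated directly in the input space R^2
    return float(np.dot(x, z)) ** 2

def phi(x):
    # Explicit feature map into R^3 realizing the same inner product:
    # <phi(x), phi(z)> = x1^2 z1^2 + x2^2 z2^2 + 2 x1 x2 z1 z2 = (x . z)^2
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2.0) * x[0] * x[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
lhs = k_poly(x, z)                    # kernel in the input space
rhs = float(np.dot(phi(x), phi(z)))  # inner product in the feature space
```

For higher-degree polynomial kernels the explicit map grows combinatorially, while the kernel evaluation stays a single dot product and a power; this gap is exactly what the kernel trick exploits.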
The key restriction is that {\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {V}}} must be a proper inner product. On the other hand, an explicit representation for {\displaystyle \varphi } is not necessary, as long as {\displaystyle {\mathcal {V}}} is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function {\displaystyle \varphi } exists whenever the space {\displaystyle {\mathcal {X}}} can be equipped with a suitable measure ensuring the function {\displaystyle k} satisfies Mercer's condition.
Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure {\displaystyle \mu (T)=|T|} for all {\displaystyle T\subset X}, which counts the number of points inside the set {\displaystyle T}, then the integral in Mercer's theorem reduces to a summation
{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}k(\mathbf {x} _{i},\mathbf {x} _{j})c_{i}c_{j}\geq 0.}
If this summation holds for all finite sequences of points {\displaystyle (\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n})} in {\displaystyle {\mathcal {X}}} and all choices of {\displaystyle n} real-valued coefficients {\displaystyle (c_{1},\dots ,c_{n})} (cf. positive definite kernel), then the function {\displaystyle k} satisfies Mercer's condition.
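For a finite point set, the summation condition says exactly that the Gram matrix K_ij = k(x_i, x_j) is positive semi-definite, which can be checked numerically. The sketch below (an illustration, not from the source; the Gaussian kernel and random points are arbitrary choices) verifies it via eigenvalues:

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    # Gaussian (RBF) kernel, a standard Mercer kernel
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 arbitrary points in R^3

# Gram ("kernel") matrix K_ij = k(x_i, x_j)
K = np.array([[rbf(xi, xj) for xj in X] for xi in X])

# Mercer's condition on this finite set: c^T K c >= 0 for all c,
# i.e. K is positive semi-definite -- all eigenvalues >= 0
# (up to floating-point round-off).
eigvals = np.linalg.eigvalsh(K)
is_psd = bool(eigvals.min() > -1e-10)
```

The same eigenvalue check is a quick way to catch a hand-crafted "similarity" that fails Mercer's condition on a given dataset.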
Some algorithms that depend on arbitrary relationships in the native space {\displaystyle {\mathcal {X}}} would, in fact, have a linear interpretation in a different setting: the range space of {\displaystyle \varphi }. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute {\displaystyle \varphi } directly during computation, as is the case with support-vector machines. Some cite this running-time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.
Theoretically, a Gram matrix {\displaystyle \mathbf {K} \in \mathbb {R} ^{n\times n}} with respect to {\displaystyle \{\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n}\}} (sometimes also called a "kernel matrix"), where {\displaystyle K_{ij}=k(\mathbf {x} _{i},\mathbf {x} _{j})}, must be positive semi-definite (PSD). Empirically, for machine learning heuristics, choices of a function {\displaystyle k} that do not satisfy Mercer's condition may still perform reasonably if {\displaystyle k} at least approximates the intuitive idea of similarity. Regardless of whether {\displaystyle k} is a Mercer kernel, {\displaystyle k} may still be referred to as a "kernel".
If the kernel function {\displaystyle k} is also a covariance function as used in Gaussian processes, then the Gram matrix {\displaystyle \mathbf {K} } can also be called a covariance matrix.
== Applications ==
Application areas of kernel methods are diverse and include geostatistics, kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition.
== Popular kernels ==
Fisher kernel
Graph kernels
Kernel smoother
Polynomial kernel
Radial basis function kernel (RBF)
String kernels
Neural tangent kernel
Neural network Gaussian process (NNGP) kernel
== See also ==
Kernel methods for vector output
Kernel density estimation
Representer theorem
Similarity learning
Cover's theorem
== References ==
== Further reading ==
Shawe-Taylor, J.; Cristianini, N. (2004). Kernel Methods for Pattern Analysis. Cambridge University Press. ISBN 9780511809682.
Liu, W.; Principe, J.; Haykin, S. (2010). Kernel Adaptive Filtering: A Comprehensive Introduction. Wiley. ISBN 9781118211212.
Schölkopf, B.; Smola, A. J.; Bach, F. (2018). Learning with Kernels : Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press. ISBN 978-0-262-53657-8.
== External links ==
Kernel-Machines Org—community website
onlineprediction.net Kernel Methods Article | Wikipedia/Kernel_method |
In mathematics, the Fourier sine and cosine transforms are integral transforms that decompose arbitrary functions into a sum of sine waves, representing the odd component of the function, plus cosine waves, representing the even component of the function. The modern Fourier transform concisely contains both the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead of complex exponentials and don't require complex numbers or negative frequency, they more closely correspond to Joseph Fourier's original transform equations. They are still preferred in some signal processing and statistics applications and may be better suited as an introduction to Fourier analysis.
== Definition ==
The Fourier sine transform of {\displaystyle f(t)} is:
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt.}
If {\displaystyle t} means time, then {\displaystyle \xi } is frequency in cycles per unit time, but in the abstract, they can be any dual pair of variables (e.g. position and spatial frequency).
The sine transform is necessarily an odd function of frequency, i.e. for all {\displaystyle \xi }:
{\displaystyle {\hat {f}}^{s}(-\xi )=-{\hat {f}}^{s}(\xi ).}
The Fourier cosine transform of {\displaystyle f(t)} is:
{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt.}
The cosine transform is necessarily an even function of frequency, i.e. for all {\displaystyle \xi }:
{\displaystyle {\hat {f}}^{c}(-\xi )={\hat {f}}^{c}(\xi ).}
=== Odd and even simplification ===
The multiplication rules for even and odd functions shown in the overbraces in the following equations dramatically simplify the integrands when transforming even and odd functions. Some authors only define the cosine transform for even functions {\displaystyle f_{\text{even}}(t)}. Since cosine is an even function, and because the integral of an even function from {\displaystyle -\infty } to {\displaystyle \infty } is twice its integral from {\displaystyle 0} to {\displaystyle \infty }, the cosine transform of any even function can be simplified to avoid negative {\displaystyle t}:
{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \cos(2\pi \xi t)} ^{\text{even·even=even}}\,dt=2\int _{0}^{\infty }f_{\text{even}}(t)\cos(2\pi \xi t)\,dt.}
And because the integral from {\displaystyle -\infty } to {\displaystyle \infty } of any odd function is zero, the cosine transform of any odd function is simply zero:
{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \cos(2\pi \xi t)} ^{\text{odd·even=odd}}\,dt=0.}
Similarly, because sine is odd, the sine transform of any odd function {\displaystyle f_{\text{odd}}(t)} also simplifies to avoid negative {\displaystyle t}:
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \sin(2\pi \xi t)} ^{\text{odd·odd=even}}\,dt=2\int _{0}^{\infty }f_{\text{odd}}(t)\sin(2\pi \xi t)\,dt}
and the sine transform of any even function is simply zero:
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \sin(2\pi \xi t)} ^{\text{even·odd=odd}}\,dt=0.}
The sine transform represents the odd part of a function, while the cosine transform represents the even part of a function.
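These even/odd simplifications can be confirmed by quadrature. The sketch below (an illustration, not from the source) uses the odd test function t·e^{−t²} and an arbitrary frequency ξ = 0.3: the full-line sine transform matches its doubled half-line form, and the cosine transform of the odd function vanishes.

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule on a 1-D grid."""
    dx = np.diff(x)
    return float(np.sum(dx * (y[1:] + y[:-1]) / 2.0))

f_odd = lambda t: t * np.exp(-t ** 2)   # an odd test function
xi = 0.3                                # arbitrary frequency

t_full = np.linspace(-10.0, 10.0, 40001)
t_half = np.linspace(0.0, 10.0, 20001)

# Sine transform: full-line integral vs. the doubled half-line form
s_full = trapezoid(f_odd(t_full) * np.sin(2 * np.pi * xi * t_full), t_full)
s_half = 2.0 * trapezoid(f_odd(t_half) * np.sin(2 * np.pi * xi * t_half), t_half)

# Cosine transform of an odd function vanishes (odd * even = odd)
c_full = trapezoid(f_odd(t_full) * np.cos(2 * np.pi * xi * t_full), t_full)
```

Swapping in an even test function shows the mirror-image behavior: the sine transform vanishes and the cosine transform doubles.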
=== Other conventions ===
Just like the Fourier transform takes the form of different equations with different constant factors (see Fourier transform § Unitarity and definition for square integrable functions for discussion), other authors also define the cosine transform as
{\displaystyle {\hat {f}}^{c}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\cos(2\pi \xi t)\,dt}
and the sine transform as
{\displaystyle {\hat {f}}^{s}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\sin(2\pi \xi t)\,dt.}
Another convention defines the cosine transform as
{\displaystyle F_{c}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\cos(\alpha x)\,dx}
and the sine transform as
{\displaystyle F_{s}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\sin(\alpha x)\,dx}
using {\displaystyle \alpha } as the transformation variable. And while {\displaystyle t} is typically used to represent the time domain, {\displaystyle x} is often instead used to represent a spatial domain when transforming to spatial frequencies.
== Fourier inversion ==
The original function {\displaystyle f} can be recovered from its sine and cosine transforms under the usual hypotheses using the inversion formula:
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi +\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi .}
=== Simplifications ===
Note that since both integrands are even functions of {\displaystyle \xi }, the concept of negative frequency can be avoided by doubling the result of integrating over non-negative frequencies:
{\displaystyle f(t)=2\int _{0}^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi \,+2\int _{0}^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi \,.}
Also, if {\displaystyle f} is an odd function, then the cosine transform is zero, so its inversion simplifies to:
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is odd.}}}
Likewise, if the original function {\displaystyle f} is an even function, then the sine transform is zero, so its inversion also simplifies to:
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is even.}}}
Remarkably, these last two simplified inversion formulas look identical to the original sine and cosine transforms, respectively, though with {\displaystyle t} swapped with {\displaystyle \xi } (and with {\displaystyle f} swapped with {\displaystyle {\hat {f}}^{s}} or {\displaystyle {\hat {f}}^{c}}). A consequence of this symmetry is that their inversion and transform processes still work when the two functions are swapped. Two such functions are called transform pairs.
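A classic transform pair (an illustration, not from the source) is the Gaussian e^{−πt²}, which is even and equals its own cosine transform. The sketch below verifies this numerically at a few frequencies:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule on a 1-D grid."""
    dx = np.diff(x)
    return float(np.sum(dx * (y[1:] + y[:-1]) / 2.0))

def cosine_transform(f, xi, t):
    # f-hat-c(xi) = integral f(t) cos(2 pi xi t) dt on the grid t
    return trapezoid(f(t) * np.cos(2 * np.pi * xi * t), t)

f = lambda t: np.exp(-np.pi * t ** 2)   # the Gaussian e^{-pi t^2}
t = np.linspace(-8.0, 8.0, 32001)

# e^{-pi t^2} is even and is its own cosine transform:
# f-hat-c(xi) = e^{-pi xi^2}
results = {xi: cosine_transform(f, xi, t) for xi in (0.0, 0.5, 1.0)}
```

Because the pair is self-inverse, applying the same quadrature to the transform recovers the original Gaussian, illustrating the swap symmetry described above.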
=== Overview of inversion proof ===
Using the addition formula for cosine, the full inversion formula can also be rewritten as Fourier's integral formula:
{\displaystyle f(t)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .}
This theorem is often stated under different hypotheses, that {\displaystyle f} is integrable, and is of bounded variation on an open interval containing the point {\displaystyle t}, in which case
{\displaystyle {\tfrac {1}{2}}\lim _{h\to 0}\left(f(t+h)+f(t-h)\right)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .}
This latter form is a useful intermediate step in proving the inverse formulae for the sine and cosine transforms. One method of deriving it, due to Cauchy, is to insert a factor {\displaystyle e^{-\delta \xi }} into the integral, where {\displaystyle \delta >0} is fixed. Then
{\displaystyle 2\int _{-\infty }^{\infty }\int _{0}^{\infty }e^{-\delta \xi }\cos(2\pi \xi (x-t))\,d\xi \,f(x)\,dx=\int _{-\infty }^{\infty }f(x){\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx.}
Now when {\displaystyle \delta \to 0}, the integrand tends to zero except at {\displaystyle x=t}, so that formally the above is
{\displaystyle f(t)\int _{-\infty }^{\infty }{\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx=f(t).}
== Relation with complex exponentials ==
The complex exponential form of the Fourier transform used more often today is
{\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)e^{-2\pi i\xi t}\,dt\\\end{aligned}}\,}
where i is the square root of negative one. By applying Euler's formula (e^{ix} = cos x + i sin x),
it can be shown (for real-valued functions) that the Fourier transform's real component is the cosine transform (representing the even component of the original function) and the Fourier transform's imaginary component is the negative of the sine transform (representing the odd component of the original function):
{\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)\left(\cos(2\pi \xi t)-i\,\sin(2\pi \xi t)\right)dt&&{\text{Euler's Formula}}\\&=\left(\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt\right)-i\left(\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt\right)\\&={\hat {f}}^{c}(\xi )-i\,{\hat {f}}^{s}(\xi )\,.\end{aligned}}}
Because of this relationship, the cosine transform of functions whose Fourier transform is known (e.g. in Fourier transform § Tables of important Fourier transforms) can be simply found by taking the real part of the Fourier transform:
{\displaystyle {\hat {f}}^{c}(\xi )=\mathrm {Re} {[\;{\hat {f}}(\xi )\;]}}
while the sine transform is simply the negative of the imaginary part of the Fourier transform:
{\displaystyle {\hat {f}}^{s}(\xi )=-\mathrm {Im} {[\;{\hat {f}}(\xi )\;]}\,.}
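This real/imaginary decomposition can be checked numerically. A sketch (using numpy and the example f(t) = exp(−πt²), whose Fourier transform with the e^{−2πiξt} convention is exp(−πξ²)): the cosine transform reproduces the real part and, since f is even, the sine transform vanishes.

```python
import numpy as np

# Numerical illustration: for the even function f(t) = exp(-pi t^2), whose
# Fourier transform is exp(-pi xi^2), the cosine transform equals the real
# part exp(-pi xi^2) and the sine transform (minus the imaginary part) is 0.
t = np.linspace(-10, 10, 200_001)
f = np.exp(-np.pi * t**2)
xi = 0.7
cos_part = np.trapz(f * np.cos(2 * np.pi * xi * t), t)
sin_part = np.trapz(f * np.sin(2 * np.pi * xi * t), t)
print(cos_part, np.exp(-np.pi * xi**2), sin_part)
```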
=== Pros and cons ===
An advantage of the modern Fourier transform is that, while the sine and cosine transforms together are required to extract the phase information of a frequency, the modern Fourier transform compactly packs both phase and amplitude information inside its complex-valued result. A disadvantage, however, is that it requires familiarity with complex numbers, complex exponentials, and negative frequency.
The sine and cosine transforms meanwhile have the advantage that all quantities are real. Since positive frequencies can fully express them, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. They may also be convenient when the original function is already even or odd or can be made even or odd, in which case only the cosine or the sine transform respectively is needed. For instance, even though an input may not be even or odd, a discrete cosine transform may start by assuming an even extension of its input while a discrete sine transform may start by assuming an odd extension of its input, to avoid having to compute the entire discrete Fourier transform.
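The even-extension idea behind the discrete cosine transform can be sketched in a few lines (an illustration using numpy, not any particular DCT library): evenly extending a real sequence makes its DFT purely real, so only cosine terms survive.

```python
import numpy as np

# Sketch of the even-extension idea: extending a real sequence evenly makes
# its DFT purely real, i.e. only cosine terms survive; discrete cosine
# transforms exploit exactly this to avoid a full complex DFT.
x = np.array([1.0, 2.0, 3.0, 4.0])
even_ext = np.concatenate([x, x[-2:0:-1]])  # [1, 2, 3, 4, 3, 2]
X = np.fft.fft(even_ext)
print(np.max(np.abs(X.imag)))  # essentially zero: a purely real spectrum
```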
== Numerical evaluation ==
Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned.
Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals. This method attempts to evaluate the integrand at locations which asymptotically approach the zeros of the oscillation (either the sine or cosine), quickly reducing the magnitude of positive and negative terms which are summed.
== See also ==
Discrete cosine transform
Discrete sine transform
List of Fourier-related transforms
== Notes ==
== References ==
Whittaker, Edmund, and George Watson, A Course of Modern Analysis, Fourth Edition, Cambridge Univ. Press, 1927, pp. 189, 211
In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an L1 function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis.
== Statement ==
Let f ∈ L¹(ℝⁿ) be an integrable function, i.e. f : ℝⁿ → ℂ is a measurable function such that
{\displaystyle \|f\|_{L^{1}}=\int _{\mathbb {R} ^{n}}|f(x)|\mathrm {d} x<\infty ,}
and let f̂ be the Fourier transform of f, i.e.
{\displaystyle {\hat {f}}\colon \mathbb {R} ^{n}\rightarrow \mathbb {C} ,\ \xi \mapsto \int _{\mathbb {R} ^{n}}f(x)\mathrm {e} ^{-\mathrm {i} x\cdot \xi }\mathrm {d} x.}
Then f̂ vanishes at infinity: |f̂(ξ)| → 0 as |ξ| → ∞.
Because the Fourier transform of an integrable function is continuous, the Fourier transform f̂ is a continuous function vanishing at infinity. If C₀(ℝⁿ) denotes the vector space of continuous functions vanishing at infinity, the Riemann–Lebesgue lemma may be formulated as follows: the Fourier transformation maps L¹(ℝⁿ) to C₀(ℝⁿ).
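The decay can be seen concretely in an example (a numerical illustration, not a proof, using numpy): for the integrable indicator function of [0, 1], the transform has the closed form (1 − e^{−iξ})/(iξ), whose magnitude is bounded by 2/|ξ| and therefore tends to 0.

```python
import numpy as np

# Numerical illustration (not a proof): for the indicator function of [0, 1],
# |f^(xi)| = |(1 - exp(-i*xi)) / (i*xi)| <= 2/|xi|, so the Fourier transform
# vanishes as |xi| grows, as the lemma asserts for any integrable f.
for xi in [10.0, 100.0, 1000.0]:
    fhat = (1 - np.exp(-1j * xi)) / (1j * xi)  # closed form of the transform
    print(xi, abs(fhat))
```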
=== Proof ===
We will focus on the one-dimensional case n = 1; the proof in higher dimensions is similar. First, suppose that f is continuous and compactly supported. For ξ ≠ 0, the substitution x → x + π/ξ leads to
{\displaystyle {\hat {f}}(\xi )=\int _{\mathbb {R} }f(x)\mathrm {e} ^{-\mathrm {i} x\xi }\mathrm {d} x=\int _{\mathbb {R} }f\left(x+{\frac {\pi }{\xi }}\right)\mathrm {e} ^{-\mathrm {i} x\xi }\mathrm {e} ^{-\mathrm {i} \pi }\mathrm {d} x=-\int _{\mathbb {R} }f\left(x+{\frac {\pi }{\xi }}\right)\mathrm {e} ^{-\mathrm {i} x\xi }\mathrm {d} x}.
This gives a second formula for f̂(ξ). Taking the mean of both formulas, we arrive at the following estimate:
{\displaystyle |{\hat {f}}(\xi )|\leq {\frac {1}{2}}\int _{\mathbb {R} }\left|f(x)-f\left(x+{\frac {\pi }{\xi }}\right)\right|\mathrm {d} x}.
Because f is continuous, |f(x) − f(x + π/ξ)| converges to 0 as |ξ| → ∞ for all x ∈ ℝ. Thus, |f̂(ξ)| converges to 0 as |ξ| → ∞ due to the dominated convergence theorem.
If f is an arbitrary integrable function, it may be approximated in the L¹ norm by a compactly supported continuous function. For ε > 0, pick a compactly supported continuous function g such that ‖f − g‖_{L¹} ≤ ε. Then
{\displaystyle \limsup _{\xi \rightarrow \pm \infty }|{\hat {f}}(\xi )|\leq \limsup _{\xi \to \pm \infty }\left|\int (f(x)-g(x))\mathrm {e} ^{-\mathrm {i} x\xi }\,\mathrm {d} x\right|+\limsup _{\xi \rightarrow \pm \infty }\left|\int g(x)\mathrm {e} ^{-\mathrm {i} x\xi }\,\mathrm {d} x\right|\leq \varepsilon +0=\varepsilon .}
Because this holds for any ε > 0, it follows that |f̂(ξ)| → 0 as |ξ| → ∞.
== Other versions ==
The Riemann–Lebesgue lemma holds in a variety of other situations.
If f ∈ L¹[0, ∞), then the Riemann–Lebesgue lemma also holds for the Laplace transform of f, that is,
{\displaystyle \int _{0}^{\infty }f(t)\mathrm {e} ^{-tz}\mathrm {d} t\to 0}
as |z| → ∞ within the half-plane Re(z) ≥ 0.
A version holds for Fourier series as well: if f is an integrable function on a bounded interval, then the Fourier coefficients f̂_k of f tend to 0 as k → ±∞. This follows by extending f by zero outside the interval, and then applying the version of the Riemann–Lebesgue lemma on the entire real line.
However, the Riemann–Lebesgue lemma does not hold for arbitrary distributions. For example, the Dirac delta function distribution formally has a finite integral over the real line, but its Fourier transform is a constant and does not vanish at infinity.
== Applications ==
The Riemann–Lebesgue lemma can be used to prove the validity of asymptotic approximations for integrals. Rigorous treatments of the method of steepest descent and the method of stationary phase, amongst others, are based on the Riemann–Lebesgue lemma.
== References ==
Bochner S., Chandrasekharan K. (1949). Fourier Transforms. Princeton University Press.
Weisstein, Eric W. "Riemann–Lebesgue Lemma". MathWorld.
In mathematics, Laguerre transform is an integral transform named after the mathematician Edmond Laguerre, which uses generalized Laguerre polynomials
L_n^α(x) as kernels of the transform.
The Laguerre transform of a function f(x) is
{\displaystyle L\{f(x)\}={\tilde {f}}_{\alpha }(n)=\int _{0}^{\infty }e^{-x}x^{\alpha }\ L_{n}^{\alpha }(x)\ f(x)\ dx}
The inverse Laguerre transform is given by
{\displaystyle L^{-1}\{{\tilde {f}}_{\alpha }(n)\}=f(x)=\sum _{n=0}^{\infty }{\binom {n+\alpha }{n}}^{-1}{\frac {1}{\Gamma (\alpha +1)}}{\tilde {f}}_{\alpha }(n)L_{n}^{\alpha }(x)}
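The transform pair rests on the orthogonality of the Laguerre polynomials under the weight e^{−x}x^α. A numerical sketch of this for the special case α = 0 (using numpy's standard Laguerre polynomials): the Laguerre transform of F = L₂ is 1 at n = 2 and 0 at every other n.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

# Orthogonality behind the transform pair, for alpha = 0: since
# int_0^inf e^{-x} L_n(x) L_m(x) dx = delta_nm, the Laguerre transform of
# F = L_2 is 1 at n = 2 and 0 at every other n.
x = np.linspace(0.0, 120.0, 1_200_001)
L2 = lagval(x, [0, 0, 1])  # coefficient vector selecting L_2
coeffs = [np.trapz(np.exp(-x) * lagval(x, [0] * n + [1]) * L2, x) for n in range(4)]
print(coeffs)
```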
== Some Laguerre transform pairs ==
== References ==
In mathematics, the Abel transform, named for Niels Henrik Abel, is an integral transform often used in the analysis of spherically symmetric or axially symmetric functions. The Abel transform of a function f(r) is given by
{\displaystyle F(y)=2\int _{y}^{\infty }{\frac {f(r)r}{\sqrt {r^{2}-y^{2}}}}\,dr.}
Assuming that f(r) drops to zero more quickly than 1/r, the inverse Abel transform is given by
{\displaystyle f(r)=-{\frac {1}{\pi }}\int _{r}^{\infty }{\frac {dF}{dy}}\,{\frac {dy}{\sqrt {y^{2}-r^{2}}}}.}
In image analysis, the forward Abel transform is used to project an optically thin, axially symmetric emission function onto a plane, and the inverse Abel transform is used to calculate the emission function given a projection (i.e. a scan or a photograph) of that emission function.
In absorption spectroscopy of cylindrical flames or plumes, the forward Abel transform is the integrated absorbance along a ray with closest distance y from the center of the flame, while the inverse Abel transform gives the local absorption coefficient at a distance r from the center. The Abel transform is limited to applications with axially symmetric geometries. For more general asymmetrical cases, more general-oriented reconstruction algorithms such as the algebraic reconstruction technique (ART), maximum likelihood expectation maximization (MLEM), and filtered back-projection (FBP) should be employed.
In recent years, the inverse Abel transform (and its variants) has become the cornerstone of data analysis in photofragment-ion imaging and photoelectron imaging. Among recent most notable extensions of inverse Abel transform are the "onion peeling" and "basis set expansion" (BASEX) methods of photoelectron and photoion image analysis.
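A small numerical sketch of the forward transform (using numpy; the Gaussian test function and quadrature grid are illustrative choices): for f(r) = exp(−r²), the projection has the closed form F(y) = √π·exp(−y²). Substituting r = √(y² + u²) removes the inverse-square-root endpoint singularity, turning the Abel integral into simply 2∫₀^∞ f(√(y² + u²)) du.

```python
import numpy as np

# Sketch: forward Abel transform of the Gaussian f(r) = exp(-r^2), which has
# the closed form F(y) = sqrt(pi) * exp(-y^2). The substitution
# r = sqrt(y^2 + u^2) removes the endpoint singularity at r = y.
def abel_forward(f, y, u_max=20.0, n=200_001):
    u = np.linspace(0.0, u_max, n)
    return np.trapz(2.0 * f(np.sqrt(y**2 + u**2)), u)

y = 1.0
print(abel_forward(lambda r: np.exp(-r**2), y), np.sqrt(np.pi) * np.exp(-y**2))
```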
== Geometrical interpretation ==
In two dimensions, the Abel transform F(y) can be interpreted as the projection of a circularly symmetric function f(r) along a set of parallel lines of sight at a distance y from the origin. Referring to the figure on the right, the observer (I) will see
{\displaystyle F(y)=\int _{-\infty }^{\infty }f\left({\sqrt {x^{2}+y^{2}}}\right)\,dx,}
where f(r) is the circularly symmetric function represented by the gray color in the figure. It is assumed that the observer is actually at x = ∞, so that the limits of integration are ±∞, and all lines of sight are parallel to the x axis.
Realizing that the radius r is related to x and y as r2 = x2 + y2, it follows that
{\displaystyle dx={\frac {r\,dr}{\sqrt {r^{2}-y^{2}}}}}
for x > 0. Since f(r) is an even function in x, we may write
{\displaystyle F(y)=2\int _{0}^{\infty }f\left({\sqrt {x^{2}+y^{2}}}\right)\,dx=2\int _{|y|}^{\infty }f(r)\,{\frac {r\,dr}{\sqrt {r^{2}-y^{2}}}},}
which yields the Abel transform of f(r).
The Abel transform may be extended to higher dimensions. Of particular interest is the extension to three dimensions. If we have an axially symmetric function f(ρ, z), where ρ2 = x2 + y2 is the cylindrical radius, then we may want to know the projection of that function onto a plane parallel to the z axis. Without loss of generality, we can take that plane to be the yz plane, so that
{\displaystyle F(y,z)=\int _{-\infty }^{\infty }f(\rho ,z)\,dx=2\int _{y}^{\infty }{\frac {f(\rho ,z)\rho \,d\rho }{\sqrt {\rho ^{2}-y^{2}}}},}
which is just the Abel transform of f(ρ, z) in ρ and y.
A particular type of axial symmetry is spherical symmetry. In this case, we have a function f(r), where r2 = x2 + y2 + z2.
The projection onto, say, the yz plane will then be circularly symmetric and expressible as F(s), where s2 = y2 + z2. Carrying out the integration, we have
{\displaystyle F(s)=\int _{-\infty }^{\infty }f(r)\,dx=2\int _{s}^{\infty }{\frac {f(r)r\,dr}{\sqrt {r^{2}-s^{2}}}},}
which is, again, the Abel transform of f(r) in r and s.
== Verification of the inverse Abel transform ==
Assuming f is continuously differentiable and f, f′ drop to zero faster than 1/r, we can integrate by parts by setting u = f(r) and v′ = r/√(r² − y²) to find
{\displaystyle F(y)=-2\int _{y}^{\infty }f'(r){\sqrt {r^{2}-y^{2}}}\,dr.}
Differentiating formally,
{\displaystyle F'(y)=2y\int _{y}^{\infty }{\frac {f'(r)}{\sqrt {r^{2}-y^{2}}}}\,dr.}
Now substitute this into the inverse Abel transform formula:
{\displaystyle -{\frac {1}{\pi }}\int _{r}^{\infty }{\frac {F'(y)}{\sqrt {y^{2}-r^{2}}}}\,dy=\int _{r}^{\infty }\int _{y}^{\infty }{\frac {-2y}{\pi {\sqrt {(y^{2}-r^{2})(s^{2}-y^{2})}}}}f'(s)\,dsdy.}
By Fubini's theorem, the last integral equals
{\displaystyle \int _{r}^{\infty }\int _{r}^{s}{\frac {-2y}{\pi {\sqrt {(y^{2}-r^{2})(s^{2}-y^{2})}}}}\,dyf'(s)\,ds=\int _{r}^{\infty }(-1)f'(s)\,ds=f(r).}
== Generalization of the Abel transform to discontinuous F(y) ==
Consider the case where F(y) is discontinuous at y = y_Δ, where it abruptly changes its value by a finite amount ΔF. That is, y_Δ and ΔF are defined by
{\displaystyle \Delta F\equiv \lim _{\epsilon \rightarrow 0}[F(y_{\Delta }-\epsilon )-F(y_{\Delta }+\epsilon )]}
. Such a situation is encountered in tethered polymers (polymer brush) exhibiting a vertical phase separation, where F(y) stands for the polymer density profile and f(r) is related to the spatial distribution of terminal, non-tethered monomers of the polymers.
The Abel transform of a function f(r) is under these circumstances again given by:
{\displaystyle F(y)=2\int _{y}^{\infty }{\frac {f(r)r\,dr}{\sqrt {r^{2}-y^{2}}}}.}
Assuming f(r) drops to zero more quickly than 1/r, the inverse Abel transform is however given by
{\displaystyle f(r)=\left[{\frac {1}{2}}\delta (r-y_{\Delta }){\sqrt {1-(y_{\Delta }/r)^{2}}}-{\frac {1}{\pi }}{\frac {H(y_{\Delta }-r)}{\sqrt {y_{\Delta }^{2}-r^{2}}}}\right]\Delta F-{\frac {1}{\pi }}\int _{r}^{\infty }{\frac {dF}{dy}}{\frac {dy}{\sqrt {y^{2}-r^{2}}}}.}
where δ is the Dirac delta function and H(x) the Heaviside step function. The extended version of the Abel transform for discontinuous F is proven upon applying the Abel transform to shifted, continuous F(y), and it reduces to the classical Abel transform when ΔF = 0. If F(y) has more than a single discontinuity, one has to introduce shifts for each of them to come up with a generalized version of the inverse Abel transform which contains n additional terms, each corresponding to one of the n discontinuities.
== Relationship to other integral transforms ==
=== Relationship to the Fourier and Hankel transforms ===
The Abel transform is one member of the FHA cycle of integral operators. For example, in two dimensions, if we define A as the Abel transform operator, F as the Fourier transform operator and H as the zeroth-order Hankel transform operator, then the special case of the projection-slice theorem for circularly symmetric functions states that
{\displaystyle FA=H.}
In other words, applying the Abel transform to a 1-dimensional function and then applying the Fourier transform to that result is the same as applying the Hankel transform to that function. This concept can be extended to higher dimensions.
=== Relationship to the Radon transform ===
The Abel transform can be viewed as the Radon transform of an isotropic 2D function f(r). As f(r) is isotropic, its Radon transform is the same at different angles of the viewing axis. Thus, the Abel transform is a function of the distance along the viewing axis only.
== See also ==
GPS radio occultation
== References ==
Bracewell, R. (1965). The Fourier Transform and its Applications. New York: McGraw-Hill. ISBN 0-07-007016-4.
In mathematics, the Hermite transform is an integral transform named after the mathematician Charles Hermite that uses Hermite polynomials
H_n(x) as kernels of the transform.
The Hermite transform H{F(x)} ≡ f_H(n) of a function F(x) is
{\displaystyle H\{F(x)\}\equiv f_{H}(n)=\int _{-\infty }^{\infty }e^{-x^{2}}\ H_{n}(x)\ F(x)\ dx}
The inverse Hermite transform H⁻¹{f_H(n)} is given by
{\displaystyle H^{-1}\{f_{H}(n)\}\equiv F(x)=\sum _{n=0}^{\infty }{\frac {1}{{\sqrt {\pi }}2^{n}n!}}f_{H}(n)H_{n}(x)}
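The pair rests on the orthogonality relation ∫ e^{−x²} H_n(x) H_m(x) dx = √π·2ⁿ·n!·δ_nm, which also explains the normalization 1/(√π·2ⁿ·n!) in the inverse. A numerical sketch (using numpy's physicists' Hermite polynomials): the Hermite transform of F = H₂ is 8√π at n = 2 and 0 elsewhere.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# Orthogonality behind the transform pair: since
# int e^{-x^2} H_n(x) H_m(x) dx = sqrt(pi) * 2^n * n! * delta_nm, the Hermite
# transform of F = H_2 is sqrt(pi) * 2^2 * 2! = 8*sqrt(pi) at n = 2, else 0.
x = np.linspace(-12.0, 12.0, 480_001)
H2 = hermval(x, [0, 0, 1])  # physicists' Hermite polynomial H_2
coeffs = [np.trapz(np.exp(-x**2) * hermval(x, [0] * n + [1]) * H2, x) for n in range(4)]
print(coeffs)
```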
== Some Hermite transform pairs ==
== References ==
== Sources ==
Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz; Tricomi, Francesco G. (1955), Higher transcendental functions (PDF), vol. II, McGraw-Hill, ISBN 978-0-07-019546-2, archived from the original (PDF) on 2011-07-14, retrieved 2023-11-09.
In mathematics, Jacobi transform is an integral transform named after the mathematician Carl Gustav Jacob Jacobi, which uses Jacobi polynomials
P_n^{α,β}(x) as kernels of the transform.
The Jacobi transform of a function F(x) is
{\displaystyle J\{F(x)\}=f^{\alpha ,\beta }(n)=\int _{-1}^{1}(1-x)^{\alpha }\ (1+x)^{\beta }\ P_{n}^{\alpha ,\beta }(x)\ F(x)\ dx}
The inverse Jacobi transform is given by
{\displaystyle J^{-1}\{f^{\alpha ,\beta }(n)\}=F(x)=\sum _{n=0}^{\infty }{\frac {1}{\delta _{n}}}f^{\alpha ,\beta }(n)P_{n}^{\alpha ,\beta }(x),\quad {\text{where}}\quad \delta _{n}={\frac {2^{\alpha +\beta +1}\Gamma (n+\alpha +1)\Gamma (n+\beta +1)}{n!(\alpha +\beta +2n+1)\Gamma (n+\alpha +\beta +1)}}}
== Some Jacobi transform pairs ==
== References ==
In mathematics, the Hartley transform (HT) is an integral transform closely related to the Fourier transform (FT), but which transforms real-valued functions to real-valued functions. It was proposed as an alternative to the Fourier transform by Ralph V. L. Hartley in 1942, and is one of many known Fourier-related transforms. Compared to the Fourier transform, the Hartley transform has the advantages of transforming real functions to real functions (as opposed to requiring complex numbers) and of being its own inverse.
The discrete version of the transform, the discrete Hartley transform (DHT), was introduced by Ronald N. Bracewell in 1983.
The two-dimensional Hartley transform can be computed by an analog optical process similar to an optical Fourier transform (OFT), with the proposed advantage that only its amplitude and sign need to be determined rather than its complex phase. However, optical Hartley transforms do not seem to have seen widespread use.
== Definition ==
The Hartley transform of a function f(t) is defined by:
{\displaystyle H(\omega )=\left\{{\mathcal {H}}f\right\}(\omega )={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(t)\operatorname {cas} (\omega t)\,\mathrm {d} t\,,}
where ω can in applications be an angular frequency, and
{\displaystyle \operatorname {cas} (t)=\cos(t)+\sin(t)={\sqrt {2}}\sin(t+\pi /4)={\sqrt {2}}\cos(t-\pi /4)\,,}
is the cosine-and-sine (cas) or Hartley kernel. In engineering terms, this transform takes a signal (function) from the time-domain to the Hartley spectral domain (frequency domain).
=== Inverse transform ===
The Hartley transform has the convenient property of being its own inverse (an involution):
{\displaystyle f=\{{\mathcal {H}}\{{\mathcal {H}}f\}\}\,.}
=== Conventions ===
The above is in accord with Hartley's original definition, but (as with the Fourier transform) various minor details are matters of convention and can be changed without altering the essential properties:
Instead of using the same transform for forward and inverse, one can remove the 1/√(2π) from the forward transform and use 1/(2π) for the inverse—or, indeed, any pair of normalizations whose product is 1/(2π). (Such asymmetrical normalizations are sometimes found in both purely mathematical and engineering contexts.)
One can also use 2πνt instead of ωt (i.e., frequency instead of angular frequency), in which case the 1/√(2π) coefficient is omitted entirely.
One can use cos − sin instead of cos + sin as the kernel.
== Relation to Fourier transform ==
This transform differs from the classic Fourier transform F(ω) = ℱ{f(t)}(ω) in the choice of the kernel. In the Fourier transform, we have the exponential kernel
{\displaystyle \exp \left({-\mathrm {i} \omega t}\right)=\cos(\omega t)-\mathrm {i} \sin(\omega t)},
where i is the imaginary unit.
The two transforms are closely related, however, and the Fourier transform (assuming it uses the same 1/√(2π) normalization convention) can be computed from the Hartley transform via:
{\displaystyle F(\omega )={\frac {H(\omega )+H(-\omega )}{2}}-\mathrm {i} {\frac {H(\omega )-H(-\omega )}{2}}\,.}
That is, the real and imaginary parts of the Fourier transform are simply given by the even and odd parts of the Hartley transform, respectively.
Conversely, for real-valued functions f(t), the Hartley transform is given from the Fourier transform's real and imaginary parts:
{\displaystyle \{{\mathcal {H}}f\}=\Re \{{\mathcal {F}}f\}-\Im \{{\mathcal {F}}f\}=\Re \{{\mathcal {F}}f\cdot (1+\mathrm {i} )\}\,,}
where ℜ and ℑ denote the real and imaginary parts.
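The relation ℋf = ℜ{ℱf} − ℑ{ℱf} has an exact discrete analogue, which gives a quick way to compute a discrete Hartley transform from an FFT (a sketch using numpy; the test vector is arbitrary):

```python
import numpy as np

# Discrete analogue of H f = Re(F f) - Im(F f): the (unnormalized) discrete
# Hartley transform computed from the FFT. Applying it twice and dividing by
# N recovers the input, mirroring the self-inverse property above.
def dht(x):
    X = np.fft.fft(x)
    return X.real - X.imag

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
roundtrip = dht(dht(x)) / len(x)
print(np.allclose(roundtrip, x))
```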
== Properties ==
The Hartley transform is a real linear operator, and is symmetric (and Hermitian). From the symmetric and self-inverse properties, it follows that the transform is a unitary operator (indeed, orthogonal).
Convolution using Hartley transforms is
{\displaystyle f(x)*g(x)={\frac {F(\omega )G(\omega )+F(-\omega )G(\omega )+F(\omega )G(-\omega )-F(-\omega )G(-\omega )}{2}}}
where F(ω) = {ℋf}(ω) and G(ω) = {ℋg}(ω).
Similar to the Fourier transform, the Hartley transform of an even/odd function is even/odd, respectively.
=== cas ===
The properties of the Hartley kernel, for which Hartley introduced the name cas for the function (from cosine and sine) in 1942, follow directly from trigonometry and its definition as a phase-shifted trigonometric function: cas(t) = √2 sin(t + π/4) = sin(t) + cos(t). For example, it has an angle-addition identity of:
{\displaystyle 2\operatorname {cas} (a+b)=\operatorname {cas} (a)\operatorname {cas} (b)+\operatorname {cas} (-a)\operatorname {cas} (b)+\operatorname {cas} (a)\operatorname {cas} (-b)-\operatorname {cas} (-a)\operatorname {cas} (-b)\,.}
Additionally:
{\displaystyle \operatorname {cas} (a+b)={\cos(a)\operatorname {cas} (b)}+{\sin(a)\operatorname {cas} (-b)}=\cos(b)\operatorname {cas} (a)+\sin(b)\operatorname {cas} (-a)\,,}
and its derivative is given by:
{\displaystyle \operatorname {cas} '(a)={\frac {d}{da}}\operatorname {cas} (a)=\cos(a)-\sin(a)=\operatorname {cas} (-a)\,.}
== See also ==
cis (mathematics)
Fractional Fourier transform
== References ==
Bracewell, Ronald N. (1986). Written at Stanford, California, USA. The Hartley Transform. Oxford Engineering Science Series. Vol. 19 (1 ed.). New York, NY, USA: Oxford University Press, Inc. ISBN 0-19-503969-6. (NB. Also translated into German and Russian.)
Bracewell, Ronald N. (1994). "Aspects of the Hartley transform". Proceedings of the IEEE. 82 (3): 381–387. doi:10.1109/5.272142.
Millane, Rick P. (1994). "Analytic properties of the Hartley transform". Proceedings of the IEEE. 82 (3): 413–428. doi:10.1109/5.272146.
== Further reading ==
Olejniczak, Kraig J.; Heydt, Gerald T., eds. (March 1994). "Scanning the Special Section on the Hartley transform". Special Issue on Hartley transform. Vol. 82. Proceedings of the IEEE. pp. 372–380. Retrieved 2017-10-31. (NB. Contains extensive bibliography.)
In mathematics, the Weierstrass transform of a function f : ℝ → ℝ, named after Karl Weierstrass, is a "smoothed" version of f(x) obtained by averaging the values of f, weighted with a Gaussian centered at x.
Specifically, it is the function F defined by
{\displaystyle F(x)={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(y)\;e^{-{\frac {(x-y)^{2}}{4}}}\;dy={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(x-y)\;e^{-{\frac {y^{2}}{4}}}\;dy~,}
the convolution of f with the Gaussian function
{\displaystyle {\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}~.}
The factor 1/√(4π) is chosen so that the Gaussian will have a total integral of 1, with the consequence that constant functions are not changed by the Weierstrass transform.
Instead of F(x) one also writes W[f](x). Note that F(x) need not exist for every real number x, when the defining integral fails to converge.
The Weierstrass transform is intimately related to the heat equation (or, equivalently, the diffusion equation with constant diffusion coefficient). If the function f describes the initial temperature at each point of an infinitely long rod that has constant thermal conductivity equal to 1, then the temperature distribution of the rod t = 1 time units later will be given by the function F. By using values of t different from 1, we can define the generalized Weierstrass transform of f.
The generalized Weierstrass transform provides a means to approximate a given integrable function f arbitrarily well with analytic functions.
== Names ==
Weierstrass used this transform in his original proof of the Weierstrass approximation theorem. It is also known as the Gauss transform or Gauss–Weierstrass transform after Carl Friedrich Gauss, and as the Hille transform after Einar Carl Hille, who studied it extensively. The generalization W_t mentioned below is known in signal analysis as a Gaussian filter and in image processing (when implemented on ℝ²) as a Gaussian blur.
== Transforms of some important functions ==
=== Constant Functions ===
Every constant function is its own Weierstrass transform.
=== Polynomials ===
The Weierstrass transform of any polynomial is a polynomial of the same degree, and in fact has the same leading coefficient (the asymptotic growth is unchanged).
Indeed, if H_n denotes the (physicists') Hermite polynomial of degree n, then the Weierstrass transform of H_n(x/2) is simply x^n. This can be shown by exploiting the fact that the generating function for the Hermite polynomials is closely related to the Gaussian kernel used in the definition of the Weierstrass transform.
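This Hermite-polynomial property is easy to spot-check numerically. The sketch below (not from the article; the grid width, step count, and evaluation point are arbitrary choices) approximates the defining convolution integral by a Riemann sum and checks that the transform of H_3(x/2) = x^3 − 6x is x^3:

```python
import numpy as np

def weierstrass(f, x, t=1.0):
    """Riemann-sum approximation of
    W_t[f](x) = 1/sqrt(4*pi*t) * integral f(y) exp(-(x-y)^2/(4t)) dy.
    The Gaussian tails make truncation/discretization errors negligible here."""
    y = np.linspace(x - 30.0, x + 30.0, 400001)
    dy = y[1] - y[0]
    kernel = np.exp(-(x - y) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    return float(np.sum(f(y) * kernel) * dy)

x0 = 0.7
# H_3(u) = 8u^3 - 12u, so H_3(x/2) = x^3 - 6x (physicists' convention)
approx = weierstrass(lambda y: y**3 - 6.0 * y, x0)
print(approx, x0**3)  # the two values agree closely
```

The same helper can be reused to test any of the closed-form transforms listed in this section.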
=== Exponentials, Sines, and Cosines ===
The Weierstrass transform of the exponential function x ↦ exp(ax) (where a is an arbitrary constant) is x ↦ exp(a^2) exp(ax). The function exp(ax) is thus an eigenfunction of the Weierstrass transform, with eigenvalue exp(a^2).
Using the Weierstrass transform of x ↦ exp(ax) with a = bi, where b is an arbitrary real constant and i is the imaginary unit, and applying Euler's identity, one sees that the Weierstrass transform of the function cos(bx) is exp(−b^2) cos(bx), and the Weierstrass transform of the function sin(bx) is exp(−b^2) sin(bx).
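This eigenvalue relation can be verified numerically; in the sketch below (grid parameters and the values of b and x are arbitrary choices, not from the article), the convolution integral is approximated by a Riemann sum:

```python
import numpy as np

def weierstrass(f, x):
    # Riemann-sum approximation of 1/sqrt(4*pi) * integral f(y) exp(-(x-y)^2/4) dy
    y = np.linspace(x - 30.0, x + 30.0, 400001)
    dy = y[1] - y[0]
    kernel = np.exp(-(x - y) ** 2 / 4.0) / np.sqrt(4.0 * np.pi)
    return float(np.sum(f(y) * kernel) * dy)

b, x0 = 1.5, 0.7
approx = weierstrass(lambda y: np.cos(b * y), x0)
exact = np.exp(-b**2) * np.cos(b * x0)  # eigenvalue exp(-b^2) times cos(bx)
print(approx, exact)
```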
=== Gaussian Functions ===
The Weierstrass transform of the function x ↦ exp(ax^2) is
{\displaystyle F(x)={\begin{cases}{\dfrac {1}{\sqrt {1-4a}}}\exp \left({\dfrac {ax^{2}}{1-4a}}\right)&{\text{if }}a<1/4\\[5pt]{\text{undefined}}&{\text{if }}a\geq 1/4.\end{cases}}}
Of particular note is when a is chosen to be negative. If a < 0, then exp(ax^2) is a Gaussian function and its Weierstrass transform is also a Gaussian function, but a "wider" one.
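The Gaussian case can likewise be checked numerically. In this sketch (the choices a = −1 and the grid are mine, not the article's), the formula predicts F(x) = (1/√5)·exp(−x²/5), a Gaussian wider than the input exp(−x²):

```python
import numpy as np

def weierstrass(f, x):
    # Riemann-sum approximation of the defining convolution with the Gaussian kernel
    y = np.linspace(x - 30.0, x + 30.0, 400001)
    dy = y[1] - y[0]
    kernel = np.exp(-(x - y) ** 2 / 4.0) / np.sqrt(4.0 * np.pi)
    return float(np.sum(f(y) * kernel) * dy)

a, x0 = -1.0, 1.2
approx = weierstrass(lambda y: np.exp(a * y**2), x0)
# closed form from the case a < 1/4: exp(a x^2 / (1-4a)) / sqrt(1-4a)
exact = np.exp(a * x0**2 / (1.0 - 4.0 * a)) / np.sqrt(1.0 - 4.0 * a)
print(approx, exact)
```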
== General properties ==
The Weierstrass transform assigns to each function f a new function F; this assignment is linear. It is also translation-invariant, meaning that the transform of the function f(x + a) is F(x + a). Both of these facts are more generally true for any integral transform defined via convolution.
If the transform F(x) exists for the real numbers x = a and x = b, then it also exists for all real values in between and forms an analytic function there; moreover, F(x) will exist for all complex values of x with a ≤ Re(x) ≤ b and forms a holomorphic function on that strip of the complex plane. This is the formal statement of the "smoothness" of F mentioned above.
If f is integrable over the whole real axis (i.e. f ∈ L^1(R)), then so is its Weierstrass transform F, and if furthermore f(x) ≥ 0 for all x, then also F(x) ≥ 0 for all x and the integrals of f and F are equal. This expresses the physical fact that the total thermal energy or heat is conserved by the heat equation, or that the total amount of diffusing material is conserved by the diffusion equation.
Using the above, one can show that for 1 ≤ p ≤ +∞ and f ∈ L^p(R), we have F ∈ L^p(R) and ‖F‖_p ≤ ‖f‖_p. The Weierstrass transform consequently yields a bounded operator W : L^p(R) → L^p(R).
If f is sufficiently smooth, then the Weierstrass transform of the k-th derivative of f is equal to the k-th derivative of the Weierstrass transform of f.
There is a formula relating the Weierstrass transform W and the two-sided Laplace transform L. If we define
{\displaystyle g(x)=e^{-{\frac {x^{2}}{4}}}f(x)}
then
{\displaystyle W[f](x)={\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}L[g]\left(-{\frac {x}{2}}\right).}
=== Low-pass filter ===
We have seen above that the Weierstrass transform of cos(bx) is e^{−b^2} cos(bx), and analogously for sin(bx). In terms of signal analysis, this suggests that if the signal f contains the frequency b (i.e. contains a summand which is a combination of sin(bx) and cos(bx)), then the transformed signal F will contain the same frequency, but with an amplitude multiplied by the factor e^{−b^2}. This has the consequence that higher frequencies are reduced more than lower ones, and the Weierstrass transform thus acts as a low-pass filter. This can also be shown with the continuous Fourier transform, as follows. The Fourier transform analyzes a signal in terms of its frequencies, transforms convolutions into products, and transforms Gaussians into Gaussians. The Weierstrass transform is convolution with a Gaussian and is therefore multiplication of the Fourier-transformed signal with a Gaussian, followed by application of the inverse Fourier transform. This multiplication with a Gaussian in frequency space blends out high frequencies, which is another way of describing the "smoothing" property of the Weierstrass transform.
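The low-pass behaviour can be made concrete with a small sketch (the two-tone test signal, grid, and evaluation point are arbitrary choices, not from the article): a signal containing frequencies b = 1 and b = 4 comes out with the second component attenuated by e^{−16} ≈ 10^{−7}, i.e. essentially removed:

```python
import numpy as np

def weierstrass(f, x):
    # Riemann-sum approximation of the Gaussian-kernel convolution
    y = np.linspace(x - 30.0, x + 30.0, 400001)
    dy = y[1] - y[0]
    kernel = np.exp(-(x - y) ** 2 / 4.0) / np.sqrt(4.0 * np.pi)
    return float(np.sum(f(y) * kernel) * dy)

signal = lambda y: np.cos(y) + np.cos(4.0 * y)   # low tone + high tone
x0 = 0.3
out = weierstrass(signal, x0)
# each cosine is an eigenfunction: the amplitudes become e^{-1} and e^{-16}
expected = np.exp(-1.0) * np.cos(x0) + np.exp(-16.0) * np.cos(4.0 * x0)
low_tone_only = np.exp(-1.0) * np.cos(x0)
print(out, expected, low_tone_only)
```

At any reasonable tolerance the output is indistinguishable from the filtered low tone alone.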
== The inverse transform ==
The following formula, closely related to the Laplace transform of a Gaussian function, and a real analogue to the Hubbard–Stratonovich transformation, is relatively easy to establish:
{\displaystyle e^{u^{2}}={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }e^{-uy}e^{-y^{2}/4}\;dy.}
Now replace u with the formal differentiation operator D = d/dx and utilize the Lagrange shift operator
{\displaystyle e^{-yD}f(x)=f(x-y)}
(a consequence of the Taylor series formula and the definition of the exponential function), to obtain
{\displaystyle {\begin{aligned}e^{D^{2}}f(x)&={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }e^{-yD}f(x)e^{-y^{2}/4}\;dy\\&={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(x-y)e^{-y^{2}/4}\;dy=W[f](x)\end{aligned}}}
to thus obtain the following formal expression for the Weierstrass transform W:
{\displaystyle W=e^{D^{2}},}
where the operator on the right is to be understood as acting on the function f(x) as
{\displaystyle e^{D^{2}}f(x)=\sum _{k=0}^{\infty }{\frac {D^{2k}f(x)}{k!}}~.}
The above formal derivation glosses over details of convergence, and the formula W = e^{D^2} is thus not universally valid; there are several functions f which have a well-defined Weierstrass transform, but for which e^{D^2}(f) cannot be meaningfully defined.
Nevertheless, the rule is still quite useful and can, for example, be used to derive the Weierstrass transforms of polynomials, exponential and trigonometric functions mentioned above.
The formal inverse of the Weierstrass transform is thus given by
{\displaystyle W^{-1}=e^{-D^{2}}~.}
Again, this formula is not universally valid but can serve as a guide. It can be shown to be correct for certain classes of functions if the right-hand side operator is properly defined.
One may, alternatively, attempt to invert the Weierstrass transform in a slightly different way: given the analytic function
{\displaystyle F(x)=\sum _{n=0}^{\infty }a_{n}x^{n}~,}
apply W^{−1} to obtain
{\displaystyle f(x)=W^{-1}[F(x)]=\sum _{n=0}^{\infty }a_{n}W^{-1}[x^{n}]=\sum _{n=0}^{\infty }a_{n}H_{n}(x/2)}
once more using a fundamental property of the (physicists') Hermite polynomials H_n.
Again, this formula for f(x) is at best formal, since one didn't check whether the final series converges. But if, for instance, f ∈ L^2(R), then knowledge of all the derivatives of F at x = 0 suffices to yield the coefficients a_n, and thus to reconstruct f as a series of Hermite polynomials.
A third method of inverting the Weierstrass transform exploits its connection to the Laplace transform mentioned above, and the well-known inversion formula for the Laplace transform. The result is stated below for distributions.
== Generalizations ==
We can use convolution with the Gaussian kernel
{\displaystyle {\frac {1}{\sqrt {4\pi t}}}e^{-{\frac {x^{2}}{4t}}},\quad t>0,}
instead of
{\displaystyle {\frac {1}{\sqrt {4\pi }}}e^{-{\frac {x^{2}}{4}}},}
thus defining an operator Wt, the generalized Weierstrass transform.
For small values of t, W_t[f] is very close to f, but smooth. The larger t, the more this operator averages out and changes f. Physically, W_t corresponds to following the heat (or diffusion) equation for t time units, and this is additive,
{\displaystyle W_{s}\circ W_{t}=W_{s+t},}
corresponding to "diffusing for t time units, then s time units, is equivalent to diffusing for s + t time units". One can extend this to t = 0 by setting W_0 to be the identity operator (i.e. convolution with the Dirac delta function), and these then form a one-parameter semigroup of operators.
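The semigroup law can be checked directly on the kernels: convolving the Gaussian kernels for times s and t must reproduce the kernel for time s + t. A small numerical sketch (the grid and the times 0.3 and 0.4 are arbitrary choices):

```python
import numpy as np

dx = 0.01
x = -20.0 + dx * np.arange(4000)   # uniform grid on [-20, 19.99]

def kernel(t):
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

# discrete convolution approximates the continuous one on this grid;
# the full result lives on the sum grid -40 + p*dx, p = 0 .. 7998
conv = np.convolve(kernel(0.3), kernel(0.4)) * dx
middle = conv[2000:6000]           # samples back at the original grid points x
err = float(np.max(np.abs(middle - kernel(0.7))))
print(err)                         # tiny: W_0.3 followed by W_0.4 equals W_0.7
```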
The kernel
{\displaystyle {\frac {1}{\sqrt {4\pi t}}}e^{-{\frac {x^{2}}{4t}}},}
used for the generalized Weierstrass transform is sometimes called the Gauss–Weierstrass kernel, and is Green's function for the diffusion equation
{\displaystyle (\partial _{t}-D^{2})(e^{tD^{2}}f(x))=0,}
on R.
W_t can be computed from W: given a function f(x), define a new function
{\displaystyle f_{t}(x):=f(x{\sqrt {t}});}
then
{\displaystyle W_{t}[f](x)=W[f_{t}](x/{\sqrt {t}}),}
a consequence of the substitution rule.
The Weierstrass transform can also be defined for certain classes of distributions or "generalized functions". For example, the Weierstrass transform of the Dirac delta is the Gaussian
{\displaystyle {\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}.}
In this context, rigorous inversion formulas can be proved, e.g.,
{\displaystyle f(x)=\lim _{r\to \infty }{\frac {1}{i{\sqrt {4\pi }}}}\int _{x_{0}-ir}^{x_{0}+ir}F(z)e^{\frac {(x-z)^{2}}{4}}\;dz,}
where x_0 is any fixed real number for which F(x_0) exists, the integral extends over the vertical line in the complex plane with real part x_0, and the limit is to be taken in the sense of distributions.
Furthermore, the Weierstrass transform can be defined for real- (or complex-) valued functions (or distributions) defined on R^n. We use the same convolution formula as above but interpret the integral as extending over all of R^n and the expression (x − y)^2 as the square of the Euclidean length of the vector x − y; the factor in front of the integral has to be adjusted so that the Gaussian will have a total integral of 1.
More generally, the Weierstrass transform can be defined on any Riemannian manifold: the heat equation can be formulated there (using the manifold's Laplace–Beltrami operator), and the Weierstrass transform W[f] is then given by following the solution of the heat equation for one time unit, starting with the initial "temperature distribution" f.
== Related transforms ==
If one considers convolution with the kernel
{\displaystyle {\frac {1}{(1+x^{2})\pi }}}
instead of with a Gaussian, one obtains the Poisson transform, which smoothes and averages a given function in a manner similar to the Weierstrass transform.
== See also ==
Gaussian blur
Gaussian filter
Husimi Q representation
Heat equation#Fundamental solutions
== Notes ==
== References == | Wikipedia/Weierstrass_transform |
In mathematics, the X-ray transform (also called ray transform or John transform) is an integral transform introduced by Fritz John in 1938 that is one of the cornerstones of modern integral geometry. It is very closely related to the Radon transform, and coincides with it in two dimensions. In higher dimensions, the X-ray transform of a function is defined by integrating over lines rather than over hyperplanes as in the Radon transform. The X-ray transform derives its name from X-ray tomography (used in CT scans) because the X-ray transform of a function ƒ represents the attenuation data of a tomographic scan through an inhomogeneous medium whose density is represented by the function ƒ. Inversion of the X-ray transform is therefore of practical importance because it allows one to reconstruct an unknown density ƒ from its known attenuation data.
In detail, if ƒ is a compactly supported continuous function on the Euclidean space Rn, then the X-ray transform of ƒ is the function Xƒ defined on the set of all lines in Rn by
{\displaystyle Xf(L)=\int _{L}f=\int _{\mathbf {R} }f(x_{0}+t\theta )\,dt}
where x0 is an initial point on the line and θ is a unit vector in Rn giving the direction of the line L. The latter integral is not regarded in the oriented sense: it is the integral with respect to the 1-dimensional Lebesgue measure on the Euclidean line L.
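As a numerical illustration (a sketch; the Gaussian test function, line parameters, and discretization are my choices, not from the article): for f(x, y) = exp(−(x² + y²)) on R², the integral of f along a line at perpendicular distance d from the origin is √π·e^{−d²}, independent of the line's direction:

```python
import numpy as np

def xray(f, x0, theta, half_len=20.0, n=40001):
    """Riemann-sum approximation of Xf(L) = integral of f(x0 + t*theta) dt."""
    t = np.linspace(-half_len, half_len, n)
    dt = t[1] - t[0]
    pts = x0[None, :] + t[:, None] * theta[None, :]
    return float(np.sum(f(pts[:, 0], pts[:, 1])) * dt)

f = lambda x, y: np.exp(-(x**2 + y**2))
phi, d = 0.4, 0.9
theta = np.array([np.cos(phi), np.sin(phi)])    # unit direction of the line
x0 = d * np.array([-np.sin(phi), np.cos(phi)])  # foot point, distance d from origin
val = xray(f, x0, theta)
expected = np.sqrt(np.pi) * np.exp(-d**2)       # closed form for this Gaussian
print(val, expected)
```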
The X-ray transform satisfies an ultrahyperbolic wave equation called John's equation.
The Gaussian or ordinary hypergeometric function can be written as an X-ray transform (Gelfand, Gindikin & Graev 2003, 2.1.2).
== References ==
Berenstein, Carlos A. (2001) [1994], "X-ray transform", Encyclopedia of Mathematics, EMS Press.
Gelfand, I. M.; Gindikin, S. G.; Graev, M. I. (2003) [2000], Selected topics in integral geometry, Translations of Mathematical Monographs, vol. 220, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2932-5, MR 2000133
Helgason, Sigurdur (2008), Geometric analysis on symmetric spaces, Mathematical Surveys and Monographs, vol. 39 (2nd ed.), Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-4530-1, MR 2463854
Helgason, Sigurdur (1999), The Radon Transform (PDF), Progress in Mathematics (2nd ed.), Boston, M.A.: Birkhauser | Wikipedia/X-ray_transform |
In mathematics, the Fourier sine and cosine transforms are integral transforms that decompose arbitrary functions into a sum of sine waves representing the odd component of the function plus cosine waves representing the even component of the function. The modern Fourier transform concisely contains both the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead of complex exponentials and don't require complex numbers or negative frequency, they correspond more closely to Joseph Fourier's original transform equations. They are still preferred in some signal processing and statistics applications, and may be better suited as an introduction to Fourier analysis.
== Definition ==
The Fourier sine transform of f(t) is:
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }f(t)\,\sin(2\pi \xi t)\,dt.}
If t means time, then ξ is frequency in cycles per unit time, but in the abstract, they can be any dual pair of variables (e.g. position and spatial frequency).
The sine transform is necessarily an odd function of frequency, i.e. for all ξ:
{\displaystyle {\hat {f}}^{s}(-\xi )=-{\hat {f}}^{s}(\xi ).}
The Fourier cosine transform of f(t) is:
{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }f(t)\,\cos(2\pi \xi t)\,dt.}
The cosine transform is necessarily an even function of frequency, i.e. for all ξ:
{\displaystyle {\hat {f}}^{c}(-\xi )={\hat {f}}^{c}(\xi ).}
=== Odd and even simplification ===
The multiplication rules for even and odd functions shown in the overbraces in the following equations dramatically simplify the integrands when transforming even and odd functions. Some authors even only define the cosine transform for even functions f_even(t). Since cosine is an even function, and because the integral of an even function from −∞ to ∞ is twice its integral from 0 to ∞, the cosine transform of any even function can be simplified to avoid negative t:
{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \cos(2\pi \xi t)} ^{\text{even·even=even}}\,dt=2\int _{0}^{\infty }f_{\text{even}}(t)\cos(2\pi \xi t)\,dt.}
And because the integral from −∞ to ∞ of any odd function is zero, the cosine transform of any odd function is simply zero:
{\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \cos(2\pi \xi t)} ^{\text{odd·even=odd}}\,dt=0.}
Similarly, because sine is odd, the sine transform of any odd function f_odd(t) also simplifies to avoid negative t:
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \sin(2\pi \xi t)} ^{\text{odd·odd=even}}\,dt=2\int _{0}^{\infty }f_{\text{odd}}(t)\sin(2\pi \xi t)\,dt}
and the sine transform of any even function is simply zero:
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \sin(2\pi \xi t)} ^{\text{even·odd=odd}}\,dt=0.}
The sine transform represents the odd part of a function, while the cosine transform represents the even part of a function.
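These even/odd rules are straightforward to spot-check numerically. In the sketch below (the test function, frequency, and grid are arbitrary choices), the even Gaussian f(t) = e^{−t²} has sine transform ≈ 0, and its cosine transform matches the known closed form √π·e^{−π²ξ²}:

```python
import numpy as np

t = np.linspace(-15.0, 15.0, 300001)   # symmetric grid, fine enough for this f
dt = t[1] - t[0]
f = np.exp(-t**2)                      # an even function

xi = 0.4
cosine_tf = float(np.sum(f * np.cos(2.0 * np.pi * xi * t)) * dt)
sine_tf = float(np.sum(f * np.sin(2.0 * np.pi * xi * t)) * dt)

expected = np.sqrt(np.pi) * np.exp(-np.pi**2 * xi**2)  # known transform of e^{-t^2}
print(cosine_tf, expected, sine_tf)    # sine_tf cancels to ~0 by symmetry
```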
=== Other conventions ===
Just like the Fourier transform takes the form of different equations with different constant factors (see Fourier transform § Unitarity and definition for square integrable functions for discussion), other authors also define the cosine transform as
{\displaystyle {\hat {f}}^{c}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\cos(2\pi \xi t)\,dt}
and the sine transform as
{\displaystyle {\hat {f}}^{s}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\sin(2\pi \xi t)\,dt.}
Another convention defines the cosine transform as
{\displaystyle F_{c}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\cos(\alpha x)\,dx}
and the sine transform as
{\displaystyle F_{s}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\sin(\alpha x)\,dx}
using α as the transformation variable. And while t is typically used to represent the time domain, x is often instead used to represent a spatial domain when transforming to spatial frequencies.
== Fourier inversion ==
The original function f can be recovered from its sine and cosine transforms under the usual hypotheses using the inversion formula:
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi +\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi .}
=== Simplifications ===
Note that since both integrands are even functions of ξ, the concept of negative frequency can be avoided by doubling the result of integrating over non-negative frequencies:
{\displaystyle f(t)=2\int _{0}^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi \,+2\int _{0}^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi \,.}
Also, if f is an odd function, then the cosine transform is zero, so its inversion simplifies to:
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is odd.}}}
Likewise, if the original function f is an even function, then the sine transform is zero, so its inversion also simplifies to:
{\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is even.}}}
Remarkably, these last two simplified inversion formulas look identical to the original sine and cosine transforms, respectively, though with t swapped with ξ (and with f swapped with \hat{f}^s or \hat{f}^c). A consequence of this symmetry is that their inversion and transform processes still work when the two functions are swapped. Two such functions are called transform pairs.
=== Overview of inversion proof ===
Using the addition formula for cosine, the full inversion formula can also be rewritten as Fourier's integral formula:
{\displaystyle f(t)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .}
This theorem is often stated under different hypotheses: that f is integrable, and is of bounded variation on an open interval containing the point t, in which case
{\displaystyle {\tfrac {1}{2}}\lim _{h\to 0}\left(f(t+h)+f(t-h)\right)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .}
This latter form is a useful intermediate step in proving the inverse formulae for the sine and cosine transforms. One method of deriving it, due to Cauchy, is to insert a factor e^{−δξ} into the integral, where δ > 0 is fixed. Then
{\displaystyle 2\int _{-\infty }^{\infty }\int _{0}^{\infty }e^{-\delta \xi }\cos(2\pi \xi (x-t))\,d\xi \,f(x)\,dx=\int _{-\infty }^{\infty }f(x){\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx.}
Now when δ → 0, the integrand tends to zero except at x = t, so that formally the above is
{\displaystyle f(t)\int _{-\infty }^{\infty }{\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx=f(t).}
== Relation with complex exponentials ==
The complex exponential form of the Fourier transform used more often today is
{\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(t)e^{-2\pi i\xi t}\,dt}
where i is the square root of negative one. By applying Euler's formula (e^{ix} = cos x + i sin x), it can be shown (for real-valued functions) that the Fourier transform's real component is the cosine transform (representing the even component of the original function) and the Fourier transform's imaginary component is the negative of the sine transform (representing the odd component of the original function):
{\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)\left(\cos(2\pi \xi t)-i\,\sin(2\pi \xi t)\right)dt&&{\text{Euler's Formula}}\\&=\left(\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt\right)-i\left(\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt\right)\\&={\hat {f}}^{c}(\xi )-i\,{\hat {f}}^{s}(\xi )\,.\end{aligned}}}
Because of this relationship, the cosine transform of functions whose Fourier transform is known (e.g. in Fourier transform § Tables of important Fourier transforms) can be simply found by taking the real part of the Fourier transform:
{\displaystyle {\hat {f}}^{c}(\xi )=\mathrm {Re} {[\;{\hat {f}}(\xi )\;]}}
while the sine transform is simply the negative of the imaginary part of the Fourier transform:
{\displaystyle {\hat {f}}^{s}(\xi )=-\mathrm {Im} {[\;{\hat {f}}(\xi )\;]}\,.}
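This relation is easy to verify numerically for a function that is neither even nor odd. In this sketch (the shifted Gaussian, frequency, and grid are arbitrary choices), the cosine and sine transforms are recovered from the real and imaginary parts of a numerically computed complex Fourier transform:

```python
import numpy as np

t = np.linspace(-15.0, 15.0, 300001)
dt = t[1] - t[0]
f = np.exp(-(t - 1.0)**2)   # neither even nor odd
xi = 0.3

fourier = np.sum(f * np.exp(-2j * np.pi * xi * t)) * dt
cosine_tf = float(np.sum(f * np.cos(2.0 * np.pi * xi * t)) * dt)
sine_tf = float(np.sum(f * np.sin(2.0 * np.pi * xi * t)) * dt)

# real part gives the cosine transform; minus the imaginary part gives the sine transform
print(fourier.real - cosine_tf, -fourier.imag - sine_tf)  # both ~ 0
```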
=== Pros and cons ===
An advantage of the modern Fourier transform is that while the sine and cosine transforms together are required to extract the phase information of a frequency, the modern Fourier transform instead compactly packs both phase and amplitude information inside its complex-valued result. But a disadvantage is that it requires an understanding of complex numbers, complex exponentials, and negative frequency.
The sine and cosine transforms meanwhile have the advantage that all quantities are real. Since they can be fully expressed using positive frequencies, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. They may also be convenient when the original function is already even or odd, or can be made even or odd, in which case only the cosine or the sine transform respectively is needed. For instance, even though an input may not be even or odd, a discrete cosine transform may start by assuming an even extension of its input while a discrete sine transform may start by assuming an odd extension of its input, to avoid having to compute the entire discrete Fourier transform.
== Numerical evaluation ==
Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned.
Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals. This method attempts to evaluate the integrand at locations which asymptotically approach the zeros of the oscillation (either the sine or cosine), quickly reducing the magnitude of positive and negative terms which are summed.
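A small sketch of why naive approaches struggle (the choice of integral, truncation point, and step size are mine, not from the article): simply truncating the classic oscillatory integral ∫₀^∞ (sin x)/x dx = π/2 at x = T leaves an O(1/T) error from the unresolved tail oscillation, while a single oscillation-aware asymptotic correction, +cos(T)/T, removes most of it. Methods like Ooura's go much further by placing evaluation points near the zeros of the oscillation:

```python
import numpy as np

T, n = 50.0, 500_000
dx = T / n
x = (np.arange(n) + 0.5) * dx                 # midpoint rule, avoids x = 0
naive = float(np.sum(np.sin(x) / x) * dx)     # truncated integral, i.e. Si(T)

target = np.pi / 2.0
corrected = naive + np.cos(T) / T             # leading asymptotic tail correction
print(abs(naive - target), abs(corrected - target))
```

The truncation error dominates here; refining dx alone would not help, only handling the oscillatory tail does.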
== See also ==
Discrete cosine transform
Discrete sine transform
List of Fourier-related transforms
== Notes ==
== References ==
Whittaker, Edmund, and James Watson, A Course in Modern Analysis, Fourth Edition, Cambridge Univ. Press, 1927, pp. 189, 211 | Wikipedia/Fourier_cosine_transform |
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.
== Description ==
An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials.
The sides of a polynomial equation contain one or more terms. For example, the equation
{\displaystyle Ax^{2}+Bx+C-y=0}
has left-hand side Ax^2 + Bx + C − y, which has four terms, and right-hand side 0, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount must be removed from the other pan to keep the scale in balance. More generally, an equation remains balanced if the same operation is performed on each side.
== Properties ==
Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to:
Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
Multiplying or dividing both sides of an equation by a non-zero quantity.
Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (which means applying the function f(s) = s^2 to both sides of the equation) changes the equation to x^2 = 1, which not only has the previous solution but also introduces the extraneous solution x = −1. Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
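The extraneous-solution phenomenon described above can be checked directly with a computer algebra system; a minimal sketch using SymPy (an assumed dependency, not part of the source):

```python
import sympy as sp

x = sp.symbols('x')

# The original equation x = 1 has exactly one solution.
original = sp.solve(sp.Eq(x, 1), x)       # [1]

# Squaring both sides gives x^2 = 1, which keeps the original
# solution but introduces the extraneous solution x = -1.
squared = sp.solve(sp.Eq(x**2, 1), x)     # [-1, 1]

print(original, squared)
```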
== Examples ==
=== Analogous illustration ===
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
=== Parameters and unknowns ===
Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.
An example of an equation involving x and y as unknowns and the parameter R is
{\displaystyle x^{2}+y^{2}=R^{2}.}
When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius of 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax² + bx + c = 0.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.
A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
{\displaystyle {\begin{aligned}3x+5y&=2\\5x+8y&=3\end{aligned}}}
has the unique solution x = −1, y = 1.
=== Identities ===
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
{\displaystyle x^{2}-y^{2}=(x+y)(x-y)}
which is true for all x and y.
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
{\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1}
and
{\displaystyle \sin(2\theta )=2\sin(\theta )\cos(\theta )}
which are both true for all values of θ.
For example, to solve for the value of θ that satisfies the equation:
{\displaystyle 3\sin(\theta )\cos(\theta )=1\,,}
where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
{\displaystyle {\frac {3}{2}}\sin(2\theta )=1\,,}
yielding the following solution for θ:
{\displaystyle \theta ={\frac {1}{2}}\arcsin \left({\frac {2}{3}}\right)\approx 20.9^{\circ }.}
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
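The numeric value obtained above is easy to verify; a short illustrative check in Python:

```python
import math

# Solve 3*sin(theta)*cos(theta) = 1 via the identity
# sin(2*theta) = 2*sin(theta)*cos(theta), which reduces the
# equation to (3/2)*sin(2*theta) = 1.
theta = 0.5 * math.asin(2.0 / 3.0)

print(math.degrees(theta))                    # about 20.9 degrees
print(3 * math.sin(theta) * math.cos(theta))  # about 1.0, so the equation holds
```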
== Algebra ==
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.
=== Polynomial equations ===
In general, an algebraic equation or polynomial equation is an equation of the form
{\displaystyle P=0}
, or
{\displaystyle P=Q}
where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
For example,
{\displaystyle x^{5}-3x+1=0}
is a univariate algebraic (polynomial) equation with integer coefficients and
{\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}}
is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
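Even when no algebraic solution exists, the roots can be approximated numerically. A minimal sketch using NumPy's companion-matrix root finder on the quintic quoted earlier (an illustrative choice, not from the source):

```python
import numpy as np

# Coefficients of x^5 - 3x + 1, highest degree first.
coeffs = [1, 0, 0, 0, -3, 1]

# Numerical approximations of all five complex roots.
roots = np.roots(coeffs)

# Each root should make the polynomial (nearly) vanish.
for r in roots:
    print(r, np.polyval(coeffs, r))
```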
=== Systems of linear equations ===
A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example,
{\displaystyle {\begin{alignedat}{7}3x&&\;+\;&&2y&&\;-\;&&z&&\;=\;&&1&\\2x&&\;-\;&&2y&&\;+\;&&4z&&\;=\;&&-2&\\-x&&\;+\;&&{\tfrac {1}{2}}y&&\;-\;&&z&&\;=\;&&0&\end{alignedat}}}
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
{\displaystyle {\begin{alignedat}{2}x&\,=\,&1\\y&\,=\,&-2\\z&\,=\,&-2\end{alignedat}}}
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
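The three-variable system above can also be solved numerically; a minimal sketch with NumPy (an assumed dependency):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#    3x + 2y  -  z =  1
#    2x - 2y  + 4z = -2
#    -x + y/2 -  z =  0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

solution = np.linalg.solve(A, b)
print(solution)  # approximately [1, -2, -2], matching the solution above
```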
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
== Geometry ==
=== Analytic geometry ===
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form
{\displaystyle ax+by+cz+d=0}
, where
{\displaystyle a,b,c}
and
{\displaystyle d}
are real numbers and
{\displaystyle x,y,z}
are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values
{\displaystyle a,b,c}
are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in
{\displaystyle \mathbb {R} ^{2}}
or as the solution set of two linear equations with values in
{\displaystyle \mathbb {R} ^{3}.}
A conic section is the intersection of a cone with equation
{\displaystyle x^{2}+y^{2}=z^{2}}
and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the foci of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
=== Cartesian equations ===
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4.
=== Parametric equations ===
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example,
{\displaystyle {\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}}
are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
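A quick numerical check that this parametrization traces the unit circle (an illustrative sketch, not from the source):

```python
import numpy as np

# Sample the parameter t over one full revolution.
t = np.linspace(0.0, 2.0 * np.pi, 400)
x = np.cos(t)
y = np.sin(t)

# Every sampled point satisfies the implicit equation x^2 + y^2 = 1.
print(np.allclose(x**2 + y**2, 1.0))  # True
```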
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
== Number theory ==
=== Diophantine equations ===
A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of linear Diophantine equation is ax + by = c where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
=== Algebraic and transcendental numbers ===
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
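For instance, that √2 is algebraic can be witnessed by exhibiting a polynomial it satisfies; a sketch using SymPy's minimal-polynomial routine (an assumed dependency):

```python
import sympy as sp

x = sp.symbols('x')

# sqrt(2) is algebraic: it is a root of x^2 - 2, a non-zero
# polynomial with integer coefficients.
p = sp.minimal_polynomial(sp.sqrt(2), x)
print(p)                      # x**2 - 2
print(p.subs(x, sp.sqrt(2)))  # 0, confirming sqrt(2) is a root
```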
=== Algebraic geometry ===
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
== Differential equations ==
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
=== Ordinary differential equations ===
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
=== Partial differential equations ===
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
== Types of equations ==
Equations can be classified according to the types of operations and quantities involved. Important types include:
An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree:
linear equation for degree one
quadratic equation for degree two
cubic equation for degree three
quartic equation for degree four
quintic equation for degree five
sextic equation for degree six
septic equation for degree seven
octic equation for degree eight
A Diophantine equation is an equation where the unknowns are required to be integers
A transcendental equation is an equation involving a transcendental function of its unknowns
A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters appearing in the equations
A functional equation is an equation in which the unknowns are functions rather than simple quantities
Equations involving derivatives, integrals and finite differences:
A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as
{\displaystyle f'(x)=x^{2}}
. Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables
An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface
An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable.
A functional differential equation or delay differential equation is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as
{\displaystyle f'(x)=f(x-2)}
A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation
A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process
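As a concrete illustration of the difference equations listed above, an order-2 recurrence such as f(x) = f(x−1) + f(x−2) (the Fibonacci recurrence, chosen here purely as a familiar example) can be iterated directly once initial values are fixed:

```python
def iterate_recurrence(f0, f1, n):
    """Iterate the order-2 difference equation f(x) = f(x-1) + f(x-2)
    for integer x, starting from the initial values f(0) and f(1)."""
    values = [f0, f1]
    for _ in range(2, n + 1):
        values.append(values[-1] + values[-2])
    return values

print(iterate_recurrence(0, 1, 10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```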
== See also ==
== Notes ==
== References ==
== External links ==
Winplot: General Purpose plotter that can draw and animate 2D and 3D mathematical equations.
Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
In mathematics, an almost periodic function is, loosely speaking, a function of a real variable that is periodic to within any desired level of accuracy, given suitably long, well-distributed "almost-periods". The concept was first studied by Harald Bohr and later generalized by Vyacheslav Stepanov, Hermann Weyl and Abram Samoilovitch Besicovitch, amongst others. There is also a notion of almost periodic functions on locally compact abelian groups, first studied by John von Neumann.
Almost periodicity is a property of dynamical systems that appear to retrace their paths through phase space, but not exactly. An example would be a planetary system, with planets in orbits moving with periods that are not commensurable (i.e., with a period vector that is not proportional to a vector of integers). A theorem of Kronecker from diophantine approximation can be used to show that any particular configuration that occurs once, will recur to within any specified accuracy: if we wait long enough we can observe the planets all return to within, say, a second of arc to the positions they once were in.
== Motivation ==
There are several inequivalent definitions of almost periodic functions. The first was given by Harald Bohr. His interest was initially in finite Dirichlet series. In fact, by truncating the series for the Riemann zeta function ζ(s) to make it finite, one gets finite sums of terms of the type
{\displaystyle e^{s\log n}\,}
with s written as σ + it – the sum of its real part σ and imaginary part it. Fixing σ, so restricting attention to a single vertical line in the complex plane, we can see this also as
{\displaystyle n^{\sigma }e^{(\log n)it}.\,}
Taking a finite sum of such terms avoids difficulties of analytic continuation to the region σ < 1. Here the 'frequencies' log n will not all be commensurable (they are as linearly independent over the rational numbers as the integers n are multiplicatively independent – which comes down to their prime factorizations).
With this initial motivation to consider types of trigonometric polynomial with independent frequencies, mathematical analysis was applied to discuss the closure of this set of basic functions, in various norms.
The theory was developed using other norms by Besicovitch, Stepanov, Weyl, von Neumann, Turing, Bochner and others in the 1920s and 1930s.
=== Uniform or Bohr or Bochner almost periodic functions ===
Bohr (1925) defined the uniformly almost-periodic functions as the closure of the trigonometric polynomials with respect to the uniform norm
{\displaystyle \|f\|_{\infty }=\sup _{x}|f(x)|}
(on bounded functions f on R). In other words, a function f is uniformly almost periodic if for every ε > 0 there is a finite linear combination of sine and cosine waves that is of distance less than ε from f with respect to the uniform norm. The sine and cosine frequencies can be arbitrary real numbers. Bohr proved that this definition was equivalent to the existence of a relatively dense set of ε almost-periods, for all ε > 0: that is, translations T(ε) = T of the variable t making
{\displaystyle \left|f(t+T)-f(t)\right|<\varepsilon .}
An alternative definition due to Bochner (1926) is equivalent to that of Bohr and is relatively simple to state:
A function f is almost periodic if every sequence {f(t + Tn)} of translations of f has a subsequence that converges uniformly for t in (−∞, +∞).
The Bohr almost periodic functions are essentially the same as continuous functions on the Bohr compactification of the reals.
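Bohr's ε almost-periods can be observed numerically. The sketch below (illustrative, not from the source) uses f(t) = sin t + sin √2 t, a standard example of an almost periodic function that is not periodic; T = 140π is a good almost-period because 70√2 ≈ 98.995 is nearly an integer:

```python
import numpy as np

def f(t):
    # Sum of two sines with incommensurable frequencies:
    # almost periodic, but not periodic.
    return np.sin(t) + np.sin(np.sqrt(2.0) * t)

t = np.linspace(0.0, 200.0, 20001)

def deviation(T):
    # Worst-case |f(t + T) - f(t)| over the sampled window.
    return np.max(np.abs(f(t + T) - f(t)))

print(deviation(140.0 * np.pi))  # small: an almost-period (70*sqrt(2) is nearly 99)
print(deviation(np.pi))          # large: pi is not an almost-period
```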
=== Stepanov almost periodic functions ===
The space Sp of Stepanov almost periodic functions (for p ≥ 1) was introduced by V.V. Stepanov (1925). It contains the space of Bohr almost periodic functions. It is the closure of the trigonometric polynomials under the norm
{\displaystyle \|f\|_{S,r,p}=\sup _{x}\left({1 \over r}\int _{x}^{x+r}|f(s)|^{p}\,ds\right)^{1/p}}
for any fixed positive value of r; for different values of r these norms give the same topology and so the same space of almost periodic functions (though the norm on this space depends on the choice of r).
=== Weyl almost periodic functions ===
The space Wp of Weyl almost periodic functions (for p ≥ 1) was introduced by Weyl (1927). It contains the space Sp of Stepanov almost periodic functions.
It is the closure of the trigonometric polynomials under the seminorm
{\displaystyle \|f\|_{W,p}=\lim _{r\to \infty }\|f\|_{S,r,p}}
Warning: there are nonzero functions ƒ with ||ƒ||W,p = 0, such as any bounded function of compact support, so to get a Banach space one has to quotient out by these functions.
=== Besicovitch almost periodic functions ===
The space Bp of Besicovitch almost periodic functions was introduced by Besicovitch (1926).
It is the closure of the trigonometric polynomials under the seminorm
{\displaystyle \|f\|_{B,p}=\limsup _{x\to \infty }\left({1 \over 2x}\int _{-x}^{x}|f(s)|^{p}\,ds\right)^{1/p}}
Warning: there are nonzero functions ƒ with ||ƒ||B,p = 0, such as any bounded function of compact support, so to get a Banach space one has to quotient out by these functions.
The Besicovitch almost periodic functions in B2 have an expansion (not necessarily convergent) as
{\displaystyle \sum a_{n}e^{i\lambda _{n}t}}
with Σ an² finite and λn real. Conversely, every such series is the expansion of some Besicovitch almost periodic function (which is not unique).
The space Bp of Besicovitch almost periodic functions (for p ≥ 1) contains the space Wp of Weyl almost periodic functions. If one quotients out a subspace of "null" functions, it can be identified with the space of Lp functions on the Bohr compactification of the reals.
=== Almost periodic functions on a locally compact group ===
With these theoretical developments and the advent of abstract methods (the Peter–Weyl theorem, Pontryagin duality and Banach algebras) a general theory became possible. The general idea of almost-periodicity in relation to a locally compact abelian group G becomes that of a function F in L∞(G), such that its translates by G form a relatively compact set.
Equivalently, the space of almost periodic functions is the norm closure of the finite linear combinations of characters of G. If G is compact the almost periodic functions are the same as the continuous functions.
The Bohr compactification of G is the compact abelian group of all possibly discontinuous characters of the dual group of G, and is a compact group containing G as a dense subgroup. The space of uniform almost periodic functions on G can be identified with the space of all continuous functions on the Bohr compactification of G. More generally the Bohr compactification can be defined for any topological group G, and the spaces of continuous or Lp functions on the Bohr compactification can be considered as almost periodic functions on G.
For locally compact connected groups G the map from G to its Bohr compactification is injective if and only if G is a central extension of a compact group, or equivalently the product of a compact group and a finite-dimensional vector space.
A function on a locally compact group is called weakly almost periodic if its orbit is weakly relatively compact in
{\displaystyle L^{\infty }}
.
Given a topological dynamical system
{\displaystyle (X,G)}
consisting of a compact topological space X with an action of the locally compact group G, a continuous function on X is (weakly) almost periodic if its orbit is (weakly) precompact in the Banach space
{\displaystyle C(X)}
.
== Quasiperiodic signals in audio and music synthesis ==
In speech processing, audio signal processing, and music synthesis, a quasiperiodic signal, sometimes called a quasiharmonic signal, is a waveform that is virtually periodic microscopically, but not necessarily periodic macroscopically. This does not give a quasiperiodic function, but something more akin to an almost periodic function, being a nearly periodic function where any one period is virtually identical to its adjacent periods but not necessarily similar to periods much farther away in time. This is the case for musical tones (after the initial attack transient) where all partials or overtones are harmonic (that is all overtones are at frequencies that are an integer multiple of a fundamental frequency of the tone).
When a signal
{\displaystyle x(t)\ }
is fully periodic with period
{\displaystyle P\ }
, then the signal exactly satisfies
{\displaystyle x(t)=x(t+P)\qquad \forall t\in \mathbb {R} }
or
{\displaystyle {\Big |}x(t)-x(t+P){\Big |}=0\qquad \forall t\in \mathbb {R} .\ }
The Fourier series representation would be
{\displaystyle x(t)=a_{0}+\sum _{n=1}^{\infty }{\big [}a_{n}\cos(2\pi nf_{0}t)-b_{n}\sin(2\pi nf_{0}t){\big ]}}
or
{\displaystyle x(t)=a_{0}+\sum _{n=1}^{\infty }r_{n}\cos(2\pi nf_{0}t+\varphi _{n})}
where
{\displaystyle f_{0}={\frac {1}{P}}}
is the fundamental frequency and the Fourier coefficients are
{\displaystyle a_{0}={\frac {1}{P}}\int _{t_{0}}^{t_{0}+P}x(t)\,dt\ }
{\displaystyle a_{n}=r_{n}\cos \left(\varphi _{n}\right)={\frac {2}{P}}\int _{t_{0}}^{t_{0}+P}x(t)\cos(2\pi nf_{0}t)\,dt\qquad n\geq 1}
{\displaystyle b_{n}=r_{n}\sin \left(\varphi _{n}\right)=-{\frac {2}{P}}\int _{t_{0}}^{t_{0}+P}x(t)\sin(2\pi nf_{0}t)\,dt\ }
where
{\displaystyle t_{0}\ }
can be any time:
{\displaystyle -\infty <t_{0}<+\infty \ }
.
The fundamental frequency
{\displaystyle f_{0}\ }
and the Fourier coefficients
{\displaystyle a_{n}\ }
,
{\displaystyle b_{n}\ }
,
{\displaystyle r_{n}\ }
, and
{\displaystyle \varphi _{n}\ }
are constants, i.e. they are not functions of time. The harmonic frequencies are exact integer multiples of the fundamental frequency.
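The coefficient formulas above can be evaluated numerically for a known test signal; an illustrative sketch (with a Riemann sum standing in for the integrals) for x(t) = 1 + 2 cos(2π f0 t), whose exact coefficients under this convention are a0 = 1, a1 = 2, b1 = 0:

```python
import numpy as np

P = 1.0           # period
f0 = 1.0 / P      # fundamental frequency
N = 100000
t = np.linspace(0.0, P, N, endpoint=False)  # one full period, uniform grid
dt = P / N

x = 1.0 + 2.0 * np.cos(2.0 * np.pi * f0 * t)

# Fourier coefficients via the integral formulas (Riemann sums),
# using the sign convention a0 + sum(a_n cos - b_n sin) from the text:
a0 = np.sum(x) * dt / P
a1 = (2.0 / P) * np.sum(x * np.cos(2.0 * np.pi * f0 * t)) * dt
b1 = -(2.0 / P) * np.sum(x * np.sin(2.0 * np.pi * f0 * t)) * dt

print(a0, a1, b1)  # approximately 1.0, 2.0, 0.0
```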
When
{\displaystyle x(t)\ }
is quasiperiodic then
{\displaystyle x(t)\approx x{\big (}t+P(t){\big )}\ }
or
{\displaystyle {\Big |}x(t)-x{\big (}t+P(t){\big )}{\Big |}<\varepsilon \ }
where
$$0<\varepsilon \ll {\big \Vert }x{\big \Vert }={\sqrt {\overline {x^{2}}}}={\sqrt {\lim _{\tau \to \infty }{\frac {1}{\tau }}\int _{-\tau /2}^{\tau /2}x^{2}(t)\,dt}}.$$
Now the Fourier series representation would be
$$x(t)=a_{0}(t)+\sum _{n=1}^{\infty }\left[a_{n}(t)\cos \left(2\pi n\int _{0}^{t}f_{0}(\tau )\,d\tau \right)-b_{n}(t)\sin \left(2\pi n\int _{0}^{t}f_{0}(\tau )\,d\tau \right)\right]$$
or
$$x(t)=a_{0}(t)+\sum _{n=1}^{\infty }r_{n}(t)\cos \left(2\pi n\int _{0}^{t}f_{0}(\tau )\,d\tau +\varphi _{n}(t)\right)$$
or
$$x(t)=a_{0}(t)+\sum _{n=1}^{\infty }r_{n}(t)\cos \left(2\pi \int _{0}^{t}f_{n}(\tau )\,d\tau +\varphi _{n}(0)\right)$$
where $f_{0}(t)={\frac {1}{P(t)}}$ is the possibly time-varying fundamental frequency and the time-varying Fourier coefficients are
$$a_{0}(t)={\frac {1}{P(t)}}\int _{t-P(t)/2}^{t+P(t)/2}x(\tau )\,d\tau$$
$$a_{n}(t)=r_{n}(t)\cos {\big (}\varphi _{n}(t){\big )}={\frac {2}{P(t)}}\int _{t-P(t)/2}^{t+P(t)/2}x(\tau )\cos {\big (}2\pi nf_{0}(t)\tau {\big )}\,d\tau \qquad n\geq 1$$
$$b_{n}(t)=r_{n}(t)\sin {\big (}\varphi _{n}(t){\big )}=-{\frac {2}{P(t)}}\int _{t-P(t)/2}^{t+P(t)/2}x(\tau )\sin {\big (}2\pi nf_{0}(t)\tau {\big )}\,d\tau$$
and the instantaneous frequency for each partial is
$$f_{n}(t)=nf_{0}(t)+{\frac {1}{2\pi }}\varphi _{n}^{\prime }(t).$$
In contrast, in this quasiperiodic case, the fundamental frequency $f_{0}(t)$, the harmonic frequencies $f_{n}(t)$, and the Fourier coefficients $a_{n}(t)$, $b_{n}(t)$, $r_{n}(t)$, and $\varphi _{n}(t)$ are not necessarily constant: they are functions of time, albeit slowly varying ones. Stated differently, these functions of time must be bandlimited to much less than the fundamental frequency for $x(t)$ to be considered quasiperiodic.
The partial frequencies $f_{n}(t)$ are very nearly harmonic but not necessarily exactly so. The time derivative of $\varphi _{n}(t)$, that is $\varphi _{n}^{\prime }(t)$, has the effect of detuning the partials from their exact integer harmonic value $nf_{0}(t)$. A rapidly changing $\varphi _{n}(t)$ means that the instantaneous frequency for that partial is severely detuned from the integer harmonic value, which would mean that $x(t)$ is not quasiperiodic.
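The slowly varying Fourier representation above lends itself to a direct additive-synthesis sketch. The following is a minimal illustration (not from the source; the function names, the vibrato rate and depth, and the partial amplitudes are all invented for the example): each partial tracks $n$ times a slowly drifting fundamental via the phase integral, and adjacent periods of the result come out nearly, but not exactly, identical.

```python
import numpy as np

def quasiperiodic_tone(t, f0_of_t, amps):
    """Additive synthesis of a quasiperiodic tone: partial n tracks
    n times the slowly varying fundamental via the phase integral
    2*pi*n * integral of f0, approximated here by a cumulative sum."""
    dt = t[1] - t[0]
    phase = 2 * np.pi * np.cumsum(f0_of_t(t)) * dt
    return sum(a * np.cos(n * phase) for n, a in enumerate(amps, start=1))

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)
f0 = lambda tt: 220.0 * (1 + 0.01 * np.sin(2 * np.pi * 2 * tt))  # slow vibrato
x = quasiperiodic_tone(t, f0, amps=[1.0, 0.5, 0.25])

# Adjacent "periods" are nearly identical, but not exactly so:
P = int(round(fs / 220.0))          # samples in one local period
err = np.max(np.abs(x[:P] - x[P:2 * P]))
norm = np.sqrt(np.mean(x ** 2))     # the RMS norm ||x|| from the text
```

Because the vibrato rate (2 Hz) is far below the fundamental (220 Hz), the mismatch `err` between adjacent periods is small compared with the norm of the signal, which is exactly the $\varepsilon \ll \Vert x\Vert$ condition above.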
== See also ==
Quasiperiodic function
Aperiodic function
Quasiperiodic tiling
Fourier series
Additive synthesis
Harmonic series (music)
Computer music
== References ==
== Bibliography ==
Amerio, Luigi; Prouse, Giovanni (1971), Almost-periodic functions and functional equations, The University Series in Higher Mathematics, New York–Cincinnati–Toronto–London–Melbourne: Van Nostrand Reinhold, pp. viii+184, ISBN 0-442-20295-4, MR 0275061, Zbl 0215.15701.
A.S. Besicovitch, "Almost periodic functions", Cambridge Univ. Press (1932)
Bochner, S. (1926), "Beitrage zur Theorie der fastperiodischen Funktionen", Math. Annalen, 96: 119–147, doi:10.1007/BF01209156, S2CID 118124462
S. Bochner and J. von Neumann, "Almost Periodic Function in a Group II", Trans. Amer. Math. Soc., 37 no. 1 (1935) pp. 21–50
H. Bohr, "Almost-periodic functions", Chelsea, reprint (1947)
Bredikhina, E.A. (2001) [1994], "Almost-periodic function", Encyclopedia of Mathematics, EMS Press
Bredikhina, E.A. (2001) [1994], "Besicovitch almost-periodic functions", Encyclopedia of Mathematics, EMS Press
Bredikhina, E.A. (2001) [1994], "Bohr almost-periodic functions", Encyclopedia of Mathematics, EMS Press
Bredikhina, E.A. (2001) [1994], "Stepanov almost-periodic functions", Encyclopedia of Mathematics, EMS Press
Bredikhina, E.A. (2001) [1994], "Weyl almost-periodic functions", Encyclopedia of Mathematics, EMS Press
J. von Neumann, "Almost Periodic Functions in a Group I", Trans. Amer. Math. Soc., 36 no. 3 (1934) pp. 445–492
== External links ==
"Almost periodic function (equivalent definition)". PlanetMath.
In mathematics, a doubly periodic function is a function defined on the complex plane and having two "periods", which are complex numbers u and v that are linearly independent as vectors over the field of real numbers. That u and v are periods of a function ƒ means that
$$f(z+u)=f(z+v)=f(z)$$
for all values of the complex number z.
The doubly periodic function is thus a two-dimensional extension of the simpler singly periodic function, which repeats itself in a single dimension. Familiar examples of functions with a single period on the real number line include the trigonometric functions like cosine and sine. In the complex plane the exponential function ez is a singly periodic function, with period 2πi.
== Examples ==
As an arbitrary mapping from pairs of reals (or complex numbers) to reals, a doubly periodic function can be constructed with little effort. For example, assume that the periods are 1 and i, so that the repeating lattice is the set of unit squares with vertices at the Gaussian integers. Values in the prototype square (i.e. x + iy where 0 ≤ x < 1 and 0 ≤ y < 1) can be assigned rather arbitrarily and then 'copied' to adjacent squares. This function will then be necessarily doubly periodic.
If the vectors 1 and i in this example are replaced by linearly independent vectors u and v, the prototype square becomes a prototype parallelogram that still tiles the plane. The "origin" of the lattice of parallelograms does not have to be the point 0: the lattice can start from any point. In other words, we can think of the plane and its associated functional values as remaining fixed, and mentally translate the lattice to gain insight into the function's characteristics.
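The construction above can be sketched in a few lines (illustrative only; the prototype values are chosen arbitrarily, exactly as the text says they may be): reduce the real and imaginary parts of z modulo 1 and look the value up in the prototype square.

```python
def doubly_periodic(f_prototype):
    """Extend a function defined on the prototype square [0,1) x [0,1)
    to all of C with periods 1 and i, by reducing coordinates mod 1."""
    def f(z):
        return f_prototype(complex(z.real % 1.0, z.imag % 1.0))
    return f

# Values on the prototype square are assigned arbitrarily:
proto = lambda z: z.real * (1 - z.real) + 1j * z.imag
f = doubly_periodic(proto)

z = 0.25 + 0.5j  # dyadic parts, so the mod-1 reduction is float-exact
assert f(z + 1) == f(z) == f(z + 1j) == f(z + 5 - 3j)
```

Note that nothing forces such a function to be continuous, let alone meromorphic; the analytic constraints discussed in the next section are what make doubly periodic functions interesting.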
== Use of complex analysis ==
If a doubly periodic function is also a complex function that satisfies the Cauchy–Riemann equations and provides an analytic function away from some set of isolated poles – in other words, a meromorphic function – then a lot of information about such a function can be obtained by applying some basic theorems from complex analysis.
A non-constant meromorphic doubly periodic function cannot be bounded on the prototype parallelogram. For if it were it would be bounded everywhere, and therefore constant by Liouville's theorem.
Since the function is meromorphic, it has no essential singularities and its poles are isolated. Therefore a translated lattice that does not pass through any pole can be constructed. The contour integral around any parallelogram in the lattice must vanish, because the values assumed by the doubly periodic function along the two pairs of parallel sides are identical, and the two pairs of sides are traversed in opposite directions as we move around the contour. Therefore, by the residue theorem, the function cannot have a single simple pole inside each parallelogram – it must have at least two simple poles within each parallelogram (Jacobian case), or it must have at least one pole of order greater than one (Weierstrassian case).
A similar argument can be applied to the function g = 1/ƒ where ƒ is meromorphic and doubly periodic. Under this inversion the zeroes of ƒ become the poles of g, and vice versa. So the meromorphic doubly periodic function ƒ cannot have one simple zero lying within each parallelogram on the lattice—it must have at least two simple zeroes, or it must have at least one zero of multiplicity greater than one. It follows that ƒ cannot attain any value just once, since ƒ minus that value would itself be a meromorphic doubly periodic function with just one zero.
== See also ==
Elliptic function
Abel elliptic functions
Jacobi elliptic functions
Weierstrass elliptic functions
Lemniscate elliptic functions
Dixon elliptic functions
Fundamental pair of periods
Period mapping
== Literature ==
Jacobi, C. G. J. (1835). "De functionibus duarum variabilium quadrupliciter periodicis, quibus theoria transcendentium Abelianarum innititur". Journal für die reine und angewandte Mathematik (in Latin). 13. A. L. Crelle. Reimer, Berlin: 55–56. Retrieved 3 October 2022. Reprinted in Gesammelte Werke, Vol. 2, 2nd ed. Providence, Rhode Island: American Mathematical Society, pp. 25-26, 1969.
Whittaker, E. T. and Watson, G. N.: A Course in Modern Analysis, 4th ed. reprinted Cambridge, England: Cambridge University Press, 1963, pp. 429-535. Chapters XX - XXII on elliptic functions, general theorems and Weierstrass elliptic functions, theta functions and Jacobian elliptic functions.
== References ==
In mathematics, the Hill equation or Hill differential equation is the second-order linear ordinary differential equation
$${\frac {d^{2}y}{dt^{2}}}+f(t)y=0,$$
where $f(t)$ is a periodic function with minimal period $\pi$ and average zero. By these we mean that for all $t$,
$$f(t+\pi )=f(t),$$
and
$$\int _{0}^{\pi }f(t)\,dt=0,$$
and if $p$ is a number with $0<p<\pi$, the equation $f(t+p)=f(t)$ must fail for some $t$. It is named after George William Hill, who introduced it in 1886.
Because $f(t)$ has period $\pi$, the Hill equation can be rewritten using the Fourier series of $f(t)$:
$${\frac {d^{2}y}{dt^{2}}}+\left(\theta _{0}+2\sum _{n=1}^{\infty }\theta _{n}\cos(2nt)+\sum _{m=1}^{\infty }\phi _{m}\sin(2mt)\right)y=0.$$
Important special cases of Hill's equation include the Mathieu equation (in which only the terms corresponding to n = 0, 1 are included) and the Meissner equation.
Hill's equation is an important example in the understanding of periodic differential equations. Depending on the exact shape of $f(t)$, solutions may stay bounded for all time, or the amplitude of the oscillations in solutions may grow exponentially. The precise form of the solutions to Hill's equation is described by Floquet theory. Solutions can also be written in terms of Hill determinants.
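The boundedness question can be explored numerically. The sketch below is an illustration, not a method from the source: the parameter choices are arbitrary, and the helper name is invented. It integrates one period of a Mathieu-type equation with a plain RK4 stepper, builds the 2×2 monodromy matrix from the two standard initial conditions, and applies the classical Floquet criterion: solutions stay bounded when the trace of the monodromy matrix has absolute value less than 2 (its determinant is always 1 by Liouville's formula, since the equation has no first-derivative term).

```python
import math

def monodromy_hill(f, T=math.pi, steps=4000):
    """Monodromy matrix of y'' + f(t) y = 0: integrate one period T with
    RK4 for initial conditions (1,0) and (0,1); columns are (y(T), y'(T))."""
    def deriv(t, y, yp):
        return yp, -f(t) * y
    h = T / steps
    cols = []
    for y, yp in ((1.0, 0.0), (0.0, 1.0)):
        for i in range(steps):
            t = i * h
            k1y, k1p = deriv(t, y, yp)
            k2y, k2p = deriv(t + h/2, y + h/2*k1y, yp + h/2*k1p)
            k3y, k3p = deriv(t + h/2, y + h/2*k2y, yp + h/2*k2p)
            k4y, k4p = deriv(t + h, y + h*k3y, yp + h*k3p)
            y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        cols.append((y, yp))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

# Mathieu case f(t) = a - 2 q cos(2t), chosen away from a resonance tongue:
a, q = 2.0, 0.1
M = monodromy_hill(lambda t: a - 2 * q * math.cos(2 * t))
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
# |trace| < 2 here, so solutions remain bounded; det == 1 up to RK4 error.
```

Moving the parameters into an instability tongue (for small q, near a = 1, 4, 9, …) pushes |trace| above 2 and the solutions grow exponentially.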
Aside from its original application to lunar stability, the Hill equation appears in many settings including in modeling of a quadrupole mass spectrometer, as the one-dimensional Schrödinger equation of an electron in a crystal, quantum optics of two-level systems, accelerator physics and electromagnetic structures that are periodic in space and/or in time.
== References ==
== External links ==
"Hill equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Hill's Differential Equation". MathWorld.
Wolf, G. (2010), "Mathieu Functions and Hill's Equation", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
In mathematics, a quasiperiodic function is a function that has a certain similarity to a periodic function. A function $f$ is quasiperiodic with quasiperiod $\omega$ if $f(z+\omega )=g(z,f(z))$, where $g$ is a "simpler" function than $f$. What it means to be "simpler" is vague.
A simple case (sometimes called arithmetic quasiperiodic) is if the function obeys the equation:
$$f(z+\omega )=f(z)+C$$
Another case (sometimes called geometric quasiperiodic) is if the function obeys the equation:
$$f(z+\omega )=Cf(z)$$
An example of this is the Jacobi theta function, where
$$\vartheta (z+\tau ;\tau )=e^{-2\pi iz-\pi i\tau }\vartheta (z;\tau ),$$
shows that for fixed $\tau$ it has quasiperiod $\tau$
; it also is periodic with period one. Another example is provided by the Weierstrass sigma function, which is quasiperiodic in two independent quasiperiods, the periods of the corresponding Weierstrass ℘ function. Bloch's theorem says that the eigenfunctions of a periodic Schrödinger equation (or other periodic linear equations) can be found in quasiperiodic form, and a related form of quasi-periodic solution for periodic linear differential equations is expressed by Floquet theory.
Functions with an additive functional equation
$$f(z+\omega )=f(z)+az+b$$
are also called quasiperiodic. An example of this is the Weierstrass zeta function, where
$$\zeta (z+\omega ,\Lambda )=\zeta (z,\Lambda )+\eta (\omega ,\Lambda )$$
for a z-independent η when ω is a period of the corresponding Weierstrass ℘ function.
In the special case where $f(z+\omega )=f(z)$ we say $f$ is periodic with period $\omega$ in the period lattice $\Lambda$.
== Quasiperiodic signals ==
Quasiperiodic signals in the sense of audio processing are not quasiperiodic functions in the sense defined here; instead they have the nature of almost periodic functions and that article should be consulted. The more vague and general notion of quasiperiodicity has even less to do with quasiperiodic functions in the mathematical sense.
A useful example is the function:
$$f(z)=\sin(Az)+\sin(Bz)$$
If the ratio A/B is rational, this will have a true period, but if A/B is irrational there is no true period, but a succession of increasingly accurate "almost" periods.
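This behavior is easy to observe numerically. In the sketch below (illustrative; the sample grid and the particular k values are arbitrary choices), B/A = √2 is irrational, and a shift by 2πk is a good "almost" period exactly when k·√2 is close to an integer, i.e. when k is a denominator of a continued-fraction convergent of √2:

```python
import math

A, B = 1.0, math.sqrt(2)
f = lambda z: math.sin(A * z) + math.sin(B * z)

def almost_period_error(k, n=2000):
    """Max of |f(z + 2*pi*k) - f(z)| over sample points: small exactly
    when B*k is close to an integer (the sin(A z) term shifts by a full
    number of cycles, so only the sin(B z) term contributes)."""
    w = 2 * math.pi * k
    return max(abs(f(z) - f(z + w)) for z in (i * 0.01 for i in range(n)))

good = almost_period_error(29)   # 29 * sqrt(2) ≈ 41.012, nearly an integer
bad = almost_period_error(7)     # 7 * sqrt(2) ≈ 9.899, not as close
```

Taking larger convergent denominators (70, 169, 408, …) makes the "almost" period increasingly accurate, while no shift ever makes the error exactly zero.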
== See also ==
Quasiperiodic motion
== References ==
== External links ==
Quasiperiodic function at PlanetMath
In mathematics, the double Fourier sphere (DFS) method is a technique that transforms a function defined on the surface of the sphere to a function defined on a rectangular domain while preserving periodicity in both the longitude and latitude directions.
== Introduction ==
First, a function $f(x,y,z)$ on the sphere is written as $f(\lambda ,\theta )$ using spherical coordinates, i.e.,
$$f(\lambda ,\theta )=f(\cos \lambda \sin \theta ,\sin \lambda \sin \theta ,\cos \theta ),\qquad (\lambda ,\theta )\in [-\pi ,\pi ]\times [0,\pi ].$$
The function $f(\lambda ,\theta )$ is $2\pi$-periodic in $\lambda$, but not periodic in $\theta$. The periodicity in the latitude direction has been lost. To recover it, the function is "doubled up" and a related function on $[-\pi ,\pi ]\times [-\pi ,\pi ]$
is defined as
$${\tilde {f}}(\lambda ,\theta )={\begin{cases}g(\lambda +\pi ,\theta ),&(\lambda ,\theta )\in [-\pi ,0]\times [0,\pi ],\\h(\lambda ,\theta ),&(\lambda ,\theta )\in [0,\pi ]\times [0,\pi ],\\g(\lambda ,-\theta ),&(\lambda ,\theta )\in [0,\pi ]\times [-\pi ,0],\\h(\lambda +\pi ,-\theta ),&(\lambda ,\theta )\in [-\pi ,0]\times [-\pi ,0],\end{cases}}$$
where $g(\lambda ,\theta )=f(\lambda -\pi ,\theta )$ and $h(\lambda ,\theta )=f(\lambda ,\theta )$ for $(\lambda ,\theta )\in [0,\pi ]\times [0,\pi ]$. The new function ${\tilde {f}}$ is $2\pi$-periodic in $\lambda$ and $\theta$, and is constant along the lines $\theta =0$ and $\theta =\pm \pi$, corresponding to the poles.
The function ${\tilde {f}}$ can be expanded into a double Fourier series
$${\tilde {f}}\approx \sum _{j=-n}^{n}\sum _{k=-n}^{n}a_{jk}e^{ij\theta }e^{ik\lambda }.$$
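The doubling step can be sketched directly from the piecewise definition above (an illustration; the helper name dfs_extend and the choice of test function are invented for the example). With f taken as the x-coordinate on the sphere, cos λ sin θ, the glued function is smooth and 2π-periodic in both variables:

```python
import math

def dfs_extend(f):
    """Build the doubled function on [-pi,pi] x [-pi,pi] from f(lam, th)
    given on [-pi,pi] x [0,pi], following the piecewise definition with
    g(lam, th) = f(lam - pi, th) and h = f."""
    def g(lam, th):
        return f(lam - math.pi, th)
    def tf(lam, th):
        if lam <= 0 and th >= 0:
            return g(lam + math.pi, th)
        if lam >= 0 and th >= 0:
            return f(lam, th)
        if lam >= 0 and th <= 0:
            return g(lam, -th)
        return f(lam + math.pi, -th)  # h(lam + pi, -th) with h = f
    return tf

# x-coordinate on the sphere as a function of (longitude, colatitude):
f = lambda lam, th: math.cos(lam) * math.sin(th)
tf = dfs_extend(f)
```

On the upper half the extension agrees with f, the lower half is the flipped-and-rotated copy tf(λ, −θ) = tf(λ − π, θ), and tf is constant (here zero) along the pole line θ = 0, matching the properties stated in the text.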
== History ==
The DFS method was proposed by Merilees and developed further by Steven Orszag. The DFS method has been the subject of relatively few investigations since (a notable exception is Fornberg's work), perhaps due to the dominance of spherical harmonics expansions. Over the last fifteen years it has begun to be used for the computation of gravitational fields near black holes and for novel space-time spectral analysis.
== References ==
Frequency is the number of occurrences of a repeating event per unit of time. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
The interval of time between events is called the period. It is the reciprocal of the frequency. For example, if a heart beats at a frequency of 120 times per minute (2 hertz), its period is one half of a second.
Special definitions of frequency are used in certain contexts, such as the angular frequency in rotational or cyclical properties, when the rate of angular progress is measured. Spatial frequency is defined for properties that vary or occur repeatedly in geometry or space.
The unit of measurement of frequency in the International System of Units (SI) is the hertz, having the symbol Hz.
== Definitions and units ==
For cyclical phenomena such as oscillations, waves, or for examples of simple harmonic motion, the term frequency is defined as the number of cycles or repetitions per unit of time. The conventional symbol for frequency is f; the Greek letter ν (nu) is also used. The period T is the time taken to complete one cycle of an oscillation or rotation. The frequency and the period are related by the equation
$$f={\frac {1}{T}}.$$
The term temporal frequency is used to emphasise that the frequency is characterised by the number of occurrences of a repeating event per unit time.
The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz by the International Electrotechnical Commission in 1930. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, cycle per second (cps). The SI unit for the period, as for all measurements of time, is the second. A traditional unit of frequency used with rotating mechanical devices, where it is termed rotational frequency, is revolution per minute, abbreviated r/min or rpm. Sixty rpm is equivalent to one hertz.
== Period versus frequency ==
As a matter of convenience, longer and slower waves, such as ocean surface waves, are more typically described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency. Some commonly used conversions are listed below:
== Related quantities ==
Rotational frequency, usually denoted by the Greek letter ν (nu), is defined as the instantaneous rate of change of the number of rotations, N, with respect to time: ν = dN/dt; it is a type of frequency applied to rotational motion.
Angular frequency, usually denoted by the Greek letter ω (omega), is defined as the rate of change of angular displacement (during rotation), θ (theta), or the rate of change of the phase of a sinusoidal waveform (notably in oscillations and waves), or as the rate of change of the argument to the sine function:
$$y(t)=\sin \theta (t)=\sin(\omega t)=\sin(2\pi ft)$$
$${\frac {d\theta }{dt}}=\omega =2\pi f.$$
The unit of angular frequency is the radian per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity. Angular frequency is frequency multiplied by 2π.
Spatial frequency, denoted here by ξ (xi), is analogous to temporal frequency, but with a spatial measurement replacing time measurement, e.g.:
$$y(t)=\sin \theta (t,x)=\sin(\omega t+kx)$$
$${\frac {d\theta }{dx}}=k=2\pi \xi .$$
Spatial period or wavelength is the spatial analog to temporal period.
== In wave propagation ==
For periodic waves in nondispersive media (that is, media in which the wave speed is independent of frequency), frequency has an inverse relationship to the wavelength, λ (lambda). Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave:
$$f={\frac {v}{\lambda }}.$$
In the special case of electromagnetic waves in vacuum, then v = c, where c is the speed of light in vacuum, and this expression becomes
$$f={\frac {c}{\lambda }}.$$
When monochromatic waves travel from one medium to another, their frequency remains the same—only their wavelength and speed change.
== Measurement ==
Measurement of frequency can be done in the following ways:
=== Counting ===
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the period. For example, if 71 events occur within 15 seconds the frequency is:
$$f={\frac {71}{15\,{\text{s}}}}\approx 4.73\,{\text{Hz}}.$$
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of
$\Delta f={\tfrac {1}{2T_{\text{m}}}}$, or a fractional error of ${\tfrac {\Delta f}{f}}={\tfrac {1}{2fT_{\text{m}}}}$, where $T_{\text{m}}$ is the timing interval and $f$ is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts N is small.
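The counting estimate and its gating error are simple to compute. A small sketch of the formulas above (the function name is invented for the example), applied to the 71-events-in-15-seconds case:

```python
def frequency_from_count(count, gate_time):
    """Direct-counting frequency estimate with its gating error:
    an average error of half a count over the gate T_m gives
    delta_f = 1/(2 T_m) and a fractional error of 1/(2 f T_m)."""
    f = count / gate_time
    delta_f = 1.0 / (2.0 * gate_time)
    return f, delta_f, delta_f / f

f, delta_f, fractional = frequency_from_count(71, 15.0)
# fractional error simplifies to 1/(2 N): here 1/142, under one percent
```

Note that the fractional error depends only on the number of counts N, which is why direct counting degrades at low frequencies, where N is small for any practical gate time.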
=== Stroboscope ===
An old method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
=== Frequency counter ===
Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.
=== Heterodyne methods ===
Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly utilizing heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency. To convert higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
== Examples ==
=== Light ===
Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 400 THz (4×1014 Hz) is red light, 800 THz (8×1014 Hz) is violet light, and between these (in the range 400–800 THz) are all the other colors of the visible spectrum. An electromagnetic wave with a frequency less than 4×1014 Hz will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave with a frequency higher than 8×1014 Hz will also be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays.
All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through vacuum at the same speed (the speed of light), giving them wavelengths inversely proportional to their frequencies.
c
=
f
λ
,
{\displaystyle \displaystyle c=f\lambda ,}
where c is the speed of light (c in vacuum or less in other media), f is the frequency and λ is the wavelength.
In dispersive media, such as glass, the speed depends somewhat on frequency, so the wavelength is not quite inversely proportional to frequency.
=== Sound ===
Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, the frequency components of a sound determine its "color", its timbre. When speaking of the frequency (in the singular) of a sound, it refers to the property that most determines its pitch.
The frequencies an ear can hear are limited to a specific range of frequencies. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
=== Line current ===
In Europe, Africa, Australia, southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show in which of these general regions the recording was made.
== Aperiodic frequency ==
Aperiodic frequency is the rate of incidence or occurrence of non-cyclic phenomena, including random processes such as radioactive decay. It is expressed with the unit reciprocal second (s−1) or, in the case of radioactivity, with the unit becquerel.
It is defined as a rate, f = N/Δt, involving the number of entities counted or the number of events happened (N) during a given time duration (Δt); it is a physical quantity of type temporal rate.
== See also ==
== Notes ==
== References ==
== Sources ==
Davies, A. (1997). Handbook of Condition Monitoring: Techniques and Methodology. New York: Springer. ISBN 978-0-412-61320-3.
Serway, Raymond A.; Faughn, Jerry S. (1989). College Physics. London: Thomson/Brooks-Cole. ISBN 978-05344-0-814-5.
Young, Ian R. (1999). Wind Generated Ocean Waves. Elsevere Ocean Engineering. Vol. 2. Oxford: Elsevier. ISBN 978-0-08-043317-2.
== Further reading ==
Giancoli, D.C. (1988). Physics for Scientists and Engineers (2nd ed.). Prentice Hall. ISBN 978-0-13-669201-0.
== External links ==
Keyboard frequencies = naming of notes – The English and American system versus the German system
A frequency generator with sound, useful for hearing tests
Floquet theory is a branch of the theory of ordinary differential equations relating to the class of solutions to periodic linear differential equations of the form
$${\dot {x}}=A(t)x,$$
with $x\in \mathbb {R} ^{n}$ and $A(t)\in \mathbb {R} ^{n\times n}$ a piecewise continuous periodic function with period $T$; the theory characterizes the stability of the solutions.
The main theorem of Floquet theory, Floquet's theorem, due to Gaston Floquet (1883), gives a canonical form for each fundamental matrix solution of this common linear system. It gives a coordinate change $y=Q^{-1}(t)x$ with $Q(t+2T)=Q(t)$ that transforms the periodic system to a traditional linear system with constant, real coefficients.
When applied to physical systems with periodic potentials, such as crystals in condensed matter physics, the result is known as Bloch's theorem.
Note that the solutions of the linear differential equation form a vector space. A matrix φ(t) is called a fundamental matrix solution if its columns form a basis of the solution set. A matrix Φ(t) is called a principal fundamental matrix solution if all columns are linearly independent solutions and there exists t₀ such that Φ(t₀) is the identity. A principal fundamental matrix can be constructed from a fundamental matrix using Φ(t) = φ(t)φ⁻¹(t₀). The solution of the linear differential equation with the initial condition x(0) = x₀ is x(t) = φ(t)φ⁻¹(0)x₀, where φ(t) is any fundamental matrix solution.
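The normalisation Φ(t) = φ(t)φ⁻¹(t₀) is easy to check numerically. Below is a small Python sketch (the 2 × 2 oscillator system and the matrix C are arbitrary illustrative choices, not from the text): starting from a non-principal fundamental matrix e^{tA}C, the construction recovers a matrix that equals the identity at t₀.

```python
import math

def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def exp_tA(t):
    # closed form of e^{tA} for A = [[0, 1], [-1, 0]] (harmonic oscillator)
    return [[ math.cos(t), math.sin(t)],
            [-math.sin(t), math.cos(t)]]

C = [[2.0, 1.0], [1.0, 1.0]]          # an arbitrary invertible matrix
phi = lambda t: mul(exp_tA(t), C)     # a non-principal fundamental matrix

def principal(phi, t0, t):
    # Phi(t) = phi(t) phi^{-1}(t0); by construction Phi(t0) is the identity
    return mul(phi(t), inv2(phi(t0)))

Phi0 = principal(phi, 0.3, 0.3)
assert all(abs(Phi0[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```

Here Φ(t) = e^{tA}CC⁻¹e^{−t₀A} = e^{(t−t₀)A}, independent of the arbitrary factor C, illustrating that the principal fundamental matrix does not depend on which fundamental matrix one starts from.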
== Floquet's theorem ==
Let

{\displaystyle {\dot {x}}=A(t)x}

be a linear first-order differential equation, where x(t) is a column vector of length n and A(t) an n × n periodic matrix with period T (that is, A(t + T) = A(t) for all real values of t). Let φ(t) be a fundamental matrix solution of this differential equation. Then, for all t ∈ ℝ,

{\displaystyle \phi (t+T)=\phi (t)\phi ^{-1}(0)\phi (T).}

Here φ⁻¹(0)φ(T) is known as the monodromy matrix.
In addition, for each matrix B (possibly complex) such that

{\displaystyle e^{TB}=\phi ^{-1}(0)\phi (T),}

there is a periodic (period T) matrix function t ↦ P(t) such that

{\displaystyle \phi (t)=P(t)e^{tB}{\text{ for all }}t\in \mathbb {R} .}
Also, there is a real matrix R and a real periodic (period-2T) matrix function t ↦ Q(t) such that

{\displaystyle \phi (t)=Q(t)e^{tR}{\text{ for all }}t\in \mathbb {R} .}

In the above, B, P, Q and R are n × n matrices.
== Consequences and applications ==
The mapping φ(t) = Q(t)e^{tR} gives rise to a time-dependent change of coordinates (y = Q⁻¹(t)x), under which the original system becomes a linear system with real constant coefficients ẏ = Ry. Since Q(t) is continuous and periodic it must be bounded. Thus the stability of the zero solution for y(t) and x(t) is determined by the eigenvalues of R.
The representation φ(t) = P(t)e^{tB} is called a Floquet normal form for the fundamental matrix φ(t).
The eigenvalues of e^{TB} are called the characteristic multipliers of the system. They are also the eigenvalues of the (linear) Poincaré map x(t) → x(t + T). A Floquet exponent (sometimes called a characteristic exponent) is a complex number μ such that e^{μT} is a characteristic multiplier of the system. Notice that Floquet exponents are not unique, since e^{(μ + 2πik/T)T} = e^{μT} for any integer k. The real parts of the Floquet exponents are called Lyapunov exponents. The zero solution is asymptotically stable if all Lyapunov exponents are negative, Lyapunov stable if the Lyapunov exponents are nonpositive, and unstable otherwise.
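The non-uniqueness of Floquet exponents stated above is a one-line computation; here is a quick numerical check (the values of μ, T and the shifts k are chosen arbitrarily for illustration):

```python
import cmath
import math

mu, T = 0.3 + 0.7j, 2.0
target = cmath.exp(mu * T)          # the characteristic multiplier e^{mu T}
for k in (-2, 1, 5):
    # shifting mu by 2*pi*i*k/T leaves the multiplier unchanged
    shifted = cmath.exp((mu + 2j * math.pi * k / T) * T)
    assert abs(shifted - target) < 1e-9
```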
Floquet theory is very important for the study of dynamical systems, such as the Mathieu equation.
Floquet theory is used to analyse the stability of the Hill differential equation (introduced by George William Hill), which approximates the motion of the Moon as a harmonic oscillator in a periodic gravitational field.
Bond softening and bond hardening in intense laser fields can be described in terms of solutions obtained from the Floquet theorem.
Dynamics of strongly driven quantum systems are often examined using Floquet theory. In superconducting circuits, the Floquet framework has been leveraged to shed light on the quantum electrodynamics of drive-induced multiqubit interactions.
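As a minimal pure-Python sketch of the machinery above (the Mathieu parameters, step count, and the simple RK4 integrator are illustrative choices, not from the text), the monodromy matrix and the characteristic multipliers of the Mathieu equation can be approximated by integrating the system over one period starting from the standard basis vectors:

```python
import cmath
import math

def rk4_step(f, t, x, h):
    # one classical Runge-Kutta step for x' = f(t, x)
    k1 = f(t, x)
    k2 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)])
    k3 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)])
    k4 = f(t + h,   [xi + h*ki   for xi, ki in zip(x, k3)])
    return [xi + h/6*(a + 2*b + 2*c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def monodromy(f, T, steps=2000):
    # Integrate each standard basis vector over one period: with phi(0) = I,
    # the resulting columns form the monodromy matrix phi^{-1}(0) phi(T).
    cols = []
    for e in ([1.0, 0.0], [0.0, 1.0]):
        x, h = list(e), T / steps
        for i in range(steps):
            x = rk4_step(f, i * h, x, h)
        cols.append(x)
    return [[cols[0][0], cols[1][0]],
            [cols[0][1], cols[1][1]]]

def mathieu(t, x, a=2.0, q=0.2):
    # u'' + (a - 2 q cos 2t) u = 0 written as a first-order system
    u, v = x
    return [v, -(a - 2.0*q*math.cos(2.0*t)) * u]

T = math.pi                                   # period of the coefficient matrix
M = monodromy(mathieu, T)
tr  = M[0][0] + M[1][1]
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]       # Liouville: det = e^{0} = 1 here
disc = cmath.sqrt(tr*tr - 4*det)
multipliers = ((tr + disc)/2, (tr - disc)/2)  # characteristic multipliers
# For det = 1: |tr| < 2 puts both multipliers on the unit circle (stable),
# while |tr| > 2 makes one multiplier exceed 1 in modulus (unstable).
```

Since the Mathieu coefficient matrix has zero trace, Liouville's formula forces the product of the multipliers (the determinant of the monodromy matrix) to equal 1, which makes a convenient sanity check on the integration.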
== References ==
C. Chicone. Ordinary Differential Equations with Applications. Springer-Verlag, New York 1999.
M.S.P. Eastham, "The Spectral Theory of Periodic Differential Equations", Texts in Mathematics, Scottish Academic Press, Edinburgh, 1973. ISBN 978-0-7011-1936-2.
Ekeland, Ivar (1990). "One". Convexity methods in Hamiltonian mechanics. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]. Vol. 19. Berlin: Springer-Verlag. pp. x+247. ISBN 3-540-50613-6. MR 1051888.
Floquet, Gaston (1883), "Sur les équations différentielles linéaires à coefficients périodiques" (PDF), Annales Scientifiques de l'École Normale Supérieure, 12: 47–88, doi:10.24033/asens.220
Krasnosel'skii, M.A. (1968), The Operator of Translation along the Trajectories of Differential Equations, Providence: American Mathematical Society, Translations of Mathematical Monographs, 19, 294p.
W. Magnus, S. Winkler. Hill's Equation, Dover-Phoenix Editions, ISBN 0-486-49565-5.
N.W. McLachlan, Theory and Application of Mathieu Functions, New York: Dover, 1964.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Deng, Chunqing; Shen, Feiruo; Ashhab, Sahel; Lupascu, Adrian (2016-09-27). "Dynamics of a two-level system under strong driving: Quantum-gate optimization based on Floquet theory". Physical Review A. 94 (3): 032323. arXiv:1605.08826. Bibcode:2016PhRvA..94c2323D. doi:10.1103/PhysRevA.94.032323. ISSN 2469-9926.
Huang, Ziwen; Mundada, Pranav S.; Gyenis, András; Schuster, David I.; Houck, Andrew A.; Koch, Jens (2021-03-22). "Engineering Dynamical Sweet Spots to Protect Qubits from 1 / f Noise". Physical Review Applied. 15 (3): 034065. arXiv:2004.12458. Bibcode:2021PhRvP..15c4065H. doi:10.1103/PhysRevApplied.15.034065. ISSN 2331-7019.
Nguyen, L.B.; Kim, Y.; Hashim, A.; Goss, N.; Marinelli, B.; Bhandari, B.; Das, D.; Naik, R.K.; Kreikebaum, J.M.; Jordan, A.; Santiago, D.I.; Siddiqi, I. (16 January 2024). "Programmable Heisenberg interactions between Floquet qubits". Nature Physics. 20 (1): 240–246. arXiv:2211.10383. Bibcode:2024NatPh..20..240N. doi:10.1038/s41567-023-02326-7.
== External links ==
"Floquet theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In mathematics, algebraic geometry and analytic geometry are two closely related subjects. While algebraic geometry studies algebraic varieties, analytic geometry deals with complex manifolds and the more general analytic spaces defined locally by the vanishing of analytic functions of several complex variables. The deep relation between these subjects has numerous applications in which algebraic techniques are applied to analytic spaces and analytic techniques to algebraic varieties.
== Main statement ==
Let X be a projective complex algebraic variety. Because X is a complex variety, its set of complex points X(ℂ) can be given the structure of a compact complex analytic space. This analytic space is denoted X^an. Similarly, if F is a sheaf on X, then there is a corresponding sheaf F^an on X^an. This association of an analytic object to an algebraic one is a functor. The prototypical theorem relating X and X^an says that for any two coherent sheaves F and G on X, the natural homomorphism

{\displaystyle {\text{Hom}}_{{\mathcal {O}}_{X}}({\mathcal {F}},{\mathcal {G}})\rightarrow {\text{Hom}}_{{\mathcal {O}}_{X}^{\text{an}}}({\mathcal {F}}^{\text{an}},{\mathcal {G}}^{\text{an}})}

is an isomorphism. Here O_X is the structure sheaf of the algebraic variety X and O_X^an is the structure sheaf of the analytic variety X^an. More precisely, the category of coherent sheaves on the algebraic variety X is equivalent to the category of analytic coherent sheaves on the analytic variety X^an, and the equivalence is given on objects by mapping F to F^an. In particular, O_X^an is itself coherent, a result known as the Oka coherence theorem; the coherence of the algebraic structure sheaf O_X was proved in "Faisceaux algébriques cohérents".

Another important statement is the following: for any coherent sheaf F on an algebraic variety X the homomorphisms

{\displaystyle \varepsilon _{q}\ :\ H^{q}(X,{\mathcal {F}})\rightarrow H^{q}(X^{\text{an}},{\mathcal {F}}^{\text{an}})}

are isomorphisms for all q. This means that the q-th cohomology group of X is isomorphic to the corresponding cohomology group of X^an.
The theorem applies much more generally than stated above (see the formal statement below). It and its proof have many consequences, such as Chow's theorem, the Lefschetz principle and Kodaira vanishing theorem.
== Background ==
Algebraic varieties are locally defined as the common zero sets of polynomials, and since polynomials over the complex numbers are holomorphic functions, algebraic varieties over ℂ can be interpreted as analytic spaces. Similarly, regular morphisms between varieties are interpreted as holomorphic mappings between analytic spaces. Somewhat surprisingly, it is often possible to go the other way, to interpret analytic objects in an algebraic way.
For example, it is easy to prove that the analytic functions from the Riemann sphere to itself are either the rational functions or the identically infinity function (an extension of Liouville's theorem). For if such a function f is nonconstant, then since the set of z where f(z) is infinity is isolated and the Riemann sphere is compact, there are finitely many z with f(z) equal to infinity. Consider the Laurent expansion at all such z and subtract off the singular part: we are left with a function on the Riemann sphere with values in ℂ, which by Liouville's theorem is constant. Thus f is a rational function. This fact shows there is no essential difference between the complex projective line as an algebraic variety and as the Riemann sphere.
== Important results ==
There is a long history of comparison results between algebraic geometry and analytic geometry, beginning in the nineteenth century. Some of the more important advances are listed here in chronological order.
=== Riemann's existence theorem ===
Riemann surface theory shows that a compact Riemann surface has enough meromorphic functions on it to make it a (smooth projective) algebraic curve. Under the name Riemann's existence theorem a deeper result on ramified coverings of a compact Riemann surface was known: such finite coverings as topological spaces are classified by permutation representations of the fundamental group of the complement of the ramification points. Since the Riemann surface property is local, such coverings are quite easily seen to be coverings in the complex-analytic sense. It is then possible to conclude that they come from covering maps of algebraic curves, that is, such coverings all come from finite extensions of the function field.
=== The Lefschetz principle ===
In the twentieth century, the Lefschetz principle, named for Solomon Lefschetz, was cited in algebraic geometry to justify the use of topological techniques for algebraic geometry over any algebraically closed field K of characteristic 0, by treating K as if it were the complex number field. An elementary form of it asserts that true statements of the first order theory of fields about C are true for any algebraically closed field K of characteristic zero. A precise principle and its proof are due to Alfred Tarski and are based in mathematical logic.
This principle permits the carrying over of some results obtained using analytic or topological methods for algebraic varieties over C to other algebraically closed ground fields of characteristic 0. (e.g. Kodaira type vanishing theorem.)
=== Chow's theorem ===
Chow's theorem (Chow (1949)), proved by Wei-Liang Chow, is an example of the most immediately useful kind of comparison available. It states that an analytic subspace of complex projective space that is closed (in the ordinary topological sense) is an algebraic subvariety. This can be rephrased as "any analytic subspace of complex projective space that is closed in the strong topology is closed in the Zariski topology." This allows quite a free use of complex-analytic methods within the classical parts of algebraic geometry.
=== GAGA ===
Foundations for the many relations between the two theories were put in place during the early part of the 1950s, as part of the business of laying the foundations of algebraic geometry to include, for example, techniques from Hodge theory. The major paper consolidating the theory was Géométrie algébrique et géométrie analytique by Jean-Pierre Serre, now usually referred to as GAGA. It proves general results that relate classes of algebraic varieties, regular morphisms and sheaves with classes of analytic spaces, holomorphic mappings and sheaves. It reduces all of these to the comparison of categories of sheaves.
Nowadays the phrase GAGA-style result is used for any theorem of comparison, allowing passage between a category of objects from algebraic geometry, and their morphisms, to a well-defined subcategory of analytic geometry objects and holomorphic mappings.
==== Formal statement of GAGA ====
Let (X, O_X) be a scheme of finite type over ℂ. Then there is a topological space X^an that consists of the closed points of X, together with a continuous inclusion map λ_X : X^an → X. The topology on X^an is called the "complex topology" (and is very different from the subspace topology).
Suppose φ : X → Y is a morphism of schemes of locally finite type over ℂ. Then there exists a continuous map φ^an : X^an → Y^an such that λ_Y ∘ φ^an = φ ∘ λ_X.
There is a sheaf O_X^an on X^an such that (X^an, O_X^an) is a ringed space and λ_X : X^an → X becomes a map of ringed spaces. The space (X^an, O_X^an) is called the "analytification" of (X, O_X) and is an analytic space. For every φ : X → Y the map φ^an defined above is a mapping of analytic spaces. Furthermore, the map φ ↦ φ^an maps open immersions into open immersions. If X = Spec(ℂ[x₁, …, x_n]) then X^an = ℂⁿ, and O_X^an(U) for every polydisc U is a suitable quotient of the space of holomorphic functions on U.
For every sheaf F on X (called an algebraic sheaf) there is a sheaf F^an on X^an (called an analytic sheaf) and a map of sheaves of O_X-modules λ_X^* : F → (λ_X)_* F^an. The sheaf F^an is defined as

{\displaystyle \lambda _{X}^{-1}{\mathcal {F}}\otimes _{\lambda _{X}^{-1}{\mathcal {O}}_{X}}{\mathcal {O}}_{X}^{\mathrm {an} }.}

The correspondence F ↦ F^an defines an exact functor from the category of sheaves over (X, O_X) to the category of sheaves over (X^an, O_X^an).

The following two statements are the heart of Serre's GAGA theorem (as extended by Alexander Grothendieck, Amnon Neeman, and others).
If f : X → Y is an arbitrary morphism of schemes of finite type over ℂ and F is coherent, then the natural map

{\displaystyle (f_{*}{\mathcal {F}})^{\mathrm {an} }\rightarrow f_{*}^{\mathrm {an} }{\mathcal {F}}^{\mathrm {an} }}

is injective. If f is proper then this map is an isomorphism. One also has isomorphisms of all higher direct image sheaves

{\displaystyle (R^{i}f_{*}{\mathcal {F}})^{\mathrm {an} }\cong R^{i}f_{*}^{\mathrm {an} }{\mathcal {F}}^{\mathrm {an} }}

in this case.
Now assume that X^an is Hausdorff and compact. If F and G are two coherent algebraic sheaves on (X, O_X) and if f : F^an → G^an is a map of sheaves of O_X^an-modules, then there exists a unique map of sheaves of O_X-modules φ : F → G with f = φ^an. If R is a coherent analytic sheaf of O_X^an-modules over X^an, then there exists a coherent algebraic sheaf F of O_X-modules and an isomorphism F^an ≅ R.
In slightly lesser generality, the GAGA theorem asserts that the category of coherent algebraic sheaves on a complex projective variety X and the category of coherent analytic sheaves on the corresponding analytic space X^an are equivalent. The analytic space X^an is obtained roughly by pulling back to X the complex structure from ℂⁿ through the coordinate charts. Phrasing the theorem in this manner is closer in spirit to Serre's paper, since the full scheme-theoretic language used heavily in the formal statement above had not yet been invented at the time of GAGA's publication.
== See also ==
Flat module - The notion of flatness was introduced by Serre (1956). Algebraic and analytic local rings have the same completion, and thereby they become a "flat couple" (couple plat).
== Notes ==
== References ==
== External links ==
Kiran Kedlaya. 18.726 Algebraic Geometry (LEC # 30 - 33 GAGA) Spring 2009. Massachusetts Institute of Technology: MIT OpenCourseWare Creative Commons BY-NC-SA.
In mathematics, differential of the first kind is a traditional term used in the theories of Riemann surfaces (more generally, complex manifolds) and algebraic curves (more generally, algebraic varieties), for everywhere-regular differential 1-forms. Given a complex manifold M, a differential of the first kind ω is therefore the same thing as a 1-form that is everywhere holomorphic; on an algebraic variety V that is non-singular it would be a global section of the coherent sheaf Ω1 of Kähler differentials. In either case the definition has its origins in the theory of abelian integrals.
The dimension of the space of differentials of the first kind, by means of this identification, is the Hodge number h^{1,0}.
The differentials of the first kind, when integrated along paths, give rise to integrals that generalise the elliptic integrals to all curves over the complex numbers. They include for example the hyperelliptic integrals of type
{\displaystyle \int {\frac {x^{k}\,dx}{\sqrt {Q(x)}}}}
where Q is a square-free polynomial of any given degree > 4. The allowable power k has to be determined by analysis of the possible pole at the point at infinity on the corresponding hyperelliptic curve. When this is done, one finds that the condition is
k ≤ g − 1,
or in other words, k at most 1 for degree of Q 5 or 6, at most 2 for degree 7 or 8, and so on (as g = ⌊(deg Q − 1)/2⌋).
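The counting rule just described can be sketched in a few lines, using the hyperelliptic genus convention g = ⌊(deg Q − 1)/2⌋, which reproduces the degree-5 through degree-8 cases listed above (the function names are invented for this illustration):

```python
def hyperelliptic_genus(deg_q):
    # Genus of the hyperelliptic curve y^2 = Q(x) with Q square-free:
    # deg Q = 2g + 1 or 2g + 2, i.e. g = floor((deg Q - 1) / 2).
    return (deg_q - 1) // 2

def allowed_powers(deg_q):
    # x^k dx / sqrt(Q(x)) is a differential of the first kind iff k <= g - 1
    g = hyperelliptic_genus(deg_q)
    return list(range(g))

# Matches the text: k at most 1 for deg Q = 5 or 6, at most 2 for 7 or 8
assert allowed_powers(5) == [0, 1]
assert allowed_powers(8) == [0, 1, 2]
```

Note that the number of allowed powers is exactly g, in agreement with the statement below that the space of differentials of the first kind has dimension equal to the genus.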
Quite generally, as this example illustrates, for a compact Riemann surface or algebraic curve, the Hodge number is the genus g. For the case of algebraic surfaces, this is the quantity known classically as the irregularity q. It is also, in general, the dimension of the Albanese variety, which takes the place of the Jacobian variety.
== Differentials of the second and third kind ==
The traditional terminology also included differentials of the second kind and of the third kind. The idea behind this has been supported by modern theories of algebraic differential forms, both from the side of more Hodge theory, and through the use of morphisms to commutative algebraic groups.
The Weierstrass zeta function was called an integral of the second kind in elliptic function theory; it is a logarithmic derivative of a theta function, and therefore has simple poles, with integer residues. The decomposition of a (meromorphic) elliptic function into pieces of 'three kinds' parallels the representation as (i) a constant, plus (ii) a linear combination of translates of the Weierstrass zeta function, plus (iii) a function with arbitrary poles but no residues at them.
The same type of decomposition exists in general, mutatis mutandis, though the terminology is not completely consistent. In the algebraic group (generalized Jacobian) theory the three kinds are abelian varieties, algebraic tori, and affine spaces, and the decomposition is in terms of a composition series.
On the other hand, a meromorphic abelian differential of the second kind has traditionally been one with residues at all poles being zero. One of the third kind is one where all poles are simple. There is a higher-dimensional analogue available, using the Poincaré residue.
== See also ==
Logarithmic form
== References ==
"Abelian differential", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
The Riemann–Roch theorem is an important theorem in mathematics, specifically in complex analysis and algebraic geometry, for the computation of the dimension of the space of meromorphic functions with prescribed zeros and allowed poles. It relates the complex analysis of a connected compact Riemann surface with the surface's purely topological genus g, in a way that can be carried over into purely algebraic settings.
Initially proved as Riemann's inequality by Riemann (1857), the theorem reached its definitive form for Riemann surfaces after work of Riemann's short-lived student Gustav Roch (1865). It was later generalized to algebraic curves, to higher-dimensional varieties and beyond.
== Preliminary notions ==
A Riemann surface X is a topological space that is locally homeomorphic to an open subset of ℂ, the set of complex numbers. In addition, the transition maps between these open subsets are required to be holomorphic. The latter condition allows one to transfer the notions and methods of complex analysis dealing with holomorphic and meromorphic functions on ℂ to the surface X. For the purposes of the Riemann–Roch theorem, the surface X is always assumed to be compact. Colloquially speaking, the genus g of a Riemann surface is its number of handles; for example the genus of the Riemann surface shown at the right is three. More precisely, the genus is defined as half of the first Betti number, i.e., half of the ℂ-dimension of the first singular homology group H₁(X, ℂ) with complex coefficients. The genus classifies compact Riemann surfaces up to homeomorphism, i.e., two such surfaces are homeomorphic if and only if their genus is the same. Therefore, the genus is an important topological invariant of a Riemann surface. On the other hand, Hodge theory shows that the genus coincides with the ℂ-dimension of the space of holomorphic one-forms on X, so the genus also encodes complex-analytic information about the Riemann surface.
A divisor D is an element of the free abelian group on the points of the surface. Equivalently, a divisor is a finite linear combination of points of the surface with integer coefficients.
Any meromorphic function f gives rise to a divisor denoted (f) defined as

{\displaystyle (f):=\sum _{z_{\nu }\in R(f)}s_{\nu }z_{\nu }}

where R(f) is the set of all zeroes and poles of f, and s_ν is given by

{\displaystyle s_{\nu }:={\begin{cases}a&{\text{if }}z_{\nu }{\text{ is a zero of order }}a\\-a&{\text{if }}z_{\nu }{\text{ is a pole of order }}a\end{cases}}}
The set R(f) is known to be finite; this is a consequence of X being compact and the fact that the zeros of a (non-zero) holomorphic function do not have an accumulation point. Therefore, (f) is well-defined. Any divisor of this form is called a principal divisor. Two divisors that differ by a principal divisor are called linearly equivalent. The divisor of a meromorphic 1-form is defined similarly. The divisor of a global meromorphic 1-form is called the canonical divisor (usually denoted K). Any two meromorphic 1-forms yield linearly equivalent divisors, so the canonical divisor is uniquely determined up to linear equivalence (hence "the" canonical divisor).

The symbol deg(D) denotes the degree (occasionally also called index) of the divisor D, i.e. the sum of the coefficients occurring in D. It can be shown that the divisor of a global meromorphic function always has degree 0, so the degree of a divisor depends only on its linear equivalence class.
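The degree-0 property of principal divisors can be made concrete for rational functions on the Riemann sphere. The sketch below (the dict-based encoding of divisors is an assumption made for this example) records the finite zeros and poles with multiplicity, adds the contribution of the point at infinity, and checks that the total degree vanishes:

```python
def principal_divisor(zeros, poles):
    # Divisor (f) of a rational function on the Riemann sphere, given its
    # finite zeros and poles as dicts {point: order} (a hypothetical
    # encoding chosen for this sketch). The order of f at infinity is
    # deg(denominator) - deg(numerator).
    div = {}
    for p, m in zeros.items():
        div[p] = div.get(p, 0) + m
    for p, m in poles.items():
        div[p] = div.get(p, 0) - m
    order_at_inf = sum(poles.values()) - sum(zeros.values())
    if order_at_inf != 0:
        div["inf"] = order_at_inf
    return div

def degree(div):
    # sum of the coefficients of the divisor
    return sum(div.values())

# f(z) = z^2 / (z - 1): double zero at 0, simple pole at 1, simple pole at inf
d = principal_divisor({0: 2}, {1: 1})
assert d == {0: 2, 1: -1, "inf": -1}
assert degree(d) == 0   # a principal divisor always has degree 0
```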
The number ℓ(D) is the quantity that is of primary interest: the dimension (over ℂ) of the vector space of meromorphic functions h on the surface such that all the coefficients of (h) + D are non-negative. Intuitively, we can think of this as all meromorphic functions whose poles at every point are no worse than the corresponding coefficient in D: if the coefficient in D at z is negative, then we require that h has a zero of at least that multiplicity at z; if the coefficient in D is positive, h can have a pole of at most that order. The vector spaces for linearly equivalent divisors are naturally isomorphic through multiplication with the global meromorphic function (which is well-defined up to a scalar).
== Statement of the theorem ==
The Riemann–Roch theorem for a compact Riemann surface of genus g with canonical divisor K states

{\displaystyle \ell (D)-\ell (K-D)=\deg(D)-g+1.}
Typically, the number ℓ(D) is the one of interest, while ℓ(K − D) is thought of as a correction term (also called index of speciality), so the theorem may be roughly paraphrased by saying

dimension − correction = degree − genus + 1.
Because it is the dimension of a vector space, the correction term ℓ(K − D) is always non-negative, so that

{\displaystyle \ell (D)\geq \deg(D)-g+1.}
This is called Riemann's inequality. Roch's part of the statement is the description of the possible difference between the two sides of the inequality. On a general Riemann surface of genus g, K has degree 2g − 2, independently of the meromorphic form chosen to represent the divisor. This follows from putting D = K in the theorem. In particular, as long as D has degree at least 2g − 1, the correction term is 0, so that

{\displaystyle \ell (D)=\deg(D)-g+1.}
The theorem will now be illustrated for surfaces of low genus. There are also a number of other closely related theorems: an equivalent formulation of this theorem using line bundles and a generalization of the theorem to algebraic curves.
=== Examples ===
The theorem will be illustrated by picking a point P on the surface in question and regarding the sequence of numbers ℓ(n·P), n ≥ 0, i.e., the dimension of the space of functions that are holomorphic everywhere except at P, where the function is allowed to have a pole of order at most n. For n = 0, the functions are thus required to be entire, i.e., holomorphic on the whole surface X. By Liouville's theorem, such a function is necessarily constant. Therefore, ℓ(0) = 1. In general, the sequence ℓ(n·P) is an increasing sequence.
==== Genus zero ====
The Riemann sphere (also called the complex projective line) is simply connected and hence its first singular homology is zero. In particular its genus is zero. The sphere can be covered by two copies of $\mathbb{C}$, with the transition map given by
$$\mathbb{C}\setminus \{0\}\ni z\mapsto {\frac {1}{z}}\in \mathbb{C}\setminus \{0\}.$$
Therefore, the form $\omega =dz$ on one copy of $\mathbb{C}$ extends to a meromorphic form on the Riemann sphere: it has a double pole at infinity, since
$$d\left({\frac {1}{z}}\right)=-{\frac {1}{z^{2}}}\,dz.$$
Thus, its canonical divisor is $K:=\operatorname{div}(\omega )=-2P$ (where $P$ is the point at infinity).
Therefore, the theorem says that the sequence $\ell(n\cdot P)$ reads
1, 2, 3, ... .
This sequence can also be read off from the theory of partial fractions. Conversely, if this sequence starts this way, then $g$ must be zero.
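Each term of this sequence can be checked directly from the theorem: since $\deg(K)=-2$, the divisor $K-n\cdot P$ has negative degree for every $n\geq 0$, so the correction term vanishes. A short worked derivation:

```latex
% For the Riemann sphere: g = 0 and K = -2P, so for all n >= 0
\deg(K - n\cdot P) = -2 - n < 0
  \quad\Longrightarrow\quad \ell(K - n\cdot P) = 0,
% and Riemann-Roch then gives
\ell(n\cdot P) = \deg(n\cdot P) - g + 1 + \ell(K - n\cdot P)
             = n - 0 + 1 + 0 = n + 1,
% reproducing the sequence 1, 2, 3, ...
```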
==== Genus one ====
The next case is a Riemann surface of genus $g=1$, such as a torus $\mathbb{C}/\Lambda$, where $\Lambda$ is a two-dimensional lattice (a group isomorphic to $\mathbb{Z}^{2}$). Its genus is one: its first singular homology group is freely generated by two loops, as shown in the illustration at the right. The standard complex coordinate $z$ on $\mathbb{C}$ yields a one-form $\omega =dz$ on $X$ that is everywhere holomorphic, i.e., has no poles at all. Therefore, $K$, the divisor of $\omega$, is zero.
On this surface, this sequence is
1, 1, 2, 3, 4, 5, ... ;
and this characterises the case $g=1$. Indeed, for $D=0$, $\ell(K-D)=\ell(0)=1$, as was mentioned above. For $D=n\cdot P$ with $n>0$, the degree of $K-D$ is strictly negative, so that the correction term is 0. The sequence of dimensions can also be derived from the theory of elliptic functions.
==== Genus two and beyond ====
For $g=2$, the sequence mentioned above is
1, 1, ?, 2, 3, ... .
This shows that the ? term of degree 2 is either 1 or 2, depending on the point. It can be proven that on any genus-2 curve there are exactly six points whose sequences are 1, 1, 2, 2, ..., while the rest of the points have the generic sequence 1, 1, 1, 2, ... In particular, a genus-2 curve is a hyperelliptic curve. For $g>2$ it is always true that at most points the sequence starts with $g+1$ ones and there are finitely many points with other sequences (see Weierstrass points).
=== Riemann–Roch for line bundles ===
Using the close correspondence between divisors and holomorphic line bundles on a Riemann surface, the theorem can also be stated in a different, yet equivalent way: let L be a holomorphic line bundle on X. Let $H^{0}(X,L)$ denote the space of holomorphic sections of L. This space is finite-dimensional; its dimension is denoted $h^{0}(X,L)$. Let K denote the canonical bundle on X. Then the Riemann–Roch theorem states that
$$h^{0}(X,L)-h^{0}(X,L^{-1}\otimes K)=\deg(L)+1-g.$$
The theorem of the previous section is the special case when L is a point bundle.
The theorem can be applied to show that there are g linearly independent holomorphic sections of K, or one-forms on X, as follows. Taking L to be the trivial bundle, $h^{0}(X,L)=1$, since the only holomorphic functions on X are constants. The degree of L is zero, and $L^{-1}$ is the trivial bundle. Thus,
$$1-h^{0}(X,K)=1-g.$$
Therefore, $h^{0}(X,K)=g$, proving that there are g holomorphic one-forms.
=== Degree of canonical bundle ===
Since the canonical bundle $K$ has $h^{0}(X,K)=g$, applying Riemann–Roch to $L=K$ gives
$$h^{0}(X,K)-h^{0}(X,K^{-1}\otimes K)=\deg(K)+1-g,$$
which can be rewritten as
$$g-1=\deg(K)+1-g;$$
hence the degree of the canonical bundle is $\deg(K)=2g-2$.
=== Riemann–Roch theorem for algebraic curves ===
Every item in the above formulation of the Riemann–Roch theorem for divisors on Riemann surfaces has an analogue in algebraic geometry. The analogue of a Riemann surface is a non-singular algebraic curve C over a field k. The difference in terminology (curve vs. surface) is because the dimension of a Riemann surface as a real manifold is two, but one as a complex manifold. The compactness of a Riemann surface is paralleled by the condition that the algebraic curve be complete, which is equivalent to being projective. Over a general field k, there is no good notion of singular (co)homology. The so-called geometric genus is defined as
$$g(C):=\dim _{k}\Gamma (C,\Omega _{C}^{1})$$
i.e., as the dimension of the space of globally defined (algebraic) one-forms (see Kähler differential). Finally, meromorphic functions on a Riemann surface are locally represented as fractions of holomorphic functions. Hence they are replaced by rational functions which are locally fractions of regular functions. Thus, writing
$\ell(D)$ for the dimension (over k) of the space of rational functions on the curve whose poles at every point are not worse than the corresponding coefficient in D, the very same formula as above holds:
$$\ell(D)-\ell(K-D)=\deg(D)-g+1,$$
where C is a projective non-singular algebraic curve over an algebraically closed field k. In fact, the same formula holds for projective curves over any field, except that the degree of a divisor needs to take into account multiplicities coming from the possible extensions of the base field and the residue fields of the points supporting the divisor. Finally, for a proper curve over an Artinian ring, the Euler characteristic of the line bundle associated to a divisor is given by the degree of the divisor (appropriately defined) plus the Euler characteristic of the structural sheaf
${\mathcal {O}}$.
The smoothness assumption in the theorem can be relaxed, as well: for a (projective) curve over an algebraically closed field, all of whose local rings are Gorenstein rings, the same statement as above holds, provided that the geometric genus as defined above is replaced by the arithmetic genus ga, defined as
$$g_{a}:=\dim _{k}H^{1}(C,{\mathcal {O}}_{C}).$$
(For smooth curves, the geometric genus agrees with the arithmetic one.) The theorem has also been extended to general singular curves (and higher-dimensional varieties).
== Applications ==
=== Hilbert polynomial ===
One of the important consequences of Riemann–Roch is that it gives a formula for computing the Hilbert polynomial of line bundles on a curve. If a line bundle ${\mathcal {L}}$ is ample, then the Hilbert polynomial gives the first degree ${\mathcal {L}}^{\otimes n}$ giving an embedding into projective space. For example, the canonical sheaf $\omega _{C}$ has degree $2g-2$, which gives an ample line bundle for genus $g\geq 2$. If we set $\omega _{C}(n)=\omega _{C}^{\otimes n}$, then the Riemann–Roch formula reads
$${\begin{aligned}\chi (\omega _{C}(n))&=\deg(\omega _{C}^{\otimes n})-g+1\\&=n(2g-2)-g+1\\&=2ng-2n-g+1\\&=(2n-1)(g-1)\end{aligned}}$$
giving the degree 1 Hilbert polynomial of $\omega _{C}$:
$$H_{\omega _{C}}(t)=2(g-1)t-g+1.$$
Because the tri-canonical sheaf $\omega _{C}^{\otimes 3}$ is used to embed the curve, the Hilbert polynomial
$$H_{C}(t)=H_{\omega _{C}^{\otimes 3}}(t)$$
is generally considered while constructing the Hilbert scheme of curves (and the moduli space of algebraic curves). This polynomial is
$${\begin{aligned}H_{C}(t)&=(6t-1)(g-1)\\&=6(g-1)t+(1-g)\end{aligned}}$$
and is called the Hilbert polynomial of a genus g curve.
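As a quick numerical sanity check (a sketch added here, not part of the original text), the Euler characteristic $\chi(\omega_C^{\otimes n})=n(2g-2)-g+1$ from Riemann–Roch agrees with the Hilbert polynomial $(6t-1)(g-1)$ at $n=3t$:

```python
def chi(n, g):
    """Euler characteristic of the n-th power of the canonical sheaf on a
    genus-g curve, from Riemann-Roch: deg + 1 - g = n(2g-2) - g + 1."""
    return n * (2 * g - 2) - g + 1

def hilbert_poly(t, g):
    """Hilbert polynomial of a tri-canonically embedded genus-g curve."""
    return (6 * t - 1) * (g - 1)

# chi of omega^{otimes 3t} matches the Hilbert polynomial at every t.
for g in range(2, 10):
    for t in range(0, 6):
        assert chi(3 * t, g) == hilbert_poly(t, g)
```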
=== Pluricanonical embedding ===
Analyzing this equation further, the Euler characteristic reads as
$${\begin{aligned}\chi (\omega _{C}^{\otimes n})&=h^{0}\left(C,\omega _{C}^{\otimes n}\right)-h^{0}\left(C,\omega _{C}\otimes \left(\omega _{C}^{\otimes n}\right)^{\vee }\right)\\&=h^{0}\left(C,\omega _{C}^{\otimes n}\right)-h^{0}\left(C,\left(\omega _{C}^{\otimes (n-1)}\right)^{\vee }\right)\end{aligned}}$$
Since $\deg(\omega _{C}^{\otimes n})=n(2g-2)$, the bundle $\left(\omega _{C}^{\otimes (n-1)}\right)^{\vee }$ has negative degree for $n\geq 3$ and all $g\geq 2$, so it has no global sections:
$$h^{0}\left(C,\left(\omega _{C}^{\otimes (n-1)}\right)^{\vee }\right)=0.$$
Hence there is an embedding into some projective space from the global sections of $\omega _{C}^{\otimes n}$. In particular, $\omega _{C}^{\otimes 3}$ gives an embedding into $\mathbb {P} ^{N}\cong \mathbb {P} (H^{0}(C,\omega _{C}^{\otimes 3}))$
where $N=5g-5-1=5g-6$, since $h^{0}(\omega _{C}^{\otimes 3})=6g-6-g+1=5g-5$. This is useful in the construction of the moduli space of algebraic curves because it can be used as the projective space to construct the Hilbert scheme with Hilbert polynomial $H_{C}(t)$.
=== Genus of plane curves with singularities ===
An irreducible plane algebraic curve of degree d has (d − 1)(d − 2)/2 − g singularities, when properly counted. It follows that, if a curve has (d − 1)(d − 2)/2 different singularities, it is a rational curve and, thus, admits a rational parameterization.
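The genus formula above can be turned into a one-line computation; the sketch below assumes the singularities are counted properly (e.g. ordinary double points each counting once), and the function name is illustrative:

```python
def plane_curve_genus(d, singularities=0):
    """Genus of an irreducible plane curve of degree d with the given
    (properly counted) number of singularities: (d-1)(d-2)/2 - #sing."""
    return (d - 1) * (d - 2) // 2 - singularities

# A smooth plane cubic (d = 3) has genus 1: an elliptic curve.
assert plane_curve_genus(3) == 1
# With the full (d-1)(d-2)/2 singularities the genus drops to 0:
# the curve is rational and admits a rational parameterization.
assert plane_curve_genus(3, singularities=1) == 0
```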
=== Riemann–Hurwitz formula ===
The Riemann–Hurwitz formula concerning (ramified) maps between Riemann surfaces or algebraic curves is a consequence of the Riemann–Roch theorem.
=== Clifford's theorem on special divisors ===
Clifford's theorem on special divisors is also a consequence of the Riemann–Roch theorem. It states that for a special divisor (i.e., one such that $\ell(K-D)>0$) satisfying $\ell(D)>0$, the following inequality holds:
$$\ell(D)\leq {\frac {\deg D}{2}}+1.$$
== Proof ==
=== Proof for algebraic curves ===
The statement for algebraic curves can be proved using Serre duality. The integer $\ell(D)$ is the dimension of the space of global sections of the line bundle ${\mathcal {L}}(D)$ associated to D (cf. Cartier divisor). In terms of sheaf cohomology, we therefore have $\ell(D)=\dim H^{0}(X,{\mathcal {L}}(D))$, and likewise $\ell({\mathcal {K}}_{X}-D)=\dim H^{0}(X,\omega _{X}\otimes {\mathcal {L}}(D)^{\vee })$. But Serre duality for non-singular projective varieties in the particular case of a curve states that $H^{0}(X,\omega _{X}\otimes {\mathcal {L}}(D)^{\vee })$ is isomorphic to the dual $H^{1}(X,{\mathcal {L}}(D))^{\vee }$. The left hand side thus equals the Euler characteristic of the divisor D. When D = 0, the Euler characteristic of the structure sheaf is $1-g$ by definition. To prove the theorem for a general divisor, one can then proceed by adding points one by one to the divisor and ensuring that the Euler characteristic transforms accordingly on the right hand side.
=== Proof for compact Riemann surfaces ===
The theorem for compact Riemann surfaces can be deduced from the algebraic version using Chow's Theorem and the GAGA principle: in fact, every compact Riemann surface is defined by algebraic equations in some complex projective space. (Chow's Theorem says that any closed analytic subvariety of projective space is defined by algebraic equations, and the GAGA principle says that sheaf cohomology of an algebraic variety is the same as the sheaf cohomology of the analytic variety defined by the same equations).
One may avoid the use of Chow's theorem by arguing identically to the proof in the case of algebraic curves, but replacing ${\mathcal {L}}(D)$ with the sheaf ${\mathcal {O}}_{D}$ of meromorphic functions h such that all coefficients of the divisor $(h)+D$ are nonnegative. Here the fact that the Euler characteristic transforms as desired when one adds a point to the divisor can be read off from the long exact sequence induced by the short exact sequence
$$0\to {\mathcal {O}}_{D}\to {\mathcal {O}}_{D+P}\to \mathbb {C} _{P}\to 0$$
where $\mathbb {C} _{P}$ is the skyscraper sheaf at P, and the map ${\mathcal {O}}_{D+P}\to \mathbb {C} _{P}$ returns the $(-k-1)$-th Laurent coefficient, where $k=D(P)$.
== Arithmetic Riemann–Roch theorem ==
A version of the arithmetic Riemann–Roch theorem states that if k is a global field, and f is a suitably admissible function of the adeles of k, then for every idele a, one has a Poisson summation formula:
$${\frac {1}{|a|}}\sum _{x\in k}{\hat {f}}(x/a)=\sum _{x\in k}f(ax).$$
In the special case when k is the function field of an algebraic curve over a finite field and f is any character that is trivial on k, this recovers the geometric Riemann–Roch theorem.
Other versions of the arithmetic Riemann–Roch theorem make use of Arakelov theory to resemble the traditional Riemann–Roch theorem more exactly.
== Generalizations of the Riemann–Roch theorem ==
The Riemann–Roch theorem for curves was proved for Riemann surfaces by Riemann and Roch in the 1850s and for algebraic curves by Friedrich Karl Schmidt in 1931 as he was working on perfect fields of finite characteristic. As stated by Peter Roquette,
The first main achievement of F. K. Schmidt is the discovery that the classical theorem of Riemann–Roch on compact Riemann surfaces can be transferred to function fields with finite base field. Actually, his proof of the Riemann–Roch theorem works for arbitrary perfect base fields, not necessarily finite.
It is foundational in the sense that the subsequent theory for curves tries to refine the information it yields (for example in the Brill–Noether theory).
There are versions in higher dimensions (for the appropriate notion of divisor, or line bundle). Their general formulation depends on splitting the theorem into two parts. One, which would now be called Serre duality, interprets the
$\ell(K-D)$ term as a dimension of a first sheaf cohomology group; with $\ell(D)$ the dimension of a zeroth cohomology group, or space of sections, the left-hand side of the theorem becomes an Euler characteristic, and the right-hand side a computation of it as a degree corrected according to the topology of the Riemann surface.
In algebraic geometry of dimension two such a formula was found by the geometers of the Italian school; a Riemann–Roch theorem for surfaces was proved (there are several versions, with the first possibly being due to Max Noether).
An n-dimensional generalisation, the Hirzebruch–Riemann–Roch theorem, was found and proved by Friedrich Hirzebruch, as an application of characteristic classes in algebraic topology; he was much influenced by the work of Kunihiko Kodaira. At about the same time Jean-Pierre Serre was giving the general form of Serre duality, as we now know it.
Alexander Grothendieck proved a far-reaching generalization in 1957, now known as the Grothendieck–Riemann–Roch theorem. His work reinterprets Riemann–Roch not as a theorem about a variety, but about a morphism between two varieties. The details of the proofs were published by Armand Borel and Jean-Pierre Serre in 1958. Later, Grothendieck and his collaborators simplified and generalized the proof.
Finally a general version was found in algebraic topology, too. These developments were essentially all carried out between 1950 and 1960. After that the Atiyah–Singer index theorem opened another route to generalization. Consequently, the Euler characteristic of a coherent sheaf is reasonably computable. For just one summand within the alternating sum, further arguments such as vanishing theorems must be used.
== See also ==
Arakelov theory
Grothendieck–Riemann–Roch theorem
Hirzebruch–Riemann–Roch theorem
Kawasaki's Riemann–Roch formula
Hilbert polynomial
Moduli of algebraic curves
== Notes ==
== References ==
Serre, Jean-Pierre; Borel, Armand (1958). "Le théorème de Riemann-Roch". Bulletin de la Société Mathématique de France. 79: 97–136. doi:10.24033/bsmf.1500.
Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, doi:10.1002/9781118032527, ISBN 978-0-471-05059-9, MR 1288523
Grothendieck, Alexander, et al. (1966/67), Théorie des Intersections et Théorème de Riemann–Roch (SGA 6), LNM 225, Springer-Verlag, 1971.
Fulton, William (1974). Algebraic Curves (PDF). Mathematics Lecture Note Series. W.A. Benjamin. ISBN 0-8053-3080-1.
Jost, Jürgen (2006). Compact Riemann Surfaces. Berlin, New York: Springer-Verlag. ISBN 978-3-540-33065-3. See pages 208–219 for the proof in the complex situation. Note that Jost uses slightly different notation.
Hartshorne, Robin (1977). Algebraic Geometry. Berlin, New York: Springer-Verlag. ISBN 978-0-387-90244-9. MR 0463157. OCLC 13348052., contains the statement for curves over an algebraically closed field. See section IV.1.
"Riemann–Roch theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Hirzebruch, Friedrich (1995). Topological methods in algebraic geometry. Classics in Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-3-540-58663-0. MR 1335917..
Miranda, Rick (1995). Algebraic Curves and Riemann Surfaces. Graduate Studies in Mathematics. Vol. 5. doi:10.1090/gsm/005. ISBN 9780821802687.
Shigeru Mukai (2003). An Introduction to Invariants and Moduli. Cambridge studies in advanced mathematics. Vol. 81. William Oxbury (trans.). New York: Cambridge University Press. ISBN 0-521-80906-1.
Vector bundles on Compact Riemann Surfaces, M. S. Narasimhan, pp. 5–6.
Riemann, Bernhard (1857). "Theorie der Abel'schen Functionen". Journal für die reine und angewandte Mathematik. 1857 (54): 115–155. doi:10.1515/crll.1857.54.115. hdl:2027/coo.31924060183864. S2CID 16593204.
Roch, Gustav (1865). "Ueber die Anzahl der willkurlichen Constanten in algebraischen Functionen". Journal für die reine und angewandte Mathematik. 1865 (64): 372–376. doi:10.1515/crll.1865.64.372. S2CID 120178388.
Schmidt, Friedrich Karl (1931), "Analytische Zahlentheorie in Körpern der Charakteristik p", Mathematische Zeitschrift, 33: 1–32, doi:10.1007/BF01174341, S2CID 186228993, Zbl 0001.05401, archived from the original on 2017-12-22, retrieved 2020-05-16
Stichtenoth, Henning (1993). Algebraic Function Fields and Codes. Springer-Verlag. ISBN 3-540-56489-6.
Misha Kapovich, The Riemann–Roch Theorem (lecture note) an elementary introduction
J. Gray, The Riemann–Roch theorem and Geometry, 1854–1914.
Is there a Riemann–Roch for smooth projective curves over an arbitrary field? on MathOverflow
In mathematics, Weber's theorem, named after Heinrich Martin Weber, is a result on algebraic curves. It states the following.
Consider two non-singular curves C and C′ having the same genus g > 1. If there is a rational correspondence φ between C and C′, then φ is a birational transformation.
== References ==
Coolidge, J. L. (1959). A Treatise on Algebraic Plane Curves. New York: Dover. p. 135. ISBN 0-486-60543-4.
Weber, H. (1873). "Zur Theorie der Transformation algebraischer Functionen". Journal für die reine und angewandte Mathematik (in German). 76: 345–348. doi:10.1515/crll.1873.76.345.
== Further reading ==
Tsuji, Masatsugu (1941). "Theory of conformal mapping of a multiply connected domain". Japanese Journal of Mathematics :Transactions and Abstracts. 18: 759–775. doi:10.4099/jjm1924.18.0_759.
== External links ==
Weisstein, Eric W. "Weber's Theorem". MathWorld.
In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
== In calculus and analysis ==
In calculus, a function $f$ defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-decreasing or entirely non-increasing. That is, as per Fig. 1, a monotonically increasing function need not increase everywhere; it simply must not decrease.
A function is termed monotonically increasing (also increasing or non-decreasing) if for all $x$ and $y$ such that $x\leq y$ one has $f(x)\leq f(y)$, so $f$ preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing) if, whenever $x\leq y$, then $f(x)\geq f(y)$, so it reverses the order (see Figure 2).
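On a finite sample of a function's values the two defining inequalities can be checked directly; a minimal sketch (function names are illustrative):

```python
def is_increasing(values):
    """Monotonically increasing: every adjacent pair satisfies x <= y."""
    return all(a <= b for a, b in zip(values, values[1:]))

def is_decreasing(values):
    """Monotonically decreasing: every adjacent pair satisfies x >= y."""
    return all(a >= b for a, b in zip(values, values[1:]))

assert is_increasing([1, 2, 2, 5])   # non-decreasing: plateaus allowed
assert not is_increasing([1, 3, 2])  # falls at the end
assert is_decreasing([4, 4, 1])
```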
If the order $\leq$ in the definition of monotonicity is replaced by the strict order $<$, one obtains a stronger requirement. A function with this property is called strictly increasing (also increasing). Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing (also decreasing). A function with either property is called strictly monotone. Functions that are strictly monotone are one-to-one (because for $x$ not equal to $y$, either $x<y$ or $x>y$ and so, by monotonicity, either $f(x)<f(y)$ or $f(x)>f(y)$, thus $f(x)\neq f(y)$).
To avoid ambiguity, the terms weakly monotone, weakly increasing and weakly decreasing are often used to refer to non-strict monotonicity.
The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
A function $f$ is said to be absolutely monotonic over an interval $(a,b)$ if the derivatives of all orders of $f$ are nonnegative or all nonpositive at all points on the interval.
=== Inverse of function ===
All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one mapping from their range to their domain.
However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one).
A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, if $y=g(x)$ is strictly increasing on the range $[a,b]$, then it has an inverse $x=h(y)$ on the range $[g(a),g(b)]$.
The term monotonic is sometimes used in place of strictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible.
=== Monotonic transformation ===
The term monotonic transformation (or monotone transformation) may also cause confusion because it refers to a transformation by a strictly increasing function. This is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform (see also monotone preferences). In this context, the term "monotonic transformation" refers to a positive monotonic transformation and is intended to distinguish it from a "negative monotonic transformation," which reverses the order of the numbers.
=== Some basic applications and results ===
The following properties are true for a monotonic function $f\colon \mathbb {R} \to \mathbb {R}$:
$f$ has limits from the right and from the left at every point of its domain;
$f$ has a limit at positive or negative infinity ($\pm \infty$) of either a real number, $\infty$, or $-\infty$;
$f$ can only have jump discontinuities;
$f$ can only have countably many discontinuities in its domain. The discontinuities, however, do not necessarily consist of isolated points and may even be dense in an interval (a, b). For example, for any summable sequence $(a_{i})$ of positive numbers and any enumeration $(q_{i})$ of the rational numbers, the monotonically increasing function
$$f(x)=\sum _{q_{i}\leq x}a_{i}$$
is continuous exactly at every irrational number (cf. picture). It is the cumulative distribution function of the discrete measure on the rational numbers, where $a_{i}$ is the weight of $q_{i}$.
If $f$ is differentiable at $x^{*}\in \mathbb {R}$ and $f'(x^{*})>0$, then there is a non-degenerate interval I such that $x^{*}\in I$ and $f$ is increasing on I. As a partial converse, if f is differentiable and increasing on an interval I, then its derivative is non-negative at every point in I.
These properties are the reason why monotonic functions are useful in technical work in analysis. Other important properties of these functions include:
if $f$ is a monotonic function defined on an interval $I$, then $f$ is differentiable almost everywhere on $I$; i.e. the set of numbers $x$ in $I$ such that $f$ is not differentiable in $x$ has Lebesgue measure zero. In addition, this result cannot be improved to countable: see Cantor function.
if this set is countable, then $f$ is absolutely continuous
if $f$ is a monotonic function defined on an interval $[a,b]$, then $f$ is Riemann integrable.
An important application of monotonic functions is in probability theory. If $X$ is a random variable, its cumulative distribution function $F_{X}(x)={\text{Prob}}(X\leq x)$ is a monotonically increasing function.
A function is unimodal if it is monotonically increasing up to some point (the mode) and then monotonically decreasing.
When $f$ is a strictly monotonic function, then $f$ is injective on its domain, and if $T$ is the range of $f$, then there is an inverse function on $T$ for $f$. In contrast, each constant function is monotonic, but not injective, and hence cannot have an inverse.
The graphic shows six monotonic functions. Their simplest forms are shown in the plot area and the expressions used to create them are shown on the y-axis.
== In topology ==
A map $f:X\to Y$ is said to be monotone if each of its fibers is connected; that is, for each element $y\in Y$, the (possibly empty) set $f^{-1}(y)$ is a connected subspace of $X$.
== In functional analysis ==
In functional analysis on a topological vector space $X$, a (possibly non-linear) operator $T:X\rightarrow X^{*}$ is said to be a monotone operator if
$$(Tu-Tv,u-v)\geq 0\quad \forall u,v\in X.$$
Kachurovskii's theorem shows that convex functions on Banach spaces have monotonic operators as their derivatives.
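In one dimension the pairing is ordinary multiplication, so the derivative of a convex function such as $u\mapsto u^{4}$ yields a monotone operator; a brute-force numerical check (an illustrative sketch, not from the text):

```python
def T(u):
    """Derivative of the convex function u -> u**4; by Kachurovskii's
    theorem this is a monotone operator on the real line."""
    return 4 * u ** 3

# (Tu - Tv)(u - v) >= 0 for all sampled pairs u, v.
pts = [x / 10 for x in range(-30, 31)]
assert all((T(u) - T(v)) * (u - v) >= 0 for u in pts for v in pts)
```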
A subset $G$ of $X\times X^{*}$ is said to be a monotone set if for every pair $[u_{1},w_{1}]$ and $[u_{2},w_{2}]$ in $G$,
$$(w_{1}-w_{2},u_{1}-u_{2})\geq 0.$$
$G$ is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operator $G(T)$ is a monotone set. A monotone operator is said to be maximal monotone if its graph is a maximal monotone set.
== In order theory ==
Order theory deals with arbitrary partially ordered sets and preordered sets as a generalization of real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are not total. Furthermore, the strict relations $<$ and $>$ are of little use in many non-total orders and hence no additional terminology is introduced for them.
Letting $\leq$ denote the partial order relation of any partially ordered set, a monotone function, also called isotone, or order-preserving, satisfies the property
$$x\leq y\implies f(x)\leq f(y)$$
for all x and y in its domain. The composite of two monotone mappings is also monotone.
The dual notion is often called antitone, anti-monotone, or order-reversing. Hence, an antitone function f satisfies the property
$$x\leq y\implies f(y)\leq f(x)$$
for all x and y in its domain.
A constant function is both monotone and antitone; conversely, if f is both monotone and antitone, and if the domain of f is a lattice, then f must be constant.
Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions are order embeddings (functions for which $x\leq y$ if and only if $f(x)\leq f(y)$) and order isomorphisms (surjective order embeddings).
== In the context of search algorithms ==
In the context of search algorithms, monotonicity (also called consistency) is a condition applied to heuristic functions. A heuristic
{\displaystyle h(n)}
is monotonic if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n',
{\displaystyle h(n)\leq c\left(n,a,n'\right)+h\left(n'\right).}
This is a form of triangle inequality, with n, n', and the goal Gn closest to n. Because every monotonic heuristic is also admissible, monotonicity is a stricter requirement than admissibility. Some heuristic algorithms such as A* can be proven optimal provided that the heuristic they use is monotonic.
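Monotonicity is easy to verify exhaustively for simple heuristics. The following sketch (illustrative; the grid, goal position, and unit step costs are assumptions, not from the text) checks the inequality h(n) ≤ c(n, a, n′) + h(n′) for the Manhattan-distance heuristic on a 4-connected grid:

```python
# Verify consistency of the Manhattan-distance heuristic on a grid
# where every move costs 1, i.e. c(n, a, n') = 1 for all actions.
goal = (7, 3)

def h(n):
    """Manhattan distance from node n to the fixed goal."""
    return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

def successors(n):
    """The four axis-aligned neighbours of n."""
    x, y = n
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

consistent = all(h(n) <= 1 + h(m)
                 for x in range(10) for y in range(10)
                 for n in [(x, y)] for m in successors(n))
assert consistent
```

Each unit move changes the Manhattan distance by at most one, which is exactly the triangle-inequality condition above.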
== In Boolean functions ==
In Boolean algebra, a monotonic function is one such that for all ai and bi in {0,1}, if a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn (i.e. the Cartesian product {0, 1}n is ordered coordinatewise), then f(a1, ..., an) ≤ f(b1, ..., bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. Graphically, this means that an n-ary Boolean function is monotonic when its representation as an n-cube labelled with truth values has no upward edge from true to false. (This labelled Hasse diagram is the dual of the function's labelled Venn diagram, which is the more common representation for n ≤ 3.)
The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operators and and or (in particular not is forbidden). For instance "at least two of a, b, c hold" is a monotonic function of a, b, c, since it can be written for instance as ((a and b) or (a and c) or (b and c)).
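The "switching an input from false to true can only switch the output from false to true" criterion can be tested by brute force over all pairs of input vectors. A minimal sketch (the helper `is_monotone_bool` is illustrative, not from the text), using the "at least two of a, b, c" example above:

```python
from itertools import product

def is_monotone_bool(f, n):
    """f is monotone iff a <= b coordinatewise implies f(a) <= f(b)."""
    pts = list(product((0, 1), repeat=n))
    return all(f(*a) <= f(*b)
               for a in pts for b in pts
               if all(ai <= bi for ai, bi in zip(a, b)))

# "At least two of a, b, c hold": built from and/or only, so monotone.
maj = lambda a, b, c: (a and b) or (a and c) or (b and c)
assert is_monotone_bool(maj, 3)

# Negation breaks monotonicity: raising the input lowers the output.
assert not is_monotone_bool(lambda a: 1 - a, 1)
```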
The number of such functions on n variables is known as the Dedekind number of n.
SAT solving, generally an NP-hard task, can be achieved efficiently when all involved functions and predicates are monotonic and Boolean.
== See also ==
Monotone cubic interpolation
Pseudo-monotone operator
Spearman's rank correlation coefficient - measure of monotonicity in a set of data
Total monotonicity
Cyclical monotonicity
Operator monotone function
Monotone set function
Absolutely and completely monotonic functions and sequences
== Notes ==
== Bibliography ==
Bartle, Robert G. (1976). The elements of real analysis (second ed.).
Grätzer, George (1971). Lattice theory: first concepts and distributive lattices. W. H. Freeman. ISBN 0-7167-0442-0.
Pemberton, Malcolm; Rau, Nicholas (2001). Mathematics for economists: an introductory textbook. Manchester University Press. ISBN 0-7190-3341-1.
Renardy, Michael & Rogers, Robert C. (2004). An introduction to partial differential equations. Texts in Applied Mathematics 13 (Second ed.). New York: Springer-Verlag. p. 356. ISBN 0-387-00444-0.
Riesz, Frigyes & Béla Szőkefalvi-Nagy (1990). Functional Analysis. Courier Dover Publications. ISBN 978-0-486-66289-3.
Russell, Stuart J.; Norvig, Peter (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 978-0-13-604259-4.
Simon, Carl P.; Blume, Lawrence (April 1994). Mathematics for Economists (first ed.). Norton. ISBN 978-0-393-95733-4. (Definition 9.31)
== External links ==
"Monotone function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Convergence of a Monotonic Sequence by Anik Debnath and Thomas Roxlo (The Harker School), Wolfram Demonstrations Project.
Weisstein, Eric W. "Monotonic Function". MathWorld.
In mathematics, the Nagata conjecture on curves, named after Masayoshi Nagata, governs the minimal degree required for a plane algebraic curve to pass through a collection of very general points with prescribed multiplicities.
== History ==
Nagata arrived at the conjecture via work on the 14th problem of Hilbert, which asks whether the invariant ring of a linear group action on the polynomial ring k[x1, ..., xn] over some field k is finitely generated. Nagata published the conjecture in a 1959 paper in the American Journal of Mathematics, in which he presented a counterexample to Hilbert's 14th problem.
== Statement ==
Nagata Conjecture. Suppose p1, ..., pr are very general points in P2 and that m1, ..., mr are given positive integers. Then for r > 9 any curve C in P2 that passes through each of the points pi with multiplicity mi must satisfy
{\displaystyle \deg C>{\frac {1}{\sqrt {r}}}\sum _{i=1}^{r}m_{i}.}
The condition r > 9 is necessary: The cases r > 9 and r ≤ 9 are distinguished by whether or not the anti-canonical bundle on the blowup of P2 at a collection of r points is nef. In the case where r ≤ 9, the cone theorem essentially gives a complete description of the cone of curves of the blow-up of the plane.
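The conjectured bound is elementary to evaluate numerically. A small sketch (illustrative; the particular values of r and the multiplicities are arbitrary choices, not from the text):

```python
import math

def nagata_bound(mults):
    """The conjectured strict lower bound (1/sqrt(r)) * sum(m_i) on deg C."""
    r = len(mults)
    return sum(mults) / math.sqrt(r)

# r = 16 general points of multiplicity 1: r is a perfect square, so the
# bound is a theorem of Nagata here, giving deg C > 16/4 = 4, i.e. deg >= 5.
assert nagata_bound([1] * 16) == 4.0

# r = 10 points of multiplicity 3: the (open) conjecture would force
# deg C > 30/sqrt(10), roughly 9.49.
assert math.isclose(nagata_bound([3] * 10), 30 / math.sqrt(10))
```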
== Current status ==
The only case when this is known to hold is when r is a perfect square, which was proved by Nagata. Despite much interest, the other cases remain open. A more modern formulation of this conjecture is often given in terms of Seshadri constants and has been generalised to other surfaces under the name of the Nagata–Biran conjecture.
== References ==
Harbourne, Brian (2001), "On Nagata's conjecture", Journal of Algebra, 236 (2): 692–702, arXiv:math/9909093, doi:10.1006/jabr.2000.8515, MR 1813496.
Nagata, Masayoshi (1959), "On the 14-th problem of Hilbert", American Journal of Mathematics, 81 (3): 766–772, doi:10.2307/2372927, JSTOR 2372927, MR 0105409.
Strycharz-Szemberg, Beata; Szemberg, Tomasz (2004), "Remarks on the Nagata conjecture", Serdica Mathematical Journal, 30 (2–3): 405–430, hdl:10525/1746, MR 2098342.
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition.: 58
Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility.
The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the set of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry.
Many algebraic varieties are differentiable manifolds, but an algebraic variety may have singular points while a differentiable manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces.
In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type.
== Overview and definitions ==
An affine variety over an algebraically closed field is conceptually the easiest type of variety to define, which will be done in this section. Next, one can define projective and quasi-projective varieties in a similar way. The most general definition of a variety is obtained by patching together smaller quasi-projective varieties. It is not obvious that one can construct genuinely new examples of varieties in this way, but Nagata gave an example of such a new variety in the 1950s.
=== Affine varieties ===
For an algebraically closed field K and a natural number n, let An be an affine n-space over K, identified with {\displaystyle K^{n}}
through the choice of an affine coordinate system. The polynomials f in the ring K[x1, ..., xn] can be viewed as K-valued functions on An by evaluating f at the points in An, i.e. by choosing values in K for each xi. For each set S of polynomials in K[x1, ..., xn], define the zero-locus Z(S) to be the set of points in An on which the functions in S simultaneously vanish, that is to say
{\displaystyle Z(S)=\left\{x\in \mathbf {A} ^{n}\mid f(x)=0{\text{ for all }}f\in S\right\}.}
A subset V of An is called an affine algebraic set if V = Z(S) for some S.: 2 A nonempty affine algebraic set V is called irreducible if it cannot be written as the union of two proper algebraic subsets.: 3 An irreducible affine algebraic set is also called an affine variety.: 3 (Some authors use the phrase affine variety to refer to any affine algebraic set, irreducible or not.)
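The definition of Z(S) above is over an algebraically closed field, but over a finite field F_p the zero-locus can at least be enumerated directly, which gives a concrete (purely illustrative) sketch of the construction; the choice p = 5 and the polynomial x + y − 1 are assumptions for the demo:

```python
from itertools import product

p = 5  # work over the finite field F_p so A^n has finitely many points

def zero_locus(polys, n):
    """Points of A^n over F_p where every polynomial in `polys` vanishes."""
    return {pt for pt in product(range(p), repeat=n)
            if all(f(*pt) % p == 0 for f in polys)}

# Z({x + y - 1}) in the affine plane: the "line" y = 1 - x.
line = zero_locus([lambda x, y: x + y - 1], 2)
assert line == {(x, (1 - x) % p) for x in range(p)}
assert len(line) == p
```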
Affine varieties can be given a natural topology by declaring the closed sets to be precisely the affine algebraic sets. This topology is called the Zariski topology.: 2
Given a subset V of An, we define I(V) to be the ideal of all polynomial functions vanishing on V:
{\displaystyle I(V)=\left\{f\in K[x_{1},\ldots ,x_{n}]\mid f(x)=0{\text{ for all }}x\in V\right\}.}
For any affine algebraic set V, the coordinate ring or structure ring of V is the quotient of the polynomial ring by this ideal.: 4
=== Projective varieties and quasi-projective varieties ===
Let k be an algebraically closed field and let Pn be the projective n-space over k. Let f in k[x0, ..., xn] be a homogeneous polynomial of degree d. It is not well-defined to evaluate f on points in Pn in homogeneous coordinates. However, because f is homogeneous, meaning that f (λx0, ..., λxn) = λd f (x0, ..., xn), it does make sense to ask whether f vanishes at a point [x0 : ... : xn]. For each set S of homogeneous polynomials, define the zero-locus of S to be the set of points in Pn on which the functions in S vanish:
{\displaystyle Z(S)=\{x\in \mathbf {P} ^{n}\mid f(x)=0{\text{ for all }}f\in S\}.}
A subset V of Pn is called a projective algebraic set if V = Z(S) for some S.: 9 An irreducible projective algebraic set is called a projective variety.: 10
Projective varieties are also equipped with the Zariski topology by declaring all algebraic sets to be closed.
Given a subset V of Pn, let I(V) be the ideal generated by all homogeneous polynomials vanishing on V. For any projective algebraic set V, the coordinate ring of V is the quotient of the polynomial ring by this ideal.: 10
A quasi-projective variety is a Zariski open subset of a projective variety. Notice that every affine variety is quasi-projective. Notice also that the complement of an algebraic set in an affine variety is a quasi-projective variety; in the context of affine varieties, such a quasi-projective variety is usually not called a variety but a constructible set.
=== Abstract varieties ===
In classical algebraic geometry, all varieties were by definition quasi-projective varieties, meaning that they were open subvarieties of closed subvarieties of a projective space. For example, in Chapter 1 of Hartshorne a variety over an algebraically closed field is defined to be a quasi-projective variety,: 15 but from Chapter 2 onwards, the term variety (also called an abstract variety) refers to a more general object, which locally is a quasi-projective variety, but when viewed as a whole is not necessarily quasi-projective; i.e. it might not have an embedding into projective space.: 105 So classically the definition of an algebraic variety required an embedding into projective space, and this embedding was used to define the topology on the variety and the regular functions on the variety. The disadvantage of such a definition is that not all varieties come with natural embeddings into projective space. For example, under this definition, the product P1 × P1 is not a variety until it is embedded into a larger projective space; this is usually done by the Segre embedding. Furthermore, any variety that admits one embedding into projective space admits many others, for example by composing the embedding with the Veronese embedding; thus many notions that should be intrinsic, such as that of a regular function, are not obviously so.
The earliest successful attempt to define an algebraic variety abstractly, without an embedding, was made by André Weil in his Foundations of Algebraic Geometry, using valuations. Claude Chevalley made a definition of a scheme, which served a similar purpose, but was more general. However, Alexander Grothendieck's definition of a scheme is more general still and has received the most widespread acceptance. In Grothendieck's language, an abstract algebraic variety is usually defined to be an integral, separated scheme of finite type over an algebraically closed field,: 104–105 although some authors drop the irreducibility or the reducedness or the separateness condition or allow the underlying field to be not algebraically closed. Classical algebraic varieties are the quasiprojective integral separated finite type schemes over an algebraically closed field.
==== Existence of non-quasiprojective abstract algebraic varieties ====
One of the earliest examples of a non-quasiprojective algebraic variety was given by Nagata. Nagata's example was not complete (the analog of compactness), but soon afterwards he found an algebraic surface that was complete and non-projective.: Remark 4.10.2 p.105 Since then other examples have been found: for example, it is straightforward to construct toric varieties that are not quasi-projective but complete.
== Examples ==
=== Subvariety ===
A subvariety is a subset of a variety that is itself a variety (with respect to the topological structure induced by the ambient variety). For example, every open subset of a variety is a variety. See also closed immersion.
Hilbert's Nullstellensatz says that closed subvarieties of an affine or projective variety are in one-to-one correspondence with the prime ideals or non-irrelevant homogeneous prime ideals of the coordinate ring of the variety.
=== Affine variety ===
==== Example 1 ====
Let k = C, and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let the subset S of C[x, y] contain a single element f (x, y):
{\displaystyle f(x,y)=x+y-1.}
The zero-locus of f (x, y) is the set of points in A2 on which this function vanishes: it is the set of all pairs of complex numbers (x, y) such that y = 1 − x. This is called a line in the affine plane. (In the classical topology coming from the topology on the complex numbers, a complex line is a real manifold of dimension two.) This is the set Z( f ):
{\displaystyle Z(f)=\{(x,1-x)\in \mathbf {C} ^{2}\}.}
Thus the subset V = Z( f ) of A2 is an algebraic set. The set V is not empty. It is irreducible, as it cannot be written as the union of two proper algebraic subsets. Thus it is an affine algebraic variety.
==== Example 2 ====
Let k = C, and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let the subset S of C[x, y] contain a single element g(x, y):
{\displaystyle g(x,y)=x^{2}+y^{2}-1.}
The zero-locus of g(x, y) is the set of points in A2 on which this function vanishes, that is the set of points (x,y) such that x2 + y2 = 1. As g(x, y) is an absolutely irreducible polynomial, this is an algebraic variety. The set of its real points (that is the points for which x and y are real numbers), is known as the unit circle; this name is also often given to the whole variety.
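The unit circle admits a classical rational parametrization, which can be checked point-by-point with exact arithmetic. This sketch is an illustrative aside, not part of the text; the parametrization ((1 − t²)/(1 + t²), 2t/(1 + t²)) is the standard one:

```python
from fractions import Fraction

def circle_point(t):
    """Rational point ((1-t^2)/(1+t^2), 2t/(1+t^2)) on x^2 + y^2 = 1."""
    t = Fraction(t)
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

# g(x, y) = x^2 + y^2 - 1 vanishes exactly (no floating-point error).
for t in range(-10, 11):
    x, y = circle_point(t)
    assert x * x + y * y == 1
```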
==== Example 3 ====
The following example is neither a hypersurface, nor a linear space, nor a single point. Let A3 be the three-dimensional affine space over C. The set of points (x, x2, x3) for x in C is an algebraic variety, and more precisely an algebraic curve that is not contained in any plane. It is the twisted cubic shown in the above figure. It may be defined by the equations
{\displaystyle {\begin{aligned}y-x^{2}&=0\\z-x^{3}&=0\end{aligned}}}
The irreducibility of this algebraic set needs a proof. One approach in this case is to check that the projection (x, y, z) → (x, y) is injective on the set of the solutions and that its image is an irreducible plane curve.
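The projection argument can be sanity-checked computationally (an illustrative sketch, not a proof): every solution of y − x² = z − x³ = 0 is determined by its x-coordinate, and y³ − z² is an algebraic consequence of the two defining equations, i.e. an element of the curve's ideal.

```python
# The twisted cubic: solutions of y - x^2 = 0, z - x^3 = 0.
def on_curve(x, y, z):
    return y - x**2 == 0 and z - x**3 == 0

for t in range(-20, 21):
    x, y, z = t, t**2, t**3
    assert on_curve(x, y, z)
    assert y**3 - z**2 == 0   # a consequence of the defining equations
    # The projection (x, y, z) -> (x, y) is injective on solutions,
    # since z = x^3 is forced by the second equation.
    assert z == x**3
```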
For more difficult examples, a similar proof may always be given, but may imply a difficult computation: first a Gröbner basis computation to compute the dimension, followed by a random linear change of variables (not always needed); then a Gröbner basis computation for another monomial ordering to compute the projection and to prove that it is generically injective and that its image is a hypersurface, and finally a polynomial factorization to prove the irreducibility of the image.
==== General linear group ====
The set of n-by-n matrices over the base field k can be identified with the affine n²-space {\displaystyle \mathbb {A} ^{n^{2}}} with coordinates {\displaystyle x_{ij}} such that {\displaystyle x_{ij}(A)} is the (i, j)-th entry of the matrix {\displaystyle A}. The determinant {\displaystyle \det } is then a polynomial in {\displaystyle x_{ij}} and thus defines the hypersurface {\displaystyle H=V(\det )} in {\displaystyle \mathbb {A} ^{n^{2}}}. The complement of {\displaystyle H} is then an open subset of {\displaystyle \mathbb {A} ^{n^{2}}} that consists of all the invertible n-by-n matrices, the general linear group {\displaystyle \operatorname {GL} _{n}(k)}. It is an affine variety, since, in general, the complement of a hypersurface in an affine variety is affine. Explicitly, consider {\displaystyle \mathbb {A} ^{n^{2}}\times \mathbb {A} ^{1}} where the affine line is given coordinate t. Then {\displaystyle \operatorname {GL} _{n}(k)} amounts to the zero-locus in {\displaystyle \mathbb {A} ^{n^{2}}\times \mathbb {A} ^{1}} of the polynomial in {\displaystyle x_{ij},t}:
{\displaystyle t\cdot \det[x_{ij}]-1,}
i.e., the set of matrices A such that {\displaystyle t\det(A)=1} has a solution. This is best seen algebraically: the coordinate ring of {\displaystyle \operatorname {GL} _{n}(k)} is the localization {\displaystyle k[x_{ij}\mid 0\leq i,j\leq n][{\det }^{-1}]}, which can be identified with {\displaystyle k[x_{ij},t\mid 0\leq i,j\leq n]/(t\det -1)}.
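The description of GL_n as the zero-locus of t·det − 1 can be illustrated numerically: an extra coordinate t exists exactly when the matrix is invertible. A minimal sketch for n = 2 (the matrices chosen are arbitrary; exact rationals avoid round-off):

```python
from fractions import Fraction

def det2(a):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(1)]]  # invertible
d = det2(A)
t = 1 / d                     # the extra coordinate t = det(A)^(-1)
assert t * d - 1 == 0         # (A, t) lies on the zero-locus of t*det - 1

B = [[Fraction(1), Fraction(2)], [Fraction(2), Fraction(4)]]  # singular
assert det2(B) == 0           # no t can satisfy t * det(B) = 1
```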
The multiplicative group k* of the base field k is the same as {\displaystyle \operatorname {GL} _{1}(k)} and thus is an affine variety. A finite product of it {\displaystyle (k^{*})^{r}} is an algebraic torus, which is again an affine variety.
A general linear group is an example of a linear algebraic group, an affine variety that has a structure of a group in such a way that the group operations are morphisms of varieties.
==== Characteristic variety ====
Let A be a not-necessarily-commutative algebra over a field k. Even if A is not commutative, it can still happen that A has a {\displaystyle \mathbb {Z} }-filtration so that the associated ring
{\displaystyle \operatorname {gr} A=\bigoplus _{i=-\infty }^{\infty }A_{i}/{A_{i-1}}}
is commutative, reduced and finitely generated as a k-algebra; i.e., {\displaystyle \operatorname {gr} A} is the coordinate ring of an affine (reducible) variety X. For example, if A is the universal enveloping algebra of a finite-dimensional Lie algebra {\displaystyle {\mathfrak {g}}}, then {\displaystyle \operatorname {gr} A} is a polynomial ring (the PBW theorem); more precisely, the coordinate ring of the dual vector space {\displaystyle {\mathfrak {g}}^{*}}.
Let M be a filtered module over A (i.e., {\displaystyle A_{i}M_{j}\subset M_{i+j}}). If {\displaystyle \operatorname {gr} M} is finitely generated as a {\displaystyle \operatorname {gr} A}-algebra, then the support of {\displaystyle \operatorname {gr} M} in X, i.e., the locus where {\displaystyle \operatorname {gr} M} does not vanish, is called the characteristic variety of M. The notion plays an important role in the theory of D-modules.
=== Projective variety ===
A projective variety is a closed subvariety of a projective space. That is, it is the zero locus of a set of homogeneous polynomials that generate a prime ideal.
==== Example 1 ====
A plane projective curve is the zero locus of an irreducible homogeneous polynomial in three indeterminates. The projective line P1 is an example of a projective curve; it can be viewed as the curve in the projective plane P2 = {[x, y, z]} defined by x = 0. For another example, first consider the affine cubic curve
{\displaystyle y^{2}=x^{3}-x.}
in the 2-dimensional affine space (over a field of characteristic not two). It has the associated cubic homogeneous polynomial equation:
{\displaystyle y^{2}z=x^{3}-xz^{2},}
which defines a curve in P2 called an elliptic curve. The curve has genus one (genus formula); in particular, it is not isomorphic to the projective line P1, which has genus zero. Using genus to distinguish curves is very basic: in fact, the genus is the first invariant one uses to classify curves (see also the construction of moduli of algebraic curves).
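The relation between the affine and homogeneous equations can be sketched directly (an illustrative check, not from the text): setting z = 1 recovers the affine curve, vanishing is invariant under scaling the coordinates because the polynomial is homogeneous, and [0 : 1 : 0] is the one extra point at infinity.

```python
def F(x, y, z):
    """Homogeneous cubic F = y^2 z - x^3 + x z^2 defining the curve in P^2."""
    return y**2 * z - x**3 + x * z**2

# On the affine chart z = 1, F = 0 is exactly y^2 = x^3 - x.
for x, y in [(0, 0), (1, 0), (-1, 0)]:
    assert y**2 == x**3 - x and F(x, y, 1) == 0

# Vanishing is scale-invariant since F is homogeneous of degree 3.
x, y, z = 1, 0, 1
for lam in (2, 3, -5):
    assert F(lam * x, lam * y, lam * z) == lam**3 * F(x, y, z) == 0

# With z = 0, F = -x^3, so the only curve point at infinity is [0 : 1 : 0].
assert F(0, 1, 0) == 0
```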
==== Example 2: Grassmannian ====
Let V be a finite-dimensional vector space. The Grassmannian variety Gn(V) is the set of all n-dimensional subspaces of V. It is a projective variety: it is embedded into a projective space via the Plücker embedding:
{\displaystyle {\begin{cases}G_{n}(V)\hookrightarrow \mathbf {P} \left(\wedge ^{n}V\right)\\\langle b_{1},\ldots ,b_{n}\rangle \mapsto [b_{1}\wedge \cdots \wedge b_{n}]\end{cases}}}
where bi are any set of linearly independent vectors in V,
{\displaystyle \wedge ^{n}V}
is the n-th exterior power of V, and the bracket [w] means the line spanned by the nonzero vector w.
The Grassmannian variety comes with a natural vector bundle (or locally free sheaf in other terminology) called the tautological bundle, which is important in the study of characteristic classes such as Chern classes.
==== Jacobian variety and abelian variety ====
Let C be a smooth complete curve and {\displaystyle \operatorname {Pic} (C)} the Picard group of it; i.e., the group of isomorphism classes of line bundles on C. Since C is smooth, {\displaystyle \operatorname {Pic} (C)} can be identified as the divisor class group of C and thus there is the degree homomorphism {\displaystyle \operatorname {deg} :\operatorname {Pic} (C)\to \mathbb {Z} }. The Jacobian variety {\displaystyle \operatorname {Jac} (C)} of C is the kernel of this degree map; i.e., the group of the divisor classes on C of degree zero. A Jacobian variety is an example of an abelian variety, a complete variety with a compatible abelian group structure on it (the name "abelian" is however not because it is an abelian group). An abelian variety turns out to be projective (in short, algebraic theta functions give an embedding into a projective space; see equations defining abelian varieties); thus, {\displaystyle \operatorname {Jac} (C)} is a projective variety. The tangent space to {\displaystyle \operatorname {Jac} (C)} at the identity element is naturally isomorphic to {\displaystyle \operatorname {H} ^{1}(C,{\mathcal {O}}_{C});} hence, the dimension of {\displaystyle \operatorname {Jac} (C)} is the genus of {\displaystyle C}.
Fix a point {\displaystyle P_{0}} on {\displaystyle C}. For each integer {\displaystyle n>0}, there is a natural morphism
{\displaystyle C^{n}\to \operatorname {Jac} (C),\,(P_{1},\dots ,P_{n})\mapsto [P_{1}+\cdots +P_{n}-nP_{0}]}
where {\displaystyle C^{n}} is the product of n copies of C. For {\displaystyle g=1} (i.e., C is an elliptic curve), the above morphism for {\displaystyle n=1} turns out to be an isomorphism;: Ch. IV, Example 1.3.7. in particular, an elliptic curve is an abelian variety.
==== Moduli varieties ====
Given an integer {\displaystyle g\geq 0}, the set of isomorphism classes of smooth complete curves of genus {\displaystyle g} is called the moduli of curves of genus {\displaystyle g} and is denoted as {\displaystyle {\mathfrak {M}}_{g}}. There are a few ways to show this moduli has a structure of a possibly reducible algebraic variety; for example, one way is to use geometric invariant theory, which ensures a set of isomorphism classes has a (reducible) quasi-projective variety structure. Moduli such as the moduli of curves of fixed genus are typically not projective varieties; roughly the reason is that a degeneration (limit) of a smooth curve tends to be non-smooth or reducible. This leads to the notion of a stable curve of genus {\displaystyle g\geq 2}, a not-necessarily-smooth complete curve with no terribly bad singularities and a not-so-large automorphism group. The moduli of stable curves {\displaystyle {\overline {\mathfrak {M}}}_{g}}, the set of isomorphism classes of stable curves of genus {\displaystyle g\geq 2}, is then a projective variety which contains {\displaystyle {\mathfrak {M}}_{g}} as an open dense subset. Since {\displaystyle {\overline {\mathfrak {M}}}_{g}} is obtained by adding boundary points to {\displaystyle {\mathfrak {M}}_{g}}, {\displaystyle {\overline {\mathfrak {M}}}_{g}} is colloquially said to be a compactification of {\displaystyle {\mathfrak {M}}_{g}}. Historically a paper of Mumford and Deligne introduced the notion of a stable curve to show {\displaystyle {\mathfrak {M}}_{g}} is irreducible when {\displaystyle g\geq 2}.
The moduli of curves exemplifies a typical situation: a moduli of nice objects tends not to be projective but only quasi-projective. Another case is a moduli of vector bundles on a curve. Here, there are the notions of stable and semistable vector bundles on a smooth complete curve {\displaystyle C}. The moduli of semistable vector bundles of a given rank {\displaystyle n} and a given degree {\displaystyle d} (the degree of the determinant of the bundle) is then a projective variety denoted as {\displaystyle SU_{C}(n,d)}, which contains the set {\displaystyle U_{C}(n,d)} of isomorphism classes of stable vector bundles of rank {\displaystyle n} and degree {\displaystyle d} as an open subset. Since a line bundle is stable, such a moduli is a generalization of the Jacobian variety of {\displaystyle C}.
In general, in contrast to the case of moduli of curves, a compactification of a moduli need not be unique and, in some cases, different non-equivalent compactifications are constructed using different methods and by different authors. An example over {\displaystyle \mathbb {C} } is the problem of compactifying {\displaystyle D/\Gamma }, the quotient of a bounded symmetric domain {\displaystyle D} by an action of an arithmetic discrete group {\displaystyle \Gamma }. A basic example of {\displaystyle D/\Gamma } is when {\displaystyle D={\mathfrak {H}}_{g}}, Siegel's upper half-space, and {\displaystyle \Gamma } is commensurable with {\displaystyle \operatorname {Sp} (2g,\mathbb {Z} )}; in that case, {\displaystyle D/\Gamma } has an interpretation as the moduli {\displaystyle {\mathfrak {A}}_{g}} of principally polarized complex abelian varieties of dimension {\displaystyle g} (a principal polarization identifies an abelian variety with its dual). The theory of toric varieties (or torus embeddings) gives a way to compactify {\displaystyle D/\Gamma }, a toroidal compactification of it. But there are other ways to compactify {\displaystyle D/\Gamma }; for example, there is the minimal compactification of {\displaystyle D/\Gamma } due to Baily and Borel: it is the projective variety associated to the graded ring formed by modular forms (in the Siegel case, Siegel modular forms; see also Siegel modular variety). The non-uniqueness of compactifications is due to the lack of moduli interpretations of those compactifications; i.e., they do not represent (in the category-theory sense) any natural moduli problem or, in the precise language, there is no natural moduli stack that would be an analog of the moduli stack of stable curves.
=== Non-affine and non-projective example ===
An algebraic variety can be neither affine nor projective. To give an example, let X = P1 × A1 and p: X → A1 the projection. Here X is an algebraic variety since it is a product of varieties. It is not affine since P1 is a closed subvariety of X (as the zero locus of p), but an affine variety cannot contain a projective variety of positive dimension as a closed subvariety. It is not projective either, since there is a nonconstant regular function on X; namely, p.
Another example of a non-affine non-projective variety is X = A2 − (0, 0) (cf. Morphism of varieties § Examples.)
=== Non-examples ===
Consider the affine line {\displaystyle \mathbb {A} ^{1}} over {\displaystyle \mathbb {C} }. The complement of the circle {\displaystyle \{z\in \mathbb {C} {\text{ with }}|z|^{2}=1\}} in {\displaystyle \mathbb {A} ^{1}=\mathbb {C} } is not an algebraic variety (nor even an algebraic set). Note that {\displaystyle |z|^{2}-1} is not a polynomial in {\displaystyle z} (although it is a polynomial in the real coordinates {\displaystyle x,y}). On the other hand, the complement of the origin in {\displaystyle \mathbb {A} ^{1}=\mathbb {C} } is an algebraic (affine) variety, since the origin is the zero-locus of {\displaystyle z}. This may be explained as follows: the affine line has dimension one and so any subvariety of it other than itself must have strictly less dimension; namely, zero.
For similar reasons, a unitary group (over the complex numbers) is not an algebraic variety, while the special linear group {\displaystyle \operatorname {SL} _{n}(\mathbb {C} )} is a closed subvariety of {\displaystyle \operatorname {GL} _{n}(\mathbb {C} )}, the zero-locus of {\displaystyle \det -1}. (Over a different base field, a unitary group can however be given a structure of a variety.)
== Basic results ==
An affine algebraic set V is a variety if and only if I(V) is a prime ideal; equivalently, V is a variety if and only if its coordinate ring is an integral domain.: 52 : 4
Every nonempty affine algebraic set may be written uniquely as a finite union of algebraic varieties (where none of the varieties in the decomposition is a subvariety of any other).: 5
The dimension of a variety may be defined in various equivalent ways. See Dimension of an algebraic variety for details.
A product of finitely many algebraic varieties (over an algebraically closed field) is an algebraic variety. A finite product of affine varieties is affine and a finite product of projective varieties is projective.
== Isomorphism of algebraic varieties ==
Let V1, V2 be algebraic varieties. We say V1 and V2 are isomorphic, and write V1 ≅ V2, if there are regular maps φ : V1 → V2 and ψ : V2 → V1 such that the compositions ψ ∘ φ and φ ∘ ψ are the identity maps on V1 and V2 respectively.
== Discussion and generalizations ==
The basic definitions and facts above enable one to do classical algebraic geometry. To be able to do more — for example, to deal with varieties over fields that are not algebraically closed — some foundational changes are required. The modern notion of a variety is considerably more abstract than the one above, though equivalent in the case of varieties over algebraically closed fields. An abstract algebraic variety is a particular kind of scheme; the generalization to schemes on the geometric side enables an extension of the correspondence described above to a wider class of rings. A scheme is a locally ringed space such that every point has a neighbourhood that, as a locally ringed space, is isomorphic to a spectrum of a ring. Basically, a variety over k is a scheme whose structure sheaf is a sheaf of k-algebras with the property that the rings R that occur above are all integral domains and are all finitely generated k-algebras, that is to say, they are quotients of polynomial algebras by prime ideals.
This definition works over any field k. It allows one to glue affine varieties (along common open sets) without worrying whether the resulting object can be put into some projective space. This also leads to difficulties, since one can introduce somewhat pathological objects, e.g. an affine line with zero doubled. Such objects are usually not considered varieties, and are eliminated by requiring the schemes underlying a variety to be separated. (Strictly speaking, there is also a third condition, namely, that one needs only finitely many affine patches in the definition above.)
Some modern researchers also remove the restriction on a variety having integral domain affine charts, and when speaking of a variety only require that the affine charts have trivial nilradical.
A complete variety is a variety such that any map from an open subset of a nonsingular curve into it can be extended uniquely to the whole curve. Every projective variety is complete, but not vice versa.
These varieties have been called "varieties in the sense of Serre", since Serre's foundational paper FAC
on sheaf cohomology was written for them. They remain typical objects to start studying in algebraic geometry, even if more general objects are also used in an auxiliary way.
One way that leads to generalizations is to allow reducible algebraic sets (and fields k that aren't algebraically closed), so the rings R may not be integral domains. A more significant modification is to allow nilpotents in the sheaf of rings, that is, rings which are not reduced. This is one of several generalizations of classical algebraic geometry that are built into Grothendieck's theory of schemes.
Allowing nilpotent elements in rings is related to keeping track of "multiplicities" in algebraic geometry. For example, the closed subscheme of the affine line defined by x2 = 0 is different from the subscheme defined by x = 0 (the origin). More generally, the fiber of a morphism of schemes X → Y at a point of Y may be non-reduced, even if X and Y are reduced. Geometrically, this says that fibers of good mappings may have nontrivial "infinitesimal" structure.
There are further generalizations called algebraic spaces and stacks.
== Algebraic manifolds ==
An algebraic manifold is an algebraic variety that is also an m-dimensional manifold, and hence every sufficiently small local patch is isomorphic to k^m. Equivalently, the variety is smooth (free from singular points). When k is the real numbers, R, algebraic manifolds are called Nash manifolds. Algebraic manifolds can be defined as the zero set of a finite collection of analytic algebraic functions. Projective algebraic manifolds are exactly the smooth projective varieties. The Riemann sphere is one example.
== See also ==
Variety (disambiguation) — listing also several mathematical meanings
Function field of an algebraic variety
Birational geometry
Motive (algebraic geometry)
Analytic variety
Zariski–Riemann space
Semi-algebraic set
Fano variety
Mnëv's universality theorem
== Notes ==
== References ==
=== Sources ===
This article incorporates material from Isomorphism of varieties on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Algebraic_set |
The sheaf of rational functions KX of a scheme X is the generalization to scheme theory of the notion of function field of an algebraic variety in classical algebraic geometry. In the case of algebraic varieties, such a sheaf associates to each open set U the ring of all rational functions on that open set; in other words, KX(U) is the set of fractions of regular functions on U. Despite its name, KX does not always give a field for a general scheme X.
== Simple cases ==
In the simplest cases, the definition of KX is straightforward. If X is an (irreducible) affine algebraic variety, and if U is an open subset of X, then KX(U) will be the fraction field of the ring of regular functions on U. Because X is affine, the ring of regular functions on U will be a localization of the global sections of X, and consequently KX will be the constant sheaf whose value is the fraction field of the global sections of X.
If X is integral but not affine, then any non-empty affine open set will be dense in X. This means there is not enough room for a regular function to do anything interesting outside of U, and consequently the behavior of the rational functions on U should determine the behavior of the rational functions on X. In fact, the fraction fields of the rings of regular functions on any affine open set will be the same, so we define, for any U, KX(U) to be the common fraction field of any ring of regular functions on any open affine subset of X. Alternatively, one can define the function field in this case to be the local ring of the generic point.
== General case ==
The trouble starts when X is no longer integral. Then it is possible to have zero divisors in the ring of regular functions, and consequently the fraction field no longer exists. The naive solution is to replace the fraction field by the total quotient ring, that is, to invert every element that is not a zero divisor. Unfortunately, in general, the total quotient ring does not produce a presheaf, much less a sheaf. The well-known article of Kleiman, listed in the bibliography, gives such an example.
The correct solution is to proceed as follows:
For each open set U, let SU be the set of all elements in Γ(U, OX) that are not zero divisors in any stalk OX,x. Let KXpre be the presheaf whose sections on U are localizations SU−1Γ(U, OX) and whose restriction maps are induced from the restriction maps of OX by the universal property of localization. Then KX is the sheaf associated to the presheaf KXpre.
== Further issues ==
Once KX is defined, it is possible to study properties of X which depend only on KX. This is the subject of birational geometry.
If X is an algebraic variety over a field k, then over each open set U we have a field extension KX(U) of k. The dimension of U will be equal to the transcendence degree of this field extension. Conversely, every finitely generated field extension of k arises as the rational function field of some variety.
In the particular case of an algebraic curve C, that is, dimension 1, it follows that any two non-constant functions F and G on C satisfy a polynomial equation P(F,G) = 0.
== See also ==
Cartier divisor, a notion defined in terms of the function field
== Bibliography ==
Kleiman, S., "Misconceptions about KX", Enseign. Math. 25 (1979), 203–206, available at https://www.e-periodica.ch/cntmng?pid=ens-001:1979:25::101 | Wikipedia/Function_field_(scheme_theory) |
In mathematics, vector bundles on algebraic curves may be studied as holomorphic vector bundles on compact Riemann surfaces, which is the classical approach, or as locally free sheaves on algebraic curves C in a more general, algebraic setting (which can for example admit singular points).
Some foundational results on classification were known in the 1950s. The result of Grothendieck (1957), that holomorphic vector bundles on the Riemann sphere are sums of line bundles, is now often called the Birkhoff–Grothendieck theorem, since it is implicit in much earlier work of Birkhoff (1909) on the Riemann–Hilbert problem.
Atiyah (1957) gave the classification of vector bundles on elliptic curves.
The Riemann–Roch theorem for vector bundles was proved by Weil (1938), before the 'vector bundle' concept had any official status (although the associated ruled surfaces were classical objects); see Hirzebruch–Riemann–Roch theorem for his result. He was seeking a generalization of the Jacobian variety, by passing from holomorphic line bundles to higher rank. This idea would prove fruitful in terms of moduli spaces of vector bundles, following on the work in the 1960s on geometric invariant theory.
== See also ==
Hitchin system
== References ==
Atiyah, M. (1957). "Vector bundles over an elliptic curve". Proc. London Math. Soc. VII: 414–452. doi:10.1112/plms/s3-7.1.414. Also in Collected Works vol. I
Birkhoff, George David (1909). "Singular points of ordinary linear differential equations". Transactions of the American Mathematical Society. 10 (4): 436–470. doi:10.2307/1988594. ISSN 0002-9947. JFM 40.0352.02. JSTOR 1988594.
Grothendieck, A. (1957). "Sur la classification des fibrés holomorphes sur la sphère de Riemann". Amer. J. Math. 79 (1): 121–138. doi:10.2307/2372388. JSTOR 2372388.
Weil, André (1938). "Zur algebraischen Theorie der algebraischen Funktionen". Journal für die reine und angewandte Mathematik. 179: 129–133. doi:10.1515/crll.1938.179.129. | Wikipedia/Vector_bundles_on_algebraic_curves |
In mathematics, in particular field theory, the conjugate elements or algebraic conjugates of an algebraic element α, over a field extension L/K, are the roots of the minimal polynomial p_{K,α}(x) of α over K. Conjugate elements are commonly called conjugates in contexts where this is not ambiguous. Normally α itself is included in the set of conjugates of α.
Equivalently, the conjugates of α are the images of α under the field automorphisms of L that leave fixed the elements of K. The equivalence of the two definitions is one of the starting points of Galois theory.
The concept generalizes complex conjugation, since the algebraic conjugates over R of a complex number are the number itself and its complex conjugate.
== Example ==
The cube roots of the number one are 1, −1/2 + (√3/2)i, and −1/2 − (√3/2)i.
The latter two roots are conjugate elements in Q[i√3] with minimal polynomial

(x + 1/2)^2 + 3/4 = x^2 + x + 1.
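This can be checked numerically; a minimal Python sketch (an illustration, not part of the original article) verifies that both conjugates are roots of the shared minimal polynomial:

```python
# Numerical check: the two non-real cube roots of unity are algebraic
# conjugates, both roots of the shared minimal polynomial x^2 + x + 1.
import math

w1 = complex(-0.5, math.sqrt(3) / 2)
w2 = w1.conjugate()

def minpoly(z):
    return z * z + z + 1  # x^2 + x + 1

for z in (w1, w2):
    assert abs(minpoly(z)) < 1e-12   # both satisfy the minimal polynomial
    assert abs(z ** 3 - 1) < 1e-12   # and both are cube roots of 1
print("both conjugates are roots of x^2 + x + 1")
```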
== Properties ==
If K is given inside an algebraically closed field C, then the conjugates can be taken inside C. If no such C is specified, one can take the conjugates in some relatively small field L. The smallest possible choice for L is to take a splitting field over K of pK,α, containing α. If L is any normal extension of K containing α, then by definition it already contains such a splitting field.
Given then a normal extension L of K, with automorphism group Aut(L/K) = G, and containing α, any element g(α) for g in G will be a conjugate of α, since the automorphism g sends roots of p to roots of p. Conversely any conjugate β of α is of this form: in other words, G acts transitively on the conjugates. This follows as K(α) is K-isomorphic to K(β) by irreducibility of the minimal polynomial, and any isomorphism of fields F and F' that maps polynomial p to p' can be extended to an isomorphism of the splitting fields of p over F and p' over F', respectively.
In summary, the conjugate elements of α are found, in any normal extension L of K that contains K(α), as the set of elements g(α) for g in Aut(L/K). The number of repeats in that list of each element is the separable degree [L:K(α)]sep.
A theorem of Kronecker states that if α is a nonzero algebraic integer such that α and all of its conjugates in the complex numbers have absolute value at most 1, then α is a root of unity. There are quantitative forms of this, stating more precisely bounds (depending on degree) on the largest absolute value of a conjugate that imply that an algebraic integer is a root of unity.
== References ==
David S. Dummit, Richard M. Foote, Abstract algebra, 3rd ed., Wiley, 2004.
== External links ==
Weisstein, Eric W. "Conjugate Elements". MathWorld. | Wikipedia/Algebraic_conjugate |
In mathematics, if A is an associative algebra over K, then an element a of A is an algebraic element over K, or just algebraic over K, if there exists some non-zero polynomial g(x) ∈ K[x] with coefficients in K such that g(a) = 0. Elements of A that are not algebraic over K are transcendental over K. A special case of an associative algebra over K is an extension field L of K.
These notions generalize the algebraic numbers and the transcendental numbers (where the field extension is C/Q, with C being the field of complex numbers and Q being the field of rational numbers).
== Examples ==
The square root of 2 is algebraic over Q, since it is the root of the polynomial g(x) = x^2 − 2 whose coefficients are rational.
Pi is transcendental over Q but algebraic over the field of real numbers R: it is the root of g(x) = x − π, whose coefficients (1 and −π) are both real, but not of any polynomial with only rational coefficients. (The definition of the term transcendental number uses C/Q, not C/R.)
== Properties ==
The following conditions are equivalent for an element a of an extension field L of K:

a is algebraic over K,
the field extension K(a)/K is algebraic, i.e. every element of K(a) is algebraic over K (here K(a) denotes the smallest subfield of L containing K and a),
the field extension K(a)/K has finite degree, i.e. the dimension of K(a) as a K-vector space is finite,
K[a] = K(a), where K[a] is the set of all elements of L that can be written in the form g(a) with a polynomial g whose coefficients lie in K.
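The last condition, K[a] = K(a), can be made concrete for a = √2 over K = Q: in Q[X]/(X^2 − 2) every nonzero element is already invertible, so no genuine fractions are needed. A small sketch (the coefficient-pair representation is an assumption of the example, not from the article):

```python
# Elements of Q(sqrt(2)) ≅ Q[X]/(X^2 - 2), stored as coefficient pairs
# (c0, c1) representing c0 + c1*sqrt(2). Multiplication reduces X^2 -> 2,
# and every nonzero element has an inverse of the same shape, illustrating
# K[a] = K(a).
from fractions import Fraction

def mul(a, b):
    # (a0 + a1 X)(b0 + b1 X) reduced modulo X^2 - 2
    return (a[0] * b[0] + 2 * a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def inv(a):
    # (a0 + a1 X)^(-1) = (a0 - a1 X)/(a0^2 - 2 a1^2); the denominator is
    # nonzero for (a0, a1) != (0, 0) because sqrt(2) is irrational.
    d = a[0] * a[0] - 2 * a[1] * a[1]
    return (a[0] / d, -a[1] / d)

a = (Fraction(3), Fraction(5))        # 3 + 5*sqrt(2)
assert mul(a, inv(a)) == (1, 0)       # inverse found without leaving Q[sqrt(2)]
print("inv(3 + 5*sqrt(2)) =", inv(a))
```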
To make this more explicit, consider the polynomial evaluation ε_a : K[X] → K(a), P ↦ P(a). This is a homomorphism and its kernel is {P ∈ K[X] | P(a) = 0}. If a is algebraic, this ideal contains non-zero polynomials, but as K[X] is a Euclidean domain, it contains a unique polynomial p with minimal degree and leading coefficient 1, which then also generates the ideal and must be irreducible. The polynomial p is called the minimal polynomial of a and it encodes many important properties of a. Hence the ring isomorphism K[X]/(p) → im(ε_a) obtained by the homomorphism theorem is an isomorphism of fields, and we can then observe that im(ε_a) = K(a). Otherwise, ε_a is injective and hence we obtain a field isomorphism K(X) → K(a), where K(X) is the field of fractions of K[X], i.e. the field of rational functions over K, by the universal property of the field of fractions. We can conclude that in either case we find an isomorphism K(a) ≅ K[X]/(p) or K(a) ≅ K(X). Investigating this construction yields the desired results.
This characterization can be used to show that the sum, difference, product and quotient of algebraic elements over K are again algebraic over K. For if a and b are both algebraic, then (K(a))(b) is finite. As it contains the aforementioned combinations of a and b, adjoining one of them to K also yields a finite extension, and therefore these elements are algebraic as well. Thus the set of all elements of L that are algebraic over K is a field that sits in between L and K.
Fields that do not allow any algebraic elements over them (other than their own elements) are called algebraically closed. The field of complex numbers is an example. If L is algebraically closed, then the field of algebraic elements of L over K is algebraically closed, which can again be shown directly using the characterisation of simple algebraic extensions above. An example of this is the field of algebraic numbers.
== See also ==
Algebraic independence
== References ==
== Further reading ==
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001 | Wikipedia/Algebraic_element |
In mathematics, the gonality of an algebraic curve C is defined as the lowest degree of a nonconstant rational map from C to the projective line. In more algebraic terms, if C is defined over the field K and K(C) denotes the function field of C, then the gonality is the minimum value taken by the degrees of field extensions
K(C)/K(f)
of the function field over its subfields generated by single functions f.
If K is algebraically closed, then the gonality is 1 precisely for curves of genus 0. The gonality is 2 for curves of genus 1 (elliptic curves) and for hyperelliptic curves (this includes all curves of genus 2). For genus g ≥ 3 it is no longer the case that the genus determines the gonality. The gonality of the generic curve of genus g is the floor function of
(g + 3)/2.
Trigonal curves are those with gonality 3, and this case gave rise to the name in general. Trigonal curves include the Picard curves, of genus three and given by an equation
y3 = Q(x)
where Q is of degree 4.
The gonality conjecture, of M. Green and R. Lazarsfeld, predicts that the gonality of the algebraic curve C can be calculated by homological algebra means, from a minimal resolution of an invertible sheaf of high degree. In many cases the gonality is two more than the Clifford index. The Green–Lazarsfeld conjecture is an exact formula in terms of the graded Betti numbers for a degree d embedding in r dimensions, for d large with respect to the genus. Writing b(C), with respect to a given such embedding of C and the minimal free resolution for its homogeneous coordinate ring, for the minimum index i for which β_{i,i+1} is zero, then the conjectured formula for the gonality is
r + 1 − b(C).
According to the 1900 ICM talk of Federico Amodeo, the notion (but not the terminology) originated in Section V of Riemann's Theory of Abelian Functions. Amodeo used the term "gonalità" as early as 1893.
== References ==
Eisenbud, David (2005). The Geometry of Syzygies. A second course in commutative algebra and algebraic geometry. Graduate Texts in Mathematics. Vol. 229. New York, NY: Springer-Verlag. pp. 171, 178. ISBN 0-387-22215-4. MR 2103875. Zbl 1066.14001.
Geometric introduction to trigonal curves of genus five
Code for constructing examples of special trigonal curves on GitHub, written in Macaulay2 | Wikipedia/Gonality_of_an_algebraic_curve |
Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve.
The algorithm was published by René Schoof in 1985 and it was a theoretical breakthrough, as it was the first deterministic polynomial time algorithm for counting points on elliptic curves. Before Schoof's algorithm, approaches to counting points on elliptic curves such as the naive and baby-step giant-step algorithms were, for the most part, tedious and had an exponential running time.
This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm.
== Introduction ==
Let E be an elliptic curve defined over the finite field F_q, where q = p^n for p a prime and n an integer ≥ 1. Over a field of characteristic ≠ 2, 3 an elliptic curve can be given by a (short) Weierstrass equation

y^2 = x^3 + Ax + B

with A, B ∈ F_q. The set of points defined over F_q consists of the solutions (a, b) ∈ F_q^2 satisfying the curve equation, together with a point at infinity O. Using the group law on elliptic curves restricted to this set, one can see that this set E(F_q) forms an abelian group, with O acting as the zero element.
In order to count points on an elliptic curve, we compute the cardinality of E(F_q). Schoof's approach to computing the cardinality #E(F_q) makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials.
== Hasse's theorem ==
Hasse's theorem states that if E/F_q is an elliptic curve over the finite field F_q, then #E(F_q) satisfies

|q + 1 − #E(F_q)| ≤ 2√q.
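For small fields the bound is easy to check against a brute-force count; a sketch with arbitrarily chosen curve parameters (not from the article):

```python
# Brute-force point count of y^2 = x^3 + A x + B over F_p for small p,
# then a check of the Hasse bound. Curve parameters chosen arbitrarily.
import math

def count_points(A, B, p):
    # Number of y with y^2 = v (mod p), tabulated in one pass.
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    n = 1  # the point at infinity O
    for x in range(p):
        n += sq.get((x ** 3 + A * x + B) % p, 0)
    return n

p, A, B = 19, 2, 1
N = count_points(A, B, p)
t = p + 1 - N
assert abs(t) <= 2 * math.sqrt(p)  # Hasse's bound
print(f"#E(F_{p}) = {N}, t = {t}")
```

Brute force takes O(p) field operations and is hopeless for cryptographic sizes, which is exactly the gap Schoof's algorithm closes.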
This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down #E(F_q) to a finite (albeit large) set of possibilities. Defining t to be q + 1 − #E(F_q), and making use of this result, we now have that computing the value of t modulo N, where N > 4√q, is sufficient for determining t, and thus #E(F_q). While there is no efficient way to compute t (mod N) directly for general N, it is possible to compute t (mod l) for l a small prime rather efficiently. We choose S = {l_1, l_2, ..., l_r} to be a set of distinct primes such that ∏ l_i = N > 4√q. Given t (mod l_i) for all l_i ∈ S, the Chinese remainder theorem allows us to compute t (mod N).
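The recombination step can be sketched as follows (the numbers below are illustrative, not from an actual curve computation):

```python
# Recovering t from its residues modulo small primes via the Chinese
# remainder theorem, then lifting to the symmetric range allowed by
# Hasse's bound. Illustrative numbers only.
from math import prod

def crt(residues, moduli):
    # Standard CRT for pairwise coprime moduli.
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)   # pow(., -1, m): modular inverse
    return x % N, N

q = 1009
S = [3, 5, 7, 11]                      # product 1155 > 4*sqrt(1009) ~ 127
t_true = 25                            # pretend trace with |t| <= 2*sqrt(q)
x, N = crt([t_true % l for l in S], S)
t = x if x <= N // 2 else x - N        # symmetric lift, since |t| < N/2
assert t == t_true
print("recovered t =", t)
```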
In order to compute t (mod l) for a prime l ≠ p, we make use of the theory of the Frobenius endomorphism φ and division polynomials. Note that considering primes l ≠ p is no loss, since we can always pick a bigger prime to take its place to ensure the product is big enough. In any case Schoof's algorithm is most frequently used in addressing the case q = p, since there are more efficient, so-called p-adic algorithms for small-characteristic fields.
== The Frobenius endomorphism ==
Given the elliptic curve E defined over F_q, we consider points on E over the algebraic closure F̄_q of F_q; i.e. we allow points with coordinates in F̄_q. The Frobenius endomorphism of F̄_q over F_q extends to the elliptic curve by φ : (x, y) ↦ (x^q, y^q).

This map is the identity on E(F_q) and one can extend it to the point at infinity O, making it a group morphism from E(F̄_q) to itself.
The Frobenius endomorphism satisfies a quadratic polynomial which is linked to the cardinality of E(F_q) by the following theorem:

Theorem: The Frobenius endomorphism given by φ satisfies the characteristic equation

φ^2 − tφ + q = 0, where t = q + 1 − #E(F_q).
Thus we have for all P = (x, y) ∈ E that

(x^{q^2}, y^{q^2}) + q(x, y) = t(x^q, y^q),

where + denotes addition on the elliptic curve, and q(x, y) and t(x^q, y^q) denote scalar multiplication of (x, y) by q and of (x^q, y^q) by t.
One could try to symbolically compute these points (x^{q^2}, y^{q^2}), (x^q, y^q) and q(x, y) as functions in the coordinate ring F_q[x, y]/(y^2 − x^3 − Ax − B) of E and then search for a value of t which satisfies the equation. However, the degrees get very large and this approach is impractical.

Schoof's idea was to carry out this computation restricted to points of order l for various small primes l.
Fixing an odd prime l, we now move on to solving the problem of determining t_l, defined as t (mod l), for a given prime l ≠ 2, p. If a point (x, y) is in the l-torsion subgroup E[l] = {P ∈ E(F̄_q) | lP = O}, then qP = q̄P, where q̄ is the unique integer such that q ≡ q̄ (mod l) and |q̄| < l/2.
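Computing the symmetric lift q̄ is a one-liner; a small sketch with illustrative values:

```python
# The symmetric lift qbar of q modulo an odd prime l:
# q ≡ qbar (mod l) with |qbar| < l/2 (unique since l is odd).
def sym_mod(q, l):
    r = q % l
    return r - l if r > l // 2 else r

assert sym_mod(1009, 5) == -1  # 1009 ≡ 4 ≡ -1 (mod 5)
assert sym_mod(1009, 7) == 1   # 1009 ≡ 1 (mod 7)
print(sym_mod(1009, 5), sym_mod(1009, 7))
```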
Note that φ(O) = O and that for any integer r we have rφ(P) = φ(rP). Thus φ(P) will have the same order as P. Hence for (x, y) belonging to E[l], we also have t(x^q, y^q) = t̄(x^q, y^q) if t ≡ t̄ (mod l). We have therefore reduced our problem to solving the equation

(x^{q^2}, y^{q^2}) + q̄(x, y) ≡ t̄(x^q, y^q),

where t̄ and q̄ have integer values in [−(l − 1)/2, (l − 1)/2].
== Computation modulo primes ==
The l-th division polynomial ψ_l is such that its roots are precisely the x-coordinates of points of order l. Thus, restricting the computation of (x^{q^2}, y^{q^2}) + q̄(x, y) to the l-torsion points means computing these expressions as functions in the coordinate ring of E and modulo the l-th division polynomial; i.e. we work in F_q[x, y]/(y^2 − x^3 − Ax − B, ψ_l). This means in particular that the degree of X and Y defined via (X(x, y), Y(x, y)) := (x^{q^2}, y^{q^2}) + q̄(x, y) is at most 1 in y and at most (l^2 − 3)/2 in x.
The scalar multiplication q̄(x, y) can be done either by double-and-add methods or by using the q̄-th division polynomial. The latter approach gives

q̄(x, y) = (x_q̄, y_q̄) = (x − ψ_{q̄−1}ψ_{q̄+1}/ψ_q̄^2, ψ_{2q̄}/(2ψ_q̄^4)),

where ψ_n is the n-th division polynomial. Note that y_q̄/y is a function in x only; denote it by θ(x).
We must split the problem into two cases: the case in which (x^{q^2}, y^{q^2}) ≠ ±q̄(x, y), and the case in which (x^{q^2}, y^{q^2}) = ±q̄(x, y). Note that these equalities are checked modulo ψ_l.
=== Case 1: (x^{q^2}, y^{q^2}) ≠ ±q̄(x, y) ===
By using the addition formula for the group E(F_q) we obtain:

X(x, y) = ((y^{q^2} − y_q̄)/(x^{q^2} − x_q̄))^2 − x^{q^2} − x_q̄.
Note that this computation fails in case the assumption of inequality was wrong.
We are now able to use the x-coordinate to narrow down the choice of t̄ to two possibilities, namely the positive and negative case. Using the y-coordinate one later determines which of the two cases holds.
We first show that X is a function in x alone. Consider

(y^{q^2} − y_q̄)^2 = y^2 (y^{q^2−1} − y_q̄/y)^2.
Since q^2 − 1 is even, we have y^{q^2−1} = (x^3 + Ax + B)^{(q^2−1)/2}, so replacing y^2 by x^3 + Ax + B we can rewrite the squared numerator as a function of x alone:

(y^{q^2} − y_q̄)^2 ≡ (x^3 + Ax + B)((x^3 + Ax + B)^{(q^2−1)/2} − θ(x))^2 mod ψ_l(x).

The denominator (x^{q^2} − x_q̄)^2 and the remaining terms x^{q^2} and x_q̄ are likewise functions of x alone, so X can be computed as a rational function X(x) modulo ψ_l(x).
Now if {\displaystyle X\equiv x_{\bar {t}}^{q}{\bmod {\psi }}_{l}(x)} for some {\displaystyle {\bar {t}}\in [0,(l-1)/2]}, then {\displaystyle {\bar {t}}} satisfies {\displaystyle \phi ^{2}(P)\mp {\bar {t}}\phi (P)+{\bar {q}}P=O} for all l-torsion points P.
As mentioned earlier, using Y and {\displaystyle y_{\bar {t}}^{q}} we are now able to determine which of the two values of {\displaystyle {\bar {t}}} ({\displaystyle {\bar {t}}} or {\displaystyle -{\bar {t}}}) works. This gives the value of {\displaystyle t\equiv {\bar {t}}{\pmod {l}}}. Schoof's algorithm stores the values {\displaystyle {\bar {t}}{\pmod {l}}} in a variable {\displaystyle t_{l}} for each prime l considered.
=== Case 2: ===
{\displaystyle (x^{q^{2}},y^{q^{2}})=\pm {\bar {q}}(x,y)}
We begin with the assumption that {\displaystyle (x^{q^{2}},y^{q^{2}})={\bar {q}}(x,y)}. Since l is an odd prime it cannot be that {\displaystyle {\bar {q}}(x,y)=-{\bar {q}}(x,y)}, and thus {\displaystyle {\bar {t}}\neq 0}. The characteristic equation yields {\displaystyle {\bar {t}}\phi (P)=2{\bar {q}}P}, and consequently {\displaystyle {\bar {t}}^{2}{\bar {q}}\equiv (2q)^{2}{\pmod {l}}}.
This implies that q is a square modulo l. Let {\displaystyle q\equiv w^{2}{\pmod {l}}}. Compute {\displaystyle w\phi (x,y)} in {\displaystyle \mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l})} and check whether {\displaystyle {\bar {q}}(x,y)=w\phi (x,y)}. If so, {\displaystyle t_{l}} is {\displaystyle \pm 2w{\pmod {l}}}, depending on the y-coordinate.
If q turns out not to be a square modulo l, or if the equation does not hold for either of w and {\displaystyle -w}, then our assumption that {\displaystyle (x^{q^{2}},y^{q^{2}})=+{\bar {q}}(x,y)} is false, and thus {\displaystyle (x^{q^{2}},y^{q^{2}})=-{\bar {q}}(x,y)}. The characteristic equation then gives {\displaystyle t_{l}=0}.
=== Additional case ===
{\displaystyle l=2}
Recall that our initial considerations omitted the case {\displaystyle l=2}.
Since we assume q to be odd, {\displaystyle q+1-t\equiv t{\pmod {2}}}, and in particular {\displaystyle t_{2}\equiv 0{\pmod {2}}} if and only if {\displaystyle E(\mathbb {F} _{q})} has an element of order 2. By definition of addition in the group, any element of order 2 must be of the form {\displaystyle (x_{0},0)}. Thus {\displaystyle t_{2}\equiv 0{\pmod {2}}} if and only if the polynomial {\displaystyle x^{3}+Ax+B} has a root in {\displaystyle \mathbb {F} _{q}}, which holds if and only if {\displaystyle \gcd(x^{q}-x,x^{3}+Ax+B)\neq 1}.
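This gcd criterion is straightforward to implement: compute x^q modulo the cubic by square-and-multiply in F_q[x], subtract x, and take a polynomial gcd. A minimal sketch for prime q; the curves in the example are illustrative, not from the text.

```python
# Sketch: determine the parity of t by testing whether x^3 + A*x + B has a
# root in F_q, i.e. whether gcd(x^q - x, x^3 + A*x + B) != 1.
# q is assumed to be an odd prime; polynomials are coefficient lists.

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f.pop()
    return f

def polymod(f, g, q):
    # remainder of f divided by g in F_q[x]
    f = trim([c % q for c in f])
    inv = pow(g[-1], -1, q)
    while len(f) >= len(g) and f != [0]:
        c, shift = f[-1] * inv % q, len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % q
        f = trim(f)
    return f

def polymulmod(f, g, m, q):
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = (r[i + j] + a * b) % q
    return polymod(r, m, q)

def polygcd(f, g, q):
    while g != [0]:
        f, g = g, polymod(f, g, q)
    return f

def t_mod_2(A, B, q):
    cubic = [B % q, A % q, 0, 1]
    xq, base, e = [1], [0, 1], q          # x^q mod cubic, square-and-multiply
    while e:
        if e & 1:
            xq = polymulmod(xq, base, cubic, q)
        base = polymulmod(base, base, cubic, q)
        e >>= 1
    xq = xq + [0] * (2 - len(xq)) if len(xq) < 2 else xq
    xq[1] = (xq[1] - 1) % q               # x^q - x, reduced mod the cubic
    g = polygcd(cubic, trim(xq), q)
    return 0 if len(g) > 1 else 1         # t is even iff the gcd is nontrivial
```

For instance, y² = x³ + 2x + 1 over F_5 has 7 points (odd order, so t is odd), while y² = x³ − x over F_5 has full 2-torsion (t even); both cases are reproduced by the sketch.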
== The algorithm ==
Input:
1. An elliptic curve {\displaystyle E=y^{2}-x^{3}-Ax-B}.
2. An integer q for a finite field {\displaystyle F_{q}} with {\displaystyle q=p^{b},b\geq 1}.
Output:
The number of points of E over {\displaystyle F_{q}}.
Choose a set of odd primes S not containing p such that {\displaystyle N=\prod _{l\in S}l>4{\sqrt {q}}.}
Put {\displaystyle t_{2}=0} if {\displaystyle \gcd(x^{q}-x,x^{3}+Ax+B)\neq 1}, else {\displaystyle t_{2}=1}.
Compute the division polynomial {\displaystyle \psi _{l}}. All computations in the loop below are performed in the ring {\displaystyle \mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l}).}
For {\displaystyle l\in S} do:
    Let {\displaystyle {\bar {q}}} be the unique integer such that {\displaystyle q\equiv {\bar {q}}{\pmod {l}}} and {\displaystyle \mid {\bar {q}}\mid <l/2}.
    Compute {\displaystyle (x^{q},y^{q})}, {\displaystyle (x^{q^{2}},y^{q^{2}})} and {\displaystyle (x_{\bar {q}},y_{\bar {q}})}.
    if {\displaystyle x^{q^{2}}\neq x_{\bar {q}}} then
        Compute {\displaystyle (X,Y)}.
        for {\displaystyle 1\leq {\bar {t}}\leq (l-1)/2} do:
            if {\displaystyle X=x_{\bar {t}}^{q}} then
                if {\displaystyle Y=y_{\bar {t}}^{q}} then {\displaystyle t_{l}={\bar {t}}}; else {\displaystyle t_{l}=-{\bar {t}}}.
    else if q is a square modulo l then
        compute w with {\displaystyle q\equiv w^{2}{\pmod {l}}}
        compute {\displaystyle w(x^{q},y^{q})}
        if {\displaystyle w(x^{q},y^{q})=(x^{q^{2}},y^{q^{2}})} then {\displaystyle t_{l}=2w}
        else if {\displaystyle w(x^{q},y^{q})=(x^{q^{2}},-y^{q^{2}})} then {\displaystyle t_{l}=-2w}
        else {\displaystyle t_{l}=0}
    else {\displaystyle t_{l}=0}
Use the Chinese Remainder Theorem to compute t modulo N from the equations {\displaystyle t\equiv t_{l}{\pmod {l}}}, where {\displaystyle l\in S}.
Output {\displaystyle q+1-t}.
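The final recombination step can be sketched as follows; the prime set and residues below are illustrative rather than the output of a real run.

```python
# Sketch: combine the residues t ≡ t_l (mod l) by the Chinese Remainder
# Theorem, then center the result into (-N/2, N/2]. Since N > 4*sqrt(q)
# and Hasse's theorem gives |t| <= 2*sqrt(q), the centered value is t.
import math

def crt(residues):
    # residues: dict {modulus l: remainder t_l}; moduli pairwise coprime
    N = math.prod(residues)
    t = 0
    for l, tl in residues.items():
        Nl = N // l
        t = (t + tl * Nl * pow(Nl, -1, l)) % N
    return t, N

def trace_from_residues(q, residues):
    t, N = crt(residues)
    if t > N // 2:
        t -= N
    assert t * t <= 4 * q          # Hasse bound |t| <= 2*sqrt(q)
    return t

# Illustrative: q = 61, S = {3, 5, 7} (N = 105 > 4*sqrt(61)),
# residues of t = -6 modulo each prime
t = trace_from_residues(61, {3: 0, 5: 4, 7: 1})
print(t, 61 + 1 - t)   # → -6 68  (the trace and the point count q + 1 - t)
```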
== Complexity ==
Most of the computation is taken by the evaluation of {\displaystyle \phi (P)} and {\displaystyle \phi ^{2}(P)} for each prime {\displaystyle l}, that is, computing {\displaystyle x^{q}}, {\displaystyle y^{q}}, {\displaystyle x^{q^{2}}}, {\displaystyle y^{q^{2}}} for each prime {\displaystyle l}. This involves exponentiation in the ring {\displaystyle R=\mathbb {F} _{q}[x,y]/(y^{2}-x^{3}-Ax-B,\psi _{l})} and requires {\displaystyle O(\log q)} multiplications. Since the degree of {\displaystyle \psi _{l}} is {\displaystyle {\frac {l^{2}-1}{2}}}, each element in the ring is a polynomial of degree {\displaystyle O(l^{2})}. By the prime number theorem, there are around {\displaystyle O(\log q)} primes of size {\displaystyle O(\log q)}, giving that {\displaystyle l} is {\displaystyle O(\log q)} and we obtain that {\displaystyle O(l^{2})=O(\log ^{2}q)}. Thus each multiplication in the ring {\displaystyle R} requires {\displaystyle O(\log ^{4}q)} multiplications in {\displaystyle \mathbb {F} _{q}}, which in turn requires {\displaystyle O(\log ^{2}q)} bit operations. In total, the number of bit operations for each prime {\displaystyle l} is {\displaystyle O(\log ^{7}q)}. Given that this computation needs to be carried out for each of the {\displaystyle O(\log q)} primes, the total complexity of Schoof's algorithm turns out to be {\displaystyle O(\log ^{8}q)}. Using fast polynomial and integer arithmetic reduces this to {\displaystyle {\tilde {O}}(\log ^{5}q)}.
== Improvements to Schoof's algorithm ==
In the 1990s, Noam Elkies, followed by A. O. L. Atkin, devised improvements to Schoof's basic algorithm by restricting the set of primes {\displaystyle S=\{l_{1},\ldots ,l_{s}\}} considered before to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime {\displaystyle l} is called an Elkies prime if the characteristic equation {\displaystyle \phi ^{2}-t\phi +q=0} splits over {\displaystyle \mathbb {F} _{l}}, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials, which come from the study of modular forms and an interpretation of elliptic curves over the complex numbers as lattices. Once we have determined which case we are in, instead of using division polynomials, we are able to work with a polynomial that has lower degree than the corresponding division polynomial: {\displaystyle O(l)} rather than {\displaystyle O(l^{2})}. For efficient implementation, probabilistic root-finding algorithms are used, which makes this a Las Vegas algorithm rather than a deterministic algorithm.
Under the heuristic assumption that approximately half of the primes up to an {\displaystyle O(\log q)} bound are Elkies primes, this yields an algorithm that is more efficient than Schoof's, with an expected running time of {\displaystyle O(\log ^{6}q)} using naive arithmetic, and {\displaystyle {\tilde {O}}(\log ^{4}q)} using fast arithmetic. Although this heuristic assumption is known to hold for most elliptic curves, it is not known to hold in every case, even under the GRH.
== Implementations ==
Several algorithms were implemented in C++ by Mike Scott and are available with source code. The implementations are free (no terms, no conditions), and make use of the MIRACL library which is distributed under the AGPLv3.
Schoof's algorithm implementation for {\displaystyle E(\mathbb {F} _{p})} with prime {\displaystyle p}.
Schoof's algorithm implementation for {\displaystyle E(\mathbb {F} _{2^{m}})}.
== See also ==
Elliptic curve cryptography
Counting points on elliptic curves
Division Polynomials
Frobenius endomorphism
== References ==
R. Schoof: Elliptic Curves over Finite Fields and the Computation of Square Roots mod p. Math. Comp., 44(170):483–494, 1985. Available at http://www.mat.uniroma2.it/~schoof/ctpts.pdf
R. Schoof: Counting Points on Elliptic Curves over Finite Fields. J. Theor. Nombres Bordeaux 7:219–254, 1995. Available at http://www.mat.uniroma2.it/~schoof/ctg.pdf
G. Musiker: Schoof's Algorithm for Counting Points on {\displaystyle E(\mathbb {F} _{q})}. Available at http://www.math.umn.edu/~musiker/schoof.pdf
V. Müller: Die Berechnung der Punktanzahl von elliptischen Kurven über endlichen Primkörpern. Master's Thesis. Universität des Saarlandes, Saarbrücken, 1991. Available at http://lecturer.ukdw.ac.id/vmueller/publications.php
A. Enge: Elliptic Curves and their Applications to Cryptography: An Introduction. Kluwer Academic Publishers, Dordrecht, 1999.
L. C. Washington: Elliptic Curves: Number Theory and Cryptography. Chapman & Hall/CRC, New York, 2003.
N. Koblitz: A Course in Number Theory and Cryptography, Graduate Texts in Math. No. 114, Springer-Verlag, 1987. Second edition, 1994.
The Schoof–Elkies–Atkin algorithm (SEA) is an algorithm used for finding the order of, i.e. counting the number of points on, an elliptic curve over a finite field. Its primary application is in elliptic curve cryptography. The algorithm is an extension of Schoof's algorithm by Noam Elkies and A. O. L. Atkin that significantly improves its efficiency (under heuristic assumptions).
== Details ==
The Elkies–Atkin extension to Schoof's algorithm works by restricting the set of primes {\displaystyle S=\{l_{1},\ldots ,l_{s}\}} considered to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime {\displaystyle l} is called an Elkies prime if the characteristic equation {\displaystyle \phi ^{2}-t\phi +q=0} splits over {\displaystyle \mathbb {F} _{l}}, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials {\displaystyle \Phi _{l}(X,Y)} that parametrize pairs of {\displaystyle l}-isogenous elliptic curves in terms of their j-invariants (in practice alternative modular polynomials may also be used for the same purpose).
If the instantiated polynomial {\displaystyle \Phi _{l}(X,j(E))} has a root {\displaystyle j(E')} in {\displaystyle \mathbb {F} _{q}}, then {\displaystyle l} is an Elkies prime, and we may compute a polynomial {\displaystyle f_{l}(X)} whose roots correspond to points in the kernel of the {\displaystyle l}-isogeny from {\displaystyle E} to {\displaystyle E'}. The polynomial {\displaystyle f_{l}} is a divisor of the corresponding division polynomial used in Schoof's algorithm, and it has significantly lower degree, {\displaystyle O(l)} versus {\displaystyle O(l^{2})}. For Elkies primes, this allows one to compute the number of points on {\displaystyle E} modulo {\displaystyle l} more efficiently than in Schoof's algorithm.
In the case of an Atkin prime, we can gain some information from the factorization pattern of {\displaystyle \Phi _{l}(X,j(E))} in {\displaystyle \mathbb {F} _{q}[X]}, which constrains the possibilities for the number of points modulo {\displaystyle l}, but the asymptotic complexity of the algorithm depends entirely on the Elkies primes. Provided there are sufficiently many small Elkies primes (on average, we expect half the primes {\displaystyle l} to be Elkies primes), this results in a reduction in the running time. The resulting algorithm is probabilistic (of Las Vegas type), and its expected running time is, heuristically, {\displaystyle {\tilde {O}}(\log ^{4}q)}, making it more efficient in practice than Schoof's algorithm. Here the {\displaystyle {\tilde {O}}} notation is a variant of big O notation that suppresses terms that are logarithmic in the main term of an expression.
== Implementations ==
The Schoof–Elkies–Atkin algorithm is implemented in the PARI/GP computer algebra system in the GP function ellap.
== External links ==
"Schoof: Counting points on elliptic curves over finite fields"
article on Mathworld
"Remarks on the Schoof-Elkies-Atkin algorithm"
"The SEA Algorithm in Characteristic 2"
In algebraic geometry, Brill–Noether theory, introduced by Alexander von Brill and Max Noether (1874), is the study of special divisors, certain divisors on a curve C that determine more compatible functions than would be predicted. In classical language, special divisors move on the curve in a "larger than expected" linear system of divisors.
Throughout, we consider a projective smooth curve over the complex numbers (or over some other algebraically closed field).
The condition to be a special divisor D can be formulated in sheaf cohomology terms, as the non-vanishing of the H1 cohomology of the sheaf of sections of the invertible sheaf or line bundle associated to D. This means that, by the Riemann–Roch theorem, the H0 cohomology or space of holomorphic sections is larger than expected.
Alternatively, by Serre duality, the condition is that there exist holomorphic differentials with divisor ≥ –D on the curve.
== Main theorems of Brill–Noether theory ==
For a given genus g, the moduli space for curves C of genus g should contain a dense subset parameterizing those curves with the minimum in the way of special divisors. One goal of the theory is to 'count constants', for those curves: to predict the dimension of the space of special divisors (up to linear equivalence) of a given degree d, as a function of g, that must be present on a curve of that genus.
The basic statement can be formulated in terms of the Picard variety Pic(C) of a smooth curve C, and the subset of Pic(C) corresponding to divisor classes of divisors D, with given values d of deg(D) and r of l(D) – 1 in the notation of the Riemann–Roch theorem. There is a lower bound ρ for the dimension dim(d, r, g) of this subscheme in Pic(C):
{\displaystyle \dim(d,r,g)\geq \rho =g-(r+1)(g-d+r)}
called the Brill–Noether number. The formula can be memorized via the mnemonic (using our desired {\displaystyle h^{0}(D)=r+1} and Riemann–Roch):
{\displaystyle g-(r+1)(g-d+r)=g-h^{0}(D)h^{1}(D)}
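The Brill–Noether number itself is elementary to evaluate; as a sanity check, the one-line helper below (an illustration, not from the text) recovers two classical facts about generic curves of low genus.

```python
# Brill–Noether number: rho = g - (r+1)(g - d + r)
def rho(d, r, g):
    return g - (r + 1) * (g - d + r)

# A generic genus-3 curve carries no g^1_2, i.e. is not hyperelliptic: rho < 0.
print(rho(2, 1, 3))   # → -1
# A generic genus-4 curve has finitely many g^1_3 (trigonal maps): rho = 0.
print(rho(3, 1, 4))   # → 0
```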
For smooth curves C and for d ≥ 1, r ≥ 0 the basic results about the space {\displaystyle G_{d}^{r}} of linear systems on C of degree d and dimension r are as follows.
George Kempf proved that if ρ ≥ 0 then {\displaystyle G_{d}^{r}} is not empty, and every component has dimension at least ρ.
William Fulton and Robert Lazarsfeld proved that if ρ ≥ 1 then {\displaystyle G_{d}^{r}} is connected.
Griffiths & Harris (1980) showed that if C is generic then {\displaystyle G_{d}^{r}} is reduced and all components have dimension exactly ρ (so in particular {\displaystyle G_{d}^{r}} is empty if ρ < 0).
David Gieseker proved that if C is generic then {\displaystyle G_{d}^{r}} is smooth. By the connectedness result this implies it is irreducible if ρ > 0.
Other more recent results, not necessarily in terms of the space {\displaystyle G_{d}^{r}} of linear systems, are:
Eric Larson (2017) proved that if ρ ≥ 0, r ≥ 3, and n ≥ 1, the restriction maps {\displaystyle H^{0}({\mathcal {O}}_{\mathbb {P} ^{r}}(n))\rightarrow H^{0}({\mathcal {O}}_{C}(n))} are of maximal rank; this is also known as the maximal rank conjecture.
Eric Larson and Isabel Vogt (2022) proved that if ρ ≥ 0 then there is a curve C interpolating through n general points in {\displaystyle \mathbb {P} ^{r}} if and only if {\displaystyle (r-1)n\leq (r+1)d-(r-3)(g-1),} except in four exceptional cases: (d, g, r) ∈ {(5,2,3), (6,4,3), (7,2,5), (10,6,5)}.
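The Larson–Vogt inequality is likewise a one-line check (the helper below is illustrative); for instance it recovers the classical fact that a twisted cubic passes through 6 general points of projective 3-space but not 7.

```python
# Sketch: the Larson–Vogt interpolation criterion for Brill–Noether curves
# of degree d and genus g in P^r (assuming rho >= 0).
EXCEPTIONS = {(5, 2, 3), (6, 4, 3), (7, 2, 5), (10, 6, 5)}

def interpolates(d, g, r, n):
    if (d, g, r) in EXCEPTIONS:
        return None  # the four exceptional cases are handled separately
    return (r - 1) * n <= (r + 1) * d - (r - 3) * (g - 1)

# Twisted cubic (d=3, g=0, r=3): through 6 general points, but not 7.
print(interpolates(3, 0, 3, 6), interpolates(3, 0, 3, 7))   # → True False
# Rational normal quartic in P^4 (d=4, g=0, r=4): through 7 general points.
print(interpolates(4, 0, 4, 7))   # → True
```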
== References ==
Barbon, Andrea (2014). Algebraic Brill–Noether Theory (PDF) (Master's thesis). Radboud University Nijmegen.
Arbarello, Enrico; Cornalba, Maurizio; Griffiths, Philip A.; Harris, Joe (1985). "The Basic Results of the Brill-Noether Theory". Geometry of Algebraic Curves. Grundlehren der Mathematischen Wissenschaften 267. Vol. I. pp. 203–224. doi:10.1007/978-1-4757-5323-3_5. ISBN 0-387-90997-4.
von Brill, Alexander; Noether, Max (1874). "Ueber die algebraischen Functionen und ihre Anwendung in der Geometrie". Mathematische Annalen. 7 (2): 269–316. doi:10.1007/BF02104804. JFM 06.0251.01. S2CID 120777748. Retrieved 2009-08-22.
Griffiths, Phillip; Harris, Joseph (1980). "On the variety of special linear systems on a general algebraic curve". Duke Mathematical Journal. 47 (1): 233–272. doi:10.1215/s0012-7094-80-04717-1. MR 0563378.
Eduardo Casas-Alvero (2019). Algebraic Curves, the Brill and Noether way. Universitext. Springer. ISBN 9783030290153.
Philip A. Griffiths; Joe Harris (1994). Principles of Algebraic Geometry. Wiley Classics Library. Wiley Interscience. p. 245. ISBN 978-0-471-05059-9.
The Lotka–Volterra equations, also known as the Lotka–Volterra predator–prey model, are a pair of first-order nonlinear differential equations, frequently used to describe the dynamics of biological systems in which two species interact, one as a predator and the other as prey. The populations change through time according to the pair of equations:
{\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=\alpha x-\beta xy,\\{\frac {dy}{dt}}&=-\gamma y+\delta xy,\end{aligned}}}
where
the variable x is the population density of prey (for example, the number of rabbits per square kilometre);
the variable y is the population density of some predator (for example, the number of foxes per square kilometre);
{\displaystyle {\tfrac {dy}{dt}}} and {\displaystyle {\tfrac {dx}{dt}}} represent the instantaneous growth rates of the two populations;
t represents time;
The prey's parameters, α and β, describe, respectively, the maximum prey per capita growth rate, and the effect of the presence of predators on the prey death rate.
The predator's parameters, γ, δ, respectively describe the predator's per capita death rate, and the effect of the presence of prey on the predator's growth rate.
All parameters are positive and real.
The solution of the differential equations is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping.
The Lotka–Volterra system of equations is an example of a Kolmogorov population model (not to be confused with the better known Kolmogorov equations), which is a more general framework that can model the dynamics of ecological systems with predator–prey interactions, competition, disease, and mutualism.
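A quick numerical experiment illustrates the behaviour of the system. The sketch below integrates the equations with a classical fourth-order Runge–Kutta step, using the parameter values α = 2/3, β = 4/3, γ = δ = 1 that appear in the phase-space example later in the article; the initial condition, step size, and horizon are arbitrary illustrative choices.

```python
# Sketch: integrate the Lotka–Volterra equations with a fixed-step RK4 scheme.
a, b, c, d = 2/3, 4/3, 1.0, 1.0   # alpha, beta, gamma, delta

def f(x, y):
    # dx/dt = alpha*x - beta*x*y,  dy/dt = -gamma*y + delta*x*y
    return a * x - b * x * y, -c * y + d * x * y

def rk4(x, y, h):
    k1x, k1y = f(x, y)
    k2x, k2y = f(x + h/2 * k1x, y + h/2 * k1y)
    k3x, k3y = f(x + h/2 * k2x, y + h/2 * k2y)
    k4x, k4y = f(x + h * k3x, y + h * k3y)
    return (x + h/6 * (k1x + 2*k2x + 2*k3x + k4x),
            y + h/6 * (k1y + 2*k2y + 2*k3y + k4y))

x, y, h = 1.8, 1.8, 0.005          # illustrative initial densities
xs, ys = [x], [y]
for _ in range(10000):             # integrate to t = 50
    x, y = rk4(x, y, h)
    xs.append(x); ys.append(y)

# Every non-equilibrium orbit oscillates around the fixed point
# (gamma/delta, alpha/beta) = (1, 1/2):
print(min(xs) < c/d < max(xs), min(ys) < a/b < max(ys))   # → True True
```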
== Biological interpretation and model assumptions ==
The prey are assumed to have an unlimited food supply and to reproduce exponentially, unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation on the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero, then there can be no predation. With these two terms the prey equation above can be interpreted as follows: the rate of change of the prey's population is given by its own growth rate minus the rate at which it is preyed upon.
The term δxy represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey). The term γy represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey. Hence the equation expresses that the rate of change of the predator's population depends upon the rate at which it consumes prey, minus its intrinsic death rate.
The Lotka–Volterra predator-prey model makes a number of assumptions about the environment and biology of the predator and prey populations:
The prey population finds ample food at all times.
The food supply of the predator population depends entirely on the size of the prey population.
The rate of change of population is proportional to its size.
During the process, the environment does not change in favour of one species, and genetic adaptation is inconsequential.
Predators have limitless appetite.
Both populations can be described by a single variable. This amounts to assuming that the populations do not have a spatial or age distribution that contributes to the dynamics.
== Biological relevance of the model ==
None of the assumptions above are likely to hold for natural populations. Nevertheless, the Lotka–Volterra model shows two important properties of predator and prey populations and these properties often extend to variants of the model in which these assumptions are relaxed:
Firstly, the dynamics of predator and prey populations have a tendency to oscillate. Fluctuating numbers of predators and prey have been observed in natural populations, such as the lynx and snowshoe hare data of the Hudson's Bay Company and the moose and wolf populations in Isle Royale National Park.
Secondly, the population equilibrium of this model has the property that the prey equilibrium density (given by {\displaystyle x=\gamma /\delta }) depends on the predator's parameters, and the predator equilibrium density (given by {\displaystyle y=\alpha /\beta }) on the prey's parameters. This has as a consequence that an increase in, for instance, the prey growth rate {\displaystyle \alpha } leads to an increase in the predator equilibrium density, but not the prey equilibrium density. Making the environment better for the prey benefits the predator, not the prey (this is related to the paradox of the pesticides and to the paradox of enrichment). A demonstration of this phenomenon is the increased percentage of predatory fish caught during the years of World War I (1914–18), when the prey growth rate had increased due to a reduced fishing effort.
A further example is provided by the experimental iron fertilization of the ocean. In several experiments large amounts of iron salts were dissolved in the ocean. The expectation was that iron, which is a limiting nutrient for phytoplankton, would boost growth of phytoplankton and that it would sequester carbon dioxide from the atmosphere. The addition of iron typically leads to a short bloom in phytoplankton, which is quickly consumed by other organisms (such as small fish or zooplankton) and limits the effect of enrichment mainly to increased predator density, which in turn limits the carbon sequestration. This is as predicted by the equilibrium population densities of the Lotka–Volterra predator-prey model, and is a feature that carries over to more elaborate models in which the restrictive assumptions of the simple model are relaxed.
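This asymmetry of the equilibria is easy to verify directly from x* = γ/δ and y* = α/β; the parameter values below are illustrative.

```python
# Equilibria of the Lotka–Volterra model: x* = gamma/delta, y* = alpha/beta.
def equilibrium(alpha, beta, gamma, delta):
    return gamma / delta, alpha / beta

x1, y1 = equilibrium(alpha=1.0, beta=0.5, gamma=0.4, delta=0.2)
x2, y2 = equilibrium(alpha=2.0, beta=0.5, gamma=0.4, delta=0.2)  # doubled prey growth
# Prey equilibrium is unchanged; only the predator equilibrium rises.
print(x1 == x2, y2 > y1)   # → True True
```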
== Applications to economics and marketing ==
The Lotka–Volterra model has additional applications to areas such as economics and marketing. It can be used to describe the dynamics in a market with several competitors, complementary platforms and products, a sharing economy, and more. There are situations in which one of the competitors drives the other competitors out of the market and other situations in which the market reaches an equilibrium where each firm stabilizes on its market share. It is also possible to describe situations in which there are cyclical changes in the industry or chaotic situations with no equilibrium and changes are frequent and unpredictable.
In economics, the Phillips curve, which shows the statistical relationship between unemployment and the rate of change in nominal wages, has been connected to predator–prey dynamics through the Goodwin model. This model reinterprets the dynamics of the biological prey–predator interaction, as described by the Lotka–Volterra model, in economic terms. The way the two species interact in this model led Goodwin to draw parallels with the Marxian class conflict. The Kolmogorov generalization of the prey–predator model, along with further developments of the Goodwin model, has extended these ideas.
== History ==
The Lotka–Volterra predator–prey model was initially proposed by Alfred J. Lotka in the theory of autocatalytic chemical reactions in 1910. This was effectively the logistic equation, originally derived by Pierre François Verhulst. In 1920 Lotka extended the model, via Andrey Kolmogorov, to "organic systems" using a plant species and a herbivorous animal species as an example, and in 1925 he used the equations to analyse predator–prey interactions in his book on biomathematics. The same set of equations was published in 1926 by Vito Volterra, a mathematician and physicist, who had become interested in mathematical biology. Volterra's enquiry was inspired through his interactions with the marine biologist Umberto D'Ancona, who was courting his daughter at the time and later was to become his son-in-law. D'Ancona studied the fish catches in the Adriatic Sea and had noticed that the percentage of predatory fish caught had increased during the years of World War I (1914–18). This puzzled him, as the fishing effort had been very much reduced during the war years and, as prey fish are the preferred catch, one would intuitively expect the percentage of prey fish to increase. Volterra developed his model to explain D'Ancona's observation and did this independently from Alfred Lotka. He did credit Lotka's earlier work in his publication, after which the model became known as the "Lotka–Volterra model".
The model was later extended to include density-dependent prey growth and a functional response of the form developed by C. S. Holling; a model that has become known as the Rosenzweig–MacArthur model. Both the Lotka–Volterra and Rosenzweig–MacArthur models have been used to explain the dynamics of natural populations of predators and prey.
In the late 1980s, an alternative to the Lotka–Volterra predator–prey model (and its common-prey-dependent generalizations) emerged, the ratio dependent or Arditi–Ginzburg model. The validity of prey- or ratio-dependent models has been much debated.
The Lotka–Volterra equations have a long history of use in economic theory; their initial application is commonly credited to Richard Goodwin in 1965 or 1967.
== Solutions to the equations ==
The equations have periodic solutions. These solutions do not have a simple expression in terms of the usual trigonometric functions, although they are quite tractable.
If none of the non-negative parameters α, β, γ, δ vanishes, three can be absorbed into the normalization of variables to leave only one parameter: since the first equation is homogeneous in x, and the second one in y, the parameters β/α and δ/γ are absorbable in the normalizations of y and x respectively, and γ into the normalization of t, so that only α/γ remains arbitrary. It is the only parameter affecting the nature of the solutions.
A linearization of the equations yields a solution similar to simple harmonic motion with the population of predators trailing that of prey by 90° in the cycle.
=== A simple example ===
Suppose there are two species of animals, a rabbit (prey) and a fox (predator). If the initial densities are 10 rabbits and 10 foxes per square kilometre, one can plot the progression of the two species over time; given parameters such that the growth and death rates of the rabbits are 1.1 and 0.4, while those of the foxes are 0.1 and 0.4, respectively. The choice of time interval is arbitrary.
One may also plot solutions parametrically as orbits in phase space, without representing time, but with one axis representing the number of prey and the other axis representing the densities of predators for all times.
This corresponds to eliminating time from the two differential equations above to produce a single differential equation
{\displaystyle {\frac {dy}{dx}}=-{\frac {y}{x}}{\frac {\delta x-\gamma }{\beta y-\alpha }}}
relating the variables x (prey) and y (predator). The solutions of this equation are closed curves. It is amenable to separation of variables: integrating
{\displaystyle {\frac {\beta y-\alpha }{y}}\,dy+{\frac {\delta x-\gamma }{x}}\,dx=0}
yields the implicit relationship
{\displaystyle V=\delta x-\gamma \ln(x)+\beta y-\alpha \ln(y),}
where V is a constant quantity depending on the initial conditions and conserved on each curve.
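The conservation of V can be checked numerically along any trajectory. The sketch below uses a fourth-order Runge–Kutta step; the parameter values are those of the phase-space example later in the article, while the initial condition and step size are illustrative.

```python
import math

# Sketch: verify that V = delta*x - gamma*ln(x) + beta*y - alpha*ln(y)
# stays (numerically) constant along a Lotka–Volterra trajectory.
a, b, c, d = 2/3, 4/3, 1.0, 1.0       # alpha, beta, gamma, delta

def V(x, y):
    return d * x - c * math.log(x) + b * y - a * math.log(y)

def step(x, y, h):                    # one classical RK4 step
    f = lambda u, v: (a * u - b * u * v, -c * v + d * u * v)
    k1 = f(x, y)
    k2 = f(x + h/2 * k1[0], y + h/2 * k1[1])
    k3 = f(x + h/2 * k2[0], y + h/2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y = 1.2, 1.2                       # illustrative initial densities
v0, drift = V(x, y), 0.0
for _ in range(4000):                 # integrate to t = 20
    x, y = step(x, y, 0.005)
    drift = max(drift, abs(V(x, y) - v0))
print(drift < 1e-4)                   # → True (RK4 keeps V nearly constant)
```

The small residual drift is the discretization error of the integrator, not a failure of the conservation law.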
An aside: These graphs illustrate a serious potential limitation in the application as a biological model: for this specific choice of parameters, in each cycle, the rabbit population is reduced to extremely low numbers, yet recovers (while the fox population remains sizeable at the lowest rabbit density). In real-life situations, however, chance fluctuations of the discrete numbers of individuals might cause the rabbits to actually go extinct, and, by consequence, the foxes as well. This modelling problem has been called the "atto-fox problem", an atto-fox being a notional 10−18 of a fox. A density of 10−18 foxes per square kilometre equates to an average of approximately 5×10−10 foxes on the surface of the earth, which in practical terms means that foxes are extinct.
=== Hamiltonian structure of the system ===
Since the quantity
{\displaystyle V(x,y)}
is conserved over time, it plays the role of a Hamiltonian function of the system. To see this, we can define a Poisson bracket as follows:
{\displaystyle \{f(x,y),g(x,y)\}=-xy\left({\frac {\partial f}{\partial x}}{\frac {\partial g}{\partial y}}-{\frac {\partial f}{\partial y}}{\frac {\partial g}{\partial x}}\right)}
Then Hamilton's equations read
{\displaystyle {\begin{cases}{\dot {x}}=\{x,V\}=\alpha x-\beta xy,\\{\dot {y}}=\{y,V\}=\delta xy-\gamma y.\end{cases}}}
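These bracket relations can be spot-checked numerically. The sketch below (the sample point and step size are arbitrary choices) approximates the partial derivatives of V by central differences and confirms that {x, V} and {y, V} reproduce the right-hand sides of the original equations.

```python
import math

# Spot-check {x,V} = alpha*x - beta*x*y and {y,V} = delta*x*y - gamma*y using
# the bracket {f,g} = -x*y*(f_x*g_y - f_y*g_x) with finite-difference partials.
ALPHA, BETA, GAMMA, DELTA = 1.1, 0.4, 0.4, 0.1

def V(x, y):
    """Conserved quantity V = delta*x - gamma*ln(x) + beta*y - alpha*ln(y)."""
    return DELTA * x - GAMMA * math.log(x) + BETA * y - ALPHA * math.log(y)

def bracket(f, g, x, y, h=1e-6):
    """Poisson bracket {f,g} at (x,y) via central differences."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return -x * y * (fx * gy - fy * gx)

x, y = 3.0, 1.5
xdot = bracket(lambda u, v: u, V, x, y)  # should equal ALPHA*x - BETA*x*y
ydot = bracket(lambda u, v: v, V, x, y)  # should equal DELTA*x*y - GAMMA*y
```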
The variables {\displaystyle x} and {\displaystyle y} are not canonical, since {\displaystyle \{x,y\}=-xy\neq 1}. However, using the transformations {\displaystyle p=\ln(x)} and {\displaystyle q=\ln(y)}
we arrive at a canonical form of Hamilton's equations featuring the Hamiltonian
{\displaystyle H(q,p)=V(x(q,p),y(q,p))=\delta e^{p}-\gamma p+\beta e^{q}-\alpha q}:
{\displaystyle {\begin{cases}{\dot {q}}={\frac {\partial H}{\partial p}}=\delta e^{p}-\gamma ,\\{\dot {p}}=-{\frac {\partial H}{\partial q}}=\alpha -\beta e^{q}.\end{cases}}}
The Poisson bracket for the canonical variables
{\displaystyle (q,p)}
now takes the standard form
{\displaystyle \{F(q,p),G(q,p)\}=\left({\frac {\partial F}{\partial q}}{\frac {\partial G}{\partial p}}-{\frac {\partial F}{\partial p}}{\frac {\partial G}{\partial q}}\right)}.
=== Phase-space plot of a further example ===
Another example uses the parameter values α = 2/3, β = 4/3, γ = 1 = δ, with x and y each measured in thousands. Circles represent prey and predator initial conditions from x = y = 0.9 to 1.8, in steps of 0.1. The fixed point is at (1, 1/2).
== Dynamics of the system ==
In the model system, the predators thrive when prey is plentiful but, ultimately, outstrip their food supply and decline. When the predator population is low, the prey population increases again. These dynamics continue in a population cycle of growth and decline.
=== Population equilibrium ===
Population equilibrium occurs in the model when neither of the population levels is changing, i.e. when both of the derivatives are equal to 0:
{\displaystyle x(\alpha -\beta y)=0,}
{\displaystyle -y(\gamma -\delta x)=0.}
The above system of equations yields two solutions:
{\displaystyle \{y=0,\ \ x=0\}}
and
{\displaystyle \left\{y={\frac {\alpha }{\beta }},\ \ x={\frac {\gamma }{\delta }}\right\}.}
Hence, there are two equilibria.
The first solution effectively represents the extinction of both species. If both populations are at 0, then they will continue to be so indefinitely. The second solution represents a fixed point at which both populations sustain their current, non-zero numbers, and, in the simplified model, do so indefinitely. The levels of population at which this equilibrium is achieved depend on the chosen values of the parameters α, β, γ, and δ.
=== Stability of the fixed points ===
The stability of the fixed point at the origin can be determined by performing a linearization using partial derivatives.
The Jacobian matrix of the predator–prey model is
{\displaystyle J(x,y)={\begin{bmatrix}\alpha -\beta y&-\beta x\\\delta y&\delta x-\gamma \end{bmatrix}}.}
and is known as the community matrix.
==== First fixed point (extinction) ====
When evaluated at the steady state of (0, 0), the Jacobian matrix J becomes
{\displaystyle J(0,0)={\begin{bmatrix}\alpha &0\\0&-\gamma \end{bmatrix}}.}
The eigenvalues of this matrix are
{\displaystyle \lambda _{1}=\alpha ,\quad \lambda _{2}=-\gamma .}
In the model α and γ are always greater than zero, and as such the sign of the eigenvalues above will always differ. Hence the fixed point at the origin is a saddle point.
The instability of this fixed point is of significance. If it were stable, non-zero populations might be attracted towards it, and as such the dynamics of the system might lead towards the extinction of both species for many cases of initial population levels. However, as the fixed point at the origin is a saddle point, and hence unstable, it follows that the extinction of both species is difficult in the model. (In fact, this could only occur if the prey were artificially completely eradicated, causing the predators to die of starvation. If the predators were eradicated, the prey population would grow without bound in this simple model.) The populations of prey and predator can get infinitesimally close to zero and still recover.
==== Second fixed point (oscillations) ====
Evaluating J at the second fixed point leads to
{\displaystyle J\left({\frac {\gamma }{\delta }},{\frac {\alpha }{\beta }}\right)={\begin{bmatrix}0&-{\frac {\beta \gamma }{\delta }}\\{\frac {\alpha \delta }{\beta }}&0\end{bmatrix}}.}
The eigenvalues of this matrix are
{\displaystyle \lambda _{1}=i{\sqrt {\alpha \gamma }},\quad \lambda _{2}=-i{\sqrt {\alpha \gamma }}.}
As the eigenvalues are both purely imaginary and conjugate to each other, this fixed point must either be a center for closed orbits in the local vicinity or an attractive or repulsive spiral. In conservative systems, there must be closed orbits in the local vicinity of fixed points that exist at the minima and maxima of the conserved quantity. The conserved quantity is derived above to be
{\displaystyle V=\delta x-\gamma \ln(x)+\beta y-\alpha \ln(y)}
on orbits. Thus orbits about the fixed point are closed and elliptic, so the solutions are periodic, oscillating on a small ellipse around the fixed point, with a frequency
{\displaystyle \omega ={\sqrt {\lambda _{1}\lambda _{2}}}={\sqrt {\alpha \gamma }}}
and period
{\displaystyle T=2{\pi }/({\sqrt {\lambda _{1}\lambda _{2}}})}.
As illustrated in the circulating oscillations in the figure above, the level curves are closed orbits surrounding the fixed point: the levels of the predator and prey populations cycle and oscillate without damping around the fixed point with frequency
{\displaystyle \omega ={\sqrt {\alpha \gamma }}}.
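Both linearizations can be reproduced with a short computation. The sketch below (parameter values borrowed from the phase-space example above) finds the eigenvalues of a 2×2 matrix from its trace and determinant, confirming the saddle at the origin and the purely imaginary pair at the coexistence point.

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from l^2 - (a+d)*l + (a*d - b*c) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

alpha, beta, gamma, delta = 2 / 3, 4 / 3, 1.0, 1.0

# Jacobian at the origin: real eigenvalues alpha and -gamma (a saddle).
l1, l2 = eig2(alpha, 0.0, 0.0, -gamma)

# Jacobian at (gamma/delta, alpha/beta): eigenvalues +/- i*sqrt(alpha*gamma).
m1, m2 = eig2(0.0, -beta * gamma / delta, alpha * delta / beta, 0.0)
```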
The value of the constant of motion V, or, equivalently, K = exp(−V),
{\displaystyle K=y^{\alpha }e^{-\beta y}x^{\gamma }e^{-\delta x}}
can be found for the closed orbits near the fixed point.
Increasing K moves a closed orbit closer to the fixed point. The largest value of the constant K is obtained by solving the optimization problem
{\displaystyle y^{\alpha }e^{-\beta y}x^{\gamma }e^{-\delta x}={\frac {y^{\alpha }x^{\gamma }}{e^{\delta x+\beta y}}}\longrightarrow \max _{x,y>0}.}
The maximal value of K is thus attained at the stationary (fixed) point
{\displaystyle \left({\frac {\gamma }{\delta }},{\frac {\alpha }{\beta }}\right)}
and amounts to
{\displaystyle K^{*}=\left({\frac {\alpha }{\beta e}}\right)^{\alpha }\left({\frac {\gamma }{\delta e}}\right)^{\gamma },}
where e is Euler's number.
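A quick numerical scan (with the same assumed parameter values as the phase-space example) confirms that K attains this closed-form maximum at the fixed point:

```python
import math

ALPHA, BETA, GAMMA, DELTA = 2 / 3, 4 / 3, 1.0, 1.0

def K(x, y):
    """Constant of motion K = y^alpha * e^(-beta*y) * x^gamma * e^(-delta*x)."""
    return y**ALPHA * math.exp(-BETA * y) * x**GAMMA * math.exp(-DELTA * x)

# Closed-form maximum K* = (alpha/(beta*e))^alpha * (gamma/(delta*e))^gamma.
K_star = (ALPHA / (BETA * math.e)) ** ALPHA * (GAMMA / (DELTA * math.e)) ** GAMMA

# K at the fixed point (gamma/delta, alpha/beta) equals K*, and no point on a
# coarse grid over (0, 5) x (0, 5) exceeds it.
K_fixed = K(GAMMA / DELTA, ALPHA / BETA)
grid_max = max(K(0.1 * i, 0.1 * j) for i in range(1, 50) for j in range(1, 50))
```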
== See also ==
== Notes ==
== Further reading ==
Hofbauer, Josef; Sigmund, Karl (1998). "Dynamical Systems and Lotka–Volterra Equations". Evolutionary Games and Population Dynamics. New York: Cambridge University Press. pp. 1–54. ISBN 0-521-62570-X.
Kaplan, Daniel; Glass, Leon (1995). Understanding Nonlinear Dynamics. New York: Springer. ISBN 978-0-387-94440-1.
Leigh, E. R. (1968). "The ecological role of Volterra's equations". Some Mathematical Problems in Biology. – a modern discussion using Hudson's Bay Company data on lynx and hares in Canada from 1847 to 1903.
Murray, J. D. (2003). Mathematical Biology I: An Introduction. New York: Springer. ISBN 978-0-387-95223-9.
Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/
== External links ==
From the Wolfram Demonstrations Project — requires CDF player (free):
Predator–Prey Equations
Predator–Prey Model
Predator–Prey Dynamics with Type-Two Functional Response
Predator–Prey Ecosystem: A Real-Time Agent-Based Simulation
Lotka-Volterra Algorithmic Simulation (Web simulation).
Hydrostatics is the branch of fluid mechanics that studies fluids at hydrostatic equilibrium and "the pressure in a fluid or exerted by a fluid on an immersed body". The word "hydrostatics" is sometimes used to refer specifically to water and other liquids, but more often it includes both gases and liquids, whether compressible or incompressible.
It encompasses the study of the conditions under which fluids are at rest in stable equilibrium. It is opposed to fluid dynamics, the study of fluids in motion.
Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to geophysics and astrophysics (for example, in understanding plate tectonics and the anomalies of the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields.
Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of still water is always level according to the curvature of the earth.
== History ==
Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity, by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' Principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The Roman engineer Vitruvius warned readers about lead pipes bursting under hydrostatic pressure.
The concept of pressure and the way it is transmitted by fluids was formulated by the French mathematician and philosopher Blaise Pascal in 1647.
=== Hydrostatics in ancient Greece and Rome ===
==== Pythagorean Cup ====
The "fair cup" or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician and geometer Pythagoras. It was used as a learning tool.
The cup consists of a line carved into the interior of the cup, and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will overflow into the pipe in the center of the cup. Due to the drag that molecules exert on one another, the cup will be emptied.
==== Heron's fountain ====
Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir, apparently in violation of principles of hydrostatic pressure. The device consisted of an opening and two containers arranged one above the other. The intermediate pot, which was sealed, was filled with fluid, and several cannula (a small tube for transferring fluid between vessels) connecting the various vessels. Trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir.
=== Pascal's contribution in hydrostatics ===
Pascal made contributions to developments in both hydrostatics and hydrodynamics. Pascal's law is a fundamental principle of fluid mechanics that states that any pressure applied to the surface of a fluid is transmitted uniformly throughout the fluid in all directions, in such a way that initial variations in pressure are not changed.
== Pressure in fluids at rest ==
Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic; i.e., it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes; i.e., a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal, and is now called Pascal's law.
=== Hydrostatic pressure ===
In a fluid at rest, all frictional and inertial stresses vanish and the state of stress of the system is called hydrostatic. When this condition of zero flow velocity is applied to the Navier–Stokes equations for viscous fluids, or to the Euler equations (fluid dynamics) for an ideal inviscid fluid, the gradient of pressure becomes a function of body forces only.
The Navier-Stokes momentum equations are:
By setting the flow velocity
{\displaystyle \mathbf {u} =\mathbf {0} }
, they become simply:
{\displaystyle \mathbf {0} =-\nabla p+\rho \mathbf {g} }
or:
{\displaystyle \nabla p=\rho \mathbf {g} }
This is the general form of Stevin's law: the pressure gradient equals the body force density field.
Let us now consider two particular cases of this law. In case of a conservative body force with scalar potential
{\displaystyle \phi }
:
{\displaystyle \rho \mathbf {g} =-\nabla \phi }
the Stevin equation becomes:
{\displaystyle \nabla p=-\nabla \phi }
That can be integrated to give:
{\displaystyle \Delta p=-\Delta \phi }
So in this case the pressure difference is the opposite of the difference of the scalar potential associated to the body force.
In the other particular case of a body force of constant direction along z:
{\displaystyle \mathbf {g} =-g(x,y,z){\hat {k}}}
the generalised Stevin's law above becomes:
{\displaystyle {\frac {\partial p}{\partial z}}=-\rho (x,y,z)g(x,y,z)}
That can be integrated to give another (less-) generalised Stevin's law:
{\displaystyle p(x,y,z)-p_{0}(x,y)=-\int _{0}^{z}\rho (x,y,z')g(x,y,z')dz'}
where:
{\displaystyle p} is the hydrostatic pressure (Pa),
{\displaystyle \rho } is the fluid density (kg/m3),
{\displaystyle g} is gravitational acceleration (m/s2),
{\displaystyle z} is the height (parallel to the direction of gravity) of the test area (m),
{\displaystyle 0} is the height of the zero reference point of the pressure (m),
{\displaystyle p_{0}} is the hydrostatic pressure field (Pa) along x and y at the zero reference point.
For water and other liquids, this integral can be simplified significantly for many practical applications, based on the following two assumptions. Since many liquids can be considered incompressible, a reasonably good estimate can be made by assuming a constant density throughout the liquid. The same assumption cannot be made within a gaseous environment. Also, since the height {\displaystyle \Delta z} of the fluid column between z and z0 is often reasonably small compared to the radius of the Earth, one can neglect the variation of g. Under these circumstances, one can take the density and the gravitational acceleration out of the integral, and the law is simplified into the formula
{\displaystyle \Delta p(z)=\rho g\Delta z,}
where
{\displaystyle \Delta z}
is the height z − z0 of the liquid column between the test volume and the zero reference point of the pressure. This formula is often called Stevin's law.
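As a worked example of this formula (the density and gravity figures below are typical assumed values for fresh water and standard gravity):

```python
def pressure_increase(rho, g, dz):
    """Stevin's law for an incompressible liquid: dp = rho * g * dz (Pa)."""
    return rho * g * dz

# Pressure added by a 10 m column of fresh water (rho ~ 1000 kg/m^3):
dp = pressure_increase(rho=1000.0, g=9.81, dz=10.0)  # ~98.1 kPa, about 0.97 atm
```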
One can also arrive at the above formula by considering the first particular case of the equation for a conservative body force field: in fact, the body force field of uniform intensity and direction:
{\displaystyle \rho \mathbf {g} (x,y,z)=-\rho g{\hat {k}}}
is conservative, so one can write the body force density as:
{\displaystyle \rho \mathbf {g} =\nabla (-\rho gz)}
Then the body force density has a simple scalar potential:
{\displaystyle \phi (z)=-\rho gz}
and the pressure difference again follows Stevin's law:
{\displaystyle \Delta p=-\Delta \phi =\rho g\Delta z}
The reference point should lie at or below the surface of the liquid. Otherwise, one has to split the integral into two (or more) terms, with the constant liquid density below the surface and the variable density ρ(z′) above it. For example, the absolute pressure compared to vacuum is
{\displaystyle p=\rho g\Delta z+p_{\mathrm {0} },}
where
{\displaystyle \Delta z}
is the total height of the liquid column above the test area to the surface, and p0 is the atmospheric pressure, i.e., the pressure calculated from the remaining integral over the air column from the liquid surface to infinity. This can easily be visualized using a pressure prism.
Hydrostatic pressure has been used in the preservation of foods in a process called pascalization.
=== Medicine ===
In medicine, hydrostatic pressure in blood vessels is the pressure of the blood against the wall. It is the opposing force to oncotic pressure. In capillaries, hydrostatic pressure (also known as capillary blood pressure) is higher than the opposing “colloid osmotic pressure” in blood—a “constant” pressure primarily produced by circulating albumin—at the arteriolar end of the capillary. This pressure forces plasma and nutrients out of the capillaries and into surrounding tissues. Fluid and the cellular wastes in the tissues enter the capillaries at the venule end, where the hydrostatic pressure is less than the osmotic pressure in the vessel.
=== Atmospheric pressure ===
Statistical mechanics shows that, for a pure ideal gas at constant temperature T in a gravitational field, its pressure p will vary with height h as
{\displaystyle p(h)=p(0)e^{-{\frac {Mgh}{kT}}}}
where
g is the acceleration due to gravity
T is the absolute temperature
k is the Boltzmann constant
M is the molecular mass of the gas
p is the pressure
h is the height
This is known as the barometric formula, and may be derived from assuming the pressure is hydrostatic.
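A minimal numeric sketch of the barometric formula follows; the mean molecular mass of dry air (~4.81×10^−26 kg) and the temperature are illustrative assumed values, not authoritative figures.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def barometric_pressure(p0, h, M=4.81e-26, g=9.81, T=288.0):
    """Isothermal barometric formula p(h) = p0 * exp(-M*g*h / (k*T)).
    M is the (assumed) mean mass of one air molecule in kg."""
    return p0 * math.exp(-M * g * h / (K_B * T))

p_sea = 101325.0
p_5km = barometric_pressure(p_sea, 5000.0)  # roughly 55% of sea-level pressure
```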
If there are multiple types of molecules in the gas, the partial pressure of each type will be given by this equation. Under most conditions, the distribution of each species of gas is independent of the other species.
=== Buoyancy ===
Any body of arbitrary shape which is immersed, partly or fully, in a fluid will experience the action of a net force in the opposite direction of the local pressure gradient. If this pressure gradient arises from gravity, the net force is in the vertical direction opposite that of the gravitational force. This vertical force is termed buoyancy or buoyant force and is equal in magnitude, but opposite in direction, to the weight of the displaced fluid. Mathematically,
{\displaystyle F=\rho gV}
where ρ is the density of the fluid, g is the acceleration due to gravity, and V is the volume of the displaced fluid. In the case of a ship, for instance, its weight is balanced by pressure forces from the surrounding water, allowing it to float. If more cargo is loaded onto the ship, it would sink more into the water, displacing more water and thus receiving a higher buoyant force to balance the increased weight.
Discovery of the principle of buoyancy is attributed to Archimedes.
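A minimal numeric sketch of F = ρgV (the seawater density below is an assumed figure):

```python
def buoyant_force(rho_fluid, g, v_displaced):
    """Magnitude of the buoyant force (N) on a body displacing v_displaced m^3."""
    return rho_fluid * g * v_displaced

# A hull displacing 2 m^3 of seawater (assumed rho ~ 1025 kg/m^3) can support
# roughly 20 kN of weight, i.e. about a 2-tonne vessel.
force = buoyant_force(1025.0, 9.81, 2.0)
```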
=== Hydrostatic force on submerged surfaces ===
The horizontal and vertical components of the hydrostatic force acting on a submerged surface are given by the following formula:
{\displaystyle {\begin{aligned}F_{\mathrm {h} }&=p_{\mathrm {c} }A\\F_{\mathrm {v} }&=\rho gV\end{aligned}}}
where
pc is the pressure at the centroid of the vertical projection of the submerged surface
A is the area of the same vertical projection of the surface
ρ is the density of the fluid
g is the acceleration due to gravity
V is the volume of fluid directly above the curved surface
== Liquids (fluids with free surfaces) ==
Liquids can have free surfaces at which they interface with gases, or with a vacuum. In general, the lack of the ability to sustain a shear stress entails that free surfaces rapidly adjust towards an equilibrium. However, on small length scales, there is an important balancing force from surface tension.
=== Capillary action ===
When liquids are constrained in vessels whose dimensions are small, compared to the relevant length scales, surface tension effects become important leading to the formation of a meniscus through capillary action. This capillary action has profound consequences for biological systems as it is part of one of the two driving mechanisms of the flow of water in plant xylem, the transpirational pull.
=== Hanging drops ===
Without surface tension, drops would not be able to form. The dimensions and stability of drops are determined by surface tension. The drop's surface tension is directly proportional to the cohesion property of the fluid.
== See also ==
Communicating vessels – Set of internally connected containers containing a homogeneous fluid
Hydrostatic test – Non-destructive test of pressure vessels
D-DIA – Apparatus used for high pressure and high temperature deformation experiments
== References ==
== Further reading ==
Batchelor, George K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2.
Falkovich, Gregory (2011). Fluid Mechanics (A short course for physicists). Cambridge University Press. ISBN 978-1-107-00575-4.
Kundu, Pijush K.; Cohen, Ira M. (2008). Fluid Mechanics (4th rev. ed.). Academic Press. ISBN 978-0-12-373735-9.
Currie, I. G. (1974). Fundamental Mechanics of Fluids. McGraw-Hill. ISBN 0-07-015000-1.
Massey, B.; Ward-Smith, J. (2005). Mechanics of Fluids (8th ed.). Taylor & Francis. ISBN 978-0-415-36206-1.
White, Frank M. (2003). Fluid Mechanics. McGraw–Hill. ISBN 0-07-240217-2.
== External links ==
The Flow of Dry Water - The Feynman Lectures on Physics
In continuum mechanics, plate theories are mathematical descriptions of the mechanics of flat plates that draw on the theory of beams. Plates are defined as plane structural elements with a small thickness compared to the planar dimensions. The typical thickness to width ratio of a plate structure is less than 0.1. A plate theory takes advantage of this disparity in length scale to reduce the full three-dimensional solid mechanics problem to a two-dimensional problem. The aim of plate theory is to calculate the deformation and stresses in a plate subjected to loads.
Of the numerous plate theories that have been developed since the late 19th century, two are widely accepted and used in engineering. These are
the Kirchhoff–Love theory of plates (classical plate theory)
the Reissner–Mindlin theory of plates (first-order shear plate theory)
== Kirchhoff–Love theory for thin plates ==
The Kirchhoff–Love theory is an extension of Euler–Bernoulli beam theory to thin plates. The theory was developed in 1888 by Love using assumptions proposed by Kirchhoff. It is assumed that a mid-surface plane can be used to represent the three-dimensional plate in two-dimensional form.
The following kinematic assumptions are made in this theory:
straight lines normal to the mid-surface remain straight after deformation
straight lines normal to the mid-surface remain normal to the mid-surface after deformation
the thickness of the plate does not change during a deformation.
=== Displacement field ===
The Kirchhoff hypothesis implies that the displacement field has the form
{\displaystyle u_{\alpha }(\mathbf {x} )=u_{\alpha }^{0}(x_{1},x_{2})-x_{3}~w_{,\alpha }^{0}~;~~\alpha =1,2~;~~u_{3}(\mathbf {x} )=w^{0}(x_{1},x_{2})}
where
{\displaystyle x_{1}}
and
{\displaystyle x_{2}}
are the Cartesian coordinates on the mid-surface of the undeformed plate,
{\displaystyle x_{3}}
is the coordinate for the thickness direction,
{\displaystyle u_{1}^{0},u_{2}^{0}}
are the in-plane displacements of the mid-surface, and
{\displaystyle w^{0}}
is the displacement of the mid-surface in the
{\displaystyle x_{3}}
direction.
If
{\displaystyle \varphi _{\alpha }}
are the angles of rotation of the normal to the mid-surface, then in the Kirchhoff–Love theory
{\displaystyle \varphi _{\alpha }=w_{,\alpha }^{0}\,.}
=== Strain-displacement relations ===
For the situation where the strains in the plate are infinitesimal and the rotations of the mid-surface normals are less than 10°, the strain–displacement relations are
{\displaystyle {\begin{aligned}\varepsilon _{\alpha \beta }&={\tfrac {1}{2}}(u_{\alpha ,\beta }^{0}+u_{\beta ,\alpha }^{0})-x_{3}~w_{,\alpha \beta }^{0}\\\varepsilon _{\alpha 3}&=-w_{,\alpha }^{0}+w_{,\alpha }^{0}=0\\\varepsilon _{33}&=0\end{aligned}}}
Therefore, the only non-zero strains are in the in-plane directions.
If the rotations of the normals to the mid-surface are in the range of 10° to 15°, the strain-displacement relations can be approximated using the von Kármán strains. Then the kinematic assumptions of Kirchhoff-Love theory lead to the following strain-displacement relations
{\displaystyle {\begin{aligned}\varepsilon _{\alpha \beta }&={\frac {1}{2}}(u_{\alpha ,\beta }^{0}+u_{\beta ,\alpha }^{0}+w_{,\alpha }^{0}~w_{,\beta }^{0})-x_{3}~w_{,\alpha \beta }^{0}\\\varepsilon _{\alpha 3}&=-w_{,\alpha }^{0}+w_{,\alpha }^{0}=0\\\varepsilon _{33}&=0\end{aligned}}}
This theory is nonlinear because of the quadratic terms in the strain-displacement relations.
=== Equilibrium equations ===
The equilibrium equations for the plate can be derived from the principle of virtual work. For the situation where the strains and rotations of the plate are small, the equilibrium equations for an unloaded plate are given by
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\alpha \beta }&=0\end{aligned}}}
where the stress resultants and stress moment resultants are defined as
{\displaystyle N_{\alpha \beta }:=\int _{-h}^{h}\sigma _{\alpha \beta }~dx_{3}~;~~M_{\alpha \beta }:=\int _{-h}^{h}x_{3}~\sigma _{\alpha \beta }~dx_{3}}
and the thickness of the plate is {\displaystyle 2h}. The quantities {\displaystyle \sigma _{\alpha \beta }} are the stresses.
If the plate is loaded by an external distributed load {\displaystyle q(x)} that is normal to the mid-surface and directed in the positive {\displaystyle x_{3}} direction, the principle of virtual work then leads to the equilibrium equations
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\alpha \beta }-q&=0\end{aligned}}}
For moderate rotations, the strain-displacement relations take the von Karman form and the equilibrium equations can be expressed as
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\alpha }&=0\\M_{\alpha \beta ,\alpha \beta }+[N_{\alpha \beta }~w_{,\beta }^{0}]_{,\alpha }-q&=0\end{aligned}}}
=== Boundary conditions ===
The boundary conditions that are needed to solve the equilibrium equations of plate theory can be obtained from the boundary terms in the principle of virtual work.
For small strains and small rotations, the boundary conditions are
{\displaystyle {\begin{aligned}n_{\alpha }~N_{\alpha \beta }&\quad \mathrm {or} \quad u_{\beta }^{0}\\n_{\alpha }~M_{\alpha \beta ,\beta }&\quad \mathrm {or} \quad w^{0}\\n_{\beta }~M_{\alpha \beta }&\quad \mathrm {or} \quad w_{,\alpha }^{0}\end{aligned}}}
Note that the quantity
{\displaystyle n_{\alpha }~M_{\alpha \beta ,\beta }}
is an effective shear force.
=== Stress–strain relations ===
The stress–strain relations for a linear elastic Kirchhoff plate are given by
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{12}\end{bmatrix}}}
Since {\displaystyle \sigma _{\alpha 3}} and {\displaystyle \sigma _{33}} do not appear in the equilibrium equations it is implicitly assumed that these quantities do not have any effect on the momentum balance and are neglected.
It is more convenient to work with the stress and moment resultants that enter the equilibrium equations. These are related to the displacements by
{\displaystyle {\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}=\left\{\int _{-h}^{h}{\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}u_{1,1}^{0}\\u_{2,2}^{0}\\{\frac {1}{2}}~(u_{1,2}^{0}+u_{2,1}^{0})\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-\left\{\int _{-h}^{h}x_{3}^{2}~{\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}w_{,11}^{0}\\w_{,22}^{0}\\w_{,12}^{0}\end{bmatrix}}\,.}
The extensional stiffnesses are the quantities
{\displaystyle A_{\alpha \beta }:=\int _{-h}^{h}C_{\alpha \beta }~dx_{3}}
The bending stiffnesses (also called flexural rigidity) are the quantities
{\displaystyle D_{\alpha \beta }:=\int _{-h}^{h}x_{3}^{2}~C_{\alpha \beta }~dx_{3}}
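For a homogeneous plate the thickness integral in the bending stiffness reduces to ∫ x3² dx3 = 2h³/3 over (−h, h), the factor that appears in the isotropic results below. A midpoint-rule sketch (the half-thickness value is an arbitrary example):

```python
def thickness_moment(h, n=20000):
    """Midpoint-rule approximation of the integral of x3^2 over (-h, h),
    whose exact value is 2*h**3/3."""
    dx = 2.0 * h / n
    return sum((-h + (i + 0.5) * dx) ** 2 * dx for i in range(n))

h = 0.005  # half-thickness of a 10 mm plate, in metres
numeric = thickness_moment(h)
closed_form = 2.0 * h**3 / 3.0
```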
== Isotropic and homogeneous Kirchhoff plate ==
For an isotropic and homogeneous plate, the stress–strain relations are
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}={\cfrac {E}{1-\nu ^{2}}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{12}\end{bmatrix}}\,.}
The moments corresponding to these stresses are
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-{\cfrac {2h^{3}E}{3(1-\nu ^{2})}}~{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}w_{,11}^{0}\\w_{,22}^{0}\\w_{,12}^{0}\end{bmatrix}}}
=== Pure bending ===
The displacements {\displaystyle u_{1}^{0}} and {\displaystyle u_{2}^{0}} are zero under pure bending conditions. For an isotropic, homogeneous plate under pure bending the governing equation is
{\displaystyle {\frac {\partial ^{4}w}{\partial x_{1}^{4}}}+2{\frac {\partial ^{4}w}{\partial x_{1}^{2}\partial x_{2}^{2}}}+{\frac {\partial ^{4}w}{\partial x_{2}^{4}}}=0\quad {\text{where}}\quad w:=w^{0}\,.}
In index notation,
{\displaystyle w_{,1111}^{0}+2~w_{,1212}^{0}+w_{,2222}^{0}=0\,.}
In direct tensor notation, the governing equation is
{\displaystyle \nabla ^{2}\nabla ^{2}w=0\,.}
=== Transverse loading ===
For a transversely loaded plate without axial deformations, the governing equation has the form
{\displaystyle {\frac {\partial ^{4}w}{\partial x_{1}^{4}}}+2{\frac {\partial ^{4}w}{\partial x_{1}^{2}\partial x_{2}^{2}}}+{\frac {\partial ^{4}w}{\partial x_{2}^{4}}}=-{\frac {q}{D}}}
where
{\displaystyle D:={\cfrac {2h^{3}E}{3(1-\nu ^{2})}}\,.}
for a plate with thickness
{\displaystyle 2h}
.
In index notation,
{\displaystyle w_{,1111}^{0}+2\,w_{,1212}^{0}+w_{,2222}^{0}=-{\frac {q}{D}}}
and in direct notation
{\displaystyle \nabla ^{2}\nabla ^{2}w=-{\frac {q}{D}}\,.}
In cylindrical coordinates
{\displaystyle (r,\theta ,z)}
, the governing equation is
{\displaystyle {\frac {1}{r}}{\cfrac {d}{dr}}\left[r{\cfrac {d}{dr}}\left\{{\frac {1}{r}}{\cfrac {d}{dr}}\left(r{\cfrac {dw}{dr}}\right)\right\}\right]=-{\frac {q}{D}}\,.}
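The axisymmetric equation can be verified against the classical uniformly loaded clamped circular plate. In the sketch below (a check under assumed values, not from the article), the candidate deflection w(r) = −q(a² − r²)²/(64D), which satisfies w(a) = w′(a) = 0, is pushed through the operator with exact polynomial calculus; the result is the constant −q/D:

```python
def deriv(p):
    """d/dr of a polynomial in r stored as {power: coefficient}."""
    return {k - 1: k * cf for k, cf in p.items() if k != 0}

def mul_r(p):
    """Multiply a polynomial by r."""
    return {k + 1: cf for k, cf in p.items()}

def div_r(p):
    """Divide a polynomial by r (valid only when there is no constant term)."""
    assert all(k >= 1 for k in p)
    return {k - 1: cf for k, cf in p.items()}

q, D, a = 1000.0, 2.0e4, 0.5       # illustrative load, rigidity, plate radius
c = -q / (64.0 * D)
# Candidate deflection w(r) = c*(a^2 - r^2)^2 = c*a^4 - 2*c*a^2*r^2 + c*r^4
w = {0: c * a**4, 2: -2.0 * c * a**2, 4: c}
# Apply (1/r) d/dr [ r d/dr { (1/r) d/dr ( r dw/dr ) } ] from the inside out
lhs = div_r(deriv(mul_r(deriv(div_r(deriv(mul_r(deriv(w))))))))
print(lhs)  # a constant polynomial equal to -q/D
```

Because the deflection is a polynomial in r, the differentiation is exact and no finite-difference error enters the check.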
== Orthotropic and homogeneous Kirchhoff plate ==
For an orthotropic plate
{\displaystyle {\begin{bmatrix}C_{11}&C_{12}&C_{13}\\C_{12}&C_{22}&C_{23}\\C_{13}&C_{23}&C_{33}\end{bmatrix}}={\cfrac {1}{1-\nu _{12}\nu _{21}}}{\begin{bmatrix}E_{1}&\nu _{12}E_{2}&0\\\nu _{21}E_{1}&E_{2}&0\\0&0&2G_{12}(1-\nu _{12}\nu _{21})\end{bmatrix}}\,.}
Therefore,
{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}}={\cfrac {2h}{1-\nu _{12}\nu _{21}}}{\begin{bmatrix}E_{1}&\nu _{12}E_{2}&0\\\nu _{21}E_{1}&E_{2}&0\\0&0&2G_{12}(1-\nu _{12}\nu _{21})\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}D_{11}&D_{12}&D_{13}\\D_{21}&D_{22}&D_{23}\\D_{31}&D_{32}&D_{33}\end{bmatrix}}={\cfrac {2h^{3}}{3(1-\nu _{12}\nu _{21})}}{\begin{bmatrix}E_{1}&\nu _{12}E_{2}&0\\\nu _{21}E_{1}&E_{2}&0\\0&0&2G_{12}(1-\nu _{12}\nu _{21})\end{bmatrix}}\,.}
=== Transverse loading ===
The governing equation of an orthotropic Kirchhoff plate loaded transversely by a distributed load
{\displaystyle q}
per unit area is
{\displaystyle D_{x}w_{,1111}^{0}+2D_{xy}w_{,1122}^{0}+D_{y}w_{,2222}^{0}=-q}
where
{\displaystyle {\begin{aligned}D_{x}&=D_{11}={\frac {2h^{3}E_{1}}{3(1-\nu _{12}\nu _{21})}}\\D_{y}&=D_{22}={\frac {2h^{3}E_{2}}{3(1-\nu _{12}\nu _{21})}}\\D_{xy}&=D_{33}+{\tfrac {1}{2}}(\nu _{21}D_{11}+\nu _{12}D_{22})=D_{33}+\nu _{21}D_{11}={\frac {4h^{3}G_{12}}{3}}+{\frac {2h^{3}\nu _{21}E_{1}}{3(1-\nu _{12}\nu _{21})}}\,.\end{aligned}}}
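A short numerical sketch (illustrative stiffness values, not from the article) evaluates the three rigidities; note that the reciprocity relation ν₂₁ = ν₁₂E₂/E₁, implied by the symmetry of the stiffness matrix above, collapses ½(ν₂₁D₁₁ + ν₁₂D₂₂) to ν₂₁D₁₁:

```python
# Illustrative orthotropic plate (Pa, m); the symmetric stiffness matrix above
# requires the reciprocity nu12*E2 == nu21*E1, enforced here by construction.
E1, E2, G12, nu12, h = 140.0e9, 10.0e9, 5.0e9, 0.3, 0.002
nu21 = nu12 * E2 / E1

k = 2.0 * h**3 / (3.0 * (1.0 - nu12 * nu21))
D11, D22 = k * E1, k * E2
D33 = (4.0 * h**3 / 3.0) * G12         # k * 2*G12*(1 - nu12*nu21)

Dx, Dy = D11, D22
Dxy = D33 + 0.5 * (nu21 * D11 + nu12 * D22)

# Reciprocity collapses the symmetric average to nu21*D11 ...
assert abs(Dxy - (D33 + nu21 * D11)) < 1e-9 * Dxy
# ... which matches the explicit expression in the text.
Dxy_explicit = 4.0 * h**3 * G12 / 3.0 + 2.0 * h**3 * nu21 * E1 / (3.0 * (1.0 - nu12 * nu21))
assert abs(Dxy - Dxy_explicit) < 1e-9 * Dxy
print(Dx, Dy, Dxy)
```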
== Dynamics of thin Kirchhoff plates ==
The dynamic theory of plates describes the propagation of waves in plates, together with the analysis of standing waves and vibration modes.
=== Governing equations ===
The governing equations for the dynamics of a Kirchhoff–Love plate are
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\beta }&=J_{1}~{\ddot {u}}_{\alpha }^{0}\\M_{\alpha \beta ,\alpha \beta }-q(x,t)&=J_{1}~{\ddot {w}}^{0}-J_{3}~{\ddot {w}}_{,\alpha \alpha }^{0}\end{aligned}}}
where, for a plate with density
{\displaystyle \rho =\rho (x)}
,
{\displaystyle J_{1}:=\int _{-h}^{h}\rho ~dx_{3}=2~\rho ~h~;~~J_{3}:=\int _{-h}^{h}x_{3}^{2}~\rho ~dx_{3}={\frac {2}{3}}~\rho ~h^{3}}
and
{\displaystyle {\dot {u}}_{i}={\frac {\partial u_{i}}{\partial t}}~;~~{\ddot {u}}_{i}={\frac {\partial ^{2}u_{i}}{\partial t^{2}}}~;~~u_{i,\alpha }={\frac {\partial u_{i}}{\partial x_{\alpha }}}~;~~u_{i,\alpha \beta }={\frac {\partial ^{2}u_{i}}{\partial x_{\alpha }\partial x_{\beta }}}}
The figures below show some vibrational modes of a circular plate.
=== Isotropic plates ===
For isotropic and homogeneous plates in which the in-plane deformations can be neglected, the governing equations simplify considerably and take the form
{\displaystyle D\,\left({\frac {\partial ^{4}w^{0}}{\partial x_{1}^{4}}}+2{\frac {\partial ^{4}w^{0}}{\partial x_{1}^{2}\partial x_{2}^{2}}}+{\frac {\partial ^{4}w^{0}}{\partial x_{2}^{4}}}\right)=-q(x_{1},x_{2},t)-2\rho h\,{\frac {\partial ^{2}w^{0}}{\partial t^{2}}}\,.}
where
{\displaystyle D}
is the bending stiffness of the plate. For a uniform plate of thickness
{\displaystyle 2h}
,
{\displaystyle D:={\cfrac {2h^{3}E}{3(1-\nu ^{2})}}\,.}
In direct notation
{\displaystyle D\,\nabla ^{2}\nabla ^{2}w^{0}=-q(x_{1},x_{2},t)-2\rho h\,{\ddot {w}}^{0}\,.}
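One consequence of this equation is the classical free-vibration spectrum of a simply supported rectangular plate. Setting q = 0 and substituting a mode w⁰ = sin(mπx₁/a) sin(nπx₂/b) e^{iωt} gives ω = π²(m²/a² + n²/b²)√(D/(2ρh)). The sketch below (illustrative steel-like values, not from the article) evaluates this and checks the residual of the substituted equation:

```python
import math

def kirchhoff_frequency(m, n, a, b, E, nu, rho, h):
    """Natural angular frequency of mode (m, n) of a simply supported
    rectangular Kirchhoff plate of thickness 2h, from D*k4 = 2*rho*h*omega^2."""
    D = 2.0 * h**3 * E / (3.0 * (1.0 - nu**2))
    k2 = (m / a)**2 + (n / b)**2            # m^2/a^2 + n^2/b^2
    return math.pi**2 * k2 * math.sqrt(D / (2.0 * rho * h))

# Illustrative steel-like plate, 1 m x 0.5 m, total thickness 2h = 10 mm.
E, nu, rho, h = 200e9, 0.3, 7850.0, 0.005
omega11 = kirchhoff_frequency(1, 1, 1.0, 0.5, E, nu, rho, h)

# Residual of D*(pi^2*k2)^2 = 2*rho*h*omega^2 for the (1, 1) mode:
D = 2.0 * h**3 * E / (3.0 * (1.0 - nu**2))
k2 = (1 / 1.0)**2 + (1 / 0.5)**2
residual = D * (math.pi**2 * k2)**2 - 2.0 * rho * h * omega11**2
print(omega11, residual)   # residual is zero up to rounding
```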
== Reissner-Mindlin theory for thick plates ==
In the theory of thick plates, contributed to by Eric Reissner, Raymond Mindlin, and Yakov S. Uflyand, the normal to the mid-surface remains straight but not necessarily perpendicular to the mid-surface. If
{\displaystyle \varphi _{1}}
and
{\displaystyle \varphi _{2}}
designate the angles which the mid-surface makes with the
{\displaystyle x_{3}}
axis then
{\displaystyle \varphi _{1}\neq w_{,1}~;~~\varphi _{2}\neq w_{,2}}
Then the Mindlin–Reissner hypothesis implies that
{\displaystyle {\begin{aligned}u_{\alpha }(\mathbf {x} )&=u_{\alpha }^{0}(x_{1},x_{2})-x_{3}~\varphi _{\alpha }~;~~\alpha =1,2\\u_{3}(\mathbf {x} )&=w^{0}(x_{1},x_{2})\,.\end{aligned}}}
=== Strain-displacement relations ===
Depending on the amount of rotation of the plate normals two different approximations for the strains can be derived from the basic kinematic assumptions.
For small strains and small rotations the strain-displacement relations for Mindlin–Reissner plates are
{\displaystyle {\begin{aligned}\varepsilon _{\alpha \beta }&={\frac {1}{2}}(u_{\alpha ,\beta }^{0}+u_{\beta ,\alpha }^{0})-{\frac {x_{3}}{2}}~(\varphi _{\alpha ,\beta }+\varphi _{\beta ,\alpha })\\\varepsilon _{\alpha 3}&={\cfrac {1}{2}}\left(w_{,\alpha }^{0}-\varphi _{\alpha }\right)\\\varepsilon _{33}&=0\end{aligned}}}
The shear strain, and hence the shear stress, across the thickness of the plate is not neglected in this theory. However, the shear strain is constant across the thickness of the plate. This cannot be accurate since the shear stress is known to be parabolic even for simple plate geometries. To account for the inaccuracy in the shear strain, a shear correction factor (
{\displaystyle \kappa }
) is applied so that the correct amount of internal energy is predicted by the theory. Then
{\displaystyle \varepsilon _{\alpha 3}={\cfrac {1}{2}}~\kappa ~\left(w_{,\alpha }^{0}-\varphi _{\alpha }\right)}
=== Equilibrium equations ===
The equilibrium equations have slightly different forms depending on the amount of bending expected in the plate. For the situation where the strains and rotations of the plate are small, the equilibrium equations for a Mindlin–Reissner plate are
{\displaystyle {\begin{aligned}N_{\alpha \beta ,\beta }&=0\\M_{\alpha \beta ,\beta }-Q_{\alpha }&=0\\Q_{\alpha ,\alpha }+q&=0\,.\end{aligned}}}
The resultant shear forces in the above equations are defined as
{\displaystyle Q_{\alpha }:=\kappa ~\int _{-h}^{h}\sigma _{\alpha 3}~dx_{3}\,.}
=== Boundary conditions ===
The boundary conditions are indicated by the boundary terms in the principle of virtual work.
If the only external force is a vertical force on the top surface of the plate, the boundary conditions are
{\displaystyle {\begin{aligned}n_{\alpha }~N_{\alpha \beta }&\quad \mathrm {or} \quad u_{\beta }^{0}\\n_{\alpha }~M_{\alpha \beta }&\quad \mathrm {or} \quad \varphi _{\alpha }\\n_{\alpha }~Q_{\alpha }&\quad \mathrm {or} \quad w^{0}\end{aligned}}}
=== Constitutive relations ===
The stress–strain relations for a linear elastic Mindlin–Reissner plate are given by
{\displaystyle {\begin{aligned}\sigma _{\alpha \beta }&=C_{\alpha \beta \gamma \theta }~\varepsilon _{\gamma \theta }\\\sigma _{\alpha 3}&=C_{\alpha 3\gamma \theta }~\varepsilon _{\gamma \theta }\\\sigma _{33}&=C_{33\gamma \theta }~\varepsilon _{\gamma \theta }\end{aligned}}}
Since {\displaystyle \sigma _{33}} does not appear in the equilibrium equations, it is implicitly assumed that it does not have any effect on the momentum balance and is neglected. This assumption is also called the plane stress assumption. The remaining stress–strain relations for an orthotropic material, in matrix form, can be written as
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{23}\\\sigma _{31}\\\sigma _{12}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{12}&0&0&0\\C_{12}&C_{22}&0&0&0\\0&0&C_{44}&0&0\\0&0&0&C_{55}&0\\0&0&0&0&C_{66}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{23}\\\varepsilon _{31}\\\varepsilon _{12}\end{bmatrix}}}
Then,
{\displaystyle {\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}=\left\{\int _{-h}^{h}{\begin{bmatrix}C_{11}&C_{12}&0\\C_{12}&C_{22}&0\\0&0&C_{66}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}u_{1,1}^{0}\\u_{2,2}^{0}\\{\frac {1}{2}}~(u_{1,2}^{0}+u_{2,1}^{0})\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-\left\{\int _{-h}^{h}x_{3}^{2}~{\begin{bmatrix}C_{11}&C_{12}&0\\C_{12}&C_{22}&0\\0&0&C_{66}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}\varphi _{1,1}\\\varphi _{2,2}\\{\frac {1}{2}}~(\varphi _{1,2}+\varphi _{2,1})\end{bmatrix}}}
For the shear terms
{\displaystyle {\begin{bmatrix}Q_{1}\\Q_{2}\end{bmatrix}}={\cfrac {\kappa }{2}}\left\{\int _{-h}^{h}{\begin{bmatrix}C_{55}&0\\0&C_{44}\end{bmatrix}}~dx_{3}\right\}{\begin{bmatrix}w_{,1}^{0}-\varphi _{1}\\w_{,2}^{0}-\varphi _{2}\end{bmatrix}}}
The extensional stiffnesses are the quantities
{\displaystyle A_{\alpha \beta }:=\int _{-h}^{h}C_{\alpha \beta }~dx_{3}}
The bending stiffnesses are the quantities
{\displaystyle D_{\alpha \beta }:=\int _{-h}^{h}x_{3}^{2}~C_{\alpha \beta }~dx_{3}}
== Isotropic and homogeneous Reissner-Mindlin plates ==
For uniformly thick, homogeneous, and isotropic plates, the stress–strain relations in the plane of the plate are
{\displaystyle {\begin{bmatrix}\sigma _{11}\\\sigma _{22}\\\sigma _{12}\end{bmatrix}}={\cfrac {E}{1-\nu ^{2}}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}\\\varepsilon _{22}\\\varepsilon _{12}\end{bmatrix}}\,.}
where
{\displaystyle E}
is the Young's modulus,
{\displaystyle \nu }
is the Poisson's ratio, and
{\displaystyle \varepsilon _{\alpha \beta }}
are the in-plane strains. The through-the-thickness shear stresses and strains are related by
{\displaystyle \sigma _{31}=2G\varepsilon _{31}\quad {\text{and}}\quad \sigma _{32}=2G\varepsilon _{32}}
where
{\displaystyle G=E/(2(1+\nu ))}
is the shear modulus.
=== Constitutive relations ===
The relations between the stress resultants and the generalized displacements for an isotropic Mindlin–Reissner plate are:
{\displaystyle {\begin{bmatrix}N_{11}\\N_{22}\\N_{12}\end{bmatrix}}={\cfrac {2Eh}{1-\nu ^{2}}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}u_{1,1}^{0}\\u_{2,2}^{0}\\{\frac {1}{2}}~(u_{1,2}^{0}+u_{2,1}^{0})\end{bmatrix}}\,,}
{\displaystyle {\begin{bmatrix}M_{11}\\M_{22}\\M_{12}\end{bmatrix}}=-{\cfrac {2Eh^{3}}{3(1-\nu ^{2})}}{\begin{bmatrix}1&\nu &0\\\nu &1&0\\0&0&1-\nu \end{bmatrix}}{\begin{bmatrix}\varphi _{1,1}\\\varphi _{2,2}\\{\frac {1}{2}}(\varphi _{1,2}+\varphi _{2,1})\end{bmatrix}}\,,}
and
{\displaystyle {\begin{bmatrix}Q_{1}\\Q_{2}\end{bmatrix}}=\kappa Gh{\begin{bmatrix}w_{,1}^{0}-\varphi _{1}\\w_{,2}^{0}-\varphi _{2}\end{bmatrix}}\,.}
The bending rigidity is defined as the quantity
{\displaystyle D={\cfrac {2Eh^{3}}{3(1-\nu ^{2})}}\,.}
For a plate of thickness
{\displaystyle H}
, the bending rigidity has the form
{\displaystyle D={\cfrac {EH^{3}}{12(1-\nu ^{2})}}\,.}
where
{\displaystyle h={\frac {H}{2}}}
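A quick numerical check (illustrative values) confirms that the two expressions for the bending rigidity agree when H = 2h:

```python
E, nu, h = 70e9, 0.33, 0.004        # illustrative modulus, Poisson ratio, half-thickness
H = 2.0 * h                          # full plate thickness
D_half = 2.0 * E * h**3 / (3.0 * (1.0 - nu**2))    # in terms of the half-thickness h
D_full = E * H**3 / (12.0 * (1.0 - nu**2))         # in terms of the full thickness H
print(D_half, D_full)                               # identical up to rounding
```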
=== Governing equations ===
If we ignore the in-plane extension of the plate, the governing equations are
{\displaystyle {\begin{aligned}M_{\alpha \beta ,\beta }-Q_{\alpha }&=0\\Q_{\alpha ,\alpha }+q&=0\,.\end{aligned}}}
In terms of the generalized deformations
{\displaystyle w^{0},\varphi _{1},\varphi _{2}}
, the three governing equations are
The boundary conditions along the edges of a rectangular plate are
simply supported
{\displaystyle {\begin{aligned}{\text{simply supported}}\quad &\quad w^{0}=0,M_{11}=0~({\text{or}}~M_{22}=0),\varphi _{1}=0~({\text{or}}~\varphi _{2}=0)\\{\text{clamped}}\quad &\quad w^{0}=0,\varphi _{1}=0,\varphi _{2}=0\,.\end{aligned}}}
== Reissner–Stein static theory for isotropic cantilever plates ==
In general, solutions for cantilever plates using plate theory are quite involved, and few exact solutions can be found in the literature. Reissner and Stein provide a simplified theory for cantilever plates that is an improvement over older theories such as Saint-Venant plate theory.
The Reissner-Stein theory assumes a transverse displacement field of the form
{\displaystyle w(x,y)=w_{x}(x)+y\,\theta _{x}(x)\,.}
The governing equations for the plate then reduce to two coupled ordinary differential equations:
where
{\displaystyle {\begin{aligned}q_{1}(x)&=\int _{-b/2}^{b/2}q(x,y)\,{\text{d}}y~,~~q_{2}(x)=\int _{-b/2}^{b/2}y\,q(x,y)\,{\text{d}}y~,~~n_{1}(x)=\int _{-b/2}^{b/2}n_{x}(x,y)\,{\text{d}}y\\n_{2}(x)&=\int _{-b/2}^{b/2}y\,n_{x}(x,y)\,{\text{d}}y~,~~n_{3}(x)=\int _{-b/2}^{b/2}y^{2}\,n_{x}(x,y)\,{\text{d}}y\,.\end{aligned}}}
At
{\displaystyle x=0}
, since the beam is clamped, the boundary conditions are
{\displaystyle w(0,y)={\cfrac {dw}{dx}}{\Bigr |}_{x=0}=0\qquad \implies \qquad w_{x}(0)={\cfrac {dw_{x}}{dx}}{\Bigr |}_{x=0}=\theta _{x}(0)={\cfrac {d\theta _{x}}{dx}}{\Bigr |}_{x=0}=0\,.}
The boundary conditions at
{\displaystyle x=a}
are
{\displaystyle {\begin{aligned}&bD{\cfrac {d^{3}w_{x}}{dx^{3}}}+n_{1}(x){\cfrac {dw_{x}}{dx}}+n_{2}(x){\cfrac {d\theta _{x}}{dx}}+q_{x1}=0\\&{\frac {b^{3}D}{12}}{\cfrac {d^{3}\theta _{x}}{dx^{3}}}+\left[n_{3}(x)-2bD(1-\nu )\right]{\cfrac {d\theta _{x}}{dx}}+n_{2}(x){\cfrac {dw_{x}}{dx}}+t=0\\&bD{\cfrac {d^{2}w_{x}}{dx^{2}}}+m_{1}=0\quad ,\quad {\frac {b^{3}D}{12}}{\cfrac {d^{2}\theta _{x}}{dx^{2}}}+m_{2}=0\end{aligned}}}
where
{\displaystyle {\begin{aligned}m_{1}&=\int _{-b/2}^{b/2}m_{x}(y)\,{\text{d}}y~,~~m_{2}=\int _{-b/2}^{b/2}y\,m_{x}(y)\,{\text{d}}y~,~~q_{x1}=\int _{-b/2}^{b/2}q_{x}(y)\,{\text{d}}y\\t&=q_{x2}+m_{3}=\int _{-b/2}^{b/2}y\,q_{x}(y)\,{\text{d}}y+\int _{-b/2}^{b/2}m_{xy}(y)\,{\text{d}}y\,.\end{aligned}}}
== See also ==
Bending of plates
Vibration of plates
Infinitesimal strain theory
Membrane theory of shells
Finite strain theory
Stress (mechanics)
Stress resultants
Linear elasticity
Bending
Föppl–von Kármán equations
Euler–Bernoulli beam equation
Timoshenko beam theory
== References ==
Sandwich theory describes the behaviour of a beam, plate, or shell which consists of three layers—two facesheets and one core. The most commonly used sandwich theory is linear and is an extension of first-order beam theory. The linear sandwich theory is of importance for the design and analysis of sandwich panels, which are of use in building construction, vehicle construction, airplane construction and refrigeration engineering.
Some advantages of sandwich construction are:
Sandwich cross-sections are composite. They usually consist of a low to moderate stiffness core which is connected with two stiff exterior facesheets. The composite has a considerably higher shear stiffness to weight ratio than an equivalent beam made of only the core material or the facesheet material. The composite also has a high tensile strength to weight ratio.
The high stiffness of the facesheet leads to a high bending stiffness to weight ratio for the composite.
The behavior of a beam with sandwich cross-section under a load differs from a beam with a constant elastic cross section. If the radius of curvature during bending is large compared to the thickness of the sandwich beam and the strains in the component materials are small, the deformation of a sandwich composite beam can be separated into two parts
deformations due to bending moments or bending deformation, and
deformations due to transverse forces, also called shear deformation.
Sandwich beam, plate, and shell theories usually assume that the reference stress state is one of zero stress. However, during curing, differences of temperature between the facesheets persist because of the thermal separation by the core material. These temperature differences, coupled with different linear expansions of the facesheets, can lead to a bending of the sandwich beam in the direction of the warmer facesheet. If the bending is constrained during the manufacturing process, residual stresses can develop in the components of a sandwich composite. The superposition of a reference stress state on the solutions provided by sandwich theory is possible when the problem is linear. However, when large elastic deformations and rotations are expected, the initial stress state has to be incorporated directly into the sandwich theory.
== Engineering sandwich beam theory ==
In the engineering theory of sandwich beams, the axial strain is assumed to vary linearly over the cross-section of the beam as in Euler-Bernoulli theory, i.e.,
{\displaystyle \varepsilon _{xx}(x,z)=-z~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}}
Therefore, the axial stress in the sandwich beam is given by
{\displaystyle \sigma _{xx}(x,z)=-z~E(z)~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}}
where
{\displaystyle E(z)}
is the Young's modulus which is a function of the location along the thickness of the beam. The bending moment in the beam is then given by
{\displaystyle M_{x}(x)=\int \int z~\sigma _{xx}~\mathrm {d} z\,\mathrm {d} y=-\left(\int \int z^{2}E(z)~\mathrm {d} z\,\mathrm {d} y\right)~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}=:-D~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}}
The quantity
{\displaystyle D}
is called the flexural stiffness of the sandwich beam. The shear force
{\displaystyle Q_{x}}
is defined as
{\displaystyle Q_{x}={\frac {\mathrm {d} M_{x}}{\mathrm {d} x}}~.}
Using these relations, we can show that the stresses in a sandwich beam with a core of thickness {\displaystyle 2h} and modulus {\displaystyle E^{c}} and two facesheets each of thickness {\displaystyle f} and modulus {\displaystyle E^{f}} are given by
{\displaystyle {\begin{aligned}\sigma _{xx}^{\mathrm {f} }&={\cfrac {zE^{\mathrm {f} }M_{x}}{D}}~;~~&\sigma _{xx}^{\mathrm {c} }&={\cfrac {zE^{\mathrm {c} }M_{x}}{D}}\\\tau _{xz}^{\mathrm {f} }&={\cfrac {Q_{x}E^{\mathrm {f} }}{2D}}\left[(h+f)^{2}-z^{2}\right]~;~~&\tau _{xz}^{\mathrm {c} }&={\cfrac {Q_{x}}{2D}}\left[E^{\mathrm {c} }\left(h^{2}-z^{2}\right)+E^{\mathrm {f} }f(f+2h)\right]\end{aligned}}}
For a sandwich beam with identical facesheets and unit width, the value of
{\displaystyle D}
is
{\displaystyle {\begin{aligned}D&=E^{f}\int _{w}\int _{-h-f}^{-h}z^{2}~\mathrm {d} z\,\mathrm {d} y+E^{c}\int _{w}\int _{-h}^{h}z^{2}~\mathrm {d} z\,\mathrm {d} y+E^{f}\int _{w}\int _{h}^{h+f}z^{2}~\mathrm {d} z\,\mathrm {d} y\\&={\frac {2}{3}}E^{f}f^{3}+{\frac {2}{3}}E^{c}h^{3}+2E^{f}fh(f+h)~.\end{aligned}}}
If {\displaystyle E^{f}\gg E^{c}}, then {\displaystyle D} can be approximated as
{\displaystyle D\approx {\frac {2}{3}}E^{f}f^{3}+2E^{f}fh(f+h)=2fE^{f}\left({\frac {1}{3}}f^{2}+h(f+h)\right)}
and the stresses in the sandwich beam can be approximated as
{\displaystyle {\begin{aligned}\sigma _{xx}^{\mathrm {f} }&\approx {\cfrac {zM_{x}}{{\frac {2}{3}}f^{3}+2fh(f+h)}}~;~~&\sigma _{xx}^{\mathrm {c} }&\approx 0\\\tau _{xz}^{\mathrm {f} }&\approx {\cfrac {Q_{x}}{{\frac {4}{3}}f^{3}+4fh(f+h)}}\left[(h+f)^{2}-z^{2}\right]~;~~&\tau _{xz}^{\mathrm {c} }&\approx {\cfrac {Q_{x}(f+2h)}{{\frac {2}{3}}f^{2}+h(f+h)}}\end{aligned}}}
If, in addition, {\displaystyle f\ll 2h}, then
{\displaystyle D\approx 2E^{f}fh(f+h)}
and the approximate stresses in the beam are
{\displaystyle {\begin{aligned}\sigma _{xx}^{\mathrm {f} }&\approx {\cfrac {zM_{x}}{2fh(f+h)}}~;~~&\sigma _{xx}^{\mathrm {c} }&\approx 0\\\tau _{xz}^{\mathrm {f} }&\approx {\cfrac {Q_{x}}{4fh(f+h)}}\left[(h+f)^{2}-z^{2}\right]~;~~&\tau _{xz}^{\mathrm {c} }&\approx {\cfrac {Q_{x}(f+2h)}{4h(f+h)}}\approx {\cfrac {Q_{x}}{2h}}\end{aligned}}}
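The two levels of approximation can be checked numerically. The sketch below (illustrative aluminium-facesheet, foam-core values, not from the article) compares the exact flexural stiffness with the stiff-facesheet and thin-facesheet approximations:

```python
# Illustrative sandwich: aluminium facesheets on a polymer foam core (unit width).
Ef, Ec = 70e9, 0.1e9      # facesheet and core moduli, Ef >> Ec
f, h = 0.001, 0.010       # facesheet thickness f << 2h, core half-thickness h

D_exact = (2.0/3.0)*Ef*f**3 + (2.0/3.0)*Ec*h**3 + 2.0*Ef*f*h*(f + h)
D_stiff = (2.0/3.0)*Ef*f**3 + 2.0*Ef*f*h*(f + h)   # drop the core term (Ef >> Ec)
D_thin  = 2.0*Ef*f*h*(f + h)                        # additionally drop the f^3 term (f << 2h)

err_stiff = abs(D_exact - D_stiff) / D_exact
err_thin  = abs(D_exact - D_thin) / D_exact
print(err_stiff, err_thin)   # both relative errors are well below 1% for this geometry
```

Both approximations slightly underestimate the exact stiffness, since each drops a positive term from the integral.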
If we assume that the facesheets are thin enough that the stresses may be assumed to be constant through the thickness, we have the approximation
{\displaystyle {\begin{aligned}\sigma _{xx}^{\mathrm {f} }&\approx \pm {\cfrac {M_{x}}{2fh}}~;~~&\sigma _{xx}^{\mathrm {c} }&\approx 0\\\tau _{xz}^{\mathrm {f} }&\approx 0~;~~&\tau _{xz}^{\mathrm {c} }&\approx {\cfrac {Q_{x}}{2h}}\end{aligned}}}
Hence the problem can be split into two parts, one involving only core shear and the other involving only bending stresses in the facesheets.
== Linear sandwich theory ==
=== Bending of a sandwich beam with thin facesheets ===
The main assumptions of linear sandwich theories of beams with thin facesheets are:
the transverse normal stiffness of the core is infinite, i.e., the core thickness in the z-direction does not change during bending
the in-plane normal stiffness of the core is small compared to that of the facesheets, i.e., the core does not lengthen or compress in the x-direction
the facesheets behave according to the Euler-Bernoulli assumptions, i.e., there is no xz-shear in the facesheets and the z-direction thickness of the facesheets does not change
However, the xz shear-stresses in the core are not neglected.
==== Constitutive assumptions ====
The constitutive relations for two-dimensional orthotropic linear elastic materials are
{\displaystyle {\begin{bmatrix}\sigma _{xx}\\\sigma _{zz}\\\sigma _{zx}\end{bmatrix}}={\begin{bmatrix}C_{11}&C_{13}&0\\C_{13}&C_{33}&0\\0&0&C_{55}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{xx}\\\varepsilon _{zz}\\\varepsilon _{zx}\end{bmatrix}}}
The assumptions of sandwich theory lead to the simplified relations
{\displaystyle \sigma _{xx}^{\mathrm {face} }=C_{11}^{\mathrm {face} }~\varepsilon _{xx}^{\mathrm {face} }~;~~\sigma _{zx}^{\mathrm {core} }=C_{55}^{\mathrm {core} }~\varepsilon _{zx}^{\mathrm {core} }~;~~\sigma _{zz}^{\mathrm {face} }=\sigma _{xz}^{\mathrm {face} }=0~;~~\sigma _{zz}^{\mathrm {core} }=\sigma _{xx}^{\mathrm {core} }=0}
and
{\displaystyle \varepsilon _{zz}^{\mathrm {face} }=\varepsilon _{xz}^{\mathrm {face} }=0~;~~\varepsilon _{zz}^{\mathrm {core} }=\varepsilon _{xx}^{\mathrm {core} }=0}
The equilibrium equations in two dimensions are
{\displaystyle {\cfrac {\partial \sigma _{xx}}{\partial x}}+{\cfrac {\partial \sigma _{zx}}{\partial z}}=0~;~~{\cfrac {\partial \sigma _{zx}}{\partial x}}+{\cfrac {\partial \sigma _{zz}}{\partial z}}=0}
The assumptions for a sandwich beam and the equilibrium equation imply that
{\displaystyle \sigma _{xx}^{\mathrm {face} }\equiv \sigma _{xx}^{\mathrm {face} }(z)~;~~\sigma _{zx}^{\mathrm {core} }=\mathrm {constant} }
Therefore, for homogeneous facesheets and core, the strains also have the form
{\displaystyle \varepsilon _{xx}^{\mathrm {face} }\equiv \varepsilon _{xx}^{\mathrm {face} }(z)~;~~\varepsilon _{zx}^{\mathrm {core} }=\mathrm {constant} }
==== Kinematics ====
Let the sandwich beam be subjected to a bending moment {\displaystyle M} and a shear force {\displaystyle Q}. Let the total deflection of the beam due to these loads be {\displaystyle w}. The adjacent figure shows that, for small displacements, the total deflection of the mid-surface of the beam can be expressed as the sum of two deflections, a pure bending deflection {\displaystyle w_{b}} and a pure shear deflection {\displaystyle w_{s}}, i.e.,
{\displaystyle w(x)=w_{b}(x)+w_{s}(x)}
From the geometry of the deformation we observe that the engineering shear strain ({\displaystyle \gamma }) in the core is related to the effective shear strain in the composite by the relation
{\displaystyle \gamma _{zx}^{\mathrm {core} }={\tfrac {2h+f}{2h}}~\gamma _{zx}^{\mathrm {beam} }}
Note that the shear strain in the core is larger than the effective shear strain in the composite, and that small deformations ({\displaystyle \tan \gamma =\gamma }) are assumed in deriving the above relation. The effective shear strain in the beam is related to the shear displacement by the relation
{\displaystyle \gamma _{zx}^{\mathrm {beam} }={\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}}
The facesheets are assumed to deform in accordance with the assumptions of Euler-Bernoulli beam theory. The total deflection of the facesheets is assumed to be the superposition of the deflections due to bending and that due to core shear. The
{\displaystyle x}
-direction displacements of the facesheets due to bending are given by
{\displaystyle u_{b}^{\mathrm {face} }(x,z)=-z~{\cfrac {\mathrm {d} w_{b}}{\mathrm {d} x}}}
The displacement of the top facesheet due to shear in the core is
{\displaystyle u_{s}^{\mathrm {topface} }(x,z)=-\left(z-h-{\tfrac {f}{2}}\right)~{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}}
and that of the bottom facesheet is
{\displaystyle u_{s}^{\mathrm {botface} }(x,z)=-\left(z+h+{\tfrac {f}{2}}\right)~{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}}
The normal strains in the two facesheets are given by
{\displaystyle \varepsilon _{xx}={\cfrac {\partial u_{b}}{\partial x}}+{\cfrac {\partial u_{s}}{\partial x}}}
Therefore,
{\displaystyle \varepsilon _{xx}^{\mathrm {topface} }=-z~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}-\left(z-h-{\tfrac {f}{2}}\right)~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}~;~~\varepsilon _{xx}^{\mathrm {botface} }=-z~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}-\left(z+h+{\tfrac {f}{2}}\right)~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}}
==== Stress-displacement relations ====
The shear stress in the core is given by
{\displaystyle \sigma _{zx}^{\mathrm {core} }=C_{55}^{\mathrm {core} }~\varepsilon _{zx}^{\mathrm {core} }={\cfrac {C_{55}^{\mathrm {core} }}{2}}~\gamma _{zx}^{\mathrm {core} }={\tfrac {2h+f}{4h}}~C_{55}^{\mathrm {core} }~\gamma _{zx}^{\mathrm {beam} }}
or,
{\displaystyle \sigma _{zx}^{\mathrm {core} }={\tfrac {2h+f}{4h}}~C_{55}^{\mathrm {core} }~{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}}
The normal stresses in the facesheets are given by
{\displaystyle \sigma _{xx}^{\mathrm {face} }=C_{11}^{\mathrm {face} }~\varepsilon _{xx}^{\mathrm {face} }}
Hence,
{\displaystyle {\begin{aligned}\sigma _{xx}^{\mathrm {topface} }&=-z~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}-\left(z-h-{\tfrac {f}{2}}\right)~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}&=&-z~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}+\left({\tfrac {2h+f}{2}}\right)~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}\\\sigma _{xx}^{\mathrm {botface} }&=-z~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}-\left(z+h+{\tfrac {f}{2}}\right)~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}&=&-z~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}-\left({\tfrac {2h+f}{2}}\right)~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}\end{aligned}}}
==== Resultant forces and moments ====
The resultant normal force in a face sheet is defined as
:<math>N_{xx}^{\mathrm {face} }:=\int _{-f/2}^{f/2}\sigma _{xx}^{\mathrm {face} }~\mathrm {d} z_{f}</math>
and the resultant moments are defined as
:<math>M_{xx}^{\mathrm {face} }:=\int _{-f/2}^{f/2}z_{f}~\sigma _{xx}^{\mathrm {face} }~\mathrm {d} z_{f}</math>
where
:<math>z_{f}^{\mathrm {topface} }:=z-h-{\tfrac {f}{2}}~;~~z_{f}^{\mathrm {botface} }:=z+h+{\tfrac {f}{2}}</math>
Using the expressions for the normal stress in the two facesheets gives
:<math>{\begin{aligned}N_{xx}^{\mathrm {topface} }&=-f\left(h+{\tfrac {f}{2}}\right)~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}=-N_{xx}^{\mathrm {botface} }\\M_{xx}^{\mathrm {topface} }&=-{\cfrac {f^{3}~C_{11}^{\mathrm {face} }}{12}}\left({\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}+{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}\right)=-{\cfrac {f^{3}~C_{11}^{\mathrm {face} }}{12}}~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}=M_{xx}^{\mathrm {botface} }\end{aligned}}</math>
In the core, the resultant moment is
:<math>M_{xx}^{\mathrm {core} }:=\int _{-h}^{h}z~\sigma _{xx}^{\mathrm {core} }~\mathrm {d} z=0</math>
The total bending moment in the beam is
:<math>M=N_{xx}^{\mathrm {topface} }~(2h+f)+2~M_{xx}^{\mathrm {topface} }</math>
or,
:<math>M=-{\cfrac {f(2h+f)^{2}}{2}}~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}-{\cfrac {f^{3}}{6}}~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}</math>
The shear force <math>Q_{x}</math> in the core is defined as
:<math>Q_{x}^{\mathrm {core} }=\kappa \int _{-h}^{h}\sigma _{xz}~dz={\tfrac {\kappa (2h+f)}{2}}~C_{55}^{\mathrm {core} }~{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}</math>
where <math>\kappa </math> is a shear correction coefficient. The shear force in the facesheets can be computed from the bending moments using the relation
:<math>Q_{x}^{\mathrm {face} }={\cfrac {\mathrm {d} M_{xx}^{\mathrm {face} }}{\mathrm {d} x}}</math>
or,
:<math>Q_{x}^{\mathrm {face} }=-{\cfrac {f^{3}~C_{11}^{\mathrm {face} }}{12}}~{\cfrac {\mathrm {d} ^{3}w}{\mathrm {d} x^{3}}}</math>
For thin facesheets, the shear force in the facesheets is usually ignored.
==== Bending and shear stiffness ====
The bending stiffness of the sandwich beam is given by
:<math>D^{\mathrm {beam} }=-M/{\tfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}</math>
From the expression for the total bending moment in the beam, we have
:<math>M=-{\cfrac {f(2h+f)^{2}}{2}}~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}-{\cfrac {f^{3}}{6}}~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}</math>
For small shear deformations, the above expression can be written as
:<math>M\approx -{\cfrac {f[3(2h+f)^{2}+f^{2}]}{6}}~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}</math>
Therefore, the bending stiffness of the sandwich beam (with <math>f\ll 2h</math>) is given by
:<math>D^{\mathrm {beam} }\approx {\cfrac {f[3(2h+f)^{2}+f^{2}]}{6}}~C_{11}^{\mathrm {face} }\approx {\cfrac {f(2h+f)^{2}}{2}}~C_{11}^{\mathrm {face} }</math>
and that of the facesheets is
:<math>D^{\mathrm {face} }={\cfrac {f^{3}}{12}}~C_{11}^{\mathrm {face} }</math>
The shear stiffness of the beam is given by
:<math>S^{\mathrm {beam} }=Q_{x}/{\tfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}</math>
Therefore, the shear stiffness of the beam, which is equal to the shear stiffness of the core, is
:<math>S^{\mathrm {beam} }=S^{\mathrm {core} }={\cfrac {\kappa (2h+f)}{2}}~C_{55}^{\mathrm {core} }</math>
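The stiffness expressions above can be evaluated directly. The following sketch uses illustrative values (an aluminium-like facesheet and an assumed polymer-foam core; none of the numbers are from the text) to compute the approximate bending stiffness of the sandwich, the facesheet bending stiffness, and the shear stiffness, all per unit width:

```python
# Sandwich-beam stiffnesses per unit width (illustrative values, not from the text).
# f: facesheet thickness, 2h: core thickness,
# C11_face: axial modulus of the facesheets, C55_core: shear modulus of the core.
f = 0.001        # m
h = 0.025        # m (half the core thickness)
C11_face = 70e9  # Pa (aluminium-like facesheet, assumed)
C55_core = 50e6  # Pa (foam core, assumed)
kappa = 1.0      # shear correction coefficient (assumed)

# Bending stiffness of the sandwich in the f << 2h approximation,
# and bending stiffness of a single facesheet about its own centroid
D_beam = f * (2 * h + f) ** 2 / 2 * C11_face
D_face = f ** 3 / 12 * C11_face

# Shear stiffness of the beam, equal to that of the core
S_beam = kappa * (2 * h + f) / 2 * C55_core

print(D_beam, D_face, S_beam)
```

Note that for these proportions the facesheet term is several orders of magnitude below the sandwich term, which is why the facesheet shear force is usually ignored for thin facesheets.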
==== Relation between bending and shear deflections ====
A relation can be obtained between the bending and shear deflections by using the continuity of tractions between the core and the facesheets. If we equate the tractions directly we get
:<math>n_{x}~\sigma _{xx}^{\mathrm {face} }=n_{z}~\sigma _{zx}^{\mathrm {core} }</math>
At both the facesheet-core interfaces <math>n_{x}=1</math>, but at the top of the core <math>n_{z}=1</math> and at the bottom of the core <math>n_{z}=-1</math>. Therefore, traction continuity at <math>z=\pm h</math> leads to
:<math>2fh~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}-(2h+f)~C_{55}^{\mathrm {core} }~{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}=4h^{2}~C_{11}^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{2}w_{b}}{\mathrm {d} x^{2}}}</math>
The above relation is rarely used because of the presence of second derivatives of the shear deflection. Instead it is assumed that
:<math>n_{z}~\sigma _{zx}^{\mathrm {core} }={\cfrac {\mathrm {d} N_{xx}^{\mathrm {face} }}{\mathrm {d} x}}</math>
which implies that
:<math>{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}=-2fh~\left({\cfrac {C_{11}^{\mathrm {face} }}{C_{55}^{\mathrm {core} }}}\right)~{\cfrac {\mathrm {d} ^{3}w_{b}}{\mathrm {d} x^{3}}}</math>
==== Governing equations ====
Using the above definitions, the governing balance equations for the bending moment and shear force are
:<math>{\begin{aligned}M&=D^{\mathrm {beam} }~{\cfrac {\mathrm {d} ^{2}w_{s}}{\mathrm {d} x^{2}}}-\left(D^{\mathrm {beam} }+2D^{\mathrm {face} }\right)~{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}\\Q&=S^{\mathrm {core} }~{\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}-2D^{\mathrm {face} }~{\cfrac {\mathrm {d} ^{3}w}{\mathrm {d} x^{3}}}\end{aligned}}</math>
We can alternatively express the above as two equations that can be solved for <math>w</math> and <math>w_{s}</math> as
:<math>{\begin{aligned}&\left({\frac {2D^{\mathrm {face} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}-\left(1+{\frac {2D^{\mathrm {face} }}{D^{\mathrm {beam} }}}\right){\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}+\left({\cfrac {1}{S^{\mathrm {core} }}}\right)~{\cfrac {\mathrm {d} Q}{\mathrm {d} x}}={\frac {M}{D^{\mathrm {beam} }}}\\&\left({\frac {D^{\mathrm {beam} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{3}w_{s}}{\mathrm {d} x^{3}}}-\left(1+{\frac {D^{\mathrm {beam} }}{2D^{\mathrm {face} }}}\right){\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}-{\cfrac {1}{S^{\mathrm {core} }}}~{\cfrac {\mathrm {d} M}{\mathrm {d} x}}=-\left(1+{\cfrac {D^{\mathrm {beam} }}{2D^{\mathrm {face} }}}\right){\frac {Q}{S^{\mathrm {core} }}}\end{aligned}}</math>
Using the approximations
:<math>Q\approx {\cfrac {\mathrm {d} M}{\mathrm {d} x}}~;~~q\approx {\cfrac {\mathrm {d} Q}{\mathrm {d} x}}</math>
where <math>q</math> is the intensity of the applied load on the beam, we have
:<math>{\begin{aligned}&\left({\frac {2D^{\mathrm {face} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}-\left(1+{\frac {2D^{\mathrm {face} }}{D^{\mathrm {beam} }}}\right){\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}={\frac {M}{D^{\mathrm {beam} }}}-{\cfrac {q}{S^{\mathrm {core} }}}\\&\left({\frac {D^{\mathrm {beam} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{3}w_{s}}{\mathrm {d} x^{3}}}-\left(1+{\frac {D^{\mathrm {beam} }}{2D^{\mathrm {face} }}}\right){\cfrac {\mathrm {d} w_{s}}{\mathrm {d} x}}=-\left({\cfrac {D^{\mathrm {beam} }}{2D^{\mathrm {face} }}}\right){\frac {Q}{S^{\mathrm {core} }}}\end{aligned}}</math>
Several techniques may be used to solve this system of two coupled ordinary differential equations given the applied load and the applied bending moment and displacement boundary conditions.
==== Temperature dependent alternative form of governing equations ====
Assuming that each partial cross section fulfills Bernoulli's hypothesis, the balance of forces and moments on the deformed sandwich beam element can be used to deduce the bending equation for the sandwich beam.
The stress resultants and the corresponding deformations of the beam and of the cross section can be seen in Figure 1. The following relationships can be derived using the theory of linear elasticity:
:<math>{\begin{aligned}M^{\mathrm {core} }&=D^{\mathrm {beam} }\left({\cfrac {\mathrm {d} \gamma _{2}}{\mathrm {d} x}}+\vartheta \right)=D^{\mathrm {beam} }\left({\cfrac {\mathrm {d} \gamma }{\mathrm {d} x}}-{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}+\vartheta \right)\\M^{\mathrm {face} }&=-D^{\mathrm {face} }{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}\\Q^{\mathrm {core} }&=S^{\mathrm {core} }\gamma \\Q^{\mathrm {face} }&=-D^{\mathrm {face} }{\cfrac {\mathrm {d} ^{3}w}{\mathrm {d} x^{3}}}\end{aligned}}</math>
Superposition of the equations for the facesheets and the core leads to the following equations for the total shear force <math>Q</math> and the total bending moment <math>M</math>:
:<math>{\begin{alignedat}{3}&S^{\mathrm {core} }\gamma -D^{\mathrm {face} }{\cfrac {\mathrm {d} ^{3}w}{\mathrm {d} x^{3}}}=Q&\quad \quad &(1)\\&D^{\mathrm {beam} }\left({\cfrac {\mathrm {d} \gamma }{\mathrm {d} x}}+\vartheta \right)-\left(D^{\mathrm {beam} }+D^{\mathrm {face} }\right){\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}=M&\quad \quad &(2)\end{alignedat}}</math>
We can alternatively express the above as two equations that can be solved for <math>w</math> and <math>\gamma </math>, i.e.,
:<math>{\begin{aligned}&\left({\frac {D^{\mathrm {face} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}-\left(1+{\frac {D^{\mathrm {face} }}{D^{\mathrm {beam} }}}\right){\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}={\frac {M}{D^{\mathrm {beam} }}}-{\cfrac {q}{S^{\mathrm {core} }}}-\vartheta \\&\left({\frac {D^{\mathrm {beam} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{2}\gamma }{\mathrm {d} x^{2}}}-\left(1+{\frac {D^{\mathrm {beam} }}{D^{\mathrm {face} }}}\right)\gamma =-\left({\cfrac {D^{\mathrm {beam} }}{D^{\mathrm {face} }}}\right){\frac {Q}{S^{\mathrm {core} }}}\end{aligned}}</math>
== Solution approaches ==
The bending behavior and stresses in a continuous sandwich beam can be computed by solving the two governing differential equations.
=== Analytical approach ===
For simple geometries such as double span beams under uniformly distributed loads, the governing equations can be solved by using appropriate boundary conditions and using the superposition principle. Such results are listed in the standard DIN EN 14509:2006 (Table E10.1). Energy methods may also be used to compute solutions directly.
=== Numerical approach ===
The differential equations of sandwich continuous beams can be solved by numerical methods such as finite differences and finite elements. For finite differences, Berner recommends a two-stage approach: after solving the differential equation for the normal forces in the cover sheets for a single-span beam under a given load, the energy method is used to extend the approach to the calculation of multi-span beams. Sandwich continuous beams with flexible cover sheets can also be stacked on top of each other when using this technique. However, the cross-section of the beam has to be constant across the spans.
A more specialized approach recommended by Schwarze involves solving for the homogeneous part of the governing equation exactly and for the particular part approximately. Recall that the governing equation for a sandwich beam is
:<math>\left({\frac {2D^{\mathrm {face} }}{S^{\mathrm {core} }}}\right){\cfrac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}-\left(1+{\frac {2D^{\mathrm {face} }}{D^{\mathrm {beam} }}}\right){\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}={\frac {M}{D^{\mathrm {beam} }}}-{\cfrac {q}{S^{\mathrm {core} }}}</math>
If we define
:<math>\alpha :={\cfrac {2D^{\mathrm {face} }}{D^{\mathrm {beam} }}}~;~~\beta :={\cfrac {2D^{\mathrm {face} }}{S^{\mathrm {core} }}}~;~~W(x):={\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}</math>
we get
:<math>{\cfrac {\mathrm {d} ^{2}W}{\mathrm {d} x^{2}}}-\left({\cfrac {1+\alpha }{\beta }}\right)~W={\frac {M}{\beta D^{\mathrm {beam} }}}-{\cfrac {q}{D^{\mathrm {face} }}}</math>
Schwarze uses the general solution for the homogeneous part of the above equation and a polynomial approximation for the particular solution for sections of a sandwich beam. Interfaces between sections are tied together by matching boundary conditions. This approach has been used in the open source code swe2.
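On a section where the right-hand side is constant, the split between an exact homogeneous solution and an approximate particular solution can be sketched numerically. The sketch below (with illustrative values of α, β, the constants c1, c2, and the right-hand side; none are from the standard or from swe2) checks that exponentials exp(±λx) with λ = √((1+α)/β) plus a constant particular term satisfy the reduced equation:

```python
import math

# W'' - ((1+alpha)/beta) W = RHS with constant RHS (illustrative values).
# Homogeneous solutions: exp(+lam*x), exp(-lam*x); particular: constant W_p.
alpha, beta = 0.05, 2.0e-4
RHS = 3.0

lam = math.sqrt((1 + alpha) / beta)
W_p = -RHS * beta / (1 + alpha)

def W(x, c1=1.0e-3, c2=-2.0e-3):
    # c1, c2 would be fixed by matching boundary conditions between sections
    return c1 * math.exp(lam * x) + c2 * math.exp(-lam * x) + W_p

# Check the ODE residual with a central finite difference
x, dx = 0.01, 1.0e-5
W2 = (W(x + dx) - 2 * W(x) + W(x - dx)) / dx ** 2
residual = W2 - (1 + alpha) / beta * W(x) - RHS
print(residual)
```

In Schwarze's method the particular part is instead approximated by a polynomial when the right-hand side varies along the section, and the sections are tied together by matching boundary conditions.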
== Practical importance ==
Results predicted by linear sandwich theory correlate well with experimentally determined results. The theory is used as a basis for the structural reports needed for the construction of large industrial and commercial buildings that are clad with sandwich panels. Its use is explicitly demanded for approvals and in the relevant engineering standards.
Mohammed Rahif Hakmi and others conducted research into the numerical and experimental behavior of materials and the fire and blast behavior of composite materials. He published multiple research articles:
Local buckling of sandwich panels.
Face buckling stress in sandwich panels.
Post-buckling behaviour of foam-filled thin-walled steel beams.
Fire resistance of composite floor slabs using a model fire test facility.
Fire-resistant sandwich panels for offshore structures.
Numerical Temperature Analysis of Hygroscopic Panels Exposed to Fire.
Cost Effective Use of Fibre Reinforced Composites Offshore.
Hakmi developed a design method, which had been recommended by the CIB Working Commission W056 Sandwich Panels, ECCS/CIB Joint Committee and has been used in the European recommendations for the design of sandwich panels (CIB, 2000).
== See also ==
Bending
Beam theory
Composite material
Hill yield criterion
Sandwich structured composite
Sandwich plate system
Composite honeycomb
Timoshenko beam theory
Plate theory
Sandwich panel
== References ==
== Bibliography ==
Mohammed Rahif Hakmi
Klaus Berner, Oliver Raabe: Bemessung von Sandwichbauteilen. IFBS-Schrift 5.08, IFBS e.V., Düsseldorf 2006.
Ralf Möller, Hans Pöter, Knut Schwarze: Planen und Bauen mit Trapezprofilen und Sandwichelementen. Band 1, Ernst & Sohn, Berlin 2004, ISBN 3-433-01595-3.
== External links ==
Mohammed Rahif Hakmi Research for Sandwich Panels
https://web.archive.org/web/20081120190919/http://www.diabgroup.com/europe/literature/e_pdf_files/man_pdf/sandwich_hb.pdf DIAB Sandwich Handbook
http://www.swe1.com Programm zur Ermittlung der Schnittgrössen und Spannungen von Sandwich-Wandplatten mit biegeweichen Deckschichten (open source)
http://www.swe2.com Computation of sandwich beams with corrugated faces (open source) | Wikipedia/Sandwich_theory |
In fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section.
It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Hagen in 1839 and then by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845.
The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation.
Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid. For example, the pressure needed to drive a viscous fluid up against gravity would contain both that needed in Poiseuille's law plus that needed in Bernoulli's equation, such that any point in the flow would have a pressure greater than zero (otherwise no flow would happen).
Another example is when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop.
== Equation ==
In standard fluid-kinetics notation:
:<math>\Delta p={\frac {8\mu LQ}{\pi R^{4}}}={\frac {8\pi \mu LQ}{A^{2}}},</math>
where
Δp is the pressure difference between the two ends,
L is the length of pipe,
μ is the dynamic viscosity,
Q is the volumetric flow rate,
R is the pipe radius,
A is the cross-sectional area of pipe.
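As a numerical sketch (the viscosity, length, radius, and flow rate below are illustrative values, not from the text), the pressure drop can be computed from either form of the equation, and the two must agree since A = πR²:

```python
import math

def hagen_poiseuille_dp(mu, L, Q, R):
    """Laminar pressure drop in a circular pipe: dp = 8*mu*L*Q / (pi*R**4)."""
    return 8 * mu * L * Q / (math.pi * R ** 4)

# Illustrative values: a water-like fluid through a narrow tube.
mu = 1.0e-3   # Pa*s
L = 0.1       # m
R = 0.5e-3    # m
Q = 1.0e-8    # m^3/s

dp = hagen_poiseuille_dp(mu, L, Q, R)

# The equivalent form 8*pi*mu*L*Q / A**2 must agree.
A = math.pi * R ** 2
dp_area = 8 * math.pi * mu * L * Q / A ** 2
print(dp, dp_area)
```

The strong R⁻⁴ dependence is the practically important feature: halving the radius multiplies the pressure drop by sixteen at fixed flow rate.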
The equation does not hold close to the pipe entrance.
The equation fails in the limit of low viscosity, or a wide and/or short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than 1/48 of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is instead bounded by Bernoulli's principle, which holds under less restrictive conditions:
:<math>{\begin{aligned}\Delta p={\frac {1}{2}}\rho {\overline {v}}_{\text{max}}^{2}&={\frac {1}{2}}\rho \left({\frac {Q_{\text{max}}}{\pi R^{2}}}\right)^{2}\\\Rightarrow \quad Q_{\max }{}&=\pi R^{2}{\sqrt {\frac {2\Delta p}{\rho }}},\end{aligned}}</math>
because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow.
== Relation to the Darcy–Weisbach equation ==
Normally, Hagen–Poiseuille flow implies not just the relation for the pressure drop, above, but also the full solution for the laminar flow profile, which is parabolic. However, the result for the pressure drop can be extended to turbulent flow by inferring an effective turbulent viscosity in the case of turbulent flow, even though the flow profile in turbulent flow is strictly speaking not actually parabolic. In both cases, laminar or turbulent, the pressure drop is related to the stress at the wall, which determines the so-called friction factor. The wall stress can be determined phenomenologically by the Darcy–Weisbach equation in the field of hydraulics, given a relationship for the friction factor in terms of the Reynolds number. In the case of laminar flow, for a circular cross section:
:<math>\Lambda ={\frac {64}{\mathrm {Re} }},\quad \mathrm {Re} ={\frac {\rho vd}{\mu }},</math>
where Re is the Reynolds number, ρ is the fluid density, and v is the mean flow velocity, which is half the maximal flow velocity in the case of laminar flow. It proves more useful to define the Reynolds number in terms of the mean flow velocity because this quantity remains well defined even in the case of turbulent flow, whereas the maximal flow velocity may not be, or in any case, it may be difficult to infer. In this form the law approximates the Darcy friction factor, the energy (head) loss factor, friction loss factor or Darcy (friction) factor Λ in the laminar flow at very low velocities in cylindrical tube. The theoretical derivation of a slightly different form of the law was made independently by Wiedman in 1856 and Neumann and E. Hagenbach in 1858 (1859, 1860). Hagenbach was the first who called this law Poiseuille's law.
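The consistency of the two formulations can be checked numerically: inserting Λ = 64/Re into the Darcy–Weisbach relation Δp = Λ (L/d) (ρv²/2) must reproduce the Hagen–Poiseuille pressure drop. The values below are illustrative (a water-like fluid in the laminar regime):

```python
import math

# Check that the laminar friction factor 64/Re reproduces Hagen-Poiseuille.
rho, mu = 1000.0, 1.0e-3   # water-like density (kg/m^3) and viscosity (Pa*s)
d, L, v = 0.01, 2.0, 0.05  # pipe diameter, length, mean velocity (illustrative)

Re = rho * v * d / mu      # should be well below the turbulent transition
Lam = 64.0 / Re

# Darcy-Weisbach: dp = Lam * (L/d) * (rho*v**2 / 2)
dp_darcy = Lam * (L / d) * rho * v ** 2 / 2

# Hagen-Poiseuille with Q = pi*R**2*v
R = d / 2
Q = math.pi * R ** 2 * v
dp_hp = 8 * mu * L * Q / (math.pi * R ** 4)

print(Re, dp_darcy, dp_hp)
```

Both routes collapse to Δp = 32μLv/d², confirming that the laminar Darcy factor is just a repackaging of Poiseuille's law.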
The law is also very important in hemorheology and hemodynamics, both fields of physiology.
Poiseuille's law was later extended to turbulent flow by L. R. Wilberforce in 1891, based on Hagenbach's work.
== Derivation ==
The Hagen–Poiseuille equation can be derived from the Navier–Stokes equations. The laminar flow through a pipe of uniform (circular) cross-section is known as Hagen–Poiseuille flow. The equations governing the Hagen–Poiseuille flow can be derived directly from the Navier–Stokes momentum equations in 3D cylindrical coordinates (r,θ,x) by making the following set of assumptions:
The flow is steady ( ∂.../∂t = 0 ).
The radial and azimuthal components of the fluid velocity are zero ( ur = uθ = 0 ).
The flow is axisymmetric ( ∂.../∂θ = 0 ).
The flow is fully developed ( ∂ux/∂x = 0 ). Here, however, this can be proved via mass conservation and the above assumptions.
Then the angular equation in the momentum equations and the continuity equation are identically satisfied. The radial momentum equation reduces to ∂p/∂r = 0, i.e., the pressure p is a function of the axial coordinate x only. For brevity, use u instead of <math>u_{x}</math>. The axial momentum equation reduces to
:<math>{\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial u}{\partial r}}\right)={\frac {1}{\mu }}{\frac {\mathrm {d} p}{\mathrm {d} x}}</math>
where μ is the dynamic viscosity of the fluid. In the above equation, the left-hand side is only a function of r and the right-hand side term is only a function of x, implying that both terms must be the same constant. Evaluating this constant is straightforward. If we take the length of the pipe to be L and denote the pressure difference between the two ends of the pipe by Δp (high pressure minus low pressure), then the constant is simply
:<math>-{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {\Delta p}{L}}=G</math>
defined such that G is positive. The solution is
:<math>u=-{\frac {Gr^{2}}{4\mu }}+c_{1}\ln r+c_{2}</math>
Since u needs to be finite at r = 0, c1 = 0. The no slip boundary condition at the pipe wall requires that u = 0 at r = R (radius of the pipe), which yields c2 = GR2/4μ. Thus we have finally the following parabolic velocity profile:
:<math>u={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right).</math>
The maximum velocity occurs at the pipe centerline (r = 0), umax = GR2/4μ. The average velocity can be obtained by integrating over the pipe cross section,
:<math>{u}_{\mathrm {avg} }={\frac {1}{\pi R^{2}}}\int _{0}^{R}2\pi ru\mathrm {d} r={\tfrac {1}{2}}{u}_{\mathrm {max} }.</math>
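This factor of one half can be verified numerically by averaging the parabolic profile over the cross-section. The pressure gradient, viscosity, and radius below are illustrative values:

```python
import math

# Verify u_avg = u_max / 2 for the parabolic profile u(r) = (G/4mu)(R^2 - r^2).
G, mu, R = 100.0, 1.0e-3, 0.01  # illustrative gradient (Pa/m), viscosity, radius

def u(r):
    return G / (4 * mu) * (R ** 2 - r ** 2)

# u_avg = (1/(pi*R^2)) * integral of 2*pi*r*u(r) dr from 0 to R (midpoint rule)
n = 10000
dr = R / n
integral = sum(2 * math.pi * ((i + 0.5) * dr) * u((i + 0.5) * dr) * dr
               for i in range(n))
u_avg = integral / (math.pi * R ** 2)
u_max = u(0.0)
print(u_avg / u_max)
```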
The easily measurable quantity in experiments is the volumetric flow rate Q = πR2 uavg. Rearrangement of this gives the Hagen–Poiseuille equation
:<math>\Delta p={\frac {8\mu QL}{\pi R^{4}}}.</math>
=== Startup of Poiseuille flow in a pipe ===
When a constant pressure gradient G = −dp/dx is applied between two ends of a long pipe, the flow will not immediately obtain Poiseuille profile, rather it develops through time and reaches the Poiseuille profile at steady state. The Navier–Stokes equations reduce to
:<math>{\frac {\partial u}{\partial t}}={\frac {G}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial u}{\partial r}}\right)</math>
with initial and boundary conditions,
:<math>u(r,0)=0,\quad u(R,t)=0.</math>
The velocity distribution is given by
:<math>u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)-{\frac {2GR^{2}}{\mu }}\sum _{n=1}^{\infty }{\frac {1}{\lambda _{n}^{3}}}{\frac {J_{0}(\lambda _{n}r/R)}{J_{1}(\lambda _{n})}}e^{-\lambda _{n}^{2}\nu t/R^{2}},\quad J_{0}\left(\lambda _{n}\right)=0</math>
where J0(λnr/R) is the Bessel function of the first kind of order zero, λn are the positive roots of this function, and J1(λn) is the Bessel function of the first kind of order one. As t → ∞, the Poiseuille solution is recovered.
== Poiseuille flow in an annular section ==
If R1 is the inner cylinder radius and R2 is the outer cylinder radius, with a constant applied pressure gradient between the two ends G = −dp/dx, the velocity distribution and the volume flux through the annular pipe are
:<math>{\begin{aligned}u(r)&={\frac {G}{4\mu }}\left(R_{1}^{2}-r^{2}\right)+{\frac {G}{4\mu }}\left(R_{2}^{2}-R_{1}^{2}\right){\frac {\ln r/R_{1}}{\ln R_{2}/R_{1}}},\\[6pt]Q&={\frac {G\pi }{8\mu }}\left[R_{2}^{4}-R_{1}^{4}-{\frac {\left(R_{2}^{2}-R_{1}^{2}\right)^{2}}{\ln R_{2}/R_{1}}}\right].\end{aligned}}</math>
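The closed-form flux can be checked against a direct numerical integration of the velocity profile over the annulus. The pressure gradient, viscosity, and radii below are illustrative values:

```python
import math

# Verify the annular volume flux by integrating Q = int 2*pi*r*u(r) dr, R1..R2.
G, mu = 50.0, 1.0e-3     # illustrative pressure gradient and viscosity
R1, R2 = 0.005, 0.01     # inner and outer radii (illustrative)

def u(r):
    return (G / (4 * mu)) * (R1 ** 2 - r ** 2) + \
           (G / (4 * mu)) * (R2 ** 2 - R1 ** 2) * \
           math.log(r / R1) / math.log(R2 / R1)

# Closed-form flux from the formula above
Q_formula = (G * math.pi / (8 * mu)) * (
    R2 ** 4 - R1 ** 4 - (R2 ** 2 - R1 ** 2) ** 2 / math.log(R2 / R1))

# Midpoint-rule integration over the annulus
n = 20000
dr = (R2 - R1) / n
Q_numeric = sum(2 * math.pi * r * u(r) * dr
                for r in (R1 + (i + 0.5) * dr for i in range(n)))
print(Q_formula, Q_numeric)
```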
In the limit R2 = R, R1 → 0, the original circular-pipe problem is recovered.
== Poiseuille flow in a pipe with an oscillating pressure gradient ==
Flow through pipes with an oscillating pressure gradient finds applications in blood flow through large arteries. The imposed pressure gradient is given by
:<math>{\frac {\partial p}{\partial x}}=-G-\alpha \cos \omega t-\beta \sin \omega t</math>
where G, α and β are constants and ω is the frequency. The velocity field is given by
:<math>u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)+[\alpha F_{2}+\beta (F_{1}-1)]{\frac {\cos \omega t}{\rho \omega }}+[\beta F_{2}-\alpha (F_{1}-1)]{\frac {\sin \omega t}{\rho \omega }}</math>
where
:<math>{\begin{aligned}F_{1}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {ber} (kR)+\mathrm {bei} (kr)\mathrm {bei} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\\[6pt]F_{2}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {bei} (kR)-\mathrm {bei} (kr)\mathrm {ber} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\end{aligned}}</math>
where ber and bei are the Kelvin functions and k2 = ρω/μ.
== Plane Poiseuille flow ==
Plane Poiseuille flow is flow created between two infinitely long parallel plates, separated by a distance h with a constant pressure gradient G = −dp/dx is applied in the direction of flow. The flow is essentially unidirectional because of infinite length. The Navier–Stokes equations reduce to
:<math>{\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-{\frac {G}{\mu }}</math>
with no-slip condition on both walls
:<math>u(0)=0,\quad u(h)=0</math>
Therefore, the velocity distribution and the volume flow rate per unit length are
:<math>u(y)={\frac {G}{2\mu }}y(h-y),\quad Q={\frac {Gh^{3}}{12\mu }}.</math>
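The cubic dependence of the flux on the gap height can be verified by integrating the parabolic profile across the gap. The pressure gradient, viscosity, and plate separation below are illustrative values:

```python
# Verify the plane Poiseuille flux per unit width, Q = G*h**3 / (12*mu),
# by integrating u(y) = (G/2mu) * y * (h - y) from 0 to h (midpoint rule).
G, mu, h = 10.0, 1.0e-3, 0.002  # illustrative gradient, viscosity, gap

def u(y):
    return G / (2 * mu) * y * (h - y)

n = 10000
dy = h / n
Q_numeric = sum(u((i + 0.5) * dy) * dy for i in range(n))
Q_formula = G * h ** 3 / (12 * mu)
print(Q_formula, Q_numeric)
```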
== Poiseuille flow through some non-circular cross-sections ==
Joseph Boussinesq derived the velocity profile and volume flow rate in 1868 for rectangular channels, for tubes of equilateral triangular cross-section, and for elliptical cross-sections. Joseph Proudman derived the same for isosceles triangles in 1914. Let G = −dp/dx be the constant pressure gradient acting in the direction parallel to the motion.
The velocity and the volume flow rate in a rectangular channel of height 0 ≤ y ≤ h and width 0 ≤ z ≤ l are
:<math>{\begin{aligned}u(y,z)&={\frac {G}{2\mu }}y(h-y)-{\frac {4Gh^{2}}{\mu \pi ^{3}}}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{3}}}{\frac {\sinh(\beta _{n}z)+\sinh[\beta _{n}(l-z)]}{\sinh(\beta _{n}l)}}\sin(\beta _{n}y),\quad \beta _{n}={\frac {(2n-1)\pi }{h}},\\[6pt]Q&={\frac {Gh^{3}l}{12\mu }}-{\frac {16Gh^{4}}{\pi ^{5}\mu }}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{5}}}{\frac {\cosh(\beta _{n}l)-1}{\sinh(\beta _{n}l)}}.\end{aligned}}</math>
The velocity and the volume flow rate of tube with equilateral triangular cross-section of side length 2h/√3 are
:<math>{\begin{aligned}u(y,z)&=-{\frac {G}{4\mu h}}(y-h)\left(y^{2}-3z^{2}\right),\\[6pt]Q&={\frac {Gh^{4}}{60{\sqrt {3}}\mu }}.\end{aligned}}</math>
The velocity and the volume flow rate in the right-angled isosceles triangle y = π, y ± z = 0 are
:<math>{\begin{aligned}u(y,z)&={\frac {G}{2\mu }}(y+z)(\pi -y)-{\frac {G}{\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{3}\sinh(2\pi \beta _{n})}}\left\{\sinh[\beta _{n}(2\pi -y+z)]\sin[\beta _{n}(y+z)]-\sinh[\beta _{n}(y+z)]\sin[\beta _{n}(y-z)]\right\},\quad \beta _{n}=n+{\tfrac {1}{2}},\\[6pt]Q&={\frac {G\pi ^{4}}{12\mu }}-{\frac {G}{2\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{5}}}\left[\coth(2\pi \beta _{n})+\csc(2\pi \beta _{n})\right].\end{aligned}}</math>
The velocity distribution for tubes of elliptical cross-section with semiaxes a and b is
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu \left({\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}\right)}}\left(1-{\frac {y^{2}}{a^{2}}}-{\frac {z^{2}}{b^{2}}}\right),\\[6pt]Q&={\frac {\pi Ga^{3}b^{3}}{4\mu \left(a^{2}+b^{2}\right)}}.\end{aligned}}}
Here, when a = b, Poiseuille flow for circular pipe is recovered and when a → ∞, plane Poiseuille flow is recovered. More explicit solutions with cross-sections such as snail-shaped sections, sections having the shape of a notch circle following a semicircle, annular sections between homofocal ellipses, annular sections between non-concentric circles are also available, as reviewed by Ratip Berker.
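The elliptical result is easy to check numerically; the sketch below (function name is ours) encodes Q for semiaxes a and b, and the a = b case reproduces the circular-pipe scaling Q ∝ R⁴:

```python
import math

def elliptic_flow_rate(G, a, b, mu):
    # Q = pi * G * a^3 * b^3 / (4 * mu * (a^2 + b^2))
    # for fully developed flow in a tube of elliptical cross-section
    return math.pi * G * a**3 * b**3 / (4 * mu * (a**2 + b**2))
```

With a = b = R this reduces to πGR⁴/(8μ), the Hagen–Poiseuille result for a circular pipe, so doubling the radius increases the flow rate sixteen-fold.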
== Poiseuille flow through arbitrary cross-section ==
The flow through a tube of arbitrary cross-section u(y,z) satisfies the condition u = 0 on the walls. The governing equation reduces to
{\displaystyle {\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=-{\frac {G}{\mu }}.}
If we introduce a new dependent variable as
{\displaystyle U=u+{\frac {G}{4\mu }}\left(y^{2}+z^{2}\right),}
then it is easy to see that the problem reduces to that of integrating a Laplace equation
{\displaystyle {\frac {\partial ^{2}U}{\partial y^{2}}}+{\frac {\partial ^{2}U}{\partial z^{2}}}=0}
satisfying the condition
{\displaystyle U={\frac {G}{4\mu }}\left(y^{2}+z^{2}\right)}
on the wall.
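For a cross-section with no closed-form solution, the original Poisson equation can also be solved directly by finite differences. The sketch below (function name, grid size, and iteration count are ours) applies Jacobi relaxation to a square duct with the no-slip condition u = 0 on the walls:

```python
import numpy as np

def poiseuille_square(G=1.0, mu=1.0, h=1.0, n=41, iters=5000):
    """Jacobi relaxation of u_yy + u_zz = -G/mu on an n x n grid over the
    square duct [0, h] x [0, h], with u = 0 on all four walls."""
    d = h / (n - 1)             # grid spacing
    u = np.zeros((n, n))        # boundary rows/columns stay at 0 (no slip)
    src = (G / mu) * d * d      # source term scaled by d^2
    for _ in range(iters):
        # NumPy evaluates the right-hand side fully before assigning,
        # so this is a true Jacobi sweep over the interior points
        u[1:-1, 1:-1] = 0.25 * (
            u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2] + src
        )
    return u
```

The converged centreline velocity for a square duct is about 0.0737 Gh²/μ, consistent with evaluating the rectangular-channel series at y = z = h/2 with l = h.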
== Poiseuille's equation for an ideal isothermal gas ==
For a compressible fluid in a tube, the volumetric flow rate Q(x) and the axial velocity are not constant along the tube, but the mass flow rate is constant along the tube length. The volumetric flow rate is usually expressed at the outlet pressure. As fluid is compressed or expanded, work is done and the fluid is heated or cooled. This means that the flow rate depends on the heat transfer to and from the fluid. For an ideal gas in the isothermal case, where the temperature of the fluid is permitted to equilibrate with its surroundings, an approximate relation for the pressure drop can be derived. Using the ideal gas equation of state for a constant-temperature process (i.e.,
{\displaystyle p/\rho }
is constant) and the conservation of mass flow rate (i.e.,
{\displaystyle {\dot {m}}=\rho Q}
is constant), the relation Qp = Q1p1 = Q2p2 can be obtained. Over a short section of the pipe, the gas flowing through the pipe can be assumed to be incompressible, so that Poiseuille's law can be used locally,
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q}{\pi R^{4}}}={\frac {8\mu Q_{2}p_{2}}{\pi pR^{4}}}\quad \Rightarrow \quad -p{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q_{2}p_{2}}{\pi R^{4}}}.}
Here we assume that the local pressure gradient is not so large as to introduce compressibility effects. Although the effects of pressure variation due to density variation are ignored locally, they are accounted for over long distances. Since μ is independent of pressure, the above equation can be integrated over the length L to give
{\displaystyle p_{1}^{2}-p_{2}^{2}={\frac {16\mu LQ_{2}p_{2}}{\pi R^{4}}}.}
Hence the volumetric flow rate at the pipe outlet is given by
{\displaystyle Q_{2}={\frac {\pi R^{4}}{16\mu L}}\left({\frac {p_{1}^{2}-p_{2}^{2}}{p_{2}}}\right)={\frac {\pi R^{4}\left(p_{1}-p_{2}\right)}{8\mu L}}{\frac {\left(p_{1}+p_{2}\right)}{2p_{2}}}.}
This equation can be seen as Poiseuille's law with an extra correction factor (p1 + p2)/2p2 expressing the average pressure relative to the outlet pressure.
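A minimal sketch of the outlet flow-rate formula (the function name is ours); note that the dependence on p1² − p2² rather than p1 − p2 is the compressibility correction:

```python
import math

def gas_outlet_flow(p1, p2, R, mu, L):
    """Outlet volumetric flow rate for isothermal ideal-gas flow in a
    circular tube: Q2 = pi R^4 (p1^2 - p2^2) / (16 mu L p2)."""
    return math.pi * R**4 * (p1**2 - p2**2) / (16 * mu * L * p2)
```

When p1 ≈ p2 the correction factor (p1 + p2)/2p2 tends to 1 and the incompressible Poiseuille result πR⁴Δp/(8μL) is recovered.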
== Electrical circuits analogy ==
Electricity was originally understood to be a kind of fluid. This hydraulic analogy is still conceptually useful for understanding circuits. This analogy is also used to study the frequency response of fluid-mechanical networks using circuit tools, in which case the fluid network is termed a hydraulic circuit. Poiseuille's law corresponds to Ohm's law for electrical circuits, V = IR. Since the net force acting on the fluid is equal to ΔF = SΔp, where S = πr2, i.e. ΔF = πr2 ΔP, then from Poiseuille's law, it follows that
{\displaystyle \Delta F={\frac {8\mu LQ}{r^{2}}}}.
For electrical circuits, let n be the concentration of free charged particles (in m−3) and let q* be the charge of each particle (in coulombs). (For electrons, q* = e = 1.6×10−19 C.) Then nQ is the number of particles in the volume Q, and nQq* is their total charge. This is the charge that flows through the cross section per unit time, i.e. the current I. Therefore, I = nQq*. Consequently, Q = I/nq*, and
{\displaystyle \Delta F={\frac {8\mu LI}{nr^{2}q^{*}}}.}
But ΔF = Eq, where q is the total charge in the volume of the tube. The volume of the tube is equal to πr2L, so the number of charged particles in this volume is equal to nπr2L, and their total charge is q = nπr2Lq*. Since the voltage V = EL, it then follows that
{\displaystyle V={\frac {8\mu LI}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}.}
This is exactly Ohm's law, where the resistance R = V/I is described by the formula
{\displaystyle R={\frac {8\mu L}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}}.
It follows that the resistance R is proportional to the length L of the resistor, which is true. However, it also follows that the resistance R is inversely proportional to the fourth power of the radius r, i.e. the resistance R is inversely proportional to the second power of the cross section area S = πr2 of the resistor, which is different from the electrical formula. The electrical relation for the resistance is
{\displaystyle R={\frac {\rho L}{S}},}
where ρ is the resistivity; i.e. the resistance R is inversely proportional to the cross section area S of the resistor. The reason why Poiseuille's law leads to a different formula for the resistance R is the difference between the fluid flow and the electric current. Electron gas is inviscid, so its velocity does not depend on the distance to the walls of the conductor. The resistance is due to the interaction between the flowing electrons and the atoms of the conductor. Therefore, Poiseuille's law and the hydraulic analogy are useful only within certain limits when applied to electricity. Both Ohm's law and Poiseuille's law illustrate transport phenomena.
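The fluid-side analogue of Ohm's law can be written Δp = R_hyd Q, with a hydraulic resistance R_hyd = 8μL/(πr⁴). A minimal sketch (the function name is ours) showing the r⁻⁴ scaling discussed above:

```python
import math

def hydraulic_resistance(mu, L, r):
    # Delta_p = R_hyd * Q, the fluid analogue of V = R * I;
    # note the inverse fourth-power dependence on the radius
    return 8 * mu * L / (math.pi * r**4)
```

Doubling the radius cuts the hydraulic resistance by a factor of 16, whereas an electrical resistor of doubled radius only cuts resistance by a factor of 4 — the discrepancy explained in the text above.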
== Medical applications – intravenous access and fluid delivery ==
The Hagen–Poiseuille equation is useful in determining the vascular resistance and hence flow rate of intravenous (IV) fluids that may be achieved using various sizes of peripheral and central cannulas. The equation states that flow rate is proportional to the radius to the fourth power, meaning that a small increase in the internal diameter of the cannula yields a significant increase in flow rate of IV fluids. The radius of IV cannulas is typically measured in "gauge", which is inversely proportional to the radius. Peripheral IV cannulas are typically available as (from large to small) 14G, 16G, 18G, 20G, 22G, 26G. As an example, assuming cannula lengths are equal, the flow of a 14G cannula is 1.73 times that of a 16G cannula, and 4.16 times that of a 20G cannula. It also states that flow is inversely proportional to length, meaning that longer lines have lower flow rates. This is important to remember as in an emergency, many clinicians favor shorter, larger catheters compared to longer, narrower catheters. While of less clinical importance, an increased change in pressure (∆p) — such as by pressurizing the bag of fluid, squeezing the bag, or hanging the bag higher (relative to the level of the cannula) — can be used to speed up flow rate. It is also useful to understand that viscous fluids will flow slower (e.g. in blood transfusion).
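The radius-to-the-fourth-power dependence makes the relative flow of two cannulas easy to estimate. A minimal sketch (the function name is ours; it takes internal diameters directly, since gauge-to-diameter conversion tables vary by standard):

```python
def iv_flow_ratio(d_wide, d_narrow):
    """Relative flow rate of two cannulas of equal length under the same
    pressure drop, from the r^4 (equivalently d^4) dependence in the
    Hagen-Poiseuille equation."""
    return (d_wide / d_narrow) ** 4
```

For example, a cannula with twice the internal diameter of another delivers sixteen times the flow, all else being equal.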
== See also ==
Couette flow
Darcy's law
Pulse
Wave
Hydraulic circuit
== Cited references ==
== References ==
Sutera, S. P.; Skalak, R. (1993). "The history of Poiseuille's law". Annual Review of Fluid Mechanics. 25: 1–19. Bibcode:1993AnRFM..25....1S. doi:10.1146/annurev.fl.25.010193.000245.
Pfitzner, J (1976). "Poiseuille and his law". Anaesthesia. Vol. 31, no. 2 (published Mar 1976). pp. 273–5. doi:10.1111/j.1365-2044.1976.tb11804.x. PMID 779509.
Bennett, C. O.; Myers, J. E. (1962). Momentum, Heat, and Mass Transfer. McGraw-Hill.
== External links ==
Poiseuille's law for power-law non-Newtonian fluid
Poiseuille's law in a slightly tapered tube
Hagen–Poiseuille equation calculator | Wikipedia/Poiseuille_equation |
In continuum mechanics, shearing refers to the occurrence of a shear strain, which is a deformation of a material substance in which parallel internal surfaces slide past one another. It is induced by a shear stress in the material. Shear strain is distinguished from volumetric strain, the change in a material's volume in response to stress; the change of angle produced by shearing is called the angle of shear.
== Overview ==
Often, the verb shearing refers more specifically to a mechanical process that causes a plastic shear strain in a material, rather than causing a merely elastic one. A plastic shear strain is a continuous (non-fracturing) deformation that is irreversible, such that the material does not recover its original shape. It occurs when the material is yielding. The process of shearing a material may induce a volumetric strain along with the shear strain. In soil mechanics, the volumetric strain associated with shearing is known as Reynolds' dilation if it increases the volume, or compaction if it decreases the volume.
The shear center (also known as the torsional axis) is an imaginary point on a section, where a shear force can be applied without inducing any torsion. In general, the shear center is not the centroid. For cross-sectional areas having one axis of symmetry, the shear center is located on the axis of symmetry. For those having two axes of symmetry, the shear center coincides with the centroid of the cross-section.
In some materials such as metals, plastics, or granular materials like sand or soils, the shearing motion rapidly localizes into a narrow band, known as a shear band. In that case, all the sliding occurs within the band while the blocks of material on either side of the band simply slide past one another without internal deformation. A special case of shear localization occurs in brittle materials when they fracture along a narrow band. Then, all subsequent shearing occurs within the fracture. Plate tectonics, where the plates of the Earth's crust slide along fracture zones, is an example of this.
Shearing in soil mechanics is measured with a triaxial shear test or a direct shear test.
== See also ==
Shear strength
== Further reading ==
Terzaghi, K., 1943, Theoretical Soil Mechanics, John Wiley and Sons, New York 123
Popov, E., 1968, Introduction to mechanics of solids, Prentice-Hall, Inc., New Jersey | Wikipedia/Shearing_(physics) |
Elastography is any of a class of medical imaging diagnostic methods that map the elastic properties and stiffness of soft tissue. The main idea is that whether the tissue is hard or soft will give diagnostic information about the presence or status of disease. For example, cancerous tumours will often be harder than the surrounding tissue, and diseased livers are stiffer than healthy ones.
The most prominent techniques use ultrasound or magnetic resonance imaging (MRI) to make both the stiffness map and an anatomical image for comparison.
== Historical background ==
Palpation is the practice of feeling the stiffness of a person's or animal's tissues with the health practitioner's hands. Manual palpation dates back at least to 1500 BC, with the Egyptian Ebers Papyrus and Edwin Smith Papyrus both giving instructions on diagnosis with palpation. In ancient Greece, Hippocrates gave instructions on many forms of diagnosis using palpation, including palpation of the breasts, wounds, bowels, ulcers, uterus, skin, and tumours. In the modern Western world, palpation became considered a respectable method of diagnosis in the 1930s. Since then, the practice of palpation has become widespread, and it is considered an effective method of detecting tumours and other pathologies.
Manual palpation has several important limitations: it is limited to tissues accessible to the physician's hand, it is distorted by any intervening tissue, and it is qualitative but not quantitative. Elastography, the measurement of tissue stiffness, seeks to address these challenges.
== How it works ==
There are numerous elastographic techniques, in development stages from early research to extensive clinical application. Each of these techniques works in a different way. What all methods have in common is that they create a distortion in the tissue, observe and process the tissue response to infer the mechanical properties of the tissue, and then display the results to the operator, usually as an image. Each elastographic method is characterized by the way it does each of these things.
=== Inducing a distortion ===
To image the mechanical properties of tissue, we need to see how it behaves when deformed. There are three main ways of inducing a distortion to observe. These are:
Pushing/deforming or vibrating the surface of the body (skin) or organ (prostate) with a probe or a tool,
Using acoustic radiation force impulse imaging using ultrasound to remotely create a 'push' inside the tissue, and
Using distortions created by normal physiological processes, e.g. pulse or heartbeat.
=== Observing the response ===
The primary way elastographic techniques are categorized is by the imaging modality they use to observe the response. Elastographic techniques use ultrasound, magnetic resonance imaging (MRI), or, in tactile imaging (TI), pressure/stress sensors. A handful of other methods exist as well.
The observation of the tissue response can take many forms. In terms of the image obtained, it can be 1-D (i.e. a line), 2-D (a plane), 3-D (a volume), or 0-D (a single value), and it can be a video or a single image. In most cases, the result is displayed to the operator along with a conventional image of the tissue, which shows where in the tissue the different stiffness values occur.
=== Processing and presentation ===
Once the response has been observed, the stiffness can be calculated from it. Most elastography techniques find the stiffness of tissue based on one of two main principles:
For a given applied force (stress), stiffer tissue deforms (strains) less than does softer tissue.
Mechanical waves (specifically shear waves) travel faster through stiffer tissue than through softer tissue.
Some techniques will simply display the distortion and/or response, or the wave speed to the operator, while others will compute the stiffness (specifically the Young's modulus or similar shear modulus) and display that instead. Some techniques present results quantitatively, while others only present qualitative (relative) results.
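The second principle above, converting a measured shear-wave speed to a stiffness, is a one-line calculation; a minimal sketch (the function name is ours), assuming a linear elastic solid:

```python
def shear_modulus_from_wave_speed(rho, v):
    # mu = rho * v^2 for a shear wave in a linear elastic solid:
    # faster shear waves imply stiffer tissue
    return rho * v**2
```

For soft tissue with density near that of water (about 1000 kg/m³), a shear-wave speed of a few metres per second corresponds to a shear modulus of a few kilopascals.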
== Ultrasound elastography ==
There are a great many ultrasound elastographic techniques. The most prominent are highlighted below.
=== Quasistatic elastography / strain imaging ===
Quasistatic elastography (sometimes called simply 'elastography' for historical reasons) is one of the earliest elastography techniques. In this technique, an external compression is applied to tissue, and the ultrasound images before and after the compression are compared. The areas of the image that are least deformed are the ones that are the stiffest, while the most deformed areas are the least stiff. Generally, what is displayed to the operator is an image of the relative distortions (strains), which is often of clinical utility.
From the relative distortion image, however, making a quantitative stiffness map is often desired. To do this requires that assumptions be made about the nature of the soft tissue being imaged and about tissue outside of the image. Additionally, under compression, objects can move into or out of the image or around in the image, causing problems with interpretation. Another limit of this technique is that like manual palpation, it has difficulty with organs or tissues that are not close to the surface or easily compressed.
=== Acoustic radiation force impulse imaging (ARFI) ===
Acoustic radiation force impulse imaging (ARFI) uses ultrasound to create a qualitative 2-D map of tissue stiffness. It does so by creating a 'push' inside the tissue using the acoustic radiation force from a focused ultrasound beam. The amount the tissue along the axis of the beam is pushed down is reflective of tissue stiffness; softer tissue is more easily pushed than stiffer tissue. ARFI shows a qualitative stiffness value along the axis of the pushing beam. By pushing in many different places, a map of the tissue stiffness is built up. Virtual Touch imaging quantification (VTIQ) has been successfully used to identify malignant cervical lymph nodes.
=== Shear-wave elasticity imaging (SWEI) ===
In shear-wave elasticity imaging (SWEI), similar to ARFI, a 'push' is induced deep in the tissue by acoustic radiation force. The disturbance created by this push travels sideways through the tissue as a shear wave. By using an image modality like ultrasound or MRI to see how fast the wave gets to different lateral positions, the stiffness of the intervening tissue is inferred. Since the terms "elasticity imaging" and "elastography" are synonyms, the original term SWEI denoting the technology for elasticity mapping using shear waves is often replaced by SWE. The principal difference between SWEI and ARFI is that SWEI is based on the use of shear waves propagating laterally from the beam axis and creating elasticity map by measuring shear wave propagation parameters whereas ARFI gets elasticity information from the axis of the pushing beam and uses multiple pushes to create a 2-D stiffness map. No shear waves are involved in ARFI and no axial elasticity assessment is involved in SWEI. SWEI is implemented in supersonic shear imaging (SSI).
=== Supersonic shear imaging (SSI) ===
Supersonic shear imaging (SSI) gives a quantitative, real-time two-dimensional map of tissue stiffness. SSI is based on SWEI: it uses acoustic radiation force to induce a 'push' inside the tissue of interest, generating shear waves, and the tissue's stiffness is computed from how fast the resulting shear wave travels through the tissue. Local tissue velocity maps are obtained with a conventional speckle-tracking technique and provide a full movie of the shear wave propagation through the tissue. There are two principal innovations implemented in SSI. First, by using many near-simultaneous pushes, SSI creates a source of shear waves which is moved through the medium at a supersonic speed. Second, the generated shear wave is visualized using an ultrafast imaging technique. Using inversion algorithms, the shear elasticity of the medium is mapped quantitatively from the wave propagation movie. SSI is the first ultrasonic imaging technology able to reach more than 10,000 frames per second of deep-seated organs. SSI provides a set of quantitative and in vivo parameters describing the tissue mechanical properties: Young's modulus, viscosity, anisotropy.
This approach demonstrated clinical benefit in breast, thyroid, liver, prostate, and musculoskeletal imaging. SSI is used for breast examination with a number of high-resolution linear transducers. A large multi-center breast imaging study has demonstrated both reproducibility and significant improvement in the classification of breast lesions when shear wave elastography images are added to the interpretation of standard B-mode and Color mode ultrasound images.
=== Transient elastography ===
In the food industry, low-intensity ultrasonics has already been used since the 1980s to provide information about the concentration, structure, and physical state of components in foods such as vegetables, meats, and dairy products and also for quality control, for example to evaluate the rheological qualities of cheese.
Transient elastography was initially called time-resolved pulse elastography when it was introduced in the late 1990s. The technique relies on a transient mechanical vibration which is used to induce a shear wave into the tissue. The propagation of the shear wave is tracked using ultrasound in order to assess the shear wave speed from which the Young's modulus is deduced under hypothesis of homogeneity, isotropy and pure elasticity (E=3ρV²). An important advantage of transient elastography compared to harmonic elastography techniques is the separation of shear waves and compression waves. The technique can be implemented in 1D and 2D which required the development of an ultrafast ultrasound scanner.
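The relation E = 3ρV² quoted above converts the tracked shear-wave speed to a Young's modulus; a minimal sketch (the function name is ours), valid under the stated hypotheses of homogeneity, isotropy, and pure elasticity:

```python
def young_modulus_transient(rho, v):
    # E = 3 * rho * V^2, the transient-elastography estimate of Young's
    # modulus from shear-wave speed v and tissue density rho
    return 3.0 * rho * v**2
```

With ρ ≈ 1000 kg/m³, a measured shear-wave speed of 2 m/s gives E ≈ 12 kPa, in the range reported for fibrotic liver tissue.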
Transient elastography gives a quantitative one-dimensional (i.e. a line) image of "tissue" stiffness. It functions by vibrating the skin with a motor to create a passing distortion in the tissue (a shear wave), and imaging the motion of that distortion as it passes deeper into the body using a 1D ultrasound beam. It then displays a quantitative line of tissue stiffness data (the Young's modulus). This technique is used mainly by the FibroScan system, which is used for liver assessment, for example, to diagnose cirrhosis. A specific implementation of 1D transient elastography called VCTE has been developed to assess average liver stiffness which correlates to liver fibrosis assessed by liver biopsy. This technique is implemented in a device which can also assess the controlled attenuation parameter (CAP) which is good surrogate marker of liver steatosis.
== Magnetic resonance elastography (MRE) ==
Magnetic resonance elastography (MRE) was introduced in the mid-1990s, and multiple clinical applications have been investigated. In MRE, a mechanical vibrator is used on the surface of the patient's body; this creates shear waves that travel into the patient's deeper tissues. An imaging acquisition sequence that measures the velocity of the waves is used, and this is used to infer the tissue's stiffness (the shear modulus). The result of an MRE scan is a quantitative 3-D map of the tissue stiffness, as well as a conventional 3-D MRI image.
One strength of MRE is the resulting 3-D elasticity map, which can cover an entire organ. Because MRI is not limited by air or bone, it can access some tissues ultrasound cannot, notably the brain. It also has the advantage of being more uniform across operators and less dependent on operator skill than most methods of ultrasound elastography.
MR elastography has made significant advances over the past few years with acquisition times down to a minute or less and has been used in a variety of medical applications including cardiology research on living human hearts. MR elastography's short acquisition time also makes it competitive with other elastography techniques.
== Optical elastography ==
Optical elastography is an emerging technique that utilizes optical microscopy to obtain tissue images. The most common form of optical elastography, optical coherence elastography (OCE), is based on optical coherence tomography (OCT), which combines interferometry with lateral beam scanning for rapid 3D image acquisition and achieves spatial resolutions of 5-15 μm. For OCE, a mechanical load is applied to the tissue and the resultant deformation is measured using speckle tracking or phase sensitive detection. Early implementations of OCE involved applying a quasi-static compression to the tissue, though more recently dynamic loading has been achieved through the application of a sinusoidal modulation via a contact transducer or acoustic wave. Other imaging modalities with greater optical resolution have also been introduced for optical elastography to probe the microscale between cells and whole tissues. OCT relies on longer wavelengths, of 850 - 1050 nm, and therefore provides a lower optical resolution compared to common light microscopy, which uses visible wavelengths of 400-700 nm, and provides lateral spatial resolutions of <1 μm. Examples of higher resolution analysis include the use of confocal and light-sheet microscopy respectively for mechanical characterization of multicellular spheroids and for structural analysis of 3D organoids at a single-cell resolution. When using these imaging modalities, quasi-static compression may be induced in the tissue sample by a micro-indentation device, such as a microtweezer. The resultant deformation can be measured from the microscopy images using image-based nodal tracking algorithms, and mechanical properties can be discerned using finite element method (FEM) analyses.
== Applications ==
Elastography is used for the investigation of many disease conditions in many organs. It can be used for additional diagnostic information compared to a mere anatomical image, and it can be used to guide biopsies or, increasingly, replace them entirely. Biopsies are invasive and painful, presenting a risk of hemorrhage or infection, whereas elastography is completely noninvasive.
Elastography is used to investigate disease in the liver. Liver stiffness is usually indicative of fibrosis or steatosis (fatty liver disease), which are in turn indicative of numerous disease conditions, including cirrhosis and hepatitis. Elastography is particularly advantageous in this case because when fibrosis is diffuse (spread around in clumps rather than continuous scarring), a biopsy can easily miss sampling the diseased tissue, which results in a false negative misdiagnosis.
Naturally, elastography sees use for organs and diseases where manual palpation was already widespread. Elastography is used for detection and diagnosis of breast, thyroid, and prostate cancers. Certain types of elastography are also suitable for musculoskeletal imaging, and they can determine the mechanical properties and state of muscles and tendons.
Because elastography does not have the same limitations as manual palpation, it is being investigated in some areas for which there is no history of diagnosis with manual palpation. For example, magnetic resonance elastography is capable of assessing the stiffness of the brain, and there is a growing body of scientific literature on elastography in healthy and diseased brains.
In 2015, preliminary reports on elastography used on transplanted kidneys to evaluate cortical fibrosis have been published showing promising results.
In Bristol University's study Children of the 90s, 2.5% of 4,000 people born in 1991 and 1992 were found by ultrasound scanning at the age of 18 to have non-alcoholic fatty liver disease; five years later, transient elastography found that over 20% had the fatty deposits of steatosis on the liver, indicating non-alcoholic fatty liver disease; half of those were classified as severe. The scans also found that 2.4% had the liver scarring of fibrosis, which can lead to cirrhosis.
Other techniques include elastography with optical coherence tomography (i.e. light).
Tactile imaging involves translating the results of a digital "touch" into an image. Many physical principles have been explored for the realization of tactile sensors: resistive, inductive, capacitive, optoelectric, magnetic, piezoelectric, and electroacoustic principles, in a variety of configurations.
== Notes ==
†^ In the case of endogenous motion imaging, instead of inducing a disturbance, disturbances naturally created by physiological processes are observed.
== References == | Wikipedia/Elastography |
In solid mechanics, the Johnson–Holmquist damage model is used to model the mechanical behavior of damaged brittle materials, such as ceramics, rocks, and concrete, over a range of strain rates. Such materials usually have high compressive strength but low tensile strength and tend to exhibit progressive damage under load due to the growth of microfractures.
There are two variations of the Johnson-Holmquist model that are used to model the impact performance of ceramics under ballistically delivered loads. These models were developed by Gordon R. Johnson and Timothy J. Holmquist in the 1990s with the aim of facilitating predictive numerical simulations of ballistic armor penetration. The first version of the model is called the 1992 Johnson-Holmquist 1 (JH-1) model. This original version was developed to account for large deformations but did not take into consideration progressive damage with increasing deformation; though the multi-segment stress-strain curves in the model can be interpreted as incorporating damage implicitly. The second version, developed in 1994, incorporated a damage evolution rule and is called the Johnson-Holmquist 2 (JH-2) model or, more accurately, the Johnson-Holmquist damage material model.
== Johnson-Holmquist 2 (JH-2) material model ==
The Johnson-Holmquist material model (JH-2), with damage, is useful when modeling brittle materials, such as ceramics, subjected to large pressures, shear strain and high strain rates. The model attempts to include the phenomena encountered when brittle materials are subjected to load and damage, and is one of the most widely used models when dealing with ballistic impact on ceramics. The model simulates the increase in strength shown by ceramics subjected to hydrostatic pressure as well as the reduction in strength shown by damaged ceramics. This is done by basing the model on two sets of curves that plot the yield stress against the pressure. The first set of curves accounts for the intact material, while the second one accounts for the failed material. Each curve set depends on the plastic strain and plastic strain rate. A damage variable D accounts for the level of fracture.
=== Intact elastic behavior ===
The JH-2 material assumes that the material is initially elastic and isotropic and can be described by a relation of the form (summation is implied over repeated indices)
{\displaystyle \sigma _{ij}=-p(\epsilon _{kk})~\delta _{ij}+2~\mu ~\epsilon _{ij}}
where {\displaystyle \sigma _{ij}} is a stress measure, {\displaystyle p(\epsilon _{kk})} is an equation of state for the pressure, {\displaystyle \delta _{ij}} is the Kronecker delta, {\displaystyle \epsilon _{ij}} is a strain measure that is energy conjugate to {\displaystyle \sigma _{ij}}, and {\displaystyle \mu } is a shear modulus. The quantity {\displaystyle \epsilon _{kk}} is frequently replaced by the hydrostatic compression {\displaystyle \xi } so that the equation of state is expressed as
{\displaystyle p(\xi )=p(\xi (\epsilon _{kk}))=p\left({\cfrac {\rho }{\rho _{0}}}-1\right)~;~~\xi :={\cfrac {\rho }{\rho _{0}}}-1}
where {\displaystyle \rho } is the current mass density and {\displaystyle \rho _{0}} is the initial mass density.
The stress at the Hugoniot elastic limit is assumed to be given by a relation of the form
{\displaystyle \sigma _{h}={\mathcal {H}}(\rho ,\mu )=p_{\rm {HEL}}(\rho )+{\cfrac {2}{3}}~\sigma _{\rm {HEL}}(\rho ,\mu )}
where {\displaystyle p_{\rm {HEL}}} is the pressure at the Hugoniot elastic limit and {\displaystyle \sigma _{\rm {HEL}}} is the stress at the Hugoniot elastic limit.
=== Intact material strength ===
The uniaxial failure strength of the intact material is assumed to be given by an equation of the form
{\displaystyle \sigma _{\rm {intact}}^{*}=A~(p^{*}+T^{*})^{n}~\left[1+C~\ln \left({\cfrac {d\epsilon _{p}}{dt}}\right)\right]}
where
A
,
C
,
n
{\displaystyle A,C,n}
are material constants,
t
{\displaystyle t}
is the time,
ϵ
p
{\displaystyle \epsilon _{p}}
is the inelastic strain. The inelastic strain rate is usually normalized by a reference strain rate to remove the time dependence. The reference strain rate is generally 1/s.
The quantities
σ
∗
{\displaystyle \sigma ^{*}}
and
p
∗
{\displaystyle p^{*}}
are normalized stresses and
T
∗
{\displaystyle T^{*}}
is a normalized tensile hydrostatic pressure, defined as
σ
∗
=
σ
σ
H
E
L
;
p
∗
=
p
p
H
E
L
;
T
∗
=
T
p
H
E
L
{\displaystyle \sigma ^{*}={\cfrac {\sigma }{\sigma _{\rm {HEL}}}}~;~p^{*}={\cfrac {p}{p_{\rm {HEL}}}}~;~~T^{*}={\cfrac {T}{p_{\rm {HEL}}}}}
=== Stress at complete fracture ===
The uniaxial stress at complete fracture is assumed to be given by
σ*_fracture = B (p*)^m [1 + C ln(dε_p/dt)]

where B, C, and m are material constants.
=== Current material strength ===
The uniaxial strength of the material at a given state of damage is then computed by linear interpolation between the initial strength and the stress at complete failure:
σ* = σ*_initial − D (σ*_initial − σ*_fracture)

The quantity D is a scalar variable that indicates damage accumulation.
=== Damage evolution rule ===
The evolution of the damage variable D is given by

dD/dt = (1/ε_f) dε_p/dt

where the strain to failure ε_f is assumed to be

ε_f = D₁ (p* + T*)^{D₂}

where D₁ and D₂ are material constants.
=== Material parameters for some ceramics ===
== Johnson–Holmquist equation of state ==
The function p(ξ) used in the Johnson–Holmquist material model is often called the Johnson–Holmquist equation of state and has the form

p(ξ) = k₁ ξ + k₂ ξ² + k₃ ξ³ + Δp   (compression)
p(ξ) = k₁ ξ                         (tension)

where Δp is an increment in the pressure and k₁, k₂, k₃ are material constants. The increment in pressure arises from the conversion of energy loss due to damage into internal energy. Frictional effects are neglected.
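As a rough numerical sketch of how the relations above fit together (not a calibrated or validated implementation; all parameter values below are placeholders, not real ceramic data), one can evaluate the damaged strength interpolation and the equation of state directly:

```python
import math

def jh2_strength(p_star, T_star, strain_rate, D,
                 A=0.93, B=0.31, C=0.005, m=0.6, n=0.6):
    """Normalized JH-2 strength: interpolate between intact and fractured curves.

    strain_rate is assumed to be already normalized by the 1/s reference rate.
    Parameter values are illustrative placeholders.
    """
    rate_term = 1.0 + C * math.log(max(strain_rate, 1e-12))
    sigma_intact = A * (p_star + T_star) ** n * rate_term
    sigma_fracture = B * p_star ** m * rate_term
    # Linear interpolation in the damage variable D
    return sigma_intact - D * (sigma_intact - sigma_fracture)

def jh2_pressure(xi, k1=130.95, k2=0.0, k3=0.0, delta_p=0.0):
    """Johnson-Holmquist equation of state: cubic in compression, linear in tension."""
    if xi >= 0.0:  # compression
        return k1 * xi + k2 * xi ** 2 + k3 * xi ** 3 + delta_p
    return k1 * xi  # tension

# Damage weakens the material at the same normalized pressure:
s_intact = jh2_strength(p_star=1.0, T_star=0.2, strain_rate=1.0, D=0.0)
s_damaged = jh2_strength(p_star=1.0, T_star=0.2, strain_rate=1.0, D=0.5)
```

With D = 0 the strength reduces to the intact curve; as D → 1 it approaches the fractured curve, consistent with the interpolation formula above.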
== Implementation in LS-DYNA ==
The Johnson-Holmquist material model is implemented in LS-DYNA as *MAT_JOHNSON_HOLMQUIST_CERAMICS.
== Implementation in the IMPETUS Afea Solver ==
The Johnson-Holmquist material model is implemented in the IMPETUS Afea Solver as *MAT_JH_CERAMIC.
== Implementation in Altair Radioss and OpenRadioss ==
The Johnson-Holmquist material model is implemented in Radioss Solver as /MAT/LAW79 (JOHN_HOLM).
== Implementation in Abaqus ==
The Johnson-Holmquist (JH-2) material model is implemented in Abaqus under the material name ABQ_JH2.
== References ==
== See also ==
Failure
Material failure theory
Damage mechanics is concerned with the representation, or modeling, of damage of materials that is suitable for making engineering predictions about the initiation, propagation, and fracture of materials without resorting to a microscopic description that would be too complex for practical engineering analysis.
Damage mechanics illustrates the typical engineering approach to model complex phenomena. To quote Dusan Krajcinovic, "It is often argued that the ultimate task of engineering research is to provide not so much a better insight into the examined phenomenon but to supply a rational predictive tool applicable in design." Damage mechanics is a topic of applied mechanics that relies heavily on continuum mechanics. Most of the work on damage mechanics uses state variables to represent the effects of damage on the stiffness and remaining life of a material that is being damaged as a result of thermomechanical load and ageing. The state variables may be measurable, e.g., crack density, or inferred from the effect they have on some macroscopic property, such as stiffness, coefficient of thermal expansion, or remaining life. The state variables have conjugate thermodynamic forces that motivate further damage. Initially the material is pristine, or intact. A damage activation criterion is needed to predict damage initiation. Damage evolution does not progress spontaneously after initiation, thus requiring a damage evolution model. In plasticity-like formulations, the damage evolution is controlled by a hardening function, but this requires additional phenomenological parameters that must be found through experimentation, which is expensive and time consuming and therefore rarely done. On the other hand, micromechanics-of-damage formulations are able to predict both damage initiation and evolution without additional material properties.
== Creep continuum damage mechanics ==
When mechanical structures are exposed to temperatures exceeding one-third of the melting temperature of the material of construction, time-dependent deformation (creep) and associated material degradation mechanisms become dominant modes of structural failure. While these deformation and damage mechanisms originate at the microscale where discrete processes dominate, practical application of failure theories to macroscale components is most readily achieved using the formalism of continuum mechanics. In this context, microscopic damage is idealized as a continuous state variable defined at all points within a structure. State equations are defined which govern the time evolution of damage. These equations may be readily integrated into finite element codes to analyze the damage evolution in complex 3D structures and calculate how long a component may safely be used before failure occurs.
=== Lumped damage state variable ===
L. M. Kachanov and Y. N. Rabotnov suggested the following evolution equations for the creep strain ε and a lumped damage state variable ω:
ε̇ = ε̇₀ (σ/(1 − ω))^n
ω̇ = ω̇₀ (σ/(1 − ω))^m

where ε̇ is the creep strain rate, ε̇₀ is the creep-rate multiplier, σ is the applied stress, n is the creep stress exponent of the material of interest, ω̇ is the rate of damage accumulation, ω̇₀ is the damage-rate multiplier, and m is the damage stress exponent.
In this simple case, the strain rate is governed by power-law creep with the stress enhanced by the damage state variable as damage accumulates. The damage term ω is interpreted as a distributed loss of load-bearing area, which results in an increased local stress at the microscale. The time to failure is determined by integrating the damage evolution equation from an initial undamaged state (ω = 0) to a specified critical damage (ω = ω_f). If ω_f is taken to be 1, this results in the following prediction for a structure loaded under a constant uniaxial stress σ:

t_f = 1 / ((m + 1) ω̇₀ σ^m)

Model parameters ε̇₀ and n are found by fitting the creep strain rate equation at zero damage to minimum creep rate measurements. Model parameters ω̇₀ and m are found by fitting the above equation to creep rupture life data.
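The rupture-life formula can be cross-checked by integrating the damage evolution equation numerically. The sketch below uses made-up parameter values (not fitted to any material) and a simple forward-Euler scheme; it recovers the closed-form t_f to within the discretization error:

```python
# Illustrative parameters only (not fitted to any material):
omega_dot0, m, sigma = 1e-6, 4.0, 2.0

# Closed-form rupture life for constant uniaxial stress (omega_f = 1):
t_f = 1.0 / ((m + 1.0) * omega_dot0 * sigma ** m)

# Forward-Euler integration of d(omega)/dt = omega_dot0 * (sigma/(1-omega))**m
omega, t = 0.0, 0.0
dt = t_f / 200_000
while omega < 1.0 and t < 2.0 * t_f:
    omega += omega_dot0 * (sigma / (1.0 - omega)) ** m * dt
    t += dt
```

The integration reaches ω = 1 essentially at t = t_f; the final step overshoots slightly because the damage rate diverges as ω → 1, which is the expected blow-up behaviour of the Kachanov–Rabotnov model.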
=== Mechanistically informed damage state variables ===
While easy to apply, the lumped damage model proposed by Kachanov and Rabotnov is limited by the fact that the damage state variable cannot be directly tied to a specific mechanism of strain and damage evolution. Correspondingly, extrapolation of the model beyond the original dataset of test data is not justified. This limitation was remedied by researchers such as A.C.F. Cocks, M.F. Ashby, and B.F. Dyson, who proposed mechanistically informed strain and damage evolution equations. Extrapolation using such equations is justified if the dominant damage mechanism remains the same at the conditions of interest.
==== Void-growth by power-law creep ====
In the power-law creep regime, global deformation is controlled by glide and climb of dislocations. If internal voids are present within the microstructure, global structural continuity requires that the voids must both elongate and expand laterally, further reducing the local section. When cast in the damage mechanics formalism, the growth of internal voids by power-law creep can be represented by the following equations.
ε̇ = ε̇₀ σ^n (1 + (2 r_h⁰/d) [1/(1 − ω)^n − 1])
ω̇ = ε̇₀ σ^n (1/(1 − ω)^n − (1 − ω))

where ε̇₀ is the creep-rate multiplier, σ is the applied stress, n is the creep stress exponent, r_h⁰ is the average initial void radius, and d is the grain size.
==== Void-growth by boundary diffusion ====
At very high temperature and/or low stresses, void growth on grain boundaries is primarily controlled by the diffusive flux of vacancies along the grain boundary. As matter diffuses away from the void and plates onto the adjacent grain boundaries, a roughly spherical void is maintained by rapid diffusion of vacancies along the surface of the void. When cast in the damage mechanics formalism, the growth of internal voids by boundary diffusion can be represented by the following equations.
ε̇ = ε̇₀ φ₀ σ (2l / (d ln(1/ω)))
ω̇ = ε̇₀ φ₀ σ (1 / (ω^{1/2} ln(1/ω)))
φ₀ = (2 D_B δ_B Ω / (k T l³)) (1/ε̇₀)

where ε̇₀ is the creep-rate multiplier, σ is the applied stress, 2l is the center-to-center void spacing, d is the grain size, D_B is the grain-boundary diffusion coefficient, δ_B is the grain-boundary thickness, Ω is the atomic volume, k is the Boltzmann constant, and T is the absolute temperature. The factors present in φ₀ are very similar to the Coble creep pre-factors, due to the similarity of the two mechanisms.
==== Precipitate coarsening ====
Many modern steels and alloys are designed such that precipitates form either within the matrix or along grain boundaries during casting. These precipitates restrict dislocation motion and, if present on grain boundaries, grain boundary sliding during creep. Many precipitates are not thermodynamically stable and grow via diffusion when exposed to elevated temperatures. As the precipitates coarsen, their ability to restrict dislocation motion decreases because the average spacing between particles increases, thus decreasing the Orowan stress required for bowing. In the case of grain boundary precipitates, precipitate growth means that fewer grain boundaries are impeded from grain boundary sliding. When cast into the damage mechanics formalism, precipitate coarsening and its effect on strain rate may be represented by the following equations.
ε̇ = ε̇₀ σ^n (1 + K″ ω)^n
ω̇ = (K′/3) (1 − ω)⁴

where ε̇₀ is the creep-rate multiplier, σ is the applied stress, n is the creep stress exponent, K″ is a parameter linking the precipitation damage to the strain rate, and K′ determines the rate of precipitate coarsening.
=== Combining damage mechanisms ===
Multiple damage mechanisms can be combined to represent a broader range of phenomena. For instance, if both void-growth by power-law creep and precipitate coarsening are relevant mechanisms, the following combined set of equations may be used:
ε̇ = ε̇₀ σ^n (1 + (2 r_h⁰/d) [1/(1 − ω₁)^n − 1]) (1 + K″ ω₂)^n
ω̇₁ = ε̇₀ σ^n (1/(1 − ω₁)^n − (1 − ω₁)) (1 + K″ ω₂)^n
ω̇₂ = (K′/3) (1 − ω₂)⁴

Note that both damage mechanisms are included in the creep strain rate equation. The precipitate coarsening damage mechanism influences the void-growth damage mechanism because the void-growth mechanism depends on the global strain rate. The precipitate growth mechanism is only time and temperature dependent and hence does not depend on the void-growth damage ω₁.
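The one-way coupling can be seen in a small numerical experiment. The sketch below (toy normalized units with σ = 1, σ^n absorbed into a single rate constant; all values are invented for illustration) integrates the coupled equations with forward Euler, seeding a small initial void damage since ω̇₁ vanishes at ω₁ = 0:

```python
def integrate(K_pp, steps=30000, dt=1.0,
              c=1e-5,       # eps_dot0 * sigma**n in toy units
              n=5.0, K_p=1e-5, r_ratio=0.04, om1_0=0.01):
    """Euler integration of the combined damage ODEs (illustrative units only)."""
    eps, om1, om2 = 0.0, om1_0, 0.0
    for _ in range(steps):
        coarsen = (1.0 + K_pp * om2) ** n   # (1 + K'' w2)^n
        g = 1.0 / (1.0 - om1) ** n          # 1/(1 - w1)^n
        eps += c * (1.0 + r_ratio * (g - 1.0)) * coarsen * dt
        om1 += c * (g - (1.0 - om1)) * coarsen * dt
        om2 += (K_p / 3.0) * (1.0 - om2) ** 4 * dt
        if om1 >= 0.999:                    # treat as failed
            om1 = 0.999
            break
    return eps, om1, om2
```

Running with K″ > 0 versus K″ = 0 shows that coarsening accelerates void-growth damage, while the coarsening variable ω₂ itself evolves identically in both runs, reflecting the one-way coupling noted above.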
=== Multiaxial effects ===
The preceding equations are valid under uniaxial tension only. When a multiaxial state of stress is present in the system, each equation must be adapted so that the driving multiaxial stress is considered. For void-growth by power-law creep, the relevant stress is the von Mises stress as this drives the global creep deformation; however, for void-growth by boundary diffusion, the maximum principal stress drives the vacancy flux.
== See also ==
Lumped damage mechanics
Failure analysis
Critical plane analysis
== References ==
In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature (isothermal). The change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium.
In contrast, the Gibbs free energy or free enthalpy is most commonly used as a measure of thermodynamic potential (especially in chemistry) when it is convenient for applications that occur at constant pressure. For example, in explosives research Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances.
The concept of free energy was developed by Hermann von Helmholtz, a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes". From the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy. In physics, the symbol F is also used in reference to free energy or Helmholtz function.
== Definition ==
The Helmholtz free energy is defined as
A ≡ U − TS,

where

A is the Helmholtz free energy (sometimes also called F, particularly in the field of physics) (SI: joules, CGS: ergs),
U is the internal energy of the system (SI: joules, CGS: ergs),
T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).
The Helmholtz energy is the Legendre transformation of the internal energy U, in which temperature replaces entropy as the independent variable.
== Formal development ==
The first law of thermodynamics in a closed system provides
dU = δQ + δW,

where U is the internal energy, δQ is the energy added as heat, and δW is the work done on the system. The second law of thermodynamics for a reversible process yields δQ = T dS. In case of a reversible change, the work done can be expressed as δW = −p dV (ignoring electrical and other non-PV work), and so:

dU = T dS − p dV.

Applying the product rule for differentiation to d(TS) = T dS + S dT, it follows that

dU = d(TS) − S dT − p dV,

and

d(U − TS) = −S dT − p dV.

The definition of A = U − TS allows us to rewrite this as

dA = −S dT − p dV.
Because A is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.
== Minimum free energy and maximum work principles ==
The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase ΔU, the entropy increase ΔS, and the total amount of work that can be extracted, performed by the system, W, are well-defined quantities. Conservation of energy implies

ΔU_bath + ΔU + W = 0.

The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by

Q_bath = ΔU_bath = −(ΔU + W).

The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is

ΔS_bath = Q_bath/T = −(ΔU + W)/T.
The total entropy change is thus given by

ΔS_bath + ΔS = −(ΔU − TΔS + W)/T.

Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:

ΔS_bath + ΔS = −(ΔA + W)/T.

Since the total change in entropy must always be larger than or equal to zero, we obtain the inequality

W ≤ −ΔA.

We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then

ΔA ≤ 0,
and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
This result seems to contradict the equation dA = −S dT − p dV, as keeping T and V constant seems to imply dA = 0, and hence A = const. In reality there is no contradiction: in a simple one-component system, to which the validity of the equation dA = −S dT − p dV is restricted, no process can occur at constant T and V, since there is a unique P(T, V) relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers N_j of particles of each type j. The differential of the free energy then generalizes to

dA = −S dT − P dV + Σ_j μ_j dN_j,

where the N_j are the numbers of particles of type j and the μ_j are the corresponding chemical potentials. This equation is then again valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V, the last term will thus be negative.

In case there are other external parameters, the above relation further generalizes to

dA = −S dT − Σ_i X_i dx_i + Σ_j μ_j dN_j.

Here the x_i are the external variables, and the X_i are the corresponding generalized forces.
== Relation to the canonical partition function ==
A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability of finding the system in energy eigenstate r is given by

P_r = e^{−β E_r} / Z,

where β = 1/(kT), E_r is the energy of accessible state r, and

Z = Σ_i e^{−β E_i}.
Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero.
The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:
U ≡ ⟨E⟩ = Σ_r P_r E_r = (1/Z) Σ_r E_r e^{−β E_r} = −(1/Z) ∂Z/∂β = −∂(log Z)/∂β.
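The identity U = −∂(log Z)/∂β can be made concrete for a small, arbitrary energy spectrum (the three levels below are invented numbers, not any physical system): compute ⟨E⟩ directly from the Boltzmann weights and compare with a finite-difference derivative of log Z.

```python
import math

def Z(beta, energies):
    """Canonical partition function for a discrete spectrum."""
    return sum(math.exp(-beta * E) for E in energies)

def U_exact(beta, energies):
    """<E> = sum_r E_r e^{-beta E_r} / Z, the direct ensemble average."""
    return sum(E * math.exp(-beta * E) for E in energies) / Z(beta, energies)

def U_from_logZ(beta, energies, h=1e-6):
    """U = -d(log Z)/d(beta), via a central finite difference."""
    return -(math.log(Z(beta + h, energies))
             - math.log(Z(beta - h, energies))) / (2.0 * h)

levels = [0.0, 1.0, 2.5]   # arbitrary example spectrum
beta = 0.7
```

Both routes give the same internal energy to within the finite-difference error; at larger β (lower temperature) the average energy drops toward the ground state, as expected.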
If the system is in state r, then the generalized force corresponding to an external variable x is given by

X_r = −∂E_r/∂x.

The thermal average of this can be written as

X = Σ_r P_r X_r = (1/β) ∂(log Z)/∂x.
Suppose that the system has one external variable x. Then changing the system's temperature parameter by dβ and the external variable by dx will lead to a change in log Z:

d(log Z) = (∂ log Z/∂β) dβ + (∂ log Z/∂x) dx = −U dβ + βX dx.
If we write U dβ as

U dβ = d(βU) − β dU,

we get

d(log Z) = −d(βU) + β dU + βX dx.

This means that the change in the internal energy is given by

dU = (1/β) d(log Z + βU) − X dx.

In the thermodynamic limit, the fundamental thermodynamic relation should hold:

dU = T dS − X dx.
This then implies that the entropy of the system is given by

S = k log Z + U/T + c,

where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes S = k log Ω₀, where Ω₀ is the ground-state degeneracy. The partition function in this limit is Ω₀ e^{−β U₀}, where U₀ is the ground-state energy. Thus, we see that c = 0 and that

A = U − TS = −kT log Z.
=== Relating free energy to other variables ===
Combining the definition of Helmholtz free energy A = U − TS with the fundamental thermodynamic relation

dA = −S dT − P dV + μ dN,

one can find expressions for entropy, pressure, and chemical potential:

S = −(∂A/∂T)|_{V,N},  P = −(∂A/∂V)|_{T,N},  μ = (∂A/∂N)|_{T,V}.

These three equations, along with the free energy in terms of the partition function,

A = −kT log Z,

allow an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density-of-states calculations. One can also do Legendre transformations for different systems. For example, for a system with a magnetic field or potential, it is true that

m = −(∂A/∂B)|_{T,N},  V = (∂A/∂Q)|_{N,T}.
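These derivative relations can be checked numerically on a toy two-level system (reduced units with k = 1; the spectrum is invented for illustration): compute A = −kT log Z, obtain S by finite-differencing A in T, and confirm that U = A + TS matches the direct ensemble average.

```python
import math

k = 1.0  # Boltzmann constant in reduced units

def A_free(T, energies):
    """Helmholtz free energy A = -kT log Z for a discrete spectrum."""
    z = sum(math.exp(-E / (k * T)) for E in energies)
    return -k * T * math.log(z)

def S_from_A(T, energies, h=1e-6):
    """S = -(dA/dT)_{V,N}, via a central finite difference."""
    return -(A_free(T + h, energies) - A_free(T - h, energies)) / (2.0 * h)

levels = [0.0, 1.0]   # toy two-level system
T = 0.8
A = A_free(T, levels)
S = S_from_A(T, levels)
U = A + T * S          # should equal the ensemble-average energy

# Direct Boltzmann-weighted average for comparison:
weights = [math.exp(-E / (k * T)) for E in levels]
U_direct = sum(E * w for E, w in zip(levels, weights)) / sum(weights)
```

The finite-difference entropy reproduces −∂A/∂T closely enough that U = A + TS agrees with the direct average to high precision, illustrating how all thermodynamic variables follow from the partition function.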
== Bogoliubov inequality ==
Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.
Suppose we replace the real Hamiltonian H of the model by a trial Hamiltonian H̃, which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that

⟨H̃⟩ = ⟨H⟩,

where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian H̃, then the Bogoliubov inequality states

A ≤ Ã,

where A is the free energy of the original Hamiltonian and Ã is the free energy of the trial Hamiltonian. We will prove this below.
By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy.
The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as

H = H₀ + ΔH,

where H₀ is some exactly solvable Hamiltonian, then we can apply the above inequality by defining

H̃ = H₀ + ⟨ΔH⟩₀.

Here we have defined ⟨X⟩₀ to be the average of X over the canonical ensemble defined by H₀. Since H̃ defined this way differs from H₀ by a constant, we have in general

⟨X⟩₀ = ⟨X⟩,

where ⟨X⟩ is still the average over H̃, as specified above. Therefore,

⟨H̃⟩ = ⟨H₀ + ⟨ΔH⟩₀⟩ = ⟨H⟩,

and thus the inequality A ≤ Ã holds. The free energy Ã is the free energy of the model defined by H₀ plus ⟨ΔH⟩₀. This means that

Ã = ⟨H₀⟩₀ − T S₀ + ⟨ΔH⟩₀ = ⟨H⟩₀ − T S₀,

and thus

A ≤ ⟨H⟩₀ − T S₀.
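A minimal numerical illustration of the bound for a classical system with discrete states (both spectra below are arbitrary made-up numbers): for any trial spectrum, ⟨H⟩₀ − T S₀ computed in the trial ensemble upper-bounds the true free energy, with equality when the trial spectrum equals the real one.

```python
import math

def free_energy(E, beta):
    """Exact free energy A = -(1/beta) log Z for a discrete spectrum (k = 1)."""
    return -math.log(sum(math.exp(-beta * e) for e in E)) / beta

def bogoliubov_bound(E, E0, beta):
    """<H>_0 - T S_0, with averages taken in the trial (H0) canonical ensemble."""
    w = [math.exp(-beta * e) for e in E0]
    z0 = sum(w)
    p = [x / z0 for x in w]                        # trial Boltzmann distribution
    H_avg = sum(pi * e for pi, e in zip(p, E))     # <H>_0 over the same state labels
    S0 = -sum(pi * math.log(pi) for pi in p)       # trial entropy (k = 1)
    return H_avg - S0 / beta                       # T S_0 = S_0/beta

beta = 1.3
E = [0.0, 0.7, 2.1, 3.4]    # "real" spectrum (arbitrary)
E0 = [0.0, 1.0, 1.0, 2.0]   # trial spectrum (arbitrary)
A_exact = free_energy(E, beta)
bound = bogoliubov_bound(E, E0, beta)
```

Minimizing the right-hand side over trial parameters is exactly the variational step used in mean-field theory; the bound guarantees the result never undershoots the true free energy.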
=== Proof of the Bogoliubov inequality ===
For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by
P
r
{\displaystyle P_{r}}
and
P
~
r
{\displaystyle {\tilde {P}}_{r}}
, respectively. From Gibbs' inequality we know that:
∑
r
P
~
r
log
(
P
~
r
)
≥
∑
r
P
~
r
log
(
P
r
)
{\displaystyle \sum _{r}{\tilde {P}}_{r}\log \left({\tilde {P}}_{r}\right)\geq \sum _{r}{\tilde {P}}_{r}\log \left(P_{r}\right)\,}
holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as:
∑
r
P
~
r
log
(
P
~
r
P
r
)
{\displaystyle \sum _{r}{\tilde {P}}_{r}\log \left({\frac {{\tilde {P}}_{r}}{P_{r}}}\right)\,}
Since
log
(
x
)
≥
1
−
1
x
{\displaystyle \log \left(x\right)\geq 1-{\frac {1}{x}}\,}
it follows that:
∑
r
P
~
r
log
(
P
~
r
P
r
)
≥
∑
r
(
P
~
r
−
P
r
)
=
0
{\displaystyle \sum _{r}{\tilde {P}}_{r}\log \left({\frac {{\tilde {P}}_{r}}{P_{r}}}\right)\geq \sum _{r}\left({\tilde {P}}_{r}-P_{r}\right)=0\,}
where in the last step we have used that both probability distributions are normalized to 1.
We can write the inequality as:
{\displaystyle \left\langle \log {\tilde {P}}_{r}\right\rangle \geq \left\langle \log P_{r}\right\rangle }
where the averages are taken with respect to
{\displaystyle {\tilde {P}}_{r}}
. If we now substitute in here the expressions for the probability distributions:
{\displaystyle P_{r}={\frac {\exp \left[-\beta H(r)\right]}{Z}}}
and
{\displaystyle {\tilde {P}}_{r}={\frac {\exp \left[-\beta {\tilde {H}}(r)\right]}{\tilde {Z}}}}
we get:
{\displaystyle \left\langle -\beta {\tilde {H}}-\log {\tilde {Z}}\right\rangle \geq \left\langle -\beta H-\log Z\right\rangle }
Since the averages of
{\displaystyle H}
and
{\displaystyle {\tilde {H}}}
are, by assumption, identical we have:
{\displaystyle A\leq {\tilde {A}}}
Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.
We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of
{\displaystyle {\tilde {H}}}
by
{\displaystyle \left|r\right\rangle }
. We denote the diagonal components of the density matrices for the canonical distributions for
{\displaystyle H}
and
{\displaystyle {\tilde {H}}}
in this basis as:
{\displaystyle P_{r}=\left\langle r\left|{\frac {\exp \left[-\beta H\right]}{Z}}\right|r\right\rangle \,}
and
{\displaystyle {\tilde {P}}_{r}=\left\langle r\left|{\frac {\exp \left[-\beta {\tilde {H}}\right]}{\tilde {Z}}}\right|r\right\rangle ={\frac {\exp \left(-\beta {\tilde {E}}_{r}\right)}{\tilde {Z}}}\,}
where the
{\displaystyle {\tilde {E}}_{r}}
are the eigenvalues of
{\displaystyle {\tilde {H}}}
. We assume again that the averages of H and
{\displaystyle {\tilde {H}}}
in the canonical ensemble defined by
{\displaystyle {\tilde {H}}}
are the same:
{\displaystyle \left\langle {\tilde {H}}\right\rangle =\left\langle H\right\rangle \,}
where
{\displaystyle \left\langle H\right\rangle =\sum _{r}{\tilde {P}}_{r}\left\langle r\left|H\right|r\right\rangle \,}
The inequality
{\displaystyle \sum _{r}{\tilde {P}}_{r}\log {\tilde {P}}_{r}\geq \sum _{r}{\tilde {P}}_{r}\log P_{r}}
still holds as both the
{\displaystyle P_{r}}
and the
{\displaystyle {\tilde {P}}_{r}}
sum to 1. On the left-hand side we can replace:
{\displaystyle \log {\tilde {P}}_{r}=-\beta {\tilde {E}}_{r}-\log {\tilde {Z}}}
On the right-hand side we can use the inequality
{\displaystyle \left\langle e^{X}\right\rangle _{r}\geq e^{{\left\langle X\right\rangle }_{r}}}
where we have introduced the notation
{\displaystyle \left\langle Y\right\rangle _{r}\equiv \left\langle r\left|Y\right|r\right\rangle \,}
for the expectation value of the operator Y in the state r; this is an instance of Jensen's inequality for the convex exponential function. Taking the logarithm of this inequality gives:
{\displaystyle \log \left[\left\langle e^{X}\right\rangle _{r}\right]\geq \left\langle X\right\rangle _{r}\,}
This allows us to write:
{\displaystyle \log P_{r}=\log \left[\left\langle \exp \left(-\beta H-\log Z\right)\right\rangle _{r}\right]\geq \left\langle -\beta H-\log Z\right\rangle _{r}}
The fact that the averages of H and
{\displaystyle {\tilde {H}}}
are the same then leads to the same conclusion as in the classical case:
{\displaystyle A\leq {\tilde {A}}}
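The variational bound is easy to verify numerically for a small system. The sketch below checks A ≤ ⟨H⟩₀ − T S₀ for a two-level system in units with k_B = 1; the energies and temperature are arbitrary illustrative values, and the trial Hamiltonian is taken diagonal in the same basis as H for simplicity:

```python
import math

# Numeric sanity check of the Bogoliubov bound A <= <H>_0 - T*S_0.
T = 1.0
beta = 1.0 / T
E = [0.0, 1.0]        # eigenvalues of the true Hamiltonian H (assumed values)
E_trial = [0.0, 1.5]  # eigenvalues of the trial Hamiltonian (assumed values)

Z = sum(math.exp(-beta * e) for e in E)
A = -T * math.log(Z)                                  # exact free energy

Z_trial = sum(math.exp(-beta * e) for e in E_trial)
P_trial = [math.exp(-beta * e) / Z_trial for e in E_trial]
H_avg = sum(p * e for p, e in zip(P_trial, E))        # <H>_0 (H diagonal here)
S0 = -sum(p * math.log(p) for p in P_trial)           # entropy of the trial ensemble

assert A <= H_avg - T * S0 + 1e-12
```

Varying `E_trial` shows the right-hand side is minimized (and the bound tightest) when the trial Hamiltonian matches the true one.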
== Generalized Helmholtz energy ==
In the more general case, the mechanical term
{\displaystyle p\mathrm {d} V}
must be replaced by the product of volume, stress, and an infinitesimal strain:
{\displaystyle \mathrm {d} A=V\sum _{ij}\sigma _{ij}\,\mathrm {d} \varepsilon _{ij}-S\,\mathrm {d} T+\sum _{i}\mu _{i}\,\mathrm {d} N_{i},}
where
{\displaystyle \sigma _{ij}}
is the stress tensor, and
{\displaystyle \varepsilon _{ij}}
is the strain tensor. In the case of linear elastic materials that obey Hooke's law, the stress is related to the strain by
{\displaystyle \sigma _{ij}=C_{ijkl}\varepsilon _{kl},}
where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate the expression for
{\displaystyle \mathrm {d} A}
to obtain the Helmholtz energy:
{\displaystyle {\begin{aligned}A&={\frac {1}{2}}VC_{ijkl}\varepsilon _{ij}\varepsilon _{kl}-ST+\sum _{i}\mu _{i}N_{i}\\&={\frac {1}{2}}V\sigma _{ij}\varepsilon _{ij}-ST+\sum _{i}\mu _{i}N_{i}.\end{aligned}}}
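The elastic term (1/2) V σ_ij ε_ij above can be evaluated directly once a constitutive law is chosen. The sketch below does this for an isotropic linear elastic solid; the Lamé parameters, volume, and strain values are illustrative assumptions, not data from the text:

```python
# Sketch: elastic contribution (1/2) V sigma_ij eps_ij for an isotropic
# linear elastic solid, with Hooke's law sigma = lam*tr(eps)*I + 2*mu*eps.
lam, mu = 1.0, 0.5   # assumed Lame parameters
V = 2.0              # assumed volume

eps = [[1e-3, 2e-4, 0.0],
       [2e-4, -5e-4, 0.0],
       [0.0, 0.0, 3e-4]]            # small symmetric strain tensor

tr = sum(eps[i][i] for i in range(3))
sigma = [[lam * tr * (1 if i == j else 0) + 2.0 * mu * eps[i][j]
          for j in range(3)] for i in range(3)]

elastic_energy = 0.5 * V * sum(sigma[i][j] * eps[i][j]
                               for i in range(3) for j in range(3))
assert elastic_energy >= 0.0   # the quadratic form is positive semidefinite
```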
== Application to fundamental equations of state ==
The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.
== Application to training auto-encoders ==
Hinton and Zemel "derive an objective function for training auto-encoder based on the minimum description length (MDL) principle". The description length of an input vector using a particular code is the sum of the code cost and the reconstruction cost; they define this to be the energy of the code. The true expected combined cost is
{\displaystyle A=\sum _{i}p_{i}E_{i}-H,}
"which has exactly the form of Helmholtz free energy".
== See also ==
Gibbs free energy and thermodynamic free energy for thermodynamics history overview and discussion of free energy
Grand potential
Enthalpy
Statistical mechanics
This page details the Helmholtz energy from the point of view of thermal and statistical physics.
Bennett acceptance ratio for an efficient way to calculate free energy differences and comparison with other methods.
== References ==
== Further reading ==
Atkins' Physical Chemistry, 7th edition, by Peter Atkins and Julio de Paula, Oxford University Press
HyperPhysics: Helmholtz Free Energy; Helmholtz and Gibbs Free Energies
In materials science, resilience is the ability of a material to absorb energy when it is deformed elastically, and release that energy upon unloading. Proof resilience is defined as the maximum energy that can be absorbed up to the elastic limit, without creating a permanent distortion. The modulus of resilience is defined as the maximum energy that can be absorbed per unit volume without creating a permanent distortion. It can be calculated by integrating the stress–strain curve from zero to the elastic limit. In uniaxial tension, under the assumptions of linear elasticity,
{\displaystyle U_{r}={\frac {\sigma _{y}^{2}}{2E}}={\frac {\sigma _{y}\varepsilon _{y}}{2}}}
where Ur is the modulus of resilience, σy is the yield strength, εy is the yield strain, and E is the Young's modulus. This analysis is not valid for non-linear elastic materials like rubber, for which the area under the stress–strain curve up to the elastic limit must be evaluated directly.
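The linear elastic formula can be evaluated directly; the material constants below are typical of a structural steel and are illustrative assumptions, not data from the text:

```python
# Sketch: modulus of resilience U_r = sigma_y^2 / (2 E) for a linear
# elastic material, and the equivalent form sigma_y * eps_y / 2.
sigma_y = 250e6          # yield strength, Pa (assumed)
E = 200e9                # Young's modulus, Pa (assumed)

U_r = sigma_y**2 / (2 * E)     # J/m^3
eps_y = sigma_y / E            # yield strain

assert abs(U_r - sigma_y * eps_y / 2) < 1e-6
assert round(U_r) == 156250    # about 0.156 MJ per cubic metre
```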
== Unit of resilience ==
Modulus of resilience (Ur) is measured in joules per cubic metre (J·m−3) in the SI system, i.e. elastic deformation energy per unit volume of the test specimen (over the gauge-length portion).
Like the unit of tensile toughness (UT), the unit of resilience can be easily calculated by using area underneath the stress–strain (σ–ε) curve, which gives resilience value, as given below:
Ur = Area underneath the stress–strain (σ–ε) curve up to yield = σ × ε
Ur [=] Pa × % = (N·m−2)·(unitless)
Ur [=] N·m·m−3
Ur [=] J·m−3
== See also ==
Toughness
== References ==
Guha S. Quantification of inherent energy resilience of process systems for optimization of energy usage. Environ Prog Sustainable Energy. 2019;e13308. https://doi.org/10.1002/ep.13308
Guha S. Quantification of inherent energy resilience of process systems pertaining to a gas sweetening unit. International Journal of Industrial Chemistry (2020) 11:71–90 https://doi.org/10.1007/s40090-020-00203-3
A strain energy density function or stored energy density function is a scalar-valued function that relates the strain energy density of a material to the deformation gradient.
{\displaystyle W={\hat {W}}({\boldsymbol {C}})={\hat {W}}({\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}})={\bar {W}}({\boldsymbol {F}})={\bar {W}}({\boldsymbol {B}}^{1/2}\cdot {\boldsymbol {R}})={\tilde {W}}({\boldsymbol {B}},{\boldsymbol {R}})}
Equivalently,
{\displaystyle W={\hat {W}}({\boldsymbol {C}})={\hat {W}}({\boldsymbol {R}}^{T}\cdot {\boldsymbol {B}}\cdot {\boldsymbol {R}})={\tilde {W}}({\boldsymbol {B}},{\boldsymbol {R}})}
where
{\displaystyle {\boldsymbol {F}}}
is the (two-point) deformation gradient tensor,
{\displaystyle {\boldsymbol {C}}}
is the right Cauchy–Green deformation tensor,
{\displaystyle {\boldsymbol {B}}}
is the left Cauchy–Green deformation tensor, and
{\displaystyle {\boldsymbol {R}}}
is the rotation tensor from the polar decomposition of
{\displaystyle {\boldsymbol {F}}}
.
For an anisotropic material, the strain energy density function
{\displaystyle {\hat {W}}({\boldsymbol {C}})}
depends implicitly on reference vectors or tensors (such as the initial orientation of fibers in a composite) that characterize internal material texture. The spatial representation,
{\displaystyle {\tilde {W}}({\boldsymbol {B}},{\boldsymbol {R}})}
must further depend explicitly on the polar rotation tensor
{\displaystyle {\boldsymbol {R}}}
to provide sufficient information to convect the reference texture vectors or tensors into the spatial configuration.
For an isotropic material, consideration of the principle of material frame indifference leads to the conclusion that the strain energy density function depends only on the invariants of
{\displaystyle {\boldsymbol {C}}}
(or, equivalently, the invariants of
{\displaystyle {\boldsymbol {B}}}
since both have the same eigenvalues). In other words, the strain energy density function can be expressed uniquely in terms of the principal stretches or in terms of the invariants of the left Cauchy–Green deformation tensor or right Cauchy–Green deformation tensor and we have:
For isotropic materials,
{\displaystyle W={\hat {W}}(\lambda _{1},\lambda _{2},\lambda _{3})={\tilde {W}}(I_{1},I_{2},I_{3})={\bar {W}}({\bar {I}}_{1},{\bar {I}}_{2},J)=U(I_{1}^{c},I_{2}^{c},I_{3}^{c})}
with
{\displaystyle {\begin{aligned}{\bar {I}}_{1}&=J^{-2/3}~I_{1}~;~~I_{1}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}~;~~J=\det({\boldsymbol {F}})\\{\bar {I}}_{2}&=J^{-4/3}~I_{2}~;~~I_{2}=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\end{aligned}}}
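These invariants can be computed directly from the principal stretches. The sketch below uses arbitrary illustrative stretch values, chosen so that the deformation is volume-preserving (J = 1), in which case the bar invariants coincide with the unmodified ones:

```python
# Sketch: invariants of the right Cauchy-Green tensor from principal stretches.
l1, l2 = 1.2, 0.9
l3 = 1.0 / (l1 * l2)                     # enforce J = det(F) = 1

I1 = l1**2 + l2**2 + l3**2
I2 = (l1 * l2)**2 + (l2 * l3)**2 + (l3 * l1)**2
J = l1 * l2 * l3

I1_bar = J**(-2.0 / 3.0) * I1            # isochoric (volume-free) invariants
I2_bar = J**(-4.0 / 3.0) * I2

assert abs(J - 1.0) < 1e-12
assert abs(I1_bar - I1) < 1e-9           # with J = 1 the bar invariants coincide
```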
For linear isotropic materials undergoing small strains, the strain energy density function specializes to
{\displaystyle W={\frac {1}{2}}\sum _{i=1}^{3}\sum _{j=1}^{3}\sigma _{ij}\epsilon _{ij}={\frac {1}{2}}(\sigma _{x}\epsilon _{x}+\sigma _{y}\epsilon _{y}+\sigma _{z}\epsilon _{z}+2\sigma _{xy}\epsilon _{xy}+2\sigma _{yz}\epsilon _{yz}+2\sigma _{xz}\epsilon _{xz})}
A strain energy density function is used to define a hyperelastic material by postulating that the stress in the material can be obtained by taking the derivative of
{\displaystyle W}
with respect to the strain. For an isotropic hyperelastic material, the function relates the energy stored in an elastic material, and thus the stress–strain relationship, only to the three strain (elongation) components, thus disregarding the deformation history, heat dissipation, stress relaxation etc.
For isothermal elastic processes, the strain energy density function relates to the specific Helmholtz free energy function
{\displaystyle \psi }
,
{\displaystyle W=\rho _{0}\psi \;.}
For isentropic elastic processes, the strain energy density function relates to the internal energy function
{\displaystyle u}
,
{\displaystyle W=\rho _{0}u\;.}
== Examples ==
Some examples of hyperelastic constitutive equations are:
Saint Venant–Kirchhoff
Neo-Hookean
Generalized Rivlin
Mooney–Rivlin
Ogden
Yeoh
Arruda–Boyce model
Gent
== See also ==
Finite strain theory
Helmholtz and Gibbs free energy in thermoelasticity
Hyperelastic material
Ogden–Roxburgh model
== References ==
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function so the determination of other state variable values at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.
Internal energy, enthalpy, and entropy are examples of state quantities or state functions because they quantitatively describe an equilibrium state of a thermodynamic system, regardless of how the system has arrived in that state. They are expressed by exact differentials. In contrast, mechanical work and heat are process quantities or path functions because their values depend on a specific "transition" (or "path") between two equilibrium states that a system has taken to reach the final equilibrium state, being expressed by inexact differentials. Exchanged heat (in certain discrete amounts) can be associated with changes of state function such as enthalpy. The description of the system heat exchange is done by a state function, and thus enthalpy changes point to an amount of heat. This can also apply to entropy when heat is compared to temperature. The description breaks down for quantities exhibiting hysteresis.
== History ==
It is likely that the term "functions of state" was used in a loose sense during the 1850s and 1860s by those such as Rudolf Clausius, William Rankine, Peter Tait, and William Thomson. By the 1870s, the term had acquired a use of its own. In his 1873 paper "Graphical Methods in the Thermodynamics of Fluids", Willard Gibbs states: "The quantities v, p, t, ε, and η are determined when the state of the body is given, and it may be permitted to call them functions of the state of the body."
== Overview ==
A thermodynamic system is described by a number of thermodynamic parameters (e.g. temperature, volume, or pressure) which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system (D). For example, a monatomic gas with a fixed number of particles is a simple case of a two-dimensional system (D = 2). Any two-dimensional system is uniquely specified by two parameters. Choosing a different pair of parameters, such as pressure and volume instead of pressure and temperature, creates a different coordinate system in two-dimensional thermodynamic state space but is otherwise equivalent. Pressure and temperature can be used to find volume, pressure and volume can be used to find temperature, and temperature and volume can be used to find pressure. An analogous statement holds for higher-dimensional spaces, as described by the state postulate.
Generally, a state space is defined by an equation of the form
{\displaystyle F(P,V,T,\ldots )=0}
, where P denotes pressure, T denotes temperature, V denotes volume, and the ellipsis denotes other possible state variables like particle number N and entropy S. If the state space is two-dimensional as in the above example, it can be visualized as a three-dimensional graph (a surface in three-dimensional space). However, the labels of the axes are not unique (since there are more than three state variables in this case), and only two independent variables are necessary to define the state.
When a system changes state continuously, it traces out a "path" in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, whether as a function of time or a function of some other external variable. For example, having the pressure P(t) and volume V(t) as functions of time from time t0 to t1 will specify a path in two-dimensional state space. Any function of time can then be integrated over the path. For example, to calculate the work done by the system from time t0 to time t1, calculate
{\textstyle W(t_{0},t_{1})=\int _{0}^{1}P\,dV=\int _{t_{0}}^{t_{1}}P(t){\frac {dV(t)}{dt}}\,dt}
. In order to calculate the work W in the above integral, the functions P(t) and V(t) must be known at each time t over the entire path. In contrast, a state function only depends upon the system parameters' values at the endpoints of the path. For example, the following equation can be used to calculate the work plus the integral of V dP over the path:
{\displaystyle {\begin{aligned}\Phi (t_{0},t_{1})&=\int _{t_{0}}^{t_{1}}P{\frac {dV}{dt}}\,dt+\int _{t_{0}}^{t_{1}}V{\frac {dP}{dt}}\,dt\\&=\int _{t_{0}}^{t_{1}}{\frac {d(PV)}{dt}}\,dt=P(t_{1})V(t_{1})-P(t_{0})V(t_{0}).\end{aligned}}}
In the equation,
{\displaystyle {\frac {d(PV)}{dt}}dt=d(PV)}
can be expressed as the exact differential of the function P(t)V(t). Therefore, the integral can be expressed as the difference in the value of P(t)V(t) at the end points of the integration. The product PV is therefore a state function of the system.
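The contrast between the path-dependent work integral and the path-independent state function PV can be checked numerically. The sketch below integrates P dV along two different paths between the same endpoints (the states and paths are illustrative assumptions):

```python
# Sketch: work is a path function while PV is a state function.
# Two paths in the (P, V) plane from (P0, V0) = (1, 1) to (P1, V1) = (2, 2).
def work(path, n=2000):
    """Trapezoidal approximation of the line integral of P dV along path(s), s in [0, 1]."""
    total = 0.0
    for k in range(n):
        (p_a, v_a), (p_b, v_b) = path(k / n), path((k + 1) / n)
        total += 0.5 * (p_a + p_b) * (v_b - v_a)
    return total

# Path A: expand at P = 1, then pressurize at constant V = 2.
path_a = lambda s: (1.0, 1.0 + 2.0 * s) if s < 0.5 else (2.0 * s, 2.0)
# Path B: pressurize at V = 1, then expand at constant P = 2.
path_b = lambda s: (1.0 + 2.0 * s, 1.0) if s < 0.5 else (2.0, 2.0 * s)

W_a, W_b = work(path_a), work(path_b)
assert abs(W_a - 1.0) < 1e-6 and abs(W_b - 2.0) < 1e-6   # path-dependent
assert (2.0 * 2.0) - (1.0 * 1.0) == 3.0                  # Delta(PV): identical for both paths
```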
The notation d will be used for an exact differential. In other words, the integral of dΦ will be equal to Φ(t1) − Φ(t0). The symbol δ will be reserved for an inexact differential, which cannot be integrated without full knowledge of the path. For example, δW = PdV will be used to denote an infinitesimal increment of work.
State functions represent quantities or properties of a thermodynamic system, while non-state functions represent a process during which the state functions change. For example, the state function PV is proportional to the internal energy of an ideal gas, but the work W is the amount of energy transferred as the system performs work. Internal energy is identifiable; it is a particular form of energy. Work is the amount of energy that has changed its form or location.
== List of state functions ==
The following are considered to be state functions in thermodynamics:
== See also ==
Markov property
Conservative vector field
Nonholonomic system
Equation of state
State variable
== Notes ==
== References ==
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics. Wiley & Sons. ISBN 978-0-471-86256-7.
Gibbs, Josiah Willard (1873). "Graphical Methods in the Thermodynamics of Fluids". Transactions of the Connecticut Academy. II. ASIN B00088UXBK – via WikiSource.
Mandl, F. (May 1988). Statistical physics (2nd ed.). Wiley & Sons. ISBN 978-0-471-91533-1.
== External links ==
Media related to State functions at Wikimedia Commons
Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach.
For sound waves of any magnitude of a disturbance in velocity, pressure, and density we have
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\nabla \cdot (\rho '\mathbf {v} )&=0\qquad {\text{(Conservation of Mass)}}\\(\rho _{0}+\rho '){\frac {\partial \mathbf {v} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla p'&=0\qquad {\text{(Equation of Motion)}}\end{aligned}}}
In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} &=0\\{\frac {\partial \mathbf {v} }{\partial t}}+{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
Where
{\displaystyle \mathbf {v} (\mathbf {x} ,t)}
is the perturbed velocity of the fluid,
{\displaystyle p_{0}}
is the pressure of the fluid at rest,
{\displaystyle p'(\mathbf {x} ,t)}
is the perturbed pressure of the system as a function of space and time,
{\displaystyle \rho _{0}}
is the density of the fluid at rest, and
{\displaystyle \rho '(\mathbf {x} ,t)}
is the variation in the density of the fluid over space and time.
is the variance in the density of the fluid over space and time.
In the case that the velocity is irrotational (
{\displaystyle \nabla \times \mathbf {v} =0}
), we then have the acoustic wave equation that describes the system:
{\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}\phi }{\partial t^{2}}}-\nabla ^{2}\phi =0}
Where we have
{\displaystyle {\begin{aligned}\mathbf {v} &=-\nabla \phi \\c^{2}&=({\frac {\partial p}{\partial \rho }})_{s}\\p'&=\rho _{0}{\frac {\partial \phi }{\partial t}}\\\rho '&={\frac {\rho _{0}}{c^{2}}}{\frac {\partial \phi }{\partial t}}\end{aligned}}}
== Derivation for a medium at rest ==
Starting with the Continuity Equation and the Euler Equation:
{\displaystyle {\begin{aligned}{\frac {\partial \rho }{\partial t}}+\nabla \cdot \rho \mathbf {v} &=0\\\rho {\frac {\partial \mathbf {v} }{\partial t}}+\rho (\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla p&=0\end{aligned}}}
If we take small perturbations of a constant pressure and density:
{\displaystyle {\begin{aligned}\rho &=\rho _{0}+\rho '\\p&=p_{0}+p'\end{aligned}}}
Then the equations of the system are
{\displaystyle {\begin{aligned}{\frac {\partial }{\partial t}}(\rho _{0}+\rho ')+\nabla \cdot (\rho _{0}+\rho ')\mathbf {v} &=0\\(\rho _{0}+\rho '){\frac {\partial \mathbf {v} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla (p_{0}+p')&=0\end{aligned}}}
Noting that the equilibrium pressures and densities are constant, this simplifies to
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\nabla \cdot \rho '\mathbf {v} &=0\\(\rho _{0}+\rho '){\frac {\partial \mathbf {v} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla p'&=0\end{aligned}}}
=== A Moving Medium ===
Starting with
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {w} +\nabla \cdot \rho '\mathbf {w} &=0\\(\rho _{0}+\rho '){\frac {\partial \mathbf {w} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {w} \cdot \nabla )\mathbf {w} +\nabla p'&=0\end{aligned}}}
We can have these equations work for a moving medium by setting
{\displaystyle \mathbf {w} =\mathbf {u} +\mathbf {v} }
, where
{\displaystyle \mathbf {u} }
is the constant velocity that the whole fluid is moving at before being disturbed (equivalent to a moving observer) and
{\displaystyle \mathbf {v} }
is the fluid velocity.
In this case the equations look very similar:
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\mathbf {u} \cdot \nabla \rho '+\nabla \cdot \rho '\mathbf {v} &=0\\(\rho _{0}+\rho '){\frac {\partial \mathbf {v} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {u} \cdot \nabla )\mathbf {v} +(\rho _{0}+\rho ')(\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla p'&=0\end{aligned}}}
Note that setting
{\displaystyle \mathbf {u} =0}
returns the equations at rest.
== Linearized Waves ==
Starting with the above given equations of motion for a medium at rest:
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\nabla \cdot \rho '\mathbf {v} &=0\\(\rho _{0}+\rho '){\frac {\partial \mathbf {v} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla p'&=0\end{aligned}}}
Let us now take
{\displaystyle \mathbf {v} ,\rho ',p'}
to all be small quantities.
In the case that we keep terms to first order, for the continuity equation, we have the
{\displaystyle \rho '\mathbf {v} }
term going to 0. This similarly applies for the density perturbation times the time derivative of the velocity. Moreover, the spatial components of the material derivative go to 0. We thus have, upon rearranging the equilibrium density:
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} &=0\\{\frac {\partial \mathbf {v} }{\partial t}}+{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
Next, given that our sound wave occurs in an ideal fluid, the motion is adiabatic, and then we can relate the small change in the pressure to the small change in the density by
{\displaystyle p'=\left({\frac {\partial p}{\partial \rho _{0}}}\right)_{s}\rho '}
Under this condition, we see that we now have
{\displaystyle {\begin{aligned}{\frac {\partial p'}{\partial t}}+\rho _{0}\left({\frac {\partial p}{\partial \rho _{0}}}\right)_{s}\nabla \cdot \mathbf {v} &=0\\{\frac {\partial \mathbf {v} }{\partial t}}+{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
Defining the speed of sound of the system:
{\displaystyle c\equiv {\sqrt {\left({\frac {\partial p}{\partial \rho _{0}}}\right)_{s}}}}
Everything becomes
{\displaystyle {\begin{aligned}{\frac {\partial p'}{\partial t}}+\rho _{0}c^{2}\nabla \cdot \mathbf {v} &=0\\{\frac {\partial \mathbf {v} }{\partial t}}+{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
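For an ideal gas undergoing adiabatic compression, the standard result (∂p/∂ρ)_s = γp/ρ gives a concrete value for the speed of sound defined above. The reference values below for air are illustrative assumptions:

```python
import math

# Sketch: speed of sound c = sqrt((dp/drho)_s) = sqrt(gamma * p / rho)
# for an ideal gas.
gamma = 1.4          # diatomic gas (air)
p0 = 101325.0        # Pa, standard atmospheric pressure (assumed reference)
rho0 = 1.225         # kg/m^3, air near 15 C (assumed reference)

c = math.sqrt(gamma * p0 / rho0)
assert 330.0 < c < 350.0     # ~340 m/s, the textbook value for air
```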
=== For Irrotational Fluids ===
In the case that the fluid is irrotational, that is
{\displaystyle \nabla \times \mathbf {v} =0}
, we can then write
{\displaystyle \mathbf {v} =-\nabla \phi }
and thus write our equations of motion as
{\displaystyle {\begin{aligned}{\frac {\partial p'}{\partial t}}-\rho _{0}c^{2}\nabla ^{2}\phi &=0\\-\nabla {\frac {\partial \phi }{\partial t}}+{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
The second equation tells us that
{\displaystyle p'=\rho _{0}{\frac {\partial \phi }{\partial t}}}
And the use of this equation in the continuity equation tells us that
{\displaystyle \rho _{0}{\frac {\partial ^{2}\phi }{\partial t^{2}}}-\rho _{0}c^{2}\nabla ^{2}\phi =0}
This simplifies to
{\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}\phi }{\partial t^{2}}}-\nabla ^{2}\phi =0}
Thus the velocity potential
{\displaystyle \phi }
obeys the wave equation in the limit of small disturbances. The boundary conditions required to solve for the potential come from the fact that the velocity of the fluid must be 0 normal to the fixed surfaces of the system.
Taking the time derivative of this wave equation and multiplying all sides by the unperturbed density, and then using the fact that
{\displaystyle p'=\rho _{0}{\frac {\partial \phi }{\partial t}}}
tells us that
{\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}p'}{\partial t^{2}}}-\nabla ^{2}p'=0}
Similarly, we saw that
{\displaystyle p'=\left({\frac {\partial p}{\partial \rho _{0}}}\right)_{s}\rho '=c^{2}\rho '}
. Thus we can multiply the above equation appropriately and see that
{\displaystyle {\frac {1}{c^{2}}}{\frac {\partial ^{2}\rho '}{\partial t^{2}}}-\nabla ^{2}\rho '=0}
Thus, the velocity potential, pressure, and density all obey the wave equation. Moreover, we only need to solve one such equation to determine all other three. In particular, we have
{\displaystyle {\begin{aligned}\mathbf {v} &=-\nabla \phi \\p'&=\rho _{0}{\frac {\partial \phi }{\partial t}}\\\rho '&={\frac {\rho _{0}}{c^{2}}}{\frac {\partial \phi }{\partial t}}\end{aligned}}}
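As a numerical illustration, a one-dimensional plane wave φ = sin(x − ct) should satisfy the wave equation above. The sketch checks this with central finite differences; the value of c and the sample point are arbitrary:

```python
import math

# Sketch: verify that phi = sin(x - c t) satisfies
# (1/c^2) d^2 phi/dt^2 - d^2 phi/dx^2 = 0 at a sample point.
c, h = 2.0, 1e-4
phi = lambda x, t: math.sin(x - c * t)

x0, t0 = 0.3, 0.1
d2t = (phi(x0, t0 + h) - 2 * phi(x0, t0) + phi(x0, t0 - h)) / h**2
d2x = (phi(x0 + h, t0) - 2 * phi(x0, t0) + phi(x0 - h, t0)) / h**2

residual = d2t / c**2 - d2x
assert abs(residual) < 1e-5
```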
=== For a moving medium ===
Again, we can derive the small-disturbance limit for sound waves in a moving medium. Again, starting with
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\mathbf {u} \cdot \nabla \rho '+\nabla \cdot \rho '\mathbf {v} &=0\\(\rho _{0}+\rho '){\frac {\partial \mathbf {v} }{\partial t}}+(\rho _{0}+\rho ')(\mathbf {u} \cdot \nabla )\mathbf {v} +(\rho _{0}+\rho ')(\mathbf {v} \cdot \nabla )\mathbf {v} +\nabla p'&=0\end{aligned}}}
We can linearize these into
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\mathbf {u} \cdot \nabla \rho '&=0\\{\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {v} +{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
==== For Irrotational Fluids in a Moving Medium ====
Given that we saw that
{\displaystyle {\begin{aligned}{\frac {\partial \rho '}{\partial t}}+\rho _{0}\nabla \cdot \mathbf {v} +\mathbf {u} \cdot \nabla \rho '&=0\\{\frac {\partial \mathbf {v} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {v} +{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}}}
If we make the previous assumptions of the fluid being ideal and the velocity being irrotational, then we have
$$\begin{aligned}p'&=\left({\frac {\partial p}{\partial \rho _{0}}}\right)_{s}\rho '=c^{2}\rho '\\\mathbf {v} &=-\nabla \phi \end{aligned}$$
Under these assumptions, our linearized sound equations become
$$\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial p'}{\partial t}}-\rho _{0}\nabla ^{2}\phi +{\frac {1}{c^{2}}}\mathbf {u} \cdot \nabla p'&=0\\-{\frac {\partial }{\partial t}}(\nabla \phi )-(\mathbf {u} \cdot \nabla )[\nabla \phi ]+{\frac {1}{\rho _{0}}}\nabla p'&=0\end{aligned}$$
Importantly, since $\mathbf {u}$ is a constant, we have
$$(\mathbf {u} \cdot \nabla )[\nabla \phi ]=\nabla [(\mathbf {u} \cdot \nabla )\phi ]$$
and then the second equation tells us that
$${\frac {1}{\rho _{0}}}\nabla p'=\nabla \left[{\frac {\partial \phi }{\partial t}}+(\mathbf {u} \cdot \nabla )\phi \right]$$
Or just that
$$p'=\rho _{0}\left[{\frac {\partial \phi }{\partial t}}+(\mathbf {u} \cdot \nabla )\phi \right]$$
Now, when we use this relation with the fact that
$${\frac {1}{c^{2}}}{\frac {\partial p'}{\partial t}}-\rho _{0}\nabla ^{2}\phi +{\frac {1}{c^{2}}}\mathbf {u} \cdot \nabla p'=0$$
and cancel and rearrange terms, we arrive at
$${\frac {1}{c^{2}}}{\frac {\partial ^{2}\phi }{\partial t^{2}}}-\nabla ^{2}\phi +{\frac {1}{c^{2}}}{\frac {\partial }{\partial t}}[(\mathbf {u} \cdot \nabla )\phi ]+{\frac {1}{c^{2}}}{\frac {\partial }{\partial t}}(\mathbf {u} \cdot \nabla \phi )+{\frac {1}{c^{2}}}\mathbf {u} \cdot \nabla [(\mathbf {u} \cdot \nabla )\phi ]=0$$
We can write this in a familiar form as
$$\left[{\frac {1}{c^{2}}}\left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla \right)^{2}-\nabla ^{2}\right]\phi =0$$
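As a quick consistency check, a one-dimensional plane wave with the downstream Doppler-shifted dispersion relation $\omega = (c+u)k$ satisfies this convected wave equation. A minimal sketch, assuming sympy is available (the 1-D restriction and the symbol names are illustrative, not from the article):

```python
# Verify symbolically that phi = exp(i(kx - wt)) with w = (c + u)k solves
# [ (1/c^2)(d/dt + u d/dx)^2 - d^2/dx^2 ] phi = 0 for constant flow speed u.
import sympy as sp

x, t, k, c, u = sp.symbols('x t k c u', real=True, positive=True)
w = (c + u) * k                     # Doppler-shifted dispersion relation
phi = sp.exp(sp.I * (k * x - w * t))

convective = sp.diff(phi, t) + u * sp.diff(phi, x)   # (d/dt + u d/dx) phi
lhs = (sp.diff(convective, t) + u * sp.diff(convective, x)) / c**2 \
      - sp.diff(phi, x, 2)
residual = sp.simplify(lhs)
print(residual)   # -> 0
```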
This differential equation must be solved with the appropriate boundary conditions. Note that setting $\mathbf {u} =0$ recovers the wave equation. Upon solving this equation for a moving medium, we then have
$$\begin{aligned}\mathbf {v} &=-\nabla \phi \\p'&=\rho _{0}\left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla \right)\phi \\\rho '&={\frac {\rho _{0}}{c^{2}}}\left({\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla \right)\phi \end{aligned}$$
== See also ==
Acoustic attenuation
Sound
Fourier analysis
== References ==
Landau, L.D.; Lifshitz, E.M. (1984). Fluid Mechanics (2nd ed.). Butterworth-Heinemann. ISBN 0-7506-2767-0.
Fetter, Alexander; Walecka, John (2003). Fluid Mechanics (1st ed.). Courier Corporation. ISBN 0-486-43261-0. | Wikipedia/Acoustic_theory |
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials, other fluids, and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor
$$\mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}$$
is related to both the reference and current configurations, as seen from the unit vectors $\mathbf {e} _{j}$ and $\mathbf {I} _{K}$; it is therefore a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of $\chi (\mathbf {X} ,t)$, $\mathbf {F}$ has the inverse $\mathbf {H} =\mathbf {F} ^{-1}$, where $\mathbf {H}$ is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant $J(\mathbf {X} ,t)$ must be nonsingular, i.e.
$$J(\mathbf {X} ,t)=\det \mathbf {F} (\mathbf {X} ,t)\neq 0$$
The material deformation gradient tensor
$$\mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}$$
is a second-order tensor that represents the gradient of the mapping function or functional relation $\chi (\mathbf {X} ,t)$, which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector $\mathbf {X}$, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function $\chi (\mathbf {X} ,t)$, i.e. that it is a differentiable function of $\mathbf {X}$ and time $t$, which implies that cracks and voids do not open or close during the deformation. Thus we have,
$$\begin{aligned}d\mathbf {x} &={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}={\frac {\partial x_{j}}{\partial X_{K}}}\,dX_{K}\\&=\nabla \chi (\mathbf {X} ,t)\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}=F_{jK}\,dX_{K}\,.\\&=\mathbf {F} (\mathbf {X} ,t)\,d\mathbf {X} \end{aligned}$$
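The relation $d\mathbf{x} = \mathbf{F}\,d\mathbf{X}$ can be checked numerically for a concrete motion. A minimal sketch, assuming numpy is available; the simple-shear mapping and the shear parameter `g` are illustrative choices, not from the article:

```python
# Numerical sketch of dx = F dX for the simple-shear motion
# x1 = X1 + g*X2, x2 = X2, x3 = X3 (g is a hypothetical shear parameter).
import numpy as np

g = 0.3
def chi(X):
    X1, X2, X3 = X
    return np.array([X1 + g * X2, X2, X3])

# Deformation gradient F_jK = d x_j / d X_K, constant in space for this motion:
F = np.array([[1.0, g,   0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

X = np.array([1.0, 2.0, 0.5])
dX = np.array([1e-6, -2e-6, 3e-6])       # small material line element
dx_exact = chi(X + dX) - chi(X)          # deformed line element
dx_linear = F @ dX                       # F dX
assert np.allclose(dx_exact, dx_linear)
print(dx_linear)
```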
=== Relative displacement vector ===
Consider a particle or material point $P$ with position vector $\mathbf {X} =X_{I}\mathbf {I} _{I}$ in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle, indicated by $p$ in the new configuration, is given by the vector position $\mathbf {x} =x_{i}\mathbf {e} _{i}$. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point $Q$ neighboring $P$, with position vector $\mathbf {X} +\Delta \mathbf {X} =(X_{I}+\Delta X_{I})\mathbf {I} _{I}$. In the deformed configuration this particle has a new position $q$ given by the position vector $\mathbf {x} +\Delta \mathbf {x}$. Assuming that the line segments $\Delta \mathbf {X}$ and $\Delta \mathbf {x}$ joining the particles $P$ and $Q$ in the undeformed and deformed configurations, respectively, are very small, we can express them as $d\mathbf {X}$ and $d\mathbf {x}$. Thus from Figure 2 we have
$$\begin{aligned}\mathbf {x} +d\mathbf {x} &=\mathbf {X} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\d\mathbf {x} &=\mathbf {X} -\mathbf {x} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\&=d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )-\mathbf {u} (\mathbf {X} )\\&=d\mathbf {X} +d\mathbf {u} \end{aligned}$$
where $d\mathbf {u}$ is the relative displacement vector, which represents the relative displacement of $Q$ with respect to $P$ in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element $d\mathbf {X}$, and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point $P$, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle $Q$ as
$$\begin{aligned}\mathbf {u} (\mathbf {X} +d\mathbf {X} )&=\mathbf {u} (\mathbf {X} )+d\mathbf {u} \quad &{\text{or}}&\quad u_{i}^{*}=u_{i}+du_{i}\\&\approx \mathbf {u} (\mathbf {X} )+\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \quad &{\text{or}}&\quad u_{i}^{*}\approx u_{i}+{\frac {\partial u_{i}}{\partial X_{J}}}dX_{J}\,.\end{aligned}$$
Thus, the previous equation $d\mathbf {x} =d\mathbf {X} +d\mathbf {u}$ can be written as
$$\begin{aligned}d\mathbf {x} &=d\mathbf {X} +d\mathbf {u} \\&=d\mathbf {X} +\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \\&=\left(\mathbf {I} +\nabla _{\mathbf {X} }\mathbf {u} \right)d\mathbf {X} \\&=\mathbf {F} \,d\mathbf {X} \end{aligned}$$
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry, but we avoid those issues in this article.
The time derivative of $\mathbf {F}$ is
$${\dot {\mathbf {F} }}={\frac {\partial \mathbf {F} }{\partial t}}={\frac {\partial }{\partial t}}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial t}}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]$$
where $\mathbf {V}$ is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
$${\dot {\mathbf {F} }}={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {v} (\mathbf {x} (\mathbf {X} ,t),t)\right]=\left.{\frac {\partial }{\partial \mathbf {x} }}\left[\mathbf {v} (\mathbf {x} ,t)\right]\right|_{\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}\cdot {\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}={\boldsymbol {l}}\cdot \mathbf {F} $$
where ${\boldsymbol {l}}=(\nabla _{\mathbf {x} }\mathbf {v} )^{T}$ is the spatial velocity gradient and $\mathbf {v} (\mathbf {x} ,t)=\mathbf {V} (\mathbf {X} ,t)$ is the spatial (Eulerian) velocity at $\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)$.
. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
$$\mathbf {F} =e^{{\boldsymbol {l}}\,t}$$
assuming $\mathbf {F} =\mathbf {1}$ at $t=0$. There are several methods of computing the exponential above.
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
$${\boldsymbol {d}}={\tfrac {1}{2}}\left({\boldsymbol {l}}+{\boldsymbol {l}}^{T}\right)\,,\qquad {\boldsymbol {w}}={\tfrac {1}{2}}\left({\boldsymbol {l}}-{\boldsymbol {l}}^{T}\right)\,.$$
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
$${\frac {\partial }{\partial t}}\left(\mathbf {F} ^{-1}\right)=-\mathbf {F} ^{-1}\cdot {\dot {\mathbf {F} }}\cdot \mathbf {F} ^{-1}\,.$$
The above relation can be verified by taking the material time derivative of $\mathbf {F} ^{-1}\cdot d\mathbf {x} =d\mathbf {X}$ and noting that ${\dot {\mathbf {X} }}=0$.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient $\mathbf {F}$, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
$$\mathbf {F} =\mathbf {R} \mathbf {U} =\mathbf {V} \mathbf {R} $$
where the tensor $\mathbf {R}$ is a proper orthogonal tensor, i.e., $\mathbf {R} ^{-1}=\mathbf {R} ^{T}$ and $\det \mathbf {R} =+1$, representing a rotation; the tensor $\mathbf {U}$ is the right stretch tensor; and $\mathbf {V}$ the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor $\mathbf {R}$, respectively. $\mathbf {U}$ and $\mathbf {V}$ are both positive definite, i.e. $\mathbf {x} \cdot \mathbf {U} \cdot \mathbf {x} >0$ and $\mathbf {x} \cdot \mathbf {V} \cdot \mathbf {x} >0$ for all non-zero $\mathbf {x} \in \mathbb {R} ^{3}$, and symmetric second-order tensors, i.e. $\mathbf {U} =\mathbf {U} ^{T}$ and $\mathbf {V} =\mathbf {V} ^{T}$.
This decomposition implies that the deformation of a line element $d\mathbf {X}$ in the undeformed configuration onto $d\mathbf {x}$ in the deformed configuration, i.e., $d\mathbf {x} =\mathbf {F} \,d\mathbf {X}$, may be obtained either by first stretching the element by $\mathbf {U}$, i.e. $d\mathbf {x} '=\mathbf {U} \,d\mathbf {X}$, followed by a rotation $\mathbf {R}$, i.e., $d\mathbf {x} =\mathbf {R} \,d\mathbf {x} '$; or equivalently, by applying a rigid rotation $\mathbf {R}$ first, i.e., $d\mathbf {x} '=\mathbf {R} \,d\mathbf {X}$, followed later by a stretching $\mathbf {V}$, i.e., $d\mathbf {x} =\mathbf {V} \,d\mathbf {x} '$ (see Figure 3).
Due to the orthogonality of $\mathbf {R}$,
$$\mathbf {V} =\mathbf {R} \cdot \mathbf {U} \cdot \mathbf {R} ^{T}$$
so that $\mathbf {U}$ and $\mathbf {V}$ have the same eigenvalues or principal stretches, but different eigenvectors or principal directions $\mathbf {N} _{i}$ and $\mathbf {n} _{i}$, respectively. The principal directions are related by
$$\mathbf {n} _{i}=\mathbf {R} \mathbf {N} _{i}.$$
This polar decomposition, which is unique as $\mathbf {F}$ is invertible with a positive determinant, is a corollary of the singular-value decomposition.
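The decomposition and the relation between the two stretch tensors can be verified numerically. A minimal sketch, assuming numpy and scipy are available; the matrix entries are a hypothetical deformation gradient chosen only so that its determinant is positive:

```python
# Sketch: polar decomposition F = R U = V R of an invertible F with det F > 0,
# using scipy.linalg.polar; checks V = R U R^T and n_i = R N_i.
import numpy as np
from scipy.linalg import polar

F = np.array([[1.1, 0.4, 0.0],
              [0.2, 0.9, 0.1],
              [0.0, 0.1, 1.2]])      # hypothetical deformation gradient
assert np.linalg.det(F) > 0

R, U = polar(F, side='right')        # F = R U
_, V = polar(F, side='left')         # F = V R (same rotation R)
assert np.allclose(F, R @ U) and np.allclose(F, V @ R)
assert np.allclose(V, R @ U @ R.T)   # V = R U R^T

# Principal directions: eigenvectors of U map to eigenvectors of V under R.
lam, N = np.linalg.eigh(U)           # principal stretches, material directions
n = R @ N                            # spatial directions n_i = R N_i
assert np.allclose(V @ n, n * lam)   # same stretches, rotated eigenvectors
print(lam)
```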
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
$$da~\mathbf {n} =J~dA~\mathbf {F} ^{-T}\cdot \mathbf {N} $$
where $da$ is an area of a region in the deformed configuration, $dA$ is the same area in the reference configuration, $\mathbf {n}$ is the outward normal to the area element in the current configuration, $\mathbf {N}$ is the outward normal in the reference configuration, $\mathbf {F}$ is the deformation gradient, and $J=\det \mathbf {F}$.
The corresponding formula for the transformation of the volume element is
$$dv=J~dV$$
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change ($\mathbf {R} \mathbf {R} ^{T}=\mathbf {R} ^{T}\mathbf {R} =\mathbf {I}$), we can exclude the rotation by multiplying the deformation gradient tensor $\mathbf {F}$ by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
$$\mathbf {C} =\mathbf {F} ^{T}\mathbf {F} =\mathbf {U} ^{2}\qquad {\text{or}}\qquad C_{IJ}=F_{kI}~F_{kJ}={\frac {\partial x_{k}}{\partial X_{I}}}{\frac {\partial x_{k}}{\partial X_{J}}}.$$
Physically, the Cauchy–Green tensor gives us the square of the local change in distances due to deformation, i.e.
$$d\mathbf {x} ^{2}=d\mathbf {X} \cdot \mathbf {C} \cdot d\mathbf {X} $$
Invariants of $\mathbf {C}$ are often used in the expressions for strain energy density functions. The most commonly used invariants are
$$\begin{aligned}I_{1}^{C}&:={\text{tr}}(\mathbf {C} )=C_{II}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}^{C}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {C} )^{2}-{\text{tr}}(\mathbf {C} ^{2})\right]={\tfrac {1}{2}}\left[(C_{JJ})^{2}-C_{IK}C_{KI}\right]=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}^{C}&:=\det(\mathbf {C} )=J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}.\end{aligned}$$
where $J:=\det \mathbf {F}$ is the determinant of the deformation gradient $\mathbf {F}$ and $\lambda _{i}$ are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
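The trace-based and stretch-based expressions for these invariants can be compared directly. A minimal sketch, assuming numpy is available; the diagonal stretch values are an illustrative choice:

```python
# Sketch: invariants of C computed from traces agree with the
# principal-stretch expressions, for a hypothetical diagonal stretch.
import numpy as np

lam = np.array([1.3, 0.8, 1.1])        # principal stretches
F = np.diag(lam)
C = F.T @ F                            # right Cauchy-Green tensor
J = np.linalg.det(F)

I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
I3 = np.linalg.det(C)

l2 = lam**2
assert np.isclose(I1, l2.sum())
assert np.isclose(I2, l2[0]*l2[1] + l2[1]*l2[2] + l2[2]*l2[0])
assert np.isclose(I3, J**2)
print(I1, I2, I3)
```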
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., $\mathbf {C} ^{-1}$, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
$$\mathbf {f} =\mathbf {C} ^{-1}=\mathbf {F} ^{-1}\mathbf {F} ^{-T}\qquad {\text{or}}\qquad f_{IJ}={\frac {\partial X_{I}}{\partial x_{k}}}{\frac {\partial X_{J}}{\partial x_{k}}}$$
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy–Green deformation tensor leads to the left Cauchy–Green deformation tensor which is defined as:
$$\mathbf {B} =\mathbf {F} \mathbf {F} ^{T}=\mathbf {V} ^{2}\qquad {\text{or}}\qquad B_{ij}={\frac {\partial x_{i}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{K}}}$$
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of $\mathbf {B}$ are also used in the expressions for strain energy density functions. The conventional invariants are defined as
$$\begin{aligned}I_{1}&:={\text{tr}}(\mathbf {B} )=B_{ii}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {B} )^{2}-{\text{tr}}(\mathbf {B} ^{2})\right]={\tfrac {1}{2}}\left(B_{ii}^{2}-B_{jk}B_{kj}\right)=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}&:=\det \mathbf {B} =J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}\end{aligned}$$
where $J:=\det \mathbf {F}$ is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
$$({\bar {I}}_{1}:=J^{-2/3}I_{1}~;~~{\bar {I}}_{2}:=J^{-4/3}I_{2}~;~~J\neq 1)~.$$
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier, in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, $\mathbf {B} ^{-1}$. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
$$\mathbf {c} =\mathbf {B} ^{-1}=\mathbf {F} ^{-T}\mathbf {F} ^{-1}\qquad {\text{or}}\qquad c_{ij}={\frac {\partial X_{K}}{\partial x_{i}}}{\frac {\partial X_{K}}{\partial x_{j}}}$$
=== Spectral representation ===
If there are three distinct principal stretches $\lambda _{i}$, the spectral decompositions of $\mathbf {C}$ and $\mathbf {B}$ are given by
$$\mathbf {C} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {N} _{i}\otimes \mathbf {N} _{i}\qquad {\text{and}}\qquad \mathbf {B} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {n} _{i}\otimes \mathbf {n} _{i}$$
Furthermore,
$$\mathbf {U} =\sum _{i=1}^{3}\lambda _{i}\mathbf {N} _{i}\otimes \mathbf {N} _{i}~;\qquad \mathbf {V} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {n} _{i}$$
$$\mathbf {R} =\sum _{i=1}^{3}\mathbf {n} _{i}\otimes \mathbf {N} _{i}~;\qquad \mathbf {F} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {N} _{i}$$
Observe that
$$\mathbf {V} =\mathbf {R} ~\mathbf {U} ~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {R} ~(\mathbf {N} _{i}\otimes \mathbf {N} _{i})~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})\otimes (\mathbf {R} ~\mathbf {N} _{i})$$
Therefore, the uniqueness of the spectral decomposition also implies that $\mathbf {n} _{i}=\mathbf {R} ~\mathbf {N} _{i}$. The left stretch ($\mathbf {V}$) is also called the spatial stretch tensor, while the right stretch ($\mathbf {U}$) is called the material stretch tensor.
The effect of $\mathbf {F}$ acting on $\mathbf {N} _{i}$ is to stretch the vector by $\lambda _{i}$ and to rotate it to the new orientation $\mathbf {n} _{i}$, i.e.,
$$\mathbf {F} ~\mathbf {N} _{i}=\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})=\lambda _{i}~\mathbf {n} _{i}$$
In a similar vein,
$$\mathbf {F} ^{-T}~\mathbf {N} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;\qquad \mathbf {F} ^{T}~\mathbf {n} _{i}=\lambda _{i}~\mathbf {N} _{i}~;\qquad \mathbf {F} ^{-1}~\mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {N} _{i}~.$$
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of $\alpha =\alpha _{1}$. If the volume remains constant, the contraction in the other two directions is such that $\alpha _{1}\alpha _{2}\alpha _{3}=1$, or $\alpha _{2}=\alpha _{3}=\alpha ^{-0.5}$. Then:
$$\mathbf {F} ={\begin{bmatrix}\alpha &0&0\\0&\alpha ^{-0.5}&0\\0&0&\alpha ^{-0.5}\end{bmatrix}}$$
$$\mathbf {B} =\mathbf {C} ={\begin{bmatrix}\alpha ^{2}&0&0\\0&\alpha ^{-1}&0\\0&0&\alpha ^{-1}\end{bmatrix}}$$
Simple shear
$$\mathbf {F} ={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}$$
$$\mathbf {B} ={\begin{bmatrix}1+\gamma ^{2}&\gamma &0\\\gamma &1&0\\0&0&1\end{bmatrix}}$$
$$\mathbf {C} ={\begin{bmatrix}1&\gamma &0\\\gamma &1+\gamma ^{2}&0\\0&0&1\end{bmatrix}}$$
Rigid body rotation
$$\mathbf {F} ={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}$$
$$\mathbf {B} =\mathbf {C} ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}=\mathbf {1} $$
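The simple-shear and rigid-rotation examples above can be reproduced numerically. A minimal sketch, assuming numpy is available; the numeric values of the shear `g` and angle `th` are illustrative:

```python
# Sketch verifying the worked examples: simple shear and rigid rotation.
import numpy as np

g, th = 0.5, 0.3
F_shear = np.array([[1, g, 0], [0, 1, 0], [0, 0, 1.0]])
B = F_shear @ F_shear.T              # left Cauchy-Green tensor
C = F_shear.T @ F_shear              # right Cauchy-Green tensor
assert np.allclose(B, [[1 + g**2, g, 0], [g, 1, 0], [0, 0, 1]])
assert np.allclose(C, [[1, g, 0], [g, 1 + g**2, 0], [0, 0, 1]])

c, s = np.cos(th), np.sin(th)
F_rot = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])
# A rigid rotation produces no deformation: B = C = identity.
assert np.allclose(F_rot @ F_rot.T, np.eye(3))
assert np.allclose(F_rot.T @ F_rot, np.eye(3))
print("examples verified")
```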
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
$${\cfrac {\partial \lambda _{i}}{\partial \mathbf {C} }}={\cfrac {1}{2\lambda _{i}}}~\mathbf {N} _{i}\otimes \mathbf {N} _{i}={\cfrac {1}{2\lambda _{i}}}~\mathbf {R} ^{T}~(\mathbf {n} _{i}\otimes \mathbf {n} _{i})~\mathbf {R} ~;\qquad i=1,2,3$$
and follow from the observations that
$$\mathbf {C} :(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\lambda _{i}^{2}~;\qquad {\cfrac {\partial \mathbf {C} }{\partial \mathbf {C} }}={\mathsf {I}}^{(s)}~;\qquad {\mathsf {I}}^{(s)}:(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\mathbf {N} _{i}\otimes \mathbf {N} _{i}.$$
=== Physical interpretation of deformation tensors ===
Let $\mathbf {X} =X^{i}~{\boldsymbol {E}}_{i}$ be a Cartesian coordinate system defined on the undeformed body and let $\mathbf {x} =x^{i}~{\boldsymbol {E}}_{i}$ be another system defined on the deformed body. Let a curve $\mathbf {X} (s)$ in the undeformed body be parametrized using $s\in [0,1]$. Its image in the deformed body is $\mathbf {x} (\mathbf {X} (s))$.
The undeformed length of the curve is given by
$$l_{X}=\int _{0}^{1}\left|{\cfrac {d\mathbf {X} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {I}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds$$
After deformation, the length becomes
$$\begin{aligned}l_{x}&=\int _{0}^{1}\left|{\cfrac {d\mathbf {x} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {x} }{ds}}\cdot {\cfrac {d\mathbf {x} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)\cdot \left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)}}~ds\\&=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot \left[\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right]\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds\end{aligned}$$
Note that the right Cauchy–Green deformation tensor is defined as
$${\boldsymbol {C}}:={\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}=\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}$$
Hence,
$$l_{x}=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {C}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds$$
which indicates that changes in length are characterized by ${\boldsymbol {C}}$.
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain measure for large deformations is the Lagrangian finite strain tensor, also called the Green–Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
$$\mathbf {E} ={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )\qquad {\text{or}}\qquad E_{KL}={\frac {1}{2}}\left({\frac {\partial x_{j}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{L}}}-\delta _{KL}\right)$$
or as a function of the displacement gradient tensor
$$\mathbf {E} ={\frac {1}{2}}\left[(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\cdot \nabla _{\mathbf {X} }\mathbf {u} \right]$$
or
$$E_{KL}={\frac {1}{2}}\left({\frac {\partial u_{K}}{\partial X_{L}}}+{\frac {\partial u_{L}}{\partial X_{K}}}+{\frac {\partial u_{M}}{\partial X_{K}}}{\frac {\partial u_{M}}{\partial X_{L}}}\right)$$
The Green–Lagrangian strain tensor is a measure of how much $\mathbf {C}$ differs from $\mathbf {I}$.
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as
{\displaystyle \mathbf {e} ={\frac {1}{2}}(\mathbf {I} -\mathbf {c} )={\frac {1}{2}}(\mathbf {I} -\mathbf {B} ^{-1})\qquad {\text{or}}\qquad e_{rs}={\frac {1}{2}}\left(\delta _{rs}-{\frac {\partial X_{M}}{\partial x_{r}}}{\frac {\partial X_{M}}{\partial x_{s}}}\right)}
or as a function of the displacement gradients we have
{\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}-{\frac {\partial u_{k}}{\partial x_{i}}}{\frac {\partial u_{k}}{\partial x_{j}}}\right)}
=== Seth–Hill family of generalized strain tensors ===
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
{\displaystyle \mathbf {E} _{(m)}={\frac {1}{2m}}(\mathbf {U} ^{2m}-\mathbf {I} )={\frac {1}{2m}}\left[\mathbf {C} ^{m}-\mathbf {I} \right]}
For different values of {\displaystyle m} we have:
Green-Lagrangian strain tensor
{\displaystyle \mathbf {E} _{(1)}={\frac {1}{2}}(\mathbf {U} ^{2}-\mathbf {I} )={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )}
Biot strain tensor
{\displaystyle \mathbf {E} _{(1/2)}=(\mathbf {U} -\mathbf {I} )=\mathbf {C} ^{1/2}-\mathbf {I} }
Logarithmic strain, Natural strain, True strain, or Hencky strain
{\displaystyle \mathbf {E} _{(0)}=\ln \mathbf {U} ={\frac {1}{2}}\,\ln \mathbf {C} }
Almansi strain
{\displaystyle \mathbf {E} _{(-1)}={\frac {1}{2}}\left[\mathbf {I} -\mathbf {U} ^{-2}\right]}
The second-order approximation of these tensors is
{\displaystyle \mathbf {E} _{(m)}={\boldsymbol {\varepsilon }}+{\tfrac {1}{2}}(\nabla \mathbf {u} )^{T}\cdot \nabla \mathbf {u} -(1-m){\boldsymbol {\varepsilon }}^{T}\cdot {\boldsymbol {\varepsilon }}}
where {\displaystyle {\boldsymbol {\varepsilon }}} is the infinitesimal strain tensor.
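The whole Seth–Hill family can be evaluated numerically from an eigendecomposition of C. The helper below (`seth_hill` is a name chosen here for illustration) is a minimal sketch assuming C is a symmetric positive definite right Cauchy–Green tensor:

```python
import numpy as np

def seth_hill(C, m):
    """Seth-Hill strain E_(m) = (C^m - I) / (2m); m = 0 gives Hencky ln U.

    Sketch only: assumes C is symmetric positive definite.
    """
    w, V = np.linalg.eigh(C)                       # spectral decomposition of C
    if m == 0:
        return (V * (0.5 * np.log(w))) @ V.T       # ln U = (1/2) ln C
    return (V * ((w**m - 1.0) / (2.0 * m))) @ V.T

# Uniaxial stretch of ratio 1.2 along the first axis: C = diag(1.44, 1, 1)
C = np.diag([1.44, 1.0, 1.0])
green   = seth_hill(C, 1)[0, 0]      # (1.44 - 1)/2 = 0.22
biot    = seth_hill(C, 0.5)[0, 0]    # 1.2 - 1      = 0.20
hencky  = seth_hill(C, 0)[0, 0]      # ln 1.2
almansi = seth_hill(C, -1)[0, 0]     # (1 - 1/1.44)/2
```

All four measures agree to first order in the stretch, as the second-order approximation above predicts.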
Many other different definitions of tensors {\displaystyle \mathbf {E} } are admissible, provided that they all satisfy the conditions that:
{\displaystyle \mathbf {E} } vanishes for all rigid-body motions
the dependence of {\displaystyle \mathbf {E} } on the displacement gradient tensor {\displaystyle \nabla \mathbf {u} } is continuous, continuously differentiable and monotonic
it is also desired that {\displaystyle \mathbf {E} } reduces to the infinitesimal strain tensor {\displaystyle {\boldsymbol {\varepsilon }}} as the norm {\displaystyle |\nabla \mathbf {u} |\to 0}
An example is the set of tensors
{\displaystyle \mathbf {E} ^{(n)}=\left({\mathbf {U} }^{n}-{\mathbf {U} }^{-n}\right)/2n}
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at {\displaystyle m=0} for any value of {\displaystyle n}.
=== Physical interpretation of the finite strain tensor ===
The diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to the normal strain, e.g.
{\displaystyle E_{11}=e_{(\mathbf {I} _{1})}+{\frac {1}{2}}e_{(\mathbf {I} _{1})}^{2}}
where {\displaystyle e_{(\mathbf {I} _{1})}} is the normal strain or engineering strain in the direction {\displaystyle \mathbf {I} _{1}}.
The off-diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to shear strain, e.g.
{\displaystyle E_{12}={\frac {1}{2}}{\sqrt {2E_{11}+1}}{\sqrt {2E_{22}+1}}\sin \phi _{12}}
where {\displaystyle \phi _{12}} is the change in the angle between two line elements that were originally perpendicular with directions {\displaystyle \mathbf {I} _{1}} and {\displaystyle \mathbf {I} _{2}}, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor.
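The relation between the normal strain and the finite strain component can be verified directly for a uniaxial stretch; the stretch ratio 1.05 below is an arbitrary illustrative value:

```python
# Uniaxial stretch of ratio lam: engineering strain e = lam - 1, and the
# Lagrangian strain component is E_11 = (lam^2 - 1)/2 = e + e^2/2.
lam = 1.05
e = lam - 1.0                    # engineering (normal) strain
E11 = 0.5 * (lam**2 - 1.0)       # Lagrangian finite strain component
print(E11, e + 0.5 * e**2)       # both approx. 0.05125
```

For this 5% stretch the quadratic correction e²/2 is already only about 2.5% of e, illustrating why the infinitesimal strain tensor is an adequate approximation for small displacements.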
== Compatibility conditions ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. Such allowable fields leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
=== Compatibility of the deformation gradient ===
The necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {F}}} field over a simply connected body are
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
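A minimal symbolic check, assuming SymPy is available: any smooth single-valued motion yields a deformation gradient whose curl vanishes identically (the planar motion below is chosen arbitrarily for illustration):

```python
import sympy as sp

# Smooth single-valued planar motion x = chi(X), chosen arbitrarily.
X1, X2 = sp.symbols("X1 X2")
x1 = X1 + sp.Rational(3, 10) * X2**2
x2 = X2 + sp.Rational(1, 10) * sp.sin(X1)
# Deformation gradient F_iJ = dx_i/dX_J
F = sp.Matrix([[sp.diff(x1, X1), sp.diff(x1, X2)],
               [sp.diff(x2, X1), sp.diff(x2, X2)]])
# Row-wise curl condition in 2D: dF_i1/dX2 - dF_i2/dX1 = 0 for each i
curl_rows = [sp.simplify(sp.diff(F[i, 0], X2) - sp.diff(F[i, 1], X1))
             for i in range(2)]
```

Because the mixed partial derivatives of a smooth mapping commute, each entry of `curl_rows` simplifies to zero; conversely, a prescribed F field with nonzero curl cannot come from any single-valued motion.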
=== Compatibility of the right Cauchy–Green deformation tensor ===
The necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {C}}} field over a simply connected body are
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]+\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }=0}
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for {\displaystyle {\boldsymbol {C}}}-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
=== Compatibility of the left Cauchy–Green deformation tensor ===
General sufficiency conditions for the left Cauchy–Green deformation tensor in three dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional {\displaystyle {\boldsymbol {B}}} fields were found by Janet Blume.
== See also ==
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
== References ==
== Further reading ==
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8.
Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 0-7506-8025-3.
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica
Structural mechanics or mechanics of structures is the computation of deformations, deflections, and internal forces or stresses (stress equivalents) within structures, either for design or for performance evaluation of existing structures. It is one subset of structural analysis. Structural mechanics analysis needs input data such as structural loads, the structure's geometric representation and support conditions, and the materials' properties. Output quantities may include support reactions, stresses and displacements. Advanced structural mechanics may include the effects of stability and non-linear behaviors.
Mechanics of structures is a field of study within applied mechanics that investigates the behavior of structures under mechanical loads, such as bending of a beam, buckling of a column, torsion of a shaft, deflection of a thin shell, and vibration of a bridge.
There are three approaches to the analysis: the energy methods, the flexibility method or direct stiffness method (which later developed into the finite element method), and the plastic analysis approach.
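As a toy illustration of the direct stiffness method (the spring stiffnesses and load below are illustrative, not from the text), consider two axial spring elements in series with the left node fixed and a unit load at the free end:

```python
import numpy as np

# Two axial spring (bar) elements in series: nodes 0-1-2,
# node 0 fixed, unit load at node 2. Stiffness values are illustrative.
k1, k2 = 100.0, 200.0                                # N/m
K = np.zeros((3, 3))
for (i, j), k in (((0, 1), k1), ((1, 2), k2)):
    ke = k * np.array([[1.0, -1.0],
                       [-1.0, 1.0]])                 # element stiffness matrix
    K[np.ix_((i, j), (i, j))] += ke                  # assemble into global K
f = np.array([0.0, 0.0, 1.0])                        # unit load at free end
free = [1, 2]                                        # dof 0 is constrained
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
```

For springs in series the result matches hand calculation: u[1] = P/k1 and u[2] = P/k1 + P/k2, which is the kind of cross-check used when validating a stiffness-method implementation.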
== Energy method ==
Energy principles in structural mechanics
== Flexibility method ==
Flexibility method
== Stiffness methods ==
Direct stiffness method
Finite element method in structural mechanics
== Plastic analysis approach ==
Plastic analysis
== Major topics ==
Beam theory
Buckling
Earthquake engineering
Finite element method in structural mechanics
Plates and shells
Torsion
Trusses
Stiffening
Structural dynamics
Structural instability
== References ==
Algorithm engineering focuses on the design, analysis, implementation, optimization, profiling and experimental evaluation of computer algorithms, bridging the gap between algorithmics theory and practical applications of algorithms in software engineering.
It is a general methodology for algorithmic research.
== Origins ==
In 1995, a report from an NSF-sponsored workshop "with the purpose of assessing the current goals and directions of the Theory of Computing (TOC) community" identified the slow speed of adoption of theoretical insights by practitioners as an important issue and suggested measures to
reduce the uncertainty by practitioners whether a certain theoretical breakthrough will translate into practical gains in their field of work, and
tackle the lack of ready-to-use algorithm libraries, which provide stable, bug-free and well-tested implementations for algorithmic problems and expose an easy-to-use interface for library consumers.
The report also noted that promising algorithmic approaches had been neglected due to difficulties in mathematical analysis.
The term "algorithm engineering" was first used with specificity in 1997, with the first Workshop on Algorithm Engineering (WAE97), organized by Giuseppe F. Italiano.
== Difference from algorithm theory ==
Algorithm engineering does not intend to replace or compete with algorithm theory, but tries to enrich, refine and reinforce its formal approaches with experimental algorithmics (also called empirical algorithmics).
This way it can provide new insights into the efficiency and performance of algorithms in cases where
the algorithm at hand is less amenable to algorithm theoretic analysis,
formal analysis pessimistically suggests bounds which are unlikely to appear on inputs of practical interest,
the algorithm relies on the intricacies of modern hardware architectures like data locality, branch prediction, instruction stalls, instruction latencies which the machine model used in Algorithm Theory is unable to capture in the required detail,
the crossover between competing algorithms with different constant costs and asymptotic behaviors needs to be determined.
== Methodology ==
Some researchers describe algorithm engineering's methodology as a cycle consisting of algorithm design, analysis, implementation and experimental evaluation, joined by further aspects like machine models or realistic inputs.
They argue that equating algorithm engineering with experimental algorithmics is too limited, because viewing design and analysis, implementation and experimentation as separate activities ignores the crucial feedback loop between those elements of algorithm engineering.
=== Realistic models and real inputs ===
While specific applications are outside the methodology of algorithm engineering, they play an important role in shaping realistic models of the problem and the underlying machine, and supply real inputs and other design parameters for experiments.
=== Design ===
Compared to algorithm theory, which usually focuses on the asymptotic behavior of algorithms, algorithm engineers need to keep further requirements in mind: Simplicity of the algorithm, implementability in programming languages on real hardware, and allowing code reuse.
Additionally, constant factors of algorithms have such a considerable impact on real-world inputs that sometimes an algorithm with worse asymptotic behavior performs better in practice due to lower constant factors.
=== Analysis ===
Some problems can be solved with heuristics and randomized algorithms in a simpler and more efficient fashion than with deterministic algorithms. Unfortunately, this makes even simple randomized algorithms difficult to analyze because there are subtle dependencies to be taken into account.
=== Implementation ===
Huge semantic gaps between theoretical insights, formulated algorithms, programming languages and hardware pose a challenge to efficient implementations of even simple algorithms, because small implementation details can have rippling effects on execution behavior.
The only reliable way to compare several implementations of an algorithm is to spend a considerable amount of time on tuning and profiling, running those algorithms on multiple architectures, and looking at the generated machine code.
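A minimal profiling sketch of the kind described above, using Python's standard cProfile module (the profiled function is a deliberately naive stand-in for an implementation under study):

```python
import cProfile
import io
import pstats

def sum_squares_loop(n):
    """Naive stand-in for an implementation whose hot spots we want to find."""
    s = 0
    for i in range(n):
        s += i * i
    return s

pr = cProfile.Profile()
pr.enable()
result = sum_squares_loop(100_000)
pr.disable()

# Report the three most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(3)
print(out.getvalue())
```

Profiles like this, gathered on each target architecture, are what make implementation comparisons reproducible rather than anecdotal.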
=== Experiments ===
See: Experimental algorithmics
=== Application engineering ===
Implementations of algorithms used for experiments differ in significant ways from code usable in applications.
While the former prioritizes fast prototyping, performance and instrumentation for measurements during experiments, the latter requires thorough testing, maintainability, simplicity, and tuning for particular classes of inputs.
=== Algorithm libraries ===
Stable, well-tested algorithm libraries like LEDA play an important role in technology transfer by speeding up the adoption of new algorithms in applications.
Such libraries reduce the required investment and risk for practitioners, because they remove the burden of understanding and implementing the results of academic research.
== Conferences ==
Two main conferences on Algorithm Engineering are organized annually, namely:
Symposium on Experimental Algorithms (SEA), established in 1997 (formerly known as WEA).
SIAM Meeting on Algorithm Engineering and Experiments (ALENEX), established in 1999.
The 1997 Workshop on Algorithm Engineering (WAE'97) was held in Venice (Italy) on September 11–13, 1997. The Third International Workshop on Algorithm Engineering (WAE'99) was held in London, UK in July 1999.
The first Workshop on Algorithm Engineering and Experimentation (ALENEX99) was held in Baltimore, Maryland on January 15–16, 1999. It was sponsored by DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science (at Rutgers University), with additional support from SIGACT, the ACM Special Interest Group on Algorithms and Computation Theory, and SIAM, the Society for Industrial and Applied Mathematics.
== References ==
Algorithm aversion is defined as a "biased assessment of an algorithm which manifests in negative behaviors and attitudes towards the algorithm compared to a human agent." This phenomenon describes the tendency of humans to reject advice or recommendations from an algorithm in situations where they would accept the same advice if it came from a human.
Algorithms, particularly those utilizing machine learning methods or artificial intelligence (AI), play a growing role in decision-making across various fields. Examples include recommender systems in e-commerce for identifying products a customer might like and AI systems in healthcare that assist in diagnoses and treatment decisions. Despite their proven ability to outperform humans in many contexts, algorithmic recommendations are often met with resistance or rejection, which can lead to inefficiencies and suboptimal outcomes.
The study of algorithm aversion is critical as algorithms become increasingly embedded in our daily lives. Factors such as perceived accountability, lack of transparency, and skepticism towards machine judgment contribute to this aversion. Conversely, there are scenarios where individuals are more likely to trust and follow algorithmic advice over human recommendations, a phenomenon referred to as algorithm appreciation. Understanding these dynamics is essential for improving human-algorithm interactions and fostering greater acceptance of AI-driven decision-making.
== Examples of algorithm aversion ==
Algorithm aversion manifests in various domains where algorithms are employed to assist or replace human decision-making. Below are examples from diverse contexts, highlighting situations where people tend to resist algorithmic advice or decisions:
=== Healthcare ===
Patients often resist AI-based medical diagnostics and treatment recommendations, despite the proven accuracy of such systems. For instance, patients tend to trust human doctors more, as they perceive AI systems as lacking empathy and the ability to handle nuanced emotional interactions. Negative emotions are more likely to arise as AI plays a larger role in healthcare decision-making.
=== Recruitment and employment ===
Algorithmic agents used in recruitment are often perceived as less capable of fulfilling relational roles, such as providing emotional support or career development. While algorithms are trusted for transactional tasks like salary negotiations, human recruiters are favored for relational tasks due to their perceived ability to connect on an emotional level.
=== Consumer behavior ===
Consumers generally react less favorably to decisions made by algorithms compared to those made by humans. For example, when a decision results in a positive outcome, consumers find it harder to internalize the result if it comes from an algorithm. Conversely, negative outcomes tend to elicit similar responses regardless of whether the decision was made by an algorithm or a human.
=== Marketing and content creation ===
In the marketing domain, AI influencers can be as effective as human influencers in promoting products. However, trust levels remain lower for AI-driven recommendations, as consumers often perceive human influencers as more authentic. Similarly, participants tend to favor content explicitly identified as human-generated over AI-generated, even when the quality of AI content matches or surpasses human-created content.
=== Cultural differences ===
Cultural norms play a significant role in algorithm aversion. In individualistic cultures, such as in the United States, there is a higher tendency to reject algorithmic recommendations due to an emphasis on autonomy and personalized decision-making. In contrast, collectivist cultures, such as in India, exhibit lower aversion, particularly when familiarity with algorithms is higher or when decisions align with societal norms.
=== Moral and emotional decisions ===
Algorithms are less trusted for tasks involving moral or emotional judgment, such as ethical dilemmas or empathetic decision-making. For example, individuals may reject algorithmic decisions in scenarios where they perceive moral stakes to be high, such as autonomous vehicle decisions or medical life-or-death situations.
== Mechanisms underlying algorithm aversion ==
Algorithm aversion arises from a combination of psychological, task-related, cultural, and design-related factors. These mechanisms interact to shape individuals' negative perceptions and behaviors toward algorithms, even in cases where algorithmic performance is objectively superior to human decision-making.
=== Psychological mechanisms ===
==== Perceived responsibility ====
Individuals often feel a heightened sense of accountability when using algorithmic advice compared to human advice. This stems from the belief that, if a decision goes wrong, they will be solely responsible because an algorithm lacks the capacity to share blame. By contrast, decisions made with human input are perceived as more collaborative, allowing for shared accountability. For example, users are less likely to rely on algorithmic recommendations in high-stakes domains like healthcare or financial advising, where the repercussions of errors are significant.
==== Locus of control ====
People with an internal locus of control, who believe they have direct influence over outcomes, are more reluctant to trust algorithms. They may perceive algorithmic decision-making as undermining their autonomy, preferring human input that feels more modifiable or personal. Conversely, individuals with an external locus of control, who attribute outcomes to external forces, may accept algorithmic decisions more readily, viewing algorithms as neutral and effective tools. This tendency is particularly evident in decision-making contexts where users seek to maintain agency.
==== Neuroticism ====
Neurotic individuals are more prone to anxiety and fear of uncertainty, making them less likely to trust algorithms. This aversion may be fueled by concerns about the perceived "coldness" of algorithms or their inability to account for nuanced emotional factors. For example, in emotionally sensitive tasks like healthcare or recruitment, neurotic individuals may reject algorithmic inputs in favor of human recommendations, even when the algorithm performs equally well or better.
=== Task-related mechanisms ===
==== Task complexity and risk ====
The nature of the task significantly influences algorithm aversion. For routine and low-risk tasks, such as recommending movies or predicting product preferences, users are generally comfortable relying on algorithms. However, for high-stakes or subjective tasks, such as making medical diagnoses, financial decisions, or moral judgments, algorithm aversion increases. Users perceive these tasks as requiring empathy, ethical reasoning, or nuanced understanding—qualities that they believe algorithms lack. This disparity highlights why algorithms are better received in technical fields (e.g., logistics) but face resistance in human-centric domains.
==== Outcome valence ====
People's reactions to algorithmic decisions are influenced by the nature of the decision outcome. When algorithms deliver positive results, users are more likely to trust and accept them. However, when outcomes are negative, users are more inclined to reject algorithms and attribute blame to their use. This phenomenon is linked to the perception that algorithms lack accountability, unlike human decision-makers, who can offer justifications or accept responsibility for failures.
=== Cultural mechanisms ===
==== Individualism vs. collectivism ====
Cultural norms significantly shape attitudes toward algorithmic decision-making. In individualistic cultures, such as the United States, people value autonomy and personalization, making them more skeptical of algorithmic systems that they perceive as impersonal or rigid. Conversely, in collectivist cultures like India, individuals are more likely to accept algorithmic recommendations, particularly when these systems align with group norms or social expectations. Familiarity with algorithms in collectivist societies also reduces aversion, as users view algorithms as tools to reinforce societal goals rather than threats to individual autonomy.
==== Cultural influences ====
Cultural norms and values significantly impact algorithm acceptance. Individualistic cultures, such as those in the United States, tend to display higher algorithm aversion due to an emphasis on autonomy, personal agency, and distrust of generalized systems. On the other hand, collectivist cultures, such as in India, exhibit greater acceptance of algorithms, particularly when familiarity is high and the decision aligns with societal norms. These differences highlight the importance of tailoring algorithmic systems to align with cultural expectations.
==== Organizational support ====
The role of organizations in supporting and explaining the use of algorithms can greatly influence aversion levels. When organizations actively promote algorithmic tools and provide training on their usage, employees are less likely to resist them. Transparency about how algorithms support decision-making processes fosters trust and reduces anxiety, particularly in high-stakes or workplace settings.
=== Agency and role of the algorithm ===
==== Advisory vs. autonomous algorithms ====
Algorithm aversion is higher for autonomous systems that make decisions independently (performative algorithms) compared to advisory systems that provide recommendations but allow humans to retain final decision-making power. Users tend to view advisory algorithms as supportive tools that enhance their control, whereas autonomous algorithms may be perceived as threatening to their authority or ability to intervene.
==== Perceived capabilities of the algorithm ====
Algorithms are often perceived as lacking human-specific skills, such as empathy or moral reasoning. This perception leads to greater aversion in tasks involving subjective judgment, ethical dilemmas, or emotional interactions. Users are generally more accepting of algorithms in objective, technical tasks where human qualities are less critical.
=== Social and human-agent characteristics ===
==== Expertise ====
In high-stakes or expertise-intensive tasks, users tend to favor human experts over algorithms. This preference stems from the belief that human experts can account for context, nuance, and situational complexity in ways that algorithms cannot. Algorithm aversion is particularly pronounced when humans with expertise are available as an alternative to the algorithm.
==== Social distance ====
Users are more likely to reject algorithms when the alternative is their own input or the input of someone they know and relate to personally. In contrast, when the alternative is an anonymous or distant human agent, algorithms may be viewed more favorably. This preference for closer, more relatable human agents highlights the importance of perceived social connection in algorithmic decision acceptance.
=== Design-related mechanisms ===
==== Transparency ====
A lack of transparency in algorithmic systems, often referred to as the "black box" problem, creates distrust among users. Without clear explanations of how decisions are made, users may feel uneasy relying on algorithmic outputs, particularly in high-stakes scenarios. For instance, transparency in medical AI systems—such as providing explanations for diagnostic recommendations—can significantly improve trust and reduce aversion. Transparent algorithms empower users by demystifying decision-making processes, making them feel more in control.
==== Error tolerance ====
Users are generally less forgiving of algorithmic errors than human errors, even when the frequency of errors is lower for algorithms. This heightened scrutiny stems from the belief that algorithms should be "perfect" or error-free, unlike humans, who are expected to make mistakes. However, algorithms that demonstrate the ability to learn from their mistakes and adapt over time can foster greater trust. For example, users are more likely to accept algorithms in financial forecasting if they observe improvements based on feedback.
==== Anthropomorphic design ====
Designing algorithms with human-like traits, such as avatars, conversational interfaces, or relatable language, can reduce aversion by making interactions feel more natural and personal. For instance, AI-powered chatbots with empathetic communication styles are better received in customer service than purely mechanical interfaces. This design strategy helps mitigate the perception that algorithms are "cold" or impersonal, encouraging users to engage with them more comfortably.
=== Delivery factors ===
==== Mode of delivery ====
The format in which algorithms present their recommendations significantly affects user trust. Systems that use conversational or audio interfaces are generally more trusted than those relying solely on textual outputs, as they create a sense of human-like interaction.
==== Presentation style ====
Algorithms that provide clear, concise, and well-organized explanations of their recommendations are more likely to gain user acceptance. Systems that offer detailed yet accessible insights into their decision-making process are perceived as more reliable and trustworthy.
=== General distrust and favoritism toward humans ===
==== Default skepticism ====
Many individuals harbor an ingrained skepticism toward algorithms, particularly when they lack familiarity with the system or its capabilities. Early negative experiences with algorithms can entrench this distrust, making it difficult to rebuild confidence. Even when algorithms perform better, this bias often persists, leading to outright rejection.
==== Favoritism toward humans ====
People often display a preference for human decisions over algorithmic ones, particularly for positive outcomes. Yalsin et al. highlighted that individuals are more likely to internalize favorable decisions made by humans, attributing success to human expertise or effort. In contrast, decisions made by algorithms are viewed as impersonal, reducing the sense of achievement or satisfaction. This favoritism contributes to a persistent bias against algorithmic systems, even when their performance matches or exceeds that of humans.
=== Reputational concerns ===
People may also be averse to using algorithms if doing so conveys negative information about the human's ability. This can occur if humans have private information about their own ability.
== Proposed methods to overcome algorithm aversion ==
Algorithms are often capable of outperforming humans or performing tasks much more cost-effectively. Despite this, algorithm aversion persists due to a range of psychological, cultural, and design-related factors. To mitigate resistance and build trust, researchers and practitioners have proposed several strategies.
=== Human-in-the-loop ===
One effective way to reduce algorithmic aversion is by incorporating a human-in-the-loop approach, where the human decision-maker retains control over the final decision. This approach addresses concerns about agency and accountability by positioning algorithms as advisory tools rather than autonomous decision-makers.
==== Advisory role ====
Algorithms can provide recommendations while leaving the ultimate decision-making authority with humans. This allows users to view algorithms as supportive rather than threatening. For example, in healthcare, AI systems can suggest diagnoses or treatments, but the human doctor makes the final call.
==== Collaboration and trust ====
Integrating humans into algorithmic processes fosters a sense of collaboration and encourages users to engage with the system more openly. This method is particularly effective in domains where human intuition and context are critical, such as recruitment, education, and financial planning.
=== System transparency ===
Transparency is crucial for overcoming algorithm aversion, as it helps to build trust and reduce the "black box" effect that often causes discomfort among users. Providing explanations about how algorithms work enables users to understand and evaluate their recommendations. Transparency can take several forms, such as global explanations that describe the overall functioning of an algorithm, case-specific explanations that clarify why a particular recommendation was made, or confidence levels that highlight the algorithm's certainty in its decisions. For example, in financial advising, transparency about how investment recommendations are generated can increase user confidence in the system. Explainable AI (XAI) methods, such as visualizations of decision pathways or feature importance metrics, make these explanations accessible and comprehensible, allowing users to make informed decisions about whether to trust the algorithm.
=== User training ===
Familiarizing users with algorithms through training can significantly reduce aversion, especially for those who are unfamiliar or skeptical. Training programs that simulate real-world interactions with algorithms allow users to see their capabilities and limitations firsthand. For instance, healthcare professionals using diagnostic AI systems can benefit from hands-on training that demonstrates how the system arrives at recommendations and how to interpret its outputs. Such training helps bridge knowledge gaps and demystifies algorithms, making users more comfortable with their use. Furthermore, repeated interactions and feedback loops help users build trust in the system over time. Financial incentives, such as rewards for accurate decisions made with the help of algorithms, have also been shown to encourage users to engage more readily with these systems.
=== Incorporating user control ===
Allowing users to interact with and adjust algorithmic outputs can greatly enhance their sense of control, which is a key factor in overcoming aversion. For example, interactive interfaces that let users modify parameters, simulate outcomes, or personalize recommendations make algorithms feel less rigid and more adaptable. Providing confidence thresholds that users can adjust—such as setting stricter criteria for medical diagnoses—further empowers them to feel involved in the decision-making process. Feedback mechanisms are another important feature, as they allow users to provide input or correct errors, fostering a sense of collaboration between the user and the algorithm. These design features not only reduce resistance but also demonstrate that algorithms are flexible tools rather than fixed, inflexible systems.
=== Personalization and customization ===
Personalization is another critical factor in reducing algorithm aversion. Algorithms that adapt to individual preferences or contexts are more likely to gain user acceptance. For instance, recommendation systems in e-commerce that learn a user's shopping habits over time are often trusted more than generic systems. Customization features, such as the ability to prioritize certain factors (e.g., cost or sustainability in product recommendations), further enhance user satisfaction by aligning outputs with their unique needs. In healthcare, personalized AI systems that incorporate a patient's medical history and specific conditions are better received than generalized tools. By tailoring outputs to the user's preferences and circumstances, algorithms can foster greater engagement and trust.
== Algorithm appreciation ==
Studies do not consistently show people demonstrating bias against algorithms, and sometimes show the opposite: people preferring advice that comes from an algorithm instead of a human. This effect is called algorithm appreciation.
For example, customers are more likely to indicate initial interest to human sales agents compared to automated sales agents but less likely to provide contact information to them. This is due to "lower levels of performance expectancy and effort expectancy associated with human sales agents versus automated sales agents".
== References ==
High-level synthesis (HLS), sometimes referred to as C synthesis, electronic system-level (ESL) synthesis, algorithmic synthesis, or behavioral synthesis, is an automated design process that takes an abstract behavioral specification of a digital system and finds a register-transfer level structure that realizes the given behavior.
Synthesis begins with a high-level specification of the problem, where behavior is generally decoupled from low-level circuit mechanics such as clock-level timing. Early HLS explored a variety of input specification languages, although recent research and commercial applications generally accept synthesizable subsets of ANSI C/C++/SystemC/MATLAB. The code is analyzed, architecturally constrained, and scheduled to transcompile from a transaction-level model (TLM) into a register-transfer level (RTL) design in a hardware description language (HDL), which is in turn commonly synthesized to the gate level by the use of a logic synthesis tool.
The goal of HLS is to let hardware designers efficiently build and verify hardware, by giving them better control over optimization of their design architecture, and through the nature of allowing the designer to describe the design at a higher level of abstraction while the tool does the RTL implementation. Verification of the RTL is an important part of the process.
Hardware can be designed at varying levels of abstraction. The commonly used levels of abstraction are gate level, register-transfer level (RTL), and algorithmic level.
While logic synthesis uses an RTL description of the design, high-level synthesis works at a higher level of abstraction, starting with an algorithmic description in a high-level language such as SystemC and ANSI C/C++. The designer typically develops the module functionality and the interconnect protocol. The high-level synthesis tools handle the micro-architecture and transform untimed or partially timed functional code into fully timed RTL implementations, automatically creating cycle-by-cycle detail for hardware implementation. The (RTL) implementations are then used directly in a conventional logic synthesis flow to create a gate-level implementation.
== History ==
Early academic work identified scheduling, allocation, and binding as the basic steps of high-level synthesis. Scheduling partitions the algorithm into control steps that are used to define the states in the finite-state machine. Each control step contains one small section of the algorithm that can be performed in a single clock cycle in the hardware. Allocation and binding map the instructions and variables to the hardware components, multiplexers, registers, and wires of the data path.
First-generation behavioral synthesis was introduced by Synopsys in 1994 as Behavioral Compiler and used Verilog or VHDL as input languages. The abstraction level used was partially timed (clocked) processes. Tools based on behavioral Verilog or VHDL were not widely adopted, in part because neither the languages nor the partially timed abstraction were well suited to modeling behavior at a high level. Ten years later, in early 2004, Synopsys end-of-lifed Behavioral Compiler.
In 1998, Forte Design Systems introduced its Cynthesizer tool which used SystemC as an entry language instead of Verilog or VHDL. Cynthesizer was adopted by many Japanese companies in 2000 as Japan had a very mature SystemC user community. The first high-level synthesis tapeout was achieved in 2001 by Sony using Cynthesizer. Adoption in the United States started in earnest in 2008.
In 2006, an efficient and scalable "SDC modulo scheduling" technique was developed on control and data flow graphs and was later extended to pipeline scheduling. This technique uses an integer linear programming formulation, but shows that the underlying constraint matrix is totally unimodular (after approximating the resource constraints). Thus, the problem can be solved optimally in polynomial time using a linear programming solver. This work was inducted into the FPGA and Reconfigurable Computing Hall of Fame in 2022.
The SDC scheduling algorithm was implemented in the xPilot HLS system developed at UCLA, and later licensed to the AutoESL Design Technologies, a spin-off from UCLA. AutoESL was acquired by Xilinx (now part of AMD) in 2011, and the HLS tool developed by AutoESL became the base of Xilinx HLS solutions, Vivado HLS and Vitis HLS, widely used for FPGA designs.
== Source input ==
The most common source inputs for high-level synthesis are based on standard languages such as ANSI C/C++, SystemC and MATLAB.
High-level synthesis typically also includes a bit-accurate executable specification as input, since deriving an efficient hardware implementation requires additional information on what is an acceptable mean-square error, bit-error rate, etc. For example, if the designer starts with an FIR filter written using the "double" floating-point type, they need to perform numerical refinement to arrive at a fixed-point implementation before an efficient hardware implementation can be derived. The refinement requires additional information on the level of quantization noise that can be tolerated, the valid input ranges, etc. This bit-accurate specification makes the high-level synthesis source specification functionally complete.
Normally the tools infer from the high-level code a finite-state machine and a datapath that implements the arithmetic operations.
== Process stages ==
The high-level synthesis process consists of a number of activities. Various high-level synthesis tools perform these activities in different orders using different algorithms. Some high-level synthesis tools combine some of these activities or perform them iteratively to converge on the desired solution.
Lexical processing
Algorithm optimization
Control/Dataflow analysis
Library processing
Resource allocation
Scheduling
Functional unit binding
Register binding
Output processing
Input rebundling
== Functionality ==
In general, an algorithm can be performed over many clock cycles with few hardware resources, or over fewer clock cycles using a larger number of ALUs, registers and memories. Correspondingly, from one algorithmic description, a variety of hardware microarchitectures can be generated by an HLS compiler according to the directives given to the tool. This is the same trade off of execution speed for hardware complexity as seen when a given program is run on conventional processors of differing performance, yet all running at roughly the same clock frequency.
=== Architectural constraints ===
Synthesis constraints for the architecture can automatically be applied based on the design analysis. These constraints can be broken into:
Hierarchy
Interface
Memory
Loop
Low-level timing constraints
Iteration
=== Interface synthesis ===
Interface synthesis refers to the ability to accept a pure C/C++ description as input, then use automated interface synthesis technology to control the timing and communications protocol on the design interface. This enables interface analysis and exploration of a full range of hardware interface options, such as streaming, single- or dual-port RAM, plus various handshaking mechanisms. With interface synthesis the designer does not embed interface protocols in the source description. Examples might be: direct connection, one-line or two-line handshake, FIFO.
== Vendors ==
Data reported in a recent survey:
Dynamatic from EPFL/ETH Zurich
MATLAB HDL Coder from MathWorks
HLS-QSP from CircuitSutra Technologies
C-to-Silicon from Cadence Design Systems
Concurrent Acceleration from Concurrent EDA
Symphony C Compiler from Synopsys
QuickPlay from PLDA
PowerOpt from ChipVision
Cynthesizer from Forte Design Systems (now Stratus HLS from Cadence Design Systems)
Catapult C from Calypto Design Systems, part of Mentor Graphics as of September 16, 2015. In November 2016, Siemens announced plans to acquire Mentor Graphics, which became styled as "Mentor, a Siemens Business". In January 2021, the legal merger of Mentor Graphics into the Siemens Industry Software Inc legal entity was completed, and the Mentor Graphics name was changed to Siemens EDA, a division of Siemens Digital Industries Software.
PipelineC
CyberWorkBench from NEC
Mega Hardware
C2R from CebaTech
CoDeveloper from Impulse Accelerated Technologies
HercuLeS by Nikolaos Kavvadias
Program In/Code Out (PICO) from Synfora, acquired by Synopsys in June 2010
xPilot from University of California, Los Angeles
Vsyn from vsyn.ru
ngDesign from SynFlow
== See also ==
C to HDL
Electronic design automation (EDA)
Electronic system-level (ESL)
Logic synthesis
High-level verification (HLV)
SystemVerilog
Hardware acceleration
== References ==
== Further reading ==
Jason Cong, Jason Lau, Gai Liu, Stephen Neuendorffer, Peichen Pan, Kees Vissers, Zhiru Zhang. FPGA HLS Today: Successes, Challenges, and Opportunities. ACM Transactions on Reconfigurable Technology and Systems, Volume 15, Issue 4, Article No. 5, pp 1–42, December 2022, https://doi.org/10.1145/3530775.
Michael Fingeroff (2010). High-Level Synthesis Blue Book. Xlibris Corporation. ISBN 978-1-4500-9724-6.
Coussy, P.; Gajski, D. D.; Meredith, M.; Takach, A. (2009). "An Introduction to High-Level Synthesis". IEEE Design & Test of Computers. 26 (4): 8–17. doi:10.1109/MDT.2009.69. S2CID 52870966.
Ewout S. J. Martens; Georges Gielen (2008). High-level modeling and synthesis of analog integrated systems. Springer. ISBN 978-1-4020-6801-0.
Saraju Mohanty; N. Ranganathan; E. Kougianos & P. Patra (2008). Low-Power High-Level Synthesis for Nanoscale CMOS Circuits. Springer. ISBN 978-0387764733.
Alice C. Parker; Yosef Tirat-Gefen; Suhrid A. Wadekar (2007). "System-Level Design". In Wai-Kai Chen (ed.). The VLSI handbook (2nd ed.). CRC Press. ISBN 978-0-8493-4199-1. chapter 76.
Shahrzad Mirkhani; Zainalabedin Navabi (2007). "System Level Design Languages". In Wai-Kai Chen (ed.). The VLSI handbook (2nd ed.). CRC Press. ISBN 978-0-8493-4199-1. chapter 86. covers the use of C/C++, SystemC, TML and even UML
Liming Xiu (2007). VLSI circuit design methodology demystified: a conceptual taxonomy. Wiley-IEEE. ISBN 978-0-470-12742-1.
John P. Elliott (1999). Understanding behavioral synthesis: a practical guide to high-level design. Springer. ISBN 978-0-7923-8542-4.
Nane, Razvan; Sima, Vlad-Mihai; Pilato, Christian; Choi, Jongsok; Fort, Blair; Canis, Andrew; Chen, Yu Ting; Hsiao, Hsuan; Brown, Stephen; Ferrandi, Fabrizio; Anderson, Jason; Bertels, Koen (2016). "A Survey and Evaluation of FPGA High-Level Synthesis Tools". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 35 (10): 1591–1604. doi:10.1109/TCAD.2015.2513673. hdl:11311/998432. S2CID 8749577.
Gupta, Rajesh; Brewer, Forrest (2008). "High-Level Synthesis: A Retrospective". Springer. pp. 13–28. doi:10.1007/978-1-4020-8588-8_2. ISBN 978-1-4020-8587-1.
== External links ==
Vivado HLS course on Youtube
Deepchip Discussion Forum
In computer programming, an assignment statement sets and/or re-sets the value stored in the storage location(s) denoted by a variable name; in other words, it copies a value into the variable. In most imperative programming languages, the assignment statement (or expression) is a fundamental construct.
Today, the most commonly used notation for this operation is x = expr (originally Superplan 1949–51, popularized by Fortran 1957 and C). The second most commonly used notation is x := expr (originally ALGOL 1958, popularized by Pascal). Many other notations are also in use. In some languages, the symbol used is regarded as an operator (meaning that the assignment statement as a whole returns a value). Other languages define assignment as a statement (meaning that it cannot be used in an expression).
Assignments typically allow a variable to hold different values at different times during its life-span and scope. However, some languages (primarily strictly functional languages) do not allow that kind of "destructive" reassignment, as it might imply changes of non-local state. The purpose is to enforce referential transparency, i.e. functions that do not depend on the state of some variable(s), but produce the same results for a given set of parametric inputs at any point in time. Modern programs in other languages also often use similar strategies, although less strict, and only in certain parts, in order to reduce complexity, normally in conjunction with complementing methodologies such as data structuring, structured programming and object orientation.
== Semantics ==
An assignment operation is a process in imperative programming in which different values are associated with a particular variable name as time passes. The program, in such model, operates by changing its state using successive assignment statements. Primitives of imperative programming languages rely on assignment to do iteration. At the lowest level, assignment is implemented using machine operations such as MOVE or STORE.
Variables are containers for values. It is possible to put a value into a variable and later replace it with a new one. An assignment operation modifies the current state of the executing program. Consequently, assignment is dependent on the concept of variables. In an assignment:
The expression is evaluated in the current state of the program.
The variable is assigned the computed value, replacing the prior value of that variable.
Example: Assuming that a is a numeric variable, the assignment a := 2*a means that the content of the variable a is doubled after the execution of the statement.
An example segment of C code:
In this sample, the variable x is first declared as an int, and is then assigned the value of 10. Notice that the declaration and assignment occur in the same statement. In the second line, y is declared without an assignment. In the third line, x is reassigned the value of 23. Finally, y is assigned the value of 32.4.
For an assignment operation, it is necessary that the value of the expression is well-defined (it is a valid rvalue) and that the variable represents a modifiable entity (it is a valid modifiable (non-const) lvalue). In some languages, typically dynamic ones, it is not necessary to declare a variable prior to assigning it a value. In such languages, a variable is automatically declared the first time it is assigned to, with the scope it is declared in varying by language.
== Single assignment ==
Any assignment that changes an existing value (e.g. x := x + 1) is disallowed in purely functional languages. In functional programming, assignment is discouraged in favor of single assignment, more commonly known as initialization. Single assignment is an example of name binding and differs from assignment as described in this article in that it can only be done once, usually when the variable is created; no subsequent reassignment is allowed.
An evaluation of an expression does not have a side effect if it does not change an observable state of the machine, other than producing the result, and always produces the same value for the same input. Imperative assignment can introduce side effects, destroying the old value and making it unavailable while substituting a new one; it is referred to as destructive assignment for that reason in LISP and functional programming, similar to destructive updating.
Single assignment is the only form of assignment available in purely functional languages, such as Haskell, which do not have variables in the sense of imperative programming languages but rather named constant values possibly of compound nature, with their elements progressively defined on-demand, for the lazy languages. Purely functional languages can provide an opportunity for computation to be performed in parallel, avoiding the von Neumann bottleneck of sequential one step at a time execution, since values are independent of each other.
Impure functional languages provide both single assignment as well as true assignment (though true assignment is typically used with less frequency than in imperative programming languages). For example, in Scheme, both single assignment (with let) and true assignment (with set!) can be used on all variables, and specialized primitives are provided for destructive update inside lists, vectors, strings, etc. In OCaml, only single assignment is allowed for variables, via the let name = value syntax; however destructive update can be used on elements of arrays and strings with separate <- operator, as well as on fields of records and objects that have been explicitly declared mutable (meaning capable of being changed after their initial declaration) by the programmer.
Functional programming languages that use single assignment include Clojure (for data structures, not vars), Erlang (it accepts multiple assignment if the values are equal, in contrast to Haskell), F#, Haskell, JavaScript (for constants), Lava, OCaml, Oz (for dataflow variables, not cells), Racket (for some data structures like lists, not symbols), SASL, Scala (for vals), SISAL, Standard ML. Non-backtracking Prolog code can be considered explicit single-assignment, explicit in a sense that its (named) variables can be in explicitly unassigned state, or be set exactly once. In Haskell, by contrast, there can be no unassigned variables, and every variable can be thought of as being implicitly set, when it is created, to its value (or rather to a computational object that will produce its value on demand).
== Value of an assignment ==
In some programming languages, an assignment statement returns a value, while in others it does not.
In most expression-oriented programming languages (for example, C), the assignment statement returns the assigned value, allowing such idioms as x = y = a, in which the assignment statement y = a returns the value of a, which is then assigned to x. In a statement such as while ((ch = getchar()) != EOF) {…}, the return value of a function is used to control a loop while assigning that same value to a variable.
In other programming languages, Scheme for example, the return value of an assignment is undefined and such idioms are invalid.
In Haskell, there is no variable assignment; but operations similar to assignment (like assigning to a field of an array or a field of a mutable data structure) usually evaluate to the unit type, which is represented as (). This type has only one possible value, therefore containing no information. It is typically the type of an expression that is evaluated purely for its side effects.
== Variant forms of assignment ==
Certain use patterns are very common, and thus often have special syntax to support them. These are primarily syntactic sugar to reduce redundancy in the source code, but also assists readers of the code in understanding the programmer's intent, and provides the compiler with a clue to possible optimization.
=== Augmented assignment ===
The case where the assigned value depends on a previous one is so common that many imperative languages, most notably C and the majority of its descendants, provide special operators called augmented assignment, like *=, so a = 2*a can instead be written as a *= 2. Beyond syntactic sugar, this assists the task of the compiler by making clear that in-place modification of the variable a is possible.
=== Chained assignment ===
A statement like w = x = y = z is called a chained assignment in which the value of z is assigned to multiple variables w, x, and y. Chained assignments are often used to initialize multiple variables, as in
a = b = c = d = f = 0
Not all programming languages support chained assignment. Chained assignments are equivalent to a sequence of assignments, but the evaluation strategy differs between languages. For simple chained assignments, like initializing multiple variables, the evaluation strategy does not matter, but if the targets (l-values) in the assignment are connected in some way, the evaluation strategy affects the result.
In some programming languages (C for example), chained assignments are supported because assignments are expressions, and have values. In this case chain assignment can be implemented by having a right-associative assignment, and assignments happen right-to-left. For example, i = arr[i] = f() is equivalent to arr[i] = f(); i = arr[i]. In C++ they are also available for values of class types by declaring the appropriate return type for the assignment operator.
In Python, assignment statements are not expressions and thus do not have a value. Instead, chained assignments are a series of statements with multiple targets for a single expression. The assignments are executed left-to-right so that i = arr[i] = f() evaluates the expression f(), then assigns the result to the leftmost target, i, and then assigns the same result to the next target, arr[i], using the new value of i. This is essentially equivalent to tmp = f(); i = tmp; arr[i] = tmp though no actual variable is produced for the temporary value.
=== Parallel assignment ===
Some programming languages, such as APL, Common Lisp, Go, JavaScript (since 1.7), Julia, PHP, Maple, Lua, occam 2, Perl, Python, REBOL, Ruby, and PowerShell allow several variables to be assigned in parallel, with syntax like:
a, b := 0, 1
which simultaneously assigns 0 to a and 1 to b. This is most often known as parallel assignment; it was introduced in CPL in 1963, under the name simultaneous assignment, and is sometimes called multiple assignment, though this is confusing when used with "single assignment", as these are not opposites. If the right-hand side of the assignment is a single variable (e.g. an array or structure), the feature is called unpacking or destructuring assignment:
var list := {0, 1}
a, b := list
The list will be unpacked so that 0 is assigned to a and 1 to b. Furthermore,
a, b := b, a
swaps the values of a and b. In languages without parallel assignment, this would have to be written to use a temporary variable
var t := a
a := b
b := t
since a := b; b := a leaves both a and b with the original value of b.
Some languages, such as Go, F# and Python, combine parallel assignment, tuples, and automatic tuple unpacking to allow multiple return values from a single function, as in this Python example,
while other languages, such as C# and Rust, shown here, require explicit tuple construction and deconstruction with parentheses:
This provides an alternative to the use of output parameters for returning multiple values from a function. This dates to CLU (1974), and CLU helped popularize parallel assignment generally.
C# additionally allows generalized deconstruction assignment with implementation defined by the expression on the right-hand side, as the compiler searches for an appropriate instance or extension Deconstruct method on the expression, which must have output parameters for the variables being assigned to. For example, one such method that would give the class it appears in the same behavior as the return value of f() above would be
In C and C++, the comma operator is similar to parallel assignment in allowing multiple assignments to occur within a single statement, writing a = 1, b = 2 instead of a, b = 1, 2.
This is primarily used in for loops, and is replaced by parallel assignment in other languages such as Go.
However, the above C++ code does not ensure perfect simultaneity, since in a = b, b = a+1 the second assignment is evaluated after the first, so b is computed from the already-updated value of a. In languages such as Python, a, b = b, a+1 will assign the two variables concurrently, using the initial value of a to compute the new b.
== Assignment versus equality ==
The use of the equals sign = as an assignment operator has been frequently criticized, due to the conflict with equals as comparison for equality. This results both in confusion by novices in writing code, and confusion even by experienced programmers in reading code. The use of equals for assignment dates back to Heinz Rutishauser's language Superplan, designed from 1949 to 1951, and was particularly popularized by Fortran:
A notorious example for a bad idea was the choice of the equal sign to denote assignment. It goes back to Fortran in 1957 and has blindly been copied by armies of language designers. Why is it a bad idea? Because it overthrows a century old tradition to let “=” denote a comparison for equality, a predicate which is either true or false. But Fortran made it to mean assignment, the enforcing of equality. In this case, the operands are on unequal footing: The left operand (a variable) is to be made equal to the right operand (an expression). x = y does not mean the same thing as y = x.
Beginning programmers sometimes confuse assignment with the relational operator for equality, as "=" means equality in mathematics, and is used for assignment in many languages. But assignment alters the value of a variable, while equality testing tests whether two expressions have the same value.
In some languages, such as BASIC, a single equals sign ("=") is used for both the assignment operator and the equality relational operator, with context determining which is meant. Other languages use different symbols for the two operators. For example:
In ALGOL and Pascal, the assignment operator is a colon and an equals sign (":=") while the equality operator is a single equals ("=").
In C, the assignment operator is a single equals sign ("=") while the equality operator is a pair of equals signs ("==").
In R, the assignment operator is primarily <-, as in x <- value, but a single equals sign can be used in certain contexts.
The similarity in the two symbols can lead to errors if the programmer forgets which form ("=", "==", ":=") is appropriate, or mistypes "=" when "==" was intended. This is a common programming problem with languages such as C (including one famous attempt to backdoor the Linux kernel), where the assignment operator also returns the value assigned (in the same way that a function returns a value), and can be validly nested inside expressions. If the intention was to compare two values in an if statement, for instance, an assignment is quite likely to return a value interpretable as Boolean true, in which case the then clause will be executed, leading the program to behave unexpectedly. Some language processors (such as gcc) can detect such situations, and warn the programmer of the potential error.
== Notation ==
The two most common representations for the copying assignment are equals sign (=) and colon-equals (:=). Both forms may semantically denote either an assignment statement or an assignment operator (which also has a value), depending on language and/or usage.
Other possibilities include a left arrow or a keyword, though there are other, rarer, variants:
Mathematical pseudo code assignments are generally depicted with a left-arrow.
Some platforms put the expression on the left and the variable on the right:
Some expression-oriented languages, such as Lisp and Tcl, uniformly use prefix (or postfix) syntax for all statements, including assignment.
== See also ==
Assignment operator (C++)
Unification (computer science)
Immutable object
Assignment problem
== Notes ==
== References ==
Government by algorithm (also known as algorithmic regulation, regulation by algorithms, algorithmic governance, algocratic governance, algorithmic legal order or algocracy) is an alternative form of government or social ordering where the usage of computer algorithms is applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration. The term "government by algorithm" has appeared in academic literature as an alternative for "algorithmic governance" in 2013. A related term, algorithmic regulation, is defined as setting the standard, monitoring and modifying behaviour by means of computational algorithms – automation of judiciary is in its scope.
Government by algorithm raises new challenges that are not captured in the e-government literature and the practice of public administration. Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information. Nello Cristianini and Teresa Scantamburlo argued that the combination of a human society and certain regulation algorithms (such as reputation-based scoring) forms a social machine.
== History ==
In 1962, the director of the Institute for Information Transmission Problems of the Russian Academy of Sciences in Moscow (later Kharkevich Institute), Alexander Kharkevich, published an article in the journal "Communist" about a computer network for processing information and control of the economy. In fact, he proposed to make a network like the modern Internet for the needs of algorithmic governance (Project OGAS). This created a serious concern among CIA analysts. In particular, Arthur M. Schlesinger Jr. warned that "by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers".
Between 1971 and 1973, the Chilean government carried out Project Cybersyn during the presidency of Salvador Allende. This project was aimed at constructing a distributed decision support system to improve the management of the national economy. Elements of the project were used in 1972 to successfully overcome the traffic collapse caused by a CIA-sponsored strike of forty thousand truck drivers.
Also in the 1960s and 1970s, Herbert A. Simon championed expert systems as tools for rationalization and evaluation of administrative behavior. The automation of rule-based processes was an ambition of tax agencies over many decades resulting in varying success. Early work from this period includes Thorne McCarty's influential TAXMAN project in the US and Ronald Stamper's LEGOL project in the UK. In 1993, the computer scientist Paul Cockshott from the University of Glasgow and the economist Allin Cottrell from the Wake Forest University published the book Towards a New Socialism, where they claim to demonstrate the possibility of a democratically planned economy built on modern computer technology. The Honourable Justice Michael Kirby published a paper in 1998, where he expressed optimism that the then-available computer technologies such as legal expert system could evolve to computer systems, which will strongly affect the practice of courts. In 2006, attorney Lawrence Lessig, known for the slogan "Code is law", wrote:
[T]he invisible hand of cyberspace is building an architecture that is quite the opposite of its architecture at its birth. This invisible hand, pushed by government and by commerce, is constructing an architecture that will perfect control and make highly efficient regulation possible
Since the 2000s, algorithms have been designed and used to automatically analyze surveillance videos.
In his 2006 book Virtual Migration, A. Aneesh developed the concept of algocracy, in which information technologies constrain human participation in public decision making. Aneesh differentiated algocratic systems from bureaucratic systems (legal-rational regulation) as well as market-based systems (price-based regulation).
In 2013, algorithmic regulation was coined by Tim O'Reilly, founder and CEO of O'Reilly Media Inc.:
Sometimes the "rules" aren't really even rules. Gordon Bruce, the former CIO of the city of Honolulu, explained to me that when he entered government from the private sector and tried to make changes, he was told, "That's against the law." His reply was "OK. Show me the law." "Well, it isn't really a law. It's a regulation." "OK. Show me the regulation." "Well, it isn't really a regulation. It's a policy that was put in place by Mr. Somebody twenty years ago." "Great. We can change that!" [...] Laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time. Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws. [...] It's time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.
In 2017, Ukraine's Ministry of Justice ran experimental government auctions using blockchain technology to ensure transparency and hinder corruption in governmental transactions. "Government by Algorithm?" was the central theme introduced at Data for Policy 2017 conference held on 6–7 September 2017 in London.
== Examples ==
=== Smart cities ===
A smart city is an urban area where collected surveillance data is used to improve various operations. Increases in computational power allow more automated decision making and replacement of public agencies by algorithmic governance. In particular, the combined use of artificial intelligence and blockchains for IoT may lead to the creation of sustainable smart city ecosystems. Intelligent street lighting in Glasgow is an example of successful government application of AI algorithms. A study of smart city initiatives in the US shows that such initiatives require the public sector as the main organizer and coordinator, the private sector as a technology and infrastructure provider, and universities as expertise contributors.
The cryptocurrency millionaire Jeffrey Berns proposed the operation of local governments in Nevada by tech firms in 2021. Berns bought 67,000 acres (271 km2) in Nevada's rural Storey County (population 4,104) for $170,000,000 (£121,000,000) in 2018 in order to develop a smart city with more than 36,000 residents that could generate an annual output of $4,600,000,000. Cryptocurrency would be allowed for payments. Blockchains, Inc.'s "Innovation Zone" was canceled in September 2021 after it failed to secure enough water for the planned 36,000 residents through water imports from a site located 100 miles away in the neighboring Washoe County. A similar water pipeline proposed in 2007 was estimated to cost $100 million and would have taken about 10 years to develop. With additional water rights purchased from the Tahoe Reno Industrial General Improvement District, the "Innovation Zone" would have acquired enough water for about 15,400 homes, meaning that it would have barely covered its planned 15,000 dwelling units, leaving nothing for the rest of the projected city and its 22 million square feet of industrial development.
In Saudi Arabia, the planners of The Line assert that it will be monitored by AI to improve life by using data and predictive modeling.
=== Reputation systems ===
Tim O'Reilly suggested that data sources and reputation systems combined in algorithmic regulation can outperform traditional regulations. For instance, once taxi drivers are rated by passengers, the quality of their services will improve automatically and "drivers who provide poor service are eliminated". O'Reilly's suggestion is based on the control-theoretic concept of a feedback loop: improvements and declines in reputation enforce desired behavior. The use of feedback loops for the management of social systems had already been suggested in management cybernetics by Stafford Beer.
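A purely illustrative Python sketch of such a feedback loop: each new rating nudges a reputation score toward it, so persistently poor service drives the score down. The update rule (an exponentially weighted average) and the parameter names are assumptions for illustration, not any platform's actual formula.

```python
def update_reputation(score, rating, weight=0.1):
    # Feedback loop: blend the current score with the newest rating.
    # A run of poor ratings steadily lowers the score, which in
    # O'Reilly's example eventually eliminates the poor performer.
    return (1 - weight) * score + weight * rating

score = 4.8
for rating in [1.0, 2.0, 1.0]:   # a run of poor passenger ratings
    score = update_reputation(score, rating)
assert score < 4.8               # reputation has declined
```

The weight controls how strongly the loop reacts to new evidence; a real system would also need safeguards against manipulation of the very signal it feeds back on.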
These connections are explored by Nello Cristianini and Teresa Scantamburlo, where the reputation-credit scoring system is modeled as an incentive given to the citizens and computed by a social machine, so that rational agents would be motivated to increase their score by adapting their behaviour. Several ethical aspects of that technology are still being discussed.
China's Social Credit System was said to be a mass surveillance effort with a centralized numerical score for each citizen given for their actions, though newer reports say that this is a widespread misconception.
=== Smart contracts ===
Smart contracts, cryptocurrencies, and decentralized autonomous organizations are mentioned as means to replace traditional ways of governance. Cryptocurrencies are currencies enabled by algorithms without a governmental central bank. Central bank digital currency often employs similar technology, but is differentiated by the fact that it does use a central bank. It is soon to be employed by major unions and governments such as the European Union and China. Smart contracts are self-executable contracts whose objectives are to reduce the need for trusted governmental intermediators, arbitration, and enforcement costs. A decentralized autonomous organization is an organization represented by smart contracts that is transparent, controlled by shareholders, and not influenced by a central government. Smart contracts have been discussed for use in such applications as (temporary) employment contracts and automatic transfer of funds and property (i.e. inheritance, upon registration of a death certificate). Some countries such as Georgia and Sweden have already launched blockchain programs focusing on property (land titles and real estate ownership). Ukraine is also looking at other areas such as state registers.
=== Algorithms in government agencies ===
According to a Stanford University study, 45% of the studied US federal agencies had experimented with AI and related machine learning (ML) tools by 2020. US federal agencies counted the number of artificial intelligence applications, which are listed below. 53% of these applications were produced by in-house experts, while commercial providers of the remaining applications include Palantir Technologies.
In 2012, NOPD started a collaboration with Palantir Technologies in the field of predictive policing. Besides Palantir's Gotham software, other similar numerical analysis software used by police agencies (such as the NCRIC) includes SAS.
In the fight against money laundering, FinCEN has employed the FinCEN Artificial Intelligence System (FAIS) since 1995.
National health administration entities and organisations such as AHIMA (American Health Information Management Association) hold medical records. Medical records serve as the central repository for planning patient care and documenting communication among patient and health care provider and professionals contributing to the patient's care. In the EU, work is ongoing on a European Health Data Space which supports the use of health data.
The US Department of Homeland Security has employed the software ATLAS, which runs on Amazon Cloud. It scanned more than 16.5 million records of naturalized Americans and flagged approximately 124,000 of them for manual analysis and review by USCIS officers regarding denaturalization. They were flagged due to potential fraud, public safety and national security issues. Some of the scanned data came from the Terrorist Screening Database and the National Crime Information Center.
NarxCare is a US software platform that combines data from the prescription registries of various U.S. states and uses machine learning to generate various three-digit "risk scores" for prescriptions of medications and an overall "Overdose Risk Score", collectively referred to as Narx Scores, in a process that potentially includes EMS and criminal justice data as well as court records.
In Estonia, artificial intelligence is used in its e-government to make it more automated and seamless. A virtual assistant will guide citizens through any interactions they have with the government. Automated and proactive services "push" services to citizens at key events of their lives (including births, bereavements, unemployment). One example is the automated registering of babies when they are born. Estonia's X-Road system will also be rebuilt to include even more privacy control and accountability into the way the government uses citizen's data.
In Costa Rica, the possible digitalization of public procurement activities (i.e. tenders for public works) has been investigated. The paper discussing this possibility mentions that the use of ICT in procurement has several benefits such as increasing transparency, facilitating digital access to public tenders, reducing direct interaction between procurement officials and companies at moments of high integrity risk, increasing outreach and competition, and easier detection of irregularities.
Besides using e-tenders for regular public works (construction of buildings, roads), e-tenders can also be used for reforestation projects and other carbon sink restoration projects. Carbon sink restoration projects may be part of the nationally determined contributions plans in order to reach the national Paris agreement goals.
Government procurement audit software can also be used. Audits are performed in some countries after subsidies have been received.
Some government agencies provide track and trace systems for services they offer. An example is track and trace for applications done by citizens (i.e. driving license procurement).
Some government services use issue tracking systems to keep track of ongoing issues.
=== Justice by algorithm ===
Judges' decisions in Australia are supported by the "Split Up" software in cases of determining the percentage of a split after a divorce. COMPAS software is used in the USA to assess the risk of recidivism in courts. According to the statement of Beijing Internet Court, China is the first country to create an internet court or cyber court. The Chinese AI judge is a virtual recreation of an actual female judge. She "will help the court's judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work". Also, Estonia plans to employ artificial intelligence to decide small-claim cases of less than €7,000.
Lawbots can perform tasks that are typically done by paralegals or young associates at law firms. One such technology used by US law firms to assist in legal research is from ROSS Intelligence, and others vary in sophistication and dependence on scripted algorithms. Another legal technology chatbot application is DoNotPay.
=== Algorithms in education ===
Due to the COVID-19 pandemic in 2020, in-person final exams were impossible for thousands of students. The public high school Westminster High employed algorithms to assign grades. The UK's Department for Education also employed a statistical algorithm to assign final grades in A-levels, due to the pandemic.
Besides their use in grading, AI software systems were used in preparation for college entrance exams.
AI teaching assistants are being developed and used for education (e.g. Georgia Tech's Jill Watson) and there is also an ongoing debate on the possibility of teachers being entirely replaced by AI systems (e.g. in homeschooling).
=== AI politicians ===
In 2018, an activist named Michihito Matsuda ran for mayor in the Tama city area of Tokyo as a human proxy for an artificial intelligence program. While election posters and campaign material used the term robot, and displayed stock images of a feminine android, the "AI mayor" was in fact a machine learning algorithm trained using Tama city datasets. The project was backed by high-profile executives Tetsuzo Matsumoto of Softbank and Norio Murakami of Google. Michihito Matsuda came third in the election, being defeated by Hiroyuki Abe. Organisers claimed that the 'AI mayor' was programmed to analyze citizen petitions put forward to the city council in a more 'fair and balanced' way than human politicians.
In 2018, Cesar Hidalgo presented the idea of augmented democracy. In an augmented democracy, legislation is done by digital twins of every single person.
In 2019, AI-powered messenger chatbot SAM participated in the discussions on social media connected to an electoral race in New Zealand. The creator of SAM, Nick Gerritsen, believed SAM would be advanced enough to run as a candidate by late 2020, when New Zealand had its next general election.
In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the 2022 Danish parliamentary election, and was built by the artist collective Computer Lars. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. This chatbot engaged in critical discussions on politics with users from around the world.
In 2023, in the Japanese town of Manazuru, a mayoral candidate called "AI Mayer" hoped to become the first AI-powered officeholder in Japan in November 2023. The candidacy was said to be supported by a group led by Michihito Matsuda.
In the 2024 United Kingdom general election, a businessman named Steve Endacott ran for the constituency of Brighton Pavilion as an AI avatar named "AI Steve", saying that constituents could interact with AI Steve to shape policy. Endacott stated that he would only attend Parliament to vote based on policies which had garnered at least 50% support. AI Steve placed last with 179 votes.
=== Management of infection ===
In February 2020, China launched a mobile app to deal with the Coronavirus outbreak called "close-contact-detector". Users are asked to enter their name and ID number. The app is able to detect "close contact" using surveillance data (i.e. using public transport records, including trains and flights) and therefore a potential risk of infection. Every user can also check the status of three other users. To make this inquiry users scan a Quick Response (QR) code on their smartphones using apps like Alipay or WeChat. The close contact detector can be accessed via popular mobile apps including Alipay. If a potential risk is detected, the app not only recommends self-quarantine, it also alerts local health officials.
Alipay also has the Alipay Health Code which is used to keep citizens safe. This system generates a QR code in one of three colors (green, yellow, or red) after users fill in a form on Alipay with personal details. A green code enables the holder to move around unrestricted. A yellow code requires the user to stay at home for seven days and red means a two-week quarantine. In some cities such as Hangzhou, it has become nearly impossible to get around without showing one's Alipay code.
In Cannes, France, monitoring software has been used on footage shot by CCTV cameras, allowing the city to monitor compliance with local social distancing and mask-wearing rules during the COVID-19 pandemic. The system does not store identifying data, but alerts city authorities and police where breaches of the social-distancing and mask-wearing rules are spotted (allowing fines to be issued where needed). The algorithms used by the monitoring software can be incorporated into existing surveillance systems in public spaces (hospitals, stations, airports, shopping centres, etc.).
Cellphone data is used to locate infected patients in South Korea, Taiwan, Singapore and other countries. In March 2020, the Israeli government enabled security agencies to track the mobile phone data of people presumed to have coronavirus. The measure was taken to enforce quarantine and protect those who may come into contact with infected citizens. Also in March 2020, Deutsche Telekom shared private cellphone data with the federal government agency, the Robert Koch Institute, in order to research and prevent the spread of the virus. Russia deployed facial recognition technology to detect quarantine breakers. The Italian regional health commissioner Giulio Gallera said that "40% of people are continuing to move around anyway", as he had been informed by mobile phone operators. In the US, Europe and the UK, Palantir Technologies has been charged with providing COVID-19 tracking services.
=== Prevention and management of environmental disasters ===
Tsunamis can be detected by tsunami warning systems, which can make use of AI. Floods can also be detected using AI systems, and wildfires can be predicted with them. Wildfire detection by AI systems (e.g. through satellite data, aerial imagery, and the GPS positions of personnel) can help in evacuating people during wildfires, investigating how householders responded to wildfires, and spotting wildfires in real time using computer vision. Earthquake detection systems are improving alongside the development of AI technology, measuring seismic data and applying complex algorithms to improve detection and prediction rates. Earthquake monitoring, phase picking, and seismic signal detection have developed through deep-learning AI algorithms, analysis, and computational models. Locust breeding areas can be approximated using machine learning, which could help to stop locust swarms in an early phase.
== Reception ==
=== Benefits ===
Algorithmic regulation is supposed to be a system of governance where more exact data, collected from citizens via their smart devices and computers, is used to more efficiently organize human life as a collective. As Deloitte estimated in 2017, automation of US government work could save 96.7 million federal hours annually, with a potential savings of $3.3 billion; at the high end, this rises to 1.2 billion hours and potential annual savings of $41.1 billion.
=== Criticism ===
There are potential risks associated with the use of algorithms in government. Those include:
algorithms becoming susceptible to bias,
a lack of transparency in how an algorithm may make decisions,
the accountability for any such decisions.
According to the 2016 book Weapons of Math Destruction, algorithms and big data are suspected to increase inequality due to opacity, scale, and damage.
There is also a serious concern that gaming by the regulated parties might occur: once algorithmic governance brings more transparency into decision making, regulated parties might try to manipulate the outcome in their own favor, even using adversarial machine learning. According to Harari, the conflict between democracy and dictatorship is a conflict of two different data-processing systems; AI and algorithms may swing the advantage toward the latter by processing enormous amounts of information centrally.
In 2018, the Netherlands employed the algorithmic system SyRI (Systeem Risico Indicatie) to detect citizens perceived as being at high risk of committing welfare fraud; it quietly flagged thousands of people to investigators. This caused a public protest. The District Court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).
The contributors of the 2019 documentary iHuman expressed apprehension about "infinitely stable dictatorships" created by government AI.
Due to public criticism, the Australian government announced the suspension of the Robodebt scheme's key functions in 2019, and a review of all debts raised using the programme.
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." The protest was successful and the grades were withdrawn.
In 2020, the US government software ATLAS, which runs on Amazon Cloud, sparked uproar from activists and Amazon's own employees.
In 2021, Eticas Foundation launched a database of governmental algorithms called Observatory of Algorithms with Social Impact (OASI).
==== Algorithmic bias and transparency ====
An initial approach towards transparency included the open-sourcing of algorithms. Software code can be looked into and improvements can be proposed through source-code-hosting facilities.
=== Public acceptance ===
A 2019 poll conducted by IE University's Center for the Governance of Change in Spain found that 25% of citizens from selected European countries were somewhat or totally in favor of letting an artificial intelligence make important decisions about how their country is run. The following table lists the results by country:
Researchers found some evidence that when citizens perceive their political leaders or security providers to be untrustworthy, disappointing, or immoral, they prefer to replace them by artificial agents, whom they consider to be more reliable. The evidence is established by survey experiments on university students of all genders.
A 2021 poll by IE University indicates that 51% of Europeans are in favor of reducing the number of national parliamentarians and reallocating these seats to an algorithm. This proposal has garnered substantial support in Spain (66%), Italy (59%), and Estonia (56%). Conversely, the citizens of Germany, the Netherlands, the United Kingdom, and Sweden largely oppose the idea. The survey results exhibit significant generational differences. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 support the measure, while a majority of respondents over the age of 55 are against it. International perspectives also vary: 75% of Chinese respondents support the proposal, whereas 60% of Americans are opposed.
== In popular culture ==
The 1970 David Bowie song "Saviour Machine" depicts an algocratic society run by the titular mechanism, which ended famine and war through "logic" but now threatens to cause an apocalypse due to its fear that its subjects have become excessively complacent.
The novels Daemon (2006) and Freedom™ (2010) by Daniel Suarez describe a fictional scenario of global algorithmic regulation. Matthew De Abaitua's If Then imagines an algorithm supposedly based on "fairness" recreating a premodern rural economy.
== See also ==
== Citations ==
== General and cited references ==
Lessig, Lawrence (2006). Code: Version 2.0. New York: Basic Books. ISBN 978-0-465-03914-2. OCLC 133467669. Wikipedia article: Code: Version 2.0.
Oliva, Jennifer (2020-01-08). "Prescription-Drug Policing: The Right To Health Information Privacy Pre- and Post-Carpenter". Duke Law Journal. 69 (4): 775–853. ISSN 0012-7086.
Szalavitz, Maia (October 2021). "The Pain Algorithm". WIRED. pp. 36–47. ISSN 1059-1028.
Yeung, Karen; Lodge, Martin (2019). Algorithmic Regulation. Oxford University Press. ISBN 9780198838494.
== External links ==
Government by Algorithm? by Data for Policy 2017 Conference
Government by Algorithm Archived 2022-08-15 at the Wayback Machine by Stanford University
A governance framework for algorithmic accountability and transparency by European Parliament
Algorithmic Government by Zeynep Engin and Philip Treleaven, University College London
Algorithmic Government by Prof. Philip C. Treleaven of University College London
Artificial Intelligence for Citizen Services and Government by Hila Mehr of Harvard University
The OASI Register, algorithms with social impact
iHuman (Documentary, 2019) by Tonje Hessen Schei
How Blockchain can transform India: Jaspreet Bindra
Can An AI Design Our Tax Policy?
New development: Blockchain—a revolutionary tool for the public sector, An introduction on the Blockchain's usage in the public sector by Vasileios Yfantis
A bold idea to replace politicians by César Hidalgo | Wikipedia/Government_by_algorithm |
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure) do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for.
Repeatedly calling a function from within itself may cause the call stack to have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for certain problems, algorithmic or compiler-optimization techniques such as tail call optimization may improve computational performance over a naive recursive implementation.
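For illustration, a minimal Python sketch contrasting a naive recursive sum, whose call depth grows with the input, with the equivalent iteration that uses constant stack space. (Python is only one possible choice of language here, and it performs no tail call optimization, so the recursive version is bounded by the interpreter's recursion limit.)

```python
def sum_recursive(xs):
    # Each call adds a stack frame; depth grows with len(xs).
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])

def sum_iterative(xs):
    # Constant stack space: the loop replaces the chain of calls.
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(100))
assert sum_recursive(data) == sum_iterative(data) == 4950
```

For a problem this simple the iterative form is both clearer and cheaper, which is exactly the trade-off described above.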
== Recursive functions and algorithms ==
A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.
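A common illustration of memoization is caching a multiply recursive function so each sub-problem is solved only once; a minimal Python sketch using `functools.lru_cache` as the lookup table:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # The sub-problems fib(n-1) and fib(n-2) are of the same type as
    # the original problem; the cache stores previously solved
    # sub-problems so they are not recomputed (memoization).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

assert fib(30) == 832040
```

Without the cache this runs in exponential time; with it, each value of `n` is computed once, giving linear time.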
=== Base case ===
A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".
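The two equations translate directly into code; a minimal Python sketch:

```python
def factorial(n):
    if n == 0:                        # base case: 0! = 1
        return 1
    return n * factorial(n - 1)       # recursive case: n! = n * (n-1)!

assert factorial(0) == 1
assert factorial(5) == 120
```

The `n == 0` test is the terminating case: every chain of recursive calls on a non-negative integer eventually reaches it.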
The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.
For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute the nth term (nth partial sum)".
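A minimal Python sketch of this indexed formulation for the series for e (the parameter name `n_terms` is illustrative); the extra parameter supplies the stopping criterion that the series itself does not provide:

```python
import math

def e_partial_sum(n_terms):
    """Sum of the first n_terms of 1/0! + 1/1! + 1/2! + ..."""
    if n_terms == 0:                  # base case created by the parameter
        return 0.0
    # nth partial sum = (n-1)th partial sum + 1/(n-1)!
    return e_partial_sum(n_terms - 1) + 1.0 / math.factorial(n_terms - 1)

assert abs(e_partial_sum(15) - math.e) < 1e-10
```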
== Recursive data types ==
Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is a technique for representing data whose exact size is unknown to the programmer: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.
=== Inductively defined data ===
An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):
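Such an inductive definition might read as follows (a sketch consistent with the description below; the constructor names are illustrative):

```haskell
data ListOfStrings = EmptyList | Cons String ListOfStrings
```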
The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.
Another example of inductive definition is the natural numbers (or positive integers):
A natural number is either 1 or n+1, where n is a natural number.
Similarly recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:
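One way such a grammar might read (a sketch in Backus–Naur form; the nonterminal names are illustrative):

```
<expr> ::= <number>
         | ( <expr> * <expr> )
         | ( <expr> + <expr> )
```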
This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complicated arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
=== Coinductively defined data and corecursion ===
A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.
A coinductive definition of infinite streams of strings, given informally, might look like this:
A stream of strings is an object s such that:
head(s) is a string, and
tail(s) is a stream of strings.
This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.
Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.
== Types of recursion ==
=== Single recursion and multiple recursion ===
Recursion that contains only a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search.
Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.
Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively entails multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, while tracking two successive values at each step – see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
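As a sketch in C, the Fibonacci sequence can be computed by single recursion, with each call carrying the two most recent values as parameters (function names are illustrative):

```c
/* Single recursion: each call makes at most one recursive call,
   passing the two most recent Fibonacci values forward. */
static unsigned long fib_aux(unsigned int n, unsigned long a, unsigned long b) {
    if (n == 0)
        return a;                       /* base case */
    return fib_aux(n - 1, b, a + b);    /* step forward one term */
}

unsigned long fib(unsigned int n) {
    return fib_aux(n, 0, 1);            /* fib(0) = 0, fib(1) = 1 */
}
```

Because the recursive call is in tail position, a compiler that eliminates tail calls can run this in constant stack space, unlike the naive doubly recursive version.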
=== Indirect recursion ===
Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.
Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, it is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions.
=== Anonymous recursion ===
Recursion is usually done by explicitly calling a function by name. However, recursion can also be done via implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.
=== Structural versus generative recursion ===
Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:
[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.
Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.
Generative recursion is the alternative:
Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.
This distinction is important in proving termination of a function.
All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.
Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.
In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.
By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as "error of approximation" that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.
== Implementation issues ==
In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:
Wrapper function (at top)
Short-circuiting the base case, aka "Arm's-length recursion" (at bottom)
Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough
On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.
=== Wrapper function ===
A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.
Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as "level of recursion" or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.
=== Short-circuiting the base case ===
Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function. The box shows C code to shortcut factorial cases 0 and 1.
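A C sketch of such a shortcut (the wrapper handles 0! and 1! directly, while the auxiliary function short-circuits at 1; names are illustrative):

```c
/* Auxiliary function: assumes n >= 1; short-circuits the base case at 1. */
static int fact_aux(int n) {
    if (n == 1)
        return 1;
    return n * fact_aux(n - 1);
}

/* Wrapper function: catches the overall base cases 0! = 1 and 1! = 1,
   so fact_aux is never entered with n < 1. */
int fact(int n) {
    if (n <= 1)
        return 1;
    return fact_aux(n);
}
```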
Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.
Conceptually, short-circuiting can be considered to either have the same base case and recursive step, checking the base case only before the recursion, or it can be considered to have a different base case (one step removed from standard base case) and a more complex recursive step, namely "check valid then recurse", as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.
==== Depth-first search ====
A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see binary trees section for standard recursive discussion.
The standard recursive algorithm for a DFS is:
base case: If current node is Null, return false
recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children
In short-circuiting, this is instead:
check value of current node, return true if match,
otherwise, on children, if not Null, then recurse.
In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).
In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (two for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.
In C, the standard recursive algorithm may be implemented as:
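One possible form (the node type and function name are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Standard recursion: the Null check is the base case. */
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                          /* base case */
    if (tree_node->data == i)
        return true;
    return tree_contains(tree_node->left, i)   /* recurse on children */
        || tree_contains(tree_node->right, i);
}
```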
The short-circuited algorithm may be implemented as:
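A sketch of the short-circuited version (a wrapper is needed for the empty tree, as noted above; names are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Short-circuited: each child is checked for Null before recursing,
   so this function is never called on a Null node. */
static bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;
    return (tree_node->left  && tree_contains_do(tree_node->left, i))
        || (tree_node->right && tree_contains_do(tree_node->right, i));
}

/* Wrapper function: handles the case of an empty tree (Null root). */
bool tree_contains(struct node *root, int i) {
    return root != NULL && tree_contains_do(root, i);
}
```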
Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is made only if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a Boolean, so the overall expression evaluates to a Boolean. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.
=== Hybrid algorithm ===
Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.
== Recursion versus iteration ==
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead. For some algorithms, however, an iterative implementation may not be easily achievable.
Compare the templates to compute x_n defined by x_n = f(n, x_{n−1}) from x_base:
For an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x.
For example, a factorial function may be implemented iteratively in C by assigning to a loop index variable and accumulator variable, rather than by passing arguments and returning values by recursion:
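A sketch of such an iterative implementation:

```c
unsigned int factorial(unsigned int n) {
    unsigned int result = 1;                  /* accumulator variable */
    for (unsigned int i = 1; i <= n; i++)     /* loop index variable */
        result *= i;
    return result;
}
```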
=== Expressive power ===
Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.
Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and for loops are routinely rewritten in recursive form in functional languages. However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack, although tail call elimination may be a feature that is not covered by a language's specification, and different implementations of the same language may differ in tail call elimination capabilities.
=== Performance issues ===
In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.
As a concrete example, the difference in performance between recursive and iterative implementations of the "factorial" example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.
=== Stack space ===
In some programming languages, the maximum size of the call stack is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language. Note the caveat below regarding the special case of tail recursion.
=== Vulnerability ===
Because recursive algorithms can be subject to stack overflows, they may be vulnerable to pathological or malicious input. Some malware specifically targets a program's call stack and takes advantage of the stack's inherently recursive nature. Even in the absence of malware, a stack overflow caused by unbounded recursion can be fatal to the program, and exception handling logic may not prevent the corresponding process from being terminated.
=== Multiply recursive problems ===
Multiply recursive problems are inherently recursive, because of prior state they need to track. One example is tree traversal as in depth-first search; though both recursive and iterative methods are used, they contrast with list traversal and linear search in a list, which are singly recursive and thus naturally iterative methods. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.
=== Refactoring recursion ===
Recursive algorithms can be replaced with non-recursive counterparts. One method for replacing recursive algorithms is to simulate them using heap memory in place of stack memory. An alternative is to develop a replacement algorithm entirely based on non-recursive methods, which can be challenging. For example, recursive algorithms for matching wildcards, such as Rich Salz' wildmat algorithm, were once typical. Non-recursive algorithms for the same purpose, such as the Krauss matching wildcards algorithm, have been developed to avoid the drawbacks of recursion and have improved only gradually based on techniques such as collecting tests and profiling performance.
== Tail-recursive functions ==
Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the "for" and "while" loops.
The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
== Order of execution ==
Consider these two functions:
=== Function 1 ===
=== Function 2 ===
The output of function 2 is that of function 1 with the lines swapped.
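A minimal C sketch of such a pair of functions (counting up to 4; the exact listings are assumed, and each printed value is also recorded in an array so the order can be inspected):

```c
#include <stdio.h>

int trace[10];        /* records the order in which values are printed */
int trace_len = 0;

static void show(int num) {
    printf("%d\n", num);
    trace[trace_len++] = num;
}

void function1(int num) {
    show(num);                   /* executed before the recursive call */
    if (num < 4)
        function1(num + 1);
}

void function2(int num) {
    if (num < 4)
        function2(num + 1);
    show(num);                   /* executed after the recursive call */
}
```

Calling function1(0) prints 0 1 2 3 4, while function2(0) prints 4 3 2 1 0: the same values, in reverse order.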
In the case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached.
Also note that the order of the print statements is reversed, which is due to the way the functions and statements are stored on the call stack.
== Recursive procedures ==
=== Factorial ===
A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:
{\displaystyle \operatorname {fact} (n)={\begin{cases}1&{\mbox{if }}n=0\\n\cdot \operatorname {fact} (n-1)&{\mbox{if }}n>0\\\end{cases}}}
The function can also be written as a recurrence relation:
{\displaystyle b_{n}=nb_{n-1}}
{\displaystyle b_{0}=1}
This evaluation of the recurrence relation demonstrates the computation that would be performed in evaluating the definition above:
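For example, evaluating b_4 unfolds as:

{\displaystyle b_{4}=4\cdot b_{3}=4\cdot (3\cdot b_{2})=4\cdot (3\cdot (2\cdot b_{1}))=4\cdot (3\cdot (2\cdot (1\cdot b_{0})))=4\cdot 3\cdot 2\cdot 1\cdot 1=24}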
This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages:
The imperative code above is equivalent to this mathematical definition using an accumulator variable t:
{\displaystyle {\begin{aligned}\operatorname {fact} (n)&=\operatorname {fact_{acc}} (n,1)\\\operatorname {fact_{acc}} (n,t)&={\begin{cases}t&{\mbox{if }}n=0\\\operatorname {fact_{acc}} (n-1,nt)&{\mbox{if }}n>0\\\end{cases}}\end{aligned}}}
The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
=== Greatest common divisor ===
The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.
Function definition:
{\displaystyle \gcd(x,y)={\begin{cases}x&{\mbox{if }}y=0\\\gcd(y,\operatorname {remainder} (x,y))&{\mbox{if }}y>0\\\end{cases}}}
Recurrence relation for greatest common divisor, where {\displaystyle x\%y} expresses the remainder of {\displaystyle x/y}:
{\displaystyle \gcd(x,y)=\gcd(y,x\%y)} if {\displaystyle y\neq 0}
{\displaystyle \gcd(x,0)=x}
The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and a language that eliminates tail calls will execute it in constant stack space. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.
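A sketch of the iterative version:

```c
/* Iterative Euclidean algorithm: state lives entirely in x and y. */
unsigned int gcd_iterative(unsigned int x, unsigned int y) {
    while (y != 0) {
        unsigned int t = x % y;   /* temporary variable for the remainder */
        x = y;
        y = t;
    }
    return x;
}
```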
The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
=== Towers of Hanoi ===
The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion. There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?
Function definition:
{\displaystyle \operatorname {hanoi} (n)={\begin{cases}1&{\mbox{if }}n=1\\2\cdot \operatorname {hanoi} (n-1)+1&{\mbox{if }}n>1\\\end{cases}}}
Recurrence relation for hanoi:
{\displaystyle h_{n}=2h_{n-1}+1}
{\displaystyle h_{1}=1}
Example implementations:
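A C sketch that prints the individual moves (peg names and the move counter are illustrative):

```c
#include <stdio.h>

long moves = 0;    /* total number of single-disk moves performed */

/* Move n disks from peg `from` to peg `to`, using peg `via` as spare. */
void hanoi(int n, char from, char to, char via) {
    if (n == 1) {                                       /* base case */
        printf("move disk 1 from %c to %c\n", from, to);
        moves++;
        return;
    }
    hanoi(n - 1, from, via, to);   /* clear the n-1 smaller disks */
    printf("move disk %d from %c to %c\n", n, from, to);
    moves++;
    hanoi(n - 1, via, to, from);   /* stack them back on top */
}
```

Calling hanoi(n, 'A', 'C', 'B') performs 2^n − 1 moves, matching the recurrence above.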
Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.
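Solving the recurrence gives the closed form:

{\displaystyle h_{n}=2^{n}-1}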
=== Binary search ===
The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.
Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.
Example implementation of binary search in C:
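One possible form (parameter names are illustrative; the range bounds are inclusive):

```c
/* Recursive binary search over data[low..high]; returns the index of
   value, or -1 if it is not present.  The array must be sorted. */
int binary_search(const int data[], int low, int high, int value) {
    if (low > high)
        return -1;                        /* empty range: value not found */
    int mid = low + (high - low) / 2;     /* midpoint near the center */
    if (data[mid] == value)
        return mid;                       /* found at the midpoint */
    if (data[mid] > value)
        return binary_search(data, low, mid - 1, value);  /* lower half */
    return binary_search(data, mid + 1, high, value);     /* upper half */
}
```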
== Recursive data structures (structural recursion) ==
An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.
"Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms."
The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively.
As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.
=== Linked lists ===
Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type.
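A typical definition:

```c
#include <stddef.h>

struct node {
    int data;            /* the data element, an integer */
    struct node *next;   /* self-reference: pointer to the next node */
};
```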
Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.
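A sketch of list_print as a recursive procedure (the node type is included for completeness):

```c
#include <stdio.h>
#include <stddef.h>

struct node {
    int data;            /* the recursively defined node structure */
    struct node *next;
};

/* Prints each data element; the empty list (NULL) is the base case. */
void list_print(const struct node *list) {
    if (list != NULL) {
        printf("%d ", list->data);   /* print this node's element */
        list_print(list->next);      /* recurse on the rest of the list */
    }
}
```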
=== Binary trees ===
Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).
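A typical definition:

```c
#include <stddef.h>

struct node {
    struct node *left;    /* pointer to the left sub-tree, or NULL */
    struct node *right;   /* pointer to the right sub-tree, or NULL */
    int data;             /* the data element, an integer */
};
```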
Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:
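For example, a membership test might read:

```c
#include <stdbool.h>
#include <stddef.h>

struct node {
    struct node *left;
    struct node *right;
    int data;
};

/* Returns true if i occurs somewhere in the tree. */
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                           /* base case: empty tree */
    if (tree_node->data == i)
        return true;
    return tree_contains(tree_node->left, i)    /* first recursive call */
        || tree_contains(tree_node->right, i);  /* second recursive call */
}
```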
At most two recursive calls will be made for any given call to tree_contains as defined above.
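An in-order traversal that prints every element might look like this (the visited array is instrumentation so the traversal order can be inspected):

```c
#include <stdio.h>
#include <stddef.h>

struct node {
    struct node *left;
    struct node *right;
    int data;
};

int visited[16];   /* records the traversal order */
int nvisited = 0;

/* In-order traversal: left sub-tree, then the node itself, then right. */
void tree_print(struct node *t) {
    if (t == NULL)
        return;                       /* base case: empty sub-tree */
    tree_print(t->left);
    printf("%d ", t->data);
    visited[nvisited++] = t->data;
    tree_print(t->right);
}
```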
The above example illustrates an in-order traversal of the binary tree. A Binary search tree is a special case of the binary tree where the data elements of each node are in order.
=== Filesystem traversal ===
Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below is an example of a preorder traversal of a filesystem.
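A POSIX C sketch (error handling is kept minimal, and the visit counter is instrumentation; the function names traverse and rtraverse follow the discussion below):

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>

long visited_count = 0;   /* number of entries visited */

/* Direct recursion: visits every entry below `path` in preorder. */
static void rtraverse(const char *path) {
    DIR *dir = opendir(path);
    if (dir == NULL)
        return;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {   /* iterate this directory */
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;
        char child[4096];
        snprintf(child, sizeof child, "%s/%s", path, entry->d_name);
        printf("%s\n", child);                 /* visit before descending */
        visited_count++;
        struct stat st;
        if (stat(child, &st) == 0 && S_ISDIR(st.st_mode))
            rtraverse(child);                  /* recurse into each directory */
    }
    closedir(dir);
}

/* Wrapper function: validates the argument, then starts the recursion. */
void traverse(const char *path) {
    struct stat st;
    if (stat(path, &st) == 0 && S_ISDIR(st.st_mode))
        rtraverse(path);
}
```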
This code combines recursion and iteration - the files and directories are iterated over, and each directory is opened recursively.
The "rtraverse" method is an example of direct recursion, whilst the "traverse" method is a wrapper function.
The "base case" scenario is that there will always be a fixed number of files and/or directories in a given filesystem.
== Time-efficiency of recursive algorithms ==
The time efficiency of recursive algorithms can be expressed as a recurrence relation in Big O notation, which can then (usually) be solved to obtain a single Big-O term.
=== Shortcut rule (master theorem) ===
If the time-complexity of the function is in the form
{\displaystyle T(n)=a\cdot T(n/b)+f(n)}
Then the asymptotic time-complexity is as follows:
If {\displaystyle f(n)=O(n^{\log _{b}a-\varepsilon })} for some constant {\displaystyle \varepsilon >0}, then {\displaystyle T(n)=\Theta (n^{\log _{b}a})}
If {\displaystyle f(n)=\Theta (n^{\log _{b}a})}, then {\displaystyle T(n)=\Theta (n^{\log _{b}a}\log n)}
If {\displaystyle f(n)=\Omega (n^{\log _{b}a+\varepsilon })} for some constant {\displaystyle \varepsilon >0}, and if {\displaystyle a\cdot f(n/b)\leq c\cdot f(n)} for some constant c < 1 and all sufficiently large n, then {\displaystyle T(n)=\Theta (f(n))}
where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and f(n) represents the work that the function does independently of any recursion (e.g. partitioning, recombining) at each level of recursion.
== Recursion in Logic Programming ==
In the procedural interpretation of logic programs, clauses (or rules) of the form A :- B are treated as procedures, which reduce goals of the form A to subgoals of the form B.
For example, the Prolog clauses:
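The clauses might read:

```prolog
path(X, Y) :- arc(X, Y).
path(X, Y) :- arc(X, Z), path(Z, Y).
```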
define a procedure, which can be used to search for a path from X to Y, either by finding a direct arc from X to Y, or by finding an arc from X to Z, and then searching recursively for a path from Z to Y. Prolog executes the procedure by reasoning top-down (or backwards) and searching the space of possible paths depth-first, one branch at a time. If it tries the second clause, and finitely fails to find a path from Z to Y, it backtracks and tries to find an arc from X to another node, and then searches for a path from that other node to Y.
However, in the logical reading of logic programs, clauses are understood declaratively as universally quantified conditionals. For example, the recursive clause of the path-finding procedure is understood as representing the knowledge that, for every X, Y and Z, if there is an arc from X to Z and a path from Z to Y then there is a path from X to Y. In symbolic form:
{\displaystyle \forall X,Y,Z(arc(X,Z)\land path(Z,Y)\rightarrow path(X,Y)).}
The logical reading frees the reader from needing to know how the clause is used to solve problems. The clause can be used top-down, as in Prolog, to reduce problems to subproblems. Or it can be used bottom-up (or forwards), as in Datalog, to derive conclusions from conditions. This separation of concerns is a form of abstraction, which separates declarative knowledge from problem solving methods (see Algorithm#Algorithm = Logic + Control).
== Infinite recursion ==
A common mistake among programmers is not providing a way to exit a recursive function, often by omitting or incorrectly checking the base case, letting it run (at least theoretically) infinitely by endlessly calling itself recursively. This is called infinite recursion, and the program will never terminate. In practice, this typically exhausts the available stack space. In most programming environments, a program with infinite recursion will not really run forever. Eventually, something will break and the program will report an error.
Below is Java code that exhibits infinite recursion:
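A sketch (the class and method names are illustrative):

```java
public class InfiniteRecursion {
    static void recurse() {
        recurse();   // no base case: the method calls itself unconditionally
    }
}
```

Any call to recurse() ends in a StackOverflowError, since each nested call consumes another stack frame and nothing ever returns.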
Running this code will result in a stack overflow error.
== See also ==
Functional programming
Computational problem
Hierarchical and recursive queries in SQL
Kleene–Rosser paradox
Open recursion
Recursion (in general)
Sierpiński curve
McCarthy 91 function
μ-recursive functions
Primitive recursive functions
Tak (function)
Logic programming
== Notes ==
== References == | Wikipedia/Recursive_algorithm |
In computer science, divide and conquer is an algorithm design paradigm. A divide-and-conquer algorithm recursively breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as sorting (e.g., quicksort, merge sort), multiplying large numbers (e.g., the Karatsuba algorithm), finding the closest pair of points, syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT).
Designing efficient divide-and-conquer algorithms can be difficult. As in mathematical induction, it is often necessary to generalize the problem to make it amenable to a recursive solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving recurrence relations.
== Divide and conquer ==
The divide-and-conquer paradigm is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly.
For example, to sort a given list of n natural numbers, split it into two lists of about n/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list (see the picture). This approach is known as the merge sort algorithm.
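The split–sort–merge scheme just described can be sketched in Java (the class name is illustrative, and an integer array stands in for the list):

```java
import java.util.Arrays;

public class MergeSortExample {
    // Sort by splitting in half, sorting each half recursively,
    // and interleaving (merging) the two sorted halves.
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;            // base case: already sorted
        int mid = a.length / 2;
        int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length)    // take the smaller head element
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        while (i < l.length) out[k++] = l[i++]; // copy any leftovers
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    public static void main(String[] args) {
        // prints [1, 2, 3, 4, 5, 6]
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 4, 6, 1, 3})));
    }
}
```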
The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the binary search algorithm for finding a record in a sorted list (or its analogue in numerical computing, the bisection algorithm for root finding). These algorithms can be implemented more efficiently than general divide-and-conquer algorithms; in particular, if they use tail recursion, they can be converted into simple loops. Under this broad definition, however, every algorithm that uses recursion or loops could be regarded as a "divide-and-conquer algorithm". Therefore, some authors consider that the name "divide and conquer" should be used only when each problem may generate two or more subproblems. The name decrease and conquer has been proposed instead for the single-subproblem class.
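Binary search illustrates both points above: each step generates a single subproblem, and because the recursive call is a tail call, it converts directly into a loop. A sketch of both forms (class name illustrative):

```java
public class BinarySearchExample {
    // Decrease and conquer: each step discards half of the range.
    static int searchRecursive(int[] a, int key, int lo, int hi) {
        if (lo > hi) return -1;                 // base case: empty range
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        else if (a[mid] < key) return searchRecursive(a, key, mid + 1, hi); // tail call
        else return searchRecursive(a, key, lo, mid - 1);                   // tail call
    }

    // The same algorithm with the tail recursion converted to a simple loop.
    static int searchIterative(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            else if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }
}
```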
An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the geometric series); this is known as prune and search.
== Early historical examples ==
Early examples of these algorithms are primarily decrease and conquer – the original problem is successively broken down into single subproblems, and indeed can be solved iteratively.
Binary search, a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by John Mauchly, the idea of using a sorted list of items to facilitate searching dates back at least as far as Babylonia in 200 BC. Another ancient decrease-and-conquer algorithm is the Euclidean algorithm to compute the greatest common divisor of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC.
An early example of a divide-and-conquer algorithm with multiple subproblems is Gauss's 1805 description of what is now called the Cooley–Tukey fast Fourier transform (FFT) algorithm, although he did not analyze its operation count quantitatively, and FFTs did not become widespread until they were rediscovered over a century later.
An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in 1945.
Another notable example is the algorithm invented by Anatolii A. Karatsuba in 1960 that could multiply two n-digit numbers in O(n^(log₂ 3)) operations (in Big O notation). This algorithm disproved Andrey Kolmogorov's 1956 conjecture that Ω(n²) operations would be required for that task.
As another example of a divide-and-conquer algorithm that did not originally involve computers, Donald Knuth gives the method a post office typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered. This is related to a radix sort, described for punch-card sorting machines as early as 1929.
== Advantages ==
=== Solving difficult problems ===
Divide and conquer is a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking the problem into sub-problems, of solving the trivial cases, and of combining the sub-problems' solutions into a solution to the original problem. Similarly, decrease and conquer only requires reducing the problem to a single smaller problem, such as the classic Tower of Hanoi puzzle, which reduces moving a tower of height n to moving a tower of height n − 1.
=== Algorithm efficiency ===
The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the Strassen algorithm for matrix multiplication, and fast Fourier transforms.
In all these examples, the D&C approach led to an improvement in the asymptotic cost of the solution. For example, if (a) the base cases have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size n, and (b) there is a bounded number p of sub-problems of size ~n/p at each stage, then the cost of the divide-and-conquer algorithm will be O(n log_p n).
For other types of divide-and-conquer approaches, running times can also be generalized. For example, suppose that (a) the work of splitting the problem and combining the partial solutions takes cn time, where n is the input size and c is some constant; (b) when n < 2, the algorithm takes time upper-bounded by c; and (c) there are q subproblems, each of size ~n/2. Then the running times are as follows:
if the number of subproblems q > 2, then the divide-and-conquer algorithm's running time is bounded by O(n^(log₂ q)).
if the number of subproblems is exactly one, then the divide-and-conquer algorithm's running time is bounded by O(n).
If, instead, the work of splitting the problem and combining the partial solutions takes cn² time, and there are 2 subproblems each of size n/2, then the running time of the divide-and-conquer algorithm is bounded by O(n²).
=== Parallelism ===
Divide-and-conquer algorithms are naturally adapted for execution in multi-processor machines, especially shared-memory systems where the communication of data between processors does not need to be planned in advance because distinct sub-problems can be executed on different processors.
=== Memory access ===
Divide-and-conquer algorithms naturally tend to make efficient use of memory caches. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called cache-oblivious, because it does not contain the cache size as an explicit parameter. Moreover, D&C algorithms can be designed for important algorithms (e.g., sorting, FFTs, and matrix multiplication) to be optimal cache-oblivious algorithms – they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is blocking, as in loop nest optimization, where the problem is explicitly divided into chunks of the appropriate size—this can also use the cache optimally, but only when the algorithm is tuned for the specific cache sizes of a particular machine.
The same advantage exists with regard to other hierarchical storage systems, such as NUMA or virtual memory, as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels.
=== Roundoff control ===
In computations with rounded arithmetic, e.g. with floating-point numbers, a divide-and-conquer algorithm may yield more accurate results than a superficially equivalent iterative method. For example, one can add N numbers either by a simple loop that adds each datum to a single variable, or by a D&C algorithm called pairwise summation that breaks the data set into two halves, recursively computes the sum of each half, and then adds the two sums. While the second method performs the same number of additions as the first and pays the overhead of the recursive calls, it is usually more accurate.
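The contrast can be sketched in single-precision arithmetic (the data values and class name are illustrative; practical implementations also switch to a simple loop below a small cutoff rather than recursing down to single elements):

```java
public class PairwiseSum {
    // Naive left-to-right accumulation: rounding error can grow linearly in n.
    static float naiveSum(float[] x) {
        float s = 0f;
        for (float v : x) s += v;
        return s;
    }

    // Pairwise summation: recursively sum each half, then add the two sums;
    // rounding error grows only logarithmically in n.
    static float pairwiseSum(float[] x, int lo, int hi) {
        if (hi - lo == 1) return x[lo];         // base case: single element
        int mid = (lo + hi) / 2;
        return pairwiseSum(x, lo, mid) + pairwiseSum(x, mid, hi);
    }

    public static void main(String[] args) {
        float[] data = new float[100_000];
        java.util.Arrays.fill(data, 0.1f);      // exact sum is 10000
        System.out.println("naive:    " + naiveSum(data));
        System.out.println("pairwise: " + pairwiseSum(data, 0, data.length));
    }
}
```

On this input the pairwise result lands much closer to the exact value 10000 than the naive loop, even though both perform the same number of additions.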
== Implementation issues ==
=== Recursion ===
Divide-and-conquer algorithms are naturally implemented as recursive procedures. In that case, the partial sub-problems leading to the one currently being solved are automatically stored in the procedure call stack. A recursive function is a function that calls itself within its definition.
=== Explicit stack ===
Divide-and-conquer algorithms can also be implemented by a non-recursive program that stores the partial sub-problems in some explicit data structure, such as a stack, queue, or priority queue. This approach allows more freedom in the choice of the sub-problem that is to be solved next, a feature that is important in some applications — e.g. in breadth-first recursion and the branch-and-bound method for function optimization. This approach is also the standard solution in programming languages that do not provide support for recursive procedures.
=== Stack size ===
In recursive implementations of D&C algorithms, one must make sure that there is sufficient memory allocated for the recursion stack; otherwise, the execution may fail because of stack overflow. D&C algorithms that are time-efficient often have relatively small recursion depth. For example, the quicksort algorithm can be implemented so that it never requires more than log₂ n nested recursive calls to sort n items.
Stack overflow may be difficult to avoid when using recursive procedures since many compilers assume that the recursion stack is a contiguous area of memory, and some allocate a fixed amount of space for it. Compilers may also save more information in the recursion stack than is strictly necessary, such as return address, unchanging parameters, and the internal variables of the procedure. Thus, the risk of stack overflow can be reduced by minimizing the parameters and internal variables of the recursive procedure or by using an explicit stack structure.
=== Choosing the base cases ===
In any recursive algorithm, there is considerable freedom in the choice of the base cases, the small subproblems that are solved directly in order to terminate the recursion.
Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve. For example, a Fast Fourier Transform algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples, there is only one base case to consider, and it requires no processing.
On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases, and these are solved non-recursively, resulting in a hybrid algorithm. This strategy avoids the overhead of recursive calls that do little or no work and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general procedure for a simple hybrid recursive algorithm is short-circuiting the base case, also known as arm's-length recursion. In this case, whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. For example, in a tree, rather than recursing to a child node and then checking whether it is null, checking for null before recursing avoids half the function calls in some algorithms on binary trees. Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack.
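For the binary-tree case just described, short-circuiting the base case can be sketched as follows (the `Node` type and method names are illustrative; the short-circuiting variant assumes a non-null root):

```java
public class ArmsLengthRecursion {
    static class Node {
        int value;
        Node left, right;
        Node(int v, Node l, Node r) { value = v; left = l; right = r; }
    }

    // Plain recursion: the null check happens inside the call,
    // so every null child still costs a function call.
    static int sumPlain(Node n) {
        if (n == null) return 0;
        return n.value + sumPlain(n.left) + sumPlain(n.right);
    }

    // Arm's-length recursion: check for null before recursing,
    // avoiding the calls on null children entirely.
    static int sumShortCircuit(Node n) {
        int s = n.value;
        if (n.left != null)  s += sumShortCircuit(n.left);
        if (n.right != null) s += sumShortCircuit(n.right);
        return s;
    }
}
```

In a tree with n nodes there are n + 1 null child links, so roughly half of all calls in the plain version do nothing but return.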
Thus, for example, many library implementations of quicksort will switch to a simple loop-based insertion sort (or similar) algorithm once the number of items to be sorted is sufficiently small. Note that, if the empty list were the only base case, sorting a list with n entries would entail maximally n quicksort calls that would do nothing but return immediately. Increasing the base cases to lists of size 2 or less will eliminate most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent in function-call overhead or stack manipulation.
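The library pattern can be sketched as a quicksort that hands small ranges to insertion sort (the cutoff value and class name are illustrative; real libraries tune the threshold empirically):

```java
public class HybridSort {
    static final int CUTOFF = 16; // typical small-range threshold (illustrative)

    static void sort(int[] a, int lo, int hi) {
        if (hi - lo + 1 <= CUTOFF) {     // large base case:
            insertionSort(a, lo, hi);    // solve small ranges non-recursively
            return;
        }
        int p = partition(a, lo, hi);
        sort(a, lo, p - 1);
        sort(a, p + 1, hi);
    }

    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; i++) {
            int v = a[i], j = i - 1;
            while (j >= lo && a[j] > v) a[j + 1] = a[j--];
            a[j + 1] = v;
        }
    }

    // Lomuto partition with the last element as pivot (illustrative choice).
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }
}
```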
Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for predetermined set of fixed sizes where the algorithm can be completely unrolled into code that has no recursion, loops, or conditionals (related to the technique of partial evaluation). For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes. Source-code generation methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently.
The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case.
=== Dynamic programming for overlapping subproblems ===
For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique which is commonly known as memoization. Followed to the limit, it leads to bottom-up divide-and-conquer algorithms such as dynamic programming.
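A classic illustration is the Fibonacci recurrence, whose branched recursion re-solves the same subproblems exponentially often; caching each solved subproblem makes the computation linear in n (a sketch; the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoFib {
    // Without memoization, fib(n) recomputes fib(k) for each small k
    // exponentially many times.
    static long fibNaive(int n) {
        if (n < 2) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    private static final Map<Integer, Long> memo = new HashMap<>();

    // With memoization, each overlapping subproblem is solved once and reused.
    static long fibMemo(int n) {
        if (n < 2) return n;
        Long cached = memo.get(n);
        if (cached != null) return cached;     // reuse a saved solution
        long result = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, result);
        return result;
    }
}
```

Filling the table bottom-up instead of top-down gives the dynamic-programming formulation mentioned above.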
== See also ==
Akra–Bazzi method – Method in computer science
Decomposable aggregation function – Type of function in database management
"Divide and conquer" – Strategy in politics and sociology
Fork–join model – Way of setting up and executing parallel computer programs
Master theorem (analysis of algorithms) – Tool for analyzing divide-and-conquer algorithms
Mathematical induction – Form of mathematical proof
MapReduce – Parallel programming model
Heuristic (computer science) – Type of algorithm, produces approximately correct solutions
== References == | Wikipedia/Divide-and-conquer_algorithm |
In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can do multiple operations in a given time. It has been a tradition of computer science to describe serial algorithms in abstract machine models, often the one known as random-access machine. Similarly, many computer science researchers have used a so-called parallel random-access machine (PRAM) as a parallel abstract machine (shared-memory).
Many parallel algorithms are executed concurrently – though in general concurrent algorithms are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "sequential algorithms", by contrast with concurrent algorithms.
== Parallelizability ==
Algorithms vary significantly in how parallelizable they are, ranging from easily parallelizable to completely unparallelizable. Further, a given problem may accommodate different algorithms, which may be more or less parallelizable.
Some problems are easy to divide up into pieces in this way – these are called embarrassingly parallel problems. Examples include many algorithms to solve Rubik's Cubes and find values which result in a given hash.
Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called inherently serial problems. Examples include iterative numerical methods, such as Newton's method, iterative solutions to the three-body problem, and most of the available algorithms to compute pi (π). Some sequential algorithms can be converted into parallel algorithms using automatic parallelization.
In many cases, developing an effective parallel algorithm for a task requires new ideas and methods compared with creating a sequential version of the algorithm. Examples include such practically important problems as searching for a target element in data structures and evaluating an algebraic expression.
== Motivation ==
Parallel algorithms on individual devices have become more common since the early 2000s because of substantial improvements in multiprocessing systems and the rise of multi-core processors. Up until the end of 2004, single-core processor performance rapidly increased via frequency scaling, and thus it was easier to construct a computer with a single fast core than one with many slower cores with the same throughput, so multicore systems were of more limited use. Since 2004 however, frequency scaling hit a wall, and thus multicore systems have become more widespread, making parallel algorithms of more general use.
== Issues ==
=== Communication ===
The cost or complexity of serial algorithms is estimated in terms of the space (memory) and time (processor cycles) that they take. Parallel algorithms need to optimize one more resource, the communication between different processors. There are two ways parallel processors communicate, shared memory or message passing.
Shared memory processing needs additional locking for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm.
Message-passing processing uses channels and message boxes, but this communication adds transfer overhead on the bus, additional memory for queues and message boxes, and latency in the messages. Designs of parallel processors use special buses, such as crossbars, so that the communication overhead is small, but it is the parallel algorithm that decides the volume of the traffic.
If the communication overhead of additional processors outweighs the benefit of adding another processor, one encounters parallel slowdown.
=== Load balancing ===
Another problem with parallel algorithms is ensuring that they are suitably load balanced, by ensuring that load (overall work) is balanced, rather than input size being balanced. For example, checking all numbers from one to a hundred thousand for primality is easy to split among processors; however, if the numbers are simply divided out evenly (1–1,000, 1,001–2,000, etc.), the amount of work will be unbalanced, as smaller numbers are easier to process by this algorithm (easier to test for primality), and thus some processors will get more work to do than the others, which will sit idle until the loaded processors complete.
== Distributed algorithms ==
A subtype of parallel algorithms, distributed algorithms, are algorithms designed to work in cluster computing and distributed computing environments, where additional concerns beyond the scope of "classical" parallel algorithms need to be addressed.
== See also ==
Multiple-agent system (MAS)
Parallel algorithms for matrix multiplication
Parallel algorithms for minimum spanning trees
Parallel computing
Parareal
== References ==
== External links ==
Designing and Building Parallel Programs, US Argonne National Laboratory | Wikipedia/Parallel_algorithm |
Algorithmic bias describes a systematic and repeatable harmful tendency in a computerized sociotechnical system to create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's General Data Protection Regulation (proposed 2018) and the Artificial Intelligence Act (proposed 2021, approved 2024).
As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.
Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service.
A 2021 survey identified multiple forms of algorithmic bias, including historical, representation, and measurement biases, each of which can contribute to unfair outcomes.
== Definitions ==
Algorithms are difficult to define, but may be generally understood as lists of instructions that determine how programs read, collect, process, and analyze data to generate output.: 13 For a rigorous technical introduction, see Algorithms. Advances in computer hardware have led to an increased ability to process, store and transmit data. This has in turn boosted the design and adoption of technologies such as machine learning and artificial intelligence.: 14–15 By analyzing and processing data, algorithms are the backbone of search engines, social media websites, recommendation engines, online retail, online advertising, and more.
Contemporary social scientists are concerned with algorithmic processes embedded into hardware and software applications because of their political and social impact, and question the underlying assumptions of an algorithm's neutrality.: 2 : 563 : 294 The term algorithmic bias describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. For example, a credit score algorithm may deny a loan without being unfair, if it is consistently weighing relevant financial criteria. If the algorithm recommends loans to one group of users, but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across multiple occurrences, an algorithm can be described as biased.: 332 This bias may be intentional or unintentional (for example, it can come from biased data obtained from a worker that previously did the job the algorithm is going to do from now on).
== Methods ==
Bias can be introduced to an algorithm in several ways. During the assemblage of a dataset, data may be collected, digitized, adapted, and entered into a database according to human-designed cataloging criteria.: 3 Next, programmers assign priorities, or hierarchies, for how a program assesses and sorts that data. This requires human decisions about how data is categorized, and which data is included or discarded.: 4 Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers.: 8 Other algorithms may reinforce stereotypes and preferences as they process and display "relevant" data for human users, for example, by selecting information based on previous choices of a similar user or group of users.: 6
Beyond assembling and processing data, bias can emerge as a result of design. For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in credit scores).: 36 Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes. Another example comes from determining criteria for what is included and excluded from results. These criteria could present unanticipated outcomes for search results, such as with flight-recommendation software that omits flights that do not follow the sponsoring airline's flight paths. Algorithms may also display an uncertainty bias, offering more confident assessments when larger data sets are available. This can skew algorithmic processes toward results that more closely correspond with larger samples, which may disregard data from underrepresented populations.: 4
== History ==
=== Early critiques ===
The earliest computer programs were designed to mimic human reasoning and deductions, and were deemed to be functioning when they successfully and consistently reproduced that human logic. In his 1976 book Computer Power and Human Reason, artificial intelligence pioneer Joseph Weizenbaum suggested that bias could arise both from the data used in a program, but also from the way a program is coded.: 149
Weizenbaum wrote that programs are a sequence of rules created by humans for a computer to follow. By following those rules consistently, such programs "embody law",: 40 that is, enforce a specific way to solve problems. The rules a computer follows are based on the assumptions of a computer programmer for how these problems might be solved. That means the code could incorporate the programmer's imagination of how the world works, including their biases and expectations.: 109 While a computer program can incorporate bias in this way, Weizenbaum also noted that any data fed to a machine additionally reflects "human decision making processes" as data is being selected.: 70, 105
Finally, he noted that machines might also transfer good information with unintended consequences if users are unclear about how to interpret the results.: 65 Weizenbaum warned against trusting decisions made by computer programs that a user doesn't understand, comparing such faith to a tourist who can find his way to a hotel room exclusively by turning left or right on a coin toss. Crucially, the tourist has no basis of understanding how or why he arrived at his destination, and a successful arrival does not mean the process is accurate or reliable.: 226
An early example of algorithmic bias resulted in as many as 60 women and ethnic minorities denied entry to St. George's Hospital Medical School per year from 1982 to 1986, based on implementation of a new computer-guidance assessment system that denied entry to women and men with "foreign-sounding names" based on historical trends in admissions. While many schools at the time employed similar biases in their selection process, St. George was most notable for automating said bias through the use of an algorithm, thus gaining the attention of people on a much wider scale.
In recent years, as algorithms increasingly rely on machine learning methods applied to real-world data, algorithmic bias has become more prevalent due to inherent biases within the data itself. For instance, facial recognition systems have been shown to misidentify individuals from marginalized groups at significantly higher rates than white individuals, highlighting how biases in training datasets manifest in deployed systems. A 2018 study by Joy Buolamwini and Timnit Gebru found that commercial facial recognition technologies exhibited error rates of up to 35% when identifying darker-skinned women, compared to less than 1% for lighter-skinned men.
Algorithmic biases are not only technical failures but often reflect systemic inequities embedded in historical and societal data. Researchers and critics, such as Cathy O'Neil in her book Weapons of Math Destruction (2016), emphasize that these biases can amplify existing social inequalities under the guise of objectivity. O'Neil argues that opaque, automated decision-making processes in areas such as credit scoring, predictive policing, and education can reinforce discriminatory practices while appearing neutral or scientific.
=== Contemporary critiques and responses ===
Though well-designed algorithms frequently determine outcomes that are equally (or more) equitable than the decisions of human beings, cases of bias still occur, and are difficult to predict and analyze. The complexity of analyzing algorithmic bias has grown alongside the complexity of programs and their design. Decisions made by one designer, or team of designers, may be obscured among the many pieces of code created for a single program; over time these decisions and their collective impact on the program's output may be forgotten.: 115 In theory, these biases may create new patterns of behavior, or "scripts", in relationship to specific technologies as the code interacts with other elements of society. Biases may also impact how society shapes itself around the data points that algorithms require. For example, if data shows a high number of arrests in a particular area, an algorithm may assign more police patrols to that area, which could lead to more arrests.: 180
The decisions of algorithmic programs can be seen as more authoritative than the decisions of the human beings they are meant to assist,: 15 a process described by author Clay Shirky as "algorithmic authority". Shirky uses the term to describe "the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources", such as search results. This neutrality can also be misrepresented by the language used by experts and the media when results are presented to the public. For example, a list of news items selected and presented as "trending" or "popular" may be created based on significantly wider criteria than just their popularity.: 14
Because of their convenience and authority, algorithms are theorized as a means of delegating responsibility away from humans.: 16 : 6 This can have the effect of reducing alternative options, compromises, or flexibility.: 16 Sociologist Scott Lash has critiqued algorithms as a new form of "generative power", in that they are a virtual means of generating actual ends. Where previously human behavior generated data to be collected and studied, powerful algorithms increasingly could shape and define human behaviors.: 71
While blind adherence to algorithmic decisions is a concern, an opposite issue arises when human decision-makers exhibit "selective adherence" to algorithmic advice. In such cases, individuals accept recommendations that align with their preexisting beliefs and disregard those that do not, thereby perpetuating existing biases and undermining the fairness objectives of algorithmic interventions. Consequently, incorporating fair algorithmic tools into decision-making processes does not automatically eliminate human biases.
Concerns over the impact of algorithms on society have led to the creation of working groups in organizations such as Google and Microsoft, which have co-created a working group named Fairness, Accountability, and Transparency in Machine Learning.: 115 Ideas from Google have included community groups that patrol the outcomes of algorithms and vote to control or restrict outputs they deem to have negative consequences.: 117 In recent years, the study of the Fairness, Accountability, and Transparency (FAT) of algorithms has emerged as its own interdisciplinary research area with an annual conference called FAccT. Critics have suggested that FAT initiatives cannot serve effectively as independent watchdogs when many are funded by corporations building the systems being studied.
== Types ==
=== Pre-existing ===
Pre-existing bias in an algorithm is a consequence of underlying social and institutional ideologies. Such ideas may influence or create personal biases within individual designers or programmers. Such prejudices can be explicit and conscious, or implicit and unconscious.: 334 : 294 Poorly selected input data, or simply data from a biased source, will influence the outcomes created by machines.: 17 Encoding pre-existing bias into software can preserve social and institutional bias, and, without correction, could be replicated in all future uses of that algorithm.: 116 : 8
An example of this form of bias is the British Nationality Act Program, designed to automate the evaluation of new British citizens after the 1981 British Nationality Act.: 341 The program accurately reflected the tenets of the law, which stated that "a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not.": 341 : 375 In its attempt to transfer a particular logic into an algorithmic process, the BNAP inscribed the logic of the British Nationality Act into its algorithm, which would perpetuate it even if the act was eventually repealed.: 342
Another source of bias, which has been called "label choice bias", arises when proxy measures are used to train algorithms, building bias against certain groups into the model. For example, a widely used algorithm predicted health care costs as a proxy for health care needs, and used these predictions to allocate resources to patients with complex health needs. This introduced bias because Black patients incur lower costs, even when they are just as unhealthy as White patients. Solutions to "label choice bias" aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict); in the prior example, instead of predicting cost, researchers would predict the more meaningful variable of healthcare need. Adjusting the target led to almost double the number of Black patients being selected for the program.
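The effect of this proxy choice can be sketched in a small simulation (all numbers are hypothetical; the reduced-cost multiplier for one group is an assumption for illustration, not the study's measured figure):

```python
import random

random.seed(0)

# Two equally sized groups with identical distributions of true health need,
# but group B incurs systematically lower costs at the same level of need
# (an assumed effect, mirroring the disparity described above).
patients = []
for i in range(1000):
    group = "A" if i % 2 == 0 else "B"
    need = random.uniform(0, 10)          # true health need (the ideal target)
    cost = need * (1.0 if group == "A" else 0.7) + random.uniform(0, 1)
    patients.append((group, need, cost))

def share_of_b(selected):
    return sum(1 for g, _, _ in selected if g == "B") / len(selected)

# Select the 100 highest-priority patients under each target choice.
top_by_cost = sorted(patients, key=lambda p: p[2], reverse=True)[:100]
top_by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:100]

print(f"Share of group B selected via cost proxy: {share_of_b(top_by_cost):.2f}")
print(f"Share of group B selected via true need:  {share_of_b(top_by_need):.2f}")
```

Ranking on the cost proxy almost entirely excludes the lower-cost group, while ranking on the ideal target selects both groups roughly equally.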
=== Machine learning bias ===
Machine learning bias refers to systematic and unfair disparities in the output of machine learning algorithms. These biases can manifest in various ways and are often a reflection of the data used to train these algorithms. Here are some key aspects:
==== Language bias ====
Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al.'s work shows that current large language models, as they are predominantly trained on English-language data, often present Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent. Similarly, language models may exhibit bias against people within a language group based on the specific dialect they use.
==== Selection bias ====
Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias: the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model's performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings.
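This sensitivity can be illustrated without a real model, using a deliberately extreme stand-in that always emits the identifier "A" (actual models show a softer version of this preference, and the question is hypothetical):

```python
from itertools import permutations

# Stand-in "model" exhibiting an extreme token bias: it always answers "A",
# regardless of the content attached to each option identifier.
def biased_model(labelled_options):
    return "A"

options = ["Paris", "London", "Berlin"]  # hypothetical question; correct: Paris
correct = "Paris"

results = []
for perm in permutations(options):
    labelled = dict(zip("ABC", perm))        # assign identifiers A, B, C
    chosen = labelled[biased_model(labelled)]
    results.append(chosen == correct)

# Accuracy depends entirely on where the correct option lands: the model is
# right only in the orderings that happen to place "Paris" at identifier "A".
print(f"Correct in {sum(results)} of {len(results)} orderings")
```

Benchmarks that permute option order (as described above) expose exactly this fluctuation: a content-blind preference for one identifier makes measured accuracy a function of option placement rather than knowledge.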
==== Gender bias ====
Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on traditional gender norms; they might associate nurses or secretaries predominantly with women and engineers or CEOs with men.
==== Stereotyping ====
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that homogenize, or unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.
A recent focus in research has been on the complex interplay between the grammatical properties of a language and real-world biases that can become embedded in AI systems, potentially perpetuating harmful stereotypes and assumptions. A study of gender bias in language models trained on Icelandic, a highly grammatically gendered language, revealed that the models exhibited a significant predisposition towards the masculine grammatical gender when referring to occupation terms, even for female-dominated professions. This suggests the models amplified societal gender biases present in the training data.
==== Political bias ====
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
==== Racial bias ====
Racial bias refers to the tendency of machine learning models to produce outcomes that unfairly discriminate against or stereotype individuals based on race or ethnicity. This bias often stems from training data that reflects historical and systemic inequalities. For example, AI systems used in hiring, law enforcement, or healthcare may disproportionately disadvantage certain racial groups by reinforcing existing stereotypes or underrepresenting them in key areas. Such biases can manifest in ways like facial recognition systems misidentifying individuals of certain racial backgrounds or healthcare algorithms underestimating the medical needs of minority patients. Addressing racial bias requires careful examination of data, improved transparency in algorithmic processes, and efforts to ensure fairness throughout the AI development lifecycle.
=== Technical ===
Technical bias emerges through limitations of a program, computational power, its design, or other constraint on the system.: 332 Such bias can also be a restraint of design, for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more than the next three, as in an airline price display.: 336 Another case is software that relies on randomness for fair distributions of results. If the random number generation mechanism is not truly random, it can introduce bias, for example, by skewing selections toward items at the end or beginning of a list.: 332
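The random-number case can be made concrete with the classic modulo bias: if a bounded random source is mapped onto a list whose length does not evenly divide the source's range, items at the front of the list are selected slightly more often. The list length and source range below are illustrative:

```python
from collections import Counter

SOURCE_RANGE = 256        # e.g. a single random byte
items = list(range(10))   # ten list positions; 256 is not a multiple of 10

counts = Counter()
for raw in range(SOURCE_RANGE):       # enumerate every possible source value
    counts[raw % len(items)] += 1     # naive mapping onto list positions

# Positions 0-5 are reachable from 26 byte values each, positions 6-9 from
# only 25, so the front of the list is systematically favored.
print(dict(counts))
```

The skew here is small, but any selection mechanism built on such a mapping is no longer a fair distribution of results, which is the kind of technical bias described above.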
A decontextualized algorithm uses unrelated information to sort results, for example, a flight-pricing algorithm that sorts results by alphabetical order would be biased in favor of American Airlines over United Airlines.: 332 The opposite may also apply, in which results are evaluated in contexts different from which they are collected. Data may be collected without crucial external context: for example, when facial recognition software is used by surveillance cameras, but evaluated by remote staff in another country or region, or evaluated by non-human algorithms with no awareness of what takes place beyond the camera's field of vision. This could create an incomplete understanding of a crime scene, for example, potentially mistaking bystanders for those who commit the crime.: 574
Lastly, technical bias can be created by attempting to formalize decisions into concrete steps on the assumption that human behavior works in the same way. For example, software weighs data points to determine whether a defendant should accept a plea bargain, while ignoring the impact of emotion on a jury.: 332 Another unintended result of this form of bias was found in the plagiarism-detection software Turnitin, which compares student-written texts to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to flag non-native speakers of English than native speakers, as the latter group is better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms. Because these technical constraints make it easier for native speakers to evade detection, Turnitin disproportionately flags non-native speakers of English for plagiarism while allowing more native speakers to slip through.: 21–22
=== Emergent ===
Emergent bias is the result of the use and reliance on algorithms across new or unanticipated contexts.: 334 Algorithms may not have been adjusted to consider new forms of knowledge, such as new drugs or medical breakthroughs, new laws, business models, or shifting cultural norms.: 334, 336 This may exclude groups through technology, without providing clear outlines to understand who is responsible for their exclusion.: 179 : 294 Similarly, problems may emerge when training data (the samples "fed" to a machine, by which it models certain conclusions) do not align with contexts that an algorithm encounters in the real world.
In 1990, an example of emergent bias was identified in the software used to place US medical students into residencies, the National Resident Matching Program (NRMP).: 338 The algorithm was designed at a time when few married couples would seek residencies together. As more women entered medical schools, more students were likely to request a residency alongside their partners. The process called for each applicant to provide a list of preferences for placement across the US, which was then sorted and assigned when a hospital and an applicant both agreed to a match. In the case of married couples where both sought residencies, the algorithm weighed the location choices of the higher-rated partner first. The result was a frequent assignment of highly preferred schools to the first partner and lower-preferred schools to the second partner, rather than sorting for compromises in placement preference.: 338
Additional emergent biases include:
==== Correlations ====
Unpredictable correlations can emerge when large data sets are compared to each other. For example, data collected about web-browsing patterns may align with signals marking sensitive data (such as race or sexual orientation). By selecting according to certain behavior or browsing patterns, the end effect would be almost identical to discrimination through the use of direct race or sexual orientation data.: 6 In other cases, the algorithm draws conclusions from correlations, without being able to understand those correlations. For example, one triage program gave lower priority to asthmatics who had pneumonia than to asthmatics who did not have pneumonia. The program did this because it simply compared survival rates: asthmatics with pneumonia are actually at the highest risk, and for this reason hospitals historically give such asthmatics the best and most immediate care. Because that aggressive treatment raised their survival rates in the data, the algorithm mistakenly inferred that they were low-risk patients.
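The browsing-pattern case can be sketched numerically (the 90% correlation between the proxy signal and the protected attribute is an assumed figure chosen for illustration):

```python
import random

random.seed(1)

# Synthetic population where an observable behavioral signal (e.g. a browsing
# pattern) correlates with an unobserved protected attribute.
population = []
for _ in range(10_000):
    attr = random.random() < 0.5                 # protected attribute, 50/50
    # The proxy signal agrees with the attribute 90% of the time (assumed).
    signal = attr if random.random() < 0.9 else not attr
    population.append((attr, signal))

# Select on the "neutral" behavioral signal only...
selected = [attr for attr, signal in population if signal]
share = sum(selected) / len(selected)

# ...and the selected set is dominated by one attribute group anyway.
print(f"Share with the protected attribute among those selected: {share:.2f}")
```

Even though the selection rule never looks at the protected attribute, its output is nearly equivalent to selecting on that attribute directly, which is the discrimination-by-proxy effect described above.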
==== Unanticipated uses ====
Emergent bias can occur when an algorithm is used by unanticipated audiences. For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand.: 334 These exclusions can become compounded, as biased or exclusionary technology is more deeply integrated into society.: 179
Apart from exclusion, unanticipated uses may emerge from the end user relying on the software rather than their own knowledge. In one example, an unanticipated user group led to algorithmic bias in the UK, when the British Nationality Act Program was created as a proof-of-concept by computer scientists and immigration lawyers to evaluate suitability for British citizenship. The designers had access to legal expertise beyond the end users in immigration offices, whose understanding of both software and immigration law would likely have been unsophisticated. The agents administering the questions relied entirely on the software, which excluded alternative pathways to citizenship, and used the software even after new case law and legal interpretations led the algorithm to become outdated. As a result of designing an algorithm for users assumed to be legally savvy on immigration law, the software's algorithm indirectly led to bias in favor of applicants who fit a very narrow set of legal criteria set by the algorithm, rather than the broader criteria of British immigration law.: 342
==== Feedback loops ====
Emergent bias may also create a feedback loop, or recursion, if data collected for an algorithm results in real-world responses which are fed back into the algorithm. For example, simulations of the predictive policing software (PredPol), deployed in Oakland, California, suggested an increased police presence in black neighborhoods based on crime data reported by the public. The simulation showed that the public reported crime based on the sight of police cars, regardless of what police were doing. The simulation interpreted police car sightings in modeling its predictions of crime, and would in turn assign an even larger increase of police presence within those neighborhoods. The Human Rights Data Analysis Group, which conducted the simulation, warned that in places where racial discrimination is a factor in arrests, such feedback loops could reinforce and perpetuate racial discrimination in policing. Another well-known example of an algorithm exhibiting such behavior is COMPAS, a software that determines an individual's likelihood of becoming a criminal offender. The software is often criticized for labeling Black individuals as likely offenders much more often than others; when those individuals later become registered offenders, that data is fed back into the system, further reinforcing the bias created by the dataset the algorithm is acting on.
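The loop can be reduced to a few lines (all numbers hypothetical): two areas have identical true crime rates, patrols are sent wherever recorded crime is highest, and police presence itself drives what gets recorded:

```python
# Minimal feedback-loop sketch: a small initial imbalance in recorded crime
# is amplified rather than corrected, because recording follows patrols.
recorded = [12, 8]      # historical recorded crime; area 0 starts slightly ahead
patrol_log = []

for day in range(100):
    target = 0 if recorded[0] >= recorded[1] else 1  # patrol the "hot" area
    patrol_log.append(target)
    recorded[target] += 10      # crimes recorded where police are present
    recorded[1 - target] += 1   # only citizen reports reach the other area

print(f"Recorded crime after 100 days: {recorded}")          # [1012, 108]
print(f"Days area 0 was patrolled: {patrol_log.count(0)}")   # 100
```

Although both areas are identical in underlying crime, the area that started with a few more recorded incidents receives every patrol and ends up appearing roughly ten times more criminal in the data, which then justifies the next round of allocation.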
Recommender systems such as those used to recommend online videos or news articles can create feedback loops. When users click on content that is suggested by algorithms, it influences the next set of suggestions. Over time this may lead to users entering a filter bubble and being unaware of important or useful content.
== Impact ==
=== Commercial influences ===
Corporate algorithms could be skewed to invisibly favor financial arrangements or agreements between companies, without the knowledge of a user who may mistake the algorithm as being impartial. For example, American Airlines created a flight-finding algorithm in the 1980s. The software presented a range of flights from various airlines to customers, but weighed factors that boosted its own flights, regardless of price or convenience. In testimony to the United States Congress, the president of the airline stated outright that the system was created with the intention of gaining competitive advantage through preferential treatment.: 2 : 331
In a 1998 paper describing Google, the founders of the company had adopted a policy of transparency in search results regarding paid placement, arguing that "advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." This bias would be an "invisible" manipulation of the user.: 3
=== Voting behavior ===
A series of studies about undecided voters in the US and in India found that search engine results were able to shift voting outcomes by about 20%. The researchers concluded that candidates have "no means of competing" if an algorithm, with or without intent, boosted page listings for a rival candidate. Facebook users who saw messages related to voting were more likely to vote. A 2010 randomized trial of Facebook users showed a 20% increase (340,000 votes) among users who saw messages encouraging voting, as well as images of their friends who had voted. Legal scholar Jonathan Zittrain has warned that this could create a "digital gerrymandering" effect in elections, "the selective presentation of information by an intermediary to meet its agenda, rather than to serve its users", if intentionally manipulated.: 335
=== Gender discrimination ===
In 2016, the professional networking site LinkedIn was discovered to recommend male variations of women's names in response to search queries. The site did not make similar recommendations in searches for male names. For example, "Andrea" would bring up a prompt asking if users meant "Andrew", but queries for "Andrew" did not ask if users meant to find "Andrea". The company said this was the result of an analysis of users' interactions with the site.
In 2012, the department store franchise Target was cited for gathering data points to infer when women customers were pregnant, even if they had not announced it, and then sharing that information with marketing partners.: 94 Because the data had been predicted, rather than directly observed or reported, the company had no legal obligation to protect the privacy of those customers.: 98
Web search algorithms have also been accused of bias. Google's results may prioritize pornographic content in search terms related to sexuality, for example, "lesbian". This bias extends to the search engine showing popular but sexualized content in neutral searches. For example, "Top 25 Sexiest Women Athletes" articles displayed as first-page results in searches for "women athletes".: 31 In 2017, Google adjusted these results along with others that surfaced hate groups, racist views, child abuse and pornography, and other upsetting and offensive content. Other examples include the display of higher-paying jobs to male applicants on job search websites. Researchers have also identified that machine translation exhibits a strong tendency towards male defaults. In particular, this is observed in fields linked to unbalanced gender distribution, including STEM occupations. In fact, current machine translation systems fail to reproduce the real-world distribution of female workers.
In 2015, Amazon.com turned off an AI system it developed to screen job applications after realizing it was biased against women. The recruitment tool excluded applicants who attended all-women's colleges and resumes that included the word "women's". A similar problem emerged with music streaming services: in 2019, it was discovered that the recommender system algorithm used by Spotify was biased against women artists. Spotify's song recommendations suggested more male artists than women artists.
=== Racial and ethnic discrimination ===
Algorithms have been criticized as a method for obscuring racial prejudices in decision-making.: 158 Because of how certain races and ethnic groups were treated in the past, data can often contain hidden biases. For example, black people are likely to receive longer sentences than white people who committed the same crime. This could potentially mean that a system amplifies the original biases in the data.
In 2015, Google apologized when a couple of black users complained that an image-identification algorithm in its Photos application identified them as gorillas. In 2010, Nikon cameras were criticized when image-recognition algorithms consistently asked Asian users if they were blinking. Such examples are the product of bias in biometric data sets. Biometric data is drawn from aspects of the body, including racial features either observed or inferred, which can then be transferred into data points.: 154 Speech recognition technology can have different accuracies depending on the user's accent. This may be caused by a lack of training data for speakers of that accent.
Biometric data about race may also be inferred, rather than observed. For example, a 2012 study showed that names commonly associated with blacks were more likely to yield search results implying arrest records, regardless of whether there is any police record of that individual's name. A 2015 study also found that Black and Asian people are assumed to have lesser functioning lungs due to racial and occupational exposure data not being incorporated into the prediction algorithm's model of lung function.
In 2019, a research study revealed that a healthcare algorithm sold by Optum favored white patients over sicker black patients. The algorithm predicts how much patients would cost the health-care system in the future. However, cost is not race-neutral, as black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic conditions, which led to the algorithm scoring white patients as equally at risk of future health problems as black patients who suffered from significantly more diseases.
A study conducted by researchers at UC Berkeley in November 2019 revealed that mortgage algorithms discriminated against Latino and African American borrowers on the basis of "creditworthiness", a standard rooted in U.S. fair-lending law, which allows lenders to use measures of identification to determine whether an individual is worthy of receiving loans. These particular algorithms were present in FinTech companies and were shown to discriminate against minorities.
Another study, published in August 2024, investigates how large language models perpetuate covert racism, particularly through dialect prejudice against speakers of African American English (AAE). It highlights that these models exhibit more negative stereotypes about AAE speakers than any recorded human biases, while their overt stereotypes are more positive. This discrepancy raises concerns about the potential harmful consequences of such biases in decision-making processes.
A study published by the Anti-Defamation League in 2025 found that several major LLMs, including ChatGPT, Llama, Claude, and Gemini, showed antisemitic bias.
A 2018 study found that commercial gender classification systems had significantly higher error rates for darker-skinned women, with error rates up to 34.7%, compared to near-perfect accuracy for lighter-skinned men.
==== Law enforcement and legal proceedings ====
Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than the average COMPAS-assigned risk level of white defendants, and that black defendants are twice as likely to be erroneously assigned the label "high-risk" as white defendants.
One example is the use of risk assessments in criminal sentencing and parole hearings in the United States, in which judges were presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime. For the period from 1920 to 1970, the nationality of a criminal's father was a consideration in those risk assessment scores.: 4 Today, these scores are shared with judges in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin. An independent investigation by ProPublica found that the scores were inaccurate 80% of the time, and disproportionately skewed to suggest that blacks were at risk of recidivism, flagging them 77% more often than whites.
One study that set out to examine "Risk, Race, & Recidivism: Predictive Bias and Disparate Impact" found that black defendants were roughly twice as likely as Caucasian defendants (45 percent vs. 23 percent) to be misclassified as higher risk, despite having objectively remained free of any documented recidivism over a two-year observation period.
In the pretrial detention context, a law review article argues that algorithmic risk assessments violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be facially discriminatory, to result in disparate treatment, and to not be narrowly tailored.
==== Online hate speech ====
In 2017 a Facebook algorithm designed to remove online hate speech was found to advantage white men over black children when assessing objectionable content, according to internal Facebook documents. The algorithm, which is a combination of computer programs and human content reviewers, was created to protect broad categories rather than specific subsets of categories. For example, posts denouncing "Muslims" would be blocked, while posts denouncing "Radical Muslims" would be allowed. An unanticipated outcome of the algorithm is to allow hate speech against black children, because they denounce the "children" subset of blacks, rather than "all blacks", whereas "all white men" would trigger a block, because whites and males are not considered subsets. Facebook was also found to allow ad purchasers to target "Jew haters" as a category of users, which the company said was an inadvertent outcome of algorithms used in assessing and categorizing data. The company's design also allowed ad buyers to block African-Americans from seeing housing ads.
While algorithms are used to track and block hate speech, some were found to be 1.5 times more likely to flag information posted by Black users and 2.2 times more likely to flag information as hate speech if written in African American English. Slurs and epithets were flagged even when used by communities that have re-appropriated them, because the algorithms lacked the context to distinguish such uses.
Another study found that 85 out of 100 examined subreddits tended to remove various norm violations, including misogynistic slurs and racist hate speech, highlighting the prevalence of such content in online communities. As platforms like Reddit update their hate speech policies, they must balance free expression with the protection of marginalized communities, emphasizing the need for context-sensitive moderation and nuanced algorithms.
==== Surveillance ====
Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors, and to determine who belongs in certain locations at certain times.: 572 The ability of such algorithms to recognize faces across a racial spectrum has been shown to be limited by the racial diversity of images in its training database; if the majority of photos belong to one race or gender, the software is better at recognizing members of that race or gender. However, even audits of these image-recognition systems are ethically fraught, and some scholars have suggested the technology's context will always have a disproportionate impact on communities whose actions are over-surveilled. For example, a 2002 analysis of software used to identify individuals in CCTV images found several examples of bias when run against criminal databases. The software was assessed as identifying men more frequently than women, older people more frequently than the young, and identified Asians, African-Americans and other races more often than whites.: 190 A 2018 study found that facial recognition software most likely accurately identified light-skinned (typically European) males, with slightly lower accuracy rates for light-skinned females. Dark-skinned males and females were significantly less likely to be accurately identified by facial recognition software. These disparities are attributed to the under-representation of darker-skinned participants in data sets used to develop this software.
=== Discrimination against the LGBTQ community ===
In 2011, users of the gay hookup application Grindr reported that the Android store's recommendation algorithm was linking Grindr to applications designed to find sex offenders, which critics said inaccurately related homosexuality with pedophilia. Writer Mike Ananny criticized this association in The Atlantic, arguing that such associations further stigmatized gay men. In 2009, online retailer Amazon de-listed 57,000 books after an algorithmic change expanded its "adult content" blacklist to include any book addressing sexuality or gay themes, such as the critically acclaimed novel Brokeback Mountain.: 5
In 2019, it was found that on Facebook, searches for "photos of my female friends" yielded suggestions such as "in bikinis" or "at the beach". In contrast, searches for "photos of my male friends" yielded no results.
Facial recognition technology has been seen to cause problems for transgender individuals. In 2018, there were reports of Uber drivers who were transgender or transitioning experiencing difficulty with the facial recognition software that Uber implements as a built-in security measure. As a result, some trans Uber drivers had their accounts suspended, costing them fares and potentially their jobs, because the facial recognition software had difficulty recognizing the face of a driver who was transitioning. Although the solution to this issue would appear to be including trans individuals in training sets for machine learning models, in one instance YouTube videos of trans individuals were collected for training data without the consent of the people who appeared in them, raising concerns about a violation of privacy.
A 2017 study conducted at Stanford University tested algorithms in a machine learning system said to be able to detect an individual's sexual orientation based on their facial images. The model in the study predicted a correct distinction between gay and straight men 81% of the time, and a correct distinction between gay and straight women 74% of the time. The study prompted a backlash from the LGBTQIA community, who were fearful of the possible negative repercussions this AI system could have on individuals of the LGBTQIA community by putting individuals at risk of being "outed" against their will.
=== Disability discrimination ===
While the modalities of algorithmic fairness have been judged on the basis of different aspects of bias, such as gender, race and socioeconomic status, disability is often left out of the list. The marginalization that people with disabilities currently face in society is being translated into AI systems and algorithms, creating even more exclusion.
The shifting nature of disabilities and their subjective characterization make them more difficult to address computationally. The lack of historical depth in defining disabilities, collecting their incidence and prevalence in questionnaires, and establishing recognition adds to the controversy and ambiguity in their quantification and calculation. The definition of disability has long been debated, shifting most recently from a medical model to a social model of disability, which establishes that disability is a result of the mismatch between people's interactions and barriers in their environment, rather than of impairments and health conditions. Disabilities can also be situational or temporary, in a constant state of flux. Disabilities are incredibly diverse, fall within a large spectrum, and can be unique to each individual. People's identity can vary based on the specific types of disability they experience, how they use assistive technologies, and whom they support. The high level of variability across people's experiences greatly personalizes how a disability can manifest. Overlapping identities and intersectional experiences are excluded from statistics and datasets, and are hence underrepresented or nonexistent in training data. Therefore, machine learning models are trained inequitably and artificial intelligence systems perpetuate more algorithmic bias. For example, if people with speech impairments are not included in training voice control features and smart AI assistants, they are unable to use the feature, or the responses they receive from a Google Home or Alexa are extremely poor.
Given the stereotypes and stigmas that still exist surrounding disabilities, the sensitive nature of revealing these identifying characteristics also carries vast privacy challenges. As disclosing disability information can be taboo and drive further discrimination against this population, there is a lack of explicit disability data available for algorithmic systems to interact with. People with disabilities face additional harms and risks with respect to their social support, cost of health insurance, workplace discrimination and other basic necessities upon disclosing their disability status. Algorithms are further exacerbating this gap by recreating the biases that already exist in societal systems and structures.
=== Google Search ===
While users generate results that are "completed" automatically, Google has failed to remove sexist and racist autocompletion text. For example, in Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble notes an example of a search for "black girls", which was reported to result in pornographic images. Google claimed it was unable to erase those pages unless they were considered unlawful.
== Obstacles to research ==
Several problems impede the study of large-scale algorithmic bias, hindering the application of academically rigorous studies and public understanding.: 5
=== Defining fairness ===
Literature on algorithmic bias has focused on the remedy of fairness, but definitions of fairness are often incompatible with each other and the realities of machine learning optimization. For example, defining fairness as an "equality of outcomes" may simply refer to a system producing the same result for all people, while fairness defined as "equality of treatment" might explicitly consider differences between individuals.: 2 As a result, fairness is sometimes described as being in conflict with the accuracy of a model, suggesting innate tensions between the priorities of social welfare and the priorities of the vendors designing these systems.: 2 In response to this tension, researchers have suggested more care to the design and use of systems that draw on potentially biased algorithms, with "fairness" defined for specific applications and contexts.
=== Complexity ===
Algorithmic processes are complex, often exceeding the understanding of the people who use them.: 2 : 7 Large-scale operations may not be understood even by those involved in creating them. The methods and processes of contemporary programs are often obscured by the inability to know every permutation of a code's input or output.: 183 Social scientist Bruno Latour has identified this process as blackboxing, a process in which "scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become." Others have critiqued the black box metaphor, suggesting that current algorithms are not one black box, but a network of interconnected ones.: 92
An example of this complexity can be found in the range of inputs into customizing feedback. The social media site Facebook factored in at least 100,000 data points to determine the layout of a user's social media feed in 2013. Furthermore, large teams of programmers may operate in relative isolation from one another, and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms.: 118 Not all code is original, and may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems.: 22
Additional complexity occurs through machine learning and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics. These personal adjustments can confuse general attempts to understand algorithms.: 367 : 7 One unidentified streaming radio service reported that it used five unique music-selection algorithms it selected for its users, based on their behavior. This creates different experiences of the same streaming services between different users, making it harder to understand what these algorithms do.: 5
Companies also run frequent A/B tests to fine-tune algorithms based on user response. For example, the search engine Bing can run up to ten million subtle variations of its service per day, creating different experiences of the service between each use and/or user.: 5
=== Lack of transparency ===
Commercial algorithms are proprietary, and may be treated as trade secrets.: 2 : 7 : 183 Treating algorithms as trade secrets protects companies, such as search engines, where a transparent algorithm might reveal tactics to manipulate search rankings.: 366 This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function.: 20 Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output.: 369 Other critics, such as lawyer and activist Katarzyna Szymielewicz, have suggested that the lack of transparency is often disguised as a result of algorithmic complexity, shielding companies from disclosing or investigating their own algorithmic processes.
=== Lack of data about sensitive categories ===
A significant barrier to understanding the tackling of bias in practice is that categories, such as demographics of individuals protected by anti-discrimination law, are often not explicitly considered when collecting and processing data. In some cases, there is little opportunity to collect this data explicitly, such as in device fingerprinting, ubiquitous computing and the Internet of Things. In other cases, the data controller may not wish to collect such data for reputational reasons, or because it represents a heightened liability and security risk. It may also be the case that, at least in relation to the European Union's General Data Protection Regulation, such data falls under the 'special category' provisions (Article 9), and therefore comes with more restrictions on potential collection and processing.
Some practitioners have tried to estimate and impute these missing sensitive categorizations in order to allow bias mitigation, for example building systems to infer ethnicity from names, however this can introduce other forms of bias if not undertaken with care. Machine learning researchers have drawn upon cryptographic privacy-enhancing technologies such as secure multi-party computation to propose methods whereby algorithmic bias can be assessed or mitigated without these data ever being available to modellers in cleartext.
Algorithmic bias does not only include protected categories, but can also concern characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult. Furthermore, false and accidental correlations can emerge from a lack of understanding of protected categories, for example, insurance rates based on historical data of car accidents which may overlap, strictly by coincidence, with residential clusters of ethnic minorities.
== Solutions ==
A study of 84 policy guidelines on ethical AI found that fairness and "mitigation of unwanted bias" were common points of concern, addressed through a blend of technical solutions, transparency and monitoring, right to remedy and increased oversight, and diversity and inclusion efforts.
=== Technical ===
There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. These emergent fields focus on tools which are typically applied to the (training) data used by the program rather than the algorithm's internal processes. These methods may also analyze a program's output and its usefulness and therefore may involve the analysis of its confusion matrix (or table of confusion). Explainable AI to detect algorithmic bias is a suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called "conducting an AI audit", where the "auditor" is an algorithm that goes through the AI model and the training data to identify biases.
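As a minimal illustration of what such an output analysis might compute, the sketch below derives per-group false positive rates, one of the simple disparity checks an "AI audit" could include. The data, function names, and group labels are hypothetical, not from any particular auditing tool.

```python
# Hypothetical sketch: compare false positive rates across groups.
def false_positive_rate(y_true, y_pred):
    # FP: predicted positive when the true label was negative.
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(y_true, y_pred, groups):
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return rates

# Toy labels and predictions for two groups "a" and "b".
y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = fpr_by_group(y_true, y_pred, groups)
print(rates)
```

A large gap between the per-group rates would be one signal of the kind of bias such an audit looks for.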
Ensuring that an AI tool such as a classifier is free from bias is more difficult than just removing the sensitive information from its input signals, because this information is typically implicit in other signals. For example, the hobbies, sports and schools attended by a job candidate might reveal their gender to the software, even when this is removed from the analysis. Solutions to this problem involve ensuring that the intelligent agent does not have any information that could be used to reconstruct the protected and sensitive information about the subject, as first demonstrated in an approach where a deep learning network was simultaneously trained to learn a task while remaining completely agnostic about the protected feature. A simpler method was proposed in the context of word embeddings, and involves removing information that is correlated with the protected characteristic.
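The word-embedding idea can be sketched in a few lines: remove the component of each embedding that lies along a direction associated with the protected characteristic. This is only an assumed toy setup (random vectors, a made-up protected direction), not the published method's exact procedure.

```python
import numpy as np

def remove_direction(vectors, direction):
    # Subtract each vector's projection onto the protected direction,
    # leaving the embeddings orthogonal to it.
    d = direction / np.linalg.norm(direction)
    return vectors - np.outer(vectors @ d, d)

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))            # toy embeddings
protected = np.array([1.0, 0.0, 0.0, 0.0])  # assumed protected direction

debiased = remove_direction(emb, protected)
print(np.allclose(debiased @ protected, 0.0))  # True
```

After the projection is removed, no linear probe along that direction can recover the protected attribute, though correlated information may survive in other directions.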
An IEEE standard has been drafted that aims to specify methodologies to help creators of algorithms eliminate issues of bias and articulate transparency (i.e. to authorities or end users) about the function and possible effects of their algorithms. The project was approved in February 2017 and is sponsored by the Software & Systems Engineering Standards Committee, a committee chartered by the IEEE Computer Society. A draft of the standard was expected to be submitted for balloting in June 2019. The standard was published in January 2025.
In 2022, the IEEE released a standard aimed at specifying methodologies to help creators of algorithms address issues of bias and promote transparency regarding the function and potential effects of their algorithms. The project, initially approved in February 2017, was sponsored by the Software & Systems Engineering Standards Committee, a committee under the IEEE Computer Society. The standard provides guidelines for articulating transparency to authorities or end users and mitigating algorithmic biases.
=== Transparency and monitoring ===
Ethics guidelines on AI point to the need for accountability, recommending that steps be taken to improve the interpretability of results. Such solutions include the consideration of the "right to understanding" in machine learning algorithms, and to resist deployment of machine learning in situations where the decisions could not be explained or reviewed. Toward this end, a movement for "Explainable AI" is already underway within organizations such as DARPA, for reasons that go beyond the remedy of bias. Price Waterhouse Coopers, for example, also suggests that monitoring output means designing systems in such a way as to ensure that solitary components of the system can be isolated and shut down if they skew results.
An initial approach towards transparency included the open-sourcing of algorithms. Software code can be looked into and improvements can be proposed through source-code-hosting facilities. However, this approach doesn't necessarily produce the intended effects. Companies and organizations can share all possible documentation and code, but this does not establish transparency if the audience doesn't understand the information given. Therefore, the role of an interested critical audience is worth exploring in relation to transparency. Algorithms cannot be held accountable without a critical audience.
=== Right to remedy ===
From a regulatory perspective, the Toronto Declaration calls for applying a human rights framework to harms caused by algorithmic bias. This includes legislating expectations of due diligence on behalf of designers of these algorithms, and creating accountability when private actors fail to protect the public interest, noting that such rights may be obscured by the complexity of determining responsibility within a web of complex, intertwining processes. Others propose the need for clear liability insurance mechanisms.
=== Diversity and inclusion ===
Amid concerns that the design of AI systems is primarily the domain of white, male engineers, a number of scholars have suggested that algorithmic bias may be minimized by expanding inclusion in the ranks of those designing AI systems. For example, just 12% of machine learning engineers are women, with black AI leaders pointing to a "diversity crisis" in the field. Groups like Black in AI and Queer in AI are attempting to create more inclusive spaces in the AI community and work against the often harmful desires of corporations that control the trajectory of AI research. Critiques of simple inclusivity efforts suggest that diversity programs can not address overlapping forms of inequality, and have called for applying a more deliberate lens of intersectionality to the design of algorithms.: 4 Researchers at the University of Cambridge have argued that addressing racial diversity is hampered by the "whiteness" of the culture of AI.
=== Interdisciplinarity and Collaboration ===
Integrating interdisciplinarity and collaboration into the development of AI systems can play a critical role in tackling algorithmic bias. Integrating insights, expertise, and perspectives from disciplines outside of computer science can foster a better understanding of the impact data-driven solutions have on society. An example of this in AI research is PACT, or Participatory Approach to enable Capabilities in communiTies, a proposed framework for facilitating collaboration when developing AI-driven solutions concerned with social impact. This framework identifies guiding principles for stakeholder participation when working on AI for Social Good (AI4SG) projects. PACT attempts to reify the importance of decolonizing and power-shifting efforts in the design of human-centered AI solutions. An academic initiative in this regard is Stanford University's Institute for Human-Centered Artificial Intelligence, which aims to foster multidisciplinary collaboration. The mission of the institute is to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition.
Collaboration with outside experts and various stakeholders facilitates ethical, inclusive, and accountable development of intelligent systems. It incorporates ethical considerations, understands the social and cultural context, promotes human-centered design, leverages technical expertise, and addresses policy and legal considerations. Collaboration across disciplines is essential to effectively mitigate bias in AI systems and ensure that AI technologies are fair, transparent, and accountable.
== Regulation ==
=== Europe ===
The General Data Protection Regulation (GDPR), the European Union's revised data protection regime that was implemented in 2018, addresses "Automated individual decision-making, including profiling" in Article 22. These rules prohibit "solely" automated decisions which have a "significant" or "legal" effect on an individual, unless they are explicitly authorised by consent, contract, or member state law. Where they are permitted, there must be safeguards in place, such as a right to a human-in-the-loop, and a non-binding right to an explanation of decisions reached. While these regulations are commonly considered to be new, nearly identical provisions have existed across Europe since 1995, in Article 15 of the Data Protection Directive. The original automated decision rules and safeguards have been found in French law since the late 1970s.
The GDPR addresses algorithmic bias in profiling systems, as well as the statistical approaches possible to clean it, directly in recital 71, noting that the controller should use "appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures appropriate ... that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect". Like the non-binding right to an explanation in recital 71, the problem is the non-binding nature of recitals. While it has been treated as a requirement by the Article 29 Working Party that advised on the implementation of data protection law, its practical dimensions are unclear. It has been argued that the Data Protection Impact Assessments for high risk data profiling (alongside other pre-emptive measures within data protection) may be a better way to tackle issues of algorithmic discrimination, as it restricts the actions of those deploying algorithms, rather than requiring consumers to file complaints or request changes.
=== United States ===
The United States has no general legislation controlling algorithmic bias, approaching the problem through various state and federal laws that might vary by industry, sector, and by how an algorithm is used. Many policies are self-enforced or controlled by the Federal Trade Commission. In 2016, the Obama administration released the National Artificial Intelligence Research and Development Strategic Plan, which was intended to guide policymakers toward a critical assessment of algorithms. It recommended researchers to "design these systems so that their actions and decision-making are transparent and easily interpretable by humans, and thus can be examined for any bias they may contain, rather than just learning and repeating these biases". Intended only as guidance, the report did not create any legal precedent.: 26
In 2017, New York City passed the first algorithmic accountability bill in the United States. The bill, which went into effect on January 1, 2018, required "the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by agency automated decision systems." The task force was required to present findings and recommendations for further regulatory action in 2019. In 2023, New York City implemented a law requiring employers using automated hiring tools to conduct independent "bias audits" and publish the results. This law marked one of the first legally mandated transparency measures for AI systems used in employment decisions in the United States.
On February 11, 2019, according to Executive Order 13859, the federal government unveiled the "American AI Initiative," a comprehensive strategy to maintain U.S. leadership in artificial intelligence. The initiative highlights the importance of sustained AI research and development, ethical standards, workforce training, and the protection of critical AI technologies. This aligns with broader efforts to ensure transparency, accountability, and innovation in AI systems across public and private sectors. Furthermore, on October 30, 2023, the President signed Executive Order 14110, which emphasizes the safe, secure, and trustworthy development and use of artificial intelligence (AI). The order outlines a coordinated, government-wide approach to harness AI's potential while mitigating its risks, including fraud, discrimination, and national security threats. An important point in the commitment is promoting responsible innovation and collaboration across sectors to ensure that AI benefits society as a whole. With this order, President Joe Biden mandated the federal government to create best practices for companies to optimize AI's benefits and minimize its harms.
=== India ===
On July 31, 2018, a draft of the Personal Data Bill was presented. The draft proposes standards for the storage, processing and transmission of data. While it does not use the term algorithm, it makes provisions for "harm resulting from any processing or any kind of processing undertaken by the fiduciary". It defines "any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal" or "any discriminatory treatment" as a source of harm that could arise from improper use of data. It also makes special provisions for people of "Intersex status".
== See also ==
Algorithmic wage discrimination
Ethics of artificial intelligence
Fairness (machine learning)
Hallucination (artificial intelligence)
Misaligned goals in artificial intelligence
Predictive policing
SenseTime
== References ==
== Further reading ==
Baer, Tobias (2019). Understand, Manage, and Prevent Algorithmic Bias: A Guide for Business Users and Data Scientists. New York: Apress. ISBN 9781484248843.
Noble, Safiya Umoja (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. ISBN 9781479837243.
In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.
== Redundancy ==
Unlike tree traversal, graph traversal may require that some vertices be visited more than once, since it is not necessarily known before transitioning to a vertex that it has already been explored. As graphs become more dense, this redundancy becomes more prevalent, causing computation time to increase; as graphs become more sparse, the opposite holds true.
Thus, it is usually necessary to remember which vertices have already been explored by the algorithm, so that vertices are revisited as infrequently as possible (or in the worst case, to prevent the traversal from continuing indefinitely). This may be accomplished by associating each vertex of the graph with a "color" or "visitation" state during the traversal, which is then checked and updated as the algorithm visits each vertex. If the vertex has already been visited, it is ignored and the path is pursued no further; otherwise, the algorithm checks/updates the vertex and continues down its current path.
Several special cases of graphs imply the visitation of other vertices in their structure, and thus do not require that visitation be explicitly recorded during the traversal. An important example of this is a tree: during a traversal it may be assumed that all "ancestor" vertices of the current vertex (and others depending on the algorithm) have already been visited. Both the depth-first and breadth-first graph searches are adaptations of tree-based algorithms, distinguished primarily by the lack of a structurally determined "root" vertex and the addition of a data structure to record the traversal's visitation state.
== Graph traversal algorithms ==
Note. — If each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS), then the algorithm must be called at least once for each connected component of the graph. This is easily accomplished by iterating through all the vertices of the graph, performing the algorithm on each vertex that is still unvisited when examined.
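The note above can be sketched as follows: run a traversal (here, a simple iterative depth-first search) from every vertex that is still unvisited, so that every connected component is covered. The adjacency-list dictionary layout is an assumption for illustration.

```python
def traverse_all(graph):
    # Call the traversal once per connected component by restarting
    # from any vertex that was not reached by an earlier call.
    visited = set()
    components = []
    for start in graph:
        if start in visited:
            continue
        stack, component = [start], []
        while stack:
            v = stack.pop()
            if v in visited:
                continue
            visited.add(v)
            component.append(v)
            stack.extend(graph[v])
        components.append(component)
    return components

# A graph with three connected components: {1, 2}, {3, 4}, {5}.
g = {1: [2], 2: [1], 3: [4], 4: [3], 5: []}
print(len(traverse_all(g)))  # 3
```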
=== Depth-first search ===
A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child vertices before visiting the sibling vertices; that is, it traverses the depth of any particular path before exploring its breadth. A stack (often the program's call stack via recursion) is generally used when implementing the algorithm.
The algorithm begins with a chosen "root" vertex; it then iteratively transitions from the current vertex to an adjacent, unvisited vertex, until it can no longer find an unexplored vertex to transition to from its current location. The algorithm then backtracks along previously visited vertices, until it finds a vertex connected to yet more uncharted territory. It will then proceed down the new path as it had before, backtracking as it encounters dead-ends, and ending only when the algorithm has backtracked past the original "root" vertex from the very first step.
DFS is the basis for many graph-related algorithms, including topological sorts and planarity testing.
==== Pseudocode ====
Input: A graph G and a vertex v of G.
Output: A labeling of the edges in the connected component of v as discovery edges and back edges.
procedure DFS(G, v) is
    label v as explored
    for all edges e in G.incidentEdges(v) do
        if edge e is unexplored then
            w ← G.adjacentVertex(v, e)
            if vertex w is unexplored then
                label e as a discovery edge
                recursively call DFS(G, w)
            else
                label e as a back edge
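A direct Python rendering of this pseudocode might look as follows, for an undirected graph given as an adjacency-list dictionary (an assumed representation). Edges are stored as frozensets so that (v, w) and (w, v) name the same edge.

```python
def dfs_label(graph, v, explored=None, labels=None):
    # Label every edge in v's connected component as a discovery
    # edge or a back edge, mirroring the pseudocode above.
    if explored is None:
        explored, labels = set(), {}
    explored.add(v)
    for w in graph[v]:
        e = frozenset((v, w))
        if e not in labels:            # edge e is unexplored
            if w not in explored:      # vertex w is unexplored
                labels[e] = "discovery"
                dfs_label(graph, w, explored, labels)
            else:
                labels[e] = "back"
    return labels

# A triangle: the DFS tree uses two discovery edges, and the edge
# closing the cycle is labeled a back edge.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
labels = dfs_label(g, "a")
print(sorted(labels.values()))  # ['back', 'discovery', 'discovery']
```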
=== Breadth-first search ===
A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits the sibling vertices before visiting the child vertices, and a queue is used in the search process. This algorithm is often used to find the shortest path from one vertex to another.
==== Pseudocode ====
Input: A graph G and a vertex v of G.
Output: The closest vertex to v satisfying some conditions, or null if no such vertex exists.
procedure BFS(G, v) is
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty do
        w ← Q.dequeue()
        if w is what we are looking for then
            return w
        for all edges e in G.adjacentEdges(w) do
            x ← G.adjacentVertex(w, e)
            if x is not marked then
                mark x
                enqueue x onto Q
    return null
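In Python, this pseudocode might be realized as below, with the "what we are looking for" condition passed in as a predicate; the adjacency-list dictionary is again an assumed representation.

```python
from collections import deque

def bfs(graph, v, goal):
    # Return the closest vertex to v satisfying goal, or None.
    queue = deque([v])
    marked = {v}
    while queue:
        w = queue.popleft()
        if goal(w):
            return w
        for x in graph[w]:
            if x not in marked:
                marked.add(x)
                queue.append(x)
    return None

g = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
print(bfs(g, 1, lambda w: w > 3))  # 4, the nearest vertex greater than 3
```

Because vertices are dequeued in order of distance from the start, the first vertex returned is always a closest one, which is why BFS finds shortest paths in unweighted graphs.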
== Applications ==
Breadth-first search can be used to solve many problems in graph theory, for example:
finding all vertices within one connected component;
Cheney's algorithm;
finding the shortest path between two vertices;
testing a graph for bipartiteness;
Cuthill–McKee algorithm mesh numbering;
Ford–Fulkerson algorithm for computing the maximum flow in a flow network;
serialization/deserialization of a binary tree vs serialization in sorted order (allows the tree to be re-constructed in an efficient manner);
maze generation algorithms;
flood fill algorithm for marking contiguous regions of a two dimensional image or n-dimensional array;
analysis of networks and relationships.
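One application from the list above, testing a graph for bipartiteness, can be sketched with a BFS 2-coloring: a graph is bipartite if and only if no edge joins two same-colored vertices. The adjacency-list dictionary is an assumed representation; the loop over all vertices handles disconnected graphs.

```python
from collections import deque

def is_bipartite(graph):
    color = {}
    for start in graph:          # cover every connected component
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in color:
                    color[w] = 1 - color[v]   # alternate colors by level
                    queue.append(w)
                elif color[w] == color[v]:
                    return False              # odd cycle found
    return True

square = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}   # even cycle
triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}            # odd cycle
print(is_bipartite(square), is_bipartite(triangle))  # True False
```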
== Graph exploration ==
The problem of graph exploration can be seen as a variant of graph traversal. It is an online problem, meaning that the information about the graph is only revealed during the runtime of the algorithm. A common model is as follows: given a connected graph G = (V, E) with non-negative edge weights. The algorithm starts at some vertex, and knows all incident outgoing edges and the vertices at the end of these edges—but not more. When a new vertex is visited, then again all incident outgoing edges and the vertices at the end are known. The goal is to visit all n vertices and return to the starting vertex, but the sum of the weights of the tour should be as small as possible. The problem can also be understood as a specific version of the travelling salesman problem, where the salesman has to discover the graph on the go.
For general graphs, the best known algorithm for both undirected and directed graphs is a simple greedy algorithm:
In the undirected case, the greedy tour is at most O(ln n)-times longer than an optimal tour. The best lower bound known for any deterministic online algorithm is 10/3.
Unit weight undirected graphs can be explored with a competitive ratio of 2 − ε, which is already a tight bound on tadpole graphs.
In the directed case, the greedy tour is at most (n − 1)-times longer than an optimal tour. This matches the lower bound of n − 1. An analogous competitive lower bound of Ω(n) also holds for randomized algorithms that know the coordinates of each node in a geometric embedding. If instead of visiting all nodes just a single "treasure" node has to be found, the competitive bounds are Θ(n^2) on unit weight directed graphs, for both deterministic and randomized algorithms.
== Universal traversal sequences ==
A universal traversal sequence is a sequence of instructions comprising a graph traversal for any regular graph with a set number of vertices and for any starting vertex. A probabilistic proof was used by Aleliunas et al. to show that there exists a universal traversal sequence with number of instructions proportional to O(n^5) for any regular graph with n vertices. The steps specified in the sequence are relative to the current node, not absolute. For example, if the current node is v_j, and v_j has d neighbors, then the traversal sequence will specify the next node to visit, v_{j+1}, as the i-th neighbor of v_j, where 1 ≤ i ≤ d.
== See also ==
External memory graph traversal
== References ==
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first.
Intuitively, problem A is reducible to problem B, if an algorithm for solving problem B efficiently (if it exists) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. "Harder" means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, expensive need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). The existence of a reduction from A to B can be written in the shorthand notation A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m : many-one reduction, p : polynomial reduction).
The mathematical structure generated on a set of problems by the reductions of a particular type generally forms a preorder, whose equivalence classes may be used to define degrees of unsolvability and complexity classes.
== Introduction ==
There are two main situations where we need to use reductions:
First, we find ourselves trying to solve a problem that is similar to a problem we've already solved. In these cases, often a quick way of solving the new problem is to transform each instance of the new problem into instances of the old problem, solve those using our existing solution, and then use those solutions to obtain our final solution. This is perhaps the most obvious use of reductions.
Second, suppose we have a problem that we've proven is hard to solve, and we have a similar new problem. We might suspect that it, too, is hard to solve. We argue by contradiction: suppose the new problem were easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard.
A very simple example of a reduction is from multiplication to squaring. Suppose all we know how to do is to add, subtract, take squares, and divide by two. We can use this knowledge, combined with the following formula, to obtain the product of any two numbers:
a × b = ((a + b)² − a² − b²) / 2
We also have a reduction in the other direction; obviously, if we can multiply two numbers, we can square a number. This seems to imply that these two problems are equally hard. This kind of reduction corresponds to Turing reduction.
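The formula-based reduction above can be sketched directly; here squaring stands in for the one nontrivial primitive we assume is available, and everything else uses only addition, subtraction, and halving:

```python
def square(n):
    return n * n  # the one primitive assumed beyond +, - and halving

def multiply(a, b):
    # a*b = ((a+b)^2 - a^2 - b^2) / 2, using only the allowed operations
    return (square(a + b) - square(a) - square(b)) // 2

print(multiply(6, 7))  # → 42
```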
However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. In this case, even if we're allowed to use all the basic arithmetic operations, including multiplication, no reduction exists in general, because in order to get the desired result as a square we have to compute its square root first, and this square root could be an irrational number like
√2
that cannot be constructed by arithmetic operations on rational numbers. Going in the other direction, however, we can certainly square a number with just one multiplication, only at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. This corresponds to many-one reduction.
== Properties ==
A reduction is a preordering, that is, a reflexive and transitive relation, on P(N)×P(N), where P(N) is the power set of the natural numbers.
== Types and applications of reductions ==
As described in the example above, there are two main types of reductions used in computational complexity, the many-one reduction and the Turing reduction. Many-one reductions map instances of one problem to instances of another; Turing reductions compute the solution to one problem, assuming the other problem is easy to solve. The many-one reduction is a stronger type of Turing reduction, and is more effective at separating problems into distinct complexity classes. However, the increased restrictions on many-one reductions make them more difficult to find.
A problem is complete for a complexity class if every problem in the class reduces to that problem, and it is also in the class itself. In this sense the problem represents the class, since any solution to it can, in combination with the reductions, be used to solve every problem in the class.
However, in order to be useful, reductions must be easy. For example, it's quite possible to reduce a difficult-to-solve NP-complete problem like the boolean satisfiability problem to a trivial problem, like determining if a number equals zero, by having the reduction machine solve the problem in exponential time and output zero only if there is a solution. However, this does not achieve much, because even though we can solve the new problem, performing the reduction is just as hard as solving the old problem. Likewise, a reduction computing a noncomputable function can reduce an undecidable problem to a decidable one. As Michael Sipser points out in Introduction to the Theory of Computation: "The reduction must be easy, relative to the complexity of typical problems in the class [...] If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it."
Therefore, the appropriate notion of reduction depends on the complexity class being studied. When studying the complexity class NP and harder classes such as the polynomial hierarchy, polynomial-time reductions are used. When studying classes within P such as NC and NL, log-space reductions are used. Reductions are also used in computability theory to show whether problems are or are not solvable by machines at all; in this case, reductions are restricted only to computable functions.
In case of optimization (maximization or minimization) problems, we often think in terms of approximation-preserving reduction. Suppose we have two optimization problems such that instances of one problem can be mapped onto instances of the other, in a way that nearly optimal solutions to instances of the latter problem can be transformed back to yield nearly optimal solutions to the former. This way, if we have an optimization algorithm (or approximation algorithm) that finds near-optimal (or optimal) solutions to instances of problem B, and an efficient approximation-preserving reduction from problem A to problem B, by composition we obtain an optimization algorithm that yields near-optimal solutions to instances of problem A. Approximation-preserving reductions are often used to prove hardness of approximation results: if some optimization problem A is hard to approximate (under some complexity assumption) within a factor better than α for some α, and there is a β-approximation-preserving reduction from problem A to problem B, we can conclude that problem B is hard to approximate within factor α/β.
== Examples ==
To show that a decision problem P is undecidable we must find a reduction from a decision problem which is already known to be undecidable to P. That reduction function must be a computable function. In particular, we often show that a problem P is undecidable by showing that the halting problem reduces to P.
The complexity classes P, NP and PSPACE are closed under (many-one, "Karp") polynomial-time reductions.
The complexity classes L, NL, P, NP and PSPACE are closed under log-space reduction.
=== Detailed example ===
The following example shows how to use reduction from the halting problem to prove that a language is undecidable. Suppose H(M, w) is the problem of determining whether a given Turing machine M halts (by accepting or rejecting) on input string w. This language is known to be undecidable. Suppose E(M) is the problem of determining whether the language a given Turing machine M accepts is empty (in other words, whether M accepts any strings at all). We show that E is undecidable by a reduction from H.
To obtain a contradiction, suppose R is a decider for E. We will use this to produce a decider S for H (which we know does not exist). Given input M and w (a Turing machine and some input string), define S(M, w) with the following behavior: S creates a Turing machine N that accepts only if the input string to N is w and M halts on input w, and does not halt otherwise. The decider S can now evaluate R(N) to check whether the language accepted by N is empty. If R accepts N, then the language accepted by N is empty, so in particular M does not halt on input w, so S can reject. If R rejects N, then the language accepted by N is nonempty, so M does halt on input w, so S can accept. Thus, if we had a decider R for E, we would be able to produce a decider S for the halting problem H(M, w) for any machine M and input w. Since we know that such an S cannot exist, it follows that the language E is also undecidable.
== See also ==
Gadget (computer science)
Many-one reduction
Parsimonious reduction
Reduction (recursion theory)
Truth table reduction
Turing reduction
== References ==
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein, Introduction to Algorithms, MIT Press, 2001, ISBN 978-0-262-03293-3
Hartley Rogers, Jr.: Theory of Recursive Functions and Effective Computability, McGraw-Hill, 1967, ISBN 978-0-262-68052-3.
Peter Bürgisser: Completeness and Reduction in Algebraic Complexity Theory, Springer, 2000, ISBN 978-3-540-66752-0.
E.R. Griffor: Handbook of Computability Theory, North Holland, 1999, ISBN 978-0-444-89882-1.
Control tables are tables that control the control flow or play a major part in program control. There are no rigid rules about the structure or content of a control table; its qualifying attribute is its ability to direct control flow in some way through "execution" by a processor or interpreter. The design of such tables is sometimes referred to as table-driven design (although this typically refers to generating code automatically from external tables rather than direct run-time tables). In some cases, control tables can be specific implementations of automata-based programming based on finite-state machines. If there are several hierarchical levels of control table, they may behave in a manner equivalent to UML state machines.
Control tables often have the equivalent of conditional expressions or function references embedded in them, usually implied by their relative column position in the association list. Control tables reduce the need for programming similar structures or program statements over and over again. The two-dimensional nature of most tables makes them easier to view and update than the one-dimensional nature of program code.
In some cases, non-programmers can be assigned to maintain the content of control tables. For example, if a user-entered search phrase contains a certain phrase, a URL (web address) can be assigned in a table that controls where the search user is taken. If the phrase contains "skirt", then the table can route the user to "www.shopping.example/catalogs/skirts", which is the skirts product catalog page. (The example URL doesn't work in practice). Marketing personnel may manage such a table instead of programmers.
== Typical usage ==
Transformation of input values to:
an index, for later branching or pointer lookup
a program name, relative subroutine number, program label or program offset, to alter control flow
Controlling a main loop in event-driven programming using a control variable for state transitions
Controlling the program cycle for online transaction processing applications
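As an illustration of the state-transition usage above, the following Python sketch drives a main loop from a transition table mapping (state, event) to (action, next state); all state, event, and action names are hypothetical:

```python
def log(msg):
    return msg  # trivial stand-in 'action' for real event handling

# (state, event) -> (action, next state); names are hypothetical
TRANSITIONS = {
    ("idle",    "start"): (lambda: log("starting"), "running"),
    ("running", "pause"): (lambda: log("pausing"),  "paused"),
    ("paused",  "start"): (lambda: log("resuming"), "running"),
    ("running", "stop"):  (lambda: log("stopping"), "idle"),
}

def run(events, state="idle"):
    # the main loop: the table, not if/else chains, dictates control flow
    for event in events:
        action, state = TRANSITIONS[(state, event)]
        action()
    return state

print(run(["start", "pause", "start", "stop"]))  # → idle
```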
== More advanced usage ==
Acting as virtual instructions for a virtual machine processed by an interpreter
similar to bytecode – but usually with operations implied by the table structure itself
== Table structure ==
The tables can have multiple dimensions, of fixed or variable lengths and are usually portable between computer platforms, requiring only a change to the interpreter, not the algorithm itself – the logic of which is essentially embodied within the table structure and content. The structure of the table may be similar to a multimap associative array, where a data value (or combination of data values) may be mapped to one or more functions to be performed.
=== One-dimensional tables ===
In perhaps its simplest implementation, a control table may sometimes be a one-dimensional table for directly translating a raw data value to a corresponding subroutine offset, index or pointer using the raw data value either directly as the index to the array, or by performing some basic arithmetic on the data beforehand. This can be achieved in constant time (without a linear search or binary search using a typical lookup table on an associative array). In most architectures, this can be accomplished in two or three machine instructions – without any comparisons or loops. The technique is known as a "trivial hash function" or, when used specifically for branch tables, "double dispatch".
For this to be feasible, the range of all possible values of the data needs to be small (e.g. an ASCII or EBCDIC character value, which has a range of hexadecimal '00'–'FF'; if the actual range is guaranteed to be smaller than this, the array can be truncated to fewer than 256 bytes).
In automata-based programming and pseudoconversational transaction processing, if the number of distinct program states is small, a "dense sequence" control variable can be used to efficiently dictate the entire flow of the main program loop.
A two-byte raw data value would require a minimum table size of 65,536 bytes – to handle all input possibilities – whilst allowing just 256 different output values. However, this direct translation technique provides an extremely fast validation and conversion to a (relative) subroutine pointer if the heuristics, together with sufficiently fast access memory, permit its use.
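A minimal Python analogue of this direct-translation technique (handler names are illustrative): the raw character code itself indexes an array of handlers, so dispatch needs no comparisons, loops, or searching:

```python
def add(a, b):      return a + b
def subtract(a, b): return a - b
def default(a, b):  return None   # catch-all for unmapped input values

table = [default] * 256           # one slot per possible byte value
table[ord("A")] = add
table[ord("S")] = subtract

def dispatch(ch, a, b):
    return table[ord(ch)](a, b)   # the character code IS the array index

print(dispatch("A", 2, 3))  # → 5
```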
=== Branch tables ===
A branch table is a one-dimensional 'array' of contiguous machine code branch/jump instructions to effect a multiway branch to a program label when branched into by an immediately preceding, indexed branch. It is sometimes generated by an optimizing compiler to execute a switch statement – provided that the input range is small and dense, with few gaps (as created by the previous array example).
Although quite compact – compared to the multiple equivalent If statements – the branch instructions still carry some redundancy, since the branch opcode and condition code mask are repeated alongside the branch offsets. Control tables containing only the offsets to the program labels can be constructed to overcome this redundancy (at least in assembly languages) while requiring only minor execution time overhead compared to a conventional branch table.
=== Multi-dimensional tables ===
More usually, a control table can be thought of as a truth table or as an executable ("binary") implementation of a printed decision table (or a tree of decision tables, at several levels). They contain (often implied) propositions, together with one or more associated 'actions'. These actions are usually performed by generic or custom-built subroutines that are called by an "interpreter" program. The interpreter in this instance effectively functions as a virtual machine, that 'executes' the control table entries and thus provides a higher level of abstraction than the underlying code of the interpreter.
A control table can be constructed along similar lines to a language dependent switch statement but with the added possibility of testing for combinations of input values (using boolean style AND/OR conditions) and potentially calling multiple subroutines (instead of just a single set of values and 'branch to' program labels). (The switch statement construct in any case may not be available, or has confusingly differing implementations in high level languages (HLL). The control table concept, by comparison, has no intrinsic language dependencies, but might nevertheless be implemented differently according to the available data definition features of the chosen programming language.)
=== Table content ===
A control table essentially embodies the 'essence' of a conventional program, stripped of its programming language syntax and platform-dependent components (e.g. IF/THEN DO.., FOR.., DO WHILE.., SWITCH, GOTO, CALL) and 'condensed' to its variables (e.g. input1), values (e.g. 'A', 'S', 'M' and 'D'), and subroutine identities (e.g. 'Add', 'Subtract', .. or #1, #2, ..). The structure of the table itself typically implies the (default) logical operations involved – such as 'testing for equality', performing a subroutine, and 'next operation' or following the default sequence (rather than these being explicitly stated within program statements, as required in other programming paradigms).
A multi-dimensional control table will normally, as a minimum, contain value/action pairs and may additionally contain operators and type information such as, the location, size and format of input or output data, whether data conversion (or other run-time processing nuances) is required before or after processing (if not already implicit in the function itself). The table may or may not contain indexes or relative or absolute pointers to generic or customized primitives or subroutines to be executed depending upon other values in the "row".
The table illustrated below applies only to 'input1' since no specific input is specified in the table.
(This side-by-side pairing of value and action has similarities to constructs in event-driven programming, namely 'event-detection' and 'event-handling' but without (necessarily) the asynchronous nature of the event itself)
The variety of values that can be encoded within a control table is largely dependent upon the computer language used. Assembly language provides the widest scope for data types including (for the actions), the option of directly executable machine code. Typically a control table will contain values for each possible matching class of input together with a corresponding pointer to an action subroutine. Some languages claim not to support pointers (directly) but nevertheless can instead support an index which can be used to represent a 'relative subroutine number' to perform conditional execution, controlled by the value in the table entry (e.g. for use in an optimized SWITCH statement – designed with zero gaps i.e. a multiway branch).
Comments positioned above each column (or even embedded textual documentation) can render a decision table 'human readable' even after 'condensing down' (encoding) to its essentials (and still broadly in-line with the original program specification – especially if a printed decision table, enumerating each unique action, is created before coding begins).
The table entries can also optionally contain counters to collect run-time statistics for 'in-flight' or later optimization.
== Table location ==
Control tables can reside in static storage, on auxiliary storage such as a flat file, or in a database, or may alternatively be built partially or entirely dynamically at program initialization time from parameters (which themselves may reside in a table). For optimum efficiency, the table should be memory resident when the interpreter begins to use it.
== The interpreter and subroutines ==
The interpreter can be written in any suitable programming language including a high level language. A suitably designed generic interpreter, together with a well chosen set of generic subroutines (able to process the most commonly occurring primitives), would require additional conventional coding only for new custom subroutines (in addition to specifying the control table itself). The interpreter, optionally, may only apply to some well-defined sections of a complete application program (such as the main control loop) and not other, 'less conditional', sections (such as program initialization, termination and so on).
The interpreter does not need to be unduly complex, or produced by a programmer with the advanced knowledge of a compiler writer, and can be written just as any other application program – except that it is usually designed with efficiency in mind. Its primary function is to "execute" the table entries as a set of "instructions". There need be no requirement for parsing of control table entries and these should therefore be designed, as far as possible, to be 'execution ready', requiring only the "plugging in" of variables from the appropriate columns to the already compiled generic code of the interpreter. The program instructions are, in theory, infinitely extensible and constitute (possibly arbitrary) values within the table that are meaningful only to the interpreter. The control flow of the interpreter is normally by sequential processing of each table row but may be modified by specific actions in the table entries.
These arbitrary values can thus be designed with efficiency in mind – by selecting values that can be used as direct indexes to data or function pointers. For particular platforms/language, they can be specifically designed to minimize instruction path lengths using branch table values or even, in some cases such as in JIT compilers, consist of directly executable machine code "snippets" (or pointers to them).
The subroutines may be coded either in the same language as the interpreter itself or any other supported program language (provided that suitable inter-language 'Call' linkage mechanisms exist). The choice of language for the interpreter and/or subroutines will usually depend upon how portable it needs to be across various platforms. There may be several versions of the interpreter to enhance the portability of a control table. A subordinate control table pointer may optionally substitute for a subroutine pointer in the 'action' columns if the interpreter supports this construct, representing a conditional 'drop' to a lower logical level, mimicking a conventional structured program structure.
== Performance considerations ==
At first sight, the use of control tables would appear to add quite a lot to a program's overhead, requiring, as it does, an interpreter process before the 'native' programming language statements are executed. This however is not always the case. By separating (or 'encapsulating') the executable coding from the logic, as expressed in the table, it can be more readily targeted to perform its function most efficiently. This may be experienced most obviously in a spreadsheet application – where the underlying spreadsheet software transparently converts complex logical 'formulae' in the most efficient manner it is able, in order to display its results.
The examples below have been chosen partly to illustrate potential performance gains that may not only compensate significantly for the additional tier of abstraction, but also improve upon – what otherwise might have been – less efficient, less maintainable and lengthier code. Although the examples given are for a 'low level' assembly language and for the C language, it can be seen, in both cases, that very few lines of code are required to implement the control table approach and yet can achieve very significant constant time performance improvements, reduce repetitive source coding and aid clarity, as compared with verbose conventional program language constructs. See also the quotations by Donald Knuth, concerning tables and the efficiency of multiway branching in this article.
== Examples of control tables ==
The following examples are arbitrary (and based upon just a single input for simplicity), however the intention is merely to demonstrate how control flow can be effected via the use of tables instead of regular program statements. It should be clear that this technique can easily be extended to deal with multiple inputs, either by increasing the number of columns or utilizing multiple table entries (with optional and/or operator). Similarly, by using (hierarchical) 'linked' control tables, structured programming can be accomplished (optionally using indentation to help highlight subordinate control tables).
"CT1" is an example of a control table that is a simple lookup table. The first column represents the input value to be tested (by an implied 'IF input1 = x') and, if TRUE, the corresponding 2nd column (the 'action') contains a subroutine address to perform by a call (or jump to – similar to a SWITCH statement). It is, in effect, a multiway branch with return (a form of "dynamic dispatch"). The last entry is the default case where no match is found.
For programming languages that support pointers within data structures alongside other data values, the above table (CT1) can be used to direct control flow to an appropriate subroutine according to the matching value from the table (without a column to indicate otherwise, equality is assumed in this simple case).
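A Python rendering of CT1 as just described, assuming simple arithmetic subroutines for the 'A', 'S', 'M', 'D' entries; the linear search and fall-through default mirror the table's behaviour:

```python
def add(a, b):      return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b
def divide(a, b):   return a / b
def default(a, b):  raise ValueError("no matching table entry")

# value/action pairs, as in CT1; an unmatched input reaches the default
CT1 = [("A", add), ("S", subtract), ("M", multiply), ("D", divide)]

def interpret(input1, a, b):
    for value, action in CT1:     # simple linear search, as in the example
        if input1 == value:
            return action(a, b)
    return default(a, b)

print(interpret("M", 6, 7))  # → 42
```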
=== Assembly language ===
No attempt is made to optimize the lookup in coding for this first example (for IBM/360 maximum 16 MB address range or z/Architecture); it uses instead a simple linear search technique – purely to illustrate the concept and demonstrate fewer source lines. To handle all 256 different input values, approximately 265 lines of source code would be required (mainly single-line table entries), whereas multiple 'compare and branch' instructions would normally have required around 512 source lines. The size of the binary is also approximately halved, each table entry requiring only 4 bytes instead of approximately 8 bytes for a series of 'compare immediate'/branch instructions (for larger input variables, the saving is even greater).
* ------------------ interpreter --------------------------------------------*
LM R14,R0,=A(4,CT1,N) Set R14=4, R15 --> table, and R0 =no. of entries in table (N)
TRY CLC INPUT1,0(R15) ********* Found value in table entry ?
BE ACTION * loop * YES, Load register pointer to sub-routine from table
AR R15,R14 * * NO, Point to next entry in CT1 by adding R14 (=4)
BCT R0,TRY ********* Back until count exhausted, then drop through
. default action ... none of the values in the table matched; after the loop,
. R15 already points at the default entry (immediately beyond the table end)
ACTION L R15,0(R15) get pointer into R15,from where R15 points
BALR R14,R15 Perform the sub-routine ("CALL" and return)
B END go terminate this program
* ------------------ control table -----------------------------------------*
* | this column of allowable EBCDIC or ASCII values is tested '=' against variable 'input1'
* | | this column is the 3-byte address of the appropriate subroutine
* v v
CT1 DC C'A',AL3(ADD) START of Control Table (4 byte entry length)
DC C'S',AL3(SUBTRACT)
DC C'M',AL3(MULTIPLY)
DC C'D',AL3(DIVIDE)
N EQU (*-CT1)/4 number of valid entries in table (total length / entry length)
DC C'?',AL3(DEFAULT) default entry – used on drop through to catch all
INPUT1 DS C input variable is in this variable
* ------------------ sub-routines ------------------------------------------*
ADD CSECT sub-routine #1 (shown as separate CSECT here but might
. alternatively be in-line code)
. instruction(s) to add
BR R14 return
SUBTRACT CSECT sub-routine #2
. instruction(s) to subtract
BR R14 return
. etc..
Improving the performance of the interpreter in the above example
To make a selection in the example above, the average instruction path length (excluding the subroutine code) is '4n/2 +3', but can easily be reduced, where n = 1 to 64, to a constant time
O(1)
with a path length of '5' with zero comparisons, if a 256-byte translate table is first utilized to create a direct index to CT1 from the raw EBCDIC data. Where n = 6, this would then be equivalent to just 3 sequential compare and branch instructions. However, where n ≤ 64, on average it would need approximately 13 times fewer instructions than using multiple compares. Where n = 1 to 256, on average it would use approximately 42 times fewer instructions – since, in this case, one additional instruction would be required (to multiply the index by 4).
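The translate-table idea can be sketched language-neutrally in Python: a 256-entry array maps the raw character code to a dense index (0 selecting the default), which in turn selects the subroutine, giving constant-time dispatch with no comparisons (the handler functions here are illustrative):

```python
def default(a, b):  raise ValueError("no matching table entry")
def add(a, b):      return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b
def divide(a, b):   return a / b

CT1X = [0] * 256                    # raw byte -> dense index (0 = default)
for index, ch in enumerate("ASMD", start=1):
    CT1X[ord(ch)] = index

CT1 = [default, add, subtract, multiply, divide]

def interpret(input1, a, b):
    # two indexed loads, no comparisons: translate, then dispatch
    return CT1[CT1X[ord(input1)]](a, b)

print(interpret("D", 84, 2))  # → 42.0
```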
Improved interpreter (up to 26 times fewer executed instructions than the above example on average, where n = 1 to 64, and up to 13 times fewer than would be needed using multiple comparisons).
To handle 64 different input values, approximately 85 lines of source code (or fewer) are required (mainly single-line table entries), whereas multiple 'compare and branch' would require around 128 lines (the size of the binary is also almost halved – despite the additional 256-byte table required to extract the 2nd index).
* ------------------ interpreter --------------------------------------------*
SR R14,R14 ********* Set R14=0
CALC IC R14,INPUT1 * calc * put EBCDIC byte into lo order bits (24–31) of R14
IC R14,CT1X(R14) * * use EBCDIC value as index on table 'CT1X' to get new index
FOUND L R15,CT1(R14) ********* get pointer to subroutine using index (0,4, 8 etc.)
BALR R14,R15 Perform the sub-routine ("CALL" and return or Default)
B END go terminate this program
* --------------- additional translate table (EBCDIC --> pointer table INDEX) 256 bytes----*
CT1X DC 12AL1(00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00) 12 identical sets of 16 bytes of x'00'
* representing X'00 – x'BF'
DC AL1(00,04,00,00,16,00,00,00,00,00,00,00,00,00,00,00) ..x'C0' – X'CF'
DC AL1(00,00,00,00,12,00,00,00,00,00,00,00,00,00,00,00) ..x'D0' – X'DF'
DC AL1(00,00,08,00,00,00,00,00,00,00,00,00,00,00,00,00) ..x'E0' – X'EF'
DC AL1(00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00) ..x'F0' – X'FF'
* the assembler can be used to automatically calculate the index values and make the values more user friendly
* (for e.g. '04' could be replaced with the symbolic expression 'PADD-CT1' in table CT1X above)
* modified CT1 (added a default action when index = 00, single dimension, full 31 bit address)
CT1 DC A(DEFAULT) index =00 START of Control Table (4 byte address constants)
PADD DC A(ADD) =04
PSUB DC A(SUBTRACT) =08
PMUL DC A(MULTIPLY) =12
PDIV DC A(DIVIDE) =16
* the rest of the code remains the same as first example
Further improved interpreter (up to 21 times fewer executed instructions than the first example on average, where n ≥ 64, and up to 42 times fewer than would be needed using multiple comparisons).
To handle 256 different input values, approximately 280 lines of source code or fewer would be required (mainly single-line table entries), whereas multiple 'compare and branch' would require around 512 lines (the size of the binary is also almost halved once more).
* ------------------ interpreter --------------------------------------------*
SR R14,R14 ********* Set R14=0
CALC IC R14,INPUT1 * calc * put EBCDIC byte into lo order bits (24–31) of R14
IC R14,CT1X(R14) * * use EBCDIC value as index on table 'CT1X' to get new index
SLL R14,2 * * multiply index by 4 (additional instruction)
FOUND L R15,CT1(R14) ********* get pointer to subroutine using index (0,4, 8 etc.)
BALR R14,R15 Perform the sub-routine ("CALL" and return or Default)
B END go terminate this program
* --------------- additional translate table (EBCDIC --> pointer table INDEX) 256 bytes----*
CT1X DC 12AL1(00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00) 12 identical sets of 16 bytes of x'00'
* representing X'00 – x'BF'
DC AL1(00,01,00,00,04,00,00,00,00,00,00,00,00,00,00,00) ..x'C0' – X'CF'
DC AL1(00,00,00,00,03,00,00,00,00,00,00,00,00,00,00,00) ..x'D0' – X'DF'
DC AL1(00,00,02,00,00,00,00,00,00,00,00,00,00,00,00,00) ..x'E0' – X'EF'
DC AL1(00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00) ..x'F0' – X'FF'
* the assembler can be used to automatically calculate the index values and make the values more user friendly
* (for e.g. '01' could be replaced with the symbolic expression 'PADD-CT1/4' in table CT1X above)
* modified CT1 (index now based on 0,1,2,3,4 not 0,4,8,12,16 to allow all 256 variations)
CT1 DC A(DEFAULT) index =00 START of Control Table (4 byte address constants)
PADD DC A(ADD) =01
PSUB DC A(SUBTRACT) =02
PMUL DC A(MULTIPLY) =03
PDIV DC A(DIVIDE) =04
* the rest of the code remains the same as the 2nd example
=== C language ===
This example in C uses two tables, the first (CT1) is a simple linear search one-dimensional lookup table – to obtain an index by matching the input (x), and the second, associated table (CT1p), is a table of addresses of labels to jump to.
This can be made more efficient if a 256 byte table is used to translate the raw ASCII value (x) directly to a dense sequential index value for use in directly locating the branch address from CT1p (i.e. "index mapping" with a byte-wide array). It will then execute in constant time for all possible values of x (If CT1p contained the names of functions instead of labels, the jump could be replaced with a dynamic function call, eliminating the switch-like goto – but decreasing performance by the additional cost of function housekeeping).
The next example below illustrates how a similar effect can be achieved in languages that do not support pointer definitions in data structures but do support indexed branching to a subroutine – contained within a (0-based) array of subroutine pointers. The table (CT2) is used to extract the index (from 2nd column) to the pointer array (CT2P). If pointer arrays are not supported, a SWITCH statement or equivalent can be used to alter the control flow to one of a sequence of program labels (e.g.: case0, case1, case2, case3, case4) which then either process the input directly, or else perform a call (with return) to the appropriate subroutine (default, Add, Subtract, Multiply or Divide,..) to deal with it.
As in above examples, it is possible to very efficiently translate the potential ASCII input values (A,S,M,D or unknown) into a pointer array index without actually using a table lookup, but is shown here as a table for consistency with the first example.
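The two-table scheme just described – a linear-search control table yielding an index into a 0-based array of subroutine pointers – can be sketched like this (Python rather than one of the article's languages; the subroutine bodies are placeholders):

```python
# Sketch of the CT2/CT2P scheme: CT2 pairs each input with an index
# (the "2nd column"); CT2P is the 0-based array of subroutine pointers.
# Subroutine bodies are placeholders for illustration.

def default():  return "?"
def add():      return "+"
def subtract(): return "-"
def multiply(): return "*"
def divide():   return "/"

CT2 = [("A", 1), ("S", 2), ("M", 3), ("D", 4)]     # input -> index pairs
CT2P = [default, add, subtract, multiply, divide]   # subroutine pointers

def interpret(x):
    index = 0                    # falls through to default if no match
    for key, i in CT2:           # linear search of the control table
        if key == x:
            index = i
            break
    return CT2P[index]()         # indexed branch to the subroutine
```

If pointer arrays were unavailable, the final indexed call would be replaced by a switch over the index values, as the text describes.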
Multi-dimensional control tables can be constructed (i.e. customized) that are more complex than the above examples, testing multiple conditions on multiple inputs or performing more than one 'action' based on some matching criteria. An 'action' can include a pointer to another subordinate control table. The simple example below incorporates an implicit 'OR' condition as an extra column (to handle lower-case input; in this instance, however, the same effect could have been achieved simply by adding an entry for each lower-case character specifying the same subroutine identifier as its upper-case counterpart). An extra column that counts the actual run-time events for each input as they occur is also included.
The control table entries are then much more similar to conditional statements in procedural languages but, crucially, without the actual (language dependent) conditional statements (i.e. instructions) being present (the generic code is physically in the interpreter that processes the table entries, not in the table itself – which simply embodies the program logic via its structure and values).
In tables such as these, where a series of similar table entries defines the entire logic, a table entry number or pointer may effectively take the place of a program counter in more conventional programs, and may be reset in an 'action' also specified in the table entry. The example below (CT4) shows how extending the earlier table to include a 'next' entry (and/or an 'alter flow' (jump) subroutine) can create a loop. (This example is not the most efficient way to construct such a control table but, by demonstrating a gradual 'evolution' from the first examples above, it shows how additional columns can be used to modify behaviour.) The fifth column demonstrates that more than one action can be initiated with a single table entry – in this case an action to be performed after the normal processing of each entry ('-' values mean 'no conditions' or 'no action').
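A minimal interpreter for a table of this kind might look like the following sketch (Python; the entry values, the 'OR' column for lower case, the 'next entry' column and the run-time count column are all illustrative, not the article's actual CT4):

```python
# Sketch of a multi-dimensional control table: each entry holds a match
# value, an alternate ('OR') match for lower case, a handler, and a
# 'next entry' column; a parallel counts column records run-time events.
# All entry values are illustrative.

counts = [0, 0, 0]   # run-time event count per table entry

def on_add(ch):   return "add"
def on_sub(ch):   return "subtract"
def on_other(ch): return "default"

CT3 = [
    # (match, alternate match, handler, next entry after a match)
    ("A", "a", on_add, None),
    ("S", "s", on_sub, None),
    (None, None, on_other, None),  # '-' row: no conditions, catch-all
]

def run(ch):
    entry = 0                       # table-entry pointer ("program counter")
    while True:
        match, alt, handler, nxt = CT3[entry]
        if match is None or ch in (match, alt):
            counts[entry] += 1      # extra column: count this event
            result = handler(ch)
            if nxt is None:         # a non-None value would 'alter flow'
                return result
            entry = nxt
        else:
            entry += 1              # advance to the next table entry
```

The generic conditional logic lives entirely in `run` (the interpreter); the table itself holds only data.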
Structured programming or "Goto-less" code, (incorporating the equivalent of 'DO WHILE' or 'for loop' constructs), can also be accommodated with suitably designed and 'indented' control table structures.
=== Table-driven rating ===
In the specialist field of telecommunications rating (concerned with determining the cost of a particular call),
table-driven rating techniques illustrate the use of control tables in applications where the rules may change frequently because of market forces. The tables that determine the charges may be changed at short notice by non-programmers in many cases.
If the algorithms are not pre-built into the interpreter (and therefore require additional runtime interpretation of an expression held in the table), it is known as "Rule-based Rating" rather than table-driven rating (and consequently consumes significantly more overhead).
=== Spreadsheets ===
A spreadsheet data sheet can be thought of as a two-dimensional control table, with the non-empty cells representing data to the underlying spreadsheet program (the interpreter). The cells containing formulae are usually prefixed with an equals sign and simply designate a special type of data input that dictates the processing of other referenced cells – by altering the control flow within the interpreter. It is the externalization of formulae from the underlying interpreter that clearly identifies both spreadsheets and the above-cited "rule-based rating" example as readily identifiable instances of the use of control tables by non-programmers.
== Programming paradigm ==
If the control tables technique could be said to belong to any particular programming paradigm, the closest analogy might be automata-based programming or "reflective" (a form of metaprogramming – since the table entries could be said to 'modify' the behaviour of the interpreter). The interpreter itself however, and the subroutines, can be programmed using any one of the available paradigms or even a mixture. The table itself can be essentially a collection of "raw data" values that do not even need to be compiled and could be read in from an external source (except in specific, platform dependent, implementations using memory pointers directly for greater efficiency).
== Analogy to bytecode / virtual machine instruction set ==
A multi-dimensional control table has some conceptual similarities to bytecode operating on a virtual machine, in that a platform dependent "interpreter" program is usually required to perform the actual execution (that is largely conditionally determined by the tables content). There are also some conceptual similarities to the recent Common Intermediate Language (CIL) in the aim of creating a common intermediate 'instruction set' that is independent of platform (but unlike CIL, no pretensions to be used as a common resource for other languages). P-code can also be considered a similar but earlier implementation with origins as far back as 1966.
== Instruction fetch ==
When a multi-dimensional control table is used to determine program flow, the normal "hardware" program counter function is effectively simulated with either a pointer to the first (or next) table entry or else an index to it. "Fetching" the instruction involves decoding the data in that table entry – without necessarily copying all or some of the data within the entry first. Programming languages that are able to use pointers have the dual advantage that less overhead is involved, both in accessing the contents and in advancing the counter to point to the next table entry after execution. Calculating the next 'instruction' address (i.e. table entry) can even be performed as an optional additional action of every individual table entry, allowing loops and/or jump instructions at any stage.
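The simulated program counter can be illustrated with a toy fetch–decode–execute loop (a Python sketch; the four-opcode instruction set is an assumption for illustration):

```python
# Toy fetch-decode-execute loop: the table-entry index 'pc' plays the role
# of the hardware program counter; a 'jump' action recalculates it.
# The four-opcode instruction set is an assumption for illustration.

def execute(program):
    pc, stack = 0, []
    while True:
        op, arg = program[pc]         # "fetch" = decode the table entry
        if op == "push":
            stack.append(arg); pc += 1
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b); pc += 1
        elif op == "jump":
            pc = arg                  # next-entry calculation as an action
        elif op == "halt":
            return stack

program = [("push", 2), ("push", 3), ("add", None), ("halt", None)]
```

Ordinary entries simply advance `pc`; a `jump` entry computes the next table entry itself, which is what permits loops.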
== Monitoring control table execution ==
The interpreter program can optionally save the program counter (and other relevant details depending upon instruction type) at each stage to record a full or partial trace of the actual program flow for debugging purposes, hot spot detection, code coverage analysis and performance analysis (see examples CT3 & CT4 above).
== Advantages ==
clarity – information tables are ubiquitous and mostly inherently understood even by the general public (especially fault diagnostic tables in product guides)
portability – can be designed to be 100% language independent (and platform independent – except for the interpreter)
flexibility – ability to execute either primitives or subroutines transparently and be custom designed to suit the problem
compactness – table usually shows condition/action pairing side-by-side (without the usual platform/language implementation dependencies), often also resulting in
binary file – reduced in size through less duplication of instructions
source file – reduced in size through elimination of multiple conditional statements
improved program load (or download) speeds
maintainability – tables often reduce the number of source lines needed to be maintained v. multiple compares
locality of reference – compact table structures result in tables remaining in cache
code re-use – the "interpreter" is usually reusable. Frequently it can be easily adapted to new programming tasks using precisely the same technique and can grow 'organically' becoming, in effect, a standard library of tried and tested subroutines, controlled by the table definitions.
efficiency – systemwide optimization possible. Any performance improvement to the interpreter usually improves all applications using it (see examples in 'CT1' above).
extensible – new 'instructions' can be added – simply by extending the interpreter
interpreter can be written like an application program
Optionally:
the interpreter can be introspective and "self optimize" using runtime metrics collected within the table itself (see CT3 and CT4 – with entries that could be periodically sorted by descending count). The interpreter can also optionally choose the most efficient lookup technique dynamically from metrics gathered at run-time (e.g. size of array, range of values, sorted or unsorted)
dynamic dispatch – common functions can be pre-loaded and less common functions fetched only on first encounter to reduce memory usage. In-table memoization can be employed to achieve this.
The interpreter can have debugging, trace and monitor features built-in – that can then be switched on or off at will according to test or 'live' mode
control tables can be built 'on-the-fly' (according to some user input or from parameters) and then executed by the interpreter (without building code literally).
== Disadvantages ==
training requirement – application programmers are not usually trained to produce generic solutions
The following mainly apply to their use in multi-dimensional tables, not the one-dimensional tables discussed earlier.
overhead – some increase because of extra level of indirection caused by virtual instructions having to be 'interpreted' (this however can usually be more than offset by a well designed generic interpreter taking full advantage of efficient direct translate, search and conditional testing techniques that may not otherwise have been utilized)
Complex expressions cannot always be used directly in data table entries for comparison purposes
(these 'intermediate values' can however be calculated beforehand instead within a subroutine and their values referred to in the conditional table entries. Alternatively, a subroutine can perform the complete complex conditional test (as an unconditional 'action') and, by setting a truth flag as its result, it can then be tested in the next table entry. See Structured program theorem)
== Quotations ==
Multiway branching is an important programming technique which is all too often replaced by an inefficient sequence of if tests. Peter Naur recently wrote me that he considers the use of tables to control program flow as a basic idea of computer science that has been nearly forgotten; but he expects it will be ripe for rediscovery any day now. It is the key to efficiency in all the best compilers I have studied.
There is another way to look at a program written in interpretative language. It may be regarded as a series of subroutine calls, one after another. Such a program may in fact be expanded into a long sequence of calls on subroutines, and, conversely, such a sequence can usually be packed into a coded form that is readily interpreted. The advantages of interpretive techniques are the compactness of representation, the machine independence, and the increased diagnostic capability. An interpreter can often be written so that the amount of time spent in interpretation of the code itself and branching to the appropriate routine is negligible.
The space required to represent a program can often be decreased by the use of interpreters in which common sequences of operations are represented compactly. A typical example is the use of a finite-state machine to encode a complex protocol or lexical format into a small table.
Jump tables can be especially efficient if the range tests can be omitted. For example, if the control value is an enumerated type (or a character) then it can only contain a small fixed range of values and a range test is redundant provided the jump table is large enough to handle all possible values.
Programs must be written for people to read, and only incidentally for machines to execute.
Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious.
== See also ==
Database-centric architecture
Data-driven testing
Keyword-driven testing
Threaded code
Token threading
== Notes ==
== References ==
Decision Table Based Methodology
Structured Programming with go to Statements by Donald Knuth
Compiler code generation for multiway branch statements as a static search problem, 1994, by David A. Spuler
== External links ==
Switch statement in Windows PowerShell describes extensions to standard switch statement (providing some similar features to control tables)
Control Table example in "C" language using pointers, by Christopher Sawtell c1993, Department of Computer Science, University of Auckland
Table driven design Archived 10 June 2016 at the Wayback Machine by Wayne Cunneyworth of DataKinetics
From Requirements to Tables to Code and Tests By George Brooke
Some comments on the use of ambiguous decision tables and their conversion to computer programs by P. J. H. King and R. G. Johnson, Univ. of London, London, UK
Ambiguity in limited entry decision tables by P. J. H. King
Conversion of decision tables to computer programs by rule mask techniques by P. J. H. King
A Superoptimizer Analysis of Multiway Branch Code Generation Archived 27 February 2012 at the Wayback Machine section 3.9, page 16 index mapping
Jump Tables via Function Pointer Arrays in C/C++ – Jones, Nigel. "Arrays of Pointers to Functions", Embedded Systems Programming, May 1999.
Modelling software with finite state machines – a practical approach
Finite State Tables for General Computer Programming Applications January 1988 by Mark Leininger
MSDN:Trigger-Based Event Processing
Control Table in c2.com
In mathematical optimization and computer science, heuristic (from Greek εὑρίσκω "I find, discover") is a technique designed for problem solving more quickly when classic methods are too slow for finding an exact or approximate solution, or when classic methods fail to find any exact solution in a search space. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut.
A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.
== Definition and motivation ==
The objective of a heuristic is to produce a solution in a reasonable time frame that is good enough for solving the problem at hand. This solution may not be the best of all the solutions to this problem, or it may simply approximate the exact solution. But it is still valuable because finding it does not require a prohibitively long time.
Heuristics may produce results by themselves, or they may be used in conjunction with optimization algorithms to improve their efficiency (e.g., they may be used to generate good seed values).
Results about NP-hardness in theoretical computer science make heuristics the only viable option for a variety of complex optimization problems that need to be routinely solved in real-world applications.
Heuristics underlie the whole field of Artificial Intelligence and the computer simulation of thinking, as they may be used in situations where there are no known algorithms.
== Examples ==
=== Simpler problem ===
One way of achieving the computational performance gain expected of a heuristic consists of solving a simpler problem whose solution is also a solution to the initial problem.
=== Travelling salesman problem ===
An example of approximation is described by Jon Bentley for solving the travelling salesman problem (TSP):
"Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?"
so as to select the order to draw using a pen plotter. TSP is known to be NP-hard, so finding an optimal solution for even a moderately sized problem is impractical. Instead, the greedy algorithm can be used to give a good but not optimal solution (it is an approximation to the optimal answer) in a reasonably short amount of time. The greedy algorithm heuristic says to pick whatever is currently the best next step regardless of whether that prevents (or even makes impossible) good steps later. It is a heuristic in the sense that practice indicates it is a good enough solution, while theory indicates that there are better solutions (and even indicates how much better, in some cases).
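A minimal nearest-neighbour version of this greedy heuristic can be sketched as follows (a Python sketch; the function name and coordinate representation are illustrative, not from Bentley's text):

```python
import math

# Nearest-neighbour greedy heuristic for TSP: always visit the closest
# unvisited city next. Fast, but not guaranteed optimal.

def greedy_tour(cities):
    unvisited = set(range(1, len(cities)))
    tour = [0]                        # start (arbitrarily) at city 0
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour                       # visiting order; close loop at tour[0]
```

Each step is locally best, but an early choice can force long edges later – exactly the trade-off the text describes.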
=== Search ===
Another example of a heuristic making an algorithm faster occurs in certain search problems. Initially, the heuristic tries every possibility at each step, like the full-space search algorithm. But it can stop the search at any time if the current possibility is already worse than the best solution already found. In such search problems, a heuristic can be used to try good choices first so that bad paths can be eliminated early (see alpha–beta pruning). In the case of best-first search algorithms, such as A* search, the heuristic improves the algorithm's convergence while maintaining its correctness as long as the heuristic is admissible.
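The "stop as soon as the current possibility is already worse than the best solution found" idea can be sketched as a depth-first search with a cost bound (a Python sketch; the graph encoding and function name are assumptions for illustration):

```python
# Depth-first search that abandons a partial path as soon as its cost
# already equals or exceeds the best complete solution found so far.

def cheapest_path_cost(graph, start, goal):
    best = [float("inf")]

    def dfs(node, cost, visited):
        if cost >= best[0]:           # prune: already worse than the best
            return
        if node == goal:
            best[0] = cost
            return
        for nxt, weight in graph.get(node, []):
            if nxt not in visited:
                dfs(nxt, cost + weight, visited | {nxt})

    dfs(start, 0, {start})
    return best[0]
```

Without the pruning test the search is exhaustive; with it, whole subtrees are eliminated as soon as they cannot improve on the incumbent solution.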
=== Newell and Simon: heuristic search hypothesis ===
In their Turing Award acceptance speech, Allen Newell and Herbert A. Simon discuss the heuristic search hypothesis: a physical symbol system will repeatedly generate and modify known symbol structures until the created structure matches the solution structure. Each following step depends upon the step before it, thus the heuristic search learns what avenues to pursue and which ones to disregard by measuring how close the current step is to the solution. Therefore, some possibilities will never be generated as they are measured to be less likely to complete the solution.
A heuristic method can accomplish its task by using search trees. However, instead of generating all possible solution branches, a heuristic selects branches more likely to produce outcomes than other branches. It is selective at each decision point, picking branches that are more likely to produce solutions.
=== Antivirus software ===
Antivirus software often uses heuristic rules for detecting viruses and other forms of malware. Heuristic scanning looks for code and/or behavioral patterns common to a class or family of viruses, with different sets of rules for different viruses. If a file or executing process is found to contain matching code patterns and/or to be performing that set of activities, then the scanner infers that the file is infected. The most advanced part of behavior-based heuristic scanning is that it can work against highly randomized self-modifying/mutating (polymorphic) viruses that cannot be easily detected by simpler string scanning methods. Heuristic scanning has the potential to detect future viruses without requiring the virus to be first detected somewhere else, submitted to the virus scanner developer, analyzed, and a detection update for the scanner provided to the scanner's users.
== Pitfalls ==
Some heuristics have a strong underlying theory; they are either derived in a top-down manner from the theory or are arrived at based on either experimental or real world data. Others are just rules of thumb based on real-world observation or experience without even a glimpse of theory. The latter are exposed to a larger number of pitfalls.
When a heuristic is reused in various contexts because it has been seen to "work" in one context, without having been mathematically proven to meet a given set of requirements, it is possible that the current data set does not necessarily represent future data sets (see: overfitting) and that purported "solutions" turn out to be akin to noise.
Statistical analysis can be conducted when employing heuristics to estimate the probability of incorrect outcomes. To use a heuristic for solving a search problem or a knapsack problem, it is necessary to check that the heuristic is admissible. Given a heuristic function {\displaystyle h(v_{i},v_{g})} meant to approximate the true optimal distance {\displaystyle d^{\star }(v_{i},v_{g})} to the goal node {\displaystyle v_{g}} in a directed graph {\displaystyle G} containing {\displaystyle n} total nodes or vertices labeled {\displaystyle v_{0},v_{1},\cdots ,v_{n}}, "admissible" means roughly that the heuristic underestimates the cost to the goal, or formally that {\displaystyle h(v_{i},v_{g})\leq d^{\star }(v_{i},v_{g})} for all {\displaystyle (v_{i},v_{g})} where {\displaystyle i,g\in \{0,1,\ldots ,n\}}.
If a heuristic is not admissible, it may never find the goal, either by ending up in a dead end of graph {\displaystyle G} or by skipping back and forth between two nodes {\displaystyle v_{i}} and {\displaystyle v_{j}} where {\displaystyle i,j\neq g}.
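On a small graph, admissibility can be checked by brute force: compute the true distances d* to the goal (for example with Dijkstra's algorithm on the reversed graph) and verify that h never exceeds them. A Python sketch (the graph encoding and heuristic values are illustrative):

```python
import heapq

# Brute-force admissibility check: h is admissible iff h(v) <= d*(v, goal)
# for every node v, where d* is the true shortest distance to the goal.

def true_distances_to(goal, graph):
    # Dijkstra on the reversed graph yields d*(v, goal) for every v.
    rev = {}
    for u, edges in graph.items():
        for v, w in edges:
            rev.setdefault(v, []).append((u, w))
    dist, heap = {goal: 0}, [(0, goal)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def is_admissible(h, goal, graph):
    dstar = true_distances_to(goal, graph)
    return all(h[v] <= d for v, d in dstar.items())
```

This is only practical for small graphs; in general, admissibility is argued analytically rather than checked exhaustively.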
== Etymology ==
The word "heuristic" came into usage in the early 19th century. It is formed irregularly from the Greek word heuriskein, meaning "to find".
== See also ==
Constructive heuristic
Metaheuristic: Methods for controlling and tuning basic heuristic algorithms, usually with usage of memory and learning.
Matheuristics: Optimization algorithms made by the interoperation of metaheuristics and mathematical programming (MP) techniques.
Reactive search optimization: Methods using online machine learning principles for self-tuning of heuristics.
== References ==
An algorithm is an unambiguous method of solving a specific problem.
Algorithm or algorhythm may also refer to:
== People ==
Al-Khwarizmi, 8th century Persian originator of algebra, whose name (romanized variously as Algorithm, Alghoarism, Algorism, etc.) is the etymological origin of algorithm
== Music ==
The Algorithm, a French musical project
Algorithm (My Heart to Fear album), 2013, or the title track
Algorithm (Lucky Daye album), 2024
Algorithm, a 2024 song by South Korean singer Chung Ha
The Algorithm (Filter album), 2023
Algorythm (album), a 2018 album by Beyond Creation
Algorhythm (Todrick Hall album), a 2022 album by Todrick Hall
Snoop Dogg Presents Algorithm, a 2021 compilation album by Snoop Dogg
Algorythm, a 1994 album by Boxcar
"Algorithm" (song), a 2018 song by Muse
"Algorhythm", a 2018 song by Childish Gambino
Algorythm Records, an imprint of the drum and bass group Counterstrike
"Algorythm", a 2024 song by South Korean girl group Itzy
== Other ==
Algorithm (C++), a C++ Standard Library header that provides implementations of common algorithms
Algorithms (journal), a technical periodical
A temporal inversion device in the 2020 science-fiction film Tenet
Recommendation algorithms (often called informally simply as "the algorithm" or "algorithms"), used by social media websites for personalization
== See also ==
Algorithmic (disambiguation)
Algorism
Ruleset (disambiguation)
Introduction to Algorithms is a book on computer programming by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. The book is described by its publisher as "the leading algorithms text in universities worldwide as well as the standard reference for professionals". It is commonly cited as a reference for algorithms in published papers, with over 10,000 citations documented on CiteSeerX, and over 70,000 citations on Google Scholar as of 2024. The book sold half a million copies during its first 20 years, and surpassed a million copies sold in 2022. Its fame has led to the common use of the abbreviation "CLRS" (Cormen, Leiserson, Rivest, Stein), or, in the first edition, "CLR" (Cormen, Leiserson, Rivest).
In the preface, the authors write about how the book was written to be comprehensive and useful in both teaching and professional environments. Each chapter focuses on an algorithm, and discusses its design techniques and areas of application. Instead of using a specific programming language, the algorithms are written in pseudocode. The descriptions focus on the aspects of the algorithm itself, its mathematical properties, and emphasize efficiency.
== Editions ==
The first edition of the textbook did not include Stein as an author, and thus the book became known by the initialism CLR. It included two chapters ("Arithmetic Circuits" & "Algorithms for Parallel Computers") that were dropped in the second edition. After the addition of the fourth author in the second edition, many began to refer to the book as "CLRS". This first edition of the book was also known as "The Big White Book (of Algorithms)." With the second edition, the predominant color of the cover changed to green, causing the nickname to be shortened to just "The Big Book (of Algorithms)." The third edition was published in August 2009. The fourth edition was published in April 2022, which has colors added to improve visual presentations.
== Cover design ==
The mobile depicted on the cover, Big Red (1959) by Alexander Calder, can be found at the Whitney Museum of American Art in New York City.
== Publication history ==
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03293-7. 12 printings up to 2009, errata:
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03384-4. 1320 pp., 5 printings up to 2016, errata:
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2022) [1990]. Introduction to Algorithms (4th ed.). MIT Press and McGraw-Hill. ISBN 0-262-04630-X. 1312 pp., errata:
== Reviews ==
Akl, Selim G. (1991). "Review of 1st edition". Mathematical Reviews. MR 1066870.
Mann, C. J. H. (April 2003). "New edition of algorithms book [review of 2nd edition]". Kybernetes. 32 (3). doi:10.1108/k.2003.06732cae.004.
Thimbleby, Harold (December 3, 2009). "No excuse to be illiterate about IT [review of 3rd edition]". Times Higher Education.
El-Sharoud, Walid (September 2019). "Review of 3rd edition". Science Progress. 102 (3): 278–279. doi:10.1177/0036850419873799b. PMC 10424523.
== See also ==
The Art of Computer Programming
== References ==
== External links ==
Official website on MIT Press
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary to both encourage AI and manage associated risks, but challenging. Another emerging topic is the regulation of blockchain algorithms (use of smart contracts must be regulated), which is mentioned along with regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting into the realm of AI algorithms as technology progresses.
The motivation for regulation of algorithms is the apprehension of losing control over the algorithms, whose impact on human life increases. Multiple countries have already introduced regulations in the case of automated credit score calculation – the right to explanation is mandatory for those algorithms. For example, the IEEE has begun developing a new standard to explicitly address ethical issues and the values of potential future users. Concerns over bias, transparency, and ethics have emerged with respect to the use of algorithms in diverse domains ranging from criminal justice to healthcare – many fear that artificial intelligence could replicate existing social inequalities along race, class, gender, and sexuality lines.
== Regulation of artificial intelligence ==
=== Public discussion ===
In 2016, Joy Buolamwini founded Algorithmic Justice League after a personal experience with biased facial detection software in order to raise awareness of the social implications of artificial intelligence through art and research.
In 2017 Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation."
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. One suggestion has been for the development of a global governance board to regulate AI development. In 2020, the European Union published its draft strategy paper for promoting and regulating AI.
Algorithmic tacit collusion is a legally dubious antitrust practise committed by means of algorithms, which the courts are not able to prosecute. This danger concerns scientists and regulators in EU, US and beyond. European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows:
"A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival’s price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy."
In 2018, the Netherlands employed an algorithmic system SyRI (Systeem Risico Indicatie) to detect citizens perceived as being at high risk of committing welfare fraud, which quietly flagged thousands of people to investigators. This caused a public protest. The district court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR).
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm." This protest was successful and the grades were taken back.
=== Implementation ===
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national, and international levels and in fields from public service management to law enforcement, the financial sector, robotics, the military, and international law. There are many concerns that there is not enough visibility and monitoring of AI in these sectors. In the United States financial sector, for example, there have been calls for the Consumer Financial Protection Bureau to more closely examine source code and algorithms when conducting audits of financial institutions' non-public data.
In the United States, on January 7, 2019, following an Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, the National Security Commission on Artificial Intelligence has published an interim report, and the Defense Innovation Board has issued recommendations on the ethical use of AI.
In April 2016, for the first time in more than two decades, the European Parliament adopted a set of comprehensive regulations for the collection, storage, and use of personal information: the General Data Protection Regulation (GDPR) (European Union, Parliament and Council 2016). The GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design.
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation. In the United States, steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.
In 2017, the U.K. Vehicle Technology and Aviation Bill imposed liability on the owner of an uninsured automated vehicle when driving itself, and made provisions for cases where the owner has made "unauthorized alterations" to the vehicle or failed to update its software. Further ethical issues arise when, e.g., a self-driving car swerves to avoid a pedestrian and causes a fatal accident.
In 2021, the European Commission proposed the Artificial Intelligence Act.
== Algorithm certification ==
There is a concept of algorithm certification emerging as a method of regulating algorithms. Algorithm certification involves auditing whether the algorithm used during the life cycle 1) conforms to the protocoled requirements (e.g., for correctness, completeness, consistency, and accuracy); 2) satisfies the standards, practices, and conventions; and 3) solves the right problem (e.g., correctly models physical laws), and satisfies the intended use and user needs in the operational environment.
== Regulation of blockchain algorithms ==
Blockchain systems provide transparent and fixed records of transactions and hereby contradict the goal of the European GDPR, which is to give individuals full control of their private data.
By implementing the Decree on Development of Digital Economy, Belarus became the first country to legalize smart contracts. Belarusian lawyer Denis Aleinikov is considered to be the author of the smart contract legal concept introduced by the decree. There are strong arguments that existing US state laws are already a sound basis for the enforceability of smart contracts; nevertheless, Arizona, Nevada, Ohio and Tennessee have amended their laws specifically to allow for the enforceability of blockchain-based contracts.
== Regulation of robots and autonomous algorithms ==
There have been proposals to regulate robots and autonomous algorithms. These include:
the South Korean Government's proposal in 2007 of a Robot Ethics Charter;
a 2011 proposal from the U.K. Engineering and Physical Sciences Research Council of five ethical “principles for designers, builders, and users of robots”;
the Association for Computing Machinery's seven principles for algorithmic transparency and accountability, published in 2017.
== In popular culture ==
In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The main alternative to regulation is a ban, and the banning of algorithms is presently highly unlikely. However, in Frank Herbert's Dune universe, "thinking machines" is a collective term for artificial intelligences, which were completely destroyed and banned after a revolt known as the Butlerian Jihad:
JIHAD, BUTLERIAN: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind."
== See also ==
Algorithmic transparency
Algorithmic accountability
Artificial intelligence
Artificial intelligence arms race
Artificial intelligence in government
Ethics of artificial intelligence
Government by algorithm
Privacy law
== References ==
In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes, typically just one. These algorithms are designed to operate with limited memory, generally logarithmic in the size of the stream and/or in the maximum value in the stream, and may also have limited processing time per item.
As a result of these constraints, streaming algorithms often produce approximate answers based on a summary or "sketch" of the data stream.
== History ==
Though streaming algorithms had already been studied by Munro and Paterson as early as 1978, as well as Philippe Flajolet and G. Nigel Martin in 1982/83, the field of streaming algorithms was first formalized and popularized in a 1996 paper by Noga Alon, Yossi Matias, and Mario Szegedy. For this paper, the authors later won the Gödel Prize in 2005 "for their foundational contribution to streaming algorithms." There has since been a large body of work centered around data streaming algorithms that spans a diverse spectrum of computer science fields such as theory, databases, networking, and natural language processing.
Semi-streaming algorithms were introduced in 2005 as a relaxation of streaming algorithms for graphs, in which the space allowed is linear in the number of vertices n, but only logarithmic in the number of edges m. This relaxation is still meaningful for dense graphs, and can solve interesting problems (such as connectivity) that are insoluble in o(n) space.
== Models ==
=== Data stream model ===
In the data stream model, some or all of the input is represented as a finite sequence of integers (from some finite domain) which is generally not available for random access, but instead arrives one at a time in a "stream". If the stream has length n and the domain has size m, algorithms are generally constrained to use space that is logarithmic in m and n. They can generally make only some small constant number of passes over the stream, sometimes just one.
=== Turnstile and cash register models ===
Much of the streaming literature is concerned with computing statistics on frequency distributions that are too large to be stored. For this class of problems, there is a vector a = (a_1, …, a_n) (initialized to the zero vector 0) that has updates presented to it in a stream. The goal of these algorithms is to compute functions of a using considerably less space than it would take to represent a precisely. There are two common models for updating such streams, called the "cash register" and "turnstile" models.
In the cash register model, each update is of the form ⟨i, c⟩, so that a_i is incremented by some positive integer c. A notable special case is when c = 1 (only unit insertions are permitted).
In the turnstile model, each update is of the form ⟨i, c⟩, so that a_i is incremented by some (possibly negative) integer c. In the "strict turnstile" model, no a_i at any time may be less than zero.
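The update semantics of the two models can be illustrated with a short sketch (an assumption of this article's editor, not part of the cited literature). Note that this helper stores the vector a explicitly, which a real streaming algorithm cannot afford; it only demonstrates what an update stream means:

```python
# Sketch: applying <i, c> updates to the frequency vector a under the
# cash-register model (positive c only) and the strict turnstile model
# (negative c allowed, but no a_i may ever go below zero).
def apply_updates(n, updates, strict=False):
    """Apply a stream of (i, c) updates to an n-dimensional vector a."""
    a = [0] * n
    for i, c in updates:
        if strict and a[i] + c < 0:
            raise ValueError("strict turnstile: a_i may never be negative")
        a[i] += c
    return a

# Cash-register model: only insertions.
print(apply_updates(3, [(0, 1), (2, 5), (0, 2)]))  # [3, 0, 5]
# Turnstile model: deletions are expressed as negative counts.
print(apply_updates(3, [(0, 4), (0, -1)]))         # [3, 0, 0]
```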
=== Sliding window model ===
Several papers also consider the "sliding window" model. In this model,
the function of interest is computing over a fixed-size window in the
stream. As the stream progresses, items from the end of the window are
removed from consideration while new items from the stream take their
place.
Besides the above frequency-based problems, some other types of problems
have also been studied. Many graph problems are solved in the setting
where the adjacency matrix or the adjacency list of the graph is streamed in
some unknown order. There are also some problems that are very dependent
on the order of the stream (i.e., asymmetric functions), such as counting
the number of inversions in a stream and finding the longest increasing
subsequence.
== Evaluation ==
The performance of an algorithm that operates on data streams is measured by three basic factors:
The number of passes the algorithm must make over the stream.
The available memory.
The running time of the algorithm.
These algorithms have many similarities with online algorithms since they both require decisions to be made before all data are available, but they are not identical. Data stream algorithms only have limited memory available but they may be able to defer action until a group of points arrive, while online algorithms are required to take action as soon as each point arrives.
If the algorithm is an approximation algorithm, then the accuracy of the answer is another key factor. The accuracy is often stated as an (ε, δ) approximation, meaning that the algorithm achieves an error of less than ε with probability 1 − δ.
== Applications ==
Streaming algorithms have several applications in networking such as
monitoring network links for elephant flows, counting the number of
distinct flows, estimating the distribution of flow sizes, and so
on. They also have applications in
databases, such as estimating the size of a join.
== Some streaming problems ==
=== Frequency moments ===
The kth frequency moment of a set of frequencies a is defined as F_k(a) = Σ_{i=1}^{n} a_i^k.
The first moment F_1 is simply the sum of the frequencies (i.e., the total count). The second moment F_2 is useful for computing statistical properties of the data, such as the Gini coefficient of variation. F_∞ is defined as the frequency of the most frequent item.
The seminal paper of Alon, Matias, and Szegedy dealt with the problem of estimating the frequency moments.
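Before turning to approximation, the moments of a short stream can be computed exactly by brute force. The function below is a reference computation (it stores full frequency counts, so it is not a small-space streaming algorithm):

```python
from collections import Counter

def frequency_moment(stream, k):
    """F_k = sum of a_i^k over the frequencies a_i of the distinct values."""
    freqs = Counter(stream).values()
    return sum(f ** k for f in freqs)

stream = ["a", "b", "a", "c", "a", "b"]   # frequencies: a=3, b=2, c=1
print(frequency_moment(stream, 0))        # F_0 = number of distinct values = 3
print(frequency_moment(stream, 1))        # F_1 = length of the stream = 6
print(frequency_moment(stream, 2))        # F_2 = 9 + 4 + 1 = 14
print(max(Counter(stream).values()))      # F_infinity = 3
```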
==== Calculating frequency moments ====
A direct approach to finding the frequency moments requires maintaining a register m_i for every distinct element a_i ∈ (1, 2, 3, 4, …, N), which requires memory of order Ω(N). But we have space limitations and require an algorithm that computes in much lower memory. This can be achieved by using approximations instead of exact values: an algorithm that computes an (ε, δ)-approximation F′_k of the true value F_k, where ε is the approximation parameter and δ is the confidence parameter.
===== Calculating F0 (distinct elements in a data stream) =====
====== FM-Sketch algorithm ======
Flajolet et al. introduced a probabilistic method of counting inspired by a paper of Robert Morris, who observed that if the requirement of accuracy is dropped, a counter n can be replaced by a counter log n, which can be stored in log log n bits. Flajolet et al. improved this method by using a hash function h which is assumed to uniformly distribute the elements in the hash space (a binary string of length L):
h : [m] → [0, 2^L − 1]
Let bit(y, k) represent the kth bit in the binary representation of y, so that y = Σ_{k≥0} bit(y, k)·2^k.
Let ρ(y) represent the position of the least significant 1-bit in the binary representation of y, with a suitable convention for ρ(0):

ρ(y) = min{k : bit(y, k) = 1}  if y > 0
ρ(y) = L                       if y = 0
Let A be the sequence of the data stream, of length M, whose cardinality needs to be determined. Let BITMAP[0…L − 1] be the hash space where the ρ(hashed values) are recorded. The algorithm below then determines the approximate cardinality of A.

Procedure FM-Sketch:
for i in 0 to L − 1 do
    BITMAP[i] := 0
end for
for x in A do
    index := ρ(hash(x))
    if BITMAP[index] = 0 then
        BITMAP[index] := 1
    end if
end for
B := position of the leftmost 0 bit of BITMAP[]
return 2 ^ B
If there are N distinct elements in the data stream:
For i ≫ log(N), BITMAP[i] is certainly 0.
For i ≪ log(N), BITMAP[i] is certainly 1.
For i ≈ log(N), BITMAP[i] is a fringe of 0's and 1's.
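The FM-Sketch procedure can be sketched in Python. The hash function here (a truncated SHA-1, giving L = 32 bits) is an illustrative choice, not the one from the original paper, and the raw 2^B estimate is biased; the published algorithm divides by a correction factor of roughly 0.77351:

```python
import hashlib

L = 32  # length of the hash space in bits (illustrative choice)

def h(x):
    """Hash an item to an L-bit integer (stand-in for the paper's hash)."""
    return int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:4], "big")

def rho(y):
    """Position of the least significant 1-bit of y; L if y == 0."""
    if y == 0:
        return L
    return (y & -y).bit_length() - 1

def fm_sketch(stream):
    bitmap = [0] * L
    for x in stream:
        bitmap[rho(h(x))] = 1
    # B = position of the leftmost (lowest-index) 0 bit of BITMAP
    b = bitmap.index(0) if 0 in bitmap else L
    return 2 ** b

estimate = fm_sketch(range(1000))  # crude estimate of ~1000 distinct items
```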
====== K-minimum value algorithm ======
The previous algorithm describes the first attempt to approximate F0 in the data stream by Flajolet and Martin. Their algorithm picks a random hash function which they assume to uniformly distribute the hash values in hash space.
Bar-Yossef et al. introduced the k-minimum value algorithm for determining the number of distinct elements in a data stream. They used a similar hash function h, which can be normalized to [0, 1] as h : [m] → [0, 1]. But they fixed a limit t on the number of values kept from the hash space. The value of t is assumed to be of the order O(1/ε²) (i.e., a smaller approximation error ε requires a larger t). The KMV algorithm keeps only the t smallest hash values in the hash space. After all m values of the stream have arrived, υ = Max(h(a_i)) over the kept values is used to calculate F′_0 = t/υ. That is, in a close-to-uniform hash space, they expect at least t elements to be smaller than O(t/F_0).

Procedure 2: K-Minimum Value
Initialize first t values of KMV
for a in a1 to an do
if h(a) < Max(KMV) then
Remove Max(KMV) from KMV set
Insert h(a) to KMV
end if
end for
return t/Max(KMV)
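Procedure 2 can be sketched in Python. The normalized hash function is again an illustrative stand-in, and the duplicate check scans the heap in O(t) time for simplicity:

```python
import hashlib
import heapq

def h01(x):
    """Hash normalized to [0, 1) (illustrative choice, not from the paper)."""
    v = int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:8], "big")
    return v / 2 ** 64

def kmv_estimate(stream, t):
    """Keep the t smallest hash values; estimate F_0 as t / max(kept)."""
    kept = []  # max-heap of the t smallest hash values, stored negated
    for x in stream:
        v = h01(x)
        if len(kept) < t:
            if -v not in kept:          # duplicates hash identically
                heapq.heappush(kept, -v)
        elif v < -kept[0] and -v not in kept:
            heapq.heapreplace(kept, -v)
    return t / -kept[0]

estimate = kmv_estimate(range(10000), t=256)  # roughly 10000
```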
====== Complexity analysis of KMV ======
The KMV algorithm can be implemented in O((1/ε²)·log(m)) memory bits: each hash value requires space of order O(log(m)) memory bits, and there are O(1/ε²) hash values. The access time can be reduced by storing the t hash values in a binary tree, reducing the time complexity to O(log(1/ε)·log(m)).
===== Calculating Fk =====
Alon et al. estimate F_k by defining random variables that can be computed within given space and time; the expected value of these random variables gives the approximate value of F_k.
Assume the length of the sequence m is known in advance. Then construct a random variable X as follows:
Select a_p, a random member of the sequence A with index p, so that a_p = l ∈ (1, 2, 3, …, n).
Let r = |{q : q ≥ p, a_q = l}| represent the number of occurrences of l among the members of the sequence A at or after a_p.
Define the random variable X = m(r^k − (r − 1)^k).
Assume S_1 is of the order O(n^{1−1/k}/λ²) and S_2 is of the order O(log(1/ε)). The algorithm takes S_2 random variables Y_1, Y_2, …, Y_{S_2} and outputs their median Y, where Y_i is the average of X_{ij} for 1 ≤ j ≤ S_1.
Now calculate the expectation of the random variable, E(X):

E(X) = Σ_{i=1}^{n} Σ_{j=1}^{m_i} (j^k − (j − 1)^k)
     = (1^k + (2^k − 1^k) + … + (m_1^k − (m_1 − 1)^k))
       + (1^k + (2^k − 1^k) + … + (m_2^k − (m_2 − 1)^k)) + …
       + (1^k + (2^k − 1^k) + … + (m_n^k − (m_n − 1)^k))
     = Σ_{i=1}^{n} m_i^k = F_k

(The factor m in the definition of X cancels the 1/m probability of choosing each position p, and each inner sum telescopes to m_i^k, where m_i is the frequency of value i.)
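A small simulation can check that the estimator is unbiased. This sketch (an illustration, not the paper's implementation) stores the stream in a list so that r can be counted by random access; the actual algorithm selects p online and maintains r in the same single pass:

```python
import random

def ams_fk_single(stream, k):
    """One copy of the AMS random variable X = m(r^k - (r-1)^k)."""
    m = len(stream)
    p = random.randrange(m)      # uniform random position in the stream
    l = stream[p]
    r = sum(1 for q in range(p, m) if stream[q] == l)  # occurrences at/after p
    return m * (r ** k - (r - 1) ** k)

def ams_fk(stream, k, s1=64, s2=9):
    """Median of s2 averages of s1 independent copies (median of means)."""
    averages = []
    for _ in range(s2):
        averages.append(sum(ams_fk_single(stream, k) for _ in range(s1)) / s1)
    averages.sort()
    return averages[len(averages) // 2]

stream = ["a", "b", "a", "c", "a", "b"]   # exact F_2 = 9 + 4 + 1 = 14
estimate = ams_fk(stream, k=2)            # concentrates near 14
```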
====== Complexity of Fk ======
From the algorithm to calculate F_k discussed above, we can see that each random variable X stores the values of a_p and r. So, to compute X we need to maintain only log(n) bits for storing a_p and log(n) bits for storing r. The total number of random variables X will be S_1·S_2.
Hence the total space complexity the algorithm takes is of the order of O((k·log(1/ε)/λ²)·n^{1−1/k}·(log n + log m)).
====== Simpler approach to calculate F2 ======
The previous algorithm calculates F_2 in order of O(√n·(log m + log n)) memory bits. Alon et al. simplified this algorithm using four-wise independent random variables with values mapped to {−1, 1}. This further reduces the complexity of calculating F_2 to O((log(1/ε)/λ²)·(log n + log m)).
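The idea behind the simplified F_2 estimator can be sketched as follows. Each element i is assigned a random sign s(i) ∈ {−1, 1}; the quantity Z = Σ_i s(i)·a_i then satisfies E[Z²] = F_2. For illustration this sketch uses fully random signs stored in a dictionary, where the actual algorithm uses a four-wise independent hash family precisely so that the signs need only small space:

```python
import random

def ams_f2_single(stream, domain, rng):
    """One estimator: Z = sum_i s(i)*a_i, output Z^2 with E[Z^2] = F_2."""
    sign = {x: rng.choice((-1, 1)) for x in domain}  # fully random signs
    z = sum(sign[x] for x in stream)                 # accumulates s(i)*a_i
    return z * z

def ams_f2(stream, trials=500, seed=0):
    rng = random.Random(seed)
    domain = set(stream)
    return sum(ams_f2_single(stream, domain, rng) for _ in range(trials)) / trials

stream = ["a", "b", "a", "c", "a", "b"]   # exact F_2 = 9 + 4 + 1 = 14
estimate = ams_f2(stream)                 # concentrates near 14
```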
=== Frequent elements ===
In the data stream model, the frequent elements problem is to output a set of elements that constitute more than some fixed fraction of the stream. A special case is the majority problem, which is to determine whether or not any value constitutes a majority of the stream.
More formally, fix some positive constant c > 1, let the length of the stream be m, and let fi denote the frequency of value i in the stream. The frequent elements problem is to output the set { i | fi > m/c }.
Some notable algorithms are:
Boyer–Moore majority vote algorithm
Count-Min sketch
Lossy counting
Multi-stage Bloom filters
Misra–Gries heavy hitters algorithm
Misra–Gries summary
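The majority special case is solved by the Boyer–Moore majority vote algorithm in one pass and constant space. A minimal sketch:

```python
def majority_candidate(stream):
    """Boyer–Moore majority vote: one pass, O(1) space.
    If a strict majority element exists, it is returned; otherwise the
    result is arbitrary and must be verified with a second pass."""
    candidate, count = None, 0
    for x in stream:
        if count == 0:
            candidate, count = x, 1    # adopt a new candidate
        elif x == candidate:
            count += 1                 # candidate gains a vote
        else:
            count -= 1                 # candidate loses a vote
    return candidate

print(majority_candidate([1, 2, 1, 1, 3, 1, 1]))  # 1
```

The second verification pass is what makes this an offline-checkable answer; in a strict one-pass setting only the candidate is available.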
=== Event detection ===
Detecting events in data streams is often done using a heavy hitters algorithm as listed above: the most frequent items and their frequency are determined using one of these algorithms, then the largest increase over the previous time point is reported as trend. This approach can be refined by using exponentially weighted moving averages and variance for normalization.
=== Counting distinct elements ===
Counting the number of distinct elements in a stream (sometimes called the
F0 moment) is another problem that has been well studied.
The first algorithm for it was proposed by Flajolet and Martin. In 2010, Daniel Kane, Jelani Nelson and David Woodruff found an asymptotically optimal algorithm for this problem. It uses O(ε^−2 + log d) space, with O(1) worst-case update and reporting times, as well as universal hash functions and an r-wise independent hash family where r = Ω(log(1/ε) / log log(1/ε)).
=== Entropy ===
The (empirical) entropy of a set of frequencies a is defined as H(a) = −Σ_{i=1}^{n} (a_i/m)·log(a_i/m), where m = Σ_{i=1}^{n} a_i.
=== Online learning ===
Learn a model (e.g. a classifier) by a single pass over a training set.
Feature hashing
Stochastic gradient descent
== Lower bounds ==
Lower bounds have been computed for many of the data streaming problems
that have been studied. By far, the most common technique for computing
these lower bounds has been using communication complexity.
== See also ==
Data stream mining
Data stream clustering
Online algorithm
Stream processing
Sequential algorithm
== Notes ==
== References ==
Alon, Noga; Matias, Yossi; Szegedy, Mario (1999), "The space complexity of approximating the frequency moments", Journal of Computer and System Sciences, 58 (1): 137–147, doi:10.1006/jcss.1997.1545, ISSN 0022-0000. First published as Alon, Noga; Matias, Yossi; Szegedy, Mario (1996), "The space complexity of approximating the frequency moments", Proceedings of the 28th ACM Symposium on Theory of Computing (STOC 1996), pp. 20–29, CiteSeerX 10.1.1.131.4984, doi:10.1145/237814.237823, ISBN 978-0-89791-785-8, S2CID 1627911.
Babcock, Brian; Babu, Shivnath; Datar, Mayur; Motwani, Rajeev; Widom, Jennifer (2002), "Models and issues in data stream systems", Proceedings of the 21st ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS 2002) (PDF), pp. 1–16, CiteSeerX 10.1.1.138.190, doi:10.1145/543613.543615, ISBN 978-1-58113-507-7, S2CID 2071130, archived from the original (PDF) on 2017-07-09, retrieved 2013-07-15.
Flajolet, Philippe; Martin, G. Nigel (1985). "Probabilistic counting algorithms for data base applications" (PDF). Journal of Computer and System Sciences. 31 (2): 182–209. doi:10.1016/0022-0000(85)90041-8. Retrieved 2016-12-11.
Gilbert, A. C.; Kotidis, Y.; Muthukrishnan, S.; Strauss, M. J. (2001), "Surfing Wavelets on Streams: One-Pass Summaries for Approximate Aggregate Queries" (PDF), Proceedings of the International Conference on Very Large Data Bases: 79–88.
Kane, Daniel M.; Nelson, Jelani; Woodruff, David P. (2010). "An optimal algorithm for the distinct elements problem". Proceedings of the Twenty-Ninth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. PODS '10. New York, NY, USA: ACM. pp. 41–52. CiteSeerX 10.1.1.164.142. doi:10.1145/1807085.1807094. ISBN 978-1-4503-0033-9. S2CID 10006932..
Karp, R. M.; Papadimitriou, C. H.; Shenker, S. (2003), "A simple algorithm for finding frequent elements in streams and bags", ACM Transactions on Database Systems, 28 (1): 51–55, CiteSeerX 10.1.1.116.8530, doi:10.1145/762471.762473, S2CID 952840.
Lall, Ashwin; Sekar, Vyas; Ogihara, Mitsunori; Xu, Jun; Zhang, Hui (2006). "Data streaming algorithms for estimating entropy of network traffic". Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems (ACM SIGMETRICS 2006). p. 145. doi:10.1145/1140277.1140295. hdl:1802/2537. ISBN 978-1-59593-319-5. S2CID 240982..
Xu, Jun (Jim) (2007), A Tutorial on Network Data Streaming (PDF).
Heath, D., Kasif, S., Kosaraju, R., Salzberg, S., Sullivan, G., "Learning Nested Concepts With Limited Storage", Proceeding IJCAI'91 Proceedings of the 12th international joint conference on Artificial intelligence - Volume 2, Pages 777–782, Morgan Kaufmann Publishers Inc. San Francisco, CA, USA ©1991
Morris, Robert (1978), "Counting large numbers of events in small registers", Communications of the ACM, 21 (10): 840–842, doi:10.1145/359619.359627, S2CID 36226357.
Borůvka's algorithm is a greedy algorithm for finding a minimum spanning tree in a graph,
or a minimum spanning forest in the case of a graph that is not connected.
It was first published in 1926 by Otakar Borůvka as a method of constructing an efficient electricity network for Moravia.
The algorithm was rediscovered by Choquet in 1938; again by Florek, Łukasiewicz, Perkal, Steinhaus, and Zubrzycki in 1951; and again by Georges Sollin in 1965. This algorithm is frequently called Sollin's algorithm, especially in the parallel computing literature.
The algorithm begins by finding the minimum-weight edge incident to each vertex of the graph, and adding all of those edges to the forest.
Then, it repeats a similar process of finding the minimum-weight edge from each tree constructed so far to a different tree, and adding all of those edges to the forest.
Each repetition of this process reduces the number of trees, within each connected component of the graph, to at most half of its former value,
so after logarithmically many repetitions the process finishes. When it does, the set of edges it has added forms the minimum spanning forest.
== Pseudocode ==
The following pseudocode illustrates a basic implementation of Borůvka's algorithm.
In the conditional clauses, every edge uv is considered cheaper than "None". The purpose of the completed variable is to determine whether the forest F is yet a spanning forest.
If edges do not have distinct weights, then a consistent tie-breaking rule must be used, e.g. based on some total order on vertices or edges.
This can be achieved by representing vertices as integers and comparing them directly; comparing their memory addresses; etc.
A tie-breaking rule is necessary to ensure that the created graph is indeed a forest, that is, it does not contain cycles. For example, consider a triangle graph with nodes {a,b,c} and all edges of weight 1. Then a cycle could be created if we select ab as the minimal weight edge for {a}, bc for {b}, and ca for {c}.
A tie-breaking rule which orders edges first by source, then by destination, will prevent creation of a cycle, resulting in the minimal spanning tree {ab, bc}.
algorithm Borůvka is
    input: A weighted undirected graph G = (V, E).
    output: F, a minimum spanning forest of G.

    Initialize a forest F to (V, E′) where E′ = {}.

    completed := false
    while not completed do
        Find the connected components of F and assign to each vertex its component
        Initialize the cheapest edge for each component to "None"
        for each edge uv in E, where u and v are in different components of F:
            let wx be the cheapest edge for the component of u
            if is-preferred-over(uv, wx) then
                Set uv as the cheapest edge for the component of u
            let yz be the cheapest edge for the component of v
            if is-preferred-over(uv, yz) then
                Set uv as the cheapest edge for the component of v
        if all components have cheapest edge set to "None" then
            // no more trees can be merged -- we are finished
            completed := true
        else
            completed := false
            for each component whose cheapest edge is not "None" do
                Add its cheapest edge to E′

function is-preferred-over(edge1, edge2) is
    return (edge2 is "None") or
           (weight(edge1) < weight(edge2)) or
           (weight(edge1) = weight(edge2) and tie-breaking-rule(edge1, edge2))

function tie-breaking-rule(edge1, edge2) is
    The tie-breaking rule; returns true if and only if edge1
    is preferred over edge2 in the case of a tie.
As an optimization, one could remove from G each edge that is found to connect two vertices in the same component, so that it does not contribute to the time for searching for cheapest edges in later components.
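The pseudocode above can be realized compactly with a union-find structure tracking components. This is an illustrative sketch; it assumes distinct edge weights, which serves as the tie-breaking rule discussed above:

```python
def boruvka_mst(n, edges):
    """Minimum spanning forest of a graph with vertices 0..n-1.
    edges: list of (weight, u, v) with distinct weights."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    while True:
        cheapest = {}                  # component root -> best incident edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue               # edge is internal to a component
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        if not cheapest:
            break                      # no components can be merged
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:               # skip edges made internal this round
                parent[ru] = rv
                forest.append((w, u, v))
    return forest

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
mst = boruvka_mst(4, edges)
print(sum(w for w, _, _ in mst))       # total weight 1 + 2 + 4 = 7
```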
== Complexity ==
Borůvka's algorithm can be shown to take O(log V) iterations of the outer loop until it terminates, and therefore to run in time O(E log V), where E is the number of edges, and V is the number of vertices in G (assuming E ≥ V). In planar graphs, and more generally in families of graphs closed under graph minor operations, it can be made to run in linear time, by removing all but the cheapest edge between each pair of components after each stage of the algorithm.
== Example ==
== Other algorithms ==
Other algorithms for this problem include Prim's algorithm and Kruskal's algorithm. Fast parallel algorithms can be obtained by combining Prim's algorithm with Borůvka's.
A faster randomized minimum spanning tree algorithm based in part on Borůvka's algorithm due to Karger, Klein, and Tarjan runs in expected O(E) time. The best known (deterministic) minimum spanning tree algorithm by Bernard Chazelle is also based in part on Borůvka's and runs in O(E α(E,V)) time, where α is the inverse Ackermann function. These randomized and deterministic algorithms combine steps of Borůvka's algorithm, reducing the number of components that remain to be connected, with steps of a different type that reduce the number of edges between pairs of components.
== Notes ==
Algorithmic composition is the technique of using algorithms to create music.
Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy. The term can be used to describe music-generating techniques that run without ongoing human intervention, for example through the introduction of chance procedures. However through live coding and other interactive interfaces, a fully human-centric approach to algorithmic composition is possible.
Some algorithms or data that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials.
== Models ==
Compositional algorithms are usually classified by the specific programming techniques they use. The results of the process can then be divided into 1) music composed by computer and 2) music composed with the aid of computer. Music may be considered composed by computer when the algorithm is able to make choices of its own during the creation process.
Another way to sort compositional algorithms is to examine the results of their compositional processes. Algorithms can either 1) provide notational information (sheet music or MIDI) for other instruments or 2) provide an independent way of sound synthesis (playing the composition by itself). There are also algorithms creating both notational data and sound synthesis.
One way to categorize compositional algorithms is by their structure and the way of processing data, as seen in this model of six partly overlapping types:
mathematical models
knowledge-based systems
grammars
evolutionary methods
systems which learn
hybrid systems
=== Translational models ===
This is an approach to music synthesis that involves "translating" information from an existing non-musical medium into a new sound. The translation can be either rule-based or stochastic. For example, when translating a picture into sound, a JPEG image of a horizontal line may be interpreted in sound as a constant pitch, while an upwards-slanted line may be an ascending scale. Oftentimes, the software seeks to extract concepts or metaphors from the medium (such as height or sentiment) and apply the extracted information to generate songs using the ways music theory typically represents those concepts. Another example is the translation of text into music, which can approach composition by extracting sentiment (positive or negative) from the text using machine learning methods like sentiment analysis and representing that sentiment in terms of chord quality, such as minor (sad) or major (happy) chords, in the musical output generated.
=== Mathematical models ===
Mathematical models are based on mathematical equations and random events. The most common way to create compositions through mathematics is stochastic processes. In stochastic models a piece of music is composed as a result of non-deterministic methods. The compositional process is only partially controlled by the composer by weighting the possibilities of random events. Prominent examples of stochastic algorithms are Markov chains and various uses of Gaussian distributions. Stochastic algorithms are often used together with other algorithms in various decision-making processes.
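A minimal sketch of the Markov-chain approach described above, with an invented transition table standing in for one a composer might weight by hand or estimate from a corpus:

```python
import random

# Minimal sketch of stochastic (Markov-chain) composition. The transition
# probabilities below are invented for illustration.
TRANSITIONS = {
    "C": {"D": 0.5, "E": 0.3, "G": 0.2},
    "D": {"E": 0.6, "C": 0.4},
    "E": {"G": 0.7, "D": 0.3},
    "G": {"C": 0.8, "E": 0.2},
}

def generate_melody(start="C", length=8, seed=42):
    rng = random.Random(seed)      # fixed seed makes the "random" piece reproducible
    melody = [start]
    while len(melody) < length:
        options = TRANSITIONS[melody[-1]]
        melody.append(rng.choices(list(options), list(options.values()))[0])
    return melody

print(generate_melody())
```

Seeding the generator shows how the composer retains partial control: the weights shape the piece, while the seed (or live randomness) supplies the non-deterministic choices.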
Music has also been composed through natural phenomena. These chaotic models create compositions from the harmonic and inharmonic phenomena of nature. For example, since the 1970s fractals have also been studied as models for algorithmic composition.
As an example of deterministic compositions through mathematical models, the On-Line Encyclopedia of Integer Sequences provides an option to play an integer sequence as 12-tone equal temperament music. (It is initially set to convert each integer to a note on an 88-key musical keyboard by computing the integer modulo 88, at a steady rhythm. Thus 123456, the natural numbers, equals half of a chromatic scale.) As another example, the all-interval series has been used for computer-aided composition.
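The keyboard mapping described above can be sketched as follows, assuming the conventional MIDI numbering in which the lowest key of an 88-key keyboard (A0) is note 21 and the highest (C8) is note 108:

```python
# Sketch of the integer-to-keyboard mapping: each integer in a sequence is
# reduced modulo 88 and mapped to a key on an 88-key piano (MIDI 21..108).
def sequence_to_midi(seq):
    return [21 + (k % 88) for k in seq]

print(sequence_to_midi(range(1, 13)))   # twelve ascending chromatic steps
```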
=== Knowledge-based systems ===
One way to create compositions is to isolate the aesthetic code of a certain musical genre and use this code to create new similar compositions. Knowledge-based systems are based on a pre-made set of arguments that can be used to compose new works of the same style or genre. Usually this is accomplished by a set of tests or rules requiring fulfillment for the composition to be complete.
=== Grammars ===
Music can also be examined as a language with a distinctive grammar set. Compositions are created by first constructing a musical grammar, which is then used to create comprehensible musical pieces. Grammars often include rules for macro-level composing, for instance harmonies and rhythm, rather than single notes.
=== Optimization approaches ===
When generating well-defined styles, music can be seen as a combinatorial optimization problem, whereby the aim is to find the right combination of notes such that the objective function is minimized. This objective function typically contains rules of a particular style, but could be learned using machine learning methods such as Markov models. Researchers have generated music using a myriad of different optimization methods, including integer programming, variable neighbourhood search, and evolutionary methods as mentioned in the next subsection.
=== Evolutionary methods ===
Evolutionary methods of composing music are based on genetic algorithms. The composition is built by means of an evolutionary process. Through mutation and natural selection, different solutions evolve towards a suitable musical piece. Iterative action of the algorithm cuts out bad solutions and creates new ones from those surviving the process. The results of the process are supervised by the critic, a vital part of the algorithm that controls the quality of the created compositions.
=== Evo-Devo approach ===
Evolutionary methods, combined with developmental processes, constitute the evo-devo approach for generation and optimization of complex structures. These methods have also been applied to music composition, where the musical structure is obtained by an iterative process that transforms a very simple composition (made of a few notes) into a complex, fully fledged piece (be it a score or a MIDI file).
=== Systems that learn ===
Learning systems are programs that have no given knowledge of the genre of music they are working with. Instead, they collect the learning material by themselves from the example material supplied by the user or programmer. The material is then processed into a piece of music similar to the example material. This method of algorithmic composition is strongly linked to algorithmic modeling of style, machine improvisation, and such studies as cognitive science and the study of neural networks. Assayag and Dubnov proposed a variable length Markov model to learn motif and phrase continuations of different length. Marchini and Purwins presented a system that learns the structure of an audio recording of a rhythmical percussion fragment using unsupervised clustering and variable length Markov chains and that synthesizes musical variations from it.
=== Hybrid systems ===
Programs based on a single algorithmic model rarely succeed in creating aesthetically satisfying results. For that reason, algorithms of different types are often used together to combine their strengths and diminish their weaknesses. Creating hybrid systems for music composition has opened up the field of algorithmic composition and also created many new ways to construct compositions algorithmically. The only major problem with hybrid systems is their growing complexity and the need for resources to combine and test these algorithms.
Another approach, which can be called computer-assisted composition, is to algorithmically create certain structures for finally "hand-made" compositions. As early as the 1960s, Gottfried Michael Koenig developed the computer programs Project 1 and Project 2 for aleatoric music, the output of which was sensibly structured "manually" by means of performance instructions. In the 2000s, Andranik Tangian developed a computer algorithm to determine the time event structures for rhythmic canons and rhythmic fugues, which were then worked out into the harmonic compositions Eine kleine Mathmusik I and Eine kleine Mathmusik II; for scores and recordings see.
== See also ==
AIVA
Change ringing
Computational creativity
David Cope
Euclidean rhythm (traditional musical rhythms that are generated by Euclid's algorithm)
Generative music
Musical dice game
Pop music automation
List of music software
== References ==
== Further reading ==
"Algorithmic Composition: Computational Thinking in Music" by Michael Edwards. Communications of the ACM, vol. 54, no. 7, pp. 58–67, July 2011 doi:10.1145/1965724.1965742
Karlheinz Essl: Algorithmic Composition. in: Cambridge Companion to Electronic Music, ed. by Nicholas Collins and Julio d'Escrivan, Cambridge University Press 2007. ISBN 978-0-521-68865-9. Abstract
Computer Music Algorithms by Dr. John Francis. Music algorithmic computer programs representing all styles of music, with C source code, producing MIDI files. 19th ed. 2019; now contains 57 programs, 20 styles, and 24 chapters.
"A Functional Taxonomy of Music Generation systems" by Dorien Herremans, Ching-Hua Chuang and Elaine Chew. ACM Computing Surveys, vol. 55, no. 5, pp. 69:1–30 doi:10.1145/3108242.
Eduardo Reck Miranda: Composing Music with Computers. Focal Press 2001
Gerhard Nierhaus: Algorithmic Composition – Paradigms of Automated Music Generation. Springer 2008. ISBN 978-3-211-75539-6
Curtis Roads: The Computer Music Tutorial. MIT Press 1996. ISBN 9780262680820.
"Automatic Composition from Non-musical Inspiration Sources", by Robert Smith, et al.
"A Few Remarks on Algorithmic Composition" by Martin Supper. Computer Music Journal 25.1 (2001) 48–53
Phil Winsor and Gene De Lisa: Computer Music in C. Windcrest 1990. ISBN 978-1-57441-116-4
Wooller, Rene, Brown, Andrew R, Miranda, Eduardo, Diederich, Joachim, & Berry, Rodney (2005) "A framework for comparison of process in algorithmic music systems." In: Generative Arts Practice, 5–7 December 2005, Sydney, Australia.
"Composing with Process: Perspectives on Generative and Systems Music", podcast
== External links ==
Drew Krause: Introduction to Algorithmic Composition on Vimeo
Algorithmic Composer, series of algorithmic composition tutorials
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search.
== Algorithm ==
Binary search works on sorted arrays. Binary search begins by comparing an element in the middle of the array with the target value. If the target value matches the element, its position in the array is returned. If the target value is less than the element, the search continues in the lower half of the array. If the target value is greater than the element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration.
=== Procedure ===
Given an array A of n elements with values or records A[0], A[1], A[2], …, A[n − 1] sorted such that A[0] ≤ A[1] ≤ A[2] ≤ ⋯ ≤ A[n − 1], and target value T, the following subroutine uses binary search to find the index of T in A.
1. Set L to 0 and R to n − 1.
2. If L > R, the search terminates as unsuccessful.
3. Set m (the position of the middle element) to L plus the floor of (R − L)/2, which is the greatest integer less than or equal to (R − L)/2.
4. If A[m] < T, set L to m + 1 and go to step 2.
5. If A[m] > T, set R to m − 1 and go to step 2.
6. Now A[m] = T, the search is done; return m.
This iterative procedure keeps track of the search boundaries with the two variables L and R. The procedure may be expressed in pseudocode as follows, where the variable names and types remain the same as above, floor is the floor function, and unsuccessful refers to a specific value that conveys the failure of the search.
function binary_search(A, n, T) is
    L := 0
    R := n − 1
    while L ≤ R do
        m := L + floor((R - L) / 2)
        if A[m] < T then
            L := m + 1
        else if A[m] > T then
            R := m − 1
        else:
            return m
    return unsuccessful
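The pseudocode above translates almost line-for-line into Python; the sketch below uses -1 as the "unsuccessful" value:

```python
def binary_search(A, T):
    """Return an index of T in sorted list A, or -1 if T is absent."""
    L, R = 0, len(A) - 1
    while L <= R:
        m = L + (R - L) // 2    # floor((R - L) / 2); this form avoids overflow
                                # in languages with fixed-width integers
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            return m
    return -1                   # "unsuccessful"

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```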
Alternatively, the algorithm may take the ceiling of (R − L)/2. This may change the result if the target value appears more than once in the array.
==== Alternative procedure ====
In the above procedure, the algorithm checks whether the middle element (m) is equal to the target (T) in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when L = R). This results in a faster comparison loop, as one comparison is eliminated per iteration, while it requires only one more iteration on average.
Hermann Bottenbruch published the first implementation to leave out this check in 1962.
1. Set L to 0 and R to n − 1.
2. While L ≠ R,
   1. Set m (the position of the middle element) to L plus the ceiling of (R − L)/2, which is the least integer greater than or equal to (R − L)/2.
   2. If A[m] > T, set R to m − 1.
   3. Else, A[m] ≤ T; set L to m.
3. Now L = R, the search is done. If A[L] = T, return L. Otherwise, the search terminates as unsuccessful.
Where ceil is the ceiling function, the pseudocode for this version is:
function binary_search_alternative(A, n, T) is
    L := 0
    R := n − 1
    while L != R do
        m := L + ceil((R - L) / 2)
        if A[m] > T then
            R := m − 1
        else:
            L := m
    if A[L] = T then
        return L
    return unsuccessful
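A Python sketch of this alternative procedure, again with -1 standing in for "unsuccessful". The empty-array guard is an addition, since the procedure as stated assumes at least one element:

```python
def binary_search_alternative(A, T):
    """Bottenbruch-style variant: defer the equality check to the end."""
    if not A:                       # guard added for the empty-array case
        return -1
    L, R = 0, len(A) - 1
    while L != R:
        m = L + (R - L + 1) // 2    # ceil((R - L) / 2) for R > L
        if A[m] > T:
            R = m - 1
        else:
            L = m
    return L if A[L] == T else -1

print(binary_search_alternative([1, 2, 4, 4, 4, 5, 6, 7], 4))   # 4 (rightmost 4)
print(binary_search_alternative([1, 2, 4, 4, 4, 5, 6, 7], 3))   # -1
```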
=== Duplicate elements ===
The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was [1, 2, 3, 4, 4, 5, 6, 7] and the target was 4, then it would be correct for the algorithm to either return the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3) in this case. It does not always return the first duplicate (consider [1, 2, 4, 4, 4, 5, 6, 7], which still returns the 4th element). However, it is sometimes necessary to find the leftmost element or the rightmost element for a target value that is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element exists.
==== Procedure for finding the leftmost element ====
To find the leftmost element, the following procedure can be used:
1. Set L to 0 and R to n.
2. While L < R,
   1. Set m (the position of the middle element) to L plus the floor of (R − L)/2, which is the greatest integer less than or equal to (R − L)/2.
   2. If A[m] < T, set L to m + 1.
   3. Else, A[m] ≥ T; set R to m.
3. Return L.
If L < n and A[L] = T, then A[L] is the leftmost element that equals T. Even if T is not in the array, L is the rank of T in the array, or the number of elements in the array that are less than T.
Where floor is the floor function, the pseudocode for this version is:
function binary_search_leftmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := L + floor((R - L) / 2)
        if A[m] < T:
            L := m + 1
        else:
            R := m
    return L
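In Python, this procedure is a direct transcription, and it behaves like the standard library's bisect.bisect_left:

```python
import bisect

def binary_search_leftmost(A, T):
    """Leftmost index of T in sorted A; equals the rank of T if T is absent."""
    L, R = 0, len(A)
    while L < R:
        m = L + (R - L) // 2
        if A[m] < T:
            L = m + 1
        else:
            R = m
    return L

A = [1, 2, 4, 4, 4, 5, 6, 7]
print(binary_search_leftmost(A, 4))   # 2 -- index of the leftmost 4
print(binary_search_leftmost(A, 3))   # 2 -- rank: two elements are less than 3
print(bisect.bisect_left(A, 4))       # 2 -- standard-library equivalent
```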
==== Procedure for finding the rightmost element ====
To find the rightmost element, the following procedure can be used:
1. Set L to 0 and R to n.
2. While L < R,
   1. Set m (the position of the middle element) to L plus the floor of (R − L)/2, which is the greatest integer less than or equal to (R − L)/2.
   2. If A[m] > T, set R to m.
   3. Else, A[m] ≤ T; set L to m + 1.
3. Return R − 1.
If R > 0 and A[R − 1] = T, then A[R − 1] is the rightmost element that equals T. Even if T is not in the array, n − R is the number of elements in the array that are greater than T.
Where floor is the floor function, the pseudocode for this version is:
function binary_search_rightmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := L + floor((R - L) / 2)
        if A[m] > T:
            R := m
        else:
            L := m + 1
    return R − 1
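Likewise in Python, where the result matches bisect.bisect_right(A, T) - 1:

```python
import bisect

def binary_search_rightmost(A, T):
    """Rightmost index of T in sorted A; -1 or a non-match if T is absent."""
    L, R = 0, len(A)
    while L < R:
        m = L + (R - L) // 2
        if A[m] > T:
            R = m
        else:
            L = m + 1
    return R - 1

A = [1, 2, 4, 4, 4, 5, 6, 7]
print(binary_search_rightmost(A, 4))    # 4 -- index of the rightmost 4
print(bisect.bisect_right(A, 4) - 1)    # 4 -- standard-library equivalent
```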
=== Approximate matches ===
The above procedure only performs exact matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries.
Rank queries can be performed with the procedure for finding the leftmost element. The number of elements less than the target value is returned by the procedure.
Predecessor queries can be performed with rank queries. If the rank of the target value is r, its predecessor is r − 1.
For successor queries, the procedure for finding the rightmost element can be used. If the result of running the procedure for the target value is r, then the successor of the target value is r + 1.
The nearest neighbor of the target value is either its predecessor or successor, whichever is closer.
Range queries are also straightforward. Once the ranks of the two values are known, the number of elements greater than or equal to the first value and less than the second is the difference of the two ranks. This count can be adjusted up or down by one according to whether the endpoints of the range should be considered to be part of the range and whether the array contains entries matching those endpoints.
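These queries can be sketched in Python using the two rank procedures (here via the standard bisect module); the array and query values below are arbitrary examples:

```python
from bisect import bisect_left, bisect_right

# Approximate-match queries on a sorted array, built from the leftmost and
# rightmost rank procedures.
A = [1, 4, 6, 9, 12, 15]
T = 9

rank = bisect_left(A, T)                  # number of elements smaller than T
pred = A[rank - 1] if rank > 0 else None  # predecessor (next-smallest element)
s = bisect_right(A, T)
succ = A[s] if s < len(A) else None       # successor (next-largest element)
in_range = bisect_right(A, 12) - bisect_left(A, 4)   # elements in [4, 12]

print(rank, pred, succ, in_range)   # 3 6 12 4
```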
== Performance ==
In terms of the number of comparisons, the performance of binary search can be analyzed by viewing the run of the procedure on a binary tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root, and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration.
In the worst case, binary search makes ⌊log₂(n) + 1⌋ iterations of the comparison loop, where the ⌊ ⌋ notation denotes the floor function that yields the greatest integer less than or equal to the argument, and log₂ is the binary logarithm. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always ⌊log₂(n) + 1⌋ levels in the tree for any binary search.
The worst case may also be reached when the target element is not in the array. If n is one less than a power of two, then this is always the case. Otherwise, the search may perform ⌊log₂(n) + 1⌋ iterations if the search reaches the deepest level of the tree. However, it may make ⌊log₂(n)⌋ iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree.
On average, assuming that each element is equally likely to be searched, binary search makes ⌊log₂(n)⌋ + 1 − (2^(⌊log₂(n)⌋ + 1) − ⌊log₂(n)⌋ − 2)/n iterations when the target element is in the array. This is approximately equal to log₂(n) − 1 iterations. When the target element is not in the array, binary search makes ⌊log₂(n)⌋ + 2 − 2^(⌊log₂(n)⌋ + 1)/(n + 1) iterations on average, assuming that the range between and outside elements is equally likely to be searched.
In the best case, where the target value is the middle element of the array, its position is returned after one iteration.
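These iteration counts can be sanity-checked numerically. The sketch below counts comparison-loop iterations of the standard procedure over every element and every gap of a 7-element array and compares the averages against the formulas above:

```python
def iterations(A, T):
    """Count comparison-loop iterations of the standard binary search."""
    L, R, count = 0, len(A) - 1, 0
    while L <= R:
        count += 1
        m = L + (R - L) // 2
        if A[m] < T:
            L = m + 1
        elif A[m] > T:
            R = m - 1
        else:
            break
    return count

n = 7
A = list(range(n))
k = n.bit_length() - 1                # floor(log2(n)), computed exactly

# Successful searches: average over every element of the array.
avg_hit = sum(iterations(A, t) for t in A) / n
formula_hit = k + 1 - (2 ** (k + 1) - k - 2) / n

# Unsuccessful searches: average over the n + 1 gaps between/outside elements.
gaps = [t - 0.5 for t in range(n + 1)]
avg_miss = sum(iterations(A, g) for g in gaps) / (n + 1)
formula_miss = k + 2 - 2 ** (k + 1) / (n + 1)

print(avg_hit, formula_hit)     # both 17/7, about 2.43
print(avg_miss, formula_miss)   # both 3.0
```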
In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible as every level above the lowest level of the tree is filled completely. Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons, as while they may work faster on some target values, the average performance over all elements is worse than binary search. By dividing the array in half, binary search ensures that the size of both subarrays are as similar as possible.
=== Space complexity ===
Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. Therefore, the space complexity of binary search is O(1) in the word RAM model of computation.
=== Derivation of average case ===
The average number of iterations performed by binary search depends on the probability of each element being searched. The average case is different for successful searches and unsuccessful searches. It will be assumed that each element is equally likely to be searched for successful searches. For unsuccessful searches, it will be assumed that the intervals between and outside elements are equally likely to be searched. The average case for successful searches is the number of iterations required to search every element exactly once, divided by n, the number of elements. The average case for unsuccessful searches is the number of iterations required to search an element within every interval exactly once, divided by the n + 1 intervals.
==== Successful searches ====
In the binary tree representation, a successful search can be represented by a path from the root to the target node, called an internal path. The length of a path is the number of edges (connections between nodes) that the path passes through. The number of iterations performed by a search, given that the corresponding path has length l, is l + 1, counting the initial iteration. The internal path length is the sum of the lengths of all unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are n elements, which is a positive integer, and the internal path length is I(n), then the average number of iterations for a successful search is T(n) = 1 + I(n)/n, with the one iteration added to count the initial iteration.
Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with n nodes, which is equal to:
I(n) = ∑_{k=1}^{n} ⌊log₂(k)⌋
For example, in a 7-element array, the root requires one iteration, the two elements below the root require two iterations, and the four elements below require three iterations. In this case, the internal path length is:
∑_{k=1}^{7} ⌊log₂(k)⌋ = 0 + 2(1) + 4(2) = 2 + 8 = 10
The average number of iterations would be 1 + 10/7 = 2 3/7 based on the equation for the average case. The sum for I(n) can be simplified to:
I(n) = ∑_{k=1}^{n} ⌊log₂(k)⌋ = (n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2
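This simplification can be verified numerically. The sketch below compares the defining sum against the closed form, using bit_length to compute ⌊log₂(k)⌋ exactly without floating point:

```python
def flog2(k):
    """floor(log2(k)) for positive integers, computed exactly."""
    return k.bit_length() - 1

def I_sum(n):
    """Minimum internal path length as the defining sum."""
    return sum(flog2(k) for k in range(1, n + 1))

def I_closed(n):
    """The simplified closed form."""
    k = flog2(n + 1)
    return (n + 1) * k - 2 ** (k + 1) + 2

assert all(I_sum(n) == I_closed(n) for n in range(1, 1000))
print(I_sum(7))   # 10, matching the 7-element example above
```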
Substituting the equation for I(n) into the equation for T(n):
T(n) = 1 + ((n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2)/n = ⌊log₂(n)⌋ + 1 − (2^(⌊log₂(n)⌋ + 1) − ⌊log₂(n)⌋ − 2)/n
For integer n, this is equivalent to the equation for the average case on a successful search specified above.
==== Unsuccessful searches ====
Unsuccessful searches can be represented by augmenting the tree with external nodes, which forms an extended binary tree. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An external path is a path from the root to an external node. The external path length is the sum of the lengths of all unique external paths. If there are n elements, which is a positive integer, and the external path length is E(n), then the average number of iterations for an unsuccessful search is T′(n) = E(n)/(n + 1), with the one iteration added to count the initial iteration. The external path length is divided by n + 1 instead of n because there are n + 1 external paths, representing the intervals between and outside the elements of the array.
This problem can similarly be reduced to determining the minimum external path length of all binary trees with n nodes. For all binary trees, the external path length is equal to the internal path length plus 2n. Substituting the equation for I(n):
E(n) = I(n) + 2n = [(n + 1)⌊log₂(n + 1)⌋ − 2^(⌊log₂(n + 1)⌋ + 1) + 2] + 2n = (n + 1)(⌊log₂(n)⌋ + 2) − 2^(⌊log₂(n)⌋ + 1)
Substituting the equation for E(n) into the equation for T′(n), the average case for unsuccessful searches can be determined:
T′(n) = ((n + 1)(⌊log₂(n)⌋ + 2) − 2^(⌊log₂(n)⌋ + 1))/(n + 1) = ⌊log₂(n)⌋ + 2 − 2^(⌊log₂(n)⌋ + 1)/(n + 1)
==== Performance of alternative procedure ====
Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only ⌊log₂(n) + 1⌋ times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large n.
=== Running time and cache use ===
In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of bits) of the elements increases. For example, comparing a pair of 64-bit unsigned integers would require comparing up to twice as many bits as comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers are equal. This can be significant when the encoding lengths of the elements are large, such as with large integer types or long strings, which makes comparing elements expensive. Furthermore, comparing floating-point values (the most common digital representation of real numbers) is often more expensive than comparing integers or short strings.
On most computer architectures, the processor has a hardware cache separate from RAM. Since they are located within the processor itself, caches are much faster to access but usually store much less data than RAM. Therefore, most processors store memory locations that have been accessed recently, along with memory locations close to it. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other (locality of reference). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as linear search and linear probing in hash tables) which access elements in sequence. This adds slightly to the running time of binary search for large arrays on most systems.
== Binary search versus other schemes ==
Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking O(n) time for each such operation. In addition, sorted arrays can complicate memory use, especially when elements are often inserted into the array. There are other data structures that support much more efficient insertion and deletion. Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in O(log n) time regardless of the type or structure of the values themselves. In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array.
=== Linear search ===
Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand. All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least O(n log n) comparisons in the worst case. Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.
=== Trees ===
A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries.
However, binary search is usually more efficient for searching, as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce the tree with the fewest possible levels. Except for balanced binary search trees, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching n comparisons. Binary search trees take more space than sorted arrays.
Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems.
=== Hashing ===
For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records. Most hash table implementations require only amortized constant time on average. However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record. Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.
=== Set membership algorithms ===
A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only O(1) time. The Judy1 type of Judy array handles 64-bit keys efficiently.
For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with k hash functions, membership queries require only O(k) time. However, Bloom filters suffer from false positives.
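The encoding described above can be sketched in a few lines of Python (an illustrative toy, not a production filter; the class name, and the choice of salted blake2b digests as the k hash functions, are ours):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: m bits and k salted blake2b hash functions."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, key):
        # Derive k independent-looking bit positions by salting blake2b.
        for i in range(self.k):
            h = hashlib.blake2b(key.encode(), salt=bytes([i]) * 16)
            yield int.from_bytes(h.digest()[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means definitely absent; True may be a false positive.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))
```

Queries for added keys always answer true; queries for absent keys answer false except with small probability, as noted above.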
=== Other data structures ===
There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute. As long as the keys can be ordered, these operations can always be done at least efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.
== Variations ==
=== Uniform binary search ===
Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A lookup table containing the differences is computed beforehand. For example, if the array to be searched is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], the middle element (m) would be 6. In this case, the middle element of the left subarray ([1, 2, 3, 4, 5]) is 3 and the middle element of the right subarray ([7, 8, 9, 10, 11]) is 9. Uniform binary search would store the value of 3, as both indices differ from 6 by this same amount. To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers.
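A hedged Python sketch of the idea (the delta-table construction follows Knuth's formulation; the function names are ours):

```python
def make_deltas(n):
    # Precomputed table: deltas[j] is how far the probe index moves at
    # depth j of the search, for an array of length n.
    deltas = []
    power = 1
    while True:
        half = power
        power *= 2
        deltas.append((n + half) // power)
        if deltas[-1] == 0:
            break
    return deltas

def uniform_binary_search(a, target):
    deltas = make_deltas(len(a))
    i = deltas[0] - 1          # index of the first midpoint
    d = 1
    while True:
        if 0 <= i < len(a) and a[i] == target:
            return i
        if d >= len(deltas) or deltas[d] == 0:
            return -1          # table exhausted: target is absent
        if i < 0 or (i < len(a) and target > a[i]):
            i += deltas[d]     # move right by the precomputed amount
        else:
            i -= deltas[d]     # move left
        d += 1
```

No midpoint arithmetic happens inside the loop; every step is a table lookup plus one addition or subtraction.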
=== Exponential search ===
Exponential search extends binary search to unbounded lists. It starts by finding the first element with an index that is both a power of two and greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes ⌊log₂ x + 1⌋ iterations before binary search is started and at most ⌊log₂ x⌋ iterations of the binary search, where x is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.
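A short Python sketch of the two phases, doubling an upper bound and then delegating to binary search (here via the standard bisect module; the function name is ours):

```python
import bisect

def exponential_search(a, target):
    if not a:
        return -1
    # Phase 1: double the bound until it passes the target or the array end.
    bound = 1
    while bound < len(a) and a[bound] < target:
        bound *= 2
    # Phase 2: binary search within the last doubling interval.
    lo = bound // 2
    hi = min(bound, len(a) - 1)
    i = bisect.bisect_left(a, target, lo, hi + 1)
    if i < len(a) and a[i] == target:
        return i
    return -1
```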
=== Interpolation search ===
Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.
A common interpolation function is linear interpolation. If A is the array, L and R are the lower and upper bounds respectively, and T is the target, then the target is estimated to be about (T − A_L) / (A_R − A_L) of the way between L and R. When linear interpolation is used, and the distribution of the array elements is uniform or near uniform, interpolation search makes O(log log n) comparisons.
In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays.
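A minimal Python sketch of interpolation search with linear interpolation (assuming numeric keys; the function name is ours):

```python
def interpolation_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[hi] == a[lo]:
            # All remaining elements are equal; direct check avoids
            # division by zero in the estimate below.
            return lo if a[lo] == target else -1
        # Estimate the position, assuming roughly uniformly spaced keys.
        mid = lo + (target - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```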
=== Fractional cascading ===
Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires O(k log n) time, where k is the number of arrays. Fractional cascading reduces this to O(k + log n) by storing specific information in each array about each element and its position in the other arrays.
Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing.
=== Generalization to graphs ===
Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization—when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target. For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in O(log n) queries in the worst case.
=== Noisy binary search ===
Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least (1 − τ) log₂(n) / H(p) − 10 / H(p) comparisons on average, where H(p) = −p log₂(p) − (1 − p) log₂(1 − p) is the binary entropy function and τ is the probability that the procedure yields the wrong position. The noisy binary search problem can be considered as a case of the Rényi-Ulam game, a variant of Twenty Questions where the answers may be wrong.
=== Quantum binary search ===
Classical computers are bounded to the worst case of exactly ⌊log₂ n + 1⌋ iterations when performing binary search. Quantum algorithms for binary search are still bounded to a proportion of log₂ n queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on quantum computers. Any exact quantum binary search procedure—that is, a procedure that always yields the correct result—requires at least (1/π)(ln n − 1) ≈ 0.22 log₂ n queries in the worst case, where ln is the natural logarithm. There is an exact quantum binary search procedure that runs in 4 log₆₀₅ n ≈ 0.433 log₂ n queries in the worst case. In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires O(√n) queries.
== History ==
The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to c. 200 BCE. The tablet contained about 500 sexagesimal numbers and their reciprocals sorted in lexicographical order, which made searching for a specific entry easier. In addition, several lists of names that were sorted by their first letter were discovered on the Aegean Islands. Catholicon, a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters.
In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing. In 1957, William Wesley Peterson published the first method for interpolation search. Every published binary search algorithm worked only for arrays whose length is one less than a power of two until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays. In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration. The uniform binary search was developed by A. K. Chandra of Stanford University in 1971. In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry.
== Implementation issues ==
Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky.
When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases. A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks. Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.
In a practical implementation, the variables used to represent the indices will often be of fixed size (integers), and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as (L + R) / 2, then the value of L + R may exceed the range of integers of the data type used to store the midpoint, even if L and R are within the range. If L and R are nonnegative, this can be avoided by calculating the midpoint as L + (R − L) / 2.
An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once L exceeds R, the search has failed and the implementation must convey this failure. In addition, the loop must be exited when the target element is found, or, in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.
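The two pitfalls above, an overflowing midpoint and ill-defined exit conditions, can both be seen handled in a careful reference loop. (Python integers do not overflow, so the midpoint form below matters only as the pattern used in fixed-width languages such as C or Java; the function name is ours.)

```python
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:                   # exit condition: lo > hi means failure
        mid = lo + (hi - lo) // 2     # avoids overflow of lo + hi in
                                      # fixed-width integer languages
        if a[mid] == target:
            return mid                # exit immediately on success
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                         # convey failure explicitly
```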
== Library support ==
Many languages' standard libraries include binary search routines:
C provides the function bsearch() in its standard library, which is typically implemented via binary search, although the official standard does not require it to be.
C++'s standard library provides the functions binary_search(), lower_bound(), upper_bound() and equal_range().
D's standard library Phobos provides, in its std.range module, a type SortedRange (returned by the sort() and assumeSorted() functions) with methods contains(), equalRange(), lowerBound() and trisect(), which use binary search techniques by default for ranges that offer random access.
COBOL provides the SEARCH ALL verb for performing binary searches on COBOL ordered tables.
Go's sort standard library package contains the functions Search, SearchInts, SearchFloat64s, and SearchStrings, which implement general binary search, as well as specific implementations for searching slices of integers, floating-point numbers, and strings, respectively.
Java offers a set of overloaded binarySearch() static methods in the classes Arrays and Collections in the standard java.util package for performing binary searches on Java arrays and on Lists, respectively.
Microsoft's .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes. An example would be System.Array's method BinarySearch<T>(T[] array, T value).
For Objective-C, the Cocoa framework provides the NSArray -indexOfObject:inSortedRange:options:usingComparator: method in Mac OS X 10.6+. Apple's Core Foundation C framework also contains a CFArrayBSearchValues() function.
Python provides the bisect module that keeps a list in sorted order without having to sort the list after each insertion.
Ruby's Array class includes a bsearch method with built-in approximate matching.
Rust's slice primitive provides binary_search(), binary_search_by(), binary_search_by_key(), and partition_point().
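For example, the Python bisect module listed above can be used as follows (a minimal sketch):

```python
import bisect

data = []
for x in [5, 1, 4, 2, 3]:
    bisect.insort(data, x)        # keeps data sorted after each insertion
# data is now [1, 2, 3, 4, 5]
i = bisect.bisect_left(data, 3)   # binary search for the leftmost 3
# i == 2 and data[i] == 3
```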
== See also ==
Bisection method – Algorithm for finding a zero of a function – the same idea used to solve equations in the real numbers
Multiplicative binary search – Binary search variation with simplified midpoint calculation
== Notes and references ==
This article was submitted to WikiJournal of Science for external academic peer review in 2018 (reviewer reports). The updated content was reintegrated into the Wikipedia page under a CC-BY-SA-3.0 license (2019). The version of record as reviewed is:
Anthony Lin; et al. (2 July 2019). "Binary search algorithm" (PDF). WikiJournal of Science. 2 (1): 5. doi:10.15347/WJS/2019.005. ISSN 2470-6345. Wikidata Q81434400.
=== Notes ===
=== Citations ===
=== Sources ===
== External links ==
NIST Dictionary of Algorithms and Data Structures: binary search
Comparisons and benchmarks of a variety of binary search implementations in C Archived 25 September 2019 at the Wayback Machine | Wikipedia/Binary_search_algorithm |
In computational geometry, a sweep line algorithm or plane sweep algorithm is an algorithmic paradigm that uses a conceptual sweep line or sweep surface to solve various problems in Euclidean space. It is one of the critical techniques in computational geometry.
The idea behind algorithms of this type is to imagine that a line (often a vertical line) is swept or moved across the plane, stopping at some points. Geometric operations are restricted to geometric objects that either intersect or are in the immediate vicinity of the sweep line whenever it stops, and the complete solution is available once the line has passed over all objects.
== Applications ==
An application of the approach led to a breakthrough in the computational complexity of geometric algorithms when Shamos and Hoey presented algorithms for line segment intersection in the plane in 1976. In particular, they described how a combination of the scanline approach with efficient data structures (self-balancing binary search trees) makes it possible to detect whether there are intersections among N segments in the plane in time complexity of O(N log N). The closely related Bentley–Ottmann algorithm uses a sweep line technique to report all K intersections among any N segments in the plane in time complexity of O((N + K) log N) and space complexity of O(N).
Since then, this approach has been used to design efficient algorithms for a number of problems in computational geometry, such as the construction of the Voronoi diagram (Fortune's algorithm) and the Delaunay triangulation or boolean operations on polygons.
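A simple instance of the paradigm is computing the maximum number of intervals that overlap at any point: sweep across the sorted endpoint events, incrementing a counter at each start and decrementing it at each end (a Python sketch; the function name is ours):

```python
def max_overlap(intervals):
    events = []
    for start, end in intervals:
        events.append((start, 1))    # sweep line reaches a start: count up
        events.append((end, -1))     # sweep line reaches an end: count down
    # Sort by coordinate; at equal coordinates, process ends (-1) before
    # starts (+1) so that touching intervals do not count as overlapping.
    events.sort(key=lambda e: (e[0], e[1]))
    best = count = 0
    for _, delta in events:
        count += delta
        best = max(best, count)
    return best
```

Only the intervals currently crossed by the sweep line influence the counter, which is the defining trait of the technique.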
== Generalizations and extensions ==
Topological sweeping is a form of plane sweep with a simple ordering of processing points, which avoids the necessity of completely sorting the points; it allows some sweep line algorithms to be performed more efficiently.
The rotating calipers technique for designing geometric algorithms may also be interpreted as a form of the plane sweep, in the projective dual of the input plane: a form of projective duality transforms the slope of a line in one plane into the x-coordinate of a point in the dual plane, so the progression through lines in sorted order by their slope as performed by a rotating calipers algorithm is dual to the progression through points sorted by their x-coordinates in a plane sweep algorithm.
The sweeping approach may be generalised to higher dimensions.
== References == | Wikipedia/Sweep_line_algorithm |
In computer science, empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to study the behavior of algorithms. The practice combines algorithm development and experimentation: algorithms are not just designed, but also implemented and tested in a variety of situations. In this process, an initial design of an algorithm is analyzed so that the algorithm may be developed in a stepwise manner.
== Overview ==
Methods from empirical algorithmics complement theoretical methods for the analysis of algorithms. Through the principled application of empirical methods, particularly from statistics, it is often possible to obtain insights into the behavior of algorithms such as high-performance heuristic algorithms for hard combinatorial problems that are (currently) inaccessible to theoretical analysis. Empirical methods can also be used to achieve substantial improvements in algorithmic efficiency.
American computer scientist Catherine McGeoch identifies two main branches of empirical algorithmics: the first (known as empirical analysis) deals with the analysis and characterization of the behavior of algorithms, and the second (known as algorithm design or algorithm engineering) is focused on empirical methods for improving the performance of algorithms. The former often relies on techniques and tools from statistics, while the latter is based on approaches from statistics, machine learning and optimization. Dynamic analysis tools, typically performance profilers, are commonly used when applying empirical methods for the selection and refinement of algorithms of various types for use in various contexts.
Research in empirical algorithmics is published in several journals, including the ACM Journal on Experimental Algorithmics (JEA) and the Journal of Artificial Intelligence Research (JAIR). Besides Catherine McGeoch, well-known researchers in empirical algorithmics include Bernard Moret, Giuseppe F. Italiano, Holger H. Hoos, David S. Johnson, and Roberto Battiti.
== Performance profiling in the design of complex algorithms ==
In the absence of empirical algorithmics, analyzing the complexity of an algorithm can involve various theoretical methods applicable to various situations in which the algorithm may be used. Memory and cache considerations are often significant factors to be considered in the theoretical choice of a complex algorithm, or the approach to its optimization, for a given purpose. Performance profiling is a dynamic program analysis technique typically used for finding and analyzing bottlenecks in an entire application's code or for analyzing an entire application to identify poorly performing code. A profiler can reveal the code most relevant to an application's performance issues.
A profiler may help to determine when to choose one algorithm over another in a particular situation. When an individual algorithm is profiled, as with complexity analysis, memory and cache considerations are often more significant than instruction counts or clock cycles; however, the profiler's findings can be considered in light of how the algorithm accesses data rather than the number of instructions it uses.
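As a concrete illustration, Python's built-in cProfile and pstats modules report per-function call counts and times (the profiled function here is a made-up stand-in for a real workload):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately unoptimized loop, standing in for a real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()   # textual table of the hottest functions
```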
Profiling may provide intuitive insight into an algorithm's behavior by revealing performance findings as a visual representation. Performance profiling has been applied, for example, during the development of algorithms for matching wildcards. Early algorithms for matching wildcards, such as Rich Salz' wildmat algorithm, typically relied on recursion, a technique criticized on grounds of performance. The Krauss matching wildcards algorithm was developed based on an attempt to formulate a non-recursive alternative using test cases followed by optimizations suggested via performance profiling, resulting in a new algorithmic strategy conceived in light of the profiling along with other considerations. Profilers that collect data at the level of basic blocks or that rely on hardware assistance provide results that can be accurate enough to assist software developers in optimizing algorithms for a particular computer or situation. Performance profiling can aid developer understanding of the characteristics of complex algorithms applied in complex situations, such as coevolutionary algorithms applied to arbitrary test-based problems, and may help lead to design improvements.
== See also ==
Algorithm engineering
Analysis of algorithms
Profiling (computer programming)
Performance tuning
Software development
== References == | Wikipedia/Empirical_algorithmics |
In mathematics and computer science, an algorithmic technique is a general approach for implementing a process or computation.
== General techniques ==
There are several broadly recognized algorithmic techniques that offer a proven method or process for designing and constructing algorithms. Different techniques may be used depending on the objective, which may include searching, sorting, mathematical optimization, constraint satisfaction, categorization, analysis, and prediction.
=== Brute force ===
Brute force is a simple, exhaustive technique that evaluates every possible outcome to find a solution.
=== Divide and conquer ===
The divide and conquer technique decomposes complex problems recursively into smaller sub-problems. Each sub-problem is then solved and these partial solutions are recombined to determine the overall solution. This technique is often used for searching and sorting.
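Merge sort is the classic instance: split the input, sort each half recursively, and merge the sorted halves (a Python sketch):

```python
def merge_sort(a):
    if len(a) <= 1:
        return a                    # trivially sorted sub-problem
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # divide and solve each half
    right = merge_sort(a[mid:])
    merged = []                     # recombine the partial solutions
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```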
=== Dynamic ===
Dynamic programming is a systematic technique in which a complex problem is decomposed recursively into smaller, overlapping subproblems for solution. Dynamic programming stores the results of the overlapping sub-problems locally using an optimization technique called memoization.
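In Python, memoization can be expressed with the standard functools.lru_cache decorator; the Fibonacci numbers are the usual example of overlapping subproblems (a sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping subproblems fib(n-1) and fib(n-2) are each solved once;
    # repeated calls are answered from the cache (memoization).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(90)  # completes quickly; without the cache this would take ~2^90 calls
```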
=== Evolutionary ===
An evolutionary approach develops candidate solutions and then, in a manner similar to biological evolution, performs a series of random alterations or combinations of these solutions and evaluates the new results against a fitness function. The most fit or promising results are selected for additional iterations, to achieve an overall optimal solution.
=== Graph traversal ===
Graph traversal is a technique for finding solutions to problems that can be represented as graphs. This approach is broad, and includes depth-first search, breadth-first search, tree traversal, and many specific variations that may include local optimizations and excluding search spaces that can be determined to be non-optimum or not possible. These techniques may be used to solve a variety of problems including shortest path and constraint satisfaction problems.
=== Greedy ===
A greedy approach begins by evaluating one possible outcome from the set of possible outcomes, and then searches locally for an improvement on that outcome. When a local improvement is found, it will repeat the process and again search locally for additional improvements near this local optimum. A greedy technique is generally simple to implement, and this series of decisions can be used to find local optima depending on where the search began. However, greedy techniques may not identify the global optimum across the entire set of possible outcomes.
=== Heuristic ===
A heuristic approach employs a practical method to reach an immediate solution not guaranteed to be optimal.
=== Learning ===
Learning techniques employ statistical methods to perform categorization and analysis without explicit programming. Supervised learning, unsupervised learning, reinforcement learning, and deep learning techniques are included in this category.
=== Mathematical optimization ===
Mathematical optimization is a technique that can be used to calculate a mathematical optimum by minimizing or maximizing a function.
=== Modeling ===
Modeling is a general technique for abstracting a real-world problem into a framework or paradigm that assists with solution.
=== Recursion ===
Recursion is a general technique for designing an algorithm that calls itself with a progressively simpler part of the task down to one or more base cases with defined results.
=== Sliding Window ===
The sliding window reduces the use of nested loops and replaces them with a single loop, thereby reducing the time complexity.
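For example, the maximum sum of k consecutive elements can be maintained in a single pass by adding the entering element and subtracting the leaving one (a Python sketch; the function name is ours, and 1 ≤ k ≤ len(a) is assumed):

```python
def max_window_sum(a, k):
    window = sum(a[:k])          # sum of the first window
    best = window
    for i in range(k, len(a)):
        window += a[i] - a[i - k]  # slide: add entering, drop leaving element
        best = max(best, window)
    return best
```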
=== Two Pointers ===
Two pointers is an algorithmic technique that uses two indices (or pointers) to traverse a data structure, usually an array or string, often from different ends or at different speeds. It’s widely used to solve problems involving searching, sorting, or scanning with linear time complexity.
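A standard example is finding two elements of a sorted array that sum to a target value: the pointers start at opposite ends and move inward, giving a linear-time scan (a Python sketch; the function name is ours):

```python
def two_sum_sorted(a, target):
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == target:
            return lo, hi     # indices of the matching pair
        if s < target:
            lo += 1           # sum too small: advance the left pointer
        else:
            hi -= 1           # sum too large: retreat the right pointer
    return None               # no pair sums to target
```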
=== Backtracking ===
Backtracking is a general algorithmic technique used for solving problems recursively by trying to build a solution incrementally, one piece at a time, and removing those solutions that fail to satisfy the problem constraints as soon as possible.
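The n-queens puzzle is the textbook example: queens are placed column by column, and any partial placement that attacks an earlier queen is abandoned immediately (a Python sketch counting solutions; the names are ours):

```python
def n_queens(n):
    def place(col, rows, diag1, diag2):
        if col == n:
            return 1          # all columns filled: one complete solution
        total = 0
        for row in range(n):
            if row in rows or row - col in diag1 or row + col in diag2:
                continue      # attacked: prune this branch (backtrack)
            total += place(col + 1,
                           rows | {row},
                           diag1 | {row - col},
                           diag2 | {row + col})
        return total
    return place(0, set(), set(), set())
```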
== See also ==
Algorithm engineering
Algorithm characterizations
Theory of computation
== Notes ==
== External links ==
Algorithmic Design and Techniques - edX
Algorithmic Techniques and Analysis – Carnegie Mellon
Algorithmic Techniques for Massive Data – MIT | Wikipedia/Algorithmic_technique |
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.
Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure) do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for.
Repeatedly calling a function from within itself may cause the call stack to have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for certain problems, algorithmic or compiler-optimization techniques such as tail call optimization may improve computational performance over a naive recursive implementation.
== Recursive functions and algorithms ==
A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.
=== Base case ===
A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the "terminating case".
The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.
For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by corecursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say "compute the nth term (nth partial sum)".
== Recursive data types ==
Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is a technique for representing data whose exact size is unknown to the programmer: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.
=== Inductively defined data ===
An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):
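One conventional way to write such a definition (the type and constructor names here are assumed, not prescribed):

```haskell
data ListOfStrings = EmptyList
                   | Cons String ListOfStrings
```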
The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.
Another example of inductive definition is the natural numbers (or positive integers):
A natural number is either 1 or n+1, where n is a natural number.
Similarly recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:
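A reconstruction consistent with the description that follows (the nonterminal names are assumed):

```
<expr> ::= <number>
         | ( <expr> * <expr> )
         | ( <expr> + <expr> )
```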
This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complicated arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
=== Coinductively defined data and corecursion ===
A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.
A coinductive definition of infinite streams of strings, given informally, might look like this:
A stream of strings is an object s such that:
head(s) is a string, and
tail(s) is a stream of strings.
This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.
Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.
== Types of recursion ==
=== Single recursion and multiple recursion ===
Recursion that contains only a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search.
Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, as it cannot be replaced by iteration without an explicit stack.
Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively entails multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, while tracking two successive values at each step – see corecursion: examples. A more sophisticated example involves using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
=== Indirect recursion ===
Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.
Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, then from the point of view of f alone, f is indirectly recursing; from the point of view of g alone, g is indirectly recursing; and from the point of view of both, f and g are mutually recursing on each other. Similarly, a set of three or more functions that call each other can be called a set of mutually recursive functions.
=== Anonymous recursion ===
Recursion is usually done by explicitly calling a function by name. However, recursion can also be done via implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.
=== Structural versus generative recursion ===
Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:
[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.
Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.
Generative recursion is the alternative:
Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How to Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.
This distinction is important in proving termination of a function.
All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.
Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.
In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.
By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as "error of approximation" that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.
== Implementation issues ==
In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:
Wrapper function (at top)
Short-circuiting the base case, aka "Arm's-length recursion" (at bottom)
Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough
On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.
=== Wrapper function ===
A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.
Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as "level of recursion" or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.
=== Short-circuiting the base case ===
Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is done primarily for efficiency, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short circuit, and may miss 0; this can be mitigated by a wrapper function.
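A C sketch of such a shortcut for the factorial cases 0 and 1 (the function names here are assumed):

```c
#include <assert.h>

/* Short-circuited recursion: returns 1 for n == 1 without making
   the trivial call fact_rec(0). */
static unsigned long fact_rec(unsigned n) {
    if (n == 1)
        return 1;               /* short circuit: skips the call that would
                                   immediately return */
    return n * fact_rec(n - 1);
}

/* Wrapper: handles the true base case 0! = 1, which the short-circuited
   recursion alone would miss. */
unsigned long factorial(unsigned n) {
    return (n == 0) ? 1 : fact_rec(n);
}
```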
Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.
Conceptually, short-circuiting can be considered to either have the same base case and recursive step, checking the base case only before the recursion, or it can be considered to have a different base case (one step removed from standard base case) and a more complex recursive step, namely "check valid then recurse", as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.
==== Depth-first search ====
A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see binary trees section for standard recursive discussion.
The standard recursive algorithm for a DFS is:
base case: If current node is Null, return false
recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children
In short-circuiting, this is instead:
check value of current node, return true if match,
otherwise, on children, if not Null, then recurse.
In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).
In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.
In C, the standard recursive algorithm may be implemented as:
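A sketch of the standard recursive version (the struct layout and function name are assumptions):

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Standard recursion: the Null check is the base case. */
bool tree_contains(const struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                            /* base case: empty subtree */
    if (tree_node->data == i)
        return true;                             /* match at the current node */
    return tree_contains(tree_node->left, i)
        || tree_contains(tree_node->right, i);   /* recurse on both children */
}
```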
The short-circuited algorithm may be implemented as:
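A sketch of the short-circuited version (again with assumed names); the wrapper handles an empty tree:

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Recursive helper: assumes tree_node is non-Null. */
static bool tree_contains_do(const struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;                             /* found at current node */
    /* Recurse only into non-Null children (short-circuit &&), and check
       the right child only if the left one fails (short-circuit ||). */
    return (tree_node->left  && tree_contains_do(tree_node->left,  i))
        || (tree_node->right && tree_contains_do(tree_node->right, i));
}

/* Wrapper function: handles the case of an empty tree (Null root). */
bool tree_contains(const struct node *tree_node, int i) {
    return tree_node != NULL && tree_contains_do(tree_node, i);
}
```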
Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is made only if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a Boolean, so the overall expression evaluates to a Boolean. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.
=== Hybrid algorithm ===
Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.
== Recursion versus iteration ==
Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit call stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion. By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead. For some algorithms, however, an iterative implementation may not be easily achievable.
Compare the templates to compute x_n defined by x_n = f(n, x_{n−1}) from x_base:
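The two templates might be sketched in C as follows; the step function f, the base values, and all names are stand-ins chosen for illustration:

```c
#include <assert.h>

enum { N_BASE = 0, X_BASE = 0 };

/* Hypothetical step function f(n, x_{n-1}); here f(n, x) = n + x. */
static int f(int n, int x) { return n + x; }

/* Iterative template: accumulate x explicitly in a loop. */
int x_iterative(int n) {
    int x = X_BASE;
    for (int i = N_BASE + 1; i <= n; i++)
        x = f(i, x);
    return x;
}

/* Recursive template: define x_n directly in terms of x_{n-1}. */
int x_recursive(int n) {
    return (n == N_BASE) ? X_BASE : f(n, x_recursive(n - 1));
}
```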
For an imperative language the overhead is to define the function, and for a functional language the overhead is to define the accumulator variable x.
For example, a factorial function may be implemented iteratively in C by assigning to a loop index variable and accumulator variable, rather than by passing arguments and returning values by recursion:
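A minimal sketch of such an iterative version, with an explicit loop index and accumulator:

```c
#include <assert.h>

/* Iterative factorial: the loop index i and the accumulator result
   replace the recursive arguments and return values. */
unsigned long fact(unsigned n) {
    unsigned long result = 1;           /* accumulator variable */
    for (unsigned i = 1; i <= n; i++)   /* loop index variable */
        result *= i;
    return result;
}
```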
=== Expressive power ===
Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.
Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and for loops are routinely rewritten in recursive form in functional languages. However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, may cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack. Moreover, tail call elimination may not be covered by a language's specification, and different implementations of the same language may differ in their tail call elimination capabilities.
=== Performance issues ===
In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.
As a concrete example, the difference in performance between recursive and iterative implementations of the "factorial" example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.
=== Stack space ===
In some programming languages, the maximum size of the call stack is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language. Note the caveat below regarding the special case of tail recursion.
=== Vulnerability ===
Because recursive algorithms can be subject to stack overflows, they may be vulnerable to pathological or malicious input. Some malware specifically targets a program's call stack and takes advantage of the stack's inherently recursive nature. Even in the absence of malware, a stack overflow caused by unbounded recursion can be fatal to the program, and exception handling logic may not prevent the corresponding process from being terminated.
=== Multiply recursive problems ===
Multiply recursive problems are inherently recursive because of the prior state they need to track. One example is tree traversal as in depth-first search; though both recursive and iterative methods are used, they contrast with list traversal and linear search in a list, which is a singly recursive and thus naturally iterative method. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.
=== Refactoring recursion ===
Recursive algorithms can be replaced with non-recursive counterparts. One method for replacing recursive algorithms is to simulate them using heap memory in place of stack memory. An alternative is to develop a replacement algorithm entirely based on non-recursive methods, which can be challenging. For example, recursive algorithms for matching wildcards, such as Rich Salz' wildmat algorithm, were once typical. Non-recursive algorithms for the same purpose, such as the Krauss matching wildcards algorithm, have been developed to avoid the drawbacks of recursion and have improved only gradually based on techniques such as collecting tests and profiling performance.
== Tail-recursive functions ==
Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (shown again below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the "for" and "while" loops.
The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
== Order of execution ==
Consider these two functions:
=== Function 1 ===
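A minimal sketch (the original listing is not preserved, so the exact name and bound are assumed), printing before the recursive call:

```c
#include <stdio.h>
#include <assert.h>

/* Print first, then recurse: output appears on the way down the stack. */
void function1(int num) {
    printf("%d\n", num);
    if (num < 4)
        function1(num + 1);
}
/* function1(0) prints 0 1 2 3 4, one number per line. */
```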
=== Function 2 ===
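The same sketch with the two lines swapped, printing after the recursive call:

```c
#include <stdio.h>
#include <assert.h>

/* Recurse first, then print: output appears as the stack unwinds. */
void function2(int num) {
    if (num < 4)
        function2(num + 1);
    printf("%d\n", num);
}
/* function2(0) prints 4 3 2 1 0, one number per line. */
```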
The output of function 2 is that of function 1 with the lines swapped.
In the case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The instructions placed after the recursive call are then executed once per call, in reverse order, as the call stack unwinds after the maximum recursion depth has been reached.
Also note that the order of the print statements is reversed, which is due to the way the functions and statements are stored on the call stack.
== Recursive procedures ==
=== Factorial ===
A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:
{\displaystyle \operatorname {fact} (n)={\begin{cases}1&{\mbox{if }}n=0\\n\cdot \operatorname {fact} (n-1)&{\mbox{if }}n>0\\\end{cases}}}
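A direct C rendering of this definition (a sketch):

```c
#include <assert.h>

/* Recursive factorial, following the case analysis above. */
unsigned long fact(unsigned n) {
    if (n == 0)
        return 1;               /* base case: 0! = 1 */
    return n * fact(n - 1);     /* recursive case: n! = n * (n-1)! */
}
```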
The function can also be written as a recurrence relation:
{\displaystyle b_{n}=nb_{n-1}}
{\displaystyle b_{0}=1}
Evaluating the recurrence relation demonstrates the computation that would be performed in evaluating the recursive definition above:
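For example, evaluating b_4 unfolds as follows:

```
b_4 = 4 · b_3
    = 4 · (3 · b_2)
    = 4 · (3 · (2 · b_1))
    = 4 · (3 · (2 · (1 · b_0)))
    = 4 · (3 · (2 · (1 · 1)))
    = 24
```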
This factorial function can also be described without using recursion by making use of the typical looping constructs found in imperative programming languages:
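A looping version, sketched with an accumulator variable t to match the fact_acc formulation that follows:

```c
#include <assert.h>

/* Iterative factorial using an accumulator t, as in fact_acc(n, t). */
unsigned long fact(unsigned n) {
    unsigned long t = 1;    /* accumulator variable */
    while (n > 0) {
        t = n * t;          /* fact_acc(n, t) -> fact_acc(n - 1, n * t) */
        n = n - 1;
    }
    return t;               /* base case: fact_acc(0, t) = t */
}
```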
The imperative code above is equivalent to this mathematical definition using an accumulator variable t:
{\displaystyle {\begin{aligned}\operatorname {fact} (n)&=\operatorname {fact_{acc}} (n,1)\\\operatorname {fact_{acc}} (n,t)&={\begin{cases}t&{\mbox{if }}n=0\\\operatorname {fact_{acc}} (n-1,nt)&{\mbox{if }}n>0\\\end{cases}}\end{aligned}}}
The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
=== Greatest common divisor ===
The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.
Function definition:
{\displaystyle \gcd(x,y)={\begin{cases}x&{\mbox{if }}y=0\\\gcd(y,\operatorname {remainder} (x,y))&{\mbox{if }}y>0\\\end{cases}}}
Recurrence relation for greatest common divisor, where {\displaystyle x\%y} expresses the remainder of {\displaystyle x/y}:
{\displaystyle \gcd(x,y)=\gcd(y,x\%y)} if {\displaystyle y\neq 0}
{\displaystyle \gcd(x,0)=x}
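The definition above translates directly into a tail-recursive C function (a sketch):

```c
#include <assert.h>

/* Recursive Euclidean algorithm; the recursive call is in tail position. */
unsigned gcd(unsigned x, unsigned y) {
    if (y == 0)
        return x;               /* base case */
    return gcd(y, x % y);       /* tail-recursive case */
}
```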
The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and a language that eliminates tail calls would perform the same steps of evaluation. Below is a version of the same algorithm using explicit iteration, suitable for a language that does not eliminate tail calls. By maintaining its state entirely in the variables x and y and using a looping construct, the program avoids making recursive calls and growing the call stack.
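An iterative sketch; the remainder is held in a temporary variable r:

```c
#include <assert.h>

/* Iterative Euclidean algorithm: all state lives in x and y. */
unsigned gcd(unsigned x, unsigned y) {
    while (y != 0) {
        unsigned r = x % y;     /* temporary variable for the remainder */
        x = y;
        y = r;
    }
    return x;
}
```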
The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps.
=== Towers of Hanoi ===
The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion. There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?
Function definition:
{\displaystyle \operatorname {hanoi} (n)={\begin{cases}1&{\mbox{if }}n=1\\2\cdot \operatorname {hanoi} (n-1)+1&{\mbox{if }}n>1\\\end{cases}}}
Recurrence relation for hanoi:
{\displaystyle h_{n}=2h_{n-1}+1}
{\displaystyle h_{1}=1}
Example implementations:
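One possible C sketch (all names assumed): hanoi counts the required moves, following the recurrence above, and move prints a solution.

```c
#include <stdio.h>
#include <assert.h>

/* Number of moves required, following h(n) = 2*h(n-1) + 1, h(1) = 1. */
unsigned long hanoi(unsigned n) {
    return (n == 1) ? 1 : 2 * hanoi(n - 1) + 1;
}

/* Move n disks from peg 'from' to peg 'to', using peg 'via' as spare. */
void move(unsigned n, char from, char to, char via) {
    if (n == 0)
        return;                                   /* nothing left to move */
    move(n - 1, from, via, to);                   /* clear the way */
    printf("disk %u: %c -> %c\n", n, from, to);   /* move the largest disk */
    move(n - 1, via, to, from);                   /* restack on top of it */
}
```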
Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to the explicit formula h_n = 2^n − 1.
=== Binary search ===
The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched, and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.
Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.
Example implementation of binary search in C:
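A sketch (the function name and -1 "not found" convention are assumptions); the index bounds low and high delimit the subarray under consideration:

```c
#include <assert.h>

/* Return the index of value in data[low..high], or -1 if not present. */
int binary_search(const int data[], int low, int high, int value) {
    if (low > high)
        return -1;                          /* empty range: not found */
    int mid = low + (high - low) / 2;       /* midpoint, avoiding overflow */
    if (data[mid] == value)
        return mid;                         /* found at the midpoint */
    else if (data[mid] > value)
        return binary_search(data, low, mid - 1, value);   /* left half */
    else
        return binary_search(data, mid + 1, high, value);  /* right half */
}
```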
== Recursive data structures (structural recursion) ==
An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.
"Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms."
The examples in this section illustrate what is known as "structural recursion". This term refers to the fact that the recursive procedures are acting on data that is defined recursively.
As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.
=== Linked lists ===
Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The "next" element of struct node is a pointer to another struct node, effectively creating a list type.
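A sketch of the node structure (field names as described in the surrounding text):

```c
#include <stddef.h>
#include <assert.h>

/* The "next" field points to another struct node, so the type is
   defined in terms of itself. */
struct node {
    int data;           /* integer payload */
    struct node *next;  /* self-reference; NULL marks the end of the list */
};
```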
Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.
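A sketch of list_print (the node definition is repeated so the example is self-contained):

```c
#include <stdio.h>
#include <stddef.h>
#include <assert.h>

struct node {
    int data;
    struct node *next;
};

/* Walk down the list, printing each data element. The list pointer
   being NULL is the base case; the list itself is never modified. */
void list_print(const struct node *list) {
    if (list == NULL)
        return;                     /* empty list: stop */
    printf("%d\n", list->data);     /* print this node's integer */
    list_print(list->next);         /* recurse on the rest of the list */
}
```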
=== Binary trees ===
Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).
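A sketch of the node structure, with the two self-referential pointers described above:

```c
#include <stddef.h>
#include <assert.h>

/* A binary tree node, defined in terms of itself twice over. */
struct node {
    int data;            /* integer payload */
    struct node *left;   /* pointer to the left sub-tree, or NULL */
    struct node *right;  /* pointer to the right sub-tree, or NULL */
};
```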
Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:
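For example, a membership test might be sketched as follows (the node definition is repeated for self-containment):

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Search the whole tree for i; makes at most two recursive calls
   per invocation, one for each sub-tree. */
bool tree_contains(const struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                            /* base case */
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i)
            || tree_contains(tree_node->right, i);
}
```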
At most two recursive calls will be made for any given call to tree_contains as defined above.
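An in-order traversal can be sketched the same way: visit the left sub-tree, then the node itself, then the right sub-tree.

```c
#include <stdio.h>
#include <stddef.h>
#include <assert.h>

struct node {
    int data;
    struct node *left, *right;
};

/* In-order traversal: left sub-tree, this node, right sub-tree. */
void tree_print(const struct node *t) {
    if (t == NULL)
        return;                  /* base case: empty sub-tree */
    tree_print(t->left);
    printf("%d\n", t->data);
    tree_print(t->right);
}
```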
The above example illustrates an in-order traversal of the binary tree. A Binary search tree is a special case of the binary tree where the data elements of each node are in order.
=== Filesystem traversal ===
Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, so the concepts behind tree traversal are applicable to it. More specifically, the code below would be an example of a preorder traversal of a filesystem.
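A POSIX sketch (the function names traverse and rtraverse are taken from the discussion below; everything else is an assumption):

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Visit every entry under path, preorder: print first, then descend. */
static void rtraverse(const char *path) {
    DIR *dir = opendir(path);
    if (dir == NULL)
        return;                       /* not a directory, or unreadable */
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {              /* iteration */
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;                 /* skip self and parent entries */
        char child[4096];
        snprintf(child, sizeof child, "%s/%s", path, entry->d_name);
        printf("%s\n", child);        /* visit before descending: preorder */
        rtraverse(child);             /* recursion; no-ops on plain files */
    }
    closedir(dir);
}

/* Wrapper function: prints the root and starts the recursion. */
void traverse(const char *root) {
    printf("%s\n", root);
    rtraverse(root);
}
```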
This code is both recursion and iteration - the files and directories are iterated, and each directory is opened recursively.
The "rtraverse" method is an example of direct recursion, whilst the "traverse" method is a wrapper function.
The "base case" scenario is that there will always be a fixed number of files and/or directories in a given filesystem.
== Time-efficiency of recursive algorithms ==
The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big-O term.
=== Shortcut rule (master theorem) ===
If the time-complexity of the function is in the form
T(n) = a · T(n/b) + f(n)
Then the asymptotic time complexity is as follows:
If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a · f(n/b) ≤ c · f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and f(n) represents the work that the function does independently of any recursion (e.g. partitioning, recombining) at each level of recursion.
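As a worked illustration (merge sort's recurrence, not an example from the text): with a = 2, b = 2 and f(n) = n, we get n^(log_b a) = n, so the second case applies and T(n) = Θ(n log n). A small numeric check in Python, assuming T(1) = 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Merge-sort-style recurrence T(n) = 2*T(n/2) + n, with T(1) = 1."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2**k the recurrence solves exactly to n*(k + 1), i.e. Θ(n log n).
for k in range(1, 12):
    n = 2 ** k
    assert T(n) == n * (k + 1)
```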
== Recursion in logic programming ==
In the procedural interpretation of logic programs, clauses (or rules) of the form A :- B are treated as procedures, which reduce goals of the form A to subgoals of the form B.
For example, the Prolog clauses:
path(X, Y) :- arc(X, Y).
path(X, Y) :- arc(X, Z), path(Z, Y).
define a procedure, which can be used to search for a path from X to Y, either by finding a direct arc from X to Y, or by finding an arc from X to Z, and then searching recursively for a path from Z to Y. Prolog executes the procedure by reasoning top-down (or backwards) and searching the space of possible paths depth-first, one branch at a time. If it tries the second clause, and finitely fails to find a path from Z to Y, it backtracks and tries to find an arc from X to another node, and then searches for a path from that other node to Y.
However, in the logical reading of logic programs, clauses are understood declaratively as universally quantified conditionals. For example, the recursive clause of the path-finding procedure is understood as representing the knowledge that, for every X, Y and Z, if there is an arc from X to Z and a path from Z to Y then there is a path from X to Y. In symbolic form:
∀X,Y,Z (arc(X,Z) ∧ path(Z,Y) → path(X,Y)).
The logical reading frees the reader from needing to know how the clause is used to solve problems. The clause can be used top-down, as in Prolog, to reduce problems to subproblems. Or it can be used bottom-up (or forwards), as in Datalog, to derive conclusions from conditions. This separation of concerns is a form of abstraction, which separates declarative knowledge from problem solving methods (see Algorithm#Algorithm = Logic + Control).
== Infinite recursion ==
A common mistake among programmers is not providing a way to exit a recursive function, often by omitting or incorrectly checking the base case, letting it run (at least theoretically) infinitely by endlessly calling itself recursively. This is called infinite recursion, and the program will never terminate. In practice, this typically exhausts the available stack space. In most programming environments, a program with infinite recursion will not really run forever. Eventually, something will break and the program will report an error.
Below is Java code that exhibits infinite recursion:
Running this code will result in a stack overflow error.
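The same failure mode can be sketched in Python, whose runtime reports the impending stack overflow as a RecursionError rather than crashing the process:

```python
def broken(n):
    """No base case: calls itself unconditionally, so it can never terminate."""
    return broken(n + 1)

try:
    broken(0)
except RecursionError:
    # Python guards the call stack with a recursion limit instead of overflowing it.
    print("stack exhausted")
```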
== See also ==
Functional programming
Computational problem
Hierarchical and recursive queries in SQL
Kleene–Rosser paradox
Open recursion
Recursion (in general)
Sierpiński curve
McCarthy 91 function
μ-recursive functions
Primitive recursive functions
Tak (function)
Logic programming
== Notes ==
== References ==
In functional programming, fold (also termed reduce, accumulate, aggregate, compress, or inject) refers to a family of higher-order functions that analyze a recursive data structure and, through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure's hierarchy, using the function in a systematic way.
Folds are in a sense dual to unfolds, which take a seed value and apply a function corecursively to decide how to progressively construct a corecursive data structure, whereas a fold recursively breaks that structure down, replacing it with the results of applying a combining function at each node on its terminal values and the recursive results (catamorphism, versus anamorphism of unfolds).
== As structural transformations ==
Folds can be regarded as consistently replacing the structural components of a data structure with functions and values. Lists, for example, are built up in many functional languages from two primitives: any list is either an empty list, commonly called nil ([]), or is constructed by prefixing an element in front of another list, creating what is called a cons node ( Cons(X1,Cons(X2,Cons(...(Cons(Xn,nil))))) ), resulting from application of a cons function (written down as a colon (:) in Haskell). One can view a fold on lists as replacing the nil at the end of the list with a specific value, and replacing each cons with a specific function. These replacements can be viewed as a diagram:
There's another way to perform the structural transformation in a consistent manner, with the order of the two links of each node flipped when fed into the combining function:
These pictures illustrate right and left fold of a list visually. They also highlight the fact that foldr (:) [] is the identity function on lists (a shallow copy in Lisp parlance), as replacing cons with cons and nil with nil will not change the result. The left fold diagram suggests an easy way to reverse a list, foldl (flip (:)) []. Note that the parameters to cons must be flipped, because the element to add is now the right hand parameter of the combining function. Another easy result to see from this vantage-point is to write the higher-order map function in terms of foldr, by composing the function to act on the elements with cons, as:
where the period (.) is an operator denoting function composition.
This way of looking at things provides a simple route to designing fold-like functions on other algebraic data types and structures, like various sorts of trees. One writes a function which recursively replaces the constructors of the datatype with provided functions, and any constant values of the type with provided values. Such a function is generally referred to as a catamorphism.
== On lists ==
The folding of the list [1,2,3,4,5] with the addition operator would result in 15, the sum of the elements of the list [1,2,3,4,5]. To a rough approximation, one can think of this fold as replacing the commas in the list with the + operation, giving 1 + 2 + 3 + 4 + 5.
In the example above, + is an associative operation, so the final result will be the same regardless of parenthesization, although the specific way in which it is calculated will be different. In the general case of non-associative binary functions, the order in which the elements are combined may influence the final result's value. On lists, there are two obvious ways to carry this out: either by combining the first element with the result of recursively combining the rest (called a right fold), or by combining the result of recursively combining all elements but the last one, with the last element (called a left fold). This corresponds to a binary operator being either right-associative or left-associative, in Haskell's or Prolog's terminology. With a right fold, the sum would be parenthesized as 1 + (2 + (3 + (4 + 5))), whereas with a left fold it would be parenthesized as (((1 + 2) + 3) + 4) + 5.
In practice, it is convenient and natural to have an initial value which in the case of a right fold is used when one reaches the end of the list, and in the case of a left fold is what is initially combined with the first element of the list. In the example above, the value 0 (the additive identity) would be chosen as an initial value, giving 1 + (2 + (3 + (4 + (5 + 0)))) for the right fold, and ((((0 + 1) + 2) + 3) + 4) + 5 for the left fold. For multiplication, an initial choice of 0 wouldn't work: 0 * 1 * 2 * 3 * 4 * 5 = 0. The identity element for multiplication is 1. This would give us the outcome 1 * 1 * 2 * 3 * 4 * 5 = 120 = 5!.
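Both orders can be sketched in Python: functools.reduce is a left fold, and foldr below is a hypothetical right-fold helper written for illustration:

```python
from functools import reduce

def foldr(f, z, xs):
    """Right fold: combine each element with the result of folding the rest."""
    out = z
    for x in reversed(xs):
        out = f(x, out)
    return out

xs = [1, 2, 3, 4, 5]
assert reduce(lambda acc, x: acc + x, xs, 0) == 15   # left:  ((((0+1)+2)+3)+4)+5
assert foldr(lambda x, acc: x + acc, 0, xs) == 15    # right: 1+(2+(3+(4+(5+0))))

# With a non-associative operator the two orders give different results:
assert reduce(lambda acc, x: acc - x, xs, 0) == -15  # ((((0-1)-2)-3)-4)-5
assert foldr(lambda x, acc: x - acc, 0, xs) == 3     # 1-(2-(3-(4-(5-0))))
```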
=== Linear vs. tree-like folds ===
The use of an initial value is necessary when the combining function f is asymmetrical in its types (e.g. a → b → b), i.e. when the type of its result is different from the type of the list's elements. Then an initial value must be used, with the same type as that of f 's result, for a linear chain of applications to be possible. Whether it will be left- or right-oriented will be determined by the types expected of its arguments by the combining function. If it is the second argument that must be of the same type as the result, then f could be seen as a binary operation that associates on the right, and vice versa.
When the combining function is symmetrical in its types (a → a → a), i.e. the elements and the result form a magma, and the result type is the same as the list elements' type, the parentheses may be placed in arbitrary fashion, thus creating a binary tree of nested sub-expressions, e.g., ((1 + 2) + (3 + 4)) + 5. If the binary operation f is associative this value will be well-defined, i.e., the same for any parenthesization, although the operational details of how it is calculated will be different. This can have significant impact on efficiency if f is non-strict.
Whereas linear folds are node-oriented and operate in a consistent manner for each node of a list, tree-like folds are whole-list oriented and operate in a consistent manner across groups of nodes.
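A tree-like fold over a finite list can be sketched in Python by repeatedly combining adjacent pairs (treefold is a hypothetical helper, not a standard function):

```python
def treefold(f, z, xs):
    """Fold by pairing neighbours, building a balanced tree of applications."""
    if not xs:
        return z
    while len(xs) > 1:
        # Combine adjacent pairs; a leftover odd element passes through unchanged.
        xs = [f(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# ((1 + 2) + (3 + 4)) + 5 == 15, the same value as a linear fold since + is associative.
assert treefold(lambda a, b: a + b, 0, [1, 2, 3, 4, 5]) == 15
```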
=== Special folds for non-empty lists ===
One often wants to choose the identity element of the operation f as the initial value z. When no initial value seems appropriate, for example, when one wants to fold the function which computes the maximum of its two parameters over a non-empty list to get the maximum element of the list, there are variants of foldr and foldl which use the last and first element of the list respectively as the initial value. In Haskell and several other languages, these are called foldr1 and foldl1, the 1 making reference to the automatic provision of an initial element, and the fact that the lists they are applied to must have at least one element.
These folds use type-symmetrical binary operation: the types of both its arguments, and its result, must be the same. Richard Bird in his 2010 book proposes "a general fold function on non-empty lists" foldrn which transforms its last element, by applying an additional argument function to it, into a value of the result type before starting the folding itself, and is thus able to use type-asymmetrical binary operation like the regular foldr to produce a result of type different from the list's elements type.
=== Implementation ===
==== Linear folds ====
Using Haskell as an example, foldl and foldr can be formulated in a few equations.
If the list is empty, the result is the initial value. If not, fold the tail of the list using as new initial value the result of applying f to the old initial value and the first element.
If the list is empty, the result is the initial value z. If not, apply f to the first element and the result of folding the rest.
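The two pairs of Haskell equations described above can be transcribed directly into recursive Python (illustrative only; idiomatic Python would use functools.reduce or a loop):

```python
def foldl(f, z, xs):
    """foldl f z []     = z
       foldl f z (x:xs) = foldl f (f z x) xs"""
    if not xs:
        return z
    return foldl(f, f(z, xs[0]), xs[1:])

def foldr(f, z, xs):
    """foldr f z []     = z
       foldr f z (x:xs) = f x (foldr f z xs)"""
    if not xs:
        return z
    return f(xs[0], foldr(f, z, xs[1:]))

assert foldl(lambda acc, x: acc + [x], [], [1, 2, 3]) == [1, 2, 3]
assert foldr(lambda x, acc: [x] + acc, [], [1, 2, 3]) == [1, 2, 3]
```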
==== Tree-like folds ====
Lists can be folded over in a tree-like fashion, both for finite and for indefinitely defined lists:
In the case of foldi function, to avoid its runaway evaluation on indefinitely defined lists the function f must not always demand its second argument's value, at least not all of it, or not immediately (see example below).
==== Folds for non-empty lists ====
=== Evaluation order considerations ===
In the presence of lazy, or non-strict evaluation, foldr will immediately return the application of f to the head of the list and the recursive case of folding over the rest of the list. Thus, if f is able to produce some part of its result without reference to the recursive case on its "right" i.e., in its second argument, and the rest of the result is never demanded, then the recursion will stop (e.g., head == foldr (\a b->a) (error "empty list")). This allows right folds to operate on infinite lists. By contrast, foldl will immediately call itself with new parameters until it reaches the end of the list. This tail recursion can be efficiently compiled as a loop, but can't deal with infinite lists at all — it will recurse forever in an infinite loop.
Having reached the end of the list, an expression is in effect built by foldl of nested left-deepening f-applications, which is then presented to the caller to be evaluated. Were the function f to refer to its second argument first here, and be able to produce some part of its result without reference to the recursive case (here, on its left i.e., in its first argument), then the recursion would stop. This means that while foldr recurses on the right, it allows for a lazy combining function to inspect list's elements from the left; and conversely, while foldl recurses on the left, it allows for a lazy combining function to inspect list's elements from the right, if it so chooses (e.g., last == foldl (\a b->b) (error "empty list")).
Reversing a list is also tail-recursive (it can be implemented using rev = foldl (\ys x -> x : ys) []). On finite lists, that means that left-fold and reverse can be composed to perform a right fold in a tail-recursive way (cf. 1+>(2+>(3+>0)) == ((0<+3)<+2)<+1), with a modification to the function f so it reverses the order of its arguments (i.e., foldr f z == foldl (flip f) z . foldl (flip (:)) []), tail-recursively building a representation of expression that right-fold would build. The extraneous intermediate list structure can be eliminated with the continuation-passing style technique, foldr f z xs == foldl (\k x-> k . f x) id xs z; similarly, foldl f z xs == foldr (\x k-> k . flip f x) id xs z ( flip is only needed in languages like Haskell with its flipped order of arguments to the combining function of foldl unlike e.g., in Scheme where the same order of arguments is used for combining functions to both foldl and foldr).
Another technical point is that, in the case of left folds using lazy evaluation, the new initial parameter is not being evaluated before the recursive call is made. This can lead to stack overflows when one reaches the end of the list and tries to evaluate the resulting potentially gigantic expression. For this reason, such languages often provide a stricter variant of left folding which forces the evaluation of the initial parameter before making the recursive call. In Haskell this is the foldl' (note the apostrophe, pronounced 'prime') function in the Data.List library (one needs to be aware of the fact though that forcing a value built with a lazy data constructor won't force its constituents automatically by itself). Combined with tail recursion, such folds approach the efficiency of loops, ensuring constant space operation, when lazy evaluation of the final result is impossible or undesirable.
=== Examples ===
Using a Haskell interpreter, the structural transformations which fold functions perform can be illustrated by constructing a string:
Infinite tree-like folding is demonstrated e.g., in recursive primes production by unbounded sieve of Eratosthenes in Haskell:
where the function union operates on ordered lists in a local manner to efficiently produce their set union, and minus their set difference.
A finite prefix of primes is concisely defined as a folding of set difference operation over the lists of enumerated multiples of integers, as
For finite lists, e.g., merge sort (and its duplicates-removing variety, nubsort) could be easily defined using tree-like folding as
with the function merge a duplicates-preserving variant of union.
Functions head and last could have been defined through folding as
== In various languages ==
== Universality ==
Fold is a polymorphic function. For any g having a definition
then g can be expressed as
Also, in a lazy language with infinite lists, a fixed point combinator can be implemented via fold, proving that iterations can be reduced to folds:
== See also ==
Aggregate function
Iterated binary operation
Catamorphism, a generalization of fold
Homomorphism
Map (higher-order function)
Prefix sum
Recursive data type
Reduction operator
Structural recursion
== References ==
== External links ==
"Higher order functions — map, fold and filter"
"Unit 6: The Higher-order fold Functions"
"Fold in Tcl"
"Constructing List Homomorphism from Left and Right Folds"
"The magic foldr"
The export of cryptography is the transfer from one country to another of devices and technology related to cryptography.
In the early days of the Cold War, the United States and its allies developed an elaborate series of export control regulations designed to prevent a wide range of Western technology from falling into the hands of others, particularly the Eastern bloc. All export of technology classed as 'critical' required a license. CoCom was organized to coordinate Western export controls.
Many countries, notably those participating in the Wassenaar Arrangement, introduced restrictions. The Wassenaar restrictions were largely loosened in the late 2010s.
== See also ==
Crypto wars
Export of cryptography from the United States
Restrictions on the import of cryptography
== References ==
In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.
Although search engines use search algorithms, they belong to the study of information retrieval, not algorithmics.
The appropriate search algorithm to use often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes.
Search algorithms can be classified based on their mechanism of searching into three types of algorithms: linear, binary, and hashing. Linear search algorithms check every record for the one associated with a target key in a linear fashion. Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half. Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order. Digital search algorithms work based on the properties of digits in data structures by using numerical keys. Finally, hashing directly maps keys to records based on a hash function.
Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of O(log n), or logarithmic time. In simple terms, the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space.
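For instance, a binary search over a sorted array halves the candidate range on each iteration, so the number of iterations is bounded by ⌊log₂ n⌋ + 1. A sketch in Python with an explicit step counter:

```python
import math

def binary_search(xs, target):
    """Return (index or -1, number of loop iterations) for a sorted list xs."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # probe the center of the remaining range
        if xs[mid] == target:
            return mid, steps
        elif xs[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1, steps

xs = list(range(1000))
for t in (0, 499, 999, 1234):
    _, steps = binary_search(xs, t)
    assert steps <= math.floor(math.log2(len(xs))) + 1   # logarithmic worst case
```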
== Applications of search algorithms ==
Specific applications of search algorithms include:
Problems in combinatorial optimization, such as:
The vehicle routing problem, a form of shortest path problem
The knapsack problem: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
The nurse scheduling problem
Problems in constraint satisfaction, such as:
The map coloring problem
Filling in a sudoku or crossword puzzle
In game theory and especially combinatorial game theory, choosing the best move to make next (such as with the minmax algorithm)
Finding a combination or password from the whole set of possibilities
Factoring an integer (an important problem in cryptography)
Search engine optimization (SEO) and content optimization for web crawlers
Optimizing an industrial process, such as a chemical reaction, by changing the parameters of the process (like temperature, pressure, and pH)
Retrieving a record from a database
Finding the maximum or minimum value in a list or array
Checking to see if a given value is present in a set of values
== Classes ==
=== For virtual search spaces ===
Algorithms for searching virtual spaces are used in the constraint satisfaction problem, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequations. They are also used when the goal is to find a variable assignment that will maximize or minimize a certain function of those variables. Algorithms for these problems include the basic brute-force search (also called "naïve" or "uninformed" search), and a variety of heuristics that try to exploit partial knowledge about the structure of this space, such as linear relaxation, constraint generation, and constraint propagation.
An important subclass are the local search methods, that view the elements of the search space as the vertices of a graph, with edges defined by a set of heuristics applicable to the case; and scan the space by moving from item to item along the edges, for example according to the steepest descent or best-first criterion, or in a stochastic search. This category includes a great variety of general metaheuristic methods, such as simulated annealing, tabu search, A-teams, and genetic programming, that combine arbitrary heuristics in specific ways. The opposite of local search would be global search methods. This method is applicable when the search space is not limited and all aspects of the given network are available to the entity running the search algorithm.
This class also includes various tree search algorithms, that view the elements as vertices of a tree, and traverse that tree in some special order. Examples of the latter include the exhaustive methods such as depth-first search and breadth-first search, as well as various heuristic-based search tree pruning methods such as backtracking and branch and bound. Unlike general metaheuristics, which at best work only in a probabilistic sense, many of these tree-search methods are guaranteed to find the exact or optimal solution, if given enough time. This is called "completeness".
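As a concrete illustration (not from the text), a depth-first backtracking search for the map-coloring constraint problem mentioned earlier; the method is complete in the sense above, finding a valid assignment whenever one exists:

```python
def color_map(neighbors, colors):
    """Assign a color to every region so no two adjacent regions match.
    neighbors: dict mapping each region to a list of adjacent regions."""
    regions = list(neighbors)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True                       # every region colored: solution found
        region = regions[i]
        for c in colors:
            if all(assignment.get(n) != c for n in neighbors[region]):
                assignment[region] = c        # tentative choice
                if backtrack(i + 1):
                    return True
                del assignment[region]        # undo and try the next color
        return False                          # dead end: backtrack to the caller

    return assignment if backtrack(0) else None

# A triangle of mutually adjacent regions needs three colors.
triangle = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
assert color_map(triangle, ["red", "green"]) is None
assert color_map(triangle, ["red", "green", "blue"]) is not None
```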
Another important sub-class consists of algorithms for exploring the game tree of multiple-player games, such as chess or backgammon, whose nodes consist of all possible game situations that could result from the current situation. The goal in these problems is to find the move that provides the best chance of a win, taking into account all possible moves of the opponent(s). Similar problems occur when humans or machines have to make successive decisions whose outcomes are not entirely under one's control, such as in robot guidance or in marketing, financial, or military strategy planning. This kind of problem — combinatorial search — has been extensively studied in the context of artificial intelligence. Examples of algorithms for this class are the minimax algorithm, alpha–beta pruning, and the A* algorithm and its variants.
=== For sub-structures of a given structure ===
An important and extensively studied subclass are the graph algorithms, in particular graph traversal algorithms, for finding specific sub-structures in a given graph — such as subgraphs, paths, circuits, and so on. Examples include Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm.
Another important subclass of this category are the string searching algorithms, that search for patterns within strings. Two famous examples are the Boyer–Moore and Knuth–Morris–Pratt algorithms, and several algorithms based on the suffix tree data structure.
=== Search for the maximum of a function ===
In 1953, American statistician Jack Kiefer devised Fibonacci search which can be used to find the maximum of a unimodal function and has many other applications in computer science.
=== For quantum computers ===
There are also search methods designed for quantum computers, like Grover's algorithm, that are theoretically faster than linear or brute-force search even without the help of data structures or heuristics. While the ideas and applications behind quantum computers are still entirely theoretical, studies have been conducted with algorithms like Grover's that accurately replicate the hypothetical physical versions of quantum computing systems.
== See also ==
Backward induction – Process of reasoning backwards in sequence
Content-addressable memory – Type of computer memory hardware
Dual-phase evolution – Process that drives self-organization within complex adaptive systems
Linear search problem – Computational search problem
No free lunch in search and optimization – Average solution cost is the same with any method
Recommender system – System to predict users' preferences, also use statistical methods to rank results in very large data sets
Search engine (computing) – System to help searching for information
Search game – Two-person zero-sum game
Selection algorithm – Method for finding kth smallest value
Solver – Software for a class of mathematical problems
Sorting algorithm – Algorithm that arranges lists in order, necessary for executing certain search algorithms
Web search engine – Software system for finding relevant information on the Web
== References ==
=== Citations ===
=== Bibliography ===
==== Books ====
Knuth, Donald (1998). Sorting and Searching. The Art of Computer Programming. Vol. 3 (2nd ed.). Reading, MA: Addison-Wesley Professional.
==== Articles ====
Beame, Paul; Fich, Faith (August 2002). "Optimal Bounds for the Predecessor Problem and Related Problems". Journal of Computer and System Sciences. 65 (1): 38–72. doi:10.1006/jcss.2002.1822. S2CID 1991980.
== External links ==
A string-searching algorithm, sometimes called string-matching algorithm, is an algorithm that searches a body of text for portions that match by pattern.
A basic example of string searching is when the pattern and the searched text are arrays of elements of an alphabet (finite set) Σ. Σ may be a human language alphabet, for example the letters A through Z; other applications may use a binary alphabet (Σ = {0,1}) or, in bioinformatics, a DNA alphabet (Σ = {A,C,G,T}).
In practice, which string-search algorithms are feasible may be affected by the string encoding. In particular, if a variable-width encoding is in use, then it may be slower to find the Nth character, perhaps requiring time proportional to N. This may significantly slow some search algorithms. One of many possible solutions is to search for the sequence of code units instead, but doing so may produce false matches unless the encoding is specifically designed to avoid it.
== Overview ==
The most basic case of string searching involves one (often very long) string, sometimes called the haystack, and one (often very short) string, sometimes called the needle. The goal is to find one or more occurrences of the needle within the haystack. For example, one might search for to within:
Some books are to be tasted, others to be swallowed, and some few to be chewed and digested.
One might request the first occurrence of "to", which is the fourth word; or all occurrences, of which there are 3; or the last, which is the fifth word from the end.
Very commonly, however, various constraints are added. For example, one might want to match the "needle" only where it consists of one (or more) complete words—perhaps defined as not having other letters immediately adjacent on either side. In that case a search for "hew" or "low" should fail for the example sentence above, even though those literal strings do occur.
Another common example involves "normalization". For many purposes, a search for a phrase such as "to be" should succeed even in places where there is something else intervening between the "to" and the "be":
More than one space
Other "whitespace" characters such as tabs, non-breaking spaces, line-breaks, etc.
Less commonly, a hyphen or soft hyphen
In structured texts, tags or even arbitrarily large but "parenthetical" things such as footnotes, list-numbers or other markers, embedded images, and so on.
Many symbol systems include characters that are synonymous (at least for some purposes):
Latin-based alphabets distinguish lower-case from upper-case, but for many purposes string search is expected to ignore the distinction.
Many languages include ligatures, where one composite character is equivalent to two or more other characters.
Many writing systems involve diacritical marks such as accents or vowel points, which may vary in their usage, or be of varying importance in matching.
DNA sequences can involve non-coding segments which may be ignored for some purposes, or polymorphisms that lead to no change in the encoded proteins, which may not count as a true difference for some other purposes.
Some languages have rules where a different character or form of character must be used at the start, middle, or end of words.
Finally, for strings that represent natural language, aspects of the language itself become involved. For example, one might wish to find all occurrences of a "word" despite it having alternate spellings, prefixes or suffixes, etc.
Another more complex type of search is regular expression searching, where the user constructs a pattern of characters or other symbols, and any match to the pattern should fulfill the search. For example, to catch both the American English word "color" and the British equivalent "colour", instead of searching for two different literal strings, one might use a regular expression such as:
colou?r
where the "?" conventionally makes the preceding character ("u") optional.
This article mainly discusses algorithms for the simpler kinds of string searching.
A similar problem introduced in the field of bioinformatics and genomics is the maximal exact matching (MEM). Given two strings, MEMs are common substrings that cannot be extended left or right without causing a mismatch.
== Examples of search algorithms ==
=== Naive string search ===
A simple and inefficient way to see where one string occurs inside another is to check at each index, one by one. First, we see if there is a copy of the needle starting at the first character of the haystack; if not, we look to see if there's a copy of the needle starting at the second character of the haystack, and so forth. In the normal case, we only have to look at one or two characters for each wrong position to see that it is a wrong position, so in the average case, this takes O(n + m) steps, where n is the length of the haystack and m is the length of the needle; but in the worst case, searching for a string like "aaaab" in a string like "aaaaaaaaab", it takes O(nm) steps.
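The naive method can be sketched in a few lines of Python:

```python
def naive_search(haystack, needle):
    """Return the index of the first occurrence of needle, or -1; O(nm) worst case."""
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):            # try every candidate starting position
        if haystack[i:i + m] == needle:   # compare up to m characters at this position
            return i
    return -1

assert naive_search("aaaaaaaaab", "aaaab") == 5   # the worst-case example above
assert naive_search("hello", "lo") == 3
assert naive_search("hello", "xyz") == -1
```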
=== Finite-state-automaton-based search ===
In this approach, backtracking is avoided by constructing a deterministic finite automaton (DFA) that recognizes a stored search string. These are expensive to construct—they are usually created using the powerset construction—but are very quick to use. For example, the DFA shown to the right recognizes the word "MOMMY". This approach is frequently generalized in practice to search for arbitrary regular expressions.
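For a single fixed pattern, the DFA can be built directly without the powerset construction. The following sketch (illustrative; the table-building technique follows the classic KMP-style construction, not anything specific to the article) recognizes the word "MOMMY":

```python
def build_dfa(pattern):
    """Build a DFA where state j means 'the last j text characters
    match the first j pattern characters'."""
    alphabet = set(pattern)
    m = len(pattern)
    dfa = [dict.fromkeys(alphabet, 0) for _ in range(m)]
    dfa[0][pattern[0]] = 1
    x = 0  # restart state after a mismatch
    for j in range(1, m):
        for c in alphabet:
            dfa[j][c] = dfa[x][c]       # copy mismatch transitions
        dfa[j][pattern[j]] = j + 1      # match transition
        x = dfa[x][pattern[j]]          # advance the restart state
    return dfa

def dfa_search(text, pattern):
    """Return the index of the first occurrence, or -1.
    Each text character is examined exactly once: no backtracking."""
    dfa, state, m = build_dfa(pattern), 0, len(pattern)
    for i, c in enumerate(text):
        state = dfa[state].get(c, 0)    # characters outside the alphabet reset to 0
        if state == m:
            return i - m + 1
    return -1

print(dfa_search("MOM MOMMY", "MOMMY"))  # 4
```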
=== Stubs ===
Knuth–Morris–Pratt computes a DFA that recognizes inputs with the string to search for as a suffix. Boyer–Moore starts searching from the end of the needle, so it can usually jump ahead a whole needle-length at each step. Baeza–Yates keeps track of whether the previous j characters were a prefix of the search string, and is therefore adaptable to fuzzy string searching. The bitap algorithm is an application of Baeza–Yates' approach.
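A compact sketch of Knuth–Morris–Pratt (illustrative; the failure table plays the role of the DFA described above):

```python
def kmp_search(text, pattern):
    """Knuth–Morris–Pratt: after a mismatch, the failure table tells us how
    far the pattern can shift without re-examining text characters."""
    m = len(pattern)
    # fail[j] = length of the longest proper prefix of pattern[:j+1]
    # that is also a suffix of it
    fail = [0] * m
    k = 0
    for j in range(1, m):
        while k and pattern[j] != pattern[k]:
            k = fail[k - 1]
        if pattern[j] == pattern[k]:
            k += 1
        fail[j] = k
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == m:                      # full match ends at position i
            hits.append(i - m + 1)
            k = fail[k - 1]             # allow overlapping matches
    return hits

print(kmp_search("ababcababcabc", "abc"))  # [2, 7, 10]
```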
=== Index methods ===
Faster search algorithms preprocess the text. After building a substring index, for example a suffix tree or suffix array, the occurrences of a pattern can be found quickly. As an example, a suffix tree can be built in Θ(n) time, and all z occurrences of a pattern can be found in O(m) time under the assumption that the alphabet has a constant size and all inner nodes in the suffix tree know what leaves are underneath them. The latter can be accomplished by running a DFS algorithm from the root of the suffix tree.
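The same index idea can be sketched with a suffix array (illustrative; this uses a naive O(n² log n) construction rather than the linear-time constructions the literature provides, but the query-side binary search is representative):

```python
def build_suffix_array(text):
    """Naive construction: sort suffix start positions by the suffixes."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """Binary-search the sorted suffixes for the contiguous block
    of suffixes that start with pattern."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                       # lower bound
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                       # upper bound
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "banana"
sa = build_suffix_array(text)
print(find_occurrences(text, sa, "ana"))  # [1, 3]
```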
=== Other variants ===
Some search methods, for instance trigram search, are intended to find a "closeness" score between the search string and the text rather than a "match/non-match". These are sometimes called "fuzzy" searches.
== Classification of search algorithms ==
=== Classification by a number of patterns ===
The various algorithms can be classified by the number of patterns each uses.
==== Single-pattern algorithms ====
In the following compilation, m is the length of the pattern, n the length of the searchable text, and k = |Σ| is the size of the alphabet.
1.^ Asymptotic times are expressed using O, Ω, and Θ notation.
2.^ Used to implement the memmem and strstr search functions in the glibc and musl C standard libraries.
3.^ Can be extended to handle approximate string matching and (potentially-infinite) sets of patterns represented as regular languages.
The Boyer–Moore string-search algorithm has been the standard benchmark for the practical string-search literature.
==== Algorithms using a finite set of patterns ====
In the following compilation, M is the length of the longest pattern, m their total length, n the length of the searchable text, o the number of occurrences.
==== Algorithms using an infinite number of patterns ====
Naturally, the patterns cannot be enumerated finitely in this case. They are usually represented by a regular grammar or regular expression.
=== Classification by the use of preprocessing programs ===
Other classification approaches are possible. One of the most common uses preprocessing as the main criterion.
=== Classification by matching strategies ===
Another approach classifies the algorithms by their matching strategy:
Match the prefix first (Knuth–Morris–Pratt, Shift-And, Aho–Corasick)
Match the suffix first (Boyer–Moore and variants, Commentz-Walter)
Match the best factor first (BNDM, BOM, Set-BOM)
Other strategy (Naïve, Rabin–Karp, Vectorized)
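As one example from the last group, Rabin–Karp can be sketched as follows (illustrative; the base and modulus are assumed parameters, not prescribed by the algorithm):

```python
def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
    """Rabin–Karp: compare rolling hashes first, and compare
    characters only when the hashes collide."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)         # weight of the outgoing character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # Verify on a hash match to rule out (rare) collisions.
        if t_hash == p_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:                    # roll the window one character right
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))  # [0, 7]
```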
== See also ==
Sequence alignment
Graph matching
Pattern matching
Compressed pattern matching
Matching wildcards
Full-text search
== References ==
R. S. Boyer and J. S. Moore, A fast string searching algorithm, Comm. ACM 20(10), 762–772 (1977).
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. MIT Press and McGraw-Hill, 2009. ISBN 0-262-03293-7. Chapter 32: String Matching, pp. 985–1013.
== External links ==
Huge list of pattern matching links
Large (maintained) list of string-matching algorithms
NIST list of string-matching algorithms
StringSearch – high-performance pattern matching algorithms in Java – Implementations of many String-Matching-Algorithms in Java (BNDM, Boyer-Moore-Horspool, Boyer-Moore-Horspool-Raita, Shift-Or)
StringsAndChars – Implementations of many String-Matching-Algorithms (for single and multiple patterns) in Java
Exact String Matching Algorithms — Animation in Java, Detailed description and C implementation of many algorithms.
(PDF) Improved Single and Multiple Approximate String Matching Archived 2017-03-11 at the Wayback Machine
Kalign2: high-performance multiple alignment of protein and nucleotide sequences allowing external features
NyoTengu – high-performance pattern matching algorithm in C – Implementations of Vector and Scalar String-Matching-Algorithms in C
Algorithm characterizations are attempts to formalize the word algorithm. Algorithm does not have a generally accepted formal definition. Researchers are actively working on this problem. This article will present some of the "characterizations" of the notion of "algorithm" in more detail.
== The problem of definition ==
Over the last 200 years, the definition of the algorithm has become more complicated and detailed as researchers have tried to pin down the term. Indeed, there may be more than one type of "algorithm". But most agree that algorithm has something to do with defining generalized processes for the creation of "output" integers from other "input" integers – "input parameters" arbitrary and infinite in extent, or limited in extent but still variable—by the manipulation of distinguishable symbols (counting numbers) with finite collections of rules that a person can perform with paper and pencil.
The most common number-manipulation schemes—both in formal mathematics and in routine life—are: (1) the recursive functions calculated by a person with paper and pencil, and (2) the Turing machine or its Turing equivalents—the primitive register-machine or "counter-machine" model, the random-access machine model (RAM), the random-access stored-program machine model (RASP) and its functional equivalent "the computer".
When we are doing "arithmetic" we are really calculating by the use of "recursive functions" in the shorthand algorithms we learned in grade school, for example, adding and subtracting.
The proofs that every "recursive function" we can calculate by hand we can compute by machine and vice versa—note the usage of the words calculate versus compute—is remarkable. But this equivalence together with the thesis (unproven assertion) that this includes every calculation/computation indicates why so much emphasis has been placed upon the use of Turing-equivalent machines in the definition of specific algorithms, and why the definition of "algorithm" itself often refers back to "the Turing machine". This is discussed in more detail under Stephen Kleene's characterization.
The following are summaries of the more famous characterizations (Kleene, Markov, Knuth) together with those that introduce novel elements—elements that further expand the definition or contribute to a more precise definition.
A mathematical problem and its result can be considered as two points in a space, and the solution consists of a sequence of steps or a path linking them. The quality of the solution is a function of the path. There might be more than one attribute defined for the path, e.g. length, complexity of shape, ease of generalization, difficulty, and so on.
== Chomsky hierarchy ==
There is more consensus on the "characterization" of the notion of "simple algorithm".
All algorithms need to be specified in a formal language, and the "simplicity notion" arises from the simplicity of the language. The Chomsky (1956) hierarchy is a containment hierarchy of classes of formal grammars that generate formal languages. It is used for classifying programming languages and abstract machines.
From the Chomsky hierarchy perspective, if the algorithm can be specified in a simpler language (than unrestricted), it can be characterized by that kind of language; otherwise it is a typical "unrestricted algorithm".
Examples: a "general purpose" macro language, like M4, is unrestricted (Turing complete), but the C preprocessor macro language is not, so any algorithm expressed in the C preprocessor is a "simple algorithm".
See also Relationships between complexity classes.
== Features of a good algorithm ==
The following are desirable features of a well-defined algorithm, as discussed in Schneider and Gersting (1995):
Unambiguous Operations: an algorithm must have specific, outlined steps. The steps should be exact enough to precisely specify what to do at each step.
Well-Ordered: The exact order of operations performed in an algorithm should be concretely defined.
Feasibility: All steps of an algorithm should be possible (also known as effectively computable).
Input: an algorithm should be able to accept a well-defined set of inputs.
Output: an algorithm should produce some result as an output, so that its correctness can be reasoned about.
Finiteness: an algorithm should terminate after a finite number of instructions.
Properties of specific algorithms that may be desirable include space and time efficiency, generality (i.e. being able to handle many inputs), or determinism.
== 1881 John Venn's negative reaction to W. Stanley Jevons's Logical Machine of 1870 ==
In early 1870 W. Stanley Jevons presented a "Logical Machine" (Jevons 1880:200) for analyzing a syllogism or other logical form e.g. an argument reduced to a Boolean equation. By means of what Couturat (1914) called a "sort of logical piano [,] ... the equalities which represent the premises ... are "played" on a keyboard like that of a typewriter. ... When all the premises have been "played", the panel shows only those constituents whose sum is equal to 1, that is, ... its logical whole. This mechanical method has the advantage over VENN's geometrical method..." (Couturat 1914:75).
For his part John Venn, a logician contemporary to Jevons, was less than thrilled, opining that "it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines" (italics added, Venn 1881:120). But of historical use to the developing notion of "algorithm" is his explanation for his negative reaction with respect to a machine that "may subserve a really valuable purpose by enabling us to avoid otherwise inevitable labor":
(1) "There is, first, the statement of our data in accurate logical language",
(2) "Then secondly, we have to throw these statements into a form fit for the engine to work with – in this case the reduction of each proposition to its elementary denials",
(3) "Thirdly, there is the combination or further treatment of our premises after such reduction,"
(4) "Finally, the results have to be interpreted or read off. This last generally gives rise to much opening for skill and sagacity."
He concludes that "I cannot see that any machine can hope to help us except in the third of these steps; so that it seems very doubtful whether any thing of this sort really deserves the name of a logical engine."(Venn 1881:119–121).
== 1943, 1952 Stephen Kleene's characterization ==
This section is longer and more detailed than the others because of its importance to the topic: Kleene was the first to propose that all calculations/computations—of every sort, the totality of—can equivalently be (i) calculated by use of five "primitive recursive operators" plus one special operator called the mu-operator, or be (ii) computed by the actions of a Turing machine or an equivalent model.
Furthermore, he opined that either of these would stand as a definition of algorithm.
A reader first confronting the words that follow may well be confused, so a brief explanation is in order. Calculation means done by hand, computation means done by Turing machine (or equivalent). (Sometimes an author slips and interchanges the words). A "function" can be thought of as an "input-output box" into which a person puts natural numbers called "arguments" or "parameters" (but only the counting numbers including 0—the nonnegative integers) and gets out a single nonnegative integer (conventionally called "the answer"). Think of the "function-box" as a little man either calculating by hand using "general recursion" or computing by Turing machine (or an equivalent machine).
"Effectively calculable/computable" is more generic and means "calculable/computable by some procedure, method, technique ... whatever...". "General recursive" was Kleene's way of writing what today is called just "recursion"; however, "primitive recursion"—calculation by use of the five recursive operators—is a lesser form of recursion that lacks access to the sixth, additional, mu-operator that is needed only in rare instances. Thus most of life goes on requiring only the "primitive recursive functions."
=== 1943 "Thesis I", 1952 "Church's Thesis" ===
In 1943 Kleene proposed what has come to be known as Church's thesis:
"Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive" (First stated by Kleene in 1943 (reprinted page 274 in Davis, ed. The Undecidable; appears also verbatim in Kleene (1952) p.300)
In a nutshell: to calculate any function the only operations a person needs (technically, formally) are the 6 primitive operators of "general" recursion (nowadays called the operators of the mu recursive functions).
Kleene's first statement of this was under the section title "12. Algorithmic theories". He would later amplify it in his text (1952) as follows:
"Thesis I and its converse provide the exact definition of the notion of a calculation (decision) procedure or algorithm, for the case of a function (predicate) of natural numbers" (p. 301, boldface added for emphasis)
(His use of the word "decision" and "predicate" extends the notion of calculability to the more general manipulation of symbols such as occurs in mathematical "proofs".)
This is not as daunting as it may sound – "general" recursion is just a way of making our everyday arithmetic operations from the five "operators" of the primitive recursive functions together with the additional mu-operator as needed. Indeed, Kleene gives 13 examples of primitive recursive functions and Boolos–Burgess–Jeffrey add some more, most of which will be familiar to the reader—e.g. addition, subtraction, multiplication and division, exponentiation, the CASE function, concatenation, etc., etc.; for a list see Some common primitive recursive functions.
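As an illustrative sketch (mine, not Kleene's), the primitive recursion operator and the derivation of addition and multiplication from the zero and successor functions can be written out directly:

```python
def zero(_n):
    """Z(n) = 0, one of the basic primitive recursive functions."""
    return 0

def succ(n):
    """S(n) = n + 1, the successor function."""
    return n + 1

def prim_rec(base, step):
    """Primitive recursion operator:
       f(x, 0)     = base(x)
       f(x, y + 1) = step(x, y, f(x, y))"""
    def f(x, y):
        acc = base(x)
        for i in range(y):
            acc = step(x, i, acc)
        return acc
    return f

# add(x, 0) = x;  add(x, y+1) = S(add(x, y))
add = prim_rec(lambda x: x, lambda x, i, acc: succ(acc))
# mul(x, 0) = 0;  mul(x, y+1) = add(x, mul(x, y))
mul = prim_rec(zero, lambda x, i, acc: add(x, acc))

print(add(3, 4), mul(3, 4))  # 7 12
```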
Why general-recursive functions rather than primitive-recursive functions?
Kleene et al. (cf §55 General recursive functions p. 270 in Kleene 1952) had to add a sixth recursion operator called the minimization-operator (written as μ-operator or mu-operator) because Ackermann (1925) produced a hugely growing function—the Ackermann function—and Rózsa Péter (1935) produced a general method of creating recursive functions using Cantor's diagonal argument, neither of which could be described by the 5 primitive-recursive-function operators. With respect to the Ackermann function:
"...in a certain sense, the length of the computation algorithm of a recursive function which is not also primitive recursive grows faster with the arguments than the value of any primitive recursive function" (Kleene (1935) reprinted p. 246 in The Undecidable, plus footnote 13 with regards to the need for an additional operator, boldface added).
But the need for the mu-operator is a rarity. As indicated above by Kleene's list of common calculations, a person goes about their life happily computing primitive recursive functions without fear of encountering the monster numbers created by Ackermann's function (e.g. super-exponentiation).
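The Ackermann function itself is short to state, which makes its explosive growth all the more striking (a standard formulation of the Ackermann–Péter variant, shown here as an illustration):

```python
def ackermann(m, n):
    """Ackermann-Peter function: total and computable,
    but provably not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Values explode: A(3, n) = 2^(n+3) - 3, and A(4, 2) already has 19,729 digits.
print([ackermann(m, 2) for m in range(4)])  # [3, 4, 7, 29]
```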
=== 1952 "Turing's thesis" ===
Turing's Thesis hypothesizes the computability of "all computable functions" by the Turing machine model and its equivalents.
To do this in an effective manner, Kleene extended the notion of "computable" by casting the net wider—by allowing into the notion of "functions" both "total functions" and "partial functions". A total function is one that is defined for all natural numbers (positive integers including 0). A partial function is defined for some natural numbers but not all—the specification of "some" has to come "up front". Thus the inclusion of "partial function" extends the notion of function to "less-perfect" functions. Total- and partial-functions may either be calculated by hand or computed by machine.
Examples:
"Functions": include "common subtraction m − n" and "addition m + n"
"Partial function": "Common subtraction" m − n is undefined when only natural numbers (positive integers and zero) are allowed as input – e.g. 6 − 7 is undefined
Total function: "Addition" m + n is defined for all positive integers and zero.
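The distinction can be made concrete (an illustrative sketch; raising an exception here merely models "undefined", and the total "proper subtraction" variant anticipates the x ∸ y function listed below):

```python
def common_subtract(m, n):
    """Partial function on the natural numbers: undefined when n > m."""
    if n > m:
        raise ValueError("%d - %d is undefined over the natural numbers" % (m, n))
    return m - n

def proper_subtract(m, n):
    """Total variant ('monus'): truncates at zero instead of being undefined."""
    return m - n if m >= n else 0

print(common_subtract(7, 6))   # 1
print(proper_subtract(6, 7))   # 0
```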
We now observe Kleene's definition of "computable" in a formal sense:
Definition: "A partial function φ is computable, if there is a machine M which computes it" (Kleene (1952) p. 360)
"Definition 2.5. An n-ary function f(x1, ..., xn) is partially computable if there exists a Turing machine Z such that
f(x1, ..., xn) = ΨZ(n)(x1, ..., xn)
In this case we say that [machine] Z computes f. If, in addition, f(x1, ..., xn) is a total function, then it is called computable" (Davis (1958) p. 10)
Thus we have arrived at Turing's Thesis:
"Every function which would naturally be regarded as computable is computable ... by one of his machines..." (Kleene (1952) p.376)
Although Kleene did not give examples of "computable functions" others have. For example, Davis (1958) gives Turing tables for the Constant, Successor and Identity functions, three of the five operators of the primitive recursive functions:
Computable by Turing machine:
Addition (also is the Constant function if one operand is 0)
Increment (Successor function)
Common subtraction (defined only if x ≥ y). Thus "x − y" is an example of a partially computable function.
Proper subtraction x ∸ y (as defined above)
The identity function: for each i, a function UZn = ΨZn(x1, ..., xn) exists that plucks xi out of the set of arguments (x1, ..., xn)
Multiplication
Boolos–Burgess–Jeffrey (2002) give the following as prose descriptions of Turing machines for:
Doubling: 2p
Parity
Addition
Multiplication
With regards to the counter machine, an abstract machine model equivalent to the Turing machine:
Examples Computable by Abacus machine (cf Boolos–Burgess–Jeffrey (2002))
Addition
Multiplication
Exponentiation: (a flow-chart/block diagram description of the algorithm)
Demonstrations of computability by abacus machine (Boolos–Burgess–Jeffrey (2002)) and by counter machine (Minsky 1967):
The six recursive function operators:
Zero function
Successor function
Identity function
Composition function
Primitive recursion (induction)
Minimization
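A minimal sketch of the last of these, the minimization (mu) operator (illustrative only):

```python
def mu(predicate):
    """Minimization (mu) operator: return the smallest natural number y
    with predicate(y) true. Diverges (loops forever) if no such y exists,
    which is exactly what makes general recursive functions partial."""
    y = 0
    while not predicate(y):
        y += 1
    return y

# Integer square root of 10: the least y with (y+1)^2 > 10
print(mu(lambda y: (y + 1) ** 2 > 10))  # 3
```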
The fact that the abacus/counter-machine models can simulate the recursive functions provides the proof that if a function is "machine computable" then it is "hand-calculable by partial recursion". Kleene's Theorem XXIX:
"Theorem XXIX: "Every computable partial function φ is partial recursive..." (italics in original, p. 374).
The converse appears as his Theorem XXVIII. Together these form the proof of their equivalence, Kleene's Theorem XXX.
=== 1952 Church–Turing Thesis ===
With his Theorem XXX Kleene proves the equivalence of the two "Theses"—the Church Thesis and the Turing Thesis. (Kleene could only hypothesize (conjecture) the truth of both theses; he did not prove them):
THEOREM XXX: The following classes of partial functions ... have the same members: (a) the partial recursive functions, (b) the computable functions ..."(p. 376)
Definition of "partial recursive function": "A partial function φ is partial recursive in [the partial functions] ψ1, ... ψn if there is a system of equations E which defines φ recursively from [partial functions] ψ1, ... ψn" (p. 326)
Thus by Kleene's Theorem XXX: either method of making numbers from input-numbers—recursive functions calculated by hand or computed by Turing-machine or equivalent—results in an "effectively calculable/computable function". If we accept the hypothesis that every calculation/computation can be done by either method equivalently we have accepted both Kleene's Theorem XXX (the equivalence) and the Church–Turing Thesis (the hypothesis of "every").
=== A note of dissent: "There's more to algorithm..." Blass and Gurevich (2003) ===
The notion of separating out Church's and Turing's theses from the "Church–Turing thesis" appears not only in Kleene (1952) but in Blass-Gurevich (2003) as well. But while there are agreements, there are disagreements too:
"...we disagree with Kleene that the notion of algorithm is that well understood. In fact the notion of algorithm is richer these days than it was in Turing's days. And there are algorithms, of modern and classical varieties, not covered directly by Turing's analysis, for example, algorithms that interact with their environments, algorithms whose inputs are abstract structures, and geometric or, more generally, non-discrete algorithms" (Blass-Gurevich (2003) p. 8, boldface added)
== 1954 A. A. Markov Jr.'s characterization ==
Andrey Markov Jr. (1954) provided the following definition of algorithm:
"1. In mathematics, "algorithm" is commonly understood to be an exact prescription, defining a computational process, leading from various initial data to the desired result...."
"The following three features are characteristic of algorithms and determine their role in mathematics:
"a) the precision of the prescription, leaving no place to arbitrariness, and its universal comprehensibility -- the definiteness of the algorithm;
"b) the possibility of starting out with initial data, which may vary within given limits -- the generality of the algorithm;
"c) the orientation of the algorithm toward obtaining some desired result, which is indeed obtained in the end with proper initial data -- the conclusiveness of the algorithm." (p.1)
He admitted that this definition "does not pretend to mathematical precision" (p. 1). His 1954 monograph was his attempt to define algorithm more accurately; he saw his resulting definition—his "normal" algorithm—as "equivalent to the concept of a recursive function" (p. 3). His definition included four major components (Chapter II.3 pp. 63ff):
"1. Separate elementary steps, each of which will be performed according to one of [the substitution] rules... [rules given at the outset]
"2. ... steps of local nature ... [Thus the algorithm won't change more than a certain number of symbols to the left or right of the observed word/symbol]
"3. Rules for the substitution formulas ... [he called the list of these "the scheme" of the algorithm]
"4. ...a means to distinguish a "concluding substitution" [i.e. a distinguishable "terminal/final" state or states]
In his Introduction Markov observed that "the entire significance for mathematics" of efforts to define algorithm more precisely would be "in connection with the problem of a constructive foundation for mathematics" (p. 2). Ian Stewart (cf Encyclopædia Britannica) shares a similar belief: "...constructive analysis is very much in the same algorithmic spirit as computer science...". For more see constructive mathematics and Intuitionism.
Distinguishability and Locality: Both notions first appeared with Turing (1936–1937) --
"The new observed squares must be immediately recognizable by the computer [sic: a computer was a person in 1936]. I think it reasonable to suppose that they can only be squares whose distance from the closest of the immediately observed squares does not exceed a certain fixed amount. Let us say that each of the new observed squares is within L squares of one of the previously observed squares." (Turing (1936) p. 136 in Davis ed. Undecidable)
Locality appears prominently in the work of Gurevich and Gandy (1980) (whom Gurevich cites). Gandy's "Fourth Principle for Mechanisms" is "The Principle of Local Causality":
"We now come to the most important of our principles. In Turing's analysis the requirement that the action depend only on a bounded portion of the record was based on a human limitation. We replace this by a physical limitation which we call the principle of local causation. Its justification lies in the finite velocity of propagation of effects and signals: contemporary physics rejects the possibility of instantaneous action at a distance." (Gandy (1980) p. 135 in J. Barwise et al.)
== 1936, 1963, 1964 Gödel's characterization ==
1936: A rather famous quote from Kurt Gödel appears in a "Remark added in proof [of the original German publication] in his paper "On the Length of Proofs" translated by Martin Davis appearing on pp. 82–83 of The Undecidable. A number of authors—Kleene, Gurevich, Gandy etc. -- have quoted the following:
"Thus, the concept of "computable" is in a certain definite sense "absolute," while practically all other familiar metamathematical concepts (e.g. provable, definable, etc.) depend quite essentially on the system with respect to which they are defined." (p. 83)
1963: In a "Note" dated 28 August 1963 added to his famous paper On Formally Undecidable Propositions (1931) Gödel states (in a footnote) his belief that "formal systems" have "the characteristic property that reasoning in them, in principle, can be completely replaced by mechanical devices" (p. 616 in van Heijenoort). ". . . due to "A. M. Turing's work a precise and unquestionably adequate definition of the general notion of formal system can now be given [and] a completely general version of Theorems VI and XI is now possible." (p. 616). In a 1964 note to another work he expresses the same opinion more strongly and in more detail.
1964: In a Postscriptum, dated 1964, to a paper presented to the Institute for Advanced Study in spring 1934, Gödel amplified his conviction that "formal systems" are those that can be mechanized:
"In consequence of later advances, in particular of the fact that, due to A. M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given . . . Turing's work gives an analysis of the concept of "mechanical procedure" (alias "algorithm" or "computational procedure" or "finite combinatorial procedure"). This concept is shown to be equivalent with that of a "Turing machine".* A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas . . . ." (p. 72 in Martin Davis ed. The Undecidable: "Postscriptum" to "On Undecidable Propositions of Formal Mathematical Systems" appearing on p. 39, loc. cit.)
The * indicates a footnote in which Gödel cites the papers by Alan Turing (1937) and Emil Post (1936) and then goes on to make the following intriguing statement:
"As for previous equivalent definitions of computability, which, however, are much less suitable for our purpose, see Alonzo Church, Am. J. Math., vol. 58 (1936)" [appearing in The Undecidable pp. 100–102].
Church's definitions encompass so-called "recursion" and the "lambda calculus" (i.e. the λ-definable functions). His footnote 18 says that he discussed the relationship of "effective calculability" and "recursiveness" with Gödel but that he had independently questioned "effective calculability" and "λ-definability":
"We now define the notion . . . of an effectively calculable function of positive integers by identifying it with the notion of a recursive function of positive integers18 (or of a λ-definable function of positive integers).
"It has already been pointed out that, for every function of positive integers which is effectively calculable in the sense just defined, there exists an algorithm for the calculation of its value.
"Conversely it is true . . ." (p. 100, The Undecidable).
It would appear from this, and the following, that as far as Gödel was concerned, the Turing machine was sufficient and the lambda calculus was "much less suitable." He goes on to make the point that, with regards to limitations on human reason, the jury is still out:
("Note that the question of whether there exist finite non-mechanical procedures** not equivalent with any algorithm, has nothing whatsoever to do with the adequacy of the definition of "formal system" and of "mechanical procedure.") (p. 72, loc. cit.)
"(For theories and procedures in the more general sense indicated in footnote ** the situation may be different. Note that the results mentioned in the postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics.) (p. 73 loc. cit.)
Footnote **: "I.e., such as involve the use of abstract terms on the basis of their meaning. See my paper in Dial. 12(1958), p. 280." (this footnote appears on p. 72, loc. cit).
== 1967 Minsky's characterization ==
Minsky (1967) baldly asserts that an algorithm is "an effective procedure" and declines to use the word "algorithm" further in his text; in fact his index makes it clear what he feels about it: "Algorithm, synonym for Effective procedure" (p. 311):
"We will use the latter term [an effective procedure] in the sequel. The terms are roughly synonymous, but there are a number of shades of meaning used in different contexts, especially for 'algorithm'" (italics in original, p. 105)
Other writers (see Knuth below) use the word "effective procedure". This leads one to wonder: What is Minsky's notion of "an effective procedure"? He starts off with:
"...a set of rules which tell us, from moment to moment, precisely how to behave" (p. 106)
But he recognizes that this is subject to a criticism:
"... the criticism that the interpretation of the rules is left to depend on some person or agent" (p. 106)
His refinement? To "specify, along with the statement of the rules, the details of the mechanism that is to interpret them". To avoid the "cumbersome" process of "having to do this over again for each individual procedure" he hopes to identify a "reasonably uniform family of rule-obeying mechanisms". His "formulation":
"(1) a language in which sets of behavioral rules are to be expressed, and
"(2) a single machine which can interpret statements in the language and thus carry out the steps of each specified process." (italics in original, all quotes this para. p. 107)
In the end, though, he still worries that "there remains a subjective aspect to the matter. Different people may not agree on whether a certain procedure should be called effective" (p. 107)
But Minsky is undeterred. He immediately introduces "Turing's Analysis of Computation Process" (his chapter 5.2). He quotes what he calls "Turing's thesis"
"Any process which could naturally be called an effective procedure can be realized by a Turing machine" (p. 108. (Minsky comments that in a more general form this is called "Church's thesis").
After an analysis of "Turing's Argument" (his chapter 5.3)
he observes that "equivalence of many intuitive formulations" of Turing, Church, Kleene, Post, and Smullyan "...leads us to suppose that there is really here an 'objective' or 'absolute' notion. As Rogers [1959] put it:
"In this sense, the notion of effectively computable function is one of the few 'absolute' concepts produced by modern work in the foundations of mathematics'" (Minsky p. 111 quoting Rogers, Hartley Jr (1959) The present theory of Turing machine computability, J. SIAM 7, 114-130.)
== 1967 Rogers' characterization ==
In his 1967 Theory of Recursive Functions and Effective Computability, Hartley Rogers characterizes "algorithm" roughly as "a clerical (i.e., deterministic, bookkeeping) procedure . . . applied to . . . symbolic inputs and which will eventually yield, for each such input, a corresponding symbolic output" (p. 1). He then goes on to describe the notion "in approximate and intuitive terms" as having 10 "features", 5 of which he asserts that "virtually all mathematicians would agree [to]" (p. 2). The remaining 5 he asserts "are less obvious than *1 to *5 and about which we might find less general agreement" (p. 3).
The 5 "obvious" are:
1 An algorithm is a set of instructions of finite size,
2 There is a capable computing agent,
3 "There are facilities for making, storing, and retrieving steps in a computation"
4 Given #1 and #2 the agent computes in "discrete stepwise fashion, without use of continuous methods or analogue devices",
5 The computing agent carries the computation forward "without resort to random methods or devices, e.g., dice" (in a footnote Rogers wonders if #4 and #5 are really the same)
The remaining 5 that he opens to debate, are:
6 No fixed bound on the size of the inputs,
7 No fixed bound on the size of the set of instructions,
8 No fixed bound on the amount of memory storage available,
9 A fixed finite bound on the capacity or ability of the computing agent (Rogers illustrates with example simple mechanisms similar to a Post–Turing machine or a counter machine),
10 A bound on the length of the computation -- "should we have some idea, 'ahead of time', how long the computation will take?" (p. 5). Rogers requires "only that a computation terminate after some finite number of steps; we do not insist on an a priori ability to estimate this number." (p. 5).
== 1968, 1973 Knuth's characterization ==
Knuth (1968, 1973) has given a list of five properties that are widely accepted as requirements for an algorithm:
Finiteness: "An algorithm must always terminate after a finite number of steps ... a very finite number, a reasonable number"
Definiteness: "Each step of an algorithm must be precisely defined; the actions to be carried out must be rigorously and unambiguously specified for each case"
Input: "...quantities which are given to it initially before the algorithm begins. These inputs are taken from specified sets of objects"
Output: "...quantities which have a specified relation to the inputs"
Effectiveness: "... all of the operations to be performed in the algorithm must be sufficiently basic that they can in principle be done exactly and in a finite length of time by a man using paper and pencil"
Knuth offers as an example the Euclidean algorithm for determining the greatest common divisor of two natural numbers (cf. Knuth Vol. 1 p. 2).
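Knuth's five properties can all be pointed at in the Euclidean algorithm he cites. The following minimal Python sketch is ours, not Knuth's (he works in his MIX machine language); the comments tie each property to a line of code:

```python
def euclid_gcd(m, n):
    """Greatest common divisor of two positive integers (cf. Knuth Vol. 1 p. 2)."""
    # Input: quantities drawn from a specified set of objects (positive integers).
    assert m > 0 and n > 0
    while n != 0:
        # Definiteness/Effectiveness: each step is a basic, exactly specified
        # operation (a division with remainder one could do with paper and pencil).
        m, n = n, m % n
    # Finiteness: the remainder strictly decreases, so the loop terminates.
    # Output: a quantity standing in a specified relation to the inputs.
    return m

print(euclid_gcd(119, 544))  # 17
```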
Knuth admits that, while his description of an algorithm may be intuitively clear, it lacks formal rigor, since it is not exactly clear what "precisely defined" means, or "rigorously and unambiguously specified" means, or "sufficiently basic", and so forth. He makes an effort in this direction in his first volume where he defines in detail what he calls the "machine language" for his "mythical MIX...the world's first polyunsaturated computer" (pp. 120ff). Many of the algorithms in his books are written in the MIX language. He also uses tree diagrams, flow diagrams and state diagrams.
"Goodness" of an algorithm, "best" algorithms: Knuth states that "In practice, we not only want algorithms, we want good algorithms...." He suggests that some criteria of an algorithm's goodness are the number of steps to perform the algorithm, its "adaptability to computers, its simplicity and elegance, etc." Given a number of algorithms to perform the same computation, which one is "best"? He calls this sort of inquiry "algorithmic analysis: given an algorithm, to determine its performance characteristics" (all quotes this paragraph: Knuth Vol. 1 p. 7)
== 1972 Stone's characterization ==
Stone (1972) and Knuth (1968, 1973) were professors at Stanford University at the same time so it is not surprising if there are similarities in their definitions (boldface added for emphasis):
"To summarize ... we define an algorithm to be a set of rules that precisely defines a sequence of operations such that each rule is effective and definite and such that the sequence terminates in a finite time." (boldface added, p. 8)
Stone is noteworthy because of his detailed discussion of what constitutes an “effective” rule – his robot, or person-acting-as-robot, must have some information and abilities within them, and if not the information and the ability must be provided in "the algorithm":
"For people to follow the rules of an algorithm, the rules must be formulated so that they can be followed in a robot-like manner, that is, without the need for thought... however, if the instructions [to solve the quadratic equation, his example] are to be obeyed by someone who knows how to perform arithmetic operations but does not know how to extract a square root, then we must also provide a set of rules for extracting a square root in order to satisfy the definition of algorithm" (p. 4-5)
Furthermore, "...not all instructions are acceptable, because they may require the robot to have abilities beyond those that we consider reasonable." He gives the example of a robot confronted with the question "Is Henry VIII a King of England?" and instructed to print 1 if yes and 0 if no, but the robot has not been previously provided with this information. And worse, if the robot is asked whether Aristotle was a King of England and the robot had been provided with only five names, it would not know how to answer. Thus:
“an intuitive definition of an acceptable sequence of instructions is one in which each instruction is precisely defined so that the robot is guaranteed to be able to obey it” (p. 6)
After providing us with his definition, Stone introduces the Turing machine model and states that the set of five-tuples that are the machine's instructions are "an algorithm ... known as a Turing machine program" (p. 9). Immediately thereafter he goes on to say that a "computation of a Turing machine is described by stating:
"1. The tape alphabet
"2. The form in which the [input] parameters are presented on the tape
"3. The initial state of the Turing machine
"4. The form in which answers [output] will be represented on the tape when the Turing machine halts
"5. The machine program" (italics added, p. 10)
This precise prescription of what is required for "a computation" is in the spirit of what will follow in the work of Blass and Gurevich.
== 1995 Soare's characterization ==
"A computation is a process whereby we proceed from initially given objects, called inputs, according to a fixed set of rules, called a program, procedure, or algorithm, through a series of steps and arrive at the end of these steps with a final result, called the output. The algorithm, as a set of rules proceeding from inputs to output, must be precise and definite with each successive step clearly determined. The concept of computability concerns those objects which may be specified in principle by computations . . ."(italics in original, boldface added p. 3)
== 2000 Berlinski's characterization ==
While a student at Princeton in the mid-1960s, David Berlinski was a student of Alonzo Church (cf p. 160). His year-2000 book The Advent of the Algorithm: The 300-year Journey from an Idea to the Computer contains the following definition of algorithm:
"In the logician's voice:
"an algorithm is
a finite procedure,
written in a fixed symbolic vocabulary,
governed by precise instructions,
moving in discrete steps, 1, 2, 3, . . .,
whose execution requires no insight, cleverness,
intuition, intelligence, or perspicuity,
and that sooner or later comes to an end." (boldface and italics in the original, p. xviii)
== 2000, 2002 Gurevich's characterization ==
A careful reading of Gurevich 2000 leads one to conclude (infer?) that he believes that "an algorithm" is actually "a Turing machine" or "a pointer machine" doing a computation. An "algorithm" is not just the symbol-table that guides the behavior of the machine, nor is it just one instance of a machine doing a computation given a particular set of input parameters, nor is it a suitably programmed machine with the power off; rather an algorithm is the machine actually doing any computation of which it is capable. Gurevich does not come right out and say this, so as worded above this conclusion (inference?) is certainly open to debate:
" . . . every algorithm can be simulated by a Turing machine . . . a program can be simulated and therefore given a precise meaning by a Turing machine." (p. 1)
"It is often thought that the problem of formalizing the notion of sequential algorithm was solved by Church [1936] and Turing [1936]. For example, according to Savage [1987], an algorithm is a computational process defined by a Turing machine. Church and Turing did not solve the problem of formalizing the notion of sequential algorithm. Instead they gave (different but equivalent) formalizations of the notion of computable function, and there is more to an algorithm than the function it computes." (italics added, p. 3)
"Of course, the notions of algorithm and computable function are intimately related: by definition, a computable function is a function computable by an algorithm. . . . (p. 4)
In Blass and Gurevich 2002 the authors invoke a dialog between "Quisani" ("Q") and "Authors" (A), using Yiannis Moshovakis as a foil, where they come right out and flatly state:
"A: To localize the disagreement, let's first mention two points of agreement. First, there are some things that are obviously algorithms by anyone's definition -- Turing machines, sequential-time ASMs [Abstract State Machines], and the like. . . .Second, at the other extreme are specifications that would not be regarded as algorithms under anyone's definition, since they give no indication of how to compute anything . . . The issue is how detailed the information has to be in order to count as an algorithm. . . . Moshovakis allows some things that we would call only declarative specifications, and he would probably use the word "implementation" for things that we call algorithms." (paragraphs joined for ease of readability, 2002:22)
This use of the word "implementation" cuts straight to the heart of the question. Early in the paper, Q states his reading of Moshovakis:
"...[H]e would probably think that your practical work [Gurevich works for Microsoft] forces you to think of implementations more than of algorithms. He is quite willing to identify implementations with machines, but he says that algorithms are something more general. What it boils down to is that you say an algorithm is a machine and Moschovakis says it is not." (2002:3)
But the authors waffle here, saying "[l]et's stick to 'algorithm' and 'machine'", and the reader is left, again, confused. We have to wait until Dershowitz and Gurevich 2007 to get the following footnote comment:
" . . . Nevertheless, if one accepts Moshovakis's point of view, then it is the "implementation" of algorithms that we have set out to characterize."(cf Footnote 9 2007:6)
== 2003 Blass and Gurevich's characterization ==
Blass and Gurevich describe their work as evolved from consideration of Turing machines and pointer machines, specifically Kolmogorov-Uspensky machines (KU machines), Schönhage Storage Modification Machines (SMM), and linking automata as defined by Knuth. The work of Gandy and Markov are also described as influential precursors.
Gurevich offers a 'strong' definition of an algorithm (boldface added):
"...Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine....In practice, it would be ridiculous...[Nevertheless,] [c]an one generalize Turing machines so that any algorithm, never mind how abstract, can be modeled by a generalized machine?...But suppose such generalized Turing machines exist. What would their states be?...a first-order structure ... a particular small instruction set suffices in all cases ... computation as an evolution of the state ... could be nondeterministic... can interact with their environment ... [could be] parallel and multi-agent ... [could have] dynamic semantics ... [the two underpinnings of their work are:] Turing's thesis ...[and] the notion of (first order) structure of [Tarski 1933]" (Gurevich 2000, p. 1-2)
The above phrase computation as an evolution of the state differs markedly from the definition of Knuth and Stone—the "algorithm" as a Turing machine program. Rather, it corresponds to what Turing called the complete configuration (cf Turing's definition in Undecidable, p. 118) -- and includes both the current instruction (state) and the status of the tape. [cf Kleene (1952) p. 375 where he shows an example of a tape with 6 symbols on it—all other squares are blank—and how to Gödelize its combined table-tape status].
In Algorithm examples we see the evolution of the state first-hand.
== 1995 – Daniel Dennett: evolution as an algorithmic process ==
Philosopher Daniel Dennett analyses the importance of evolution as an algorithmic process in his 1995 book Darwin's Dangerous Idea. Dennett identifies three key features of an algorithm:
Substrate neutrality: an algorithm relies on its logical structure. Thus, the particular form in which an algorithm is manifested is not important (Dennett's example is long division: it works equally well on paper, on parchment, on a computer screen, or using neon lights or in skywriting). (p. 51)
Underlying mindlessness: no matter how complicated the end-product of the algorithmic process may be, each step in the algorithm is sufficiently simple to be performed by a non-sentient, mechanical device. The algorithm does not require a "brain" to maintain or operate it. "The standard textbook analogy notes that algorithms are recipes of sorts, designed to be followed by novice cooks."(p. 51)
Guaranteed results: If the algorithm is executed correctly, it will always produce the same results. "An algorithm is a foolproof recipe." (p. 51)
It is on the basis of this analysis that Dennett concludes that "According to Darwin, evolution is an algorithmic process". (p. 60).
However, in the previous page he has gone out on a much-further limb. In the context of his chapter titled "Processes as Algorithms", he states:
"But then . . . are there any limits at all on what may be considered an algorithmic process? I guess the answer is NO; if you wanted to, you can treat any process at the abstract level as an algorithmic process. . . If what strikes you as puzzling is the uniformity of the [ocean's] sand grains or the strength of the [tempered-steel] blade, an algorithmic explanation is what will satisfy your curiosity -- and it will be the truth. . . .
"No matter how impressive the products of an algorithm, the underlying process always consists of nothing but a set of individualy [sic] mindless steps succeeding each other without the help of any intelligent supervision; they are 'automatic' by definition: the workings of an automaton." (p. 59)
It is unclear from the above whether Dennett is stating that the physical world by itself and without observers is intrinsically algorithmic (computational) or whether a symbol-processing observer is what is adding "meaning" to the observations.
== 2002 John Searle adds a clarifying caveat to Dennett's characterization ==
Daniel Dennett is a proponent of strong artificial intelligence: the idea that the logical structure of an algorithm is sufficient to explain mind. John Searle, the creator of the Chinese room thought experiment, claims that "syntax [that is, logical structure] is by itself not sufficient for semantic content [that is, meaning]" (Searle 2002, p. 16). In other words, the "meaning" of symbols is relative to the mind that is using them; an algorithm—a logical construct—by itself is insufficient for a mind.
Searle cautions those who claim that algorithmic (computational) processes are intrinsic to nature (for example, cosmologists, physicists, chemists, etc.):
Computation [...] is observer-relative, and this is because computation is defined in terms of symbol manipulation, but the notion of a 'symbol' is not a notion of physics or chemistry. Something is a symbol only if it is used, treated or regarded as a symbol. The Chinese room argument showed that semantics is not intrinsic to syntax. But what this shows is that syntax is not intrinsic to physics. [...] Something is a symbol only relative to some observer, user or agent who assigns a symbolic interpretation to it [...] you can assign a computational interpretation to anything. But if the question asks, "Is consciousness intrinsically computational?" the answer is: nothing is intrinsically computational [italics added for emphasis]. Computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon. This is an obvious point. I should have seen it ten years ago but I did not.
== 2002: Boolos-Burgess-Jeffrey specification of Turing machine calculation ==
For examples of this specification-method applied to the addition algorithm "m+n" see Algorithm examples.
An example in Boolos-Burgess-Jeffrey (2002) (pp. 31–32) demonstrates the precision required in a complete specification of an algorithm, in this case to add two numbers: m+n. It is similar to the Stone requirements above.
(i) They have discussed the role of "number format" in the computation and selected the "tally notation" to represent numbers:
"Certainly computation can be harder in practice with some notations than others... But... it is possible in principle to do in any other notation, simply by translating the data... For purposes of framing a rigorously defined notion of computability, it is convenient to use monadic or tally notation" (p. 25-26)
(ii) At the outset of their example they specify the machine to be used in the computation as a Turing machine. They have previously specified (p. 26) that the Turing-machine will be of the 4-tuple, rather than 5-tuple, variety. For more on this convention see Turing machine.
(iii) Previously the authors have specified that the tape-head's position will be indicated by a subscript to the right of the scanned symbol. For more on this convention see Turing machine. (In the following, boldface is added for emphasis):
"We have not given an official definition of what it is for a numerical function to be computable by a Turing machine, specifying how inputs or arguments are to be represented on the machine, and how outputs or values represented. Our specifications for a k-place function from positive integers to positive integers are as follows:
"(a) [Initial number format:] The arguments m1, ... mk, ... will be represented in monadic [unary] notation by blocks of those numbers of strokes, each block separated from the next by a single blank, on an otherwise blank tape.
Example: 3+2, 111B11
"(b) [Initial head location, initial state:] Initially, the machine will be scanning the leftmost 1 on the tape, and will be in its initial state, state 1.
Example: 3+2, 1₁11B11
"(c) [Successful computation -- number format at Halt:] If the function to be computed assigns a value n to the arguments that are represented initially on the tape, then the machine will eventually halt on a tape containing a block of strokes, and otherwise blank...
Example: 3+2, 11111
"(d) [Successful computation -- head location at Halt:] In this case [c] the machine will halt scanning the left-most 1 on the tape...
Example: 3+2, 1ₙ1111
"(e) [Unsuccessful computation -- failure to Halt or Halt with non-standard number format:] If the function that is to be computed assigns no value to the arguments that are represented initially on the tape, then the machine either will never halt, or will halt in some nonstandard configuration..."(ibid)
Example: Bₙ11111 or B11ₙ111 or B11111ₙ
This specification is incomplete: it also requires specifying where the instructions are to be placed and their format in the machine--
(iv) in the finite-state machine's TABLE or, in the case of a Universal Turing machine on the tape, and
(v) the Table of instructions in a specified format
This latter point is important. Boolos-Burgess-Jeffrey give a demonstration (p. 36) that the predictability of the entries in the table allows one to "shrink" the table by putting the entries in sequence and omitting the input state and the symbol. Indeed, the example Turing machine computation required only the 4 columns as shown in the table below (but note: these were presented to the machine in rows):
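The Boolos-Burgess-Jeffrey conventions (a)-(d) can be made concrete with a small simulator. The machine below is our own illustrative program for m+n in tally notation (written as ordinary quintuples for brevity, rather than their 4-tuple convention): it starts in state 1 scanning the leftmost stroke, fills the separating blank, erases one stroke at the right end, and halts scanning the leftmost 1 of the answer.

```python
# Quintuples: (state, scanned symbol) -> (symbol to write, head move, next state).
# State 0 is used as the halting state.
PROGRAM = {
    (1, '1'): ('1', +1, 1),   # run right across the first block of strokes
    (1, 'B'): ('1', +1, 2),   # fill the separating blank with a stroke
    (2, '1'): ('1', +1, 2),   # run right across the second block
    (2, 'B'): ('B', -1, 3),   # passed the right end; step back
    (3, '1'): ('B', -1, 4),   # erase the extra (rightmost) stroke
    (4, '1'): ('1', -1, 4),   # run left to the start of the answer
    (4, 'B'): ('B', +1, 0),   # halt scanning the leftmost 1 (condition (d))
}

def run(tape_string):
    """Simulate the machine on an input tape; unvisited cells are blank 'B'."""
    tape = dict(enumerate(tape_string))
    state, head = 1, 0
    while state != 0:
        symbol, move, state = PROGRAM[(state, tape.get(head, 'B'))]
        tape[head] = symbol
        head += move
    cells = ''.join(tape.get(i, 'B') for i in range(min(tape), max(tape) + 1))
    return cells.strip('B'), head

print(run('111B11'))  # ('11111', 0): 3 + 2 = 5, head on the leftmost stroke
```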
== 2006: Sipser's assertion and his three levels of description ==
For examples of this specification-method applied to the addition algorithm "m+n" see Algorithm examples.
Sipser begins by defining "algorithm" as follows:
"Informally speaking, an algorithm is a collection of simple instructions for carrying out some task. Commonplace in everyday life, algorithms sometimes are called procedures or recipes" (italics in original, p. 154)
"...our real focus from now on is on algorithms. That is, the Turing machine merely serves as a precise model for the definition of algorithm .... we need only to be comfortable enough with Turing machines to believe that they capture all algorithms" ( p. 156)
Does Sipser mean that "algorithm" is just "instructions" for a Turing machine, or is it the combination of "instructions + a (specific variety of) Turing machine"? For example, he defines the two standard variants (multi-tape and non-deterministic) of his particular variant (not the same as Turing's original) and goes on, in his Problems (pages 160–161), to describe four more variants (write-once, doubly infinite tape (i.e. left- and right-infinite), left reset, and "stay put instead of left"). In addition, he imposes some constraints. First, the input must be encoded as a string (p. 157); second, of numeric encodings in the context of complexity theory he says:
"But note that unary notation for encoding numbers (as in the number 17 encoded by the unary number 11111111111111111) isn't reasonable because it is exponentially larger than truly reasonable encodings, such as base k notation for any k ≥ 2." (p. 259)
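Sipser's exponential-blowup claim is easy to check directly: a unary encoding of n has length n, while a base-2 encoding has length about log₂ n. A small illustration:

```python
# Compare the length of the unary (tally) encoding of n with the
# length of its base-2 encoding, for a small and a large n.
for n in (17, 2 ** 20):
    print(n, len('1' * n), len(format(n, 'b')))
# 17      -> unary length 17,      binary length 5
# 1048576 -> unary length 1048576, binary length 21
```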
Van Emde Boas comments on a similar problem with respect to the random-access machine (RAM) abstract model of computation sometimes used in place of the Turing machine when doing "analysis of algorithms":
"The absence or presence of multiplicative and parallel bit manipulation operations is of relevance for the correct understanding of some results in the analysis of algorithms.
". . . [T]here hardly exists such a thing as an "innocent" extension of the standard RAM model in the uniform time measures; either one only has additive arithmetic or one might as well include all reasonable multiplicative and/or bitwise Boolean instructions on small operands." (Van Emde Boas, 1990:26)
With regard to a "description language" for algorithms Sipser finishes the job that Stone and Boolos-Burgess-Jeffrey started (boldface added). He offers us three levels of description of Turing machine algorithms (p. 157):
High-level description: "wherein we use ... prose to describe an algorithm, ignoring the implementation details. At this level we do not need to mention how the machine manages its tape or head."
Implementation description: "in which we use ... prose to describe the way that the Turing machine moves its head and the way that it stores data on its tape. At this level we do not give details of states or transition function."
Formal description: "... the lowest, most detailed, level of description... that spells out in full the Turing machine's states, transition function, and so on."
== 2011: Yanofsky ==
In Yanofsky (2011) an algorithm is defined to be the set of programs that implement that algorithm: the set of all programs is partitioned into equivalence classes. Although the set of programs does not form a category, the set of algorithms forms a category with extra structure. The conditions that describe when two programs are equivalent turn out to be coherence relations which give the extra structure to the category of algorithms.
== 2024: Seiller ==
In Seiller (2024) an algorithm is defined as an edge-labelled graph, together with an interpretation of labels as maps in an abstract data structure. This definition is given together with a formal definition of programs (and models of computation), allowing one to formally define the notion of implementation, that is, when a program implements an algorithm. The notion of algorithm thus obtained avoids some known issues, and is understood as a specification of some kind. In particular, a given program can (and in fact always does) implement several algorithms. Another important feature of the approach is that it takes into account the fact that a given algorithm can be implemented in different (and possibly unrelated) computational models.
== Notes ==
== References ==
David Berlinski (2000), The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer, Harcourt, Inc., San Diego, ISBN 0-15-601391-6 (pbk.)
George Boolos, John P. Burgess, Richard Jeffrey (2002), Computability and Logic: Fourth Edition, Cambridge University Press, Cambridge, UK. ISBN 0-521-00758-5 (pbk).
Andreas Blass and Yuri Gurevich (2003), Algorithms: A Quest for Absolute Definitions, Bulletin of European Association for Theoretical Computer Science 81, 2003. Includes an excellent bibliography of 56 references.
Burgin, M. Super-recursive algorithms, Monographs in computer science, Springer, 2005. ISBN 0-387-95569-0
Davis, Martin (1958). Computability & Unsolvability. New York: McGraw-Hill Book Company, Inc.. A source of important definitions and some Turing machine-based algorithms for a few recursive functions.
Davis, Martin (1965). The Undecidable: Basic Papers On Undecidable Propositions, Unsolvable Problems and Computable Functions. New York: Raven Press. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included.
Dennett, Daniel (1995). Darwin's Dangerous Idea. New York: Touchstone/Simon & Schuster.
Gandy, Robin, Church's Thesis and principles for Mechanisms, in J. Barwise, H. J. Keisler and K. Kunen, eds., The Kleene Symposium, North-Holland Publishing Company 1980) pp. 123–148. Gandy's famous "4 principles of [computational] mechanisms" includes "Principle IV -- The Principle of Local Causality".
Gurevich, Yuri, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pages 77–111. Includes bibliography of 33 sources.
Kleene, Stephen C. (1943). "Recursive Predicates and Quantifiers". Transactions of the American Mathematical Society. 54 (1): 41–73. doi:10.2307/1990131. JSTOR 1990131. Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis" (Kleene 1952:317) (i.e., the Church Thesis).
Kleene, Stephen C. (1991) [1952]. Introduction to Metamathematics (Tenth ed.). North-Holland Publishing Company. Excellent — accessible, readable — reference source for mathematical "foundations".
Knuth, Donald E. (1973) [1968]. The Art of Computer Programming Second Edition, Volume 1/Fundamental Algorithms (2nd ed.). Addison-Wesley Publishing Company. The first of Knuth's famous series of three texts.
Lewis, H.R. and Papadimitriou, C.H. Elements of the Theory of Computation, Prentice-Hall, Upper Saddle River, N.J., 1998
Markov, A. A. (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e. Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algerifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]
Minsky, Marvin (1967). Computation: Finite and Infinite Machines (First ed.). Prentice-Hall, Englewood Cliffs, NJ. Minsky expands his "...idea of an algorithm — an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
Rogers, Hartley Jr, (1967), Theory of Recursive Functions and Effective Computability, MIT Press (1987), Cambridge MA, ISBN 0-262-68052-1 (pbk.)
Searle, John (2002). Consciousness and Language. Cambridge UK: Cambridge University Press. ISBN 0-521-59744-7.
Seiller, Thomas, (2024), Mathematical Informatics, Habilitation thesis, Université Sorbonne Paris Nord, [1].
Sipser, Michael, (2006), Introduction to the Theory of Computation: Second Edition, Thompson Course Technology div. of Thompson Learning, Inc. Boston, MA. ISBN 978-0-534-95097-2.
Soare, Robert (1995, to appear in Proceedings of the 10th International Congress of Logic, Methodology, and Philosophy of Science, August 19–25, 1995, Florence, Italy), Computability and Recursion, on the web at ??.
Ian Stewart, Algorithm, Encyclopædia Britannica 2006.
Stone, Harold S. Introduction to Computer Organization and Data Structures (1972 ed.). McGraw-Hill, New York. Cf in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
van Emde Boas, Peter (1990), "Machine Models and Simulations" pp 3–66, appearing in Jan van Leeuwen (1990), Handbook of Theoretical Computer Science. Volume A: Algorithms & Complexity, The MIT Press/Elsevier, 1990, ISBN 0-444-88071-2 (Volume A)
In computing, a Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with a certain (typically small) probability. Two examples of such algorithms are the Karger–Stein algorithm and the Monte Carlo algorithm for minimum feedback arc set.
The name refers to the Monte Carlo casino in the Principality of Monaco, which is well-known around the world as an icon of gambling. The term "Monte Carlo" was first introduced in 1947 by Nicholas Metropolis.
Las Vegas algorithms are a dual of Monte Carlo algorithms and never return an incorrect answer. However, they may make random choices as part of their work. As a result, the time taken might vary between runs, even with the same input.
If there is a procedure for verifying whether the answer given by a Monte Carlo algorithm is correct, and the probability of a correct answer is bounded above zero, then with probability one, running the algorithm repeatedly while testing the answers will eventually give a correct answer. Whether this process is a Las Vegas algorithm depends on whether halting with probability one is considered to satisfy the definition.
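The repeated-run construction just described can be sketched as follows; monte_carlo_guess and verify are illustrative stand-ins (a toy problem where the answer is already known), not any specific algorithm:

```python
import random

def monte_carlo_guess(target):
    # Stand-in Monte Carlo algorithm: correct with probability 1/2.
    return target if random.random() < 0.5 else target + 1

def verify(answer, target):
    # Assumed efficient procedure for checking an answer.
    return answer == target

def las_vegas(target):
    # Re-run the Monte Carlo algorithm, testing each answer.  This halts
    # with probability one, and the running time varies between runs.
    while True:
        answer = monte_carlo_guess(target)
        if verify(answer, target):
            return answer

random.seed(0)
print(las_vegas(42))  # 42: the returned answer is always verified correct
```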
== One-sided vs two-sided error ==
While the answer returned by a deterministic algorithm is always expected to be correct, this is not the case for Monte Carlo algorithms. For decision problems, these algorithms are generally classified as either false-biased or true-biased. A false-biased Monte Carlo algorithm is always correct when it returns false; a true-biased algorithm is always correct when it returns true. While this describes algorithms with one-sided errors, others might have no bias; these are said to have two-sided errors. The answer they provide (either true or false) will be correct or incorrect with some bounded probability.
For instance, the Solovay–Strassen primality test is used to determine whether a given number is a prime number. It always answers true for prime number inputs; for composite inputs, it answers false with probability at least 1⁄2 and true with probability less than 1⁄2. Thus, false answers from the algorithm are certain to be correct, whereas the true answers remain uncertain; this is said to be a 1⁄2-correct false-biased algorithm.
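One way the Solovay–Strassen test can be implemented is sketched below (the Jacobi-symbol helper and the parameter names are ours). It returns False only when a random base a witnesses compositeness, so a False answer is always correct, while a True answer on a composite input errs with probability at most 2^(−k):

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0; returns -1, 0, or 1."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # factor out 2s, using the value of (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0     # 0 when gcd(a, n) > 1

def solovay_strassen(n, k=20):
    """False-biased primality test: False is always correct; a True answer
    is wrong (for composite n) with probability less than 2**-k."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(k):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False               # Euler witness found: certainly composite
    return True                        # probably prime

print(solovay_strassen(97))            # True: primes are never rejected
```

Primes always pass every round, matching the 1⁄2-correct false-biased description above; for a composite input, each round rejects with probability at least 1⁄2.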
== Amplification ==
For a Monte Carlo algorithm with one-sided errors, the failure probability can be reduced (and the success probability amplified) by running the algorithm k times. Consider again the Solovay–Strassen algorithm, which is 1⁄2-correct false-biased. One may run this algorithm multiple times, returning a false answer if it reaches a false response within k iterations, and otherwise returning true. Thus, if the number is prime then the answer is always correct, and if the number is composite then the answer is correct with probability at least 1 − (1 − 1⁄2)^k = 1 − 2^(−k).
For Monte Carlo decision algorithms with two-sided error, the failure probability may again be reduced by running the algorithm k times and returning the majority function of the answers.
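The majority-vote amplification for two-sided error can be simulated with a stand-in algorithm; in this sketch the 2⁄3 correctness probability and all names are illustrative, not taken from any particular algorithm:

```python
import random
from collections import Counter

def noisy_answer(truth: bool, p_correct: float = 2 / 3) -> bool:
    """Stand-in for a Monte Carlo algorithm with two-sided error:
    returns the correct answer with probability p_correct."""
    return truth if random.random() < p_correct else not truth

def majority_vote(truth: bool, k: int = 501) -> bool:
    """Run the noisy algorithm k times and return the majority answer;
    by a Chernoff bound the error probability drops exponentially in k."""
    votes = Counter(noisy_answer(truth) for _ in range(k))
    return votes[True] > votes[False]
```

With k = 501 independent runs of a 2⁄3-correct algorithm, the majority answer is wrong only with exponentially small probability.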
== Complexity classes ==
The complexity class BPP describes decision problems that can be solved by polynomial-time Monte Carlo algorithms with a bounded probability of two-sided errors, and the complexity class RP describes problems that can be solved by a Monte Carlo algorithm with a bounded probability of one-sided error: if the correct answer is false, the algorithm always says so, but it may answer false incorrectly for some instances where the correct answer is true. In contrast, the complexity class ZPP describes problems solvable by polynomial expected time Las Vegas algorithms. ZPP ⊆ RP ⊆ BPP, but it is not known whether any of these complexity classes is distinct from each other; that is, Monte Carlo algorithms may have more computational power than Las Vegas algorithms, but this has not been proven. Another complexity class, PP, describes decision problems with a polynomial-time Monte Carlo algorithm that is more accurate than flipping a coin but where the error probability cannot necessarily be bounded away from 1⁄2.
== Classes of Monte Carlo and Las Vegas algorithms ==
Randomized algorithms are primarily divided into two main types, Monte Carlo and Las Vegas; however, these represent only the top of a hierarchy and can be further categorized.
Las Vegas
Sherwood—"performant and effective special case of Las Vegas"
Numerical—"numerical Las Vegas"
Monte Carlo
Atlantic City—"bounded error special case of Monte Carlo"
Numerical—"numerical approximation Monte Carlo"
"Both Las Vegas and Monte Carlo are dealing with decisions, i.e., problems in their decision version." "This however should not give a wrong impression and confine these algorithms to such problems—both types of randomized algorithms can be used on numerical problems as well, problems where the output is not simple ‘yes’/‘no’, but where one needs to receive a result that is numerical in nature."
The classification above represents a general framework for Monte Carlo and Las Vegas randomized algorithms. Instead of the strict inequality <, one could use ≤, thus making the worst-case probabilities equal.
== Applications in computational number theory and other areas ==
Well-known Monte Carlo algorithms include the Solovay–Strassen primality test, the Baillie–PSW primality test, the Miller–Rabin primality test, and certain fast variants of the Schreier–Sims algorithm in computational group theory.
For algorithms that are part of the Stochastic Optimization (SO) group of algorithms, where the probability is not known in advance but is determined empirically, it is sometimes possible to merge Monte Carlo and such an algorithm "to have both probability bound calculated in advance and a Stochastic Optimization component." "Example of such an algorithm is Ant Inspired Monte Carlo." In this way, "drawback of SO has been mitigated, and a confidence in a solution has been established."
== See also ==
Monte Carlo methods, algorithms used in physical simulation and computational statistics based on taking random samples
Atlantic City algorithm
Las Vegas algorithm
== References ==
=== Citations ===
=== Sources === | Wikipedia/Monte_Carlo_algorithm |
In computer science, graph traversal (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.
== Redundancy ==
Unlike tree traversal, graph traversal may require that some vertices be visited more than once, since it is not necessarily known before transitioning to a vertex that it has already been explored. As graphs become more dense, this redundancy becomes more prevalent, causing computation time to increase; as graphs become more sparse, the opposite holds true.
Thus, it is usually necessary to remember which vertices have already been explored by the algorithm, so that vertices are revisited as infrequently as possible (or in the worst case, to prevent the traversal from continuing indefinitely). This may be accomplished by associating each vertex of the graph with a "color" or "visitation" state during the traversal, which is then checked and updated as the algorithm visits each vertex. If the vertex has already been visited, it is ignored and the path is pursued no further; otherwise, the algorithm checks/updates the vertex and continues down its current path.
Several special cases of graphs imply the visitation of other vertices in their structure, and thus do not require that visitation be explicitly recorded during the traversal. An important example of this is a tree: during a traversal it may be assumed that all "ancestor" vertices of the current vertex (and others depending on the algorithm) have already been visited. Both the depth-first and breadth-first graph searches are adaptations of tree-based algorithms, distinguished primarily by the lack of a structurally determined "root" vertex and the addition of a data structure to record the traversal's visitation state.
== Graph traversal algorithms ==
Note. — If each vertex in a graph is to be traversed by a tree-based algorithm (such as DFS or BFS), then the algorithm must be called at least once for each connected component of the graph. This is easily accomplished by iterating through all the vertices of the graph, performing the algorithm on each vertex that is still unvisited when examined.
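This per-component iteration can be sketched in Python; the graph representation (a dict mapping each vertex to its adjacency list) is an assumption made for the example:

```python
from collections import deque

def connected_components(graph: dict) -> int:
    """Count components by starting a fresh BFS from every vertex
    that is still unvisited when examined."""
    visited = set()
    components = 0
    for start in graph:             # iterate through all vertices
        if start in visited:
            continue
        components += 1             # a new traversal means a new component
        queue = deque([start])
        visited.add(start)
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
    return components
```

Each call to the inner traversal visits exactly one connected component, so the outer loop counts them.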
=== Depth-first search ===
A depth-first search (DFS) is an algorithm for traversing a finite graph. DFS visits the child vertices before visiting the sibling vertices; that is, it traverses the depth of any particular path before exploring its breadth. A stack (often the program's call stack via recursion) is generally used when implementing the algorithm.
The algorithm begins with a chosen "root" vertex; it then iteratively transitions from the current vertex to an adjacent, unvisited vertex, until it can no longer find an unexplored vertex to transition to from its current location. The algorithm then backtracks along previously visited vertices, until it finds a vertex connected to yet more uncharted territory. It will then proceed down the new path as it had before, backtracking as it encounters dead-ends, and ending only when the algorithm has backtracked past the original "root" vertex from the very first step.
DFS is the basis for many graph-related algorithms, including topological sorts and planarity testing.
==== Pseudocode ====
Input: A graph G and a vertex v of G.
Output: A labeling of the edges in the connected component of v as discovery edges and back edges.
procedure DFS(G, v) is
    label v as explored
    for all edges e in G.incidentEdges(v) do
        if edge e is unexplored then
            w ← G.adjacentVertex(v, e)
            if vertex w is unexplored then
                label e as a discovery edge
                recursively call DFS(G, w)
            else
                label e as a back edge
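A runnable Python version of this edge-labeling DFS might look as follows; the dict-of-adjacency-lists representation and the use of frozensets for undirected edges are choices made for the sketch:

```python
def dfs_label(graph: dict, v, explored=None, labels=None) -> dict:
    """Recursive DFS over an undirected graph {vertex: [neighbors]};
    labels each edge (stored as a frozenset) 'discovery' or 'back'."""
    if explored is None:
        explored, labels = set(), {}
    explored.add(v)                     # label v as explored
    for w in graph[v]:
        e = frozenset((v, w))           # undirected edge between v and w
        if e not in labels:             # edge e is unexplored
            if w not in explored:
                labels[e] = "discovery"
                dfs_label(graph, w, explored, labels)
            else:
                labels[e] = "back"
    return labels
```

On a triangle, two edges become discovery edges forming the DFS tree and the remaining edge becomes a back edge.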
=== Breadth-first search ===
A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits the sibling vertices before visiting the child vertices, and a queue is used in the search process. This algorithm is often used to find the shortest path from one vertex to another.
==== Pseudocode ====
Input: A graph G and a vertex v of G.
Output: The closest vertex to v satisfying some conditions, or null if no such vertex exists.
procedure BFS(G, v) is
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty do
        w ← Q.dequeue()
        if w is what we are looking for then
            return w
        for all edges e in G.adjacentEdges(w) do
            x ← G.adjacentVertex(w, e)
            if x is not marked then
                mark x
                enqueue x onto Q
    return null
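A minimal Python sketch of this search, assuming a dict-of-adjacency-lists graph and a predicate `goal` standing in for "what we are looking for":

```python
from collections import deque

def bfs_find(graph: dict, v, goal):
    """BFS from v; return the closest vertex satisfying goal,
    or None if no such vertex is reachable."""
    queue = deque([v])
    marked = {v}
    while queue:
        w = queue.popleft()
        if goal(w):
            return w
        for x in graph[w]:
            if x not in marked:
                marked.add(x)
                queue.append(x)
    return None
```

Because vertices are dequeued in order of distance from v, the first vertex satisfying the predicate is also a closest one.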
== Applications ==
Breadth-first search can be used to solve many problems in graph theory, for example:
finding all vertices within one connected component;
Cheney's algorithm;
finding the shortest path between two vertices;
testing a graph for bipartiteness;
Cuthill–McKee algorithm mesh numbering;
Ford–Fulkerson algorithm for computing the maximum flow in a flow network;
serialization/deserialization of a binary tree vs serialization in sorted order (allows the tree to be re-constructed in an efficient manner);
maze generation algorithms;
flood fill algorithm for marking contiguous regions of a two dimensional image or n-dimensional array;
analysis of networks and relationships.
== Graph exploration ==
The problem of graph exploration can be seen as a variant of graph traversal. It is an online problem, meaning that the information about the graph is only revealed during the runtime of the algorithm. A common model is as follows: given a connected graph G = (V, E) with non-negative edge weights, the algorithm starts at some vertex and knows all incident outgoing edges and the vertices at the ends of these edges—but no more. When a new vertex is visited, then again all incident outgoing edges and the vertices at the end are known. The goal is to visit all n vertices and return to the starting vertex, while keeping the sum of the weights of the tour as small as possible. The problem can also be understood as a specific version of the travelling salesman problem, where the salesman has to discover the graph on the go.
For general graphs, the best known algorithm for both undirected and directed graphs is a simple greedy algorithm:
In the undirected case, the greedy tour is at most O(ln n)-times longer than an optimal tour. The best lower bound known for any deterministic online algorithm is 10/3.
Unit-weight undirected graphs can be explored with a competitive ratio of 2 − ε, which is already a tight bound on tadpole graphs.
In the directed case, the greedy tour is at most (n − 1)-times longer than an optimal tour. This matches the lower bound of n − 1. An analogous competitive lower bound of Ω(n) also holds for randomized algorithms that know the coordinates of each node in a geometric embedding. If instead of visiting all nodes just a single "treasure" node has to be found, the competitive bounds are Θ(n²) on unit weight directed graphs, for both deterministic and randomized algorithms.
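The greedy strategy (always walk to the nearest unvisited vertex) can be sketched in Python. This is an offline simulation with full knowledge of the graph, whereas the real exploration problem is online; the weighted adjacency-list representation and function names are assumptions of the sketch:

```python
import heapq

def nearest_unvisited(graph: dict, src, visited: set):
    """Dijkstra from src over {v: [(neighbor, weight), ...]};
    return (distance, vertex) of the closest vertex not in visited,
    or None if every reachable vertex is visited."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue
        if v not in visited:
            return d, v
        for w, weight in graph[v]:
            nd = d + weight
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return None

def greedy_tour_length(graph: dict, start):
    """Total weight of the greedy exploration tour: repeatedly move
    to the nearest unvisited vertex, then return to the start."""
    visited, total, current = {start}, 0, start
    while True:
        step = nearest_unvisited(graph, current, visited)
        if step is None:
            break
        d, v = step
        total += d
        visited.add(v)
        current = v
    # walk back to the start along a shortest path
    back = nearest_unvisited(graph, current, set(graph) - {start})
    return total + (back[0] if back else 0)
```

On a weighted triangle a–b (1), b–c (1), a–c (2), the greedy tour from a costs 1 + 1 + 2 = 4.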
== Universal traversal sequences ==
A universal traversal sequence is a sequence of instructions comprising a graph traversal for any regular graph with a set number of vertices and for any starting vertex. A probabilistic proof was used by Aleliunas et al. to show that there exists a universal traversal sequence with a number of instructions proportional to O(n⁵) for any regular graph with n vertices. The steps specified in the sequence are relative to the current node, not absolute. For example, if the current node is v_j, and v_j has d neighbors, then the traversal sequence will specify the next node to visit, v_(j+1), as the i-th neighbor of v_j, where 1 ≤ i ≤ d.
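How such relative instructions are interpreted can be sketched in Python. The sequence and graph below are purely illustrative (they do not form an actual universal traversal sequence); each vertex is assumed to carry a fixed ordering of its d neighbors:

```python
def apply_traversal_sequence(neighbors: dict, start, sequence) -> set:
    """Follow a sequence of relative instructions on a d-regular graph.
    neighbors[v] is an ordered list of v's d neighbors; instruction i
    (1 <= i <= d) moves to the i-th neighbor of the current vertex.
    Returns the set of vertices visited along the walk."""
    current = start
    visited = {start}
    for i in sequence:
        current = neighbors[current][i - 1]  # relative, not absolute
        visited.add(current)
    return visited
```

On the 2-regular cycle on four vertices with each adjacency list ordered (clockwise, counter-clockwise), the instruction sequence [1, 1, 1] starting at vertex 0 visits all four vertices.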
== See also ==
External memory graph traversal
== References == | Wikipedia/Graph_exploration_algorithm |
The NIST Dictionary of Algorithms and Data Structures is a reference work maintained by the U.S. National Institute of Standards and Technology. It defines a large number of terms relating to algorithms and data structures. For algorithms and data structures not necessarily mentioned here, see list of algorithms and list of data structures.
This list of terms was originally derived from the index of that document, and is in the public domain, as it was compiled by a Federal Government employee as part of a Federal Government work. Some of the terms defined are:
== A ==
absolute performance guarantee
abstract data type (ADT)
abstract syntax tree (AST)
(a,b)-tree
accepting state
Ackermann's function
active data structure
acyclic directed graph
adaptive heap sort
adaptive Huffman coding
adaptive k-d tree
adaptive sort
address-calculation sort
adjacency list representation
adjacency matrix representation
adversary
algorithm
algorithm BSTW
algorithm FGK
algorithmic efficiency
algorithmically solvable
algorithm V
all pairs shortest path
alphabet
Alpha Skip Search algorithm
alternating path
alternating Turing machine
alternation
American flag sort
amortized cost
ancestor
and
and-or tree
American National Standards Institute (ANSI)
antichain
antisymmetric relation
AP
Apostolico–Crochemore algorithm
Apostolico–Giancarlo algorithm
approximate string matching
approximation algorithm
arborescence
arithmetic coding
array
array index
array merging
array search
articulation point
A* search algorithm
assignment problem
association list
associative
associative array
asymptotically tight bound
asymptotic bound
asymptotic lower bound
asymptotic space complexity
asymptotic time complexity
asymptotic upper bound
augmenting path
automaton
average case
average-case cost
AVL tree
axiomatic semantics
== B ==
backtracking
bag
Baillie–PSW primality test
balanced binary search tree
balanced binary tree
balanced k-way merge sort
balanced merge sort
balanced multiway merge
balanced multiway tree
balanced quicksort
balanced tree
balanced two-way merge sort
BANG file
Batcher sort
Baum–Welch algorithm
BB α tree
BDD
BD-tree
Bellman–Ford algorithm
Benford's law
best case
best-case cost
best-first search
biconnected component
biconnected graph
bidirectional bubble sort
big-O notation
binary function
binary fuse filter
binary GCD algorithm
binary heap
binary insertion sort
binary knapsack problem
binary priority queue
binary relation
binary search
binary search tree
binary tree
binary tree representation of trees
bingo sort
binomial heap
binomial tree
bin packing problem
bin sort
bintree
bipartite graph
bipartite matching
bisector
bitonic sort
bit vector
Bk tree
bdk tree (not to be confused with k-d-B-tree)
block
block addressing index
blocking flow
block search
Bloom filter
blossom (graph theory)
bogosort
boogol
Boolean
Boolean expression
Boolean function
bottleneck traveling salesman
bottom-up tree automaton
boundary-based representation
bounded error probability in polynomial time
bounded queue
bounded stack
Bounding volume hierarchy, also referred to as bounding volume tree (BV-tree, BVT)
Boyer–Moore string-search algorithm
Boyer–Moore–Horspool algorithm
bozo sort
B+ tree
BPP (complexity)
Bradford's law
branch (as in control flow)
branch (as in revision control)
branch and bound
breadth-first search
Bresenham's line algorithm
brick sort
bridge
British Museum algorithm
brute-force attack
brute-force search
brute-force string search
brute-force string search with mismatches
BSP-tree
B*-tree
B-tree
bubble sort
bucket
bucket array
bucketing method
bucket sort
bucket trie
buddy system
buddy tree
build-heap
Burrows–Wheeler transform (BWT)
busy beaver
Byzantine generals
== C ==
cactus stack
Calculus of Communicating Systems (CCS)
calendar queue
candidate consistency testing
candidate verification
canonical complexity class
capacitated facility location
capacity
capacity constraint
Cartesian tree
cascade merge sort
caverphone
Cayley–Purser algorithm
C curve
cell probe model
cell tree
cellular automaton
centroid
certificate
chain (order theory)
chaining (algorithm)
child
Chinese postman problem
Chinese remainder theorem
Christofides algorithm
Christofides heuristic
chromatic index
chromatic number
Church–Turing thesis
circuit
circuit complexity
circuit value problem
circular list
circular queue
clique
clique problem
clustering (see hash table)
clustering free
coalesced hashing
coarsening
cocktail shaker sort
codeword
coding tree
collective recursion
collision
collision resolution scheme
Colussi
combination
comb sort
Communicating Sequential Processes
commutative
compact DAWG
compact trie
comparison sort
competitive analysis
competitive ratio
complement
complete binary tree
complete graph
completely connected graph
complete tree
complexity
complexity class
computable
concave function
concurrent flow
concurrent read, concurrent write
concurrent read, exclusive write
configuration
confluently persistent data structure
conjunction
connected components
connected graph
co-NP
constant function
continuous knapsack problem
Cook reduction
Cook's theorem
counting sort
covering
CRCW
CREW (algorithm)
critical path problem
CSP (communicating sequential processes)
CSP (constraint satisfaction problem)
CTL
cuckoo hashing
cuckoo filter
cut (graph theory)
cut (logic programming)
cutting plane
cutting stock problem
cutting theorem
cut vertex
cycle sort
cyclic redundancy check (CRC)
== D ==
D-adjacent
DAG shortest paths
Damerau–Levenshtein distance
data structure
decidable
decidable language
decimation
decision problem
decision tree
decomposable searching problem
degree
dense graph
depoissonization
depth
depth-first search (DFS)
deque
derangement
descendant (see tree structure)
deterministic
deterministic algorithm
deterministic finite automata string search
deterministic finite automaton (DFA)
deterministic finite state machine
deterministic finite tree automaton
deterministic pushdown automaton (DPDA)
deterministic tree automaton
Deutsch–Jozsa algorithm
DFS forest
DFTA
diagonalization argument
diameter
dichotomic search
dictionary (data structure)
diet (see discrete interval encoding tree below)
difference (set theory)
digital search tree
digital tree
digraph
Dijkstra's algorithm
diminishing increment sort
dining philosophers
direct chaining hashing
directed acyclic graph (DAG)
directed acyclic word graph (DAWG)
directed graph
discrete interval encoding tree
discrete p-center
disjoint set
disjunction
distributed algorithm
distributional complexity
distribution sort
divide-and-conquer algorithm
divide and marriage before conquest
division method
data domain
don't-care term
Doomsday rule
double-direction bubble sort
double-ended priority queue
double hashing
double left rotation
Double Metaphone
double right rotation
double-ended queue
doubly linked list
dragon curve
dual graph
dual linear program
dyadic tree
dynamic array
dynamic data structure
dynamic hashing
dynamic programming
dynamization transformation
== E ==
edge
eb tree (elastic binary tree)
edge coloring
edge connectivity
edge crossing
edge-weighted graph
edit distance
edit operation
edit script
8 queens
elastic-bucket trie
element uniqueness
end-of-string
epidemic algorithm
Euclidean algorithm
Euclidean distance
Euclidean Steiner tree
Euclidean traveling salesman problem
Euclid's algorithm
Euler cycle
Eulerian graph
Eulerian path
exact string matching
EXCELL (extendible cell)
exchange sort
exclusive or
exclusive read, concurrent write (ERCW)
exclusive read, exclusive write (EREW)
exhaustive search
existential state
expandable hashing
expander graph
exponential
extended binary tree
extended Euclidean algorithm
extended k-d tree
extendible hashing
external index
external memory algorithm
external memory data structure
external merge
external merge sort
external node
external quicksort
external radix sort
external sort
extrapolation search
extremal
extreme point
== F ==
facility location
factor (see substring)
factorial
fast Fourier transform (FFT)
fathoming
feasible region
feasible solution
feedback edge set
feedback vertex set
Ferguson–Forcade algorithm
Fibonacci number
Fibonacci search
Fibonacci tree
Fibonacci heap
Find
find kth least element
finitary tree
finite Fourier transform (discrete Fourier transform)
finite-state machine
finite state machine minimization
finite-state transducer
first come, first served
first-in, first-out (FIFO)
fixed-grid method
flash sort
flow
flow conservation
flow function
flow network
Floyd–Warshall algorithm
Ford–Bellman algorithm
Ford–Fulkerson algorithm
forest
forest editing problem
formal language
formal methods
formal verification
forward index
fractal
fractional knapsack problem
fractional solution
free edge
free list
free tree
free vertex
frequency count heuristic
full array
full binary tree
full inverted index
fully dynamic graph problem
fully persistent data structure
fully polynomial approximation scheme
function (programming)
function (mathematics)
functional data structure
== G ==
Galil–Giancarlo
Galil–Seiferas
gamma function
GBD-tree
geometric optimization problem
global optimum
gnome sort
goobi
graph
graph coloring
graph concentration
graph drawing
graph isomorphism
graph partition
Gray code
greatest common divisor (GCD)
greedy algorithm
greedy heuristic
grid drawing
grid file
Grover's algorithm
== H ==
halting problem
Hamiltonian cycle
Hamiltonian path
Hamming distance
Harter–Highway dragon
hash function
hash heap
hash table
hash table delete
Hausdorff distance
hB-tree
head
heap
heapify
heap property
heapsort
heaviest common subsequence
height
height-balanced binary search tree
height-balanced tree
heuristic
hidden Markov model
highest common factor
Hilbert curve
histogram sort
homeomorphic
horizontal visibility map
Huffman encoding
Hungarian algorithm
hybrid algorithm
hyperedge
hypergraph
== I ==
Identity function
ideal merge
implication
implies
implicit data structure
in-branching
inclusion–exclusion principle
inclusive or
incompressible string
incremental algorithm
in-degree
independent set (graph theory)
index file
information theoretic bound
in-place algorithm
in-order traversal
in-place sort
insertion sort
instantaneous description
integer linear program
integer multi-commodity flow
integer polyhedron
interactive proof system
interface
interior-based representation
internal node
internal sort
interpolation search
interpolation-sequential search
interpolation sort
intersection (set theory)
interval tree
intractable
introsort
introspective sort
inverse Ackermann function
inverted file index
inverted index
irreflexive
isomorphic
iteration
== J ==
Jaro–Winkler distance
Johnson's algorithm
Johnson–Trotter algorithm
jump list
jump search
== K ==
Karmarkar's algorithm
Karnaugh map
Karp–Rabin string-search algorithm
Karp reduction
k-ary heap
k-ary Huffman encoding
k-ary tree
k-clustering
k-coloring
k-connected graph
k-d-B-tree (not to be confused with bdk tree)
k-dimensional
K-dominant match
k-d tree
key
KMP
KmpSkip Search
knapsack problem
knight's tour
Knuth–Morris–Pratt algorithm
Königsberg bridges problem
Kolmogorov complexity
Kraft's inequality
Kripke structure
Kruskal's algorithm
kth order Fibonacci numbers
kth shortest path
kth smallest element
KV diagram
k-way merge
k-way merge sort
k-way tree
== L ==
labeled graph
language
last-in, first-out (LIFO)
Las Vegas algorithm
lattice (group)
layered graph
LCS
leaf
least common multiple (LCM)
leftist tree
left rotation
left-child right-sibling binary tree also termed first-child next-sibling binary tree, doubly chained tree, or filial-heir chain
Lempel–Ziv–Welch (LZW)
level-order traversal
Levenshtein distance
lexicographical order
linear
linear congruential generator
linear hash
linear insertion sort
linear order
linear probing
linear probing sort
linear product
linear program
linear quadtree
linear search
link
linked list
list
list contraction
little-o notation
Lm distance
load factor (computer science)
local alignment
local optimum
logarithm, logarithmic scale
longest common subsequence
longest common substring
Lotka's law
lower bound
lower triangular matrix
lowest common ancestor
l-reduction
== M ==
Malhotra–Kumar–Maheshwari blocking flow
Manhattan distance
many-one reduction
Markov chain
marriage problem (see assignment problem)
Master theorem (analysis of algorithms)
matched edge
matched vertex
matching (graph theory)
matrix
matrix-chain multiplication problem
max-heap property
maximal independent set
maximally connected component
Maximal Shift
maximum bipartite matching
maximum-flow problem
MAX-SNP
Mealy machine
mean
median
meld (data structures)
memoization
merge algorithm
merge sort
Merkle tree
meromorphic function
metaheuristic
metaphone
midrange
Miller–Rabin primality test
min-heap property
minimal perfect hashing
minimum bounding box (MBB)
minimum cut
minimum path cover
minimum spanning tree
minimum vertex cut
mixed integer linear program
mode
model checking
model of computation
moderately exponential
MODIFIND
monotone priority queue
monotonically decreasing
monotonically increasing
Monte Carlo algorithm
Moore machine
Morris–Pratt
move (finite-state machine transition)
move-to-front heuristic
move-to-root heuristic
multi-commodity flow
multigraph
multilayer grid file
multiplication method
multiprefix
multiprocessor model
multiset
multi suffix tree
multiway decision
multiway merge
multiway search tree
multiway tree
Munkres' assignment algorithm
== N ==
naive string search
NAND
n-ary function
NC
NC many-one reducibility
nearest neighbor search
negation
network flow (see flow network)
network flow problem
next state
NIST
node
nonbalanced merge
nonbalanced merge sort
nondeterministic
nondeterministic algorithm
nondeterministic finite automaton
nondeterministic finite-state machine (NFA)
nondeterministic finite tree automaton (NFTA)
nondeterministic polynomial time
nondeterministic tree automaton
nondeterministic Turing machine
nonterminal node
nor
not
Not So Naive
NP
NP-complete
NP-complete language
NP-hard
n queens
nullary function
null tree
New York State Identification and Intelligence System (NYSIIS)
== O ==
objective function
occurrence
octree
odd–even sort
offline algorithm
offset (computer science)
omega
omicron
one-based indexing
one-dimensional
online algorithm
open addressing
optimal
optimal cost
optimal hashing
optimal merge
optimal mismatch
optimal polygon triangulation problem
optimal polyphase merge
optimal polyphase merge sort
optimal solution
optimal triangulation problem
optimal value
optimization problem
or
oracle set
oracle tape
oracle Turing machine
orders of approximation
ordered array
ordered binary decision diagram (OBDD)
ordered linked list
ordered tree
order preserving hash
order preserving minimal perfect hashing
oriented acyclic graph
oriented graph
oriented tree
orthogonal drawing
orthogonal lists
orthogonally convex rectilinear polygon
oscillating merge sort
out-branching
out-degree
overlapping subproblems
== P ==
packing (see set packing)
padding argument
pagoda
pairing heap
PAM (point access method)
parallel computation thesis
parallel prefix computation
parallel random-access machine (PRAM)
parametric searching
parent
partial function
partially decidable problem
partially dynamic graph problem
partially ordered set
partially persistent data structure
partial order
partial recursive function
partition (set theory)
passive data structure
patience sorting
path (graph theory)
path cover
path system problem
Patricia tree
pattern
pattern element
P-complete
PCP theorem
Peano curve
Pearson's hashing
perfect binary tree
perfect hashing
perfect k-ary tree
perfect matching
perfect shuffle
performance guarantee
performance ratio
permutation
persistent data structure
phonetic coding
pile (data structure)
pipelined divide and conquer
planar graph
planarization
planar straight-line graph
PLOP-hashing
point access method
pointer jumping
pointer machine
poissonization
polychotomy
polyhedron
polylogarithmic
polynomial
polynomial-time approximation scheme (PTAS)
polynomial hierarchy
polynomial time
polynomial-time Church–Turing thesis
polynomial-time reduction
polyphase merge
polyphase merge sort
polytope
poset
postfix traversal
Post machine (see Post–Turing machine)
postman's sort
postorder traversal
Post correspondence problem
potential function (see potential method)
predicate
prefix
prefix code
prefix computation
prefix sum
prefix traversal
preorder traversal
primary clustering
primitive recursive
Prim's algorithm
principle of optimality
priority queue
prisoner's dilemma
PRNG
probabilistic algorithm
probabilistically checkable proof
probabilistic Turing machine
probe sequence
Procedure (computer science)
process algebra
proper (see proper subset)
proper binary tree
proper coloring
proper subset
property list
prune and search
pseudorandom number generator
pth order Fibonacci numbers
P-tree
purely functional language
pushdown automaton (PDA)
pushdown transducer
p-way merge sort
== Q ==
qm sort
qsort
quadratic probing
quadtree
quadtree complexity theorem
quad trie
quantum computation
queue
quicksort
== R ==
Rabin–Karp string-search algorithm
radix quicksort
radix sort
ragged matrix
Raita algorithm
random-access machine
random number generation
randomization
randomized algorithm
randomized binary search tree
randomized complexity
randomized polynomial time
randomized rounding
randomized search tree
Randomized-Select
random number generator
random sampling
range (function)
range sort
Rank (graph theory)
Ratcliff/Obershelp pattern recognition
reachable
rebalance
recognizer
rectangular matrix
rectilinear
rectilinear Steiner tree
recurrence equations
recurrence relation
recursion
recursion termination
recursion tree
recursive (computer science)
recursive data structure
recursive doubling
recursive language
recursively enumerable language
recursively solvable
red–black tree
reduced basis
reduced digraph
reduced ordered binary decision diagram (ROBDD)
reduction
reflexive relation
regular decomposition
rehashing
relation (mathematics)
relational structure
relative performance guarantee
relaxation
relaxed balance
rescalable
restricted universe sort
result cache
Reverse Colussi
Reverse Factor
R-file
Rice's method
right rotation
right-threaded tree
root
root balance
rooted tree
rotate left
rotate right
rotation
rough graph
RP
R+-tree
R*-tree
R-tree
run time
== S ==
saguaro stack
saturated edge
SBB tree
scan
scapegoat tree
search algorithm
search tree
search tree property
secant search
secondary clustering
memory segment
select algorithm
select and partition
selection problem
selection sort
select kth element
select mode
self-loop
self-organizing heuristic
self-organizing list
self-organizing sequential search
semidefinite programming
separate chaining hashing
separator theorem
sequential search
set
set cover
set packing
shadow heap
shadow merge
shadow merge insert
shaker sort
Shannon–Fano coding
shared memory
Shell sort
Shift-Or
Shor's algorithm
shortcutting
shortest common supersequence
shortest common superstring
shortest path
shortest spanning tree
shuffle
shuffle sort
sibling
Sierpiński curve
Sierpinski triangle
sieve of Eratosthenes
sift up
signature
Simon's algorithm
simple merge
simple path
simple uniform hashing
simplex communication
simulated annealing
simulation theorem
single-destination shortest-path problem
single-pair shortest-path problem
single program multiple data
single-source shortest-path problem
singly linked list
singularity analysis
sink
sinking sort
skd-tree
skew-symmetry
skip list
skip search
slope selection
Smith algorithm
Smith–Waterman algorithm
smoothsort
solvable problem
sort algorithm
sorted array
sorted list
sort in-place
sort merge
soundex
space-constructible function
spanning tree
sparse graph
sparse matrix
sparsification
sparsity
spatial access method
spectral test
splay tree
SPMD
square matrix
square root
SST (shortest spanning tree)
stable
stack (data structure)
stack tree
star-shaped polygon
start state
state
state machine
state transition
static data structure
static Huffman encoding
s-t cut
st-digraph
Steiner minimum tree
Steiner point
Steiner ratio
Steiner tree
Steiner vertex
Steinhaus–Johnson–Trotter algorithm
Stirling's approximation
Stirling's formula
stooge sort
straight-line drawing
strand sort
strictly decreasing
strictly increasing
strictly lower triangular matrix
strictly upper triangular matrix
string
string editing problem
string matching
string matching on ordered alphabets
string matching with errors
string matching with mismatches
string searching
strip packing
strongly connected component
strongly connected graph
strongly NP-hard
subadditive ergodic theorem
subgraph isomorphism
sublinear time algorithm
subsequence
subset
substring
subtree
succinct data structure
suffix
suffix array
suffix automaton
suffix tree
superimposed code
superset
supersink
supersource
symmetric relation
symmetrically linked list
symmetric binary B-tree
symmetric set difference
symmetry breaking
symmetric min max heap
== T ==
tail
tail recursion
tango tree
target
temporal logic
terminal (see Steiner tree)
terminal node
ternary search
ternary search tree (TST)
text searching
theta
threaded binary tree
threaded tree
three-dimensional
three-way merge sort
three-way radix quicksort
time-constructible function
time/space complexity
top-down radix sort
top-down tree automaton
top-node
topological order
topological sort
topology tree
total function
totally decidable language
totally decidable problem
totally undecidable problem
total order
tour
tournament
towers of Hanoi
tractable problem
transducer
transition (see finite-state machine)
transition function (of a finite-state machine or Turing machine)
transitive relation
transitive closure
transitive reduction
transpose sequential search
travelling salesman problem (TSP)
treap
tree
tree automaton
tree contraction
tree editing problem
tree sort
tree transducer
tree traversal
triangle inequality
triconnected graph
trie
trinary function
tripartition
Turbo-BM
Turbo Reverse Factor
Turing machine
Turing reduction
Turing transducer
twin grid file
two-dimensional
two-level grid file
2–3 tree
2–3–4 tree
Two Way algorithm
two-way linked list
two-way merge sort
== U ==
unary function
unbounded knapsack problem (UKP)
uncomputable function
uncomputable problem
undecidable language
undecidable problem
undirected graph
uniform circuit complexity
uniform circuit family
uniform hashing
uniform matrix
union
union of automata
universal hashing
universal state
universal Turing machine
universe
unsolvable problem
unsorted list
upper triangular matrix
== V ==
van Emde Boas priority queue
vehicle routing problem
Veitch diagram
Venn diagram
vertex
vertex coloring
vertex connectivity
vertex cover
vertical visibility map
virtual hashing
visibility map
visible (geometry)
Viterbi algorithm
VP-tree
VRP (vehicle routing problem)
== W ==
walk
weak cluster
weak-heap
weak-heap sort
weight-balanced tree
weighted, directed graph
weighted graph
window
witness
work-depth model
work-efficient
work-preserving
worst case
worst-case cost
worst-case minimum access
Wu's line algorithm
== X ==
Xiaolin Wu's line algorithm
xor
Xor filter
== Y ==
Yule–Simon distribution
== Z ==
Zeller's congruence
0-ary function
0-based indexing
0/1 knapsack problem
Zhu–Takaoka string matching algorithm
Zipfian distribution
Zipf's law
Zipper (data structure)
Zip tree
ZPP
== References == | Wikipedia/Dictionary_of_Algorithms_and_Data_Structures |
A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors. Distributed algorithms are used in different application areas of distributed computing, such as telecommunications, scientific computing, distributed information processing, and real-time process control. Standard problems solved by distributed algorithms include leader election, consensus, distributed search, spanning tree generation, mutual exclusion, and resource allocation.
Distributed algorithms are a subtype of parallel algorithm, typically executed concurrently, with separate parts of the algorithm being run simultaneously on independent processors and having limited information about what the other parts of the algorithm are doing. One of the major challenges in developing and implementing distributed algorithms is successfully coordinating the behavior of the independent parts of the algorithm in the face of processor failures and unreliable communication links. The choice of an appropriate distributed algorithm to solve a given problem depends on both the characteristics of the problem and the characteristics of the system the algorithm will run on, such as the type and probability of processor or link failures, the kind of inter-process communication that can be performed, and the level of timing synchronization between separate processes.
== Standard problems ==
Atomic commit
An atomic commit is an operation where a set of distinct changes is applied as a single operation. If the atomic commit succeeds, it means that all the changes have been applied. If there is a failure before the atomic commit can be completed, the "commit" is aborted and no changes will be applied.
Algorithms for solving the atomic commit problem include the two-phase commit protocol and the three-phase commit protocol.
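As an illustrative sketch of the two-phase commit idea (failure handling and logging omitted; the `Participant` interface here is hypothetical, invented for this example), the coordinator first collects votes from all participants, then broadcasts a single global decision that every participant applies:

```python
class Participant:
    """Toy participant that votes yes or no and records the final decision."""
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.decision = None

    def vote(self):
        return self.can_commit

    def apply(self, decision):
        self.decision = decision


def two_phase_commit(participants):
    """Coordinator side of two-phase commit (sketch; no failure handling)."""
    # Phase 1 (voting): ask every participant whether it can commit.
    votes = [p.vote() for p in participants]
    decision = "commit" if all(votes) else "abort"
    # Phase 2 (completion): broadcast the global decision, so either all
    # participants apply the changes or none of them do.
    for p in participants:
        p.apply(decision)
    return decision
```

A single "no" vote in phase 1 forces a global abort, which is what makes the set of changes atomic.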
Consensus
Consensus algorithms try to solve the problem of a number of processes agreeing on a common decision.
More precisely, a consensus protocol must satisfy the four formal properties below.
Termination: every correct process decides some value.
Validity: if all processes propose the same value v, then every correct process decides v.
Integrity: every correct process decides at most one value, and if it decides some value v, then v must have been proposed by some process.
Agreement: if a correct process decides v, then every correct process decides v.
Common algorithms for solving consensus are the Paxos algorithm and the Raft algorithm.
Distributed search
Leader election
Leader election is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are unaware of which node will serve as the "leader," or coordinator, of the task. After a leader election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task leader.
Mutual exclusion
Non-blocking data structures
Reliable Broadcast
Reliable broadcast is a communication primitive in distributed systems. A reliable broadcast is defined by the following properties:
Validity - if a correct process sends a message, then some correct process will eventually deliver that message.
Agreement - if a correct process delivers a message, then all correct processes eventually deliver that message.
Integrity - every correct process delivers the same message at most once and only if that message has been sent by a process.
A reliable broadcast can have sequential, causal or total ordering.
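One simple way to obtain the validity and agreement properties above is "eager" relaying: every process forwards each message to all other processes the first time it sees it, so a message that reaches any correct process reaches all of them even if the original sender crashed mid-broadcast. The following single-process simulation (assuming reliable links and no crashes) is a sketch of the idea, not a production protocol:

```python
from collections import deque

def eager_reliable_broadcast(n, sender, message):
    """Simulate eager reliable broadcast among n correct processes.

    Returns the set of delivered messages at each process. Each process
    relays a message to everyone the first time it sees it, which is what
    provides the agreement property.
    """
    delivered = [set() for _ in range(n)]   # messages delivered per process
    # The sender initially sends the message to every process (including itself).
    network = deque((sender, dest, message) for dest in range(n))
    while network:
        src, dest, msg = network.popleft()
        if msg not in delivered[dest]:
            # First time this process sees the message: relay, then deliver.
            for other in range(n):
                if other != dest:
                    network.append((dest, other, msg))
            delivered[dest].add(msg)
    return delivered
```

Because a process relays only on first delivery, the simulation terminates after at most n·(n−1) relayed messages, and every process delivers the message exactly once (integrity).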
Replication
Resource allocation
Spanning tree generation
Symmetry breaking, e.g. vertex coloring
== References ==
== Further reading ==
Christian Cachin; Rachid Guerraoui; Luís Rodrigues (2011), Introduction to Reliable and Secure Distributed Programming (2. ed.), Springer, Bibcode:2011itra.book.....C, ISBN 978-3-642-15259-7
C. Rodríguez, M. Villagra and B. Barán, Asynchronous team algorithms for Boolean Satisfiability, Bionetics 2007, pp. 66–69, 2007.
== External links ==
Media related to Distributed algorithms at Wikimedia Commons
MIT Open Courseware - Distributed Algorithms | Wikipedia/Distributed_algorithm |
In computer science, a selection algorithm is an algorithm for finding the k-th smallest value in a collection of ordered values, such as numbers. The value that it finds is called the k-th order statistic. Selection includes as special cases the problems of finding the minimum, median, and maximum element in the collection. Selection algorithms include quickselect and the median of medians algorithm. When applied to a collection of n values, these algorithms take linear time, O(n), as expressed using big O notation. For data that is already structured, faster algorithms may be possible; as an extreme case, selection in an already-sorted array takes time O(1).
== Problem statement ==
An algorithm for the selection problem takes as input a collection of values, and a number k. It outputs the k-th smallest of these values, or, in some versions of the problem, a collection of the k smallest values. For this to be well-defined, it should be possible to sort the values into an order from smallest to largest; for instance, they may be integers, floating-point numbers, or some other kind of object with a numeric key. However, they are not assumed to have been already sorted. Often, selection algorithms are restricted to a comparison-based model of computation, as in comparison sort algorithms, where the algorithm has access to a comparison operation that can determine the relative ordering of any two values, but may not perform any other kind of arithmetic operations on these values.
To simplify the problem, some works on this problem assume that the values are all distinct from each other, or that some consistent tie-breaking method has been used to assign an ordering to pairs of items with the same value as each other. Another variation in the problem definition concerns the numbering of the ordered values: is the smallest value obtained by setting k = 0, as in zero-based numbering of arrays, or is it obtained by setting k = 1, following the usual English-language conventions for the smallest, second-smallest, etc.? This article follows the conventions used by Cormen et al., according to which all values are distinct and the minimum value is obtained from k = 1.
With these conventions, the maximum value, among a collection of n values, is obtained by setting k = n. When n is an odd number, the median of the collection is obtained by setting k = (n + 1)/2. When n is even, there are two choices for the median, obtained by rounding this choice of k down or up, respectively: the lower median with k = n/2 and the upper median with k = n/2 + 1.
== Algorithms ==
=== Sorting and heapselect ===
As a baseline algorithm, selection of the k-th smallest value in a collection of values can be performed by the following two steps:
Sort the collection.
If the output of the sorting algorithm is an array, retrieve its k-th element; otherwise, scan the sorted sequence to find the k-th element.
The time for this method is dominated by the sorting step, which requires Θ(n log n) time using a comparison sort. Even when integer sorting algorithms may be used, these are generally slower than the linear time that may be achieved using specialized selection algorithms. Nevertheless, the simplicity of this approach makes it attractive, especially when a highly optimized sorting routine is provided as part of a runtime library, but a selection algorithm is not. For inputs of moderate size, sorting can be faster than non-random selection algorithms, because of the smaller constant factors in its running time. This method also produces a sorted version of the collection, which may be useful for other later computations, and in particular for selection with other choices of k.
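Under the article's convention that k = 1 selects the minimum, the two steps above amount to a one-line sketch (Python shown for illustration; the function name is ours):

```python
def select_by_sorting(values, k):
    """Return the k-th smallest value (k = 1 gives the minimum)."""
    return sorted(values)[k - 1]   # O(n log n) sort dominates the cost
```

For example, `select_by_sorting([5, 1, 4, 2, 3], 3)` returns the third-smallest value, 3.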
For a sorting algorithm that generates one item at a time, such as selection sort, the scan can be done in tandem with the sort, and the sort can be terminated once the k-th element has been found. One possible design of a consolation bracket in a single-elimination tournament, in which the teams who lost to the eventual winner play another mini-tournament to determine second place, can be seen as an instance of this method. Applying this optimization to heapsort produces the heapselect algorithm, which can select the k-th smallest value in time O(n + k log n). This is fast when k is small relative to n, but degenerates to O(n log n) for larger values of k, such as the choice k = n/2 used for median finding.
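A sketch of heapselect using Python's heapq module: heapify builds a min-heap of all n values in O(n) time, after which k pops of O(log n) each yield the k-th smallest. The function name is ours, not a standard library routine:

```python
import heapq

def heapselect(values, k):
    """k-th smallest via a min-heap: O(n) heapify plus k pops of O(log n)."""
    heap = list(values)
    heapq.heapify(heap)          # bottom-up heap construction, O(n)
    for _ in range(k - 1):       # discard the k - 1 smallest values
        heapq.heappop(heap)
    return heapq.heappop(heap)   # the k-th smallest is now at the root
```

When k is small this is close to linear; for k = n/2 it matches the O(n log n) degradation described above.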
=== Pivoting ===
Many methods for selection are based on choosing a special "pivot" element from the input, and using comparisons with this element to divide the remaining n − 1 input values into two subsets: the set L of elements less than the pivot, and the set R of elements greater than the pivot. The algorithm can then determine where the k-th smallest value is to be found, based on a comparison of k with the sizes of these sets. In particular, if k ≤ |L|, the k-th smallest value is in L, and can be found recursively by applying the same selection algorithm to L. If k = |L| + 1, then the k-th smallest value is the pivot, and it can be returned immediately. In the remaining case, the k-th smallest value is in R, and more specifically it is the element in position k − |L| − 1 of R. It can be found by applying a selection algorithm recursively, seeking the value in this position in R.
As with the related pivoting-based quicksort algorithm, the partition of the input into L and R may be done by making new collections for these sets, or by a method that partitions a given list or array data type in-place. Details vary depending on how the input collection is represented. The time to compare the pivot against all the other values is O(n). However, pivoting methods differ in how they choose the pivot, which affects how big the subproblems in each recursive call will be. The efficiency of these methods depends greatly on the choice of the pivot. If the pivot is chosen badly, the running time of this method can be as slow as O(n²).
If the pivot were exactly at the median of the input, then each recursive call would have at most half as many values as the previous call, and the total times would add in a geometric series to O(n). However, finding the median is itself a selection problem, on the entire original input. Trying to find it by a recursive call to a selection algorithm would lead to an infinite recursion, because the problem size would not decrease in each call.
Quickselect chooses the pivot uniformly at random from the input values. It can be described as a prune and search algorithm, a variant of quicksort with the same pivoting strategy, but where quicksort makes two recursive calls to sort the two subcollections L and R, quickselect only makes one of these two calls. Its expected time is O(n). For any constant C, the probability that its number of comparisons exceeds Cn is superexponentially small in C.
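A minimal quickselect sketch following the partition scheme described above, assuming distinct values as in the article's convention (new collections are built rather than partitioning in place):

```python
import random

def quickselect(values, k):
    """Expected-linear-time selection with a uniformly random pivot.

    Splits the input into L (less than the pivot) and R (greater than the
    pivot) and recurses on only one side. Assumes distinct values.
    """
    pivot = random.choice(values)
    L = [v for v in values if v < pivot]
    R = [v for v in values if v > pivot]
    if k <= len(L):
        return quickselect(L, k)              # answer is among the smaller values
    if k == len(L) + 1:
        return pivot                          # the pivot itself is the answer
    return quickselect(R, k - len(L) - 1)     # adjust the rank and recurse on R
```

Each call does O(n) comparisons against the pivot; with a random pivot the subproblem sizes shrink geometrically in expectation, giving expected O(n) total time.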
The Floyd–Rivest algorithm, a variation of quickselect, chooses a pivot by randomly sampling a subset of r data values, for some sample size r, and then recursively selecting two elements somewhat above and below position rk/n of the sample to use as pivots. With this choice, it is likely that k is sandwiched between the two pivots, so that after pivoting only a small number of data values between the pivots are left for a recursive call. This method can achieve an expected number of comparisons that is n + min(k, n − k) + o(n). In their original work, Floyd and Rivest claimed that the o(n) term could be made as small as O(√n) by a recursive sampling scheme, but the correctness of their analysis has been questioned. Instead, more rigorous analysis has shown that a version of their algorithm achieves O(√(n log n)) for this term. Although the usual analysis of both quickselect and the Floyd–Rivest algorithm assumes the use of a true random number generator, a version of the Floyd–Rivest algorithm using a pseudorandom number generator seeded with only logarithmically many true random bits has been proven to run in linear time with high probability.
The median of medians method partitions the input into sets of five elements, and uses some other non-recursive method to find the median of each of these sets in constant time per set. It then recursively calls itself to find the median of these n/5 medians. Using the resulting median of medians as the pivot produces a partition with max(|L|, |R|) ≤ 7n/10. Thus, a problem on n elements is reduced to two recursive problems on n/5 elements (to find the pivot) and at most 7n/10 elements (after the pivot is used). The total size of these two recursive subproblems is at most 9n/10, allowing the total time to be analyzed as a geometric series adding to O(n). Unlike quickselect, this algorithm is deterministic, not randomized. It was the first linear-time deterministic selection algorithm known, and is commonly taught in undergraduate algorithms classes as an example of a divide-and-conquer algorithm that does not divide into two equal subproblems. However, the high constant factors in its O(n) time bound make it slower than quickselect in practice, and slower even than sorting for inputs of moderate size.
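A sketch of the median of medians method, again assuming distinct values; each group of five is handled by the non-recursive method of simply sorting it:

```python
def median_of_medians_select(values, k):
    """Deterministic linear-time selection (assumes distinct values).

    The pivot is the median of the medians of groups of five, which
    guarantees that the larger side of the partition has at most
    roughly 7n/10 elements.
    """
    if len(values) <= 5:
        return sorted(values)[k - 1]          # base case: sort a tiny list
    # Median of each group of (up to) five, found by sorting the group.
    medians = [sorted(values[i:i + 5])[len(values[i:i + 5]) // 2]
               for i in range(0, len(values), 5)]
    # First recursive call, on about n/5 elements, to choose the pivot.
    pivot = median_of_medians_select(medians, (len(medians) + 1) // 2)
    L = [v for v in values if v < pivot]
    R = [v for v in values if v > pivot]
    if k <= len(L):
        return median_of_medians_select(L, k)
    if k == len(L) + 1:
        return pivot
    return median_of_medians_select(R, k - len(L) - 1)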
Hybrid algorithms such as introselect can be used to achieve the practical performance of quickselect with a fallback to the median of medians, guaranteeing worst-case O(n) time.
=== Factories ===
The deterministic selection algorithms with the smallest known numbers of comparisons, for values of k that are far from 1 or n, are based on the concept of factories, introduced in 1976 by Arnold Schönhage, Mike Paterson, and Nick Pippenger. These are methods that build partial orders of certain specified types, on small subsets of input values, by using comparisons to combine smaller partial orders. As a very simple example, one type of factory can take as input a sequence of single-element partial orders, compare pairs of elements from these orders, and produce as output a sequence of two-element totally ordered sets. The elements used as the inputs to this factory could either be input values that have not been compared with anything yet, or "waste" values produced by other factories. The goal of a factory-based algorithm is to combine together different factories, with the outputs of some factories going to the inputs of others, in order to eventually obtain a partial order in which one element (the k-th smallest) is larger than some k − 1 other elements and smaller than another n − k others. A careful design of these factories leads to an algorithm that, when applied to median-finding, uses at most 2.942n comparisons. For other values of k, the number of comparisons is smaller.
=== Parallel algorithms ===
Parallel algorithms for selection have been studied since 1975, when Leslie Valiant introduced the parallel comparison tree model for analyzing these algorithms, and proved that in this model selection using a linear number of comparisons requires Ω(log log n) parallel steps, even for selecting the minimum or maximum. Researchers later found parallel algorithms for selection in O(log log n) steps, matching this bound. In a randomized parallel comparison tree model it is possible to perform selection in a bounded number of steps and a linear number of comparisons. On the more realistic parallel RAM model of computing, with exclusive read exclusive write memory access, selection can be performed in time O(log n) with O(n/log n) processors, which is optimal both in time and in the number of processors. With concurrent memory access, slightly faster parallel times are possible in general, and the log n term in the time bound can be replaced by log k.
=== Sublinear data structures ===
When data is already organized into a data structure, it may be possible to perform selection in an amount of time that is sublinear in the number of values. As a simple case of this, for data already sorted into an array, selecting the k-th element may be performed by a single array lookup, in constant time. For values organized into a two-dimensional array of size m × n, with sorted rows and columns, selection may be performed in time O(m log(2n/m)), or faster when k is small relative to the array dimensions. For a collection of m one-dimensional sorted arrays, with k_i items less than the selected item in the i-th array, the time is O(m + Σ_{i=1}^{m} log(k_i + 1)).
Selection from data in a binary heap takes time O(k). This is independent of the size n of the heap, and faster than the O(k log n) time bound that would be obtained from best-first search. This same method can be applied more generally to data organized as any kind of heap-ordered tree (a tree in which each node stores one value, and the parent of each non-root node has a smaller value than its child). This method of performing selection in a heap has been applied to problems of listing multiple solutions to combinatorial optimization problems, such as finding the k shortest paths in a weighted graph, by defining a state space of solutions in the form of an implicitly defined heap-ordered tree, and then applying this selection algorithm to this tree. In the other direction, linear time selection algorithms have been used as a subroutine in a priority queue data structure related to the heap, improving the time for extracting its k-th item from O(log n) to O(log* n + log k); here log* n is the iterated logarithm.
For a collection of data values undergoing dynamic insertions and deletions, the order statistic tree augments a self-balancing binary search tree structure with a constant amount of additional information per tree node, allowing insertions, deletions, and selection queries that ask for the k-th element in the current set to all be performed in O(log n) time per operation. Going beyond the comparison model of computation, faster times per operation are possible for values that are small integers, on which binary arithmetic operations are allowed. It is not possible for a streaming algorithm with memory sublinear in both n and k to solve selection queries exactly for dynamic data, but the count–min sketch can be used to solve selection queries approximately, by finding a value whose position in the ordering of the elements (if it were added to them) would be within εn steps of k, for a sketch whose size is within logarithmic factors of 1/ε.
== Lower bounds ==
The O(n) running time of the selection algorithms described above is necessary, because a selection algorithm that can handle inputs in an arbitrary order must take that much time to look at all of its inputs. If any one of its input values is not compared, that one value could be the one that should have been selected, and the algorithm can be made to produce an incorrect answer. Beyond this simple argument, there has been a significant amount of research on the exact number of comparisons needed for selection, both in the randomized and deterministic cases.
Selecting the minimum of n values requires n − 1 comparisons, because the n − 1 values that are not selected must each have been determined to be non-minimal, by being the largest in some comparison, and no two of these values can be largest in the same comparison. The same argument applies symmetrically to selecting the maximum.
The next simplest case is selecting the second-smallest. After several incorrect attempts, the first tight lower bound on this case was published in 1964 by Soviet mathematician Sergey Kislitsyn. It can be shown by observing that selecting the second-smallest also requires distinguishing the smallest value from the rest, and by considering the number p of comparisons involving the smallest value that an algorithm for this problem makes. Each of the p items that were compared to the smallest value is a candidate for second-smallest, and p − 1 of these values must be found larger than another value in a second comparison in order to rule them out as second-smallest.
With n − 1 values being the larger in at least one comparison, and p − 1 values being the larger in at least two comparisons, there are a total of at least n + p − 2 comparisons. An adversary argument, in which the outcome of each comparison is chosen in order to maximize p (subject to consistency with at least one possible ordering) rather than by the numerical values of the given items, shows that it is possible to force p to be at least log₂ n. Therefore, the worst-case number of comparisons needed to select the second smallest is n + ⌈log₂ n⌉ − 2, the same number that would be obtained by holding a single-elimination tournament with a run-off tournament among the values that lost to the smallest value. However, the expected number of comparisons of a randomized selection algorithm can be better than this bound; for instance, selecting the second-smallest of six elements requires seven comparisons in the worst case, but may be done by a randomized algorithm with an expected number of 6.5 comparisons.
More generally, selecting the k-th element out of n requires at least n + min(k, n − k) − O(1) comparisons, in the average case, matching the number of comparisons of the Floyd–Rivest algorithm up to its o(n) term. The argument is made directly for deterministic algorithms, with a number of comparisons that is averaged over all possible permutations of the input values. By Yao's principle, it also applies to the expected number of comparisons for a randomized algorithm on its worst-case input.
For deterministic algorithms, it has been shown that selecting the k-th element requires (1 + H(k/n))n + Ω(√n) comparisons, where H(x) = x log₂(1/x) + (1 − x) log₂(1/(1 − x)) is the binary entropy function. The special case of median-finding has a slightly larger lower bound on the number of comparisons, at least (2 + ε)n, for ε ≈ 2⁻⁸⁰.
== Exact numbers of comparisons ==
Knuth supplies the following triangle of numbers summarizing pairs of n and k for which the exact number of comparisons needed by an optimal selection algorithm is known. The n-th row of the triangle (starting with n = 1 in the top row) gives the numbers of comparisons for inputs of n values, and the k-th number within each row gives the number of comparisons needed to select the k-th smallest value from an input of that size. The rows are symmetric because selecting the k-th smallest requires exactly the same number of comparisons, in the worst case, as selecting the k-th largest.
Most, but not all, of the entries on the left half of each row can be found using the formula n − k + (k − 1)⌈log₂(n + 2 − k)⌉. This describes the number of comparisons made by a method of Abdollah Hadian and Milton Sobel, related to heapselect, that finds the smallest value using a single-elimination tournament and then repeatedly uses a smaller tournament among the values eliminated by the eventual tournament winners to find the next successive values until reaching the k-th smallest. Some of the larger entries were proven to be optimal using a computer search.
== Language support ==
Very few languages have built-in support for general selection, although many provide facilities for finding the smallest or largest element of a list. A notable exception is the Standard Template Library for C++, which provides a templated nth_element function with a guarantee of expected linear time.
Python's standard library includes heapq.nsmallest and heapq.nlargest functions for returning the smallest or largest elements from a collection, in sorted order. The implementation maintains a binary heap, limited to holding k elements, and initialized to the first k elements in the collection. Then, each subsequent item of the collection may replace the largest or smallest element in the heap if it is smaller or larger than this element. The algorithm's memory usage is superior to heapselect's: the former only holds k elements in memory at a time, while the latter requires loading the entire dataset into memory. Running time depends on data ordering. The best case is O((n − k) + k log k), for already sorted data. The worst case is O(n log k), for reverse-sorted data. In average cases, there are likely to be few heap updates and most input elements are processed with only a single comparison. For example, extracting the 100 largest or smallest values out of 10,000,000 random inputs makes 10,009,401 comparisons on average.
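For example, the two functions can be used as follows:

```python
import heapq

data = [17, 3, 9, 25, 1, 14, 8]
print(heapq.nsmallest(3, data))   # -> [1, 3, 8], the 3 smallest in ascending order
print(heapq.nlargest(2, data))    # -> [25, 17], the 2 largest in descending order
```

Both return their results in sorted order, unlike a bare selection algorithm, which only needs to identify the k-th value.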
Since 2017, Matlab has included maxk() and mink() functions, which return the k maximal (minimal) values in a vector, as well as their indices. The Matlab documentation does not specify which algorithm these functions use or what their running time is.
== History ==
Quickselect was presented without analysis by Tony Hoare in 1965, and first analyzed in a 1971 technical report by Donald Knuth. The first known linear time deterministic selection algorithm is the median of medians method, published in 1973 by Manuel Blum, Robert W. Floyd, Vaughan Pratt, Ron Rivest, and Robert Tarjan. They trace the formulation of the selection problem to work of Charles L. Dodgson (better known as Lewis Carroll) who in 1883 pointed out that the usual design of single-elimination sports tournaments does not guarantee that the second-best player wins second place, and to work of Hugo Steinhaus circa 1930, who followed up this same line of thought by asking for a tournament design that can make this guarantee, with a minimum number of games played (that is, comparisons).
== See also ==
Geometric median § Computation, algorithms for higher-dimensional generalizations of medians
Median filter, application of median-finding algorithms in image processing
== References == | Wikipedia/Selection_algorithm |
In computing, a Las Vegas algorithm is a randomized algorithm that always gives correct results; that is, it always produces the correct result or it informs about the failure. However, the runtime of a Las Vegas algorithm differs depending on the input. The usual definition of a Las Vegas algorithm includes the restriction that the expected runtime be finite, where the expectation is carried out over the space of random information, or entropy, used in the algorithm. An alternative definition requires that a Las Vegas algorithm always terminates (is effective), but may output a symbol not part of the solution space to indicate failure in finding a solution. The nature of Las Vegas algorithms makes them suitable in situations where the number of possible solutions is limited, and where verifying the correctness of a candidate solution is relatively easy while finding a solution is complex.
Systematic search methods for computationally hard problems, such as some variants of the Davis–Putnam algorithm for propositional satisfiability (SAT), also utilize non-deterministic decisions, and can thus also be considered Las Vegas algorithms.
== History ==
Las Vegas algorithms were introduced by László Babai in 1979, in the context of the graph isomorphism problem, as a dual to Monte Carlo algorithms. Babai introduced the term "Las Vegas algorithm" alongside an example involving coin flips: the algorithm depends on a series of independent coin flips, and there is a small chance of failure (no result). However, in contrast to Monte Carlo algorithms, the Las Vegas algorithm can guarantee the correctness of any reported result.
== Example ==
As mentioned above, Las Vegas algorithms always return correct results. Consider an algorithm that repeatedly generates an index k at random and uses it to index an array A: if position k contains the value 1, then k is returned; otherwise, the process repeats until a 1 is found. Although this Las Vegas algorithm is guaranteed to find the correct answer, it does not have a fixed runtime; because of the random choice of k, arbitrarily much time may elapse before the algorithm terminates.
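A minimal Python sketch of this example (the function name find_one is illustrative; the array is assumed to contain at least one 1):

```python
import random

def find_one(A):
    # Las Vegas search: the answer, when returned, is always correct;
    # only the number of loop iterations is random.
    n = len(A)
    while True:
        k = random.randrange(n)   # the random choice that makes run-time vary
        if A[k] == 1:
            return k
```

If A contains no 1 at all, the loop never terminates, which is why the expected-finite-runtime definition requires a solvable instance.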
== Definition ==
This section states the conditions that characterize an algorithm as being of Las Vegas type.
An algorithm A is a Las Vegas algorithm for problem class X, if
whenever for a given problem instance x∈X it returns a solution s, s is guaranteed to be a valid solution of x
on each given instance x, the run-time of A is a random variable RTA,x
There are three notions of completeness for Las Vegas algorithms:
complete Las Vegas algorithms can be guaranteed to solve each solvable problem within run-time tmax, where tmax is an instance-dependent constant.
Let P(RTA,x ≤ t) denote the probability that A finds a solution for a soluble instance x within time t; then A is complete exactly if for each x there exists some tmax such that P(RTA,x ≤ tmax) = 1.
approximately complete Las Vegas algorithms solve each problem with a probability converging to 1 as the run-time approaches infinity. Thus, A is approximately complete, if for each instance x, limt→∞ P(RTA,x ≤ t) = 1.
essentially incomplete Las Vegas algorithms are Las Vegas algorithms that are not approximately complete.
Approximate completeness is primarily of theoretical interest, as the time limits for finding solutions are usually too large to be of practical use.
=== Application scenarios ===
Las Vegas algorithms have different criteria for the evaluation based on the problem setting. These criteria are divided into three categories with different time limits since Las Vegas algorithms do not have set time complexity. Here are some possible application scenarios:
Type 1: There are no time limits, which means the algorithm runs until it finds the solution.
Type 2: There is a time limit tmax for finding the outcome.
Type 3: The utility of a solution is determined by the time required to find the solution.
(Type 1 and Type 2 are special cases of Type 3.)
For Type 1, where there is no time limit, the average run-time can represent the run-time behavior. This is not the case for Type 2.
Here, P(RT ≤ tmax), which is the probability of finding a solution within time, describes its run-time behavior.
In case of Type 3, its run-time behavior can only be represented by the run-time distribution function rtd: R → [0,1] defined as rtd(t) = P(RT ≤ t) or its approximation.
The run-time distribution (RTD) is the distinctive way to describe the run-time behavior of a Las Vegas algorithm.
With this data, we can easily get other criteria such as the mean run-time, standard deviation, median, percentiles, or success probabilities P(RT ≤ t) for arbitrary time-limits t.
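In practice the RTD is usually estimated empirically: run the algorithm many times, record the run-times, and take the fraction of runs finishing within t as an estimate of P(RT ≤ t). A sketch (the helper names are illustrative):

```python
def empirical_rtd(run_times, t):
    # Fraction of observed runs that finished within time t:
    # an empirical estimate of P(RT <= t).
    return sum(rt <= t for rt in run_times) / len(run_times)

def mean_run_time(run_times):
    # One of the derived criteria mentioned above.
    return sum(run_times) / len(run_times)
```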
== Applications ==
=== Analogy ===
Las Vegas algorithms arise frequently in search problems. For example, one looking for some information online might search related websites for the desired information. The time complexity thus ranges from getting "lucky" and finding the content immediately, to being "unlucky" and spending large amounts of time. Once the right website is found, then there is no possibility of error.
=== Randomized quicksort ===
A simple example is randomized quicksort, where the pivot is chosen randomly and divides the elements into three partitions: elements less than the pivot, elements equal to the pivot, and elements greater than the pivot. Randomized quicksort always generates the solution, which in this case is the sorted array. Unfortunately, the time complexity is not as obvious: it turns out that the runtime depends on which element is picked as the pivot.
The worst case is Θ(n²), which occurs when the pivot is the smallest or the largest element:
T(n) = T(0) + T(n − 1) + Θ(n)
T(n) = Θ(1) + T(n − 1) + Θ(n)
T(n) = T(n − 1) + Θ(n)
T(n) = Θ(n²)
However, through randomization, if the randomly picked pivot happens to be the middle value each time, the quicksort can be done in Θ(n log n):
T(n) ≤ 2T(n/2) + Θ(n)
T(n) = Θ(n log n)
The runtime of quicksort depends heavily on how well the pivot is selected. If a value of pivot is either too big or small, the partition will be unbalanced, resulting in a poor runtime efficiency. However, if the value of pivot is near the middle of the array, then the split will be reasonably well balanced, yielding a faster runtime. Since the pivot is randomly picked, the running time will be good most of the time and bad occasionally.
The average case is harder to determine, since the analysis depends not on the input distribution but on the random choices that the algorithm makes: the average run-time of quicksort is computed over all possible random choices that the algorithm might make when choosing the pivot.
Although the worst-case runtime is Θ(n2), the average-case runtime is Θ(nlogn). It turns out that the worst-case does not happen often. For large values of n, the runtime is Θ(nlogn) with a high probability.
Note that the probability that the pivot is the middle value on any given step is only 1 in n, so picking it every time is very rare. However, the runtime is still Θ(n log n) when the split is 10%–90% instead of 50%–50%, because the depth of the recursion tree will still be O(log n), with O(n) work done at each level of recursion.
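A sketch of randomized quicksort with the three-way partition described above (less than, equal to, and greater than the random pivot):

```python
import random

def randomized_quicksort(a):
    # The output is always the sorted array; only the recursion depth
    # (and hence the run-time) depends on the random pivot choices.
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)                  # random pivot choice
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```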
=== Randomized greedy algorithm for the Eight queens problem ===
The eight queens problem is usually solved with a backtracking algorithm. However, a Las Vegas algorithm can be applied; in fact, it is more efficient than backtracking.
Place 8 queens on a chessboard so that no one attacks another. Remember that a queen attacks other pieces on the same row, column and diagonals.
Assume that k rows, 0 ≤ k ≤ 8, are successfully occupied by queens.
If k = 8, then stop with success. Otherwise, proceed to occupy row k + 1.
Calculate all positions on this row not attacked by existing queens. If there are none, then fail. Otherwise, pick one at random, increment k and repeat.
Note that the algorithm simply fails if a queen cannot be placed. But the process can be repeated, and each repetition generates a different arrangement.
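The randomized greedy procedure above can be sketched as follows (helper names are illustrative; diagonals are tracked via the invariants row − col and row + col):

```python
import random

def random_queens(n=8):
    # One Las Vegas attempt: fill rows 0..n-1, choosing on each row a
    # random column not attacked by the queens already placed.
    cols, diag1, diag2 = set(), set(), set()
    placement = []
    for row in range(n):
        free = [c for c in range(n)
                if c not in cols and row - c not in diag1 and row + c not in diag2]
        if not free:
            return None           # fail: no safe square on this row
        c = random.choice(free)
        placement.append(c)
        cols.add(c); diag1.add(row - c); diag2.add(row + c)
    return placement

def las_vegas_queens(n=8):
    # Repeat failed attempts; each repetition makes fresh random choices.
    while True:
        p = random_queens(n)
        if p is not None:
            return p
```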
== Complexity class ==
The complexity class of decision problems that have Las Vegas algorithms with expected polynomial runtime is ZPP.
It turns out that ZPP = RP ∩ co-RP,
which is intimately connected with the way Las Vegas algorithms are sometimes constructed. Namely the class RP consists of all decision problems for which a randomized polynomial-time algorithm exists that always answers correctly when the correct answer is "no", but is allowed to be wrong with a certain probability bounded away from one when the answer is "yes". When such an algorithm exists for both a problem and its complement (with the answers "yes" and "no" swapped), the two algorithms can be run simultaneously and repeatedly: run each for a constant number of steps, taking turns, until one of them returns a definitive answer. This is the standard way to construct a Las Vegas algorithm that runs in expected polynomial time. Note that in general there is no worst case upper bound on the run time of a Las Vegas algorithm.
== Optimal Las Vegas algorithm ==
In order to make a Las Vegas algorithm optimal, the expected run time should be minimized. This can be done by:
The Las Vegas algorithm A(x) runs repeatedly for some number t1 steps. If A(x) stops during the run time then A(x) is done; otherwise, repeat the process from the beginning for another t2 steps, and so on.
Designing a strategy that is optimal among all strategies for A(x), given the full information about the distribution of TA(x).
The existence of the optimal strategy might be a fascinating theoretical observation. However, it is not practical in real life because it is not easy to find the information of distribution of TA(x). Furthermore, there is no point of running the experiment repeatedly to obtain the information about the distribution since most of the time, the answer is needed only once for any x.
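The restart strategy can be sketched generically, under the assumption that the algorithm can be run with a step budget (both helper names are illustrative):

```python
import random

def bounded_search(A, budget):
    # A budget-limited Las Vegas search: give up (return None)
    # after `budget` random probes.
    for _ in range(budget):
        k = random.randrange(len(A))
        if A[k] == 1:
            return k
    return None

def run_with_restarts(A, budgets):
    # Run for t1 steps; if unfinished, restart and run for t2 steps, etc.
    for t in budgets:
        result = bounded_search(A, t)
        if result is not None:
            return result
    return None
```

Choosing good budgets t1, t2, ... is exactly the optimization problem discussed above, and the optimal schedule depends on the distribution of TA(x).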
== Relation to Monte Carlo algorithms ==
Las Vegas algorithms can be contrasted with Monte Carlo algorithms, in which the resources used are bounded but the answer may be incorrect with a certain (typically small) probability. A Las Vegas algorithm can be converted into a Monte Carlo algorithm by running it for a set time and generating a random answer when it fails to terminate. By an application of Markov's inequality, we can bound the probability that the Las Vegas algorithm would go over the fixed limit.
Las Vegas and Monte Carlo algorithms can be compared as follows: a Las Vegas algorithm gambles with run-time but never with correctness, whereas a Monte Carlo algorithm has bounded run-time but gambles with correctness.
If a deterministic way to test for correctness is available, then it is possible to turn a Monte Carlo algorithm into a Las Vegas algorithm. Without a way to test candidate answers, however, a Monte Carlo algorithm is hard to convert into a Las Vegas algorithm. Converting a Las Vegas algorithm into a Monte Carlo algorithm, on the other hand, is easy: run the Las Vegas algorithm for a period of time given by a confidence parameter; if it finds a solution within that time, the run is a success, and if not, the output can simply be "sorry".
This is an example of Las Vegas and Monte Carlo algorithms for comparison:
Assume that there is an array of even length n. Half of the entries in the array are 0s and the remaining half are 1s. The goal here is to find an index that contains a 1.
Since the Las Vegas version does not stop until it finds a 1 in the array, it gambles with run-time rather than correctness. The Monte Carlo version, by contrast, tries only a fixed number of times (300 in this example), so it cannot be known in advance whether it will find a 1 within those 300 loop iterations; it might find the solution or it might not. Therefore, unlike Las Vegas, Monte Carlo gambles with correctness rather than run-time.
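The two variants for this task can be sketched as follows (function names are illustrative; the 300-try bound for the Monte Carlo version follows the text):

```python
import random

def las_vegas_find(A):
    # Gambles with run-time, never with correctness: loops until a 1 is found.
    while True:
        k = random.randrange(len(A))
        if A[k] == 1:
            return k

def monte_carlo_find(A, tries=300):
    # Gambles with correctness, never with run-time: at most `tries` probes.
    for _ in range(tries):
        k = random.randrange(len(A))
        if A[k] == 1:
            return k
    return -1                     # may be wrong even though a 1 exists
```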
== See also ==
Monte Carlo algorithm
Atlantic City algorithm
Randomness
== References ==
=== Citations ===
=== Sources === | Wikipedia/Las_Vegas_algorithm |
The Graphics Interchange Format (GIF; GHIF or JIF, see § Pronunciation) is a bitmap image format that was developed by a team at the online services provider CompuServe led by American computer scientist Steve Wilhite and released on June 15, 1987.
The format can contain up to 8 bits per pixel, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It can also represent multiple images in a file, which can be used for animations, and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other images with color gradients but well-suited for simpler images such as graphics or logos with solid areas of color.
GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality.
While once in widespread usage on the World Wide Web because of its wide implementation and portability between applications and operating systems, usage of the format has declined for space and quality reasons, often being replaced with newer formats such as PNG for static images and MP4 for videos. In this context, short video clips are sometimes termed "GIFs" despite having no relation to the original file format.
== History ==
CompuServe introduced GIF on 15 June 1987 to provide a color image format for their file downloading areas. This replaced their earlier run-length encoding format, which was black and white only. GIF became popular because it used Lempel–Ziv–Welch data compression. Since this was more efficient than the run-length encoding used by PCX and MacPaint, fairly large images could be downloaded reasonably quickly even with slow modems.
The original version of GIF was called 87a. This version already supported multiple images in a stream.
In 1989, CompuServe released an enhanced version, called 89a. This version added:
support for animation delays
transparent background colors
storage of application-specific metadata
allowing text labels as text (not embedding them in the graphical data). As there is little control over display fonts, however, this feature is rarely used.
The two versions can be distinguished by looking at the first six bytes of the file (the "magic number" or signature), which, when interpreted as ASCII, read "GIF87a" or "GIF89a", respectively.
CompuServe encouraged the adoption of GIF by providing downloadable conversion utilities for many computers. By December 1987, for example, an Apple IIGS user could view pictures created on an Atari ST or Commodore 64. GIF was one of the first two image formats commonly used on Web sites, the other being the black-and-white XBM.
In September 1995 Netscape Navigator 2.0 added the ability for animated GIFs to loop.
While GIF was developed by CompuServe, it used the Lempel–Ziv–Welch (LZW) lossless data compression algorithm patented by Unisys in 1985. Controversy over the licensing agreement between Unisys and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. In 2004, all patents relating to the proprietary compression used for GIF expired.
The feature of storing multiple images in one file, accompanied by control data, is used extensively on the Web to produce simple animations.
The optional interlacing feature, which stores image scan lines out of order in such a fashion that even a partially downloaded image was somewhat recognizable, also helped GIF's popularity, as a user could abort the download if it was not what was required.
Facebook added support for GIFs in May 2015; Twitter had added support in 2014, and Instagram followed in 2018.
In 2016, the Internet Archive released a searchable library of GIFs from their Geocities archive.
== Terminology ==
As a noun, the word GIF is found in the newer editions of many dictionaries. In 2012, the American wing of the Oxford University Press recognized GIF as a verb as well, meaning "to create a GIF file", as in "GIFing was the perfect medium for sharing scenes from the Summer Olympics". The press's lexicographers voted it their word of the year, saying that GIFs have evolved into "a tool with serious applications including research and journalism".
=== Pronunciation ===
The pronunciation of the first letter of GIF has been disputed since the 1990s. The most common pronunciations in English are with a soft g (as in gin) and with a hard g (as in gift), differing in the phoneme represented by the letter G. The creators of the format pronounced the acronym with a soft g, with Wilhite stating that he intended the pronunciation to deliberately echo the American peanut butter brand Jif, and CompuServe employees would often quip "choosy developers choose GIF", a spoof of Jif's television commercials. However, the word is widely pronounced with a hard g, and polls have generally shown that the hard-g pronunciation is more prevalent.
Dictionary.com cites both pronunciations, indicating the soft-g form as the primary pronunciation, while the Cambridge Dictionary of American English offers only the hard-g pronunciation. Merriam-Webster's Collegiate Dictionary and Oxford Dictionaries cite both pronunciations, but place the hard g first. The New Oxford American Dictionary gave only the soft-g pronunciation in its second edition but added the hard-g pronunciation in the third edition.
The disagreement over the pronunciation has led to heated Internet debate. On the occasion of receiving a lifetime achievement award at the 2013 Webby Awards ceremony, Wilhite publicly rejected the hard-g pronunciation; his speech led to more than 17,000 posts on Twitter and dozens of news articles. The White House and the TV program Jeopardy! also entered the debate in 2013. In February 2020, The J.M. Smucker Company, the owners of the Jif brand, partnered with the animated image database and search engine Giphy to release a limited-edition "Jif vs. GIF" (hashtagged as #JIFvsGIF) jar of peanut butter that had a label humorously declaring the soft-g pronunciation to refer exclusively to the peanut butter, and GIF to be exclusively pronounced with the hard-g pronunciation.
== Usage ==
GIFs are suitable for sharp-edged line art with a limited number of colors, such as logos. This takes advantage of the format's lossless compression, which favors flat areas of uniform color with well defined edges. They can also be used to store low-color sprite data for games. GIFs can be used for small animations and low-resolution video clips, or as reactions in online messaging used to convey emotion and feelings instead of using words. They are popular on social media platforms such as Tumblr, Facebook and Twitter.
== File format ==
Conceptually, a GIF file describes a fixed-sized graphical area (the "logical screen") populated with zero or more "images". Many GIF files have a single image that fills the entire logical screen. Others divide the logical screen into separate sub-images. The images may also function as animation frames in an animated GIF file, but again these need not fill the entire logical screen.
GIF files start with a fixed-length header ("GIF87a" or "GIF89a") giving the version, followed by a fixed-length Logical Screen Descriptor giving the pixel dimensions and other characteristics of the logical screen. The screen descriptor may also specify the presence and size of a Global Color Table (GCT), which follows next if present.
Thereafter, the file is divided into segments of the following types, each introduced by a 1-byte sentinel:
An image (introduced by 0x2C, an ASCII comma ',')
An extension block (introduced by 0x21, an ASCII exclamation point '!')
The trailer (a single byte of value 0x3B, an ASCII semicolon ';'), which should be the last byte of the file.
An image starts with a fixed-length Image Descriptor, which may specify the presence and size of a Local Color Table (which follows next if present). The image data follows: one byte giving the bit width of the unencoded symbols (which must be at least 2 bits wide, even for bi-color images), followed by a series of sub-blocks containing the LZW-encoded data.
Extension blocks (blocks that "extend" the 87a definition via a mechanism already defined in the 87a spec) consist of the sentinel, an additional byte specifying the type of extension, and a series of sub-blocks with the extension data. Extension blocks that modify an image (like the Graphic Control Extension that specifies the optional animation delay time and optional transparent background color) must immediately precede the segment with the image they refer to.
Each sub-block begins with a byte giving the number of subsequent data bytes in the sub-block (1 to 255). The series of sub-blocks is terminated by an empty sub-block (a 0 byte).
This structure allows the file to be parsed even if not all parts are understood. A GIF marked 87a may contain extension blocks; the intent is that a decoder can read and display the file without the features covered in extensions it does not understand.
The full detail of the file format is covered in the GIF specification.
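The segment structure described above can be walked with a short parser. The sketch below (hypothetical helper names, minimal error handling) skips over color tables and sub-blocks and records the sequence of top-level segments:

```python
import io
import struct

def parse_gif_segments(data):
    # Walk the top-level segments of a GIF file, following the layout
    # described above: header, Logical Screen Descriptor, optional GCT,
    # then sentinel-introduced segments until the trailer.
    f = io.BytesIO(data)
    header = f.read(6)
    assert header in (b"GIF87a", b"GIF89a"), "not a GIF"
    # LSD: width, height, packed byte, background index, aspect ratio
    width, height, packed, _bg, _aspect = struct.unpack("<HHBBB", f.read(7))
    if packed & 0x80:                       # Global Color Table present
        f.read(3 * 2 ** ((packed & 0x07) + 1))   # skip the RGB triples

    def skip_subblocks():
        while True:
            n = f.read(1)[0]                # sub-block length byte
            if n == 0:                      # empty sub-block terminates
                return
            f.read(n)

    segments = []
    while True:
        sentinel = f.read(1)[0]
        if sentinel == 0x3B:                # ';' trailer: end of file
            segments.append(("trailer",))
            return header.decode(), (width, height), segments
        elif sentinel == 0x21:              # '!' extension block
            segments.append(("extension", f.read(1)[0]))
            skip_subblocks()
        elif sentinel == 0x2C:              # ',' image
            _l, _t, iw, ih, ipacked = struct.unpack("<HHHHB", f.read(9))
            if ipacked & 0x80:              # Local Color Table present
                f.read(3 * 2 ** ((ipacked & 0x07) + 1))
            f.read(1)                       # LZW minimum code size
            skip_subblocks()                # the LZW-encoded data
            segments.append(("image", (iw, ih)))
        else:
            raise ValueError("unknown sentinel 0x%02X" % sentinel)

# A well-known minimal 1x1 GIF89a file, for demonstration:
MINIMAL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00"
           b"\x21\xf9\x04\x01\x00\x00\x00\x00"
           b"\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02\x44\x01\x00\x3b")
```

On the minimal file, the parser reports one Graphic Control Extension (label F9h), one 1×1 image, and the trailer.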
== Palettes ==
GIF is palette-based: the colors used in an image (a frame) in the file have their RGB values defined in a palette table that can hold up to 256 entries, and the data for the image refer to the colors by their indices (0–255) in the palette table. The color definitions in the palette can be drawn from a color space of millions of shades (2^24 shades, 8 bits for each primary), but the maximum number of colors a frame can use is 256. This limitation was reasonable when GIF was developed because hardware that could display more than 256 colors simultaneously was rare. Simple graphics, line drawings, cartoons, and grey-scale photographs typically need fewer than 256 colors.
Each frame can designate one index as a "transparent background color": any pixel assigned this index takes on the color of the pixel in the same position from the background, which may have been determined by a previous frame of animation.
Many techniques, collectively called dithering, have been developed to approximate a wider range of colors with a small color palette by using pixels of two or more colors to approximate in-between colors. These techniques sacrifice spatial resolution to approximate deeper color resolution. While not part of the GIF specification, dithering can be used in images subsequently encoded as GIF images. This is often not an ideal solution for GIF images, both because the loss of spatial resolution typically makes an image look fuzzy on the screen, and because the dithering patterns often interfere with the compressibility of the image data, working against GIF's main purpose.
In the early days of graphical web browsers, graphics cards with 8-bit buffers (allowing only 256 colors) were common and it was fairly common to make GIF images using the websafe palette. This ensured predictable display, but severely limited the choice of colors. When 24-bit color became the norm, palettes could instead be populated with the optimum colors for individual images.
A small color table may suffice for small images, and keeping the color table small allows the file to be downloaded faster. Both the 87a and 89a specifications allow color tables of 2^n colors for any n from 1 through 8. Most graphics applications will read and display GIF images with any of these table sizes; but some do not support all sizes when creating images. Tables of 2, 16, and 256 colors are widely supported.
=== True color ===
Although GIF is almost never used for true color images, it is possible to do so. A GIF image can include multiple image blocks, each of which can have its own 256-color palette, and the blocks can be tiled to create a complete image. Alternatively, the GIF89a specification introduced the idea of a "transparent" color where each image block can include its own palette of 255 visible colors plus one transparent color. A complete image can be created by layering image blocks with the visible portion of each layer showing through the transparent portions of the layers above.
To render a full-color image as a GIF, the original image must be broken down into smaller regions having no more than 255 or 256 different colors. Each of these regions is then stored as a separate image block with its own local palette and when the image blocks are displayed together (either by tiling or by layering partially transparent image blocks), the complete, full-color image appears. For example, breaking an image into tiles of 16 by 16 pixels (256 pixels in total) ensures that no tile has more than the local palette limit of 256 colors, although larger tiles may be used and similar colors merged resulting in some loss of color information.
Since each image block can have its own local color table, a GIF file having many image blocks can be very large, limiting the usefulness of full-color GIFs. Additionally, not all GIF rendering programs handle tiled or layered images correctly. Many rendering programs interpret tiles or layers as animation frames and display them in sequence as an animation with most web browsers automatically displaying the frames with a delay time of 0.1 seconds or more.
== Example GIF file ==
The hex numbers in the following tables are in little-endian byte order, as the format specification prescribes.
=== Image coding ===
The image pixel data, scanned horizontally from top left, are converted by LZW encoding to codes that are then mapped into bytes for storing in the file. The pixel codes typically don't match the 8-bit size of the bytes, so the codes are packed into bytes by a "little-Endian" scheme: the least significant bit of the first code is stored in the least significant bit of the first byte, higher order bits of the code into higher order bits of the byte, spilling over into the low order bits of the next byte as necessary. Each subsequent code is stored starting at the least significant bit not already used.
This byte stream is stored in the file as a series of "sub-blocks". Each sub-block has a maximum length 255 bytes and is prefixed with a byte indicating the number of data bytes in the sub-block. The series of sub-blocks is terminated by an empty sub-block (a single 0 byte, indicating a sub-block with 0 data bytes).
For the sample image above the reversible mapping between 9-bit codes and bytes is shown below.
A slight compression is evident: pixel colors defined initially by 15 bytes are exactly represented by 12 code bytes including control codes.
The encoding process that produces the 9-bit codes is shown below. A local string accumulates pixel color numbers from the palette, with no output action as long as the local string can be found in a code table. There is special treatment of the first two pixels that arrive before the table grows from its initial size by additions of strings. After each output code, the local string is initialized to the latest pixel color (that could not be included in the output code).
Table 9-bit
string --> code code Action
#0 | 000h Initialize root table of 9-bit codes
palette | :
colors | :
#255 | 0FFh
clr | 100h
end | 101h
| 100h Clear
Pixel Local |
color Palette string |
BLACK #40 28 | 028h 1st pixel always to output
WHITE #255 FF | String found in table
28 FF | 102h Always add 1st string to table
FF | Initialize local string
WHITE #255 FF FF | String not found in table
| 0FFh - output code for previous string
FF FF | 103h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
BLACK #40 FF FF 28 | String not found in table
| 103h - output code for previous string
FF FF 28 | 104h - add latest string to table
28 | - initialize local string
WHITE #255 28 FF | String found in table
WHITE #255 28 FF FF | String not found in table
| 102h - output code for previous string
28 FF FF | 105h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
WHITE #255 FF FF FF | String not found in table
| 103h - output code for previous string
FF FF FF | 106h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
WHITE #255 FF FF FF | String found in table
WHITE #255 FF FF FF FF | String not found in table
| 106h - output code for previous string
FF FF FF FF| 107h - add latest string to table
FF | - initialize local string
WHITE #255 FF FF | String found in table
WHITE #255 FF FF FF | String found in table
WHITE #255 FF FF FF FF | String found in table
No more pixels
107h - output code for last string
101h End
For clarity the table is shown above as being built of strings of increasing length. That scheme can function but the table consumes an unpredictable amount of memory. Memory can be saved in practice by noting that each new string to be stored consists of a previously stored string augmented by one character. It is economical to store at each address only two words: an existing address and one character.
The LZW algorithm requires a search of the table for each pixel. A linear search through up to 4096 addresses would make the coding slow. In practice the codes can be stored in order of numerical value; this allows each search to be done by a SAR (Successive Approximation Register, as used in some ADCs), with only 12 magnitude comparisons. For this efficiency an extra table is needed to convert between codes and actual memory addresses; the extra table upkeeping is needed only when a new code is stored which happens at much less than pixel rate.
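The encoding loop traced above can be sketched in Python. This simplified sketch (the function name is illustrative) uses a dictionary for the string table and omits the variable code-width bit packing and the 4096-code table limit:

```python
def gif_lzw_encode(pixels, symbol_width=8):
    clear = 1 << symbol_width                 # 100h: the Clear code
    end = clear + 1                           # 101h: the End code
    table = {bytes([i]): i for i in range(clear)}   # root table of codes
    next_code = end + 1                       # 102h: first string code
    codes = [clear]                           # stream begins with Clear
    s = b""                                   # the local string
    for p in pixels:
        if s + bytes([p]) in table:
            s += bytes([p])                   # keep extending the string
        else:
            codes.append(table[s])            # output code for previous string
            table[s + bytes([p])] = next_code # add latest string to table
            next_code += 1
            s = bytes([p])                    # re-initialize local string
    if s:
        codes.append(table[s])                # code for the last string
    codes.append(end)
    return codes

# The example image's 15 pixels: BLACK (28h), three WHITEs (FFh),
# BLACK, then ten WHITEs.
pixels = [0x28, 0xFF, 0xFF, 0xFF, 0x28] + [0xFF] * 10
```

Run on the example's pixels, this reproduces the code sequence from the trace above: 100h, 028h, 0FFh, 103h, 102h, 103h, 106h, 107h, 101h.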
=== Image decoding ===
Decoding begins by mapping the stored bytes back to 9-bit codes. These are decoded to recover the pixel colors as shown below. A table identical to the one used in the encoder is built by adding strings by this rule:
shift
9-bit ----> Local Table Pixel
code code code --> string Palette color Action
100h 000h | #0 Initialize root table of 9-bit codes
: | palette
: | colors
0FFh | #255
100h | clr
101h | end
028h | #40 BLACK Decode 1st pixel
0FFh 028h | Incoming code found in table
| #255 WHITE - output string from table
102h | 28 FF - add to table
103h 0FFh | Incoming code not found in table
103h | FF FF - add to table
| - output string from table
| #255 WHITE
| #255 WHITE
102h 103h | Incoming code found in table
| - output string from table
| #40 BLACK
| #255 WHITE
104h | FF FF 28 - add to table
103h 102h | Incoming code found in table
| - output string from table
| #255 WHITE
| #255 WHITE
105h | 28 FF FF - add to table
106h 103h | Incoming code not found in table
106h | FF FF FF - add to table
| - output string from table
| #255 WHITE
| #255 WHITE
| #255 WHITE
107h 106h | Incoming code not found in table
107h | FF FF FF FF - add to table
| - output string from table
| #255 WHITE
| #255 WHITE
| #255 WHITE
| #255 WHITE
101h | End
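The decoding rule traced above, including the special case of an incoming code not yet in the table, can be sketched as follows (simplified: the code list is taken as given, with no bit unpacking; the function name is illustrative):

```python
def gif_lzw_decode(codes, symbol_width=8):
    clear = 1 << symbol_width                 # 100h: the Clear code
    end = clear + 1                           # 101h: the End code
    table, prev, next_code = {}, None, 0
    out = bytearray()
    for code in codes:
        if code == clear:
            table = {i: bytes([i]) for i in range(clear)}   # root table
            next_code, prev = end + 1, None
        elif code == end:
            break
        elif prev is None:
            out += table[code]                # decode 1st pixel directly
            prev = code
        else:
            if code in table:
                entry = table[code]           # incoming code found in table
            else:
                # Incoming code not yet in table: it must be the previous
                # string extended by its own first symbol.
                entry = table[prev] + table[prev][:1]
            out += entry                      # output string from table
            table[next_code] = table[prev] + entry[:1]      # add to table
            next_code += 1
            prev = code
    return bytes(out)
```

Fed the nine codes produced in the encoding example (100h, 028h, 0FFh, 103h, 102h, 103h, 106h, 107h, 101h), this recovers the original 15 pixels.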
=== LZW code lengths ===
Shorter code lengths can be used for palettes smaller than the 256 colors in the example. If the palette is only 64 colors (so color indexes are 6 bits wide), the symbols can range from 0 to 63, and the symbol width can be taken to be 6 bits, with codes starting at 7 bits. In fact, the symbol width need not match the palette size: as long as the values decoded are always less than the number of colors in the palette, the symbols can be any width from 2 to 8, and the palette size any power of 2 from 2 to 256. For example, if only the first four colors (values 0 to 3) of the palette are used, the symbols can be taken to be 2 bits wide with codes starting at 3 bits.
Conversely, the symbol width could be set at 8, even if only values 0 and 1 are used; these data would only require a two-color table. Although there would be no point in encoding the file that way, something similar typically happens for bi-color images: the minimum symbol width is 2, even if only values 0 and 1 are used.
The code table initially contains codes that are one bit longer than the symbol size in order to accommodate the two special codes clr and end and codes for strings that are added during the process. When the table is full the code length increases to give space for more strings, up to a maximum code 4095 = FFF(hex). As the decoder builds its table it tracks these increases in code length and it is able to unpack incoming bytes accordingly.
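A sketch of that growth schedule: the code width must increase as soon as the next code to be assigned no longer fits in the current width. The helper below is an assumption-laden illustration, not part of any GIF library:

```python
def code_width_schedule(symbol_bits=8, max_width=12):
    """Return (first_code_needing_wider_field, new_width) pairs."""
    bumps = []
    width = symbol_bits + 1           # codes start one bit wider than symbols
    while width < max_width:          # widths are capped at 12 bits (FFFh)
        bumps.append((1 << width, width + 1))
        width += 1
    return bumps
```

For 8-bit symbols this gives bumps at codes 512, 1024 and 2048, i.e. widths 9 through 12.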
=== Uncompressed GIF ===
The GIF encoding process can be modified to create a file without LZW compression that is still viewable as a GIF image. This technique was introduced originally as a way to avoid patent infringement. Uncompressed GIF can also be a useful intermediate format for a graphics programmer because individual pixels are accessible for reading or painting. An uncompressed GIF file can be converted to an ordinary GIF file simply by passing it through an image editor.
The modified encoding method forgoes building the LZW string table and emits only the root palette codes plus CLEAR and STOP. This yields a simpler encoding (a 1-to-1 correspondence between code values and palette codes) but sacrifices all of the compression: each pixel in the image generates an output code indicating its color index. When processing an uncompressed GIF, a standard GIF decoder is not prevented from writing strings to its dictionary table, so the encoder must ensure the code width never increases, since that would trigger a different packing of bits into bytes.
If the symbol width is n, the codes of width n+1 fall naturally into two blocks: the lower block of 2^n codes for coding single symbols, and the upper block of 2^n codes that will be used by the decoder for sequences of length greater than one. Of that upper block, the first two codes are already taken: 2^n for CLEAR and 2^n + 1 for STOP. The decoder must also be prevented from using the last code in the upper block, 2^(n+1) − 1, because when the decoder fills that slot, it will increase the code width. Thus in the upper block there are 2^n − 3 codes available to the decoder that won't trigger an increase in code width. Because the decoder is always one step behind in maintaining the table, it does not generate a table entry upon receiving the first code from the encoder, but will generate one for each succeeding code. Thus the encoder can generate 2^n − 2 codes without triggering an increase in code width. Therefore, the encoder must emit extra CLEAR codes at intervals of 2^n − 2 codes or less to make the decoder reset the coding dictionary. The GIF standard allows such extra CLEAR codes to be inserted in the image data at any time. The composite data stream is partitioned into sub-blocks that each carry from 1 to 255 bytes.
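A hedged sketch of such an encoder: it emits one root code per pixel and inserts an extra CLEAR every 2^n − 2 codes, so the decoder's table never fills to the width-increase point (names are illustrative):

```python
def uncompressed_gif_codes(pixels, symbol_bits=8):
    """Emit raw GIF codes with no string-table compression.

    Every pixel becomes its own root code; a CLEAR is re-emitted every
    2**symbol_bits - 2 codes so the decoder never widens the code size.
    """
    clear = 1 << symbol_bits
    stop = clear + 1
    out = [clear]                 # start with a CLEAR
    since_clear = 0
    for p in pixels:
        if since_clear == (1 << symbol_bits) - 2:
            out.append(clear)     # reset before the decoder widens codes
            since_clear = 0
        out.append(p)
        since_clear += 1
    out.append(stop)
    return out
```

For the sample 3×5 image this reproduces the 9-bit stream of "clear", fifteen pixel codes, and "stop".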
For the sample 3×5 image above, the following 9-bit codes represent "clear" (100) followed by image pixels in scan order and "stop" (101).
100 028 0FF 0FF 0FF 028 0FF 0FF 0FF 0FF 0FF 0FF 0FF 0FF 0FF 0FF 101
After the above codes are mapped to bytes, the uncompressed file differs from the compressed file thus:
== Compression example ==
The trivial example of a large image of solid color demonstrates the variable-length LZW compression used in GIF files.
The code values shown are packed into bytes which are then packed into blocks of up to 255 bytes. A block of image data begins with a byte that declares the number of bytes to follow. The last block of data for an image is marked by a zero block-length byte.
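The packing just described can be sketched in two steps: codes are packed least-significant-bit first into bytes, and the bytes are then wrapped into length-prefixed sub-blocks of at most 255 bytes, terminated by a zero length byte. This is a minimal sketch, not a full GIF writer:

```python
def pack_codes(codes, width):
    """Pack fixed-width codes into bytes, least significant bit first."""
    buf, acc, nbits = bytearray(), 0, 0
    for c in codes:
        acc |= c << nbits
        nbits += width
        while nbits >= 8:
            buf.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:
        buf.append(acc & 0xFF)    # flush the final partial byte
    return bytes(buf)

def to_subblocks(data):
    """Wrap bytes into GIF data sub-blocks: length byte, payload, ..., 0."""
    out = bytearray()
    for i in range(0, len(data), 255):
        chunk = data[i:i + 255]
        out.append(len(chunk))
        out.extend(chunk)
    out.append(0)                 # zero length byte ends the block sequence
    return bytes(out)
```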
== Interlacing ==
The GIF Specification allows each image within the logical screen of a GIF file to specify that it is interlaced; i.e., that the order of the raster lines in its data block is not sequential. This allows a partial display of the image that can be recognized before the full image is painted.
An interlaced image is divided from top to bottom into strips 8 pixels high, and the rows of the image are presented in the following order:
Pass 1: Line 0 (the top-most line) from each strip.
Pass 2: Line 4 from each strip.
Pass 3: Lines 2 and 6 from each strip.
Pass 4: Lines 1, 3, 5, and 7 from each strip.
The pixels within each line are not interlaced, but presented consecutively from left to right. As with non-interlaced images, there is no break between the data for one line and the data for the next. The indicator that an image is interlaced is a bit set in the corresponding Image Descriptor block.
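The four passes can be expressed as (start, step) pairs over row indices; a small sketch that returns the order in which rows are transmitted:

```python
def interlaced_rows(height):
    """Row indices of an image in GIF interlace transmission order."""
    order = []
    for start, step in ((0, 8), (4, 8), (2, 4), (1, 2)):  # passes 1-4
        order.extend(range(start, height, step))
    return order
```

For an 8-pixel-high image this yields rows 0, 4, 2, 6, 1, 3, 5, 7; every row appears exactly once for any height.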
== Animated GIF ==
Although GIF was not designed as an animation medium, its ability to store multiple images in one file naturally suggested using the format to store the frames of an animation sequence. To facilitate displaying animations, the GIF89a spec added the Graphic Control Extension (GCE), which allows the images (frames) in the file to be painted with time delays, forming a video clip. Each frame in an animation GIF is introduced by its own GCE specifying the time delay to wait after the frame is drawn. Global information at the start of the file applies by default to all frames. The data is stream-oriented, so the file offset of the start of each GCE depends on the length of preceding data. Within each frame the LZW-coded image data is arranged in sub-blocks of up to 255 bytes; the size of each sub-block is declared by the byte that precedes it.
By default, an animation displays the sequence of frames only once, stopping when the last frame is displayed. To enable an animation to loop, Netscape in the 1990s used the Application Extension block (intended to allow vendors to add application-specific information to the GIF file) to implement the Netscape Application Block (NAB). This block, placed immediately before the sequence of animation frames, specifies the number of times the sequence of frames should be played (1 to 65535 times) or that it should repeat continuously (zero indicates loop forever). Support for these repeating animations first appeared in Netscape Navigator version 2.0, and then spread to other browsers. Most browsers now recognize and support NAB, though it is not strictly part of the GIF89a specification.
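A hedged sketch of the NAB byte layout as commonly documented: extension introducer 21h, application label FFh, the 11-byte identifier "NETSCAPE2.0", a 3-byte data sub-block whose 16-bit repeat count is little-endian, and a zero terminator:

```python
import struct

def netscape_loop_block(loop_count=0):
    """Build a Netscape Application Block; loop_count 0 means loop forever."""
    return (b"\x21\xFF"                       # extension introducer + label
            b"\x0B"                           # 11 bytes of app id + auth code
            b"NETSCAPE2.0"
            b"\x03\x01"                       # sub-block length 3, sub-block id 1
            + struct.pack("<H", loop_count)   # repeat count, little-endian
            + b"\x00")                        # block terminator
```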
The following example shows the structure of the animation file Rotating earth (large).gif shown (as a thumbnail) in the article's infobox.
The animation delay for each frame is specified in the GCE in hundredths of a second. Some economy of data is possible where a frame need only rewrite a portion of the pixels of the display, because the Image Descriptor can define a smaller rectangle to be rescanned instead of the whole image. Browsers or other displays that do not support animated GIFs typically show only the first frame.
The size and color quality of animated GIF files can vary significantly depending on the application used to create them. Strategies for minimizing file size include using a common global color table for all frames (rather than a complete local color table for each frame) and minimizing the number of pixels covered in successive frames (so that only the pixels that change from one frame to the next are included in the latter frame). More advanced techniques involve modifying color sequences to better match the existing LZW dictionary, a form of lossy compression. Simply packing a series of independent frame images into a composite animation tends to yield large file sizes. Tools are available to minimize the file size given an existing GIF.
== Metadata ==
Metadata can be stored in GIF files as a comment block, a plain text block, or an application-specific application extension block. Several graphics editors use unofficial application extension blocks to include the data used to generate the image, so that it can be recovered for further editing.
All of these methods technically require the metadata to be broken into sub-blocks so that applications can navigate the metadata block without knowing its internal structure.
The Extensible Metadata Platform (XMP) metadata standard introduced an unofficial but now widespread "XMP Data" application extension block for including XMP data in GIF files. Since the XMP data is encoded using UTF-8 without NUL characters, there are no 0 bytes in the data. Rather than break the data into formal sub-blocks, the extension block terminates with a "magic trailer" that routes any application treating the data as sub-blocks to a final 0 byte that terminates the sub-block chain.
== Unisys and LZW patent enforcement ==
In 1977 and 1978, Jacob Ziv and Abraham Lempel published a pair of papers on a new class of lossless data-compression algorithms, now collectively referred to as LZ77 and LZ78. In 1983, Terry Welch developed a fast variant of LZ78 which was named Lempel–Ziv–Welch (LZW).
Welch filed a patent application for the LZW method in June 1983. The resulting patent, US4558302, granted in December 1985, was assigned to Sperry Corporation who subsequently merged with Burroughs Corporation in 1986 and formed Unisys. Further patents were obtained in the United Kingdom, France, Germany, Italy, Japan and Canada.
In addition to the above patents, Welch's 1983 patent also includes citations to several other patents that influenced it, including:
two 1980 Japanese patents from NEC's Jun Kanatsu,
U.S. patent 4,021,782 (1974) from John S. Hoerning,
U.S. patent 4,366,551 (1977) from Klaus E. Holtz, and
a 1981 German patent from Karl Eckhart Heinz.
In June 1984, an article by Welch was published in the IEEE magazine Computer which publicly described the LZW technique for the first time. LZW became a popular data compression technique and, when the patent was granted, Unisys entered into licensing agreements with over a hundred companies.
The popularity of LZW led CompuServe to choose it as the compression technique for their version of GIF, developed in 1987. At the time, CompuServe was not aware of the patent. Unisys became aware that the version of GIF used the LZW compression technique and entered into licensing negotiations with CompuServe in January 1993. The subsequent agreement was announced on 24 December 1994. Unisys stated that they expected all major commercial on-line information services companies employing the LZW patent to license the technology from Unisys at a reasonable rate, but that they would not require licensing, or fees to be paid, for non-commercial, non-profit GIF-based applications, including those for use on the on-line services.
Following this announcement, there was widespread condemnation of CompuServe and Unisys, and many software developers threatened to stop using GIF. The PNG format (see below) was developed in 1995 as an intended replacement. However, obtaining support from the makers of Web browsers and other software for the PNG format proved difficult and it was not possible to replace GIF, although PNG has gradually increased in popularity. Therefore, GIF variations without LZW compression were developed. For instance the libungif library, based on Eric S. Raymond's giflib, allows creation of GIFs that followed the data format but avoided the compression features, thus avoiding use of the Unisys LZW patent. A 2001 Dr. Dobb's article described a way to achieve LZW-compatible encoding without infringing on its patents.
In August 1999, Unisys changed the details of their licensing practice, announcing the option for owners of certain non-commercial and private websites to obtain licenses on payment of a one-time license fee of $5000 or $7500. Such licenses were not required for website owners or other GIF users who had used licensed software to generate GIFs. Nevertheless, Unisys was subjected to thousands of online attacks and abusive emails from users believing that they were going to be charged $5000 or sued for using GIFs on their websites. Despite giving free licenses to hundreds of non-profit organizations, schools and governments, Unisys was completely unable to generate any good publicity and continued to be condemned by individuals and organizations such as the League for Programming Freedom who started the "Burn All GIFs" campaign in 1999.
The United States LZW patent expired on 20 June 2003. The counterpart patents in the United Kingdom, France, Germany and Italy expired on 18 June 2004, the Japanese patents expired on 20 June 2004, and the Canadian patent expired on 7 July 2004. Consequently, while Unisys has further patents and patent applications relating to improvements to the LZW technique, LZW itself (and consequently GIF) have been free to use since July 2004.
== Alternatives ==
=== PNG ===
Portable Network Graphics (PNG) was designed as a replacement for GIF in order to avoid infringement of Unisys' patent on the LZW compression technique. PNG offers better compression and more features than GIF, animation being the only significant exception. PNG is more suitable than GIF in instances where true-color imaging and alpha transparency are required.
Although support for the PNG format came slowly, all modern web browsers support PNG. Older versions of Internet Explorer do not support all features of PNG: versions 6 and earlier do not support alpha-channel transparency without Microsoft-specific HTML extensions, and gamma correction of PNG images was not supported before version 8, so such images may display with the wrong tint in earlier versions.
For identical 8-bit (or lower) image data, PNG files are typically smaller than the equivalent GIFs, due to the more efficient compression techniques used in PNG encoding. Complete support for GIF is complicated chiefly by the complex canvas structure it allows, though this is what enables the compact animation features.
=== Animation formats ===
Video formats resolve many of the issues that GIF presents in common web usage: they offer drastically smaller file sizes, color depth beyond the 8-bit restriction, and better frame handling and compression through inter-frame coding. Conversely, virtually universal support for the GIF format in web browsers, together with the lack of official support for video in the HTML standard, caused GIF to rise to prominence for displaying short video-like files on the web.
MNG ("Multiple-image Network Graphics") was originally developed as a PNG-based solution for animations. MNG reached version 1.0 in 2001, but few applications support it.
APNG ("Animated Portable Network Graphics") was proposed by Mozilla in 2006. APNG is an extension to the PNG format as alternative to the MNG format. APNG is supported by most browsers as of 2019. APNG provides the ability to animate PNG files, while retaining backwards compatibility in decoders that cannot understand the animation chunk (unlike MNG). Older decoders will simply render the first frame of the animation.
The PNG group officially rejected APNG as an official extension on 20 April 2007.
There have been several subsequent proposals for a simple animated graphics format based on PNG, using several different approaches. Nevertheless, APNG continued to be developed by Mozilla and shipped in Firefox 3.0, while MNG support was dropped. APNG is currently supported by all major web browsers, including Chrome (since version 59.0), Opera, Firefox and Edge.
Embedded Adobe Flash objects and MPEG files were used on some websites to display simple video, but required the use of an additional browser plugin.
WebM and WebP are in development and are supported by some web browsers.
Other options for web animation include serving individual frames using AJAX, or animating SVG ("Scalable vector graphics") images using JavaScript or SMIL ("Synchronized Multimedia Integration Language").
With widespread support for the HTML video (<video>) tag in most web browsers, some websites serve a looped video in place of a GIF. This gives the appearance of a GIF, but with the size and speed advantages of compressed video.
Notable examples are Gfycat and Imgur and their GIFV metaformat, which is in fact an HTML <video> tag playing a looped MP4 or WebM compressed video.
HEIF ("High Efficiency Image File Format") is an image file format, finalized in 2015, which uses a discrete cosine transform (DCT) lossy compression algorithm based on the HEVC video format, and related to the JPEG image format. In contrast to JPEG, HEIF supports animation.
Compared to the GIF format, which lacks DCT compression, HEIF allows significantly more efficient compression. HEIF stores more information and produces higher-quality animated images at a small fraction of an equivalent GIF's size.
VP9 only supports alpha compositing with 4:2:0 chroma subsampling, which may be unsuitable for GIFs that combine transparency with rasterised vector graphics with fine color details.
AV1 video codec or AVIF can also be used either as a video or a sequenced image.
==== Uses ====
In April 2014, 4chan added support for silent WebM videos that are under 3 MB in size and 2 min in length, and in October 2014, Imgur started converting any GIF files uploaded to the site to H.264 video and giving the link to the HTML player the appearance of an actual file with a .gifv extension.
In January 2016, Telegram started re-encoding all GIFs to MPEG-4 videos that "require up to 95% less disk space for the same image quality."
== See also ==
AVIF
Cinemagraph, a partially animated photograph often in GIF
Clear GIF, a technique used to check content access
Comparison of graphics file formats
GIF art, a form of digital art associated with GIF
GIFBuilder, early animated GIF creation program
GNU plotutils (supports pseudo-GIF, which uses run-length encoding rather than LZW)
Microsoft GIF Animator, historic program to create simple animated GIFs
Software patent
== References ==
== External links ==
The GIFLIB project
spec-gif89a.txt GIF 89a specification on w3.org
GIF 89a specification reformatted into HTML
LZW and GIF explained
Animated GIFs: a six-minute documentary produced by Off Book (web series)
Algorithmic topology, or computational topology, is a subfield of topology with an overlap with areas of computer science, in particular, computational geometry and computational complexity theory.
A primary concern of algorithmic topology, as its name suggests, is to develop efficient algorithms for solving problems that arise naturally in fields such as computational geometry, graphics, robotics, social science, structural biology, and chemistry, using methods from computable topology.
== Major algorithms by subject area ==
=== Algorithmic 3-manifold theory ===
A large family of algorithms concerning 3-manifolds revolve around normal surface theory, which is a phrase that encompasses several techniques to turn problems in 3-manifold theory into integer linear programming problems.
Rubinstein and Thompson's 3-sphere recognition algorithm. This is an algorithm that takes as input a triangulated 3-manifold and determines whether or not the manifold is homeomorphic to the 3-sphere. It has exponential run-time in the number of tetrahedral simplexes in the initial 3-manifold, and also an exponential memory profile. Saul Schleimer went on to show the problem lies in the complexity class NP. Furthermore, Raphael Zentner showed that the problem lies in the complexity class coNP, provided that the generalized Riemann hypothesis holds. He uses instanton gauge theory, the geometrization theorem of 3-manifolds, and subsequent work of Greg Kuperberg on the complexity of knottedness detection.
The connect-sum decomposition of 3-manifolds is also implemented in Regina, has exponential run-time and is based on a similar algorithm to the 3-sphere recognition algorithm.
Determining that the Seifert–Weber 3-manifold contains no incompressible surface has been algorithmically implemented by Burton, Rubinstein and Tillmann, and is based on normal surface theory.
The Manning algorithm is an algorithm to find hyperbolic structures on 3-manifolds whose fundamental group has a solvable word problem.
At present the JSJ decomposition has not been implemented algorithmically in computer software. Neither has the compression-body decomposition. There are some very popular and successful heuristics, such as SnapPea which has much success computing approximate hyperbolic structures on triangulated 3-manifolds. It is known that the full classification of 3-manifolds can be done algorithmically, in fact, it is known that deciding whether two closed, oriented 3-manifolds given by triangulations (simplicial complexes) are equivalent (homeomorphic) is elementary recursive. This generalizes the result on 3-sphere recognition.
==== Conversion algorithms ====
SnapPea implements an algorithm to convert a planar knot or link diagram into a cusped triangulation. This algorithm has a roughly linear run-time in the number of crossings in the diagram, and a low memory profile. The algorithm is similar to the Wirtinger algorithm for constructing presentations of the fundamental group of link complements given by planar diagrams. Similarly, SnapPea can convert surgery presentations of 3-manifolds into triangulations of the presented 3-manifold.
D. Thurston and F. Costantino have a procedure to construct a triangulated 4-manifold from a triangulated 3-manifold. Similarly, it can be used to construct surgery presentations of triangulated 3-manifolds; although the procedure is not explicitly written as an algorithm, in principle it should have polynomial run-time in the number of tetrahedra of the given 3-manifold triangulation.
S. Schleimer has an algorithm which produces a triangulated 3-manifold, given as input a word (in Dehn twist generators) for the mapping class group of a surface. The 3-manifold is the one that uses the word as the attaching map for a Heegaard splitting of the 3-manifold. The algorithm is based on the concept of a layered triangulation.
=== Algorithmic knot theory ===
Determining whether or not a knot is trivial is known to be in the complexity classes NP as well as co-NP. The problem of determining the genus of a knot in a 3-manifold is NP-complete; however, while NP remains an upper bound on the complexity of determining the genus of a knot in R3 or S3, as of 2006 it was unknown whether the algorithmic problem of determining the genus of a knot in those particular 3-manifolds was still NP-hard.
=== Computational homotopy ===
Computational methods for homotopy groups of spheres.
Computational methods for solving systems of polynomial equations.
Brown has an algorithm to compute the homotopy groups of spaces that are finite Postnikov complexes, although it is not widely considered implementable.
=== Computational homology ===
Computation of homology groups of cell complexes reduces to bringing the boundary matrices into Smith normal form. Although this is a completely solved problem algorithmically, there are various technical obstacles to efficient computation for large complexes. There are two central obstacles. Firstly, the basic Smith form algorithm has cubic complexity in the size of the matrix involved since it uses row and column operations which makes it unsuitable for large cell complexes. Secondly, the intermediate matrices which result from the application of the Smith form algorithm get filled-in even if one starts and ends with sparse matrices.
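As an illustration of the rank computations involved (though not of Smith normal form itself, which is what recovers torsion over the integers), the following sketch computes Betti numbers over the rationals from boundary matrices; the interface is illustrative:

```python
from fractions import Fraction

def _rank(rows):
    """Rank of a matrix over the rationals, by Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        pivot = m[rank]
        for r in range(len(m)):
            if r != rank and m[r][c]:
                f = m[r][c] / pivot[c]
                m[r] = [a - f * b for a, b in zip(m[r], pivot)]
        rank += 1
    return rank

def betti_numbers(n_cells, boundaries):
    """Betti numbers b_0..b_dim of a finite cell complex.

    n_cells[k]    -- number of k-cells
    boundaries[k] -- matrix of the boundary map d_k : C_k -> C_{k-1}
                     (rows indexed by (k-1)-cells, columns by k-cells)
    Computed over the rationals, so torsion is invisible; recovering
    torsion subgroups is exactly what Smith normal form over Z adds.
    """
    rank = {k: _rank(m) for k, m in boundaries.items()}
    out = []
    for k in range(len(n_cells)):
        kernel_dim = n_cells[k] - rank.get(k, 0)          # dim ker d_k
        out.append(int(kernel_dim - rank.get(k + 1, 0)))  # minus dim im d_(k+1)
    return out
```

For a circle triangulated as the boundary of a triangle (3 vertices, 3 edges) this yields b0 = b1 = 1.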
Efficient and probabilistic Smith normal form algorithms, as found in the LinBox library.
Simple homotopic reductions for pre-processing homology computations, as in the Perseus software package.
Algorithms to compute persistent homology of filtered complexes, as in the TDAstats R package.
In some applications, such as in TDA, it is useful to have representatives of (co)homology classes that are as "small" as possible. This is known as the problem of (co)homology localization. On triangulated manifolds, given a chain representing a homology class, it is in general NP-hard to approximate the minimum-support homologous chain. However, the particular setting of approximating 1-cohomology localization on triangulated 2-manifolds is one of only three known problems whose hardness is equivalent to the Unique Games Conjecture.
== See also ==
Computable topology (the study of the topological nature of computation)
Computational geometry
Digital topology
Topological data analysis
Spatial-temporal reasoning
Experimental mathematics
Geometric modeling
== References ==
== External links ==
CompuTop software archive
Workshop on Application of Topology in Science and Engineering
Computational Topology at Stanford University Archived 2007-06-22 at the Wayback Machine
Computational Homology Software (CHomP) at Rutgers University.
Computational Homology Software (RedHom) at Jagellonian University Archived 2013-07-15 at the Wayback Machine.
The Perseus software project for (persistent) homology.
The javaPlex Persistent Homology software at Stanford.
PHAT: persistent homology algorithms toolbox.
== Books ==
Tomasz Kaczynski; Konstantin Mischaikow; Marian Mrozek (2004). Computational Homology. Springer. ISBN 0-387-40853-3.
Afra J. Zomorodian (2005). Topology for Computing. Cambridge. ISBN 0-521-83666-2.
Computational Topology: An Introduction, Herbert Edelsbrunner, John L. Harer, AMS Bookstore, 2010, ISBN 978-0-8218-4925-5
A neural network, also called a neuronal network, is an interconnected population of neurons (typically containing multiple neural circuits). Biological neural networks are studied to understand the organization and functioning of nervous systems.
Closely related are artificial neural networks, machine learning models inspired by biological neural networks. They consist of artificial neurons, which are mathematical functions that are designed to be analogous to the mechanisms used by neural circuits.
== Overview ==
A biological neural network is composed of a group of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from neurotransmitter diffusion.
Artificial intelligence, cognitive modelling, and artificial neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.
Neural network theory has served to identify better how the neurons in the brain function and provide the basis for efforts to create artificial intelligence.
== History ==
The preliminary theoretical base for contemporary neural networks was independently proposed by Alexander Bain (1873) and William James (1890). In their work, both thoughts and body activity resulted from interactions among neurons within the brain.
For Bain, every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to the formation of memory. The general scientific community at the time was skeptical of Bain's theory because it required what appeared to be an inordinate number of neural connections within the brain. It is now apparent that the brain is exceedingly complex and that the same brain “wiring” can handle multiple problems and inputs.
James' theory was similar to Bain's; however, he suggested that memories and actions resulted from electrical currents flowing among the neurons in the brain. His model, by focusing on the flow of electrical currents, did not require individual neural connections for each memory or action.
C. S. Sherrington (1898) conducted experiments to test James' theory. He ran electrical currents down the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected by James, Sherrington found that the electrical current strength decreased as the testing continued over time. Importantly, this work led to the discovery of the concept of habituation.
McCulloch and Pitts (1943) also created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. These early models paved the way for neural network research to split into two distinct approaches. One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence.
The parallel distributed processing of the mid-1980s became popular under the name connectionism. The text by Rumelhart and McClelland (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes.
Artificial neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function.
== Neuroscience ==
Theoretical and computational neuroscience is the field concerned with the analysis and computational modeling of biological neural systems.
Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling.
The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (neural network models) and theory (statistical learning theory and information theory).
=== Types of models ===
Many models are used; defined at different levels of abstraction, and modeling different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of the dynamics of neural circuitry arising from interactions between individual neurons, to models of behaviour arising from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
=== Connectivity ===
In August 2020 scientists reported that bi-directional connections, or added appropriate feedback connections, can accelerate and improve communication between and in modular neural networks of the brain's cerebral cortex and lower the threshold for their successful communication. They showed that adding feedback connections between a resonance pair can support successful propagation of a single pulse packet throughout the entire network. The connectivity of a neural network stems from its biological structures and is usually challenging to map out experimentally. Scientists used a variety of statistical tools to infer the connectivity of a network based on the observed neuronal activities, i.e., spike trains. Recent research has shown that statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances, providing deeper insights into the structure of neural circuits and their computational properties.
== Recent improvements ==
While initially research had been concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning.
Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience.
== See also ==
Adaptive resonance theory
Biological cybernetics
Cognitive architecture
Cognitive science
Connectomics
Cultured neuronal networks
Parallel constraint satisfaction processes
Wood Wide Web
== References ==