Implementation is the realization of an application, execution of a plan, idea, model, design, specification, standard, algorithm, policy, or the administration or management of a process or objective.
== Industry-specific definitions ==
=== Information technology ===
In the information technology industry, implementation refers to the post-sales process of guiding a client from purchase to use of the software or hardware that was purchased. This includes requirements analysis, scope analysis, customizations, systems integrations, user policies, user training and delivery. These steps are often overseen by a project manager using project management methodologies. Software implementations involve several professionals that are relatively new to the knowledge-based economy, such as business analysts, software implementation specialists, solutions architects, and project managers.
To implement a system successfully, many inter-related tasks need to be carried out in an appropriate sequence. Utilising a well-proven implementation methodology and enlisting professional advice can help, but often it is the sheer number of tasks, poor planning and inadequate resourcing that cause problems with an implementation project, rather than any of the tasks being particularly difficult. Similarly, with cultural issues it is often the lack of adequate consultation and two-way communication that inhibits achievement of the desired results.
=== Social and health sciences ===
Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions. According to this definition, implementation processes are purposeful and are described in sufficient detail such that independent observers can detect the presence and strength of the "specific set of activities" related to implementation. In addition, the activity or program being implemented is described in sufficient detail so that independent observers can detect its presence and strength.
In computer science, implementation results in software, while in social and health sciences, implementation science studies how the software can be put into practice or routine use.
== Role of end users ==
System implementation generally benefits from high levels of user involvement and management support. User participation in the design and operation of information systems has several positive results. First, if users are heavily involved in systems design, they have more opportunities to mold the system according to their priorities and business requirements, and more opportunities to control the outcome. Second, they are more likely to react positively to the change process. Incorporating user knowledge and expertise leads to better solutions.
The relationship between users and information systems specialists has traditionally been a problem area for information systems implementation efforts. Users and information systems specialists tend to have different backgrounds, interests, and priorities. This is referred to as the user-designer communications gap. These differences lead to divergent organizational loyalties, approaches to problem solving, and vocabularies. Examples of these differences or concerns are below:
=== Designer concerns ===
How much disk storage space will the master file consume?
How many lines of program code will it take to perform this function?
How can we cut down on CPU time when we run the system?
What are the most efficient ways of storing this data?
What database management system should we use?
== Critique of the premise of implementation ==
Social scientific research on implementation also takes a step away from the project oriented at implementing a plan, and turns the project into an object of study. Lucy Suchman's work has been key, in that respect, showing how the engineering model of plans and their implementation cannot account for the situated action and cognition involved in real-world practices of users relating to plans: that work shows that a plan cannot be specific enough for detailing everything that successful implementation requires. Instead, implementation draws upon implicit and tacit resources and characteristics of users and of the plan's components.
== See also ==
Application software
Situated cognition
== References ==
In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space. The idea is that a compact space has no "punctures" or "missing endpoints", i.e., it includes all limiting values of points. For example, the open interval (0,1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0,1] would be compact. Similarly, the space of rational numbers ℚ is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers ℝ is not compact either, because it excludes the two limiting values +∞ and −∞. However, the extended real number line would be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topological spaces.
One such generalization is that a topological space is sequentially compact if every infinite sequence of points sampled from the space has an infinite subsequence that converges to some point of the space. The Bolzano–Weierstrass theorem states that a subset of Euclidean space is compact in this sequential sense if and only if it is closed and bounded. Thus, if one chooses an infinite number of points in the closed unit interval [0, 1], some of those points will get arbitrarily close to some real number in that space.
For instance, some of the numbers in the sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ... accumulate to 0 (while others accumulate to 1).
Since neither 0 nor 1 is a member of the open unit interval (0, 1), those same sets of points would not accumulate to any point of it, so the open unit interval is not compact. Although subsets (subspaces) of Euclidean space can be compact, the entire space itself is not compact, since it is not bounded. For example, considering ℝ¹ (the real number line), the sequence of points 0, 1, 2, 3, ... has no subsequence that converges to any real number.
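As a quick numerical illustration (not part of the article), the interleaved sequence 1/2, 4/5, 1/3, 5/6, ... quoted above can be generated explicitly; the indexing formula below is one of several ways to reconstruct it:

```python
from fractions import Fraction

# Interleaved sequence 1/2, 4/5, 1/3, 5/6, 1/4, 6/7, ...:
# odd-numbered terms 1/(k+1) accumulate at 0, even-numbered terms
# (k+3)/(k+4) accumulate at 1.
def term(n):
    k = (n + 1) // 2               # 1-indexed position within each subsequence
    if n % 2 == 1:                 # odd positions: 1/2, 1/3, 1/4, ...
        return Fraction(1, k + 1)
    return Fraction(k + 3, k + 4)  # even positions: 4/5, 5/6, 6/7, ...

seq = [term(n) for n in range(1, 7)]
assert seq == [Fraction(1, 2), Fraction(4, 5), Fraction(1, 3),
               Fraction(5, 6), Fraction(1, 4), Fraction(6, 7)]

# The two subsequences get arbitrarily close to 0 and 1 respectively,
# but neither limit lies in the open interval (0, 1).
assert float(term(2001)) < 0.001   # odd subsequence -> 0
assert float(term(2000)) > 0.999   # even subsequence -> 1
```

Both accumulation points 0 and 1 belong to [0, 1] but not to (0, 1), which is exactly why the closed interval is compact and the open one is not.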
Compactness was formally introduced by Maurice Fréchet in 1906 to generalize the Bolzano–Weierstrass theorem from spaces of geometrical points to spaces of functions. The Arzelà–Ascoli theorem and the Peano existence theorem exemplify applications of this notion of compactness to classical analysis. Following its initial introduction, various equivalent notions of compactness, including sequential compactness and limit point compactness, were developed in general metric spaces. In general topological spaces, however, these notions of compactness are not necessarily equivalent. The most useful notion — and the standard definition of the unqualified term compactness — is phrased in terms of the existence of finite families of open sets that "cover" the space, in the sense that each point of the space lies in some set contained in the family. This more subtle notion, introduced by Pavel Alexandrov and Pavel Urysohn in 1929, exhibits compact spaces as generalizations of finite sets. In spaces that are compact in this sense, it is often possible to patch together information that holds locally – that is, in a neighborhood of each point – into corresponding statements that hold throughout the space, and many theorems are of this character.
The term compact set is sometimes used as a synonym for compact space, but also often refers to a compact subspace of a topological space.
== Historical development ==
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point.
Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected.
The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points.
The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà.
The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point". Towards the beginning of the twentieth century, results similar to that of Arzelà and Ascoli began to accumulate in the area of integral equations, as investigated by David Hilbert and Erhard Schmidt.
For a certain class of Green's functions coming from solutions of integral equations, Schmidt had shown that a property analogous to the Arzelà–Ascoli theorem held in the sense of mean convergence – or convergence in what would later be dubbed a Hilbert space. This ultimately led to the notion of a compact operator as an offshoot of the general notion of a compact space.
It was Maurice Fréchet who, in 1906, had distilled the essence of the Bolzano–Weierstrass property and coined the term compactness to refer to this general phenomenon (he used the term already in his 1904 paper which led to the famous 1906 thesis).
However, a different notion of compactness altogether had also slowly emerged at the end of the 19th century from the study of the continuum, which was seen as fundamental for the rigorous formulation of analysis.
In 1870, Eduard Heine showed that a continuous function defined on a closed and bounded interval was in fact uniformly continuous. In the course of the proof, he made use of a lemma that from any countable cover of the interval by smaller open intervals, it was possible to select a finite number of these that also covered it.
The significance of this lemma was recognized by Émile Borel (1895), and it was generalized to arbitrary collections of intervals by Pierre Cousin (1895) and Henri Lebesgue (1904). The Heine–Borel theorem, as the result is now known, is another special property possessed by closed and bounded sets of real numbers.
This property was significant because it allowed for the passage from local information about a set (such as the continuity of a function) to global information about the set (such as the uniform continuity of a function). This sentiment was expressed by Lebesgue (1904), who also exploited it in the development of the integral now bearing his name. Ultimately, the Russian school of point-set topology, under the direction of Pavel Alexandrov and Pavel Urysohn, formulated Heine–Borel compactness in a way that could be applied to the modern notion of a topological space. Alexandrov & Urysohn (1929) showed that the earlier version of compactness due to Fréchet, now called (relative) sequential compactness, under appropriate conditions followed from the version of compactness that was formulated in terms of the existence of finite subcovers. It was this notion of compactness that became the dominant one, because it was not only a stronger property, but it could be formulated in a more general setting with a minimum of additional technical machinery, as it relied only on the structure of the open sets in a space.
== Basic examples ==
Any finite space is compact; a finite subcover can be obtained by selecting, for each point, an open set containing it. A nontrivial example of a compact space is the (closed) unit interval [0,1] of real numbers. If one chooses an infinite number of distinct points in the unit interval, then there must be some accumulation point among these points in that interval. For instance, the odd-numbered terms of the sequence 1, 1/2, 1/3, 3/4, 1/5, 5/6, 1/7, 7/8, ... get arbitrarily close to 0, while the even-numbered ones get arbitrarily close to 1. The given example sequence shows the importance of including the boundary points of the interval, since the limit points must be in the space itself — an open (or half-open) interval of the real numbers is not compact. It is also crucial that the interval be bounded, since in the interval [0,∞), one could choose the sequence of points 0, 1, 2, 3, ..., of which no sub-sequence ultimately gets arbitrarily close to any given real number.
In two dimensions, closed disks are compact since for any infinite number of points sampled from a disk, some subset of those points must get arbitrarily close either to a point within the disc, or to a point on the boundary. However, an open disk is not compact, because a sequence of points can tend to the boundary – without getting arbitrarily close to any point in the interior. Likewise, spheres are compact, but a sphere missing a point is not since a sequence of points can still tend to the missing point, thereby not getting arbitrarily close to any point within the space. Lines and planes are not compact, since one can take a set of equally-spaced points in any given direction without approaching any point.
== Definitions ==
Various definitions of compactness may apply, depending on the level of generality.
A subset of Euclidean space in particular is called compact if it is closed and bounded. This implies, by the Bolzano–Weierstrass theorem, that any infinite sequence from the set has a subsequence that converges to a point in the set. Various equivalent notions of compactness, such as sequential compactness and limit point compactness, can be developed in general metric spaces.
In contrast, the different notions of compactness are not equivalent in general topological spaces, and the most useful notion of compactness – originally called bicompactness – is defined using covers consisting of open sets (see Open cover definition below).
That this form of compactness holds for closed and bounded subsets of Euclidean space is known as the Heine–Borel theorem. Compactness, when defined in this manner, often allows one to take information that is known locally – in a neighbourhood of each point of the space – and to extend it to information that holds globally throughout the space. An example of this phenomenon is Dirichlet's theorem, to which it was originally applied by Heine, that a continuous function on a compact interval is uniformly continuous; here, continuity is a local property of the function, and uniform continuity the corresponding global property.
=== Open cover definition ===
Formally, a topological space X is called compact if every open cover of X has a finite subcover. That is, X is compact if for every collection C of open subsets of X such that
X = ⋃_{S∈C} S,
there is a finite subcollection F ⊆ C such that
X = ⋃_{S∈F} S.
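Because the definition quantifies over all open covers, it can only be checked exhaustively for finite data; the following sketch (function and variable names are my own, purely illustrative) brute-forces the finite-subcover condition for a cover of a finite set:

```python
from itertools import combinations

def has_finite_subcover(space, cover):
    """Return True if some finite subcollection of `cover` covers `space`.

    Brute-force search over all subcollections, smallest first."""
    cover = [frozenset(s) for s in cover]
    for r in range(len(cover) + 1):
        for sub in combinations(cover, r):
            union = set().union(*sub)
            if union >= space:
                return True
    return False

# A four-point space with the discrete topology: the open cover by
# singletons is itself finite, so a finite subcover trivially exists.
space = {0, 1, 2, 3}
singletons = [{x} for x in space]
assert has_finite_subcover(space, singletons)

# For an infinite space such as (0, 1) with the cover (1/n, 1 - 1/n),
# no finite subcover exists -- but that cannot be enumerated this way.
```

The interesting content of compactness lives in the infinite case, where no such enumeration is possible and one must argue abstractly.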
Some branches of mathematics such as algebraic geometry, typically influenced by the French school of Bourbaki, use the term quasi-compact for the general notion, and reserve the term compact for topological spaces that are both Hausdorff and quasi-compact. A compact set is sometimes referred to as a compactum, plural compacta.
=== Compactness of subsets ===
A subset K of a topological space X is said to be compact if it is compact as a subspace (in the subspace topology). That is, K is compact if for every arbitrary collection C of open subsets of X such that
K ⊆ ⋃_{S∈C} S,
there is a finite subcollection F ⊆ C such that
K ⊆ ⋃_{S∈F} S.
Because compactness is a topological property, the compactness of a subset depends only on the subspace topology induced on it. It follows that, if K ⊂ Z ⊂ Y, with subset Z equipped with the subspace topology, then K is compact in Z if and only if K is compact in Y.
=== Characterization ===
If X is a topological space then the following are equivalent:
X is compact; i.e., every open cover of X has a finite subcover.
X has a sub-base such that every cover of the space, by members of the sub-base, has a finite subcover (Alexander's sub-base theorem).
X is Lindelöf and countably compact.
Any collection of closed subsets of X with the finite intersection property has nonempty intersection.
Every net on X has a convergent subnet (see the article on nets for a proof).
Every filter on X has a convergent refinement.
Every net on X has a cluster point.
Every filter on X has a cluster point.
Every ultrafilter on X converges to at least one point.
Every infinite subset of X has a complete accumulation point.
For every topological space Y, the projection X × Y → Y is a closed mapping (see proper map).
Every open cover linearly ordered by subset inclusion contains X.
Bourbaki defines a compact space (quasi-compact space) as a topological space where each filter has a cluster point (i.e., 8. in the above).
==== Euclidean space ====
For any subset A of Euclidean space, A is compact if and only if it is closed and bounded; this is the Heine–Borel theorem.
As a Euclidean space is a metric space, the conditions in the next subsection also apply to all of its subsets. Of all of the equivalent conditions, it is in practice easiest to verify that a subset is closed and bounded, for example, for a closed interval or closed n-ball.
==== Metric spaces ====
For any metric space (X, d), the following are equivalent (assuming countable choice):
(X, d) is compact.
(X, d) is complete and totally bounded (this is also equivalent to compactness for uniform spaces).
(X, d) is sequentially compact; that is, every sequence in X has a convergent subsequence whose limit is in X (this is also equivalent to compactness for first-countable uniform spaces).
(X, d) is limit point compact (also called weakly countably compact); that is, every infinite subset of X has at least one limit point in X.
(X, d) is countably compact; that is, every countable open cover of X has a finite subcover.
(X, d) is a continuous image of the Cantor set.
Every decreasing nested sequence of nonempty closed subsets S1 ⊇ S2 ⊇ ... in (X, d) has a nonempty intersection.
Every increasing nested sequence of proper open subsets S1 ⊆ S2 ⊆ ... in (X, d) fails to cover X.
A compact metric space (X, d) also satisfies the following properties:
Lebesgue's number lemma: For every open cover of X, there exists a number δ > 0 such that every subset of X of diameter < δ is contained in some member of the cover.
(X, d) is second-countable, separable and Lindelöf – these three conditions are equivalent for metric spaces. The converse is not true; e.g., a countable discrete space satisfies these three conditions, but is not compact.
X is closed and bounded (as a subset of any metric space whose restricted metric is d). The converse may fail for a non-Euclidean space; e.g. the real line equipped with the discrete metric is closed and bounded but not compact, as the collection of all singletons of the space is an open cover which admits no finite subcover. It is complete but not totally bounded.
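As an informal check of the Lebesgue number lemma listed above, the following sketch (the cover and the value of δ are my own illustrative choices, not from the article) verifies on a grid that δ = 0.2 works for a two-interval open cover of [0, 1]:

```python
# Open cover of [0, 1] by two overlapping open intervals:
U1 = (-0.1, 0.6)
U2 = (0.4, 1.1)

def contained(a, b, interval):
    """Is the closed interval [a, b] inside the open interval?"""
    lo, hi = interval
    return lo < a and b < hi

# Lebesgue number: delta = 0.2.  Every sub-interval of [0, 1] with
# diameter < delta lies entirely inside U1 or U2.  Check on a fine grid.
delta = 0.2
n = 1000
for i in range(n):
    a = i / n
    b = min(1.0, a + delta - 1e-9)   # a set of diameter just under delta
    assert contained(a, b, U1) or contained(a, b, U2)
```

The grid check is of course no proof, but here the case split is easy to see: a set of diameter under 0.2 whose supremum reaches 0.6 must have infimum above 0.4, so it sits inside U2; otherwise it sits inside U1.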
==== Ordered spaces ====
For an ordered space (X, <) (i.e. a totally ordered set equipped with the order topology), the following are equivalent:
(X, <) is compact.
Every subset of X has a supremum (i.e. a least upper bound) in X.
Every subset of X has an infimum (i.e. a greatest lower bound) in X.
Every nonempty closed subset of X has a maximum and a minimum element.
An ordered space satisfying (any one of) these conditions is called a complete lattice.
In addition, the following are equivalent for all ordered spaces (X, <), and (assuming countable choice) are true whenever (X, <) is compact. (The converse in general fails if (X, <) is not also metrizable.):
Every sequence in (X, <) has a subsequence that converges in (X, <).
Every monotone increasing sequence in X converges to a unique limit in X.
Every monotone decreasing sequence in X converges to a unique limit in X.
Every decreasing nested sequence of nonempty closed subsets S1 ⊇ S2 ⊇ ... in (X, <) has a nonempty intersection.
Every increasing nested sequence of proper open subsets S1 ⊆ S2 ⊆ ... in (X, <) fails to cover X.
==== Characterization by continuous functions ====
Let X be a topological space and C(X) the ring of real continuous functions on X.
For each p ∈ X, the evaluation map ev_p : C(X) → ℝ given by ev_p(f) = f(p) is a ring homomorphism.
The kernel of evp is a maximal ideal, since the residue field C(X)/ker evp is the field of real numbers, by the first isomorphism theorem. A topological space X is pseudocompact if and only if every maximal ideal in C(X) has residue field the real numbers. For completely regular spaces, this is equivalent to every maximal ideal being the kernel of an evaluation homomorphism. There are pseudocompact spaces that are not compact, though.
In general, for non-pseudocompact spaces there are always maximal ideals m in C(X) such that the residue field C(X)/m is a (non-Archimedean) hyperreal field. The framework of non-standard analysis allows for the following alternative characterization of compactness: a topological space X is compact if and only if every point x of the natural extension *X is infinitely close to a point x0 of X (more precisely, x is contained in the monad of x0).
==== Hyperreal definition ====
A space X is compact if its hyperreal extension *X (constructed, for example, by the ultrapower construction) has the property that every point of *X is infinitely close to some point of X ⊂ *X. For example, an open real interval X = (0, 1) is not compact because its hyperreal extension *(0,1) contains infinitesimals, which are infinitely close to 0, which is not a point of X.
== Sufficient conditions ==
A closed subset of a compact space is compact.
A finite union of compact sets is compact.
A continuous image of a compact space is compact.
The intersection of any non-empty collection of compact subsets of a Hausdorff space is compact (and closed);
If X is not Hausdorff then the intersection of two compact subsets may fail to be compact (see footnote for example).
The product of any collection of compact spaces is compact. (This is Tychonoff's theorem, which is equivalent to the axiom of choice.)
In a metrizable space, a subset is compact if and only if it is sequentially compact (assuming countable choice).
A finite set endowed with any topology is compact.
== Properties of compact spaces ==
A compact subset of a Hausdorff space X is closed.
If X is not Hausdorff then a compact subset of X may fail to be a closed subset of X (see footnote for example).
If X is not Hausdorff then the closure of a compact set may fail to be compact (see footnote for example).
In any topological vector space (TVS), a compact subset is complete. However, every non-Hausdorff TVS contains compact (and thus complete) subsets that are not closed.
If A and B are disjoint compact subsets of a Hausdorff space X, then there exist disjoint open sets U and V in X such that A ⊆ U and B ⊆ V.
A continuous bijection from a compact space into a Hausdorff space is a homeomorphism.
A compact Hausdorff space is normal and regular.
If a space X is compact and Hausdorff, then no finer topology on X is compact and no coarser topology on X is Hausdorff.
If a subset of a metric space (X, d) is compact then it is d-bounded.
=== Functions and compact spaces ===
Since a continuous image of a compact space is compact, the extreme value theorem holds for such spaces: a continuous real-valued function on a nonempty compact space is bounded above and attains its supremum.
(Slightly more generally, this is true for an upper semicontinuous function.) As a sort of converse to the above statements, the pre-image of a compact space under a proper map is compact.
=== Compactifications ===
Every topological space X is an open dense subspace of a compact space having at most one point more than X, by the Alexandroff one-point compactification.
By the same construction, every locally compact Hausdorff space X is an open dense subspace of a compact Hausdorff space having at most one point more than X.
=== Ordered compact spaces ===
A nonempty compact subset of the real numbers has a greatest element and a least element.
Let X be a simply ordered set endowed with the order topology.
Then X is compact if and only if X is a complete lattice (i.e. all subsets have suprema and infima).
== Examples ==
Any finite topological space, including the empty set, is compact. More generally, any space with a finite topology (only finitely many open sets) is compact; this includes in particular the trivial topology.
Any space carrying the cofinite topology is compact.
Any locally compact Hausdorff space can be turned into a compact space by adding a single point to it, by means of Alexandroff one-point compactification. The one-point compactification of ℝ is homeomorphic to the circle S1; the one-point compactification of ℝ² is homeomorphic to the sphere S2. Using the one-point compactification, one can also easily construct compact spaces which are not Hausdorff, by starting with a non-Hausdorff space.
The right order topology or left order topology on any bounded totally ordered set is compact. In particular, Sierpiński space is compact.
No discrete space with an infinite number of points is compact. The collection of all singletons of the space is an open cover which admits no finite subcover. Finite discrete spaces are compact.
In ℝ carrying the lower limit topology, no uncountable set is compact.
In the cocountable topology on an uncountable set, no infinite set is compact. Like the previous example, the space as a whole is not locally compact but is still Lindelöf.
The closed unit interval [0, 1] is compact. This follows from the Heine–Borel theorem. The open interval (0, 1) is not compact: the open cover (1/n, 1 − 1/n) for n = 3, 4, ... does not have a finite subcover. Similarly, the set of rational numbers in the closed interval [0, 1] is not compact: the sets of rational numbers in the intervals [0, 1/π − 1/n] and [1/π + 1/n, 1] cover all the rationals in [0, 1] for n = 4, 5, ... but this cover does not have a finite subcover. Here, the sets are open in the subspace topology even though they are not open as subsets of ℝ.
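The failure of the cover (1/n, 1 − 1/n) of (0, 1) to admit a finite subcover can be demonstrated concretely; the helper below is my own illustration, using exact rational arithmetic:

```python
from fractions import Fraction

# The open cover U_n = (1/n, 1 - 1/n), n = 3, 4, ..., of the interval (0, 1).
# The intervals are nested, so any finite subcollection {U_n : n <= N} has
# union (1/N, 1 - 1/N), which misses every point of (0, 1/N].
def covered(x, N):
    """Is x in the union of U_3, ..., U_N?"""
    return any(Fraction(1, n) < x < 1 - Fraction(1, n) for n in range(3, N + 1))

for N in (3, 10, 100):
    witness = Fraction(1, 2 * N)      # a point of (0, 1) left uncovered
    assert 0 < witness < 1
    assert not covered(witness, N)

assert covered(Fraction(1, 2), 3)     # 1/2 is in U_3 = (1/3, 2/3)
```

However large a finite subcollection one takes, a witness point near 0 (or near 1) escapes it, which is exactly the open-cover formulation of non-compactness.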
The set ℝ of all real numbers is not compact as there is a cover of open intervals that does not have a finite subcover. For example, the intervals (n − 1, n + 1), where n takes all integer values in Z, cover ℝ but there is no finite subcover.
On the other hand, the extended real number line carrying the analogous topology is compact; note that the cover described above would never reach the points at infinity and thus would not cover the extended real line. In fact, the extended real line is homeomorphic to [−1, 1]: map each infinity to the corresponding endpoint ±1, and map each real number x to x/(1 + |x|). Since homeomorphisms preserve covers, the Heine–Borel property can be inferred.
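One standard homeomorphism between the extended real line and [−1, 1] is x ↦ x/(1 + |x|), with ±∞ sent to ±1. The sketch below (my own illustration, using a floating-point tolerance) checks that this map is invertible:

```python
import math

# f(x) = x / (1 + |x|) maps the extended reals bijectively onto [-1, 1],
# with f(+inf) = 1 and f(-inf) = -1.
def f(x):
    if math.isinf(x):
        return math.copysign(1.0, x)
    return x / (1 + abs(x))

def f_inv(y):
    if abs(y) == 1.0:
        return math.copysign(math.inf, y)
    return y / (1 - abs(y))

# The round trip recovers the original point (up to rounding).
for x in (-math.inf, -100.0, -1.0, 0.0, 0.5, 3.0, math.inf):
    assert math.isclose(f_inv(f(x)), x)
```

Because f and its inverse are both continuous (in the extended-real topology), compactness of [−1, 1] transfers to the extended real line.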
For every natural number n, the n-sphere is compact. Again from the Heine–Borel theorem, the closed unit ball of any finite-dimensional normed vector space is compact. This is not true for infinite dimensions; in fact, a normed vector space is finite-dimensional if and only if its closed unit ball is compact.
On the other hand, the closed unit ball of the dual of a normed space is compact for the weak-* topology. (Alaoglu's theorem)
The Cantor set is compact. In fact, every compact metric space is a continuous image of the Cantor set.
Consider the set K of all functions f : ℝ → [0, 1] from the real number line to the closed unit interval, and define a topology on K so that a sequence {f_n} in K converges towards f ∈ K if and only if {f_n(x)} converges towards f(x) for all real numbers x. There is only one such topology; it is called the topology of pointwise convergence or the product topology. Then K is a compact topological space; this follows from the Tychonoff theorem.
A subset of the Banach space of real-valued continuous functions on a compact Hausdorff space is relatively compact if and only if it is equicontinuous and pointwise bounded (Arzelà–Ascoli theorem).
Consider the set K of all functions f : [0, 1] → [0, 1] satisfying the Lipschitz condition |f(x) − f(y)| ≤ |x − y| for all x, y ∈ [0, 1]. Consider on K the metric induced by the uniform distance
d(f, g) = sup_{x∈[0,1]} |f(x) − g(x)|.
Then by the Arzelà–Ascoli theorem the space K is compact.
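The uniform distance can be approximated numerically; in the sketch below (grid size and example functions are my own illustrative choices) a grid maximum underestimates the supremum by at most the grid step, since the difference of two 1-Lipschitz functions is 2-Lipschitz:

```python
# Approximate the uniform distance d(f, g) = sup |f(x) - g(x)| over [0, 1]
# by a maximum over a grid of step h = 1/n.  For 1-Lipschitz f and g the
# difference f - g is 2-Lipschitz, so the grid maximum is within h of the
# true supremum.
def uniform_distance(f, g, n=10_000):
    h = 1.0 / n
    return max(abs(f(i * h) - g(i * h)) for i in range(n + 1))

# Two members of the set K from the text (both 1-Lipschitz into [0, 1]):
f = lambda x: x
g = lambda x: 0.5
d = uniform_distance(f, g)
assert abs(d - 0.5) < 1e-3   # true sup is |1 - 0.5| = 0.5, attained at x = 1
```

The uniform Lipschitz bound is what gives equicontinuity, which (together with pointwise boundedness) is exactly the hypothesis of the Arzelà–Ascoli theorem.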
The spectrum of any bounded linear operator on a Banach space is a nonempty compact subset of the complex numbers ℂ. Conversely, any compact subset of ℂ arises in this manner, as the spectrum of some bounded linear operator. For instance, a diagonal operator on the Hilbert space ℓ² may have any compact nonempty subset of ℂ as spectrum.
The space of Borel probability measures on a compact Hausdorff space is compact for the vague topology, by the Alaoglu theorem.
A collection of probability measures on the Borel sets of Euclidean space is called tight if, for any positive epsilon, there exists a compact subset containing all but at most epsilon of the mass of each of the measures. Helly's theorem then asserts that a collection of probability measures is relatively compact for the vague topology if and only if it is tight.
=== Algebraic examples ===
Topological groups such as an orthogonal group are compact, while groups such as a general linear group are not.
Since the p-adic integers are homeomorphic to the Cantor set, they form a compact set.
Any global field K is a discrete additive subgroup of its adele ring, and the quotient space is compact. This was used in John Tate's thesis to allow harmonic analysis to be used in number theory.
The spectrum of any commutative ring with the Zariski topology (that is, the set of all prime ideals) is compact, but never Hausdorff (except in trivial cases). In algebraic geometry, such topological spaces are examples of quasi-compact schemes, "quasi" referring to the non-Hausdorff nature of the topology.
The spectrum of a Boolean algebra is compact, a fact which is part of the Stone representation theorem. Stone spaces, compact totally disconnected Hausdorff spaces, form the abstract framework in which these spectra are studied. Such spaces are also useful in the study of profinite groups.
The structure space of a commutative unital Banach algebra is a compact Hausdorff space.
The Hilbert cube is compact, again a consequence of Tychonoff's theorem.
A profinite group (e.g. Galois group) is compact.
== See also ==
== Notes ==
== References ==
== Bibliography ==
== External links ==
Sundström, Manya Raman (2010). "A pedagogical history of compactness". arXiv:1006.4131v1 [math.HO].
This article incorporates material from Examples of compact spaces on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Linear or point-projection perspective (from Latin perspicere 'to see through') is one of two types of graphical projection perspective in the graphic arts; the other is parallel projection. Linear perspective is an approximate representation, generally on a flat surface, of an image as it is seen by the eye. Perspective drawing is useful for representing a three-dimensional scene in a two-dimensional medium, like paper. It is based on the optical fact that, to a viewer, an object appears N times (linearly) smaller if it is moved N times farther from the eye than its original distance.
The most characteristic features of linear perspective are that objects appear smaller as their distance from the observer increases, and that they are subject to foreshortening, meaning that an object's dimensions parallel to the line of sight appear shorter than its dimensions perpendicular to the line of sight. All objects will recede to points in the distance, usually along the horizon line, but also above and below the horizon line depending on the view used.
Italian Renaissance painters and architects including Filippo Brunelleschi, Leon Battista Alberti, Masaccio, Paolo Uccello, Piero della Francesca and Luca Pacioli studied linear perspective, wrote treatises on it, and incorporated it into their artworks.
== Overview ==
Linear or point-projection perspective works by placing an imaginary flat plane close to an object under observation and directly facing an observer's eyes (i.e., the observer is on a normal, or perpendicular, line to the plane). Straight lines are then drawn from every point of the object to the observer's eye. The set of points on the plane where those lines pass through it forms a point-projection perspective image resembling what is seen by the observer.
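The projection just described can be sketched numerically. In the hypothetical helper below (names invented for the example), the observer's eye is at the origin and the picture plane is z = d; each scene point maps to the spot where its sight line pierces the plane:

```python
import numpy as np

def project(points, d=1.0):
    """Central projection onto the picture plane z = d,
    with the observer's eye at the origin looking along +z."""
    pts = np.asarray(points, dtype=float)
    # Each image point is where the line from the eye through the
    # scene point intersects the plane z = d: (x, y, z) -> d*(x/z, y/z).
    return d * pts[:, :2] / pts[:, 2:3]

# A unit square face at depth 2 and the same face at depth 4:
near = [[0, 0, 2], [1, 0, 2], [1, 1, 2], [0, 1, 2]]
far  = [[0, 0, 4], [1, 0, 4], [1, 1, 4], [0, 1, 4]]

print(project(near))  # edge length 0.5 on the picture plane
print(project(far))   # edge length 0.25: twice as far, half as large
```

This also exhibits the basic fact stated above: moving an object N times farther from the eye shrinks its image by the factor N.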
=== Examples of one-point perspective ===
=== Examples of two-point perspective ===
=== Examples of three-point perspective ===
=== Examples of curvilinear perspective ===
Additionally, a central vanishing point can be used (just as with one-point perspective) to indicate frontal (foreshortened) depth.
== History ==
=== Early history ===
The earliest art paintings and drawings typically sized many objects and characters hierarchically according to their spiritual or thematic importance, not their distance from the viewer, and did not use foreshortening. The most important figures are often shown as the highest in a composition, also from hieratic motives, leading to the so-called "vertical perspective", common in the art of Ancient Egypt, where a group of "nearer" figures are shown below the larger figure or figures; simple overlapping was also employed to relate distance. Additionally, oblique foreshortening of round elements like shields and wheels is evident in Ancient Greek red-figure pottery.
Systematic attempts to evolve a system of perspective are usually considered to have begun around the fifth century BC in the art of ancient Greece, as part of a developing interest in illusionism allied to theatrical scenery. This was detailed within Aristotle's Poetics as skenographia: using flat panels on a stage to give the illusion of depth. The philosophers Anaxagoras and Democritus worked out geometric theories of perspective for use with skenographia. Alcibiades had paintings in his house designed using skenographia, so this art was not confined merely to the stage. Euclid in his Optics (c. 300 BC) argues correctly that the perceived size of an object is not related to its distance from the eye by a simple proportion. In the first-century BC frescoes of the Villa of P. Fannius Synistor, multiple vanishing points are used in a systematic but not fully consistent manner.
Chinese artists made use of oblique projection from the first or second century until the 18th century. It is not certain how they came to use the technique; Dubery and Willats (1983) speculate that the Chinese acquired the technique from India, which acquired it from Ancient Rome, while others credit it as an indigenous invention of Ancient China. Oblique projection is also seen in Japanese art, such as in the Ukiyo-e paintings of Torii Kiyonaga (1752–1815).
By the later periods of antiquity, artists, especially those in less popular traditions, were well aware that distant objects could be shown smaller than those close at hand for increased realism, but whether this convention was actually used in a work depended on many factors. Some of the paintings found in the ruins of Pompeii show a remarkable realism and perspective for their time. It has been claimed that comprehensive systems of perspective were evolved in antiquity, but most scholars do not accept this. Hardly any of the many works where such a system would have been used have survived. A passage in Philostratus suggests that classical artists and theorists thought in terms of "circles" at equal distance from the viewer, like a classical semi-circular theatre seen from the stage. The roof beams in rooms in the Vatican Virgil, from about 400 AD, are shown converging, more or less, on a common vanishing point, but this is not systematically related to the rest of the composition.
Medieval artists in Europe, like those in the Islamic world and China, were aware of the general principle of varying the relative size of elements according to distance, but even more than classical art were perfectly ready to override it for other reasons. Buildings were often shown obliquely according to a particular convention. The use and sophistication of attempts to convey distance increased steadily during the period, but without a basis in a systematic theory. Byzantine art was also aware of these principles, but also used the reverse perspective convention for the setting of principal figures. Ambrogio Lorenzetti painted a floor with convergent lines in his Presentation at the Temple (1342), though the rest of the painting lacks perspective elements.
=== Renaissance ===
It is generally accepted that Filippo Brunelleschi conducted a series of experiments between 1415 and 1420, which included making drawings of various Florentine buildings in correct perspective. According to Vasari and Antonio Manetti, in about 1420, Brunelleschi demonstrated his discovery of perspective by having people look through a hole in his painting from the back side. Through it, they would see a building such as the Florence Baptistery, for which the painting was made. When Brunelleschi lifted a mirror between the building and the painting, the mirror reflected the painting to the observer looking through the hole, so that the observer could compare how closely the painting matched the building. (The vanishing point is centered from the perspective of an experiment participant.) Brunelleschi applied this new system of perspective to his paintings around 1425.
This scenario is indicative, but faces several problems that are still debated. First of all, nothing can be said for certain about the correctness of his perspective construction of the Baptistery of San Giovanni because Brunelleschi's panel is lost. Second, no other perspective painting or drawing by Brunelleschi is known. (In fact, Brunelleschi was not known to have painted at all.) Third, in the account written by Antonio Manetti in his Vita di Ser Brunellesco at the end of the 15th century on Brunelleschi's panel, there is not a single occurrence of the word "experiment". Fourth, the conditions listed by Manetti are contradictory with each other. For example, the description of the eyepiece sets a visual field of 15°, much narrower than the visual field resulting from the urban landscape described.
Soon after Brunelleschi's demonstrations, nearly every interested artist in Florence and in Italy used geometrical perspective in their paintings and sculpture, notably Donatello, Masaccio, Lorenzo Ghiberti, Masolino da Panicale, Paolo Uccello, and Filippo Lippi. Not only was perspective a way of showing depth, it was also a new method of creating a composition. Visual art could now depict a single, unified scene rather than a combination of several. Early examples include Masolino's St. Peter Healing a Cripple and the Raising of Tabitha (c. 1423), Donatello's The Feast of Herod (c. 1427), as well as Ghiberti's Jacob and Esau and other panels from the east doors of the Florence Baptistery. Masaccio (d. 1428) achieved an illusionistic effect by placing the vanishing point at the viewer's eye level in his Holy Trinity (c. 1427), and in The Tribute Money, it is placed behind the face of Jesus. In the late 15th century, Melozzo da Forlì first applied the technique of foreshortening (in Rome, Loreto, Forlì and others).
This overall story is based on qualitative judgments, and would need to be faced against the material evaluations that have been conducted on Renaissance perspective paintings. Apart from the paintings of Piero della Francesca, which are a model of the genre, the majority of 15th century works show serious errors in their geometric construction. This is true of Masaccio's Trinity fresco and of many works, including those by renowned artists like Leonardo da Vinci.
As shown by the quick proliferation of accurate perspective paintings in Florence, Brunelleschi likely understood (with help from his friend the mathematician Toscanelli), but did not publish the mathematics behind perspective. Decades later, his friend Leon Battista Alberti wrote De pictura (c. 1435), a treatise on proper methods of showing distance in painting. Alberti's primary breakthrough was not to show the mathematics in terms of conical projections, as it actually appears to the eye. Instead, he formulated the theory based on planar projections, or how the rays of light, passing from the viewer's eye to the landscape, would strike the picture plane (the painting). He was then able to calculate the apparent height of a distant object using two similar triangles. The mathematics behind similar triangles is relatively simple, having been long ago formulated by Euclid. Alberti was also trained in the science of optics through the school of Padua and under the influence of Biagio Pelacani da Parma who studied Alhazen's Book of Optics. This book, translated around 1200 into Latin, had laid the mathematical foundation for perspective in Europe.
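Alberti's similar-triangles calculation reduces to a single proportion: the image height on the picture plane relates to the object height as the plane's distance relates to the object's distance. The helper below is an illustrative sketch, not Alberti's own notation:

```python
def apparent_height(object_height, object_distance, picture_distance=1.0):
    # Similar triangles: the eye, the picture plane, and the top of the
    # object form a triangle similar to the one formed by the eye, the
    # object's base, and the object's top, so heights scale like distances.
    return object_height * picture_distance / object_distance

# A 2 m tall figure at 4 m, drawn on a picture plane 1 m from the eye:
print(apparent_height(2.0, 4.0))   # 0.5
# The same figure twice as far away appears half as tall:
print(apparent_height(2.0, 8.0))   # 0.25
```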
Piero della Francesca elaborated on De pictura in his De Prospectiva pingendi in the 1470s, making many references to Euclid. Alberti had limited himself to figures on the ground plane and giving an overall basis for perspective. Della Francesca fleshed it out, explicitly covering solids in any area of the picture plane. Della Francesca also started the now common practice of using illustrated figures to explain the mathematical concepts, making his treatise easier to understand than Alberti's. Della Francesca was also the first to accurately draw the Platonic solids as they would appear in perspective. Luca Pacioli's 1509 Divina proportione (Divine Proportion), illustrated by Leonardo da Vinci, summarizes the use of perspective in painting, including much of Della Francesca's treatise. Leonardo applied one-point perspective as well as shallow focus to some of his works.
Two-point perspective was demonstrated as early as 1525 by Albrecht Dürer, who studied perspective by reading Piero and Pacioli's works, in his Unterweisung der Messung ("Instruction of the Measurement").
== Limitations ==
Perspective images are created with reference to a particular center of vision for the picture plane. In order for the resulting image to appear identical to the original scene, a viewer must view the image from the exact vantage point used in the calculations relative to the image. Viewed from any other point, the image exhibits what appear to be distortions: a sphere drawn in perspective, for example, is stretched into an ellipse. These apparent distortions are more pronounced away from the center of the image, as the angle between a projected ray (from the scene to the eye) and the picture plane becomes more acute. Artists may choose to "correct" perspective distortions, for example by drawing all spheres as perfect circles, or by drawing figures as if centered on the direction of view. In practice, unless the viewer observes the image from an extreme angle, like standing far to the side of a painting, the perspective normally looks more or less correct; that it does is referred to as "Zeeman's Paradox".
== See also ==
Anamorphosis
Camera angle
Cutaway drawing
Perspective control
Perspective (geometry)
Trompe-l'œil
Uki-e
Zograscope
== Notes ==
== References ==
=== Sources ===
Edgerton, Samuel Y. (2009). The Mirror, the Window & the Telescope: How Renaissance Linear Perspective Changed Our Vision of the Universe. Ithaca, NY: Cornell University Press. ISBN 978-0-8014-4758-7.
== Further reading ==
Andersen, Kirsti (2007). The Geometry of an Art: The History of the Mathematical Theory of Perspective from Alberti to Monge. Springer.
Damisch, Hubert (1994). The Origin of Perspective, Translated by John Goodman. Cambridge, Massachusetts: MIT Press.
Gill, Robert W (1974). Perspective From Basic to Creative. Australia: Thames & Hudson.
Hyman, Isabelle, comp. (1974). Brunelleschi in Perspective. Englewood Cliffs, New Jersey: Prentice-Hall.
Kemp, Martin (1992). The Science of Art: Optical Themes in Western Art from Brunelleschi to Seurat. Yale University Press.
Pérez-Gómez, Alberto; Pelletier, Louise (1997). Architectural Representation and the Perspective Hinge. Cambridge, Massachusetts: MIT Press.
Raynaud, Dominique (2003). "Linear perspective in Masaccio's Trinity fresco: Demonstration or self-persuasion?". Nuncius. 18 (1): 331–344. doi:10.1163/182539103X00684.
Raynaud, Dominique (2014). Optics and the Rise of Perspective. A Study in Network Knowledge Diffusion. Oxford: Bardwell Press.
Raynaud, Dominique (2016). Studies on Binocular Vision. Archimedes. Vol. 47. Bibcode:2016sbvo.book.....R. doi:10.1007/978-3-319-42721-8. ISBN 978-3-319-42720-1. S2CID 151589160.
Vasari, Giorgio (1568). The Lives of the Artists. Florence, Italy.
== External links ==
Teaching Perspective in Art and Mathematics through Leonardo da Vinci's Work at Mathematical Association of America
Metaphysical Perspective in Ancient Roman-Wall Painting
How to Draw a Two Point Perspective Grid at Creating Comics
In mathematics and logic, an axiomatic system is a set of formal statements (i.e. axioms) used to logically derive other statements such as lemmas or theorems. A proof within an axiom system is a sequence of deductive steps that establishes a new statement as a consequence of the axioms. An axiom system is called complete with respect to a property if every formula with the property can be derived using the axioms. The more general term theory is at times used to refer to an axiomatic system and all its derived theorems.
In its pure form, an axiom system is effectively a syntactic construct and does not by itself refer to (or depend on) a formal structure, although axioms are often defined for that purpose. The more modern field of model theory refers to mathematical structures. The relationship between an axiom system and the models that correspond to it is often a major issue of interest.
== Properties ==
Four typical properties of an axiom system are consistency, relative consistency, completeness and independence. An axiomatic system is said to be consistent if it lacks contradiction. That is, it is impossible to derive both a statement and its negation from the system's axioms.
Consistency is a key requirement for most axiomatic systems, as the presence of contradiction would allow any statement to be proven (principle of explosion).
Relative consistency comes into play when we cannot prove the consistency of an axiom system outright. In some cases, however, we can show that an axiom system A is consistent provided another axiom system B is consistent.
In an axiomatic system, an axiom is called independent if it cannot be proven or disproven from other axioms in the system. A system is called independent if each of its underlying axioms is independent. Unlike consistency, in many cases independence is not a necessary requirement for a functioning axiomatic system — though it is usually sought after to minimize the number of axioms in the system.
An axiomatic system is called complete if for every statement, either itself or its negation is derivable from the system's axioms, i.e. every statement can be proven true or false by using the axioms. However, note that in some cases it may be undecidable if a statement can be proven or not.
== Axioms and models ==
A model for an axiomatic system is a well-defined formal structure, which assigns meaning to the undefined terms presented in the system, in a manner that is consistent with the relations defined in the system. If an axiom system has a model, the axioms are said to have been satisfied. The existence of a model which satisfies an axiom system proves the consistency of the system.
Models can also be used to show the independence of an axiom in the system. Constructing a model for a subsystem (one omitting a specific axiom) shows that the omitted axiom is independent, provided its correctness does not necessarily follow from the subsystem.
Two models are said to be isomorphic if a one-to-one correspondence can be found between their elements, in a manner that preserves their relationship. An axiomatic system for which every model is isomorphic to another is called categorical or categorial. However, this term should not be confused with the topic of category theory. The property of categoriality (categoricity) ensures the completeness of a system, however the converse is not true: Completeness does not ensure the categoriality (categoricity) of a system, since two models can differ in properties that cannot be expressed by the semantics of the system.
=== Example ===
As an example, consider the following axiomatic system, based on first-order logic with the following countably infinitely many axioms added (these can easily be formalized as an axiom schema):
{\displaystyle \exists x_{1}:\exists x_{2}:\lnot (x_{1}=x_{2})}
(informally, there exist two different items).
{\displaystyle \exists x_{1}:\exists x_{2}:\exists x_{3}:\lnot (x_{1}=x_{2})\land \lnot (x_{1}=x_{3})\land \lnot (x_{2}=x_{3})}
(informally, there exist three different items).
{\displaystyle ...}
Informally, this infinite set of axioms states that there are infinitely many different items. However, the concept of an infinite set cannot be defined within the system — let alone the cardinality of such a set.
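The instances of this axiom schema can be generated mechanically. The function below is an illustrative sketch (the name `at_least_n_axiom` is invented for the example) that prints the n-th instance as a formula string:

```python
from itertools import combinations

def at_least_n_axiom(n):
    """Return the axiom-schema instance asserting that
    at least n pairwise-distinct items exist."""
    quants = "".join(f"∃x{i}:" for i in range(1, n + 1))
    distinct = " ∧ ".join(f"¬(x{i}=x{j})"
                          for i, j in combinations(range(1, n + 1), 2))
    return quants + " " + distinct

print(at_least_n_axiom(2))  # ∃x1:∃x2: ¬(x1=x2)
print(at_least_n_axiom(3))  # ∃x1:∃x2:∃x3: ¬(x1=x2) ∧ ¬(x1=x3) ∧ ¬(x2=x3)
```

Each instance has finitely many symbols, but no single instance (and no finite conjunction of them) expresses "there are infinitely many items"; only the whole infinite schema does.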
The system has at least two different models – one is the natural numbers (isomorphic to any other countably infinite set), and another is the real numbers (isomorphic to any other set with the cardinality of the continuum). In fact, it has an infinite number of models, one for each cardinality of an infinite set. However, the property distinguishing these models is their cardinality — a property which cannot be defined within the system. Thus the system is not categorial. However it can be shown to be complete, for example by using the Łoś–Vaught test.
== Axiomatic method ==
Stating definitions and propositions so that each new term can be formally eliminated in favor of previously introduced terms requires primitive notions (axioms) to avoid infinite regress. This way of doing mathematics is called the axiomatic method.
A common attitude towards the axiomatic method is logicism. In their book Principia Mathematica, Alfred North Whitehead and Bertrand Russell attempted to show that all mathematical theory could be reduced to some collection of axioms. More generally, the reduction of a body of propositions to a particular collection of axioms underlies the mathematician's research program. This was very prominent in the mathematics of the twentieth century, in particular in subjects based around homological algebra.
The explication of the particular axioms used in a theory can help to clarify a suitable level of abstraction that the mathematician would like to work with. For example, mathematicians opted that rings need not be commutative, which differed from Emmy Noether's original formulation. Mathematicians decided to consider topological spaces more generally without the separation axiom which Felix Hausdorff originally formulated.
The Zermelo–Fraenkel set theory, a result of the axiomatic method applied to set theory, allowed the "proper" formulation of set-theory problems and helped avoid the paradoxes of naïve set theory. One such problem was the continuum hypothesis. Zermelo–Fraenkel set theory, with the historically controversial axiom of choice included, is commonly abbreviated ZFC, where "C" stands for "choice". Many authors use ZF to refer to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded. Today ZFC is the standard form of axiomatic set theory and as such is the most common foundation of mathematics.
=== History ===
Mathematical methods developed to some degree of sophistication in ancient Egypt, Babylon, India, and China, apparently without employing the axiomatic method.
Euclid of Alexandria authored the earliest extant axiomatic presentation of Euclidean geometry and number theory. His approach begins with five undeniable geometric assumptions, called axioms (or postulates). Then, using these axioms, he established the truth of other propositions by proofs, hence the axiomatic method.
Many axiomatic systems were developed in the nineteenth century, including non-Euclidean geometry, the foundations of real analysis, Cantor's set theory, Frege's work on foundations, and Hilbert's 'new' use of axiomatic method as a research tool. For example, group theory was first put on an axiomatic basis towards the end of that century. Once the axioms were clarified (that inverse elements should be required, for example), the subject could proceed autonomously, without reference to the transformation group origins of those studies.
=== Example: The Peano axiomatization of natural numbers ===
The mathematical system of natural numbers 0, 1, 2, 3, 4, ... is based on an axiomatic system first devised by the mathematician Giuseppe Peano in 1889. He chose the axioms, in the language of a single unary function symbol S (short for "successor"), for the set of natural numbers to be:
There is a natural number 0.
Every natural number a has a successor, denoted by Sa.
There is no natural number whose successor is 0.
Distinct natural numbers have distinct successors: if a ≠ b, then Sa ≠ Sb.
If a property is possessed by 0 and also by the successor of every natural number it is possessed by, then it is possessed by all natural numbers ("Induction axiom").
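The successor-based axioms above can be sketched in code. In the sketch below, natural numbers are encoded as nested tuples (an arbitrary illustrative encoding: 0 is the empty tuple, and S wraps its argument in one more tuple), with addition defined by recursion on the successor structure:

```python
# A finite sketch of the Peano axioms; 0 is (), and S(n) wraps n.
ZERO = ()

def S(n):            # the successor function
    return (n,)

def add(a, b):       # addition defined by recursion on the second argument:
    if b == ZERO:    #   a + 0 = a
        return a
    return S(add(a, b[0]))   # a + S(b) = S(a + b)

def to_int(n):       # decode for display
    return 0 if n == ZERO else 1 + to_int(n[0])

two   = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))  # 5
# Axiom: no natural number has 0 as its successor.
assert all(S(n) != ZERO for n in (ZERO, two, three))
```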
=== Axiomatization and proof ===
In mathematics, axiomatization is the process of taking a body of knowledge and working backwards towards its axioms. It is the formulation of a system of statements (i.e. axioms) that relate a number of primitive terms — in order that a consistent body of propositions may be derived deductively from these statements. Thereafter, the proof of any proposition should be, in principle, traceable back to these axioms.
If the formal system is not complete, not every proof can be traced back to the axioms of the system to which it belongs. For example, a number-theoretic statement might be expressible in the language of arithmetic (i.e. the language of the Peano axioms) while a proof might be given that appeals to topology or complex analysis. It might not be immediately clear whether another proof can be found that derives itself solely from the Peano axioms.
== See also ==
Axiom schema – Short notation for a set of statements that are taken to be true
Formalism – View that mathematics does not necessarily represent reality, but is more akin to a game
Gödel's incompleteness theorems – Limitative results in mathematical logic
Hilbert-style deduction system – System of formal deduction in logic
History of logic
List of logic systems
Logicism – Programme in the philosophy of mathematics
Zermelo–Fraenkel set theory – Standard system of axiomatic set theory, an axiomatic system for set theory and today's most common foundation for mathematics.
== References ==
== Further reading ==
"Axiomatic method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Eric W. Weisstein, Axiomatic System, From MathWorld—A Wolfram Web Resource. Mathworld.wolfram.com & Answers.com
In algebraic geometry, a complex algebraic variety is an algebraic variety (in the scheme sense or otherwise) over the field of complex numbers.
== Chow's theorem ==
Chow's theorem states that a projective complex analytic variety, i.e., a closed analytic subvariety of the complex projective space {\displaystyle \mathbb {C} \mathbf {P} ^{n}}, is an algebraic variety. These are usually simply referred to as projective varieties.
== Hironaka's theorem ==
Let X be a complex algebraic variety. Then there is a projective resolution of singularities {\displaystyle X'\to X}.
== Relation with similar concepts ==
Despite Chow's theorem, not every complex analytic variety is a complex algebraic variety.
== See also ==
Complete variety
Complex analytic variety
== References ==
== Bibliography ==
Abramovich, Dan (2017). "Resolution of singularities of complex algebraic varieties and their families". Proceedings of the International Congress of Mathematicians (ICM 2018). pp. 523–546. arXiv:1711.09976. doi:10.1142/9789813272880_0066. ISBN 978-981-327-287-3. S2CID 119708681.
Hironaka, Heisuke (1964). "Resolution of Singularities of an Algebraic Variety over a Field of Characteristic Zero: I". Annals of Mathematics. 79 (1): 109–203. doi:10.2307/1970486. JSTOR 1970486.
In mathematics, the idea of descent extends the intuitive idea of 'gluing' in topology. Since the topologists' glue is the use of equivalence relations on topological spaces, the theory starts with some ideas on identification.
== Descent of vector bundles ==
The case of the construction of vector bundles from data on a disjoint union of topological spaces is a straightforward place to start.
Suppose X is a topological space covered by open sets Xi. Let Y be the disjoint union of the Xi, so that there is a natural mapping
{\displaystyle p:Y\rightarrow X.}
We think of Y as 'above' X, with the Xi projecting 'down' onto X. With this language, the descent data consists of a vector bundle on Y (so, a bundle Vi given on each Xi), and our concern is to 'glue' those bundles Vi to make a single bundle V on X: V should, when restricted to Xi, give back Vi, up to a bundle isomorphism.
The data needed is then this: on each overlap {\displaystyle X_{ij}}, the intersection of Xi and Xj, we'll require mappings
{\displaystyle f_{ij}:V_{i}\rightarrow V_{j}}
to use to identify Vi and Vj there, fiber by fiber. Further the fij must satisfy conditions based on the reflexive, symmetric and transitive properties of an equivalence relation (gluing conditions). For example, the composition
{\displaystyle f_{jk}\circ f_{ij}=f_{ik}}
for transitivity (and choosing apt notation). The fii should be identity maps and hence symmetry becomes
{\displaystyle f_{ij}=f_{ji}^{-1}}
(so that it is fiberwise an isomorphism).
These are indeed standard conditions in fiber bundle theory (see transition map). One important application to note is change of fiber: if the fij are all you need to make a bundle, then there are many ways to make an associated bundle. That is, we can take essentially the same fij, acting on various fibers.
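The gluing conditions can be checked concretely with matrices acting on fibers. The transition maps below are hypothetical examples for a rank-2 bundle over three overlapping charts, chosen only to illustrate the transitivity (cocycle) and inverse conditions:

```python
import numpy as np

# Pick invertible fibrewise transition matrices f12 and f23 freely,
# and *define* f13 so that the cocycle condition f23 ∘ f12 = f13 holds.
f12 = np.array([[2.0, 1.0],
                [0.0, 1.0]])
f23 = np.array([[1.0, 0.0],
                [1.0, 3.0]])
f13 = f23 @ f12          # transitivity: f13 = f23 ∘ f12

assert np.allclose(f23 @ f12, f13)

# Symmetry: f21 must be the fibrewise inverse of f12.
f21 = np.linalg.inv(f12)
assert np.allclose(f21 @ f12, np.eye(2))
print("gluing conditions hold")
```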
Another major point is the relation with the chain rule: the discussion there of the way of constructing tensor fields can be summed up as "once you learn to descend the tangent bundle, for which transitivity is the Jacobian chain rule, the rest is just 'naturality of tensor constructions'".
To move closer towards the abstract theory we need to interpret the disjoint union of the {\displaystyle X_{ij}} now as {\displaystyle Y\times _{X}Y,}
the fiber product (here an equalizer) of two copies of the projection p. The bundles on the Xij that we must control are Vi and Vj, the pullbacks to the fiber of V via the two different projection maps to X.
Therefore, by going to a more abstract level one can eliminate the combinatorial side (that is, leave out the indices) and get something that makes sense for p not of the special form of covering with which we began. This then allows a category theory approach: what remains to do is to re-express the gluing conditions.
== History ==
The ideas were developed in the period 1955–1965 (which was roughly the time at which the requirements of algebraic topology were met but those of algebraic geometry were not). From the point of view of abstract category theory the work of comonads of Beck was a summation of those ideas; see Beck's monadicity theorem.
The difficulties of algebraic geometry with passage to the quotient are acute. The urgency (to put it that way) of the problem for the geometers accounts for the title of the 1959 Grothendieck seminar TDTE on theorems of descent and techniques of existence (see FGA) connecting the descent question with the representable functor question in algebraic geometry in general, and the moduli problem in particular.
== Fully faithful descent ==
Let
p
:
X
′
→
X
{\displaystyle p:X'\to X}
. Each sheaf F on X gives rise to a descent datum
{\displaystyle (F'=p^{*}F,\alpha :p_{0}^{*}F'\simeq p_{1}^{*}F'),\,p_{i}:X''=X'\times _{X}X'\to X',}
where {\displaystyle \alpha } satisfies the cocycle condition
{\displaystyle p_{02}^{*}\alpha =p_{12}^{*}\alpha \circ p_{01}^{*}\alpha ,\,p_{ij}:X'\times _{X}X'\times _{X}X'\to X'\times _{X}X'.}
The fully faithful descent theorem says: the functor {\displaystyle F\mapsto (F',\alpha )} is fully faithful. Descent theory gives conditions under which there is a fully faithful descent, and under which this functor is an equivalence of categories.
== See also ==
Grothendieck connection
Stack (mathematics)
Galois descent
Grothendieck topology
Fibered category
Beck's monadicity theorem
Cohomological descent
Faithfully flat descent
== References ==
SGA 1, Ch VIII – this is the main reference
Siegfried Bosch; Werner Lütkebohmert; Michel Raynaud (1990). Néron Models. Ergebnisse der Mathematik und Ihrer Grenzgebiete. 3. Folge. Vol. 21. Springer-Verlag. ISBN 3540505873. A chapter on the descent theory is more accessible than SGA.
Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical foundations. Special topics in order, topology, algebra, and sheaf theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
== Further reading ==
Other possible sources include:
Angelo Vistoli, Notes on Grothendieck topologies, fibered categories and descent theory arXiv:math.AG/0412512
Matthieu Romagny, A straight way to algebraic stacks
== External links ==
What is descent theory?
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).
This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.
== Definition ==
Given a nonnegative integer m, an order-{\displaystyle m} linear differential operator is a map {\displaystyle P} from a function space {\displaystyle {\mathcal {F}}_{1}} on {\displaystyle \mathbb {R} ^{n}} to another function space {\displaystyle {\mathcal {F}}_{2}} that can be written as:
{\displaystyle P=\sum _{|\alpha |\leq m}a_{\alpha }(x)D^{\alpha }\ ,}
where {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\cdots ,\alpha _{n})} is a multi-index of non-negative integers, {\displaystyle |\alpha |=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}, and for each {\displaystyle \alpha }, {\displaystyle a_{\alpha }(x)} is a function on some open domain in n-dimensional space. The operator
{\displaystyle D^{\alpha }} is interpreted as
{\displaystyle D^{\alpha }={\frac {\partial ^{|\alpha |}}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}
Thus for a function {\displaystyle f\in {\mathcal {F}}_{1}}:
{\displaystyle Pf=\sum _{|\alpha |\leq m}a_{\alpha }(x){\frac {\partial ^{|\alpha |}f}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}
The notation {\displaystyle D^{\alpha }} is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.
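The symmetry of second derivatives can be checked directly in a computer algebra system; the following sketch uses sympy with an arbitrary smooth test function chosen for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * y) * sp.sin(x)   # an arbitrary smooth test function

# Clairaut's theorem: the order of differentiation does not matter for
# smooth functions, which is what makes D^alpha well defined.
fxy = sp.diff(f, x, y)
fyx = sp.diff(f, y, x)
assert sp.simplify(fxy - fyx) == 0
```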
The polynomial p obtained by replacing partials {\displaystyle {\frac {\partial }{\partial x_{i}}}} by variables {\displaystyle \xi _{i}} in P is called the total symbol of P; i.e., the total symbol of P above is:
{\displaystyle p(x,\xi )=\sum _{|\alpha |\leq m}a_{\alpha }(x)\xi ^{\alpha }}
where {\displaystyle \xi ^{\alpha }=\xi _{1}^{\alpha _{1}}\cdots \xi _{n}^{\alpha _{n}}.}
The highest homogeneous component of the symbol, namely,
{\displaystyle \sigma (x,\xi )=\sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha }}
is called the principal symbol of P. While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).
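As a concrete one-variable illustration, the total and principal symbols can be read off mechanically from the coefficients; the operator below is a hypothetical example chosen for this sketch.

```python
import sympy as sp

x, xi = sp.symbols('x xi')

# Hypothetical order-2 operator P = x*D^2 + sin(x)*D + 1 in one variable;
# the total symbol replaces each D = d/dx by the variable xi.
coeffs = {2: x, 1: sp.sin(x), 0: sp.Integer(1)}   # a_alpha(x) indexed by order
total_symbol = sum(a * xi**k for k, a in coeffs.items())

# The principal symbol keeps only the top-order part.
m = max(coeffs)
principal_symbol = coeffs[m] * xi**m

assert sp.simplify(total_symbol - (x*xi**2 + sp.sin(x)*xi + 1)) == 0
assert sp.simplify(principal_symbol - x*xi**2) == 0
```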
More generally, let E and F be vector bundles over a manifold X. Then the linear operator
{\displaystyle P:C^{\infty }(E)\to C^{\infty }(F)}
is a differential operator of order {\displaystyle k} if, in local coordinates on X, we have
{\displaystyle Pu(x)=\sum _{|\alpha |=k}P^{\alpha }(x){\frac {\partial ^{\alpha }u}{\partial x^{\alpha }}}+{\text{lower-order terms}}}
where, for each multi-index α, {\displaystyle P^{\alpha }(x):E\to F} is a bundle map, symmetric on the indices α.
The kth order coefficients of P transform as a symmetric tensor
{\displaystyle \sigma _{P}:S^{k}(T^{*}X)\otimes E\to F}
whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.
The coordinate system xi permits a local trivialization of the cotangent bundle by the coordinate differentials dxi, which determine fiber coordinates ξi. In terms of a basis of frames eμ, fν of E and F, respectively, the differential operator P decomposes into components
{\displaystyle (Pu)_{\nu }=\sum _{\mu }P_{\nu \mu }u_{\mu }}
on each section u of E. Here Pνμ is the scalar differential operator defined by
{\displaystyle P_{\nu \mu }=\sum _{\alpha }P_{\nu \mu }^{\alpha }{\frac {\partial }{\partial x^{\alpha }}}.}
With this trivialization, the principal symbol can now be written
{\displaystyle (\sigma _{P}(\xi )u)_{\nu }=\sum _{|\alpha |=k}\sum _{\mu }P_{\nu \mu }^{\alpha }(x)\xi _{\alpha }u_{\mu }.}
In the cotangent space over a fixed point x of X, the symbol {\displaystyle \sigma _{P}} defines a homogeneous polynomial of degree k in {\displaystyle T_{x}^{*}X} with values in {\displaystyle \operatorname {Hom} (E_{x},F_{x})}.
== Fourier interpretation ==
A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,
{\displaystyle Pf(x)={\frac {1}{(2\pi )^{\frac {d}{2}}}}\int \limits _{\mathbf {R} ^{d}}e^{ix\cdot \xi }p(x,i\xi ){\hat {f}}(\xi )\,d\xi .}
This exhibits P as a Fourier multiplier. A more general class of functions p(x,ξ) which satisfy at most polynomial growth conditions in ξ under which this integral is well-behaved comprises the pseudo-differential operators.
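The Fourier-multiplier viewpoint can be demonstrated numerically: for P = d/dx the symbol evaluated at iξ is iξ, so multiplying the discrete Fourier transform by iξ and inverting recovers the derivative. A minimal sketch on a periodic grid, with the grid size chosen arbitrarily:

```python
import numpy as np

# For P = d/dx the multiplier is p(x, i*xi) = i*xi; apply it in
# frequency space on a periodic grid and compare with the exact derivative.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(3 * x)

xi = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers
df = np.real(np.fft.ifft(1j * xi * np.fft.fft(f)))

assert np.allclose(df, 3 * np.cos(3 * x), atol=1e-8)
```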
== Examples ==
The differential operator {\displaystyle P} is elliptic if its symbol is invertible; that is, for each nonzero {\displaystyle \theta \in T^{*}X} the bundle map {\displaystyle \sigma _{P}(\theta ,\dots ,\theta )} is invertible. On a compact manifold, it follows from elliptic theory that P is a Fredholm operator: it has finite-dimensional kernel and cokernel.
In the study of hyperbolic and parabolic partial differential equations, zeros of the principal symbol correspond to the characteristics of the partial differential equation.
In applications to the physical sciences, operators such as the Laplace operator play a major role in setting up and solving partial differential equations.
In differential topology, the exterior derivative and Lie derivative operators have intrinsic meaning.
In abstract algebra, the concept of a derivation allows for generalizations of differential operators, which do not require the use of calculus. Frequently such generalizations are employed in algebraic geometry and commutative algebra. See also Jet (mathematics).
In the development of holomorphic functions of a complex variable z = x + i y, sometimes a complex function is considered to be a function of two real variables x and y. Use is made of the Wirtinger derivatives, which are partial differential operators:
{\displaystyle {\frac {\partial }{\partial z}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}-i{\frac {\partial }{\partial y}}\right)\ ,\quad {\frac {\partial }{\partial {\bar {z}}}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right)\ .}
This approach is also used to study functions of several complex variables and functions of a motor variable.
The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as
{\displaystyle \nabla =\mathbf {\hat {x}} {\partial \over \partial x}+\mathbf {\hat {y}} {\partial \over \partial y}+\mathbf {\hat {z}} {\partial \over \partial z}.}
Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.
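A small symbolic sketch of these del-derived operators, using a scalar field chosen for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = x**2 * y + z          # an arbitrary scalar field

# Gradient: apply del componentwise to the scalar field.
grad = [sp.diff(phi, v) for v in (x, y, z)]
assert grad == [2*x*y, x**2, 1]

# Laplacian: the divergence of the gradient.
lap = sum(sp.diff(g, v) for g, v in zip(grad, (x, y, z)))
assert sp.simplify(lap - 2*y) == 0
```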
A chiral differential operator (see [1]).
== History ==
The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.
== Notations ==
The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:
{\displaystyle {d \over dx}}, {\displaystyle D}, {\displaystyle D_{x},} and {\displaystyle \partial _{x}}.
When taking higher, nth order derivatives, the operator may be written:
{\displaystyle {d^{n} \over dx^{n}}}, {\displaystyle D^{n}}, {\displaystyle D_{x}^{n}}, or {\displaystyle \partial _{x}^{n}}.
The derivative of a function f of an argument x is sometimes given as either of the following:
{\displaystyle [f(x)]'} or {\displaystyle f'(x).}
The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form
{\displaystyle \sum _{k=0}^{n}c_{k}D^{k}}
in his study of differential equations.
One of the most frequently seen differential operators is the Laplacian operator, defined by
{\displaystyle \Delta =\nabla ^{2}=\sum _{k=1}^{n}{\frac {\partial ^{2}}{\partial x_{k}^{2}}}.}
Another differential operator is the Θ operator, or theta operator, defined by
{\displaystyle \Theta =z{d \over dz}.}
This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:
{\displaystyle \Theta (z^{k})=kz^{k},\quad k=0,1,2,\dots }
In n variables the homogeneity operator is given by
{\displaystyle \Theta =\sum _{k=1}^{n}x_{k}{\frac {\partial }{\partial x_{k}}}.}
As in one variable, the eigenspaces of Θ are the spaces of homogeneous functions. (Euler's homogeneous function theorem)
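Both the eigenfunction property and Euler's theorem are easy to verify symbolically; the homogeneous polynomial below is an arbitrary degree-3 example.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# One-variable homogeneity operator Theta = z d/dz: Theta(z^k) = k z^k.
theta1 = lambda g: z * sp.diff(g, z)
assert sp.simplify(theta1(z**4) - 4 * z**4) == 0

# n-variable version: Euler's theorem gives Theta(f) = deg(f) * f
# for a homogeneous f, here of degree 3.
f = x**2 * y + x * y**2
theta_n = x * sp.diff(f, x) + y * sp.diff(f, y)
assert sp.simplify(theta_n - 3 * f) == 0
```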
In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator. Sometimes an alternative arrow notation is used: a left arrow denotes applying the operator to the function on its left, a right arrow denotes applying it to the function on its right, and a bidirectional arrow denotes the difference of the two:
{\displaystyle f{\overleftarrow {\partial _{x}}}g=g\cdot \partial _{x}f}
{\displaystyle f{\overrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g}
{\displaystyle f{\overleftrightarrow {\partial _{x}}}g=f\cdot \partial _{x}g-g\cdot \partial _{x}f.}
Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.
== Adjoint of an operator ==
Given a linear differential operator {\displaystyle T},
{\displaystyle Tu=\sum _{k=0}^{n}a_{k}(x)D^{k}u}
the adjoint of this operator is defined as the operator {\displaystyle T^{*}} such that
{\displaystyle \langle Tu,v\rangle =\langle u,T^{*}v\rangle }
where the notation {\displaystyle \langle \cdot ,\cdot \rangle }
is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).
=== Formal adjoint in one variable ===
In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by
{\displaystyle \langle f,g\rangle =\int _{a}^{b}{\overline {f(x)}}\,g(x)\,dx,}
where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as
{\displaystyle x\to a} and {\displaystyle x\to b}, one can also define the adjoint of T by
{\displaystyle T^{*}u=\sum _{k=0}^{n}(-1)^{k}D^{k}\left[{\overline {a_{k}(x)}}u\right].}
This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When
{\displaystyle T^{*}} is defined according to this formula, it is called the formal adjoint of T.
A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
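The formal adjoint relation can be checked by direct integration. A minimal sketch for T = d/dx on (0, 1), whose formal adjoint is −d/dx, using real test functions (chosen arbitrarily) that vanish at both endpoints so the boundary terms drop out:

```python
import sympy as sp

x = sp.symbols('x')
a, b = 0, 1

# Test functions vanishing at x = 0 and x = 1.
u = x * (1 - x)
v = x**2 * (1 - x)

# <Tu, v> = <u, T*v> with T = d/dx and T* = -d/dx.
lhs = sp.integrate(sp.diff(u, x) * v, (x, a, b))       # <Tu, v>
rhs = sp.integrate(u * (-sp.diff(v, x)), (x, a, b))    # <u, T*v>
assert sp.simplify(lhs - rhs) == 0
```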
=== Several variables ===
If Ω is a domain in Rn, and P a differential operator on Ω, then the adjoint of P is defined in L2(Ω) by duality in the analogous manner:
{\displaystyle \langle f,P^{*}g\rangle _{L^{2}(\Omega )}=\langle Pf,g\rangle _{L^{2}(\Omega )}}
for all smooth L2 functions f, g. Since smooth functions are dense in L2, this defines the adjoint on a dense subset of L2: P* is a densely defined operator.
=== Example ===
The Sturm–Liouville operator is a well-known example of a formal self-adjoint operator. This second-order linear differential operator L can be written in the form
{\displaystyle Lu=-(pu')'+qu=-(pu''+p'u')+qu=-pu''-p'u'+qu=(-p)D^{2}u+(-p')Du+(q)u.}
This property can be proven using the formal adjoint definition above.
This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
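Formal self-adjointness of the Sturm–Liouville operator can be verified symbolically; the coefficients p and q below are arbitrary smooth choices for this sketch, and the test functions vanish at the endpoints so boundary terms cancel.

```python
import sympy as sp

x = sp.symbols('x')
p = 1 + x**2        # sample coefficients (assumptions for this sketch)
q = sp.cos(x)

def L(u):
    # Sturm-Liouville form: Lu = -(p u')' + q u
    return -sp.diff(p * sp.diff(u, x), x) + q * u

# Check <Lu, v> = <u, Lv> for real test functions vanishing at 0 and 1.
u = x * (1 - x)
v = x**2 * (1 - x)
lhs = sp.integrate(L(u) * v, (x, 0, 1))
rhs = sp.integrate(u * L(v), (x, 0, 1))
assert sp.simplify(lhs - rhs) == 0
```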
== Properties ==
Differentiation is linear, i.e.
{\displaystyle D(f+g)=(Df)+(Dg),}
{\displaystyle D(af)=a(Df),}
where f and g are functions, and a is a constant.
Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule
{\displaystyle (D_{1}\circ D_{2})(f)=D_{1}(D_{2}(f)).}
Some care is then required: firstly, any function coefficients in the operator D2 must be differentiable as many times as the application of D1 requires. To get a ring of such operators, we must assume derivatives of all orders of the coefficients used. Secondly, this ring will not be commutative: an operator gD is not, in general, the same as Dg. For example, we have the relation, basic in quantum mechanics:
{\displaystyle Dx-xD=1.}
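The commutation relation is immediate from the product rule and can be checked on a generic test function:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

D = lambda g: sp.diff(g, x)     # differentiation operator
X = lambda g: x * g             # multiplication by x

# Canonical commutation relation: (D X - X D) f = f,
# since D(x f) = f + x f' while x D(f) = x f'.
commutator = D(X(f)) - X(D(f))
assert sp.simplify(commutator - f) == 0
```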
The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
The differential operators also obey the shift theorem.
== Ring of polynomial differential operators ==
=== Ring of univariate polynomial differential operators ===
If R is a ring, let
{\displaystyle R\langle D,X\rangle }
be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by DX − XD − 1. Then the ring of univariate polynomial differential operators over R is the quotient ring
{\displaystyle R\langle D,X\rangle /I}. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form
{\displaystyle X^{a}D^{b}{\text{ mod }}I}. It supports an analogue of Euclidean division of polynomials.
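Reduction to the normal form X^a D^b uses the relation DX = XD + 1 repeatedly. As a small sketch, applying it twice gives the operator identity D X² = X² D + 2X, which can be checked on a generic test function:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# Normal ordering in the ring R<D,X>/I: using DX = XD + 1 twice yields
# D X^2 = X^2 D + 2X.  Verify both sides as operators on f.
lhs = sp.diff(x**2 * f, x)               # (D X^2) f
rhs = x**2 * sp.diff(f, x) + 2 * x * f   # (X^2 D + 2X) f
assert sp.simplify(lhs - rhs) == 0
```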
Differential modules over {\displaystyle R[X]} (for the standard derivation) can be identified with modules over {\displaystyle R\langle D,X\rangle /I}.
=== Ring of multivariate polynomial differential operators ===
If R is a ring, let
{\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle }
be the non-commutative polynomial ring over R in the variables
{\displaystyle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}}
, and I the two-sided ideal generated by the elements
{\displaystyle (D_{i}X_{j}-X_{j}D_{i})-\delta _{i,j},\ \ \ D_{i}D_{j}-D_{j}D_{i},\ \ \ X_{i}X_{j}-X_{j}X_{i}}
for all {\displaystyle 1\leq i,j\leq n,}
where {\displaystyle \delta } is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring
{\displaystyle R\langle D_{1},\ldots ,D_{n},X_{1},\ldots ,X_{n}\rangle /I}.
This is a non-commutative simple ring.
Every element can be written in a unique way as an R-linear combination of monomials of the form {\displaystyle X_{1}^{a_{1}}\ldots X_{n}^{a_{n}}D_{1}^{b_{1}}\ldots D_{n}^{b_{n}}}.
== Coordinate-independent description ==
In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle Jk(E).
In other words, there exists a linear mapping of vector bundles
{\displaystyle i_{P}:J^{k}(E)\to F}
such that
{\displaystyle P=i_{P}\circ j^{k}}
where jk: Γ(E) → Γ(Jk(E)) is the prolongation that associates to any section of E its k-jet.
This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s in x. In particular this implies that P(s)(x) is determined by the germ of s in x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.
=== Relation to commutative algebra ===
An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator, if for any k + 1 smooth functions
{\displaystyle f_{0},\ldots ,f_{k}\in C^{\infty }(M)}
we have
{\displaystyle [f_{k},[f_{k-1},[\cdots [f_{0},P]\cdots ]]]=0.}
Here the bracket
{\displaystyle [f,P]:\Gamma (E)\to \Gamma (F)}
is defined as the commutator
{\displaystyle [f,P](s)=P(f\cdot s)-f\cdot P(s).}
This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
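The bracket characterization is easy to test for a first-order operator: one bracket with a function reduces the order by one, so the double bracket vanishes. A sketch with P = d/dx and arbitrary smooth functions f0, f1:

```python
import sympy as sp

x = sp.symbols('x')
s = sp.Function('s')(x)
f0 = sp.sin(x)      # arbitrary smooth functions for this sketch
f1 = sp.exp(x)

P = lambda g: sp.diff(g, x)     # a first-order operator

def bracket(f, Q):
    # [f, Q](s) = Q(f*s) - f*Q(s)
    return lambda g: Q(f * g) - f * Q(g)

# [f0, P](g) = f0' * g is zeroth order, so [f1, [f0, P]] = 0.
double = bracket(f1, bracket(f0, P))
assert sp.simplify(double(s)) == 0
```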
== Variants ==
=== A differential operator of infinite order ===
A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.
=== Bidifferential operator ===
A differential operator acting on two functions
D
(
g
,
f
)
{\displaystyle D(g,f)}
is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.
=== Microdifferential operator ===
A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.
== References ==
Freed, Daniel S. (1987), Geometry of Dirac operators, p. 8, CiteSeerX 10.1.1.186.8445
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035.
Schapira, Pierre (1985). Microdifferential Systems in the Complex Domain. Grundlehren der mathematischen Wissenschaften. Vol. 269. Springer. doi:10.1007/978-3-642-61665-5. ISBN 978-3-642-64904-2.
Wells, R.O. (1973), Differential analysis on complex manifolds, Springer-Verlag, ISBN 0-387-90419-0.
== Further reading ==
Fedosov, Boris; Schulze, Bert-Wolfgang; Tarkhanov, Nikolai (2002). "Analytic index formulas for elliptic corner operators". Annales de l'Institut Fourier. 52 (3): 899–982. doi:10.5802/aif.1906. ISSN 1777-5310.
https://mathoverflow.net/questions/451110/reference-request-inverse-of-differential-operators
== External links ==
Media related to Differential operators at Wikimedia Commons
"Differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In algebraic geometry, a projective variety is an algebraic variety that is a closed subvariety of a projective space. That is, it is the zero-locus in
{\displaystyle \mathbb {P} ^{n}}
of some finite family of homogeneous polynomials that generate a prime ideal, the defining ideal of the variety.
A projective variety is a projective curve if its dimension is one; it is a projective surface if its dimension is two; it is a projective hypersurface if its dimension is one less than the dimension of the containing projective space; in this case it is the set of zeros of a single homogeneous polynomial.
If X is a projective variety defined by a homogeneous prime ideal I, then the quotient ring
{\displaystyle k[x_{0},\ldots ,x_{n}]/I}
is called the homogeneous coordinate ring of X. Basic invariants of X such as the degree and the dimension can be read off the Hilbert polynomial of this graded ring.
Projective varieties arise in many ways. They are complete, which roughly can be expressed by saying that there are no points "missing". The converse is not true in general, but Chow's lemma describes the close relation of these two notions. Showing that a variety is projective is done by studying line bundles or divisors on X.
A salient feature of projective varieties are the finiteness constraints on sheaf cohomology. For smooth projective varieties, Serre duality can be viewed as an analog of Poincaré duality. It also leads to the Riemann–Roch theorem for projective curves, i.e., projective varieties of dimension 1. The theory of projective curves is particularly rich, including a classification by the genus of the curve. The classification program for higher-dimensional projective varieties naturally leads to the construction of moduli of projective varieties. Hilbert schemes parametrize closed subschemes of
{\displaystyle \mathbb {P} ^{n}}
with prescribed Hilbert polynomial. Hilbert schemes, of which Grassmannians are special cases, are also projective schemes in their own right. Geometric invariant theory offers another approach. The classical approaches include the Teichmüller space and Chow varieties.
A particularly rich theory, reaching back to the classics, is available for complex projective varieties, i.e., when the polynomials defining X have complex coefficients. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. For example, the theory of holomorphic vector bundles (more generally coherent analytic sheaves) on X coincides with that of algebraic vector bundles. Chow's theorem says that a subset of projective space is the zero-locus of a family of holomorphic functions if and only if it is the zero-locus of homogeneous polynomials. The combination of analytic and algebraic methods for complex projective varieties leads to areas such as Hodge theory.
== Variety and scheme structure ==
=== Variety structure ===
Let k be an algebraically closed field. The basis of the definition of projective varieties is projective space
{\displaystyle \mathbb {P} ^{n}}
, which can be defined in different, but equivalent ways:
as the set of all lines through the origin in
{\displaystyle k^{n+1}} (i.e., all one-dimensional vector subspaces of {\displaystyle k^{n+1}})
as the set of tuples {\displaystyle (x_{0},\dots ,x_{n})\in k^{n+1}}, with {\displaystyle x_{0},\dots ,x_{n}} not all zero, modulo the equivalence relation {\displaystyle (x_{0},\dots ,x_{n})\sim \lambda (x_{0},\dots ,x_{n})} for any {\displaystyle \lambda \in k\setminus \{0\}}. The equivalence class of such a tuple is denoted by {\displaystyle [x_{0}:\dots :x_{n}].}
This equivalence class is a point of projective space. The numbers {\displaystyle x_{0},\dots ,x_{n}} are referred to as the homogeneous coordinates of the point.
A projective variety is, by definition, a closed subvariety of
{\displaystyle \mathbb {P} ^{n}}
, where closed refers to the Zariski topology. In general, closed subsets of the Zariski topology are defined to be the common zero-locus of a finite collection of homogeneous polynomial functions. Given a polynomial
f
∈
k
[
x
0
,
…
,
x
n
]
{\displaystyle f\in k[x_{0},\dots ,x_{n}]}
, the condition
{\displaystyle f([x_{0}:\dots :x_{n}])=0}
does not make sense for arbitrary polynomials, but only if f is homogeneous, i.e., the degrees of all the monomials (whose sum is f) are the same. In this case, the vanishing of
{\displaystyle f(\lambda x_{0},\dots ,\lambda x_{n})=\lambda ^{\deg f}f(x_{0},\dots ,x_{n})}
is independent of the choice of
{\displaystyle \lambda \neq 0}.
Therefore, projective varieties arise from homogeneous prime ideals I of
{\displaystyle k[x_{0},\dots ,x_{n}]}
, and setting
{\displaystyle X=\left\{[x_{0}:\dots :x_{n}]\in \mathbb {P} ^{n},f([x_{0}:\dots :x_{n}])=0{\text{ for all }}f\in I\right\}.}
Moreover, the projective variety X is an algebraic variety, meaning that it is covered by open affine subvarieties and satisfies the separation axiom. Thus, the local study of X (e.g., singularity) reduces to that of an affine variety. The explicit structure is as follows. The projective space
{\displaystyle \mathbb {P} ^{n}}
is covered by the standard open affine charts
{\displaystyle U_{i}=\{[x_{0}:\dots :x_{n}],x_{i}\neq 0\},}
which themselves are affine n-spaces with the coordinate ring
{\displaystyle k\left[y_{1}^{(i)},\dots ,y_{n}^{(i)}\right],\quad y_{j}^{(i)}=x_{j}/x_{i}.}
Say i = 0 for the notational simplicity and drop the superscript (0). Then
{\displaystyle X\cap U_{0}} is a closed subvariety of {\displaystyle U_{0}\simeq \mathbb {A} ^{n}}
defined by the ideal of
{\displaystyle k[y_{1},\dots ,y_{n}]}
generated by
{\displaystyle f(1,y_{1},\dots ,y_{n})}
for all f in I. Thus, X is an algebraic variety covered by (n+1) open affine charts
{\displaystyle X\cap U_{i}}.
Note that X is the closure of the affine variety
{\displaystyle X\cap U_{0}} in {\displaystyle \mathbb {P} ^{n}}.
. Conversely, starting from some closed (affine) variety
{\displaystyle V\subset U_{0}\simeq \mathbb {A} ^{n}}
, the closure of V in
{\displaystyle \mathbb {P} ^{n}}
is the projective variety called the projective completion of V. If
{\displaystyle I\subset k[y_{1},\dots ,y_{n}]}
defines V, then the defining ideal of this closure is the homogeneous ideal of
{\displaystyle k[x_{0},\dots ,x_{n}]}
generated by
{\displaystyle x_{0}^{\deg(f)}f(x_{1}/x_{0},\dots ,x_{n}/x_{0})}
for all f in I.
For example, if V is an affine curve given by, say,
{\displaystyle y^{2}=x^{3}+ax+b}
in the affine plane, then its projective completion in the projective plane is given by
{\displaystyle y^{2}z=x^{3}+axz^{2}+bz^{3}.}
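The homogenization recipe x0^deg(f) · f(x1/x0, …, xn/x0) can be carried out mechanically; a sketch for this cubic, with z playing the role of x0:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b = sp.symbols('a b')

# Homogenize the affine curve y^2 - (x^3 + a*x + b) = 0 via
# z^deg(f) * f(x/z, y/z).
f = y**2 - (x**3 + a*x + b)
deg = sp.Poly(f, x, y).total_degree()         # degree 3 in (x, y)
F = sp.expand(z**deg * f.subs({x: x/z, y: y/z}))

assert F == sp.expand(y**2 * z - (x**3 + a*x*z**2 + b*z**3))
```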
=== Projective schemes ===
For various applications, it is necessary to consider more general algebro-geometric objects than projective varieties, namely projective schemes. The first step towards projective schemes is to endow projective space with a scheme structure, in a way refining the above description of projective space as an algebraic variety, i.e.,
{\displaystyle \mathbb {P} ^{n}(k)}
is a scheme that is a union of (n + 1) copies of the affine n-space {\displaystyle k^{n}}. More generally, projective space over a ring A is the union of the affine schemes
{\displaystyle U_{i}=\operatorname {Spec} A[x_{0}/x_{i},\dots ,x_{n}/x_{i}],\quad 0\leq i\leq n,}
in such a way that the variables match up as expected. The set of closed points of
{\displaystyle \mathbb {P} _{k}^{n}}, for algebraically closed fields k, is then the projective space {\displaystyle \mathbb {P} ^{n}(k)}
in the usual sense.
An equivalent but streamlined construction is given by the Proj construction, which is an analog of the spectrum of a ring, denoted "Spec", which defines an affine scheme. For example, if A is a ring, then
{\displaystyle \mathbb {P} _{A}^{n}=\operatorname {Proj} A[x_{0},\ldots ,x_{n}].}
If R is a quotient of
{\displaystyle k[x_{0},\ldots ,x_{n}]}
by a homogeneous ideal I, then the canonical surjection induces the closed immersion
{\displaystyle \operatorname {Proj} R\hookrightarrow \mathbb {P} _{k}^{n}.}
Compared to projective varieties, the condition that the ideal I be a prime ideal was dropped. This leads to a much more flexible notion: on the one hand the topological space
{\displaystyle X=\operatorname {Proj} R}
may have multiple irreducible components. Moreover, there may be nilpotent functions on X.
Closed subschemes of
{\displaystyle \mathbb {P} _{k}^{n}}
correspond bijectively to the homogeneous ideals I of
{\displaystyle k[x_{0},\ldots ,x_{n}]}
that are saturated; i.e.,
{\displaystyle I:(x_{0},\dots ,x_{n})=I.}
This fact may be considered as a refined version of the projective Nullstellensatz.
We can give a coordinate-free analog of the above. Namely, given a finite-dimensional vector space V over k, we let
{\displaystyle \mathbb {P} (V)=\operatorname {Proj} k[V]}
where
{\displaystyle k[V]=\operatorname {Sym} (V^{*})} is the symmetric algebra of {\displaystyle V^{*}}
. It is the projectivization of V; i.e., it parametrizes lines in V. There is a canonical surjective map
{\displaystyle \pi :V\setminus \{0\}\to \mathbb {P} (V)}
, which is defined using the chart described above. One important use of the construction is this (cf. § Duality and linear system). A divisor D on a projective variety X corresponds to a line bundle L. One then sets
{\displaystyle |D|=\mathbb {P} (\Gamma (X,L))};
it is called the complete linear system of D.
Projective space over any scheme S can be defined as a fiber product of schemes
{\displaystyle \mathbb {P} _{S}^{n}=\mathbb {P} _{\mathbb {Z} }^{n}\times _{\operatorname {Spec} \mathbb {Z} }S.}
If
{\displaystyle {\mathcal {O}}(1)} is the twisting sheaf of Serre on {\displaystyle \mathbb {P} _{\mathbb {Z} }^{n}}
, we let
{\displaystyle {\mathcal {O}}(1)} denote the pullback of {\displaystyle {\mathcal {O}}(1)} to {\displaystyle \mathbb {P} _{S}^{n}}
; that is,
{\displaystyle {\mathcal {O}}(1)=g^{*}({\mathcal {O}}(1))}
for the canonical map
{\displaystyle g:\mathbb {P} _{S}^{n}\to \mathbb {P} _{\mathbb {Z} }^{n}.}
A scheme X → S is called projective over S if it factors as a closed immersion $X\to\mathbb{P}_{S}^{n}$ followed by the projection to S.
A line bundle (or invertible sheaf) $\mathcal{L}$ on a scheme X over S is said to be very ample relative to S if there is an immersion (i.e., an open immersion followed by a closed immersion) $i:X\to\mathbb{P}_{S}^{n}$ for some n so that $\mathcal{O}(1)$ pulls back to $\mathcal{L}$. Then an S-scheme X is projective if and only if it is proper and there exists a very ample sheaf on X relative to S. Indeed, if X is proper, then an immersion corresponding to the very ample line bundle is necessarily closed. Conversely, if X is projective, then the pullback of $\mathcal{O}(1)$ under the closed immersion of X into a projective space is very ample. That "projective" implies "proper" is deeper: it is the main theorem of elimination theory.
== Relation to complete varieties ==
By definition, a variety is complete if it is proper over k. The valuative criterion of properness expresses the intuition that in a proper variety, there are no points "missing".
There is a close relation between complete and projective varieties: on the one hand, projective space and therefore any projective variety is complete. The converse is not true in general. However:
A smooth curve C is projective if and only if it is complete. This is proved by identifying C with the set of discrete valuation rings of the function field k(C) over k. This set has a natural Zariski topology called the Zariski–Riemann space.
Chow's lemma states that for any complete variety X, there is a projective variety Z and a birational morphism Z → X. (Moreover, through normalization, one can assume this projective variety is normal.)
Some properties of a projective variety follow from completeness. For example, $\Gamma(X,\mathcal{O}_{X})=k$ for any projective variety X over k. This fact is an algebraic analogue of Liouville's theorem (any holomorphic function on a connected compact complex manifold is constant). In fact, the similarity between complex analytic geometry and algebraic geometry on complex projective varieties goes much further than this, as is explained below.
Quasi-projective varieties are, by definition, those which are open subvarieties of projective varieties. This class of varieties includes affine varieties. Affine varieties are almost never complete (or projective). In fact, a projective subvariety of an affine variety must have dimension zero. This is because only the constants are globally regular functions on a projective variety.
== Examples and basic invariants ==
By definition, any homogeneous ideal in a polynomial ring yields a projective scheme (the ideal is required to be prime to give a variety). In this sense, examples of projective varieties abound. The following list mentions various classes of projective varieties which are noteworthy since they have been studied particularly intensely. The important class of complex projective varieties, i.e., the case $k=\mathbb{C}$, is discussed further below.
The product of two projective spaces is projective. In fact, there is an explicit immersion (called the Segre embedding):
$$\begin{cases}\mathbb{P}^{n}\times\mathbb{P}^{m}\to\mathbb{P}^{(n+1)(m+1)-1}\\(x_{i},y_{j})\mapsto x_{i}y_{j}\end{cases}$$
As a consequence, the product of projective varieties over k is again projective. The Plücker embedding exhibits a Grassmannian as a projective variety. Flag varieties, such as the quotient of the general linear group $\mathrm{GL}_{n}(k)$ modulo the subgroup of upper triangular matrices, are also projective, which is an important fact in the theory of algebraic groups.
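The coordinate description of the Segre embedding above can be checked numerically. The following Python sketch (illustrative only; the lexicographic ordering of the products $x_i y_j$ is a choice) builds the image coordinates and, for $\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{3}$, verifies the defining quadric relation $z_0 z_3 = z_1 z_2$ of the Segre surface.

```python
from itertools import product

def segre(x, y):
    """Segre embedding: send ([x_0:...:x_n], [y_0:...:y_m]) to the point
    of P^((n+1)(m+1)-1) with coordinates x_i * y_j (lexicographic order)."""
    return [xi * yj for xi, yj in product(x, y)]

# P^1 x P^1 -> P^3: the image satisfies z0*z3 = z1*z2 (a quadric surface).
x, y = [2, 3], [5, 7]
z = segre(x, y)
assert len(z) == (1 + 1) * (1 + 1)      # target is P^3
assert z[0] * z[3] == z[1] * z[2]       # Segre quadric relation
```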
=== Homogeneous coordinate ring and Hilbert polynomial ===
As the prime ideal P defining a projective variety X is homogeneous, the homogeneous coordinate ring
$$R=k[x_{0},\dots,x_{n}]/P$$
is a graded ring, i.e., can be expressed as the direct sum of its graded components:
$$R=\bigoplus_{n\in\mathbb{N}}R_{n}.$$
There exists a polynomial P such that $\dim R_{n}=P(n)$ for all sufficiently large n; it is called the Hilbert polynomial of X. It is a numerical invariant encoding some extrinsic geometry of X. The degree of P is the dimension r of X, and its leading coefficient times r! is the degree of the variety X. The arithmetic genus of X is $(-1)^{r}(P(0)-1)$ when X is smooth.
For example, the homogeneous coordinate ring of $\mathbb{P}^{n}$ is $k[x_{0},\ldots,x_{n}]$, and its Hilbert polynomial is $P(z)=\binom{z+n}{n}$; its arithmetic genus is zero.
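The dimension count behind this Hilbert polynomial can be verified directly: the degree-z graded piece of $k[x_{0},\dots,x_{n}]$ is spanned by the degree-z monomials, of which there are $\binom{z+n}{n}$. A small Python check (function name is illustrative):

```python
from math import comb
from itertools import combinations_with_replacement

def dim_graded_piece(n, z):
    """Number of degree-z monomials in k[x_0, ..., x_n]:
    choose z factors from n+1 variables, with repetition."""
    return sum(1 for _ in combinations_with_replacement(range(n + 1), z))

# Hilbert polynomial of P^n: P(z) = C(z + n, n), exact for every z >= 0 here.
for n in range(1, 4):
    for z in range(6):
        assert dim_graded_piece(n, z) == comb(z + n, n)
```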
If the homogeneous coordinate ring R is an integrally closed domain, then the projective variety X is said to be projectively normal. Note that, unlike normality, projective normality depends on R, i.e., on the embedding of X into a projective space. The normalization of a projective variety is projective; in fact, it is the Proj of the integral closure of some homogeneous coordinate ring of X.
=== Degree ===
Let $X\subset\mathbb{P}^{N}$ be a projective variety. There are at least two equivalent ways to define the degree of X relative to its embedding. The first way is to define it as the cardinality of the finite set
$$\#(X\cap H_{1}\cap\cdots\cap H_{d}),$$
where d is the dimension of X and the $H_{i}$ are hyperplanes in "general position". This definition corresponds to an intuitive idea of the degree. Indeed, if X is a hypersurface, then the degree of X is the degree of the homogeneous polynomial defining X. "General position" can be made precise, for example, by intersection theory; one requires that the intersection is proper and that the multiplicities of irreducible components are all one.
The other definition, which is mentioned in the previous section, is that the degree of X is the leading coefficient of the Hilbert polynomial of X times (dim X)!. Geometrically, this definition means that the degree of X is the multiplicity of the vertex of the affine cone over X.
Let $V_{1},\dots,V_{r}\subset\mathbb{P}^{N}$ be closed subschemes of pure dimensions that intersect properly (they are in general position). If $m_{i}$ denotes the multiplicity of an irreducible component $Z_{i}$ in the intersection (i.e., the intersection multiplicity), then the generalization of Bézout's theorem says:
$$\sum_{i=1}^{s}m_{i}\deg Z_{i}=\prod_{i=1}^{r}\deg V_{i}.$$
The intersection multiplicity $m_{i}$ can be defined as the coefficient of $Z_{i}$ in the intersection product $V_{1}\cdot\cdots\cdot V_{r}$ in the Chow ring of $\mathbb{P}^{N}$.
In particular, if $H\subset\mathbb{P}^{N}$ is a hypersurface not containing X, then
$$\sum_{i=1}^{s}m_{i}\deg Z_{i}=\deg(X)\deg(H)$$
where the $Z_{i}$ are the irreducible components of the scheme-theoretic intersection of X and H, with multiplicity (the length of the local ring) $m_{i}$.
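As a sanity check of this special case of Bézout's theorem, one can intersect two plane conics and count solutions over the complex numbers. The following sketch uses SymPy; the two curves are arbitrary choices in general position, and for this pair all intersection points happen to be affine, so an affine solve finds all of them.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Two plane conics (degree 2 each) meeting transversally.
f = x**2 + y**2 - 1          # a circle
g = x**2 - y                 # a parabola
sols = sp.solve([f, g], [x, y])
# Bezout: deg(f) * deg(g) = 2 * 2 = 4 intersection points over C.
assert len(sols) == 4
```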
A complex projective variety can be viewed as a compact complex manifold; the degree of the variety (relative to the embedding) is then the volume of the variety as a manifold with respect to the metric inherited from the ambient complex projective space. A complex projective variety can be characterized as a minimizer of the volume (in a sense).
=== The ring of sections ===
Let X be a projective variety and L a line bundle on it. Then the graded ring
$$R(X,L)=\bigoplus_{n=0}^{\infty}H^{0}(X,L^{\otimes n})$$
is called the ring of sections of L. If L is ample, then the Proj of this ring is X. Moreover, if X is normal and L is very ample, then $R(X,L)$ is the integral closure of the homogeneous coordinate ring of X determined by L; i.e., the one coming from an embedding $X\hookrightarrow\mathbb{P}^{N}$ such that $\mathcal{O}_{\mathbb{P}^{N}}(1)$ pulls back to L.
For applications, it is useful to allow for divisors (or $\mathbb{Q}$-divisors), not just line bundles; assuming X is normal, the resulting ring is then called a generalized ring of sections. If $K_{X}$ is a canonical divisor on X, then the generalized ring of sections $R(X,K_{X})$ is called the canonical ring of X. If the canonical ring is finitely generated, then the Proj of the ring is called the canonical model of X. The canonical ring or model can then be used to define the Kodaira dimension of X.
=== Projective curves ===
Projective schemes of dimension one are called projective curves. Much of the theory of projective curves concerns smooth projective curves, since the singularities of curves can be resolved by normalization, which consists in taking locally the integral closure of the ring of regular functions. Smooth projective curves are isomorphic if and only if their function fields are isomorphic. The study of finite extensions of $\mathbb{F}_{p}(t)$, or equivalently of smooth projective curves over $\mathbb{F}_{p}$, is an important branch of algebraic number theory.
A smooth projective curve of genus one is called an elliptic curve. As a consequence of the Riemann–Roch theorem, such a curve can be embedded as a closed subvariety in $\mathbb{P}^{2}$. In general, any (smooth) projective curve can be embedded in $\mathbb{P}^{3}$ (for a proof, see Secant variety § Examples). Conversely, any smooth closed curve in $\mathbb{P}^{2}$ of degree three has genus one by the genus formula and is thus an elliptic curve.
A smooth complete curve of genus greater than or equal to two is called a hyperelliptic curve if there is a finite morphism $C\to\mathbb{P}^{1}$ of degree two.
=== Projective hypersurfaces ===
Every irreducible closed subset of $\mathbb{P}^{n}$ of codimension one is a hypersurface; i.e., the zero set of some homogeneous irreducible polynomial.
=== Abelian varieties ===
Another important invariant of a projective variety X is the Picard group $\operatorname{Pic}(X)$ of X, the set of isomorphism classes of line bundles on X. It is isomorphic to $H^{1}(X,\mathcal{O}_{X}^{*})$ and is therefore an intrinsic notion (independent of the embedding). For example, the Picard group of $\mathbb{P}^{n}$ is isomorphic to $\mathbb{Z}$ via the degree map. The kernel of $\deg:\operatorname{Pic}(X)\to\mathbb{Z}$ is not only an abstract abelian group; there is a variety called the Jacobian variety of X, Jac(X), whose points equal this group. The Jacobian of a (smooth) curve plays an important role in the study of the curve. For example, the Jacobian of an elliptic curve E is E itself. For a curve X of genus g, Jac(X) has dimension g.
Varieties, such as the Jacobian variety, which are complete and have a group structure are known as abelian varieties, in honor of Niels Abel. In marked contrast to affine algebraic groups such as $\mathrm{GL}_{n}(k)$, such groups are always commutative, whence the name. Moreover, they admit an ample line bundle and are thus projective. On the other hand, an abelian scheme may not be projective. Examples of abelian varieties are elliptic curves and Jacobian varieties.
== Projections ==
Let $E\subset\mathbb{P}^{n}$ be a linear subspace; i.e., $E=\{s_{0}=s_{1}=\cdots=s_{r}=0\}$ for some linearly independent linear functionals $s_{i}$. Then the projection from E is the (well-defined) morphism
$$\begin{cases}\phi:\mathbb{P}^{n}-E\to\mathbb{P}^{r}\\x\mapsto[s_{0}(x):\cdots:s_{r}(x)]\end{cases}$$
The geometric description of this map is as follows:
We view $\mathbb{P}^{r}\subset\mathbb{P}^{n}$ so that it is disjoint from E. Then, for any $x\in\mathbb{P}^{n}\setminus E$, $\phi(x)=W_{x}\cap\mathbb{P}^{r}$, where $W_{x}$ denotes the smallest linear space containing E and x (called the join of E and x).
$\phi^{-1}(\{y_{i}\neq 0\})=\{s_{i}\neq 0\}$, where the $y_{i}$ are the homogeneous coordinates on $\mathbb{P}^{r}$.
For any closed subscheme $Z\subset\mathbb{P}^{n}$ disjoint from E, the restriction $\phi:Z\to\mathbb{P}^{r}$ is a finite morphism.
Projections can be used to cut down the dimension in which a projective variety is embedded, up to finite morphisms. Start with some projective variety $X\subset\mathbb{P}^{n}$.
If $n>\dim X$, the projection from a point not on X gives $\phi:X\to\mathbb{P}^{n-1}$.
Moreover, $\phi$ is a finite map to its image. Thus, iterating the procedure, one sees that there is a finite map
$$X\to\mathbb{P}^{d},\quad d=\dim X.$$
This result is the projective analog of Noether's normalization lemma. (In fact, it yields a geometric proof of the normalization lemma.)
The same procedure can be used to show the following slightly more precise result: given a projective variety X over a perfect field, there is a finite birational morphism from X to a hypersurface H in $\mathbb{P}^{d+1}$. In particular, if X is normal, then it is the normalization of H.
== Duality and linear system ==
While a projective n-space $\mathbb{P}^{n}$ parametrizes the lines in an affine (n+1)-space, its dual parametrizes the hyperplanes on the projective space, as follows. Fix a field k. By $\breve{\mathbb{P}}_{k}^{n}$, we mean a projective n-space
$$\breve{\mathbb{P}}_{k}^{n}=\operatorname{Proj}(k[u_{0},\dots,u_{n}])$$
equipped with the construction:
$$f\mapsto H_{f}=\{\alpha_{0}x_{0}+\cdots+\alpha_{n}x_{n}=0\},$$
a hyperplane on $\mathbb{P}_{L}^{n}$, where $f:\operatorname{Spec}L\to\breve{\mathbb{P}}_{k}^{n}$ is an L-point of $\breve{\mathbb{P}}_{k}^{n}$ for a field extension L of k, and $\alpha_{i}=f^{*}(u_{i})\in L$.
For each L, the construction is a bijection between the set of L-points of $\breve{\mathbb{P}}_{k}^{n}$ and the set of hyperplanes on $\mathbb{P}_{L}^{n}$. Because of this, the dual projective space $\breve{\mathbb{P}}_{k}^{n}$ is said to be the moduli space of hyperplanes on $\mathbb{P}_{k}^{n}$.
A line in $\breve{\mathbb{P}}_{k}^{n}$ is called a pencil: it is a family of hyperplanes on $\mathbb{P}_{k}^{n}$ parametrized by $\mathbb{P}_{k}^{1}$.
If V is a finite-dimensional vector space over k, then, for the same reason as above, $\mathbb{P}(V^{*})=\operatorname{Proj}(\operatorname{Sym}(V))$ is the space of hyperplanes on $\mathbb{P}(V)$. An important case is when V consists of sections of a line bundle. Namely, let X be an algebraic variety, L a line bundle on X, and $V\subset\Gamma(X,L)$ a vector subspace of finite positive dimension. Then there is a map
$$\begin{cases}\varphi_{V}:X\setminus B\to\mathbb{P}(V^{*})\\x\mapsto H_{x}=\{s\in V\mid s(x)=0\}\end{cases}$$
determined by the linear system V, where B, called the base locus, is the intersection of the divisors of zero of nonzero sections in V (see Linear system of divisors#A map determined by a linear system for the construction of the map).
== Cohomology of coherent sheaves ==
Let X be a projective scheme over a field k (or, more generally, over a Noetherian ring A). The cohomology of coherent sheaves $\mathcal{F}$ on X satisfies the following important theorems due to Serre:
$H^{p}(X,\mathcal{F})$ is a finite-dimensional k-vector space for any p.
There exists an integer $n_{0}$ (depending on $\mathcal{F}$; see also Castelnuovo–Mumford regularity) such that $H^{p}(X,\mathcal{F}(n))=0$ for all $n\geq n_{0}$ and p > 0, where $\mathcal{F}(n)=\mathcal{F}\otimes\mathcal{O}(n)$ is the twisting with a power of a very ample line bundle $\mathcal{O}(1)$.
These results are proven by reducing to the case $X=\mathbb{P}^{r}$ using the isomorphism $H^{p}(X,\mathcal{F})=H^{p}(\mathbb{P}^{r},\mathcal{F})$ for $p\geq 0$, where on the right-hand side $\mathcal{F}$ is viewed as a sheaf on the projective space by extension by zero. The result then follows by a direct computation for $\mathcal{F}=\mathcal{O}_{\mathbb{P}^{r}}(n)$, n any integer; the case of an arbitrary $\mathcal{F}$ reduces to this one without much difficulty.
As a corollary of the first statement above, if f is a projective morphism from a Noetherian scheme to the spectrum of a Noetherian ring, then the higher direct image $R^{p}f_{*}\mathcal{F}$ is coherent. The same result holds for proper morphisms f, as can be shown with the aid of Chow's lemma.
Sheaf cohomology groups $H^{i}$ on a Noetherian topological space vanish for i strictly greater than the dimension of the space. Thus the quantity, called the Euler characteristic of $\mathcal{F}$,
$$\chi(\mathcal{F})=\sum_{i=0}^{\infty}(-1)^{i}\dim H^{i}(X,\mathcal{F})$$
is a well-defined integer (for X projective). One can then show that $\chi(\mathcal{F}(n))=P(n)$ for some polynomial P over the rational numbers. Applying this procedure to the structure sheaf $\mathcal{O}_{X}$, one recovers the Hilbert polynomial of X. In particular, if X is irreducible and has dimension r, the arithmetic genus of X is given by
$$(-1)^{r}(\chi(\mathcal{O}_{X})-1),$$
which is manifestly intrinsic; i.e., independent of the embedding.
The arithmetic genus of a hypersurface of degree d in $\mathbb{P}^{n}$ is $\binom{d-1}{n}$. In particular, a smooth curve of degree d in $\mathbb{P}^{2}$ has arithmetic genus $(d-1)(d-2)/2$. This is the genus formula.
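The genus formula is easy to tabulate; the following Python sketch (function name is illustrative) evaluates it for small degrees and checks agreement with the hypersurface formula $\binom{d-1}{n}$ for n = 2.

```python
from math import comb

def plane_curve_genus(d):
    """Arithmetic genus of a smooth plane curve of degree d (genus formula)."""
    return (d - 1) * (d - 2) // 2

assert plane_curve_genus(1) == 0   # a line is rational
assert plane_curve_genus(2) == 0   # a conic is rational
assert plane_curve_genus(3) == 1   # a smooth plane cubic is an elliptic curve
# Agrees with the hypersurface formula C(d-1, n) for n = 2:
assert all(plane_curve_genus(d) == comb(d - 1, 2) for d in range(1, 10))
```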
== Smooth projective varieties ==
Let X be a smooth projective variety where all of its irreducible components have dimension n. In this situation, the canonical sheaf ωX, defined as the sheaf of Kähler differentials of top degree (i.e., algebraic n-forms), is a line bundle.
=== Serre duality ===
Serre duality states that for any locally free sheaf $\mathcal{F}$ on X,
$$H^{i}(X,\mathcal{F})\simeq H^{n-i}(X,\mathcal{F}^{\vee}\otimes\omega_{X})',$$
where the superscript prime refers to the dual space and $\mathcal{F}^{\vee}$ is the dual sheaf of $\mathcal{F}$.
A generalization to projective, but not necessarily smooth schemes is known as Verdier duality.
=== Riemann–Roch theorem ===
For a (smooth projective) curve X, $H^{2}$ and higher cohomology vanish for dimensional reasons, and the space of global sections of the structure sheaf is one-dimensional. Thus the arithmetic genus of X is the dimension of $H^{1}(X,\mathcal{O}_{X})$. By definition, the geometric genus of X is the dimension of $H^{0}(X,\omega_{X})$. Serre duality thus implies that the arithmetic genus and the geometric genus coincide; this common value is simply called the genus of X.
Serre duality is also a key ingredient in the proof of the Riemann–Roch theorem. Since X is smooth, there is an isomorphism of groups
$$\begin{cases}\operatorname{Cl}(X)\to\operatorname{Pic}(X)\\D\mapsto\mathcal{O}(D)\end{cases}$$
from the group of (Weil) divisors modulo principal divisors to the group of isomorphism classes of line bundles. A divisor corresponding to $\omega_{X}$ is called the canonical divisor and is denoted by K. Let l(D) be the dimension of $H^{0}(X,\mathcal{O}(D))$. Then the Riemann–Roch theorem states: if g is the genus of X,
$$l(D)-l(K-D)=\deg D+1-g,$$
for any divisor D on X. By Serre duality, this is the same as:
$$\chi(\mathcal{O}(D))=\deg D+1-g,$$
which can be readily proved. A generalization of the Riemann–Roch theorem to higher dimension is the Hirzebruch–Riemann–Roch theorem, as well as the far-reaching Grothendieck–Riemann–Roch theorem.
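Riemann–Roch can be checked by hand on $X=\mathbb{P}^{1}$, where g = 0, K has degree −2, and $H^{0}(\mathbb{P}^{1},\mathcal{O}(d))$ is spanned by the degree-d monomials in the two homogeneous coordinates. A short Python verification by monomial counting (function name is illustrative):

```python
from itertools import combinations_with_replacement

def l_of_degree(d):
    """dim H^0(P^1, O(d)): the number of degree-d monomials in s, t
    (zero when d < 0, since O(d) then has no global sections)."""
    if d < 0:
        return 0
    return sum(1 for _ in combinations_with_replacement('st', d))

# On X = P^1 the genus is g = 0 and deg K = -2, so for every d >= 0:
#   l(D) - l(K - D) = (d + 1) - 0 = deg D + 1 - g.
g = 0
for d in range(8):
    assert l_of_degree(d) - l_of_degree(-2 - d) == d + 1 - g
```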
== Hilbert schemes ==
Hilbert schemes parametrize all closed subvarieties of a projective scheme X in the sense that the points (in the functorial sense) of H correspond to the closed subschemes of X. As such, the Hilbert scheme is an example of a moduli space, i.e., a geometric object whose points parametrize other geometric objects. More precisely, the Hilbert scheme parametrizes closed subvarieties whose Hilbert polynomial equals a prescribed polynomial P. It is a deep theorem of Grothendieck that there is a scheme
$H_{X}^{P}$ over k such that, for any k-scheme T, there is a bijection
$$\{\text{morphisms }T\to H_{X}^{P}\}\;\longleftrightarrow\;\{\text{closed subschemes of }X\times_{k}T\text{ flat over }T\text{, with Hilbert polynomial }P\}.$$
The closed subscheme of $X\times H_{X}^{P}$ that corresponds to the identity map $H_{X}^{P}\to H_{X}^{P}$ is called the universal family.
For $P(z)=\binom{z+r}{r}$, the Hilbert scheme $H_{\mathbb{P}^{n}}^{P}$ is called the Grassmannian of r-planes in $\mathbb{P}^{n}$ and, if X is a projective scheme, $H_{X}^{P}$ is called the Fano scheme of r-planes on X.
== Complex projective varieties ==
In this section, all algebraic varieties are complex algebraic varieties. A key feature of the theory of complex projective varieties is the combination of algebraic and analytic methods. The transition between these theories is provided by the following link: since any complex polynomial is also a holomorphic function, any complex variety X yields a complex analytic space, denoted $X(\mathbb{C})$. Moreover, geometric properties of X are reflected by those of $X(\mathbb{C})$. For example, the latter is a complex manifold if and only if X is smooth; it is compact if and only if X is proper over $\mathbb{C}$.
=== Relation to complex Kähler manifolds ===
Complex projective space is a Kähler manifold. This implies that, for any projective algebraic variety X, $X(\mathbb{C})$ is a compact Kähler manifold. The converse is not in general true, but the Kodaira embedding theorem gives a criterion for a Kähler manifold to be projective.
In low dimensions, there are the following results:
(Riemann) A compact Riemann surface (i.e., compact complex manifold of dimension one) is a projective variety. By the Torelli theorem, it is uniquely determined by its Jacobian.
(Chow–Kodaira) A compact complex manifold of dimension two with two algebraically independent meromorphic functions is a projective variety.
=== GAGA and Chow's theorem ===
Chow's theorem provides a striking way to go the other way, from analytic to algebraic geometry. It states that every analytic subvariety of a complex projective space is algebraic. The theorem may be interpreted as saying that a holomorphic function satisfying a certain growth condition is necessarily algebraic: "projective" provides this growth condition. One can deduce from the theorem the following:
Meromorphic functions on the complex projective space are rational.
If an algebraic map between algebraic varieties is an analytic isomorphism, then it is an (algebraic) isomorphism. (This part is a basic fact in complex analysis.) In particular, Chow's theorem implies that a holomorphic map between projective varieties is algebraic. (Consider the graph of such a map.)
Every holomorphic vector bundle on a projective variety is induced by a unique algebraic vector bundle.
Every holomorphic line bundle on a projective variety is a line bundle of a divisor.
Chow's theorem can be shown via Serre's GAGA principle. Its main theorem states:
Let X be a projective scheme over $\mathbb{C}$. Then the functor associating the coherent sheaves on X to the coherent sheaves on the corresponding complex analytic space $X^{\text{an}}$ is an equivalence of categories. Furthermore, the natural maps
$$H^{i}(X,\mathcal{F})\to H^{i}(X^{\text{an}},\mathcal{F})$$
are isomorphisms for all i and all coherent sheaves $\mathcal{F}$ on X.
=== Complex tori vs. complex abelian varieties ===
The complex manifold associated to an abelian variety A over $\mathbb{C}$ is a compact complex Lie group. Such groups can be shown to be of the form $\mathbb{C}^{g}/L$ and are also referred to as complex tori. Here, g is the dimension of the torus and L is a lattice (also referred to as the period lattice).
According to the uniformization theorem already mentioned above, any torus of dimension 1 arises from an abelian variety of dimension 1, i.e., from an elliptic curve. In fact, the Weierstrass elliptic function $\wp$ attached to L satisfies a certain differential equation, and as a consequence it defines a closed immersion:
$$\begin{cases}\mathbb{C}/L\to\mathbb{P}^{2}\\L\mapsto(0:0:1)\\z\mapsto(1:\wp(z):\wp'(z))\end{cases}$$
There is a p-adic analog, the p-adic uniformization theorem.
For higher dimensions, the notions of complex abelian varieties and complex tori differ: only polarized complex tori come from abelian varieties.
=== Kodaira vanishing ===
The fundamental Kodaira vanishing theorem states that for an ample line bundle $\mathcal{L}$ on a smooth projective variety X over a field of characteristic zero,
$$H^{i}(X,\mathcal{L}\otimes\omega_{X})=0$$
for i > 0, or, equivalently by Serre duality,
$$H^{i}(X,\mathcal{L}^{-1})=0$$
for i < n. The first proof of this theorem used analytic methods of Kähler geometry, but a purely algebraic proof was found later. The Kodaira vanishing in general fails for a smooth projective variety in positive characteristic. Kodaira's theorem is one of various vanishing theorems, which give criteria for higher sheaf cohomologies to vanish. Since the Euler characteristic of a sheaf (see above) is often more manageable than individual cohomology groups, this often has important consequences about the geometry of projective varieties.
== Related notions ==
Multi-projective variety
Weighted projective variety, a closed subvariety of a weighted projective space
== See also ==
Algebraic geometry of projective spaces
Adequate equivalence relation
Hilbert scheme
Lefschetz hyperplane theorem
Minimal model program
== Notes ==
== References ==
== External links ==
The Hilbert Scheme by Charles Siegel - a blog post
Projective varieties Ch. 1
In algebraic geometry, an affine variety or affine algebraic variety is a certain kind of algebraic variety that can be described as a subset of an affine space.
More formally, an affine algebraic set is the set of the common zeros over an algebraically closed field k of some family of polynomials in the polynomial ring $k[x_{1},\ldots,x_{n}]$.
An affine variety is an affine algebraic set which is not the union of two smaller algebraic sets; algebraically, this means that (the radical of) the ideal generated by the defining polynomials is prime. One-dimensional affine varieties are called affine algebraic curves, while two-dimensional ones are affine algebraic surfaces.
Some texts use the term variety for any algebraic set, and irreducible variety for an algebraic set whose defining ideal is prime (an affine variety in the above sense).
In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field k in which the coefficients are considered from the algebraically closed field K (containing k) over which the common zeros are considered (that is, the points of the affine algebraic set are in $K^{n}$). In this case, the variety is said to be defined over k, and the points of the variety that belong to $k^{n}$ are said to be k-rational, or rational over k. In the common case where k is the field of real numbers, a k-rational point is called a real point. When the field k is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by $x^{n}+y^{n}-1=0$ has no rational points for any integer n greater than two.
== Introduction ==
An affine algebraic set is the set of solutions in an algebraically closed field k of a system of polynomial equations with coefficients in k. More precisely, if $f_{1},\ldots,f_{m}$ are polynomials with coefficients in k, they define an affine algebraic set
$$V(f_{1},\ldots,f_{m})=\left\{(a_{1},\ldots,a_{n})\in k^{n}\mid f_{1}(a_{1},\ldots,a_{n})=\cdots=f_{m}(a_{1},\ldots,a_{n})=0\right\}.$$
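The definition of V(f₁, …, fₘ) can be made concrete with a small sketch. Note the hedge: the definition above works over an algebraically closed field, which cannot be enumerated; the code below uses a finite field purely as an illustration, since it makes the zero set finite and checkable. The function name is hypothetical.

```python
from itertools import product

def affine_zero_set(polys, p, n):
    """Points of V(polys) in (F_p)^n, checking each polynomial mod p.
    (Illustrative only: the definition in the text is over an algebraically
    closed field; a finite field just makes the set enumerable.)"""
    return [pt for pt in product(range(p), repeat=n)
            if all(f(*pt) % p == 0 for f in polys)]

# V(y - x^2) in (F_5)^2: the graph of x -> x^2, so exactly 5 points.
pts = affine_zero_set([lambda x, y: y - x**2], p=5, n=2)
assert len(pts) == 5
assert all(y % 5 == (x * x) % 5 for x, y in pts)
```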
An affine (algebraic) variety is an affine algebraic set that is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible.
If X is an affine algebraic set, and I is the ideal of all polynomials that are zero on X, then the quotient ring
$$R=k[x_{1},\ldots,x_{n}]/I$$
is called the coordinate ring of X. If X is an affine variety, then I is prime, so the coordinate ring is an integral domain. The elements of the coordinate ring R are also called the regular functions or the polynomial functions on the variety. They form the ring of regular functions on the variety, or, simply, the ring of the variety; in more technical terms (see § Structure sheaf), it is the space of global sections of the structure sheaf of X.
The dimension of a variety is an integer associated to every variety, and even to every algebraic set, whose importance lies in the large number of its equivalent definitions (see Dimension of an algebraic variety).
== Examples ==
The complement of a hypersurface in an affine variety X (that is, X \ { f = 0 } for some polynomial f) is affine. Its defining equations are obtained by saturating the defining ideal of X by f. The coordinate ring is thus the localization $k[X][f^{-1}]$. For instance, for X = kn and f ∈ k[x1,..., xn], kn \ { f = 0 } is isomorphic to the hypersurface V(1 − xn+1f) in kn+1.
In particular, $k-0$ (the affine line with the origin removed) is affine, isomorphic to the curve $V(1-xy)$ in $k^{2}$ (see Algebraic group § Examples).
On the other hand, $k^{2}-0$ (the affine plane with the origin removed) is not an affine variety (compare this to Hartogs' extension theorem in complex analysis). See Spectrum of a ring § Non-affine examples.
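The isomorphism between the punctured line and the hyperbola $V(1-xy)$ can be made concrete with exact rational arithmetic. This is an illustrative sketch (the helper name `to_hyperbola` is ours, not standard notation):

```python
from fractions import Fraction

# The punctured line k \ {0} is isomorphic to the hyperbola V(1 - xy) in k^2:
# t |-> (t, 1/t) is a regular map onto the hyperbola, with inverse (x, y) |-> x.
def to_hyperbola(t: Fraction) -> tuple:
    return (t, 1 / t)

for t in [Fraction(1), Fraction(-2), Fraction(3, 5)]:
    x, y = to_hyperbola(t)
    assert 1 - x * y == 0   # the image lies on V(1 - xy)
    assert x == t           # projecting back recovers t
```

Note that the inverse map $(x, y)\mapsto x$ is polynomial, while $t\mapsto (t, 1/t)$ is regular on $k-0$ because $1/t$ is a regular function away from the origin.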
The subvarieties of codimension one in the affine space $k^{n}$ are exactly the hypersurfaces, that is, the varieties defined by a single polynomial.
The normalization of an irreducible affine variety is affine; the coordinate ring of the normalization is the integral closure of the coordinate ring of the variety. (Similarly, the normalization of a projective variety is a projective variety.)
== Rational points ==
For an affine variety $V\subseteq K^{n}$ over an algebraically closed field K, and a subfield k of K, a k-rational point of V is a point $p\in V\cap k^{n}$; that is, a point of V whose coordinates are elements of k. The collection of k-rational points of an affine variety V is often denoted $V(k)$.
Often, if the base field is the complex numbers C, points that are R-rational (where R is the real numbers) are called real points of the variety, and Q-rational points (Q the rational numbers) are often simply called rational points.
For instance, (1, 0) is a Q-rational and an R-rational point of the variety $V=V(x^{2}+y^{2}-1)\subseteq \mathbf {C} ^{2}$, as it is in V and all its coordinates are integers. The point (√2/2, √2/2) is a real point of V that is not Q-rational, and $(i,{\sqrt {2}})$ is a point of V that is not R-rational. This variety is called a circle, because the set of its R-rational points is the unit circle. It has infinitely many Q-rational points, namely the points
$\left({\frac {1-t^{2}}{1+t^{2}}},{\frac {2t}{1+t^{2}}}\right)$
where t is a rational number.
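The parametrization above can be checked with exact rational arithmetic; this sketch uses Python's fractions module (the helper name `circle_point` is ours):

```python
from fractions import Fraction

# Rational parametrization of the unit circle x^2 + y^2 = 1:
# for any rational t, the point ((1 - t^2)/(1 + t^2), 2t/(1 + t^2))
# has rational coordinates and lies on the circle.
def circle_point(t: Fraction) -> tuple:
    denom = 1 + t * t
    return ((1 - t * t) / denom, (2 * t) / denom)

for t in [Fraction(0), Fraction(1, 2), Fraction(3, 7), Fraction(-5, 4)]:
    x, y = circle_point(t)
    assert x * x + y * y == 1   # exact rational arithmetic, no rounding
```

Since distinct rationals t give distinct points, this exhibits infinitely many Q-rational points on the circle.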
The circle $V(x^{2}+y^{2}-3)\subseteq \mathbf {C} ^{2}$ is an example of an algebraic curve of degree two that has no Q-rational point. This can be deduced from the fact that, modulo 4, the sum of two squares cannot be 3.
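The modular obstruction is easy to verify by brute force: squares modulo 4 are only 0 or 1, so a sum of two squares is never congruent to 3 (mod 4). A minimal check:

```python
# Squares modulo 4 can only be 0 or 1, so a sum of two squares is
# congruent to 0, 1, or 2 (mod 4) -- never 3.
squares_mod4 = {(n * n) % 4 for n in range(4)}
sums_mod4 = {(a + b) % 4 for a in squares_mod4 for b in squares_mod4}

assert squares_mod4 == {0, 1}
assert 3 not in sums_mod4
```

Clearing denominators in a hypothetical rational solution of x² + y² = 3 leads to an integer equation p² + q² = 3r², which this congruence (together with a descent argument) rules out.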
It can be proved that an algebraic curve of degree two with a Q-rational point has infinitely many other Q-rational points; each such point is the second intersection point of the curve and a line with a rational slope passing through the rational point.
The complex variety $V(x^{2}+y^{2}+1)\subseteq \mathbf {C} ^{2}$ has no R-rational points, but has many complex points.
If V is an affine variety in C2 defined over the complex numbers C, the R-rational points of V can be drawn on a piece of paper or by graphing software. A typical example is the set of R-rational points of $V(y^{2}-x^{3}+x^{2}+16x)\subseteq \mathbf {C} ^{2}$.
== Singular points and tangent space ==
Let V be an affine variety defined by the polynomials $f_{1},\dots ,f_{r}\in k[x_{1},\dots ,x_{n}]$, and let $a=(a_{1},\dots ,a_{n})$ be a point of V.
The Jacobian matrix JV(a) of V at a is the matrix of the partial derivatives
${\frac {\partial f_{j}}{\partial x_{i}}}(a_{1},\dots ,a_{n}).$
The point a is regular if the rank of JV(a) equals the codimension of V, and singular otherwise.
If a is regular, the tangent space to V at a is the affine subspace of $k^{n}$ defined by the linear equations
$\sum _{i=1}^{n}{\frac {\partial f_{j}}{\partial x_{i}}}(a_{1},\dots ,a_{n})(x_{i}-a_{i})=0,\quad j=1,\dots ,r.$
If the point is singular, the affine subspace defined by these equations is also called a tangent space by some authors, while other authors say that there is no tangent space at a singular point.
A more intrinsic definition, which does not use coordinates, is given by the Zariski tangent space.
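As a concrete illustration (a sketch assuming the SymPy library; the helper `rank_at` is ours), the nodal cubic V(y² − x³ − x²) in k² has codimension 1, so a point of it is regular exactly when the Jacobian has rank 1 there:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = y**2 - x**3 - x**2          # nodal cubic: a curve (codimension 1) in k^2

# Jacobian of the single defining equation: a 1 x 2 matrix of partials.
J = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])

def rank_at(point):
    return J.subs({x: point[0], y: point[1]}).rank()

# The origin lies on the curve but the Jacobian has rank 0 there: singular.
assert f.subs({x: 0, y: 0}) == 0 and rank_at((0, 0)) == 0
# (-1, 0) also lies on the curve, and the rank equals the codimension 1: regular.
assert f.subs({x: -1, y: 0}) == 0 and rank_at((-1, 0)) == 1
```

At the regular point (−1, 0) the tangent-space equations above reduce to the single linear equation −(x + 1) = 0.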
== The Zariski topology ==
The affine algebraic sets of kn form the closed sets of a topology on kn, called the Zariski topology. This follows from the facts that $V(0)=k^{n}$, $V(1)=\emptyset$, $V(S)\cup V(T)=V(ST)$, and $V(S)\cap V(T)=V(S,T)$ (in fact, an arbitrary intersection of affine algebraic sets is an affine algebraic set).
The Zariski topology can also be described by way of basic open sets: the Zariski-open sets are unions of sets of the form $U_{f}=\{p\in k^{n}:f(p)\neq 0\}$ for $f\in k[x_{1},\ldots ,x_{n}]$. These basic open sets are the complements in kn of the closed sets $V(f)=\{p\in k^{n}:f(p)=0\}$, the zero loci of a single polynomial. If k is Noetherian (for instance, if k is a field or a principal ideal domain), then the polynomial ring $k[x_{1},\ldots ,x_{n}]$ is also Noetherian, so every ideal is finitely generated and every open set is a finite union of basic open sets.
If V is an affine subvariety of kn the Zariski topology on V is simply the subspace topology inherited from the Zariski topology on kn.
== Geometry–algebra correspondence ==
The geometric structure of an affine variety is linked in a deep way to the algebraic structure of its coordinate ring. Let I and J be ideals of k[V], the coordinate ring of an affine variety V. Let I(V) be the set of all polynomials in $k[x_{1},\ldots ,x_{n}]$ that vanish on V, and let ${\sqrt {I}}$ denote the radical of the ideal I, the set of polynomials f for which some power of f is in I. The reason that the base field is required to be algebraically closed is that affine varieties then automatically satisfy Hilbert's Nullstellensatz: for an ideal J in $k[x_{1},\ldots ,x_{n}]$, where k is an algebraically closed field, $I(V(J))={\sqrt {J}}$.
Radical ideals (ideals that are their own radical) of k[V] correspond to algebraic subsets of V. Indeed, for radical ideals I and J, $I\subseteq J$ if and only if $V(J)\subseteq V(I)$. Hence V(I) = V(J) if and only if I = J. Furthermore, the function taking an affine algebraic set W and returning I(W), the set of all functions that vanish on all points of W, is the inverse of the function assigning an algebraic set to a radical ideal, by the Nullstellensatz. Hence the correspondence between affine algebraic sets and radical ideals is a bijection. The coordinate ring of an affine algebraic set is reduced (nilpotent-free), as an ideal I in a ring R is radical if and only if the quotient ring R/I is reduced.
Prime ideals of the coordinate ring correspond to affine subvarieties. An affine algebraic set V(I) can be written as the union of two other algebraic sets if and only if I = JK for proper ideals J and K not equal to I (in which case $V(I)=V(J)\cup V(K)$). This is the case if and only if I is not prime. Affine subvarieties are precisely those whose coordinate ring is an integral domain. This is because an ideal is prime if and only if the quotient of the ring by the ideal is an integral domain.
Maximal ideals of k[V] correspond to points of V. If I and J are radical ideals, then $V(J)\subseteq V(I)$ if and only if $I\subseteq J$. As maximal ideals are radical, maximal ideals correspond to minimal algebraic sets (those that contain no proper algebraic subsets), which are points in V. If V is an affine variety with coordinate ring $R=k[x_{1},\ldots ,x_{n}]/\langle f_{1},\ldots ,f_{m}\rangle$, this correspondence becomes explicit through the map
$(a_{1},\ldots ,a_{n})\mapsto \langle {\overline {x_{1}-a_{1}}},\ldots ,{\overline {x_{n}-a_{n}}}\rangle ,$
where ${\overline {x_{i}-a_{i}}}$ denotes the image in the quotient algebra R of the polynomial $x_{i}-a_{i}$. An algebraic subset is a point if and only if the coordinate ring of the subset is a field, as the quotient of a ring by a maximal ideal is a field.
The following summarizes this correspondence, for algebraic subsets of an affine variety and ideals of the corresponding coordinate ring: algebraic subsets of V correspond to radical ideals of k[V]; subvarieties of V correspond to prime ideals of k[V]; and points of V correspond to maximal ideals of k[V].
== Products of affine varieties ==
A product of affine varieties can be defined using the isomorphism An × Am = An+m, then embedding the product in this new affine space. Let An and Am have coordinate rings k[x1,..., xn] and k[y1,..., ym] respectively, so that their product An+m has coordinate ring k[x1,..., xn, y1,..., ym]. Let V = V( f1,..., fN) be an algebraic subset of An, and W = V( g1,..., gM) an algebraic subset of Am. Then each fi is a polynomial in k[x1,..., xn], and each gj is in k[y1,..., ym]. The product of V and W is defined as the algebraic set V × W = V( f1,..., fN, g1,..., gM) in An+m. The product is irreducible if both V and W are irreducible.
The Zariski topology on An × Am is not the topological product of the Zariski topologies on the two spaces. Indeed, the product topology is generated by products of the basic open sets Uf = An − V( f ) and Tg = Am − V( g ). Hence, polynomials that are in k[x1,..., xn, y1,..., ym] but cannot be obtained as a product of a polynomial in k[x1,..., xn] with a polynomial in k[y1,..., ym] will define algebraic sets that are closed in the Zariski topology on An × Am , but not in the product topology.
== Morphisms of affine varieties ==
A morphism, or regular map, of affine varieties is a function between affine varieties that is polynomial in each coordinate: more precisely, for affine varieties V ⊆ kn and W ⊆ km, a morphism from V to W is a map φ : V → W of the form φ(a1, ..., an) = (f1(a1, ..., an), ..., fm(a1, ..., an)), where fi ∈ k[X1, ..., Xn] for each i = 1, ..., m. These are the morphisms in the category of affine varieties.
There is a one-to-one correspondence between morphisms of affine varieties over an algebraically closed field k, and homomorphisms of coordinate rings of affine varieties over k going in the opposite direction. Because of this, along with the fact that there is a one-to-one correspondence between affine varieties over k and their coordinate rings, the category of affine varieties over k is dual to the category of coordinate rings of affine varieties over k. The category of coordinate rings of affine varieties over k is precisely the category of finitely-generated, nilpotent-free algebras over k.
More precisely, for each morphism φ : V → W of affine varieties, there is a homomorphism φ# : k[W] → k[V] between the coordinate rings (going in the opposite direction), and for each such homomorphism, there is a morphism of the varieties associated to the coordinate rings. This can be shown explicitly: let V ⊆ kn and W ⊆ km be affine varieties with coordinate rings k[V] = k[X1, ..., Xn] / I and k[W] = k[Y1, ..., Ym] / J respectively. Let φ : V → W be a morphism. Indeed, a homomorphism between polynomial rings θ : k[Y1, ..., Ym] / J → k[X1, ..., Xn] / I factors uniquely through the ring k[X1, ..., Xn], and a homomorphism ψ : k[Y1, ..., Ym] / J → k[X1, ..., Xn] is determined uniquely by the images of Y1, ..., Ym. Hence, each homomorphism φ# : k[W] → k[V] corresponds uniquely to a choice of image for each Yi. Then given any morphism φ = (f1, ..., fm) from V to W, a homomorphism can be constructed φ# : k[W] → k[V] that sends Yi to
${\overline {f_{i}}}$, where ${\overline {f_{i}}}$ is the equivalence class of fi in k[V].
Similarly, for each homomorphism of the coordinate rings, a morphism of the affine varieties can be constructed in the opposite direction. Mirroring the paragraph above, a homomorphism φ# : k[W] → k[V] sends Yi to a polynomial
$f_{i}(X_{1},\dots ,X_{n})$
in k[V]. This corresponds to the morphism of varieties φ : V → W defined by φ(a1, ... , an) = (f1(a1, ..., an), ..., fm(a1, ..., an)).
== Structure sheaf ==
Equipped with the structure sheaf described below, an affine variety is a locally ringed space.
Given an affine variety X with coordinate ring A, the sheaf of k-algebras $\mathcal{O}_{X}$ is defined by letting $\mathcal{O}_{X}(U)=\Gamma (U,\mathcal{O}_{X})$ be the ring of regular functions on U.
Let D(f) = { x | f(x) ≠ 0 } for each f in A. These sets form a base for the topology of X, and so $\mathcal{O}_{X}$ is determined by its values on the open sets D(f). (See also: sheaf of modules#Sheaf associated to a module.)
The key fact, which relies on the Hilbert Nullstellensatz in an essential way, is the following claim: for every f in A, the ring of regular functions on D(f) is the localization $A[f^{-1}]$.
Proof: The inclusion ⊃ is clear. For the opposite inclusion, let g be in the left-hand side and let $J=\{h\in A\;|\;hg\in A\}$, which is an ideal. If x is in D(f), then, since g is regular near x, there is some open affine neighborhood D(h) of x such that $g\in k[D(h)]=A[h^{-1}]$; that is, $h^{m}g$ is in A for some m, so $h^{m}\in J$ while $h(x)\neq 0$, and thus x is not in V(J). In other words, $V(J)\subset \{x\;|\;f(x)=0\}$, and thus the Hilbert Nullstellensatz implies that f is in the radical of J; i.e., $f^{n}g\in A$ for some n. ∎
The claim, first of all, implies that X is a "locally ringed" space, since
$\mathcal{O}_{X,x}=\varinjlim _{f(x)\neq 0}A[f^{-1}]=A_{{\mathfrak {m}}_{x}}$
where ${\mathfrak {m}}_{x}=\{f\in A\;|\;f(x)=0\}$. Secondly, the claim implies that $\mathcal{O}_{X}$ is a sheaf; indeed, it says that if a function is regular (pointwise) on D(f), then it must be in the coordinate ring of D(f); that is, "regular-ness" can be patched together.
Hence, $(X,\mathcal{O}_{X})$ is a locally ringed space.
== Serre's theorem on affineness ==
A theorem of Serre gives a cohomological characterization of affineness: an algebraic variety is affine if and only if $H^{i}(X,F)=0$ for any $i>0$ and any quasi-coherent sheaf F on X (cf. Cartan's theorem B). This makes the cohomological study of an affine variety trivial, in sharp contrast to the projective case, in which the cohomology groups of line bundles are of central interest.
== Affine algebraic groups ==
An affine variety G over an algebraically closed field k is called an affine algebraic group if it has:
A multiplication μ: G × G → G, which is a regular morphism that follows the associativity axiom—that is, such that μ(μ(f, g), h) = μ(f, μ(g, h)) for all points f, g and h in G;
An identity element e such that μ(e, g) = μ(g, e) = g for every g in G;
An inverse morphism, a regular bijection ι: G → G such that μ(ι(g), g) = μ(g, ι(g)) = e for every g in G.
Together, these define a group structure on the variety. The above morphisms are often written using ordinary group notation: μ(f, g) can be written as f + g, f⋅g, or fg; the inverse ι(g) can be written as −g or g−1. Using the multiplicative notation, the associativity, identity and inverse laws can be rewritten as: f(gh) = (fg)h, ge = eg = g and gg−1 = g−1g = e.
The most prominent example of an affine algebraic group is GLn(k), the general linear group of degree n. This is the group of linear transformations of the vector space kn; if a basis of kn is fixed, this is equivalent to the group of n×n invertible matrices with entries in k. It can be shown that any affine algebraic group is isomorphic to a subgroup of GLn(k). For this reason, affine algebraic groups are often called linear algebraic groups.
Affine algebraic groups play an important role in the classification of finite simple groups, as the groups of Lie type are all sets of Fq-rational points of an affine algebraic group, where Fq is a finite field.
== Generalizations ==
If an author requires the base field of an affine variety to be algebraically closed (as this article does), then irreducible affine algebraic sets over non-algebraically closed fields are a generalization of affine varieties. This generalization notably includes affine varieties over the real numbers.
An open subset of an affine variety is called a quasi-affine variety, so every affine variety is quasi-affine. Any quasi-affine variety is in turn a quasi-projective variety.
Affine varieties play the role of local charts for algebraic varieties; that is to say, general algebraic varieties such as projective varieties are obtained by gluing affine varieties. Linear structures that are attached to varieties are also (trivially) affine varieties; e.g., tangent spaces, fibers of algebraic vector bundles.
The construction given in § Structure sheaf allows for a generalization that is used in scheme theory, the modern approach to algebraic geometry. An affine variety is (up to an equivalence of categories) a special case of an affine scheme, a locally-ringed space that is isomorphic to the spectrum of a commutative ring. Each affine variety has an affine scheme associated to it: if V(I) is an affine variety in kn with coordinate ring R = k[x1, ..., xn] / I, then the scheme corresponding to V(I) is Spec(R), the set of prime ideals of R. The affine scheme has "classical points", which correspond to points of the variety (and hence maximal ideals of the coordinate ring of the variety), and also a point for each closed subvariety of the variety (these points correspond to prime, non-maximal ideals of the coordinate ring). This gives a well-defined notion of the "generic point" of an affine variety, by assigning to each closed subvariety a point that is dense in the subvariety. More generally, an affine scheme is an affine variety if it is reduced, irreducible, and of finite type over an algebraically closed field k.
== Notes ==
== See also ==
Representations on coordinate rings
== References ==
The original article was written as a partial human translation of the corresponding French article.
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
Fulton, William (1969). Algebraic Curves (PDF). Addison-Wesley. ISBN 0-201-51010-3.
Milne, James S. (2017). "Algebraic Geometry" (PDF). www.jmilne.org. Retrieved 16 July 2021.
Milne, James S. Lectures on Étale cohomology
Mumford, David (1999). The Red Book of Varieties and Schemes: Includes the Michigan Lectures (1974) on Curves and Their Jacobians. Lecture Notes in Mathematics. Vol. 1358 (2nd ed.). Springer-Verlag. doi:10.1007/b62130. ISBN 354063293X.
Reid, Miles (1988). Undergraduate Algebraic Geometry. Cambridge University Press. ISBN 0-521-35662-8.
In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions.
A function is analytic if and only if, for every $x_{0}$ in its domain, its Taylor series about $x_{0}$ converges to the function in some neighborhood of $x_{0}$. This is stronger than merely being infinitely differentiable at $x_{0}$, and therefore having a well-defined Taylor series; the Fabius function provides an example of a function that is infinitely differentiable but not analytic.
== Definitions ==
Formally, a function $f$ is real analytic on an open set $D$ in the real line if for any $x_{0}\in D$ one can write
$f(x)=\sum _{n=0}^{\infty }a_{n}\left(x-x_{0}\right)^{n}=a_{0}+a_{1}(x-x_{0})+a_{2}(x-x_{0})^{2}+\cdots$
in which the coefficients $a_{0},a_{1},\dots$ are real numbers and the series converges to $f(x)$ for $x$ in a neighborhood of $x_{0}$.
Alternatively, a real analytic function is an infinitely differentiable function such that the Taylor series at any point $x_{0}$ in its domain,
$T(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(x_{0})}{n!}}(x-x_{0})^{n},$
converges to $f(x)$ pointwise for $x$ in a neighborhood of $x_{0}$. The set of all real analytic functions on a given set $D$ is often denoted by $\mathcal{C}^{\,\omega }(D)$, or just by $\mathcal{C}^{\,\omega }$ if the domain is understood.
A function $f$ defined on some subset of the real line is said to be real analytic at a point $x$ if there is a neighborhood $D$ of $x$ on which $f$ is real analytic.
The definition of a complex analytic function is obtained by replacing, in the definitions above, "real" with "complex" and "real line" with "complex plane". A function is complex analytic if and only if it is holomorphic, i.e., complex differentiable. For this reason the terms "holomorphic" and "analytic" are often used interchangeably for such functions.
In complex analysis, a function is called analytic in an open set "U" if it is (complex) differentiable at each point in "U" and its complex derivative is continuous on "U".
== Examples ==
Typical examples of analytic functions are
The following elementary functions:
All polynomials: if a polynomial has degree n, all terms of degree larger than n in its Taylor series expansion vanish, so the series is trivially convergent. Furthermore, every polynomial is its own Maclaurin series.
The exponential function is analytic. Any Taylor series for this function converges not only for x close enough to x0 (as in the definition) but for all values of x (real or complex).
The trigonometric functions, logarithm, and the power functions are analytic on any open set of their domain.
Most special functions (at least in some range of the complex plane):
hypergeometric functions
Bessel functions
gamma functions
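The everywhere-convergence of the exponential series mentioned above can be checked numerically. This is an illustrative sketch (the helper `exp_partial_sum` is ours):

```python
import math

# Partial sums of the Maclaurin series of exp converge for every x,
# not just near the expansion point 0.
def exp_partial_sum(x: float, terms: int) -> float:
    return sum(x**n / math.factorial(n) for n in range(terms))

for x in (-5.0, 0.5, 10.0):
    approx = exp_partial_sum(x, 60)
    # relative error shrinks rapidly once the factorial dominates x**n
    assert abs(approx - math.exp(x)) < 1e-9 * math.exp(abs(x))
```

The same experiment at x = 10, far from the expansion point, still converges: the factorial in the denominator eventually outgrows any power of x.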
Typical examples of functions that are not analytic are
The absolute value function when defined on the set of real numbers or complex numbers is not everywhere analytic because it is not differentiable at 0.
Piecewise defined functions (functions given by different formulae in different regions) are typically not analytic where the pieces meet.
The complex conjugate function z → z* is not complex analytic, although its restriction to the real line is the identity function and therefore real analytic, and it is real analytic as a function from $\mathbb {R} ^{2}$ to $\mathbb {R} ^{2}$.
Other non-analytic smooth functions, and in particular any smooth function $f$ with compact support, i.e. $f\in \mathcal{C}_{0}^{\infty }(\mathbb {R} ^{n})$, cannot be analytic on $\mathbb {R} ^{n}$.
== Alternative characterizations ==
The following conditions are equivalent:

1. $f$ is real analytic on an open set $D$.
2. There is a complex analytic extension of $f$ to an open set $G\subset \mathbb {C}$ which contains $D$.
3. $f$ is smooth and, for every compact set $K\subset D$, there exists a constant $C$ such that for every $x\in K$ and every non-negative integer $k$ the following bound holds:
$\left|{\frac {d^{k}f}{dx^{k}}}(x)\right|\leq C^{k+1}k!$
Complex analytic functions are exactly the holomorphic functions, and are thus much more easily characterized.
For the case of an analytic function with several variables (see below), the real analyticity can be characterized using the Fourier–Bros–Iagolnitzer transform.
In the multivariable case, real analytic functions satisfy a direct generalization of the third characterization. Let $U\subset \mathbb {R} ^{n}$ be an open set, and let $f:U\to \mathbb {R}$. Then $f$ is real analytic on $U$ if and only if $f\in C^{\infty }(U)$ and, for every compact $K\subseteq U$, there exists a constant $C$ such that for every multi-index $\alpha \in \mathbb {Z} _{\geq 0}^{n}$ the following bound holds:
$\sup _{x\in K}\left|{\frac {\partial ^{\alpha }f}{\partial x^{\alpha }}}(x)\right|\leq C^{|\alpha |+1}\alpha !$
== Properties of analytic functions ==
The sums, products, and compositions of analytic functions are analytic.
The reciprocal of an analytic function that is nowhere zero is analytic, as is the inverse of an invertible analytic function whose derivative is nowhere zero. (See also the Lagrange inversion theorem.)
Any analytic function is smooth, that is, infinitely differentiable. The converse is not true for real functions; in fact, in a certain sense, the real analytic functions are sparse compared to all real infinitely differentiable functions. For the complex numbers, the converse does hold, and in fact any function differentiable once on an open set is analytic on that set (see "analyticity and differentiability" below).
For any open set $\Omega \subseteq \mathbb {C}$, the set A(Ω) of all analytic functions $u:\Omega \to \mathbb {C}$ is a Fréchet space with respect to uniform convergence on compact sets. The fact that uniform limits on compact sets of analytic functions are analytic is an easy consequence of Morera's theorem. The set $A_{\infty }(\Omega )$ of all bounded analytic functions with the supremum norm is a Banach space.
A polynomial cannot be zero at too many points unless it is the zero polynomial (more precisely, the number of zeros is at most the degree of the polynomial). A similar but weaker statement holds for analytic functions. If the set of zeros of an analytic function ƒ has an accumulation point inside its domain, then ƒ is zero everywhere on the connected component containing the accumulation point. In other words, if (rn) is a sequence of distinct numbers such that ƒ(rn) = 0 for all n and this sequence converges to a point r in the domain D, then ƒ is identically zero on the connected component of D containing r. This is known as the identity theorem.
Also, if all the derivatives of an analytic function at a point are zero, the function is constant on the corresponding connected component.
These statements imply that while analytic functions do have more degrees of freedom than polynomials, they are still quite rigid.
== Analyticity and differentiability ==
As noted above, any analytic function (real or complex) is infinitely differentiable (also known as smooth, or $\mathcal{C}^{\infty }$). (Note that this differentiability is in the sense of real variables; compare complex derivatives below.) There exist smooth real functions that are not analytic: see non-analytic smooth function. In fact there are many such functions.
The situation is quite different when one considers complex analytic functions and complex derivatives. It can be proved that any complex function differentiable (in the complex sense) in an open set is analytic. Consequently, in complex analysis, the term analytic function is synonymous with holomorphic function.
== Real versus complex analytic functions ==
Real and complex analytic functions have important differences (as is already apparent from their different relationships with differentiability). Analyticity of complex functions is a more restrictive property, as it has more restrictive necessary conditions, and complex analytic functions have more structure than their real-line counterparts.
According to Liouville's theorem, any bounded complex analytic function defined on the whole complex plane is constant. The corresponding statement for real analytic functions, with the complex plane replaced by the real line, is clearly false; this is illustrated by
$f(x)={\frac {1}{x^{2}+1}}.$
Also, if a complex analytic function is defined in an open ball around a point x0, its power series expansion at x0 is convergent in the whole open ball (holomorphic functions are analytic). This statement for real analytic functions (with open ball meaning an open interval of the real line rather than an open disk of the complex plane) is not true in general; the function of the example above gives an example for x0 = 0 and a ball of radius exceeding 1, since the power series 1 − x2 + x4 − x6... diverges for |x| ≥ 1.
Any real analytic function on some open set on the real line can be extended to a complex analytic function on some open set of the complex plane. However, not every real analytic function defined on the whole real line can be extended to a complex function defined on the whole complex plane. The function f(x) defined in the paragraph above is a counterexample, as it is not defined for x = ±i. This explains why the Taylor series of f(x) diverges for |x| > 1, i.e., the radius of convergence is 1 because the complexified function has a pole at distance 1 from the evaluation point 0 and no further poles within the open disc of radius 1 around the evaluation point.
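The behavior of the series 1 − x² + x⁴ − x⁶ + ⋯ can be observed numerically: inside the unit interval the partial sums converge to 1/(1 + x²), while at |x| > 1 the terms grow without bound. This is an illustrative sketch (the helper `series_partial_sum` is ours):

```python
# The Maclaurin series of 1/(1 + x^2) is 1 - x^2 + x^4 - x^6 + ...
# Its radius of convergence is 1: the complexified function has poles at +/- i.
def series_partial_sum(x: float, terms: int) -> float:
    return sum((-1)**n * x**(2 * n) for n in range(terms))

# Inside the radius of convergence the partial sums approach 1/(1 + x^2)...
x = 0.5
assert abs(series_partial_sum(x, 40) - 1 / (1 + x**2)) < 1e-12
# ...while for |x| > 1 the individual terms blow up, so the series diverges.
assert abs(1.5**(2 * 30)) > 1e10
```

This matches the pole explanation: the distance from the expansion point 0 to the nearest singularity ±i of the complexified function is exactly 1.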
== Analytic functions of several variables ==
One can define analytic functions in several variables by means of power series in those variables (see power series). Analytic functions of several variables have some of the same properties as analytic functions of one variable. However, especially for complex analytic functions, new and interesting phenomena show up in 2 or more complex dimensions:
Zero sets of complex analytic functions in more than one variable are never discrete. This can be proved by Hartogs's extension theorem.
In one complex variable, every connected open set is the domain of holomorphy of some single-valued function. In several complex variables, however, only some connected open sets are domains of holomorphy. The characterization of domains of holomorphy leads to the notion of pseudoconvexity.
== See also ==
Cauchy–Riemann equations
Holomorphic function
Paley–Wiener theorem
Quasi-analytic function
Infinite compositions of analytic functions
Non-analytic smooth function
== Notes ==
== References ==
Conway, John B. (1978). Functions of One Complex Variable I. Graduate Texts in Mathematics 11 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90328-6.
Krantz, Steven; Parks, Harold R. (2002). A Primer of Real Analytic Functions (2nd ed.). Birkhäuser. ISBN 0-8176-4264-1.
Gamelin, Theodore W. (2004). Complex Analysis. Springer. ISBN 9788181281142.
== External links ==
"Analytic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Analytic Function". MathWorld.
Solver for all zeros of a complex analytic function that lie within a rectangular region by Ivan B. Ivanov | Wikipedia/Analytic_functions |
In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety.
The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is of codimension one subvarieties, called divisors. The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space.
While divisors on higher-dimensional varieties continue to play an important role in determining the structure of the variety, on varieties of dimension two or more there are also higher codimension cycles to consider. The behavior of these cycles is strikingly different from that of divisors. For example, every curve has a constant N such that every divisor of degree zero is linearly equivalent to a difference of two effective divisors of degree at most N. David Mumford proved that, on a smooth complete complex algebraic surface S with positive geometric genus, the analogous statement for the group {\displaystyle \operatorname {CH} ^{2}(S)} of rational equivalence classes of codimension two cycles in S is false. The hypothesis that the geometric genus is positive essentially means (by the Lefschetz theorem on (1,1)-classes) that the cohomology group {\displaystyle H^{2}(S)} contains transcendental information, and in effect Mumford's theorem implies that, despite {\displaystyle \operatorname {CH} ^{2}(S)} having a purely algebraic definition, it shares transcendental information with {\displaystyle H^{2}(S)}. Mumford's theorem has since been greatly generalized.
The behavior of algebraic cycles ranks among the most important open questions in modern mathematics. The Hodge conjecture, one of the Clay Mathematics Institute's Millennium Prize Problems, predicts that the topology of a complex algebraic variety forces the existence of certain algebraic cycles. The Tate conjecture makes a similar prediction for étale cohomology. Alexander Grothendieck's standard conjectures on algebraic cycles yield enough cycles to construct his category of motives and would imply that algebraic cycles play a vital role in any cohomology theory of algebraic varieties. Conversely, Alexander Beilinson proved that the existence of a category of motives implies the standard conjectures. Additionally, cycles are connected to algebraic K-theory by Bloch's formula, which expresses groups of cycles modulo rational equivalence as the cohomology of K-theory sheaves.
== Definition ==
Let X be a scheme of finite type over a field k. An algebraic r-cycle on X is a formal linear combination
{\displaystyle \sum n_{i}[V_{i}]}
of r-dimensional closed integral k-subschemes of X. The coefficient ni is the multiplicity of Vi. The set of all r-cycles is the free abelian group
{\displaystyle Z_{r}X=\bigoplus _{V\subseteq X}\mathbf {Z} \cdot [V],}
where the sum is over closed integral subschemes V of X. The groups of cycles for varying r together form a group
{\displaystyle Z_{*}X=\bigoplus _{r}Z_{r}X.}
This is called the group of algebraic cycles, and any element is called an algebraic cycle. A cycle is effective or positive if all its coefficients are non-negative.
Closed integral subschemes of X are in one-to-one correspondence with the scheme-theoretic points of X under the map that, in one direction, takes each subscheme to its generic point, and in the other direction, takes each point to the unique reduced subscheme supported on the closure of the point. Consequently
{\displaystyle Z_{*}X} can also be described as the free abelian group on the points of X.
A cycle {\displaystyle \alpha } is rationally equivalent to zero, written {\displaystyle \alpha \sim 0}, if there are a finite number of {\displaystyle (r+1)}-dimensional subvarieties {\displaystyle W_{i}} of X and non-zero rational functions {\displaystyle r_{i}\in k(W_{i})^{\times }} such that {\displaystyle \alpha =\sum [\operatorname {div} _{W_{i}}(r_{i})]}, where {\displaystyle \operatorname {div} _{W_{i}}} denotes the divisor of a rational function on Wi. The cycles rationally equivalent to zero are a subgroup {\displaystyle Z_{r}(X)_{\text{rat}}\subseteq Z_{r}(X)}, and the group of r-cycles modulo rational equivalence is the quotient
{\displaystyle A_{r}(X)=Z_{r}(X)/Z_{r}(X)_{\text{rat}}.}
This group is also denoted {\displaystyle \operatorname {CH} _{r}(X)}. Elements of the group {\displaystyle A_{*}(X)=\bigoplus _{r}A_{r}(X)} are called cycle classes on X. Cycle classes are said to be effective or positive if they can be represented by an effective cycle.
If X is smooth, projective, and of pure dimension N, the above groups are sometimes reindexed cohomologically as
{\displaystyle Z^{N-r}X=Z_{r}X} and {\displaystyle A^{N-r}X=A_{r}X.}
In this case, {\displaystyle A^{*}X} is called the Chow ring of X because it has a multiplication operation given by the intersection product.
There are several variants of the above definition. We may substitute another ring for the integers as our coefficient ring. The case of rational coefficients is widely used. Working with families of cycles over a base, or using cycles in arithmetic situations, requires a relative setup. Let {\displaystyle \phi \colon X\to S}, where S is a regular Noetherian scheme. An r-cycle is a formal sum of closed integral subschemes of X whose relative dimension is r; here the relative dimension of {\displaystyle Y\subseteq X} is the transcendence degree of {\displaystyle k(Y)} over {\displaystyle k({\overline {\phi (Y)}})} minus the codimension of {\displaystyle {\overline {\phi (Y)}}} in S.
Rational equivalence can also be replaced by several other coarser equivalence relations on algebraic cycles. Other equivalence relations of interest include algebraic equivalence, homological equivalence for a fixed cohomology theory (such as singular cohomology or étale cohomology), numerical equivalence, as well as all of the above modulo torsion. These equivalence relations have (partially conjectural) applications to the theory of motives.
== Flat pullback and proper pushforward ==
There is a covariant and a contravariant functoriality of the group of algebraic cycles. Let f : X → X' be a map of varieties.
If f is flat of some constant relative dimension (i.e. all fibers have the same dimension), we can define for any subvariety Y' ⊂ X':
{\displaystyle f^{*}([Y'])=[f^{-1}(Y')]\,\!}
which by assumption has the same codimension as Y′.
Conversely, if f is proper, for Y a subvariety of X the pushforward is defined to be
{\displaystyle f_{*}([Y])=n[f(Y)]\,\!}
where n is the degree of the extension of function fields [k(Y) : k(f(Y))] if the restriction of f to Y is finite and 0 otherwise.
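As a worked illustration of this degree convention (the specific map is chosen here for illustration, not taken from the article), consider the squaring map on the affine line over an algebraically closed field of characteristic not 2:

```latex
% Proper pushforward along f: \mathbb{A}^1 \to \mathbb{A}^1, \; x \mapsto x^2.
%
% For the 1-cycle given by the whole source line, the function-field extension
% k(x)/k(x^2) has degree 2, so
f_*\big[\mathbb{A}^1\big] = 2\,\big[\mathbb{A}^1\big].
%
% For 0-cycles: a closed point a \neq 0 has fibre \{\sqrt{a},\,-\sqrt{a}\},
% each point with trivial residue-field extension, hence
f_*\big([\sqrt{a}\,] + [-\sqrt{a}\,]\big) = [a] + [a] = 2\,[a].
```

In both cases the pushforward records the degree of the covering, which is what makes it compatible with degrees of 0-cycles.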
By linearity, these definitions extend to homomorphisms of abelian groups
{\displaystyle f^{*}\colon Z^{k}(X')\to Z^{k}(X)\quad {\text{and}}\quad f_{*}\colon Z_{k}(X)\to Z_{k}(X')\,\!}
(the latter by virtue of the convention that the pushforward vanishes when the restriction of f to Y is not finite). See Chow ring for a discussion of the functoriality related to the ring structure.
== See also ==
divisor (algebraic geometry)
Relative cycle
== References ==
Fulton, William (1998), Intersection theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. Third series. A Series of Modern Surveys in Mathematics, vol. 2, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98549-7, MR 1644323
Gordon, B. Brent; Lewis, James D.; Müller-Stach, Stefan; Saito, Shuji; Yui, Noriko, eds. (2000), The arithmetic and geometry of algebraic cycles: proceedings of the CRM summer school, June 7–19, 1998, Banff, Alberta, Canada, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1954-8 | Wikipedia/Algebraic_cycle |
In number theory and algebraic geometry, the Tate conjecture is a 1963 conjecture of John Tate that would describe the algebraic cycles on a variety in terms of a more computable invariant, the Galois representation on étale cohomology. The conjecture is a central problem in the theory of algebraic cycles. It can be considered an arithmetic analog of the Hodge conjecture.
== Statement of the conjecture ==
Let V be a smooth projective variety over a field k which is finitely generated over its prime field. Let ks be a separable closure of k, and let G be the absolute Galois group Gal(ks/k) of k. Fix a prime number ℓ which is invertible in k. Consider the ℓ-adic cohomology groups (coefficients in the ℓ-adic integers Zℓ, scalars then extended to the ℓ-adic numbers Qℓ) of the base extension of V to ks; these groups are representations of G. For any i ≥ 0, a codimension-i subvariety of V (understood to be defined over k) determines an element of the cohomology group
{\displaystyle H^{2i}(V_{k_{s}},\mathbf {Q} _{\ell }(i))=W}
which is fixed by G. Here Qℓ(i ) denotes the ith Tate twist, which means that this representation of the Galois group G is tensored with the ith power of the cyclotomic character.
The Tate conjecture states that the subspace WG of W fixed by the Galois group G is spanned, as a Qℓ-vector space, by the classes of codimension-i subvarieties of V. An algebraic cycle means a finite linear combination of subvarieties; so an equivalent statement is that every element of WG is the class of an algebraic cycle on V with Qℓ coefficients.
== Known cases ==
The Tate conjecture for divisors (algebraic cycles of codimension 1) is a major open problem. For example, let f : X → C be a morphism from a smooth projective surface onto a smooth projective curve over a finite field. Suppose that the generic fiber F of f, which is a curve over the function field k(C), is smooth over k(C). Then the Tate conjecture for divisors on X is equivalent to the Birch and Swinnerton-Dyer conjecture for the Jacobian variety of F. By contrast, the Hodge conjecture for divisors on any smooth complex projective variety is known (the Lefschetz (1,1)-theorem).
Probably the most important known case is that the Tate conjecture is true for divisors on abelian varieties. This is a theorem of Tate for abelian varieties over finite fields, and of Faltings for abelian varieties over number fields, part of Faltings's solution of the Mordell conjecture. Zarhin extended these results to any finitely generated base field. The Tate conjecture for divisors on abelian varieties implies the Tate conjecture for divisors on any product of curves C1 × ... × Cn.
The (known) Tate conjecture for divisors on abelian varieties is equivalent to a powerful statement about homomorphisms between abelian varieties. Namely, for any abelian varieties A and B over a finitely generated field k, the natural map
{\displaystyle {\text{Hom}}(A,B)\otimes _{\mathbf {Z} }\mathbf {Q} _{\ell }\to {\text{Hom}}_{G}\left(H_{1}\left(A_{k_{s}},\mathbf {Q} _{\ell }\right),H_{1}\left(B_{k_{s}},\mathbf {Q} _{\ell }\right)\right)}
is an isomorphism. In particular, an abelian variety A is determined up to isogeny by the Galois representation on its Tate module H1(Aks, Zℓ).
The Tate conjecture also holds for K3 surfaces over finitely generated fields of characteristic not 2. (On a surface, the nontrivial part of the conjecture is about divisors.) In characteristic zero, the Tate conjecture for K3 surfaces was proved by André and Tankeev. For K3 surfaces over finite fields of characteristic not 2, the Tate conjecture was proved by Nygaard, Ogus, Charles, Madapusi Pera, and Maulik.
Totaro (2017) surveys known cases of the Tate conjecture.
== Related conjectures ==
Let X be a smooth projective variety over a finitely generated field k. The semisimplicity conjecture predicts that the representation of the Galois group G = Gal(ks/k) on the ℓ-adic cohomology of X is semisimple (that is, a direct sum of irreducible representations). For k of characteristic 0, Moonen (2017) showed that the Tate conjecture (as stated above) implies the semisimplicity of
{\displaystyle H^{i}\left(X\times _{k}{\overline {k}},\mathbf {Q} _{\ell }(n)\right).}
For k finite of order q, Tate showed that the Tate conjecture plus the semisimplicity conjecture would imply the strong Tate conjecture, namely that the order of the pole of the zeta function Z(X, t) at t = q^{−j} is equal to the rank of the group of algebraic cycles of codimension j modulo numerical equivalence.
Like the Hodge conjecture, the Tate conjecture would imply most of Grothendieck's standard conjectures on algebraic cycles. Namely, it would imply the Lefschetz standard conjecture (that the inverse of the Lefschetz isomorphism is defined by an algebraic correspondence); that the Künneth components of the diagonal are algebraic; and that numerical equivalence and homological equivalence of algebraic cycles are the same.
== Notes ==
== References ==
André, Yves (1996), "On the Shafarevich and Tate conjectures for hyper-Kähler varieties", Mathematische Annalen, 305: 205–248, doi:10.1007/BF01444219, MR 1391213, S2CID 122949797
Faltings, Gerd (1983), "Endlichkeitssätze für abelsche Varietäten über Zahlkörpern", Inventiones Mathematicae, 73 (3): 349–366, Bibcode:1983InMat..73..349F, doi:10.1007/BF01388432, MR 0718935, S2CID 121049418
Madapusi Pera, K. (2013), "The Tate conjecture for K3 surfaces in odd characteristic", Inventiones Mathematicae, 201 (2): 625–668, arXiv:1301.6326, Bibcode:2013arXiv1301.6326M, doi:10.1007/s00222-014-0557-5, S2CID 253746655
Moonen, Ben (2017), A remark on the Tate conjecture, arXiv:1709.04489v1
Tate, John (1965), "Algebraic cycles and poles of zeta functions", in Schilling, O. F. G. (ed.), Arithmetical Algebraic Geometry, New York: Harper and Row, pp. 93–110, MR 0225778
Tate, John (1966), "Endomorphisms of abelian varieties over finite fields", Inventiones Mathematicae, 2 (2): 134–144, Bibcode:1966InMat...2..134T, doi:10.1007/bf01404549, MR 0206004, S2CID 245902
Tate, John (1994), "Conjectures on algebraic cycles in ℓ-adic cohomology", Motives, Proceedings of Symposia in Pure Mathematics, vol. 55, American Mathematical Society, pp. 71–83, ISBN 0-8218-1636-5, MR 1265523
Ulmer, Douglas (2014), "Curves and Jacobians over function fields", Arithmetic Geometry over Global Function Fields, Advanced Courses in Mathematics - CRM Barcelona, Birkhäuser, pp. 283–337, doi:10.1007/978-3-0348-0853-8, ISBN 978-3-0348-0852-1
Totaro, Burt (2017), "Recent progress on the Tate conjecture", Bulletin of the American Mathematical Society, New Series, 54 (4): 575–590, doi:10.1090/bull/1588
== External links ==
James Milne, The Tate conjecture over finite fields (AIM talk). | Wikipedia/Tate_conjecture |
In geometry and mechanics, a displacement is a vector whose length is the shortest distance from the initial to the final position of a point P undergoing motion. It quantifies both the distance and direction of the net or total motion along a straight line from the initial position to the final position of the point trajectory. A displacement may be identified with the translation that maps the initial position to the final position. Displacement is the shift in location when an object in motion changes from one position to another.
For motion over a given interval of time, the displacement divided by the length of the time interval defines the average velocity (a vector), whose magnitude is the average speed (a scalar quantity).
== Formulation ==
A displacement may be formulated as a relative position (resulting from the motion), that is, as the final position xf of a point relative to its initial position xi. The corresponding displacement vector can be defined as the difference between the final and initial positions:
{\displaystyle s=x_{\textrm {f}}-x_{\textrm {i}}=\Delta {x}}
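The definition above, together with the average-velocity relation from the lead, can be sketched numerically (the positions and duration below are illustrative values, not from the article):

```python
import numpy as np

# Displacement as the difference of final and initial position vectors:
# s = x_f - x_i.
x_i = np.array([0.0, 0.0])       # initial position (m)
x_f = np.array([3.0, 4.0])       # final position (m)

s = x_f - x_i                    # displacement vector (m)
distance = np.linalg.norm(s)     # its magnitude: shortest distance (m)

dt = 2.0                         # duration of the motion (s)
v_avg = s / dt                   # average velocity (vector, m/s)
speed_avg = distance / dt        # magnitude of the average velocity (m/s)

print(s, distance, speed_avg)    # [3. 4.] 5.0 2.5
```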
== Rigid body ==
In dealing with the motion of a rigid body, the term displacement may also include the rotations of the body. In this case, the displacement of a particle of the body is called linear displacement (displacement along a line), while the rotation of the body is called angular displacement.
== Derivatives ==
For a position vector {\displaystyle \mathbf {s} } that is a function of time {\displaystyle t}, the derivatives can be computed with respect to {\displaystyle t}. The first two derivatives are frequently encountered in physics.
Velocity
{\displaystyle \mathbf {v} ={\frac {d\mathbf {s} }{dt}}}
Acceleration
{\displaystyle \mathbf {a} ={\frac {d\mathbf {v} }{dt}}={\frac {d^{2}\mathbf {s} }{dt^{2}}}}
Jerk
{\displaystyle \mathbf {j} ={\frac {d\mathbf {a} }{dt}}={\frac {d^{2}\mathbf {v} }{dt^{2}}}={\frac {d^{3}\mathbf {s} }{dt^{3}}}}
These common names correspond to terminology used in basic kinematics. By extension, the higher order derivatives can be computed in a similar fashion. Study of these higher order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite series, enabling several analytical techniques in engineering and physics. The fourth order derivative is called jounce.
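The chain of derivatives above can be approximated numerically; a minimal sketch using finite differences, with s(t) = t³ chosen here because its derivatives (v = 3t², a = 6t, j = 6) are easy to check by hand:

```python
import numpy as np

# Velocity, acceleration and jerk as successive time derivatives of the
# position s(t), approximated by central finite differences (np.gradient).
t = np.linspace(0.0, 2.0, 2001)
s = t ** 3

v = np.gradient(s, t)   # ds/dt       -> 3 t^2
a = np.gradient(v, t)   # d^2 s/dt^2  -> 6 t
j = np.gradient(a, t)   # d^3 s/dt^3  -> 6

# At t = 1 (index 1000) the exact values are v = 3, a = 6, j = 6.
print(v[1000], a[1000], j[1000])
```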
== Discussion ==
In considering motions of objects over time, the instantaneous velocity of the object is the rate of change of the displacement as a function of time. The instantaneous speed, then, is distinct from velocity, or the time rate of change of the distance travelled along a specific path. The velocity may be equivalently defined as the time rate of change of the position vector. If one considers a moving initial position, or equivalently a moving origin (e.g. an initial position or origin which is fixed to a train wagon, which in turn moves on its rail track), the velocity of P (e.g. a point representing the position of a passenger walking on the train) may be referred to as a relative velocity; this is opposed to an absolute velocity, which is computed with respect to a point and coordinate axes which are considered to be at rest (an inertial frame of reference such as, for instance, a point fixed on the floor of the train station and the usual vertical and horizontal directions).
== See also ==
Affine space
Deformation (mechanics)
Displacement field (mechanics)
Equipollence (geometry)
Motion vector
Position vector
Radial velocity
Screw displacement
== References ==
== External links ==
Media related to Displacement vector at Wikimedia Commons | Wikipedia/Displacement_(physics) |
In physics, a body force is a force that acts throughout the volume of a body. Forces due to gravity, electric fields and magnetic fields are examples of body forces. Body forces contrast with contact forces or surface forces, which are exerted on the surface of an object.
Fictitious forces such as the centrifugal force, Euler force, and the Coriolis effect are other examples of body forces.
== Definition ==
=== Qualitative ===
A body force is simply a type of force, and so it has the same dimensions as force, [M][L][T]−2. However, it is often convenient to talk about a body force in terms of either the force per unit volume or the force per unit mass. If the force per unit volume is of interest, it is referred to as the force density throughout the system.
A body force is distinct from a contact force in that the force does not require contact for transmission. Thus, common forces associated with pressure gradients and conductive and convective heat transmission are not body forces as they require contact between systems to exist. Radiation heat transfer, on the other hand, is a perfect example of a body force.
More examples of common body forces include:
Gravity,
Electric forces acting on an object charged throughout its volume,
Magnetic forces acting on currents within an object, such as the braking force that results from eddy currents,
Fictitious forces (or inertial forces) can be viewed as body forces. Common inertial forces are,
Centrifugal force,
Coriolis force,
Euler force (or transverse force), which occurs in a rotating reference frame when the rate of rotation of the frame is changing
However, fictitious forces are not actually forces. Rather they are corrections to Newton's second law when it is formulated in an accelerating reference frame. (Gravity can also be considered a fictitious force in the context of General Relativity.)
=== Quantitative ===
The body force density is defined so that the volume integral (throughout a volume of interest) of it gives the total force acting throughout the body:
{\displaystyle \mathbf {F} _{\mathrm {body} }=\int \limits _{V}\mathbf {f} (\mathbf {r} )\mathrm {d} V\,,}
where dV is an infinitesimal volume element, and f is the external body force density field acting on the system.
== Acceleration ==
Like any other force, a body force will cause an object to accelerate. For a non-rigid object, Newton's second law applied to a small volume element is
{\displaystyle \mathbf {f} (\mathbf {r} )=\rho (\mathbf {r} )\mathbf {a} (\mathbf {r} )},
where ρ(r) is the mass density of the substance, ƒ the force density, and a(r) is acceleration, all at point r.
== The case of gravity ==
In the case of a body in the gravitational field on a planet surface, a(r) is nearly constant (g) and uniform. Near the Earth
{\displaystyle g=9.81\ \mathrm {m\,s^{-2}} }.
In this case simply
{\displaystyle \mathbf {F} _{\mathrm {body} }=\int \limits _{V}\rho (\mathbf {r} )\mathbf {g} \mathrm {d} V=\int \limits _{V}\rho (\mathbf {r} )\mathrm {d} V\cdot \mathbf {g} =m\mathbf {g} }
where m is the mass of the body.
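A minimal numerical sketch of this reduction (the block dimensions and density are illustrative values, not from the article): for a uniform body, summing ρg over volume cells gives the same total force as mg.

```python
# Total gravitational body force on a uniform rectangular block, computed
# two ways: as a Riemann-sum approximation of the volume integral of
# rho * g, and as the closed form m * g.
rho = 2700.0                 # density (kg/m^3), roughly aluminium
g = 9.81                     # gravitational acceleration magnitude (m/s^2)
lx, ly, lz = 0.1, 0.2, 0.3   # block dimensions (m)

# Riemann sum of F = integral_V rho * g dV over n^3 equal cells. Since the
# density is uniform, every cell contributes the same amount.
n = 20
dV = (lx / n) * (ly / n) * (lz / n)
F_integral = sum(rho * g * dV for _ in range(n ** 3))

# Closed form: the integral of rho dV is the mass m, so F = m g.
m = rho * lx * ly * lz
F_closed = m * g

print(F_integral, F_closed)  # both about 158.9 N
```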
== See also ==
Action at a distance
Fictitious force
Force density
Non-contact force
Normal force
Surface force
== References == | Wikipedia/Body_force |
In mechanics, compression is the application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. It is contrasted with tension or traction, the application of balanced outward ("pulling") forces; and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration.
In uniaxial compression, the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. The compressive forces may also be applied in multiple directions; for example inwards along the edges of a plate or all over the side surface of a cylinder, so as to reduce its area (biaxial compression), or inwards over the entire surface of a body, so as to reduce its volume.
Technically, a material is under a state of compression, at some specific point and along a specific direction {\displaystyle x}, if the normal component of the stress vector across a surface with normal direction {\displaystyle x} is directed opposite to {\displaystyle x}. If the stress vector itself is opposite to {\displaystyle x}, the material is said to be under normal compression or pure compressive stress along {\displaystyle x}. In a solid, the amount of compression generally depends on the direction {\displaystyle x}, and the material may be under compression along some directions but under traction along others. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression, hydrostatic compression, or bulk compression. This is the only type of static compression that liquids and gases can bear. It affects the volume of the material, as quantified by the bulk modulus and the volumetric strain.
The inverse process of compression is called decompression, dilation, or expansion, in which the object enlarges or increases in volume.
In a mechanical wave, which is longitudinal, the medium is displaced in the wave's direction, resulting in areas of compression and rarefaction.
== Effects ==
When put under compression (or any other type of stress), every material will suffer some deformation, even if imperceptible, that causes the average relative positions of its atoms and molecules to change. The deformation may be permanent, or may be reversed when the compression forces disappear. In the latter case, the deformation gives rise to reaction forces that oppose the compression forces, and may eventually balance them.
Liquids and gases cannot bear steady uniaxial or biaxial compression; they will deform promptly and permanently and will not offer any permanent reaction force. However, they can bear isotropic compression, and may be compressed in other ways momentarily, for instance in a sound wave.
Every ordinary material will contract in volume when put under isotropic compression, contract in cross-section area when put under uniform biaxial compression, and contract in length when put into uniaxial compression. The deformation may not be uniform and may not be aligned with the compression forces. What happens in the directions where there is no compression depends on the material. Most materials will expand in those directions, but some special materials will remain unchanged or even contract. In general, the relation between the stress applied to a material and the resulting deformation is a central topic of continuum mechanics.
== Uses ==
Compression of solids has many implications in materials science, physics and structural engineering, for compression yields noticeable amounts of stress and tension.
By inducing compression, mechanical properties such as compressive strength or modulus of elasticity, can be measured.
Compression machines range from very small table top systems to ones with over 53 MN capacity.
Gases are often stored and shipped in highly compressed form, to save space. Slightly compressed air or other gases are also used to fill balloons, rubber boats, and other inflatable structures. Compressed liquids are used in hydraulic equipment and in fracking.
== In engines ==
=== Internal combustion engines ===
In internal combustion engines the explosive mixture gets compressed before it is ignited; the compression improves the efficiency of the engine. In the Otto cycle, for instance, the second stroke of the piston effects the compression of the charge which has been drawn into the cylinder by the first forward stroke.
=== Steam engines ===
The term is applied to the arrangement by which the exhaust valve of a steam engine is made to close, shutting a portion of the exhaust steam in the cylinder, before the stroke of the piston is quite complete. This steam being compressed as the stroke is completed, a cushion is formed against which the piston does work while its velocity is being rapidly reduced, and thus the stresses in the mechanism due to the inertia of the reciprocating parts are lessened. This compression, moreover, obviates the shock which would otherwise be caused by the admission of the fresh steam for the return stroke.
== See also ==
Buckling
Container compression test
Compression member
Compressive strength
Longitudinal wave
P-wave
Rarefaction
Strength of materials
Résal effect
Plane strain compression test
== References == | Wikipedia/Compression_(physics) |
In physics and engineering, a constitutive equation or constitutive relation is a relation between two or more physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance or field, and approximates its response to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or loads to strains or deformations.
Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function.
== Mechanical properties of matter ==
The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a "stress-strain relation" in this example, but also called a "constitutive assumption" or an "equation of state", was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms like "material", "isotropic", "aeolotropic", etc. The class of "constitutive relations" of the form stress rate = f (velocity gradient, stress, density) was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell.
In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions.
=== Definitions ===
=== Deformation of solids ===
==== Friction ====
Friction is a complicated phenomenon. Macroscopically, the friction force F at the interface of two materials can be modelled as proportional to the reaction force R at a point of contact between two interfaces through a dimensionless coefficient of friction μf, which depends on the pair of materials:
{\displaystyle F=\mu _{\text{f}}R.}
This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque to exert on a round object).
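A short sketch of this proportionality (the mass and the friction coefficients below are assumed illustrative values, not taken from the article):

```python
# Macroscopic Coulomb friction model: friction force proportional to the
# normal reaction force through a dimensionless coefficient mu_f.
def friction_force(mu_f, normal_force):
    """F = mu_f * R for the macroscopic friction model."""
    return mu_f * normal_force

# A 10 kg block resting on a horizontal surface, so R = m g:
R = 10.0 * 9.81                          # normal reaction (N)
F_static_max = friction_force(0.6, R)    # assumed static coefficient
F_kinetic = friction_force(0.4, R)       # assumed kinetic coefficient
print(F_static_max, F_kinetic)           # about 58.9 N and 39.2 N
```

The same function applies to static, kinetic, or rolling friction; only the coefficient changes with the pair of materials and the regime.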
==== Stress and strain ====
The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) k in a scalar equation, stating the tensile/compressive force is proportional to the extended (or contracted) displacement x:
{\displaystyle F_{i}=-kx_{i}}
meaning the material responds linearly. Equivalently, in terms of the stress σ, Young's modulus E, and strain ε (dimensionless):
{\displaystyle \sigma =E\,\varepsilon }
In general, forces which deform solids can be normal to a surface of the material (normal forces) or tangential (shear forces); this can be described mathematically using the stress tensor:
{\displaystyle \sigma _{ij}=C_{ijkl}\,\varepsilon _{kl}\,\rightleftharpoons \,\varepsilon _{ij}=S_{ijkl}\,\sigma _{kl}}
where C is the elasticity tensor and S is the compliance tensor.
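Both scalar forms of Hooke's law above can be sketched numerically; the spring constant and modulus values below are illustrative, not material data.

```python
def spring_force(k, x):
    """Hooke's law for a spring: F = -k x (restoring force)."""
    return -k * x

def stress(E, strain):
    """Scalar stress-strain relation: sigma = E * epsilon."""
    return E * strain

k = 200.0            # spring constant in N/m (hypothetical)
x = 0.05             # extension in m
print(spring_force(k, x))   # about -10 N, opposing the displacement

E = 200e9            # Young's modulus of a steel-like solid in Pa (illustrative)
eps = 1e-3           # dimensionless strain
print(stress(E, eps))       # 2e8 Pa
```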
==== Solid-state deformation ====
Several classes of deformation in elastic materials are the following:
Plastic
The applied force induces non-recoverable deformation in the material when the stress (or elastic strain) reaches a critical magnitude, called the yield point.
Elastic
The material recovers its initial shape after deformation.
Viscoelastic
The time-dependent resistive contributions are large and cannot be neglected. Rubbers and plastics have this property and certainly do not satisfy Hooke's law; in fact, elastic hysteresis occurs.
Anelastic
The material is close to elastic, but the applied force induces additional time-dependent resistive forces (i.e. forces that depend on the rate of change of extension/compression, in addition to the extension/compression itself). Metals and ceramics have this characteristic, but it is usually negligible, although not so much when heating due to friction occurs (such as vibrations or shear stresses in machines).
Hyperelastic
The applied force induces displacements in the material following a strain energy density function.
==== Collisions ====
The relative speed of separation vseparation of an object A after a collision with another object B is related to the relative speed of approach vapproach by the coefficient of restitution, defined by Newton's experimental impact law:
{\displaystyle e={\frac {|\mathbf {v} |_{\text{separation}}}{|\mathbf {v} |_{\text{approach}}}}}
which depends on the materials A and B are made from, since the collision involves interactions at the surfaces of A and B. Usually 0 ≤ e ≤ 1, in which e = 1 for completely elastic collisions and e = 0 for completely inelastic collisions. It is possible for e > 1 to occur, for superelastic (or explosive) collisions.
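Newton's experimental impact law lends itself to a one-line check; the speeds below are illustrative:

```python
def coefficient_of_restitution(v_separation, v_approach):
    """Newton's experimental impact law: e = |v_sep| / |v_app|."""
    return abs(v_separation) / abs(v_approach)

# Two colliding bodies: approach at 4 m/s, separate at 3 m/s (illustrative numbers).
e = coefficient_of_restitution(3.0, 4.0)
print(e)   # 0.75: a partially elastic collision (0 <= e <= 1)
```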
=== Deformation of fluids ===
The drag equation gives the drag force D on an object of cross-section area A moving through a fluid of density ρ at velocity v (relative to the fluid)
{\displaystyle D={\frac {1}{2}}c_{d}\rho Av^{2}}
where the drag coefficient (dimensionless) cd depends on the geometry of the object and the drag forces at the interface between the fluid and object.
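A minimal numerical sketch of the drag equation; all values are illustrative (a drag coefficient near 0.47 is often quoted for a smooth sphere):

```python
def drag_force(c_d, rho, area, v):
    """Drag equation: D = (1/2) * c_d * rho * A * v^2."""
    return 0.5 * c_d * rho * area * v**2

# Illustrative values: a small sphere moving through air at 10 m/s.
D = drag_force(c_d=0.47, rho=1.225, area=0.01, v=10.0)
print(D)   # drag in newtons
```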
For a Newtonian fluid of viscosity μ, the shear stress τ is linearly related to the strain rate (transverse flow velocity gradient) ∂u/∂y (units s−1). In a uniform shear flow:
{\displaystyle \tau =\mu {\frac {\partial u}{\partial y}},}
with u(y) the variation of the flow velocity u in the cross-flow (transverse) direction y. In general, for a Newtonian fluid, the relationship between the elements τij of the shear stress tensor and the deformation of the fluid is given by
{\displaystyle \tau _{ij}=2\mu \left(e_{ij}-{\frac {1}{3}}\Delta \delta _{ij}\right)}
with
{\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial v_{i}}{\partial x_{j}}}+{\frac {\partial v_{j}}{\partial x_{i}}}\right)}
and
{\displaystyle \Delta =\sum _{k}e_{kk}={\text{div}}\;\mathbf {v} ,}
where vi are the components of the flow velocity vector in the corresponding xi coordinate directions, eij are the components of the strain rate tensor, Δ is the volumetric strain rate (or dilatation rate) and δij is the Kronecker delta.
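The Newtonian constitutive relation above can be sketched for a given velocity-gradient matrix; the gradient and viscosity values are illustrative:

```python
def strain_rate_tensor(grad_v):
    """e_ij = (1/2)(dv_i/dx_j + dv_j/dx_i) from a 3x3 velocity-gradient matrix."""
    return [[0.5 * (grad_v[i][j] + grad_v[j][i]) for j in range(3)] for i in range(3)]

def newtonian_shear_stress(grad_v, mu):
    """tau_ij = 2 mu (e_ij - (1/3) Delta delta_ij), with Delta = div v = sum_k e_kk."""
    e = strain_rate_tensor(grad_v)
    delta = sum(e[k][k] for k in range(3))   # volumetric strain rate
    return [[2.0 * mu * (e[i][j] - (delta / 3.0) * (1.0 if i == j else 0.0))
             for j in range(3)] for i in range(3)]

# Uniform shear flow u(y): only dv_x/dy is non-zero (illustrative gradient, in 1/s).
grad_v = [[0.0, 2.0, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
tau = newtonian_shear_stress(grad_v, mu=1.0e-3)   # water-like viscosity, Pa*s
print(tau[0][1])   # tau_xy = mu * du/dy = 2e-3 Pa, matching the simple shear formula
```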
The ideal gas law is a constitutive relation in the sense that the pressure p and volume V are related to the temperature T, via the number of moles n of gas:
{\displaystyle pV=nRT}
where R is the gas constant (J⋅K−1⋅mol−1).
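A minimal sketch of the ideal gas law solved for pressure (the molar volume chosen is the familiar value near standard conditions):

```python
R = 8.314462618  # molar gas constant, J/(K*mol)

def pressure(n, T, V):
    """Ideal gas law solved for pressure: p = n*R*T/V."""
    return n * R * T / V

# One mole at 273.15 K in 22.4 litres (0.0224 m^3) is close to one atmosphere.
p = pressure(n=1.0, T=273.15, V=0.0224)
print(p)   # roughly 1.0e5 Pa
```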
== Electromagnetism ==
=== Constitutive equations in electromagnetism and related areas ===
In both classical and quantum physics, the precise dynamics of a system form a set of coupled differential equations, which are almost always too complicated to be solved exactly, even at the level of statistical mechanics. In the context of electromagnetism, this remark applies to not only the dynamics of free charges and currents (which enter Maxwell's equations directly), but also the dynamics of bound charges and currents (which enter Maxwell's equations through the constitutive relations). As a result, various approximation schemes are typically used.
For example, in real materials, complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier–Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, plasma modeling. An entire physical apparatus for dealing with these matters has developed. See for example, linear response theory, Green–Kubo relations and Green's function (many-body theory).
These complex theories provide detailed formulas for the constitutive relations describing the electrical response of various materials, such as permittivities, permeabilities, conductivities and so forth.
It is necessary to specify the relations between displacement field D and E, and the magnetic H-field H and B, before doing calculations in electromagnetism (i.e. applying Maxwell's macroscopic equations). These equations specify the response of bound charge and current to the applied fields and are called constitutive relations.
Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves:
{\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t)\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}}
where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. Before discussing how to calculate M and P, it is useful to examine the following special cases.
==== Without magnetic or dielectric materials ====
In the absence of magnetic or dielectric materials, the constitutive relations are simple:
{\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} =\mathbf {B} /\mu _{0}}
where ε0 and μ0 are two universal constants, called the permittivity of free space and permeability of free space, respectively.
==== Isotropic linear materials ====
In an (isotropic) linear material, where P is proportional to E, and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are:
{\displaystyle \mathbf {P} =\varepsilon _{0}\chi _{e}\mathbf {E} ,\quad \mathbf {M} =\chi _{m}\mathbf {H} ,}
where χe and χm are the electric and magnetic susceptibilities of a given material respectively. In terms of D and H the constitutive relations are:
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} =\mathbf {B} /\mu ,}
where ε and μ are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by:
{\displaystyle \varepsilon /\varepsilon _{0}=\varepsilon _{r}=\chi _{e}+1,\quad \mu /\mu _{0}=\mu _{r}=\chi _{m}+1}
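The susceptibility relations can be sketched directly; the water-like value of χe below is illustrative:

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU0 = 1.25663706212e-6    # vacuum permeability, H/m

def permittivity(chi_e):
    """epsilon = epsilon_0 * (1 + chi_e), from epsilon_r = chi_e + 1."""
    return EPS0 * (1.0 + chi_e)

def permeability(chi_m):
    """mu = mu_0 * (1 + chi_m), from mu_r = chi_m + 1."""
    return MU0 * (1.0 + chi_m)

# Illustrative susceptibility: a static chi_e of roughly 79 is often quoted for water.
print(permittivity(79.0) / EPS0)   # relative permittivity, recovering ~80
print(permeability(0.0) / MU0)     # non-magnetic material: mu_r = 1.0
```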
==== General case ====
For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B. These relations may be empirical (based directly upon measurements), or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level necessary to the problem under scrutiny.
In general, the constitutive relations can usually still be written:
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} =\mu ^{-1}\mathbf {B} }
but ε and μ are not, in general, simple constants; rather, they are functions of E, B, position and time, and are tensorial in nature.
As a variation of these examples, materials in general are bianisotropic, where D and B depend on both E and H through the additional coupling constants ξ and ζ:
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} +\xi \mathbf {H} \,,\quad \mathbf {B} =\mu \mathbf {H} +\zeta \mathbf {E} .}
In practice, some materials properties have a negligible impact in particular circumstances, permitting neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths for which a material is transparent; and metals with finite conductivity often are approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration).
Some man-made materials such as metamaterials and photonic crystals are designed to have customized permittivity and permeability.
==== Calculation of constitutive relations ====
The theoretical calculation of a material's constitutive equations is a common, important, and sometimes difficult task in theoretical condensed-matter physics and materials science. In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields.
The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material; an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation.
These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function.
A different set of homogenization methods (evolving from a tradition in treating materials such as conglomerates and laminates) are based upon approximation of an inhomogeneous material by a homogeneous effective medium (valid for excitations with wavelengths much larger than the scale of the inhomogeneity).
The theoretical modeling of the continuum-approximation properties of many real materials often relies upon experimental measurement as well. For example, ε of an insulator at low frequencies can be measured by making it into a parallel-plate capacitor, and ε at optical frequencies is often measured by ellipsometry.
=== Thermoelectric and electromagnetic properties of matter ===
These constitutive equations are often used in crystallography, a field of solid-state physics.
== Photonics ==
=== Refractive index ===
The (absolute) refractive index of a medium n (dimensionless) is an inherently important property of geometric and physical optics, defined as the ratio of the speed of light in vacuum c0 to that in the medium c:
{\displaystyle n={\frac {c_{0}}{c}}={\sqrt {\frac {\varepsilon \mu }{\varepsilon _{0}\mu _{0}}}}={\sqrt {\varepsilon _{r}\mu _{r}}}}
where ε is the permittivity and εr the relative permittivity of the medium; likewise μ is the permeability and μr the relative permeability of the medium. The vacuum permittivity is ε0 and the vacuum permeability is μ0. In general, n (and εr) are complex numbers.
The relative refractive index is defined as the ratio of two refractive indices. The absolute index refers to a single material, while the relative index applies to every possible pair of interfaces:
{\displaystyle n_{AB}={\frac {n_{A}}{n_{B}}}}
=== Speed of light in matter ===
As a consequence of the definition, the speed of light in matter is
{\displaystyle c={\frac {1}{\sqrt {\varepsilon \mu }}}}
For the special case of a vacuum, ε = ε0 and μ = μ0, so
{\displaystyle c_{0}={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}}
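Putting the two definitions together, a short sketch (the relative permittivity chosen gives the glass-like index n = 1.5; values are illustrative):

```python
import math

C0 = 299792458.0   # speed of light in vacuum, m/s

def refractive_index(eps_r, mu_r):
    """n = sqrt(eps_r * mu_r)."""
    return math.sqrt(eps_r * mu_r)

# Illustrative values: eps_r = 2.25 and mu_r = 1 give n = 1.5, typical of glass.
n = refractive_index(2.25, 1.0)
print(n)        # 1.5
print(C0 / n)   # speed of light in the medium, m/s
```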
=== Piezooptic effect ===
The piezooptic effect relates the stresses in solids σ to the dielectric impermeability a, which are coupled by a fourth-rank tensor called the piezooptic coefficient Π (units Pa−1):
{\displaystyle a_{ij}=\Pi _{ijpq}\sigma _{pq}}
== Transport phenomena ==
=== Definitions ===
=== Definitive laws ===
There are several laws which describe the transport of matter, or properties of it, in an almost identical way. In every case, in words they read:
Flux (density) is proportional to a gradient; the constant of proportionality is a characteristic of the material.
In general the constant must be replaced by a second-rank tensor, to account for directional dependences of the material.
== See also ==
Defining equation (physical chemistry)
Governing equation
Principle of material objectivity
Rheology
== Notes ==
== References ==
(Source: Wikipedia, "Constitutive equations")
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.
Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless.
Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.
The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822.
== Formulation ==
The Buckingham π theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below.
The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary".
There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols:
time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J).
The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity Q is given by
{\displaystyle \operatorname {dim} Q={\mathsf {T}}^{a}{\mathsf {L}}^{b}{\mathsf {M}}^{c}{\mathsf {I}}^{d}{\mathsf {\Theta }}^{e}{\mathsf {N}}^{f}{\mathsf {J}}^{g}}
where a, b, c, d, e, f, g are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI.
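One way to model this, as a sketch: represent a dimension as a 7-tuple of exponents, so that multiplying quantities adds exponents and dividing subtracts them (the helper names here are illustrative, not a standard API):

```python
from fractions import Fraction

# Exponent order: (T, L, M, I, Theta, N, J)
BASE = {"T": 0, "L": 1, "M": 2, "I": 3, "Theta": 4, "N": 5, "J": 6}

def dim(**exponents):
    """Build a dimension as a 7-tuple of exponents, e.g. dim(T=-1, L=1) for speed."""
    v = [Fraction(0)] * 7
    for symbol, power in exponents.items():
        v[BASE[symbol]] = Fraction(power)
    return tuple(v)

def mul(d1, d2):
    """Multiplying quantities adds dimensional exponents."""
    return tuple(a + b for a, b in zip(d1, d2))

def div(d1, d2):
    """Dividing quantities subtracts dimensional exponents."""
    return tuple(a - b for a, b in zip(d1, d2))

TIME, LENGTH, MASS, CURRENT = dim(T=1), dim(L=1), dim(M=1), dim(I=1)
SPEED = div(LENGTH, TIME)        # T^-1 L
CHARGE = mul(TIME, CURRENT)      # T I, matching Q = TI from the text
print(SPEED == dim(T=-1, L=1))   # True
print(CHARGE == dim(T=1, I=1))   # True
```

This exponent-tuple representation is exactly the dimensional-matrix bookkeeping used later by the Buckingham π theorem.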
A quantity that has only b ≠ 0 (with all other exponents zero) is known as a geometric quantity. A quantity that has only a ≠ 0 and b ≠ 0 (with all other exponents zero) is known as a kinematic quantity. A quantity that has only a ≠ 0, b ≠ 0, and c ≠ 0 (with all other exponents zero) is known as a dynamic quantity.
A quantity that has all exponents null is said to have dimension one.
The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity.
There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis.
=== Simple cases ===
As examples, the dimension of the physical quantity speed v is
{\displaystyle \operatorname {dim} v={\frac {\text{length}}{\text{time}}}={\frac {\mathsf {L}}{\mathsf {T}}}={\mathsf {T}}^{-1}{\mathsf {L}}.}
The dimension of the physical quantity acceleration a is
{\displaystyle \operatorname {dim} a={\frac {\text{speed}}{\text{time}}}={\frac {{\mathsf {T}}^{-1}{\mathsf {L}}}{\mathsf {T}}}={\mathsf {T}}^{-2}{\mathsf {L}}.}
The dimension of the physical quantity force F is
{\displaystyle \operatorname {dim} F={\text{mass}}\times {\text{acceleration}}={\mathsf {M}}\times {\mathsf {T}}^{-2}{\mathsf {L}}={\mathsf {T}}^{-2}{\mathsf {L}}{\mathsf {M}}.}
The dimension of the physical quantity pressure P is
{\displaystyle \operatorname {dim} P={\frac {\text{force}}{\text{area}}}={\frac {{\mathsf {T}}^{-2}{\mathsf {L}}{\mathsf {M}}}{{\mathsf {L}}^{2}}}={\mathsf {T}}^{-2}{\mathsf {L}}^{-1}{\mathsf {M}}.}
The dimension of the physical quantity energy E is
{\displaystyle \operatorname {dim} E={\text{force}}\times {\text{displacement}}={\mathsf {T}}^{-2}{\mathsf {L}}{\mathsf {M}}\times {\mathsf {L}}={\mathsf {T}}^{-2}{\mathsf {L}}^{2}{\mathsf {M}}.}
The dimension of the physical quantity power P is
{\displaystyle \operatorname {dim} P={\frac {\text{energy}}{\text{time}}}={\frac {{\mathsf {T}}^{-2}{\mathsf {L}}^{2}{\mathsf {M}}}{\mathsf {T}}}={\mathsf {T}}^{-3}{\mathsf {L}}^{2}{\mathsf {M}}.}
The dimension of the physical quantity electric charge Q is
{\displaystyle \operatorname {dim} Q={\text{current}}\times {\text{time}}={\mathsf {T}}{\mathsf {I}}.}
The dimension of the physical quantity voltage V is
{\displaystyle \operatorname {dim} V={\frac {\text{power}}{\text{current}}}={\frac {{\mathsf {T}}^{-3}{\mathsf {L}}^{2}{\mathsf {M}}}{\mathsf {I}}}={\mathsf {T^{-3}}}{\mathsf {L}}^{2}{\mathsf {M}}{\mathsf {I}}^{-1}.}
The dimension of the physical quantity capacitance C is
{\displaystyle \operatorname {dim} C={\frac {\text{electric charge}}{\text{electric potential difference}}}={\frac {{\mathsf {T}}{\mathsf {I}}}{{\mathsf {T}}^{-3}{\mathsf {L}}^{2}{\mathsf {M}}{\mathsf {I}}^{-1}}}={\mathsf {T^{4}}}{\mathsf {L^{-2}}}{\mathsf {M^{-1}}}{\mathsf {I^{2}}}.}
=== Rayleigh's method ===
In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh.
The method involves the following steps:
Gather all the independent variables that are likely to influence the dependent variable.
If R is a variable that depends upon independent variables R1, R2, R3, ..., Rn, then the functional equation can be written as R = F(R1, R2, R3, ..., Rn).
Write the above equation in the form R = C R1a R2b R3c ... Rnm, where C is a dimensionless constant and a, b, c, ..., m are arbitrary exponents.
Express each of the quantities in the equation in some base units in which the solution is required.
By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents a, b, c, ..., m.
Solve these equations to obtain the values of the exponents a, b, c, ..., m.
Substitute the values of exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents.
As a drawback, Rayleigh's method does not provide any information regarding number of dimensionless groups to be obtained as a result of dimensional analysis.
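The steps above can be sketched for a classic illustrative case not taken from the text: the period t of a simple pendulum, assumed to depend on length L, gravitational acceleration g and mass m as t = C L^a g^b m^c.

```python
from fractions import Fraction

# Rayleigh's method for the period t of a simple pendulum (illustrative example):
#   t = C * L^a * g^b * m^c
#
# Dimensions (rows T, L, M) of the candidate variables:
#   L -> (0, 1, 0),   g -> (-2, 1, 0),   m -> (0, 0, 1)
# Dimensional homogeneity with dim t = (1, 0, 0) gives the simultaneous equations:
#   T:  -2b     = 1
#   L:   a + b  = 0
#   M:   c      = 0
b = Fraction(1, -2)
a = -b
c = Fraction(0)
print(a, b, c)   # 1/2 -1/2 0, i.e. t = C * sqrt(L / g)
```

The mass drops out (c = 0), and the dimensionless group is t·sqrt(g/L); as the drawback above notes, the method itself does not say how many such groups to expect.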
== Concrete numbers and base units ==
Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof.
A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as 1 N = 1 kg⋅m⋅s−2.
=== Percentages, derivatives and integrals ===
Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100.
Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus:
position (x) has the dimension L (length);
derivative of position with respect to time (dx/dt, velocity) has dimension T−1L: length from the position, inverse time from the differentiation;
the second derivative (d2x/dt2 = d(dx/dt) / dt, acceleration) has dimension T−2L.
Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator.
force has the dimension T−2LM (mass multiplied by acceleration);
the integral of force with respect to the distance (s) the object has travelled (
{\displaystyle \textstyle \int F\ ds}
, work) has dimension T−2L2M.
In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year).
In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
== Dimensional homogeneity (commensurability) ==
The most basic rule of dimensional analysis is that of dimensional homogeneity.
However, the dimensions form an abelian group under multiplication, so incommensurable quantities may still be multiplied and divided:
For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.
The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if mman, mrat and Lman denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression mman + mrat is meaningful, but the heterogeneous expression mman + Lman is meaningless. However, mman/L2man is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T−2L2M, they are fundamentally different physical quantities.
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yard = 0.9144 m to convert 35 yards to 32.004 m.
A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle implies that a conversion factor between two units measuring the same dimension must be multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.
== Conversion factor ==
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa because 5 × 100 / 1 = 500, and bar/bar cancels out, so 5 bar = 500 kPa.
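The bar-to-kPa example can be written as a one-line sketch:

```python
KPA_PER_BAR = 100.0   # 100 kPa = 1 bar, so the factor 100 kPa / 1 bar equals 1

def bar_to_kpa(p_bar):
    """Multiply by the conversion factor; the 'bar' unit cancels."""
    return p_bar * KPA_PER_BAR

print(bar_to_kpa(5.0))   # 500.0, reproducing the 5 bar = 500 kPa example
```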
== Applications ==
Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well.
=== Mathematics ===
A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the n-sphere: being an n-dimensional figure, the volume scales as xn, while the surface area, being (n − 1)-dimensional, scales as xn−1. Thus the volume of the n-ball in terms of the radius is Cnrn, for some constant Cn. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.
=== Finance, economics, and accounting ===
In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.
For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid".
In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year).
Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year.
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (a dimensionless quantity), while time is expressed as a dimensionless quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are dimensionless. (Note that effective interest rates can only be defined as dimensionless quantities.)
In financial analysis, bond duration can be defined as (dV/dr)/V, where V is the value of a bond (or portfolio), r is the continuously compounded interest rate and dV/dr is a derivative. From the previous point, the dimension of r is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because dr is in the "denominator" of the derivative.
=== Fluid mechanics ===
In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include:
Reynolds number (Re), generally important in all types of fluid problems:
{\displaystyle \mathrm {Re} ={\frac {\rho \,ud}{\mu }}.}
Froude number (Fr), modeling flow with a free surface:
{\displaystyle \mathrm {Fr} ={\frac {u}{\sqrt {g\,L}}}.}
Euler number (Eu), used in problems in which pressure is of interest:
{\displaystyle \mathrm {Eu} ={\frac {\Delta p}{\rho u^{2}}}.}
Mach number (Ma), important in high speed flows where the velocity approaches or exceeds the local speed of sound:
{\displaystyle \mathrm {Ma} ={\frac {u}{c}},}
where c is the local speed of sound.
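With illustrative (assumed) values for water flowing in a small pipe, the four groups above can be evaluated directly; the numbers are for demonstration only:

```python
import math

# Assumed values: water in a pipe of diameter d = characteristic length L.
rho, u, d, mu = 1000.0, 2.0, 0.05, 1.0e-3   # kg/m^3, m/s, m, Pa*s
g, L = 9.81, 0.05                            # m/s^2, m
dp, c = 2000.0, 1480.0                       # Pa, m/s (sound speed in water)

Re = rho * u * d / mu          # inertial vs viscous forces
Fr = u / math.sqrt(g * L)      # inertial vs gravitational forces
Eu = dp / (rho * u**2)         # pressure vs inertial forces
Ma = u / c                     # flow speed vs local speed of sound
```

Each result is a pure number, so the same values describe any geometrically similar model with matching groups.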
== History ==
The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science.
This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually later formalized in the Buckingham π theorem.
Siméon Poisson also treated the same problem of the parallelogram law considered by Daviet, in his treatises of 1811 and 1833 (vol. I, p. 39). In the second edition of 1833, Poisson explicitly introduced the term dimension in place of Daviet's homogeneity.
In 1822, the Napoleonic scientist Joseph Fourier made the first credited important contributions, based on the idea that physical laws like F = ma should be independent of the units employed to measure the physical variables.
James Clerk Maxwell and Fleeming Jenkin played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant G is taken as unity, thereby defining M = T−2L3. By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T−1L3/2M1/2, which, after substituting his M = T−2L3 equation for mass, results in charge having the same dimensions as mass, viz. Q = T−2L3.
Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound.
The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents.
== Examples ==
=== A simple example: period of a harmonic oscillator ===
What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g.
The four quantities have the following dimensions: T [T]; m [M]; k [M/T2]; and g [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, G1 = T2k/m [T2 · M/T2 / M = 1], and putting G1 = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well.
The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with k, m, and T, because g is the only quantity that involves the dimension L. This implies that in this problem the g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way:
{\displaystyle T=\kappa {\sqrt {\tfrac {m}{k}}}}, for some dimensionless constant κ (equal to √C from the original dimensionless equation).
When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here.
When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
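The argument above can be checked mechanically: writing each quantity as a vector of (T, L, M) exponents, a product of powers is dimensionless exactly when the weighted sum of the exponent vectors vanishes. A minimal sketch (the dictionary of dimensions is taken from the text):

```python
# Dimensions as (T, L, M) exponent tuples, from the problem statement:
# T [T]; m [M]; k [M/T^2]; g [L/T^2].
DIMS = {"T": (1, 0, 0), "m": (0, 0, 1), "k": (-2, 0, 1), "g": (-2, 1, 0)}

def exponents(powers):
    """Total (T, L, M) exponents of prod(q**p for q, p in powers.items())."""
    total = [0, 0, 0]
    for q, p in powers.items():
        for i, e in enumerate(DIMS[q]):
            total[i] += p * e
    return tuple(total)

# G1 = T^2 k / m is dimensionless:
assert exponents({"T": 2, "k": 1, "m": -1}) == (0, 0, 0)
# Any group with a nonzero power of g keeps a nonzero L exponent,
# since g is the only variable carrying L:
assert exponents({"T": 2, "k": 1, "m": -1, "g": 1})[1] != 0
```

The second assertion is the formal version of the observation that g cannot appear in any dimensionless group here.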
=== A more complex example: energy of a vibrating wire ===
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (LM/T2), and we want to know the energy E (L2M/T2) in the wire. Let π1 and π2 be two dimensionless products of powers of the variables chosen, given by
{\displaystyle {\begin{aligned}\pi _{1}&={\frac {E}{As}}\\\pi _{2}&={\frac {\ell }{A}}.\end{aligned}}}
The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation
{\displaystyle F\left({\frac {E}{As}},{\frac {\ell }{A}}\right)=0,}
where F is some unknown function, or, equivalently as
{\displaystyle E=Asf\left({\frac {\ell }{A}}\right),}
where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analysis, we might proceed to experiments to discover the form of the unknown function f. But our experiments are simpler than they would be in the absence of dimensional analysis: none would be needed to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and to forming hypotheses becomes evident.
The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, the set of variables involved are not apparent, and the underlying equations hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis.
=== A third example: demand versus capacity for a rotating disc ===
Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L3), rotates at an angular velocity ω (T−1), and this leads to a stress S (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius, the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity, and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups:
demand/capacity = ρR2ω2/S
thickness/radius or aspect ratio = t/R
Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs.
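With assumed values for a steel disc (illustrative only, not taken from the figure), the two groups are evaluated as follows:

```python
# Assumed, illustrative properties of a rotating steel disc.
rho   = 7800.0    # density, kg/m^3
R     = 0.30      # radius, m
omega = 314.16    # angular velocity, rad/s (~3000 rpm)
S     = 2.0e8     # stress capacity, Pa
t     = 0.03      # axial thickness, m

demand_over_capacity = rho * R**2 * omega**2 / S   # first group
aspect_ratio = t / R                               # second group
```

A design chart then relates these two pure numbers; any disc with the same pair of values is, in this sense, the same problem.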
== Properties ==
=== Mathematical properties ===
The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; L0 = 1, and the inverse of L is 1/L or L−1. L raised to any integer power p is a member of the group, having an inverse of L−p or 1/Lp. The operation of the group is multiplication, having the usual rules for handling exponents (Ln × Lm = Ln+m). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second).
An abelian group is equivalent to a module over the integers, with the dimensional symbol TiLjMk corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module.
A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa).
The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0).
In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like VL1/2. However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.
One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces VM and VL, and can define VML := VM ⊗ VL as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar.
The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, {π1, ..., πm}. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity X can be expressed in the general form
{\displaystyle X=\prod _{i=1}^{m}(\pi _{i})^{k_{i}}\,.}
Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form
{\displaystyle f(\pi _{1},\pi _{2},...,\pi _{m})=0\,.}
Knowing this restriction can be a powerful tool for obtaining new insight into the system.
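Membership of a power vector in the null space of the dimensional matrix can be tested directly. This sketch hard-codes the (T, L, M) exponents of the vibrating-wire variables E, A, ℓ, s, and ρ from the earlier example:

```python
# Columns: E (T^-2 L^2 M), A (L), l (L), s (T^-2 L M), rho (L^-1 M).
# A power vector k gives a dimensionless group iff D @ k = 0.
D = [
    [-2, 0, 0, -2,  0],   # T exponents
    [ 2, 1, 1,  1, -1],   # L exponents
    [ 1, 0, 0,  1,  1],   # M exponents
]

def is_dimensionless(k):
    return all(sum(row[i] * k[i] for i in range(5)) == 0 for row in D)

# pi_1 = E / (A s)  ->  powers (1, -1, 0, -1, 0)
# pi_2 = l / A      ->  powers (0, -1, 1, 0, 0)
assert is_dimensionless([1, -1, 0, -1, 0])
assert is_dimensionless([0, -1, 1, 0, 0])
# E alone is not dimensionless:
assert not is_dimensionless([1, 0, 0, 0, 0])
```

The two vectors that pass the test are exactly the groups π1 and π2 found earlier; they span the null space of D.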
=== Mechanics ===
The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent.
For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M.
On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons:
There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space).
Velocity, being expressible in terms of length and time (V = L/T), is redundant (the set is not linearly independent).
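This basis criterion can be checked numerically: write each candidate base dimension as a (T, L, M) exponent vector and compute the rank of the resulting matrix. A small sketch, using exact rational elimination to avoid floating-point issues:

```python
def rank(rows):
    """Rank of a small rational matrix by Gaussian elimination."""
    from fractions import Fraction
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(3):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

F_L_M = [[-2, 1, 1], [0, 1, 0], [0, 0, 1]]   # F = T^-2 L M, then L, M
T_L_V = [[1, 0, 0], [0, 1, 0], [-1, 1, 0]]   # T, L, V = T^-1 L

assert rank(F_L_M) == 3   # full rank: a valid basis for mechanics
assert rank(T_L_V) == 2   # V is redundant and M is unreachable
```

Full rank 3 certifies both spanning and linear independence at once.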
=== Other fields of physics and chemistry ===
Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02×1023 mol−1) is also defined as a base dimension, N.
In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features.
=== Polynomials and transcendental functions ===
Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form.
Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the square of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.
Similarly, while one can evaluate monomials (xn) of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for x2, the expression (3 m)2 = 9 m2 makes sense (as an area), while for x2 + x, the expression (3 m)2 + 3 m = 9 m2 + 3 m does not make sense.
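A run-time flavour of this rule can be sketched with a tiny quantity type (hypothetical, tracking only the exponent of length) that rejects additions of unequal dimension:

```python
# Minimal sketch: `dim` is the exponent of metres attached to a value.
class Q:
    def __init__(self, value, dim):
        self.value, self.dim = value, dim
    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError("cannot add quantities of different dimensions")
        return Q(self.value + other.value, self.dim)
    def __pow__(self, n):
        return Q(self.value ** n, self.dim * n)

x = Q(3, 1)        # 3 m
area = x ** 2      # 9 m^2 -- a monomial is fine
try:
    bad = x ** 2 + x   # 9 m^2 + 3 m -- dimensionally meaningless
    ok = True
except TypeError:
    ok = False
```

The monomial x² evaluates cleanly, while the mixed-degree sum x² + x is rejected, matching the rule stated above.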
However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example,
{\displaystyle {\tfrac {1}{2}}\cdot (\mathrm {-9.8~m/s^{2}} )\cdot t^{2}+(\mathrm {500~m/s} )\cdot t.}
This is the height to which an object rises in time t if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be
{\displaystyle {\begin{aligned}&{\tfrac {1}{2}}\cdot (\mathrm {-9.8~m/s^{2}} )\cdot (\mathrm {0.01~min} )^{2}\\[10pt]={}&{\tfrac {1}{2}}\cdot -9.8\cdot \left(0.01^{2}\right)(\mathrm {min/s} )^{2}\cdot \mathrm {m} \\[10pt]={}&{\tfrac {1}{2}}\cdot -9.8\cdot \left(0.01^{2}\right)\cdot 60^{2}\cdot \mathrm {m} .\end{aligned}}}
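A quick numerical check confirms that the first term evaluates to the same height whether t is expressed in seconds or in minutes:

```python
g = -9.8          # m/s^2
t_seconds = 0.6   # 0.01 min expressed in seconds
t_minutes = 0.01  # the same instant expressed in minutes

term_in_seconds = 0.5 * g * t_seconds ** 2
term_in_minutes = 0.5 * g * (t_minutes ** 2) * 60 ** 2   # (min/s)^2 = 60^2

assert abs(term_in_seconds - term_in_minutes) < 1e-9
```

Both routes give the same displacement, because the factor 60² carried by (min/s)² does exactly the work of converting minutes to seconds.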
=== Combining units and numerical values ===
The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, n.
{\displaystyle Z=n\times [Z]=n[Z]}
When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed:
{\displaystyle \mathrm {1\,ft} =\mathrm {0.3048\,m} }
is identical to
{\displaystyle 1={\frac {\mathrm {0.3048\,m} }{\mathrm {1\,ft} }}.}
The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted.
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units.
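The procedure can be sketched in a couple of lines: multiply the foot-valued term by the conversion factor 0.3048 m/ft (numerically, the dimensionless 1) before adding:

```python
M_PER_FT = 0.3048   # the conversion factor: a ratio of like-dimensioned
                    # quantities, equal to the dimensionless unity

def add_lengths_in_metres(metres, feet):
    """Add a metre-valued and a foot-valued length; result is in metres."""
    return metres + feet * M_PER_FT   # both terms now carry the unit m

total = add_lengths_in_metres(1.0, 1.0)   # 1 m + 1 ft, expressed in metres
```

Multiplying by the conversion factor changes the unit, not the quantity, which is what makes the subsequent numerical addition meaningful.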
=== Quantity equations ===
A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.
In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit.
For example, a quantity equation for displacement d as speed s multiplied by time difference t would be:
d = s t
for s = 5 m/s, where t and d may be expressed in any units, converted if necessary.
In contrast, a corresponding numerical-value equation would be:
D = 5 T
where T is the numeric value of t when expressed in seconds and D is the numeric value of d when expressed in metres.
Generally, the use of numerical-value equations is discouraged.
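The contrast can be made concrete. In this sketch, `d_quantity` (a hypothetical helper) models the quantity equation by converting its time argument to a common unit first, while `D_numerical` is the unit-bound numerical-value form:

```python
S_M_PER_S = 5.0   # the speed s = 5 m/s from the example

def d_quantity(t_value, t_unit_in_seconds):
    """d = s*t: accepts t in any unit, given that unit's size in seconds."""
    return S_M_PER_S * (t_value * t_unit_in_seconds)   # metres

def D_numerical(T):
    """D = 5*T: only valid when T is in seconds and D in metres."""
    return 5.0 * T

# 2 minutes: the quantity equation accepts any unit...
assert d_quantity(2.0, 60.0) == 600.0
# ...while the numerical-value form is wrong unless fed seconds:
assert D_numerical(2.0) != 600.0
assert D_numerical(120.0) == 600.0
```

The numerical-value form silently encodes its units in the constant 5, which is exactly why its use is discouraged.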
== Dimensionless concepts ==
=== Constants ===
The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's Law problem and the κ in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
=== Formalisms ===
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ξ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/ξd, where d is the dimension of the lattice.
It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants: c, ħ, and G, in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
== Dimensional equivalences ==
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.
=== SI units ===
== Programming languages ==
Dimensional correctness as part of type checking has been studied since 1977.
Implementations for Ada and C++ were described in 1985 and 1988.
Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, Rust, and Python, and a code checker for Fortran.
Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.
McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.
Mathematica 13.2 has a function named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function, UnitDimensions, to find the dimensions of a unit such as 1 J, and a function, DimensionalCombinations, that finds dimensionally equivalent combinations of a subset of physical quantities. Mathematica can also factor out certain dimensions with UnityDimensions by specifying an argument to that function; for example, UnityDimensions can be used to factor out angles. In addition, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.
== Geometry: position vs. displacement ==
=== Affine quantities ===
Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change).
Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable:
adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward),
adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection),
subtracting two positions should yield a displacement,
but one may not add two positions.
This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement).
Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity.
Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity.
Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement.
Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis.
This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero,
−273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F,
where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated.
For temperature differences,
1 K = 1 °C ≠ 1 °F = 1 °R.
(Here °R refers to the Rankine scale, not the Réaumur scale).
Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K. But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature corresponding to the Celsius temperature −272.15 °C, or a temperature difference equal to 1 °C.
=== Orientation and frame of reference ===
Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference.
This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis.
=== Huntley's extensions ===
Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix.
He introduced two approaches:
The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent.
Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia.
==== Directed dimensions ====
As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component vy and a horizontal velocity component vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; vx and vy, both dimensioned as T−1L; and g, the downward acceleration of gravity, with dimension T−2L.
With these four quantities, we may conclude that the equation for the range R may be written:
{\displaystyle R\propto v_{\text{x}}^{a}\,v_{\text{y}}^{b}\,g^{c}.}
Or dimensionally
{\displaystyle {\mathsf {L}}=\left({\mathsf {T}}^{-1}{\mathsf {L}}\right)^{a+b}\left({\mathsf {T}}^{-2}{\mathsf {L}}\right)^{c}}
from which we may deduce that
a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions, T and L, and four parameters, with one equation.
However, if we use directed length dimensions, then vx will be dimensioned as T−1Lx, vy as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes:
{\displaystyle {\mathsf {L}}_{\mathrm {x} }=\left({{\mathsf {T}}^{-1}}{{\mathsf {L}}_{\mathrm {x} }}\right)^{a}\left({{\mathsf {T}}^{-1}}{{\mathsf {L}}_{\mathrm {y} }}\right)^{b}\left({{\mathsf {T}}^{-2}}{{\mathsf {L}}_{\mathrm {y} }}\right)^{c}}
and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent.
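The directed-dimension system is small enough to solve by hand, which this sketch does explicitly:

```python
# Exponent equations read off the directed dimensional equation:
#   L_x: a = 1;   L_y: b + c = 0;   T: a + b + 2c = 0
a = 1
# substitute b = -c into a + b + 2c = 0  ->  1 + c = 0
c = -1
b = -c

assert (a, b, c) == (1, 1, -1)
# i.e. R is proportional to v_x * v_y / g, the familiar projectile range form
```

Without directed lengths, only two of these three equations were available, which is why one exponent was left undetermined.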
Huntley's concept of directed length dimensions however has some serious limitations:
It does not deal well with vector equations involving the cross product,
nor does it handle well the use of angles as physical variables.
It is also often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?
Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems.
==== Quantity of matter ====
In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity only proportional to inertial mass, while not implicating inertial properties. No further restrictions are added to its definition.
For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables:
There are three fundamental variables, so the above five equations will yield two independent dimensionless variables:
{\displaystyle \pi _{1}={\frac {\dot {m}}{\eta r}}}
{\displaystyle \pi _{2}={\frac {p_{\mathrm {x} }\rho r^{5}}{{\dot {m}}^{2}}}}
If we distinguish between inertial mass with dimension {\displaystyle M_{\text{i}}} and quantity of matter with dimension {\displaystyle M_{\text{m}}}, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written:
{\displaystyle C={\frac {p_{\mathrm {x} }\rho r^{4}}{\eta {\dot {m}}}}}
where now only C is an undetermined constant (found to be equal to {\displaystyle \pi /8} by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law.
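As a sanity check on the bookkeeping, the combination above really is dimensionless under Huntley's split of mass. The dimension-vector encoding below is an illustrative sketch (not notation from the source), listing exponents of (M_i, M_m, L, T):

```python
import numpy as np

# Checking that C = p_x * rho * r^4 / (eta * mdot) is dimensionless under
# Huntley's split of mass. Each vector lists the exponents of
# (M_i, M_m, L, T); the mass assignments follow the text's choice.
dims = {
    'mdot': np.array([0, 1,  0, -1]),   # mass flow rate: quantity of matter / time
    'p_x':  np.array([1, 0, -2, -2]),   # pressure gradient: inertial mass
    'rho':  np.array([0, 1, -3,  0]),   # density: quantity of matter / volume
    'r':    np.array([0, 0,  1,  0]),   # pipe radius
    'eta':  np.array([1, 0, -1, -1]),   # viscosity: inertial mass
}
C_dim = dims['p_x'] + dims['rho'] + 4 * dims['r'] - dims['eta'] - dims['mdot']
print(C_dim)  # [0 0 0 0] – dimensionless
```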
Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable.
=== Siano's extension: orientational analysis ===
Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin (x, y) = (0, 0) at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = R g/v2, but offers no insight into the relationship between R and θ.
Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1x 1y 1z to denote vector directions, and an orientationless symbol 10. Thus, Huntley's Lx becomes L1x with L specifying the dimension of length, and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i−1 = 1i, the following multiplication table for the orientation symbols results:
The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the z-plane. Form a right triangle in the z-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since (using ~ to indicate orientational equivalence) tan(θ) = θ + ... ~ 1y/1x we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1z while cos(θ) has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars. An expression such as
{\displaystyle \sin(\theta +\pi /2)=\cos(\theta )}
is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written:
{\displaystyle \sin \left(a\,1_{\text{z}}+b\,1_{\text{z}}\right)=\sin \left(a\,1_{\text{z}}\right)\cos \left(b\,1_{\text{z}}\right)+\sin \left(b\,1_{\text{z}}\right)\cos \left(a\,1_{\text{z}}\right),}
which for {\displaystyle a=\theta } and {\displaystyle b=\pi /2} yields
{\displaystyle \sin(\theta \,1_{\text{z}}+[\pi /2]\,1_{\text{z}})=1_{\text{z}}\cos(\theta \,1_{\text{z}}).}
Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is {\displaystyle 1_{0}}.
The assignment of orientational symbols to physical quantities, and the requirement that physical equations be orientationally homogeneous, can be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd.
As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1z and the range of the projectile R will be of the form:
{\displaystyle R=g^{a}\,v^{b}\,\theta ^{c}{\text{ which means }}{\mathsf {L}}\,1_{\mathrm {x} }\sim \left({\frac {{\mathsf {L}}\,1_{\text{y}}}{{\mathsf {T}}^{2}}}\right)^{a}\left({\frac {\mathsf {L}}{\mathsf {T}}}\right)^{b}\,1_{\mathsf {z}}^{c}.}
Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that {\displaystyle 1_{x}/(1_{y}^{a}1_{z}^{c})=1_{z}^{c+1}=1}; in other words, c must be an odd integer. In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ.
It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical.
Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
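The orientational-symbol algebra described above is small enough to verify exhaustively. In the sketch below, the string encoding and the function name `mul` are illustrative choices, not Siano's notation:

```python
# Siano's orientational symbols 1_0, 1_x, 1_y, 1_z as the Klein four-group,
# written here as the strings '0', 'x', 'y', 'z'.  '0' is the identity,
# each symbol is its own inverse (1_i^{-1} = 1_i), and the product of two
# distinct non-identity symbols is the third.
def mul(a, b):
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return '0'
    return ({'x', 'y', 'z'} - {a, b}).pop()

syms = ['0', 'x', 'y', 'z']
# Group checks: associativity, and every element is its own inverse.
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in syms for b in syms for c in syms)
assert all(mul(a, a) == '0' for a in syms)
# An angle in the xy-plane has orientation 1_y/1_x = 1_y * 1_x = 1_z.
assert mul('y', 'x') == 'z'
```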
== See also ==
Buckingham π theorem
Dimensionless numbers in fluid mechanics
Fermi estimate – used to teach dimensional analysis
Numerical-value equation
Rayleigh's method of dimensional analysis
Similitude – an application of dimensional analysis
System of measurement
=== Related areas of mathematics ===
Covariance and contravariance of vectors
Exterior algebra
Geometric algebra
Quantity calculus
== Notes ==
== References ==
Barenblatt, G. I. (1996), Scaling, Self-Similarity, and Intermediate Asymptotics, Cambridge, UK: Cambridge University Press, Bibcode:1996sssi.book.....B, ISBN 978-0-521-43522-2
Bhaskar, R.; Nigam, Anil (1990), "Qualitative Physics Using Dimensional Analysis", Artificial Intelligence, 45 (1–2): 73–111, doi:10.1016/0004-3702(90)90038-2
Bhaskar, R.; Nigam, Anil (1991), "Qualitative Explanations of Red Giant Formation", The Astrophysical Journal, 372: 592–6, Bibcode:1991ApJ...372..592B, doi:10.1086/170003
Boucher; Alves (1960), "Dimensionless Numbers", Chemical Engineering Progress, 55: 55–64
Bridgman, P. W. (1922), Dimensional Analysis, Yale University Press, ISBN 978-0-548-91029-0 {{citation}}: ISBN / Date incompatibility (help)
Buckingham, Edgar (1914), "On Physically Similar Systems: Illustrations of the Use of Dimensional Analysis", Physical Review, 4 (4): 345–376, Bibcode:1914PhRv....4..345B, doi:10.1103/PhysRev.4.345, hdl:10338.dmlcz/101743
Drobot, S. (1953–1954), "On the foundations of dimensional analysis" (PDF), Studia Mathematica, 14: 84–99, doi:10.4064/sm-14-1-84-99, archived (PDF) from the original on 16 January 2004
Fourier, Joseph (1822), Theorie analytique de la chaleur (in French), Paris: Firmin Didot
Gibbings, J.C. (2011), Dimensional Analysis, Springer, ISBN 978-1-84996-316-9
Hart, George W. (1994), "The theory of dimensioned matrices", in Lewis, John G. (ed.), Proceedings of the Fifth SIAM Conference on Applied Linear Algebra, SIAM, pp. 186–190, ISBN 978-0-89871-336-7 As postscript
Hart, George W. (1995), Multidimensional Analysis: Algebras and Systems for Science and Engineering, Springer-Verlag, ISBN 978-0-387-94417-3
Huntley, H. E. (1967), Dimensional Analysis, Dover, OCLC 682090763, OL 6128830M, LOC 67-17978
Klinkenberg, A. (1955), "Dimensional systems and systems of units in physics with special reference to chemical engineering: Part I. The principles according to which dimensional systems and systems of units are constructed", Chemical Engineering Science, 4 (3): 130–140, 167–177, Bibcode:1955ChEnS...4..130K, doi:10.1016/0009-2509(55)80004-8
Langhaar, Henry L. (1951), Dimensional Analysis and Theory of Models, Wiley, ISBN 978-0-88275-682-0 {{citation}}: ISBN / Date incompatibility (help)
Mendez, P.F.; Ordóñez, F. (September 2005), "Scaling Laws From Statistical Data and Dimensional Analysis", Journal of Applied Mechanics, 72 (5): 648–657, Bibcode:2005JAM....72..648M, CiteSeerX 10.1.1.422.610, doi:10.1115/1.1943434
Moody, L. F. (1944), "Friction Factors for Pipe Flow", Transactions of the American Society of Mechanical Engineers, 66 (671): 671–678, doi:10.1115/1.4018140
Murphy, N. F. (1949), "Dimensional Analysis", Bulletin of the Virginia Polytechnic Institute, 42 (6)
Perry, J. H.; et al. (1944), "Standard System of Nomenclature for Chemical Engineering Unit Operations", Transactions of the American Institute of Chemical Engineers, 40 (251)
Pesic, Peter (2005), Sky in a Bottle, MIT Press, pp. 227–8, ISBN 978-0-262-16234-0
Petty, G. W. (2001), "Automated computation and consistency checking of physical dimensions and units in scientific programs", Software: Practice and Experience, 31 (11): 1067–76, doi:10.1002/spe.401, S2CID 206506776
Porter, Alfred W. (1933), The Method of Dimensions (3rd ed.), Methuen
J. W. Strutt (3rd Baron Rayleigh) (1915), "The Principle of Similitude", Nature, 95 (2368): 66–8, Bibcode:1915Natur..95...66R, doi:10.1038/095066c0
Siano, Donald (1985), "Orientational Analysis – A Supplement to Dimensional Analysis – I", Journal of the Franklin Institute, 320 (6): 267–283, doi:10.1016/0016-0032(85)90031-6
Siano, Donald (1985), "Orientational Analysis, Tensor Analysis and The Group Properties of the SI Supplementary Units – II", Journal of the Franklin Institute, 320 (6): 285–302, doi:10.1016/0016-0032(85)90032-8
Silberberg, I. H.; McKetta, J. J. Jr. (1953), "Learning How to Use Dimensional Analysis", Petroleum Refiner, 32 (4): 5, (5): 147, (6): 101, (7): 129
Tao, Terence (2012). "A mathematical formalisation of dimensional analysis".
Van Driest, E. R. (March 1946), "On Dimensional Analysis and the Presentation of Data in Fluid Flow Problems", Journal of Applied Mechanics, 68 (A–34)
Whitney, H. (1968), "The Mathematics of Physical Quantities, Parts I and II", American Mathematical Monthly, 75 (2): 115–138, 227–256, doi:10.2307/2315883, JSTOR 2315883
Wilson, Edwin B. (1920) "Theory of Dimensions", chapter XI of Aeronautics, via Internet Archive
Vignaux, GA (1992), "Dimensional Analysis in Data Modelling", in Erickson, Gary J.; Neudorfer, Paul O. (eds.), Maximum entropy and Bayesian methods: proceedings of the Eleventh International Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis, Seattle, 1991, Kluwer Academic, ISBN 978-0-7923-2031-9
Kasprzak, Wacław; Lysik, Bertold; Rybaczuk, Marek (1990), Dimensional Analysis in the Identification of Mathematical Models, World Scientific, ISBN 978-981-02-0304-7
== Further reading ==
Giancoli, Douglas C. (2014). "1. Introduction, Measurement, Estimating §1.8 Dimensions and Dimensional Analysis". Physics: Principles with Applications (7th ed.). Pearson. ISBN 978-0-321-62592-2. OCLC 853154197.
== External links ==
List of dimensions for variety of physical quantities
Unicalc Live web calculator doing units conversion by dimensional analysis
A C++ implementation of compile-time dimensional analysis in the Boost open-source libraries
Buckingham's pi-theorem
Quantity System calculator for units conversion based on dimensional approach Archived 24 December 2017 at the Wayback Machine
Units, quantities, and fundamental constants project dimensional analysis maps
Bowley, Roger (2009). "Dimensional Analysis". Sixty Symbols. Brady Haran for the University of Nottingham.
Dureisseix, David (2019). An introduction to dimensional analysis (lecture). INSA Lyon. | Wikipedia/Dimension_(physics) |
In mathematics, especially functional analysis, a Banach algebra, named after Stefan Banach, is an associative algebra {\displaystyle A} over the real or complex numbers (or over a non-Archimedean complete normed field) that at the same time is also a Banach space, that is, a normed space that is complete in the metric induced by the norm. The norm is required to satisfy
{\displaystyle \|x\,y\|\ \leq \|x\|\,\|y\|\quad {\text{ for all }}x,y\in A.}
This ensures that the multiplication operation is continuous with respect to the metric topology.
A Banach algebra is called unital if it has an identity element for the multiplication whose norm is {\displaystyle 1,} and commutative if its multiplication is commutative.
Any Banach algebra {\displaystyle A} (whether it is unital or not) can be embedded isometrically into a unital Banach algebra {\displaystyle A_{e}} so as to form a closed ideal of {\displaystyle A_{e}}. Often one assumes a priori that the algebra under consideration is unital, because one can develop much of the theory by considering {\displaystyle A_{e}} and then applying the outcome in the original algebra. However, this is not the case all the time. For example, one cannot define all the trigonometric functions in a Banach algebra without identity.
The theory of real Banach algebras can be very different from the theory of complex Banach algebras. For example, the spectrum of an element of a nontrivial complex Banach algebra can never be empty, whereas in a real Banach algebra it could be empty for some elements.
Banach algebras can also be defined over fields of {\displaystyle p}-adic numbers. This is part of {\displaystyle p}-adic analysis.
== Examples ==
The prototypical example of a Banach algebra is {\displaystyle C_{0}(X)}, the space of (complex-valued) continuous functions, defined on a locally compact Hausdorff space {\displaystyle X}, that vanish at infinity. {\displaystyle C_{0}(X)} is unital if and only if {\displaystyle X} is compact. The complex conjugation being an involution, {\displaystyle C_{0}(X)} is in fact a C*-algebra. More generally, every C*-algebra is a Banach algebra by definition.
The set of real (or complex) numbers is a Banach algebra with norm given by the absolute value.
The set of all real or complex {\displaystyle n}-by-{\displaystyle n} matrices becomes a unital Banach algebra if we equip it with a sub-multiplicative matrix norm.
Take the Banach space {\displaystyle \mathbb {R} ^{n}} (or {\displaystyle \mathbb {C} ^{n}}) with norm {\displaystyle \|x\|=\max _{i}|x_{i}|} and define multiplication componentwise:
{\displaystyle \left(x_{1},\ldots ,x_{n}\right)\left(y_{1},\ldots ,y_{n}\right)=\left(x_{1}y_{1},\ldots ,x_{n}y_{n}\right).}
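For this componentwise product, submultiplicativity of the max norm follows from |x_i y_i| ≤ (max|x|)(max|y|) for each i. A quick numerical illustration (the random sample is arbitrary):

```python
import numpy as np

# Check ||x * y|| <= ||x|| * ||y|| for componentwise multiplication
# with the max norm, on a random sample.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
norm = lambda v: np.max(np.abs(v))
assert norm(x * y) <= norm(x) * norm(y) + 1e-12
```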
The quaternions form a 4-dimensional real Banach algebra, with the norm being given by the absolute value of quaternions.
The algebra of all bounded real- or complex-valued functions defined on some set (with pointwise multiplication and the supremum norm) is a unital Banach algebra.
The algebra of all bounded continuous real- or complex-valued functions on some locally compact space (again with pointwise operations and supremum norm) is a Banach algebra.
The algebra of all continuous linear operators on a Banach space {\displaystyle E} (with functional composition as multiplication and the operator norm as norm) is a unital Banach algebra. The set of all compact operators on {\displaystyle E} is a Banach algebra and closed ideal. It is without identity if {\displaystyle \dim E=\infty .}
If {\displaystyle G} is a locally compact Hausdorff topological group and {\displaystyle \mu } is its Haar measure, then the Banach space {\displaystyle L^{1}(G)} of all {\displaystyle \mu }-integrable functions on {\displaystyle G} becomes a Banach algebra under the convolution
{\displaystyle xy(g)=\int x(h)y\left(h^{-1}g\right)d\mu (h)}
for {\displaystyle x,y\in L^{1}(G).}
Uniform algebra: A Banach algebra that is a subalgebra of the complex algebra {\displaystyle C(X)} with the supremum norm and that contains the constants and separates the points of {\displaystyle X} (which must be a compact Hausdorff space).
Natural Banach function algebra: A uniform algebra all of whose characters are evaluations at points of {\displaystyle X.}
C*-algebra: A Banach algebra that is a closed *-subalgebra of the algebra of bounded operators on some Hilbert space.
Measure algebra: A Banach algebra consisting of all Radon measures on some locally compact group, where the product of two measures is given by convolution of measures.
The algebra of the quaternions {\displaystyle \mathbb {H} } is a real Banach algebra, but it is not a complex algebra (and hence not a complex Banach algebra) for the simple reason that the center of the quaternions is the real numbers, which cannot contain a copy of the complex numbers.
An affinoid algebra is a certain kind of Banach algebra over a nonarchimedean field. Affinoid algebras are the basic building blocks in rigid analytic geometry.
== Properties ==
Several elementary functions that are defined via power series may be defined in any unital Banach algebra; examples include the exponential function and the trigonometric functions, and more generally any entire function. (In particular, the exponential map can be used to define abstract index groups.) The formula for the geometric series remains valid in general unital Banach algebras. The binomial theorem also holds for two commuting elements of a Banach algebra.
The set of invertible elements in any unital Banach algebra is an open set, and the inversion operation on this set is continuous (and hence is a homeomorphism), so that it forms a topological group under multiplication.
If a Banach algebra has unit {\displaystyle \mathbf {1} ,} then {\displaystyle \mathbf {1} } cannot be a commutator; that is, {\displaystyle xy-yx\neq \mathbf {1} } for any {\displaystyle x,y\in A.} This is because {\displaystyle xy} and {\displaystyle yx} have the same spectrum except possibly {\displaystyle 0.}
The various algebras of functions given in the examples above have very different properties from standard examples of algebras such as the reals. For example:
Every real Banach algebra that is a division algebra is isomorphic to the reals, the complexes, or the quaternions. Hence, the only complex Banach algebra that is a division algebra is the complexes. (This is known as the Gelfand–Mazur theorem.)
Every unital real Banach algebra with no zero divisors, and in which every principal ideal is closed, is isomorphic to the reals, the complexes, or the quaternions.
Every commutative real unital Noetherian Banach algebra with no zero divisors is isomorphic to the real or complex numbers.
Every commutative real unital Noetherian Banach algebra (possibly having zero divisors) is finite-dimensional.
Permanently singular elements in Banach algebras are topological divisors of zero; that is, considering extensions {\displaystyle B} of a Banach algebra {\displaystyle A,} some elements that are singular in the given algebra {\displaystyle A} have a multiplicative inverse element in a Banach algebra extension {\displaystyle B.} Topological divisors of zero in {\displaystyle A} are permanently singular in any Banach extension {\displaystyle B} of {\displaystyle A.}
== Spectral theory ==
Unital Banach algebras over the complex field provide a general setting to develop spectral theory. The spectrum of an element {\displaystyle x\in A,} denoted by {\displaystyle \sigma (x)}, consists of all those complex scalars {\displaystyle \lambda } such that {\displaystyle x-\lambda \mathbf {1} } is not invertible in {\displaystyle A.} The spectrum of any element {\displaystyle x} is a closed subset of the closed disc in {\displaystyle \mathbb {C} } with radius {\displaystyle \|x\|} and center {\displaystyle 0,} and thus is compact. Moreover, the spectrum {\displaystyle \sigma (x)} of an element {\displaystyle x} is non-empty and satisfies the spectral radius formula:
{\displaystyle \sup\{|\lambda |:\lambda \in \sigma (x)\}=\lim _{n\to \infty }\|x^{n}\|^{1/n}.}
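In the matrix algebra, the spectral radius formula can be illustrated numerically; the particular matrix below is an arbitrary example with eigenvalues −1 and −2, so its spectral radius is 2:

```python
import numpy as np

# Spectral radius formula: sup{|λ| : λ ∈ σ(x)} = lim ||x^n||^(1/n).
A = np.array([[0., 1.], [-2., -3.]])          # eigenvalues -1 and -2
rho = max(abs(np.linalg.eigvals(A)))          # spectral radius = 2
n = 200
approx = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1 / n)
print(rho, approx)  # approx converges to rho as n grows
```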
Given {\displaystyle x\in A,} the holomorphic functional calculus allows one to define {\displaystyle f(x)\in A} for any function {\displaystyle f} holomorphic in a neighborhood of {\displaystyle \sigma (x).} Furthermore, the spectral mapping theorem holds:
{\displaystyle \sigma (f(x))=f(\sigma (x)).}
When the Banach algebra {\displaystyle A} is the algebra {\displaystyle L(X)} of bounded linear operators on a complex Banach space {\displaystyle X} (for example, the algebra of square matrices), the notion of the spectrum in {\displaystyle A} coincides with the usual one in operator theory. For {\displaystyle f\in C(X)} (with a compact Hausdorff space {\displaystyle X}), one sees that:
{\displaystyle \sigma (f)=\{f(t):t\in X\}.}
The norm of a normal element {\displaystyle x} of a C*-algebra coincides with its spectral radius. This generalizes an analogous fact for normal operators.
Let {\displaystyle A} be a complex unital Banach algebra in which every non-zero element {\displaystyle x} is invertible (a division algebra). For every {\displaystyle a\in A,} there is {\displaystyle \lambda \in \mathbb {C} } such that {\displaystyle a-\lambda \mathbf {1} } is not invertible (because the spectrum of {\displaystyle a} is not empty), hence {\displaystyle a=\lambda \mathbf {1} :} this algebra {\displaystyle A} is naturally isomorphic to {\displaystyle \mathbb {C} } (the complex case of the Gelfand–Mazur theorem).
== Ideals and characters ==
Let {\displaystyle A} be a unital commutative Banach algebra over {\displaystyle \mathbb {C} .} Since {\displaystyle A} is then a commutative ring with unit, every non-invertible element of {\displaystyle A} belongs to some maximal ideal of {\displaystyle A.} Since a maximal ideal {\displaystyle {\mathfrak {m}}} in {\displaystyle A} is closed, {\displaystyle A/{\mathfrak {m}}} is a Banach algebra that is a field, and it follows from the Gelfand–Mazur theorem that there is a bijection between the set of all maximal ideals of {\displaystyle A} and the set {\displaystyle \Delta (A)} of all nonzero homomorphisms from {\displaystyle A} to {\displaystyle \mathbb {C} .} The set {\displaystyle \Delta (A)} is called the "structure space" or "character space" of {\displaystyle A,} and its members "characters".
A character {\displaystyle \chi } is a linear functional on {\displaystyle A} that is at the same time multiplicative, {\displaystyle \chi (ab)=\chi (a)\chi (b),} and satisfies {\displaystyle \chi (\mathbf {1} )=1.} Every character is automatically continuous from {\displaystyle A} to {\displaystyle \mathbb {C} ,} since the kernel of a character is a maximal ideal, which is closed. Moreover, the norm (that is, operator norm) of a character is one. Equipped with the topology of pointwise convergence on {\displaystyle A} (that is, the topology induced by the weak-* topology of {\displaystyle A^{*}}), the character space {\displaystyle \Delta (A)} is a Hausdorff compact space.
For any {\displaystyle x\in A,} {\displaystyle \sigma (x)=\sigma ({\hat {x}})} where {\displaystyle {\hat {x}}} is the Gelfand representation of {\displaystyle x} defined as follows: {\displaystyle {\hat {x}}} is the continuous function from {\displaystyle \Delta (A)} to {\displaystyle \mathbb {C} } given by {\displaystyle {\hat {x}}(\chi )=\chi (x).} The spectrum of {\displaystyle {\hat {x}},} in the formula above, is the spectrum as an element of the algebra {\displaystyle C(\Delta (A))} of complex continuous functions on the compact space {\displaystyle \Delta (A).} Explicitly,
{\displaystyle \sigma ({\hat {x}})=\{\chi (x):\chi \in \Delta (A)\}.}
As an algebra, a unital commutative Banach algebra is semisimple (that is, its Jacobson radical is zero) if and only if its Gelfand representation has trivial kernel. An important example of such an algebra is a commutative C*-algebra. In fact, when {\displaystyle A} is a commutative unital C*-algebra, the Gelfand representation is then an isometric *-isomorphism between {\displaystyle A} and {\displaystyle C(\Delta (A)).}
== Banach *-algebras ==
A Banach *-algebra {\displaystyle A} is a Banach algebra over the field of complex numbers, together with a map {\displaystyle {}^{*}:A\to A} that has the following properties:
{\displaystyle \left(x^{*}\right)^{*}=x} for all {\displaystyle x\in A} (so the map is an involution).
{\displaystyle (x+y)^{*}=x^{*}+y^{*}} for all {\displaystyle x,y\in A.}
{\displaystyle (\lambda x)^{*}={\bar {\lambda }}x^{*}} for every {\displaystyle \lambda \in \mathbb {C} } and every {\displaystyle x\in A;} here, {\displaystyle {\bar {\lambda }}} denotes the complex conjugate of {\displaystyle \lambda .}
{\displaystyle (xy)^{*}=y^{*}x^{*}} for all {\displaystyle x,y\in A.}
In other words, a Banach *-algebra is a Banach algebra over {\displaystyle \mathbb {C} } that is also a *-algebra.
In most natural examples, one also has that the involution is isometric, that is,
{\displaystyle \|x^{*}\|=\|x\|\quad {\text{ for all }}x\in A.}
Some authors include this isometric property in the definition of a Banach *-algebra.
A Banach *-algebra satisfying {\displaystyle \|x^{*}x\|=\|x^{*}\|\|x\|} is a C*-algebra.
== See also ==
Approximate identity – net in a normed algebra that acts as a substitute for an identity elementPages displaying wikidata descriptions as a fallback
Kaplansky's conjecture – Numerous conjectures by mathematician Irving KaplanskyPages displaying short descriptions of redirect targets
Operator algebra – Branch of functional analysis
Shilov boundary
== Notes ==
== References == | Wikipedia/Banach_*-algebra |
In mathematics, transform theory is the study of transforms, which relate a function in one domain to another function in a second domain. The essence of transform theory is that by a suitable choice of basis for a vector space a problem may be simplified—or diagonalized as in spectral theory.
Main examples of transforms that are both well known and widely applicable include integral transforms such as the Fourier transform, the fractional Fourier transform, the Laplace transform, and linear canonical transformations. These transformations are used in signal processing, optics, and quantum mechanics.
== Spectral theory ==
In spectral theory, the spectral theorem says that if A is an n×n self-adjoint matrix, there is an orthonormal basis of eigenvectors of A. This implies that A is diagonalizable.
Furthermore, each eigenvalue is real.
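A minimal numerical sketch of this statement, using an arbitrary 2×2 real symmetric matrix:

```python
import numpy as np

# Finite-dimensional spectral theorem: a real symmetric (self-adjoint)
# matrix has real eigenvalues and an orthonormal basis of eigenvectors,
# and hence is diagonalizable.
A = np.array([[2., 1.], [1., 2.]])
w, V = np.linalg.eigh(A)                     # eigenvalues, orthonormal eigenvectors
assert np.all(np.isreal(w))                  # eigenvalues are real
assert np.allclose(V.T @ V, np.eye(2))       # eigenvector basis is orthonormal
assert np.allclose(V @ np.diag(w) @ V.T, A)  # A = V D V^T: diagonalized
```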
== Transforms ==
Laplace transform
Fourier transform
Fractional Fourier Transform
Linear canonical transformation
Wavelet transform
Hankel transform
Joukowsky transform
Mellin transform
Z-transform
== References ==
Keener, James P. 2000. Principles of Applied Mathematics: Transformation and Approximation. Cambridge: Westview Press. ISBN 0-7382-0129-4
== Notes == | Wikipedia/Transform_theory |
In mathematics, the Wiener algebra, named after Norbert Wiener and usually denoted by A(T), is the space of absolutely convergent Fourier series. Here T denotes the circle group.
== Banach algebra structure ==
The norm of a function f ∈ A(T) is given by
{\displaystyle \|f\|=\sum _{n=-\infty }^{\infty }|{\hat {f}}(n)|,}
where
{\displaystyle {\hat {f}}(n)={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(t)e^{-int}\,dt}
is the nth Fourier coefficient of f. The Wiener algebra A(T) is closed under pointwise multiplication of functions. Indeed,
{\displaystyle {\begin{aligned}f(t)g(t)&=\sum _{m\in \mathbb {Z} }{\hat {f}}(m)e^{imt}\,\cdot \,\sum _{n\in \mathbb {Z} }{\hat {g}}(n)e^{int}\\&=\sum _{n,m\in \mathbb {Z} }{\hat {f}}(m){\hat {g}}(n)e^{i(m+n)t}\\&=\sum _{n\in \mathbb {Z} }\left\{\sum _{m\in \mathbb {Z} }{\hat {f}}(n-m){\hat {g}}(m)\right\}e^{int},\qquad f,g\in A(\mathbb {T} );\end{aligned}}}
therefore
{\displaystyle \|fg\|=\sum _{n\in \mathbb {Z} }\left|\sum _{m\in \mathbb {Z} }{\hat {f}}(n-m){\hat {g}}(m)\right|\leq \sum _{m}|{\hat {f}}(m)|\sum _{n}|{\hat {g}}(n)|=\|f\|\,\|g\|.\,}
Thus the Wiener algebra is a commutative unitary Banach algebra. Also, A(T) is isomorphic to the Banach algebra l1(Z), with the isomorphism given by the Fourier transform.
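The computation above says that pointwise multiplication of functions in A(T) corresponds to convolution of their Fourier coefficient sequences, and that the ℓ¹ norm is submultiplicative. A small numerical sketch (finitely supported coefficient sequences standing in for elements of A(T)):

```python
import numpy as np

# Multiplying functions in A(T) convolves their Fourier coefficient
# sequences; the triangle inequality then gives ||fg|| <= ||f|| ||g||.
rng = np.random.default_rng(0)
f_hat = rng.standard_normal(9) + 1j * rng.standard_normal(9)
g_hat = rng.standard_normal(7) + 1j * rng.standard_normal(7)

fg_hat = np.convolve(f_hat, g_hat)            # coefficients of the product fg
wiener_norm = lambda c: np.sum(np.abs(c))     # the l^1 (Wiener) norm

assert wiener_norm(fg_hat) <= wiener_norm(f_hat) * wiener_norm(g_hat) + 1e-12
```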
== Properties ==
The sum of an absolutely convergent Fourier series is continuous, so
{\displaystyle A(\mathbb {T} )\subset C(\mathbb {T} )}
where C(T) is the ring of continuous functions on the unit circle.
On the other hand, an integration by parts, together with the Cauchy–Schwarz inequality and Parseval's formula, shows that
{\displaystyle C^{1}(\mathbb {T} )\subset A(\mathbb {T} ).\,}
More generally,
{\displaystyle \mathrm {Lip} _{\alpha }(\mathbb {T} )\subset A(\mathbb {T} )\subset C(\mathbb {T} )}
for
{\displaystyle \alpha >1/2}
(see Katznelson (2004)).
== Wiener's 1/f theorem ==
Wiener (1932, 1933) proved that if f has absolutely convergent Fourier series and is never zero, then its reciprocal 1/f also has an absolutely convergent Fourier series. Many other proofs have appeared since then, including an elementary one by Newman (1975).
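Wiener's theorem can be illustrated numerically. The sketch below (our own choice of test function; FFT coefficients approximate the true Fourier coefficients) shows that for f(t) = 3 + cos t, which never vanishes, the coefficients of 1/f decay geometrically, so their absolute sum — the Wiener norm of 1/f — converges:

```python
import numpy as np

# Numerical illustration (not a proof) of Wiener's 1/f theorem:
# f(t) = 3 + cos t is never zero, and the Fourier coefficients of 1/f,
# approximated with an FFT on a uniform grid, decay geometrically.
N = 256
t = 2 * np.pi * np.arange(N) / N
coeffs = np.fft.fft(1.0 / (3.0 + np.cos(t))) / N  # ~ Fourier coefficients of 1/f

wiener_norm = np.sum(np.abs(coeffs))   # ~ ||1/f|| in A(T); about 0.5 here
tail = np.abs(coeffs[20])              # coefficient of e^{i 20 t}

assert wiener_norm < 0.6               # absolutely convergent
assert tail < 1e-10                    # geometric decay of coefficients
```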
Gelfand (1941, 1941b) used the theory of Banach algebras that he developed to show that the maximal ideals of A(T) are of the form
{\displaystyle M_{x}=\left\{f\in A(\mathbb {T} )\,\mid \,f(x)=0\right\},\quad x\in \mathbb {T} ~,}
which is equivalent to Wiener's theorem.
== See also ==
Wiener–Lévy theorem
== Notes ==
== References ==
Arveson, William (2001) [1994], "A Short Course on Spectral Theory", Encyclopedia of Mathematics, EMS Press
Gelfand, I. (1941a), "Normierte Ringe", Rec. Math. (Mat. Sbornik), Nouvelle Série, 9 (51): 3–24, MR 0004726
Gelfand, I. (1941b), "Über absolut konvergente trigonometrische Reihen und Integrale", Rec. Math. (Mat. Sbornik), Nouvelle Série, 9 (51): 51–66, MR 0004727
Katznelson, Yitzhak (2004), An introduction to harmonic analysis (Third ed.), New York: Cambridge Mathematical Library, ISBN 978-0-521-54359-0
Newman, D. J. (1975), "A simple proof of Wiener's 1/f theorem", Proceedings of the American Mathematical Society, 48: 264–265, doi:10.2307/2040730, ISSN 0002-9939, MR 0365002
Wiener, Norbert (1932), "Tauberian Theorems", Annals of Mathematics, 33 (1): 1–100, doi:10.2307/1968102
Wiener, Norbert (1933), The Fourier integral and certain of its applications, Cambridge Mathematical Library, Cambridge University Press, doi:10.1017/CBO9780511662492, ISBN 978-0-521-35884-2, MR 0983891 {{citation}}: ISBN / Date incompatibility (help) | Wikipedia/Wiener_algebra |
Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equations. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series which is a sum of sinusoids) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible.
Spectral methods and finite-element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are generally nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains (compact support). Consequently, spectral methods connect variables globally while finite elements do so locally. Partially for this reason, spectral methods have excellent error properties, with the so-called "exponential convergence" being the fastest possible, when the solution is smooth. However, there are no known three-dimensional single-domain spectral shock capturing results (shock waves are not smooth). In the finite-element community, a method where the degree of the elements is very high or increases as the grid parameter h decreases is sometimes called a spectral element method.
Spectral methods can be used to solve differential equations (PDEs, ODEs, eigenvalue problems, etc.) and optimization problems. When applying spectral methods to time-dependent PDEs, the solution is typically written as a sum of basis functions with time-dependent coefficients; substituting this in the PDE yields a system of ODEs in the coefficients which can be solved using any numerical method for ODEs. Eigenvalue problems for ODEs are similarly converted to matrix eigenvalue problems.
Spectral methods were developed in a long series of papers by Steven Orszag starting in 1969 including, but not limited to, Fourier series methods for periodic geometry problems, polynomial spectral methods for finite and unbounded geometry problems, pseudospectral methods for highly nonlinear problems, and spectral iteration methods for fast solution of steady-state problems. The implementation of the spectral method is normally accomplished either with collocation or a Galerkin or a Tau approach. For very small problems, the spectral method is unique in that solutions may be written out symbolically, yielding a practical alternative to series solutions for differential equations.
Spectral methods can be computationally less expensive and easier to implement than finite element methods; they shine best when high accuracy is sought in simple domains with smooth solutions. However, because of their global nature, the matrices associated with step computation are dense and computational efficiency will quickly suffer when there are many degrees of freedom (with some exceptions, for example if matrix applications can be written as Fourier transforms). For larger problems and nonsmooth solutions, finite elements will generally work better due to sparse matrices and better modelling of discontinuities and sharp bends.
== Examples of spectral methods ==
=== A concrete, linear example ===
Here we presume an understanding of basic multivariate calculus and Fourier series. If
{\displaystyle g(x,y)}
is a known, complex-valued function of two real variables, and g is periodic in x and y (that is,
{\displaystyle g(x,y)=g(x+2\pi ,y)=g(x,y+2\pi )}
) then we are interested in finding a function f(x,y) so that
{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)f(x,y)=g(x,y)\quad {\text{for all }}x,y}
where the expression on the left denotes the second partial derivatives of f in x and y, respectively. This is the Poisson equation, and can be physically interpreted as some sort of heat conduction problem, or a problem in potential theory, among other possibilities.
If we write f and g in Fourier series:
{\displaystyle {\begin{aligned}f&=:\sum a_{j,k}e^{i(jx+ky)},\\[5mu]g&=:\sum b_{j,k}e^{i(jx+ky)},\end{aligned}}}
and substitute into the differential equation, we obtain this equation:
{\displaystyle \sum -a_{j,k}(j^{2}+k^{2})e^{i(jx+ky)}=\sum b_{j,k}e^{i(jx+ky)}.}
We have exchanged partial differentiation with an infinite sum, which is legitimate if we assume for instance that f has a continuous second derivative. By the uniqueness theorem for Fourier expansions, we must then equate the Fourier coefficients term by term, giving
{\displaystyle a_{j,k}=-{\frac {b_{j,k}}{j^{2}+k^{2}}}\qquad (*)}
which is an explicit formula for the Fourier coefficients aj,k.
With periodic boundary conditions, the Poisson equation possesses a solution only if b0,0 = 0. Therefore, we can freely choose a0,0, which will be equal to the mean of the solution. This corresponds to choosing the integration constant.
To turn this into an algorithm, only finitely many frequencies are solved for. This introduces an error which can be shown to be proportional to
{\displaystyle h^{n}}
, where
{\displaystyle h:=1/n}
and
{\displaystyle n}
is the highest frequency treated.
==== Algorithm ====
Compute the Fourier transform (bj,k) of g.
Compute the Fourier transform (aj,k) of f via the formula (*).
Compute f by taking an inverse Fourier transform of (aj,k).
Since we're only interested in a finite window of frequencies (of size n, say) this can be done using a fast Fourier transform algorithm. Therefore, globally the algorithm runs in time O(n log n).
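The three-step algorithm above can be sketched with NumPy's FFT on an n×n grid of the periodic square [0, 2π)². Here the right-hand side g is manufactured from a known solution so the result can be checked; a_{0,0} (the free mean of the solution) is set to zero, matching the zero-mean test solution:

```python
import numpy as np

# FFT-based Poisson solver on a periodic 2D grid (sketch).
n = 64
x = 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
f_exact = np.sin(3 * X) * np.cos(2 * Y)       # zero-mean test solution
g = -(3**2 + 2**2) * f_exact                  # g = Laplacian of f_exact

b = np.fft.fft2(g)                            # 1. Fourier transform of g
j = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
J, K = np.meshgrid(j, j, indexing="ij")
denom = J**2 + K**2
denom[0, 0] = 1.0                             # avoid 0/0; a_{0,0} set below
a = -b / denom                                # 2. a_{j,k} = -b_{j,k}/(j^2+k^2)
a[0, 0] = 0.0                                 # integration constant (the mean)
f = np.real(np.fft.ifft2(a))                  # 3. inverse Fourier transform

assert np.allclose(f, f_exact, atol=1e-10)    # spectrally exact here
```

Because the test solution is a single Fourier mode, the solver recovers it to machine precision, illustrating the spectral accuracy discussed above.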
=== Nonlinear example ===
We wish to solve the forced, transient, nonlinear Burgers' equation using a spectral approach.
Given
{\displaystyle u(x,0)}
on the periodic domain
{\displaystyle x\in \left[0,2\pi \right)}
, find
{\displaystyle u\in {\mathcal {U}}}
such that
{\displaystyle \partial _{t}u+u\partial _{x}u=\rho \partial _{xx}u+f\quad \forall x\in \left[0,2\pi \right),\forall t>0}
where ρ is the viscosity coefficient. In weak conservative form this becomes
{\displaystyle \left\langle \partial _{t}u,v\right\rangle ={\Bigl \langle }\partial _{x}\left(-{\tfrac {1}{2}}u^{2}+\rho \partial _{x}u\right),v{\Bigr \rangle }+\left\langle f,v\right\rangle \quad \forall v\in {\mathcal {V}},\forall t>0}
where ⟨·, ·⟩ denotes the inner product. Integrating by parts and using periodicity gives
{\displaystyle \langle \partial _{t}u,v\rangle =\left\langle {\tfrac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}v\right\rangle +\left\langle f,v\right\rangle \quad \forall v\in {\mathcal {V}},\forall t>0.}
To apply the Fourier–Galerkin method, choose both
{\displaystyle {\mathcal {U}}^{N}:={\biggl \{}u:u(x,t)=\sum _{k=-N/2}^{N/2-1}{\hat {u}}_{k}(t)e^{ikx}{\biggr \}}}
and
{\displaystyle {\mathcal {V}}^{N}:=\operatorname {span} \left\{e^{ikx}:k\in -{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\}}
where
{\displaystyle {\hat {u}}_{k}(t):={\frac {1}{2\pi }}\langle u(x,t),e^{ikx}\rangle }. This reduces the problem to finding
{\displaystyle u\in {\mathcal {U}}^{N}}
such that
{\displaystyle \langle \partial _{t}u,e^{ikx}\rangle =\left\langle {\tfrac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}e^{ikx}\right\rangle +\left\langle f,e^{ikx}\right\rangle \quad \forall k\in \left\{-{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\},\forall t>0.}
Using the orthogonality relation
{\displaystyle \langle e^{ilx},e^{ikx}\rangle =2\pi \delta _{lk}}
where
{\displaystyle \delta _{lk}}
is the Kronecker delta, we simplify the above three terms for each
{\displaystyle k}
to see
{\displaystyle {\begin{aligned}\left\langle \partial _{t}u,e^{ikx}\right\rangle &={\biggl \langle }\partial _{t}\sum _{l}{\hat {u}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }={\biggl \langle }\sum _{l}\partial _{t}{\hat {u}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }=2\pi \partial _{t}{\hat {u}}_{k},\\\left\langle f,e^{ikx}\right\rangle &={\biggl \langle }\sum _{l}{\hat {f}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }=2\pi {\hat {f}}_{k},{\text{ and}}\\\left\langle {\tfrac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}e^{ikx}\right\rangle &={\biggl \langle }{\tfrac {1}{2}}{\Bigl (}\sum _{p}{\hat {u}}_{p}e^{ipx}{\Bigr )}{\Bigl (}\sum _{q}{\hat {u}}_{q}e^{iqx}{\Bigr )}-\rho \partial _{x}\sum _{l}{\hat {u}}_{l}e^{ilx},\partial _{x}e^{ikx}{\biggr \rangle }\\&={\biggl \langle }{\tfrac {1}{2}}\sum _{p}\sum _{q}{\hat {u}}_{p}{\hat {u}}_{q}e^{i\left(p+q\right)x},ike^{ikx}{\biggr \rangle }-{\biggl \langle }\rho i\sum _{l}l{\hat {u}}_{l}e^{ilx},ike^{ikx}{\biggr \rangle }\\&=-{\tfrac {1}{2}}ik{\biggl \langle }\sum _{p}\sum _{q}{\hat {u}}_{p}{\hat {u}}_{q}e^{i\left(p+q\right)x},e^{ikx}{\biggr \rangle }-\rho k{\biggl \langle }\sum _{l}l{\hat {u}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }\\&=-i\pi k\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-2\pi \rho {}k^{2}{\hat {u}}_{k}.\end{aligned}}}
Assemble the three terms for each
{\displaystyle k}
to obtain
{\displaystyle 2\pi \partial _{t}{\hat {u}}_{k}=-i\pi k\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-2\pi \rho {}k^{2}{\hat {u}}_{k}+2\pi {\hat {f}}_{k}\quad k\in \left\{-{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\},\forall t>0.}
Dividing through by
{\displaystyle 2\pi }
, we finally arrive at
{\displaystyle \partial _{t}{\hat {u}}_{k}=-{\frac {ik}{2}}\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-\rho {}k^{2}{\hat {u}}_{k}+{\hat {f}}_{k}\quad k\in \left\{-{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\},\forall t>0.}
With Fourier transformed initial conditions
{\displaystyle {\hat {u}}_{k}(0)}
and forcing
{\displaystyle {\hat {f}}_{k}(t)}
, this coupled system of ordinary differential equations may be integrated in time (using, e.g., a Runge–Kutta technique) to find a solution. The nonlinear term is a convolution, and there are several transform-based techniques for evaluating it efficiently. See the references by Boyd and Canuto et al. for more details.
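The final system of ODEs can be sketched in a few lines of NumPy. This is a minimal illustration, not a production solver: coefficients are stored in NumPy's FFT ordering, and the convolution term Σ û_p û_q is evaluated pseudospectrally (square in physical space, transform back), which is one of the transform-based techniques mentioned above and matches the truncated convolution up to aliasing error. The function name `burgers_rhs` is ours.

```python
import numpy as np

# Right-hand side of the Fourier-Galerkin system derived above:
#   d/dt u_hat_k = -(ik/2) sum_{p+q=k} u_hat_p u_hat_q - rho k^2 u_hat_k + f_hat_k
def burgers_rhs(u_hat, rho, f_hat):
    N = u_hat.size
    k = np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers, FFT ordering
    u = np.fft.ifft(u_hat)               # back to physical space
    u2_hat = np.fft.fft(u * u)           # coefficients of u^2 (the convolution)
    return -0.5j * k * u2_hat - rho * k**2 * u_hat + f_hat

# Sanity check with u(x) = sin x, rho = 0, f = 0: the right-hand side is
# -u u_x = -sin x cos x = -(1/2) sin 2x.
N = 32
x = 2 * np.pi * np.arange(N) / N
rhs = np.fft.ifft(burgers_rhs(np.fft.fft(np.sin(x)), rho=0.0, f_hat=np.zeros(N)))
assert np.allclose(rhs, -0.5 * np.sin(2 * x), atol=1e-12)
```

Feeding `burgers_rhs` to any ODE integrator (a Runge–Kutta step, for instance) advances the Fourier coefficients in time.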
== A relationship with the spectral element method ==
One can show that if
{\displaystyle g}
is infinitely differentiable, then the numerical algorithm using Fast Fourier Transforms will converge faster than any polynomial in the grid size h. That is, for any n>0, there is a
{\displaystyle C_{n}<\infty }
such that the error is less than
{\displaystyle C_{n}h^{n}}
for all sufficiently small values of
{\displaystyle h}
. We say that the spectral method is of order
{\displaystyle n}
, for every n>0.
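This faster-than-any-polynomial convergence is easy to observe numerically. The sketch below (our own test function, differentiated spectrally with the FFT) shows the error collapsing by orders of magnitude each time the grid is merely doubled:

```python
import numpy as np

# Spectral accuracy demo: FFT differentiation of the smooth periodic
# function u(x) = exp(sin x).  Doubling N drops the error far faster than
# any fixed-order method would.
def spectral_diff_error(N):
    x = 2 * np.pi * np.arange(N) / N
    u = np.exp(np.sin(x))
    k = np.fft.fftfreq(N, d=1.0 / N)
    du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return np.max(np.abs(du - np.cos(x) * u))   # exact: u' = cos(x) e^{sin x}

e8, e16, e32 = (spectral_diff_error(N) for N in (8, 16, 32))
assert e16 < 1e-2 * e8       # doubling N shrinks the error enormously
assert e32 < 1e-10           # N = 32 is already near machine precision
```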
Because a spectral element method is a finite element method of very high order, there is a similarity in the convergence properties. However, whereas the spectral method is based on the eigendecomposition of the particular boundary value problem, the finite element method does not use that information and works for arbitrary elliptic boundary value problems.
== See also ==
Finite element method
Gaussian grid
Pseudo-spectral method
Spectral element method
Galerkin method
Collocation method
== References ==
Bengt Fornberg (1996) A Practical Guide to Pseudospectral Methods. Cambridge University Press, Cambridge, UK
Chebyshev and Fourier Spectral Methods by John P. Boyd.
Canuto C., Hussaini M. Y., Quarteroni A., and Zang T.A. (2006) Spectral Methods. Fundamentals in Single Domains. Springer-Verlag, Berlin Heidelberg
Javier de Frutos, Julia Novo (2000): A Spectral Element Method for the Navier–Stokes Equations with Improved Accuracy
Polynomial Approximation of Differential Equations, by Daniele Funaro, Lecture Notes in Physics, Volume 8, Springer-Verlag, Heidelberg 1992
D. Gottlieb and S. Orszag (1977) "Numerical Analysis of Spectral Methods : Theory and Applications", SIAM, Philadelphia, PA
J. Hesthaven, S. Gottlieb and D. Gottlieb (2007) "Spectral methods for time-dependent problems", Cambridge UP, Cambridge, UK
Steven A. Orszag (1969) Numerical Methods for the Simulation of Turbulence, Phys. Fluids Supp. II, 12, 250–257
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.7. Spectral Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Jie Shen, Tao Tang and Li-Lian Wang (2011) "Spectral Methods: Algorithms, Analysis and Applications" (Springer Series in Computational Mathematics, V. 41, Springer), ISBN 354071040X
Lloyd N. Trefethen (2000) Spectral Methods in MATLAB. SIAM, Philadelphia, PA
Muradova A. D. (2008) "The spectral method and numerical continuation algorithm for the von Kármán problem with postbuckling behaviour of solutions", Advances in Computational Mathematics, 29, pp. 179–206, https://doi.org/10.1007/s10444-007-9050-7.
Muradova A. D. (2015) "A time spectral method for solving the nonlinear dynamic equations of a rectangular elastic plate", Journal of Engineering Mathematics, 92, pp. 83–101, https://doi.org/10.1007/s10665-014-9752-z. | Wikipedia/Spectral_method |
In mathematics, an AW*-algebra is an algebraic generalization of a W*-algebra. They were introduced by Irving Kaplansky in 1951. As operator algebras, von Neumann algebras, among all C*-algebras, are typically handled using one of two means: they are the dual space of some Banach space, and they are determined to a large extent by their projections. The idea behind AW*-algebras is to forget the former, topological, condition, and use only the latter, algebraic, condition.
== Definition ==
Recall that a projection of a C*-algebra is a self-adjoint idempotent element. A C*-algebra A is an AW*-algebra if for every subset S of A, the left annihilator
{\displaystyle \mathrm {Ann} _{L}(S)=\{a\in A\mid \forall s\in S,as=0\}\,}
is generated as a left ideal by some projection p of A, and similarly the right annihilator is generated as a right ideal by some projection q:
{\displaystyle \forall S\subseteq A\,\exists p,q\in \mathrm {Proj} (A)\colon \mathrm {Ann} _{L}(S)=Ap,\quad \mathrm {Ann} _{R}(S)=qA}.
Hence an AW*-algebra is a C*-algebra that is at the same time a Baer *-ring.
The original definition of Kaplansky states that an AW*-algebra is a C*-algebra such that (1) any set of orthogonal projections has a least upper bound, and (2) each maximal commutative C*-subalgebra is generated by its projections. The first condition states that the projections have an interesting structure, while the second condition ensures that there are enough projections for it to be interesting. Note that the second condition is equivalent to the condition that each maximal commutative C*-subalgebra is monotone complete.
== Structure theory ==
Many results concerning von Neumann algebras carry over to AW*-algebras. For example, AW*-algebras can be classified according to the behavior of their projections, and decompose into types. For another example, normal matrices with entries in an AW*-algebra can always be diagonalized. AW*-algebras also always have polar decomposition.
However, there are also ways in which AW*-algebras behave differently from von Neumann algebras. For example, AW*-algebras of type I can exhibit pathological properties, even though Kaplansky already showed that such algebras with trivial center are automatically von Neumann algebras.
== The commutative case ==
A commutative C*-algebra is an AW*-algebra if and only if its spectrum is a Stonean space. Via Stone duality, commutative AW*-algebras therefore correspond to complete Boolean algebras. The projections of a commutative AW*-algebra form a complete Boolean algebra, and conversely, any complete Boolean algebra is isomorphic to the projections of some commutative AW*-algebra.
== References == | Wikipedia/AW*-algebra |
In mathematics, the multiplier algebra, denoted by M(A), of a C*-algebra A is a unital C*-algebra that is the largest unital C*-algebra that contains A as an ideal in a "non-degenerate" way. It is the noncommutative generalization of Stone–Čech compactification. Multiplier algebras were introduced by Busby (1968).
For example, if A is the C*-algebra of compact operators on a separable Hilbert space, M(A) is B(H), the C*-algebra of all bounded operators on H.
== Definition ==
An ideal I in a C*-algebra B is said to be essential if I ∩ J is non-trivial for every nonzero ideal J. An ideal I is essential if and only if I⊥, the "orthogonal complement" of I in the Hilbert C*-module B, is {0}.
Let A be a C*-algebra. Its multiplier algebra M(A) is any C*-algebra satisfying the following universal property: for any C*-algebra D containing A as an ideal, there exists a unique *-homomorphism φ: D → M(A) such that φ extends the identity homomorphism on A and φ(A⊥) = {0}.
Uniqueness up to isomorphism is specified by the universal property. When A is unital, M(A) = A. It also follows from the definition that for any D containing A as an essential ideal, the multiplier algebra M(A) contains D as a C*-subalgebra.
The existence of M(A) can be shown in several ways.
A double centralizer of a C*-algebra A is a pair (L, R) of bounded linear maps on A such that aL(b) = R(a)b for all a and b in A. This implies that ||L|| = ||R||. The set of double centralizers of A can be given a C*-algebra structure. This C*-algebra contains A as an essential ideal and can be identified as the multiplier algebra M(A). For instance, if A is the compact operators K(H) on a separable Hilbert space, then each x ∈ B(H) defines a double centralizer of A simply by multiplication from the left and right.
Alternatively, M(A) can be obtained via representations. The following fact will be needed:
Lemma. If I is an ideal in a C*-algebra B, then any faithful nondegenerate representation π of I can be extended uniquely to B.
Now take any faithful nondegenerate representation π of A on a Hilbert space H. The above lemma, together with the universal property of the multiplier algebra, yields that M(A) is isomorphic to the idealizer of π(A) in B(H). It is immediate that M(K(H)) = B(H).
Lastly, let E be a Hilbert C*-module and B(E) (resp. K(E)) be the adjointable (resp. compact) operators on E. M(A) can be identified via a *-homomorphism of A into B(E). Something similar to the above lemma is true:
Lemma. If I is an ideal in a C*-algebra B, then any faithful nondegenerate *-homomorphism π of I into B(E) can be extended uniquely to B.
Consequently, if π is a faithful nondegenerate *-homomorphism of A into B(E), then M(A) is isomorphic to the idealizer of π(A). For instance, M(K(E)) = B(E) for any Hilbert module E.
The C*-algebra A is isomorphic to the compact operators on the Hilbert module A. Therefore, M(A) is the adjointable operators on A.
== Strict topology ==
Consider the topology on M(A) specified by the seminorms {la, ra}a ∈ A, where
{\displaystyle l_{a}(x)=\|ax\|,\;r_{a}(x)=\|xa\|.}
The resulting topology is called the strict topology on M(A). A is strictly dense in M(A).
When A is unital, M(A) = A, and the strict topology coincides with the norm topology. For B(H) = M(K(H)), the strict topology is the σ-strong* topology. It follows from above that B(H) is complete in the σ-strong* topology.
== Commutative case ==
Let X be a locally compact Hausdorff space, A = C0(X), the commutative C*-algebra of continuous functions that vanish at infinity. Then M(A) is Cb(X), the continuous bounded functions on X. By the Gelfand–Naimark theorem, one has the isomorphism of C*-algebras
{\displaystyle C_{b}(X)\simeq C(Y)}
where Y is the spectrum of Cb(X). Y is in fact homeomorphic to the Stone–Čech compactification βX of X.
== Corona algebra ==
The corona or corona algebra of A is the quotient M(A)/A.
For example, the corona algebra of the algebra of compact operators on a Hilbert space is the Calkin algebra.
The corona algebra is a noncommutative analogue of the corona set of a topological space.
== References ==
B. Blackadar, K-Theory for Operator Algebras, MSRI Publications, 1986.
Busby, Robert C. (1968), "Double centralizers and extensions of C*-algebras" (PDF), Transactions of the American Mathematical Society, 132 (1): 79–99, doi:10.2307/1994883, ISSN 0002-9947, JSTOR 1994883, MR 0225175, S2CID 54047557, archived from the original (PDF) on 2020-02-20
Pedersen, Gert K. (2001) [1994], "Multipliers of C*-algebras", Encyclopedia of Mathematics, EMS Press | Wikipedia/Corona_algebra |
In mathematics, particularly in functional analysis and ring theory, an approximate identity is a net in a Banach algebra or ring (generally without an identity) that acts as a substitute for an identity element.
== Definition ==
A right approximate identity in a Banach algebra A is a net
{\displaystyle \{e_{\lambda }:\lambda \in \Lambda \}}
such that for every element a of A,
{\displaystyle \lim _{\lambda \in \Lambda }\lVert ae_{\lambda }-a\rVert =0.}
Similarly, a left approximate identity in a Banach algebra A is a net
{\displaystyle \{e_{\lambda }:\lambda \in \Lambda \}}
such that for every element a of A,
{\displaystyle \lim _{\lambda \in \Lambda }\lVert e_{\lambda }a-a\rVert =0.}
An approximate identity is a net which is both a right approximate identity and a left approximate identity.
== C*-algebras ==
For C*-algebras, a right (or left) approximate identity consisting of self-adjoint elements is the same as an approximate identity. The net of all positive elements in A of norm ≤ 1 with its natural order is an approximate identity for any C*-algebra. This is called the canonical approximate identity of a C*-algebra. Approximate identities are not unique. For example, for compact operators acting on a Hilbert space, the net consisting of finite rank projections would be another approximate identity.
If an approximate identity is a sequence, we call it a sequential approximate identity and a C*-algebra with a sequential approximate identity is called σ-unital. Every separable C*-algebra is σ-unital, though the converse is false. A commutative C*-algebra is σ-unital if and only if its spectrum is σ-compact. In general, a C*-algebra A is σ-unital if and only if A contains a strictly positive element, i.e. there exists h in A+ such that the hereditary C*-subalgebra generated by h is A.
One sometimes considers approximate identities consisting of specific types of elements. For example, a C*-algebra has real rank zero if and only if every hereditary C*-subalgebra has an approximate identity consisting of projections. This was known as property (HP) in earlier literature.
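The finite-rank-projection approximate identity for the compact operators can be seen in a finite-dimensional snapshot. Below (our own illustrative choice) K is a diagonal compact-style operator and P_n the projection onto the first n coordinates; the operator norm of K P_n − K is exactly 1/(n+1) and tends to 0:

```python
import numpy as np

# Truncating K = diag(1, 1/2, 1/3, ...) with the rank-n projection P_n
# gives ||K P_n - K|| = 1/(n+1) -> 0, mimicking e_lambda a -> a.
N = 100
K = np.diag(1.0 / np.arange(1.0, N + 1))

def op_norm(A):
    return np.linalg.norm(A, 2)        # operator (spectral) norm

for n in (10, 50, 90):
    P = np.zeros((N, N))
    P[:n, :n] = np.eye(n)              # rank-n projection
    assert np.isclose(op_norm(K @ P - K), 1.0 / (n + 1))
```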
== Convolution algebras ==
An approximate identity in a convolution algebra plays the same role as a sequence of function approximations to the Dirac delta function (which is the identity element for convolution). For example, the Fejér kernels of Fourier series theory give rise to an approximate identity.
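The Fejér-kernel approximate identity can be demonstrated discretely. In the sketch below (our own grid sizes and test function; the convolution is evaluated with the FFT), the Cesàro means F_n * f approach f as n grows, and since f is a trigonometric polynomial the error is known exactly:

```python
import numpy as np

# Fejer kernels as an approximate identity for convolution on the circle.
M = 2048                                   # grid points on [0, 2*pi)
t = 2 * np.pi * np.arange(M) / M
f = np.cos(3 * t) + 0.5 * np.sin(t)        # trigonometric test function

def fejer_kernel(n, t):
    # F_n(t) = (1/n) (sin(nt/2) / sin(t/2))^2, with the limit F_n(0) = n
    s = np.sin(t / 2.0)
    singular = np.isclose(s, 0.0)
    F = (np.sin(n * t / 2.0) / np.where(singular, 1.0, s)) ** 2 / n
    return np.where(singular, float(n), F)

def circle_convolve(F, g):
    # (F * g)(x) = (1/2pi) integral F(t) g(x - t) dt, via the FFT
    return np.real(np.fft.ifft(np.fft.fft(F) * np.fft.fft(g))) / M

err = lambda n: np.max(np.abs(circle_convolve(fejer_kernel(n, t), f) - f))

assert err(256) < err(64) < 0.06           # F_n * f -> f as n grows
```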
== Rings ==
In ring theory, an approximate identity is defined in a similar way, except that the ring is given the discrete topology so that a = aeλ for some λ.
A module over a ring with approximate identity is called non-degenerate if for every m in the module there is some λ with m = meλ.
== See also ==
Mollifier
Nascent delta function
Summability kernel | Wikipedia/Σ-unital_algebra |
In the mathematical field of functional analysis, a nuclear C*-algebra is a C*-algebra A such that for every C*-algebra B the injective and projective C*-cross norms coincide on the algebraic tensor product A⊗B and the completion of A⊗B with respect to this norm is a C*-algebra. This property was first studied by Takesaki (1964) under the name "Property T", which is not related to Kazhdan's property T.
== Characterizations ==
Nuclearity admits the following equivalent characterizations:
The identity map, as a completely positive map, approximately factors through matrix algebras. By this equivalence, nuclearity can be considered a noncommutative analogue of the existence of partitions of unity.
The enveloping von Neumann algebra is injective.
It is amenable as a Banach algebra.
(For separable algebras) It is isomorphic to a C*-subalgebra B of the Cuntz algebra 𝒪2 with the property that there exists a conditional expectation from 𝒪2 to B.
== Examples ==
The commutative unital C* algebra of (real or complex-valued) continuous functions on a compact Hausdorff space as well as the noncommutative unital algebra of n×n real or complex matrices are nuclear.
== See also ==
Exact C*-algebra
Injective tensor product
Nuclear space – A generalization of finite-dimensional Euclidean spaces different from Hilbert spaces
Projective tensor product – tensor product defined on two topological vector spaces
== References ==
Connes, Alain (1976), "Classification of injective factors.", Annals of Mathematics, Second Series, 104 (1): 73–115, doi:10.2307/1971057, ISSN 0003-486X, JSTOR 1971057, MR 0454659
Effros, Edward G.; Ruan, Zhong-Jin (2000), Operator spaces, London Mathematical Society Monographs. New Series, vol. 23, The Clarendon Press Oxford University Press, ISBN 978-0-19-853482-2, MR 1793753
Lance, E. Christopher (1982), "Tensor products and nuclear C*-algebras", Operator algebras and applications, Part I (Kingston, Ont., 1980), Proc. Sympos. Pure Math., vol. 38, Providence, R.I.: Amer. Math. Soc., pp. 379–399, MR 0679721
Pisier, Gilles (2003), Introduction to operator space theory, London Mathematical Society Lecture Note Series, vol. 294, Cambridge University Press, ISBN 978-0-521-81165-1, MR 2006539
Rørdam, M. (2002), "Classification of nuclear simple C*-algebras", Classification of nuclear C*-algebras. Entropy in operator algebras, Encyclopaedia Math. Sci., vol. 126, Berlin, New York: Springer-Verlag, pp. 1–145, MR 1878882
Takesaki, Masamichi (1964), "On the cross-norm of the direct product of C*-algebras", The Tohoku Mathematical Journal, Second Series, 16: 111–122, doi:10.2748/tmj/1178243737, ISSN 0040-8735, MR 0165384
Takesaki, Masamichi (2003), "Nuclear C*-algebras", Theory of operator algebras. III, Encyclopaedia of Mathematical Sciences, vol. 127, Berlin, New York: Springer-Verlag, pp. 153–204, ISBN 978-3-540-42913-5, MR 1943007 | Wikipedia/Nuclear_C*-algebra |
In mathematics, a projectionless C*-algebra is a C*-algebra with no nontrivial projections. For a unital C*-algebra, the projections 0 and 1 are trivial. While for a non-unital C*-algebra, only 0 is considered trivial. The problem of whether simple infinite-dimensional C*-algebras with this property exist was posed in 1958 by Irving Kaplansky, and the first example of one was published in 1981 by Bruce Blackadar. For commutative C*-algebras, being projectionless is equivalent to its spectrum being connected. Due to this, being projectionless can be considered as a noncommutative analogue of a connected space.
== Examples ==
C, the algebra of complex numbers.
The reduced group C*-algebra of the free group on finitely many generators.
The Jiang-Su algebra is simple, projectionless, and KK-equivalent to C.
== Dimension drop algebras ==
Let {\displaystyle {\mathcal {B}}_{0}} be the class consisting of the C*-algebras {\displaystyle C_{0}(\mathbb {R} ),C_{0}(\mathbb {R} ^{2}),D_{n},SD_{n}} for each {\displaystyle n\geq 2}, and let {\displaystyle {\mathcal {B}}} be the class of all C*-algebras of the form {\displaystyle M_{k_{1}}(B_{1})\oplus M_{k_{2}}(B_{2})\oplus \dots \oplus M_{k_{r}}(B_{r})}, where {\displaystyle r,k_{1},\dots ,k_{r}} are integers, and where {\displaystyle B_{1},\dots ,B_{r}} belong to {\displaystyle {\mathcal {B}}_{0}}.
Every C*-algebra A in {\displaystyle {\mathcal {B}}} is projectionless; moreover, its only projection is 0.
== References == | Wikipedia/Projectionless_C*-algebra |
Superstrong approximation is a generalisation of strong approximation in algebraic groups G, to provide spectral gap results. The spectrum in question is that of the Laplacian matrix associated to a family of quotients of a discrete group Γ; and the gap is that between the first and second eigenvalues (normalisation so that the first eigenvalue corresponds to constant functions as eigenvectors). Here Γ is a subgroup of the rational points of G, but need not be a lattice: it may be a so-called thin group. The "gap" in question is a lower bound (absolute constant) for the difference of those eigenvalues.
A consequence and equivalent of this property, potentially holding for Zariski dense subgroups Γ of the special linear group over the integers, and in more general classes of algebraic groups G, is that the sequence of Cayley graphs for reductions Γp modulo prime numbers p, with respect to any fixed set S in Γ that is a symmetric set and generating set, is an expander family.
In this context "strong approximation" is the statement that S when reduced generates the full group of points of G over the prime fields with p elements, when p is large enough. It is equivalent to the Cayley graphs being connected (when p is large enough), or that the locally constant functions on these graphs are constant, so that the eigenspace for the first eigenvalue is one-dimensional. Superstrong approximation therefore is a concrete quantitative improvement on these statements.
== Background ==
Property {\displaystyle (\tau )} is an analogue in discrete group theory of Kazhdan's property (T), and was introduced by Alexander Lubotzky. For a given family of normal subgroups N of finite index in Γ, one equivalent formulation is that the Cayley graphs of the groups Γ/N, all with respect to a fixed symmetric set of generators S, form an expander family. Superstrong approximation is therefore a formulation of property {\displaystyle (\tau )}, where the subgroups N are the kernels of reduction modulo large enough primes p.
The Lubotzky–Weiss conjecture states (for special linear groups and reduction modulo primes) that an expansion result of this kind holds independent of the choice of S. For applications, it is also relevant to have results where the modulus is not restricted to being a prime.
== Proofs of superstrong approximation ==
Results on superstrong approximation have been found using techniques on approximate subgroups, and growth rate in finite simple groups.
== Notes ==
== References ==
Breuillard, Emmanuel; Oh, Hee, eds. (2014). Thin Groups and Superstrong Approximation. Cambridge University Press. ISBN 978-1-107-03685-7.
Matthews, C. R.; Vaserstein, L. N.; Weisfeiler, B. (1984). "Congruence properties of Zariski-dense subgroups. I.". Proc. London Math. Soc. Series 3. 48 (3): 514–532. doi:10.1112/plms/s3-48.3.514. MR 0735226. | Wikipedia/Superstrong_approximation |
In functional analysis, a uniform algebra A on a compact Hausdorff topological space X is a closed (with respect to the uniform norm) subalgebra of the C*-algebra C(X) (the continuous complex-valued functions on X) with the following properties:
the constant functions are contained in A
for every x, y ∈ X with x ≠ y, there is f ∈ A with f(x) ≠ f(y); this is called separating the points of X.
As a closed subalgebra of the commutative Banach algebra C(X), a uniform algebra is itself a unital commutative Banach algebra (when equipped with the uniform norm). Hence it is, by definition, a Banach function algebra.
A uniform algebra A on X is said to be natural if the maximal ideals of A are precisely the ideals {\displaystyle M_{x}} of functions vanishing at a point x in X.
== Abstract characterization ==
If A is a unital commutative Banach algebra such that {\displaystyle ||a^{2}||=||a||^{2}} for all a in A, then there is a compact Hausdorff space X such that A is isomorphic as a Banach algebra to a uniform algebra on X. This result follows from the spectral radius formula and the Gelfand representation.
== Notes ==
== References == | Wikipedia/Uniform_algebra |
In algebraic geometry, a correspondence between algebraic varieties V and W is a subset R of V×W, that is closed in the Zariski topology. In set theory, a subset of a Cartesian product of two sets is called a binary relation or correspondence; thus, a correspondence here is a relation that is defined by algebraic equations. There are some important examples, even when V and W are algebraic curves: for example the Hecke operators of modular form theory may be considered as correspondences of modular curves.
However, the definition of a correspondence in algebraic geometry is not completely standard. For instance, Fulton, in his book on intersection theory, uses the definition above. In the literature, however, a correspondence from a variety X to a variety Y is often taken to be a subset Z of X×Y such that Z is finite and surjective over each component of X. Note the asymmetry in this latter definition, which speaks of a correspondence from X to Y rather than between X and Y. The typical example of the latter kind of correspondence is the graph of a function f:X→Y. Correspondences also play an important role in the construction of motives (cf. presheaf with transfers).
== See also ==
Adequate equivalence relation
== References == | Wikipedia/Correspondence_(algebraic_geometry) |
In applied mathematics, proto-value functions (PVFs) are automatically learned basis functions that are useful in approximating task-specific value functions, providing a compact representation of the powers of transition matrices. They provide a novel framework for solving the credit assignment problem, introducing an approach to Markov decision processes (MDPs) and reinforcement learning problems that uses multiscale spectral and manifold learning methods. Proto-value functions are generated by spectral analysis of a graph, using the graph Laplacian.
Proto-value functions were first introduced in the context of reinforcement learning by Sridhar Mahadevan in his paper, Proto-Value Functions: Developmental Reinforcement Learning at ICML 2005.
== Motivation ==
Value function approximation is a critical component of solving Markov decision processes (MDPs) defined over a continuous state space. A good function approximator allows a reinforcement learning (RL) agent to accurately represent the value of any state it has experienced, without explicitly storing its value. Linear function approximation using basis functions, such as radial basis functions, polynomial state encodings, and CMACs, is a common way of constructing a value function approximation. However, the parameters associated with these basis functions often require significant domain-specific hand-engineering. Proto-value functions attempt to avoid this hand-engineering by accounting for the underlying manifold structure of the problem domain.
== Overview ==
Proto-value functions are task-independent global basis functions that collectively span the entire space of possible value functions for a given state space. They incorporate geometric constraints intrinsic to the environment. For example, states close in Euclidean distance (such as states on opposite sides of a wall) may be far apart in manifold space. Previous approaches to this nonlinearity problem lacked a broad theoretical framework, and consequently have only been explored in the context of discrete MDPs.
Proto-value functions arise from reformulating the problem of value function approximation as real-valued function approximation on a graph or manifold. This results in broader applicability of the learned bases and enables a new class of learning algorithms, which learn representations and policies at the same time.
== Basis functions from graph Laplacian ==
This approach constructs the basis functions by spectral analysis of the graph Laplacian, a self-adjoint (or symmetric) operator on the space of functions on the graph, closely related to the random walk operator.
For the sake of simplicity, assume that the underlying state space can be represented as an undirected, unweighted graph {\displaystyle G=(V,E)}.
The combinatorial Laplacian {\displaystyle L} is defined as the operator {\displaystyle L=D-A}, where {\displaystyle D} is a diagonal matrix called the degree matrix and {\displaystyle A} is the adjacency matrix.
The spectral analysis of the Laplace operator on a graph consists of finding the eigenvalues and eigenfunctions which solve the equation {\displaystyle L\varphi _{\lambda }=\lambda \varphi _{\lambda },} where {\displaystyle L} is the combinatorial Laplacian and {\displaystyle \varphi _{\lambda }} is an eigenfunction associated with the eigenvalue {\displaystyle \lambda }. Here the term "eigenfunction" is used to denote what is traditionally called an eigenvector in linear algebra, because the Laplacian eigenvectors can naturally be viewed as functions mapping each vertex to a real number.
The combinatorial Laplacian is not the only operator on graphs to select from. Other possible graph operators include:
Normalized Laplacian {\displaystyle L_{\text{normalized}}=I-D^{-1/2}AD^{-1/2}}
Random walk operator {\displaystyle P=D^{-1}A}
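As a sketch (not from the article), the three operators above can be formed directly with NumPy for a small, made-up state graph; the eigenvectors of the combinatorial Laplacian, ordered by increasing eigenvalue, are then the PVFs:

```python
import numpy as np

# Made-up example: a 4-state chain MDP 0 - 1 - 2 - 3.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])      # adjacency matrix
D = np.diag(A.sum(axis=1))            # degree matrix

L = D - A                                              # combinatorial Laplacian
D_isqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = np.eye(len(A)) - D_isqrt @ A @ D_isqrt        # normalized Laplacian
P = np.linalg.inv(D) @ A                               # random-walk operator

# Proto-value functions: eigenvectors of the Laplacian, smoothest first.
eigvals, pvfs = np.linalg.eigh(L)     # eigh returns eigenvalues in ascending order
```

For a connected graph the smallest eigenvalue is 0 and its eigenvector is constant, so the first PVF is flat and later PVFs capture progressively finer structure of the state graph.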
=== Graph construction on discrete state space ===
For a finite state space, the graph {\displaystyle G} mentioned above can be constructed simply by examining the connections between states. Let {\displaystyle S_{i}} and {\displaystyle S_{j}} be any two states. Then {\displaystyle G_{i,j}={\begin{cases}1&{\text{if }}S_{i}\leftrightarrow S_{j}\\0&{\text{otherwise}}\end{cases}}}
It is important to note that this can only be done when the state space is finite and of reasonable size.
=== Graph construction on continuous or large state space ===
For a continuous state space, or simply a very large discrete state space, it is necessary to sample from the manifold underlying the state space and then construct the graph {\displaystyle G} from the samples.
There are a few issues to consider here:
How to sample the manifold
Random walk or guided exploration
How to determine whether two samples should be connected
== Application ==
Once the PVFs are generated, they can be plugged into a traditional function approximation framework. One such method is least-squares approximation.
=== Least-squares approximation using proto-value functions ===
Let {\displaystyle \Phi _{G}=\left\{V_{1}^{G},\dots ,V_{k}^{G}\right\}} be the basis set of PVFs, where each {\displaystyle V_{i}^{G}} is an eigenfunction defined over all states in the graph {\displaystyle G}.
Let {\displaystyle {\widehat {V}}^{\pi }} be the target value function, which is known only on a subset of states {\displaystyle S_{m}^{G}=\left\{s_{1},\dots ,s_{m}\right\}}.
Define the Gram matrix {\displaystyle K_{G}=\left(\Phi _{m}^{G}\right)^{T}\Phi _{m}^{G}.}
Here {\displaystyle \Phi _{m}^{G}} is the component-wise projection of the PVFs onto the states in {\displaystyle S_{m}^{G}}. Hence, each entry of the Gram matrix is {\displaystyle K_{G}(i,j)=\sum _{k}V_{i}^{G}(k)V_{j}^{G}(k).}
The coefficients that minimize the least-squares error are then given by {\displaystyle \alpha =K_{G}^{-1}\left(\Phi _{m}^{G}\right)^{T}{\widehat {V}}^{\pi }.}
A nonlinear least-squares approach is possible by using the k PVFs with the largest absolute coefficients to compute the approximation.
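A minimal NumPy sketch of this fitting step, under assumed toy data (the 5-state chain, the sampled states, and the target values below are all made up for illustration):

```python
import numpy as np

# PVFs for a made-up 5-state chain graph.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0             # chain adjacency
L = np.diag(A.sum(axis=1)) - A                  # combinatorial Laplacian
_, eigvecs = np.linalg.eigh(L)
Phi = eigvecs[:, :3]                            # first k = 3 PVFs as columns

sampled = [0, 2, 4]                             # states where V^pi is known
V_pi = np.array([1.0, 0.5, 0.25])               # made-up target values
Phi_m = Phi[sampled, :]                         # PVFs restricted to the samples

K = Phi_m.T @ Phi_m                             # Gram matrix K_G
alpha = np.linalg.solve(K, Phi_m.T @ V_pi)      # least-squares coefficients
V_hat = Phi @ alpha                             # approximate values on all states
```

Because the PVFs are defined on every vertex of the graph, the fitted coefficients immediately extend the approximation from the sampled states to the whole state space.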
== See also ==
Reinforcement learning
Markov decision process
Basis function
Eigenfunction
Laplacian matrix
== References == | Wikipedia/Proto-value_function |
In mathematics, specifically in functional and complex analysis, the disk algebra A(D) (also spelled disc algebra) is the set of holomorphic functions
ƒ : D → {\displaystyle \mathbb {C} } (where D is the open unit disk in the complex plane {\displaystyle \mathbb {C} }) that extend to a continuous function on the closure of D. That is,
{\displaystyle A(\mathbf {D} )=H^{\infty }(\mathbf {D} )\cap C({\overline {\mathbf {D} }}),}
where H∞(D) denotes the Banach space of bounded analytic functions on the unit disc D (i.e. a Hardy space).
When endowed with the pointwise addition (f + g)(z) = f(z) + g(z) and pointwise multiplication (fg)(z) = f(z)g(z), this set becomes an algebra over C, since if f and g belong to the disk algebra, then so do f + g and fg.
Given the uniform norm {\displaystyle \|f\|=\sup {\big \{}|f(z)|\mid z\in \mathbf {D} {\big \}}=\max {\big \{}|f(z)|\mid z\in {\overline {\mathbf {D} }}{\big \}},}
by construction, it becomes a uniform algebra and a commutative Banach algebra.
By construction, the disc algebra is a closed subalgebra of the Hardy space H∞. In contrast to the stronger requirement that a continuous extension to the circle exists, it is a lemma of Fatou that a general element of H∞ can be radially extended to the circle almost everywhere.
== References == | Wikipedia/Disk_algebra |
The spectrum of a linear operator {\displaystyle T} that operates on a Banach space {\displaystyle X} is a fundamental concept of functional analysis. The spectrum consists of all scalars {\displaystyle \lambda } such that the operator {\displaystyle T-\lambda } does not have a bounded inverse on {\displaystyle X}. The spectrum has a standard decomposition into three parts:
a point spectrum, consisting of the eigenvalues of {\displaystyle T};
a continuous spectrum, consisting of the scalars that are not eigenvalues but make the range of {\displaystyle T-\lambda } a proper dense subset of the space;
a residual spectrum, consisting of all other scalars in the spectrum.
This decomposition is relevant to the study of differential equations, and has applications to many branches of science and engineering. A well-known example from quantum mechanics is the explanation for the discrete spectral lines and the continuous band in the light emitted by excited atoms of hydrogen.
== Decomposition into point spectrum, continuous spectrum, and residual spectrum ==
=== For bounded Banach space operators ===
Let X be a Banach space, B(X) the family of bounded operators on X, and T ∈ B(X). By definition, a complex number λ is in the spectrum of T, denoted σ(T), if T − λ does not have an inverse in B(X).
If T − λ is one-to-one and onto, i.e. bijective, then its inverse is bounded; this follows directly from the open mapping theorem of functional analysis. So, λ is in the spectrum of T if and only if T − λ is not one-to-one or not onto. One distinguishes three separate cases:
T − λ is not injective. That is, there exist two distinct elements x,y in X such that (T − λ)(x) = (T − λ)(y). Then z = x − y is a non-zero vector such that T(z) = λz. In other words, λ is an eigenvalue of T in the sense of linear algebra. In this case, λ is said to be in the point spectrum of T, denoted σp(T).
T − λ is injective, and its range is a dense subset R of X, but is not the whole of X. In other words, there exists some element x in X such that (T − λ)(y) can be made as close to x as desired, with y in X, but is never equal to x. It can be proved that, in this case, T − λ is not bounded below (i.e., it maps elements of X that are far apart to elements that are arbitrarily close together). Equivalently, the inverse linear operator (T − λ)−1, which is defined on the dense subset R, is not a bounded operator, and therefore cannot be extended to the whole of X. Then λ is said to be in the continuous spectrum, σc(T), of T.
T − λ is injective but does not have dense range. That is, there is some element x in X and a neighborhood N of x such that (T − λ)(y) is never in N. In this case, the inverse map (T − λ)−1, defined on the range of T − λ, may be bounded or unbounded, but in any case does not admit a unique extension to a bounded linear map on all of X. Then λ is said to be in the residual spectrum of T, σr(T).
So σ(T) is the disjoint union of these three sets, {\displaystyle \sigma (T)=\sigma _{p}(T)\cup \sigma _{c}(T)\cup \sigma _{r}(T).}
The complement of the spectrum {\displaystyle \sigma (T)} is known as the resolvent set {\displaystyle \rho (T)}; that is, {\displaystyle \rho (T)=\mathbb {C} \setminus \sigma (T)}.
In addition, when T − λ does not have dense range, whether it is injective or not, λ is said to be in the compression spectrum of T, σcp(T). The compression spectrum consists of the whole residual spectrum and part of the point spectrum.
=== For unbounded operators ===
The spectrum of an unbounded operator can be divided into three parts in the same way as in the bounded case, but because the operator is not defined everywhere, the definitions of domain, inverse, etc. are more involved.
=== Examples ===
==== Multiplication operator ====
Given a σ-finite measure space (S, Σ, μ), consider the Banach space Lp(μ). A function h: S → C is called essentially bounded if h is bounded μ-almost everywhere. An essentially bounded h induces a bounded multiplication operator Th on Lp(μ):
{\displaystyle (T_{h}f)(s)=h(s)\cdot f(s).}
The operator norm of Th is the essential supremum of |h|. The essential range of h is defined in the following way: a complex number λ is in the essential range of h if for all ε > 0, the preimage of the open ball Bε(λ) under h has strictly positive measure. We will show first that σ(Th) coincides with the essential range of h and then examine its various parts.
If λ is not in the essential range of h, take ε > 0 such that h−1(Bε(λ)) has zero measure. The function g(s) = 1/(h(s) − λ) is bounded almost everywhere by 1/ε. The multiplication operator Tg satisfies Tg · (Th − λ) = (Th − λ) · Tg = I. So λ does not lie in the spectrum of Th. On the other hand, if λ lies in the essential range of h, consider the sequence of sets {Sn = h−1(B1/n(λ))}. Each Sn has positive measure. Let fn be the characteristic function of Sn. We can compute directly
{\displaystyle \|(T_{h}-\lambda )f_{n}\|_{p}^{p}=\|(h-\lambda )f_{n}\|_{p}^{p}=\int _{S_{n}}|h-\lambda \;|^{p}d\mu \leq {\frac {1}{n^{p}}}\;\mu (S_{n})={\frac {1}{n^{p}}}\|f_{n}\|_{p}^{p}.}
This shows Th − λ is not bounded below, therefore not invertible.
If λ is such that μ(h−1({λ})) > 0, then λ lies in the point spectrum of Th, as follows. Let f be the characteristic function of the measurable set h−1({λ}); then by considering two cases, we find
{\displaystyle \forall s\in S,\;(T_{h}f)(s)=\lambda f(s),}
so λ is an eigenvalue of Th.
Any λ in the essential range of h that does not have a positive measure preimage is in the continuous spectrum of Th. To show this, we must show that Th − λ has dense range. Given f ∈ Lp(μ), again we consider the sequence of sets {Sn = h−1(B1/n(λ))}. Let gn be the characteristic function of S − Sn. Define
{\displaystyle f_{n}(s)={\frac {1}{h(s)-\lambda }}\cdot g_{n}(s)\cdot f(s).}
Direct calculation shows that fn ∈ Lp(μ), with {\displaystyle \|f_{n}\|_{p}\leq n\|f\|_{p}}. Then by the dominated convergence theorem, {\displaystyle (T_{h}-\lambda )f_{n}\rightarrow f} in the Lp(μ) norm.
Therefore, multiplication operators have no residual spectrum. In particular, by the spectral theorem, normal operators on a Hilbert space have no residual spectrum.
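In the simplest finite case (S a finite set with counting measure) the multiplication operator is just a diagonal matrix, and the claims above can be checked directly; the values of h here are made up for illustration:

```python
import numpy as np

# S = {0, 1, 2, 3} with counting measure: T_h f = h * f is diag(h).
h = np.array([2.0, -1.0, 0.5, 2.0])   # a made-up essentially bounded h
T_h = np.diag(h)

# Every value of h has a preimage of positive measure, so each lies in the
# point spectrum, with the indicator of h^{-1}({lambda}) as an eigenvector.
f = (h == 2.0).astype(float)          # indicator function of h^{-1}({2})
spectrum = np.linalg.eigvals(T_h)     # equals the (essential) range of h
```

Here the essential range of h is just its set of values, and the computed spectrum consists exactly of those values, all of them eigenvalues.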
==== Shifts ====
In the special case when S is the set of natural numbers and μ is the counting measure, the corresponding Lp(μ) is denoted by lp. This space consists of complex-valued sequences {xn} such that {\displaystyle \sum _{n\geq 0}|x_{n}|^{p}<\infty .}
For 1 < p < ∞, lp is reflexive. Define the left shift T : lp → lp by {\displaystyle T(x_{1},x_{2},x_{3},\dots )=(x_{2},x_{3},x_{4},\dots ).}
T is a partial isometry with operator norm 1. So σ(T) lies in the closed unit disk of the complex plane.
T* is the right shift (or unilateral shift), which is an isometry on lq, where 1/p + 1/q = 1: {\displaystyle T^{*}(x_{1},x_{2},x_{3},\dots )=(0,x_{1},x_{2},\dots ).}
For λ ∈ C with |λ| < 1, {\displaystyle x=(1,\lambda ,\lambda ^{2},\dots )\in l^{p}} and Tx = λx. Consequently, the point spectrum of T contains the open unit disk. Now, T* has no eigenvalues, i.e. σp(T*) is empty. Thus, invoking reflexivity and the theorem on the spectrum of the adjoint operator (that σp(T) ⊂ σr(T*) ∪ σp(T*)), we can deduce that the open unit disk lies in the residual spectrum of T*.
The spectrum of a bounded operator is closed, which implies that the unit circle, { |λ| = 1 } ⊂ C, is in σ(T). Again by reflexivity of lp and the theorem given above (this time, that σr(T) ⊂ σp(T*)), we have that σr(T) is also empty. Therefore, for a complex number λ with unit norm, one must have λ ∈ σp(T) or λ ∈ σc(T). Now if |λ| = 1 and {\displaystyle Tx=\lambda x,\qquad i.e.\;(x_{2},x_{3},x_{4},\dots )=\lambda (x_{1},x_{2},x_{3},\dots ),}
then {\displaystyle x=x_{1}(1,\lambda ,\lambda ^{2},\dots ),}
which cannot be in l p, a contradiction. This means the unit circle must lie in the continuous spectrum of T.
So for the left shift T, σp(T) is the open unit disk and σc(T) is the unit circle, whereas for the right shift T*, σr(T*) is the open unit disk and σc(T*) is the unit circle.
For p = 1, one can perform a similar analysis. The results will not be exactly the same, since reflexivity no longer holds.
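The eigenvector claim for the left shift can be checked numerically on a finite truncation (the value of λ below is made up); apart from the final coordinate, where the truncation interferes, the geometric sequence satisfies Tx = λx:

```python
import numpy as np

lam = 0.5 + 0.3j                       # made-up lambda with |lambda| < 1
n = 8
x = lam ** np.arange(n)                # truncation of x = (1, lam, lam^2, ...)

Tx = np.append(x[1:], 0.0)             # truncated left shift: (Tx)_k = x_{k+1}

# On the first n - 1 coordinates this reproduces T x = lam x; the l^p norms
# of x stay bounded as n grows because |lam| < 1.
```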
== Self-adjoint operators on Hilbert space ==
Hilbert spaces are Banach spaces, so the above discussion applies to bounded operators on Hilbert spaces as well. A subtle point concerns the spectrum of T*. For a Banach space, T* denotes the transpose and σ(T*) = σ(T). For a Hilbert space, T* normally denotes the adjoint of an operator T ∈ B(H), not the transpose, and σ(T*) is not σ(T) but rather its image under complex conjugation.
For a self-adjoint T ∈ B(H), the Borel functional calculus gives additional ways to break up the spectrum naturally.
=== Borel functional calculus ===
This subsection briefly sketches the development of this calculus. The idea is to first establish the continuous functional calculus, and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem. For the continuous functional calculus, the key ingredients are the following:
If T is self-adjoint, then for any polynomial P, the operator norm satisfies {\displaystyle \|P(T)\|=\sup _{\lambda \in \sigma (T)}|P(\lambda )|.}
The Stone–Weierstrass theorem, which implies that the family of polynomials (with complex coefficients) is dense in C(σ(T)), the continuous functions on σ(T).
The family C(σ(T)) is a Banach algebra when endowed with the uniform norm. So the mapping {\displaystyle P\rightarrow P(T)} is an isometric homomorphism from a dense subset of C(σ(T)) to B(H). Extending the mapping by continuity gives f(T) for f ∈ C(σ(T)): let Pn be polynomials such that Pn → f uniformly, and define f(T) = lim Pn(T). This is the continuous functional calculus.
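For a finite-dimensional self-adjoint T the functional calculus reduces to applying f to the eigenvalues, and the norm identity above can be verified directly; the matrix and polynomial below are made up for illustration:

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])        # made-up self-adjoint (symmetric) T

def P_scalar(x):
    return x**2 - 3.0 * x + 1.0        # made-up polynomial P

# P(T) via the functional calculus: apply P to the eigenvalues of T.
eigvals, U = np.linalg.eigh(T)
P_T = U @ np.diag(P_scalar(eigvals)) @ U.T

norm_P_T = np.linalg.norm(P_T, 2)                    # operator norm ||P(T)||
sup_over_spectrum = np.abs(P_scalar(eigvals)).max()  # sup of |P| over sigma(T)
```

Diagonalizing and applying P to the eigenvalues agrees with evaluating the matrix polynomial T² − 3T + I directly, which is the isometric homomorphism property in miniature.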
For a fixed h ∈ H, we notice that {\displaystyle f\rightarrow \langle h,f(T)h\rangle }
is a positive linear functional on C(σ(T)). According to the Riesz–Markov–Kakutani representation theorem, there exists a unique measure μh on σ(T) such that
{\displaystyle \int _{\sigma (T)}f\,d\mu _{h}=\langle h,f(T)h\rangle .}
This measure is sometimes called the spectral measure associated to h. The spectral measures can be used to extend the continuous functional calculus to bounded Borel functions. For a bounded Borel-measurable function g, define the proposed operator g(T) by
{\displaystyle \int _{\sigma (T)}g\,d\mu _{h}=\langle h,g(T)h\rangle .}
Via the polarization identity, one can recover (since H is assumed to be complex) {\displaystyle \langle k,g(T)h\rangle ,} and therefore g(T)h for arbitrary h.
In the present context, the spectral measures, combined with a result from measure theory, give a decomposition of σ(T).
=== Decomposition into absolutely continuous, singular continuous, and pure point ===
Let h ∈ H and μh be its corresponding spectral measure on σ(T). According to a refinement of Lebesgue's decomposition theorem, μh can be decomposed into three mutually singular parts:
{\displaystyle \mu _{h}=\mu _{\mathrm {ac} }+\mu _{\mathrm {sc} }+\mu _{\mathrm {pp} },}
where μac is absolutely continuous with respect to the Lebesgue measure, μsc is singular with respect to the Lebesgue measure and atomless, and μpp is a pure point measure.
All three types of measures are invariant under linear operations. Let Hac be the subspace consisting of vectors whose spectral measures are absolutely continuous with respect to the Lebesgue measure. Define Hpp and Hsc in analogous fashion. These subspaces are invariant under T. For example, suppose h ∈ Hac and k = Th. Let χ be the characteristic function of some Borel set in σ(T); then
{\displaystyle \langle k,\chi (T)k\rangle =\int _{\sigma (T)}\chi (\lambda )\cdot \lambda ^{2}d\mu _{h}(\lambda )=\int _{\sigma (T)}\chi (\lambda )\;d\mu _{k}(\lambda ).}
So {\displaystyle \lambda ^{2}d\mu _{h}=d\mu _{k}} and k ∈ Hac. Furthermore, applying the spectral theorem gives
{\displaystyle H=H_{\mathrm {ac} }\oplus H_{\mathrm {sc} }\oplus H_{\mathrm {pp} }.}
This leads to the following definitions:
The spectrum of T restricted to Hac is called the absolutely continuous spectrum of T, σac(T).
The spectrum of T restricted to Hsc is called its singular spectrum, σsc(T).
The set of eigenvalues of T is called the pure point spectrum of T, σpp(T).
The closure of the eigenvalues is the spectrum of T restricted to Hpp.
So {\displaystyle \sigma (T)=\sigma _{\mathrm {ac} }(T)\cup \sigma _{\mathrm {sc} }(T)\cup {{\bar {\sigma }}_{\mathrm {pp} }(T)}.}
=== Comparison ===
A bounded self-adjoint operator on a Hilbert space is, a fortiori, a bounded operator on a Banach space. Therefore, one can also apply to T the decomposition of the spectrum that was achieved above for bounded operators on a Banach space. Unlike the Banach space formulation, the union {\displaystyle \sigma (T)={{\bar {\sigma }}_{\mathrm {pp} }(T)}\cup \sigma _{\mathrm {ac} }(T)\cup \sigma _{\mathrm {sc} }(T)}
need not be disjoint. It is disjoint when the operator T is of uniform multiplicity, say m, i.e. if T is unitarily equivalent to multiplication by λ on the direct sum {\displaystyle \bigoplus _{i=1}^{m}L^{2}(\mathbb {R} ,\mu _{i})} for some Borel measures {\displaystyle \mu _{i}}. When more than one measure appears in the above expression, we see that it is possible for the union of the three types of spectra to not be disjoint. If λ ∈ σac(T) ∩ σpp(T), λ is sometimes called an eigenvalue embedded in the absolutely continuous spectrum.
When T is unitarily equivalent to multiplication by λ on {\displaystyle L^{2}(\mathbb {R} ,\mu ),} the decomposition of σ(T) from the Borel functional calculus is a refinement of the Banach space case.
=== Quantum mechanics ===
The preceding comments can be extended to unbounded self-adjoint operators, since the Riesz–Markov theorem holds for locally compact Hausdorff spaces.
In quantum mechanics, observables are (often unbounded) self-adjoint operators and their spectra are the possible outcomes of measurements.
The pure point spectrum corresponds to bound states in the following way:
A quantum state is a bound state if and only if it is finitely normalizable for all times {\displaystyle t\in \mathbb {R} }.
An observable has pure point spectrum if and only if its eigenstates form an orthonormal basis of {\displaystyle H}.
A particle is said to be in a bound state if it remains "localized" in a bounded region of space. Intuitively one might therefore think that the "discreteness" of the spectrum is intimately related to the corresponding states being "localized". However, a careful mathematical analysis shows that this is not true in general. For example, consider the function {\displaystyle f(x)={\begin{cases}n&{\text{if }}x\in \left[n,n+{\frac {1}{n^{4}}}\right]\\0&{\text{else}}\end{cases}},\quad \forall n\in \mathbb {N} .}
This function is normalizable (i.e. {\displaystyle f\in L^{2}(\mathbb {R} )}), as {\displaystyle \int _{n}^{n+{\frac {1}{n^{4}}}}n^{2}\,dx={\frac {1}{n^{2}}}\Rightarrow \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}.}
This series, known as the Basel problem, converges to {\textstyle {\frac {\pi ^{2}}{6}}}. Yet {\displaystyle f} increases as {\displaystyle x\to \infty }, i.e., the state "escapes to infinity". The phenomena of Anderson localization and dynamical localization describe when eigenfunctions are localized in a physical sense. Anderson localization means that eigenfunctions decay exponentially as {\displaystyle x\to \infty }. Dynamical localization is more subtle to define.
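The normalization computation can be sanity-checked numerically by truncating the Basel sum at a hypothetical cutoff N:

```python
import numpy as np

N = 100_000                                        # made-up truncation point
partial_sum = np.sum(1.0 / np.arange(1.0, N + 1) ** 2)
basel = np.pi ** 2 / 6                             # exact value of the series
# The tail beyond N is below 1/N, so the truncation error is tiny.
```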
Sometimes, when performing quantum mechanical measurements, one encounters "eigenstates" that are not localized, e.g., quantum states that do not lie in L2(R). These are free states belonging to the absolutely continuous spectrum. In the spectral theorem for unbounded self-adjoint operators, these states are referred to as "generalized eigenvectors" of an observable, with "generalized eigenvalues" that do not necessarily belong to its spectrum. Alternatively, if one insists that the notion of eigenvectors and eigenvalues survive the passage to the rigorous theory, one can consider operators on rigged Hilbert spaces.
An example of an observable whose spectrum is purely absolutely continuous is the position operator of a free particle moving on the entire real line. Also, since the momentum operator is unitarily equivalent to the position operator, via the Fourier transform, it has a purely absolutely continuous spectrum as well.
The singular spectrum corresponds to physically impossible outcomes. It was believed for some time that the singular spectrum was something artificial. However, examples such as the almost Mathieu operator and random Schrödinger operators have shown that all types of spectra arise naturally in physics.
== Decomposition into essential spectrum and discrete spectrum ==
Let {\displaystyle A:\,X\to X} be a closed operator defined on the domain {\displaystyle D(A)\subset X} which is dense in X. Then there is a decomposition of the spectrum of A into a disjoint union,
{\displaystyle \sigma (A)=\sigma _{\mathrm {ess} ,5}(A)\sqcup \sigma _{\mathrm {d} }(A),}
where {\displaystyle \sigma _{\mathrm {ess} ,5}(A)} is the fifth type of the essential spectrum of A (if A is a self-adjoint operator, then {\displaystyle \sigma _{\mathrm {ess} ,k}(A)=\sigma _{\mathrm {ess} }(A)} for all {\displaystyle 1\leq k\leq 5});
{\displaystyle \sigma _{\mathrm {d} }(A)} is the discrete spectrum of A, which consists of normal eigenvalues or, equivalently, of isolated points of {\displaystyle \sigma (A)} such that the corresponding Riesz projector has finite rank. It is a subset of the point spectrum, i.e., {\displaystyle \sigma _{d}(A)\subset \sigma _{p}(A)}, as the set of eigenvalues of A need not be isolated points of the spectrum.
== See also ==
Point spectrum, the set of eigenvalues.
Essential spectrum, spectrum of an operator modulo compact perturbations.
Discrete spectrum (mathematics), the set of normal eigenvalues.
Spectral theory of normal C*-algebras
Spectrum (functional analysis)
== Notes ==
== References ==
Blanchard, Philippe; Brüning, Erwin (2015). Mathematical Methods in Physics. Birkhäuser. ISBN 978-3-319-14044-5.
Dunford, N.; Schwartz, J. T. (1988). Linear Operators, Part 1: General Theory. John Wiley & Sons. ISBN 0-471-60848-3.
Jitomirskaya, S.; Simon, B. (1994). "Operators with singular continuous spectrum: III. Almost periodic Schrödinger operators". Communications in Mathematical Physics. 165 (1): 201–205. doi:10.1007/BF02099743. ISSN 0010-3616.
de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid.
Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: I: Functional analysis. Academic Press. ISBN 978-0-12-585050-6.
Ruelle, D. (1969). "A remark on bound states in potential-scattering theory" (PDF). Il Nuovo Cimento A. 61 (4). Springer Science and Business Media LLC. doi:10.1007/bf02819607. ISSN 0369-3546.
Simon, B. (1978). "An Overview of Rigorous Scattering Theory".
Simon, B.; Stolz, G. (1996). "Operators with singular continuous spectrum, V. Sparse potentials". Proceedings of the American Mathematical Society. 124 (7): 2073–2080. doi:10.1090/S0002-9939-96-03465-X. ISSN 0002-9939.
Simon, Barry (2005). Orthogonal polynomials on the unit circle. Part 1. Classical theory. American Mathematical Society Colloquium Publications. Vol. 54. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-3446-6. MR 2105088.
Teschl, G. (2014). Mathematical Methods in Quantum Mechanics. Providence (R.I): American Mathematical Soc. ISBN 978-1-4704-1704-8.
The Schröder–Bernstein theorem from set theory has analogs in the context of operator algebras. This article discusses such operator-algebraic results.
== For von Neumann algebras ==
Suppose M is a von Neumann algebra and E, F are projections in M. Let ~ denote the Murray–von Neumann equivalence relation on M. Define a partial order « on the family of projections by E « F if E ~ F' for some projection F' ≤ F. In other words, E « F if there exists a partial isometry U ∈ M such that U*U = E and UU* ≤ F.
For closed subspaces M and N where projections PM and PN, onto M and N respectively, are elements of M, M « N if PM « PN.
The Schröder–Bernstein theorem states that if M « N and N « M, then M ~ N.
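For concreteness, the relation « can be exhibited with 2×2 matrices; a minimal pure-Python sketch (the particular projections and partial isometry below are illustrative assumptions, not from the article):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def adjoint(A):
    return [[A[j][i].conjugate() for j in range(len(A))] for i in range(len(A[0]))]

# E = projection onto the first coordinate, F = identity, and U maps e1 to e2:
# U is a partial isometry with U*U = E and UU* <= F, witnessing E << F.
E = [[1, 0], [0, 0]]
U = [[0, 0], [1, 0]]

assert matmul(adjoint(U), U) == E            # U*U = E
P = matmul(U, adjoint(U))                    # UU* = projection onto 2nd coordinate
assert P == [[0, 0], [0, 1]]                 # a subprojection of the identity F
```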
A proof similar to the set-theoretic argument can be sketched as follows. Colloquially, N « M means that N can be isometrically embedded in M. So {\displaystyle M=M_{0}\supset N_{0}} where N0 is an isometric copy of N in M. By assumption, N, and therefore N0, contains an isometric copy M1 of M. Therefore, one can write
{\displaystyle M=M_{0}\supset N_{0}\supset M_{1}.}
By induction,
{\displaystyle M=M_{0}\supset N_{0}\supset M_{1}\supset N_{1}\supset M_{2}\supset N_{2}\supset \cdots .}
It is clear that
{\displaystyle R=\cap _{i\geq 0}M_{i}=\cap _{i\geq 0}N_{i}.}
Let
M
⊖
N
=
d
e
f
M
∩
(
N
)
⊥
.
{\displaystyle M\ominus N{\stackrel {\mathrm {def} }{=}}M\cap (N)^{\perp }.}
So
{\displaystyle M=\oplus _{i\geq 0}(M_{i}\ominus N_{i})\quad \oplus \quad \oplus _{j\geq 0}(N_{j}\ominus M_{j+1})\quad \oplus R}
and
{\displaystyle N_{0}=\oplus _{i\geq 1}(M_{i}\ominus N_{i})\quad \oplus \quad \oplus _{j\geq 0}(N_{j}\ominus M_{j+1})\quad \oplus R.}
Notice
{\displaystyle M_{i}\ominus N_{i}\sim M\ominus N\quad {\mbox{for all}}\quad i.}
The theorem now follows from the countable additivity of ~.
== Representations of C*-algebras ==
There is also an analog of Schröder–Bernstein for representations of C*-algebras. If A is a C*-algebra, a representation of A is a *-homomorphism φ from A into L(H), the bounded operators on some Hilbert space H.
If there exists a projection P in L(H) where P φ(a) = φ(a) P for every a in A, then a subrepresentation σ of φ can be defined in a natural way: σ(a) is φ(a) restricted to the range of P. So φ then can be expressed as a direct sum of two subrepresentations φ = φ' ⊕ σ.
Two representations φ1 and φ2, on H1 and H2 respectively, are said to be unitarily equivalent if there exists a unitary operator U: H2 → H1 such that φ1(a)U = Uφ2(a), for every a.
In this setting, the Schröder–Bernstein theorem reads:
If two representations ρ and σ, on Hilbert spaces H and G respectively, are each unitarily equivalent to a subrepresentation of the other, then they are unitarily equivalent.
A proof that resembles the previous argument can be outlined. The assumption implies that there exist surjective partial isometries from H to G and from G to H. Fix two such partial isometries for the argument. One has
{\displaystyle \rho =\rho _{1}\simeq \rho _{1}'\oplus \sigma _{1}\quad {\mbox{where}}\quad \sigma _{1}\simeq \sigma .}
In turn,
{\displaystyle \rho _{1}\simeq \rho _{1}'\oplus (\sigma _{1}'\oplus \rho _{2})\quad {\mbox{where}}\quad \rho _{2}\simeq \rho .}
By induction,
{\displaystyle \rho _{1}\simeq \rho _{1}'\oplus \sigma _{1}'\oplus \rho _{2}'\oplus \sigma _{2}'\cdots \simeq (\oplus _{i\geq 1}\rho _{i}')\oplus (\oplus _{i\geq 1}\sigma _{i}'),}
and
{\displaystyle \sigma _{1}\simeq \sigma _{1}'\oplus \rho _{2}'\oplus \sigma _{2}'\cdots \simeq (\oplus _{i\geq 2}\rho _{i}')\oplus (\oplus _{i\geq 1}\sigma _{i}').}
Now each additional summand in the direct sum expression is obtained using one of the two fixed partial isometries, so
{\displaystyle \rho _{i}'\simeq \rho _{j}'\quad {\mbox{and}}\quad \sigma _{i}'\simeq \sigma _{j}'\quad {\mbox{for all}}\quad i,j\;.}
This proves the theorem.
== See also ==
Schröder–Bernstein theorem for measurable spaces
Schröder–Bernstein property
== References ==
B. Blackadar, Operator Algebras, Springer, 2006.
In functional analysis, a state of an operator system is a positive linear functional of norm 1. States in functional analysis generalize the notion of density matrices in quantum mechanics, which represent quantum states, both mixed states and pure states. Density matrices in turn generalize state vectors, which only represent pure states. For M an operator system in a C*-algebra A with identity, the set of all states of M, sometimes denoted by S(M), is convex, weak-* closed in the Banach dual space M*. Thus the set of all states of M with the weak-* topology forms a compact Hausdorff space, known as the state space of M.
In the C*-algebraic formulation of quantum mechanics, states in this previous sense correspond to physical states, i.e. mappings from physical observables (self-adjoint elements of the C*-algebra) to their expected measurement outcome (real number).
== Jordan decomposition ==
States can be viewed as noncommutative generalizations of probability measures. By Gelfand representation, every commutative C*-algebra A is of the form C0(X) for some locally compact Hausdorff X. In this case, S(A) consists of positive Radon measures on X, and the pure states are the evaluation functionals on X.
More generally, the GNS construction shows that every state is, after choosing a suitable representation, a vector state.
A bounded linear functional on a C*-algebra A is said to be self-adjoint if it is real-valued on the self-adjoint elements of A. Self-adjoint functionals are noncommutative analogues of signed measures.
The Jordan decomposition in measure theory says that every signed measure can be expressed as the difference of two positive measures supported on disjoint sets. This can be extended to the noncommutative setting.
It follows from the above decomposition that A* is the linear span of states.
== Some important classes of states ==
=== Pure states ===
By the Krein-Milman theorem, the state space of M has extreme points. The extreme points of the state space are termed pure states and other states are known as mixed states.
=== Vector states ===
For a Hilbert space H and a vector x in H, the formula ωx(T) := ⟨Tx,x⟩ (for T in B(H)) defines a positive linear functional on B(H). Since ωx(1)=||x||2, ωx is a state if ||x||=1. If A is a C*-subalgebra of B(H) and M an operator system in A, then the restriction of ωx to M defines a positive linear functional on M. The states of M that arise in this manner, from unit vectors in H, are termed vector states of M.
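A vector state can be computed concretely for operators on a finite-dimensional Hilbert space; a minimal pure-Python sketch (the vector and operator below are illustrative choices, not from the article):

```python
def apply(T, x):
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

def inner(u, v):
    # <u, v>, linear in the first argument
    return sum(a * b.conjugate() for a, b in zip(u, v))

def omega(T, x):
    # the vector state omega_x(T) = <Tx, x>
    return inner(apply(T, x), x)

x = [3 / 5, 4j / 5]                    # a unit vector in C^2
I = [[1, 0], [0, 1]]
T = [[2, 1j], [-1j, 3]]                # a self-adjoint operator, for illustration

assert abs(omega(I, x) - 1) < 1e-12    # omega_x(1) = ||x||^2 = 1, so omega_x is a state
assert abs(omega(T, x).imag) < 1e-12   # self-adjoint T gives a real expectation value
```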
=== Faithful states ===
A state {\displaystyle \tau } is faithful if it is injective on the positive elements, that is, {\displaystyle \tau (a^{*}a)=0} implies {\displaystyle a=0}.
=== Normal states ===
A state {\displaystyle \tau } is called normal if, for every monotone increasing net {\displaystyle H_{\alpha }} of operators with least upper bound {\displaystyle H}, {\displaystyle \tau (H_{\alpha })} converges to {\displaystyle \tau (H)}.
=== Tracial states ===
A tracial state is a state {\displaystyle \tau } such that {\displaystyle \tau (AB)=\tau (BA)}.
For any separable C*-algebra, the set of tracial states is a Choquet simplex.
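The normalized trace on the n×n matrices is the basic example of a tracial state; a minimal pure-Python sketch (the matrices are arbitrary illustrations):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tau(A):
    # the normalized trace: tau(1) = 1, so tau is a state on the n x n matrices
    return sum(A[i][i] for i in range(len(A))) / len(A)

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
assert tau(matmul(A, B)) == tau(matmul(B, A))   # the tracial property tau(AB) = tau(BA)
assert tau([[1, 0], [0, 1]]) == 1               # normalization
```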
=== Factorial states ===
A factorial state of a C*-algebra A is a state such that the commutant of the corresponding GNS representation of A is a factor.
== See also ==
Quantum state
Gelfand–Naimark–Segal construction
Quantum mechanics
Density matrix
== References ==
Lin, H. (2001), An Introduction to the Classification of Amenable C*-algebras, World Scientific
In mathematics, the multiplier algebra, denoted by M(A), of a C*-algebra A is a unital C*-algebra that is the largest unital C*-algebra that contains A as an ideal in a "non-degenerate" way. It is the noncommutative generalization of Stone–Čech compactification. Multiplier algebras were introduced by Busby (1968).
For example, if A is the C*-algebra of compact operators on a separable Hilbert space, M(A) is B(H), the C*-algebra of all bounded operators on H.
== Definition ==
An ideal I in a C*-algebra B is said to be essential if I ∩ J is non-trivial for every nonzero ideal J. An ideal I is essential if and only if I⊥, the "orthogonal complement" of I in the Hilbert C*-module B, is {0}.
Let A be a C*-algebra. Its multiplier algebra M(A) is any C*-algebra satisfying the following universal property: for any C*-algebra D containing A as an ideal, there exists a unique *-homomorphism φ: D → M(A) such that φ extends the identity homomorphism on A and φ(A⊥) = {0}.
Uniqueness up to isomorphism is specified by the universal property. When A is unital, M(A) = A. It also follows from the definition that for any D containing A as an essential ideal, the multiplier algebra M(A) contains D as a C*-subalgebra.
The existence of M(A) can be shown in several ways.
A double centralizer of a C*-algebra A is a pair (L, R) of bounded linear maps on A such that aL(b) = R(a)b for all a and b in A. This implies that ||L|| = ||R||. The set of double centralizers of A can be given a C*-algebra structure. This C*-algebra contains A as an essential ideal and can be identified as the multiplier algebra M(A). For instance, if A is the compact operators K(H) on a separable Hilbert space, then each x ∈ B(H) defines a double centralizer of A simply by multiplication from the left and from the right.
Alternatively, M(A) can be obtained via representations. The following fact will be needed:
Lemma. If I is an ideal in a C*-algebra B, then any faithful nondegenerate representation π of I can be extended uniquely to B.
Now take any faithful nondegenerate representation π of A on a Hilbert space H. The above lemma, together with the universal property of the multiplier algebra, yields that M(A) is isomorphic to the idealizer of π(A) in B(H). It is immediate that M(K(H)) = B(H).
Lastly, let E be a Hilbert C*-module and B(E) (resp. K(E)) be the adjointable (resp. compact) operators on E. M(A) can be identified via a *-homomorphism of A into B(E). Something similar to the above lemma is true:
Lemma. If I is an ideal in a C*-algebra B, then any faithful nondegenerate *-homomorphism π of I into B(E) can be extended uniquely to B.
Consequently, if π is a faithful nondegenerate *-homomorphism of A into B(E), then M(A) is isomorphic to the idealizer of π(A). For instance, M(K(E)) = B(E) for any Hilbert module E.
The C*-algebra A is isomorphic to the compact operators on the Hilbert module A. Therefore, M(A) is the adjointable operators on A.
== Strict topology ==
Consider the topology on M(A) specified by the seminorms {la, ra}a ∈ A, where
{\displaystyle l_{a}(x)=\|ax\|,\;r_{a}(x)=\|xa\|.}
The resulting topology is called the strict topology on M(A). A is strictly dense in M(A).
When A is unital, M(A) = A, and the strict topology coincides with the norm topology. For B(H) = M(K(H)), the strict topology is the σ-strong* topology. It follows from above that B(H) is complete in the σ-strong* topology.
== Commutative case ==
Let X be a locally compact Hausdorff space, A = C0(X), the commutative C*-algebra of continuous functions that vanish at infinity. Then M(A) is Cb(X), the continuous bounded functions on X. By the Gelfand–Naimark theorem, one has the isomorphism of C*-algebras
C
b
(
X
)
≃
C
(
Y
)
{\displaystyle C_{b}(X)\simeq C(Y)}
where Y is the spectrum of Cb(X). Y is in fact homeomorphic to the Stone–Čech compactification βX of X.
== Corona algebra ==
The corona or corona algebra of A is the quotient M(A)/A.
For example, the corona algebra of the algebra of compact operators on a Hilbert space is the Calkin algebra.
The corona algebra is a noncommutative analogue of the corona set of a topological space.
== References ==
B. Blackadar, K-Theory for Operator Algebras, MSRI Publications, 1986.
Busby, Robert C. (1968), "Double centralizers and extensions of C*-algebras" (PDF), Transactions of the American Mathematical Society, 132 (1): 79–99, doi:10.2307/1994883, ISSN 0002-9947, JSTOR 1994883, MR 0225175, S2CID 54047557, archived from the original (PDF) on 2020-02-20
Pedersen, Gert K. (2001) [1994], "Multipliers of C*-algebras", Encyclopedia of Mathematics, EMS Press
In mathematics, specifically in functional analysis, a Banach algebra, A, is amenable if all bounded derivations from A into dual Banach A-bimodules are inner (that is, of the form {\displaystyle a\mapsto a.x-x.a} for some {\displaystyle x} in the dual module).
An equivalent characterization is that A is amenable if and only if it has a virtual diagonal.
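That a map of the form a ↦ a.x − x.a really is a derivation, i.e. satisfies the Leibniz rule D(ab) = a.D(b) + D(a).b, can be checked mechanically; a minimal pure-Python sketch with matrices (the elements a, b, x are arbitrary illustrations):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[p + q for p, q in zip(ra, rb)] for ra, rb in zip(A, B)]

def matsub(A, B):
    return [[p - q for p, q in zip(ra, rb)] for ra, rb in zip(A, B)]

x = [[1, 0], [0, 2]]          # an element implementing the inner derivation

def D(a):
    return matsub(matmul(a, x), matmul(x, a))   # a -> a.x - x.a

a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 0]]
# Leibniz rule D(ab) = a.D(b) + D(a).b, the defining property of a derivation
assert D(matmul(a, b)) == matadd(matmul(a, D(b)), matmul(D(a), b))
```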
== Examples ==
If A is a group algebra {\displaystyle L^{1}(G)} for some locally compact group G then A is amenable if and only if G is amenable.
If A is a C*-algebra then A is amenable if and only if it is nuclear.
If A is a uniform algebra on a compact Hausdorff space then A is amenable if and only if it is trivial (i.e. the algebra C(X) of all continuous complex functions on X).
If A is amenable and there is a continuous algebra homomorphism {\displaystyle \theta } from A to another Banach algebra, then the closure {\displaystyle {\overline {\theta (A)}}} of the image is amenable.
== References ==
F.F. Bonsall, J. Duncan, "Complete normed algebras", Springer-Verlag (1973).
H.G. Dales, "Banach algebras and automatic continuity", Oxford University Press (2001).
B.E. Johnson, "Cohomology in Banach algebras", Memoirs of the AMS 127 (1972).
J.-P. Pier, "Amenable Banach algebras", Longman Scientific and Technical (1988).
Volker Runde, "Amenable Banach Algebras. A Panorama", Springer Verlag (2020).
In functional analysis, a Banach function algebra on a compact Hausdorff space X is a unital subalgebra, A, of the commutative C*-algebra C(X) of all continuous, complex-valued functions on X, together with a norm on A that makes it a Banach algebra.
A function algebra is said to vanish at a point p if f(p) = 0 for all {\displaystyle f\in A}. A function algebra separates points if for each distinct pair of points {\displaystyle p,q\in X}, there is a function {\displaystyle f\in A} such that {\displaystyle f(p)\neq f(q)}.
For every {\displaystyle x\in X} define {\displaystyle \varepsilon _{x}(f)=f(x)} for {\displaystyle f\in A}. Then {\displaystyle \varepsilon _{x}} is a homomorphism (character) on {\displaystyle A}, non-zero if {\displaystyle A} does not vanish at {\displaystyle x}.
Theorem: A Banach function algebra is semisimple (that is, its Jacobson radical is equal to zero), and each commutative, unital, semisimple Banach algebra is isomorphic (via the Gelfand transform) to a Banach function algebra on its character space (the space of algebra homomorphisms from A into the complex numbers, given the relative weak* topology).
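The evaluation functionals are characters in the concrete sense that they are linear and multiplicative; a tiny Python sketch (the functions f and g and the point x = 2.0 are arbitrary illustrations):

```python
def eps(x):
    # the evaluation functional eps_x(f) = f(x)
    return lambda f: f(x)

f = lambda t: t * t        # illustrative elements of a function algebra
g = lambda t: 1 + t
e = eps(2.0)

assert e(lambda t: f(t) * g(t)) == e(f) * e(g)   # multiplicative: a character
assert e(lambda t: f(t) + g(t)) == e(f) + e(g)   # additive (similarly homogeneous)
```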
If the norm on {\displaystyle A} is the uniform norm (or sup-norm) on {\displaystyle X}, then {\displaystyle A} is called a uniform algebra. Uniform algebras are an important special case of Banach function algebras.
== References ==
Andrew Browder (1969) Introduction to Function Algebras, W. A. Benjamin
H.G. Dales (2000) Banach Algebras and Automatic Continuity, London Mathematical Society Monographs 24, Clarendon Press ISBN 0-19-850013-0
Graham Allan & H. Garth Dales (2011) Introduction to Banach Spaces and Algebras, Oxford University Press ISBN 978-0-19-920654-4
In the mathematical field of spectral graph theory, a Ramanujan graph is a regular graph whose spectral gap is almost as large as possible (see extremal graph theory). Such graphs are excellent spectral expanders. As Murty's survey paper notes, Ramanujan graphs "fuse diverse branches of pure mathematics, namely, number theory, representation theory, and algebraic geometry".
These graphs are indirectly named after Srinivasa Ramanujan; their name comes from the Ramanujan–Petersson conjecture, which was used in a construction of some of these graphs.
== Definition ==
Let {\displaystyle G} be a connected {\displaystyle d}-regular graph with {\displaystyle n} vertices, and let {\displaystyle \lambda _{1}\geq \lambda _{2}\geq \cdots \geq \lambda _{n}} be the eigenvalues of the adjacency matrix of {\displaystyle G} (or the spectrum of {\displaystyle G}). Because {\displaystyle G} is connected and {\displaystyle d}-regular, its eigenvalues satisfy {\displaystyle d=\lambda _{1}>\lambda _{2}\geq \cdots \geq \lambda _{n}\geq -d}.
Define {\displaystyle \lambda (G)=\max _{i\neq 1}|\lambda _{i}|=\max(|\lambda _{2}|,\ldots ,|\lambda _{n}|)}. A connected {\displaystyle d}-regular graph {\displaystyle G} is a Ramanujan graph if {\displaystyle \lambda (G)\leq 2{\sqrt {d-1}}}.
Many sources use an alternative definition {\displaystyle \lambda '(G)=\max _{|\lambda _{i}|<d}|\lambda _{i}|} (whenever there exists {\displaystyle \lambda _{i}} with {\displaystyle |\lambda _{i}|<d}) to define Ramanujan graphs. In other words, we allow {\displaystyle -d} in addition to the "small" eigenvalues. Since {\displaystyle \lambda _{n}=-d} if and only if the graph is bipartite, we will refer to the graphs that satisfy this alternative definition but not the first definition as bipartite Ramanujan graphs. If {\displaystyle G} is a Ramanujan graph, then {\displaystyle G\times K_{2}} is a bipartite Ramanujan graph, so the existence of Ramanujan graphs is stronger.
As observed by Toshikazu Sunada, a regular graph is Ramanujan if and only if its Ihara zeta function satisfies an analog of the Riemann hypothesis.
== Examples and constructions ==
=== Explicit examples ===
The complete graph {\displaystyle K_{d+1}} has spectrum {\displaystyle d,-1,-1,\dots ,-1}, and thus {\displaystyle \lambda (K_{d+1})=1} and the graph is a Ramanujan graph for every {\displaystyle d>1}. The complete bipartite graph {\displaystyle K_{d,d}} has spectrum {\displaystyle d,0,0,\dots ,0,-d} and hence is a bipartite Ramanujan graph for every {\displaystyle d}.
The Petersen graph has spectrum {\displaystyle 3,1,1,1,1,1,-2,-2,-2,-2}, so it is a 3-regular Ramanujan graph. The icosahedral graph is a 5-regular Ramanujan graph.
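The Petersen claim can be checked without an eigensolver: the Petersen graph is strongly regular, so its adjacency matrix satisfies a quadratic identity that pins down the non-trivial eigenvalues. A self-contained sketch (the vertex labelling is an assumption of the sketch):

```python
def petersen():
    A = [[0] * 10 for _ in range(10)]
    def link(u, v): A[u][v] = A[v][u] = 1
    for i in range(5):
        link(i, (i + 1) % 5)            # outer 5-cycle
        link(i, i + 5)                  # spokes
        link(5 + i, 5 + (i + 2) % 5)    # inner pentagram
    return A

A = petersen()
# Strong regularity gives A^2 + A - 2I = J (the all-ones matrix).  Hence every
# eigenvalue other than d = 3 satisfies t^2 + t - 2 = 0, i.e. t in {1, -2},
# so lambda(G) = 2 <= 2*sqrt(d - 1) = 2*sqrt(2): a Ramanujan graph.
A2 = [[sum(A[i][k] * A[k][j] for k in range(10)) for j in range(10)] for i in range(10)]
for i in range(10):
    for j in range(10):
        assert A2[i][j] + A[i][j] - 2 * (i == j) == 1
```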
A Paley graph of order {\displaystyle q} is {\displaystyle {\frac {q-1}{2}}}-regular with all other eigenvalues being {\displaystyle {\frac {-1\pm {\sqrt {q}}}{2}}}, making Paley graphs an infinite family of Ramanujan graphs.
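A Paley graph is small enough to verify directly; the sketch below builds the Paley graph of order 13 and checks the strong-regularity identity that forces the stated eigenvalues (the parameters (13, 6, 2, 3) are the standard strongly-regular parameters of this graph, an assumption of the sketch):

```python
q = 13
residues = {(x * x) % q for x in range(1, q)}   # nonzero quadratic residues mod q
A = [[1 if (i - j) % q in residues else 0 for j in range(q)] for i in range(q)]

assert all(sum(row) == (q - 1) // 2 for row in A)   # (q-1)/2 = 6-regular

# Strong regularity: A^2 = k I + lam A + mu (J - I - A) with (k, lam, mu) = (6, 2, 3),
# so eigenvalues other than k solve t^2 + t - 3 = 0, i.e. t = (-1 +- sqrt(13)) / 2.
k, lam, mu = 6, 2, 3
for i in range(q):
    for j in range(q):
        a2 = sum(A[i][t] * A[t][j] for t in range(q))
        assert a2 == k * (i == j) + lam * A[i][j] + mu * (1 - (i == j) - A[i][j])
```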
More generally, let {\displaystyle f(x)} be a degree 2 or 3 polynomial over {\displaystyle \mathbb {F} _{q}}. Let {\displaystyle S=\{f(x)\,:\,x\in \mathbb {F} _{q}\}} be the image of {\displaystyle f(x)} as a multiset, and suppose {\displaystyle S=-S}. Then the Cayley graph for {\displaystyle \mathbb {F} _{q}} with generators from {\displaystyle S} is a Ramanujan graph.
Mathematicians are often interested in constructing infinite families of {\displaystyle d}-regular Ramanujan graphs for every fixed {\displaystyle d}. Such families are useful in applications.
=== Algebraic constructions ===
Several explicit constructions of Ramanujan graphs arise as Cayley graphs and are algebraic in nature. See Winnie Li's survey on Ramanujan's conjecture and other aspects of number theory relevant to these results.
Lubotzky, Phillips and Sarnak and independently Margulis showed how to construct an infinite family of {\displaystyle (p+1)}-regular Ramanujan graphs, whenever {\displaystyle p} is a prime number and {\displaystyle p\equiv 1{\pmod {4}}}. Both proofs use the Ramanujan conjecture, which led to the name of Ramanujan graphs. Besides being Ramanujan graphs, these constructions satisfy some other properties; for example, their girth is {\displaystyle \Omega (\log _{p}(n))} where {\displaystyle n} is the number of nodes.
Let us sketch the Lubotzky-Phillips-Sarnak construction. Let {\displaystyle q\equiv 1{\bmod {4}}} be a prime not equal to {\displaystyle p}. By Jacobi's four-square theorem, there are {\displaystyle p+1} solutions to the equation {\displaystyle p=a_{0}^{2}+a_{1}^{2}+a_{2}^{2}+a_{3}^{2}} where {\displaystyle a_{0}>0} is odd and {\displaystyle a_{1},a_{2},a_{3}} are even. To each such solution associate the {\displaystyle \operatorname {PGL} (2,\mathbb {Z} /q\mathbb {Z} )} matrix
{\displaystyle {\tilde {\alpha }}={\begin{pmatrix}a_{0}+ia_{1}&a_{2}+ia_{3}\\-a_{2}+ia_{3}&a_{0}-ia_{1}\end{pmatrix}},\qquad i{\text{ a fixed solution to }}i^{2}=-1{\bmod {q}}.}
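The solution count from Jacobi's four-square theorem is easy to verify by brute force for small primes; a minimal Python sketch:

```python
from itertools import product

def count_lps_solutions(p):
    """Count (a0,a1,a2,a3) with p = a0^2+a1^2+a2^2+a3^2, a0 > 0 odd, a1,a2,a3 even."""
    r = int(p ** 0.5) + 1
    count = 0
    for a0 in range(1, r + 1, 2):                       # a0 > 0 and odd
        for a1, a2, a3 in product(range(-r, r + 1), repeat=3):
            if (a1 % 2 == a2 % 2 == a3 % 2 == 0
                    and a0 * a0 + a1 * a1 + a2 * a2 + a3 * a3 == p):
                count += 1
    return count

# Jacobi's theorem gives exactly p + 1 such solutions for a prime p = 1 (mod 4):
for p in (5, 13, 17):
    assert count_lps_solutions(p) == p + 1
```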
If {\displaystyle p} is not a quadratic residue modulo {\displaystyle q}, let {\displaystyle X^{p,q}} be the Cayley graph of {\displaystyle \operatorname {PGL} (2,\mathbb {Z} /q\mathbb {Z} )} with these {\displaystyle p+1} generators, and otherwise, let {\displaystyle X^{p,q}} be the Cayley graph of {\displaystyle \operatorname {PSL} (2,\mathbb {Z} /q\mathbb {Z} )} with the same generators. Then {\displaystyle X^{p,q}} is a {\displaystyle (p+1)}-regular graph on {\displaystyle n=q(q^{2}-1)} or {\displaystyle q(q^{2}-1)/2} vertices depending on whether or not {\displaystyle p} is a quadratic residue modulo {\displaystyle q}. It is proved that {\displaystyle X^{p,q}} is a Ramanujan graph.
Morgenstern later extended the construction of Lubotzky, Phillips and Sarnak. His extended construction holds whenever {\displaystyle p} is a prime power.
Arnold Pizer proved that the supersingular isogeny graphs are Ramanujan, although they tend to have lower girth than the graphs of Lubotzky, Phillips, and Sarnak. Like the graphs of Lubotzky, Phillips, and Sarnak, the degrees of these graphs are always a prime number plus one.
=== Probabilistic examples ===
Adam Marcus, Daniel Spielman and Nikhil Srivastava proved the existence of infinitely many {\displaystyle d}-regular bipartite Ramanujan graphs for any {\displaystyle d\geq 3}. Later they proved that there exist bipartite Ramanujan graphs of every degree and every number of vertices. Michael B. Cohen showed how to construct these graphs in polynomial time.
The initial work followed an approach of Bilu and Linial. They considered an operation called a 2-lift that takes a {\displaystyle d}-regular graph {\displaystyle G} with {\displaystyle n} vertices and a sign on each edge, and produces a new {\displaystyle d}-regular graph {\displaystyle G'} on {\displaystyle 2n} vertices. Bilu and Linial conjectured that there always exists a signing so that every new eigenvalue of {\displaystyle G'} has magnitude at most {\displaystyle 2{\sqrt {d-1}}}. This conjecture guarantees the existence of Ramanujan graphs with degree {\displaystyle d} and {\displaystyle 2^{k}(d+1)} vertices for any {\displaystyle k}: simply start with the complete graph {\displaystyle K_{d+1}}, and iteratively take 2-lifts that retain the Ramanujan property.
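The 2-lift operation itself is simple to make concrete; the sketch below lifts a signed K4 (an illustrative choice) and checks only the combinatorial properties, not the eigenvalue bound:

```python
def two_lift(n, signed_edges):
    """2-lift of a graph on vertices 0..n-1, given edges (u, v, sign) with sign +-1."""
    adj = [set() for _ in range(2 * n)]
    for u, v, s in signed_edges:
        if s == 1:           # positive edge: both copies stay on the same level
            pairs = [(u, v), (u + n, v + n)]
        else:                # negative edge: the two copies cross levels
            pairs = [(u, v + n), (u + n, v)]
        for a, b in pairs:
            adj[a].add(b)
            adj[b].add(a)
    return adj

# K4 with one edge negated; the lift is a 3-regular graph on 2n = 8 vertices.
edges = [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 1), (1, 3, 1), (2, 3, -1)]
lift = two_lift(4, edges)
assert len(lift) == 8 and all(len(nbrs) == 3 for nbrs in lift)
```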
Using the method of interlacing polynomials, Marcus, Spielman, and Srivastava proved that the Bilu-Linial conjecture holds when {\displaystyle G} is already a bipartite Ramanujan graph, which is enough to conclude the existence result. The sequel proved the stronger statement that a sum of {\displaystyle d} random bipartite matchings is Ramanujan with non-vanishing probability. Hall, Puder and Sawin extended the original work of Marcus, Spielman and Srivastava to r-lifts.
It is still an open problem whether there are infinitely many {\displaystyle d}-regular (non-bipartite) Ramanujan graphs for any {\displaystyle d\geq 3}. In particular, the problem is open for {\displaystyle d=7}, the smallest case for which {\displaystyle d-1} is not a prime power and hence is not covered by Morgenstern's construction.
== Ramanujan graphs as expander graphs ==
The constant {\displaystyle 2{\sqrt {d-1}}} in the definition of Ramanujan graphs is asymptotically sharp. More precisely, the Alon-Boppana bound states that for every {\displaystyle d} and {\displaystyle \epsilon >0}, there exists {\displaystyle n} such that all {\displaystyle d}-regular graphs {\displaystyle G} with at least {\displaystyle n} vertices satisfy {\displaystyle \lambda (G)>2{\sqrt {d-1}}-\epsilon }. This means that Ramanujan graphs are essentially the best possible expander graphs.
Due to achieving the tight bound on
λ
(
G
)
{\displaystyle \lambda (G)}
, the expander mixing lemma gives excellent bounds on the uniformity of the distribution of the edges in Ramanujan graphs, and any random walks on the graphs has a logarithmic mixing time (in terms of the number of vertices): in other words, the random walk converges to the (uniform) stationary distribution very quickly. Therefore, the diameter of Ramanujan graphs are also bounded logarithmically in terms of the number of vertices.
=== Random graphs ===
Confirming a conjecture of Alon, Friedman showed that many families of random graphs are weakly Ramanujan. This means that for every {\displaystyle d} and {\displaystyle \epsilon >0} and for sufficiently large {\displaystyle n}, a random {\displaystyle d}-regular {\displaystyle n}-vertex graph {\displaystyle G} satisfies {\displaystyle \lambda (G)<2{\sqrt {d-1}}+\epsilon } with high probability. While this result shows that random graphs are close to being Ramanujan, it cannot be used to prove the existence of Ramanujan graphs. It is conjectured, though, that random graphs are Ramanujan with substantial probability (roughly 52%). In addition to direct numerical evidence, there is some theoretical support for this conjecture: the spectral gap of a {\displaystyle d}-regular graph seems to behave according to a Tracy-Widom distribution from random matrix theory, which would predict the same asymptotic.
In 2024, a preprint by Jiaoyang Huang, Theo McKenzie and Horng-Tzer Yau proved that the fraction of random {\displaystyle d}-regular graphs satisfying {\displaystyle \lambda (G)\leq 2{\sqrt {d-1}}}, i.e. hitting the Alon-Boppana bound, is approximately 69%, by proving that edge universality holds, that is, that the extreme eigenvalues follow a Tracy-Widom distribution associated with the Gaussian Orthogonal Ensemble.
== Applications of Ramanujan graphs ==
Expander graphs have many applications to computer science, number theory, and group theory; see e.g. Lubotzky's survey on applications to pure and applied mathematics and Hoory, Linial, and Wigderson's survey, which focuses on computer science. Ramanujan graphs are in some sense the best expanders, and so they are especially useful in applications where expanders are needed. Importantly, the Lubotzky, Phillips, and Sarnak graphs can be traversed extremely quickly in practice, so they are practical for applications.
Some example applications include
In an application to fast solvers for Laplacian linear systems, Lee, Peng, and Spielman relied on the existence of bipartite Ramanujan graphs of every degree in order to quickly approximate the complete graph.
Lubetzky and Peres proved that the simple random walk exhibits cutoff phenomenon on all Ramanujan graphs. This means that the random walk undergoes a phase transition from being completely unmixed to completely mixed in the total variation norm. This result strongly relies on the graph being Ramanujan, not just an expander—some good expanders are known to not exhibit cutoff.
Ramanujan graphs of Pizer have been proposed as the basis for post-quantum elliptic-curve cryptography.
Ramanujan graphs can be used to construct expander codes, which are good error correcting codes.
== See also ==
Expander graph
Alon–Boppana bound
Expander mixing lemma
Spectral graph theory
== References ==
== Further reading ==
Giuliana Davidoff; Peter Sarnak; Alain Valette (2003). Elementary number theory, group theory and Ramanujan graphs. LMS student texts. Vol. 55. Cambridge University Press. ISBN 0-521-53143-8. OCLC 50253269.
Sunada, Toshikazu (1986). "L-functions in geometry and some applications". In Shiohama, Katsuhiro; Sakai, Takashi; Sunada, Toshikazu (eds.). Curvature and Topology of Riemannian Manifolds: Proceedings of the 17th International Taniguchi Symposium held in Katata, Japan, August 26–31, 1985. Lecture Notes in Mathematics. Vol. 1201. Berlin: Springer. pp. 266–284. doi:10.1007/BFb0075662. ISBN 978-3-540-16770-9. MR 0859591.
== External links ==
Survey paper by M. Ram Murty
Survey paper by Alexander Lubotzky
Survey paper by Hoory, Linial, and Wigderson
In functional analysis, compact operators are linear operators on Banach spaces that map bounded sets to relatively compact sets. In the case of a Hilbert space H, the compact operators are the closure of the finite rank operators in the uniform operator topology. In general, operators on infinite-dimensional spaces feature properties that do not appear in the finite-dimensional case, i.e. for matrices. The compact operators are notable in that they share as much similarity with matrices as one can expect from a general operator. In particular, the spectral properties of compact operators resemble those of square matrices.
This article first summarizes the corresponding results from the matrix case before discussing the spectral properties of compact operators. The reader will see that most statements transfer verbatim from the matrix case.
The spectral theory of compact operators was first developed by F. Riesz.
== Spectral theory of matrices ==
The classical result for square matrices is the Jordan canonical form, which states the following:
Theorem. Let A be an n × n complex matrix, i.e. A a linear operator acting on Cn. If λ1...λk are the distinct eigenvalues of A, then Cn can be decomposed into the invariant subspaces of A
{\displaystyle \mathbf {C} ^{n}=\bigoplus _{i=1}^{k}Y_{i}.}
The subspace Yi = Ker(λi − A)m, where m is the least integer such that Ker(λi − A)m = Ker(λi − A)m+1. Furthermore, the poles of the resolvent function ζ → (ζ − A)−1 coincide with the set of eigenvalues of A.
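The stabilization of the kernels Ker(λ − A)m can be seen directly on a single Jordan block; the following minimal pure-Python sketch (our own example, using exact rational arithmetic) shows that for a 2 × 2 block, (λI − A)1 ≠ 0 while (λI − A)2 = 0, so the kernel is strictly increasing until it equals C2 at m = 2:

```python
from fractions import Fraction

lam = Fraction(5)
A = [[lam, 1], [0, lam]]          # a single 2x2 Jordan block with eigenvalue 5

def matmul(M, N):
    # exact 2x2 matrix product
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# N1 = λI - A = [[0, -1], [0, 0]] has rank 1, so Ker(λ - A) is one-dimensional.
N1 = [[(lam - A[i][j]) if i == j else -A[i][j] for j in range(2)]
      for i in range(2)]
N2 = matmul(N1, N1)

assert N1 != [[0, 0], [0, 0]]     # (λI - A)^1 ≠ 0: kernel is a proper subspace
assert N2 == [[0, 0], [0, 0]]     # (λI - A)^2 = 0: kernel is all of C^2, m = 2
```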
== Compact operators ==
=== Statement ===
=== Proof ===
Preliminary Lemmas
The theorem claims several properties of the operator λ − C where λ ≠ 0.
Without loss of generality, it can be assumed that λ = 1.
Therefore, we consider I − C, where I is the identity operator. The proof will require two lemmas.
This fact will be used repeatedly in the argument leading to the theorem.
Notice that when X is a Hilbert space, the lemma is trivial.
Concluding the Proof
=== Invariant subspaces ===
As in the matrix case, the above spectral properties lead to a decomposition of X into invariant subspaces of a compact operator C. Let λ ≠ 0 be an eigenvalue of C; so λ is an isolated point of σ(C). Using the holomorphic functional calculus, define the Riesz projection E(λ) by
{\displaystyle E(\lambda )={1 \over 2\pi i}\int _{\gamma }(\xi -C)^{-1}d\xi }
where γ is a Jordan contour that encloses only λ from σ(C). Let Y be the subspace Y = E(λ)X. C restricted to Y is a compact invertible operator with spectrum {λ}, therefore Y is finite-dimensional. Let ν be such that Ker(λ − C)ν = Ker(λ − C)ν + 1. By inspecting the Jordan form, we see that (λ − C)ν = 0 while (λ − C)ν − 1 ≠ 0. The Laurent series of the resolvent mapping centered at λ shows that
{\displaystyle E(\lambda )(\lambda -C)^{\nu }=(\lambda -C)^{\nu }E(\lambda )=0.}
So Y = Ker(λ − C)ν.
The E(λ) satisfy E(λ)2 = E(λ), so they are indeed projection operators, or spectral projections. By definition they commute with C. Moreover, E(λ)E(μ) = 0 if λ ≠ μ.
Let X(λ) = E(λ)X if λ is a non-zero eigenvalue. Thus X(λ) is a finite-dimensional invariant subspace, the generalised eigenspace of λ.
Let X(0) be the intersection of the kernels of the E(λ). Thus X(0) is a closed subspace invariant under C and the restriction of C to X(0) is a compact operator with spectrum {0}.
=== Operators with compact power ===
If B is an operator on a Banach space X such that Bn is compact for some n, then the theorem proven above also holds for B.
== See also ==
Spectral theorem
Spectral theory of normal C*-algebras
== References ==
John B. Conway, A course in functional analysis, Graduate Texts in Mathematics 96, Springer 1990. ISBN 0-387-97245-5
In general topology and related areas of mathematics, the disjoint union (also called the direct sum, free union, free sum, topological sum, or coproduct) of a family of topological spaces is a space formed by equipping the disjoint union of the underlying sets with a natural topology called the disjoint union topology. Roughly speaking, in the disjoint union the given spaces are considered as part of a single new space where each looks as it would alone and they are isolated from each other.
The name coproduct originates from the fact that the disjoint union is the categorical dual of the product space construction.
== Definition ==
Let {Xi : i ∈ I} be a family of topological spaces indexed by I. Let
{\displaystyle X=\coprod _{i}X_{i}}
be the disjoint union of the underlying sets. For each i in I, let
{\displaystyle \varphi _{i}:X_{i}\to X\,}
be the canonical injection (defined by
{\displaystyle \varphi _{i}(x)=(x,i)}
). The disjoint union topology on X is defined as the finest topology on X for which all the canonical injections
{\displaystyle \varphi _{i}}
are continuous (i.e.: it is the final topology on X induced by the canonical injections).
Explicitly, the disjoint union topology can be described as follows. A subset U of X is open in X if and only if its preimage
{\displaystyle \varphi _{i}^{-1}(U)}
is open in Xi for each i ∈ I. Yet another formulation is that a subset V of X is open relative to X iff its intersection with Xi is open relative to Xi for each i.
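The openness criterion above is easy to check mechanically for finite spaces. The following pure-Python sketch (our own illustration; the helper names are not from the article) represents each finite space as a pair (points, open sets) and tests whether a subset of the tagged disjoint union is open by pulling it back along each canonical injection:

```python
def disjoint_union(spaces):
    """spaces: dict index -> (set of points, list of open sets as frozensets)."""
    points = set((x, i) for i, (X_i, _) in spaces.items() for x in X_i)

    def is_open(U):
        # U is open iff its preimage under each canonical injection φ_i
        # (i.e. {x : (x, i) ∈ U}) is open in X_i.
        return all(frozenset(x for (x, j) in U if j == i) in opens
                   for i, (_, opens) in spaces.items())

    return points, is_open

# Sierpinski space {a, b} with opens {}, {a}, {a, b}; and a one-point space {p}.
S = ({'a', 'b'}, [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})])
P = ({'p'}, [frozenset(), frozenset({'p'})])
X, is_open = disjoint_union({0: S, 1: P})

assert is_open({('a', 0)})                # preimage {a} is open in S, {} in P
assert is_open({('a', 0), ('p', 1)})
assert not is_open({('b', 0)})            # {b} is not open in S
```

Note that each summand sits inside the union as a clopen subspace, which is exactly the "isolated from each other" description in the lead.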
== Properties ==
The disjoint union space X, together with the canonical injections, can be characterized by the following universal property: If Y is a topological space, and fi : Xi → Y is a continuous map for each i ∈ I, then there exists precisely one continuous map f : X → Y such that the following set of diagrams commute:
This shows that the disjoint union is the coproduct in the category of topological spaces. It follows from the above universal property that a map f : X → Y is continuous iff fi = f o φi is continuous for all i in I.
In addition to being continuous, the canonical injections φi : Xi → X are open and closed maps. It follows that the injections are topological embeddings so that each Xi may be canonically thought of as a subspace of X.
== Examples ==
If each Xi is homeomorphic to a fixed space A, then the disjoint union X is homeomorphic to the product space A × I where I has the discrete topology.
== Preservation of topological properties ==
Every disjoint union of discrete spaces is discrete
Separation
Every disjoint union of T0 spaces is T0
Every disjoint union of T1 spaces is T1
Every disjoint union of Hausdorff spaces is Hausdorff
Connectedness
The disjoint union of two or more nonempty topological spaces is disconnected
== See also ==
product topology, the dual construction
subspace topology and its dual quotient topology
topological union, a generalization to the case where the pieces are not disjoint
== References ==
In mathematics, the real rank of a C*-algebra is a noncommutative analogue of Lebesgue covering dimension. The notion was first introduced by Lawrence G. Brown and Gert K. Pedersen.
== Definition ==
The real rank of a unital C*-algebra A is the smallest non-negative integer n, denoted RR(A), such that for every (n + 1)-tuple (x0, x1, ... ,xn) of self-adjoint elements of A and every ε > 0, there exists an (n + 1)-tuple (y0, y1, ... ,yn)
of self-adjoint elements of A such that
{\displaystyle \sum _{i=0}^{n}y_{i}^{2}}
is invertible and
{\displaystyle \lVert \sum _{i=0}^{n}(x_{i}-y_{i})^{2}\rVert <\varepsilon }. If no such integer exists, then the real rank of A is infinite. The real rank of a non-unital C*-algebra is defined to be the real rank of its unitalization.
== Comparisons with dimension ==
If X is a locally compact Hausdorff space, then RR(C0(X)) = dim(X), where dim is the Lebesgue covering dimension of X. As a result, real rank is considered a noncommutative generalization of dimension, but real rank can behave rather differently from dimension. For example, most noncommutative tori have real rank zero, despite being a noncommutative version of the two-dimensional torus. For locally compact Hausdorff spaces, being zero-dimensional is equivalent to being totally disconnected. The analogous relationship fails for C*-algebras; while AF-algebras have real rank zero, the converse is false. Formulas that hold for dimension may not generalize to real rank. For example, Brown and Pedersen conjectured that RR(A ⊗ B) ≤ RR(A) + RR(B), since it is true that dim(X × Y) ≤ dim(X) + dim(Y). They proved a special case: if A is AF and B has real rank zero, then A ⊗ B has real rank zero. But in general their conjecture is false: there are C*-algebras A and B with real rank zero such that A ⊗ B has real rank greater than zero.
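For intuition about the equality RR(C0(X)) = dim(X) in the zero-dimensional case, consider A = C(X) for a finite space X: self-adjoint elements are real-valued functions, and any such function can be pushed off zero by an arbitrarily small perturbation, which is the real-rank-zero condition for n = 0. A minimal sketch (our own helper name, not from the article):

```python
def perturb_to_invertible(f, eps):
    """Given a real-valued (self-adjoint) function on a finite space, encoded as
    a list of values, return a nowhere-zero (invertible) function within sup-norm
    distance < eps of f."""
    return [x if x != 0 else eps / 2 for x in f]

f = [1.5, 0.0, -0.3, 0.0]       # a self-adjoint element of C(X), |X| = 4
g = perturb_to_invertible(f, 1e-3)

assert all(x != 0 for x in g)                        # g is invertible in C(X)
assert max(abs(a - b) for a, b in zip(f, g)) < 1e-3  # and within eps of f
```

When X is connected and at least one-dimensional (say X = [0, 1]), this fails: a continuous function changing sign cannot be perturbed slightly to avoid zero, matching RR(C([0, 1])) = 1.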
== Real rank zero ==
C*-algebras with real rank zero are of particular interest. By definition, a unital C*-algebra has real rank zero if and only if the invertible self-adjoint elements of A are dense in the self-adjoint elements of A. This condition is equivalent to the previously studied conditions:
(FS) The self-adjoint elements of A with finite spectrum are dense in the self-adjoint elements of A.
(HP) Every hereditary C*-subalgebra of A has an approximate identity consisting of projections.
This equivalence can be used to give many examples of C*-algebras with real rank zero including AW*-algebras, Bunce–Deddens algebras, and von Neumann algebras. More broadly, simple unital purely infinite C*-algebras have real rank zero including the Cuntz algebras and Cuntz–Krieger algebras. Since simple graph C*-algebras are either AF or purely infinite, every simple graph C*-algebra has real rank zero.
Having real rank zero is a property closed under taking direct limits, hereditary C*-subalgebras, and strong Morita equivalence. In particular, if A has real rank zero, then Mn(A), the algebra of n × n matrices over A, has real rank zero for any integer n ≥ 1.
== References ==
In mathematics, a topological space
{\displaystyle X}
is said to be a Baire space if countable unions of closed sets with empty interior also have empty interior.
According to the Baire category theorem, compact Hausdorff spaces and complete metric spaces are examples of Baire spaces.
The Baire category theorem combined with the properties of Baire spaces has numerous applications in topology, geometry, and analysis, in particular functional analysis. For more motivation and applications, see the article Baire category theorem. The current article focuses more on characterizations and basic properties of Baire spaces per se.
Bourbaki introduced the term "Baire space" in honor of René Baire, who investigated the Baire category theorem in the context of Euclidean space
{\displaystyle \mathbb {R} ^{n}}
in his 1899 thesis.
== Definition ==
The definition that follows is based on the notions of meagre (or first category) set (namely, a set that is a countable union of sets whose closure has empty interior) and nonmeagre (or second category) set (namely, a set that is not meagre). See the corresponding article for details.
A topological space
{\displaystyle X}
is called a Baire space if it satisfies any of the following equivalent conditions:
Every countable intersection of dense open sets is dense.
Every countable union of closed sets with empty interior has empty interior.
Every meagre set has empty interior.
Every nonempty open set is nonmeagre.
Every comeagre set is dense.
Whenever a countable union of closed sets has an interior point, at least one of the closed sets has an interior point.
The equivalence between these definitions is based on the associated properties of complementary subsets of
{\displaystyle X}
(that is, of a set {\displaystyle A\subseteq X} and of its complement {\displaystyle X\setminus A}) as given in the table below.
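Because every finite topological space is a Baire space (a fact noted in the Properties section), the first equivalent condition can be verified by brute force on a small example. The following pure-Python sketch (our own construction and helper names) computes closures from the open sets of a four-set topology on {1, 2, 3} and checks that every intersection of dense open sets is dense:

```python
from itertools import combinations

points = frozenset({1, 2, 3})
# A topology on {1, 2, 3}: opens are {}, {1}, {1,2}, {1,2,3}.
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), points]

def closure(A):
    # closure = intersection of all closed sets (complements of opens) containing A
    closed_sets = [points - U for U in opens]
    return frozenset.intersection(*[C for C in closed_sets if A <= C])

def is_dense(A):
    return closure(A) == points

dense_opens = [U for U in opens if is_dense(U)]
# Baire condition (finite case): any intersection of dense open sets is dense.
for r in range(1, len(dense_opens) + 1):
    for combo in combinations(dense_opens, r):
        assert is_dense(frozenset.intersection(*combo))
```

In this topology the dense open sets are {1}, {1, 2}, and {1, 2, 3}, and every intersection of them contains the point 1, whose closure is the whole space.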
== Baire category theorem ==
The Baire category theorem gives sufficient conditions for a topological space to be a Baire space.
(BCT1) Every complete pseudometric space is a Baire space. In particular, every completely metrizable topological space is a Baire space.
(BCT2) Every locally compact regular space is a Baire space. In particular, every locally compact Hausdorff space is a Baire space.
BCT1 shows that the following are Baire spaces:
The space
{\displaystyle \mathbb {R} }
of real numbers.
The space of irrational numbers, which is homeomorphic to the Baire space
{\displaystyle \omega ^{\omega }}
of set theory.
Every Polish space.
BCT2 shows that the following are Baire spaces:
Every compact Hausdorff space; for example, the Cantor set (or Cantor space).
Every manifold, even if it is not paracompact (hence not metrizable), like the long line.
Note, however, that there are plenty of spaces that are Baire spaces without satisfying the conditions of the Baire category theorem, as shown in the Examples section below.
== Properties ==
Every nonempty Baire space is nonmeagre. In terms of countable intersections of dense open sets, being a Baire space is equivalent to such intersections being dense, while being a nonmeagre space is equivalent to the weaker condition that such intersections are nonempty.
Every open subspace of a Baire space is a Baire space.
Every dense Gδ set in a Baire space is a Baire space. The result need not hold if the Gδ set is not dense. See the Examples section.
Every comeagre set in a Baire space is a Baire space.
A subset of a Baire space is comeagre if and only if it contains a dense Gδ set.
A closed subspace of a Baire space need not be Baire. See the Examples section.
If a space contains a dense subspace that is Baire, it is also a Baire space.
A space that is locally Baire, in the sense that each point has a neighborhood that is a Baire space, is a Baire space.
Every topological sum of Baire spaces is Baire.
The product of two Baire spaces is not necessarily Baire.
An arbitrary product of complete metric spaces is Baire.
Every locally compact sober space is a Baire space.
Every finite topological space is a Baire space (because a finite space has only finitely many open sets and the intersection of two open dense sets is an open dense set).
A topological vector space is a Baire space if and only if it is nonmeagre, which happens if and only if every closed balanced absorbing subset has non-empty interior.
Let {\displaystyle f_{n}:X\to Y} be a sequence of continuous functions with pointwise limit {\displaystyle f:X\to Y.} If {\displaystyle X} is a Baire space, then the set of points where {\displaystyle f} is not continuous is meagre in {\displaystyle X} and the set of points where {\displaystyle f} is continuous is dense in {\displaystyle X.} A special case of this is the uniform boundedness principle.
== Examples ==
The empty space is a Baire space. It is the only space that is both Baire and meagre.
The space {\displaystyle \mathbb {R} } of real numbers with the usual topology is a Baire space.
The space {\displaystyle \mathbb {Q} } of rational numbers (with the topology induced from {\displaystyle \mathbb {R} }) is not a Baire space, since it is meagre.
The space of irrational numbers (with the topology induced from {\displaystyle \mathbb {R} }) is a Baire space, since it is comeagre in {\displaystyle \mathbb {R} .}
The space {\displaystyle X=[0,1]\cup ([2,3]\cap \mathbb {Q} )} (with the topology induced from {\displaystyle \mathbb {R} }) is nonmeagre, but not Baire. There are several ways to see it is not Baire: for example, because the subset {\displaystyle [0,1]} is comeagre but not dense; or because the nonempty subset {\displaystyle [2,3]\cap \mathbb {Q} } is open and meagre.
Similarly, the space {\displaystyle X=\{1\}\cup ([2,3]\cap \mathbb {Q} )} is not Baire. It is nonmeagre since {\displaystyle 1} is an isolated point.
The following are examples of Baire spaces for which the Baire category theorem does not apply, because these spaces are not locally compact and not completely metrizable:
The Sorgenfrey line.
The Sorgenfrey plane.
The Niemytzki plane.
The subspace of {\displaystyle \mathbb {R} ^{2}} consisting of the open upper half plane together with the rationals on the x-axis, namely, {\displaystyle X=(\mathbb {R} \times (0,\infty ))\cup (\mathbb {Q} \times \{0\}),} is a Baire space, because the open upper half plane is dense in {\displaystyle X} and completely metrizable, hence Baire. The space {\displaystyle X} is not locally compact and not completely metrizable. The set {\displaystyle \mathbb {Q} \times \{0\}} is closed in {\displaystyle X}, but is not a Baire space. Since in a metric space closed sets are Gδ sets, this also shows that in general Gδ sets in a Baire space need not be Baire.
Algebraic varieties with the Zariski topology are Baire spaces. An example is the affine space {\displaystyle \mathbb {A} ^{n}} consisting of the set {\displaystyle \mathbb {C} ^{n}} of n-tuples of complex numbers, together with the topology whose closed sets are the vanishing sets of polynomials {\displaystyle f\in \mathbb {C} [x_{1},\ldots ,x_{n}].}
== See also ==
Banach–Mazur game
Barrelled space – Type of topological vector space
Blumberg theorem – Any real function on R admits a continuous restriction on a dense subset of R
Choquet game – Topological game
Property of Baire – Difference of an open set by a meager set
Webbed space – Space where open mapping and closed graph theorems hold
== Notes ==
== References ==
Bourbaki, Nicolas (1989) [1967]. General Topology 2: Chapters 5–10 [Topologie Générale]. Éléments de mathématique. Vol. 4. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64563-4. OCLC 246032063.
Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4.
Gierz, G.; Hofmann, K. H.; Keimel, K.; Lawson, J. D.; Mislove, M. W.; Scott, D. S. (2003). Continuous Lattices and Domains. Encyclopedia of Mathematics and its Applications. Vol. 93. Cambridge University Press. ISBN 978-0521803380.
Haworth, R. C.; McCoy, R. A. (1977), Baire Spaces, Warszawa: Instytut Matematyczny Polskiej Akademii Nauk
Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153.
Munkres, James R. (2000). Topology. Prentice-Hall. ISBN 0-13-181629-2.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
== External links ==
Encyclopaedia of Mathematics article on Baire space
Encyclopaedia of Mathematics article on Baire theorem
In mathematics, particularly in functional analysis, the spectrum of a bounded linear operator (or, more generally, an unbounded linear operator) is a generalisation of the set of eigenvalues of a matrix. Specifically, a complex number {\displaystyle \lambda } is said to be in the spectrum of a bounded linear operator {\displaystyle T} if {\displaystyle T-\lambda I}
either has no set-theoretic inverse;
or the set-theoretic inverse is either unbounded or defined on a non-dense subset.
Here, {\displaystyle I} is the identity operator.
By the closed graph theorem, {\displaystyle \lambda } is in the spectrum if and only if the bounded operator {\displaystyle T-\lambda I:V\to V} is non-bijective on {\displaystyle V}.
The study of spectra and related properties is known as spectral theory, which has numerous applications, most notably the mathematical formulation of quantum mechanics.
The spectrum of an operator on a finite-dimensional vector space is precisely the set of eigenvalues. However an operator on an infinite-dimensional space may have additional elements in its spectrum, and may have no eigenvalues. For example, consider the right shift operator R on the Hilbert space ℓ2,
{\displaystyle (x_{1},x_{2},\dots )\mapsto (0,x_{1},x_{2},\dots ).}
This has no eigenvalues, since if Rx=λx then by expanding this expression we see that x1=0, x2=0, etc. On the other hand, 0 is in the spectrum because although the operator R − 0 (i.e. R itself) is invertible, the inverse is defined on a set which is not dense in ℓ2. In fact every bounded linear operator on a complex Banach space must have a non-empty spectrum.
The notion of spectrum extends to unbounded (i.e. not necessarily bounded) operators. A complex number λ is said to be in the spectrum of an unbounded operator
{\displaystyle T:\,X\to X}
defined on domain {\displaystyle D(T)\subseteq X} if there is no bounded inverse {\displaystyle (T-\lambda I)^{-1}:\,X\to D(T)} defined on the whole of {\displaystyle X.}
If T is closed (which includes the case when T is bounded), boundedness of
{\displaystyle (T-\lambda I)^{-1}}
follows automatically from its existence.
The space of bounded linear operators B(X) on a Banach space X is an example of a unital Banach algebra. Since the definition of the spectrum does not mention any properties of B(X) except those that any such algebra has, the notion of a spectrum may be generalised to this context by using the same definition verbatim.
== Spectrum of a bounded operator ==
=== Definition ===
Let
T
{\displaystyle T}
be a bounded linear operator acting on a Banach space
X
{\displaystyle X}
over the complex scalar field
C
{\displaystyle \mathbb {C} }
, and
I
{\displaystyle I}
be the identity operator on
X
{\displaystyle X}
. The spectrum of
T
{\displaystyle T}
is the set of all
λ
∈
C
{\displaystyle \lambda \in \mathbb {C} }
for which the operator
T
−
λ
I
{\displaystyle T-\lambda I}
does not have an inverse that is a bounded linear operator.
Since {\displaystyle T-\lambda I} is a linear operator, the inverse is linear if it exists; and, by the bounded inverse theorem, it is bounded. Therefore, the spectrum consists precisely of those scalars {\displaystyle \lambda } for which {\displaystyle T-\lambda I} is not bijective.
The spectrum of a given operator {\displaystyle T} is often denoted {\displaystyle \sigma (T)}, and its complement, the resolvent set, is denoted {\displaystyle \rho (T)=\mathbb {C} \setminus \sigma (T)}. ({\displaystyle \rho (T)} is sometimes used to denote the spectral radius of {\displaystyle T}.)
=== Relation to eigenvalues ===
If {\displaystyle \lambda } is an eigenvalue of {\displaystyle T}, then the operator {\displaystyle T-\lambda I} is not one-to-one, and therefore its inverse {\displaystyle (T-\lambda I)^{-1}} is not defined. However, the converse statement is not true: the operator {\displaystyle T-\lambda I} may not have an inverse, even if {\displaystyle \lambda } is not an eigenvalue. Thus the spectrum of an operator always contains all its eigenvalues, but is not limited to them.
For example, consider the Hilbert space {\displaystyle \ell ^{2}(\mathbb {Z} )}, that consists of all bi-infinite sequences of real numbers {\displaystyle v=(\ldots ,v_{-2},v_{-1},v_{0},v_{1},v_{2},\ldots )} that have a finite sum of squares {\textstyle \sum _{i=-\infty }^{+\infty }v_{i}^{2}}. The bilateral shift operator {\displaystyle T} simply displaces every element of the sequence by one position; namely if {\displaystyle u=T(v)} then {\displaystyle u_{i}=v_{i-1}} for every integer {\displaystyle i}. The eigenvalue equation {\displaystyle T(v)=\lambda v} has no nonzero solution in this space, since it implies that all the values {\displaystyle v_{i}} have the same absolute value (if {\displaystyle \vert \lambda \vert =1}) or are a geometric progression (if {\displaystyle \vert \lambda \vert \neq 1}); either way, the sum of their squares would not be finite. However, the operator {\displaystyle T-\lambda I} is not invertible if {\displaystyle |\lambda |=1}. For example, the sequence {\displaystyle u} such that {\displaystyle u_{i}=1/(|i|+1)} is in {\displaystyle \ell ^{2}(\mathbb {Z} )}; but there is no sequence {\displaystyle v} in {\displaystyle \ell ^{2}(\mathbb {Z} )} such that {\displaystyle (T-I)v=u} (that is, {\displaystyle v_{i-1}=u_{i}+v_{i}} for all {\displaystyle i}).
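The non-existence claim at the end of the example reduces to a divergence argument: any solution of the recurrence would satisfy v_0 − v_N = Σ_{i=1..N} u_i, a harmonic-type sum that grows without bound, while an ℓ² sequence must tend to 0 in both directions. A small numerical check of that divergence (our own sketch, not from the article):

```python
import math

def partial_sum(N):
    # partial sums of u_i = 1/(|i| + 1) over i = 1..N (harmonic-type growth)
    return sum(1.0 / (abs(i) + 1) for i in range(1, N + 1))

# The sums grow like log(N), hence are unbounded, so v_0 - v_N cannot stay
# bounded: no ℓ² solution v of (T - I)v = u exists.
assert partial_sum(10**4) > math.log(10**4) - 1
```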
=== Basic properties ===
The spectrum of a bounded operator T is always a closed, bounded subset of the complex plane.
If the spectrum were empty, then the resolvent function
{\displaystyle R(\lambda )=(T-\lambda I)^{-1},\qquad \lambda \in \mathbb {C} ,}
would be defined everywhere on the complex plane and bounded. But it can be shown that the resolvent function R is holomorphic on its domain. By the vector-valued version of Liouville's theorem, this function is constant, thus everywhere zero as it is zero at infinity. This would be a contradiction.
The boundedness of the spectrum follows from the Neumann series expansion in λ; the spectrum σ(T) is bounded by ||T||. A similar result shows the closedness of the spectrum.
The bound ||T|| on the spectrum can be refined somewhat. The spectral radius, r(T), of T is the radius of the smallest circle in the complex plane which is centered at the origin and contains the spectrum σ(T) inside of it, i.e.
{\displaystyle r(T)=\sup\{|\lambda |:\lambda \in \sigma (T)\}.}
The spectral radius formula says that for any element {\displaystyle T} of a Banach algebra,
{\displaystyle r(T)=\lim _{n\to \infty }\left\|T^{n}\right\|^{1/n}.}
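The spectral radius formula can be watched converging numerically in a finite-dimensional Banach algebra. The sketch below (our own example) uses 2 × 2 matrices with the induced ∞-norm (maximum absolute row sum); for T = [[2, 1], [0, 3]] the spectrum is {2, 3}, so r(T) = 3:

```python
import math

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inf_norm(M):
    # induced ∞-norm: maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in M)

T = [[2, 1], [0, 3]]
P, vals = T, []
for n in range(1, 31):
    vals.append(inf_norm(P) ** (1.0 / n))   # ||T^n||^(1/n)
    P = matmul(P, T)

assert math.isclose(vals[-1], 3.0, rel_tol=1e-9)   # ||T^n||^(1/n) → r(T) = 3
```

For this particular upper-triangular T the row sums of Tⁿ both equal 3ⁿ exactly, so the sequence is constant at 3; for a generic matrix it converges from above.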
== Spectrum of an unbounded operator ==
One can extend the definition of spectrum to unbounded operators on a Banach space X. These operators are no longer elements in the Banach algebra B(X).
=== Definition ===
Let X be a Banach space and {\displaystyle T:\,D(T)\to X} be a linear operator defined on domain {\displaystyle D(T)\subseteq X}.
A complex number λ is said to be in the resolvent set (also called regular set) of {\displaystyle T} if the operator {\displaystyle T-\lambda I:\,D(T)\to X} has a bounded everywhere-defined inverse, i.e. if there exists a bounded operator {\displaystyle S:\,X\rightarrow D(T)} such that
{\displaystyle S(T-\lambda I)=I_{D(T)},\,(T-\lambda I)S=I_{X}.}
A complex number λ is then in the spectrum if λ is not in the resolvent set.
For λ to be in the resolvent (i.e. not in the spectrum), just like in the bounded case, {\displaystyle T-\lambda I} must be bijective, since it must have a two-sided inverse. As before, if an inverse exists, then its linearity is immediate, but in general it may not be bounded, so this condition must be checked separately.
By the closed graph theorem, boundedness of {\displaystyle (T-\lambda I)^{-1}} does follow directly from its existence when T is closed. Then, just as in the bounded case, a complex number λ lies in the spectrum of a closed operator T if and only if {\displaystyle T-\lambda I} is not bijective. Note that the class of closed operators includes all bounded operators.
=== Basic properties ===
The spectrum of an unbounded operator is in general a closed, possibly empty, subset of the complex plane.
If the operator T is not closed, then {\displaystyle \sigma (T)=\mathbb {C} }.
The following example indicates that closed unbounded operators may have empty spectra. Let {\displaystyle T} denote the differentiation operator on {\displaystyle L^{2}([0,1])}, whose domain is defined to be the closure of {\displaystyle C_{c}^{\infty }((0,1])} with respect to the {\displaystyle H^{1}}-Sobolev space norm. This space can be characterized as all functions in {\displaystyle H^{1}([0,1])} that are zero at {\displaystyle t=0}. Then, {\displaystyle T-z} has trivial kernel on this domain, as any {\displaystyle H^{1}([0,1])}-function in its kernel is a constant multiple of {\displaystyle e^{zt}}, which is zero at {\displaystyle t=0} if and only if it is identically zero. Moreover, {\displaystyle T-z} is surjective with a bounded inverse for every {\displaystyle z}, so the complement of the spectrum is all of {\displaystyle \mathbb {C} .}
== Classification of points in the spectrum ==
A bounded operator T on a Banach space is invertible, i.e. has a bounded inverse, if and only if T is bounded below, i.e.
{\displaystyle \|Tx\|\geq c\|x\|,}
for some {\displaystyle c>0,} and has dense range. Accordingly, the spectrum of T can be divided into the following parts:
{\displaystyle \lambda \in \sigma (T)} if {\displaystyle T-\lambda I} is not bounded below. In particular, this is the case if {\displaystyle T-\lambda I} is not injective, that is, λ is an eigenvalue. The set of eigenvalues is called the point spectrum of T and denoted by σp(T). Alternatively, {\displaystyle T-\lambda I} could be one-to-one but still not bounded below. Such λ is not an eigenvalue but still an approximate eigenvalue of T (eigenvalues themselves are also approximate eigenvalues). The set of approximate eigenvalues (which includes the point spectrum) is called the approximate point spectrum of T, denoted by σap(T).
{\displaystyle \lambda \in \sigma (T)} if {\displaystyle T-\lambda I} does not have dense range. The set of such λ is called the compression spectrum of T, denoted by {\displaystyle \sigma _{\mathrm {cp} }(T)}. If {\displaystyle T-\lambda I} does not have dense range but is injective, λ is said to be in the residual spectrum of T, denoted by {\displaystyle \sigma _{\mathrm {r} }(T)}.
Note that the approximate point spectrum and residual spectrum are not necessarily disjoint (however, the point spectrum and the residual spectrum are).
The following subsections provide more details on the three parts of σ(T) sketched above.
=== Point spectrum ===
If an operator is not injective (so there is some nonzero x with T(x) = 0), then it is clearly not invertible. So if λ is an eigenvalue of T, one necessarily has λ ∈ σ(T). The set of eigenvalues of T is also called the point spectrum of T, denoted by σp(T). Some authors refer to the closure of the point spectrum as the pure point spectrum
{\displaystyle \sigma _{pp}(T)={\overline {\sigma _{p}(T)}}}
while others simply consider
{\displaystyle \sigma _{pp}(T):=\sigma _{p}(T).}
=== Approximate point spectrum ===
More generally, by the bounded inverse theorem, T is not invertible if it is not bounded below; that is, if there is no c > 0 such that ||Tx|| ≥ c||x|| for all x ∈ X. So the spectrum includes the set of approximate eigenvalues, which are those λ such that T - λI is not bounded below; equivalently, it is the set of λ for which there is a sequence of unit vectors x1, x2, ... for which
{\displaystyle \lim _{n\to \infty }\|Tx_{n}-\lambda x_{n}\|=0.}
The set of approximate eigenvalues is known as the approximate point spectrum, denoted by σap(T). It is easy to see that the eigenvalues lie in the approximate point spectrum.
For example, consider the right shift R on l²(ℤ) defined by
{\displaystyle R:\,e_{j}\mapsto e_{j+1},\quad j\in \mathbb {Z} ,}
where (ej)j∈ℤ is the standard orthonormal basis in l²(ℤ). Direct calculation shows R has no eigenvalues, but every λ with |λ| = 1 is an approximate eigenvalue; letting xn be the vector
{\displaystyle {\frac {1}{\sqrt {n}}}(\dots ,0,1,\lambda ^{-1},\lambda ^{-2},\dots ,\lambda ^{1-n},0,\dots )}
one can see that ||xn|| = 1 for all n, but
{\displaystyle \|Rx_{n}-\lambda x_{n}\|={\sqrt {\frac {2}{n}}}\to 0.}
Since R is a unitary operator, its spectrum lies on the unit circle. Therefore, the approximate point spectrum of R is its entire spectrum.
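The computation for the shift operator above can be checked numerically. The following sketch is not part of the standard presentation; the truncation size N, the support length n, and the choice of λ on the unit circle are arbitrary illustrative choices.

```python
import numpy as np

n = 400                    # support length of the approximate eigenvector
N = 2 * n                  # finite truncation of l^2(Z)
lam = np.exp(1j * 0.7)     # any lambda with |lambda| = 1

# x_j = lambda^{-j} / sqrt(n) for j = 0, ..., n-1, embedded in C^N
x = np.zeros(N, dtype=complex)
x[:n] = lam ** (-np.arange(n)) / np.sqrt(n)

# right shift: (Rx)_j = x_{j-1}
Rx = np.zeros_like(x)
Rx[1:] = x[:-1]

# x is a unit vector, and Rx - lam*x has norm sqrt(2/n), small for large n:
# the only nonzero entries sit at the two ends of the support.
print(np.linalg.norm(x))
print(np.linalg.norm(Rx - lam * x), np.sqrt(2 / n))
```

Interior entries cancel exactly because x_{j−1} = λ·x_j on the support; only the two boundary entries contribute, each of modulus 1/√n.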
This conclusion is also true for a more general class of operators.
A unitary operator is normal. By the spectral theorem, a bounded operator on a Hilbert space H is normal if and only if it is equivalent (after identification of H with an L² space) to a multiplication operator. It can be shown that the approximate point spectrum of a bounded multiplication operator equals its spectrum.
=== Discrete spectrum ===
The discrete spectrum is defined as the set of normal eigenvalues or, equivalently, as the set of isolated points of the spectrum such that the corresponding Riesz projector is of finite rank. As such, the discrete spectrum is a strict subset of the point spectrum, i.e.,
{\displaystyle \sigma _{d}(T)\subset \sigma _{p}(T).}
=== Continuous spectrum ===
The set of all λ for which T − λI is injective and has dense range, but is not surjective, is called the continuous spectrum of T, denoted by σc(T). The continuous spectrum therefore consists of those approximate eigenvalues which are not eigenvalues and do not lie in the residual spectrum. That is,
{\displaystyle \sigma _{\mathrm {c} }(T)=\sigma _{\mathrm {ap} }(T)\setminus (\sigma _{\mathrm {r} }(T)\cup \sigma _{\mathrm {p} }(T))}.
For example, A : l²(ℕ) → l²(ℕ), ej ↦ ej/j, j ∈ ℕ, is injective and has dense range, yet Ran(A) ⊊ l²(ℕ).
Indeed, if x = Σj∈ℕ cj ej ∈ l²(ℕ) with cj ∈ ℂ such that Σj∈ℕ |cj|² < ∞, one does not necessarily have Σj∈ℕ |j cj|² < ∞, and then Σj∈ℕ j cj ej ∉ l²(ℕ).
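The failure of surjectivity can be made concrete with the coefficients cj = 1/j: the target vector Σ (1/j) ej lies in l², but its only candidate preimage under A has coefficients j·cj = 1, whose squared sum diverges. A quick numerical illustration (the truncation length is an arbitrary choice):

```python
import numpy as np

j = np.arange(1, 100001)
c = 1.0 / j                  # coefficients of the target y = sum_j c_j e_j

# y lies in l^2(N): sum |c_j|^2 converges (to pi^2/6)
print(np.sum(c**2))

# but the candidate preimage has coefficients j*c_j = 1, whose squared sum
# grows without bound as the truncation length grows
print(np.sum((j * c)**2))
```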
=== Compression spectrum ===
The set of λ ∈ ℂ for which T − λI does not have dense range is known as the compression spectrum of T and is denoted by σcp(T).
=== Residual spectrum ===
The set of λ ∈ ℂ for which T − λI is injective but does not have dense range is known as the residual spectrum of T and is denoted by σr(T):
{\displaystyle \sigma _{\mathrm {r} }(T)=\sigma _{\mathrm {cp} }(T)\setminus \sigma _{\mathrm {p} }(T).}
An operator may be injective, even bounded below, but still not invertible. The right shift on l²(ℕ), R : l²(ℕ) → l²(ℕ), ej ↦ ej+1, j ∈ ℕ, is such an example. This shift operator is an isometry, therefore bounded below by 1. But it is not invertible as it is not surjective (e1 ∉ Ran(R)); moreover, Ran(R) is not dense in l²(ℕ) (e1 is not even in the closure of Ran(R)).
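The fact that e1 stays at distance 1 from the range of the right shift can be seen already in a finite truncation, since every vector in Ran(R) has first coordinate zero. A minimal sketch (the dimension is an arbitrary choice):

```python
import numpy as np

N = 6
R = np.zeros((N, N))
for j in range(N - 1):
    R[j + 1, j] = 1.0        # R e_j = e_{j+1} (0-based indexing)

e1 = np.zeros(N)
e1[0] = 1.0

# Any vector in Ran(R) has first coordinate 0, so e_1 is orthogonal to the
# range and at distance 1 from it (hence also from its closure).
x = np.random.default_rng(0).standard_normal(N)
print((R @ x)[0])            # first coordinate of an arbitrary range vector
print(np.linalg.norm(R.T @ e1))   # 0: e_1 is orthogonal to Ran(R)
```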
=== Peripheral spectrum ===
The peripheral spectrum of an operator is defined as the set of points in its spectrum which have modulus equal to its spectral radius.
=== Essential spectrum ===
There are five similar definitions of the essential spectrum of a closed densely defined linear operator A : X → X which satisfy
{\displaystyle \sigma _{\mathrm {ess} ,1}(A)\subset \sigma _{\mathrm {ess} ,2}(A)\subset \sigma _{\mathrm {ess} ,3}(A)\subset \sigma _{\mathrm {ess} ,4}(A)\subset \sigma _{\mathrm {ess} ,5}(A)\subset \sigma (A).}
All these spectra σess,k(A), 1 ≤ k ≤ 5, coincide in the case of self-adjoint operators.
The essential spectrum σess,1(A) is defined as the set of points λ of the spectrum such that A − λI is not semi-Fredholm. (The operator is semi-Fredholm if its range is closed and either its kernel or cokernel (or both) is finite-dimensional.)
Example 1: λ = 0 ∈ σess,1(A) for the operator A : l²(ℕ) → l²(ℕ), ej ↦ ej/j, j ∈ ℕ (because the range of this operator is not closed: the range does not include all of l²(ℕ) although its closure does).
Example 2: λ = 0 ∈ σess,1(N) for N : l²(ℕ) → l²(ℕ), v ↦ 0 for any v ∈ l²(ℕ) (because both kernel and cokernel of this operator are infinite-dimensional).
The essential spectrum σess,2(A) is defined as the set of points λ of the spectrum such that the operator A − λI either has infinite-dimensional kernel or has a range which is not closed. It can also be characterized in terms of Weyl's criterion: there exists a sequence (xj)j∈ℕ in the space X such that ‖xj‖ = 1,
{\textstyle \lim _{j\to \infty }\left\|(A-\lambda I)x_{j}\right\|=0,}
and such that (xj)j∈ℕ contains no convergent subsequence. Such a sequence is called a singular sequence (or a singular Weyl sequence).
Example: λ = 0 ∈ σess,2(B) for the operator B : l²(ℕ) → l²(ℕ), ej ↦ ej/2 if j is even and ej ↦ 0 when j is odd (its kernel is infinite-dimensional; its cokernel is zero-dimensional). Note that λ = 0 ∉ σess,1(B).
The essential spectrum σess,3(A) is defined as the set of points λ of the spectrum such that A − λI is not Fredholm. (The operator is Fredholm if its range is closed and both its kernel and cokernel are finite-dimensional.)
Example: λ = 0 ∈ σess,3(J) for the operator J : l²(ℕ) → l²(ℕ), ej ↦ e2j (its kernel is zero-dimensional, its cokernel is infinite-dimensional). Note that λ = 0 ∉ σess,2(J).
The essential spectrum σess,4(A) is defined as the set of points λ of the spectrum such that A − λI is not Fredholm of index zero. It can also be characterized as the largest part of the spectrum of A which is preserved by compact perturbations. In other words,
{\textstyle \sigma _{\mathrm {ess} ,4}(A)=\bigcap _{K\in B_{0}(X)}\sigma (A+K)}
where B0(X) denotes the set of all compact operators on X.
Example: λ = 0 ∈ σess,4(R) where R : l²(ℕ) → l²(ℕ) is the right shift operator, ej ↦ ej+1 for j ∈ ℕ (its kernel is zero, its cokernel is one-dimensional). Note that λ = 0 ∉ σess,3(R).
The essential spectrum σess,5(A) is the union of σess,1(A) with all components of ℂ ∖ σess,1(A) that do not intersect the resolvent set ℂ ∖ σ(A). It can also be characterized as σ(A) ∖ σd(A).
Example: consider the operator T : l²(ℤ) → l²(ℤ), ej ↦ ej−1 for j ≠ 0, e0 ↦ 0. Since ‖T‖ = 1, one has σ(T) contained in the closed unit disc. For any z ∈ ℂ with |z| = 1, the range of T − zI is dense but not closed, hence the boundary of the unit disc lies in the first type of the essential spectrum: ∂D1 ⊂ σess,1(T). For any z ∈ ℂ with |z| < 1, the operator T − zI has a closed range, one-dimensional kernel, and one-dimensional cokernel, so z ∈ σ(T) although z ∉ σess,k(T) for 1 ≤ k ≤ 4; thus σess,k(T) = ∂D1 for 1 ≤ k ≤ 4. There are two components of ℂ ∖ σess,1(T): {z ∈ ℂ : |z| > 1} and {z ∈ ℂ : |z| < 1}. The component {|z| < 1} has no intersection with the resolvent set; by definition,
{\displaystyle \sigma _{\mathrm {ess} ,5}(T)=\sigma _{\mathrm {ess} ,1}(T)\cup \{z\in \mathbb {C} :\,|z|<1\}=\{z\in \mathbb {C} :\,|z|\leq 1\}}.
== Example: Hydrogen atom ==
The hydrogen atom provides an example of different types of the spectra. The hydrogen atom Hamiltonian operator H = −Δ − Z/|x|, Z > 0, with domain D(H) = H¹(ℝ³), has a discrete set of eigenvalues (the discrete spectrum σd(H), which in this case coincides with the point spectrum σp(H) since there are no eigenvalues embedded into the continuous spectrum) that can be computed by the Rydberg formula. Their corresponding eigenfunctions are called eigenstates, or the bound states. The result of the ionization process is described by the continuous part of the spectrum (the energy of the collision/ionization is not "quantized"), represented by σcont(H) = [0, +∞) (it also coincides with the essential spectrum, σess(H) = [0, +∞)).
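The qualitative picture — negative discrete eigenvalues accumulating at the bottom edge of the essential spectrum [0, +∞) — can be sketched numerically. The normalization En = −Z²/(4n²) below is an assumption tied to the convention H = −Δ − Z/|x| used here (with −Δ/2 one would instead get −Z²/(2n²)):

```python
# Bound-state energies of H = -Delta - Z/|x|, assuming E_n = -Z^2 / (4 n^2).
Z = 1.0
energies = [-Z**2 / (4 * n**2) for n in range(1, 6)]

# Discrete, strictly negative, and accumulating at 0, which is the bottom
# edge of the continuous/essential spectrum [0, +infinity).
print(energies)
```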
== Spectrum of the adjoint operator ==
Let X be a Banach space and T : X → X a closed linear operator with dense domain D(T) ⊂ X. If X* is the dual space of X, and T* : X* → X* is the hermitian adjoint of T, then
{\displaystyle \sigma (T^{*})={\overline {\sigma (T)}}:=\{z\in \mathbb {C} :{\bar {z}}\in \sigma (T)\}.}
We also get
{\displaystyle \sigma _{\mathrm {p} }(T)\subset {\overline {\sigma _{\mathrm {r} }(T^{*})\cup \sigma _{\mathrm {p} }(T^{*})}}}
by the following argument: X embeds isometrically into X**. Therefore, for every non-zero element in the kernel of T − λI there exists a non-zero element in X** which vanishes on Ran(T* − λ̄I). Thus Ran(T* − λ̄I) cannot be dense.
Furthermore, if X is reflexive, we have
{\displaystyle {\overline {\sigma _{\mathrm {r} }(T^{*})}}\subset \sigma _{\mathrm {p} }(T)}.
== Spectra of particular classes of operators ==
=== Compact operators ===
If T is a compact operator, or, more generally, an inessential operator, then it can be shown that the spectrum is countable, that zero is the only possible accumulation point, and that any nonzero λ in the spectrum is an eigenvalue.
=== Quasinilpotent operators ===
A bounded operator A : X → X is quasinilpotent if ‖Aⁿ‖^(1/n) → 0 as n → ∞ (in other words, if the spectral radius of A equals zero). Such operators can equivalently be characterized by the condition
{\displaystyle \sigma (A)=\{0\}.}
An example of such an operator is A : l²(ℕ) → l²(ℕ), ej ↦ ej+1/2^j for j ∈ ℕ.
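The quasinilpotence of this weighted shift can be seen numerically: in a finite truncation, ‖Aⁿ‖ is the largest product of n consecutive weights, so ‖Aⁿ‖^(1/n) decays geometrically. A sketch (the truncation dimension is an arbitrary choice, and the truncated matrix is of course nilpotent, so only small n are informative):

```python
import numpy as np

N = 20
A = np.zeros((N, N))
for j in range(1, N):
    A[j, j - 1] = 2.0 ** (-j)    # e_j -> e_{j+1} / 2^j, truncated to N dims

vals = []
An = np.eye(N)
for n in range(1, 8):
    An = An @ A
    vals.append(np.linalg.norm(An, 2) ** (1.0 / n))

# The sequence ||A^n||^(1/n) decreases toward 0: the spectral radius is 0.
print(vals)
```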
=== Self-adjoint operators ===
If X is a Hilbert space and T is a self-adjoint operator (or, more generally, a normal operator), then a remarkable result known as the spectral theorem gives an analogue of the diagonalisation theorem for normal finite-dimensional operators (Hermitian matrices, for example).
For self-adjoint operators, one can use spectral measures to define a decomposition of the spectrum into absolutely continuous, pure point, and singular parts.
== Spectrum of a real operator ==
The definitions of the resolvent and spectrum can be extended to any continuous linear operator T acting on a Banach space X over the real field ℝ (instead of the complex field ℂ) via its complexification T_ℂ. In this case we define the resolvent set ρ(T) as the set of all λ ∈ ℂ such that T_ℂ − λI is invertible as an operator acting on the complexified space X_ℂ; then we define σ(T) = ℂ ∖ ρ(T).
=== Real spectrum ===
The real spectrum of a continuous linear operator T acting on a real Banach space X, denoted σℝ(T), is defined as the set of all λ ∈ ℝ for which T − λI fails to be invertible in the real algebra of bounded linear operators acting on X. In this case we have σ(T) ∩ ℝ = σℝ(T). Note that the real spectrum may or may not coincide with the complex spectrum. In particular, the real spectrum could be empty.
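A standard illustration of an empty real spectrum is rotation by 90° on ℝ²: the eigenvalues of its complexification are ±i, and T − λI is invertible for every real λ. A quick numerical check:

```python
import numpy as np

# Rotation by 90 degrees on R^2.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Complex spectrum of the complexification: {i, -i}.
print(np.linalg.eigvals(T))

# T - lam*I is invertible for every real lam: det = lam^2 + 1 >= 1.
for lam in np.linspace(-5, 5, 101):
    assert abs(np.linalg.det(T - lam * np.eye(2))) >= 1.0 - 1e-9
```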
== Spectrum of a unital Banach algebra ==
Let B be a complex Banach algebra containing a unit e. Then we define the spectrum σ(x) (or more explicitly σB(x)) of an element x of B to be the set of those complex numbers λ for which λe − x is not invertible in B. This extends the definition for bounded linear operators B(X) on a Banach space X, since B(X) is a unital Banach algebra.
== See also ==
Essential spectrum
Discrete spectrum (mathematics)
Self-adjoint operator
Pseudospectrum
Resolvent set
== Notes ==
== References ==
Dales et al., Introduction to Banach Algebras, Operators, and Harmonic Analysis, ISBN 0-521-53584-0
"Spectrum of an operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Simon, Barry (2005). Orthogonal polynomials on the unit circle. Part 1. Classical theory. American Mathematical Society Colloquium Publications. Vol. 54. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-3446-6. MR 2105088.
Teschl, G. (2014). Mathematical Methods in Quantum Mechanics. Providence, R.I.: American Mathematical Society. ISBN 978-1-4704-1704-8.
In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion.
The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces.
See the article decomposition of a module for a way to write a module as a direct sum of submodules.
== Construction for vector spaces and abelian groups ==
We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth.
=== Construction for two vector spaces ===
Suppose V and W are vector spaces over the field K. The Cartesian product V × W can be given the structure of a vector space over K (Halmos 1974, §18) by defining the operations componentwise:
(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
α (v, w) = (α v, α w)
for v, v1, v2 ∈ V, w, w1, w2 ∈ W, and α ∈ K.
The resulting vector space is called the direct sum of V and W and is usually denoted by a plus symbol inside a circle:
{\displaystyle V\oplus W}
It is customary to write the elements of an ordered sum not as ordered pairs (v, w), but as a sum v + w.
The subspace V × {0} of V ⊕ W is isomorphic to V and is often identified with V; similarly for {0} × W and W. (See internal direct sum below.) With this identification, every element of V ⊕ W can be written in one and only one way as the sum of an element of V and an element of W. The dimension of V ⊕ W is equal to the sum of the dimensions of V and W. One elementary use is the reconstruction of a finite vector space from any subspace W and its orthogonal complement:
{\displaystyle \mathbb {R} ^{n}=W\oplus W^{\perp }}
This construction readily generalizes to any finite number of vector spaces.
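The componentwise operations defining the direct sum of two vector spaces can be sketched directly; the function names below are illustrative, and the example takes V = W = ℝ for simplicity:

```python
# Elements of V x W are ordered pairs; operations act componentwise.
def add(p, q):
    (v1, w1), (v2, w2) = p, q
    return (v1 + v2, w1 + w2)

def scale(a, p):
    v, w = p
    return (a * v, a * w)

# With V = W = R, the direct sum V ⊕ W behaves like R^2:
p, q = (1.0, 2.0), (3.0, 4.0)
print(add(p, q))        # componentwise sum
print(scale(2.0, p))    # componentwise scalar multiple
```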
=== Construction for two abelian groups ===
For abelian groups G and H which are written additively, the direct product of G and H is also called a direct sum (Mac Lane & Birkhoff 1999, §V.6). Thus the Cartesian product G × H is equipped with the structure of an abelian group by defining the operations componentwise:
(g1, h1) + (g2, h2) = (g1 + g2, h1 + h2)
for g1, g2 in G, and h1, h2 in H.
Integral multiples are similarly defined componentwise by
n(g, h) = (ng, nh)
for g in G, h in H, and n an integer. This parallels the extension of the scalar product of vector spaces to the direct sum above.
The resulting abelian group is called the direct sum of G and H and is usually denoted by a plus symbol inside a circle:
{\displaystyle G\oplus H}
It is customary to write the elements of an ordered sum not as ordered pairs (g, h), but as a sum g + h.
The subgroup G × {0} of G ⊕ H is isomorphic to G and is often identified with G; similarly for {0} × H and H. (See internal direct sum below.) With this identification, it is true that every element of G ⊕ H can be written in one and only one way as the sum of an element of G and an element of H. The rank of G ⊕ H is equal to the sum of the ranks of G and H.
This construction readily generalises to any finite number of abelian groups.
== Construction for an arbitrary family of modules ==
One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows (Bourbaki 1989, §II.1.6).
Let R be a ring, and {Mi : i ∈ I} a family of left R-modules indexed by the set I. The direct sum of {Mi} is then defined to be the set of all sequences (αi) where αi ∈ Mi and αi = 0 for cofinitely many indices i. (The direct product is analogous but the indices do not need to cofinitely vanish.)
It can also be defined as functions α from I to the disjoint union of the modules Mi such that α(i) ∈ Mi for all i ∈ I and α(i) = 0 for cofinitely many indices i. These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set I, with the fiber over i ∈ I being Mi.
This set inherits the module structure via componentwise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing (α + β)i = αi + βi for all i (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element r from R by defining (rα)i = r αi for all i. In this way, the direct sum becomes a left R-module, and it is denoted
{\displaystyle \bigoplus _{i\in I}M_{i}.}
It is customary to write the sequence (αi) as a sum Σ αi. Sometimes a primed summation Σ′ αi is used to indicate that cofinitely many of the terms are zero.
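A finitely supported function over an arbitrary (possibly infinite) index set can be modeled as a dictionary holding only its nonzero components; the sketch below assumes the component modules are represented by ordinary numbers, and the names are illustrative:

```python
# Elements of the direct sum: dicts mapping index -> nonzero component.
def add(alpha, beta):
    out = dict(alpha)
    for i, b in beta.items():
        out[i] = out.get(i, 0) + b
        if out[i] == 0:
            del out[i]          # keep the support finite and explicit
    return out

def scale(r, alpha):
    return {i: r * a for i, a in alpha.items() if r * a != 0}

alpha = {2: 5, 7: -1}           # 5*e_2 - e_7
beta = {7: 1, 9: 4}             # e_7 + 4*e_9
print(add(alpha, beta))         # the e_7 components cancel
print(scale(3, alpha))
```

Storing only the support mirrors the defining condition that all but finitely many components vanish, and it works unchanged for any index set.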
== Properties ==
The direct sum is a submodule of the direct product of the modules Mi (Bourbaki 1989, §II.1.7). The direct product is the set of all functions α from I to the disjoint union of the modules Mi with α(i)∈Mi, but not necessarily vanishing for all but finitely many i. If the index set I is finite, then the direct sum and the direct product are equal.
Each of the modules Mi may be identified with the submodule of the direct sum consisting of those functions which vanish on all indices different from i. With these identifications, every element x of the direct sum can be written in one and only one way as a sum of finitely many elements from the modules Mi.
If the Mi are actually vector spaces, then the dimension of the direct sum is equal to the sum of the dimensions of the Mi. The same is true for the rank of abelian groups and the length of modules.
Every vector space over the field K is isomorphic to a direct sum of sufficiently many copies of K, so in a sense only these direct sums have to be considered. This is not true for modules over arbitrary rings.
The tensor product distributes over direct sums in the following sense: if N is some right R-module, then the direct sum of the tensor products of N with Mi (which are abelian groups) is naturally isomorphic to the tensor product of N with the direct sum of the Mi.
Direct sums are commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct sum.
The abelian group of R-linear homomorphisms from the direct sum to some left R-module L is naturally isomorphic to the direct product of the abelian groups of R-linear homomorphisms from Mi to L:
{\displaystyle \operatorname {Hom} _{R}{\biggl (}\bigoplus _{i\in I}M_{i},L{\biggr )}\cong \prod _{i\in I}\operatorname {Hom} _{R}\left(M_{i},L\right).}
Indeed, there is clearly a homomorphism τ from the left-hand side to the right-hand side, where τ(θ)(i) is the R-linear homomorphism sending x ∈ Mi to θ(x) (using the natural inclusion of Mi into the direct sum). The inverse of the homomorphism τ is defined by
{\displaystyle \tau ^{-1}(\beta )(\alpha )=\sum _{i\in I}\beta (i)(\alpha (i))}
for any α in the direct sum of the modules Mi. The key point is that the definition of τ⁻¹ makes sense because α(i) is zero for all but finitely many i, and so the sum is finite. In particular, the dual vector space of a direct sum of vector spaces is isomorphic to the direct product of the duals of those spaces.
The finite direct sum of modules is a biproduct: if
{\displaystyle p_{k}:A_{1}\oplus \cdots \oplus A_{n}\to A_{k}}
are the canonical projection mappings and
{\displaystyle i_{k}:A_{k}\to A_{1}\oplus \cdots \oplus A_{n}}
are the inclusion mappings, then i1 ∘ p1 + ⋯ + in ∘ pn equals the identity morphism of A1 ⊕ ⋯ ⊕ An, and pk ∘ il is the identity morphism of Ak in the case l = k, and is the zero map otherwise.
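The biproduct identities can be verified concretely for vector spaces, where projections and inclusions are block matrices. A sketch for ℝ² ⊕ ℝ³ (the dimensions are arbitrary choices):

```python
import numpy as np

n1, n2 = 2, 3
# Projections onto the two summands of R^2 ⊕ R^3 = R^5, as block matrices.
p1 = np.hstack([np.eye(n1), np.zeros((n1, n2))])
p2 = np.hstack([np.zeros((n2, n1)), np.eye(n2)])
# For these orthogonal projections, the inclusions are the transposes.
i1, i2 = p1.T, p2.T

# i1∘p1 + i2∘p2 is the identity on the direct sum:
print(np.array_equal(i1 @ p1 + i2 @ p2, np.eye(n1 + n2)))
# p_k ∘ i_l is the identity for k = l and zero otherwise:
print(np.array_equal(p1 @ i1, np.eye(n1)))
print(np.array_equal(p2 @ i1, np.zeros((n2, n1))))
```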
== Internal direct sum ==
Suppose M is an R-module and Mi is a submodule of M for each i in I. If every x in M can be written in exactly one way as a sum of finitely many elements of the Mi, then we say that M is the internal direct sum of the submodules Mi (Halmos 1974, §18). In this case, M is naturally isomorphic to the (external) direct sum of the Mi as defined above (Adamson 1972, p.61).
A submodule N of M is a direct summand of M if there exists some other submodule N′ of M such that M is the internal direct sum of N and N′. In this case, N and N′ are called complementary submodules.
== Universal property ==
In the language of category theory, the direct sum is a coproduct and hence a colimit in the category of left R-modules, which means that it is characterized by the following universal property. For every i in I, consider the natural embedding
{\displaystyle j_{i}:M_{i}\rightarrow \bigoplus _{i\in I}M_{i}}
which sends the elements of Mi to those functions which are zero for all arguments but i. Now let M be an arbitrary R-module and fi : Mi → M be arbitrary R-linear maps for every i; then there exists precisely one R-linear map
{\displaystyle f:\bigoplus _{i\in I}M_{i}\rightarrow M}
such that f ∘ ji = fi for all i.
== Grothendieck group ==
The direct sum gives a collection of objects the structure of a commutative monoid, in that the addition of objects is defined, but not subtraction. In fact, subtraction can be defined, and every commutative monoid can be extended to an abelian group. This extension is known as the Grothendieck group. The extension is done by defining equivalence classes of pairs of objects, which allows certain pairs to be treated as inverses. The construction, detailed in the article on the Grothendieck group, is "universal", in that it has the universal property of being unique, and homomorphic to any other embedding of a commutative monoid in an abelian group.
== Direct sum of modules with additional structure ==
If the modules we are considering carry some additional structure (for example, a norm or an inner product), then the direct sum of the modules can often be made to carry this additional structure, as well. In this case, we obtain the coproduct in the appropriate category of all objects carrying the additional structure. Two prominent examples occur for Banach spaces and Hilbert spaces.
In some classical texts, the phrase "direct sum of algebras over a field" is also introduced for denoting the algebraic structure that is presently more commonly called a direct product of algebras; that is, the Cartesian product of the underlying sets with the componentwise operations. This construction, however, does not provide a coproduct in the category of algebras, but a direct product (see note below and the remark on direct sums of rings).
=== Direct sum of algebras ===
A direct sum of algebras X and Y is the direct sum as vector spaces, with product
{\displaystyle (x_{1}+y_{1})(x_{2}+y_{2})=(x_{1}x_{2}+y_{1}y_{2}).}
Consider these classical examples:
R ⊕ R is ring isomorphic to split-complex numbers, also used in interval analysis.
C ⊕ C is the algebra of tessarines introduced by James Cockle in 1848.
H ⊕ H, called the split-biquaternions, was introduced by William Kingdon Clifford in 1873.
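The componentwise product on R ⊕ R is easy to experiment with; the sketch below shows its nontrivial idempotents and zero divisors, features which distinguish it from a field like ℂ:

```python
# Componentwise product in R ⊕ R.
def mul(p, q):
    return (p[0] * q[0], p[1] * q[1])

one = (1.0, 1.0)                 # multiplicative identity
e, f = (1.0, 0.0), (0.0, 1.0)    # idempotents: e*e = e, f*f = f

# e and f are zero divisors: their product vanishes although neither is zero.
print(mul(e, f))
print(mul(one, e))
```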
Joseph Wedderburn exploited the concept of a direct sum of algebras in his classification of hypercomplex numbers. See his Lectures on Matrices (1934), page 151.
Wedderburn makes clear the distinction between a direct sum and a direct product of algebras: for the direct sum the field of scalars acts jointly on both parts, λ(x ⊕ y) = λx ⊕ λy, while for the direct product a scalar factor may be collected alternately with the parts, but not both: λ(x, y) = (λx, y) = (x, λy).
Ian R. Porteous uses the three direct sums above, denoting them $^{2}R,\ ^{2}C,\ ^{2}H,$ as rings of scalars in his analysis of Clifford Algebras and the Classical Groups (1995).
The construction described above, as well as Wedderburn's use of the terms direct sum and direct product follow a different convention than the one in category theory. In categorical terms, Wedderburn's direct sum is a categorical product, whilst Wedderburn's direct product is a coproduct (or categorical sum), which (for commutative algebras) actually corresponds to the tensor product of algebras.
=== Direct sum of Banach spaces ===
The direct sum of two Banach spaces $X$ and $Y$ is the direct sum of $X$ and $Y$ considered as vector spaces, with the norm $\|(x,y)\| = \|x\|_X + \|y\|_Y$ for all $x \in X$ and $y \in Y.$
Generally, if $X_i$ is a collection of Banach spaces, where $i$ traverses the index set $I,$ then the direct sum $\bigoplus_{i \in I} X_i$ is a module consisting of all functions $x$ defined over $I$ such that $x(i) \in X_i$ for all $i \in I$ and $\sum_{i \in I} \|x(i)\|_{X_i} < \infty.$
The norm is given by the sum above. The direct sum with this norm is again a Banach space.
For example, if we take the index set $I = \mathbb{N}$ and $X_i = \mathbb{R},$ then the direct sum $\bigoplus_{i \in \mathbb{N}} X_i$ is the space $\ell_1,$ which consists of all sequences $(a_i)$ of reals with finite norm $\|a\| = \sum_i |a_i|.$
A closed subspace $A$ of a Banach space $X$ is complemented if there is another closed subspace $B$ of $X$ such that $X$ is equal to the internal direct sum $A \oplus B.$ Note that not every closed subspace is complemented; e.g. $c_0$ is not complemented in $\ell^\infty.$
=== Direct sum of modules with bilinear forms ===
Let $\left\{\left(M_i, b_i\right) : i \in I\right\}$ be a family indexed by $I$ of modules equipped with bilinear forms. The orthogonal direct sum is the module direct sum with bilinear form $B$ defined by $B\left((x_i), (y_i)\right) = \sum_{i \in I} b_i\left(x_i, y_i\right),$ in which the summation makes sense even for infinite index sets $I$ because only finitely many of the terms are non-zero.
=== Direct sum of Hilbert spaces ===
If finitely many Hilbert spaces $H_1, \ldots, H_n$ are given, one can construct their orthogonal direct sum as above (since they are vector spaces), defining the inner product as $\left\langle (x_1, \ldots, x_n), (y_1, \ldots, y_n) \right\rangle = \langle x_1, y_1 \rangle + \cdots + \langle x_n, y_n \rangle.$
The resulting direct sum is a Hilbert space which contains the given Hilbert spaces as mutually orthogonal subspaces.
If infinitely many Hilbert spaces $H_i$ for $i \in I$ are given, we can carry out the same construction; notice that when defining the inner product, only finitely many summands will be non-zero. However, the result will only be an inner product space and it will not necessarily be complete. We then define the direct sum of the Hilbert spaces $H_i$ to be the completion of this inner product space.
Alternatively and equivalently, one can define the direct sum of the Hilbert spaces $H_i$ as the space of all functions $\alpha$ with domain $I,$ such that $\alpha(i)$ is an element of $H_i$ for every $i \in I$ and $\sum_i \|\alpha(i)\|^2 < \infty.$
The inner product of two such functions $\alpha$ and $\beta$ is then defined as $\langle \alpha, \beta \rangle = \sum_i \langle \alpha(i), \beta(i) \rangle.$
This space is complete and we get a Hilbert space.
For example, if we take the index set $I = \mathbb{N}$ and $X_i = \mathbb{R},$ then the direct sum $\bigoplus_{i \in \mathbb{N}} X_i$ is the space $\ell_2,$ which consists of all sequences $(a_i)$ of reals with finite norm $\|a\| = \sqrt{\sum_i |a_i|^2}.$
Comparing this with the example for Banach spaces, we see that the Banach space direct sum and the Hilbert space direct sum are not necessarily the same. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum, although the norm will be different.
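The difference between the two norms on a finite direct sum is easy to see numerically. The following is a small sketch, assuming $\mathbb{R} \oplus \mathbb{R} \oplus \mathbb{R}$ with the Banach-space (sum) norm and the Hilbert-space (square-root-of-squares) norm; the function names are illustrative.

```python
import math

# On the finite direct sum R ⊕ R ⊕ R, the Banach-space direct-sum norm
# (sum of component norms) and the Hilbert-space direct-sum norm
# (square root of the sum of squares) differ, though with finitely many
# summands the two spaces are isomorphic.

def banach_norm(xs):
    """l1-style direct-sum norm: the sum of the component norms."""
    return sum(abs(x) for x in xs)

def hilbert_norm(xs):
    """l2-style direct-sum norm: induced by the summed inner products."""
    return math.sqrt(sum(x * x for x in xs))

v = [3.0, 4.0, 0.0]
print(banach_norm(v))   # 7.0
print(hilbert_norm(v))  # 5.0

# The norms are equivalent on finitely many summands:
#   hilbert_norm(v) <= banach_norm(v) <= sqrt(n) * hilbert_norm(v)
n = len(v)
assert hilbert_norm(v) <= banach_norm(v) <= math.sqrt(n) * hilbert_norm(v)
```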
Every Hilbert space is isomorphic to a direct sum of sufficiently many copies of the base field, which is either $\mathbb{R}$ or $\mathbb{C}.$
This is equivalent to the assertion that every Hilbert space has an orthonormal basis. More generally, every closed subspace of a Hilbert space is complemented because it admits an orthogonal complement. Conversely, the Lindenstrauss–Tzafriri theorem asserts that if every closed subspace of a Banach space is complemented, then the Banach space is isomorphic (topologically) to a Hilbert space.
== See also ==
Biproduct – in category theory, an object that is both product and coproduct in compatible ways
Indecomposable module
Jordan–Hölder theorem – Decomposition of an algebraic structure
Krull–Schmidt theorem – Mathematical theorem
Split exact sequence – Type of short exact sequence in mathematics
== References ==
Adamson, Iain T. (1972), Elementary rings and modules, University Mathematical Texts, Oliver and Boyd, ISBN 0-05-002192-3.
Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9.
Dummit, David S.; Foote, Richard M. (1991), Abstract algebra, Englewood Cliffs, NJ: Prentice Hall, Inc., ISBN 0-13-004771-6.
Halmos, Paul (1974), Finite dimensional vector spaces, Springer, ISBN 0-387-90093-4
Mac Lane, S.; Birkhoff, G. (1999), Algebra, AMS Chelsea, ISBN 0-8218-1646-2.
In functional analysis, every C*-algebra is isomorphic to a subalgebra of the C*-algebra $\mathcal{B}(H)$ of bounded linear operators on some Hilbert space $H.$ This article describes the spectral theory of closed normal subalgebras of $\mathcal{B}(H)$. A subalgebra $A$ of $\mathcal{B}(H)$ is called normal if it is commutative and closed under the $\ast$ operation: for all $x, y \in A,$ we have $x^{\ast} \in A$ and $xy = yx.$
== Resolution of identity ==
Throughout, $H$ is a fixed Hilbert space.
A projection-valued measure on a measurable space $(X, \Omega),$ where $\Omega$ is a σ-algebra of subsets of $X,$ is a mapping $\pi : \Omega \to \mathcal{B}(H)$ such that for all $\omega \in \Omega,$ $\pi(\omega)$ is a self-adjoint projection on $H$ (that is, $\pi(\omega)$ is a bounded linear operator $\pi(\omega) : H \to H$ that satisfies $\pi(\omega) = \pi(\omega)^{*}$ and $\pi(\omega) \circ \pi(\omega) = \pi(\omega)$) such that $\pi(X) = \operatorname{Id}_H$ (where $\operatorname{Id}_H$ is the identity operator of $H$) and for every $x, y \in H,$ the function $\Omega \to \mathbb{C}$ defined by $\omega \mapsto \langle \pi(\omega)x, y \rangle$ is a complex measure on $\Omega$ (that is, a complex-valued countably additive function).
A resolution of identity on a measurable space $(X, \Omega)$ is a function $\pi : \Omega \to \mathcal{B}(H)$ such that for every $\omega_1, \omega_2 \in \Omega$:

1. $\pi(\varnothing) = 0$;
2. $\pi(X) = \operatorname{Id}_H$;
3. for every $\omega \in \Omega,$ $\pi(\omega)$ is a self-adjoint projection on $H$;
4. for every $x, y \in H,$ the map $\pi_{x,y} : \Omega \to \mathbb{C}$ defined by $\pi_{x,y}(\omega) = \langle \pi(\omega)x, y \rangle$ is a complex measure on $\Omega$;
5. $\pi\left(\omega_1 \cap \omega_2\right) = \pi\left(\omega_1\right) \circ \pi\left(\omega_2\right)$;
6. if $\omega_1 \cap \omega_2 = \varnothing$ then $\pi\left(\omega_1 \cup \omega_2\right) = \pi\left(\omega_1\right) + \pi\left(\omega_2\right)$.

If $\Omega$ is the $\sigma$-algebra of all Borel sets on a Hausdorff locally compact (or compact) space, then the following additional requirement is added:

7. for every $x, y \in H,$ the map $\pi_{x,y} : \Omega \to \mathbb{C}$ is a regular Borel measure (this is automatically satisfied on compact metric spaces).
Conditions 2, 3, and 4 imply that $\pi$ is a projection-valued measure.
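The axioms above can be exercised in the simplest possible finite-dimensional model. The following sketch assumes $X = \{0, \ldots, n-1\}$ with $\Omega$ all subsets, $H = \mathbb{R}^n$, and $\pi(\omega)$ the orthogonal projection onto the coordinates in $\omega$; this toy setup is my own illustration, not from the article.

```python
# Toy resolution of identity: X = {0,...,n-1}, Omega = all subsets,
# H = R^n, and pi(omega) = the diagonal projection onto the coordinates
# in omega.  Composing two such projections corresponds to intersecting
# the subsets.

n = 4
X = frozenset(range(n))

def apply_pi(omega, x):
    """Apply the projection pi(omega) to a vector x in R^n."""
    return [xi if i in omega else 0.0 for i, xi in enumerate(x)]

def pi_xx(omega, x):
    """The positive measure pi_{x,x}(omega) = <pi(omega)x, x> = ||pi(omega)x||^2."""
    return sum(xi * xi for xi in apply_pi(omega, x))

x = [1.0, 2.0, 3.0, 4.0]
w1, w2 = frozenset({0, 1}), frozenset({1, 2})

# Condition 5: pi(w1 ∩ w2) = pi(w1) ∘ pi(w2).
assert apply_pi(w1 & w2, x) == apply_pi(w1, apply_pi(w2, x))

# Condition 6 (finite additivity on disjoint sets): pi(w1 ∪ w3)x = pi(w1)x + pi(w3)x.
w3 = frozenset({2, 3})
summed = [a + b for a, b in zip(apply_pi(w1, x), apply_pi(w3, x))]
assert apply_pi(w1 | w3, x) == summed

# Total variation of pi_{x,x}: pi_{x,x}(X) = ||x||^2.
assert pi_xx(X, x) == sum(xi * xi for xi in x)
print(pi_xx(X, x))  # 30.0
```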
=== Properties ===
Throughout, let $\pi$ be a resolution of identity.
For all $x \in H,$ $\pi_{x,x} : \Omega \to \mathbb{C}$ is a positive measure on $\Omega$ with total variation $\left\|\pi_{x,x}\right\| = \pi_{x,x}(X) = \|x\|^2$ which satisfies $\pi_{x,x}(\omega) = \langle \pi(\omega)x, x \rangle = \|\pi(\omega)x\|^2$ for all $\omega \in \Omega.$
For every $\omega_1, \omega_2 \in \Omega$: $\pi\left(\omega_1\right)\pi\left(\omega_2\right) = \pi\left(\omega_2\right)\pi\left(\omega_1\right)$ (since both are equal to $\pi\left(\omega_1 \cap \omega_2\right)$).
If $\omega_1 \cap \omega_2 = \varnothing$ then the ranges of the maps $\pi\left(\omega_1\right)$ and $\pi\left(\omega_2\right)$ are orthogonal to each other and $\pi\left(\omega_1\right)\pi\left(\omega_2\right) = 0 = \pi\left(\omega_2\right)\pi\left(\omega_1\right).$
$\pi : \Omega \to \mathcal{B}(H)$ is finitely additive.
If $\omega_1, \omega_2, \ldots$ are pairwise disjoint elements of $\Omega$ whose union is $\omega$ and if $\pi\left(\omega_i\right) = 0$ for all $i$ then $\pi(\omega) = 0.$
However, $\pi : \Omega \to \mathcal{B}(H)$ is countably additive only in trivial situations, as is now described: suppose that $\omega_1, \omega_2, \ldots$ are pairwise disjoint elements of $\Omega$ whose union is $\omega$ and that the partial sums $\sum_{i=1}^{n} \pi\left(\omega_i\right)$ converge to $\pi(\omega)$ in $\mathcal{B}(H)$ (with its norm topology) as $n \to \infty$; then, since the norm of any projection is either $0$ or $\geq 1,$ the partial sums cannot form a Cauchy sequence unless all but finitely many of the $\pi\left(\omega_i\right)$ are $0.$
For any fixed $x \in H,$ the map $\pi_x : \Omega \to H$ defined by $\pi_x(\omega) := \pi(\omega)x$ is a countably additive $H$-valued measure on $\Omega.$
Here countably additive means that whenever $\omega_1, \omega_2, \ldots$ are pairwise disjoint elements of $\Omega$ whose union is $\omega,$ then the partial sums $\sum_{i=1}^{n} \pi\left(\omega_i\right)x$ converge to $\pi(\omega)x$ in $H.$ Said more succinctly, $\sum_{i=1}^{\infty} \pi\left(\omega_i\right)x = \pi(\omega)x.$
In other words, for every pairwise disjoint family of elements $\left(\omega_i\right)_{i=1}^{\infty} \subseteq \Omega$ whose union is $\omega_\infty \in \Omega,$ the sequence $\sum_{i=1}^{n} \pi\left(\omega_i\right) = \pi\left(\bigcup_{i=1}^{n} \omega_i\right)$ (by finite additivity of $\pi$) converges to $\pi\left(\omega_\infty\right)$ in the strong operator topology on $\mathcal{B}(H)$: for every $x \in H,$ the sequence of elements $\sum_{i=1}^{n} \pi\left(\omega_i\right)x$ converges to $\pi\left(\omega_\infty\right)x$ in $H$ (with respect to the norm topology).
== L∞(π) - space of essentially bounded functions ==
Let $\pi : \Omega \to \mathcal{B}(H)$ be a resolution of identity on $(X, \Omega).$
=== Essentially bounded functions ===
Suppose $f : X \to \mathbb{C}$ is a complex-valued $\Omega$-measurable function. There exists a unique largest open subset $V_f$ of $\mathbb{C}$ (ordered under subset inclusion) such that $\pi\left(f^{-1}\left(V_f\right)\right) = 0.$
To see why, let $D_1, D_2, \ldots$ be a basis for the topology of $\mathbb{C}$ consisting of open disks and suppose that $D_{i_1}, D_{i_2}, \ldots$ is the subsequence (possibly finite) consisting of those sets such that $\pi\left(f^{-1}\left(D_{i_k}\right)\right) = 0$; then $D_{i_1} \cup D_{i_2} \cup \cdots = V_f.$
Note that, in particular, if $D$ is an open subset of $\mathbb{C}$ such that $D \cap \operatorname{Im} f = \varnothing$ then $\pi\left(f^{-1}(D)\right) = \pi(\varnothing) = 0$ so that $D \subseteq V_f$ (although there are other ways in which $\pi\left(f^{-1}(D)\right)$ may equal 0). Indeed, $\mathbb{C} \setminus \operatorname{cl}(\operatorname{Im} f) \subseteq V_f.$
The essential range of $f$ is defined to be the complement of $V_f.$ It is the smallest closed subset of $\mathbb{C}$ that contains $f(x)$ for almost all $x \in X$ (that is, for all $x \in X$ except for those in some set $\omega \in \Omega$ such that $\pi(\omega) = 0$). The essential range is a closed subset of $\mathbb{C},$ so that if it is also a bounded subset of $\mathbb{C}$ then it is compact.
The function $f$ is essentially bounded if its essential range is bounded, in which case define its essential supremum, denoted by $\|f\|^{\infty},$ to be the supremum of all $|\lambda|$ as $\lambda$ ranges over the essential range of $f.$
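The distinction between the essential supremum $\|f\|^{\infty}$ and the ordinary supremum can be computed in a toy discrete setting. The sketch below assumes $X$ finite and $\pi(\omega) = 0$ exactly when $\omega$ is contained in a fixed "null" set $N$; the sets and the function $f$ are illustrative choices of mine.

```python
# Toy essential supremum: X finite, pi(omega) = 0 exactly when omega is
# contained in the null set N, so values of f on N are "almost nowhere".

X = list(range(6))
N = {4, 5}                      # points ignored by the resolution pi
f = {0: 1.0, 1: -2.0, 2: 0.5, 3: 1.5, 4: 100.0, 5: -7.0}

# Essential range: the values f takes outside the null set.
essential_range = {f[x] for x in X if x not in N}

# Essential supremum: sup of |lambda| over the essential range.
# The large values attained on N do not contribute.
ess_sup = max(abs(v) for v in essential_range)
print(ess_sup)  # 2.0

# Contrast with the ordinary sup norm on B(X, Omega):
sup_norm = max(abs(f[x]) for x in X)
print(sup_norm)  # 100.0
```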
=== Space of essentially bounded functions ===
Let $\mathcal{B}(X, \Omega)$ be the vector space of all bounded complex-valued $\Omega$-measurable functions $f : X \to \mathbb{C},$ which becomes a Banach algebra when normed by $\|f\|_{\infty} := \sup_{x \in X} |f(x)|.$
The function $\|\cdot\|^{\infty}$ is a seminorm on $\mathcal{B}(X, \Omega),$ but not necessarily a norm. The kernel of this seminorm, $N^{\infty} := \left\{f \in \mathcal{B}(X, \Omega) : \|f\|^{\infty} = 0\right\},$ is a vector subspace of $\mathcal{B}(X, \Omega)$ that is a closed two-sided ideal of the Banach algebra $\left(\mathcal{B}(X, \Omega), \|\cdot\|_{\infty}\right).$
Hence the quotient of $\mathcal{B}(X, \Omega)$ by $N^{\infty}$ is also a Banach algebra, denoted by $L^{\infty}(\pi) := \mathcal{B}(X, \Omega) / N^{\infty},$ where the norm of any element $f + N^{\infty} \in L^{\infty}(\pi)$ is equal to $\|f\|^{\infty}$ (since if $f + N^{\infty} = g + N^{\infty}$ then $\|f\|^{\infty} = \|g\|^{\infty}$), and this norm makes $L^{\infty}(\pi)$ into a Banach algebra.
The spectrum of $f + N^{\infty}$ in $L^{\infty}(\pi)$ is the essential range of $f.$
This article will follow the usual practice of writing $f$ rather than $f + N^{\infty}$ to represent elements of $L^{\infty}(\pi).$
== Spectral theorem ==
The maximal ideal space of a Banach algebra $A$ is the set of all complex homomorphisms $A \to \mathbb{C},$ which we'll denote by $\sigma_A.$ For every $T$ in $A,$ the Gelfand transform of $T$ is the map $G(T) : \sigma_A \to \mathbb{C}$ defined by $G(T)(h) := h(T).$
$\sigma_A$ is given the weakest topology making every $G(T) : \sigma_A \to \mathbb{C}$ continuous. With this topology, $\sigma_A$ is a compact Hausdorff space and, for every $T$ in $A,$ $G(T)$ belongs to $C\left(\sigma_A\right),$ which is the space of continuous complex-valued functions on $\sigma_A.$
The range of $G(T)$ is the spectrum $\sigma(T),$ and the spectral radius is equal to $\max\left\{|G(T)(h)| : h \in \sigma_A\right\},$ which is $\leq \|T\|.$
The above result can be specialized to a single normal bounded operator.
== See also ==
Projection-valued measure – Mathematical operator-valued measure of interest in quantum mechanics and functional analysis
Spectral theory of compact operators
Spectral theorem – Result about when a matrix can be diagonalized
== References ==
Robertson, A. P. (1973). Topological vector spaces. Cambridge England: University Press. ISBN 0-521-29882-2. OCLC 589250.
Robertson, Alex P.; Robertson, Wendy J. (1980). Topological Vector Spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge England: Cambridge University Press. ISBN 978-0-521-29882-7. OCLC 589250.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
In mathematics, the Dirichlet eigenvalues are the fundamental modes of vibration of an idealized drum with a given shape. The problem of whether one can hear the shape of a drum is: given the Dirichlet eigenvalues, what features of the shape of the drum can one deduce? Here a "drum" is thought of as an elastic membrane Ω, which is represented as a planar domain whose boundary is fixed. The Dirichlet eigenvalues are found by solving the following problem for an unknown function u ≠ 0 and eigenvalue λ: $\Delta u + \lambda u = 0$ in Ω, with $u = 0$ on its boundary. (1)
Here Δ is the Laplacian, which is given in xy-coordinates by $\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}.$
The boundary value problem (1) is the Dirichlet problem for the Helmholtz equation, and so λ is known as a Dirichlet eigenvalue for Ω. Dirichlet eigenvalues are contrasted with Neumann eigenvalues: eigenvalues for the corresponding Neumann problem. The Laplace operator Δ appearing in (1) is often known as the Dirichlet Laplacian when it is considered as accepting only functions u satisfying the Dirichlet boundary condition. More generally, in spectral geometry one considers (1) on a manifold with boundary Ω. Then Δ is taken to be the Laplace–Beltrami operator, also with Dirichlet boundary conditions.
It can be shown, using the spectral theorem for compact self-adjoint operators, that the eigenspaces are finite-dimensional and that the Dirichlet eigenvalues λ are real, positive, and have no limit point. Thus they can be arranged in increasing order: $0 < \lambda_1 \leq \lambda_2 \leq \cdots, \quad \lambda_n \to \infty,$
where each eigenvalue is counted according to its geometric multiplicity. The eigenspaces are orthogonal in the space of square-integrable functions and consist of smooth functions. In fact, the Dirichlet Laplacian has a continuous extension to an operator from the Sobolev space $H_0^2(\Omega)$ into $L^2(\Omega)$. This operator is invertible, and its inverse is compact and self-adjoint, so that the usual spectral theorem can be applied to obtain the eigenspaces of Δ and the reciprocals 1/λ of its eigenvalues.
One of the primary tools in the study of the Dirichlet eigenvalues is the max-min principle: the first eigenvalue $\lambda_1$ minimizes the Dirichlet energy. To wit, $\lambda_1 = \inf_{u \neq 0} \frac{\int_\Omega |\nabla u|^2}{\int_\Omega |u|^2},$ where the infimum is taken over all $u$ of compact support that do not vanish identically in Ω. By a density argument, this infimum agrees with that taken over nonzero $u \in H_0^1(\Omega)$. Moreover, using results from the calculus of variations analogous to the Lax–Milgram theorem, one can show that a minimizer exists in $H_0^1(\Omega)$. More generally, one has $\lambda_k = \sup \inf \frac{\int_\Omega |\nabla u|^2}{\int_\Omega |u|^2},$
where the supremum is taken over all $(k-1)$-tuples $\phi_1, \dots, \phi_{k-1} \in H_0^1(\Omega)$ and the infimum over all $u$ orthogonal to the $\phi_i.$
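The variational characterization can be checked numerically in one dimension. The sketch below assumes the interval Ω = (0, π), whose exact first Dirichlet eigenvalue is 1 with eigenfunction sin x; the midpoint-rule quadrature and the trial functions are illustration choices of mine, not from the article.

```python
import math

# Numerical illustration of the min principle on Omega = (0, pi):
# the Rayleigh quotient  R(u) = ∫|u'|^2 / ∫|u|^2  is minimized by the
# first eigenfunction sin(x), with R(sin) = lambda_1 = 1.

def rayleigh(u, du, a=0.0, b=math.pi, n=100000):
    """Approximate the Rayleigh quotient of u by the midpoint rule."""
    h = (b - a) / n
    num = sum(du(a + (k + 0.5) * h) ** 2 for k in range(n)) * h
    den = sum(u(a + (k + 0.5) * h) ** 2 for k in range(n)) * h
    return num / den

# The minimizer: u = sin(x), giving R(u) = 1.
r_min = rayleigh(math.sin, math.cos)
print(round(r_min, 6))  # 1.0

# Any other admissible trial function gives a larger quotient; e.g. the
# tent function peaked at pi/2 yields 12/pi^2 ≈ 1.216 > 1.
tent = lambda x: min(x, math.pi - x)
dtent = lambda x: 1.0 if x < math.pi / 2 else -1.0
r_tent = rayleigh(tent, dtent)
print(r_tent > r_min)  # True
```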
== Applications ==
The Dirichlet Laplacian arises in various problems of mathematical physics: it may describe the modes of an idealized drum, small waves at the surface of an idealized pool, as well as the modes of an idealized optical fiber in the paraxial approximation. The last application is the most practical in connection with double-clad fibers; in such fibers, it is important that most of the modes fill the domain uniformly, or that most of the rays cross the core. The poorest shape seems to be the circularly symmetric domain: the modes of the pump should not avoid the active core used in double-clad fiber amplifiers. A spiral-shaped domain happens to be especially efficient for such an application due to the boundary behavior of the modes of the Dirichlet Laplacian.
The theorem about the boundary behavior of the Dirichlet Laplacian is the analogue of a property of rays in geometrical optics (Fig. 1): the angular momentum of a ray (green) increases at each reflection from the spiral part of the boundary (blue) until the ray hits the chunk (red); all rays (except those parallel to the optical axis) unavoidably visit the region in the vicinity of the chunk to drop the excess of angular momentum. Similarly, all the modes of the Dirichlet Laplacian have non-zero values in the vicinity of the chunk. The normal component of the derivative of a mode at the boundary can be interpreted as a pressure; the pressure integrated over the surface gives a force. As a mode is a steady-state solution of the propagation equation (with trivial dependence on the longitudinal coordinate), the total force should be zero. Similarly, the angular momentum of the pressure force should also be zero. However, there exists a formal proof that does not refer to the analogy with the physical system.
== See also ==
Rayleigh–Faber–Krahn inequality
== Notes ==
== References ==
Benguria, Rafael D. "Dirichlet Eigenvalue". Encyclopedia of Mathematics. Springer. Retrieved 28 October 2021.
Chavel, Isaac (1984). Eigenvalues in Riemannian geometry. Pure Appl. Math. Vol. 115. Academic Press. ISBN 978-0-12-170640-1..
Courant, Richard; Hilbert, David (1962). Methods of Mathematical Physics, Volume I. Wiley-Interscience.
In geometric measure theory the area formula relates the Hausdorff measure of the image of a Lipschitz map, while accounting for multiplicity, to the integral of the Jacobian of the map. It is one of the fundamental results of the field that has connections, for example, to rectifiability and Sard's theorem.
Definition: Given $f \colon \mathbb{R}^n \to \mathbb{R}^m$ and $A \subset \mathbb{R}^n$, the multiplicity function $N(f, A, y),\ y \in \mathbb{R}^m$, is the (possibly infinite) number of points in the preimage $f^{-1}(y) \cap A$. The multiplicity function is also called the Banach indicatrix. Note that $N(f, A, y) = \mathcal{H}^0(f^{-1}(y) \cap A)$. Here, $\mathcal{H}^n$ denotes the n-dimensional Hausdorff measure, and $\mathcal{L}^n$ will denote the n-dimensional Lebesgue measure.
Theorem: If $f \colon \mathbb{R}^n \to \mathbb{R}^m$ is Lipschitz and $n \leq m$, then for any measurable $A \subset \mathbb{R}^n$, $\int_A J(Df(x))\, d\mathcal{L}^n(x) = \int_{\mathbb{R}^m} N(f, A, y)\, d\mathcal{H}^n(y),$ where $J(Df(x)) = \sqrt{\det\left(Df(x)^t Df(x)\right)}$ is the Jacobian of $Df(x)$.
The measurability of the multiplicity function is part of the claim. The Jacobian is defined almost everywhere by Rademacher's differentiability theorem.
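Both sides of the formula can be computed by hand in a simple case. The following sanity check assumes the Lipschitz map f(t) = (cos t, sin t) with n = 1, m = 2 and A = [0, 4π), which wraps twice around the unit circle; this particular example is my own illustration, not from the source.

```python
import math

# Area formula check for f(t) = (cos t, sin t) on A = [0, 4*pi):
# the map covers the unit circle twice, so N(f, A, y) = 2 for every y
# on the circle.  Df(t) = (-sin t, cos t), hence J(Df(t)) = 1.

A_length = 4 * math.pi

# Left-hand side: integral of the Jacobian over A (midpoint rule).
n = 100000
h = A_length / n
lhs = sum(math.hypot(-math.sin((k + 0.5) * h), math.cos((k + 0.5) * h))
          for k in range(n)) * h

# Right-hand side: multiplicity 2 times H^1 of the unit circle (= 2*pi).
rhs = 2 * (2 * math.pi)

print(abs(lhs - rhs) < 1e-9)  # True
```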
The theorem was proved first by Herbert Federer (Federer 1969).
== Sources ==
== External links ==
"Area formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
The icosian calculus is a non-commutative algebraic structure discovered by the Irish mathematician William Rowan Hamilton in 1856.
In modern terms, he gave a group presentation of the icosahedral rotation group by generators and relations.
Hamilton's discovery derived from his attempts to find an algebra of "triplets" or 3-tuples that he believed would reflect the three Cartesian axes. The symbols of the icosian calculus correspond to moves between vertices on a dodecahedron. (Hamilton originally thought in terms of moves between the faces of an icosahedron, which is equivalent by duality. This is the origin of the name "icosian".) Hamilton's work in this area resulted indirectly in the terms Hamiltonian circuit and Hamiltonian path in graph theory. He also invented the icosian game as a means of illustrating and popularising his discovery.
== Informal definition ==
The algebra is based on three symbols, $\iota$, $\kappa$, and $\lambda$, that Hamilton described as "roots of unity", by which he meant that repeated application of any of them a particular number of times yields the identity, which he denoted by 1. Specifically, they satisfy the relations $\iota^2 = 1,\quad \kappa^3 = 1,\quad \lambda^5 = 1.$
Hamilton gives one additional relation between the symbols, $\lambda = \iota\kappa,$ which is to be understood as application of $\kappa$ followed by application of $\iota$. Hamilton points out that application in the reverse order produces a different result, implying that composition or multiplication of symbols is not generally commutative, although it is associative. The symbols generate a group of order 60, isomorphic to the group of rotations of a regular icosahedron or dodecahedron, and therefore to the alternating group of degree five. This, however, is not how Hamilton described them.
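The relations can be verified concretely in a permutation representation of the alternating group A₅. The particular permutations below are one convenient choice of mine (an element of order 2 and one of order 3 whose product has order 5), not Hamilton's own notation.

```python
# Check Hamilton's relations iota^2 = kappa^3 = lambda^5 = 1, with
# lambda = iota * kappa, in a permutation representation of A5 on
# {0,...,4}.  A permutation p is a tuple with p[x] = image of x.

def compose(p, q):
    """(p * q)(x) = p(q(x)): apply q first, then p (Hamilton's order)."""
    return tuple(p[q[x]] for x in range(len(q)))

def power(p, k):
    r = tuple(range(len(p)))
    for _ in range(k):
        r = compose(p, r)
    return r

identity = (0, 1, 2, 3, 4)
iota  = (3, 4, 2, 0, 1)      # (0 3)(1 4): an even element of order 2
kappa = (1, 2, 0, 3, 4)      # (0 1 2):    an element of order 3
lam   = compose(iota, kappa) # apply kappa, then iota

assert power(iota, 2) == identity
assert power(kappa, 3) == identity
assert power(lam, 5) == identity          # lam is the 5-cycle (0 4 1 2 3)
assert compose(iota, kappa) != compose(kappa, iota)  # non-commutative
print(lam)  # (4, 2, 3, 0, 1)
```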
Hamilton drew comparisons between the icosians and his system of quaternions, but noted that, unlike quaternions, which can be added and multiplied, obeying a distributive law, the icosians could only, as far as he knew, be multiplied.
Hamilton understood his symbols by reference to the dodecahedron, which he represented in flattened form as a graph in the plane. The dodecahedron has 30 edges, and if arrows are placed on edges, there are two possible arrow directions for each edge, resulting in 60 directed edges. Each symbol corresponds to a permutation of the set of directed edges. The definitions below refer to the labeled diagram above. The notation (A, B) represents a directed edge from vertex A to vertex B. Vertex A is the tail of (A, B) and vertex B is its head.
The icosian symbol ι reverses the arrow on every directed edge, that is, it interchanges the head and tail. Hence (B, C) is transformed into (C, B). Similarly, applying ι to (C, B) produces (B, C), and to (R, S) produces (S, R).
The icosian symbol κ, applied to a directed edge e, produces the directed edge that (1) has the same head as e and that (2) is encountered first as one moves around the head of e in the anticlockwise direction. Hence applying κ to (B, C) produces (D, C), to (C, B) produces (Z, B), and to (R, S) produces (N, S).
The icosian symbol λ, applied to a directed edge e, produces the directed edge that results from making a right turn at the head of e. Hence applying λ to (B, C) produces (C, D), to (C, B) produces (B, A), and to (R, S) produces (S, N). Comparing the results of applying κ and λ to the same directed edge exhibits the rule λ = ικ.
It is useful to define the symbol μ for the operation that produces the directed edge that results from making a left turn at the head of the directed edge to which the operation is applied. This symbol satisfies the relations
{\displaystyle \mu =\lambda \kappa =\iota \kappa ^{2}.}
For example, the directed edge obtained by making a left turn from (B, C) is (C, P). Indeed, κ applied to (B, C) produces (D, C), and λ applied to (D, C) produces (C, P). Also, κ² applied to (B, C) produces (P, C), and ι applied to (P, C) produces (C, P).
These permutations are not rotations of the dodecahedron. Nevertheless, the group of permutations generated by these symbols is isomorphic to the rotation group of the dodecahedron, a fact that can be deduced from a specific feature of symmetric cubic graphs, of which the dodecahedron graph is an example. The rotation group of the dodecahedron has the property that for a given directed edge there is a unique rotation that sends that directed edge to any other specified directed edge. Hence by choosing a reference edge, say R = (B, C), a one-to-one correspondence between directed edges and rotations is established: let g_E be the rotation that sends the reference edge R to the directed edge E. (Indeed, there are 60 directed edges and 60 rotations.) The rotations are permutations of the set of directed edges of a different sort. Let g(E) denote the image of the edge E under the rotation g. The icosian associated to g sends the reference edge R to the same directed edge as does g, namely to g(R). The result of applying that icosian to any other directed edge E is
{\displaystyle g_{E}g(R)=g_{E}gg_{E}^{-1}(E)}.
== Application to Hamiltonian circuits on the edges of the dodecahedron ==
A word consisting of the symbols λ and μ corresponds to a sequence of right and left turns in the graph. Specifying such a word along with an initial directed edge therefore specifies a directed path along the edges of the dodecahedron. If the group element represented by the word equals the identity, then the path returns to the initial directed edge in the final step. If the additional requirement is imposed that every vertex of the graph be visited exactly once—specifically that every vertex occur exactly once as the head of a directed edge in the path—then a Hamiltonian circuit is obtained. Finding such a circuit was one of the challenges posed by Hamilton's icosian game. Hamilton exhibited the word
{\displaystyle (\lambda ^{3}\mu ^{3}(\lambda \mu )^{2})^{2}}
with the properties described above. Any of the 60 directed edges may serve as initial edge as a consequence of the symmetry of the dodecahedron, but only 30 distinct Hamiltonian circuits are obtained in this way, up to shift in starting point, because the word consists of the same sequence of 10 left and right turns repeated twice. The word with the roles of λ and μ interchanged has the same properties, but these give the same Hamiltonian cycles, up to shift in initial edge and reversal of direction. Hence Hamilton's word accounts for all Hamiltonian cycles in the dodecahedron, whose number is known to be 30.
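Hamilton's word can be expanded mechanically into its turn sequence; a minimal sketch, writing R for λ (a right turn) and L for μ (a left turn):

```python
# Expand Hamilton's word (λ³μ³(λμ)²)² into a sequence of turns:
# R stands for λ (right turn), L for μ (left turn).
inner = "R" * 3 + "L" * 3 + "RL" * 2   # λ³ μ³ (λμ)²
word = inner * 2                        # the outer square
assert len(word) == 20                  # one turn per vertex of the dodecahedron
assert word == word[10:] + word[:10]    # the 10-turn block repeats, which is why
                                        # each circuit arises from two starting shifts
```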
== Legacy ==
The icosian calculus is one of the earliest examples of many mathematical ideas, including:
presenting and studying a group by generators and relations;
visualization of a group by a graph, which led to combinatorial group theory and later geometric group theory;
Hamiltonian circuits and Hamiltonian paths in graph theory;
dessin d'enfant – see dessin d'enfant: history for details.
== See also ==
Icosian
== References == | Wikipedia/Icosian_calculus |
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether an arbitrary program eventually halts when run.
== Background ==
A decision problem is a question which, for every input in some infinite set of inputs, requires a "yes" or "no" answer. Those inputs can be numbers (for example, the decision problem "is the input a prime number?") or values of some other kind, such as strings of a formal language.
The formal representation of a decision problem is a subset of the natural numbers. For decision problems on natural numbers, the set consists of those numbers that the decision problem answers "yes" to. For example, the decision problem "is the input even?" is formalized as the set of even numbers. A decision problem whose input consists of strings or more complex values is formalized as the set of numbers that, via a specific Gödel numbering, correspond to inputs that satisfy the decision problem's criteria.
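As a toy illustration of this formalization, the problem "is the input even?" corresponds to the set of even naturals, and deciding it amounts to computing the set's characteristic function:

```python
# The decision problem "is the input even?" formalized as a set of naturals.
def is_even(n: int) -> bool:   # a total, computable characteristic function,
    return n % 2 == 0          # so this problem is decidable

evens = {n for n in range(10) if is_even(n)}
assert evens == {0, 2, 4, 6, 8}
```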
A decision problem A is called decidable or effectively solvable if the formalized set of A is a recursive set. Otherwise, A is called undecidable. A problem is called partially decidable, semi-decidable, solvable, or provable if A is a recursively enumerable set.
== Example: the halting problem in computability theory ==
In computability theory, the halting problem is a decision problem which can be stated as follows:
Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever.
Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
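Turing's proof is a diagonal argument, which can be sketched in Python. Assume, for contradiction, a total function `halts(f, i)` that returns True exactly when `f(i)` halts; the stub below stands in for this impossible oracle.

```python
def halts(f, i):
    """Hypothetical oracle: True iff f(i) halts. Cannot actually exist."""
    raise NotImplementedError("no such total computable function exists")

def troublemaker(f):
    # Halts exactly when, according to the oracle, f(f) runs forever.
    if halts(f, f):
        while True:
            pass

# troublemaker(troublemaker) would halt if and only if halts(troublemaker,
# troublemaker) returns False, i.e. if and only if it does not halt -- so no
# implementation of `halts` can answer correctly on this input.
```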
== Relationship with Gödel's incompleteness theorem ==
The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.
The weaker form of the theorem can be proved from the undecidability of the halting problem as follows. Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide if the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete it follows that either there is an n such that N(n) = H(a, i) or there is an n′ such that N(n′) = ¬ H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt, and furthermore, the answer it gives us will be true (by soundness). This means that this gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
== Examples of undecidable problems ==
Undecidable problems can be related to different topics, such as logic, abstract machines or topology. Since there are uncountably many undecidable problems, any list, even one of infinite length, is necessarily incomplete.
== Examples of undecidable statements ==
There are two distinct senses of the word "undecidable" in contemporary use. The first of these is the sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either "the answer to A is yes" or "the answer to A is no".
Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. The usage of "independent" is also ambiguous, however. It can mean just "not provable", leaving open whether an independent statement might be refuted.
Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point among various philosophical schools.
One of the first problems suspected to be undecidable, in the second sense of the term, was the word problem for groups, first posed by Max Dehn in 1911, which asks if there is a finitely presented group for which no algorithm exists to determine whether two words are equivalent. This was shown to be the case in 1955.
The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proven from ZFC.
In 1970, Russian mathematician Yuri Matiyasevich showed that Hilbert's Tenth Problem, posed in 1900 as a challenge to the next century of mathematicians, cannot be solved. Hilbert's challenge sought an algorithm which finds all solutions of a Diophantine equation. A Diophantine equation is a more general case of Fermat's Last Theorem; we seek the integer roots of a polynomial in any number of variables with integer coefficients. Since we have only one equation but n variables, infinitely many solutions exist (and are easy to find) in the complex plane; however, the problem becomes impossible if solutions are constrained to integer values only. Matiyasevich showed this problem to be unsolvable by mapping a Diophantine equation to a recursively enumerable set and invoking Gödel's Incompleteness Theorem.
In 1936, Alan Turing proved that the halting problem—the question of whether or not a Turing machine halts on a given program—is undecidable, in the second sense of the term. This result was later generalized by Rice's theorem.
In 1973, Saharon Shelah showed the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory.
In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of Ramsey's theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of second-order arithmetic.
Kruskal's tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on basis of a philosophy of mathematics called predicativism.
Goodstein's theorem is a statement about the Ramsey theory of the natural numbers that Kirby and Paris showed is undecidable in Peano arithmetic.
Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.
In 2007, researchers Kurtz and Simon, building on earlier work by J.H. Conway in the 1970s, proved that a natural generalization of the Collatz problem is undecidable.
In 2019, Ben-David and colleagues constructed an example of a learning model (named EMX), and showed a family of functions whose learnability in EMX is undecidable in standard set theory.
== See also ==
Decidability (logic)
Entscheidungsproblem
Proof of impossibility
Unknowability
Wicked problem
== Notes ==
== References == | Wikipedia/Algorithmically_insoluble |
Discrete calculus, or the calculus of discrete functions, is the mathematical study of incremental change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word calculus is a Latin word meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of continuous change.
Discrete calculus has two entry points, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of piece-wise linear curves. Integral calculus concerns accumulation of quantities and the areas under piece-wise constant curves. These two points of view are related to each other by the fundamental theorem of discrete calculus.
The study of the concepts of change starts with their discrete form. The development is dependent on a parameter, the increment Δx of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as limits. Informally, the limit of discrete calculus as Δx → 0 is infinitesimal calculus. Even though it serves as a discrete underpinning of calculus, the main value of discrete calculus is in applications.
== Two initial constructions ==
Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called differentiation. Given a function defined at several points of the real line, the difference quotient at that point is a way of encoding the small-scale (i.e., from the point to the next) behavior of the function. By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the difference quotient function or just the difference quotient of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function.
Suppose the functions are defined at points separated by an increment Δx = h > 0:
{\displaystyle a,a+h,a+2h,\ldots ,a+nh,\ldots }
The "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x². The "difference quotient" is the rate of change of the function over one of the intervals [x, x + h], defined by the formula:
{\displaystyle {\frac {f(x+h)-f(x)}{h}}.}
It takes the function f as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x + h, as it will turn out. As a matter of convenience, the new function may be defined at the middle points of the above intervals:
{\displaystyle a+h/2,a+h+h/2,a+2h+h/2,\ldots ,a+nh+h/2,\ldots }
As the rate of change is that for the whole interval [x, x + h], any point within it can be used as such a reference or, even better, the whole interval, which makes the difference quotient a 1-cochain.
The most common notation for the difference quotient is:
{\displaystyle {\frac {\Delta f}{\Delta x}}(x+h/2)={\frac {f(x+h)-f(x)}{h}}.}
If the input of the function represents time, then the difference quotient represents change with respect to time. For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of f is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is, if the points of the graph of the function lie on a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:
{\displaystyle m={\frac {\text{rise}}{\text{run}}}={\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {\Delta y}{\Delta x}}.}
This gives an exact value for the slope of a straight line.
If the function is not linear, however, then the change in y divided by the change in x varies. The difference quotient gives an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point x in the domain of f. (x, f(x)) is a point on the graph of the function. If h is the increment of x, then x + h is the next value of x. Therefore, (x + h, f(x + h)) is the increment of (x, f(x)). The slope of the line between these two points is
{\displaystyle m={\frac {f(x+h)-f(x)}{(x+h)-x}}={\frac {f(x+h)-f(x)}{h}}.}
So m is the slope of the line between (x, f(x)) and (x + h, f(x + h)).
Here is a particular example, the difference quotient of the squaring function. Let f(x) = x² be the squaring function. Then:
{\displaystyle {\begin{aligned}{\frac {\Delta f}{\Delta x}}(x)&={(x+h)^{2}-x^{2} \over {h}}\\&={x^{2}+2hx+h^{2}-x^{2} \over {h}}\\&={2hx+h^{2} \over {h}}\\&=2x+h.\end{aligned}}}
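The computation above can be confirmed numerically; a minimal sketch:

```python
from math import isclose

def diff_quotient(f, x, h):
    """Rate of change of f over the interval [x, x + h]."""
    return (f(x + h) - f(x)) / h

square = lambda x: x * x
h = 0.5
for x in [0.0, 1.0, 2.0, 3.0]:
    # agrees with the algebraic result 2x + h derived above
    assert isclose(diff_quotient(square, x, h), 2 * x + h)
```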
The difference quotient of the difference quotient is called the second difference quotient, and it is defined at
{\displaystyle a+h,a+2h,a+3h,\ldots ,a+nh,\ldots }
and so on.
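For the squaring function, iterating the construction gives a constant second difference quotient; a minimal sketch, using the midpoint convention described above:

```python
def dq(g, h):
    """Difference quotient of g, evaluated at midpoints via g(x ± h/2)."""
    return lambda x: (g(x + h / 2) - g(x - h / 2)) / h

h = 0.5
square = lambda x: x * x
second = dq(dq(square, h), h)   # the second difference quotient
# For x², the first difference quotient is 2x and the second is constant 2.
assert all(abs(second(x) - 2.0) < 1e-9 for x in [0.0, 1.0, 3.0])
```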
Discrete integral calculus is the study of the definitions, properties, and applications of the Riemann sums. The process of finding the value of a sum is called integration. In technical language, integral calculus studies a certain linear operator.
The Riemann sum inputs a function and outputs a function, which gives the algebraic sum of areas between the part of the graph of the input and the x-axis.
A motivating example is the distance traveled in a given time.
{\displaystyle {\text{distance}}={\text{speed}}\cdot {\text{time}}}
If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval.
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting an incrementally varying velocity over a given time period. If the bars in the diagram on the right represent speed as it varies from one interval to the next, the distance traveled (between the times represented by a and b) is the area of the shaded region s.
So, the interval between a and b is divided into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we have one value of the function f(x). Call that value v. Then the area of the rectangle with base Δx and height v gives the distance (time Δx multiplied by speed v) traveled in that segment. Associated with each segment is the value of the function above it, f(x) = v. The sum of all such rectangles gives the area between the axis and the piece-wise constant curve, which is the total distance traveled.
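A minimal numerical sketch of this sum of rectangles, with made-up speed data:

```python
# Distance traveled as a Riemann sum over piece-wise constant speed.
speeds = [30.0, 45.0, 50.0, 40.0]        # assumed speeds (mph) on four segments
dt = 0.5                                  # each segment lasts half an hour
distance = sum(v * dt for v in speeds)    # area under the piece-wise constant curve
assert distance == 82.5                   # miles
```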
Suppose a function is defined at the mid-points of the intervals of equal length Δx = h > 0:
{\displaystyle a+h/2,a+h+h/2,a+2h+h/2,\ldots ,a+nh-h/2,\ldots }
Then the Riemann sum from a to b = a + nh in sigma notation, evaluated at those mid-points, is:
{\displaystyle \sum _{i=1}^{n}f(a+ih-h/2)\,\Delta x.}
As this computation is carried out for each n, the new function is defined at the points:
{\displaystyle a,a+h,a+2h,\ldots ,a+nh,\ldots }
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus: If a function f is defined on a partition of the interval [a, b], b = a + nh, and if F is a function whose difference quotient is f, then we have:
{\displaystyle \sum _{i=0}^{n-1}f(a+ih+h/2)\,\Delta x=F(b)-F(a).}
Furthermore, for every m = 0, 1, 2, …, n − 1, we have:
{\displaystyle {\frac {\Delta }{\Delta x}}\sum _{i=0}^{m}f(a+ih+h/2)\,\Delta x=f(a+mh+h/2).}
This is also a prototype solution of a difference equation. Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences.
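The first identity of the theorem can be verified numerically, since the sum of midpoint difference quotients telescopes. A sketch, with an arbitrarily chosen F:

```python
def check_fundamental_theorem(F, a, h, n):
    """Sum the midpoint difference quotient of F and compare with F(b) - F(a)."""
    f = lambda t: (F(t + h / 2) - F(t - h / 2)) / h   # difference quotient at midpoints
    riemann = sum(f(a + i * h + h / 2) * h for i in range(n))
    return riemann, F(a + n * h) - F(a)

left, right = check_fundamental_theorem(lambda x: x ** 3 - 2 * x, a=1.0, h=0.25, n=8)
assert abs(left - right) < 1e-9   # the sum telescopes to F(b) - F(a)
```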
== History ==
The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly in definitions and proofs. After the limit is taken, however, they are never to be seen again. However, Kirchhoff's voltage law (1847) can be expressed in terms of the one-dimensional discrete exterior derivative.
During the 20th century discrete calculus remains interlinked with infinitesimal calculus especially differential forms but also starts to draw from algebraic topology as both develop. The main contributions come from the following individuals:
Henri Poincaré: triangulations (barycentric subdivision, dual triangulation), Poincaré lemma, the first proof of the general Stokes Theorem, and a lot more
L. E. J. Brouwer: simplicial approximation theorem
Élie Cartan, Georges de Rham: the notion of differential form, the exterior derivative as a coordinate-independent linear operator, exactness/closedness of forms
Emmy Noether, Heinz Hopf, Leopold Vietoris, Walther Mayer: modules of chains, the boundary operator, chain complexes
J. W. Alexander, Solomon Lefschetz, Lev Pontryagin, Andrey Kolmogorov, Norman Steenrod, Eduard Čech: the early cochain notions
Hermann Weyl: the Kirchhoff laws stated in terms of the boundary and the coboundary operators
W. V. D. Hodge: the Hodge star operator, the Hodge decomposition
Samuel Eilenberg, Saunders Mac Lane, Norman Steenrod, J.H.C. Whitehead: the rigorous development of homology and cohomology theory including chain and cochain complexes, the cup product
Hassler Whitney: cochains as integrands
The recent development of discrete calculus, starting with Whitney, has been driven by the needs of applied modeling.
== Applications ==
Discrete calculus is used for modeling either directly or indirectly as a discretization of infinitesimal calculus in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.
Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: as historically stated, it expressly uses the term "change of motion", which implies the difference quotient, saying: The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × Acceleration, it invokes discrete calculus when the change is incremental because acceleration is the difference quotient of velocity with respect to time, or the second difference quotient of the spatial position. Starting from knowing how an object is accelerating, we use the Riemann sums to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity have been expressed in the language of discrete calculus.
Chemistry uses calculus in determining reaction rates and radioactive decay (exponential decay).
In biology, population dynamics starts with reproduction and death rates to model population changes (population modeling).
In engineering, difference equations are used to plot a course of a spacecraft within zero gravity environments, to model heat transfer, diffusion, and wave propagation.
The discrete analogue of Green's theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, to rapidly extract features and detect objects; another algorithm that could be used is the summed area table.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.
In economics, calculus allows for the determination of maximal profit by calculating both marginal cost and marginal revenue, as well as modeling of markets.
In signal processing and machine learning, discrete calculus allows for appropriate definitions of operators (e.g., convolution), level set optimization and other key functions for neural network analysis on graph structures.
Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function.
== Calculus of differences and sums ==
Suppose a function (a 0-cochain) f is defined at points separated by an increment Δx = h > 0:
a, a + h, a + 2h, …, a + nh, …
The difference (or the exterior derivative, or the coboundary operator) of the function is given by:
(Δf)([x, x + h]) = f(x + h) − f(x).
It is defined at each of the above intervals; it is a 1-cochain.
Suppose a 1-cochain g is defined at each of the above intervals. Then its sum is a function (a 0-cochain) defined at each of the points by:
\left(\sum g\right)(a + nh) = \sum_{i=1}^{n} g([a + (i-1)h, a + ih]).
These are their properties:
Constant rule: If c is a constant, then Δc = 0.
Linearity: if a and b are constants, Δ(af + bg) = a Δf + b Δg, and Σ(af + bg) = a Σf + b Σg.
Product rule: Δ(fg) = f Δg + g Δf + Δf Δg.
Fundamental theorem of calculus I: (ΣΔf)(a + nh) = f(a + nh) − f(a).
Fundamental theorem of calculus II: Δ(Σg) = g.
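These identities can be checked numerically. The following is a minimal sketch (not from the article); representing a 1-cochain by its value as a function of the left endpoint of each interval is an implementation choice, not part of the theory.

```python
# Difference and sum operators on the grid a, a + h, ..., a + n*h,
# checking the two fundamental theorems and the discrete product rule.

def diff(f, h):
    """The 1-cochain Delta f: interval [x, x+h] -> f(x+h) - f(x),
    represented as a function of the left endpoint x."""
    return lambda x: f(x + h) - f(x)

def sum_op(g, a, h):
    """The 0-cochain (sum of g): a + n*h -> sum of g over the first n intervals."""
    def F(x):
        n = round((x - a) / h)
        return sum(g(a + i * h) for i in range(n))
    return F

a, h, n = 0.0, 0.5, 6
f = lambda x: x * x        # a 0-cochain
g = lambda x: 3.0          # a constant 1-cochain

# Fundamental theorem I: (sum of Delta f)(a + n*h) = f(a + n*h) - f(a)
x = a + n * h
assert abs(sum_op(diff(f, h), a, h)(x) - (f(x) - f(a))) < 1e-12

# Fundamental theorem II: Delta(sum of g) = g
F = sum_op(g, a, h)
assert abs(diff(F, h)(a + 2 * h) - g(a + 2 * h)) < 1e-12

# Product rule: Delta(fp) = f*Delta p + p*Delta f + Delta f * Delta p
p = lambda x: x + 1.0
lhs = diff(lambda x: f(x) * p(x), h)(a)
rhs = f(a) * diff(p, h)(a) + p(a) * diff(f, h)(a) + diff(f, h)(a) * diff(p, h)(a)
assert abs(lhs - rhs) < 1e-12
print("all identities hold")
```

The extra term Δf Δg in the product rule, absent from the infinitesimal version, is exactly what the last assertion exercises.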
The definitions are applied to graphs as follows. If a function (a 0-cochain) f is defined at the nodes of a graph: a, b, c, …,
then its exterior derivative (or the differential) is the difference, i.e., the following function defined on the edges of the graph (a 1-cochain):
(df)([a, b]) = f(b) − f(a).
If g is a 1-cochain, then its integral over a sequence of edges σ of the graph is the sum of its values over all edges of σ ("path integral"):
\int_{\sigma} g = \sum_{\sigma} g([a, b]).
These are the properties:
Constant rule: If c is a constant, then dc = 0.
Linearity: if a and b are constants, d(af + bg) = a df + b dg, and ∫_σ (af + bg) = a ∫_σ f + b ∫_σ g.
Product rule: d(fg) = f dg + g df + df dg.
Fundamental theorem of calculus I: if a 1-chain σ consists of the edges [a_0, a_1], [a_1, a_2], …, [a_{n-1}, a_n], then for any 0-cochain f, ∫_σ df = f(a_n) − f(a_0).
Fundamental theorem of calculus II: if the graph is a tree, g is a 1-cochain, and a function (0-cochain) is defined on the nodes of the graph by f(x) = ∫_σ g, where the 1-chain σ consists of [a_0, a_1], [a_1, a_2], …, [a_{n-1}, x] for some fixed a_0, then df = g.
See references.
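The graph version of the fundamental theorem can be illustrated with a short sketch (not from the article); the dictionary representation of a 0-cochain is an implementation choice.

```python
# The exterior derivative of a 0-cochain on a graph and its "path integral"
# over a 1-chain of oriented edges.

f = {"a": 1.0, "b": 4.0, "c": 2.0, "d": 7.0}   # 0-cochain on the nodes

def d(f):
    """Exterior derivative: (df)([u, v]) = f(v) - f(u) on each oriented edge."""
    return lambda u, v: f[v] - f[u]

def path_integral(g, sigma):
    """Sum a 1-cochain g over the oriented edges of the 1-chain sigma."""
    return sum(g(u, v) for u, v in sigma)

sigma = [("a", "b"), ("b", "c"), ("c", "d")]   # 1-chain from a to d
df = d(f)

# Fundamental theorem I: the integral of df over sigma telescopes.
assert path_integral(df, sigma) == f["d"] - f["a"]
print(path_integral(df, sigma))   # 6.0
```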
== Chains of simplices and cubes ==
A simplicial complex S is a set of simplices that satisfies the following conditions:
1. Every face of a simplex from S is also in S.
2. The non-empty intersection of any two simplices σ_1, σ_2 ∈ S is a face of both σ_1 and σ_2.
By definition, an orientation of a k-simplex is given by an ordering of the vertices, written as (v_0, …, v_k), with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean.
Let S be a simplicial complex. A simplicial k-chain is a finite formal sum
\sum_{i=1}^{N} c_i \sigma_i,
where each c_i is an integer and σ_i is an oriented k-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example,
(v_0, v_1) = −(v_1, v_0).
The vector space of k-chains on S is written C_k. It has a basis in one-to-one correspondence with the set of k-simplices in S. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices.
Let σ = (v_0, …, v_k) be an oriented k-simplex, viewed as a basis element of C_k. The boundary operator ∂_k : C_k → C_{k−1} is the linear operator defined by:
\partial_k(\sigma) = \sum_{i=0}^{k} (-1)^{i} (v_0, \dots, \widehat{v_i}, \dots, v_k),
where the oriented simplex (v_0, …, \widehat{v_i}, …, v_k) is the i-th face of σ, obtained by deleting its i-th vertex.
In C_k, elements of the subgroup Z_k = ker ∂_k are referred to as cycles, and the subgroup B_k = im ∂_{k+1} is said to consist of boundaries.
A direct computation shows that ∂² = 0. In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the vector spaces (C_k, ∂_k) form a chain complex. Another equivalent statement is that B_k is contained in Z_k.
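The identity ∂² = 0 can be verified directly on the smallest non-trivial example. The following sketch (not from the article) writes the boundary operators of the full triangle (0, 1, 2) as integer matrices; the basis orderings are an arbitrary choice.

```python
# Bases: vertices (0), (1), (2); edges (0,1), (0,2), (1,2); one 2-simplex (0,1,2).
# partial_1 sends an edge (u, v) to (v) - (u); columns are edges.
partial1 = [
    [-1, -1,  0],   # coefficient of vertex (0)
    [ 1,  0, -1],   # coefficient of vertex (1)
    [ 0,  1,  1],   # coefficient of vertex (2)
]
# partial_2 sends (0,1,2) to (1,2) - (0,2) + (0,1), by the alternating-sum formula.
partial2 = [[1], [-1], [1]]   # coefficients of (0,1), (0,2), (1,2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# partial_1 . partial_2 = 0: the boundary of a boundary is empty.
assert matmul(partial1, partial2) == [[0], [0], [0]]
print("boundary of a boundary is zero")
```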
A cubical complex is a set composed of points, line segments, squares, cubes, and their n-dimensional counterparts. They are used analogously to simplices to form complexes. An elementary interval is a subset I ⊂ R of the form I = [ℓ, ℓ + 1] or I = [ℓ, ℓ] for some ℓ ∈ Z. An elementary cube Q is a finite product of elementary intervals, i.e.,
Q = I_1 × I_2 × ⋯ × I_d ⊂ R^d,
where I_1, I_2, …, I_d are elementary intervals. Equivalently, an elementary cube is any translate of a unit cube [0, 1]^n embedded in Euclidean space R^d (for some n, d ∈ N ∪ {0} with n ≤ d). A set X ⊆ R^d is a cubical complex if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set) and it contains all of the faces of all of its cubes. The boundary operator and the chain complex are defined similarly to those for simplicial complexes.
More general are cell complexes.
A chain complex (C_∗, ∂_∗) is a sequence of vector spaces …, C_0, C_1, C_2, C_3, C_4, … connected by linear operators (called boundary operators) ∂_n : C_n → C_{n−1}, such that the composition of any two consecutive maps is the zero map. Explicitly, the boundary operators satisfy ∂_n ∘ ∂_{n+1} = 0, or with indices suppressed, ∂² = 0. The complex may be written out as follows.
\cdots \xleftarrow{\partial_0} C_0 \xleftarrow{\partial_1} C_1 \xleftarrow{\partial_2} C_2 \xleftarrow{\partial_3} C_3 \xleftarrow{\partial_4} C_4 \xleftarrow{\partial_5} \cdots
A simplicial map is a map between simplicial complexes with the property that the images of the vertices of a simplex always span a simplex (therefore, vertices have vertices for images). A simplicial map f from a simplicial complex S to another T is a function from the vertex set of S to the vertex set of T such that the image of each simplex in S (viewed as a set of vertices) is a simplex in T. It generates a linear map, called a chain map, from the chain complex of S to the chain complex of T. Explicitly, it is given on k-chains by
f((v_0, …, v_k)) = (f(v_0), …, f(v_k))
if f(v_0), …, f(v_k) are all distinct, and otherwise it is set equal to 0.
A chain map f between two chain complexes (A_∗, d_{A,∗}) and (B_∗, d_{B,∗}) is a sequence f_∗ of homomorphisms f_n : A_n → B_n for each n that commutes with the boundary operators on the two chain complexes, so d_{B,n} ∘ f_n = f_{n−1} ∘ d_{A,n}; the maps thus fit into a commutative diagram.
A chain map sends cycles to cycles and boundaries to boundaries.
See references.
== Discrete differential forms: cochains ==
For each vector space C_i in the chain complex we consider its dual space C_i^∗ := Hom(C_i, R), and the dual of the boundary operator is the linear operator d^{i−1} = ∂_i^∗ : C_{i−1}^∗ → C_i^∗.
This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex
\cdots \leftarrow C_{i+1}^{*} \xleftarrow{\partial_i^{*}} C_i^{*} \xleftarrow{\partial_{i-1}^{*}} C_{i-1}^{*} \leftarrow \cdots
The cochain complex (C^∗, d^∗) is the dual notion to a chain complex. It consists of a sequence of vector spaces …, C^0, C^1, C^2, C^3, C^4, … connected by linear operators d^n : C^n → C^{n+1} satisfying d^{n+1} ∘ d^n = 0. The cochain complex may be written out in a similar fashion to the chain complex.
\cdots \xrightarrow{d^{-1}} C^0 \xrightarrow{d^0} C^1 \xrightarrow{d^1} C^2 \xrightarrow{d^2} C^3 \xrightarrow{d^3} C^4 \xrightarrow{d^4} \cdots
The index n in either C_n or C^n is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension.
The elements of the individual vector spaces of a (co)chain complex are called cochains. The elements in the kernel of d are called cocycles (or closed elements), and the elements in the image of d are called coboundaries (or exact elements). Right from the definition of the differential, all coboundaries are cocycles.
The Poincaré lemma states that if B is an open ball in R^n, any closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n.
When we refer to cochains as discrete (differential) forms, we refer to d as the exterior derivative. We also use the calculus notation for the values of the forms:
ω(s) = ∫_s ω.
Stokes' theorem is a statement about the discrete differential forms on manifolds, which generalizes the fundamental theorem of discrete calculus for a partition of an interval:
\sum_{i=0}^{n-1} \frac{\Delta F}{\Delta x}(a + ih + h/2)\, \Delta x = F(b) - F(a).
Stokes' theorem says that the sum of a form ω over the boundary of some orientable manifold Ω is equal to the sum of its exterior derivative dω over the whole of Ω, i.e.,
\int_{\Omega} d\omega = \int_{\partial \Omega} \omega.
It is worthwhile to examine the underlying principle by considering an example for d = 2 dimensions. The essential idea can be understood by the diagram on the left, which shows that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains.
See references.
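The pairwise cancellation of interior edges can be checked computationally. The sketch below (not from the article) tiles a 2 × 2 block with counterclockwise-oriented unit squares and verifies that summing an arbitrary 1-form over all cell boundaries equals summing it over the outer boundary alone; the random edge values are just a way to test the identity for a generic form.

```python
import random

def omega(u, v, values={}):
    """A 1-form on directed edges with omega(v, u) = -omega(u, v)."""
    if (u, v) not in values:
        x = random.random()
        values[(u, v)], values[(v, u)] = x, -x
    return values[(u, v)]

def square(i, j):
    """Counterclockwise boundary edges of the unit square with corner (i, j)."""
    a, b, c, d = (i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)
    return [(a, b), (b, c), (c, d), (d, a)]

# Sum of omega over every cell's boundary (= sum of d(omega) over the cells).
lhs = sum(omega(u, v) for i in range(2) for j in range(2)
          for u, v in square(i, j))

# Outer boundary of the 2 x 2 block, traversed counterclockwise.
corners = [(0, 0), (2, 0), (2, 2), (0, 2)]
boundary = []
for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
    dx, dy = (x1 - x0) // 2, (y1 - y0) // 2
    for t in range(2):
        boundary.append(((x0 + t * dx, y0 + t * dy),
                         (x0 + (t + 1) * dx, y0 + (t + 1) * dy)))
rhs = sum(omega(u, v) for u, v in boundary)

# Interior edges are traversed once in each direction and cancel.
assert abs(lhs - rhs) < 1e-9
print("interior edges cancel; Stokes holds")
```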
== The wedge product of forms ==
In discrete calculus, this is a construction that creates higher-order forms from forms: adjoining two cochains of degree p and q to form a composite cochain of degree p + q.
For cubical complexes, the wedge product is defined on every cube seen as a vector space of the same dimension.
For simplicial complexes, the wedge product is implemented as the cup product: if f^p is a p-cochain and g^q is a q-cochain, then
(f^p \smile g^q)(\sigma) = f^p(\sigma_{0,1,\ldots,p}) \cdot g^q(\sigma_{p,p+1,\ldots,p+q}),
where σ is a (p + q)-simplex whose vertices are indexed by {0, …, p + q}, and σ_S, for S ⊂ {0, 1, …, p + q}, is the face of σ spanned by the vertices indexed by S. So, σ_{0,1,…,p} is the p-th front face and σ_{p,p+1,…,p+q} is the q-th back face of σ, respectively.
The coboundary of the cup product of cochains f^p and g^q is given by
d(f^p \smile g^q) = df^p \smile g^q + (-1)^{p}(f^p \smile dg^q).
The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary.
The cup product operation satisfies the identity
\alpha^p \smile \beta^q = (-1)^{pq}(\beta^q \smile \alpha^p).
In other words, the corresponding multiplication is graded-commutative.
See references.
However, the wedge product can also be defined on cellular complexes whose highest-dimensional cells are general polygons. Such a wedge product was presented in "A simple and complete discrete exterior calculus on general polygonal meshes" (Ptackova and Velho, 2021), where the authors also employ this polygonal wedge product to define a discrete Lie derivative on general polygonal meshes.
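The coboundary (Leibniz) formula for the cup product can be checked in the smallest case, p = q = 0, on a single oriented edge. This is a sketch (not from the article); the specific vertex values are arbitrary.

```python
# Cup product of two 0-cochains f and g on the edge [a, b], checking
# d(f ^ g) = df ^ g + (-1)^0 (f ^ dg).

f = {"a": 2.0, "b": 5.0}      # a 0-cochain
g = {"a": 3.0, "b": 4.0}      # a 0-cochain

# Cup product of 0-cochains is pointwise: (f ^ g)(v) = f(v) * g(v).
fg = {v: f[v] * g[v] for v in f}

def d0(h):
    """Coboundary of a 0-cochain evaluated on the edge [a, b]."""
    return h["b"] - h["a"]

# (df ^ g)([a, b]) = df([a, b]) * g(back face b);
# (f ^ dg)([a, b]) = f(front face a) * dg([a, b]).
lhs = d0(fg)
rhs = d0(f) * g["b"] + f["a"] * d0(g)
assert lhs == rhs             # both sides equal 14.0
print(lhs)
```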
== Laplace operator ==
The Laplace operator Δf of a function f at a vertex p is (up to a factor) the rate at which the average value of f over a cellular neighborhood of p deviates from f(p). The Laplace operator represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplace operator of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling various physical phenomena.
The codifferential δ : C^k → C^{k−1} is an operator defined on k-forms by:
\delta = (-1)^{n(k-1)+1} \star d \star = (-1)^{k} \star^{-1} d \star,
where d is the exterior derivative or differential and ⋆ is the Hodge star operator.
The codifferential is the adjoint of the exterior derivative according to Stokes' theorem:
(η, δζ) = (dη, ζ).
Since the differential satisfies d² = 0, the codifferential has the corresponding property
\delta^2 = \star d \star \star d \star = (-1)^{k(n-k)} \star d^2 \star = 0.
The Laplace operator is defined by:
Δ = (δ + d)² = δd + dδ.
See references.
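On 0-forms of a graph, and with the simplifying assumption of a trivial (identity) Hodge star, the operator δd reduces to BᵀB, where B is the signed incidence (coboundary) matrix. The following sketch (not from the article) checks this against the familiar "degree minus adjacency" graph Laplacian on a triangle.

```python
edges = [(0, 1), (1, 2), (2, 0)]          # a triangle graph
n = 3

# Coboundary matrix B: one row per edge, (df)([u, v]) = f(v) - f(u).
B = [[0] * n for _ in edges]
for r, (u, v) in enumerate(edges):
    B[r][u], B[r][v] = -1, 1

# Laplacian on 0-forms: (delta d) f = B^T B f (identity Hodge star assumed).
L = [[sum(B[r][i] * B[r][j] for r in range(len(edges)))
      for j in range(n)] for i in range(n)]

# Each vertex of the triangle has degree 2; off-diagonal entries are -1
# for adjacent vertices, matching degree matrix - adjacency matrix.
assert L == [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
print(L)
```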
== Related ==
Discrete element method
Divided differences
Finite difference coefficient
Finite difference method
Finite element method
Finite volume method
Numerical differentiation
Numerical integration
Numerical methods for ordinary differential equations
== See also ==
Calculus of finite differences
Calculus on finite weighted graphs
Cellular automaton
Discrete differential geometry
Discrete Laplace operator
Discrete Morse theory
== References == | Wikipedia/Discrete_calculus |
In mathematics, the discrete exterior calculus (DEC) is the extension of the exterior calculus to discrete spaces including graphs, finite element meshes, and lately also general polygonal meshes (non-flat and non-convex). DEC methods have proved to be very powerful in improving and analyzing finite element methods: for instance, DEC-based methods allow the use of highly non-uniform meshes to obtain accurate results. Non-uniform meshes are advantageous because they allow the use of large elements where the process to be simulated is relatively simple, as opposed to a fine resolution where the process may be complicated (e.g., near an obstruction to a fluid flow), while using less computational power than if a uniformly fine mesh were used.
== The discrete exterior derivative ==
Stokes' theorem relates the integral of a differential (n − 1)-form ω over the boundary ∂M of an n-dimensional manifold M to the integral of dω (the exterior derivative of ω, and a differential n-form on M) over M itself:
\int_{M} d\omega = \int_{\partial M} \omega.
One could think of differential k-forms as linear operators that act on k-dimensional "bits" of space, in which case one might prefer to use the bracket notation for a dual pairing. In this notation, Stokes' theorem reads as
\langle d\omega \mid M \rangle = \langle \omega \mid \partial M \rangle.
In finite element analysis, the first stage is often the approximation of the domain of interest by a triangulation, T. For example, a curve would be approximated as a union of straight line segments; a surface would be approximated by a union of triangles, whose edges are straight line segments, which themselves terminate in points. Topologists would refer to such a construction as a simplicial complex. The boundary operator on this triangulation/simplicial complex T is defined in the usual way: for example, if L is a directed line segment from one point, a, to another, b, then the boundary ∂L of L is the formal difference b − a.
A k-form on T is a linear operator acting on k-dimensional subcomplexes of T; e.g., a 0-form assigns values to points, and extends linearly to linear combinations of points; a 1-form assigns values to line segments in a similarly linear way. If ω is a k-form on T, then the discrete exterior derivative dω of ω is the unique (k + 1)-form defined so that Stokes' theorem holds:
\langle d\omega \mid S \rangle = \langle \omega \mid \partial S \rangle
for every (k + 1)-dimensional subcomplex S of T.
Other operators and operations such as the discrete wedge product, Hodge star, or Lie derivative can also be defined.
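In matrix terms, this defining duality makes the discrete exterior derivative the transpose of the boundary operator. The sketch below (not from the article) verifies the pairing identity for a 0-form on a small triangulation; the particular values are arbitrary.

```python
# Discrete exterior derivative as the transpose of the boundary matrix.
edges = [(0, 1), (1, 2), (0, 2)]
n_v = 3

# Boundary matrix D1: partial [u, v] = v - u (rows = vertices, cols = edges).
D1 = [[0] * len(edges) for _ in range(n_v)]
for c, (u, v) in enumerate(edges):
    D1[u][c], D1[v][c] = -1, 1

omega = [2.0, -1.0, 5.0]          # a 0-form: one value per vertex
S = [1, 2, -1]                    # a 1-chain: integer weights on the edges

# d(omega) = D1^T omega, so that <d omega | S> = <omega | partial S>.
d_omega = [sum(D1[i][c] * omega[i] for i in range(n_v))
           for c in range(len(edges))]
lhs = sum(d_omega[c] * S[c] for c in range(len(edges)))

partial_S = [sum(D1[i][c] * S[c] for c in range(len(edges)))
             for i in range(n_v)]
rhs = sum(omega[i] * partial_S[i] for i in range(n_v))

assert lhs == rhs                 # Stokes' theorem holds by construction
print(lhs, rhs)
```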
== See also ==
Discrete differential geometry
Discrete Morse theory
Topological combinatorics
Discrete calculus
== Notes ==
== References ==
Ptackova, Lenka; Velho, Luiz (2021). "A simple and complete discrete exterior calculus on general polygonal meshes". Computer Aided Geometric Design. doi:10.1016/j.cagd.2021.102002
Grady, Leo J.; Polimeni, Jonathan R. (2010). Discrete Calculus.
Hirani, Anil N. Discrete Exterior Calculus (PhD thesis).
Ptackova, L.; Velho, L. (2017). "A Primal-to-Primal Discretization of Exterior Calculus on Polygonal Meshes". Symposium on Geometry Processing. doi:10.2312/SGP.20171204
Schulz, E.; Tsogtgerel, G. (2020). "Convergence of discrete exterior calculus approximations for Poisson problems". Discrete & Computational Geometry 63(2), 346–376.
Yavari, Arash (2008). "On geometric discretization of elasticity". Journal of Mathematical Physics 49, 022901. doi:10.1063/1.2830977
Crane, Keenan (2018). Discrete Differential Geometry: An Applied Introduction.
Discrete Morse theory is a combinatorial adaptation of Morse theory developed by Robin Forman. The theory has various practical applications in diverse fields of applied mathematics and computer science, such as configuration spaces, homology computation, denoising, mesh compression, and topological data analysis.
== Notation regarding CW complexes ==
Let X be a CW complex and denote by 𝒳 its set of cells. Define the incidence function κ : 𝒳 × 𝒳 → Z in the following way: given two cells σ and τ in 𝒳, let κ(σ, τ) be the degree of the attaching map from the boundary of σ to τ. The boundary operator is the endomorphism ∂ of the free abelian group generated by 𝒳 defined by
\partial(\sigma) = \sum_{\tau \in \mathcal{X}} \kappa(\sigma, \tau)\, \tau.
It is a defining property of boundary operators that ∂ ∘ ∂ ≡ 0. In more axiomatic definitions one can find the requirement that, for all σ, τ′ ∈ 𝒳,
\sum_{\tau \in \mathcal{X}} \kappa(\sigma, \tau)\, \kappa(\tau, \tau') = 0,
which is a consequence of the above definition of the boundary operator and the requirement that ∂ ∘ ∂ ≡ 0.
== Discrete Morse functions ==
A real-valued function μ : 𝒳 → R is a discrete Morse function if it satisfies the following two properties:
For any cell σ ∈ 𝒳, the number of cells τ ∈ 𝒳 in the boundary of σ which satisfy μ(σ) ≤ μ(τ) is at most one.
For any cell σ ∈ 𝒳, the number of cells τ ∈ 𝒳 containing σ in their boundary which satisfy μ(σ) ≥ μ(τ) is at most one.
It can be shown that the cardinalities in the two conditions cannot both be one simultaneously for a fixed cell σ, provided that 𝒳 is a regular CW complex. In this case, each cell σ ∈ 𝒳 can be paired with at most one exceptional cell τ ∈ 𝒳: either a boundary cell with larger μ value, or a co-boundary cell with smaller μ value. The cells which have no pairs, i.e., whose function values are strictly higher than their boundary cells and strictly lower than their co-boundary cells, are called critical cells. Thus, a discrete Morse function partitions the CW complex into three distinct cell collections: 𝒳 = 𝒜 ⊔ 𝒦 ⊔ 𝒬, where:
𝒜 denotes the critical cells which are unpaired,
𝒦 denotes cells which are paired with boundary cells, and
𝒬 denotes cells which are paired with co-boundary cells.
By construction, there is a bijection of sets between k-dimensional cells in 𝒦 and the (k − 1)-dimensional cells in 𝒬, which can be denoted by p^k : 𝒦^k → 𝒬^{k−1} for each natural number k. It is an additional technical requirement that for each K ∈ 𝒦^k, the degree of the attaching map from the boundary of K to its paired cell p^k(K) ∈ 𝒬 is a unit in the underlying ring of 𝒳. For instance, over the integers Z, the only allowed values are ±1. This technical requirement is guaranteed, for instance, when one assumes that 𝒳 is a regular CW complex over Z.
The fundamental result of discrete Morse theory establishes that the CW complex 𝒳 is isomorphic on the level of homology to a new complex 𝒜 consisting of only the critical cells. The paired cells in 𝒦 and 𝒬 describe gradient paths between adjacent critical cells which can be used to obtain the boundary operator on 𝒜. Some details of this construction are provided in the next section.
== The Morse complex ==
A gradient path is a sequence of paired cells ρ = (Q_1, K_1, Q_2, K_2, …, Q_M, K_M) satisfying Q_m = p(K_m) and κ(K_m, Q_{m+1}) ≠ 0. The index of this gradient path is defined to be the integer
\nu(\rho) = \frac{\prod_{m=1}^{M-1} -\kappa(K_m, Q_{m+1})}{\prod_{m=1}^{M} \kappa(K_m, Q_m)}.
The division here makes sense because the incidence between paired cells must be
±
1
{\displaystyle \pm 1}
. Note that by construction, the values of the discrete Morse function
μ
{\displaystyle \mu }
must decrease across
ρ
{\displaystyle \rho }
. The path
ρ
{\displaystyle \rho }
is said to connect two critical cells
A
,
A
′
∈
A
{\displaystyle A,A'\in {\mathcal {A}}}
if
κ
(
A
,
Q
1
)
≠
0
≠
κ
(
K
M
,
A
′
)
{\displaystyle \kappa (A,Q_{1})\neq 0\neq \kappa (K_{M},A')}
. This relationship may be expressed as
A
→
ρ
A
′
{\displaystyle A{\stackrel {\rho }{\to }}A'}
. The multiplicity of this connection is defined to be the integer
m
(
ρ
)
=
κ
(
A
,
Q
1
)
⋅
ν
(
ρ
)
⋅
κ
(
K
M
,
A
′
)
{\displaystyle m(\rho )=\kappa (A,Q_{1})\cdot \nu (\rho )\cdot \kappa (K_{M},A')}
. Finally, the Morse boundary operator on the critical cells
A
{\displaystyle {\mathcal {A}}}
is defined by
Δ
(
A
)
=
κ
(
A
,
A
′
)
+
∑
A
→
ρ
A
′
m
(
ρ
)
A
′
{\displaystyle \Delta (A)=\kappa (A,A')+\sum _{A{\stackrel {\rho }{\to }}A'}m(\rho )A'}
where the sum is taken over all gradient path connections from
A
{\displaystyle A}
to
A
′
{\displaystyle A'}
.
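The index and multiplicity formulas above are simple enough to compute directly. The following Python sketch (the function names and input encoding are ours, not standard) takes the incidence numbers along a gradient path and evaluates $\nu(\rho)$ and $m(\rho)$:

```python
def path_index(kappa_pairs, kappa_links):
    """nu(rho) for a gradient path rho = (Q_1, K_1, ..., Q_M, K_M).

    kappa_pairs[m-1] = kappa(K_m, Q_m)     for m = 1..M   (must be +/-1)
    kappa_links[m-1] = kappa(K_m, Q_{m+1}) for m = 1..M-1 (nonzero)
    """
    num = 1
    for k in kappa_links:                  # product of -kappa(K_m, Q_{m+1})
        num *= -k
    den = 1
    for k in kappa_pairs:                  # product of kappa(K_m, Q_m)
        assert k in (1, -1), "incidence between paired cells must be +/-1"
        den *= k
    return num * den                       # den is +/-1, so dividing equals multiplying


def multiplicity(kappa_A_Q1, kappa_pairs, kappa_links, kappa_KM_Aprime):
    """m(rho) = kappa(A, Q_1) * nu(rho) * kappa(K_M, A')."""
    return kappa_A_Q1 * path_index(kappa_pairs, kappa_links) * kappa_KM_Aprime
```

For a length-one path ($M = 1$) there are no linking terms, so the numerator is the empty product and `path_index([1], [])` is 1.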
== Basic results ==
Many of the familiar results from continuous Morse theory apply in the discrete setting.
=== The Morse inequalities ===
Let $\mathcal {A}$ be a Morse complex associated to the CW complex $\mathcal {X}$. The number $m_{q}=|{\mathcal {A}}_{q}|$ of $q$-cells in $\mathcal {A}$ is called the $q$-th Morse number. Let $\beta _{q}$ denote the $q$-th Betti number of $\mathcal {X}$. Then, for any $N>0$, the following inequalities hold:

$$m_{N}\geq \beta _{N},$$

and

$$m_{N}-m_{N-1}+\dots \pm m_{0}\geq \beta _{N}-\beta _{N-1}+\dots \pm \beta _{0}.$$

Moreover, the Euler characteristic $\chi ({\mathcal {X}})$ of $\mathcal {X}$ satisfies

$$\chi ({\mathcal {X}})=m_{0}-m_{1}+\dots \pm m_{\dim {\mathcal {X}}}.$$
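As a toy illustration, the inequalities and the Euler-characteristic identity can be checked numerically. In the Python sketch below (our own helper, with hypothetical Morse numbers), the torus has Betti numbers $(1, 2, 1)$; a perfect discrete Morse function realizes $m_q = \beta_q$, while a non-minimal one such as $(2, 3, 1)$ still satisfies all the inequalities:

```python
def check_morse_inequalities(m, beta):
    """Check the Morse inequalities for Morse numbers m[q] and Betti
    numbers beta[q], both given as lists indexed by dimension q."""
    dim = len(m) - 1
    assert len(beta) == len(m)
    # weak inequalities: m_N >= beta_N for every N
    weak = all(m[q] >= beta[q] for q in range(dim + 1))
    # alternating-sum inequalities: m_N - m_{N-1} + ... >= beta_N - beta_{N-1} + ...
    strong = all(
        sum((-1) ** (N - q) * m[q] for q in range(N + 1))
        >= sum((-1) ** (N - q) * beta[q] for q in range(N + 1))
        for N in range(dim + 1)
    )
    # Euler characteristic: the full alternating sums must agree exactly
    euler = (sum((-1) ** q * m[q] for q in range(dim + 1))
             == sum((-1) ** q * beta[q] for q in range(dim + 1)))
    return weak and strong and euler


print(check_morse_inequalities([1, 2, 1], [1, 2, 1]))  # perfect Morse function: True
print(check_morse_inequalities([2, 3, 1], [1, 2, 1]))  # non-minimal but valid: True
```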
=== Discrete Morse homology and homotopy type ===
Let $\mathcal {X}$ be a regular CW complex with boundary operator $\partial$ and a discrete Morse function $\mu \colon {\mathcal {X}}\to \mathbb {R}$. Let $\mathcal {A}$ be the associated Morse complex with Morse boundary operator $\Delta$. Then, there is an isomorphism of homology groups

$$H_{*}({\mathcal {X}},\partial )\simeq H_{*}({\mathcal {A}},\Delta ),$$

and similarly for the homotopy groups.
== Applications ==
Discrete Morse theory finds application in molecular shape analysis, skeletonization of digital images and volumes, graph reconstruction from noisy data, denoising of noisy point clouds, and the analysis of lithic tools in archaeology.
== See also ==
Digital Morse theory
Stratified Morse theory
Shape analysis
Topological combinatorics
Discrete differential geometry
== References ==
In mathematics, specifically in operator K-theory, the Baum–Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object.
The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture.
The conjecture is also closely related to index theory, as the assembly map $\mu$ is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program.
The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects.
== Formulation ==
Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism
$$\mu :RK_{*}^{\Gamma }({\underline {E\Gamma }})\to K_{*}(C_{r}^{*}(\Gamma )),$$

called the assembly map, from the equivariant K-homology with $\Gamma$-compact supports of the classifying space of proper actions ${\underline {E\Gamma }}$ to the K-theory of the reduced C*-algebra of Γ. The subscript index * can be 0 or 1.
Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism:
Baum–Connes conjecture. The assembly map $\mu$ is an isomorphism.
As the left hand side tends to be more easily accessible than the right hand side, because there are hardly any general structure theorems for the $C^{*}$-algebra, one usually views the conjecture as an "explanation" of the right hand side.
The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982.
In case $\Gamma$ is discrete and torsion-free, the left hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space $B\Gamma$ of $\Gamma$.
There is also a more general form of the conjecture, known as the Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a $C^{*}$-algebra $A$ on which $\Gamma$ acts by $C^{*}$-automorphisms. It says in KK-language that the assembly map

$$\mu _{A,\Gamma }:RKK_{*}^{\Gamma }({\underline {E\Gamma }},A)\to K_{*}(A\rtimes _{\lambda }\Gamma ),$$

is an isomorphism, containing the case without coefficients as the case $A=\mathbb {C}$.

However, counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. Nevertheless, the conjecture with coefficients remains an active area of research, since it is, like the classical conjecture, often seen as a statement concerning particular groups or classes of groups.
== Examples ==
Let $\Gamma$ be the integers $\mathbb {Z}$. Then the left hand side is the K-homology of $B\mathbb {Z}$, which is the circle. The $C^{*}$-algebra of the integers is, by the commutative Gelfand–Naimark transform (which reduces to the Fourier transform in this case), isomorphic to the algebra of continuous functions on the circle. So the right hand side is the topological K-theory of the circle. One can then show that the assembly map is KK-theoretic Poincaré duality as defined by Gennadi Kasparov, which is an isomorphism.
== Results ==
The conjecture without coefficients is still open, although the field has received great attention since 1982.
The conjecture is proved for the following classes of groups:
Discrete subgroups of $SO(n,1)$ and $SU(n,1)$.
Groups with the Haagerup property, sometimes called a-T-menable groups. These are groups that admit an isometric action on an affine Hilbert space $H$ which is proper in the sense that $\lim _{n\to \infty }g_{n}\xi \to \infty$ for all $\xi \in H$ and all sequences of group elements $g_{n}$ with $\lim _{n\to \infty }g_{n}\to \infty$. Examples of a-T-menable groups are amenable groups, Coxeter groups, groups acting properly on trees, and groups acting properly on simply connected $CAT(0)$ cubical complexes.
Groups that admit a finite presentation with only one relation.
Discrete cocompact subgroups of real Lie groups of real rank 1.
Cocompact lattices in $SL(3,\mathbb {R} )$, $SL(3,\mathbb {C} )$ or $SL(3,\mathbb {Q} _{p})$. It was a long-standing problem since the first days of the conjecture to expose a single infinite property (T) group that satisfies it. However, such a group was given by V. Lafforgue in 1998 as he showed that cocompact lattices in $SL(3,\mathbb {R} )$ have the property of rapid decay and thus satisfy the conjecture.
Gromov hyperbolic groups and their subgroups.
Among non-discrete groups, the conjecture was shown in 2003 by J. Chabert, S. Echterhoff and R. Nest for the vast class of all almost connected groups (i.e. groups having a cocompact connected component), and all groups of $k$-rational points of a linear algebraic group over a local field $k$ of characteristic zero (e.g. $k=\mathbb {Q} _{p}$). For the important subclass of real reductive groups, the conjecture had already been shown in 1987 by Antony Wassermann.
Injectivity is known for a much larger class of groups thanks to the Dirac-dual-Dirac method. This goes back to ideas of Michael Atiyah and was developed in great generality by Gennadi Kasparov in 1987.
Injectivity is known for the following classes:
Discrete subgroups of connected Lie groups or virtually connected Lie groups.
Discrete subgroups of p-adic groups.
Bolic groups (a certain generalization of hyperbolic groups).
Groups which admit an amenable action on some compact space.
The simplest example of a group for which it is not known whether it satisfies the conjecture is $SL_{3}(\mathbb {Z} )$.
== References ==
Mislin, Guido & Valette, Alain (2003), Proper Group Actions and the Baum–Connes Conjecture, Basel: Birkhäuser, ISBN 0-8176-0408-1.
Valette, Alain (2002), Introduction to the Baum-Connes Conjecture, Basel: Birkhäuser, ISBN 978-3-7643-6706-0.
== External links ==
On the Baum-Connes conjecture by Dmitry Matsnev.
In mathematics, computational group theory is the study of
groups by means of computers. It is concerned
with designing and analysing algorithms and
data structures to compute information about groups. The subject
has attracted interest because for many interesting groups
(including most of the sporadic groups) it is impractical
to perform calculations by hand.
Important algorithms in computational group theory include:
the Schreier–Sims algorithm for finding the order of a permutation group
the Todd–Coxeter algorithm and Knuth–Bendix algorithm for coset enumeration
the product-replacement algorithm for finding random elements of a group
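As an illustration of why such algorithms matter, even computing the order of a group from a set of permutation generators is nontrivial: the naive approach below (a Python sketch, not production code) enumerates the whole group by closure, which is exactly the exponential blow-up that the Schreier–Sims algorithm avoids.

```python
def mul(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[i] for i in q)


def group_order(gens):
    """Order of the permutation group generated by gens, by naive closure.

    Fine for tiny groups; Schreier-Sims computes a base and strong
    generating set instead, and scales to groups far too large to list.
    """
    n = len(gens[0])
    elems = {tuple(range(n))}          # start from the identity
    frontier = set(elems)
    while frontier:                    # closure under left multiplication
        new = {mul(g, x) for g in gens for x in frontier} - elems
        elems |= new
        frontier = new
    return len(elems)


# the transposition (0 1) and the 5-cycle (0 1 2 3 4) generate S_5
swap = (1, 0, 2, 3, 4)
cycle = (1, 2, 3, 4, 0)
print(group_order([swap, cycle]))      # 120
```

For a finite group, closure under the generators alone suffices, since each generator has finite order and so its inverse is one of its powers.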
Two important computer algebra systems (CAS) used for group theory are
GAP and Magma. Historically, other systems such as CAS (for character theory) and Cayley (a predecessor of Magma) were important.
Some achievements of the field include:
complete enumeration of all finite groups of order less than 2000
computation of representations for all the sporadic groups
== See also ==
Black box group
== References ==
A survey of the subject by Ákos Seress from Ohio State University, expanded from an article that appeared in the Notices of the American Mathematical Society is available online. There is also a survey by Charles Sims from Rutgers University and an older survey by Joachim Neubüser from RWTH Aachen.
There are three books covering various parts of the subject:
Derek F. Holt, Bettina Eick, Eamonn A. O'Brien, "Handbook of computational group theory", Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, Florida, 2005. ISBN 1-58488-372-3
Charles C. Sims, "Computation with Finitely-presented Groups", Encyclopedia of Mathematics and its Applications, vol 48, Cambridge University Press, Cambridge, 1994. ISBN 0-521-43213-8
Ákos Seress, "Permutation group algorithms", Cambridge Tracts in Mathematics, vol. 152, Cambridge University Press, Cambridge, 2003. ISBN 0-521-66103-X.
Geometric and Functional Analysis (GAFA) is a mathematical journal published by Birkhäuser, an independent division of Springer-Verlag. The journal is published bi-monthly.
The journal publishes major results on a broad range of mathematical topics related to geometry and analysis.
GAFA is both an acronym and a part of the official full name of the journal.
== History ==
GAFA was founded in 1991 by Mikhail Gromov and Vitali Milman. The idea for the journal was inspired by the long-running Israeli seminar series "Geometric Aspects of Functional Analysis" of which Vitali Milman had been one of the main organizers in the previous years. The journal retained the same acronym as the series to stress the connection between the two.
== Journal information ==
The journal is reviewed cover-to-cover in Mathematical Reviews and zbMATH Open and is indexed cover-to-cover in the Web of Science. According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.2.
The journal has seven editors: Vitali Milman (editor-in-chief), Simon Donaldson, Mikhail Gromov, Larry Guth, Boáz Klartag, Leonid Polterovich, and Peter Sarnak.
== See also ==
Geometric analysis
== References ==
== External links ==
Geometric and Functional Analysis (GAFA), official journal website, Springer-Verlag
In mathematics, especially in the area of modern algebra known as combinatorial group theory, Nielsen transformations are certain automorphisms of a free group which are a non-commutative analogue of row reduction and one of the main tools used in studying free groups (Fine, Rosenberger & Stille 1995).
Given a finite basis of a free group $F_{n}$, the corresponding set of elementary Nielsen transformations forms a finite generating set of $\mathrm {Aut} (F_{n})$. This system of generators is analogous to elementary matrices for $GL_{n}(\mathbb {Z} )$ and Dehn twists for mapping class groups of closed surfaces.
Nielsen transformations were introduced in (Nielsen 1921) to prove that every subgroup of a free group is free (the Nielsen–Schreier theorem). They are now used in a variety of areas of mathematics, including computational group theory, K-theory, and knot theory.
== Definitions ==
=== Free groups ===
Let $F_{n}$ be a finitely generated free group of rank $n$. An elementary Nielsen transformation maps an ordered basis $[x_{1},\ldots ,x_{n}]$ to a new basis $[y_{1},\ldots ,y_{n}]$ by one of the following operations:

Permute the $x_{i}$s by some permutation $\sigma \in S_{n}$, i.e. $[y_{1},\ldots ,y_{n}]=[x_{\sigma (1)},\ldots ,x_{\sigma (n)}]$

Invert some $x_{i}$, i.e. $[y_{1},\ldots ,y_{n}]=[x_{1},\ldots ,x_{i}^{-1},\ldots ,x_{n}]$

Replace some $x_{i}$ with $x_{i}x_{j}$ for some $j\neq i$, i.e. $[y_{1},\ldots ,y_{n}]=[x_{1},\ldots ,x_{i}x_{j},\ldots ,x_{n}]$.
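The three operations are easy to implement on words in $F_n$, represented here as tuples of nonzero integers ($i$ for $x_i$, $-i$ for $x_i^{-1}$) with free reduction. This is an illustrative Python sketch (the representation and function names are ours); the final lines carry out the chain of type 2 and type 3 moves shown later in this section, which swaps two basis elements:

```python
def reduce_word(w):
    """Freely reduce a word: cancel adjacent pairs x x^{-1}."""
    out = []
    for a in w:
        if out and out[-1] == -a:
            out.pop()
        else:
            out.append(a)
    return tuple(out)


def invert(w):
    """Inverse of a word: reverse it and invert each letter."""
    return tuple(-a for a in reversed(w))


def nielsen_permute(basis, sigma):
    """Type 1: y_i = x_{sigma(i)} for a permutation sigma (0-indexed)."""
    return [basis[sigma[i]] for i in range(len(basis))]


def nielsen_invert(basis, i):
    """Type 2: replace x_i by its inverse."""
    b = list(basis)
    b[i] = invert(b[i])
    return b


def nielsen_multiply(basis, i, j):
    """Type 3: replace x_i by x_i x_j (i != j)."""
    assert i != j
    b = list(basis)
    b[i] = reduce_word(b[i] + b[j])
    return b


# type 2 and 3 moves compose to swap the two generators of F_2:
b = [(1,), (2,)]                 # the basis (x_1, x_2)
b = nielsen_invert(b, 1)         # (x_1, x_2^{-1})
b = nielsen_multiply(b, 0, 1)    # (x_1 x_2^{-1}, x_2^{-1})
b = nielsen_invert(b, 0)         # (x_2 x_1^{-1}, x_2^{-1})
b = nielsen_multiply(b, 1, 0)    # (x_2 x_1^{-1}, x_1^{-1})
b = nielsen_invert(b, 1)         # (x_2 x_1^{-1}, x_1)
b = nielsen_multiply(b, 0, 1)    # (x_2, x_1)
assert b == [(2,), (1,)]
```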
A Nielsen transformation is a finite composition of elementary Nielsen transformations. Since automorphisms of $F_{n}$ are determined by the image of a basis, the elementary Nielsen transformations correspond to a finite subset of the automorphism group $\mathrm {Aut} (F_{n})$, which is in fact a generating set (see below). Hence, a Nielsen transformation can alternatively be defined simply as the action of an automorphism of $F_{n}$ on bases.
Elementary Nielsen transformations are the analogues of the elementary row operations. Transformations of the first kind are analogous to row permutations. Transformations of the second kind correspond to scaling a row by an invertible scalar. Transformations of the third kind correspond to row additions (transvections).
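The analogy can be made concrete with integer matrices: each elementary operation is left multiplication by a matrix of determinant $\pm 1$, so invertibility over $\mathbb{Z}$ is preserved. A small NumPy illustration:

```python
import numpy as np

A = np.array([[2, 1],
              [7, 4]])            # an element of GL_2(Z): det(A) = 1

P = np.array([[0, 1], [1, 0]])    # type 1 analogue: permute (swap) the rows
D = np.array([[-1, 0], [0, 1]])   # type 2 analogue: scale a row by a unit (+/-1 over Z)
T = np.array([[1, 1], [0, 1]])    # type 3 analogue: add row 1 to row 0 (transvection)

for E in (P, D, T):
    B = E @ A
    # each elementary operation keeps the matrix invertible over Z
    assert round(abs(float(np.linalg.det(B)))) == 1
```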
Since the finite permutation group $S_{n}$ is generated by transpositions, one sees from the chain of elementary Nielsen transformations of types 2 and 3

$$(x_{i},x_{i+1})\mapsto (x_{i},x_{i+1}^{-1})\mapsto (x_{i}x_{i+1}^{-1},x_{i+1}^{-1})\mapsto (x_{i+1}x_{i}^{-1},x_{i+1}^{-1})\mapsto (x_{i+1}x_{i}^{-1},x_{i}^{-1})\mapsto (x_{i+1}x_{i}^{-1},x_{i})\mapsto (x_{i+1},x_{i})$$

that elementary Nielsen transformations of types 2 and 3 are in fact enough to generate all Nielsen transformations.
Using the two generators $(12)$ and $(1\ldots n)$ of $S_{n}$, one can alternatively restrict attention to only four operations:

switch $x_{1}$ and $x_{2}$

cyclically permute the $x_{i}$s

invert $x_{1}$

replace $x_{1}$ with $x_{1}x_{2}$.
=== General finitely generated groups ===
When dealing with groups that are not free, one instead applies these transformations to finite ordered subsets of a group. In this situation, compositions of the elementary transformations are called regular. If one allows removing elements of the subset that are the identity element, then the transformation is called singular.
The image under a Nielsen transformation (elementary or not, regular or not) of a generating set of a group G is also a generating set of G. Two generating sets are called Nielsen equivalent if there is a Nielsen transformation taking one to the other (beware this is not an equivalence relation). If the generating sets have the same size, then it suffices to consider compositions of regular Nielsen transformations.
== Examples ==
The dihedral group of order 10 has two Nielsen equivalence classes of generating sets of size 2. Letting x be an element of order 2, and y being an element of order 5, the two classes of generating sets are represented by [ x, y ] and [ x, yy ], and each class has 15 distinct elements. A very important generating set of a dihedral group is the generating set from its presentation as a Coxeter group. Such a generating set for a dihedral group of order 10 consists of any pair of elements of order 2, such as [ x, xy ]. This generating set is equivalent to [ x, y ] via:
[ x−1, y ], type 3
[ y, x−1 ], type 1
[ y−1, x−1 ], type 3
[ y−1x−1, x−1 ], type 4
[ xy, x−1 ], type 3
[ x−1, xy ], type 1
[ x, xy ], type 3
Unlike [ x, y ] and [ x, yy ], the generating sets [ x, y, 1 ] and [ x, yy, 1 ] are equivalent. A transforming sequence using more convenient elementary transformations (all swaps, all inverses, all products) is:
[ x, y, 1 ]
[ x, y, y ], multiply 2nd generator into 3rd
[ x, yy, y ], multiply 3rd generator into 2nd
[ x, yy, yyy ], multiply 2nd generator into 3rd
[ x, yy, 1 ], multiply 2nd generator into 3rd
== Applications ==
=== Nielsen–Schreier theorem ===
The Nielsen–Schreier theorem states that every subgroup $F'\leq F$ of a free group $F$ is also free. The modern proof relies on the fact that a group (finitely generated or not) is free if and only if it is the fundamental group of a graph (finite or not). This allows one to explicitly find a basis of $F'$, since it is geometrically realized as the fundamental group of a covering of a graph whose fundamental group is $F$.
However, the original proof by Nielsen for the case of finitely generated subgroups, given in (Nielsen 1921), is different and more combinatorial. It relies on the notion of a Nielsen reduced generating set, which roughly means one for which there is not too much cancellation in products. The paper shows that every finite generating set of a subgroup of a free group is (singularly) Nielsen equivalent to a Nielsen reduced generating set, and that a Nielsen reduced generating set is a free basis for the subgroup, so the subgroup is free. This proof is given in some detail in (Magnus, Karrass & Solitar 2004, Ch 3.2).
=== Automorphism groups ===
In (Nielsen 1924), it is shown that the elementary Nielsen transformations generate the full automorphism group of a finitely generated free group. Nielsen, and later Bernhard Neumann used these ideas to give finite presentations of the automorphism groups of free groups. This is also described in standard textbooks such as (Magnus, Karrass & Solitar 2004, p. 131, Th 3.2).
For a given generating set of a finitely generated group, it is not necessarily true that every automorphism is a Nielsen transformation, but for every automorphism, there is a generating set where the automorphism is given by a Nielsen transformation, (Rapaport 1959).
The adequate generalization of Nielsen transformations for automorphisms of free products of freely indecomposable groups are Whitehead automorphisms. Together with the automorphisms of the Grushko factors, they form a generating set of the automorphism group of any finitely generated group, known as the Fouxe-Rabinovitch generators.
=== Word problem ===
A particularly simple case of the word problem for groups and the isomorphism problem for groups asks if a finitely presented group is the trivial group. This is known to be intractable in general, even though there is a finite sequence of elementary Tietze transformations taking the presentation to the trivial presentation if and only if the group is trivial. A special case is that of "balanced presentations", those finite presentations with equal numbers of generators and relators. For these groups, there is a conjecture that the required transformations are quite a bit simpler (in particular, do not involve adding or removing relators). If one allows taking the set of relators to any Nielsen equivalent set, and one allows conjugating the relators, then one gets an equivalence relation on ordered subsets of a relators of a finitely presented group. The Andrews–Curtis conjecture is that the relators of any balanced presentation of the trivial group are equivalent to a set of trivial relators, stating that each generator is the identity element.
In the textbook (Magnus, Karrass & Solitar 2004, pp. 131–132), an application of Nielsen transformations is given to solve the generalized word problem for free groups, also known as the membership problem for subgroups given by finite generating sets in free groups.
=== Isomorphism problem ===
A particularly important special case of the isomorphism problem for groups concerns the fundamental groups of three-dimensional knots, which can be solved using Nielsen transformations and a method of J. W. Alexander (Magnus, Karrass & Solitar 2004, Ch 3.4).
=== Product replacement algorithm ===
In computational group theory, it is important to generate random elements of a finite group. Popular methods of doing this apply Markov chain methods to generate random generating sets of the group. The "product replacement algorithm" simply uses randomly chosen Nielsen transformations in order to take a random walk on the graph of generating sets of the group. The algorithm is well studied, and survey is given in (Pak 2001). One version of the algorithm, called "shake", is:
Take any ordered generating set and append some copies of the identity element, so that there are n elements in the set
Repeat the following a certain number of times (called a burn-in)
Choose integers i and j uniformly at random from 1 to n, and choose e uniformly at random from { 1, -1 }
Replace the ith generator with the product of the ith generator and the jth generator raised to the eth power
Every time a new random element is desired, repeat the previous two steps, then return one of the generating elements as the desired random element
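The steps above can be sketched in a few lines of Python (our own naming throughout; following the usual product-replacement algorithm we draw $i \neq j$, which the informal description leaves implicit):

```python
import random

def shake(gens, mul, inv, n=8, burn_in=100, rng=None):
    """Sketch of the product-replacement "shake" algorithm: pad the
    generating list with identities to length n, then mix it with
    random elementary moves g_i <- g_i * g_j^(+/-1)."""
    rng = rng or random.Random()
    identity = mul(gens[0], inv(gens[0]))
    state = list(gens) + [identity] * (n - len(gens))

    def step():
        i, j = rng.sample(range(n), 2)      # i != j
        e = rng.choice((1, -1))
        g = state[j] if e == 1 else inv(state[j])
        state[i] = mul(state[i], g)         # replace the ith generator

    for _ in range(burn_in):                # burn-in phase
        step()

    def random_element():
        step()
        return rng.choice(state)            # return one generating element
    return random_element


# permutations of {0,...,3} as tuples: a swap and a 4-cycle generate S_4
def mul(p, q):
    return tuple(p[i] for i in q)

def inv(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

draw = shake([(1, 0, 2, 3), (1, 2, 3, 0)], mul, inv, rng=random.Random(0))
sample = draw()    # a pseudo-random element of S_4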
The generating set used during the course of this algorithm can be proved to vary uniformly over all Nielsen equivalent generating sets. However, this algorithm has a number of statistical and theoretical problems. For instance, there can be more than one Nielsen equivalence class of generators. Also, the elements of generating sets need not be uniformly distributed (for instance, elements of the Frattini subgroup can never occur in a generating set of minimal size, but more subtle problems occur too).
Most of these problems are quickly remedied in the following modification called "rattle", (Leedham-Green & Murray 2002):
In addition to the generating set, store an additional element of the group, initialized to the identity
Every time a generator is replaced, choose k uniformly at random, and replace the additional element by the product of the additional element with the kth generator.
=== K-theory ===
To understand Nielsen equivalence of non-minimal generating sets, module theoretic investigations have been useful, as in (Evans 1989). Continuing in these lines, a K-theoretic formulation of the obstruction to Nielsen equivalence was described in (Lustig 1991) and (Lustig & Moriah 1993). These show an important connection between the Whitehead group of the group ring and the Nielsen equivalence classes of generators.
== See also ==
Tietze transformation
Automorphism group of a free group
== References ==
=== Notes ===
=== Textbooks and surveys ===
Cohen, Daniel E. (1989), Combinatorial group theory: a topological approach, London Mathematical Society Student Texts, vol. 14, Cambridge University Press, doi:10.1017/CBO9780511565878, ISBN 978-0-521-34133-2, MR 1020297
Fine, Benjamin; Rosenberger, Gerhard; Stille, Michael (1995), "Nielsen transformations and applications: a survey", in Kim, Ann Chi; Kim, A.C.; Johnson, D.L. (eds.), Groups—Korea '94: Proceedings of the International Conference Held at Pusan National University, Pusan, Korea, August 18–25, 1994, Walter de Gruyter, pp. 69–105, ISBN 978-3-11-014793-3, MR 1476950
Schupp, Paul E.; Lyndon, Roger C. (2001), Combinatorial group theory, Springer-Verlag, ISBN 978-3-540-41158-1, MR 0577064
Magnus, Wilhelm; Karrass, Abraham; Solitar, Donald (2004), Combinatorial Group Theory, Dover Publications, ISBN 978-0-486-43830-6, MR 0207802
=== Primary sources ===
Alexander, J. W. (1928), "Topological invariants of knots and links", Transactions of the American Mathematical Society, 30 (2): 275–306, doi:10.2307/1989123, JFM 54.0603.03, JSTOR 1989123
Evans, Martin J. (1989), "Primitive elements in free groups", Proceedings of the American Mathematical Society, 106 (2): 313–6, doi:10.2307/2048805, JSTOR 2048805, MR 0952315
Fenchel, Werner; Nielsen, Jakob (2003), Schmidt, Asmus L. (ed.), Discontinuous groups of isometries in the hyperbolic plane, De Gruyter Studies in mathematics, vol. 29, Berlin: Walter de Gruyter & Co.
Leedham-Green, C. R.; Murray, Scott H. (2002), "Variants of product replacement", Computational and statistical group theory (Las Vegas, NV/Hoboken, NJ, 2001), Contemp. Math., vol. 298, Providence, R.I.: American Mathematical Society, pp. 97–104, doi:10.1090/conm/298/05116, MR 1929718
Lustig, Martin (1991), "Nielsen equivalence and simple-homotopy type", Proceedings of the London Mathematical Society, 3rd Series, 62 (3): 537–562, doi:10.1112/plms/s3-62.3.537, MR 1095232
Lustig, Martin; Moriah, Yoav (1993), "Generating systems of groups and Reidemeister-Whitehead torsion", Journal of Algebra, 157 (1): 170–198, doi:10.1006/jabr.1993.1096, MR 1219664
Nielsen, Jakob (1921), "Om regning med ikke-kommutative faktorer og dens anvendelse i gruppeteorien", Math. Tidsskrift B (in Danish), 1921: 78–94, JFM 48.0123.03, JSTOR 24529483
Nielsen, Jakob (1924), "Die Isomorphismengruppe der freien Gruppen", Mathematische Annalen (in German), 91 (3–4): 169–209, doi:10.1007/BF01556078, JFM 50.0078.04
Pak, Igor (2001), "What do we know about the product replacement algorithm?", Groups and computation, III (Columbus, OH, 1999), Ohio State Univ. Math. Res. Inst. Publ., vol. 8, Walter de Gruyter, pp. 301–347, MR 1829489
Rapaport, Elvira Strasser (1959), "Note on Nielsen transformations", Proceedings of the American Mathematical Society, 10 (2): 228–235, doi:10.2307/2033582, JSTOR 2033582, MR 0104724
In mathematics, specifically in functional analysis, a C∗-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties:
A is a topologically closed set in the norm topology of operators.
A is closed under the operation of taking adjoints of operators.
Another important class of non-Hilbert C*-algebras includes the algebra $C_{0}(X)$ of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space.
C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras that are now known as von Neumann algebras.
Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space.
C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent to which classification is possible, for separable simple nuclear C*-algebras.
== Abstract characterization ==
We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark.
A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map $x\mapsto x^{*}$ for $x\in A$ with the following properties:

It is an involution: for every x in A, $x^{**}=(x^{*})^{*}=x$.

For all x, y in A: $(x+y)^{*}=x^{*}+y^{*}$ and $(xy)^{*}=y^{*}x^{*}$.

For every complex number $\lambda \in \mathbb {C}$ and every x in A: $(\lambda x)^{*}={\overline {\lambda }}x^{*}$.

For all x in A: $\|xx^{*}\|=\|x\|\|x^{*}\|$.
Remark. The first four identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to

$$\|xx^{*}\|=\|x\|^{2},$$
which is sometimes called the B*-identity. For history behind the names C*- and B*-algebras, see the history section below.
The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:

$$\|x\|^{2}=\|x^{*}x\|=\sup\{|\lambda |:x^{*}x-\lambda \,1{\text{ is not invertible}}\}.$$
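For the C*-algebra of $n\times n$ complex matrices, i.e. operators on a finite-dimensional Hilbert space, this can be checked numerically: the operator norm is the largest singular value, and $\|x\|^{2}$ equals the spectral radius of $x^{*}x$. A small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

op_norm = np.linalg.norm(x, 2)                 # operator norm = largest singular value
xstar_x = x.conj().T @ x                       # the positive element x* x
spec_radius = max(abs(np.linalg.eigvals(xstar_x)))

# C* identity: ||x* x|| = ||x||^2, and that norm is the spectral radius
assert np.isclose(np.linalg.norm(xstar_x, 2), op_norm ** 2)
assert np.isclose(spec_radius, op_norm ** 2)
```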
A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if

for x and y in A: $\pi (xy)=\pi (x)\pi (y)$

for x in A: $\pi (x^{*})=\pi (x)^{*}$
In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity.
A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic.
== Some history: B*-algebras and C*-algebras ==
The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition:
$$\lVert xx^{*}\rVert =\lVert x\rVert ^{2}$$ for all x in the given B*-algebra. (B*-condition)

This condition automatically implies that the *-involution is isometric, that is, $\lVert x\rVert =\lVert x^{*}\rVert$. Hence, $\lVert xx^{*}\rVert =\lVert x\rVert \lVert x^{*}\rVert$, and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition $\lVert x\rVert =\lVert x^{*}\rVert$. For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'.
The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space".
== Structure of C*-algebras ==
C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism.
=== Self-adjoint elements ===
Self-adjoint elements are those of the form {\displaystyle x=x^{*}}. The set of elements of a C*-algebra A of the form {\displaystyle x^{*}x} forms a closed convex cone. This cone is identical to the set of elements of the form {\displaystyle xx^{*}}. Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of {\displaystyle \mathbb {R} }).
The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted {\displaystyle \geq }. In this ordering, a self-adjoint element {\displaystyle x\in A} satisfies {\displaystyle x\geq 0} if and only if the spectrum of {\displaystyle x} is non-negative, if and only if {\displaystyle x=s^{*}s} for some {\displaystyle s\in A}. Two self-adjoint elements {\displaystyle x} and {\displaystyle y} of A satisfy {\displaystyle x\geq y} if {\displaystyle x-y\geq 0}.
This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction.
=== Quotients and approximate identities ===
Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that
{\displaystyle xe_{\lambda }\rightarrow x}
{\displaystyle 0\leq e_{\lambda }\leq e_{\mu }\leq 1\quad {\mbox{ whenever }}\lambda \leq \mu .}
In case A is separable, A has a sequential approximate identity. More generally, A will have a sequential approximate identity if and only if A contains a strictly positive element, i.e. a positive element h such that hAh is dense in A.
Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra.
Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra.
== Examples ==
=== Finite-dimensional C*-algebras ===
The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, Cn, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type:
Theorem. A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum
{\displaystyle A=\bigoplus _{e\in \min A}Ae}
where min A is the set of minimal nonzero self-adjoint central projections of A.
Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A.
A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science.
An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras.
=== C*-algebras of operators ===
The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed adjoint closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem.
=== C*-algebras of compact operators ===
Let H be a separable infinite-dimensional Hilbert space. The algebra K(H) of compact operators on H is a norm closed subalgebra of B(H). It is also closed under involution; hence it is a C*-algebra.
Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras:
Theorem. If A is a C*-subalgebra of K(H), then there exist Hilbert spaces {Hi}i∈I such that
{\displaystyle A\cong \bigoplus _{i\in I}K(H_{i}),}
where the (C*-)direct sum consists of elements (Ti) of the Cartesian product Π K(Hi) with ||Ti|| → 0.
Though K(H) does not have an identity element, a sequential approximate identity for K(H) can be developed. To be specific, H is isomorphic to the space of square summable sequences l2; we may assume that H = l2. For each natural number n let Hn be the subspace of sequences of l2 which vanish for indices k ≥ n and let en be the orthogonal projection onto Hn. The sequence {en}n is an approximate identity for K(H).
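The convergence behind this approximate identity can be illustrated with a finite truncation of ℓ². For the compact diagonal operator x = diag(1, 1/2, 1/3, …), the error after applying the projection e_n is exactly 1/(n + 1). This is a hedged numerical sketch; the truncation dimension N is our choice, not part of the article:

```python
import numpy as np

# Finite-truncation illustration of the approximate identity {e_n} for K(H)
# described above: e_n projects onto the first n coordinates of l^2.  For
# x = diag(1, 1/2, 1/3, ...), ||e_n x - x|| = 1/(n+1), which tends to 0.
N = 1000                                  # our finite model of l^2
x = np.diag(1.0 / np.arange(1, N + 1))    # a compact diagonal operator

def e(n):
    # orthogonal projection onto the first n coordinates
    p = np.zeros((N, N))
    p[:n, :n] = np.eye(n)
    return p

errors = [np.linalg.norm(e(n) @ x - x, 2) for n in (10, 100, 500)]
assert errors == sorted(errors, reverse=True)   # strictly improving
assert abs(errors[0] - 1.0 / 11) < 1e-12        # ||e_10 x - x|| = 1/11
```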
K(H) is a two-sided closed ideal of B(H). For separable Hilbert spaces, it is the unique ideal. The quotient of B(H) by K(H) is the Calkin algebra.
=== Commutative C*-algebras ===
Let X be a locally compact Hausdorff space. The space {\displaystyle C_{0}(X)} of complex-valued continuous functions on X that vanish at infinity (defined in the article on local compactness) forms a commutative C*-algebra under pointwise multiplication and addition. The involution is pointwise conjugation. {\displaystyle C_{0}(X)} has a multiplicative unit element if and only if {\displaystyle X} is compact. As does any C*-algebra, {\displaystyle C_{0}(X)} has an approximate identity. In the case of {\displaystyle C_{0}(X)} this is immediate: consider the directed set of compact subsets of {\displaystyle X}, and for each compact {\displaystyle K} let {\displaystyle f_{K}} be a function of compact support which is identically 1 on {\displaystyle K}. Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such family of functions {\displaystyle \{f_{K}\}} is an approximate identity.
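In {\displaystyle C_{0}(X)} the norm is the sup norm, so the C*-identity reads sup|f·f̄| = (sup|f|)². The following is a hedged numerical sketch with X = R; the sample grid and the particular function are our choices:

```python
import numpy as np

# Sketch of the commutative C*-algebra C_0(R): sampled functions, sup norm,
# pointwise multiplication, pointwise conjugation as the involution.
# We check the C*-identity ||f conj(f)|| = ||f||^2 for a function
# vanishing at infinity.
t = np.linspace(-50, 50, 20001)
f = np.exp(-t**2 / 8) * np.exp(1j * t)   # |f(t)| -> 0 as |t| -> infinity

def sup(g):
    # sup norm, approximated on the sample grid
    return np.max(np.abs(g))

assert abs(sup(f * np.conj(f)) - sup(f) ** 2) < 1e-12
```

The identity is immediate here because |f·f̄| = |f|² pointwise, which is the sense in which the C*-identity is "automatic" for commutative algebras of functions.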
The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra {\displaystyle C_{0}(X)}, where {\displaystyle X} is the space of characters equipped with the weak* topology. Furthermore, if {\displaystyle C_{0}(X)} is isomorphic to {\displaystyle C_{0}(Y)} as C*-algebras, it follows that {\displaystyle X} and {\displaystyle Y} are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs.
=== C*-enveloping algebra ===
Given a Banach *-algebra A with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E(A) and *-morphism π from A into E(A) that is universal, that is, every other continuous *-morphism π ' : A → B factors uniquely through π. The algebra E(A) is called the C*-enveloping algebra of the Banach *-algebra A.
Of particular importance is the C*-algebra of a locally compact group G. This is defined as the enveloping C*-algebra of the group algebra of G. The C*-algebra of G provides context for general harmonic analysis of G in the case G is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra.
=== Von Neumann algebras ===
Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology.
The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it.
== Type for C*-algebras ==
A C*-algebra A is of type I if and only if for all non-degenerate representations π of A the von Neumann algebra π(A)″ (that is, the bicommutant of π(A)) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π(A)″ is a factor.
A locally compact group is said to be of type I if and only if its group C*-algebra is type I.
However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties.
== C*-algebras and quantum field theory ==
In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x).
This C*-algebra approach is used in the Haag–Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra.
== See also ==
Banach algebra
Banach *-algebra
*-algebra
Hilbert C*-module
Operator K-theory
Operator system, a unital subspace of a C*-algebra that is *-closed.
Gelfand–Naimark–Segal construction
Jordan operator algebra
== Notes ==
== References ==
Arveson, W. (1976), An Invitation to C*-Algebra, Springer-Verlag, ISBN 0-387-90176-0. An excellent introduction to the subject, accessible for those with a knowledge of basic functional analysis.
Connes, Alain (1994), Non-commutative geometry, Gulf Professional, ISBN 0-12-185860-X. This book is widely regarded as a source of new research material, providing much supporting intuition, but it is difficult.
Dixmier, Jacques (1969), Les C*-algèbres et leurs représentations, Gauthier-Villars, ISBN 0-7204-0762-1. This is a somewhat dated reference, but is still considered as a high-quality technical exposition. It is available in English from North Holland press.
Doran, Robert S.; Belfi, Victor A. (1986), Characterizations of C*-algebras: The Gelfand-Naimark Theorems, CRC Press, ISBN 978-0-8247-7569-8.
Emch, G. (1972), Algebraic Methods in Statistical Mechanics and Quantum Field Theory, Wiley-Interscience, ISBN 0-471-23900-3. Mathematically rigorous reference which provides extensive physics background.
A.I. Shtern (2001) [1994], "C*-algebra", Encyclopedia of Mathematics, EMS Press
Sakai, S. (1971), C*-algebras and W*-algebras, Springer, ISBN 3-540-63633-1.
Segal, Irving (1947), "Irreducible representations of operator algebras", Bulletin of the American Mathematical Society, 53 (2): 73–88, doi:10.1090/S0002-9904-1947-08742-5.
In geometric group theory, a graph of groups is an object consisting of a collection of groups indexed by the vertices and edges of a graph, together with a family of monomorphisms of the edge groups into the vertex groups.
There is a unique group, called the fundamental group, canonically associated to each finite connected graph of groups. It admits an orientation-preserving action on a tree: the original graph of groups can be recovered from the quotient graph and the stabilizer subgroups. This theory, commonly referred to as Bass–Serre theory, is due to the work of Hyman Bass and Jean-Pierre Serre.
== Definition ==
A graph of groups over a graph Y is an assignment to each vertex x of Y of a group Gx and to each edge y of Y of a group Gy as well as monomorphisms φy,0 and φy,1 mapping Gy into the groups assigned to the vertices at its ends.
== Fundamental group ==
Let T be a spanning tree for Y and define the fundamental group Γ to be the group generated by the vertex groups Gx and elements y for each edge of Y with the following relations:
ȳ = y−1, where ȳ denotes the edge y with the reverse orientation.
y φy,0(x) y−1 = φy,1(x) for all x in Gy.
y = 1 if y is an edge in T.
This definition is independent of the choice of T.
The benefit of defining the fundamental groupoid of a graph of groups, as shown by Higgins (1976), is that it is defined independently of base point or tree. Higgins also proves a nice normal form for the elements of the fundamental groupoid. This includes normal form theorems for a free product with amalgamation and for an HNN extension (Bass 1993).
== Structure theorem ==
Let Γ be the fundamental group corresponding to the spanning tree T. For every vertex x and edge y, Gx and Gy can be identified with their images in Γ. It is possible to define a graph with vertices and edges the disjoint union of all coset spaces Γ/Gx and Γ/Gy respectively. This graph is a tree, called the universal covering tree, on which Γ acts. It admits the graph Y as fundamental domain. The graph of groups given by the stabilizer subgroups on the fundamental domain corresponds to the original graph of groups.
== Examples ==
A graph of groups on a graph with one edge and two vertices corresponds to a free product with amalgamation.
A graph of groups on a single vertex with a loop corresponds to an HNN extension.
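The first example can be spelled out from the presentation given above. The following display (our notation) takes Y to be a single edge y with endpoints u and v; the spanning tree is all of Y, so y = 1, and the remaining relations identify the two embedded copies of the edge group:

```latex
% Graph of groups on one edge y with vertices u, v; T = Y forces y = 1,
% and the relation y \varphi_{y,0}(x) y^{-1} = \varphi_{y,1}(x) becomes
% \varphi_{y,0}(x) = \varphi_{y,1}(x) for all x in G_y.  Hence
\Gamma
  = \bigl\langle\, G_u, G_v \;\bigm|\; \varphi_{y,0}(x) = \varphi_{y,1}(x)
    \ \text{for all } x \in G_y \,\bigr\rangle
  = G_u *_{G_y} G_v ,
% the free product of G_u and G_v amalgamated over G_y.
```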
== Generalisations ==
The simplest possible generalisation of a graph of groups is a 2-dimensional complex of groups. These are modeled on orbifolds arising from cocompact properly discontinuous actions of discrete groups on 2-dimensional simplicial complexes that have the structure of CAT(0) spaces. The quotient of the simplicial complex has finite stabilizer groups attached to vertices, edges and triangles together with monomorphisms for every inclusion of simplices. A complex of groups is said to be developable if it arises as the quotient of a CAT(0) simplicial complex. Developability is a non-positive curvature condition on the complex of groups: it can be verified locally by checking that all circuits occurring in the links of vertices have length at least six. Such complexes of groups originally arose in the theory of 2-dimensional Bruhat–Tits buildings; their general definition and continued study have been inspired by the ideas of Gromov.
== See also ==
Bass–Serre theory
Right-angled Artin group
== References ==
Bass, Hyman (1993), "Covering theory for graphs of groups", Journal of Pure and Applied Algebra, 89 (1–2): 3–47, doi:10.1016/0022-4049(93)90085-8, MR 1239551.
Bridson, Martin R.; Haefliger, André (1999), Metric Spaces of Non-Positive Curvature, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319, Berlin: Springer-Verlag, ISBN 3-540-64324-9, MR 1744486.
Dicks, Warren; Dunwoody, M. J. (1989), Groups Acting on Graphs, Cambridge Studies in Advanced Mathematics, vol. 17, Cambridge: Cambridge University Press, ISBN 0-521-23033-0, MR 1001965.
Haefliger, André (1990), "Orbi-espaces [Orbispaces]", Sur les groupes hyperboliques d'après Mikhael Gromov (Bern, 1988), Progress in Mathematics (in French), vol. 83, Boston, MA: Birkhäuser, pp. 203–213, ISBN 0-8176-3508-4, MR 1086659
Higgins, P. J. (1976), "The fundamental groupoid of a graph of groups", Journal of the London Mathematical Society, 2nd Series, 13 (1): 145–149, doi:10.1112/jlms/s2-13.1.145, MR 0401927
Scott, Peter; Wall, Terry (1979), "Topological Methods in Group Theory", Homological Group Theory, London Math. Soc. Lecture Note Ser., vol. 36, Cambridge: Cambridge University Press, pp. 137–203, ISBN 0-521-22729-1, MR 0564422.
Serre, Jean-Pierre (2003), Trees, Springer Monographs in Mathematics, Berlin: Springer-Verlag, ISBN 3-540-44237-5, MR 1954121. Translated by John Stillwell from "arbres, amalgames, SL2", written with the collaboration of Hyman Bass, 3rd edition, astérisque 46 (1983). See Chapter I.5.
In mathematics, Out(Fn) is the outer automorphism group of a free group on n generators. These groups play a universal role in geometric group theory, as they act on the set of presentations with {\displaystyle n} generators of any finitely generated group. Despite geometric analogies with general linear groups and mapping class groups, they are generally regarded as more complex, which has fueled the development of new techniques in the field.
== Definition ==
Let {\displaystyle F_{n}} be the free nonabelian group of rank {\displaystyle n\geq 1}. The set of inner automorphisms of {\displaystyle F_{n}}, i.e. automorphisms obtained as conjugations by an element of {\displaystyle F_{n}}, is a normal subgroup {\displaystyle \mathrm {Inn} (F_{n})\triangleleft \mathrm {Aut} (F_{n})}. The outer automorphism group of {\displaystyle F_{n}} is the quotient
{\displaystyle \mathrm {Out} (F_{n}):=\mathrm {Aut} (F_{n})/\mathrm {Inn} (F_{n}).}
An element of {\displaystyle \mathrm {Out} (F_{n})} is called an outer class.
== Relations to other groups ==
=== Linear groups ===
The abelianization map {\displaystyle F_{n}\to \mathbb {Z} ^{n}} induces a homomorphism from {\displaystyle \mathrm {Out} (F_{n})} to the general linear group {\displaystyle \mathrm {GL} (n,\mathbb {Z} )}, the latter being the automorphism group of {\displaystyle \mathbb {Z} ^{n}}. This map is onto, making {\displaystyle \mathrm {Out} (F_{n})} a group extension,
{\displaystyle 1\to \mathrm {Tor} (F_{n})\to \mathrm {Out} (F_{n})\to \mathrm {GL} (n,\mathbb {Z} )\to 1}.
The kernel {\displaystyle \mathrm {Tor} (F_{n})} is the Torelli group of {\displaystyle F_{n}}.
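The abelianization homomorphism can be made concrete in code: an automorphism of F₂ = ⟨a, b⟩, given by words for the images of the generators, is sent to the integer matrix of exponent sums, which lands in GL(2, Z). The helper names below are ours, and words are encoded as strings with capital letters denoting inverses; this is a hedged sketch, not a library API:

```python
import numpy as np

# Sketch of the abelianization map Out(F_2) -> GL(2, Z): send an
# automorphism to the matrix of exponent sums of the generator images.
def exponent_sums(word, gens=("a", "b")):
    # word is a string over a, b, A, B, with capitals denoting inverses
    return [sum(1 if c == g else -1 if c == g.upper() else 0 for c in word)
            for g in gens]

def abelianized(phi):
    # phi maps each generator name to its image word; the columns of the
    # matrix are the exponent-sum vectors of those images
    return np.array([exponent_sums(phi[g]) for g in ("a", "b")]).T

phi = {"a": "ab", "b": "b"}                # Nielsen automorphism a -> ab, b -> b
m = abelianized(phi)
assert abs(round(np.linalg.det(m))) == 1   # determinant ±1: m lies in GL(2, Z)
```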
The map {\displaystyle \mathrm {Out} (F_{2})\to \mathrm {GL} (2,\mathbb {Z} )} is an isomorphism. This no longer holds for higher ranks: the Torelli group of {\displaystyle F_{3}} contains the automorphism fixing two basis elements and multiplying the remaining one by the commutator of the two others.
=== Aut(Fn) ===
By definition, {\displaystyle \mathrm {Aut} (F_{n})} is an extension of the inner automorphism group {\displaystyle \mathrm {Inn} (F_{n})} by {\displaystyle \mathrm {Out} (F_{n})}. The inner automorphism group itself is the image of the action by conjugation, which has kernel the center {\displaystyle Z(F_{n})}. Since {\displaystyle Z(F_{n})} is trivial for {\displaystyle n\geq 2}, this gives a short exact sequence
{\displaystyle 1\rightarrow F_{n}\rightarrow \mathrm {Aut} (F_{n})\rightarrow \mathrm {Out} (F_{n})\rightarrow 1.}
For all {\displaystyle n\geq 2}, there are embeddings {\displaystyle \mathrm {Aut} (F_{n})\longrightarrow \mathrm {Out} (F_{n+1})} obtained by taking the outer class of the extension of an automorphism of {\displaystyle F_{n}} fixing the additional generator. Therefore, when studying properties that are inherited by subgroups and quotients, the theories of {\displaystyle \mathrm {Aut} (F_{n})} and {\displaystyle \mathrm {Out} (F_{n})} are essentially the same.
=== Mapping class groups of surfaces ===
Because {\displaystyle F_{n}} is the fundamental group of a bouquet of n circles, {\displaystyle \mathrm {Out} (F_{n})} can be described topologically as the mapping class group of a bouquet of n circles (in the homotopy category), in analogy to the mapping class group of a closed surface, which is isomorphic to the outer automorphism group of the fundamental group of that surface.
Given any finite graph with fundamental group {\displaystyle F_{n}}, the graph can be "thickened" to a surface {\displaystyle S} with one boundary component that retracts onto the graph. The Birman exact sequence yields a map from the mapping class group {\displaystyle \mathrm {MCG} (S)\longrightarrow \mathrm {Out} (F_{n})}. The elements of {\displaystyle \mathrm {Out} (F_{n})} that are in the image of such a map are called geometric. Such outer classes must leave invariant the cyclic word corresponding to the boundary, hence there are many non-geometric outer classes. A converse is true under some irreducibility assumptions, providing geometric realization for outer classes fixing a conjugacy class.
== Known results ==
For {\textstyle n\geq 4}, {\textstyle \mathrm {Out} (F_{n})} is not linear, i.e. it has no faithful representation by matrices over a field (Formanek, Procesi, 1992);
For {\displaystyle n\geq 3}, the isoperimetric function of {\displaystyle \mathrm {Out} (F_{n})} is exponential (Hatcher, Vogtmann, 1996);
The Tits alternative holds in {\displaystyle \mathrm {Out} (F_{n})}: each subgroup is either virtually solvable or else it contains a free group of rank 2 (Bestvina, Feighn, Handel, 2000);
For {\textstyle n\geq 3}, {\textstyle \mathrm {Out} (\mathrm {Out} (F_{n}))=1} (Bridson and Vogtmann, 2000);
Every solvable subgroup of {\displaystyle \mathrm {Out} (F_{n})} has a finitely generated free abelian subgroup of finite index (Bestvina, Feighn, Handel, 2004);
For {\displaystyle i>0}, all but finitely many of the {\displaystyle i}th-degree homology morphisms induced by the sequence {\displaystyle \ldots \rightarrow \mathrm {Out} (F_{n-1})\rightarrow \mathrm {Out} (F_{n})\rightarrow \mathrm {Out} (F_{n+1})\rightarrow \ldots } are isomorphisms (Hatcher and Vogtmann, 2004);
For {\textstyle n\geq 2}, the reduced {\textstyle C^{*}}-algebra of {\textstyle \mathrm {Out} (F_{n})} (i.e. the closure of its image under the regular representation) is simple;
For {\textstyle n\geq 4}, if {\textstyle \Gamma } is a finite index subgroup of {\textstyle \mathrm {Out} (F_{n})}, then any subgroup of {\textstyle \mathrm {Out} (F_{n})} isomorphic to {\textstyle \Gamma } is a conjugate of {\textstyle \Gamma } (Farb and Handel, 2007);
For {\textstyle n\geq 5}, {\textstyle \mathrm {Out} (F_{n})} has Kazhdan's property (T) (Kaluba, Nowak, Ozawa, 2019 for {\textstyle n=5}; Kaluba, Kielak, Nowak, 2021 for {\displaystyle n\geq 6});
Actions on hyperbolic complexes satisfying acylindricity conditions have been constructed, in analogy with complexes such as the complex of curves for mapping class groups;
For {\textstyle n\geq 3}, {\textstyle \mathrm {Out} (F_{n})} is rigid with respect to measure equivalence (Guirardel and Horbez, 2021 preprint).
== Outer space ==
Out(Fn) acts geometrically on a cell complex known as Culler–Vogtmann Outer space, which can be thought of as the Fricke-Teichmüller space for a bouquet of circles.
=== Definition ===
A point of the outer space is essentially an {\displaystyle \mathbb {R} }-graph X homotopy equivalent to a bouquet of n circles, together with a certain choice of a free homotopy class of a homotopy equivalence from X to the bouquet of n circles. An {\displaystyle \mathbb {R} }-graph is just a weighted graph with weights in {\displaystyle \mathbb {R} }. The sum of all weights should be 1 and all weights should be positive. To avoid ambiguity (and to get a finite-dimensional space) it is furthermore required that the valency of each vertex should be at least 3.
A more descriptive view avoiding the homotopy equivalence f is the following. We may fix an identification of the fundamental group of the bouquet of n circles with the free group {\displaystyle F_{n}} in n variables. Furthermore, we may choose a maximal tree in X and choose for each remaining edge a direction. We will now assign to each remaining edge e a word in {\displaystyle F_{n}} in the following way. Consider the closed path starting with e and then going back to the origin of e in the maximal tree. Composing this path with f we get a closed path in a bouquet of n circles and hence an element in its fundamental group {\displaystyle F_{n}}. This element is not well defined; if we change f by a free homotopy we obtain another element. It turns out that those two elements are conjugate to each other, and hence we can choose the unique cyclically reduced element in this conjugacy class. It is possible to reconstruct the free homotopy type of f from these data. This view has the advantage that it avoids the extra choice of f, and the disadvantage that additional ambiguity arises, because one has to choose a maximal tree and an orientation of the remaining edges.
The operation of Out(Fn) on the outer space is defined as follows. Every automorphism g of {\displaystyle F_{n}} induces a self homotopy equivalence g′ of the bouquet of n circles. Composing f with g′ gives the desired action. In the other model it is just application of g followed by cyclic reduction of the resulting word.
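The cyclic reduction used in this action can be sketched in code. The helpers below are our own (not from any library): words in the free group are strings over a, b with capital letters denoting inverses; we freely reduce, then cancel inverse pairs across the two ends:

```python
# Sketch (our code) of cyclic reduction of words in a free group:
# strings over a, b, A, B, with capitals denoting inverses.
def inv(c):
    # inverse of a single letter: swap case
    return c.swapcase()

def free_reduce(w):
    # cancel adjacent inverse pairs with a stack
    out = []
    for c in w:
        if out and out[-1] == inv(c):
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def cyclic_reduce(w):
    # freely reduce, then cancel inverse pairs at the two ends
    w = free_reduce(w)
    while len(w) > 1 and w[0] == inv(w[-1]):
        w = w[1:-1]
    return w

assert cyclic_reduce("Aba") == "b"    # a conjugate of b
assert cyclic_reduce("baB") == "a"    # a conjugate of a
```

Each conjugacy class contains a unique cyclically reduced word up to cyclic permutation, which is what makes the edge labels in the second model well defined.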
=== Connection to length functions ===
Every point in the outer space determines a unique length function {\displaystyle l_{X}\colon F_{n}\to \mathbb {R} }. A word in {\displaystyle F_{n}} determines via the chosen homotopy equivalence a closed path in X. The length of the word is then the minimal length of a path in the free homotopy class of that closed path. Such a length function is constant on each conjugacy class. The assignment {\displaystyle X\mapsto l_{X}} defines an embedding of the outer space into some infinite-dimensional projective space.
=== Simplicial structure on the outer space ===
In the second model, an open simplex is given by all those {\displaystyle \mathbb {R} }-graphs which combinatorially have the same underlying graph and whose edges are labeled with the same words (only the lengths of the edges may differ). The boundary simplices of such a simplex consist of all graphs that arise from this graph by collapsing an edge. If that edge is a loop, it cannot be collapsed without changing the homotopy type of the graph; hence there is no corresponding boundary simplex. So one can think of the outer space as a simplicial complex with some simplices removed. It is easy to verify that the action of {\displaystyle \mathrm {Out} (F_{n})} is simplicial and has finite isotropy groups.
== See also ==
Train track map
Automorphism group of a free group
Outer space
== References ==
Culler, Marc; Vogtmann, Karen (1986). "Moduli of graphs and automorphisms of free groups" (PDF). Inventiones Mathematicae. 84 (1): 91–119. Bibcode:1986InMat..84...91C. doi:10.1007/BF01388734. MR 0830040.
Vogtmann, Karen (2002). "Automorphisms of free groups and outer space" (PDF). Geometriae Dedicata. 94: 1–31. doi:10.1023/A:1020973910646. MR 1950871.
Vogtmann, Karen (2008), "What is … outer space?" (PDF), Notices of the American Mathematical Society, 55 (7): 784–786, MR 2436509
In geometric group theory and dynamical systems the iterated monodromy group of a covering map is a group describing the monodromy action of the fundamental group on all iterations of the covering. A single covering map between spaces is therefore used to create a tower of coverings, by placing the covering over itself repeatedly. In terms of the Galois theory of covering spaces, this construction on spaces is expected to correspond to a construction on groups. The iterated monodromy group provides this construction, and it is applied to encode the combinatorics and symbolic dynamics of the covering, and provide examples of self-similar groups.
== Definition ==
The iterated monodromy group of f is the following quotient group:
{\displaystyle \mathrm {IMG} f:={\frac {\pi _{1}(X,t)}{\bigcap _{n\in \mathbb {N} }\mathrm {Ker} \,\digamma ^{n}}}}
where:
{\displaystyle f:X_{1}\rightarrow X} is a covering of a path-connected and locally path-connected topological space X by its subset {\displaystyle X_{1}},
{\displaystyle \pi _{1}(X,t)} is the fundamental group of X, and
{\displaystyle \digamma :\pi _{1}(X,t)\rightarrow \mathrm {Sym} \,f^{-1}(t)} is the monodromy action for f.
{\displaystyle \digamma ^{n}:\pi _{1}(X,t)\rightarrow \mathrm {Sym} \,f^{-n}(t)} is the monodromy action of the {\displaystyle n^{\mathrm {th} }} iteration of f, {\displaystyle \forall n\in \mathbb {N} _{0}}.
== Action ==
The iterated monodromy group acts by automorphisms on the rooted tree of preimages
{\displaystyle T_{f}:=\bigsqcup _{n\geq 0}f^{-n}(t),}
where a vertex {\displaystyle z\in f^{-n}(t)} is connected by an edge with {\displaystyle f(z)\in f^{-(n-1)}(t)}.
== Examples ==
=== Iterated monodromy groups of rational functions ===
Let:
f be a complex rational function,
{\displaystyle P_{f}} be the union of forward orbits of its critical points (the post-critical set).
If {\displaystyle P_{f}} is finite (or has a finite set of accumulation points), then the iterated monodromy group of f is the iterated monodromy group of the covering {\displaystyle f:{\hat {C}}\setminus f^{-1}(P_{f})\rightarrow {\hat {C}}\setminus P_{f}}, where {\displaystyle {\hat {C}}} is the Riemann sphere.
Iterated monodromy groups of rational functions usually have exotic properties from the point of view of classical group theory. Most of them are infinitely presented, many have intermediate growth.
==== IMG of polynomials ====
The Basilica group is the iterated monodromy group of the polynomial {\displaystyle z^{2}-1}.
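The rooted tree of preimages underlying this example can be computed numerically. The sketch below is ours: it builds the first levels of T_f for f(z) = z² − 1, where level n contains the 2^n preimages f^{-n}(t) and each vertex is attached to its parent f(z). The basepoint t = 2 is our arbitrary choice; it avoids the post-critical set {−1, 0}:

```python
import cmath

# Numerical sketch (our code) of the rooted tree of preimages T_f for
# f(z) = z^2 - 1, whose iterated monodromy group is the Basilica group.
def preimages(w):
    r = cmath.sqrt(w + 1)        # f(z) = z^2 - 1  =>  z = ±sqrt(w + 1)
    return [r, -r]

levels = [[2.0 + 0j]]            # level 0: the basepoint t = 2
for _ in range(4):
    levels.append([z for w in levels[-1] for z in preimages(w)])

assert [len(l) for l in levels] == [1, 2, 4, 8, 16]   # 2^n vertices at level n
for n in range(1, 5):            # every vertex maps to its parent under f
    for i, z in enumerate(levels[n]):
        assert abs((z * z - 1) - levels[n - 1][i // 2]) < 1e-9
```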
== See also ==
Growth rate (group theory)
Amenable group
Complex dynamics
Julia set
== References ==
Volodymyr Nekrashevych, Self-Similar Groups, Mathematical Surveys and Monographs Vol. 117, Amer. Math. Soc., Providence, RI, 2005; ISBN 0-412-34550-1.
Kevin M. Pilgrim, Combinations of Complex Dynamical Systems, Springer-Verlag, Berlin, 2003; ISBN 3-540-20173-4.
== External links ==
arXiv.org - Iterated Monodromy Group - preprints about the Iterated Monodromy Group.
Laurent Bartholdi's page - Movies illustrating the Dehn twists about a Julia set.
mathworld.wolfram.com - The Monodromy Group page. | Wikipedia/Iterated_monodromy_group |
The Novikov conjecture is one of the most important unsolved problems in topology. It is named for Sergei Novikov, who originally posed the conjecture in 1965.
The Novikov conjecture concerns the homotopy invariance of certain polynomials in the Pontryagin classes of a manifold, arising from the fundamental group. According to the Novikov conjecture, the higher signatures, which are certain numerical invariants of smooth manifolds, are homotopy invariants.
The conjecture has been proved for finitely generated abelian groups. It is not yet known whether the Novikov conjecture holds true for all groups. There are no known counterexamples to the conjecture.
== Precise formulation of the conjecture ==
Let $G$ be a discrete group and $BG$ its classifying space, which is an Eilenberg–MacLane space of type $K(G,1)$, and therefore unique up to homotopy equivalence as a CW complex. Let $f\colon M\rightarrow BG$ be a continuous map from a closed oriented $n$-dimensional manifold $M$ to $BG$, and $x\in H^{n-4i}(BG;\mathbb{Q}).$
Novikov considered the numerical expression, found by evaluating the cohomology class in top dimension against the fundamental class $[M]$, and known as a higher signature:

$\left\langle f^{*}(x)\cup L_{i}(M),[M]\right\rangle \in \mathbb{Q}$
where $L_{i}$ is the $i^{\mathrm{th}}$ Hirzebruch polynomial, or sometimes (less descriptively) the $i^{\mathrm{th}}$ $L$-polynomial. For each $i$, this polynomial can be expressed in the Pontryagin classes of the manifold's tangent bundle. The Novikov conjecture states that the higher signature is an invariant of the oriented homotopy type of $M$ for every such map $f$ and every such class $x$; in other words, if $h\colon M'\rightarrow M$ is an orientation-preserving homotopy equivalence, the higher signature associated to $f\circ h$ is equal to that associated to $f$.
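For orientation, the first few Hirzebruch $L$-polynomials can be written out explicitly in terms of the Pontryagin classes $p_{j}$ (as tabulated, for instance, in Milnor and Stasheff's Characteristic Classes, cited below):

```latex
L_{1}=\tfrac{1}{3}\,p_{1},\qquad
L_{2}=\tfrac{1}{45}\left(7p_{2}-p_{1}^{2}\right),\qquad
L_{3}=\tfrac{1}{945}\left(62p_{3}-13p_{1}p_{2}+2p_{1}^{3}\right).
```

In the simplest case $n=4i$ with $x=1$, the higher signature reduces, by the Hirzebruch signature theorem, to the ordinary signature of $M$, which is a homotopy invariant; the conjecture concerns the classes coming from the fundamental group.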
== Connection with the Borel conjecture ==
The Novikov conjecture is equivalent to the rational injectivity of the assembly map in L-theory. The Borel conjecture on the rigidity of aspherical manifolds is equivalent to the assembly map being an isomorphism.
== References ==
Davis, James F. (2000), "Manifold aspects of the Novikov conjecture" (PDF), in Cappell, Sylvain; Ranicki, Andrew; Rosenberg, Jonathan (eds.), Surveys on surgery theory. Vol. 1, Annals of Mathematics Studies, Princeton University Press, pp. 195–224, ISBN 978-0-691-04937-3, MR 1747536
John Milnor and James D. Stasheff, Characteristic Classes, Annals of Mathematics Studies 76, Princeton (1974).
Sergei P. Novikov, Algebraic construction and properties of Hermitian analogs of k-theory over rings with involution from the point of view of Hamiltonian formalism. Some applications to differential topology and to the theory of characteristic classes. Izv.Akad.Nauk SSSR, v. 34, 1970 I N2, pp. 253–288; II: N3, pp. 475–500. English summary in Actes Congr. Intern. Math., v. 2, 1970, pp. 39–45.
== External links ==
Biography of Sergei Novikov
Novikov Conjecture Bibliography
Novikov Conjecture 1993 Oberwolfach Conference Proceedings, Volume 1
Novikov Conjecture 1993 Oberwolfach Conference Proceedings, Volume 2
2004 Oberwolfach Seminar notes on the Novikov Conjecture (pdf)
Scholarpedia article by S.P. Novikov (2010)
The Novikov Conjecture at the Manifold Atlas | Wikipedia/Novikov_conjecture |
In mathematics, the ping-pong lemma, or table-tennis lemma, is any of several mathematical statements that ensure that several elements in a group acting on a set freely generate a free subgroup of that group.
== History ==
The ping-pong argument goes back to the late 19th century and is commonly attributed to Felix Klein, who used it to study subgroups of Kleinian groups, that is, of discrete groups of isometries of the hyperbolic 3-space or, equivalently, Möbius transformations of the Riemann sphere. The ping-pong lemma was a key tool used by Jacques Tits in his 1972 paper containing the proof of a famous result now known as the Tits alternative. The result states that a finitely generated linear group is either virtually solvable or contains a free subgroup of rank two. The ping-pong lemma and its variations are widely used in geometric topology and geometric group theory.
Modern versions of the ping-pong lemma can be found in many books such as Lyndon & Schupp, de la Harpe, Bridson & Haefliger and others.
== Formal statements ==
=== Ping-pong lemma for several subgroups ===
This version of the ping-pong lemma ensures that several subgroups of a group acting on a set generate a free product. The following statement appears in Olijnyk and Suchchansky (2004), and the proof is from de la Harpe (2000).
Let G be a group acting on a set X and let H1, H2, ..., Hk be subgroups of G where k ≥ 2, such that at least one of these subgroups has order greater than 2.
Suppose there exist pairwise disjoint nonempty subsets X1, X2, ..., Xk of X such that the following holds:
For any i ≠ s and for any h in Hi, h ≠ 1 we have h(Xs) ⊆ Xi.
Then $\langle H_{1},\dots ,H_{k}\rangle =H_{1}\ast \dots \ast H_{k}.$
==== Proof ====
By the definition of free product, it suffices to check that a given (nonempty) reduced word represents a nontrivial element of $G$. Let $w$ be such a word of length $m\geq 2$, and let $w=\prod_{i=1}^{m}w_{i},$ where $w_{i}\in H_{\alpha_{i}}$ for some $\alpha_{i}\in \{1,\dots ,k\}$. Since $w$ is reduced, we have $\alpha_{i}\neq \alpha_{i+1}$ for any $i=1,\dots ,m-1$ and each $w_{i}$ is distinct from the identity element of $H_{\alpha_{i}}$. We then let $w$ act on an element of one of the sets $X_{i}$. As we assume that at least one subgroup $H_{i}$ has order at least 3, without loss of generality we may assume that $H_{1}$ has order at least 3. We first make the assumption that $\alpha_{1}$ and $\alpha_{m}$ are both 1 (which implies $m\geq 3$). From here we consider $w$ acting on $X_{2}$. We get the following chain of containments:
$w(X_{2})\subseteq \prod_{i=1}^{m-1}w_{i}(X_{1})\subseteq \prod_{i=1}^{m-2}w_{i}(X_{\alpha_{m-1}})\subseteq \dots \subseteq w_{1}(X_{\alpha_{2}})\subseteq X_{1}.$
By the assumption that the different $X_{i}$'s are disjoint, we conclude that $w$ acts nontrivially on some element of $X_{2}$; thus $w$ represents a nontrivial element of $G$.
To finish the proof we must consider the three cases:

if $\alpha_{1}=1,\ \alpha_{m}\neq 1$, then let $h\in H_{1}\setminus \{w_{1}^{-1},1\}$ (such an $h$ exists since by assumption $H_{1}$ has order at least 3);
if $\alpha_{1}\neq 1,\ \alpha_{m}=1$, then let $h\in H_{1}\setminus \{w_{m},1\}$;
and if $\alpha_{1}\neq 1,\ \alpha_{m}\neq 1$, then let $h\in H_{1}\setminus \{1\}$.
In each case, $hwh^{-1}$ after reduction becomes a reduced word with its first and last letter in $H_{1}$. Finally, $hwh^{-1}$ represents a nontrivial element of $G$, and so does $w$. This proves the claim.
=== The Ping-pong lemma for cyclic subgroups ===
Let G be a group acting on a set X. Let a1, ..., ak be elements of G of infinite order, where k ≥ 2. Suppose there exist disjoint nonempty subsets X1+, X1−, ..., Xk+, Xk− of X with the following properties:
ai(X − Xi–) ⊆ Xi+ for i = 1, ..., k;
ai−1(X − Xi+) ⊆ Xi– for i = 1, ..., k.
Then the subgroup H = ⟨a1, ..., ak⟩ ≤ G generated by a1, ..., ak is free with free basis {a1, ..., ak}.
==== Proof ====
This statement follows as a corollary of the version for general subgroups if we let Xi = Xi+ ∪ Xi− and let Hi = ⟨ai⟩.
== Examples ==
=== Special linear group example ===
One can use the ping-pong lemma to prove that the subgroup H = ⟨A,B⟩ ≤ SL2(Z), generated by the matrices
$A={\begin{pmatrix}1&2\\0&1\end{pmatrix}}$ and $B={\begin{pmatrix}1&0\\2&1\end{pmatrix}}$ is free of rank two.
==== Proof ====
Indeed, let H1 = ⟨A⟩ and H2 = ⟨B⟩ be cyclic subgroups of SL2(Z) generated by A and B accordingly. It is not hard to check that A and B are elements of infinite order in SL2(Z) and that
$H_{1}=\{A^{n}\mid n\in \mathbb{Z}\}=\left\{{\begin{pmatrix}1&2n\\0&1\end{pmatrix}}:n\in \mathbb{Z}\right\}$ and $H_{2}=\{B^{n}\mid n\in \mathbb{Z}\}=\left\{{\begin{pmatrix}1&0\\2n&1\end{pmatrix}}:n\in \mathbb{Z}\right\}.$
Consider the standard action of SL2(Z) on R2 by linear transformations. Put
$X_{1}=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb{R}^{2}:|x|>|y|\right\}$ and $X_{2}=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb{R}^{2}:|x|<|y|\right\}.$
It is not hard to check, using the above explicit descriptions of H1 and H2, that for every nontrivial g ∈ H1 we have g(X2) ⊆ X1 and that for every nontrivial g ∈ H2 we have g(X1) ⊆ X2. Using the alternative form of the ping-pong lemma, for two subgroups, given above, we conclude that H = H1 ∗ H2. Since the groups H1 and H2 are infinite cyclic, it follows that H is a free group of rank two.
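The two containments used above are easy to spot-check numerically. The following sketch (an illustration only; the function names are our own) verifies, over a small range of powers and integer vectors, that every nontrivial power of A maps X2 into X1 and every nontrivial power of B maps X1 into X2:

```python
def A_pow(n, v):          # A^n acts as (x, y) -> (x + 2n*y, y)
    x, y = v
    return (x + 2 * n * y, y)

def B_pow(n, v):          # B^n acts as (x, y) -> (x, y + 2n*x)
    x, y = v
    return (x, y + 2 * n * x)

def in_X1(v): return abs(v[0]) > abs(v[1])
def in_X2(v): return abs(v[0]) < abs(v[1])

for n in range(-5, 6):
    if n == 0:
        continue
    for x in range(-10, 11):
        for y in range(-10, 11):
            if in_X2((x, y)):
                assert in_X1(A_pow(n, (x, y)))   # g(X2) ⊆ X1 for nontrivial g in H1
            if in_X1((x, y)):
                assert in_X2(B_pow(n, (x, y)))   # g(X1) ⊆ X2 for nontrivial g in H2
print("containments verified")
```

The inequality behind the first check is |x + 2ny| ≥ 2|n||y| − |x| > |y| whenever |x| < |y| and n ≠ 0, which is exactly why the coefficient 2 in the matrices matters (with 1 in place of 2, the argument fails).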
=== Word-hyperbolic group example ===
Let G be a word-hyperbolic group which is torsion-free, that is, with no nonidentity elements of finite order. Let g, h ∈ G be two non-commuting elements, that is such that gh ≠ hg. Then there exists M ≥ 1 such that for any integers n ≥ M, m ≥ M the subgroup H = ⟨gn, hm⟩ ≤ G is free of rank two.
==== Sketch of the proof ====
The group G acts on its hyperbolic boundary ∂G by homeomorphisms. It is known that if a in G is a nonidentity element then a has exactly two distinct fixed points, a∞ and a−∞ in ∂G and that a∞ is an attracting fixed point while a−∞ is a repelling fixed point.
Since g and h do not commute, basic facts about word-hyperbolic groups imply that g∞, g−∞, h∞ and h−∞ are four distinct points in ∂G. Take disjoint neighborhoods U+, U–, V+, and V– of g∞, g−∞, h∞ and h−∞ in ∂G respectively.
Then the attracting/repelling properties of the fixed points of g and h imply that there exists M ≥ 1 such that for any integers n ≥ M, m ≥ M we have:
gn(∂G – U–) ⊆ U+
g−n(∂G – U+) ⊆ U–
hm(∂G – V–) ⊆ V+
h−m(∂G – V+) ⊆ V–
The ping-pong lemma now implies that H = ⟨gn, hm⟩ ≤ G is free of rank two.
== Applications of the ping-pong lemma ==
The ping-pong lemma is used in Kleinian groups to study their so-called Schottky subgroups. In the Kleinian groups context the ping-pong lemma can be used to show that a particular group of isometries of the hyperbolic 3-space is not just free but also properly discontinuous and geometrically finite.
Similar Schottky-type arguments are widely used in geometric group theory, particularly for subgroups of word-hyperbolic groups and for automorphism groups of trees.
The ping-pong lemma is also used for studying Schottky-type subgroups of mapping class groups of Riemann surfaces, where the set on which the mapping class group acts is the Thurston boundary of the Teichmüller space. A similar argument is also utilized in the study of subgroups of the outer automorphism group of a free group.
One of the most famous applications of the ping-pong lemma is in Jacques Tits' proof of the so-called Tits alternative for linear groups; see the references for an overview of Tits' proof and an explanation of the ideas involved, including the use of the ping-pong lemma.
There are generalizations of the ping-pong lemma that produce not just free products but also amalgamated free products and HNN extensions. These generalizations are used, in particular, in the proof of Maskit's Combination Theorem for Kleinian groups.
There are also versions of the ping-pong lemma which guarantee that several elements in a group generate a free semigroup. Such versions are available both in the general context of a group action on a set, and for specific types of actions, e.g. in the context of linear groups, groups acting on trees and others.
== References ==
== See also ==
Free group
Free product
Kleinian group
Tits alternative
Word-hyperbolic group
Schottky group | Wikipedia/Ping-pong_lemma |
In mathematics, a finite subdivision rule is a recursive way of dividing a polygon or other two-dimensional shape into smaller and smaller pieces. Subdivision rules in a sense are generalizations of regular geometric fractals. Instead of repeating exactly the same design over and over, they have slight variations in each stage, allowing a richer structure while maintaining the elegant style of fractals. Subdivision rules have been used in architecture, biology, and computer science, as well as in the study of hyperbolic manifolds. Substitution tilings are a well-studied type of subdivision rule.
== Definition ==
A subdivision rule takes a tiling of the plane by polygons and turns it into a new tiling by subdividing each polygon into smaller polygons. It is finite if there are only finitely many ways that every polygon can subdivide. Each way of subdividing a tile is called a tile type. Each tile type is represented by a label (usually a letter). Every tile type subdivides into smaller tile types. Each edge also gets subdivided according to finitely many edge types. Finite subdivision rules can only subdivide tilings that are made up of polygons labelled by tile types. Such tilings are called subdivision complexes for the subdivision rule. Given any subdivision complex for a subdivision rule, we can subdivide it over and over again to get a sequence of tilings.
For instance, binary subdivision has one tile type and one edge type:
Since the only tile type is a quadrilateral, binary subdivision can only subdivide tilings made up of quadrilaterals. This means that the only subdivision complexes are tilings by quadrilaterals. The tiling can be regular, but doesn't have to be:
Here we start with a complex made of four quadrilaterals and subdivide it twice. All quadrilaterals are type A tiles.
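The tile counting in this example can be sketched in code. Assuming axis-aligned square tiles (each recorded as a lower-left corner and side length, a simplification of general quadrilaterals made for illustration), binary subdivision replaces every tile with four half-size tiles:

```python
def binary_subdivide(tiles):
    """Replace each square (x, y, side) with its four half-size subsquares."""
    out = []
    for x, y, s in tiles:
        h = s / 2
        out += [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
    return out

# a complex made of four unit squares, subdivided twice
complex_ = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
once = binary_subdivide(complex_)
twice = binary_subdivide(once)
print(len(complex_), len(once), len(twice))  # → 4 16 64
```

Each pass multiplies the number of tiles by four, which is the hallmark of a subdivision rule with a single quadrilateral tile type.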
== Examples of finite subdivision rules ==
Barycentric subdivision is an example of a subdivision rule with one edge type (that gets subdivided into two edges) and one tile type (a triangle that gets subdivided into 6 smaller triangles). Any triangulated surface is a barycentric subdivision complex.
The Penrose tiling can be generated by a subdivision rule on a set of four tile types (the curved lines in the table below only help to show how the tiles fit together):
Certain rational maps give rise to finite subdivision rules. This includes most Lattès maps.
Every prime, non-split alternating knot or link complement has a subdivision rule, with some tiles that do not subdivide, corresponding to the boundary of the link complement. The subdivision rules show what the night sky would look like to someone living in a knot complement; because the universe wraps around itself (i.e. is not simply connected), an observer would see the visible universe repeat itself in an infinite pattern. The subdivision rule describes that pattern.
The subdivision rule looks different for different geometries. This is a subdivision rule for the trefoil knot, which is not a hyperbolic knot:
And this is the subdivision rule for the Borromean rings, which is hyperbolic:
In each case, the subdivision rule would act on some tiling of a sphere (i.e. the night sky), but it is easier to just draw a small part of the night sky, corresponding to a single tile being repeatedly subdivided. This is what happens for the trefoil knot:
And for the Borromean rings:
== Subdivision rules in higher dimensions ==
Subdivision rules can easily be generalized to other dimensions. For instance, barycentric subdivision is used in all dimensions. Also, binary subdivision can be generalized to other dimensions (where hypercubes get divided by every midplane), as in the proof of the Heine–Borel theorem.
== Rigorous definition ==
A finite subdivision rule $R$ consists of the following.

1. A finite 2-dimensional CW complex $S_{R}$, called the subdivision complex, with a fixed cell structure such that $S_{R}$ is the union of its closed 2-cells. We assume that for each closed 2-cell ${\tilde{s}}$ of $S_{R}$ there is a CW structure $s$ on a closed 2-disk such that $s$ has at least two vertices, the vertices and edges of $s$ are contained in $\partial s$, and the characteristic map $\psi_{s}:s\rightarrow S_{R}$ which maps onto ${\tilde{s}}$ restricts to a homeomorphism onto each open cell.
2. A finite 2-dimensional CW complex $R(S_{R})$, which is a subdivision of $S_{R}$.

3. A continuous cellular map $\phi_{R}:R(S_{R})\rightarrow S_{R}$, called the subdivision map, whose restriction to every open cell is a homeomorphism onto an open cell.
Each CW complex $s$ in the definition above (with its given characteristic map $\psi_{s}$) is called a tile type.
An $R$-complex for a subdivision rule $R$ is a 2-dimensional CW complex $X$ which is the union of its closed 2-cells, together with a continuous cellular map $f:X\rightarrow S_{R}$ whose restriction to each open cell is a homeomorphism. We can subdivide $X$ into a complex $R(X)$ by requiring that the induced map $f:R(X)\rightarrow R(S_{R})$ restricts to a homeomorphism onto each open cell. $R(X)$ is again an $R$-complex with map $\phi_{R}\circ f:R(X)\rightarrow S_{R}$. By repeating this process, we obtain a sequence of subdivided $R$-complexes $R^{n}(X)$ with maps $\phi_{R}^{n}\circ f:R^{n}(X)\rightarrow S_{R}$.
Binary subdivision is one example:
The subdivision complex can be created by gluing together the opposite edges of the square, making the subdivision complex $S_{R}$ into a torus. The subdivision map $\phi$ is the doubling map on the torus, wrapping the meridian around itself twice and the longitude around itself twice. This is a four-fold covering map. The plane, tiled by squares, is a subdivision complex for this subdivision rule, with the structure map $f:\mathbb{R}^{2}\rightarrow R(S_{R})$ given by the standard covering map. Under subdivision, each square in the plane gets subdivided into squares of one-fourth the size.
== Quasi-isometry properties ==
Subdivision rules can be used to study the quasi-isometry properties of certain spaces. Given a subdivision rule $R$ and subdivision complex $X$, we can construct a graph called the history graph that records the action of the subdivision rule. The graph consists of the dual graphs of every stage $R^{n}(X)$, together with edges connecting each tile in $R^{n}(X)$ with its subdivisions in $R^{n+1}(X)$.
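As an illustration, the history graph of binary subdivision acting on a single square can be generated explicitly. In this sketch (labeling conventions are our own), the stage-n tile occupying grid position (i, j) of the 2^n × 2^n grid is labeled (n, i, j); dual-graph edges join side-adjacent tiles within a stage, and subdivision edges join each tile to its four children:

```python
def history_graph(depth):
    """Edge set of the history graph for binary subdivision of one square,
    truncated at the given stage."""
    edges = set()
    for n in range(depth + 1):
        size = 2 ** n
        for i in range(size):
            for j in range(size):
                if i + 1 < size:               # dual-graph edge (horizontal neighbor)
                    edges.add(((n, i, j), (n, i + 1, j)))
                if j + 1 < size:               # dual-graph edge (vertical neighbor)
                    edges.add(((n, i, j), (n, i, j + 1)))
                if n < depth:                  # edges to the four subdivisions
                    for di in (0, 1):
                        for dj in (0, 1):
                            edges.add(((n, i, j), (n + 1, 2 * i + di, 2 * j + dj)))
    return edges

edges = history_graph(2)
print(len(edges))  # → 48: 4 + 16 subdivision edges plus 4 + 24 dual-graph edges
```

The vertex count grows like 4^n per stage, and moving "down" the subdivision edges corresponds to zooming in, which is the geometric intuition behind the quasi-isometry between such history graphs and hyperbolic space in the conformal case.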
The quasi-isometry properties of the history graph can be studied using subdivision rules. For instance, the history graph is quasi-isometric to hyperbolic space exactly when the subdivision rule is conformal, as described in the combinatorial Riemann mapping theorem.
== Applications ==
Girih tiles in Islamic architecture are self-similar tilings that can be modeled with finite subdivision rules. In 2007, Peter J. Lu of Harvard University and Professor Paul J. Steinhardt of Princeton University published a paper in the journal Science suggesting that girih tilings possessed properties consistent with self-similar fractal quasicrystalline tilings such as Penrose tilings (first presented in 1974, with predecessor works starting in about 1964), despite predating them by five centuries.
Subdivision surfaces in computer graphics use subdivision rules to refine a surface to any given level of precision. These subdivision surfaces (such as the Catmull–Clark subdivision surface) take a polygon mesh (the kind used in 3D animated movies) and refine it to a mesh with more polygons by adding and shifting points according to different recursive formulas. Although many points get shifted in this process, each new mesh is combinatorially a subdivision of the old mesh (meaning that for every edge and vertex of the old mesh, you can identify a corresponding edge and vertex in the new one, plus several more edges and vertices).
Subdivision rules were applied by Cannon, Floyd and Parry (2000) to the study of large-scale growth patterns of biological organisms. Cannon, Floyd and Parry produced a mathematical growth model which demonstrated that some systems determined by simple finite subdivision rules can result in objects (in their example, a tree trunk) whose large-scale form oscillates wildly over time, even though the local subdivision laws remain the same. Cannon, Floyd and Parry also applied their model to the analysis of the growth patterns of rat tissue. They suggested that the "negatively curved" (or non-Euclidean) nature of microscopic growth patterns of biological organisms is one of the key reasons why large-scale organisms do not look like crystals or polyhedral shapes but in fact in many cases resemble self-similar fractals. In particular they suggested that such "negatively curved" local structure is manifested in the highly folded and highly connected nature of the brain and the lung tissue.
== Cannon's conjecture ==
Cannon, Floyd, and Parry first studied finite subdivision rules as an attempt to prove the following conjecture:
Cannon's conjecture: Every Gromov hyperbolic group with a 2-sphere at infinity acts geometrically on hyperbolic 3-space.
Here, a geometric action is a cocompact, properly discontinuous action by isometries. This conjecture was partially solved by Grigori Perelman in his proof of the geometrization conjecture, which states (in part) that any Gromov hyperbolic group that is a 3-manifold group must act geometrically on hyperbolic 3-space. However, it still remains to be shown that a Gromov hyperbolic group with a 2-sphere at infinity is a 3-manifold group.
Cannon and Swenson showed that a hyperbolic group with a 2-sphere at infinity has an associated subdivision rule. If this subdivision rule is conformal in a certain sense, the group will be a 3-manifold group with the geometry of hyperbolic 3-space.
== Combinatorial Riemann mapping theorem ==
Subdivision rules give a sequence of tilings of a surface, and tilings give an idea of distance, length, and area (by letting each tile have length and area 1). In the limit, the distances that come from these tilings may converge in some sense to an analytic structure on the surface. The Combinatorial Riemann Mapping Theorem gives necessary and sufficient conditions for this to occur.
Its statement needs some background. A tiling $T$ of a ring $R$ (i.e., a closed annulus) gives two invariants, $M_{\sup}(R,T)$ and $m_{\inf}(R,T)$, called approximate moduli. These are similar to the classical modulus of a ring. They are defined by the use of weight functions. A weight function $\rho$ assigns a non-negative number called a weight to each tile of $T$. Every path in $R$ can be given a length, defined to be the sum of the weights of all tiles in the path. Define the height $H(\rho)$ of $R$ under $\rho$ to be the infimum of the length of all possible paths connecting the inner boundary of $R$ to the outer boundary. The circumference $C(\rho)$ of $R$ under $\rho$ is the infimum of the length of all possible paths circling the ring (i.e. not nullhomotopic in $R$). The area $A(\rho)$ of $R$ under $\rho$ is defined to be the sum of the squares of all weights in $R$. Then define
$M_{\sup}(R,T)=\sup {\frac{H(\rho)^{2}}{A(\rho)}},\qquad m_{\inf}(R,T)=\inf {\frac{A(\rho)}{C(\rho)^{2}}}.$
Note that they are invariant under scaling of the metric.
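As a toy computation (not the full optimization over all weight functions), consider a ring tiled by a rows × cols grid of combinatorial squares, with rows running from the inner to the outer boundary and columns wrapping around the ring. For the constant weight ρ ≡ 1, the height, circumference, and area are immediate, giving candidate values for the two quotients above:

```python
def constant_weight_candidates(rows, cols):
    """Candidate modulus values H^2/A and A/C^2 for rho ≡ 1 on a grid ring.
    The true approximate moduli take sup/inf over all weight functions."""
    H = rows           # a shortest inner-to-outer path crosses one tile per row
    C = cols           # a shortest essential loop crosses one tile per column
    A = rows * cols    # sum of the squared unit weights
    return H * H / A, A / (C * C)

print(constant_weight_candidates(4, 8))  # → (0.5, 0.5)
```

Both quotients reduce to rows/cols for this weight, mirroring the classical modulus (height over circumference) of a right circular annulus; general weight functions are needed for irregular tilings.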
A sequence $T_{1},T_{2},\ldots$ of tilings is conformal ($K$) if the mesh approaches 0 and:
For each ring $R$, the approximate moduli $M_{\sup}(R,T_{i})$ and $m_{\inf}(R,T_{i})$, for all $i$ sufficiently large, lie in a single interval of the form $[r,Kr]$; and
Given a point $x$ in the surface, a neighborhood $N$ of $x$, and an integer $I$, there is a ring $R$ in $N\smallsetminus \{x\}$ separating $x$ from the complement of $N$, such that for all large $i$ the approximate moduli of $R$ are all greater than $I$.
=== Statement of theorem ===
If a sequence $T_{1},T_{2},\ldots$ of tilings of a surface is conformal ($K$) in the above sense, then there is a conformal structure on the surface and a constant $K'$ depending only on $K$ in which the classical moduli and approximate moduli (from $T_{i}$ for $i$ sufficiently large) of any given annulus are $K'$-comparable, meaning that they lie in a single interval $[r,K'r]$.
=== Consequences ===
The Combinatorial Riemann Mapping Theorem implies that a group $G$ acts geometrically on $\mathbb{H}^{3}$ if and only if it is Gromov hyperbolic, it has a sphere at infinity, and the natural subdivision rule on the sphere gives rise to a sequence of tilings that is conformal in the sense above. Thus, Cannon's conjecture would be true if all such subdivision rules were conformal.
== References ==
== External links ==
Bill Floyd's research page. This page contains most of the research papers by Cannon, Floyd and Parry on subdivision rules, as well as a gallery of subdivision rules. | Wikipedia/Cannon's_conjecture |
In the mathematical subject of geometric group theory, the growth rate of a group with respect to a symmetric generating set describes how fast a group grows. Every element in the group can be written as a product of generators, and the growth rate counts the number of elements that can be written as a product of length n.
== Definition ==
Suppose G is a finitely generated group, and T is a finite symmetric set of generators (symmetric means that if $x\in T$ then $x^{-1}\in T$).
Any element $x\in G$ can be expressed as a word in the T-alphabet: $x=a_{1}\cdot a_{2}\cdots a_{k},$ where $a_{i}\in T$.
Consider the subset of all elements of G that can be expressed by such a word of length ≤ n: $B_{n}(G,T)=\{x\in G\mid x=a_{1}\cdot a_{2}\cdots a_{k}{\text{ where }}a_{i}\in T{\text{ and }}k\leq n\}.$
This set is just the closed ball of radius n in the word metric d on G with respect to the generating set T: $B_{n}(G,T)=\{x\in G\mid d(x,e)\leq n\}.$
More geometrically, $B_{n}(G,T)$ is the set of vertices in the Cayley graph with respect to T that are within distance n of the identity.
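This description translates directly into a breadth-first search on the Cayley graph. The sketch below is illustrative (the function and variable names are our own); it computes $|B_{n}(G,T)|$ for any group presented as a multiplication rule, and checks it on $\mathbb{Z}^{2}$ with the standard generators, where $|B_{n}|=2n^{2}+2n+1$:

```python
def ball_sizes(generators, identity, multiply, n_max):
    """Sizes |B_0|, ..., |B_n_max| of closed balls in the Cayley graph,
    computed by breadth-first search from the identity."""
    ball = {identity}
    frontier = {identity}
    sizes = [1]
    for _ in range(n_max):
        # elements at the next distance are neighbors of the current sphere
        frontier = {multiply(x, g) for x in frontier for g in generators} - ball
        ball |= frontier
        sizes.append(len(ball))
    return sizes

# Z^2 with the symmetric generating set {(±1, 0), (0, ±1)}
gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
add = lambda a, b: (a[0] + b[0], a[1] + b[1])
sizes = ball_sizes(gens, (0, 0), add, 5)
print(sizes)  # → [1, 5, 13, 25, 41, 61], i.e. |B_n| = 2n^2 + 2n + 1
```

The quadratic growth of these ball sizes exhibits the polynomial growth of order 2 that the general theory predicts for $\mathbb{Z}^{2}$.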
Given two nondecreasing positive functions a and b, one can say that they are equivalent ($a\sim b$) if there is a constant C such that for all positive integers n, $a(n/C)\leq b(n)\leq a(Cn);$ for example, $p^{n}\sim q^{n}$ if $p,q>1$.
Then the growth rate of the group G can be defined as the corresponding equivalence class of the function $\#(n)=|B_{n}(G,T)|,$ where $|B_{n}(G,T)|$ denotes the number of elements in the set $B_{n}(G,T)$. Although the function $\#(n)$ depends on the set of generators T, its rate of growth does not (see below), and therefore the rate of growth gives an invariant of a group.
The word metric d and therefore the sets $B_{n}(G,T)$ depend on the generating set T. However, any two such metrics are bilipschitz equivalent in the following sense: for finite symmetric generating sets E, F, there is a positive constant C such that ${\frac{1}{C}}\,d_{F}(x,y)\leq d_{E}(x,y)\leq C\,d_{F}(x,y).$
As an immediate corollary of this inequality we get that the growth rate does not depend on the choice of generating set.
== Polynomial and exponential growth ==
If $\#(n)\leq C(n^{k}+1)$ for some $C,k<\infty$, we say that G has a polynomial growth rate. The infimum $k_{0}$ of such k's is called the order of polynomial growth. According to Gromov's theorem, a group of polynomial growth is a virtually nilpotent group, i.e. it has a nilpotent subgroup of finite index. In particular, the order of polynomial growth $k_{0}$ has to be a natural number and in fact $\#(n)\sim n^{k_{0}}$.
If $\#(n)\geq a^{n}$ for some $a>1$, we say that G has an exponential growth rate. Every finitely generated G has at most exponential growth, i.e. for some $b>1$ we have $\#(n)\leq b^{n}$.
If {\displaystyle \#(n)} grows more slowly than any exponential function, G has a subexponential growth rate. Any such group is amenable.
== Examples ==
A free group of finite rank {\displaystyle k>1} has exponential growth rate.
A finite group has constant growth—that is, polynomial growth of order 0—and this includes fundamental groups of manifolds whose universal cover is compact.
If M is a closed negatively curved Riemannian manifold then its fundamental group {\displaystyle \pi _{1}(M)} has exponential growth rate. John Milnor proved this using the fact that the word metric on {\displaystyle \pi _{1}(M)} is quasi-isometric to the universal cover of M.
The free abelian group {\displaystyle \mathbb {Z} ^{d}} has a polynomial growth rate of order d.
The discrete Heisenberg group {\displaystyle H_{3}} has a polynomial growth rate of order 4. This fact is a special case of the general theorem of Hyman Bass and Yves Guivarch that is discussed in the article on Gromov's theorem.
The lamplighter group has exponential growth.
The existence of groups with intermediate growth, i.e. subexponential but not polynomial, was open for many years. The question was asked by Milnor in 1968 and was finally answered in the affirmative by Rostislav Grigorchuk in 1984. There are still open questions in this area, and a complete picture of which orders of growth are possible and which are not is missing.
The triangle groups include infinitely many finite groups (the spherical ones, corresponding to the sphere), three groups of quadratic growth (the Euclidean ones, corresponding to the Euclidean plane), and infinitely many groups of exponential growth (the hyperbolic ones, corresponding to the hyperbolic plane).
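Two of the growth rates above are easy to check directly by counting ball sizes: the free abelian group of rank 2 (quadratic growth) and the free group of rank 2 (exponential growth). The following is an illustrative sketch, not part of the article; generator choices are stated in the comments.

```c
#include <stdlib.h>

/* |B_n(Z^2, {±e1, ±e2})|: the word metric is the L1 norm, so the
   ball of radius n is the set of lattice points with |x| + |y| <= n,
   which has 2n^2 + 2n + 1 elements (growth of order 2). */
long ball_z2(int n) {
    long count = 0;
    for (int x = -n; x <= n; x++)
        for (int y = -n; y <= n; y++)
            if (abs(x) + abs(y) <= n)
                count++;
    return count;
}

/* |B_n(F_2, {a, b, a^-1, b^-1})|: the sphere of radius m >= 1 contains
   4 * 3^(m-1) reduced words (4 choices for the first letter, then 3
   non-backtracking choices for each later letter), so the ball size
   is 2 * 3^n - 1 (exponential growth). */
long ball_f2(int n) {
    long count = 1, sphere = 4;   /* identity, then the radius-1 sphere */
    for (int m = 1; m <= n; m++) {
        count += sphere;
        sphere *= 3;
    }
    return count;
}
```

For instance, ball_z2 grows like n² while ball_f2 already exceeds it by radius 3.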
== See also ==
Connections to isoperimetric inequalities
== References ==
Milnor J. (1968). "A note on curvature and fundamental group". Journal of Differential Geometry. 2: 1–7. doi:10.4310/jdg/1214501132.
Grigorchuk R. I. (1984). "Degrees of growth of finitely generated groups and the theory of invariant means". Izv. Akad. Nauk SSSR Ser. Mat. (in Russian). 48 (5): 939–985.
== Further reading ==
Rostislav Grigorchuk and Igor Pak (2006). "Groups of Intermediate Growth: an Introduction for Beginners". arXiv:math.GR/0607384.
In group theory, Tietze transformations are used to transform a given presentation of a group into another, often simpler presentation of the same group. These transformations are named after Heinrich Tietze who introduced them in a paper in 1908.
A presentation is in terms of generators and relations; formally speaking the presentation is a pair of a set of named generators, and a set of words in the free group on the generators that are taken to be the relations. Tietze transformations are built up of elementary steps, each of which individually rather evidently takes the presentation to a presentation of an isomorphic group. These elementary steps may operate on generators or relations, and are of four kinds.
== Adding a relation ==
If a relation can be derived from the existing relations then it may be added to the presentation without changing the group. Let G = 〈 x | x³ = 1 〉 be a finite presentation for the cyclic group of order 3. Multiplying x³ = 1 on both sides by x³ we get x⁶ = x³ = 1, so x⁶ = 1 is derivable from x³ = 1. Hence G = 〈 x | x³ = 1, x⁶ = 1 〉 is another presentation for the same group.
== Removing a relation ==
If a relation in a presentation can be derived from the other relations then it can be removed from the presentation without affecting the group. In G = 〈 x | x³ = 1, x⁶ = 1 〉 the relation x⁶ = 1 can be derived from x³ = 1, so it can be safely removed. Note, however, that if x³ = 1 is removed from the presentation, the group G = 〈 x | x⁶ = 1 〉 defines the cyclic group of order 6 and does not define the same group. Care must be taken to show that any relations that are removed are consequences of the other relations.
== Adding a generator ==
Given a presentation it is possible to add a new generator that is expressed as a word in the original generators. Starting with G = 〈 x | x³ = 1 〉 and letting y = x², the new presentation G = 〈 x, y | x³ = 1, y = x² 〉 defines the same group.
== Removing a generator ==
If a relation can be formed where one of the generators is a word in the other generators then that generator may be removed. In order to do this it is necessary to replace all occurrences of the removed generator with its equivalent word. The presentation for the elementary abelian group of order 4, G = 〈 x, y, z | x = yz, y² = 1, z² = 1, x = x⁻¹ 〉, can be replaced by G = 〈 y, z | y² = 1, z² = 1, (yz) = (yz)⁻¹ 〉 by removing x.
== Examples ==
Let G = 〈 x, y | x³ = 1, y² = 1, (xy)² = 1 〉 be a presentation for the symmetric group of degree three. The generator x corresponds to the permutation (1,2,3) and y to (2,3). Through Tietze transformations this presentation can be converted to G = 〈 y, z | (zy)³ = 1, y² = 1, z² = 1 〉, where z corresponds to (1,2).
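The conversion can be carried out by the elementary steps defined above. The intermediate presentations below are a reconstruction (the article states only the endpoints):

```latex
\begin{aligned}
&\langle\, x, y \mid x^{3}=1,\ y^{2}=1,\ (xy)^{2}=1 \,\rangle\\
\to\ &\langle\, x, y, z \mid x^{3}=1,\ y^{2}=1,\ (xy)^{2}=1,\ z=xy \,\rangle
      && \text{add the generator } z = xy\\
\to\ &\langle\, x, y, z \mid (zy)^{3}=1,\ y^{2}=1,\ z^{2}=1,\ x=zy \,\rangle
      && \text{rewrite the relations using } x = zy\\
\to\ &\langle\, y, z \mid (zy)^{3}=1,\ y^{2}=1,\ z^{2}=1 \,\rangle
      && \text{remove the generator } x
\end{aligned}
```

Here z = xy gives x = zy⁻¹ = zy because y² = 1; then x³ = 1 becomes (zy)³ = 1 and (xy)² = (zy²)² = z² = 1, after which x can be removed.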
== See also ==
Nielsen transformation
Andrews–Curtis conjecture
== References ==
Roger C. Lyndon, Paul E. Schupp, Combinatorial Group Theory, Springer, 2001. ISBN 3-540-41158-5.
In cryptography, a random oracle is an oracle (a theoretical black box) that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
Stated differently, a random oracle is a mathematical function chosen uniformly at random, that is, a function mapping each possible query to a (fixed) random response from its output domain.
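One standard way to realise this definition in a simulation or a security proof is lazy sampling: a response is drawn uniformly at random only when a query first arrives, and remembered afterwards. A minimal C sketch (illustrative only; names, sizes, and the 32-bit output width are invented for the example):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Lazily sampled random oracle: the (conceptually infinite) random
   function is materialised one entry at a time.  A fresh query draws
   a random 32-bit response; a repeated query returns the stored one. */
#define ORACLE_MAX 256

static char     oracle_q[ORACLE_MAX][64];
static uint32_t oracle_r[ORACLE_MAX];
static int      oracle_n = 0;

uint32_t random_oracle(const char *query) {
    for (int i = 0; i < oracle_n; i++)
        if (strcmp(oracle_q[i], query) == 0)
            return oracle_r[i];              /* same answer every time */
    if (oracle_n == ORACLE_MAX)
        abort();                             /* toy table is full */
    strncpy(oracle_q[oracle_n], query, 63);
    oracle_q[oracle_n][63] = '\0';
    oracle_r[oracle_n] = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
    return oracle_r[oracle_n++];             /* freshly drawn answer */
}
```

The memo table is what makes repeated queries consistent, as the definition requires.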
Random oracles first appeared in the context of complexity theory, in which they were used to argue that complexity class separations may face relativization barriers, with the most prominent case being the P vs NP problem, two classes shown in 1981 to be distinct relative to a random oracle almost surely. They made their way into cryptography by the publication of Mihir Bellare and Phillip Rogaway in 1993, which introduced them as a formal cryptographic model to be used in reduction proofs.
They are typically used when the proof cannot be carried out using weaker assumptions on the cryptographic hash function. A system that is proven secure when every hash function is replaced by a random oracle is described as being secure in the random oracle model, as opposed to secure in the standard model of cryptography.
== Applications ==
Random oracles are typically used as an idealised replacement for cryptographic hash functions in schemes where strong randomness assumptions are needed of the hash function's output. Such a proof often shows that a system or a protocol is secure by showing that an attacker must require impossible behavior from the oracle, or solve some mathematical problem believed hard in order to break it. However, it only proves such properties in the random oracle model, making sure no major design flaws are present. It is in general not true that such a proof implies the same properties in the standard model. Still, a proof in the random oracle model is considered better than no formal security proof at all.
Not all uses of cryptographic hash functions require random oracles: schemes that require only one or more properties having a definition in the standard model (such as collision resistance, preimage resistance, second preimage resistance, etc.) can often be proven secure in the standard model (e.g., the Cramer–Shoup cryptosystem).
Random oracles have long been considered in computational complexity theory, and many schemes have been proven secure in the random oracle model, for example Optimal Asymmetric Encryption Padding, RSA-FDH and PSS. In 1986, Amos Fiat and Adi Shamir showed a major application of random oracles – the removal of interaction from protocols for the creation of signatures.
In 1989, Russell Impagliazzo and Steven Rudich showed the limitation of random oracles – namely that their existence alone is not sufficient for secret-key exchange.
In 1993, Mihir Bellare and Phillip Rogaway were the first to advocate their use in cryptographic constructions. In their definition, the random oracle produces a bit-string of infinite length which can be truncated to the length desired.
When a random oracle is used within a security proof, it is made available to all players, including the adversary or adversaries.
== Domain separation ==
A single oracle may be treated as multiple oracles by pre-pending a fixed bit-string to the beginning of each query (e.g., queries formatted as "1||x" or "0||x" can be considered as calls to two separate random oracles, similarly "00||x", "01||x", "10||x" and "11||x" can be used to represent calls to four separate random oracles). This practice is usually called domain separation. Oracle cloning is the re-use of the once-constructed random oracle within the same proof (this in practice corresponds to the multiple uses of the same cryptographic hash within one algorithm for different purposes). Oracle cloning with improper domain separation breaks security proofs and can lead to successful attacks.
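A toy illustration of prefix-based domain separation, with the FNV-1a checksum standing in for the underlying hash function (the function and names here are invented for the example, not taken from any standard):

```c
#include <stdint.h>
#include <string.h>

/* FNV-1a, a simple non-cryptographic hash, stands in for the oracle. */
uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;
    for (; *s; s++) {
        h ^= (uint8_t)*s;
        h *= 16777619u;
    }
    return h;
}

/* One hash split into separate "oracles" by a domain tag: queries to
   domain '0' are hashed as "0"||x, queries to domain '1' as "1"||x,
   so the two domains never collide on the same underlying input. */
uint32_t oracle_in_domain(char tag, const char *q) {
    char buf[128];
    buf[0] = tag;
    strncpy(buf + 1, q, 126);
    buf[127] = '\0';
    return fnv1a(buf);
}
```

The same query string "x" yields unrelated values in domains '0' and '1', which is exactly the property a proof relies on when cloning one oracle into several.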
== Limitations ==
According to the Church–Turing thesis, no function computable by a finite algorithm can implement a true random oracle (which by definition requires an infinite description because it has infinitely many possible inputs, and its outputs are all independent from each other and need to be individually specified by any description).
In fact, certain contrived signature and encryption schemes are known which are proven secure in the random oracle model, but which are trivially insecure when any real function is substituted for the random oracle. Nonetheless, for any more natural protocol a proof of security in the random oracle model gives very strong evidence of the practical security of the protocol.
In general, if a protocol is proven secure, attacks to that protocol must either be outside what was proven, or break one of the assumptions in the proof; for instance if the proof relies on the hardness of integer factorization, to break this assumption one must discover a fast integer factorization algorithm. Instead, to break the random oracle assumption, one must discover some unknown and undesirable property of the actual hash function; for good hash functions where such properties are believed unlikely, the considered protocol can be considered secure.
== Random oracle hypothesis ==
Although the Baker–Gill–Solovay theorem showed that there exists an oracle A such that P^A = NP^A, subsequent work by Bennett and Gill showed that for a random oracle B (a function from {0,1}^n to {0,1} such that each input element maps to each of 0 or 1 with probability 1/2, independently of the mapping of all other inputs), P^B ⊊ NP^B with probability 1. Similar separations, as well as the fact that random oracles separate classes with probability 0 or 1 (as a consequence of Kolmogorov's zero–one law), led to the creation of the Random Oracle Hypothesis, that two "acceptable" complexity classes C1 and C2 are equal if and only if they are equal (with probability 1) under a random oracle (the acceptability of a complexity class is defined in BG81). This hypothesis was later shown to be false, as the two acceptable complexity classes IP and PSPACE were shown to be equal despite IP^A ⊊ PSPACE^A for a random oracle A with probability 1.
== Ideal cipher ==
An ideal cipher is a random permutation oracle that is used to model an idealized block cipher. A random permutation decrypts each ciphertext block into one and only one plaintext block and vice versa, so there is a one-to-one correspondence. Some cryptographic proofs make not only the "forward" permutation available to all players, but also the "reverse" permutation.
Recent work has shown that an ideal cipher can be constructed from a random oracle using 10-round or even 8-round Feistel networks.
== Ideal permutation ==
An ideal permutation is an idealized object sometimes used in cryptography to model the behaviour of a permutation whose outputs are indistinguishable from those of a random permutation. In the ideal permutation model, an additional oracle access is given to the ideal permutation and its inverse. The ideal permutation model can be seen as a special case of the ideal cipher model where access is given to only a single permutation, instead of a family of permutations as in the case of the ideal cipher model.
== Quantum-accessible random oracles ==
Post-quantum cryptography studies quantum attacks on classical cryptographic schemes. As a random oracle is an abstraction of a hash function, it makes sense to assume that a quantum attacker can access the random oracle in quantum superposition. Many of the classical security proofs break down in that quantum random oracle model and need to be revised.
== See also ==
Sponge function
Oracle machine
Topics in cryptography
== References ==
== Sources ==
Bellare, Mihir; Davis, Hannah; Günther, Felix (2020). "Separate Your Domains: NIST PQC KEMs, Oracle Cloning and Read-Only Indifferentiability". Advances in Cryptology – EUROCRYPT 2020. Lecture Notes in Computer Science. Vol. 12106. Cham: Springer International Publishing. pp. 3–32. doi:10.1007/978-3-030-45724-2_1. hdl:20.500.11850/392433. ISBN 978-3-030-45723-5. ISSN 0302-9743. S2CID 214642193.
In cryptography, a universal hashing message authentication code, or UMAC, is a message authentication code (MAC) calculated using universal hashing, which involves choosing a hash function from a class of hash functions according to some secret (random) process and applying it to the message. The resulting digest or fingerprint is then encrypted to hide the identity of the hash function that was used. A variation of the scheme was first published in 1999. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. In contrast to traditional MACs, which are serializable, a UMAC can be executed in parallel. Thus, as machines continue to offer more parallel-processing capabilities, the speed of implementing UMAC can increase.
A specific type of UMAC, also commonly referred to just as "UMAC", is described in an informational RFC published as RFC 4418 in March 2006. It has provable cryptographic strength and is usually substantially less computationally intensive than other MACs. UMAC's design is optimized for 32-bit architectures with SIMD support, with a performance of 1 CPU cycle per byte (cpb) with SIMD and 2 cpb without SIMD. A closely related variant of UMAC that is optimized for 64-bit architectures is given by VMAC, which was submitted to the IETF as a draft in April 2007 (draft-krovetz-vmac-01) but never gathered enough attention to be approved as an RFC.
== Background ==
=== Universal hashing ===
Let's say the hash function is chosen from a class of hash functions H, which maps messages into D, the set of possible message digests. This class is called universal if, for any distinct pair of messages, there are at most |H|/|D| functions that map them to the same member of D.
This means that if an attacker wants to replace one message with another and, from his point of view, the hash function was chosen completely randomly, the probability that the UMAC will not detect his modification is at most 1/|D|.
But this definition is not strong enough — if the possible messages are 0 and 1, D = {0,1}, and H consists of the identity operation and negation, then H is universal. But even if the digest is encrypted by modular addition, an attacker can change the message and the digest at the same time, and the receiver wouldn't know the difference.
=== Strongly universal hashing ===
A class of hash functions H that is good to use will make it difficult for an attacker to guess the correct digest d of a fake message f after intercepting one message a with digest c. In other words,
{\displaystyle \Pr _{h\in H}[h(f)=d|h(a)=c]\,}
needs to be very small, preferably 1/|D|.
It is easy to construct a class of hash functions when D is a field. For example, if |D| is prime, all the operations are taken modulo |D|. The message a is then encoded as an n-dimensional vector over D (a1, a2, ..., an). H then has |D|^(n+1) members, each corresponding to an (n + 1)-dimensional vector over D (h0, h1, ..., hn). If we let
{\displaystyle h(a)=h_{0}+\sum _{i=1}^{n}{h_{i}}{a_{i}}}
we can use the rules of probabilities and combinatorics to prove that
{\displaystyle \Pr _{h\in H}[h(f)=d|h(a)=c]={1 \over |D|}}
If we properly encrypt all the digests (e.g. with a one-time pad), an attacker cannot learn anything from them and the same hash function can be used for all communication between the two parties. This may not be true for ECB encryption because it may be quite likely that two messages produce the same hash value. Then some kind of initialization vector should be used, which is often called the nonce. It has become common practice to set h0 = f(nonce), where f is also secret.
Notice that having massive amounts of computer power does not help the attacker at all. If the recipient limits the number of forgeries it accepts (by sleeping whenever it detects one), |D| can be 2^32 or smaller.
=== Example ===
The following C function generates a 24-bit UMAC. It assumes that secret is a multiple of 24 bits, msg is not longer than secret, and result already contains the 24 secret bits, e.g. f(nonce). The nonce does not need to be contained in msg.
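The example function itself is not reproduced here. The following is a hypothetical sketch consistent with the construction described above: a strongly universal inner-product hash over 24-bit blocks (computed modulo the Mersenne prime 2^31 − 1 and truncated to 24 bits — a choice made for this sketch, not mandated by the text), whose digest is then encrypted by XOR with the 24-bit pad already stored in result:

```c
#include <stdint.h>
#include <stddef.h>

#define P 2147483647ULL   /* 2^31 - 1, a Mersenne prime */

/* Read three bytes as a big-endian 24-bit value. */
static uint32_t load24(const uint8_t *p) {
    return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | p[2];
}

/* Sketch of a 24-bit UMAC: h = sum k_i * m_i mod P over 24-bit blocks,
   truncated to 24 bits and XORed into the pad held in `result`. */
void umac24(const uint8_t *msg, size_t msg_len,
            const uint8_t *secret, size_t secret_len,
            uint8_t result[3])           /* in: 24-bit pad, out: tag */
{
    uint64_t h = 0;
    for (size_t i = 0; i + 3 <= msg_len && i + 3 <= secret_len; i += 3)
        h = (h + (uint64_t)load24(msg + i) * load24(secret + i)) % P;
    result[0] ^= (uint8_t)(h >> 16);     /* one-time-pad encryption */
    result[1] ^= (uint8_t)(h >> 8);
    result[2] ^= (uint8_t)h;
}
```

Because the pad is XORed in, the hidden hash-function identity (the k_i) is never exposed directly in the tag.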
== NH and the RFC UMAC ==
=== NH ===
Functions in the above unnamed strongly universal hash-function family use n multiplications to compute a hash value.
The NH family halves the number of multiplications, which roughly translates to a two-fold speed-up in practice. For speed, UMAC uses the NH hash-function family. NH is specifically designed to use SIMD instructions, and hence UMAC is the first MAC function optimized for SIMD.
The following hash family is {\displaystyle 2^{-w}}-universal:
{\displaystyle \operatorname {NH} _{K}(M)=\left(\sum _{i=0}^{(n/2)-1}((m_{2i}+k_{2i}){\bmod {~}}2^{w})\cdot ((m_{2i+1}+k_{2i+1}){\bmod {~}}2^{w})\right){\bmod {~}}2^{2w}}
where
The message M is encoded as an n-dimensional vector of w-bit words (m0, m1, m2, ..., m(n−1)).
The intermediate key K is encoded as an (n+1)-dimensional vector of w-bit words (k0, k1, k2, ..., kn). A pseudorandom generator generates K from a shared secret key.
Practically, NH is done in unsigned integers. All multiplications are mod 2^w, all additions mod 2^(w/2), and all inputs are a vector of half-words ({\displaystyle w/2=32}-bit integers). The algorithm will then use {\displaystyle \lceil k/2\rceil } multiplications, where {\displaystyle k} is the number of half-words in the vector. Thus, the algorithm runs at a "rate" of one multiplication per word of input.
=== RFC 4418 ===
RFC 4418 is an informational RFC that describes a wrapping of NH for UMAC. The overall UHASH ("Universal Hash Function") routine produces a variable-length tag, which corresponds to the number of iterations (and the total length of keys) needed in all three layers of its hashing. Several calls to an AES-based key derivation function are used to provide keys for all three keyed hashes.
Layer 1 (1024 byte chunks -> 8 byte hashes concatenated) uses NH because it is fast.
Layer 2 hashes everything down to 16 bytes using a POLY function that performs prime-modulus arithmetic, with the prime changing as the size of the input grows.
Layer 3 hashes the 16-byte string to a fixed length of 4 bytes. This is what one iteration generates.
In RFC 4418, NH is rearranged to take the form:
Y = 0
for (i = 0; i < t; i += 8) do
{\displaystyle {\begin{aligned}{\mathtt {Y}}&={\mathtt {Y+_{64}((M_{i+0}+_{32}K_{i+0})*_{64}(M_{i+4}+_{32}K_{i+4}))}}\\{\mathtt {Y}}&={\mathtt {Y+_{64}((M_{i+1}+_{32}K_{i+1})*_{64}(M_{i+5}+_{32}K_{i+5}))}}\\{\mathtt {Y}}&={\mathtt {Y+_{64}((M_{i+2}+_{32}K_{i+2})*_{64}(M_{i+6}+_{32}K_{i+6}))}}\\{\mathtt {Y}}&={\mathtt {Y+_{64}((M_{i+3}+_{32}K_{i+3})*_{64}(M_{i+7}+_{32}K_{i+7}))}}\end{aligned}}}
end for
This definition is designed to encourage programmers to use SIMD instructions for the accumulation: the two operands of each multiplication are four indices apart, so they naturally fall into different SIMD registers and can be multiplied in bulk. On a hypothetical machine, it could simply translate to:
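A plain-C rendering of the rearranged loop above may clarify the arithmetic (an illustrative sketch; RFC 4418 specifies the operations, not this exact code). Here +_32 is addition mod 2^32 via uint32_t wrap-around, while *_64 and +_64 are the 64-bit multiply and accumulate:

```c
#include <stdint.h>

/* NH accumulation over t 32-bit words of message M and key K;
   t must be a multiple of 8. */
uint64_t nh_loop(const uint32_t *M, const uint32_t *K, int t) {
    uint64_t Y = 0;
    for (int i = 0; i < t; i += 8)
        for (int j = 0; j < 4; j++)   /* the four unrolled statements */
            Y += (uint64_t)(uint32_t)(M[i + j] + K[i + j]) *
                 (uint32_t)(M[i + j + 4] + K[i + j + 4]);
    return Y;
}
```

With SIMD, the four j-iterations become one vector multiply-accumulate, which is where the cycle-per-byte figures quoted earlier come from.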
== See also ==
Poly1305, another fast MAC based on strongly universal hashing
MMH-Badger MAC, another fast MAC
== References ==
== External links ==
Ted Krovetz. "UMAC: Fast and Provably Secure Message Authentication".
Miller, Damien; Valchev, Peter (2007-09-03). "The use of UMAC in the SSH Transport Layer Protocol: draft-miller-secsh-umac-01.txt". IETF.
Skein is a cryptographic hash function and one of five finalists in the NIST hash function competition. Entered as a candidate to become the SHA-3 standard, the successor of SHA-1 and SHA-2, it ultimately lost to NIST hash candidate Keccak.
The name Skein refers to how the Skein function intertwines the input, similar to a skein of yarn.
== History ==
Skein was created by Bruce Schneier, Niels Ferguson, Stefan Lucks, Doug Whiting, Mihir Bellare, Tadayoshi Kohno, Jon Callas and Jesse Walker.
Skein is based on the Threefish tweakable block cipher compressed using Unique Block Iteration (UBI) chaining mode, a variant of the Matyas–Meyer–Oseas hash mode, while leveraging an optional low-overhead argument-system for flexibility.
Skein's algorithm and a reference implementation were placed in the public domain.
== Functionality ==
Skein supports internal state sizes of 256, 512 and 1024 bits, and arbitrary output sizes.
The authors claim 6.1 cycles per byte for any output size on an Intel Core 2 Duo in 64-bit mode.
The core of Threefish is based on a MIX function that transforms two 64-bit words using a single addition, a rotation by a constant, and an XOR. The UBI chaining mode combines an input chaining value with an arbitrary-length input string and produces a fixed-size output.
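The MIX step can be sketched in a few lines of C (the rotation constant varies by round and word position in the real cipher; here R is simply a parameter, and this is an illustration rather than a Threefish implementation):

```c
#include <stdint.h>

/* Rotate a 64-bit word left by r bits, 1 <= r <= 63. */
static inline uint64_t rotl64(uint64_t x, unsigned r) {
    return (x << r) | (x >> (64 - r));
}

/* Threefish MIX: one addition, one rotation, one XOR.
   (x0, x1) -> (x0 + x1,  rotl(x1, R) ^ (x0 + x1)) */
void threefish_mix(uint64_t *x0, uint64_t *x1, unsigned R) {
    *x0 += *x1;
    *x1 = rotl64(*x1, R) ^ *x0;
}
```

The absence of S-boxes mentioned below is visible here: all nonlinearity comes from the interaction of addition (which carries between bits) with XOR.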
Threefish's nonlinearity comes entirely from the combination of addition operations and exclusive-ORs; it does not use S-boxes. The function is optimized for 64-bit processors, and the Skein paper defines optional features such as randomized hashing, parallelizable tree hashing, a stream cipher, personalization, and a key derivation function.
== Cryptanalysis ==
In October 2010, an attack that combines rotational cryptanalysis with the rebound attack was published. The attack finds rotational collisions for 53 of 72 rounds in Threefish-256, and 57 of 72 rounds in Threefish-512. It also affects the Skein hash function. This is a follow-up to the earlier attack published in February, which breaks 39 and 42 rounds respectively.
The Skein team tweaked the key schedule constant for round 3 of the NIST hash function competition, to make this attack less effective, even though they believe the hash would still be secure without these tweaks.
== Examples of Skein hashes ==
Hash values of empty string.
Skein-256-256("")
c8877087da56e072870daa843f176e9453115929094c3a40c463a196c29bf7ba
Skein-512-256("")
39ccc4554a8b31853b9de7a1fe638a24cce6b35a55f2431009e18780335d2621
Skein-512-512("")
bc5b4c50925519c290cc634277ae3d6257212395cba733bbad37a4af0fa06af41fca7903d06564fea7a2d3730dbdb80c1f85562dfcc070334ea4d1d9e72cba7a
Even a small change in the message will (with overwhelming probability) result in a mostly different hash, due to the avalanche effect. For example, adding a period to the end of the sentence:
Skein-512-256("The quick brown fox jumps over the lazy dog")
b3250457e05d3060b1a4bbc1428bc75a3f525ca389aeab96cfa34638d96e492a
Skein-512-256("The quick brown fox jumps over the lazy dog.")
41e829d7fca71c7d7154ed8fc8a069f274dd664ae0ed29d365d919f4e575eebb
Skein-512-512("The quick brown fox jumps over the lazy dog")
94c2ae036dba8783d0b3f7d6cc111ff810702f5c77707999be7e1c9486ff238a7044de734293147359b4ac7e1d09cd247c351d69826b78dcddd951f0ef912713
Skein-512-512("The quick brown fox jumps over the lazy dog.")
658223cb3d69b5e76e3588ca63feffba0dc2ead38a95d0650564f2a39da8e83fbb42c9d6ad9e03fbfde8a25a880357d457dbd6f74cbcb5e728979577dbce5436
== References ==
== External links ==
Official Skein website (dead, Wayback Machine archive)
Bruce Schneier's Skein webpage
=== Implementations ===
SPARKSkein – an implementation of Skein in SPARK, with proofs of type-safety
Botan contains a C++ implementation of Skein-512
nskein – a .NET implementation of Skein with support for all block sizes
pyskein Skein module for Python
PHP-Skein-Hash Skein hash for PHP on GitHub
Digest::Skein, an implementation in C and Perl
skeinfish A C# implementation of Skein and Threefish (based on version 1.3)
Java, Scala, and Javascript implementations of Skein 512-512 (based on version 1.3)
A Java implementation of Skein (based on version 1.1)
An implementation of Skein in Ada
skerl, Skein hash function for Erlang, via NIFs
Skein 512-512 implemented in Bash
Skein implemented in Haskell
VHDL source code developed by the Cryptographic Engineering Research Group (CERG) at George Mason University
skeinr Skein implemented in Ruby
fhreefish An efficient implementation of Skein-256 for 8-bit Atmel AVR microcontrollers, meeting the performance estimates outlined in the official specification
In cryptography, Tiger is a cryptographic hash function designed by Ross Anderson and Eli Biham in 1995 for efficiency on 64-bit platforms. The size of a Tiger hash value is 192 bits. Truncated versions (known as Tiger/128 and Tiger/160) can be used for compatibility with protocols assuming a particular hash size. Unlike the SHA-2 family, no distinguishing initialization values are defined; they are simply prefixes of the full Tiger/192 hash value.
Tiger2 is a variant where the message is padded by first appending a byte with the hexadecimal value of 0x80 as in MD4, MD5 and SHA, rather than with the hexadecimal value of 0x01 as in the case of Tiger. The two variants are otherwise identical.
== Algorithm ==
Tiger is based on Merkle–Damgård construction. The one-way compression function operates on 64-bit words, maintaining 3 words of state and processing 8 words of data. There are 24 rounds, using a combination of operation mixing with XOR and addition/subtraction, rotates, and S-box lookups, and a fairly intricate key scheduling algorithm for deriving 24 round keys from the 8 input words.
Although fast in software, Tiger's large S-boxes (four S-boxes, each with 256 64-bit entries totaling 8 KiB) make implementations in hardware or microcontrollers difficult.
== Usage ==
Tiger is frequently used in Merkle hash tree form, where it is referred to as TTH (Tiger Tree Hash). TTH is used by many clients on the Direct Connect and Gnutella file sharing networks, and can optionally be included in the BitTorrent metafile for better content availability.
Tiger was considered for inclusion in the OpenPGP standard, but was abandoned in favor of RIPEMD-160.
== OID ==
RFC 2440 refers to TIGER as having no OID, whereas the GNU Coding Standards list TIGER as having OID 1.3.6.1.4.1.11591.12.2. In the IPSEC subtree, HMAC-TIGER is assigned OID 1.3.6.1.5.5.8.1.3. No OID for TTH has been announced yet.
== Byte order ==
The specification of Tiger does not define the way its output should be printed but only defines the result to be three ordered 64-bit integers. The "testtiger" program at the author's homepage was intended to allow easy testing of the test source code, rather than to define any particular print order. The protocols Direct Connect and ADC as well as the program tthsum use little-endian byte order, which is also preferred by one of the authors.
== Examples ==
In the example below, the 192-bit (24-byte) Tiger hashes are represented as 48 hexadecimal digits in little-endian byte order. The following demonstrates a 43-byte ASCII input and the corresponding Tiger hashes:
Tiger("The quick brown fox jumps over the lazy dog") =
6d12a41e72e644f017b6f0e2f7b44c6285f06dd5d2c5b075
Tiger2("The quick brown fox jumps over the lazy dog") =
976abff8062a2e9dcea3a1ace966ed9c19cb85558b4976d8
Even a small change in the message will (with very high probability) result in a completely different hash, e.g. changing d to c:
Tiger("The quick brown fox jumps over the lazy cog") =
a8f04b0f7201a0d728101c9d26525b31764a3493fcd8458f
Tiger2("The quick brown fox jumps over the lazy cog") =
09c11330283a27efb51930aa7dc1ec624ff738a8d9bdd3df
The hash of the zero-length string is:
Tiger("") =
3293ac630c13f0245f92bbb1766e16167a4e58492dde73f3
Tiger2("") =
4441be75f6018773c206c22745374b924aa8313fef919f41
== Cryptanalysis ==
Unlike MD5 or SHA-0/1, there are no known effective attacks on the full 24-round Tiger except for pseudo-near collision. While MD5 processes its state with 64 simple 32-bit operations per 512-bit block and SHA-1 with 80, Tiger updates its state with a total of 144 such operations per 512-bit block, additionally strengthened by large S-box look-ups.
John Kelsey and Stefan Lucks have found a collision-finding attack on 16-round Tiger with a time complexity equivalent to about 2^44 compression function invocations and another attack that finds pseudo-near collisions in 20-round Tiger with work less than that of 2^48 compression function invocations. Florian Mendel et al. have improved upon these attacks by describing a collision attack spanning 19 rounds of Tiger, and a 22-round pseudo-near-collision attack. These attacks require a work effort equivalent to about 2^62 and 2^44 evaluations of the Tiger compression function, respectively.
== See also ==
Hash function security summary
Comparison of cryptographic hash functions
List of hash functions
Serpent – a block cipher by the same authors
== References ==
== External links ==
The Tiger home page
Steganography ( STEG-ə-NOG-rə-fee) is the practice of representing information within another message or physical object, in such a manner that the presence of the concealed information would not be evident to an unsuspecting person's examination. In computing/electronic contexts, a computer file, message, image, or video is concealed within another file, message, image, or video. Generally, the hidden messages appear to be (or to be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a formal shared secret are forms of security through obscurity, while key-dependent steganographic schemes try to adhere to Kerckhoffs's principle.
The word steganography comes from Greek steganographia, which combines the words steganós (στεγανός), meaning "covered or concealed", and -graphia (γραφή) meaning "writing". The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography, disguised as a book on magic.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest and may in themselves be incriminating in countries in which encryption is illegal. Whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing both the fact that a secret message is being sent and its contents.
Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside a transport layer, such as a document file, image file, program, or protocol. Media files are ideal for steganographic transmission because of their large size. For example, a sender might start with an innocuous image file and adjust the color of every hundredth pixel to correspond to a letter in the alphabet. The change is so subtle that someone who is not looking for it is unlikely to notice the change.
== History ==
The first recorded uses of steganography can be traced back to 440 BC in Greece, when Herodotus mentions two examples in his Histories. Histiaeus sent a message to his vassal, Aristagoras, by shaving the head of his most trusted servant, "marking" the message onto his scalp, then sending him on his way once his hair had regrown, with the instruction, "When thou art come to Miletus, bid Aristagoras shave thy head, and look thereon." Additionally, Demaratus sent a warning about a forthcoming attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand.
In his work Polygraphiae, Johannes Trithemius developed his Ave Maria cipher that can hide information in a Latin praise of God. "Auctor sapientissimus conseruans angelica deferat nobis charitas potentissimi creatoris", for example, contains the concealed word VICIPEDIA.
== Techniques ==
Numerous techniques throughout history have been developed to embed a message within another medium.
=== Physical ===
Placing the message in a physical item has been widely used for centuries. Some notable examples include invisible ink on paper, writing a message in Morse code on yarn worn by a courier, microdots, or using a music cipher to hide messages as musical notes in sheet music.
==== Social steganography ====
In communities with social or government taboos or censorship, people use cultural steganography—hiding messages in idiom, pop culture references, and other messages they share publicly and assume are monitored. This relies on social context to make the underlying messages visible only to certain readers. Examples include:
Hiding a message in the title and context of a shared video or image.
Misspelling names or words that are popular in the media in a given week, to suggest an alternative meaning.
Hiding a picture that can be traced by using Paint or any other drawing tool.
=== Digital messages ===
Since the dawn of computers, techniques have been developed to embed messages in digital cover media. The message to conceal is often encrypted, then used to overwrite part of a much larger block of encrypted data or a block of random data (an unbreakable cipher like the one-time pad generates ciphertexts that look perfectly random without the private key).
Examples of this include changing pixels in image or sound files, properties of digital text such as spacing and font choice, chaffing and winnowing, mimic functions, modifying the echo of a sound file (echo steganography), and including data in ignored sections of a file.
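The encrypt-then-overwrite idea can be sketched as follows. This is an illustrative toy (the function names, offset, and block size are my own); the key point is that ciphertext dropped into random bytes is indistinguishable from its surroundings without knowing where to look.

```python
import os

# Hypothetical sketch: hide an (already encrypted) message inside a block of
# random bytes. Without the offset and length, the embedded ciphertext is
# statistically indistinguishable from the surrounding random data.
def embed(ciphertext: bytes, block_size: int, offset: int) -> bytes:
    assert offset + len(ciphertext) <= block_size
    block = bytearray(os.urandom(block_size))
    block[offset:offset + len(ciphertext)] = ciphertext
    return bytes(block)

def extract(block: bytes, offset: int, length: int) -> bytes:
    return block[offset:offset + length]

secret = b"\x8f\x1c\x2a\x90"          # stand-in for real ciphertext
carrier = embed(secret, 1024, 311)    # the offset acts as a shared secret
assert extract(carrier, 311, len(secret)) == secret
```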
=== Steganography in streaming media ===
With the evolution of network applications, steganography research has shifted from image steganography to steganography in streaming media such as Voice over Internet Protocol (VoIP).
In 2003, Giannoula et al. developed a data hiding technique leading to compressed forms of source video signals on a frame-by-frame basis.
In 2005, Dittmann et al. studied steganography and watermarking of multimedia contents such as VoIP.
In 2008, Yongfeng Huang and Shanyu Tang presented a novel approach to information hiding in low-bit-rate VoIP speech streams; their published work on steganography was the first effort to improve the codebook partition by using graph theory along with quantization index modulation in low-bit-rate streaming media.
In 2011 and 2012, Yongfeng Huang and Shanyu Tang devised new steganographic algorithms that use codec parameters as cover object to realise real-time covert VoIP steganography. Their findings were published in IEEE Transactions on Information Forensics and Security.
In 2024, Cheddad & Cheddad proposed a new framework for reconstructing lost or corrupted audio signals using a combination of machine learning techniques and latent information. The main idea of their paper is to enhance audio signal reconstruction by fusing steganography, halftoning (dithering), and state-of-the-art shallow and deep learning methods (e.g., RF, LSTM). This combination of steganography, halftoning, and machine learning for audio signal reconstruction may inspire further research in optimizing this approach or applying it to other domains, such as image reconstruction (i.e., inpainting).
=== Adaptive steganography ===
Adaptive steganography is a technique for concealing information within digital media by tailoring the embedding process to the specific features of the cover medium. One demonstrated example develops a skin tone detection algorithm capable of identifying facial features, which is then applied to adaptive steganography. By incorporating face rotation, the technique aims to conceal information in a manner that is both less detectable and more robust across various facial orientations within images. This strategy can potentially improve the efficacy of information hiding in both static images and video content.
=== Cyber-physical systems/Internet of Things ===
Academic work since 2012 demonstrated the feasibility of steganography for cyber-physical systems (CPS)/the Internet of Things (IoT). Some techniques of CPS/IoT steganography overlap with network steganography, i.e. hiding data in communication protocols used in CPS/the IoT. However, specific techniques hide data in CPS components. For instance, data can be stored in unused registers of IoT/CPS components and in the states of IoT/CPS actuators.
=== Printed ===
Digital steganography output may be in the form of printed documents. A message, the plaintext, may be first encrypted by traditional means, producing a ciphertext. Then, an innocuous cover text is modified in some way so as to contain the ciphertext, resulting in the stegotext. For example, the letter size, spacing, typeface, or other characteristics of a cover text can be manipulated to carry the hidden message. Only a recipient who knows the technique used can recover the message and then decrypt it. Francis Bacon developed Bacon's cipher as such a technique.
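Bacon's cipher can be illustrated in a few lines. In this sketch (the helper names are my own) each letter of the hidden message becomes a 5-bit a/b code, and uppercase letters in the cover text stand in for the "second typeface" that a printed version would use:

```python
# A minimal sketch of Bacon's cipher: each plaintext letter becomes a 5-bit
# a/b code, carried here by letter case as a stand-in for a typeface change.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def bacon_encode(letter: str) -> str:
    return format(ALPHABET.index(letter.lower()), "05b").replace("0", "a").replace("1", "b")

def hide(message: str, cover: str) -> str:
    bits = "".join(bacon_encode(c) for c in message)
    out, i = [], 0
    for ch in cover:
        if ch.isalpha() and i < len(bits):
            out.append(ch.upper() if bits[i] == "b" else ch.lower())
            i += 1
        else:
            out.append(ch)
    if i < len(bits):
        raise ValueError("cover text too short")
    return "".join(out)

def reveal(stegotext: str, length: int) -> str:
    bits = "".join("b" if ch.isupper() else "a" for ch in stegotext if ch.isalpha())
    bits = bits[:length * 5]
    return "".join(ALPHABET[int(bits[i:i + 5].replace("a", "0").replace("b", "1"), 2)]
                   for i in range(0, len(bits), 5))

stego = hide("hi", "the quick brown fox jumps")
assert reveal(stego, 2) == "hi"
```

Only a recipient who knows the convention (case encodes the bit) and the message length can recover the text.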
The ciphertext produced by most digital steganography methods, however, is not printable. Traditional digital methods rely on perturbing noise in the channel file to hide the message, and as such, the channel file must be transmitted to the recipient with no additional noise from the transmission. Printing introduces much noise in the ciphertext, generally rendering the message unrecoverable. There are techniques that address this limitation, one notable example being ASCII Art Steganography.
Although not classic steganography, some types of modern color laser printers integrate the model, serial number, and timestamps on each printout for traceability reasons using a dot-matrix code made of small, yellow dots not recognizable to the naked eye — see printer steganography for details.
=== Network ===
In 2015, a taxonomy of 109 network hiding methods was presented by Steffen Wendzel, Sebastian Zander et al. that summarized core concepts used in network steganography research. The taxonomy was developed further in recent years by several publications and authors and adjusted to new domains, such as CPS steganography.
In 1977, Kent concisely described the potential for covert channel signaling in general network communication protocols, even if the traffic is encrypted (in a footnote) in "Encryption-Based Protection for Interactive User/Computer Communication," Proceedings of the Fifth Data Communications Symposium, September 1977.
In 1987, Girling first studied covert channels on a local area network (LAN), identifying and realising three obvious covert channels (two storage channels and one timing channel); his research paper, "Covert channels in LAN's", was published in IEEE Transactions on Software Engineering, vol. SE-13, no. 2, February 1987.
In 1989, Wolf implemented covert channels in LAN protocols, e.g. using the reserved fields, pad fields, and undefined fields in the TCP/IP protocol.
In 1997, Rowland used the IP identification field, the TCP initial sequence number and acknowledge sequence number fields in TCP/IP headers to build covert channels.
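Rowland's use of the IP identification field can be sketched as below. This is a simplified illustration only (header layout per RFC 791; actual transmission would additionally require a raw socket and a valid header checksum, both omitted here):

```python
import struct

# Hedged sketch of Rowland's idea: smuggle one payload byte per packet in the
# 16-bit IP Identification field of a 20-byte IPv4 header.
def make_ip_header(payload_byte: int, src: bytes, dst: bytes) -> bytes:
    version_ihl = (4 << 4) | 5              # IPv4, 5 x 32-bit words
    identification = payload_byte << 8      # covert byte in the ID's high bits
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, 20, identification,
                       0, 64, 6, 0, src, dst)

def read_covert_byte(header: bytes) -> int:
    identification = struct.unpack("!BBHHHBBH4s4s", header)[3]
    return identification >> 8

hdr = make_ip_header(ord("A"), b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
assert read_covert_byte(hdr) == ord("A")
```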
In 2002, Kamran Ahsan made an excellent summary of research on network steganography.
In 2005, Steven J. Murdoch and Stephen Lewis contributed a chapter entitled "Embedding Covert Channels into TCP/IP" in the "Information Hiding" book published by Springer.
All information hiding techniques that may be used to exchange steganograms in telecommunication networks can be classified under the general term of network steganography. This nomenclature was originally introduced by Krzysztof Szczypiorski in 2003. Contrary to typical steganographic methods that use digital media (images, audio and video files) to hide data, network steganography uses communication protocols' control elements and their intrinsic functionality. As a result, such methods can be harder to detect and eliminate.
Typical network steganography methods involve modification of the properties of a single network protocol. Such modification can be applied to the protocol data unit (PDU), to the time relations between the exchanged PDUs, or both (hybrid methods).
Moreover, it is feasible to utilize the relation between two or more different network protocols to enable secret communication. These applications fall under the term inter-protocol steganography. Alternatively, multiple network protocols can be used simultaneously to transfer hidden information and so-called control protocols can be embedded into steganographic communications to extend their capabilities, e.g. to allow dynamic overlay routing or the switching of utilized hiding methods and network protocols.
Network steganography covers a broad spectrum of techniques, which include, among others:
Steganophony – the concealment of messages in Voice-over-IP conversations, e.g. the employment of delayed or corrupted packets that would normally be ignored by the receiver (this method is called LACK – Lost Audio Packets Steganography), or, alternatively, hiding information in unused header fields.
WLAN Steganography – transmission of steganograms in Wireless Local Area Networks. A practical example of WLAN Steganography is the HICCUPS system (Hidden Communication System for Corrupted Networks)
== Additional terminology ==
Discussions of steganography generally use terminology analogous to and consistent with conventional radio and communications technology. However, some terms appear specifically in software and are easily confused. These are the most relevant ones to digital steganographic systems:
The payload is the data covertly communicated. The carrier is the signal, stream, or data file that hides the payload, which differs from the channel, which typically means the type of input, such as a JPEG image. The resulting signal, stream, or data file with the encoded payload is sometimes called the package, stego file, or covert message. The proportion of bytes, samples, or other signal elements modified to encode the payload is called the encoding density and is typically expressed as a number between 0 and 1.
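The encoding density defined above can be computed directly by comparing carrier and package. A minimal helper (the function name is my own):

```python
# Illustrative helper: encoding density as the fraction of carrier elements
# modified to hold the payload, a value between 0 and 1.
def encoding_density(carrier: bytes, package: bytes) -> float:
    assert len(carrier) == len(package)
    changed = sum(1 for a, b in zip(carrier, package) if a != b)
    return changed / len(carrier)

carrier = bytes(range(16))
package = bytearray(carrier)
package[3] ^= 1       # flip the LSB of two carrier bytes
package[9] ^= 1
assert encoding_density(carrier, bytes(package)) == 2 / 16
```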
In a set of files, the files that are considered likely to contain a payload are suspects. A suspect identified through some type of statistical analysis can be referred to as a candidate.
== Countermeasures and detection ==
Detecting physical steganography requires a careful physical examination, including the use of magnification, developer chemicals, and ultraviolet light. It is a time-consuming process with obvious resource implications, even in countries that employ many people to spy on other citizens. However, it is feasible to screen mail of certain suspected individuals or institutions, such as prisons or prisoner-of-war (POW) camps.
During World War II, prisoner of war camps gave prisoners specially-treated paper that would reveal invisible ink. An article in the 24 June 1948 issue of Paper Trade Journal by Morris S. Kantrowitz, Technical Director of the United States Government Printing Office, describes in general terms the development of this paper. Three prototype papers (Sensicoat, Anilith, and Coatalith) were used to manufacture postcards and stationery provided to German prisoners of war in the US and Canada. If POWs tried to write a hidden message, the special paper rendered it visible. The US granted at least two patents related to the technology, one to Kantrowitz, U.S. patent 2,515,232, "Water-Detecting paper and Water-Detecting Coating Composition Therefor," patented 18 July 1950, and an earlier one, "Moisture-Sensitive Paper and the Manufacture Thereof," U.S. patent 2,445,586, patented 20 July 1948. A similar strategy issues prisoners with writing paper ruled with a water-soluble ink that runs in contact with water-based invisible ink.
In computing, detection of steganographically encoded packages is called steganalysis. The simplest method to detect modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known clean copies of the materials and then compare them against the current contents of the site. The differences, if the carrier is the same, constitute the payload. In general, using extremely high compression rates makes steganography difficult but not impossible. Compression errors provide a hiding place for data, but high compression reduces the amount of data available to hold the payload, raising the encoding density, which facilitates detection (in extreme cases, even by casual observation).
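The compare-to-known-original check reduces to a byte-level diff. A minimal sketch (names and data are illustrative):

```python
# Sketch of the "compare to a known original" check: if the carrier is
# unchanged apart from the embedding, the positions that differ are exactly
# where the payload was written.
def diff_positions(original: bytes, suspect: bytes):
    return [i for i, (a, b) in enumerate(zip(original, suspect)) if a != b]

original = b"clean image bytes........."
suspect = bytearray(original)
suspect[6:11] = b"XYZZY"                   # simulated embedded payload
assert diff_positions(original, bytes(suspect)) == [6, 7, 8, 9, 10]
```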
There are a variety of basic tests that can be done to identify whether or not a secret message exists. This process is not concerned with the extraction of the message, which is a different process and a separate step. The most basic approaches of steganalysis are visual or aural attacks, structural attacks, and statistical attacks. These approaches attempt to detect the steganographic algorithms that were used. These algorithms range from unsophisticated to very sophisticated, with early algorithms being much easier to detect due to the statistical anomalies they left behind. The size of the hidden message is a factor in how difficult it is to detect, as is the overall size of the cover object. If the cover object is small and the message is large, the statistics can be distorted enough to make detection easier. A larger cover object with a small message alters the statistics less and gives the message a better chance of going unnoticed.
Steganalysis that targets a particular algorithm has much better success, as it can key in on the anomalies that are left behind; the analysis can perform a targeted search because it is aware of the behaviors the algorithm commonly exhibits. When analyzing an image, the least significant bits of many images are actually not random. Camera sensors, especially lower-end ones, are not of the best quality and can introduce some random bits, and file compression applied to the image can also affect this. Secret messages can be introduced into the least significant bits of an image and hidden there. A steganography tool can be used to camouflage the secret message in the least significant bits, but it can introduce a random area that is too perfect. This area of perfect randomization stands out and can be detected by comparing the least significant bits to the next-to-least significant bits on an image that hasn't been compressed.
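A toy model of this effect (my own simplification, not a production steganalysis statistic): real image LSB planes carry sensor and compression structure, so their ones-ratio deviates from 0.5, whereas a random payload written into the LSB plane erases that bias.

```python
import random

def ones_ratio(samples, bit):
    """Fraction of samples whose given bit plane is 1."""
    return sum((s >> bit) & 1 for s in samples) / len(samples)

random.seed(7)
# Simulated cover whose quantization left the LSB plane biased toward 0.
cover = [random.choice([100, 102, 104, 105]) for _ in range(4000)]
# Simulated stego: random payload bits overwrite the LSB plane.
stego = [(s & ~1) | random.getrandbits(1) for s in cover]

assert abs(ones_ratio(cover, 0) - 0.5) > 0.15   # biased, looks natural
assert abs(ones_ratio(stego, 0) - 0.5) < 0.05   # "too perfect" randomness
```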
Generally, though, there are many techniques known to be able to hide messages in data using steganographic techniques. None are, by definition, obvious when users employ standard applications, but some can be detected by specialist tools. Others, however, are resistant to detection—or rather it is not possible to reliably distinguish data containing a hidden message from data containing just noise—even when the most sophisticated analysis is performed. Steganography is being used to conceal and deliver more effective cyber attacks, referred to as Stegware. The term Stegware was first introduced in 2017 to describe any malicious operation involving steganography as a vehicle to conceal an attack. Detection of steganography is challenging, and because of that, not an adequate defence. Therefore, the only way of defeating the threat is to transform data in a way that destroys any hidden messages, a process called Content Threat Removal.
== Applications ==
=== Use in modern printers ===
Some modern computer printers use steganography, including Hewlett-Packard and Xerox brand color laser printers. The printers add tiny yellow dots to each page. The barely-visible dots contain encoded printer serial numbers and date and time stamps.
=== Example from modern practice ===
The larger the cover message (in binary data, the number of bits) relative to the hidden message, the easier it is to hide the hidden message (as an analogy, the larger the "haystack", the easier it is to hide a "needle"). So digital pictures, which contain much data, are sometimes used to hide messages on the Internet and on other digital communication media. It is not clear how common this practice actually is.
For example, a 24-bit bitmap uses 8 bits to represent each of the three color values (red, green, and blue) of each pixel. The blue alone has 2^8 different levels of blue intensity. The difference between 11111111 and 11111110 in the value for blue intensity is likely to be undetectable by the human eye. Therefore, the least significant bit can be used more or less undetectably for something other than color information. If that is repeated for the green and the red elements of each pixel as well, it is possible to encode one letter of ASCII text for every three pixels.
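The three-pixels-per-letter scheme can be sketched directly (helper names are my own): the 8 bits of an ASCII character are spread over the R, G, B least significant bits of three consecutive pixels, leaving the ninth slot unused.

```python
# Sketch of the scheme above: one ASCII character (8 bits) spread over the
# R, G, B least significant bits of three pixels (9 slots, one unused).
# Pixels are (r, g, b) tuples of 0-255 values.
def embed_char(pixels, ch):
    bits = [int(b) for b in format(ord(ch), "08b")] + [0]
    out = []
    for p, triple in enumerate(pixels[:3]):
        out.append(tuple((v & ~1) | bits[p * 3 + i] for i, v in enumerate(triple)))
    return out

def extract_char(pixels):
    bits = [v & 1 for triple in pixels for v in triple][:8]
    return chr(int("".join(map(str, bits)), 2))

stego = embed_char([(10, 20, 30), (40, 50, 60), (70, 80, 90)], "A")
assert extract_char(stego) == "A"
# No channel value moved by more than 1, i.e. the change is imperceptible.
assert all(abs(a - b) <= 1
           for p, q in zip(stego, [(10, 20, 30), (40, 50, 60), (70, 80, 90)])
           for a, b in zip(p, q))
```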
Stated somewhat more formally, the objective for making steganographic encoding difficult to detect is to ensure that the changes to the carrier (the original signal) because of the injection of the payload (the signal to covertly embed) are visually (and ideally, statistically) negligible. The changes are indistinguishable from the noise floor of the carrier. All media can be a carrier, but media with a large amount of redundant or compressible information is better suited.
From an information theoretical point of view, that means that the channel must have more capacity than the "surface" signal requires. There must be redundancy. For a digital image, it may be noise from the imaging element; for digital audio, it may be noise from recording techniques or amplification equipment. In general, electronics that digitize an analog signal suffer from several noise sources, such as thermal noise, flicker noise, and shot noise. The noise provides enough variation in the captured digital information that it can be exploited as a noise cover for hidden data. In addition, lossy compression schemes (such as JPEG) always introduce some error to the decompressed data, and it is possible to exploit that for steganographic use, as well.
Although steganography and digital watermarking seem similar, they are not. In steganography, the hidden message should remain intact until it reaches its destination. Steganography can be used for digital watermarking in which a message (being simply an identifier) is hidden in an image so that its source can be tracked or verified (for example, Coded Anti-Piracy) or even just to identify an image (as in the EURion constellation). In such a case, the technique of hiding the message (here, the watermark) must be robust to prevent tampering. However, digital watermarking sometimes requires a brittle watermark, which can be modified easily, to check whether the image has been tampered with. That is the key difference between steganography and digital watermarking.
=== Alleged use by intelligence services ===
In 2010, the Federal Bureau of Investigation alleged that the Russian foreign intelligence service uses customized steganography software for embedding encrypted text messages inside image files for certain communications with "illegal agents" (agents without diplomatic cover) stationed abroad.
On 23 April 2019 the U.S. Department of Justice unsealed an indictment charging Xiaoqing Zheng, a Chinese businessman and former Principal Engineer at General Electric, with 14 counts of conspiring to steal intellectual property and trade secrets from General Electric. Zheng had allegedly used steganography to exfiltrate 20,000 documents from General Electric to Tianyi Aviation Technology Co. in Nanjing, China, a company the FBI accused him of starting with backing from the Chinese government.
=== Distributed steganography ===
There are distributed steganography methods, including methodologies that distribute the payload through multiple carrier files in diverse locations to make detection more difficult. For example, U.S. patent 8,527,779 by cryptographer William Easttom (Chuck Easttom).
=== Online challenge ===
The puzzles that are presented by Cicada 3301 incorporate steganography with cryptography and other solving techniques since 2012. Puzzles involving steganography have also been featured in other alternative reality games.
The communications of The May Day mystery incorporate steganography and other solving techniques since 1981.
=== Computer malware ===
It is possible to steganographically hide computer malware into digital images, videos, audio and various other files in order to evade detection by antivirus software. This type of malware is called stegomalware. It can be activated by external code, which can be malicious or even non-malicious if some vulnerability in the software reading the file is exploited.
Stegomalware can be removed from certain files without knowing whether they contain stegomalware or not. This is done through content disarm and reconstruction (CDR) software, and it involves reprocessing the entire file or removing parts from it. Actually detecting stegomalware in a file can be difficult and may involve testing the file behaviour in virtual environments or deep learning analysis of the file.
== Steganalysis ==
=== Stegoanalytical algorithms ===
Stegoanalytical algorithms can be cataloged in different ways, most notably according to the information available to the stegoanalyst and according to the purpose sought.
==== According to the information available ====
These algorithms can be classified based on the information held by the stegoanalyst in terms of clear and hidden messages, an approach analogous to the attack models of cryptanalysis, although the two differ in several respects:
Chosen stego attack: the stegoanalyst has the final stego object and knows the steganographic algorithm used.
Known cover attack: the stegoanalyst has both the original carrier object and the final stego object.
Known stego attack: the stegoanalyst has the original carrier object and the final stego object, and additionally knows the algorithm used.
Stego only attack: the stegoanalyst has only the stego object.
Chosen message attack: the stegoanalyst generates a stego object from a message of their own choosing.
Known message attack: the stegoanalyst has the stego object and knows the hidden message.
==== According to the purpose sought ====
The principal purpose of steganography is to transfer information unnoticed; however, an attacker may have two different aims:
Passive steganalysis: does not alter the stego object; it examines the stego object to establish whether it carries hidden information and to recover the hidden message, the key used, or both.
Active steganalysis: changes the initial stego object, seeking to suppress the transfer of information, if any exists.
== See also ==
== References ==
== Sources ==
== External links ==
An overview of digital steganography, particularly within images, for the computationally curious by Chris League, Long Island University, 2015
Examples showing images hidden in other images
Information Hiding: Steganography & Digital Watermarking. Papers and information about steganography and steganalysis research from 1995 to the present. Includes Steganography Software Wiki list. Dr. Neil F. Johnson.
Detecting Steganographic Content on the Internet. 2002 paper by Niels Provos and Peter Honeyman published in Proceedings of the Network and Distributed System Security Symposium (San Diego, CA, 6–8 February 2002). NDSS 2002. Internet Society, Washington, D.C.
Covert Channels in the TCP/IP Suite Archived 23 October 2012 at the Wayback Machine – 1996 paper by Craig Rowland detailing the hiding of data in TCP/IP packets.
Network Steganography Centre Tutorials Archived 16 December 2017 at the Wayback Machine. How-to articles on the subject of network steganography (Wireless LANs, VoIP – Steganophony, TCP/IP protocols and mechanisms, Steganographic Router, Inter-protocol steganography). By Krzysztof Szczypiorski and Wojciech Mazurczyk from Network Security Group.
Invitation to BPCS-Steganography.
Steganography by Michael T. Raggo, DefCon 12 (1 August 2004)
File Format Extension Through Steganography by Blake W. Ford and Khosrow Kaikhah
Computer steganography. Theory and practice with Mathcad (Rus) 2006 paper by Konakhovich G. F., Puzyrenko A. Yu. published in MK-Press Kyiv, Ukraine
stegano, a free and open source steganography web service.
There are a number of standards related to cryptography. Standard algorithms and protocols provide a focus for study; standards for popular applications attract a large amount of cryptanalysis.
== Encryption standards ==
Data Encryption Standard (DES, now obsolete)
Advanced Encryption Standard (AES)
RSA the original public key algorithm
OpenPGP
== Hash standards ==
MD5 128-bit (obsolete)
SHA-1 160-bit (obsolete)
SHA-2 available in 224, 256, 384, and 512-bit variants
HMAC keyed hash
PBKDF2 Key derivation function (RFC 2898)
== Digital signature standards ==
Digital Signature Standard (DSS), based on the Digital Signature Algorithm (DSA)
RSA
Elliptic Curve DSA
== Public-key infrastructure (PKI) standards ==
X.509 Public Key Certificates
== Wireless Standards ==
Wired Equivalent Privacy (WEP), severely flawed and superseded by WPA
Wi-Fi Protected Access (WPA) better than WEP, a 'pre-standard' partial version of 802.11i
802.11i a.k.a. WPA2, uses AES and other improvements on WEP
A5/1 and A5/2 cell phone encryption for GSM
== U.S. Government Federal Information Processing Standards (FIPS) ==
FIPS PUB 31 Guidelines for Automatic Data Processing Physical Security and Risk Management 1974
FIPS PUB 46-3 Data Encryption Standard (DES) 1999
FIPS PUB 73 Guidelines for Security of Computer Applications 1980
FIPS PUB 74 Guidelines for Implementing and Using the NBS Data Encryption Standard 1981
FIPS PUB 81 DES Modes of Operation 1980
FIPS PUB 102 Guideline for Computer Security Certification and Accreditation 1983
FIPS PUB 112 Password Usage 1985, defines 10 factors to be considered in access control systems that are based on passwords
FIPS PUB 113 Computer Data Authentication 1985, specifies a Data Authentication Algorithm (DAA) based on DES, adopted by the Department of Treasury and the banking community to protect electronic fund transfers.
FIPS PUB 140-2 Security Requirements for Cryptographic Modules 2001, defines four increasing security levels
FIPS PUB 171 Key Management Using ANSI X9.17 (ANSI X9.17-1985) 1992, based on DES
FIPS PUB 180-2 Secure Hash Standard (SHS) 2002 defines the SHA family
FIPS PUB 181 Automated Password Generator (APG) 1993
FIPS PUB 185 Escrowed Encryption Standard (EES) 1994, a key escrow system that provides for decryption of telecommunications when lawfully authorized.
FIPS PUB 186-2 Digital Signature Standard (DSS) 2000
FIPS PUB 190 Guideline for the Use of Advanced Authentication Technology Alternatives 1994
FIPS PUB 191 Guideline for the Analysis of local area network Security 1994
FIPS PUB 196 Entity Authentication Using Public Key Cryptography 1997
FIPS PUB 197 Advanced Encryption Standard (AES) 2001
FIPS PUB 198 The Keyed-Hash Message Authentication Code (HMAC) 2002
== Internet Requests for Comments (RFCs) ==
== Classified Standards ==
EKMS NSA's Electronic Key Management System
FNBDT NSA's secure narrow band voice standard
Fortezza encryption based on portable crypto token in PC Card format
STE secure telephone
STU-III older secure telephone
TEMPEST prevents compromising emanations
== Other ==
IPsec Virtual Private Network (VPN) and more
IEEE P1363 covers most aspects of public-key cryptography
Transport Layer Security (formerly SSL)
SSH secure Telnet and more
Content Scrambling System (CSS, the DVD encryption standard, broken by DeCSS)
Kerberos authentication standard
RADIUS authentication standard
ANSI X9.59 electronic payment standard
Common Criteria Trusted operating system standard
CRYPTREC Japanese Government's cryptography recommendations
== See also ==
NSA cryptography
Topics in cryptography
In cryptography, a key derivation function (KDF) is a cryptographic algorithm that derives one or more secret keys from a secret value such as a master key, a password, or a passphrase using a pseudorandom function (which typically uses a cryptographic hash function or block cipher). KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of a Diffie–Hellman key exchange into a symmetric key for use with AES. Keyed cryptographic hash functions are popular examples of pseudorandom functions used for key derivation.
== History ==
The first deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after its man page), and was invented by Robert Morris in 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modified DES encryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in the Unix password file. While it was a great advance at the time, increases in processor speeds since the PDP-11 era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strong passphrases impossible.
Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications in which defending against brute-force cracking is a primary concern. The growing use of massively parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only impose a certain computational cost on CPUs but also resist the cost/performance advantages of modern massively parallel platforms. Various algorithms have been designed specifically for this purpose, including bcrypt, scrypt and, more recently, Lyra2 and Argon2 (the latter being the winner of the Password Hashing Competition). The large-scale Ashley Madison data breach, in which roughly 36 million password hashes were stolen by attackers, illustrated the importance of algorithm selection in securing passwords. Although bcrypt was employed to protect the hashes (making large-scale brute-force cracking expensive and time-consuming), a significant portion of the accounts in the compromised data also contained a password hash based on the fast general-purpose MD5 algorithm, which made it possible for over 11 million of the passwords to be cracked in a matter of weeks.
In June 2017, the U.S. National Institute of Standards and Technology (NIST) issued a new revision of its digital authentication guidelines, NIST SP 800-63B, stating that: "Verifiers SHALL store memorized secrets [i.e. passwords] in a form that is resistant to offline attacks. Memorized secrets SHALL be salted and hashed using a suitable one-way key derivation function. Key derivation functions take a password, a salt, and a cost factor as inputs then generate a password hash. Their purpose is to make each password guessing trial by an attacker who has obtained a password hash file expensive and therefore the cost of a guessing attack high or prohibitive."
Modern password-based key derivation functions, such as PBKDF2, are based on a recognized cryptographic hash such as SHA-2, use a larger salt (at least 64 bits, chosen randomly) and a high iteration count. NIST recommends a minimum iteration count of 10,000, noting that "for especially critical keys, or for very powerful systems or systems where user-perceived performance is not critical, an iteration count of 10,000,000 may be appropriate."
== Key derivation ==
The original use for a KDF is key derivation, the generation of keys from secret passwords or passphrases. Variations on this theme include:
In conjunction with non-secret parameters to derive one or more keys from a common secret value (which is sometimes also referred to as "key diversification"). Such use may prevent an attacker who obtains a derived key from learning useful information about either the input secret value or any of the other derived keys. A KDF may also be used to ensure that derived keys have other desirable properties, such as avoiding "weak keys" in some specific encryption systems.
As components of multiparty key-agreement protocols. Examples of such key derivation functions include KDF1, defined in IEEE Std 1363-2000, and similar functions in ANSI X9.42.
To derive keys from secret passwords or passphrases (a password-based KDF).
To derive keys of a different length from the ones provided. KDFs designed for this purpose include HKDF and SSKDF. These take an optional 'info' bit string as an additional parameter, which may be crucial to bind the derived key material to application- and context-specific information.
Key stretching and key strengthening.
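The extract-and-expand pattern used by HKDF can be sketched with nothing but the standard library's HMAC primitive. This is a minimal illustration of the RFC 5869 construction (salt, secret, and 'info' labels are arbitrary example values), not a substitute for a vetted implementation:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes, hash_name: str = "sha256") -> bytes:
    """Extract step: concentrate the input keying material into a fixed-size PRK."""
    return hmac.new(salt, ikm, hash_name).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int, hash_name: str = "sha256") -> bytes:
    """Expand step: stretch the PRK to `length` bytes, bound to the `info` context string."""
    hash_len = hashlib.new(hash_name).digest_size
    okm, block = b"", b""
    for counter in range(1, -(-length // hash_len) + 1):  # ceil(length / hash_len) blocks
        block = hmac.new(prk, block + info + bytes([counter]), hash_name).digest()
        okm += block
    return okm[:length]

# Derive two independent keys from one shared secret by varying 'info'.
prk = hkdf_extract(b"public-salt", b"shared secret value")
enc_key = hkdf_expand(prk, b"encryption", 32)
mac_key = hkdf_expand(prk, b"authentication", 32)
```

Because the 'info' string is mixed into every output block, keys derived for different contexts are independent even though they come from the same secret.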
=== Key stretching and key strengthening ===
Key derivation functions are also used in applications to derive keys from secret passwords or passphrases, which typically do not have the desired properties to be used directly as cryptographic keys. In such applications, it is generally recommended that the key derivation function be made deliberately slow so as to frustrate brute-force attack or dictionary attack on the password or passphrase input value.
Such use may be expressed as DK = KDF(key, salt, iterations), where DK is the derived key, KDF is the key derivation function, key is the original key or password, salt is a random number which acts as cryptographic salt, and iterations refers to the number of iterations of a sub-function. The derived key is used instead of the original key or password as the key to the system. The values of the salt and the number of iterations (if it is not fixed) are stored with the hashed password or sent as cleartext (unencrypted) with an encrypted message.
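The expression DK = KDF(key, salt, iterations) maps directly onto PBKDF2 as exposed by Python's standard library. A minimal sketch (the iteration count shown is an illustrative choice, not a universal recommendation):

```python
import hashlib
import os

def derive(password: bytes, salt: bytes, iterations: int, length: int = 32) -> bytes:
    """DK = KDF(key, salt, iterations), instantiated as PBKDF2-HMAC-SHA-256."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=length)

salt = os.urandom(16)          # random salt, stored alongside the derived key
iterations = 600_000           # cost factor; tune so one derivation takes a tolerable delay
dk = derive(b"correct horse battery staple", salt, iterations)
```

The salt and iteration count are not secret: they are stored with the hash so that the same derivation can be repeated at verification time.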
The difficulty of a brute force attack is increased with the number of iterations. A practical limit on the iteration count is the unwillingness of users to tolerate a perceptible delay in logging into a computer or seeing a decrypted message. The use of salt prevents the attackers from precomputing a dictionary of derived keys.
An alternative approach, called key strengthening, extends the key with a random salt, but then (unlike in key stretching) securely deletes the salt. This forces both the attacker and legitimate users to perform a brute-force search for the salt value. Although the paper that introduced key stretching referred to this earlier technique and intentionally chose a different name, the term "key strengthening" is now often (arguably incorrectly) used to refer to key stretching.
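Key strengthening can be sketched as follows. The salt width, the hash, and the stored check value are illustrative choices (a real scheme would typically recognize the correct salt by successfully decrypting data with the derived key):

```python
import hashlib
import os

SALT_BYTES = 2  # 16 bits: adds a 65,536-step search for attacker and legitimate user alike

def strengthen(password: bytes) -> bytes:
    """Derive a key from password plus a random salt, then discard salt and key.

    Only a check value is kept; the salt is deliberately forgotten.
    """
    salt = os.urandom(SALT_BYTES)
    key = hashlib.sha256(salt + password).digest()
    return hashlib.sha256(key).digest()  # stored check value

def recover_key(password: bytes, check: bytes) -> bytes:
    """Re-derive the key by brute-forcing the deleted salt."""
    for i in range(256 ** SALT_BYTES):
        key = hashlib.sha256(i.to_bytes(SALT_BYTES, "big") + password).digest()
        if hashlib.sha256(key).digest() == check:
            return key
    raise ValueError("wrong password")
```

The deleted salt effectively adds SALT_BYTES × 8 bits of work to every guess, whether made by the user or by an attacker.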
== Password hashing ==
Despite their original use for key derivation, KDFs are possibly better known for their use in password hashing (password verification by hash comparison), as used by the passwd file or shadow password file. Password hash functions should be relatively expensive to calculate in case of brute-force attacks, and the key stretching of KDFs happens to provide this characteristic. The non-secret parameters are called "salt" in this context.
In 2013 a Password Hashing Competition was announced to choose a new, standard algorithm for password hashing. On 20 July 2015 the competition ended and Argon2 was announced as the final winner. Four other algorithms received special recognition: Catena, Lyra2, Makwa and yescrypt.
As of May 2023, the Open Worldwide Application Security Project (OWASP) recommends the following KDFs for password hashing, listed in order of priority:
Argon2id
scrypt if Argon2id is unavailable
bcrypt for legacy systems
PBKDF2 if FIPS-140 compliance is required
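Of the KDFs listed, scrypt is available directly in Python's standard library (when built against OpenSSL). A hedged hash-then-verify sketch, with cost parameters chosen for illustration rather than taken verbatim from OWASP's guidance:

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost parameters; production values should follow current guidance.
N, R, P = 2**14, 8, 1

def hash_password(password: str) -> dict:
    """Hash a password with a fresh random salt; store both salt and digest."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P, dklen=32)
    return {"salt": salt, "digest": digest}

def verify_password(password: str, record: dict) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=record["salt"],
                               n=N, r=R, p=P, dklen=32)
    return hmac.compare_digest(candidate, record["digest"])
```

`hmac.compare_digest` avoids leaking, through timing, how many leading bytes of a guess were correct.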
== References ==
== Further reading ==
Percival, Colin (May 2009). "Stronger Key Derivation via Sequential Memory-Hard Functions" (PDF). BSDCan'09 Presentation. Retrieved 19 May 2009.
Key Derivation Functions | Wikipedia/Key_derivation_function |
The following tables compare general and technical information for a number of cryptographic hash functions. See the individual functions' articles for further information. This article is not all-inclusive or necessarily up-to-date. An overview of hash function security/cryptanalysis can be found at hash function security summary.
== General information ==
Basic general information about the cryptographic hash functions: year, designer, references, etc.
== Parameters ==
=== Notes ===
== Compression function ==
The following tables compare technical information for compression functions of cryptographic hash functions. The information comes from the specifications; please refer to them for more details.
=== Notes ===
== See also ==
List of hash functions
Hash function security summary
Word (computer architecture)
== References ==
== External links ==
ECRYPT Benchmarking of Cryptographic Hashes – measurements of hash function speed on various platforms
The ECRYPT Hash Function Website – A wiki for cryptographic hash functions
SHA-3 Project – Information about SHA-3 competition | Wikipedia/Comparison_of_cryptographic_hash_functions |
Hidden Field Equations (HFE), also known as the HFE trapdoor function, is a public key cryptosystem introduced at Eurocrypt in 1996 and proposed by Jacques Patarin, following the idea of the Matsumoto and Imai system. It is based on polynomials over finite fields F_q of different size to disguise the relationship between the private key and the public key. HFE is in fact a family which consists of basic HFE and combinatorial versions of HFE. The HFE family of cryptosystems is based on the hardness of the problem of finding solutions to a system of multivariate quadratic equations (the so-called MQ problem), since it uses private affine transformations to hide the extension field and the private polynomials. Hidden Field Equations have also been used to construct digital signature schemes, e.g. Quartz and Sflash.
== Mathematical background ==
One of the central notions needed to understand how Hidden Field Equations work is that, for two extension fields F_{q^n} and F_{q^m} over the same base field F_q, one can interpret a system of m multivariate polynomials in n variables over F_q as a function F_{q^n} → F_{q^m} by using a suitable basis of F_{q^n} over F_q. In almost all applications the polynomials are quadratic, i.e. they have degree 2. We start with the simplest kind of polynomials, namely monomials, and show how they lead to quadratic systems of equations.

Consider a finite field F_q, where q is a power of 2, and an extension field K. Let 0 < h < q^n be such that h = q^θ + 1 for some θ and gcd(h, q^n − 1) = 1. The condition gcd(h, q^n − 1) = 1 is equivalent to requiring that the map u → u^h on K is one-to-one; its inverse is the map u → u^{h′}, where h′ is the multiplicative inverse of h mod q^n − 1.

Take a random element u ∈ F_{q^n} and define w ∈ F_{q^n} by

w = u^h = u^{q^θ} · u        (1)

Let β_1, ..., β_n be a basis of K as an F_q vector space. We represent u and w with respect to this basis as u = (u_1, ..., u_n) and w = (w_1, ..., w_n). Let A^{(k)} = (a_{ij}^{(k)}) be the matrix of the linear transformation u → u^{q^k} with respect to the basis β_1, ..., β_n, i.e. such that

β_i^{q^k} = Σ_{j=1}^{n} a_{ij}^{(k)} β_j,   a_{ij}^{(k)} ∈ F_q

for 1 ≤ i, k ≤ n. Additionally, write all products of basis elements in terms of the basis, i.e.

β_i β_j = Σ_{l=1}^{n} m_{ijl} β_l,   m_{ijl} ∈ F_q

for each 1 ≤ i, j ≤ n. The system of n equations which is explicit in the w_i and quadratic in the u_j can be obtained by expanding (1) and equating to zero the coefficients of the β_i.

Choose two secret affine transformations S and T, i.e. two invertible n × n matrices M_S = (S_{ij}) and M_T = (T_{ij}) with entries in F_q, and two vectors v_S and v_T of length n over F_q, and define x and y via:

u = Sx = M_S x + v_S,   w = Ty = M_T y + v_T        (2)

By using the affine relations in (2) to replace the u_j, w_i with the x_k, y_l, the system of n equations becomes linear in the y_l and of degree 2 in the x_k. Applying linear algebra gives n explicit equations, one for each y_l, as polynomials of degree 2 in the x_k.
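The invertibility condition gcd(h, q^n − 1) = 1 used above holds in any finite cyclic group. As a small numerical stand-in (integers modulo a prime, rather than the binary extension fields HFE actually uses), one can check that the power map u → u^h is inverted by u → u^{h′}:

```python
from math import gcd

p = 257                          # the nonzero residues mod p form a cyclic group of order p - 1
group_order = p - 1
h = 3                            # exponent with gcd(h, p - 1) = 1, so u -> u^h is a bijection
assert gcd(h, group_order) == 1

h_inv = pow(h, -1, group_order)  # multiplicative inverse h' of h modulo the group order

# The map u -> u^h on the nonzero elements is inverted by u -> u^{h'}.
for u in range(1, p):
    w = pow(u, h, p)
    assert pow(w, h_inv, p) == u
```

If gcd(h, group order) were greater than 1, the map u → u^h would collide and no such inverse exponent would exist.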
== Multivariate cryptosystem ==
The basic idea of the HFE family as a multivariate cryptosystem is to build the secret key starting from a polynomial P in one unknown x over some finite field F_{q^n} (normally the value q = 2 is used). This polynomial can be easily inverted over F_{q^n}, i.e. it is feasible to find any solutions to the equation P(x) = y when such solutions exist. The secret transformation (decryption and/or signature generation) is based on this inversion. As explained above, P can be identified with a system of n equations (p_1, ..., p_n) using a fixed basis. To build a cryptosystem, the polynomials (p_1, ..., p_n) must be transformed so that the public information hides the original structure and prevents inversion. This is done by viewing the finite field F_{q^n} as a vector space over F_q and by choosing two affine transformations S and T. The triplet (S, P, T) constitutes the private key. The private polynomial P is defined over F_{q^n}. The public key is (p_1, ..., p_n). Below is the diagram for the MQ trapdoor (S, P, T) in HFE:

input x → x = (x_1, ..., x_n) →[secret: S] x′ →[secret: P] y′ →[secret: T] output y
== HFE polynomial ==
The private polynomial P of degree d over F_{q^n} is an element of F_{q^n}[x]. If the terms of P are at most quadratic over F_q, the public polynomials stay small. The basic version of HFE is the case in which P consists of monomials of the form x^{q^{s_i} + q^{t_i}}, i.e. monomials with two powers of q in the exponent, so P is chosen as

P(x) = Σ_i c_i x^{q^{s_i} + q^{t_i}}

The degree d of the polynomial is also known as the security parameter: the bigger its value, the better for security, since the resulting set of quadratic equations resembles a randomly chosen set of quadratic equations. On the other hand, a large d slows down deciphering. Since P is a polynomial of degree at most d, its inverse, denoted P^{−1}, can be computed in d^2 (ln d)^{O(1)} n^2 operations in F_q.
== Encryption and decryption ==
The public key is given by the n multivariate polynomials (p_1, ..., p_n) over F_q. It is thus necessary to transfer the message M from F_{q^n} to F_q^n in order to encrypt it, i.e. we assume that M is a vector (x_1, ..., x_n) ∈ F_q^n. To encrypt M, we evaluate each p_i at (x_1, ..., x_n). The ciphertext is

(p_1(x_1, ..., x_n), p_2(x_1, ..., x_n), ..., p_n(x_1, ..., x_n)) ∈ F_q^n.
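The evaluation step of encryption can be illustrated with a toy sketch. The quadratic polynomials below are chosen at random, so they have no HFE trapdoor structure and cannot be decrypted; the sketch only demonstrates how a ciphertext is computed from a message vector over F_2:

```python
import random

random.seed(1)
n = 8  # toy size; real HFE parameters are far larger

# A stand-in "public key": n random quadratic polynomials over GF(2) in n
# variables, each given by coefficients c[i][j] for the monomials x_i * x_j
# (over bit-valued inputs, x_i^2 = x_i, so pure quadratic forms suffice here).
public_key = [[[random.randrange(2) for _ in range(n)] for _ in range(n)]
              for _ in range(n)]

def encrypt(message_bits, pk):
    """Ciphertext = (p_1(x), ..., p_n(x)): evaluate each quadratic form mod 2."""
    return [sum(poly[i][j] * message_bits[i] * message_bits[j]
                for i in range(n) for j in range(n)) % 2
            for poly in pk]

m = [1, 0, 1, 1, 0, 0, 1, 0]
c = encrypt(m, public_key)
```

Evaluating the public polynomials is cheap; the asymmetry of the scheme lies entirely in the fact that inverting them without (S, P, T) is an instance of the MQ problem.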
To understand decryption, let us express encryption in terms of S, T and P (note that these are not available to the sender). Evaluating the p_i at the message amounts to first applying S, resulting in x′. At this point x′ is transferred from F_q^n to F_{q^n}, so we can apply the private polynomial P, which is over F_{q^n}; this result is denoted y′ ∈ F_{q^n}. Once again, y′ is transferred to the vector (y′_1, ..., y′_n), the transformation T is applied, and the final output y ∈ F_q^n is produced from (y_1, ..., y_n).

To decrypt y, the above steps are done in reverse order. This is possible if the private key (S, P, T) is known. The crucial step in deciphering is not the inversion of S and T but rather computing the solution of P(x′) = y′. Since P is not necessarily a bijection, one may find more than one solution to this inversion (there exist at most d different solutions X′ = (x′_1, ..., x′_d) ∈ F_{q^n}, since P is a polynomial of degree d). Redundancy, denoted r, is added to the message M at the first step in order to select the right M from the set of solutions X′. The diagram below shows basic HFE for encryption:

M →[+r] x →[secret: S] x′ →[secret: P] y′ →[secret: T] y
== HFE variations ==
Hidden Field Equations has four basic variations, namely +, -, v and f, and it is possible to combine them in various ways. The basic principle is the following:
01. The + sign consists of linearly mixing the public equations with some random equations.
02. The - sign is due to Adi Shamir and intends to remove the redundancy 'r' of the public equations.
03. The f sign consists of fixing some f input variables of the public key; this variant is sometimes called p, for projection.
04. The v sign is defined as a construction, sometimes quite complex, such that the inverse of the function can be found only if some v of the variables, called vinegar variables, are fixed. This idea is due to Jacques Patarin.
05. The IP sign means internal perturbation; it consists in adding a random quadratic polynomial to the secret equations. The random quadratic polynomial is, however, composed with a small-rank linear map, which makes inversion possible.
06. The LL' variant consists in adding a random linear combination of a small number of products of linear maps to every public equation. It is meant to be used in encryption mode.
The operations above preserve to some extent the trapdoor solvability of the function.
HFE- and HFEv were useful in signature schemes, as they avoid slowing down signature generation and also enhance the overall security of HFE, whereas for encryption both HFE- and HFEv lead to a rather slow decryption process, so neither too many equations can be removed (HFE-) nor too many variables added (HFEv). Both HFE- and HFEv were used to obtain Quartz. However, a new MinRank attack by Ding, Petzoldt and Tao has made these schemes obsolete.
For signatures it is now recommended to use HFE IP- or HFE IPv. Indeed, the IP variant is very effective against certain types of MinRank attacks (MinRank-S), while the v and - variants are effective against all other attacks (mainly MinRank-T or Gröbner basis attacks).
For encryption, the only current recommended scheme is HFE LL'.
== HFE attacks ==
There are two famous attacks on HFE:
Recover the private key (Shamir–Kipnis): The key point of this attack is to recover the private key as sparse univariate polynomials over the extension field F_{q^n}. The attack only works for basic HFE and fails for all its variations.
Fast Gröbner bases (Faugère): The idea of Faugère's attack is to use a fast algorithm to compute a Gröbner basis of the system of polynomial equations. Faugère broke the HFE Challenge 1 in 96 hours in 2002, and in 2003 Faugère and Joux worked together on the security of HFE.
== References ==
Nicolas T. Courtois, Magnus Daum and Patrick Felke, On the Security of HFE, HFEv- and Quartz
Andrey Sidorenko, Hidden Field Equations, EIDMA Seminar 2004 Technische Universiteit Eindhoven
Yvo G. Desmedt, Public Key Cryptography-PKC 2003, ISBN 3-540-00324-X | Wikipedia/Hidden_Field_Equations
In cryptography, a T-function is a bijective mapping that updates every bit of the state in a way that can be described as x′_i = x_i + f(x_0, ..., x_{i−1}), or, in simple words, an update function in which each bit of the state is updated by a linear combination of the same bit and a function of a subset of its less significant bits. If every single less significant bit is included in the update of every bit in the state, such a T-function is called triangular. Thanks to their bijectivity (no collisions, therefore no entropy loss) regardless of the Boolean functions used and regardless of the selection of inputs (as long as they all come from one side of the output bit), T-functions are now widely used in cryptography to construct block ciphers, stream ciphers, PRNGs and hash functions. T-functions were first proposed in 2002 by A. Klimov and A. Shamir in their paper "A New Class of Invertible Mappings". Ciphers such as TSC-1, TSC-3, TSC-4, ABC, Mir-1 and VEST are built with different types of T-functions.
Because arithmetic operations such as addition, subtraction and multiplication are also T-functions (triangular T-functions), software-efficient word-based T-functions can be constructed by combining bitwise logic with arithmetic operations. Another important property of T-functions based on arithmetic operations is predictability of their period, which is highly attractive to cryptographers. Although triangular T-functions are naturally vulnerable to guess-and-determine attacks, well chosen bitwise transpositions between rounds can neutralize that imbalance. In software-efficient ciphers, it can be done by interleaving arithmetic operations with byte-swapping operations and to a small degree with bitwise rotation operations. However, triangular T-functions remain highly inefficient in hardware.
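A concrete example of such a word-based T-function is the Klimov–Shamir mapping x → x + (x² ∨ 5) mod 2^n, which combines one multiplication, one bitwise OR and one addition, and which they showed to be invertible with a single cycle over all 2^n states. A minimal sketch:

```python
def t_step(x: int, n: int = 64) -> int:
    """One application of the Klimov-Shamir mapping x -> x + (x^2 OR 5) mod 2^n.

    Bit i of the output depends only on bits 0..i of the input (the defining
    T-function property), yet the mapping is a permutation of the 2^n states
    that traverses them all in a single cycle.
    """
    mask = (1 << n) - 1
    return (x + ((x * x) | 5)) & mask

# A few steps of the induced sequence, usable as a PRNG-style state update.
state = 1
stream = []
for _ in range(4):
    state = t_step(state, 8)
    stream.append(state)
```

Because low output bits never depend on high input bits, the raw low-order bits are the weakest and are typically discarded or post-processed in real designs.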
T-functions do not have any restrictions on the types and the widths of the update functions used for each bit. Subsequent transposition of the output bits and iteration of the T-function also do not affect bijectivity. This freedom allows the designer to choose the update functions or S-boxes that satisfy all other cryptographic criteria and even choose arbitrary or key-dependent update functions (see family keying).
Hardware-efficient lightweight T-functions with identical widths of all the update functions for each bit of the state can thus be easily constructed. The core accumulators of VEST ciphers are a good example of such reasonably light-weight T-functions that are balanced out after 2 rounds by the transposition layer making all the 2-round feedback functions of roughly the same width and losing the "T-function" bias of depending only on the less significant bits of the state.
== References ==
Klimov, Alexander; Shamir, Adi (2002). "A New Class of Invertible Mappings" (PDF). Cryptographic Hardware and Embedded Systems - CHES 2002. Lecture Notes in Computer Science. Vol. 2523. Springer-Verlag. pp. 470–483. doi:10.1007/3-540-36400-5_34. ISBN 978-3-540-00409-7. S2CID 29129205.
Klimov, Alexander; Shamir, Adi (2003). "Cryptographic Applications of T-Functions". Selected Areas in Cryptography. Lecture Notes in Computer Science. Vol. 3006. Springer-Verlag. pp. 248–261. doi:10.1007/978-3-540-24654-1_18. ISBN 978-3-540-21370-3. S2CID 30281166.
Klimov, Alexander; Shamir, Adi (2004). "New Cryptographic Primitives Based on Multiword T-Functions". Fast Software Encryption. Lecture Notes in Computer Science. Vol. 3017. Springer-Verlag. pp. 1–15. doi:10.1007/978-3-540-25937-4_1. ISBN 978-3-540-22171-5.
Daum, Magnus (2005). "Narrow T-Functions". Fast Software Encryption. Lecture Notes in Computer Science. Vol. 3557. Springer-Verlag. pp. 50–67. doi:10.1007/11502760_4. ISBN 978-3-540-26541-2.
Hong, Jin; Lee, Dong Hoon; Yeom, Yongjin; Han, Daewan (2005). "A New Class of Single Cycle T-Functions". Fast Software Encryption. Lecture Notes in Computer Science. Vol. 3557. Springer-Verlag. pp. 68–82. doi:10.1007/11502760_5. ISBN 978-3-540-26541-2.
Klimov, Alexander; Shamir, Adi (2005). "New Applications of T-Functions in Block Ciphers and Hash Functions". Fast Software Encryption. Lecture Notes in Computer Science. Vol. 3557. Springer-Verlag. pp. 18–31. doi:10.1007/11502760_2. ISBN 978-3-540-26541-2. | Wikipedia/T-function |
Industrial espionage, also known as economic espionage, corporate spying, or corporate espionage, is a form of espionage conducted for commercial purposes instead of purely national security.
While political espionage is conducted or orchestrated by governments and is international in scope, industrial or corporate espionage is more often national and occurs between companies or corporations.
== Forms of economic and industrial espionage ==
In short, the purpose of espionage is to gather knowledge about one or more organizations. Economic or industrial espionage takes place in two main forms. It may include the acquisition of intellectual property, such as information on industrial manufacture, ideas, techniques and processes, recipes and formulas. Or it may include the sequestration of proprietary or operational information, such as customer datasets, pricing, sales, marketing, research and development, policies, prospective bids, planning or marketing strategies, or the changing compositions and locations of production. It may describe activities such as theft of trade secrets, bribery, blackmail and technological surveillance. As well as orchestrating espionage against commercial organizations, governments can also be targets – for example, to determine the terms of a tender for a government contract.
== Target industries ==
Economic and industrial espionage is most commonly associated with technology-heavy industries, including computer software and hardware, biotechnology, aerospace, telecommunications, transportation and engine technology, automobiles, machine tools, energy, materials and coatings and so on. Silicon Valley is known to be one of the world's most targeted areas for espionage, though any industry with information of use to competitors may be a target.
== Information theft and sabotage ==
Information can make the difference between success and failure; if a trade secret is stolen, the competitive playing field is leveled or even tipped in favor of a competitor. Although a lot of information-gathering is accomplished legally through competitive intelligence, at times corporations feel the best way to get information is to take it. Economic or industrial espionage is a threat to any business whose livelihood depends on information.
In recent years, economic or industrial espionage has taken on an expanded definition. For instance, attempts to sabotage a corporation may be considered industrial espionage; in this sense, the term takes on the wider connotations of its parent word. That espionage and sabotage (corporate or otherwise) have become more clearly associated with each other is also demonstrated by a number of profiling studies, some government, some corporate. The United States government currently has a polygraph examination entitled the "Test of Espionage and Sabotage" (TES), contributing to the notion of the interrelationship between espionage and sabotage countermeasures. In practice, particularly by "trusted insiders", they are generally considered functionally identical for the purpose of informing countermeasures.
== Agents and the process of collection ==
Economic or industrial espionage commonly occurs in one of two ways. Firstly, a dissatisfied employee appropriates information to advance their own interests or to damage the company. Secondly, a competitor or foreign government seeks information to advance its own technological or financial interest. "Moles", or trusted insiders, are generally considered the best sources for economic or industrial espionage. Historically known as a "patsy", an insider can be induced, willingly or under duress, to provide information. A patsy may be initially asked to hand over inconsequential information and, once compromised by committing a crime, blackmailed into handing over more sensitive material. Individuals may leave one company to take up employment with another and take sensitive information with them. Such apparent behavior has been the focus of numerous industrial espionage cases that have resulted in legal battles. Some countries hire individuals to do spying rather than using their own intelligence agencies. Academics, business delegates, and students are often thought to be used by governments in gathering information. Some countries, such as Japan, have been reported to expect students to be debriefed on returning home. A spy may follow a guided tour of a factory and then get "lost". A spy could be an engineer, a maintenance man, a cleaner, an insurance salesman, or an inspector: anyone who has legitimate access to the premises.
A spy may break into the premises to steal data and may search through waste paper and refuse, known as "dumpster diving". Information may be compromised via unsolicited requests for information, marketing surveys, or use of technical support or research or software facilities. Outsourced industrial producers may ask for information outside the agreed-upon contract.
Computers have facilitated the process of collecting information because of the ease of access to large amounts of information through physical contact or the Internet.
== History ==
=== Origins ===
Economic and industrial espionage has a long history. Father Francois Xavier d'Entrecolles, who visited Jingdezhen, China in 1712 and later used this visit to reveal the manufacturing methods of Chinese porcelain to Europe, is sometimes considered to have conducted an early case of industrial espionage.
Historical accounts have been written of industrial espionage between Britain and France. Attributed to Britain's emergence as an "industrial creditor", the second decade of the 18th century saw the emergence of a large-scale state-sponsored effort to surreptitiously take British industrial technology to France. Witnesses confirmed both the inveigling of tradespersons abroad and the placing of apprentices in England. Protests by those such as ironworkers in Sheffield and steelworkers in Newcastle, about skilled industrial workers being enticed abroad, led to the first English legislation aimed at preventing this method of economic and industrial espionage. This did not prevent Samuel Slater from bringing British textile technology to the United States in 1789. In order to catch up with technological advances of European powers, the US government in the eighteenth and nineteenth centuries actively encouraged intellectual piracy.
American founding father and first U.S. Treasury Secretary Alexander Hamilton advocated rewarding those bringing "improvements and secrets of extraordinary value" into the United States. This was instrumental in making the United States a haven for industrial spies.
=== 20th century ===
East-West commercial development opportunities after World War I saw a rise in Soviet interest in American and European manufacturing know-how, exploited by Amtorg Corporation. Later, with Western restrictions on the export of items thought likely to increase military capabilities to the USSR, Soviet industrial espionage was a well known adjunct to other spying activities up until the 1980s. BYTE reported in April 1984, for example, that although the Soviets sought to develop their own microelectronics, their technology appeared to be several years behind the West's. Soviet CPUs required multiple chips and appeared to be close or exact copies of American products such as the Intel 3000 and DEC LSI-11/2.
==== "Operation Brunnhilde" ====
Some of these activities were directed via the East German Stasi (Ministry for State Security). One such operation, "Operation Brunnhilde," operated from the mid-1950s until early 1966 and made use of spies from many Communist Bloc countries. Through at least 20 forays, many western European industrial secrets were compromised. One member of the "Brunnhilde" ring was a Swiss chemical engineer, Dr. Jean Paul Soupert (also known as "Air Bubble"), living in Brussels. He was described by Peter Wright in Spycatcher as having been "doubled" by the Belgian Sûreté de l'État. He revealed information about industrial espionage conducted by the ring, including the fact that Russian agents had obtained details of Concorde's advanced electronics system. He testified against two Kodak employees, living and working in Britain, during a trial in which they were accused of passing information on industrial processes to him, though they were eventually acquitted.
According to a 2020 American Economic Review study, East German industrial espionage in West Germany significantly reduced the gap in total factor productivity between the two countries.
==== Soviet spetsinformatsiya system ====
A secret report from the Military-Industrial Commission of the USSR (VPK), from 1979–80, detailed how spetsinformatsiya (Russian: специнформация, "special records") could be utilised in twelve different military industrial areas. Writing in the Bulletin of the Atomic Scientists, Philip Hanson detailed a spetsinformatsiya system in which 12 industrial branch ministries formulated requests for information to aid technological development in their military programs. Acquisition plans were described as operating on 2-year and 5-year cycles with about 3000 tasks underway each year. Efforts were aimed at civilian and military industrial targets, such as in the petrochemical industries. Some information was gathered to compare Soviet technological advancement with that of their competitors. Much unclassified information was also gathered, blurring the boundary with "competitive intelligence".
The Soviet military was recognised as making much better use of acquired information than civilian industries, where their record in replicating and developing industrial technology was poor.
=== Legacy of Cold War espionage ===
Following the demise of the Soviet Union and the end of the Cold War, commentators, including the US Congressional Intelligence Committee, noted a redirection amongst the espionage community from military to industrial targets, with Western and former communist countries making use of "underemployed" spies and expanding programs directed at stealing information.
The legacy of Cold War spying included not just the redirection of personnel but the use of spying apparatus such as computer databases, scanners for eavesdropping, spy satellites, bugs and wires.
=== Industrial espionage as part of US foreign policy ===
Former CIA Director Stansfield Turner stated in 1991, "as we increase emphasis on securing economic intelligence, we will have to spy on the more developed countries-our allies and friends with whom we compete economically-but to whom we turn first for political and military assistance in a crisis. This means that rather than instinctively reaching for human, on-site spying, the United States will want to look to those impersonal technical systems, primarily satellite photography and intercepts".
Former CIA Director James Woolsey acknowledged in 2000 that the United States steals economic secrets from foreign firms and their governments "with espionage, with communications, with reconnaissance satellites". He listed three reasons: understanding whether sanctions are working in sanctioned countries, monitoring dual-use technology that could be used to produce or develop weapons, and detecting bribery.
In 2013, the United States was accused of spying on Brazilian oil company Petrobras. Brazil's President Dilma Rousseff stated that it was tantamount to industrial espionage and had no security justification.
In 2014 former US intelligence officer Edward Snowden stated that the National Security Agency (NSA) was engaged in industrial espionage and that they spied on German companies that compete with US firms. He also highlighted the fact the NSA uses mobile phone apps such as Angry Birds to gather personal data.
According to a 2014 Glenn Greenwald article, "potentially sabotaging another country's hi-tech industries and their top companies has long been a sanctioned American strategy." The article was based on a leaked report issued from former U.S. Director of National Intelligence James R. Clapper's office that evaluated how intelligence could be used to overcome a loss of the United States' technological and innovative edge. When contacted, the Director of National Intelligence office responded, "the United States—unlike our adversaries—does not steal proprietary corporate information", and insisted that "the Intelligence Community regularly engages in analytic exercises". The report, he said, "is not intended to be, and is not, a reflection of current policy or operations".
In September 2019, security firm Qi An Xin published a report linking the CIA to a series of attacks targeting Chinese aviation agencies between 2012 and 2017.
=== Israel's economic espionage in the United States ===
Israel has an active program to gather proprietary information within the United States. These collection activities are primarily directed at obtaining information on military systems and advanced computing applications that can be used in Israel's sizable armaments industry.
Israel was accused by the US government of selling US military technology and secrets to China in 1993.
In 2014 American counter-intelligence officials told members of the House Judiciary and Foreign Affairs committees that Israel's current espionage activities in America are "unrivaled".
== Use of computers and the Internet ==
=== Personal computers ===
Computers have become key in exercising industrial espionage due to the enormous amount of information they contain and the ease with which it can be copied and transmitted. The use of computers for espionage increased rapidly in the 1990s. Information has commonly been stolen by individuals posing as subsidiary workers, such as cleaners or repairmen, gaining access to unattended computers and copying information from them. Laptops were, and still are, a prime target, with those traveling abroad on business being warned not to leave them for any period of time. Perpetrators of espionage have been known to find many ways of conning unsuspecting individuals into parting, often only temporarily, from their possessions, enabling others to access and steal information. A "bag-op" refers to the use of hotel staff to access data, such as through laptops, in hotel rooms. Information may be stolen in transit, in taxis, at airport baggage counters, baggage carousels, on trains and so on.
=== The Internet ===
The rise of the Internet and computer networks has expanded the range and detail of information available and the ease of access for the purpose of industrial espionage. This type of operation is generally identified as state backed or sponsored, because the "access to personal, financial or analytic resources" identified exceeds that which could be accessed by cyber criminals or individual hackers. Sensitive military or defense engineering or other industrial information may not have immediate monetary value to criminals, compared with, say, bank details. Analysis of cyberattacks suggests deep knowledge of networks, with targeted attacks, obtained by numerous individuals operating in a sustained organized way.
=== Opportunities for sabotage ===
The rising use of the internet has also extended opportunities for industrial espionage with the aim of sabotage. In the early 2000s, energy companies were increasingly coming under attack from hackers. Energy power systems, doing jobs like monitoring power grids or water flow, were once isolated from other computer networks but were now being connected to the internet, leaving them more vulnerable as they historically had few built-in security features. The use of these methods of industrial espionage has increasingly become a concern for governments, due to potential attacks by hostile foreign governments or terrorist groups.
=== Malware ===
One means of conducting industrial espionage is exploiting vulnerabilities in computer software. Malware and spyware are "tool[s] for industrial espionage", used in "transmitting digital copies of trade secrets, customer plans, future plans and contacts". Newer forms of malware include devices which surreptitiously switch on mobile phones' cameras and recording devices. In attempts to tackle such attacks on their intellectual property, companies are increasingly keeping important information "off network", leaving an "air gap", with some companies building Faraday cages to shield from electromagnetic or cellphone transmissions.
=== Distributed denial of service (DDoS) attack ===
The distributed denial of service (DDoS) attack uses compromised computer systems to orchestrate a flood of requests on the target system, causing it to shut down and deny service to other users. It could potentially be used for economic or industrial espionage with the purpose of sabotage. This method was allegedly utilized by Russian secret services, over a period of two weeks in a cyberattack on Estonia in May 2007, in response to the removal of a Soviet era war memorial.
== Notable cases ==
=== British East India Company ===
In 1848, the British East India Company broke Qing China's global near-monopoly on tea production by smuggling Chinese tea out of the nation and copying Chinese tea-making processes. The British Empire had previously run a considerable trade deficit with China by importing the nation's tea and other goods. The British attempted to rectify the deficit by trading opium to the Chinese, but encountered difficulties after the Daoguang Emperor banned the opium trade and the First Opium War broke out. To avoid further issues in trading tea with China, the East India Company hired Scottish botanist Robert Fortune to travel to China under the guise of a Chinese nobleman and obtain Chinese trade secrets and tea plants for replanting. Infiltrating Chinese tea-making facilities, Fortune recorded the Chinese process for creating tea and smuggled tea leaves and seeds back to the East India Company. The East India Company later introduced these methods to company-ruled India, using India to compete and surpass China in tea production.
=== France and the United States ===
Between 1987 and 1989, IBM and Texas Instruments were thought to have been targeted by the French DGSE with the intention of helping France's Groupe Bull. In 1993, U.S. aerospace companies were also thought to have been targeted by French interests. During the early 1990s, France was described as one of the most aggressive pursuers of espionage to garner foreign industrial and technological secrets. France accused the U.S. of attempting to sabotage its high tech industrial base. The government of France allegedly continues to conduct industrial espionage against American aerodynamics and satellite companies.
=== Volkswagen ===
In 1993, car manufacturer Opel, the German division of General Motors, accused Volkswagen of industrial espionage after Opel's chief of production, Jose Ignacio Lopez, and seven other executives moved to Volkswagen. Volkswagen subsequently threatened to sue for defamation, resulting in a four-year legal battle. The case, which was finally settled in 1997, resulted in one of the largest settlements in the history of industrial espionage, with Volkswagen agreeing to pay General Motors $100 million and to buy at least $1 billion of car parts from the company over 7 years, although it did not explicitly apologize for Lopez's behavior.
=== Hilton and Starwood ===
In April 2009, Starwood accused its rival Hilton Worldwide of a "massive" case of industrial espionage. After being acquired by The Blackstone Group, Hilton employed 10 managers and executives from Starwood. Starwood accused Hilton of stealing corporate information relating to its luxury brand concepts, used in setting up its Denizen hotels. Specifically, former head of its luxury brands group, Ron Klein, was accused of downloading "truckloads of documents" from a laptop to his personal email account.
=== Google and Operation Aurora ===
On 13 January 2010, Google announced that operators, from within China, had hacked into their Google China operation, stealing intellectual property and, in particular, accessing the email accounts of human rights activists. The attack was thought to have been part of a more widespread cyber attack on companies within China which has become known as Operation Aurora. Intruders were thought to have launched a zero-day attack, exploiting a weakness in the Microsoft Internet Explorer browser, the malware used being a modification of the trojan "Hydraq". Concerned about the possibility of hackers taking advantage of this previously unknown weakness in Internet Explorer, the governments of Germany and, subsequently France, issued warnings not to use the browser.
There was speculation that "insiders" had been involved in the attack, with some Google China employees being denied access to the company's internal networks after the company's announcement. In February 2010, computer experts from the U.S. National Security Agency claimed that the attacks on Google probably originated from two Chinese universities associated with expertise in computer science, Shanghai Jiao Tong University and the Shandong Lanxiang Vocational School, the latter having close links to the Chinese military.
Google claimed at least 20 other companies had also been targeted in the cyber attack, said by the London Times to have been part of an "ambitious and sophisticated attempt to steal secrets from unwitting corporate victims" including "defence contractors, finance and technology companies". Rather than being the work of individuals or organised criminals, the level of sophistication of the attack was thought to have been "more typical of a nation state". Some commentators speculated as to whether the attack was part of what is thought to be a concerted Chinese industrial espionage operation aimed at getting "high-tech information to jump-start China's economy". Critics pointed to what was alleged to be a lax attitude to the intellectual property of foreign businesses in China, letting them operate but then seeking to copy or reverse engineer their technology for the benefit of Chinese "national champions". In Google's case, they may have (also) been concerned about the possible misappropriation of source code or other technology for the benefit of Chinese rival Baidu. In March 2010, Google decided to cease offering censored results in China, leading to the closing of its Chinese operation.
=== USA v. Lan Lee, et al. ===
The United States charged two former NetLogic Inc. engineers, Lan Lee and Yuefei Ge, of committing economic espionage against TSMC and NetLogic, Inc. A jury acquitted the defendants of the charges with regard to TSMC and deadlocked on the charges with regard to NetLogic. In May 2010, a federal judge dismissed all the espionage charges against the two defendants. The judge ruled that the U.S. government presented no evidence of espionage.
=== Dongxiao Yue and Chordiant Software, Inc. ===
In May 2010, a federal jury convicted Chordiant Software, Inc., a U.S. corporation, of stealing Dongxiao Yue's JRPC technologies and using them in a product called Chordiant Marketing Director. Yue had previously filed lawsuits against Symantec Corporation for a similar theft.
== Concerns of national governments ==
=== Brazil ===
Revelations from the Snowden documents have provided information to the effect that the United States, notably vis-à-vis the NSA, has been conducting aggressive economic espionage against Brazil. Canadian intelligence has apparently supported U.S. economic espionage efforts.
=== China ===
The Chinese cybersecurity company Qihoo 360 accused the Central Intelligence Agency of the United States of an 11-year-long hacking campaign that targeted several industries including aviation organizations, scientific research institutions, petroleum firms, internet companies, and government agencies.
=== United States ===
A 2009 report to the US government, by aerospace and defense company Northrop Grumman, describes Chinese economic espionage as comprising "the single greatest threat to U.S. technology". Blogging on the 2009 cyber attack on Google, Joe Stewart of SecureWorks referred to a "persistent campaign of 'espionage-by-malware' emanating from the People's Republic of China (PRC)" with both corporate and state secrets being "Shanghaied". The Northrop Grumman report states that the collection of US defense engineering data stolen through cyberattacks is regarded as having "saved the recipient of the information years of R&D and significant amounts of funding". Concerns about the extent of cyberattacks have led to the situation being described as the dawn of a "new cold cyberwar".
According to Edward Snowden, the National Security Agency spies on foreign companies. In June 2015 Wikileaks published documents about the National Security Agency spying on French companies.
=== United Kingdom ===
In December 2007, it was revealed that Jonathan Evans, head of the United Kingdom's MI5, had sent confidential letters to 300 chief executives and security chiefs at the country's banks, accountants and legal firms warning of attacks from Chinese 'state organisations'. A summary was also posted on the secure website of the Centre for the Protection of the National Infrastructure, accessed by some of the nation's 'critical infrastructure' companies, including 'telecoms firms, banks and water and electricity companies'. One security expert warned about the use of 'custom trojans,' software specifically designed to hack into a particular firm and feed back data. Whilst China was identified as the country most active in the use of internet spying, up to 120 other countries were said to be using similar techniques. The Chinese government responded to UK accusations of economic espionage by saying that the report of such activities was 'slanderous' and that the government opposed hacking, which is prohibited by law.
=== Germany ===
German counter-intelligence experts have maintained that the German economy loses around €53 billion, or the equivalent of 30,000 jobs, to economic espionage yearly.
In Operation Eikonal, German BND agents received "selector lists" from the NSA – search terms for their dragnet surveillance. The lists contained IP addresses, mobile phone numbers and email accounts, with the BND surveillance system holding hundreds of thousands, and possibly more than a million, such targets. The lists have been the subject of controversy, as in 2008 it was revealed that they contained some terms targeting the European Aeronautic Defence and Space Company (EADS), the Eurocopter project, as well as French administration, which were first noticed by BND employees in 2005. After the revelations made by whistleblower Edward Snowden, the BND decided to investigate the issue, concluding in October 2013 that at least 2,000 of these selectors were aimed at Western European or even German interests, in violation of the Memorandum of Agreement that the US and Germany signed in 2002 in the wake of the 9/11 terror attacks. After reports emerged in 2014 that EADS and Eurocopter had been surveillance targets, the Left Party and the Greens filed an official request to obtain evidence of the violations.
The BND's project group charged with supporting the NSA investigative committee in the German parliament, set up in spring 2014, reviewed the selectors and discovered 40,000 suspicious search parameters, including espionage targets in Western European governments and numerous companies. The group also confirmed suspicions that the NSA had systematically violated German interests and concluded that the Americans could have perpetrated economic espionage directly under the Germans' noses. The investigative parliamentary committee was not granted access to the NSA's selector list, as an appeal led by opposition politicians failed at Germany's top court. Instead, the ruling coalition appointed an administrative judge, Kurt Graulich, as a "person of trust" who was granted access to the list and briefed the investigative commission on its contents after analyzing the 40,000 parameters. In his almost 300-page report, Graulich concluded that European government agencies were targeted massively and that the Americans had hence broken contractual agreements. He also found that German targets which received special protection from surveillance by domestic intelligence agencies under Germany's Basic Law (Grundgesetz), including numerous enterprises based in Germany, were featured in the NSA's wishlist in surprisingly large numbers.
== Competitive intelligence and economic or industrial espionage ==
"Competitive intelligence" involves the legal and ethical activity of systematically gathering, analyzing and managing information on industrial competitors. It may include activities such as examining newspaper articles, corporate publications, websites, patent filings, specialised databases, information at trade shows and the like to determine information on a corporation. The compilation of these crucial elements is sometimes termed CIS or CRS, a Competitive Intelligence Solution or Competitive Response Solution, with its roots in market research. Douglas Bernhardt has characterised "competitive intelligence" as involving "the application of principles and practices from military and national intelligence to the domain of global business"; it is the commercial equivalent of open-source intelligence.
The difference between competitive intelligence and economic or industrial espionage is not clear; one needs to understand the legal basics to recognize how to draw the line between the two.
== See also ==
American Economic Espionage Act of 1996
Business intelligence
Corporate warfare
Cyber spying
FBI
Genetically modified maize § Corporate espionage
History of tea in India
Labor spying in the United States
Reverse engineering
== Notes ==
== References ==
== Bibliography ==
=== Books ===
=== Newspapers and journals ===
=== Web ===
== Further reading ==
Short integer solution (SIS) and ring-SIS problems are two average-case problems that are used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 from a seminal work by Miklós Ajtai who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if the shortest vector problem $\mathrm{SVP}_{\gamma}$ (where $\gamma = n^{c}$ for some constant $c > 0$) is hard in a worst-case scenario.
Average-case problems are problems that are hard to solve for randomly selected instances. For cryptographic applications, worst-case hardness is not sufficient: we need a guarantee that cryptographic constructions are hard to break on average-case instances.
== Lattices ==
A full-rank lattice $\mathfrak{L} \subset \mathbb{R}^{n}$ is the set of all integer linear combinations of $n$ linearly independent vectors $\{b_{1}, \ldots, b_{n}\}$, called a basis:

$\mathfrak{L}(b_{1}, \ldots, b_{n}) = \left\{ \sum_{i=1}^{n} z_{i} b_{i} : z_{i} \in \mathbb{Z} \right\} = \{ B\boldsymbol{z} : \boldsymbol{z} \in \mathbb{Z}^{n} \}$

where $B \in \mathbb{R}^{n \times n}$ is the matrix having the basis vectors in its columns.
Remark: Given $B_{1}, B_{2}$ two bases for a lattice $\mathfrak{L}$, there exists a unimodular matrix $U_{1}$ such that $B_{1} = B_{2} U_{1}^{-1}$ and $B_{2} = B_{1} U_{1}$.
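As a small illustrative sketch (not from the article; the matrices and helper names are invented for this example), the following Python code checks numerically that two integer bases related by a unimodular matrix generate the same lattice points:

```python
import itertools

def lattice_point(B, z):
    # Compute B @ z for a 2x2 integer basis matrix B and integer vector z.
    return tuple(B[i][0] * z[0] + B[i][1] * z[1] for i in range(2))

def in_lattice(B, p):
    # Solve B z = p exactly via the 2x2 adjugate: z = adj(B) p / det(B).
    # p lies in the lattice of B iff that solution is integral.
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    z0 = B[1][1] * p[0] - B[0][1] * p[1]
    z1 = -B[1][0] * p[0] + B[0][0] * p[1]
    return z0 % det == 0 and z1 % det == 0

B1 = [[2, 1], [0, 3]]
U = [[1, 1], [0, 1]]   # unimodular: det(U) = 1
B2 = [[2, 3], [0, 3]]  # B2 = B1 @ U

# Every point generated from either basis belongs to the other's lattice.
for z in itertools.product(range(-5, 6), repeat=2):
    assert in_lattice(B2, lattice_point(B1, z))
    assert in_lattice(B1, lattice_point(B2, z))
```

The check only samples a finite patch of coefficients, which suffices here because lattice membership is verified exactly with integer arithmetic.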
== Ideal lattice ==
Definition: The rotational shift operator on $\mathbb{R}^{n}$ ($n \geq 2$) is denoted by $\operatorname{rot}$, and is defined as:

$\forall \boldsymbol{x} = (x_{1}, \ldots, x_{n-1}, x_{n}) \in \mathbb{R}^{n} : \operatorname{rot}(x_{1}, \ldots, x_{n-1}, x_{n}) = (x_{n}, x_{1}, \ldots, x_{n-1})$
=== Cyclic lattices ===
Micciancio introduced cyclic lattices in his work generalizing the compact knapsack problem to arbitrary rings. A cyclic lattice is a lattice that is closed under the rotational shift operator. Formally, cyclic lattices are defined as follows:
Definition: A lattice $\mathfrak{L} \subseteq \mathbb{Z}^{n}$ is cyclic if $\forall \boldsymbol{x} \in \mathfrak{L} : \operatorname{rot}(\boldsymbol{x}) \in \mathfrak{L}$.
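As a quick sketch (a hypothetical example, not from the source), the rotational shift and a sample check of cyclicity can be written in Python. The sublattice of $\mathbb{Z}^3$ of vectors with even coordinate sum is cyclic, because rotation merely permutes the coordinates and therefore preserves the sum:

```python
import itertools

def rot(x):
    # rot(x1, ..., xn) = (xn, x1, ..., x_{n-1})
    return (x[-1],) + tuple(x[:-1])

assert rot((1, 2, 3)) == (3, 1, 2)

# Sample check: rotation preserves the parity of the coordinate sum,
# so the even-sum sublattice of Z^3 is closed under rot.
for v in itertools.product(range(-3, 4), repeat=3):
    if sum(v) % 2 == 0:
        assert sum(rot(v)) % 2 == 0
```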
Examples:
$\mathbb{Z}^{n}$ itself is a cyclic lattice.
Lattices corresponding to any ideal in the quotient polynomial ring $R = \mathbb{Z}[x]/(x^{n} - 1)$ are cyclic: consider the quotient polynomial ring $R = \mathbb{Z}[x]/(x^{n} - 1)$, and let $p(x)$ be some polynomial in $R$, i.e. $p(x) = \sum_{i=0}^{n-1} a_{i} x^{i}$ where $a_{i} \in \mathbb{Z}$ for $i = 0, \ldots, n-1$.
Define the embedding coefficient $\mathbb{Z}$-module isomorphism $\rho$ as:

$\rho : R \rightarrow \mathbb{Z}^{n}, \quad p(x) = \sum_{i=0}^{n-1} a_{i} x^{i} \mapsto (a_{0}, \ldots, a_{n-1})$
Let $I \subset R$ be an ideal. The lattice corresponding to the ideal $I \subset R$, denoted by $\mathfrak{L}_{I}$, is a sublattice of $\mathbb{Z}^{n}$, and is defined as

$\mathfrak{L}_{I} := \rho(I) = \left\{ (a_{0}, \ldots, a_{n-1}) \mid \sum_{i=0}^{n-1} a_{i} x^{i} \in I \right\} \subset \mathbb{Z}^{n}.$
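The key fact behind this construction is that multiplication by $x$ in $\mathbb{Z}[x]/(x^{n} - 1)$ acts on coefficient vectors exactly as $\operatorname{rot}$, since $x \cdot x^{n-1} \equiv 1$. A minimal Python sketch (illustrative code, not from the source):

```python
def mul_by_x(coeffs):
    # Multiply p(x) = a_0 + a_1 x + ... + a_{n-1} x^{n-1} by x
    # in Z[x]/(x^n - 1): the top coefficient wraps around to degree 0
    # because x^n ≡ 1; i.e. rho(x·p(x)) = rot(rho(p(x))).
    return [coeffs[-1]] + coeffs[:-1]

# p(x) = 5 - 2x^2 + 7x^3 in Z[x]/(x^4 - 1); rho(p) = (5, 0, -2, 7)
p = [5, 0, -2, 7]
# x·p(x) = 7 + 5x + 0x^2 - 2x^3: the rotational shift of rho(p)
assert mul_by_x(p) == [7, 5, 0, -2]
```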
Theorem: $\mathfrak{L} \subset \mathbb{Z}^{n}$ is cyclic if and only if $\mathfrak{L}$ corresponds to some ideal $I$ in the quotient polynomial ring $R = \mathbb{Z}[x]/(x^{n} - 1)$.
Proof: ($\Leftarrow$) We have:

$\mathfrak{L} = \mathfrak{L}_{I} := \rho(I) = \left\{ (a_{0}, \ldots, a_{n-1}) \mid \sum_{i=0}^{n-1} a_{i} x^{i} \in I \right\}$
Let $(a_{0}, \ldots, a_{n-1})$ be an arbitrary element in $\mathfrak{L}$. Then, define $p(x) = \sum_{i=0}^{n-1} a_{i} x^{i} \in I$. But since $I$ is an ideal, we have $x p(x) \in I$. Then, $\rho(x p(x)) \in \mathfrak{L}_{I}$. But $\rho(x p(x)) = \operatorname{rot}(a_{0}, \ldots, a_{n-1}) \in \mathfrak{L}_{I}$. Hence, $\mathfrak{L}$ is cyclic.
($\Rightarrow$) Let $\mathfrak{L} \subset \mathbb{Z}^{n}$ be a cyclic lattice. Hence $\forall (a_{0}, \ldots, a_{n-1}) \in \mathfrak{L} : \operatorname{rot}(a_{0}, \ldots, a_{n-1}) \in \mathfrak{L}$.
Define the set of polynomials $I := \left\{ \sum_{i=0}^{n-1} a_{i} x^{i} \mid (a_{0}, \ldots, a_{n-1}) \in \mathfrak{L} \right\}$:

Since $\mathfrak{L}$ is a lattice and hence an additive subgroup of $\mathbb{Z}^{n}$, $I \subset R$ is an additive subgroup of $R$.
Since $\mathfrak{L}$ is cyclic, $\forall p(x) \in I : x p(x) \in I$.
Hence, $I \subset R$ is an ideal, and consequently, $\mathfrak{L} = \mathfrak{L}_{I}$.
=== Ideal lattices ===
Let $f(x) \in \mathbb{Z}[x]$ be a monic polynomial of degree $n$. For cryptographic applications, $f(x)$ is usually selected to be irreducible. The ideal generated by $f(x)$ is:

$(f(x)) := f(x) \cdot \mathbb{Z}[x] = \{ f(x) g(x) : g(x) \in \mathbb{Z}[x] \}.$
The quotient polynomial ring $R = \mathbb{Z}[x]/(f(x))$ partitions $\mathbb{Z}[x]$ into equivalence classes of polynomials of degree at most $n - 1$:

$R = \mathbb{Z}[x]/(f(x)) = \left\{ \sum_{i=0}^{n-1} a_{i} x^{i} : a_{i} \in \mathbb{Z} \right\}$

where addition and multiplication are reduced modulo $f(x)$.
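Reduction modulo $f(x)$ can be made concrete. The following Python helper (an illustrative implementation, not from the source) multiplies two elements of $R$, using the relation $x^{n} = -(f_{0} + f_{1} x + \cdots + f_{n-1} x^{n-1})$ that holds for a monic $f$:

```python
def poly_mul_mod(a, b, f):
    # Multiply polynomials a, b (coefficient lists, lowest degree first)
    # and reduce modulo the monic polynomial f of degree n = len(f) - 1.
    n = len(f) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    if len(prod) < n:
        prod += [0] * (n - len(prod))
    # Reduce high powers: x^k (k >= n) via x^n = -(f_0 + ... + f_{n-1} x^{n-1}).
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        prod[k] = 0
        for i in range(n):
            prod[k - n + i] -= c * f[i]
    return prod[:n]

# In R = Z[x]/(x^4 - 1), f has coefficients [-1, 0, 0, 0, 1].
f = [-1, 0, 0, 0, 1]
# (x^3) * (x^2) = x^5 ≡ x, since x^4 ≡ 1.
assert poly_mul_mod([0, 0, 0, 1], [0, 0, 1], f) == [0, 1, 0, 0]
```

The same helper works for any monic modulus, e.g. the cyclotomic $x^{n} + 1$ commonly used in ring-based schemes.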
Consider the embedding coefficient $\mathbb{Z}$-module isomorphism $\rho$. Then, each ideal in $R$ defines a sublattice of $\mathbb{Z}^{n}$ called an ideal lattice.
Definition: $\mathfrak{L}_{I}$, the lattice corresponding to an ideal $I$, is called an ideal lattice. More precisely, consider a quotient polynomial ring $R = \mathbb{Z}[x]/(p(x))$, where $(p(x))$ is the ideal generated by a degree-$n$ polynomial $p(x) \in \mathbb{Z}[x]$. The ideal lattice $\mathfrak{L}_{I}$ is a sublattice of $\mathbb{Z}^{n}$, and is defined as:

$\mathfrak{L}_{I} := \rho(I) = \left\{ (a_{0}, \ldots, a_{n-1}) \mid \sum_{i=0}^{n-1} a_{i} x^{i} \in I \right\} \subset \mathbb{Z}^{n}.$
Remark:
It turns out that $\text{GapSVP}_{\gamma}$, even for small $\gamma = \operatorname{poly}(n)$, is typically easy on ideal lattices. The intuition is that the algebraic symmetries cause the minimum distance of an ideal to lie within a narrow, easily computable range.
By exploiting the algebraic symmetries in ideal lattices, one can convert a short nonzero vector into $n$ linearly independent ones of (nearly) the same length. Therefore, on ideal lattices, $\mathrm{SVP}_{\gamma}$ and $\mathrm{SIVP}_{\gamma}$ are equivalent with a small loss. Furthermore, even for quantum algorithms, $\mathrm{SVP}_{\gamma}$ and $\mathrm{SIVP}_{\gamma}$ are believed to be very hard in the worst-case scenario.
== Short integer solution problem ==
The Short Integer Solution (SIS) problem is an average-case problem that is used in lattice-based cryptography constructions. Lattice-based cryptography began in 1996 from a seminal work by Ajtai who presented a family of one-way functions based on the SIS problem. He showed that it is secure in an average case if $\mathrm{SVP}_{\gamma}$ (where $\gamma = n^{c}$ for some constant $c > 0$) is hard in a worst-case scenario. Along with applications in classical cryptography, the SIS problem and its variants are utilized in several post-quantum security schemes, including CRYSTALS-Dilithium and Falcon.
=== SISq,n,m,β ===
Let {\displaystyle A\in \mathbb {Z} _{q}^{n\times m}} be an {\displaystyle n\times m} matrix with entries in {\displaystyle \mathbb {Z} _{q}} that consists of {\displaystyle m} uniformly random vectors {\displaystyle {\boldsymbol {a_{i}}}\in \mathbb {Z} _{q}^{n}}: {\displaystyle A=[{\boldsymbol {a_{1}}}|\cdots |{\boldsymbol {a_{m}}}]}. Find a nonzero vector {\displaystyle {\boldsymbol {x}}\in \mathbb {Z} ^{m}} such that, for some norm {\displaystyle \|\cdot \|}:
{\displaystyle 0<\|{\boldsymbol {x}}\|\leq \beta },
{\displaystyle f_{A}({\boldsymbol {x}}):=A{\boldsymbol {x}}={\boldsymbol {0}}\in \mathbb {Z} _{q}^{n}}.
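As a concrete illustration, the toy sketch below (with made-up, insecure parameters; the helper names are ours, not standard) checks whether a candidate vector satisfies the two SIS conditions:

```python
def f_A(A, x, q):
    """Compute f_A(x) = A.x over Z_q for an n x m integer matrix A."""
    return [sum(a * b for a, b in zip(row, x)) % q for row in A]

def is_sis_solution(A, x, q, beta):
    """x solves SIS iff x != 0, ||x|| <= beta, and A.x = 0 in Z_q^n."""
    norm_sq = sum(v * v for v in x)           # squared Euclidean norm
    short_and_nonzero = 0 < norm_sq <= beta * beta
    return short_and_nonzero and all(v == 0 for v in f_A(A, x, q))

# Toy instance over Z_17 (real parameters would be far larger).
q, beta = 17, 3
A = [[1, 3, 5, 16],
     [2, 6, 10, 15]]
x = [1, -2, 1, 0]        # A.x = (0, 0) mod 17 and ||x|| = sqrt(6) <= 3
```

Note that the all-zero vector is rejected by the nonzero condition, and the trivial vector (q, 0, …, 0) is rejected by the length bound once β < q, matching the remarks below.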
A solution to SIS without the required constraint on the length of the solution ({\displaystyle \|{\boldsymbol {x}}\|\leq \beta }) is easy to compute using Gaussian elimination. We also require {\displaystyle \beta <q}, since otherwise {\displaystyle {\boldsymbol {x}}=(q,0,\ldots ,0)\in \mathbb {Z} ^{m}} is a trivial solution.
To guarantee that {\displaystyle f_{A}({\boldsymbol {x}})} has a non-trivial, short solution, we require:
{\displaystyle \beta \geq {\sqrt {n\log q}}}, and
{\displaystyle m\geq n\log q}
Theorem: For any {\displaystyle m=\operatorname {poly} (n)}, any {\displaystyle \beta >0}, and any sufficiently large {\displaystyle q\geq \beta n^{c}} (for any constant {\displaystyle c>0}), solving {\displaystyle \operatorname {SIS} _{n,m,q,\beta }} with non-negligible probability is at least as hard as solving {\displaystyle \operatorname {GapSVP} _{\gamma }} and {\displaystyle \operatorname {SIVP} _{\gamma }} for some {\displaystyle \gamma =\beta \cdot O({\sqrt {n}})} with high probability in the worst case.
=== R-SISq,n,m,β ===
The SIS problem solved over an ideal ring is also called the Ring-SIS or R-SIS problem. This problem considers a quotient polynomial ring {\displaystyle R_{q}=\mathbb {Z} _{q}[x]/(f(x))} with {\displaystyle f(x)=x^{n}-1} for some integer {\displaystyle n} and with some norm {\displaystyle \|\cdot \|}. Of particular interest are cases where there exists an integer {\displaystyle k} such that {\displaystyle n=2^{k}}, as this restricts the quotient to cyclotomic polynomials.
We then define the problem as follows:
Select {\displaystyle m} independent uniformly random elements {\displaystyle a_{i}\in R_{q}}. Define the vector {\displaystyle {\vec {\boldsymbol {a}}}:=(a_{1},\ldots ,a_{m})\in R_{q}^{m}}. Find a nonzero vector {\displaystyle {\vec {\boldsymbol {z}}}:=(z_{1},\ldots ,z_{m})\in R^{m}} such that:
{\displaystyle 0<\|{\vec {\boldsymbol {z}}}\|\leq \beta },
{\displaystyle f_{\vec {\boldsymbol {a}}}({\vec {\boldsymbol {z}}}):={\vec {\boldsymbol {a}}}^{T}.{\vec {\boldsymbol {z}}}=\sum _{i=1}^{m}a_{i}.z_{i}=0\in R_{q}}.
Recall that to guarantee existence of a solution to the SIS problem, we require {\displaystyle m\approx n\log q}. The Ring-SIS problem, however, provides more compactness and efficiency: to guarantee existence of a solution, we only require {\displaystyle m\approx \log q}.
Definition: The nega-circulant matrix of {\displaystyle b} is defined as:
{\displaystyle {\text{for}}\quad b=\sum _{i=0}^{n-1}b_{i}x^{i}\in R,\quad \operatorname {rot} (b):={\begin{bmatrix}b_{0}&-b_{n-1}&\ldots &-b_{1}\\[0.3em]b_{1}&b_{0}&\ldots &-b_{2}\\[0.3em]\vdots &\vdots &\ddots &\vdots \\[0.3em]b_{n-1}&b_{n-2}&\ldots &b_{0}\end{bmatrix}}}
When the quotient polynomial ring is {\displaystyle R=\mathbb {Z} [x]/(x^{n}+1)} for {\displaystyle n=2^{k}}, the ring multiplication {\displaystyle a_{i}.p_{i}} can be computed efficiently by first forming {\displaystyle \operatorname {rot} (a_{i})}, the nega-circulant matrix of {\displaystyle a_{i}}, and then multiplying {\displaystyle \operatorname {rot} (a_{i})} with {\displaystyle \rho (p_{i}(x))\in Z^{n}}, the embedding coefficient vector of {\displaystyle p_{i}} (or, alternatively, with {\displaystyle \sigma (p_{i}(x))\in Z^{n}}, the canonical coefficient vector).
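The following sketch (function names ours) builds the nega-circulant matrix and checks, on a small example, that multiplying rot(a) by the coefficient vector of p agrees with polynomial multiplication in Z[x]/(x^n + 1):

```python
def rot(b):
    """Nega-circulant matrix of b = b_0 + b_1 x + ... + b_{n-1} x^{n-1}.

    Column j holds the coefficients of x^j * b reduced mod x^n + 1;
    a coefficient that wraps past x^{n-1} comes back with its sign flipped.
    """
    n = len(b)
    M = [[0] * n for _ in range(n)]
    for j in range(n):
        for i, b_i in enumerate(b):
            if i + j < n:
                M[i + j][j] = b_i
            else:
                M[i + j - n][j] = -b_i
    return M

def poly_mul(a, p):
    """Multiply a and p in Z[x]/(x^n + 1) by direct convolution."""
    n = len(a)
    out = [0] * n
    for i, a_i in enumerate(a):
        for j, p_j in enumerate(p):
            if i + j < n:
                out[i + j] += a_i * p_j
            else:
                out[i + j - n] -= a_i * p_j   # x^n = -1 flips the sign
    return out

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]
```

For n = 2, for instance, rot([b0, b1]) is [[b0, -b1], [b1, b0]], matching the matrix displayed above.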
Moreover, the R-SIS problem is a special case of the SIS problem in which the matrix {\displaystyle A} is restricted to nega-circulant blocks: {\displaystyle A=[\operatorname {rot} (a_{1})|\cdots |\operatorname {rot} (a_{m})]}.
=== M-SISq,n,d,m,β ===
The SIS problem solved over a module lattice is also called the Module-SIS or M-SIS problem. Like R-SIS, this problem considers the quotient polynomial rings {\displaystyle R=\mathbb {Z} [x]/(f(x))} and {\displaystyle R_{q}=\mathbb {Z} _{q}[x]/(f(x))} for {\displaystyle f(x)=x^{n}-1}, with special interest in cases where {\displaystyle n} is a power of 2. Then, let {\displaystyle M} be a module of rank {\displaystyle d} such that {\displaystyle M\subseteq R^{d}}, and let {\displaystyle \|\cdot \|} be an arbitrary norm over {\displaystyle R_{q}^{m}}.
We then define the problem as follows:
Select {\displaystyle m} independent uniformly random elements {\displaystyle a_{i}\in R_{q}^{d}}. Define the vector {\displaystyle {\vec {\boldsymbol {a}}}:=(a_{1},\ldots ,a_{m})\in R_{q}^{d\times m}}. Find a nonzero vector {\displaystyle {\vec {\boldsymbol {z}}}:=(z_{1},\ldots ,z_{m})\in R^{m}} such that:
{\displaystyle 0<\|{\vec {\boldsymbol {z}}}\|\leq \beta },
{\displaystyle f_{\vec {\boldsymbol {a}}}({\vec {\boldsymbol {z}}}):={\vec {\boldsymbol {a}}}^{T}.{\vec {\boldsymbol {z}}}=\sum _{i=1}^{m}a_{i}.z_{i}=0\in R_{q}^{d}}.
While M-SIS is a less compact variant of SIS than R-SIS, the M-SIS problem is asymptotically at least as hard as R-SIS and therefore gives a tighter bound on the hardness assumption of SIS. This makes assuming the hardness of M-SIS a safer, but less efficient underlying assumption when compared to R-SIS.
== See also ==
Lattice-based cryptography
Homomorphic encryption
Ring learning with errors key exchange
Post-quantum cryptography
Lattice problem
TFNP § PPP
== References == | Wikipedia/Short_integer_solution_problem |
In cryptography, the Cellular Message Encryption Algorithm (CMEA) is a block cipher which was used for securing mobile phones in the United States. CMEA is one of four cryptographic primitives specified in a Telecommunications Industry Association (TIA) standard, and is designed to encrypt the control channel, rather than the voice data. In 1997, a group of cryptographers published attacks on the cipher showing it had several weaknesses which give it a trivial effective strength of a 24-bit to 32-bit cipher.
Some accusations were made that the NSA had pressured the original designers into crippling CMEA, but the NSA has denied any role in the design or selection of the algorithm. The ECMEA and SCEMA ciphers are derived from CMEA.
CMEA is described in U.S. patent 5,159,634. It is byte-oriented, with variable block size, typically 2 to 6 bytes. The key size is only 64 bits. Both of these are unusually small for a modern cipher. The algorithm consists of only 3 passes over the data: a non-linear left-to-right diffusion operation, an unkeyed linear mixing, and another non-linear diffusion that is in fact the inverse of the first. The non-linear operations use a keyed lookup table called the T-box, which uses an unkeyed lookup table called the CaveTable. The algorithm is self-inverse; re-encrypting the ciphertext with the same key is equivalent to decrypting it.
CMEA is severely insecure. There is a chosen-plaintext attack, effective for all block sizes, using 338 chosen plaintexts. For 3-byte blocks (typically used to encrypt each dialled digit), there is a known-plaintext attack using 40 to 80 known plaintexts. For 2-byte blocks, 4 known plaintexts suffice.
The "improved" CMEA, CMEA-I, is not much better: a chosen-plaintext attack on it requires fewer than 850 plaintexts in its adaptive version.
== See also ==
A5/1, the broken encryption algorithm used in the GSM cellular telephone standard
ORYX
CAVE
== References ==
== External links ==
The attack on CMEA
Press release and the NSA response
Cryptanalysis of the Cellular Message Encryption Algorithm David Wagner Bruce Schneier 1997 | Wikipedia/Cellular_Message_Encryption_Algorithm |
In cryptography, padding is any of a number of distinct practices which all include adding data to the beginning, middle, or end of a message prior to encryption. In classical cryptography, padding may include adding nonsense phrases to a message to obscure the fact that many messages end in predictable ways, e.g. sincerely yours.
== Classical cryptography ==
Official messages often start and end in predictable ways: My dear ambassador, Weather report, Sincerely yours, etc. The primary use of padding with classical ciphers is to prevent the cryptanalyst from using that predictability to find known plaintext that aids in breaking the encryption. Random length padding also prevents an attacker from knowing the exact length of the plaintext message.
A famous example of classical padding which caused a great misunderstanding is "the world wonders" incident, which nearly caused an Allied loss at the World War II Battle off Samar, part of the larger Battle of Leyte Gulf. In that example, Admiral Chester Nimitz, the Commander in Chief, U.S. Pacific Fleet in WWII, sent the following message to Admiral Bull Halsey, commander of Task Force Thirty Four (the main Allied fleet) at the Battle of Leyte Gulf, on October 25, 1944:
Where is, repeat, where is Task Force Thirty Four?
With padding (bolded) and metadata added, the message became:
TURKEY TROTS TO WATER GG FROM CINCPAC ACTION COM THIRD FLEET INFO COMINCH CTF SEVENTY-SEVEN X WHERE IS RPT WHERE IS TASK FORCE THIRTY FOUR RR THE WORLD WONDERS
Halsey's radio operator mistook some of the padding for the message and so Admiral Halsey ended up reading the following message:
Where is, repeat, where is Task Force Thirty Four? The world wonders
Admiral Halsey interpreted the padding phrase "the world wonders" as a sarcastic reprimand, which caused him to have an emotional outburst and then lock himself in his bridge and sulk for an hour before he moved his forces to assist at the Battle off Samar. Halsey's radio operator should have been tipped off by the letters RR that "the world wonders" was padding; all other radio operators who received Admiral Nimitz's message correctly removed both padding phrases.
Many classical ciphers arrange the plaintext into particular patterns (e.g., squares, rectangles, etc.) and if the plaintext does not exactly fit, it is often necessary to supply additional letters to fill out the pattern. Using nonsense letters for this purpose has a side benefit of making some kinds of cryptanalysis more difficult.
== Symmetric cryptography ==
=== Hash functions ===
Most modern cryptographic hash functions process messages in fixed-length blocks; all but the earliest hash functions include some sort of padding scheme. It is critical for cryptographic hash functions to employ termination schemes that prevent a hash from being vulnerable to length extension attacks.
Many padding schemes are based on appending predictable data to the final block. For example, the pad could be derived from the total length of the message. This kind of padding scheme is commonly applied to hash algorithms that use the Merkle–Damgård construction, such as MD5, SHA-1, and the SHA-2 family (SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256).
=== Block cipher mode of operation ===
Cipher-block chaining (CBC) mode is an example of a block cipher mode of operation. Some block cipher modes (essentially CBC and PCBC) for symmetric-key encryption algorithms require plain text input that is a multiple of the block size, so messages may have to be padded to bring them to this length.
There is currently a shift toward streaming modes of operation instead of block modes of operation. An example of streaming-mode encryption is the counter mode of operation. Streaming modes of operation can encrypt and decrypt messages of any size and therefore do not require padding. More intricate ways of ending a message, such as ciphertext stealing or residual block termination, avoid the need for padding.
A disadvantage of padding is that it makes the plain text of the message susceptible to padding oracle attacks. Padding oracle attacks allow the attacker to gain knowledge of the plain text without attacking the block cipher primitive itself. Padding oracle attacks can be avoided by making sure that an attacker cannot gain knowledge about the removal of the padding bytes. This can be accomplished by verifying a message authentication code (MAC) or digital signature before removal of the padding bytes, or by switching to a streaming mode of operation.
==== Bit padding ====
Bit padding can be applied to messages of any size.
A single '1' bit is added to the message and then as many '0' bits as required (possibly none) are added. The number of '0' bits added will depend on the block boundary to which the message needs to be extended. In bit terms this is "1000 ... 0000".
This method can be used to pad messages which are any number of bits long, not necessarily a whole number of bytes long. For example, a message of 23 bits that is padded with 9 bits in order to fill a 32-bit block:
... | 1011 1001 1101 0100 0010 0111 0000 0000 |
This padding is the first step of a two-step padding scheme used in many hash functions including MD5 and SHA. In this context, it is specified by RFC1321 step 3.1.
This padding scheme is defined by ISO/IEC 9797-1 as Padding Method 2.
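A minimal sketch of this scheme, operating on a string of '0'/'1' characters (function name ours), reproduces the 23-bit example above:

```python
def bit_pad(bits, block_size):
    """Append a single '1' bit, then as many '0' bits as needed
    (possibly none) to reach a multiple of block_size."""
    padded = bits + "1"
    return padded + "0" * (-len(padded) % block_size)

message = "10111001110101000010011"   # the 23-bit message from the example
padded = bit_pad(message, 32)          # 23 bits + '1' + eight '0's = 32 bits
```

Because the '1' bit is always appended, a message that already fills a block grows by one full block of padding.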
==== Byte padding ====
Byte padding can be applied to messages that can be encoded as an integral number of bytes.
===== ANSI X9.23 =====
In ANSI X9.23, between 1 and 8 bytes are always added as padding. The block is padded with random bytes (although many implementations use 00) and the last byte of the block is set to the number of bytes added.
Example:
In the following example the block size is 8 bytes, and padding is required for 4 bytes (in hexadecimal format)
... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 04 |
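A short sketch in Python (using 00 fill bytes, as many implementations do; the function names are ours):

```python
def x923_pad(data: bytes, block_size: int = 8) -> bytes:
    """ANSI X9.23: pad with 1..block_size bytes; the last byte is the count."""
    pad_len = block_size - len(data) % block_size   # always in 1..block_size
    return data + b"\x00" * (pad_len - 1) + bytes([pad_len])

def x923_unpad(padded: bytes) -> bytes:
    """Strip as many bytes as the final byte indicates."""
    return padded[:-padded[-1]]
```

An already-aligned message receives a full extra block of padding, so the count byte is always unambiguous.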
===== ISO 10126 =====
ISO 10126 (withdrawn, 2007) specifies that the padding should be done at the end of that last block with random bytes, and the padding boundary should be specified by the last byte.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 81 A6 23 04 |
===== PKCS#5 and PKCS#7 =====
PKCS#7 is described in RFC 5652.
Padding is in whole bytes. The value of each added byte is the number of bytes that are added, i.e. N bytes, each of value N are added. The number of bytes added will depend on the block boundary to which the message needs to be extended.
The padding will be one of:
01
02 02
03 03 03
04 04 04 04
05 05 05 05 05
06 06 06 06 06 06
etc.
This padding method (as well as the previous two) is well-defined if and only if N is less than 256.
Example:
In the following example, the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 04 04 04 04 |
If the length of the original data is an integer multiple of the block size B, then an extra block of bytes with value B is added. This is necessary so the deciphering algorithm can determine with certainty whether the last byte of the last block is a pad byte indicating the number of padding bytes added or part of the plaintext message. Consider a plaintext message that is an integer multiple of B bytes with the last byte of plaintext being 01. With no additional information, the deciphering algorithm will not be able to determine whether the last byte is a plaintext byte or a pad byte. However, by adding B bytes each of value B after the 01 plaintext byte, the deciphering algorithm can always treat the last byte as a pad byte and strip the appropriate number of pad bytes off the end of the ciphertext; said number of bytes to be stripped based on the value of the last byte.
PKCS#5 padding is identical to PKCS#7 padding, except that it has only been defined for block ciphers that use a 64-bit (8-byte) block size. In practice, the two can be used interchangeably.
The maximum block size is 255, as it is the biggest number a byte can contain.
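A short Python sketch (function names ours) of PKCS#7 padding and its validating removal:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """Append N bytes, each of value N, where 1 <= N <= block_size."""
    pad_len = block_size - len(data) % block_size
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(padded: bytes, block_size: int = 16) -> bytes:
    """Strip the padding, rejecting malformed input."""
    pad_len = padded[-1]
    if not 1 <= pad_len <= block_size or padded[-pad_len:] != bytes([pad_len]) * pad_len:
        raise ValueError("invalid PKCS#7 padding")
    return padded[:-pad_len]
```

In a real protocol this validity check must itself be protected (e.g. by verifying a MAC first), since error behavior on removal is exactly what a padding oracle attack exploits.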
===== ISO/IEC 7816-4 =====
ISO/IEC 7816-4:2005 is identical to the bit padding scheme, applied to a plain text of N bytes. This means in practice that the first byte is a mandatory byte valued '80' (Hexadecimal) followed, if needed, by 0 to N − 1 bytes set to '00', until the end of the block is reached. ISO/IEC 7816-4 itself is a communication standard for smart cards containing a file system, and in itself does not contain any cryptographic specifications.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 80 00 00 00 |
The next example shows a padding of just one byte
... | DD DD DD DD DD DD DD DD | DD DD DD DD DD DD DD 80 |
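A sketch matching the two examples above (function names ours):

```python
def iso7816_pad(data: bytes, block_size: int = 8) -> bytes:
    """Append a mandatory '80' byte, then '00' bytes up to the block boundary."""
    padded = data + b"\x80"
    return padded + b"\x00" * (-len(padded) % block_size)

def iso7816_unpad(padded: bytes) -> bytes:
    stripped = padded.rstrip(b"\x00")      # drop the trailing '00' pad bytes
    if not stripped.endswith(b"\x80"):
        raise ValueError("invalid ISO/IEC 7816-4 padding")
    return stripped[:-1]
```

Unlike zero padding, this scheme is reversible even when the plaintext ends in 00 bytes, because the mandatory 80 marker separates message bytes from padding.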
===== Zero padding =====
All the bytes that are required to be padded are padded with zero. The zero padding scheme has not been standardized for encryption, although it is specified for hashes and MACs as Padding Method 1 in ISO/IEC 10118-1 and ISO/IEC 9797-1.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 00 |
Zero padding may not be reversible if the original file ends with one or more zero bytes, making it impossible to distinguish between plaintext data bytes and padding bytes. It may be used when the length of the message can be derived out-of-band. It is often applied to binary encoded strings (null-terminated string) as the null character can usually be stripped off as whitespace.
Zero padding is sometimes also referred to as "null padding" or "zero byte padding". Some implementations may add an additional block of zero bytes if the plaintext is already divisible by the block size.
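A sketch (function name ours; this variant does not add an extra block for aligned input) that also demonstrates why the scheme is not reversible:

```python
def zero_pad(data: bytes, block_size: int = 8) -> bytes:
    """Pad with '00' bytes up to the block boundary."""
    return data + b"\x00" * (-len(data) % block_size)

# Two distinct plaintexts, one ending in a zero byte, pad to the same
# block, so the original length must be conveyed out-of-band.
ambiguous = zero_pad(b"\x01\x02") == zero_pad(b"\x01\x02\x00")
```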
== Public key cryptography ==
In public key cryptography, padding is the process of preparing a message for encryption or signing using a specification or scheme such as PKCS#1 v2.2, OAEP, PSS, PSSR, IEEE P1363 EMSA2 and EMSA5. A modern form of padding for asymmetric primitives is OAEP applied to the RSA algorithm, when it is used to encrypt a limited number of bytes.
The operation is referred to as "padding" because originally, random material was simply appended to the message to make it long enough for the primitive. This form of padding is not secure and is therefore no longer applied. A modern padding scheme aims to ensure that the attacker cannot manipulate the plaintext to exploit the mathematical structure of the primitive and will usually be accompanied by a proof, often in the random oracle model, that breaking the padding scheme is as hard as solving the hard problem underlying the primitive.
== Traffic analysis and protection via padding ==
Even if perfect cryptographic routines are used, the attacker can gain knowledge of the amount of traffic that was generated. The attacker might not know what Alice and Bob were talking about, but can know that they were talking and how much they talked. In some circumstances this leakage can be highly compromising. Consider for example when a military is organising a secret attack against another nation: it may suffice to alert the other nation for them to know merely that there is a lot of secret activity going on.
As another example, when encrypting Voice Over IP streams that use variable bit rate encoding, the number of bits per unit of time is not obscured, and this can be exploited to guess spoken phrases. Similarly, the burst patterns that common video encoders produce are often sufficient to identify the streaming video a user is watching uniquely. Even the total size of an object alone, such as a website, file, software package download, or online video, can uniquely identify an object, if the attacker knows or can guess a known set the object comes from. The side-channel of encrypted content length was used to extract passwords from HTTPS communications in the well-known CRIME and BREACH attacks.
Padding an encrypted message can make traffic analysis harder by obscuring the true length of its payload. The choice of length to pad a message to may be made either deterministically or randomly; each approach has strengths and weaknesses that apply in different contexts.
=== Randomized padding ===
A random number of additional padding bits or bytes may be appended to the end of a message, together with an indication at the end how much padding was added. If the amount of padding is chosen as a uniform random number between 0 and some maximum M, for example, then an eavesdropper will be unable to determine the message's length precisely within that range. If the maximum padding M is small compared to the message's total size, then this padding will not add much overhead, but the padding will obscure only the least-significant bits of the object's total length, leaving the approximate length of large objects readily observable and hence still potentially uniquely identifiable by their length. If the maximum padding M is comparable to the size of the payload, in contrast, an eavesdropper's uncertainty about the message's true payload size is much larger, at the cost that padding may add up to 100% overhead (2× blow-up) to the message.
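The averaging attack on randomized padding described below can be sketched numerically (names and parameters ours):

```python
import random

def randomized_length(payload_len: int, max_pad: int) -> int:
    """Padded length with a uniform random amount of padding in [0, max_pad]."""
    return payload_len + random.randint(0, max_pad)

# An eavesdropper observing many equal-payload messages can average out
# the noise: the mean padded length converges to payload_len + max_pad / 2.
payload, max_pad = 1000, 200
observations = [randomized_length(payload, max_pad) for _ in range(10_000)]
estimate = sum(observations) / len(observations) - max_pad / 2
```

A single observation only narrows the payload to a 201-value range, but ten thousand observations pin it down almost exactly.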
In addition, in common scenarios in which an eavesdropper has the opportunity to see many successive messages from the same sender, and those messages are similar in ways the attacker knows or can guess, the eavesdropper can use statistical techniques to decrease and eventually even eliminate the benefit of randomized padding. For example, suppose a user's application regularly sends messages of the same length, and the eavesdropper knows or can guess this fact, for example by fingerprinting the user's application. Alternatively, an active attacker might be able to induce an endpoint to send messages regularly, such as if the victim is a public server. In such cases, the eavesdropper can simply compute the average over many observations to determine the length of the regular message's payload.
=== Deterministic padding ===
A deterministic padding scheme always pads a message payload of a given length to form an encrypted message of a particular corresponding output length. When many payload lengths map to the same padded output length, an eavesdropper cannot distinguish or learn any information about the payload's true length within one of these length buckets, even after many observations of the identical-length messages being transmitted. In this respect, deterministic padding schemes have the advantage of not leaking any additional information with each successive message of the same payload size.
On the other hand, suppose an eavesdropper can benefit from learning about small variations in payload size, such as plus or minus just one byte in a password-guessing attack for example. If the message sender is unlucky enough to send many messages whose payload lengths vary by only one byte, and that length is exactly on the border between two of the deterministic padding classes, then these plus-or-minus one payload lengths will consistently yield different padded lengths as well (plus-or-minus one block for example), leaking exactly the fine-grained information the attacker desires. Against such risks, randomized padding can offer more protection by independently obscuring the least-significant bits of message lengths.
Common deterministic padding methods include padding to a constant block size and padding to the next-larger power of two. Like randomized padding with a small maximum amount M, however, padding deterministically to a block size much smaller than the message payload obscures only the least-significant bits of the message's true length, leaving the message's approximate length largely unprotected. Padding messages to a power of two (or any other fixed base) reduces the maximum amount of information that the message can leak via its length from O(log M) to O(log log M). Padding to a power of two increases message size overhead by up to 100%, however, and padding to powers of larger integer bases increases the maximum overhead further.
The PADMÉ scheme, proposed for padded uniform random blobs or PURBs, deterministically pads messages to lengths representable as a floating point number whose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This length constraint ensures that a message leaks at most O(log log M) bits of information via its length, like padding to a power of two, but incurs much less overhead of at most 12% for tiny messages and decreasing gradually with message size.
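A sketch of the PADMÉ length computation as we understand it from the PURB paper (the bit-twiddling below is our own rendering, not the reference implementation): the padded length keeps a mantissa no longer than the bit-length of the exponent.

```python
def padme(length: int) -> int:
    """Smallest PADME-representable length >= length: a value whose
    mantissa has no more significant bits than its exponent."""
    if length < 2:
        return length
    e = length.bit_length() - 1      # floor(log2 length)
    s = e.bit_length()               # bits needed to write the exponent
    last_bits = e - s                # low bits that must be zero
    if last_bits <= 0:
        return length
    mask = (1 << last_bits) - 1
    return (length + mask) & ~mask   # round up to the next allowed length
```

For example, a 9-byte message is padded to 10 bytes (the worst-case ~11% overhead), while powers of two such as 8 are left unchanged.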
== See also ==
Chaffing and winnowing, mixing in large amounts of nonsense before sending
Ciphertext stealing, another approach to deal with messages that are not a multiple of the block length
Initialization vector, salt (cryptography), which are sometimes confused with padding
Key encapsulation, an alternative to padding for public key systems used to exchange symmetric keys
PURB or padded uniform random blob, an encryption discipline that minimizes leakage from either metadata or length
Russian copulation, another technique to prevent cribs
== References ==
== Further reading ==
XCBC: csrc.nist.gov/groups/ST/toolkit/BCM/documents/workshop2/presentations/xcbc.pdf | Wikipedia/Padding_(cryptography) |
The paranoiac-critical method is a surrealist technique developed by Salvador Dalí in the early 1930s. He employed it in the production of paintings and other artworks, especially those that involved optical illusions and other multiple images. The technique consists of the artist invoking a paranoid state (fear that the self is being manipulated, targeted or controlled by others). The result is a deconstruction of the psychological concept of identity, such that subjectivity becomes the primary aspect of the artwork.
== Origins ==
The surrealists related theories of psychology to the idea of creativity and the production of art. In the mid-1930s André Breton wrote about a "fundamental crisis of the object". The object began being thought of not as a fixed external object but also as an extension of our subjective self. One of the types of objects theorized in surrealism was the phantom object. According to Salvador Dalí, these objects have a minimum of mechanical meaning, but, when viewed, the mind evokes phantom images which are the result of unconscious acts.
The paranoiac-critical method arose from similar surrealistic experiments with psychology and the creation of images such as Max Ernst's frottage or Óscar Domínguez's decalcomania, two surrealist techniques, which involved rubbing pencil or chalk on paper over a textured surface and interpreting the phantom images visible in the texture on the paper.
== Description ==
The aspect of paranoia that Dalí was interested in and which helped inspire the method was the ability of the brain to perceive links between things which rationally are not linked. Dalí described the paranoiac-critical method as a "spontaneous method of irrational knowledge based on the critical and systematic objectivity of the associations and interpretations of delirious phenomena".
Employing the method when creating a work of art uses an active process of the mind to visualize images in the work and incorporate these into the final product. An example of the resulting work is a double image or multiple image in which an ambiguous image can be interpreted in different ways.
André Breton (by way of Guy Mangeot) hailed the method, saying that Dalí's paranoiac-critical method was an "instrument of primary importance" and that it "has immediately shown itself capable of being applied equally to painting, poetry, the cinema, the construction of typical Surrealist objects, fashion, sculpture, the history of art, and even, if necessary, all manner of exegesis".
In his introduction to the 1994 edition of Jacques Lacan's The Four Fundamental Concepts of Psychoanalysis, David Macey stated that "Salvador Dalí's theory of 'paranoic knowledge' is certainly of great relevance to the young Lacan."
== See also ==
Delirious New York, a book that discusses Dalí and the paranoiac-critical method.
== References ==
== Further reading ==
Une lecture paranoïaque-critique de La Maison Tellier (Guy de Maupassant), Jean-Claude Lutanie, Le Veilleur Éditeur, 1993 | Wikipedia/Paranoiac-critical_method |
Cryptography is the practice and study of encrypting information, or in other words, securing information from unauthorized access. There are many different cryptography laws in different nations. Some countries prohibit the export of cryptography software and/or encryption algorithms or cryptanalysis methods. Some countries require decryption keys to be recoverable in case of a police investigation.
== Overview ==
Issues regarding cryptography law fall into four categories:
Export controls: restrictions on the export of cryptography methods from a country to other countries or commercial entities. There are international export control agreements, the main one being the Wassenaar Arrangement. The Wassenaar Arrangement was created after the dissolution of COCOM (Coordinating Committee for Multilateral Export Controls), which in 1989 "decontrolled password and authentication-only cryptography."
Import controls: restrictions on using certain types of cryptography within a country.
Patent issues: disputes over the use of cryptography tools that are patented.
Search and seizure issues: whether, and under what circumstances, a person can be compelled to decrypt data files or reveal an encryption key.
== Legal issues ==
=== Prohibitions ===
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
=== Export controls ===
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs can similarly transmit and receive emails via TLS, and can send and receive emails encrypted with S/MIME. Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
=== NSA involvement ===
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak in order to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).
=== Digital rights management ===
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech.
=== Forced disclosure of encryption keys ===
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure, some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data, such as that of a drive which has been securely wiped.
== Cryptography law in different countries ==
=== China ===
In October 1999, the State Council promulgated the Regulations on the Administration of Commercial Cryptography. According to these regulations, commercial cryptography was treated as a state secret.
On 26 October 2019, the Standing Committee of the National People's Congress promulgated the Cryptography Law of the People's Republic of China. This law went into effect at the start of 2020. The law categorizes cryptography into three categories:
Core cryptography, which is a state secret and suitable for information up to top secret;
Ordinary cryptography, which is also a state secret and suitable for information up to secret;
Commercial cryptography, which protects information that is not a state secret.
The law also states that there should be a "mechanism of both in-process and ex-post supervision on commercial cryptography, which combines routine supervision with random inspection" (implying that the Chinese government should get access to encrypted servers). It also states that foreign providers of commercial encryption need some sort of state approval.
Cryptosystems authorized for use in China include SM2, SM3, SM4 and SM9.
=== France ===
As of 2011, and since 2004, the law for trust in the digital economy (French: Loi pour la confiance dans l'économie numérique, abbreviated LCEN) has mostly liberalized the use of cryptography.
As long as cryptography is only used for authentication and integrity purposes, it can be freely used. The cryptographic key or the nationality of the entities involved in the transaction do not matter. Typical e-business websites fall under this liberalized regime.
Exportation and importation of cryptographic tools to or from foreign countries must be either declared (when the other country is a member of the European Union) or requires an explicit authorization (for countries outside the EU).
=== India ===
Section 69 of the Information Technology Act, 2000 (as amended in 2008) authorizes Indian government officials or policemen to listen in on any phone calls, read any SMS messages or emails, or monitor the websites that anyone visits, without requiring a warrant. (However, this is a violation of article 21 of the Constitution of India.) This section also enables the central government of India or a state government of India to compel any agency to decrypt information.
According to the Information Technology (Intermediaries Guidelines) Rules, 2011, intermediaries are required to provide information to Indian government agencies for investigative or other purposes.
ISP license holders are freely allowed to use encryption keys up to 40 bits. Beyond that, they are required to obtain written permission and to deposit the decryption key with the Department of Telecommunications.
Per the 2012 SEBI Master Circular for Stock Exchange or Cash Market (issued by the Securities and Exchange Board of India), it is the responsibility of stock exchanges to maintain data reliability and confidentiality through the use of encryption. Per Reserve Bank of India guidance issued in 2001, banks must use at least 128-bit SSL to protect browser-to-bank communication; they must also encrypt sensitive data internally.
Electronics, including cryptographic products, is one of the categories of dual-use items in the Special Chemicals, Organisms, Materials, Equipment and Technologies (SCOMET) list, part of the Foreign Trade (Development & Regulation) Act, 1992. However, this regulation does not specify which cryptographic products are subject to export controls.
=== United States ===
In the United States, the International Traffic in Arms Regulation restricts the export of cryptography.
== See also ==
Official Secrets Act (United Kingdom, India, Ireland, Malaysia and formerly New Zealand)
Regulation of Investigatory Powers Act 2000 (United Kingdom)
Restrictions on the import of cryptography
United States v. Boucher (2009), on the right of a criminal defendant not to reveal a passphrase
FBI–Apple encryption dispute on whether cellphone manufacturers can be compelled to assist in their unlocking
== References ==
== External links ==
Bert-Jaap Koops' Crypto Law Survey - existing and proposed laws and regulations on cryptography
In the history of cryptography, a grille cipher was a technique for encrypting a plaintext by writing it onto a sheet of paper through a pierced sheet (of paper or cardboard or similar). The earliest known description is due to Jacopo Silvestri in 1526. His proposal was for a rectangular stencil allowing single letters, syllables, or words to be written, then later read, through its various apertures. The written fragments of the plaintext could be further disguised by filling the gaps between the fragments with anodyne words or letters. This variant is also an example of steganography, as are many of the grille ciphers.
== Cardan grille and variations ==
The Cardan grille was invented as a method of secret writing. The word cryptography became the more familiar term for secret communications from the middle of the 17th century. Earlier, the word steganography was common. The other general term for secret writing was cypher - also spelt cipher. There is a modern distinction between cryptography and steganography.
Sir Francis Bacon gave three fundamental conditions for ciphers. Paraphrased, these are:
a cipher method should not be difficult to use
it should not be possible for others to recover the plaintext (called 'reading the cipher')
in some cases, the presence of messages should not be suspected
It is difficult to fulfil all three conditions simultaneously. Condition 3 applies to steganography. Bacon meant that a cipher message should, in some cases, not appear to be a cipher at all. The original Cardan Grille met that aim.
Variations on the Cardano original, however, were not intended to fulfill condition 3 and generally failed to meet condition 2 as well. But, few if any ciphers have ever achieved this second condition, so the point is generally a cryptanalyst's delight whenever the grille ciphers are used.
The attraction of a grille cipher for users lies in its ease of use (condition 1). In short, it's very simple.
=== Single-letter grilles ===
Not all ciphers are used for communication with others: records and reminders may be kept in cipher for use of the author alone. A grille is easily usable for protection of brief information such as a key word or a key number in such a use.
In the case of communication by grille cipher, both sender and recipient must possess an identical copy of the grille. The loss of a grille leads to the probable loss of all secret correspondence encrypted with that grille. Either the messages cannot be read (i.e., decrypted) or someone else (with the lost grille) may be reading them.
A further use for such a grille has been suggested: as a method of generating pseudo-random sequences from a pre-existing text. This view has been proposed in connection with the Voynich manuscript. It is an area of cryptography that David Kahn termed enigmatology, and it touches on the works of Dr John Dee and on ciphers supposedly embedded in the works of Shakespeare to prove that Francis Bacon wrote them, claims which William F. Friedman examined and discredited.
=== Trellis ciphers ===
The Elizabethan spymaster Sir Francis Walsingham (1530–1590) is reported to have used a "trellis" to conceal the letters of a plaintext in communication with his agents. However, he generally preferred the combined code-cipher method known as a nomenclator, which was the practical state-of-the-art in his day. The trellis was described as a device with spaces that was reversible. It appears to have been a transposition tool that produced something much like the Rail fence cipher and resembled a chess board.
Cardano is not known to have proposed this variation, but he was a chess player who wrote a book on gaming, so the pattern would have been familiar to him. Whereas the ordinary Cardan grille has arbitrary perforations, if his method of cutting holes is applied to the white squares of a chess board a regular pattern results.
The encipherer begins with the board in the wrong position for chess. Each successive letter of the message is written in a single square. If the message is written vertically, it is taken off horizontally and vice versa.
After filling in 32 letters, the board is turned through 90 degrees and another 32 letters written (flipping the board horizontally or vertically is equivalent). Shorter messages are padded out with null letters. Messages longer than 64 letters require another turn of the board and another sheet of paper.
J M T H H D L I
S I Y P S L U I
A O W A E T I E
E N W A P D E N
E N E L G O O N
N A I T E E F N
K E R L O O N D
D N T T E N R X
This transposition method produces an invariant pattern and is not satisfactorily secure for anything other than cursory notes.
33, 5, 41, 13, 49, 21, 57, 29
1, 37, 9, 45, 17, 53, 25, 61
34, 6, 42, 14, 50, 22, 58, 30
2, 38, 10, 46, 18, 54, 26, 62
35, 7, 43, 15, 51, 23, 59, 31
3, 39, 11, 47, 19, 55, 27, 63
36, 8, 44, 16, 52, 24, 60, 32
4, 40, 12, 48, 20, 56, 28, 64
A second transposition is needed to obscure the letters. Following the chess analogy, the route taken might be the knight's move. Or some other path can be agreed upon, such as a reverse spiral, together with a specific number of nulls to pad the start and end of a message.
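As a rough illustration of the write-vertically, take-off-horizontally step described above, the following Python sketch treats the white squares of an 8x8 board as the apertures. The exact details of Walsingham's trellis are not recorded, and the message and padding here are invented for the example:

```python
def chessboard_transpose(message):
    """Write up to 32 letters down the white squares, column by column,
    then take the board off row by row. Messages longer than 32 letters
    would need the board turned, as described above."""
    n = 8
    board = [[None] * n for _ in range(n)]
    letters = iter(message.ljust(n * n // 2, "X"))  # pad with nulls
    for c in range(n):            # write vertically
        for r in range(n):
            if (r + c) % 2 == 0:  # a "white" square
                board[r][c] = next(letters)
    return "".join(board[r][c] for r in range(n) for c in range(n)
                   if board[r][c] is not None)  # take off horizontally

def chessboard_untranspose(ciphertext):
    """Invert the transposition: refill the white squares row by row,
    then read them back column by column."""
    n = 8
    board = [[None] * n for _ in range(n)]
    letters = iter(ciphertext)
    for r in range(n):
        for c in range(n):
            if (r + c) % 2 == 0:
                board[r][c] = next(letters)
    return "".join(board[r][c] for c in range(n) for r in range(n)
                   if board[r][c] is not None)

msg = "WEAREDISCOVEREDFLEEATONCEQUICKLY"  # 32 letters
ct = chessboard_transpose(msg)
assert chessboard_untranspose(ct) == msg
```

As the text notes, the pattern produced is invariant, so this is only suitable for cursory notes without a second transposition.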
=== Turning grilles ===
Rectangular Cardan grilles can be placed in four positions. The trellis or chessboard has only two positions, but it gave rise to a more sophisticated turning grille with four positions that can be rotated in two directions.
Baron Edouard Fleissner von Wostrowitz, a retired Austrian cavalry colonel, described a variation on the chess board cipher in 1880 and his grilles were adopted by the German army during World War I. These grilles are often named after Fleissner, although he took his material largely from a German work, published in Tübingen in 1809, written by Klüber who attributed this form of the grille to Cardano, as did Helen Fouché Gaines.
Bauer notes that grilles were used in the 18th century, for example in 1745 in the administration of the Dutch Stadtholder William IV. Later, the mathematician C. F. Hindenburg studied turning grilles more systematically in 1796. They "are often called Fleissner grilles in ignorance of their historical origin."
One form of the Fleissner (or Fleißner) grille makes 16 perforations in an 8x8 grid – 4 holes in each quadrant. If the squares in each quadrant are numbered 1 to 16, all 16 numbers must be used once only. This allows many variations in placing the apertures.
The grille has four positions – North, East, South, West. Each position exposes 16 of the 64 squares. The encipherer places the grille on a sheet and writes the first 16 letters of the message. Then, turning the grille through 90 degrees, the second 16 are written, and so on until the grid is filled.
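The procedure can be sketched in a few lines of Python. The 4x4 grille and 16-letter message below are invented for illustration (the article's examples use 8x8 grilles); the assertion inside the encryption loop also checks the validity condition described above, that no cell is exposed twice:

```python
# A 4x4 turning grille (1 = aperture), invented for this example. A valid
# grille exposes each of the 16 cells exactly once over the four positions.
GRILLE = [
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

def rotate_cw(g):
    """Rotate a square grid 90 degrees clockwise."""
    return [list(row) for row in zip(*g[::-1])]

def turning_grille_encrypt(message, grille):
    n = len(grille)
    grid = [[None] * n for _ in range(n)]
    letters = iter(message)
    for _ in range(4):  # the four positions of the grille
        for r in range(n):
            for c in range(n):
                if grille[r][c]:
                    assert grid[r][c] is None, "invalid grille"
                    grid[r][c] = next(letters)
        grille = rotate_cw(grille)
    return "".join(ch for row in grid for ch in row)

def turning_grille_decrypt(ciphertext, grille):
    n = len(grille)
    grid = [list(ciphertext[i * n:(i + 1) * n]) for i in range(n)]
    out = []
    for _ in range(4):  # read through the apertures in each position
        out += [grid[r][c] for r in range(n) for c in range(n) if grille[r][c]]
        grille = rotate_cw(grille)
    return "".join(out)

msg = "WEAREDISCOVEREDX"  # 16 letters; the final X is a null
ct = turning_grille_encrypt(msg, GRILLE)
assert ct == "WEAERRDIEDCSXOVE"
assert turning_grille_decrypt(ct, GRILLE) == msg
```

Because every cell is exposed exactly once across the four positions, the grid is completely filled after the fourth rotation.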
It is possible to construct grilles of different dimensions; however, if the number of apertures per quadrant does not divide evenly, one quadrant or section must contain an extra perforation. Illustrations of the Fleissner grille often use a 6x6 example to save space: such a grille has nine apertures in total, so three quadrants contain 2 apertures and one quadrant must have 3. There is no standard pattern of apertures: they are created by the user, in accordance with the above description, with the intention of producing a good mix.
The method gained wide recognition when Jules Verne used a turning grille as a plot device in his novel Mathias Sandorf, published in 1885. Verne had come across the idea in Fleissner's treatise Handbuch der Kryptographie which appeared in 1881.
Fleissner grilles were constructed in various sizes during World War I and were used by the German Army at the end of 1916. Each grille had a different code name: 5x5 ANNA; 6x6 BERTA; 7x7 CLARA; 8x8 DORA; 9x9 EMIL; 10x10 FRANZ. Their security was weak, and they were withdrawn after four months.
Another method of indicating the size of the grille in use was to insert a key code at the start of the cipher text: E = 5; F = 6 and so on. The grille can also be rotated in either direction and the starting position does not need to be NORTH. Clearly the working method is by arrangement between sender and receiver and may be operated in accordance with a schedule.
In the following examples, two cipher texts contain the same message. They are constructed from the example grille, beginning in the NORTH position, but one is formed by rotating the grille clockwise and the other anticlockwise. The ciphertext is then taken off the grid in horizontal lines - but it could equally be taken off vertically.
CLOCKWISE
ITIT ILOH GEHE TCDF LENS IIST FANB FSET EPES HENN URRE NEEN TRCG PR&I ODCT SLOE
ANTICLOCKWISE
LEIT CIAH GTHE TIDF LENB IIET FONS FSST URES NEDN EPRE HEEN TRTG PROI ONEC SL&C
In 1925 Luigi Sacco of the Italian Signals Corps began writing a book on ciphers which included reflections on the codes of the Great War, Nozioni di crittografia. He observed that Fleissner's method could be applied to a fractionating cipher, such as a Delastelle Bifid or Four-Square, with considerable increase in security.
Grille ciphers are also useful devices for transposing Chinese characters; they avoid the transcription of words into alphabetic or syllabic characters to which other ciphers (for example, substitution ciphers) can be applied.
After World War I, machine encryption made simple cipher devices obsolete, and grille ciphers fell into disuse except for amateur purposes. Yet, grilles provided seed ideas for transposition ciphers that are reflected in modern cryptography.
== Unusual possibilities ==
=== The d'Agapeyeff cipher ===
The unsolved D'Agapeyeff cipher, which was set as a challenge in 1939, contains 14x14 dinomes and might be based on Sacco's idea of transposing a fractionated cipher text by means of a grille.
=== A Third-Party Grille: the crossword puzzle ===
The distribution of grilles, an example of the difficult problem of key exchange, can be eased by taking a readily-available third-party grid in the form of a newspaper crossword puzzle. Although this is not strictly a grille cipher, it resembles the chessboard with the black squares shifted and it can be used in the Cardan manner. The message text can be written horizontally in the white squares and the ciphertext taken off vertically, or vice versa.
CTATI ETTOL TTOEH RRHEI MUCKE SSEEL AUDUE RITSC VISCH NREHE LEERD DTOHS ESDNN LEWAC LEONT OIIEA RRSET LLPDR EIVYT ELTTD TOXEA E4TMI GIUOD PTRT1 ENCNE ABYMO NOEET EBCAL LUZIU TLEPT SIFNT ONUYK YOOOO
Again, following Sacco's observation, this method disrupts a fractionating cipher such as Seriated Playfair.
Crosswords are also a possible source of keywords. A grid of the size illustrated has a word for each day of the month, the squares being numbered.
== Cryptanalysis ==
The original Cardano Grille was a literary device for gentlemen's private correspondence. Any suspicion of its use can lead to discoveries of hidden messages where no hidden messages exist at all, thus confusing the cryptanalyst. Letters and numbers in a random grid can take shape without substance. Obtaining the grille itself is a chief goal of the attacker.
But all is not lost if a grille copy can't be obtained. The later variants of the Cardano grille present problems which are common to all transposition ciphers. Frequency analysis will show a normal distribution of letters, and will suggest the language in which the plaintext was written. The problem, easily stated though less easily accomplished, is to identify the transposition pattern and so decrypt the ciphertext. Possession of several messages written using the same grille is a considerable aid.
Gaines, in her standard work on hand ciphers and their cryptanalysis, gave a lengthy account of transposition ciphers, and devoted a chapter to the turning grille.
== See also ==
Topics in cryptography
== References ==
== Further reading ==
== External links ==
The Secure Hash Algorithms are a family of cryptographic hash functions published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard (FIPS), including:
SHA-0: A retronym applied to the original version of the 160-bit hash function published in 1993 under the name "SHA". It was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1.
SHA-1: A 160-bit hash function which resembles the earlier MD5 algorithm. This was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Cryptographic weaknesses were discovered in SHA-1, and the standard was no longer approved for most cryptographic uses after 2010.
SHA-2: A family of two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words where SHA-512 uses 64-bit words. There are also truncated versions of each standard, known as SHA-224, SHA-384, SHA-512/224 and SHA-512/256. These were also designed by the NSA.
SHA-3: A hash function formerly called Keccak, chosen in 2012 after a public competition among non-NSA designers. It supports the same hash lengths as SHA-2, and its internal structure differs significantly from the rest of the SHA family.
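The output lengths of the family members can be checked directly with Python's standard hashlib module, which implements SHA-1, the SHA-2 variants, and the SHA-3 functions (b"abc" is the classic test-vector message from the FIPS documents):

```python
import hashlib

# Print the output length of each family member for the message "abc".
for name in ("sha1", "sha224", "sha256", "sha384", "sha512",
             "sha3_224", "sha3_256", "sha3_384", "sha3_512"):
    h = hashlib.new(name, b"abc")
    print(f"{name:>8}: {h.digest_size * 8:3d} bits  {h.hexdigest()[:16]}...")

# The well-known SHA-256 test vector for "abc":
assert hashlib.sha256(b"abc").hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")
```

Note that the truncated variants such as SHA-224 and SHA-384 run the full SHA-256 or SHA-512 compression with different initial values and then shorten the output.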
The corresponding standards are FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), and FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512). SHA-3 is standardized separately, in FIPS PUB 202 (SHA-3 Standard), rather than as part of the Secure Hash Standard (SHS).
== Comparison of SHA functions ==
In the table below, internal state means the "internal hash sum" after each compression of a data block.
== Validation ==
All SHA-family algorithms, as FIPS-approved security functions, are subject to official validation by the CMVP (Cryptographic Module Validation Program), a joint program run by the American National Institute of Standards and Technology (NIST) and the Canadian Communications Security Establishment (CSE).
== References ==
Kleptography is the study of stealing information securely and subliminally. The term was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology – Crypto '96.
Kleptography is a subfield of cryptovirology and is a natural extension of the theory of subliminal channels that was pioneered by Gus Simmons while at Sandia National Laboratory. A kleptographic backdoor is synonymously referred to as an asymmetric backdoor. Kleptography encompasses secure and covert communications through cryptosystems and cryptographic protocols. This is reminiscent of, but not the same as, steganography, which studies covert communications through graphics, video, digital audio data, and so forth.
== Kleptographic attack ==
=== Meaning ===
A kleptographic attack is an attack which uses asymmetric cryptography to implement a cryptographic backdoor. For example, one such attack could be to subtly modify how the public and private key pairs are generated by the cryptosystem so that the private key could be derived from the public key using the attacker's private key. In a well-designed attack, the outputs of the infected cryptosystem would be computationally indistinguishable from the outputs of the corresponding uninfected cryptosystem. If the infected cryptosystem is a black-box implementation such as a hardware security module, a smartcard, or a Trusted Platform Module, a successful attack could go completely unnoticed.
A reverse engineer might be able to uncover a backdoor inserted by an attacker and, when it is a symmetric backdoor, even use it themselves. However, by definition a kleptographic backdoor is asymmetric, and the reverse engineer cannot use it. A kleptographic attack (asymmetric backdoor) requires a private key known only to the attacker in order to use the backdoor. In this case, even a reverse engineer who was well funded and gained complete knowledge of the backdoor could not extract the plaintext without the attacker's private key.
=== Construction ===
Kleptographic attacks can be constructed as a cryptotrojan that infects a cryptosystem and opens a backdoor for the attacker, or can be implemented by the manufacturer of a cryptosystem. The attack does not necessarily have to reveal the entirety of the cryptosystem's output; a more complicated attack technique may alternate between producing uninfected output and insecure data with the backdoor present.
=== Design ===
Kleptographic attacks have been designed for RSA key generation, the Diffie–Hellman key exchange, the Digital Signature Algorithm, and other cryptographic algorithms and protocols. The SSL, SSH, and IPsec protocols are vulnerable to kleptographic attacks. In each case, the attacker is able to compromise the particular cryptographic algorithm or protocol by inspecting the information in which the backdoor is encoded (e.g., the public key, the digital signature, the key exchange messages, etc.) and then exploiting the logic of the asymmetric backdoor using their secret key (usually a private key).
A. Juels and J. Guajardo proposed a method (KEGVER) through which a third party can verify RSA key generation. This is devised as a form of distributed key generation in which the secret key is only known to the black box itself. This assures that the key generation process was not modified and that the private key cannot be reproduced through a kleptographic attack.
=== Examples ===
Four practical examples of kleptographic attacks (including a simplified SETUP attack against RSA) can be found in JCrypTool 1.0, the platform-independent version of the open-source CrypTool project. A demonstration of the prevention of kleptographic attacks by means of the KEGVER method is also implemented in JCrypTool.
The Dual_EC_DRBG cryptographic pseudo-random number generator from the NIST SP 800-90A is thought to contain a kleptographic backdoor. Dual_EC_DRBG utilizes elliptic curve cryptography, and NSA is thought to hold a private key which, together with bias flaws in Dual_EC_DRBG, allows NSA to decrypt SSL traffic between computers using Dual_EC_DRBG for example. The algebraic nature of the attack follows the structure of the repeated Dlog Kleptogram in the work of Young and Yung.
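A minimal sketch of such a repeated discrete-log kleptogram, with toy parameters invented here (a 61-bit prime, deliberately insecure): the infected device derives its second session's private key from a Diffie–Hellman secret shared with the attacker's embedded public key, so the attacker can recover that key from passively observed public outputs alone:

```python
import hashlib
import secrets

# Toy group parameters (far too small for real use).
P = 2**61 - 1  # a Mersenne prime
G = 3

def derive_exponent(shared: int) -> int:
    # Hash a shared DH value down to a new private exponent.
    digest = hashlib.sha256(shared.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % (P - 1)

# The attacker's keypair; the public half is baked into the device.
attacker_priv = secrets.randbelow(P - 2) + 1
attacker_pub = pow(G, attacker_priv, P)

# Session 1: the device's output looks like an honest DH key share.
x1 = secrets.randbelow(P - 2) + 1
m1 = pow(G, x1, P)

# Session 2: the private key is secretly derived from G^(x1 * attacker_priv).
x2 = derive_exponent(pow(attacker_pub, x1, P))
m2 = pow(G, x2, P)

# The attacker, observing only the public value m1, recomputes x2.
recovered_x2 = derive_exponent(pow(m1, attacker_priv, P))
assert recovered_x2 == x2 and pow(G, recovered_x2, P) == m2
```

Without attacker_priv, the outputs m1 and m2 are indistinguishable from honest key shares; with it, the session-2 private key leaks entirely, which is the defining asymmetry of a kleptographic backdoor.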
== References ==
A law enforcement agency (LEA) is any government agency responsible for law enforcement within a specific jurisdiction through the employment and deployment of law enforcement officers and their resources. The most common type of law enforcement agency is the police, but various other forms exist as well, including agencies that focus on specific legal violations, or are organized and overseen by certain authorities. They typically have various powers and legal rights to allow them to perform their duties, such as the power of arrest and the use of force.
== Jurisdiction ==
An LEA which has its ability to apply its powers restricted in some way is said to operate within a jurisdiction.
Jurisdictions are traditionally restricted to a geographic area and territory. An LEA might be able to apply its powers within a state (e.g. the National Police for the entirety of France), within an administrative division (e.g. the Ontario Provincial Police for Ontario, Canada), within a division of an administrative division (e.g. the Miami-Dade Police Department for Miami-Dade County, Florida, United States), or across a collection of states, typically within an international organization or political union (e.g. Europol for the European Union).
Sometimes, an LEA's jurisdiction is determined by the type of violation committed relative to the laws the LEA enforces, who or what the violation affects, or the seriousness of the violation. For example, in the United States, the Postal Inspection Service primarily investigates crimes affecting or misusing the services of the United States Postal Service, such as mail and wire fraud. If, hypothetically, a Postal Inspection Service investigation uncovered tobacco smuggling, the Bureau of Alcohol, Tobacco, Firearms and Explosives would be involved, but the Drug Enforcement Administration would not: although it investigates drug smuggling, its jurisdiction does not specifically cover tobacco smuggling. In other cases, an LEA's involvement is determined based on whether its involvement is requested; the Australian Federal Police, for instance, has jurisdiction over all of Australia, but usually takes on complex serious matters referred to it by another agency, and the agency will undertake its own investigations of less serious or complex matters by consensus.
LEA jurisdictions for a country and its divisions can typically be at more than one level. The United States has five basic tiers of law enforcement jurisdiction: federal, state, county, municipality, and special jurisdiction (tribal, airport, transit, railroad, etc.). Only the municipal, county, and state levels are involved in direct policing (i.e. uniformed officers with marked cars and regular patrols), and these can still depend on each agency's role and function. As an example for the American tiers, the Chicago Police Department has jurisdiction over Chicago, but not necessarily the rest of Cook County; while the Cook County Sheriff's Office has jurisdiction over Cook County, for the most part they patrol unincorporated area and operate Cook County Jail, and leave municipalities to municipal police departments; and the rest of Illinois, primarily its state highways, are under the jurisdiction of the Illinois State Police. All three technically have overlapping jurisdictions, and though their regular duties are fairly different and they typically avoid each other's responsible areas (the Cook County Sheriff's Office typically avoids patrolling Chicago unless it is for penal or court-related duties), they are still capable of assisting each other if necessary, usually in the form of higher-tier agencies assisting lower-tier agencies.
In some countries, national or federal police may be involved in direct policing as well, though what they focus on and what their duties are may vary. In Brazil, there are five federal police forces with national jurisdiction—the Federal Police of Brazil, the Federal Highway Police, the Federal Railroad Police, the Federal Penal Police, and the National Public Security Force—but the Highway Police, Railroad Police, and Penal Police are restricted to specific area jurisdictions (the Brazilian Highway System, railways, and prisons respectively) and do not investigate crimes, the Federal Police performs various police duties across the country and does investigate crimes, while the National Public Security Force is a rapid reaction force deployed to assist state authorities on request.
=== Operational areas ===
Often, a LEA's jurisdiction will be geographically divided into operations areas for administrative and logistical efficiency reasons. An operations area is often called a command, division, or office. Colloquially, they are known as beats.
While the operations area of a LEA is sometimes referred to as a jurisdiction, any LEA operations area usually still has legal jurisdiction in all geographic areas the LEA operates, but by policy and consensus the operations area does not normally operate in other geographical operations areas of the LEA. For example, since 2019 the frontline or territorial policing of the United Kingdom's Metropolitan Police has been divided into 12 Basic Command Units, each consisting of two, three, or four of the London boroughs, while the New York City Police Department is divided into 77 precincts.
Sometimes, the one legal jurisdiction is covered by more than one LEA, again for administrative and logistical efficiency reasons, or arising from policy, or historical reasons. In England and Wales, LEAs called constabularies have jurisdiction over their respective areas of legal coverage, but they do not normally operate out of their areas without formal liaison between them. The primary difference between separate agencies and operational areas within the one legal jurisdiction is the degree of flexibility to move resources between versus within agencies. When multiple LEAs cover the one legal jurisdiction, each agency still typically organizes itself into operations areas. In the United States, within a state's legal jurisdiction, county and city LEAs do not have full legal jurisdictional flexibility throughout the state, and this has led in part to mergers of adjacent police agencies.
=== International and multinational law enforcement agencies ===
Jurisdictionally, there can be an important difference between international LEAs and multinational LEAs, even though both are often referred to as "international", even in official documents. An international law enforcement agency has jurisdiction in, or operates in, multiple countries and across state borders, such as Interpol. A multinational law enforcement agency will typically operate in only one country, or one division of a country, but is made up of personnel from several countries, such as the European Union Police Mission in Bosnia and Herzegovina. International LEAs are typically also multinational, but multinational LEAs are typically not international. LEAs which operate across a collection of countries tend to assist in law enforcement activities, rather than directly enforcing laws, by facilitating the sharing of information necessary for law enforcement between LEAs within those countries.
Within a country, the jurisdiction of law enforcement agencies can be organized and structured in a number of ways to provide law enforcement throughout the country. A law enforcement agency's jurisdiction can be for the whole country or for a division or sub-division within the country.
=== Federal and national law enforcement agencies ===
When a LEA's jurisdiction is for the whole country, it is usually one of two broad types, either federal or national.
==== Federal ====
When the country has a federal constitution, an LEA responsible for the entire country is referred to as a federal law enforcement agency.
The responsibilities of a federal LEA vary from country to country. Federal LEA responsibilities are typically countering fraud against the federation, immigration and border control regarding people and goods, investigating currency counterfeiting, policing of airports and protection of designated national infrastructure, national security, and the protection of the country's head of state and of other designated very important persons, such as the U.S. Secret Service or the U.S. Department of State Diplomatic Security Service.
A federal police agency is a federal LEA that also has the typical police responsibilities of social order and public safety, as well as federal law enforcement responsibilities. However, a federal police agency will not usually exercise its powers at a divisional level; such exercising of powers is typically governed by specific arrangements between the federal and divisional governing bodies.
Examples of federal law enforcement agencies include the:
Argentine Federal Police (Argentina)
Australian Federal Police (Australia)
Federal Police of Brazil (Brazil)
Royal Canadian Mounted Police (Canada)
Bundespolizei (Germany)
Mexican Federal Police (Mexico)
Federal Bureau of Investigation, Federal Protective Service, United States Park Police (United States)
Central Bureau of Investigation (India)
A federated approach to the organization of a country does not necessarily indicate the nature of the organization of law enforcement agencies within the country. Some countries, such as Austria and Belgium, have a relatively unified approach to law enforcement, but still have operationally separate units for federal law enforcement and divisional policing. The United States has a highly fractured approach to law enforcement agencies generally, and this is reflected in American federal law enforcement agencies.
===== Relationship between federal and federated divisions =====
In a federation, there will typically be separate LEAs with jurisdictions for each division within the federation. A federal LEA will have primary responsibility for laws which affect the federation as a whole, and which have been enacted by the governing body of the federation.
Members of a federal LEA may be given jurisdiction within a division of a federation for laws enacted by the governing bodies of the divisions either by the relevant division within the federation, or by the federation's governing body. By way of example, the Australian Federal Police is a federal agency and has the legal power to enforce the laws enacted by any Australian state, but will generally only enforce state law if there is a federal aspect to investigate.
Typically, federal LEAs have relatively narrow police responsibilities; the individual divisions within the federation usually establish their own police agencies to enforce laws within the division. However, in some countries federal agencies have jurisdiction in divisions of the federation. This typically happens when a division does not have its own independent status and is dependent on the federation. The Royal Canadian Mounted Police (RCMP) is one such federal agency, which also acts as the sole police agency for Canada's three territories: the Northwest Territories, Nunavut, and Yukon. This is a direct jurisdictional responsibility and is different from the situation when a governing body makes arrangements with another governing body's LEA to provide law enforcement for its subjects.

In federal polities, actions that violate laws in multiple geographical divisions within the federation are escalated to a federal LEA. In other cases, specific crimes deemed to be serious are escalated; in the United States, the FBI has responsibility for the investigation of all kidnapping cases, regardless of whether the crossing of state lines is involved. Some countries provide law enforcement on land and in buildings owned or controlled by the federation by using a federal LEA; for example, the U.S. Department of Homeland Security is responsible for some aspects of federal property law enforcement.
Typically, LEAs working in different jurisdictions which overlap in the type of law non-compliance actively establish mechanisms for cooperation, joint operations, and joint task forces. Often, members of an LEA working outside of their normal jurisdiction on joint operations or task forces are sworn in as special members of the host jurisdiction.
==== National ====
A national law enforcement agency is a LEA in a country which does not have divisions capable of making their own laws. A national LEA has the combined responsibilities that federal LEAs and divisional LEAs would have in a federated country. National LEAs are usually divided into operational areas.
To help avoid confusion over jurisdictional responsibility, some federal LEAs, such as the U.S. FBI, explicitly advise that they are not a national law enforcement agency.
A national police agency is a national LEA that also has the typical police responsibilities of social order and public safety as well as national law enforcement responsibilities. Examples of countries with non-federal national police agencies are New Zealand, Italy, Albania, Indonesia, France, Ireland, Japan, Netherlands, Malaysia, the Philippines, and Nicaragua.
=== State law enforcement agencies ===
State police, provincial police, or regional police are a type of sub-national territorial police force found in nations organized as federations, typically in North America, South Asia, and Oceania, where much of policing is organized at the level of the constituent states or provinces. These forces typically have jurisdiction over the relevant sub-national division, and may cooperate in law enforcement activities with municipal or national police where either exist.
== Types ==
LEAs can be responsible for the enforcement of laws affecting the behavior of people or the general community (e.g. New York City Police Department), the behavior of commercial organizations and corporations (e.g. Australian Securities and Investments Commission), or for the interests of the country as a whole (e.g. United Kingdom's His Majesty's Revenue and Customs).
=== Police ===
Many law enforcement agencies are police agencies that have a broad range of powers and responsibilities. Police agencies, however, also often have a range of responsibilities not specifically related to law enforcement; these relate to social order and public safety. While this understanding of policing as encompassing more than just law enforcement has grown with, and is commonly understood by, society, it is also recognized formally by scholars and academics. A police agency's jurisdiction for social order and public safety will normally be the same as its jurisdiction for law enforcement.
=== Military ===
Military organizations often have law enforcement units. These units within armed forces are generally referred to as military police. This may refer to:
a section of the military solely responsible for policing the armed forces (referred to as provosts)
a separate section of the armed forces responsible for policing in the armed forces and in the ministry of defence (such as the Żandarmeria Wojskowa)
a section of the military solely responsible for policing the civilian population (such as the Romanian Gendarmerie)
the preventative police, with military status, of a state (such as the Brazilian Military Police)
The exact usage and meaning of the terms military police, provost, security forces, and gendarmerie vary from country to country.
Non-military law enforcement agencies are sometimes referred to as civilian police, but usually only in contexts where they need to be distinguished from military police. However, they may still possess a military-like structure and protocol.
In most countries, the term law enforcement agency when used formally includes agencies other than only police agencies. The term law enforcement agency is often used in the United States to refer to police agencies, however, it also includes agencies with peace officer status or agencies which prosecute criminal acts. A county prosecutor or district attorney is considered to be the chief law enforcement officer of a county.
=== Other ===
Other responsibilities of LEAs typically relate to assisting subjects to avoid non-compliance with a law, assisting subjects to remain safe and secure, and assisting subjects after a safety-impacting event. These include:
policing
social order
public incident mediation
pre-empting anti-social behaviour
dangerous event public logistics
public safety
general search and rescue
crowd control
police presence in public areas
regulation
services and facilities
disaster victim identification
education and awareness campaigns
victim prevention and avoidance
law compliance
Many LEAs have administrative and service responsibilities, often as their major responsibility, as well as their law enforcement responsibilities. This is typical of agencies such as customs or taxation agencies, which provide services and facilities to allow subjects to comply with relevant laws as their primary responsibilities.
==== Private ====
Private police are law enforcement bodies that are owned or controlled by non-governmental entities. Private police are often utilized in places where public law enforcement is seen as being under-provided. For example, the San Francisco Patrol Special Police was formed to increase security in San Francisco during the California gold rush, and protected the homes and businesses of private clients until February 2024.
===== Railroad police =====
In Canada and the United States, many railroad companies have private railroad police. Examples include the BNSF Police Department, Canadian National Police Service, Canadian Pacific Kansas City Police Service, Union Pacific Police Department, etc. The Canadian National Police Service and Canadian Pacific Kansas City Police Service operate in both countries while the others operate only in the US.
==== Regulatory ====
Many LEAs are also involved in the monitoring or application of regulations and codes of practice. See, for example, the Australian Commercial Television Code of Practice, building code, and code enforcement. Monitoring of the application of regulations and codes of practice is not normally considered law enforcement. However, consistent non-compliance by a subject with regulations or codes of practice may result in the revocation of the subject's license to operate, and operating without a license is typically illegal. Also, the failure to apply codes of practice can endanger other subjects' safety and lives, which can also be illegal.
=== Religious ===
A LEA can be responsible for enforcing secular law or religious law such as Sharia or Halakha. The significant majority of LEAs around the world are secular, and their governing bodies separate religious matters from the governance of their subjects. Religious law enforcement agencies, such as Saudi Arabia's Mutaween or Iran's Guidance Patrol, exist where full separation of government and religious doctrine has not occurred, and are generally considered police agencies, typically religious police, because their primary responsibility is for social order within their jurisdiction and the relevant social order being highly codified as laws.
=== Internal affairs ===
Often, a LEA will have a specific internal unit to ensure that the LEA is complying with relevant laws such as the U.S. Federal Bureau of Investigation's Office of Professional Responsibility. In some countries and regions, specialised or separate LEAs are established to ensure that other LEAs comply with laws and investigate potential violations of laws by law enforcers, like the New South Wales Independent Commission Against Corruption or the Ontario Special Investigations Unit.
== Establishment and constitution ==
Typically, a LEA is established and constituted by the governing body it is supporting, and the personnel making up the LEA are from the governing body's subjects.
For reasons of either logistical efficiency or policy, some divisions within a country will not establish their own LEAs but will instead make arrangements with another LEA, typically from the same country, to provide law enforcement within the division. For example, the Royal Canadian Mounted Police (RCMP) is a federal agency and is contracted by most of Canada's provinces and many municipalities to police them, even though law enforcement in Canada is constitutionally a divided responsibility. This arrangement has been achieved by formal agreement between those provinces and municipalities and the federal government, and reduces the number of agencies policing the same geographical area.
In circumstances where a country or division within a country is not able to establish stable or effective LEAs, typically police agencies, the country might invite other countries to provide personnel, experience, and organisational structure to constitute a LEA, such as the Regional Assistance Mission to the Solomon Islands which has a Participating Police Force working in conjunction with the Solomon Islands Police Force. In circumstances where the United Nations is already providing an administrative support capability within the country, the United Nations may directly establish and constitute a LEA on behalf of the country, as occurred under the United Nations Transitional Administration in East Timor, which operated in Timor-Leste from 1999 to 2002; related is the United Nations Police, which helps provide law enforcement during United Nations peacekeeping missions.
== Powers and law exemptions ==
To enable a LEA to prevent, detect, and investigate non-compliance with laws, the LEA is endowed with powers by its governing body which are not available to non-LEA subjects of a governing body. Typically, a LEA is empowered to varying degrees to:
collect information about subjects in the LEA's jurisdiction
intrusively search for information and evidence related to the non-compliance with a law
seize evidence of non-compliance with a law
seize property and assets from subjects
direct subjects to provide information related to the non-compliance with a law
arrest and detain subjects, depriving them of their liberty, but not incarcerate subjects, for alleged non-compliance with a law
lawfully deceive subjects
These powers are not available to subjects other than LEAs within the LEA's jurisdiction and are typically subject to judicial and civil overview.
Usually, these powers are only allowed when it can be shown that a subject is probably already not complying with a law. For example, to undertake an intrusive search, typically a LEA must make an argument and convince a judicial officer of the need to undertake the intrusive search on the basis that it will help detect or prove non-compliance with a law by a specified subject. The judicial officer, if they agree, will then issue a legal instrument, typically called a search warrant, to the LEA, which must be presented to the relevant subject if possible.
=== Lawful deception and law exemption ===
Subjects who do not comply with laws will usually seek to avoid detection by a LEA. When required, in order for the LEA to detect and investigate subjects not complying with laws, the LEA must be able to undertake its activities secretly from the non-complying subject. This, however, may require the LEA to explicitly not comply with a law other subjects must comply with. To allow the LEA to operate and comply with the law, it is given lawful exemption to undertake secret activities. Secret activities by a LEA are often referred to as covert operations.
To deceive a subject and carry out its activities, a LEA may be lawfully allowed to secretly:
Create and operate false identities and personalities and organisations, often referred to as undercover operations or assumed identities, e.g. the Australian Federal Police by virtue of Part 1AC of the Crimes Act 1914.
Allow and assist the illicit movement of licit and illicit substances and wares, sometimes partially substituted with benign materials, often referred to as controlled operations, e.g. Australia's LEAs by virtue of Part 1AB of the Crimes Act 1914.
Listen to and copy communications between subjects, often referred to as telecommunications interception or wiretapping when the communication medium is electronic in nature, e.g. the U.S. Federal Bureau of Investigation by virtue of Title 18 of the United States Code, Part I, Chapter 119, Section 2516.
Intrusively observe, listen to, and track subjects, often referred to as technical operations, e.g. Australian LEAs by virtue of the Surveillance Devices Act 2004.
These powers are typically used to collect information about and evidence of non-compliance with a law, and to identify other non-complying subjects.
Lawful deception and use of law exemption by a LEA is typically subject to very strong judicial or open civil overview. For example, the Australian Federal Police's controlled operations are subject to open civil review by its governing body, the Parliament of Australia.
=== Other exemptions from laws ===
Law enforcement agencies have other exemptions from laws to allow them to operate in a practical way. For example, many jurisdictions have laws which forbid animals from entering certain areas for health and safety reasons. LEAs are typically exempted from these laws to allow dogs to be used for search and rescue, drug search, explosives search, chase and arrest, etc. This type of exemption is not unique to LEAs. Sight assist dogs are also typically exempted from access restrictions. Members of LEAs may be permitted to openly display firearms in places where this is typically prohibited to civilians, violate various traffic laws in the course of their duties, or detain persons against their will.
Interpol is an international organisation and is essentially stateless, but must operate from some physical location. Interpol is protected from certain laws of the country where it is physically located.
== See also ==
List of anti-corruption agencies
Code enforcement
List of law enforcement agencies grouped by sub category
List of protective service agencies
List of secret police organizations
List of specialist law enforcement agencies
Outline of law enforcement
Specialist law enforcement agency
Traffic police
State police
Sheriff
Police
Police foundation
== References ==
== External links ==
Berlin: Metropolis of crime 1918–1933 Part 1 Archived 2019-02-12 at the Wayback Machine, Part 2 Archived 2019-02-12 at the Wayback Machine (warning: graphic depiction of murder and other violence), a Deutsche Welle English television documentary comprehensively depicting a major European police force and its methods, investigations, and political activities during the early 20th century
Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys to provide equivalent security, compared to cryptosystems based on modular exponentiation in Galois fields, such as the RSA cryptosystem and ElGamal cryptosystem.
Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization.
== History ==
The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms entered wide use in 2004–2005.
In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4 has ten recommended finite fields:
Five prime fields \(\mathbb{F}_{p}\) for certain primes p of sizes 192, 224, 256, 384, and 521 bits. For each of the prime fields, one elliptic curve is recommended.
Five binary fields \(\mathbb{F}_{2^{m}}\) for m equal to 163, 233, 283, 409, and 571. For each of the binary fields, one elliptic curve and one Koblitz curve were selected.
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were chosen for optimal security and implementation efficiency.
At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B, which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information. National Institute of Standards and Technology (NIST) has endorsed elliptic curve cryptography in its Suite B set of recommended algorithms, specifically elliptic-curve Diffie–Hellman (ECDH) for key exchange and Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signature. The NSA allows their use for protecting information classified up to top secret with 384-bit keys.
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption.
Elliptic curve cryptography is used successfully in numerous popular protocols, such as Transport Layer Security and Bitcoin.
=== Security concerns ===
In 2013, The New York Times stated that Dual Elliptic Curve Deterministic Random Bit Generation (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups.
Additionally, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns about quantum computing attacks on ECC.
=== Patents ===
While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology, including at least one ECC scheme (ECMQV). However, RSA Laboratories and Daniel J. Bernstein have argued that the US government elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing those patents.
== Elliptic curve theory ==
For the purposes of this article, an elliptic curve is a plane curve over a finite field (rather than the real numbers) which consists of the points satisfying the equation
\[y^{2}=x^{3}+ax+b,\]
along with a distinguished point at infinity, denoted ∞. The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation would be somewhat more complicated.
This set of points, together with the group operation of elliptic curves, is an abelian group, with the point at infinity as an identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety:
\[\operatorname{Div}^{0}(E)\to \operatorname{Pic}^{0}(E)\simeq E.\]
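The group operation above can be made concrete with a short sketch of affine point addition over a small prime field. The curve parameters below are illustrative assumptions, not a standardized curve, and the formulas are the usual chord-and-tangent rules for characteristic not 2 or 3.

```python
# Affine point addition on y^2 = x^3 + ax + b over F_p (characteristic != 2, 3).
# INF stands for the point at infinity, the identity element of the group.

p, a, b = 97, 2, 3      # illustrative toy parameters, not a standard curve
INF = None

def add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                      # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# Enumerate the affine points satisfying the curve equation:
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - x**3 - a * x - b) % p == 0]
P = points[0]
R = add(P, P)            # doubling a point stays on the curve (or hits INF)
assert R is INF or (R[1]**2 - R[0]**3 - a * R[0] - b) % p == 0
```

The three-argument `pow(x, -1, p)` computes a modular inverse (Python 3.8+); a production implementation would instead use projective coordinates and constant-time arithmetic.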
=== Application to cryptography ===
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems, such as RSA's 1983 patent, based their security on the assumption that it is difficult to factor a large integer composed of two or more large prime factors which are far apart. For later elliptic-curve-based protocols, the base assumption is that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible (the computational Diffie–Hellman assumption): this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original point and product point. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem.
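The asymmetry of the ECDLP can be sketched on a toy curve: computing Q = kG by double-and-add takes O(log k) group operations, while recovering k from (G, Q) with no structure to exploit means stepping through multiples. All parameters here are illustrative assumptions, nothing like real-world sizes, where even the best known generic attacks (e.g. Pollard's rho) need about 2^128 work on a 256-bit curve.

```python
# ECDLP sketch on a toy curve y^2 = x^3 + ax + b over F_p (illustrative only).
p, a, b = 97, 2, 3
G = (3, 6)              # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 mod 97
INF = None              # point at infinity (group identity)

def add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF
    lam = ((3 * x1 * x1 + a) * pow(2 * y1, -1, p) if P == Q
           else (y2 - y1) * pow(x2 - x1, -1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):          # double-and-add: O(log k) group operations
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

k = 29                  # the secret scalar
Q = mul(k, G)           # forward direction: fast

# Reverse direction: brute force steps through multiples of G until it hits Q.
R, guess = G, 1
while R != Q:
    R = add(R, G)
    guess += 1
assert mul(guess, G) == Q    # a scalar mapping G to Q, found only by search
```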
The primary benefit promised by elliptic curve cryptography over alternatives such as RSA is a smaller key size, reducing storage and transmission requirements. For example, a 256-bit elliptic curve public key should provide comparable security to a 3072-bit RSA public key.
=== Cryptographic schemes ===
Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the group
{\displaystyle (\mathbb {Z} _{p})^{\times }}
with an elliptic curve:
The Elliptic-curve Diffie–Hellman (ECDH) key agreement scheme is based on the Diffie–Hellman scheme,
The Elliptic Curve Integrated Encryption Scheme (ECIES), also known as Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme,
The Elliptic Curve Digital Signature Algorithm (ECDSA) is based on the Digital Signature Algorithm,
The deformation scheme using Harrison's p-adic Manhattan metric,
The Edwards-curve Digital Signature Algorithm (EdDSA) is based on Schnorr signature and uses twisted Edwards curves,
The ECMQV key agreement scheme is based on the MQV key agreement scheme,
The ECQV implicit certificate scheme.
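Of the schemes above, ECDH is the simplest to sketch: each party multiplies the other's public point by its own secret scalar, and both arrive at the same shared point. The curve and scalars below are illustrative toys (y² = x³ + 2x + 2 over F₁₇), not real parameters.

```python
# Minimal ECDH sketch on a toy curve.  Real deployments use standardized
# curves such as P-256 or Curve25519; only the scale differs, the group-law
# arithmetic is the same.
INF = None
P_FIELD, A = 17, 2

def ec_add(P, Q):
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_FIELD == 0:
        return INF
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_FIELD) % P_FIELD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_FIELD) % P_FIELD
    x3 = (s * s - x1 - x2) % P_FIELD
    return (x3, (s * (x1 - x3) - y1) % P_FIELD)

def ec_mul(k, P):
    R = INF
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P); k >>= 1
    return R

G = (5, 1)                        # base point of order 19
a_priv, b_priv = 3, 7             # each party's secret scalar (toy values)
A_pub, B_pub = ec_mul(a_priv, G), ec_mul(b_priv, G)   # exchanged publicly
# Each side multiplies the other's public point by its own secret:
shared_a = ec_mul(a_priv, B_pub)
shared_b = ec_mul(b_priv, A_pub)
assert shared_a == shared_b       # both arrive at (a*b)G
```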
== Implementation ==
Some common implementation considerations include:
=== Domain parameters ===
To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The size of the field used is typically either prime (and denoted as p) or is a power of two (
{\displaystyle 2^{m}}
); the latter case is called the binary case, and this case necessitates the choice of an auxiliary curve denoted by f. Thus the field is defined by p in the prime case and the pair of m and f in the binary case. The elliptic curve is defined by the constants a and b used in its defining equation. Finally, the cyclic subgroup is defined by its generator (a.k.a. base point) G. For cryptographic application, the order of G, that is the smallest positive number n such that
{\displaystyle nG={\mathcal {O}}}
(the point at infinity of the curve, and the identity element), is normally prime. Since n is the size of a subgroup of
{\displaystyle E(\mathbb {F} _{p})}
it follows from Lagrange's theorem that the number
{\displaystyle h={\frac {1}{n}}|E(\mathbb {F} _{p})|}
is an integer. In cryptographic applications, this number h, called the cofactor, must be small (
{\displaystyle h\leq 4}
) and, preferably,
{\displaystyle h=1}
. To summarize: in the prime case, the domain parameters are
{\displaystyle (p,a,b,G,n,h)}
; in the binary case, they are
{\displaystyle (m,f,a,b,G,n,h)}.
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use.
The generation of domain parameters is not usually done by each participant because this involves computing the number of points on a curve which is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the unique object identifier defined in the standard documents:
NIST, Recommended Elliptic Curves for Government Use
SECG, SEC 2: Recommended Elliptic Curve Domain Parameters
ECC Brainpool (RFC 5639), ECC Brainpool Standard Curves and Curve Generation
SECG test vectors are also available. NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be specified either by value or by name.
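Even when using a named curve, the published domain parameters can be checked rather than trusted blindly. As a sketch, the following verifies SECG's secp256k1 constants from SEC 2: that G satisfies the curve equation, and that nG is the point at infinity. A full validation per the standards checks more conditions (e.g., that p and n are prime).

```python
# Checking a named curve's published domain parameters.  The constants below
# are secp256k1's (p, a, b, G, n, h) as published by SECG in SEC 2.
p = 2**256 - 2**32 - 977
a, b, h = 0, 7, 1
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                       # point at infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P); k >>= 1
    return R

assert Gy * Gy % p == (Gx**3 + a * Gx + b) % p   # G satisfies y^2 = x^3 + ax + b
assert ec_mul(n, (Gx, Gy)) is None               # order of G is n, so nG = infinity
```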
If, despite the preceding admonition, one decides to construct one's own domain parameters, one should select the underlying field and then find a curve with an appropriate (i.e., near-prime) number of points using one of the following methods:
Select a random curve and use a general point-counting algorithm, for example, Schoof's algorithm or the Schoof–Elkies–Atkin algorithm,
Select a random curve from a family which allows easy calculation of the number of points (e.g., Koblitz curves), or
Select the number of points and generate a curve with this number of points using the complex multiplication technique.
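As a naive stand-in for Schoof's algorithm, points can simply be enumerated. This is feasible only at toy field sizes, but it shows where the subgroup order n and the cofactor h come from.

```python
# Counting points on a toy curve by brute force, a stand-in for Schoof's
# algorithm (which does this job in polynomial time at cryptographic sizes).
p, a, b = 17, 2, 2

def curve_order(p, a, b):
    """|E(F_p)| = 1 (the point at infinity) plus the number of affine
    solutions of y^2 = x^3 + ax + b over F_p."""
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)   # y-values per square
    count = 1                                         # the point at infinity
    for x in range(p):
        count += len(squares.get((x**3 + a * x + b) % p, []))
    return count

N = curve_order(p, a, b)
print(N)        # 19: prime, so n = 19 and the cofactor h = N // n = 1
```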
Several classes of curves are weak and should be avoided:
Curves over
{\displaystyle \mathbb {F} _{2^{m}}}
with non-prime m are vulnerable to Weil descent attacks.
Curves such that n divides
{\displaystyle p^{B}-1}
(where p is the characteristic of the field: q for a prime field, or
{\displaystyle 2}
for a binary field) for sufficiently small B are vulnerable to Menezes–Okamoto–Vanstone (MOV) attack which applies usual discrete logarithm problem (DLP) in a small-degree extension field of
{\displaystyle \mathbb {F} _{p}}
to solve ECDLP. The bound B should be chosen so that discrete logarithms in the field
{\displaystyle \mathbb {F} _{p^{B}}}
are at least as difficult to compute as discrete logs on the elliptic curve
{\displaystyle E(\mathbb {F} _{q})}.
Curves such that
{\displaystyle |E(\mathbb {F} _{q})|=q}
are vulnerable to the attack that maps the points on the curve to the additive group of
{\displaystyle \mathbb {F} _{q}}.
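The anomalous and MOV conditions above can be checked mechanically. The sketch below computes the embedding degree B by brute force for a prime-field curve; it is an illustration, not a production validator.

```python
# Quick sanity checks against the weak classes above, for a prime-field curve:
# the anomalous condition |E(F_q)| = q, and the MOV condition that the
# subgroup order n divides q^B - 1 for some small embedding degree B.
def is_anomalous(order, q):
    return order == q            # additive-transfer attack applies

def embedding_degree(n, q, limit=100):
    """Smallest B with n | q^B - 1, or None if it exceeds `limit`.
    A small B means the MOV attack moves the ECDLP into F_{q^B}."""
    t = 1
    for B in range(1, limit + 1):
        t = t * q % n
        if t == 1:
            return B
    return None

# Toy example: n = 19, q = 17.  Here 19 divides 17^9 - 1, so B = 9,
# which is huge relative to the toy field size; that is what one wants.
print(embedding_degree(19, 17))
```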
=== Key sizes ===
Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.) need
{\displaystyle O({\sqrt {n}})}
steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over
{\displaystyle \mathbb {F} _{q}}
, where
{\displaystyle q\approx 2^{256}}
. This can be contrasted with finite-field cryptography (e.g., DSA) which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA) which requires a 3072-bit value of n, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited.
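The square-root cost behind these key-size estimates is easiest to demonstrate with baby-step giant-step, shown here in a multiplicative group modulo a toy prime for brevity; the algorithm and its √n cost carry over verbatim to elliptic-curve groups, which is why k-bit security demands a group of order around 2^(2k).

```python
# Baby-step giant-step: a generic O(sqrt n) discrete-log algorithm.  The
# group here is (Z/pZ)* for a small prime p; with ec_mul in place of pow,
# the identical algorithm solves the ECDLP, at the identical sqrt(n) cost.
from math import isqrt

def bsgs(g, target, n, p):
    """Find x with g^x = target (mod p), 0 <= x < n, in O(sqrt n) time/space."""
    m = isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}        # baby steps g^0 .. g^(m-1)
    giant = pow(g, -m, p)                             # g^(-m), the giant step
    y = target
    for i in range(m):
        if y in baby:
            return i * m + baby[y]                    # x = i*m + j
        y = y * giant % p
    return None

p, g = 1000003, 2          # toy group; real parameters are hundreds of bits
x = 123456
assert bsgs(g, pow(g, x, p), p - 1, p) == x
```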
The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously. The binary field case was broken in April 2004 using 2600 computers over 17 months.
A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA.
=== Projective coordinates ===
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in
{\displaystyle \mathbb {F} _{q}}
but also an inversion operation. The inversion (for given
{\displaystyle x\in \mathbb {F} _{q}}
find
{\displaystyle y\in \mathbb {F} _{q}}
such that
{\displaystyle xy=1}
) is one to two orders of magnitude slower than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems were proposed: in the projective system each point is represented by three coordinates
{\displaystyle (X,Y,Z)}
using the following relation:
{\displaystyle x={\frac {X}{Z}}}, {\displaystyle y={\frac {Y}{Z}}}
; in the Jacobian system a point is also represented with three coordinates
{\displaystyle (X,Y,Z)}
, but a different relation is used:
{\displaystyle x={\frac {X}{Z^{2}}}}, {\displaystyle y={\frac {Y}{Z^{3}}}}
; in the López–Dahab system the relation is
{\displaystyle x={\frac {X}{Z}}}, {\displaystyle y={\frac {Y}{Z^{2}}}}
; in the modified Jacobian system the same relations are used but four coordinates are stored and used for calculations
{\displaystyle (X,Y,Z,aZ^{4})}
; and in the Chudnovsky Jacobian system five coordinates are used
{\displaystyle (X,Y,Z,Z^{2},Z^{3})}
. Note that there may be different naming conventions, for example, IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.
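A sketch of Jacobian-coordinate doubling (x = X/Z², y = Y/Z³): the standard formulas use only field multiplications and additions, deferring the single inversion to the final conversion back to affine form. The toy curve y² = x³ + 2x + 2 over F₁₇ is used for the cross-check.

```python
# Point doubling in Jacobian coordinates: no field inversion is needed until
# the final conversion back to affine coordinates.  These are the standard
# doubling formulas for y^2 = x^3 + ax + b.
p, a = 17, 2

def double_jacobian(X, Y, Z):
    # S = 4XY^2, M = 3X^2 + aZ^4, then X' = M^2 - 2S,
    # Y' = M(S - X') - 8Y^4, Z' = 2YZ: multiplications only, no inversion.
    S = 4 * X * Y * Y % p
    M = (3 * X * X + a * pow(Z, 4, p)) % p
    X3 = (M * M - 2 * S) % p
    Y3 = (M * (S - X3) - 8 * pow(Y, 4, p)) % p
    Z3 = 2 * Y * Z % p
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    zinv = pow(Z, -1, p)                 # the single inversion, done once
    return X * zinv**2 % p, Y * zinv**3 % p

def double_affine(x, y):
    # Affine doubling needs an inversion on every call, hence the slowdown.
    s = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (s * s - 2 * x) % p
    return x3, (s * (x - x3) - y) % p

G = (5, 1)                               # point on y^2 = x^3 + 2x + 2 over F_17
assert to_affine(*double_jacobian(G[0], G[1], 1)) == double_affine(*G)
```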
=== Fast reduction (NIST curves) ===
Reduction modulo p (which is needed for addition and multiplication) can be executed much faster if the prime p is a pseudo-Mersenne prime, that is
{\displaystyle p\approx 2^{d}}
; for example,
{\displaystyle p=2^{521}-1}
or
{\displaystyle p=2^{256}-2^{32}-2^{9}-2^{8}-2^{7}-2^{6}-2^{4}-1.}
Compared to Barrett reduction, there can be an order of magnitude speed-up. The speed-up here is practical rather than theoretical, and derives from the fact that reduction modulo numbers close to a power of two can be performed efficiently by computers operating on binary numbers with bitwise operations.
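For the Mersenne prime p = 2⁵²¹ − 1 (the modulus of NIST's P-521), the folding trick looks like this: since 2⁵²¹ ≡ 1 (mod p), the high bits can simply be added back onto the low bits.

```python
# Reduction modulo the Mersenne prime p = 2^521 - 1 using only shifts,
# masks and adds, with no division: 2^521 = 1 (mod p), so the high half
# of x folds directly onto the low half.
D = 521
P = 2**D - 1

def reduce_mersenne(x):
    """x mod p for 0 <= x < p^2, without a division instruction."""
    while x >> D:
        x = (x & P) + (x >> D)   # fold: lo + hi, because 2^521 = 1 mod p
    return x if x < P else x - P

import random
for _ in range(1000):
    x = random.randrange(P * P)
    assert reduce_mersenne(x) == x % P
```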
The curves over
{\displaystyle \mathbb {F} _{p}}
with pseudo-Mersenne p are recommended by NIST. Yet another advantage of the NIST curves is that they use a = −3, which improves addition in Jacobian coordinates.
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are suboptimal. Other curves are more secure and run just as fast.
== Security ==
=== Side-channel attacks ===
Unlike most other DLP systems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P = Q) and general addition (P ≠ Q) depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods (note that this does not increase computation time). Alternatively one can use an Edwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation. Another concern for ECC-systems is the danger of fault attacks, especially when running on smart cards.
=== Backdoors ===
Cryptographic experts have expressed concerns that the National Security Agency has inserted a kleptographic backdoor into at least one elliptic curve-based pseudo random generator. Internal memos leaked by former NSA contractor Edward Snowden suggest that the NSA put a backdoor in the Dual EC DRBG standard. One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output.
The SafeCurves project has been launched in order to catalog curves that are easy to implement securely and are designed in a fully publicly verifiable way to minimize the chance of a backdoor.
=== Quantum computing attack ===
Shor's algorithm can be used to break elliptic curve cryptography by computing discrete logarithms on a hypothetical quantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates. For the binary elliptic curve case, 906 qubits are necessary (to break 128 bits of security). In comparison, using Shor's algorithm to break the RSA algorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away.
Supersingular Isogeny Diffie–Hellman Key Exchange claimed to provide a post-quantum secure form of elliptic curve cryptography by using isogenies to implement Diffie–Hellman key exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems. However, new classical attacks undermined the security of this protocol.
In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy."
=== Invalid curve attack ===
When ECC is used in virtual machines, an attacker may use an invalid curve to get a complete ECDH private key.
== Alternative representations ==
Alternative representations of elliptic curves include:
Hessian curves
Edwards curves
Twisted curves
Twisted Hessian curves
Twisted Edwards curve
Doubling-oriented Doche–Icart–Kohel curve
Tripling-oriented Doche–Icart–Kohel curve
Jacobian curve
Montgomery curves
== See also ==
== Notes ==
== References ==
Jacques Vélu, Courbes elliptiques (...), Société Mathématique de France, 57, 1-152, Paris, 1978.
== External links ==
Elliptic Curves at Stanford University
Interactive introduction to elliptic curves and elliptic curve cryptography with Sage by Maike Massierer and the CrypTool team
Media related to Elliptic curve at Wikimedia Commons | Wikipedia/Elliptic_Curve_Cryptography |
In cryptography, a sponge function or sponge construction is any of a class of algorithms with finite internal state that take an input bit stream of any length and produce an output bit stream of any desired length. Sponge functions have both theoretical and practical uses. They can be used to model or implement many cryptographic primitives, including cryptographic hashes, message authentication codes, mask generation functions, stream ciphers, pseudo-random number generators, and authenticated encryption.
== Construction ==
A sponge function is built from three components:
a state memory, S, containing b bits,
a function {\displaystyle f:\{0,1\}^{b}\rightarrow \{0,1\}^{b}}
a padding function P
S is divided into two sections: one of size r (the bitrate) and the remaining part of size c (the capacity). These sections are denoted R and C respectively.
f produces a pseudorandom permutation of the
{\displaystyle 2^{b}}
states from S.
P appends enough bits to the input string so that the length of the padded input is a whole multiple of the bitrate, r. This means the input is segmented into blocks of r bits.
=== Operation ===
The sponge function "absorbs" (in the sponge metaphor) all blocks of a padded input string as follows:
S is initialized to zero
for each r-bit block B of P(string)
R is replaced with R XOR B (using bitwise XOR)
S is replaced by f(S)
The sponge function output is now ready to be produced ("squeezed out") as follows:
repeat until output is full
output the R portion of S
S is replaced by f(S)
If less than r bits remain to be output, then R will be truncated (only part of R will be output).
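The absorb and squeeze phases above can be sketched directly. In this toy the state is b = 16 bits with bitrate r = 8 and capacity c = 8; the permutation f is an arbitrary invertible 16-bit mixer invented for this sketch (nothing like Keccak-f), and the padding is a simplified pad, so only the sponge skeleton mirrors a real construction.

```python
# A toy sponge: b = 16-bit state, bitrate r = 8 (one byte), capacity c = 8.
B_BITS = 16
MASK = (1 << B_BITS) - 1

def f(s):
    """Toy permutation: rotation, xorshift, then an odd affine map.
    Each step is invertible mod 2^16, so f permutes all 2^16 states."""
    s = ((s << 5) | (s >> (B_BITS - 5))) & MASK
    s ^= s >> 3
    return (s * 0x2777 + 1) & MASK

def pad(data):
    """Simplified padding: append 0x01 so the input is never empty and
    always a whole number of r-bit (1-byte) blocks."""
    return data + b"\x01"

def sponge(data, out_len):
    s = 0
    for byte in pad(data):        # absorb: XOR each block into R (low 8 bits)
        s = f(s ^ byte)
    out = bytearray()
    while len(out) < out_len:     # squeeze: emit R, then apply f again
        out.append(s & 0xFF)
        s = f(s)
    return bytes(out)

print(sponge(b"hello", 4).hex())
```

Note the prefix property that falls out of the construction: squeezing fewer bits yields a prefix of a longer squeeze of the same input.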
Another metaphor describes the state memory as an "entropy pool", with input "poured into" the pool, and the transformation function referred to as "stirring the entropy pool".
Note that input bits are never XORed into the C portion of the state memory, nor are any bits of C ever output directly. The extent to which C is altered by the input depends entirely on the transformation function f. In hash applications, resistance to collision or preimage attacks depends on C, and its size (the "capacity" c) is typically twice the desired resistance level.
=== Duplex construction ===
It is also possible to absorb and squeeze in an alternating fashion. This operation is called the duplex construction or duplexing. It can be the basis of a single-pass authenticated encryption system. It has also been used as an efficient variant of the Fiat–Shamir transformation in some protocols.
The state S is initialized to zero
for each r-bit block B of the input
R is XORed with B
S is replaced by f(S)
R is now an output block of size r bits.
=== Overwrite mode ===
It is possible to omit the XOR operations during absorption, while still maintaining the chosen security level. In this mode, in the absorbing phase, the next block of the input overwrites the R part of the state. This allows keeping a smaller state between the steps. Since the R part will be overwritten anyway, it can be discarded in advance, only the C part must be kept.
== Applications ==
Sponge functions have both theoretical and practical uses. In theoretical cryptanalysis, a random sponge function is a sponge construction where f is a random permutation or transformation, as appropriate. Random sponge functions capture more of the practical limitations of cryptographic primitives than does the widely used random oracle model, in particular the finite internal state.
The sponge construction can also be used to build practical cryptographic primitives. For example, the Keccak cryptographic sponge with a 1600-bit state was selected by NIST as the winner of the SHA-3 competition. The strength of Keccak derives from the intricate, multi-round permutation f that its authors developed. The RC4 redesign called Spritz uses the sponge construction to define the algorithm.
For other examples, a sponge function can be used to build authenticated encryption with associated data (AEAD), as well as password hashing schemes.
== References ==
== External links == | Wikipedia/Sponge_function |
Introduced by Martin Hellman and Susan K. Langford in 1994, the differential-linear attack is a mix of both linear cryptanalysis and differential cryptanalysis.
The attack utilises a differential characteristic over part of the cipher with a probability of 1 (for a few rounds—this probability would be much lower for the whole cipher). The rounds immediately following the differential characteristic have a linear approximation defined, and we expect that for each chosen plaintext pair, the probability of the linear approximation holding for one chosen plaintext but not the other will be lower for the correct key. Hellman and Langford have shown that this attack can recover 10 key bits of an 8-round DES with only 512 chosen plaintexts and an 80% chance of success.
The attack was generalised by Eli Biham et al. to use differential characteristics with probability less than 1. Besides DES, it has been applied to FEAL, IDEA, Serpent, Camellia, and even the stream cipher Phelix.
== References ==
Johan Borst (February 1997). "Differential-Linear Cryptanalysis of IDEA". CiteSeerX 10.1.1.49.5084.
Johan Borst, Lars R. Knudsen, Vincent Rijmen (May 1997). Two Attacks on Reduced IDEA (PDF). Advances in Cryptology – EUROCRYPT '97. Konstanz: Springer-Verlag. pp. 1–13. Retrieved 2007-03-08.
Eli Biham; Orr Dunkelman; Nathan Keller (December 2002). Enhancing Differential-Linear Cryptanalysis (PDF/gzipped PostScript). Advances in Cryptology, proceeding of ASIACRYPT 2002, Lecture Notes in Computer Science 2501. Queenstown, New Zealand: Springer-Verlag. pp. 254–266. Retrieved 2006-12-07.
Eli Biham, Orr Dunkelman, Nathan Keller (February 2003). Differential-Linear Cryptanalysis of Serpent (PDF/PostScript). 10th International Workshop on Fast Software Encryption (FSE '03). Lund: Springer-Verlag. pp. 9–21. Retrieved 2007-03-08.
Hongjun Wu, Bart Preneel (December 12, 2006). Differential-Linear Attacks against the Stream Cipher Phelix (PDF). 14th International Workshop on Fast Software Encryption (FSE '07). Luxembourg City: Springer-Verlag. Archived from the original (PDF) on 2008-08-20. Retrieved 2007-03-08.
Eli Biham, Orr Dunkelman, Nathan Keller (December 12, 2006). A New Attack on 6-round IDEA. 14th International Workshop on Fast Software Encryption (FSE '07). Luxembourg City: Springer-Verlag.
Hash-based cryptography is the generic term for constructions of cryptographic primitives based on the security of hash functions. It is of interest as a type of post-quantum cryptography.
So far, hash-based cryptography is used to construct digital signature schemes such as the Merkle signature scheme, zero-knowledge and computational-integrity proofs such as the zk-STARK proof system, and range proofs over issued credentials via the HashWires protocol. Hash-based signature schemes combine a one-time signature scheme, such as a Lamport signature, with a Merkle tree structure. Since a one-time signature scheme key can only sign a single message securely, it is practical to combine many such keys within a single, larger structure. A Merkle tree structure is used to this end. In this hierarchical data structure, a hash function and concatenation are used repeatedly to compute tree nodes.
One consideration with hash-based signature schemes is that they can only sign a limited number of messages securely, because of their use of one-time signature schemes. The US National Institute of Standards and Technology (NIST) specified that algorithms in its post-quantum cryptography competition support a minimum of 2^64 signatures safely.
In 2022, NIST announced SPHINCS+ as one of three algorithms to be standardized for digital signatures. NIST standardized stateful hash-based cryptography based on the eXtended Merkle Signature Scheme (XMSS) and Leighton–Micali Signatures (LMS), which are applicable in different circumstances, in 2020, but noted that the requirement to maintain state when using them makes them more difficult to implement in a way that avoids misuse.
In 2024 NIST announced the Stateless Hash-Based Digital Signature Standard.
== History ==
Leslie Lamport invented hash-based signatures in 1979. The XMSS (eXtended Merkle Signature Scheme) and SPHINCS hash-based signature schemes were introduced in 2011 and 2015, respectively. XMSS was developed by a team of researchers under the direction of Johannes Buchmann and is based both on Merkle's seminal scheme and on the 2007 Generalized Merkle Signature Scheme (GMSS). A multi-tree variant of XMSS, XMSSMT, was described in 2013.
== One-time signature schemes ==
Hash-based signature schemes use one-time signature schemes as their building block. A given one-time signing key can only be used to sign a single message securely. Indeed, signatures reveal part of the signing key. The security of (hash-based) one-time signature schemes relies exclusively on the security of an underlying hash function.
Commonly used one-time signature schemes include the Lamport–Diffie scheme, the Winternitz scheme and its improvements, such as the W-OTS+ scheme. Unlike the seminal Lamport–Diffie scheme, the Winternitz scheme and variants can sign many bits at once. The number of bits to be signed at once is determined by a value: the Winternitz parameter. The existence of this parameter provides a trade-off between size and speed. Large values of the Winternitz parameter yield short signatures and keys, at the price of slower signing and verifying. In practice, a typical value for this parameter is 16.
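A minimal Lamport one-time signature over SHA-256 makes the one-time restriction concrete: signing reveals one of the two secret preimages for each message bit, so each key pair must sign only once. This is a sketch; real schemes such as WOTS+ compress keys and signatures considerably.

```python
# Lamport one-time signature: the private key is 256 pairs of random values,
# the public key their SHA-256 hashes.  Signing a message reveals one
# preimage per bit of the message digest.
import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(sk[i][bit]) for bit in range(2)] for i in range(256)]
    return sk, pk

def bits(msg):
    """The 256 bits of the message digest, one per key-pair position."""
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[i][bit] for i, bit in enumerate(bits(msg))]  # reveal one half per bit

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"attack at dawn")
assert verify(pk, b"attack at dawn", sig)
assert not verify(pk, b"attack at dusk", sig)
```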
In the case of stateless hash-based signatures, few-time signature schemes are used. Such schemes allow security to decrease gradually in case a few-time key is used more than once. HORST is an example of a few-time signature scheme.
== Combining many one-time key pairs into a hash-based signature scheme ==
The central idea of hash-based signature schemes is to combine a larger number of one-time key pairs into a single structure to obtain a practical way of signing more than once (yet a limited number of times). This is done using a Merkle tree structure, with possible variations. One public and one private key are constructed from the numerous public and private keys of the underlying one-time scheme. The global public key is the single node at the very top of the Merkle tree. Its value is an output of the selected hash function, so a typical public key size is 32 bytes. The validity of this global public key is related to the validity of a given one-time public key using a sequence of tree nodes. This sequence is called the authentication path. It is stored as part of the signature, and allows a verifier to reconstruct the node path between those two public keys.
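The authentication-path mechanism can be sketched over a tree of eight placeholder one-time public keys (the leaf contents and tree size here are illustrative): the verifier rebuilds the root, which serves as the global public key, from a single leaf plus one sibling node per level.

```python
# A Merkle tree over 8 one-time public keys: the root is the global public
# key, and the authentication path for leaf i lets a verifier recompute the
# root from that single leaf.
import hashlib

H = lambda data: hashlib.sha256(data).digest()

def build_tree(leaves):
    levels = [[H(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels                      # levels[-1][0] is the root / public key

def auth_path(levels, i):
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[i ^ 1])        # the sibling node at each level
        i >>= 1
    return path

def root_from_path(leaf, i, path):
    """What a verifier does: hash the leaf up to the root along the path."""
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if i % 2 == 0 else H(sibling + node)
        i >>= 1
    return node

keys = [b"one-time pk %d" % k for k in range(8)]   # placeholder leaf data
levels = build_tree(keys)
root = levels[-1][0]
for i in range(8):
    assert root_from_path(keys[i], i, auth_path(levels, i)) == root
```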
The global private key is generally handled using a pseudo-random number generator. It is then sufficient to store a seed value. One-time secret keys are derived successively from the seed value using the generator. With this approach, the global private key is also very small, e.g. typically 32 bytes.
The problem of tree traversal is critical to signing performance. Increasingly efficient approaches have been introduced, dramatically speeding up signing time.
Some hash-based signature schemes use multiple layers of tree, offering faster signing at the price of larger signatures. In such schemes, only the lowest layer of trees is used to sign messages, while all other trees sign root values of lower trees.
The Naor–Yung work shows the pattern by which to transfer a limited time signature of the Merkle type family into an unlimited (regular) signature scheme.
== Properties of hash-based signature schemes ==
Hash-based signature schemes rely on security assumptions about the underlying hash function, but any hash function fulfilling these assumptions can be used. As a consequence, each adequate hash function yields a different corresponding hash-based signature scheme. Even if a given hash function becomes insecure, it is sufficient to replace it by a different, secure one to obtain a secure instantiation of the hash-based signature scheme under consideration. Some hash-based signature schemes (such as XMSS with pseudorandom key generation) are forward secure, meaning that previous signatures remain valid if a secret key is compromised.
The minimality of security assumptions is another characteristic of hash-based signature schemes. Generally, these schemes only require a secure (for instance in the sense of second preimage resistance) cryptographic hash function to guarantee the overall security of the scheme. This kind of assumption is necessary for any digital signature scheme; however, other signature schemes require additional security assumptions, which is not the case here.
Because of their reliance on an underlying one-time signature scheme, hash-based signature schemes can only sign a fixed number of messages securely. In the case of the Merkle and XMSS schemes, a maximum of
{\displaystyle 2^{h}}
messages can be signed securely, with
{\displaystyle h}
== Examples of hash-based signature schemes ==
Since Merkle's initial scheme, numerous hash-based signature schemes with performance improvements have been introduced. Recent ones include the XMSS, the Leighton–Micali (LMS), the SPHINCS and the BPQS schemes. Most hash-based signature schemes are stateful, meaning that signing requires updating the secret key, unlike conventional digital signature schemes. For stateful hash-based signature schemes, signing requires keeping state of the used one-time keys and making sure they are never reused. The XMSS, LMS and BPQS schemes are stateful, while the SPHINCS scheme is stateless. SPHINCS signatures are larger than XMSS and LMS signatures. BPQS has been designed specifically for blockchain systems. In addition to the WOTS+ one-time signature scheme, SPHINCS also uses a few-time (hash-based) signature scheme called HORST. HORST is an improvement of an older few-time signature scheme, HORS (Hash to Obtain Random Subset).
The stateful hash-based schemes XMSS and XMSSMT are specified in RFC 8391 (XMSS: eXtended Merkle Signature Scheme). Leighton–Micali Hash-Based Signatures are specified in RFC 8554. Practical improvements have been proposed in the literature that alleviate the concerns introduced by stateful schemes. Hash functions appropriate for these schemes include SHA-2, SHA-3 and BLAKE.
== Implementations ==
The XMSS, GMSS and SPHINCS schemes are available in the Java Bouncy Castle cryptographic APIs. LMS and XMSS schemes are available in the wolfSSL cryptographic APIs. SPHINCS is implemented in the SUPERCOP benchmarking toolkit. Optimised and unoptimised reference implementations of the XMSS RFC exist. The LMS scheme has been implemented in Python and in C following its Internet-Draft.
== References ==
T. Lange. "Hash-Based Signatures". Encyclopedia of Cryptography and Security, Springer U.S., 2011. [2]
F. T. Leighton, S. Micali. "Large provably fast and secure digital signature schemes based on secure hash functions". US Patent 5,432,852, [3] 1995.
G. Becker. "Merkle Signature Schemes, Merkle Trees and Their Cryptanalysis", seminar 'Post Quantum Cryptology' at the Ruhr-University Bochum, Germany, 2008. [4]
E. Dahmen, M. Döring, E. Klintsevich, J. Buchmann, L. C. Coronado Garcia. "CMSS — An Improved Merkle Signature Scheme". Progress in Cryptology – Indocrypt 2006. [5]
R. Merkle. "Secrecy, authentication and public key systems / A certified digital signature". Ph.D. dissertation, Dept. of Electrical Engineering, Stanford University, 1979. [6] Archived 2018-08-14 at the Wayback Machine
S. Micali, M. Jakobsson, T. Leighton, M. Szydlo. "Fractal Merkle Tree Representation and Traversal". RSA-CT 03. [7]
P. Kampanakis, S. Fluhrer. "LMS vs XMSS: A comparison of the Stateful Hash-Based Signature Proposed Standards". Cryptology ePrint Archive, Report 2017/349. [8]
D. Naor, A. Shenhav, A. Wool. "One-Time Signatures Revisited: Practical Fast Signatures Using Fractal Merkle Tree Traversal". IEEE 24th Convention of Electrical and Electronics Engineers in Israel, 2006. [9] Archived 2018-02-05 at the Wayback Machine
== External links ==
[10] A commented list of literature about hash-based signature schemes.
[11] Another list of references (uncommented). | Wikipedia/Hash-based_cryptography |
Materials MASINT is one of the six major disciplines generally accepted to make up the field of Measurement and Signature Intelligence (MASINT), with due regard that the MASINT subdisciplines may overlap, and MASINT, in turn, is complementary to more traditional intelligence collection and analysis disciplines such as SIGINT and IMINT. MASINT encompasses intelligence gathering activities that bring together disparate elements that do not fit within the definitions of Signals Intelligence (SIGINT), Imagery Intelligence (IMINT), or Human Intelligence (HUMINT).
According to the United States Department of Defense, MASINT is technically derived intelligence (excluding traditional imagery IMINT and signals intelligence SIGINT) that – when collected, processed, and analyzed by dedicated MASINT systems – results in intelligence that detects, tracks, identifies, or describes the signatures (distinctive characteristics) of fixed or dynamic target sources. MASINT was recognized as a formal intelligence discipline in 1986. Materials intelligence is one of the major MASINT disciplines. As with many branches of MASINT, specific techniques may overlap with the six major conceptual disciplines of MASINT defined by the Center for MASINT Studies and Research, which divides MASINT into Electro-optical, Nuclear, Geophysical, Radar, Materials, and Radiofrequency disciplines.
Materials MASINT involves the collection, processing, and analysis of gas, liquid, or solid samples, and is critical in defense against chemical, biological, and radiological (CBR), or nuclear-biological-chemical (NBC), threats, as well as in more general safety and public health activities. It should be distinguished from the discipline of technical intelligence, which does overlap this discipline. To understand the difference, consider that there are multiple ways to understand the propellant of a new enemy weapon. A technical intelligence analyst would work with a captured example of the weapon, or at least pieces of it, to come to that understanding. The technical intelligence analyst might eventually fire the weapon under controlled circumstances.
In contrast, a materials MASINT analyst would collect information on the weapon principally through remote sensing directed on the enemy's use of the weapon. The materials MASINT analysis may learn more about the way the enemy actually uses the weapon, while the technical intelligence analyst may understand more about the manufacture, maintainability, and skills required to use the weapon.
== Disciplines ==
MASINT is made up of six major disciplines, but the disciplines overlap and intertwine. They interact with the more traditional intelligence disciplines of HUMINT, IMINT, and SIGINT. Confusingly, although MASINT is highly technical and is called such, TECHINT is a separate discipline, dealing with such things as the analysis of captured equipment.
An example of the interaction is "imagery-defined MASINT (IDM)". In IDM, a MASINT application would measure the image, pixel by pixel, and try to identify the physical materials, or types of energy, that are responsible for pixels or groups of pixels: signatures. When the signatures are then correlated to precise geography, or details of an object, the combined information becomes something greater than the whole of its IMINT and MASINT parts.
The Center for MASINT Studies and Research breaks MASINT into:
Electro-optical MASINT
Nuclear MASINT
Geophysical MASINT
Radar MASINT
Radiofrequency MASINT
Materials MASINT
Samples for materials MASINT can be collected by automatic equipment, such as air samplers, or directly by humans. Samples, once collected, may be rapidly characterized or undergo extensive forensic laboratory analysis to determine the identity and characteristics of the sources of the samples.
== Materials collection ==
The Fuchs (German for Fox) NBC reconnaissance vehicle is an example of the tactical state of the art for land warfare. This system, in various versions, is used by Germany, the Netherlands, Saudi Arabia, Norway, the UK, the US and UAE. German forces first used it in Kosovo, but the US bought German units, modified into the XM93, for use in Desert Storm. This vehicle can keep up with moving troops, detecting liquid and vapor hazards. Newer versions, like the M1135 nuclear, biological, and chemical reconnaissance vehicle (NBCRV), have enhanced radiation survey, meteorological, chemical, and biological sensors, as well as supporting computers. The newer systems are intended both for a CBR battlefield and for release other than attack (ROTA) events. ROTA events include industrial accidents as well as terrorist incidents. The vehicle's computer systems, complemented by meteorological information and signature information on the CBR agents, can predict the propagation of contamination and report it using tactical symbols and NBC reports per NATO standard ATP-45(C).
For airborne sample collection, the pattern increasingly is to use unmanned aerial vehicles (UAV). Still, for long-range missions, a U-2 or reconnaissance version of the C-135 (US) or Nimrod (UK) might be used.
== Chemical materials MASINT ==
There is a wide range of reasons to perform chemical analysis, both of substances to which one's own forces are exposed and to learn the nature and signatures of the wide range of chemicals used by other nations.
=== Ammunition, explosive, and rocket propellant analysis ===
Traditional chemical analysis, as well as techniques such as spectroscopy using remote laser excitation, are routine parts of materials intelligence, in contrast to TECHINT evaluating the firing of the material.
=== Chemical warfare and improvised chemical devices ===
Since the advent of chemical warfare in the First World War, there has been an urgent operational requirement for detecting chemical attacks. Early methods depended on color changes in chemically treated paper, or even more lengthy and insensitive manual methods.
To assess a modern chemical sensor, several parameters can be combined to create a figure of merit called the receiver operating characteristic (ROC). These parameters are sensitivity, probability of correct detection, false positive rate, and response time. Ideally, the device can have these parameters adjusted for the specific situation. It may be more important that the device has a low false positive rate (i.e., is selective), or that it is maximally sensitive, which means accepting more false positives. ROC curves are commonly drawn to show sensitivity as a function of false positive rate for a given detection confidence and response time. Too high a false positive rate, without an operator who understands the context, can cause real alarms to be ignored. In an environment where terrorists may improvise, it is not enough to detect formal chemical weapons; a detector must also cover at least 100 highly toxic industrial chemicals from which a weapon could be improvised (Clarke 2006).
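The sensitivity/false-alarm trade-off can be illustrated numerically. The sketch below is illustrative Python with invented detector readings, not data from any fielded sensor; it computes ROC points for a simple threshold detector:

```python
# ROC points for a simple threshold detector. The readings below are
# invented for illustration: one set from clean air, one with agent present.
clean = [0.10, 0.30, 0.20, 0.40, 0.35, 0.15, 0.25, 0.50]
agent = [0.45, 0.70, 0.90, 0.60, 0.80, 0.55, 0.75, 0.65]

def roc_point(threshold):
    """Return (false positive rate, sensitivity) at a given alarm threshold."""
    tpr = sum(x >= threshold for x in agent) / len(agent)
    fpr = sum(x >= threshold for x in clean) / len(clean)
    return fpr, tpr

# Raising the threshold trades sensitivity for fewer false alarms.
for thr in (0.3, 0.5, 0.7):
    fpr, tpr = roc_point(thr)
    print(f"threshold {thr}: FPR={fpr:.2f}, sensitivity={tpr:.2f}")
```

Sweeping the threshold over all possible values and plotting the resulting (FPR, sensitivity) pairs traces out the ROC curve described above.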
Modern chemical weapon detection is highly automated. One technique involves continual sampling of air through a nondispersive infrared analyzer. More complex instrumentation, such as gas chromatographs coupled to mass spectrometers, are standard laboratory techniques that need to be modified for the field (Vesser 2001). The Fox's chemical analysis capability is built around an MM-1 mobile mass spectrometer and an air/surface sampler. The US version adds the M43A1 detector component of the first US automatic chemical detector, the 1970s-vintage M8.
After Desert Storm field experience, where troops had overestimated the detection capability of the highly selective, but not extremely sensitive, MM-1, the M21 Remote Sensing Chemical Agent Alarm was added. The M21 is a Fourier transform infrared spectrometer, a form of infrared spectroscopy, that exploits the fact that organophosphates, to which the nerve agents belong, have a distinctive signature. The M21 detects chemical agent hazards at line-of-sight distances of up to five kilometers. Adding the M21 has improved the Fox's vapor detection capabilities and provides more advance warning of a possible vapor chemical warfare agent hazard.
The M21 cannot tell whether it is sensing a specific chemical warfare agent such as sarin or an organophosphate insecticide such as malathion. This means that the sensor may give false positives.
Malathion, for example, while not as toxic as a true chemical weapon, could well be used by terrorists, or spilled in an accident, in a concentration that may be dangerous. The insecticide parathion is sufficiently toxic that it might be tried as an improvised chemical attack. More specific chemical detectors, however, have tended to carry the signatures of either chemical weapons or industrial chemicals.
The M21 will be succeeded by the Artemis, formerly the Joint Service Lightweight Standoff Chemical Agent Detector (JSLSCAD), which, as opposed to the narrow field of view of the M21, has 360 degree ground coverage and 60 degree air coverage. The Navy is the program manager for Artemis. It is based on LASER radar (LIDAR), detects chemical agent aerosols, vapor, and surface contamination, and gives range from the sensor to the threat. Artemis is being made by a team from Intelletic, Honeywell Technology Center, OPTRA, Inc. and Recon/Optical, Inc. Artemis is not man-portable, so the Army is managing a program for Automatic Chemical Agent Detector and Alarm (ACADA), which will replace the existing M8A1, and operate with the M279 Surface Sampler. This system can be used on helicopters and ships as well as in vehicles or on a ground tripod.
The Improved Chemical Agent Monitor (ICAM) is a hand-held device for monitoring specific chemical agent (i.e., mustard and nerve gas) contamination on surfaces. It works by sensing molecular ions of specific mobilities (time-of-flight), with software to assist the analysis.
JCAD, the Joint Chemical Agent Detector, is a pocket-sized detector that will detect, identify, and quantify chemical agents, in real time, on ships and aircraft. It uses surface acoustic wave technology. The Air Force manages the contract with BAE.
Being built by TRW for the US Marine Corps, the Joint Service Lightweight Nuclear, Biological, Chemical Reconnaissance System (JSLNBCRS) is vehicle-mounted in the HMMWV and LAV. It will detect chemical agents using mass spectrometry.
The Proengin AP2C handheld chemical warfare (CW) agent detector uses flame spectroscopy. Earlier models were restricted to either CW agents (AP2C detector) or industrial compounds (toxic industrial materials (TIMs) detector). The newer AP4C can detect true chemical agents, as well as 49 of 58 chemicals on NATO's toxic industrial chemical (TIC)/TIM list, while avoiding common false positives such as methyl salicylate (synthetic oil of wintergreen). The emitted light is sensed either through element-specific filters (AP2C) or with a wavelength-sensitive spectrometer, which directs the light through a diffraction grating onto a multi-photodiode detector.
A different approach than troop protection may be appropriate for wide-area chemical survey. The Chemical Agent Dual-Detection Identification Experiment (CADDIE) was developed by the US Navy as a feasibility demonstration of an Unmanned Aerial Vehicle using onboard sensors to locate a suspicious cloud, and then drop disposable ChemSonde sensors into it.
This system demonstrated several characteristics of modern MASINT: a broad-look capability, as with pushbroom radar, and then a close-look with the disposable sensors. The sensors are released from an off-the-shelf ALE-47 Countermeasure Dispenser System, which normally holds chaff, flares, or expendable jammers.
== Biological materials MASINT ==
In modern materials analysis, the line between chemical and biological methods can blur, since immunochemistry, an important discipline, uses biologically created reagents to detect chemical and biological substances. A technique that can be adapted to field use, as opposed to slow and labor-intensive methods such as culture-based identification, depends on a probe that recognizes and reacts with a molecule, receptor, or other feature of the organism, and on a separate transducer that recognizes the positive result of the probe and presents it to the operator. The combination is what determines analysis time, sensitivity, and specificity. The major families of probe methods are: nucleic acid, antibody/antigen binding, and ligand/receptor interactions. Transducer techniques include: electrochemical, piezoelectric, colorimetric, and optical spectrometric systems.
=== Biological warfare detection ===
A wide range of analytical tools are used in modern microbiological laboratories, and many can be adapted to field use. Some that have been adapted include:
Hand-held assays (HHA), similar to pregnancy test strips. Price per operational panel of 8: $65.11, as of 1 Oct 2012.
Electrochemiluminescence immunoassay, to be in the M1-M analyzer scheduled for 2005.
Polymerase chain reaction (PCR), for confirmatory tests. Available for 10 biological agents in 2004.
Enzyme-linked immunosorbent analysis ELISA on particles filtered from the air.
The original Fuchs, and the slightly modified version fielded by the US in 1991, had biological protection for the crew but no biological analysis capability. An interim version, the Fuchs biological reconnaissance system (BRS), continually monitored outside air for particulate matter that could be biological weapons and, if any was detected, transferred it to a biological safety cabinet (i.e., a sealed glove box) for analysis using a variety of genetic and immunologic tests. This interim version, however, involves an NBC Field Laboratory set of vehicles and shelters, not a single mobile system:
Radiation and HazMat (Hazardous Materials) Analysis Lab Shelter
Biological Analysis Lab Shelters
Chemical Analysis Lab Shelter
Command and Sampling Vehicle (the Fuchs proper)
The entire system is transportable by air, ship, or truck (the latter with the Command and Sampling Vehicle self-deploying).
The latest Fuchs 2 version, ordered by the UAE in March 2005 for delivery in 2007, will feature an integrated equipment set, to go inside the glove box, for detecting biological weapons. Analytical methods include ELISA, Polymerase Chain Reaction (PCR), Liquid chromatography-mass spectrometry (LC-MS), and high performance (also called high pressure) liquid chromatography (HPLC). These methods rarely can instantly identify a biological agent, but can give preliminary results, with an adequate sample, in minutes to hours.
The Fuchs 2 also has weather sensors that can help predict the propagation of contaminants. See weather MASINT.
The US Army is implementing an interim Biological Integrated Detection System (BIDS) made by a team of Bio Road, Bruker Analytical Systems, Environmental Technologies Group, Harris Corp, and Marion Composites. Also under the Army is the Joint Biological Point Detection System (JBPDS), which will succeed the Army's BIDS. It will also replace the Navy IBADS and give initial capability to the Air Force and Marines. It has complementary trigger, sampler, detector, and identification technologies to rapidly and automatically detect and identify biological threat agents. Multiple agents will be detected in a maximum of 15 minutes. JBPDS is built by Battelle and Lockheed Martin.
China also has a BW detection capability. In keeping with the definition of BW as "public health in reverse," PRC writings on the subject treat the matter more in terms of infectious disease control, an approach that is standard everywhere. As one would expect, a considerable amount of research has been conducted in China on potential BW agents, including tularemia, Q fever, plague, anthrax, Western and Eastern equine encephalitis, and psittacosis, among others.
Some specialized equipment has also been fielded in some unspecified numbers to counter the threat of BW to PLA troops:
Type 76 Microbe Sampling Kit: First introduced in 1975, with a 76-1 variant, this portable laboratory can test surface, waterborne, and airborne particles to determine the presence of BW agent threats, and it also carries five different types of insect and small-animal reference specimens. Resembling a low-tech gravitation/settle plate, a small rotating mechanism is placed windward, and aerosol particles adhere to the sampling or petri dish. Disinfectant is supplied along with culturing supplies.
Large-Volume Electrostatic Air Sampler: This equipment has no classification number, and little information is provided concerning its attributes. It probably is similar to the corona discharge-based large volume air sampler (LVAS) used in the West. This technology in general offers excellent results, and it is capable of isolating viral particles from the air, including rabies and human respiratory disease viruses.
JWL-I Model Bioaerosol Sampler: Like the LVAS mentioned above, the reference to this equipment offers little in the way of details. This automated air sampler resembles most closely a single stage impactor, drawing in air and depositing aerosolized particles onto agar for further testing. An example of this type of instrumentation is the Casella slit-to-agar, a single-stage impactor used in civilian environmental monitoring.
WJ-85 microbiological laboratory vehicles were introduced in 1984. This motorized laboratory platform, described as somewhere between "a railway car and a sedan", is separated into three sections, with airtight sealed gaskets on the doorways. The forward section houses the driver and carriage for occupants, the midsection contains the laboratory room (see Mobile BW Assessment Laboratory), and the rear section contains a decontamination apparatus plus extra clothing. Laboratory equipment includes a glass glove box for handling infectious material, a bacteriostatic device, a refrigerator, an incubator (hengwenxiang), a fluorescent microscope, an inverted microscope, culture media, diagnostic reagents, cell culture instruments, etc. A separate station allows testing for bacteria and viruses, accommodating up to four people. Some 200 bacteria and 50 virus samples for reference and identification are supplied with the laboratory vehicle.
=== Biological counterproliferation MASINT ===
One of the challenges of preventing the proliferation of biological warfare capability is verifying that a legitimate bioengineering facility is not producing weapons. Since many completely legal processes involve trade secrets, production facilities can be reluctant to allow detailed inspection and sampling of what might be a commercial advantage. The Henry L. Stimson Center has done a good deal of conceptual work on an inspection regimen, in which inspectors would use biological tests that looked for genetic materials associated with known weapons. Even when a potential weapon, such as Clostridium botulinum exotoxin (Botox or "botulinus toxin") is discovered, the amounts or preparation may be such that it can be established the use is for legitimate medical, veterinary, or research applications.
These approaches to detecting violations of "dual use" also have the potential for recognizing epidemic organisms in a public health context.
=== Personnel detectors ===
A Vietnam-era sensor, the XM2, generally known as the "people sniffer", detected ammonia concentrations in air, which indicated the presence of groups of people or animals. Because it was sensitive but not selective for people, many water buffalo became targets. Nevertheless, it was considered the best sensor used by the 9th Infantry Division because, as opposed to other MASINT and SIGINT sensors, it could give helicopter-borne troops real-time detection of targets.
As seen in the attached chart, it is compared, in terms of timeliness, to a number of other sensors.
== Nuclear test analysis ==
Monitoring nuclear tests involves both chemical analysis, part of materials MASINT, and analysis of the radioactive emissions of samples, which crosses materials and nuclear MASINT. Not all nuclear MASINT involves materials analysis; see space-based radiation and EMP MASINT sensors.
Nuclear tests, including underground tests that vent into the atmosphere, produce fallout that not only indicates that a nuclear event has taken place, but, through radiochemical analysis of radionuclides in the fallout, characterize the technology and source of the device. MASINT collection of fallout is most commonly done with airborne dust traps, either on manned aircraft or drones.
During FY 1974, SAC missions were flown to gather information on Chinese and French tests. In Operation OLYMPIC RACE, U-2R aircraft flew missions near Spain to capture airborne particles that meteorologists predicted would be in that airspace. Another portion of this program involved a US Navy ship, in international waters, that sent unmanned air-sampling drones into the cloud. Thus, in 1974, both U-2R and drone aircraft captured airborne particles from nuclear blasts for the MASINT discipline of nuclear materials intelligence.
Both the current M1135 nuclear, biological, and chemical reconnaissance vehicle and the previous US NBC tactical monitoring vehicle, the M93 Fox (which is derived from the German radiation-detection version of the TPz Fuchs), are built around the AN/VDR2 radioactivity detection, indication, and computation (RADIAC) set, capable of measuring beta and gamma radiation both inside and outside the vehicle. This system was first used during Desert Storm.
It is important to detect not only that a nuclear event occurred, but also what produced it. In the context of the North Korean tests, one proposed method involved measuring xenon concentrations in the air. Xenon is a by-product of different fissionable materials' reactions, so air sampling from a North Korean test, whether atmospheric testing or leakage from an underground test, could be used to determine whether the device was nuclear and, if so, whether the primary was plutonium or highly enriched uranium (HEU).
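As a rough illustration of why xenon sampling is time-sensitive, the sketch below uses the standard half-lives of Xe-133 (about 5.25 days) and Xe-135 (about 9.14 hours) to show how their activity ratio falls with time after an event. The initial ratio r0 is an arbitrary placeholder for illustration, not a real device signature:

```python
# How the Xe-135 / Xe-133 activity ratio falls with time after an event.
# Half-lives are standard nuclear data; the initial ratio r0 is an
# arbitrary illustrative value, not a real device signature.
import math

T_XE133 = 5.25 * 24.0               # Xe-133 half-life in hours
T_XE135 = 9.14                      # Xe-135 half-life in hours

def activity_ratio(t_hours, r0=1.0):
    """Xe-135/Xe-133 ratio after t hours, starting from ratio r0."""
    decay = math.log(2) * t_hours * (1.0 / T_XE135 - 1.0 / T_XE133)
    return r0 * math.exp(-decay)

for t in (0, 24, 72):
    print(f"t = {t:2d} h: ratio = {activity_ratio(t):.4f}")
```

Because Xe-135 decays much faster than Xe-133, the measured ratio constrains how long ago the release occurred, which in turn helps interpret the other isotopic signatures.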
== References == | Wikipedia/Materials_MASINT |
Capstone is a United States government long-term project to develop cryptography standards for public and government use. Capstone was authorized by the Computer Security Act of 1987, driven by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA); the project began in 1993.
== Project ==
The initiative involved four standard algorithms: a data encryption algorithm called Skipjack (along with the Clipper chip, which implemented Skipjack), a digital signature algorithm, the Digital Signature Algorithm (DSA), a hash function, SHA-1, and a key exchange protocol. Capstone's first implementation was in the Fortezza PCMCIA card. All Capstone components were designed to provide 80-bit security.
The initiative encountered massive resistance from the cryptographic community, and eventually the US government abandoned the effort. The main reasons for this resistance were concerns about Skipjack's design, which was classified, and the use of key escrow in the Clipper chip.
== References ==
== External links ==
EFF archives on Capstone | Wikipedia/Capstone_(cryptography) |
"Communication Theory of Secrecy Systems" is a paper published in 1949 by Claude Shannon discussing cryptography from the viewpoint of information theory. It is one of the foundational treatments (arguably the foundational treatment) of modern cryptography. His work has been described as a "turning point, and marked the closure of classical cryptography and the beginning of modern cryptography." It has also been described as turning cryptography from an "art to a science". It is also a proof that all theoretically unbreakable ciphers must have the same requirements as the one-time pad.
The paper serves as the foundation of secret-key cryptography, including the work of Horst Feistel, the Data Encryption Standard (DES), Advanced Encryption Standard (AES), and more. In the paper, Shannon defined unicity distance, and the principles of confusion and diffusion, which are key to a secure cipher.
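The unicity distance that Shannon defined admits a compact statement. In its standard formulation (not a quotation from the paper), if H(K) is the entropy of the key and D is the redundancy of the plaintext language in bits per character, the expected amount of ciphertext needed to determine the key uniquely is

```latex
U = \frac{H(K)}{D}
```

For instance, with a 56-bit key and English redundancy of roughly 3.2 bits per letter, U is roughly 56/3.2, or about 17.5 characters of ciphertext.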
Shannon published an earlier version of this research in the formerly classified report A Mathematical Theory of Cryptography, Memorandum MM 45-110-02, Sept. 1, 1945, Bell Laboratories. This report also precedes the publication of his "A Mathematical Theory of Communication", which appeared in 1948.
== See also ==
Confusion and diffusion
Product cipher
One-time pad
Unicity distance
== Notes ==
== References ==
Shannon, Claude. "Communication Theory of Secrecy Systems", Bell System Technical Journal, vol. 28(4), page 656–715, 1949.
Shannon, Claude. "A Mathematical Theory of Cryptography", Memorandum MM 45-110-02, Sept. 1, 1945, Bell Laboratories.
"Claude E. Shannon". IEEE Information Theory Society. https://www.itsoc.org/about/shannon
== External links ==
Online retyped copy of the paper Archived 2007-06-05 at the Wayback Machine
Scanned version of the published BSTJ paper | Wikipedia/Communication_Theory_of_Secrecy_Systems |
Non-commutative cryptography is the area of cryptology where the cryptographic primitives, methods and systems are based on algebraic structures like semigroups, groups and rings which are non-commutative. One of the earliest applications of a non-commutative algebraic structure for cryptographic purposes was the use of braid groups to develop cryptographic protocols. Later several other non-commutative structures like Thompson groups, polycyclic groups, Grigorchuk groups, and matrix groups have been identified as potential candidates for cryptographic applications. In contrast to non-commutative cryptography, the currently widely used public-key cryptosystems like RSA cryptosystem, Diffie–Hellman key exchange and elliptic curve cryptography are based on number theory and hence depend on commutative algebraic structures.
Non-commutative cryptographic protocols have been developed for solving various cryptographic problems like key exchange, encryption-decryption, and authentication. These protocols are very similar to the corresponding protocols in the commutative case.
== Some non-commutative cryptographic protocols ==
In these protocols it will be assumed that G is a non-abelian group. If w and a are elements of G, the notation wa indicates the element a−1wa.
=== Protocols for key exchange ===
==== Protocol due to Ko, Lee, et al. ====
The following protocol due to Ko, Lee, et al., establishes a common secret key K for Alice and Bob.
An element w of G is published.
Two subgroups A and B of G such that ab = ba for all a in A and b in B are published.
Alice chooses an element a from A and sends wa to Bob. Alice keeps a private.
Bob chooses an element b from B and sends wb to Alice. Bob keeps b private.
Alice computes K = (wb)a = wba.
Bob computes K' = (wa)b=wab.
Since ab = ba, K = K'. Alice and Bob share the common secret key K.
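The algebra of this exchange can be sketched in a few lines of Python. This toy version uses permutations of eight points in place of a braid group, with the secrets drawn from subgroups that move disjoint sets of points so that ab = ba holds; all concrete values are arbitrary illustrative choices, and the example demonstrates only the correctness of the protocol, not its security:

```python
# Toy Ko-Lee-style exchange in the symmetric group S8.
# A permutation is a tuple p with p[i] = image of i.

def compose(p, q):                  # function composition: apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(w, g):                     # w^g = g^-1 w g
    return compose(inverse(g), compose(w, g))

# Public element w; Alice's secret a moves only {0..3}, Bob's secret b
# moves only {4..7}, so their disjoint supports guarantee ab = ba.
w = (1, 3, 0, 2, 6, 4, 7, 5)
a = (2, 0, 3, 1, 4, 5, 6, 7)        # Alice's secret, from subgroup A
b = (0, 1, 2, 3, 5, 7, 4, 6)        # Bob's secret, from subgroup B

wa = conj(w, a)                     # Alice sends w^a to Bob
wb = conj(w, b)                     # Bob sends w^b to Alice

K_alice = conj(wb, a)               # (w^b)^a = w^(ba)
K_bob   = conj(wa, b)               # (w^a)^b = w^(ab)
assert compose(a, b) == compose(b, a)
assert K_alice == K_bob             # shared secret, since ab = ba
```

An eavesdropper sees w, w^a, and w^b; recovering K requires solving a conjugacy-type search problem, which is only hard in a much larger group than this toy S8.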
==== Anshel-Anshel-Goldfeld protocol ====
This is a key exchange protocol using a non-abelian group G. It is significant because it does not require two commuting subgroups A and B of G, as in the case of the protocol due to Ko, Lee, et al.
Elements a1, a2, . . . , ak, b1, b2, . . . , bm from G are selected and published.
Alice picks a private x in G as a word in a1, a2, . . . , ak; that is, x = x( a1, a2, . . . , ak ).
Alice sends b1x, b2x, . . . , bmx to Bob.
Bob picks a private y in G as a word in b1, b2, . . . , bm; that is y = y ( b1, b2, . . . , bm ).
Bob sends a1y, a2y, . . . , aky to Alice.
Alice and Bob share the common secret key K = x−1y−1xy.
Alice computes x ( a1y, a2y, . . . , aky ) = y−1 xy. Pre-multiplying it with x−1, Alice gets K.
Bob computes y ( b1x, b2x, . . . , bmx) = x−1yx. Pre-multiplying it with y−1 and then taking the inverse, Bob gets K.
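A minimal sketch of this exchange, again using small permutation groups in place of a braid group (all generators and word choices are arbitrary illustrative values):

```python
# Toy Anshel-Anshel-Goldfeld exchange in S5 (permutations as tuples).

def compose(p, q):                  # apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(w, g):                     # w^g = g^-1 w g
    return compose(inverse(g), compose(w, g))

def word(gens, idxs):               # evaluate gens[i0] * gens[i1] * ...
    out = tuple(range(len(gens[0])))
    for i in idxs:
        out = compose(out, gens[i])
    return out

A = [(1, 0, 2, 3, 4), (0, 2, 1, 3, 4)]   # public a1, a2
B = [(0, 1, 2, 4, 3), (0, 1, 3, 2, 4)]   # public b1, b2

x_word = [0, 1, 0]                  # Alice's private word in the a_i
y_word = [1, 0]                     # Bob's private word in the b_j
x = word(A, x_word)
y = word(B, y_word)

Bx = [conj(bj, x) for bj in B]      # Alice sends the conjugated b's
Ay = [conj(ai, y) for ai in A]      # Bob sends the conjugated a's

# Alice: her word evaluated on Ay equals y^-1 x y; premultiply by x^-1.
K_alice = compose(inverse(x), word(Ay, x_word))
# Bob: his word on Bx equals x^-1 y x; premultiply by y^-1, then invert.
K_bob = inverse(compose(inverse(y), word(Bx, y_word)))
assert K_alice == K_bob             # the commutator K = x^-1 y^-1 x y
```

Note that neither party ever needs commuting subgroups: each reconstructs the commutator from the other's conjugated generators alone.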
==== Stickel's key exchange protocol ====
In the original formulation of this protocol the group used was the group of invertible matrices over a finite field.
Let G be a public non-abelian finite group.
Let a, b be public elements of G such that ab ≠ ba. Let the orders of a and b be N and M respectively.
Alice chooses two random numbers n < N and m < M and sends u = ambn to Bob.
Bob picks two random numbers r < N and s < M and sends v = arbs to Alice.
The common key shared by Alice and Bob is K = am + rbn + s.
Alice computes the key by K = amvbn.
Bob computes the key by K = arubs.
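A sketch of Stickel's scheme with 2x2 matrices over a small prime field, in the spirit of its original invertible-matrix formulation; the field size, matrices, and exponents are arbitrary illustrative choices:

```python
# Toy Stickel-style exchange with 2x2 matrices over GF(101), pure Python.
P = 101

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % P
             for j in range(2)] for i in range(2)]

def matpow(X, n):                   # square-and-multiply
    R = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            R = matmul(R, X)
        X = matmul(X, X)
        n >>= 1
    return R

a = [[1, 1], [0, 1]]                # public; note ab != ba
b = [[1, 0], [1, 1]]
m, n = 13, 22                       # Alice's secret exponents
r, s = 7, 31                        # Bob's secret exponents

u = matmul(matpow(a, m), matpow(b, n))   # Alice sends u = a^m b^n
v = matmul(matpow(a, r), matpow(b, s))   # Bob sends v = a^r b^s

K_alice = matmul(matpow(a, m), matmul(v, matpow(b, n)))   # a^m v b^n
K_bob   = matmul(matpow(a, r), matmul(u, matpow(b, s)))   # a^r u b^s
assert matmul(a, b) != matmul(b, a)
assert K_alice == K_bob == matmul(matpow(a, m + r), matpow(b, n + s))
```

The trick is that powers of the same element always commute (b^s b^n = b^(n+s)), so each party can sandwich the other's message between its own secret powers of a and b.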
=== Protocols for encryption and decryption ===
This protocol describes how to encrypt a secret message and then decrypt it using a non-commutative group. Let Alice want to send a secret message m to Bob.
Let G be a non-commutative group. Let A and B be public subgroups of G such that ab = ba for all a in A and b in B.
An element x from G is chosen and published.
Bob chooses a secret key b from A and publishes z = xb as his public key.
Alice chooses a random r from B and computes t = zr.
The encrypted message is C = (xr, H(t) ⊕ m), where H is some hash function and ⊕ denotes the XOR operation. Alice sends C to Bob.
To decrypt C, Bob recovers t as follows: (xr)b = xrb = xbr = (xb)r = zr = t. The plaintext message sent by Alice is P = ( H(t) ⊕ m ) ⊕ H(t) = m.
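A toy version of this encryption scheme, with commuting permutation subgroups of S8 standing in for A and B and SHA-256 as the hash H; all concrete values are illustrative, not secure:

```python
# Toy conjugation-based encryption with a hash-derived keystream.
import hashlib

def compose(p, q):                  # apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(w, g):                     # w^g = g^-1 w g
    return compose(inverse(g), compose(w, g))

def H(t):                           # hash a permutation to 32 key bytes
    return hashlib.sha256(bytes(t)).digest()

def xor(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

x = (1, 3, 0, 2, 6, 4, 7, 5)        # public element of S8
b = (2, 0, 3, 1, 4, 5, 6, 7)        # Bob's secret key, moves only {0..3}
z = conj(x, b)                      # Bob's public key z = x^b

msg = b"attack at dawn"             # up to 32 bytes in this sketch
r = (0, 1, 2, 3, 5, 7, 4, 6)        # Alice's random element, moves {4..7}
t = conj(z, r)
C = (conj(x, r), xor(msg, H(t)))    # ciphertext (x^r, H(t) XOR m)

# Bob: (x^r)^b = (x^b)^r = z^r = t, because rb = br (disjoint supports).
t_bob = conj(C[0], b)
plain = xor(C[1], H(t_bob))
assert plain == msg
```

This mirrors the hashed-ElGamal pattern from commutative cryptography, with conjugation playing the role of exponentiation.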
=== Protocols for authentication ===
Let Bob want to check whether the sender of a message is really Alice.
Let G be a non-commutative group and let A and B be subgroups of G such that ab = ba for all a in A and b in B.
An element w from G is selected and published.
Alice chooses a private s from A and publishes the pair ( w, t ) where t = w s.
Bob chooses an r from B and sends a challenge w ' = wr to Alice.
Alice sends the response w ' ' = (w ')s to Bob.
Bob checks if w ' ' = tr. If this is true, then the identity of Alice is established.
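The challenge-response can be sketched in the same way (toy commuting permutation subgroups of S8, illustrative values only):

```python
# Toy conjugation-based authentication.

def compose(p, q):                  # apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(w, g):                     # w^g = g^-1 w g
    return compose(inverse(g), compose(w, g))

w = (1, 3, 0, 2, 6, 4, 7, 5)        # public element of S8
s = (2, 0, 3, 1, 4, 5, 6, 7)        # Alice's secret, moves only {0..3}
t = conj(w, s)                      # Alice publishes (w, t)

r = (0, 1, 2, 3, 5, 7, 4, 6)        # Bob's challenge secret, moves {4..7}
challenge = conj(w, r)              # w' = w^r, sent to Alice
response = conj(challenge, s)       # w'' = (w')^s, Alice's reply

# Bob's check: w'' must equal t^r, which holds because rs = sr.
assert response == conj(t, r)
```

Only someone who knows s can turn the fresh challenge w^r into (w^r)^s, while an impostor would have to solve the conjugacy search problem for (w, t).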
== Security basis of the protocols ==
The basis for the security and strength of the various protocols presented above is the difficulty of the following two problems:
The conjugacy decision problem (also called the conjugacy problem): Given two elements u and v in a group G, determine whether there exists an element x in G such that v = ux, that is, such that v = x−1ux.
The conjugacy search problem: Given two elements u and v in a group G, find an element x in G such that v = ux, that is, such that v = x−1ux.
If no algorithm is known to solve the conjugacy search problem, then the function x → ux can be considered as a one-way function.
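In a group small enough to enumerate, the conjugacy search problem is trivially solvable by brute force, which is exactly why platform groups must be large. This sketch finds a conjugator in S4 (the concrete elements are illustrative; any solution suffices, not necessarily the original secret):

```python
# Brute-force conjugacy search in S4; feasible only because the group is
# tiny (24 elements).
from itertools import permutations

def compose(p, q):                  # apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugacy_search(u, v, n=4):
    """Find any x with v = x^-1 u x, or None if u and v are not conjugate."""
    for x in permutations(range(n)):
        if compose(inverse(x), compose(u, x)) == v:
            return x
    return None

u = (1, 0, 2, 3)                    # "public" element
x_secret = (2, 3, 1, 0)             # the secret conjugator
v = compose(inverse(x_secret), compose(u, x_secret))

x_found = conjugacy_search(u, v)
assert compose(inverse(x_found), compose(u, x_found)) == v
```

In a braid group or other proposed platform group, no comparably efficient search is known, which is what makes x → ux a candidate one-way function.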
== Platform groups ==
A non-commutative group that is used in a particular cryptographic protocol is called the platform group of that protocol. Only groups having certain properties can be used as the platform groups for the implementation of non-commutative cryptographic protocols. Let G be a group suggested as a platform group for a certain non-commutative cryptographic system. The following is a list of the properties expected of G.
The group G must be well-known and well-studied.
The word problem in G should have a fast solution by a deterministic algorithm. There should be an efficiently computable "normal form" for elements of G.
It should be impossible to recover the factors x and y from the product xy in G.
The number of elements of length n in G should grow faster than any polynomial in n. (Here "length n" is the length of a word representing a group element.)
== Examples of platform groups ==
=== Braid groups ===
Let n be a positive integer. The braid group Bn is a group generated by x1, x2, . . . , xn-1 having the following presentation:
{\displaystyle B_{n}=\left\langle x_{1},x_{2},\ldots ,x_{n-1}{\big |}x_{i}x_{j}=x_{j}x_{i}{\text{ if }}|i-j|>1{\text{ and }}x_{i}x_{j}x_{i}=x_{j}x_{i}x_{j}{\text{ if }}|i-j|=1\right\rangle }
=== Thompson's group ===
Thompson's group is an infinite group F having the following infinite presentation:
{\displaystyle F=\left\langle x_{0},x_{1},x_{2},\ldots {\big |}x_{k}^{-1}x_{n}x_{k}=x_{n+1}{\text{ for }}k<n\right\rangle }
=== Grigorchuk's group ===
Let T denote the infinite rooted binary tree. The set V of vertices is the set of all finite binary sequences. Let A(T) denote the set of all automorphisms of T. (An automorphism of T permutes vertices preserving connectedness.) The Grigorchuk's group Γ is the subgroup of A(T) generated by the automorphisms a, b, c, d defined as follows:
{\displaystyle a(b_{1},b_{2},\ldots ,b_{n})=(1-b_{1},b_{2},\ldots ,b_{n})}
{\displaystyle b(b_{1},b_{2},\ldots ,b_{n})={\begin{cases}(b_{1},1-b_{2},\ldots ,b_{n})&{\text{ if }}b_{1}=0\\(b_{1},c(b_{2},\ldots ,b_{n}))&{\text{ if }}b_{1}=1\end{cases}}}
{\displaystyle c(b_{1},b_{2},\ldots ,b_{n})={\begin{cases}(b_{1},1-b_{2},\ldots ,b_{n})&{\text{ if }}b_{1}=0\\(b_{1},d(b_{2},\ldots ,b_{n}))&{\text{ if }}b_{1}=1\end{cases}}}
{\displaystyle d(b_{1},b_{2},\ldots ,b_{n})={\begin{cases}(b_{1},b_{2},\ldots ,b_{n})&{\text{ if }}b_{1}=0\\(b_{1},b(b_{2},\ldots ,b_{n}))&{\text{ if }}b_{1}=1\end{cases}}}
=== Artin group ===
An Artin group A(Γ) is a group with the following presentation:
{\displaystyle A(\Gamma )=\left\langle a_{1},a_{2},\ldots ,a_{n}|\mu _{ij}=\mu _{ji}{\text{ for }}1\leq i<j\leq n\right\rangle }
where {\displaystyle \mu _{ij}=a_{i}a_{j}a_{i}\ldots } ({\displaystyle m_{ij}} factors) and {\displaystyle m_{ij}=m_{ji}}.
=== Matrix groups ===
Let F be a finite field. Groups of matrices over F have been used as the platform groups of certain non-commutative cryptographic protocols.
=== Semidirect products ===
== See also ==
Group-based cryptography
== References ==
== Further reading ==
In cryptography, a round or round function is a basic transformation that is repeated (iterated) multiple times inside the algorithm. Splitting a large algorithmic function into rounds simplifies both implementation and cryptanalysis.
For example, encryption using an oversimplified three-round cipher can be written as
{\displaystyle C=R_{3}(R_{2}(R_{1}(P)))}, where C is the ciphertext and P is the plaintext. Typically, rounds {\displaystyle R_{1},R_{2},\ldots }
are implemented using the same function, parameterized by the round constant and, for block ciphers, the round key from the key schedule. Parameterization is essential to reduce the self-similarity of the cipher, which could lead to slide attacks.
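The structure above can be sketched with a deliberately toy cipher. Everything here (the 8-bit block size, the rotation amount, the constants and keys) is illustrative only and provides no security; the point is that one round function, parameterized by a round key and a round constant, is iterated three times:

```python
# Toy 3-round iterated cipher on 8-bit blocks (illustration only, NOT secure).

ROUND_CONSTANTS = [0x1B, 0x36, 0x6C]   # distinct constants break round symmetry

def round_fn(block, round_key, rc):
    block ^= round_key ^ rc                         # key/constant mixing
    return ((block << 3) | (block >> 5)) & 0xFF     # rotate left 3 (diffusion)

def encrypt(plaintext, round_keys):
    state = plaintext
    for rk, rc in zip(round_keys, ROUND_CONSTANTS):
        state = round_fn(state, rk, rc)             # C = R3(R2(R1(P)))
    return state

def decrypt(ciphertext, round_keys):
    state = ciphertext
    for rk, rc in reversed(list(zip(round_keys, ROUND_CONSTANTS))):
        state = ((state >> 3) | (state << 5)) & 0xFF  # undo the rotation
        state ^= rk ^ rc                              # undo the mixing
    return state

keys = [0xA5, 0x5A, 0xC3]
assert decrypt(encrypt(0x42, keys), keys) == 0x42
```

Decryption simply applies the inverse of each round in reverse order, which is why splitting a cipher into rounds simplifies both implementation and analysis.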
Increasing the number of rounds "almost always" protects against differential and linear cryptanalysis, as for these tools the effort grows exponentially with the number of rounds. However, increasing the number of rounds does not always make weak ciphers into strong ones, as some attacks do not depend on the number of rounds.
The idea of an iterative cipher using repeated application of simple non-commuting operations to produce diffusion and confusion goes as far back as 1945, to the then-secret version of C. E. Shannon's work "Communication Theory of Secrecy Systems"; Shannon was inspired by the mixing transformations used in dynamical systems theory (cf. horseshoe map). Most modern ciphers use an iterative design, with the number of rounds usually chosen between 8 and 32 (with 64 and even 80 used in cryptographic hashes).
For some Feistel-like cipher descriptions, notably that of RC5, the term "half-round" is used to describe the transformation of part of the data (a distinguishing feature of the Feistel design). This operation corresponds to a full round in traditional descriptions of Feistel ciphers (like DES).
== Round constants ==
Inserting round-dependent constants into the encryption process breaks the symmetry between rounds and thus thwarts the most obvious slide attacks. The technique is a standard feature of most modern block ciphers. However, a poor choice of round constants or unintended interrelations between the constants and other cipher components could still allow slide attacks (e.g., attacking the initial version of the format-preserving encryption mode FF3).
Many lightweight ciphers utilize very simple key scheduling: the round keys come from adding the round constants to the encryption key. A poor choice of round constants in this case might make the cipher vulnerable to invariant attacks; ciphers broken this way include SCREAM and Midori64.
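A key schedule of this simple kind can be sketched as follows; the master key and constants below are illustrative values, not taken from any real cipher:

```python
# Lightweight-style key schedule: each round key is the master key
# combined (here by XOR) with a distinct round constant.

MASTER_KEY = 0x3C
ROUND_CONSTANTS = [0x01, 0x02, 0x04, 0x08]   # must be distinct

round_keys = [MASTER_KEY ^ rc for rc in ROUND_CONSTANTS]

# Distinct constants guarantee distinct round keys, removing the
# round-to-round symmetry that slide and invariant attacks exploit.
assert len(set(round_keys)) == len(ROUND_CONSTANTS)
```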
== Optimization ==
Daemen and Rijmen assert that one of the goals of optimizing the cipher is reducing the overall workload, the product of the round complexity and the number of rounds. There are two approaches to address this goal:
local optimization improves the worst-case behavior of a single round (two rounds for Feistel ciphers);
global optimization optimizes the worst-case behavior of more than one round, allowing the use of less sophisticated components.
== Reduced-round ciphers ==
Cryptanalysis techniques include the use of versions of ciphers with fewer rounds than specified by their designers. Since a single round is usually cryptographically weak, many attacks that fail against the full version of a cipher will work on such reduced-round variants. The result of such an attack provides valuable information about the strength of the algorithm; a typical break of the full cipher starts out as a success against a reduced-round one.
Sateesan et al. propose using the reduced-round versions of lightweight hashes and ciphers as non-cryptographic hash functions.
== References ==
== Sources ==
Aumasson, Jean-Philippe (6 November 2017). Serious Cryptography: A Practical Introduction to Modern Encryption. No Starch Press. pp. 56–57. ISBN 978-1-59327-826-7. OCLC 1012843116.
Beierle, Christof; Canteaut, Anne; Leander, Gregor; Rotella, Yann (2017). "Proving Resistance Against Invariant Attacks: How to Choose the Round Constants" (PDF). Advances in Cryptology – CRYPTO 2017. Lecture Notes in Computer Science. Vol. 10402. Springer International Publishing. pp. 647–678. doi:10.1007/978-3-319-63715-0_22. eISSN 1611-3349. ISBN 978-3-319-63714-3. ISSN 0302-9743.
Biryukov, Alex; Wagner, David (1999). "Slide Attacks". Fast Software Encryption. Lecture Notes in Computer Science. Vol. 1636. Springer Berlin Heidelberg. pp. 245–259. doi:10.1007/3-540-48519-8_18. ISBN 978-3-540-66226-6. ISSN 0302-9743.
Biryukov, Alex (2005). "Product Cipher, Superencryption". Encyclopedia of Cryptography and Security. Springer US. pp. 480–481. doi:10.1007/0-387-23483-7_320. ISBN 978-0-387-23473-1.
Daemen, Joan; Rijmen, Vincent (9 March 2013). The Design of Rijndael: AES - The Advanced Encryption Standard (PDF). Springer Science & Business Media. ISBN 978-3-662-04722-4. OCLC 1259405449.
Dunkelman, Orr; Keller, Nathan; Lasry, Noam; Shamir, Adi (2020). "New Slide Attacks on Almost Self-similar Ciphers". Advances in Cryptology – EUROCRYPT 2020. Lecture Notes in Computer Science. Vol. 12105. Springer International Publishing. pp. 250–279. doi:10.1007/978-3-030-45721-1_10. eISSN 1611-3349. ISBN 978-3-030-45720-4. ISSN 0302-9743.
Kaliski, Burton S.; Yin, Yiqun Lisa (1995). "On Differential and Linear Cryptanalysis of the RC5 Encryption Algorithm" (PDF). Advances in Cryptology – CRYPT0' 95. Lecture Notes in Computer Science. Vol. 963. Springer Berlin Heidelberg. pp. 171–184. doi:10.1007/3-540-44750-4_14. ISBN 978-3-540-60221-7. ISSN 0302-9743.
Robshaw, M.J.B. (August 2, 1995). Block Ciphers (PDF) (Version 2.0 ed.). Redwood City, CA: RSA Laboratories.
Sateesan, Arish; Biesmans, Jelle; Claesen, Thomas; Vliegen, Jo; Mentens, Nele (April 2023). "Optimized algorithms and architectures for fast non-cryptographic hash functions in hardware" (PDF). Microprocessors and Microsystems. 98: 104782. doi:10.1016/j.micpro.2023.104782. ISSN 0141-9331.
Schneier, Bruce (January 2000). "A Self-Study Course in Block-Cipher Cryptanalysis" (PDF). Cryptologia. 24 (1): 18–34. doi:10.1080/0161-110091888754. S2CID 53307028.
In cryptography, a salt is random data fed as an additional input to a one-way function that hashes data, a password or passphrase. Salting helps defend against attacks that use precomputed tables (e.g. rainbow tables), by vastly increasing the size of the table needed for a successful attack. It also helps protect passwords that occur multiple times in a database, as a new salt is used for each password instance. Additionally, salting does not place any burden on users.
Typically, a unique salt is randomly generated for each password. The salt and the password (or its version after key stretching) are concatenated and fed to a cryptographic hash function, and the output hash value is then stored with the salt in a database. The salt does not need to be encrypted, because knowing the salt would not help the attacker.
Salting is broadly used in cybersecurity, from Unix system credentials to Internet security.
Salts are related to cryptographic nonces.
== Example ==
Without a salt, identical passwords will map to identical hash values, which could make it easier for a hacker to guess the passwords from their hash value.
Instead, a salt is generated and appended to each password, which causes the resultant hash to output different values for the same original password.
The salt and hash are then stored in the database. To later test if a password a user enters is correct, the same process can be performed on it (appending that user's salt to the password and calculating the resultant hash): if the result does not match the stored hash, it could not have been the correct password that was entered.
In practice, a salt is usually generated using a cryptographically secure pseudorandom number generator (CSPRNG), which is designed to produce unpredictable values. Some systems instead use timestamps or simple counters as a source of salt, though this is generally discouraged because of the lower security. Sometimes, a salt is generated by combining a random value with additional information, such as a timestamp or user-specific data, to ensure uniqueness across different systems or time periods.
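The generate-then-hash-then-verify flow described above can be sketched in Python. SHA-256 is used here purely to illustrate salting; a real system should instead use a deliberately slow password-hashing function such as PBKDF2 (`hashlib.pbkdf2_hmac`), bcrypt, scrypt, or Argon2:

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Generate a fresh 16-byte salt and hash the salted password."""
    salt = os.urandom(16)                 # salt from the OS CSPRNG
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                   # store BOTH with the user record

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the salted hash and compare in constant time."""
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse")
assert verify_password("correct horse", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Because a new salt is drawn for every call, hashing the same password twice yields two different stored digests, which is exactly the property that defeats precomputed tables.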
== Common mistakes ==
=== Salt re-use ===
Using the same salt for all passwords is dangerous because a precomputed table which simply accounts for the salt will render the salt useless.
Generation of precomputed tables for databases with unique salts for every password is not viable because of the computational cost of doing so. But, if a common salt is used for all the entries, creating such a table (that accounts for the salt) then becomes a viable and possibly successful attack.
Because salt re-use can cause users with the same password to have the same hash, cracking a single hash can result in other passwords being compromised too.
=== Salt length ===
If a salt is too short, an attacker may precompute a table of every possible salt appended to every likely password. Using a long salt ensures such a table would be prohibitively large. 16 bytes (128 bits) or more is generally sufficient to provide a large enough space of possible values, minimizing the risk of collisions (i.e., two different passwords ending up with the same salt).
== Benefits ==
To understand the difference between cracking a single password and a set of them, consider a file with users and their hashed passwords. Say the file is unsalted. Then an attacker could pick a string, call it attempt[0], and then compute hash(attempt[0]). A user whose hash stored in the file is hash(attempt[0]) may or may not have password attempt[0]. However, even if attempt[0] is not the user's actual password, it will be accepted as if it were, because the system can only check passwords by computing the hash of the password entered and comparing it to the hash stored in the file. Thus, each match cracks a user password, and the chance of a match rises with the number of passwords in the file. In contrast, if salts are used, the attacker would have to compute hash(attempt[0] || salt[a]), compare against entry A, then hash(attempt[0] || salt[b]), compare against entry B, and so on. This prevents any one attempt from cracking multiple passwords, given that salt re-use is avoided.
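The difference in attacker workload can be made concrete with a small sketch; the account names, passwords, and guess list below are illustrative:

```python
import hashlib
import os

accounts = [("alice", "123456"), ("bob", "123456")]
guesses = ["password", "123456", "letmein"]

# Unsalted: one precomputed hash per guess tests against EVERY account.
unsalted_db = {u: hashlib.sha256(p.encode()).digest() for u, p in accounts}
table = {hashlib.sha256(g.encode()).digest(): g for g in guesses}  # 3 hashes
cracked = {u: table[h] for u, h in unsalted_db.items() if h in table}
# Both accounts fall to the same table entry, since their hashes match.

# Salted: each (account, guess) pair costs a fresh hash computation,
# and identical passwords no longer share a hash.
salted_db = {}
for u, p in accounts:
    salt = os.urandom(16)
    salted_db[u] = (salt, hashlib.sha256(salt + p.encode()).digest())

work = 0
for u, (salt, h) in salted_db.items():
    for g in guesses:
        work += 1
        if hashlib.sha256(salt + g.encode()).digest() == h:
            break
# work now scales with the number of accounts, not just the number of guesses.
```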
Salts also combat the use of precomputed tables for cracking passwords. Such a table might simply map common passwords to their hashes, or it might do something more complex, like store the start and end points of a set of precomputed hash chains. In either case, salting can defend against the use of precomputed tables by lengthening hashes and having them draw from larger character sets, making it less likely that the table covers the resulting hashes. In particular, a precomputed table would need to cover the string [salt + hash] rather than simply [hash].
The modern shadow password system, in which password hashes and other security data are stored in a non-public file, somewhat mitigates these concerns. However, they remain relevant in multi-server installations which use centralized password management systems to push passwords or password hashes to multiple systems. In such installations, the root account on each individual system may be treated as less trusted than the administrators of the centralized password system, so it remains worthwhile to ensure that the security of the password hashing algorithm, including the generation of unique salt values, is adequate.
Another (lesser) benefit of a salt is as follows: two users might choose the same string as their password. Without a salt, this password would be stored as the same hash string in the password file. This would disclose the fact that the two accounts have the same password, allowing anyone who knows one of the account's passwords to access the other account. By salting the passwords with two random characters, even if two accounts use the same password, no one can discover this just by reading hashes. Salting also makes it extremely difficult to determine if a person has used the same password for multiple systems.
== Unix implementations ==
=== 1970s–1980s ===
Earlier versions of Unix used a password file /etc/passwd to store the hashes of salted passwords (passwords prefixed with two-character random salts). In these older versions of Unix, the salt was also stored in the passwd file (as cleartext) together with the hash of the salted password. The password file was publicly readable for all users of the system. This was necessary so that user-privileged software tools could find user names and other information. The security of passwords is therefore protected only by the one-way functions (enciphering or hashing) used for the purpose. Early Unix implementations limited passwords to eight characters and used a 12-bit salt, which allowed for 4,096 possible salt values. This was an appropriate balance for 1970s computational and storage costs.
=== 1980s–present ===
The shadow password system is used to limit access to hashes and salt. The salt is eight characters, the hash is 86 characters, and the password length is effectively unlimited, barring stack overflow errors.
== Web-application implementations ==
It is common for a web application to store in a database the hash value of a user's password. Without a salt, a successful SQL injection attack may yield easily crackable passwords. Because many users re-use passwords for multiple sites, the use of a salt is an important component of overall web application security. Some additional references for using a salt to secure password hashes in specific languages or libraries (PHP, the .NET libraries, etc.) can be found in the external links section below.
== See also ==
Password cracking
Cryptographic nonce
Initialization vector
Padding
"Spice" in the Hasty Pudding cipher
Rainbow tables
Pepper (cryptography)
== References ==
== External links ==
Wille, Christoph (2004-01-05). "Storing Passwords - done right!".
OWASP Cryptographic Cheat Sheet
how to encrypt user passwords
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer or data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be completed by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agencies. Computer and network surveillance programs are widespread today, and almost all Internet traffic can be monitored.
Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious or abnormal activity, and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.
Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".
== Network surveillance ==
The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet. For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies.
Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network. Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets, so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. These technologies can be used both by intelligence agencies and for illegal activities.
There is far too much data gathered by these packet sniffers for human investigators to manually search through. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out, and reporting to investigators those bits of information which are "interesting", for example, the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group. Billions of dollars per year are spent by agencies such as the Information Awareness Office, NSA, and the FBI, for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies.
Similar systems are now used by the Iranian security services to more easily distinguish between peaceful citizens and terrorists. The technology was allegedly installed by Germany's Siemens AG and Finland's Nokia.
The Internet's rapid development has made it a primary form of communication, and more people are potentially subject to Internet surveillance. Network monitoring has both advantages and disadvantages. For instance, systems described as "Web 2.0" have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0", stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content that motivates more people to communicate with friends online. However, Internet surveillance also has a disadvantage. One researcher from Uppsala University said "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance". Surveillance companies monitor people while they are focused on work or entertainment. Employers themselves also monitor their employees, in order to protect the company's assets and to control public communications, but most importantly to make sure that their employees are actively working and being productive. Such monitoring can affect people emotionally, because it can cause emotions like jealousy. A research group states "...we set out to test the prediction that feelings of jealousy lead to 'creeping' on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy". The study shows that women can become jealous of other people when they are in an online group.
Virtual assistants have become socially integrated into many people's lives. Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services. They are constantly listening for commands and recording parts of conversations to help improve their algorithms. If law enforcement could be called using a virtual assistant, it would be able to gain access to all the information saved on the device; and because the device is connected to the home's internet, law enforcement would know the exact location of the individual making the call. While virtual assistant devices are popular, many debate the lack of privacy: the devices listen to every conversation the owner is having, even when the owner is not addressing the assistant, in the hope that the owner will need assistance, as well as to gather data.
== Corporate surveillance ==
Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor their products and/or services to be desirable by their customers. The data can also be sold to other corporations so that they can use it for the aforementioned purpose, or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of the search engine by analyzing their search history and emails (if they use free webmail services), which are kept in a database.
Such type of surveillance is also used to establish business purposes of monitoring, which may include the following:
Preventing misuse of resources. Companies can discourage unproductive personal activities such as online shopping or web surfing on company time. Monitoring employee performance is one way to reduce unnecessary network traffic and reduce the consumption of network bandwidth.
Promoting adherence to policies. Online surveillance is one means of verifying employee observance of company networking policies.
Preventing lawsuits. Firms can be held liable for discrimination or employee harassment in the workplace. Organizations can also be involved in infringement suits through employees that distribute copyrighted material over corporate networks.
Safeguarding records. Federal legislation requires organizations to protect personal information. Monitoring can determine the extent of compliance with company policies and programs overseeing information security. Monitoring may also deter unlawful appropriation of personal information, and potential spam or viruses.
Safeguarding company assets. The protection of intellectual property, trade secrets, and business strategies is a major concern. The ease of information transmission and storage makes it imperative to monitor employee actions as part of a broader policy.
The second component of prevention is determining the ownership of technology resources. The ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated. There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm.
For instance, Google Search stores identifying information for each web search. An IP address and the search phrase used are stored in a database for up to 18 months. Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondences. Google is, by far, the largest Internet advertising agency—millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer. These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts, and search engine histories, is stored by Google to use to build a profile of the user to deliver better-targeted advertising.
The United States government often gains access to these databases, either by producing a warrant for it, or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies for augmenting the profiles of individuals that it is monitoring.
== Malicious software ==
In addition to monitoring information sent over a computer network, there is also a way to examine data stored on a computer's hard drive, and to monitor the activities of a person using the computer. A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, can monitor computer use, collect passwords, and/or report back activities in real-time to its operator through the Internet connection. A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or Web server.
There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers, and leave "backdoors" which are accessible over a network connection, and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies, such as CIPAV and Magic Lantern. More often, however, viruses created by other people or spyware installed by marketing agencies can be used to gain access through the security breaches that they create.
Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack. Another source of security cracking is employees giving out information or users using brute force tactics to guess their password.
One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and install it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer. One well-known worm that uses this method of spreading itself is Stuxnet.
== Social network analysis ==
One common form of surveillance is to create maps of social networks based on data from social networking sites as well as from traffic analysis information from phone call records such as those in the NSA call database, and internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities.
Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are currently investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network.
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office:
The purpose of the SSNA algorithms program is to extend techniques of social network analysis to assist with distinguishing potential terrorist cells from legitimate groups of people ... In order to be successful SSNA will require information on the social interactions of the majority of people around the globe. Since the Defense Department cannot easily distinguish between peaceful citizens and terrorists, it will be necessary for them to gather data on innocent civilians as well as on potential terrorists.
== Monitoring from a distance ==
With only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters.
IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, and so it's possible to log key strokes without actually requiring logging software to run on the associated computer.
In 2015, lawmakers in California passed the Electronic Communications Privacy Act, which prohibits any investigative personnel in the state from forcing businesses to hand over digital communications without a warrant. At the same time, California state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information about their usage of the Stingray phone tracker device. When the law took effect in January 2016, cities were required to operate under new guidelines governing how and when law enforcement use this device. Some legislators and public officials have opposed the technology because of its warrantless tracking, and a city that wants to use the device must now obtain approval at a public hearing. Some jurisdictions, such as Santa Clara County, have pulled out of using the StingRay.
And it has also been shown, by Adi Shamir et al., that even the high frequency noise emitted by a CPU includes information about the instructions being executed.
== Policeware and govware ==
In German-speaking countries, spyware used or made by the government is sometimes called govware. Some countries like Switzerland and Germany have a legal framework governing the use of such software. Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan).
Policeware is a software designed to police citizens by monitoring the discussion and interaction of its citizens. Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software installed in Internet service providers' networks to log computer communication, including transmitted e-mails. Magic Lantern is another such application, this time running in a targeted computer in a trojan style and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan.
The Clipper Chip, formerly known as MYK-78, is a small hardware chip, designed in the 1990s, that the government could install into phones. It was intended to secure private communications and data while allowing the government to decode the encoded voice messages it carried. The Clipper Chip was designed during the Clinton administration to, “…protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes.” The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. This raised controversy in the public, because the Clipper Chip was thought to be the next “Big Brother” tool, and the controversy led to the failure of the Clipper proposal, even though there were many attempts to push the agenda.
The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress. CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without digital rights management (DRM) that prevented access to this material without the permission of the copyright holder.
== Surveillance as an aid to censorship ==
Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some forms of surveillance. And even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship.
In March 2013 Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents. The report includes a list of "State Enemies of the Internet", Bahrain, China, Iran, Syria, and Vietnam, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list of "Corporate Enemies of the Internet", including Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany), companies that sell products that are liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive and they are likely to be expanded in the future.
Protection of sources is no longer just a matter of journalistic ethics. Journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online, storing it on a computer hard-drive or mobile phone. Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities.
== Countermeasures ==
Countermeasures against surveillance vary with the type of eavesdropping involved. Defending against electromagnetic eavesdropping, such as TEMPEST and its derivatives, often requires hardware shielding, such as Faraday cages, to block unintended emissions. Encryption is the key defense against interception of data in transit. When encryption is properly implemented end-to-end, or when tools such as Tor are used, and provided the device itself remains uncompromised and free from direct monitoring via electromagnetic analysis, audio recording, or similar methods, the content of communication is generally considered secure.
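The end-to-end principle can be sketched with a toy construction: only the two endpoints hold the key, so an intermediary relaying the message sees only ciphertext. The SHA-256-based keystream and HMAC tag below are simplified assumptions for illustration, not a production cipher.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key || nonce || counter until enough bytes exist."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# Only the two endpoints share `key`; a relay forwarding `blob` sees no plaintext.
key = secrets.token_bytes(32)
blob = encrypt(key, b"meet at noon")
assert decrypt(key, blob) == b"meet at noon"
```

A tampered ciphertext fails the HMAC check rather than decrypting to garbage, which is the property real authenticated-encryption modes provide.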
For a number of years, numerous government initiatives have sought to weaken encryption or introduce backdoors for law enforcement access. Privacy advocates and the broader technology industry strongly oppose these measures, arguing that any backdoor would inevitably be discovered and exploited by malicious actors. Such vulnerabilities would endanger everyone's private data while failing to hinder criminals, who could switch to alternative platforms or create their own encrypted systems.
Surveillance remains effective even when encryption is correctly employed, by exploiting metadata that is often accessible to packet sniffers unless countermeasures are applied. This includes DNS queries, IP addresses, phone numbers, URLs, timestamps, and communication durations, which can reveal significant information about user activity and interactions or associations with a person of interest.
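The point can be sketched with made-up flow records: even when every payload is encrypted, the visible (timestamp, source, destination) metadata is enough to single out who communicates with an endpoint of interest. All addresses below are hypothetical documentation addresses.

```python
from collections import Counter

# Hypothetical flow records: payloads are encrypted, but metadata is visible.
flows = [
    ("2024-05-01T09:00", "10.0.0.5", "203.0.113.7"),
    ("2024-05-01T09:02", "10.0.0.8", "203.0.113.7"),
    ("2024-05-01T21:15", "10.0.0.5", "203.0.113.7"),
    ("2024-05-01T21:20", "10.0.0.5", "198.51.100.2"),
]

target = "203.0.113.7"  # a hypothetical server of interest
contacts = Counter(src for _, src, dst in flows if dst == target)

# Frequency of contact alone singles out 10.0.0.5, without any decryption.
print(contacts.most_common(1))  # [('10.0.0.5', 2)]
```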
== See also ==
Anonymizer, a software system that attempts to make network activity untraceable
Computer surveillance in the workplace
Cyber spying
Datacasting, a means of broadcasting files and Web pages using radio waves, allowing receivers near total immunity from traditional network surveillance techniques.
Differential privacy, a method to maximize the accuracy of queries from statistical databases while minimizing the chances of violating the privacy of individuals.
ECHELON, a signals intelligence (SIGINT) collection and analysis network operated on behalf of Australia, Canada, New Zealand, the United Kingdom, and the United States, also known as AUSCANNZUKUS and Five Eyes
GhostNet, a large-scale cyber spying operation discovered in March 2009
List of government surveillance projects
Internet censorship and surveillance by country
Mass surveillance
China's Golden Shield Project
Mass surveillance in Australia
Mass surveillance in China
Mass surveillance in East Germany
Mass surveillance in India
Mass surveillance in North Korea
Mass surveillance in the United Kingdom
Mass surveillance in the United States
Surveillance
Surveillance by the United States government:
2013 mass surveillance disclosures, reports about NSA and its international partners' mass surveillance of foreign nationals and U.S. citizens
Bullrun (code name), a highly classified NSA program to preserve its ability to eavesdrop on encrypted communications by influencing and weakening encryption standards, by obtaining master encryption keys, and by gaining access to data before or after it is encrypted either by agreement, by force of law, or by computer network exploitation (hacking)
Carnivore, a U.S. Federal Bureau of Investigation system to monitor email and electronic communications
COINTELPRO, a series of covert, and at times illegal, projects conducted by the FBI aimed at U.S. domestic political organizations
Communications Assistance For Law Enforcement Act
Computer and Internet Protocol Address Verifier (CIPAV), a data gathering tool used by the U.S. Federal Bureau of Investigation (FBI)
Dropmire, a secret surveillance program by the NSA aimed at surveillance of foreign embassies and diplomatic staff, including those of NATO allies
Magic Lantern, keystroke logging software developed by the U.S. Federal Bureau of Investigation
Mass surveillance in the United States
NSA call database, a database containing metadata for hundreds of billions of telephone calls made in the U.S.
NSA warrantless surveillance (2001–07)
NSA whistleblowers: William Binney, Thomas Andrews Drake, Mark Klein, Edward Snowden, Thomas Tamm, Russ Tice
Spying on United Nations leaders by United States diplomats
Stellar Wind (code name), code name for information collected under the President's Surveillance Program
Tailored Access Operations, NSA's hacking program
Terrorist Surveillance Program, an NSA electronic surveillance program
Total Information Awareness, a project of the Defense Advanced Research Projects Agency (DARPA)
TEMPEST, codename for studies of unintentional intelligence-bearing signals which, if intercepted and analyzed, may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment
== References ==
== External links ==
"Selected Papers in Anonymity", Free Haven Project, accessed 16 September 2011.
Yan, W. (2019) Introduction to Intelligent Surveillance: Surveillance Data Capture, Transmission, and Analytics, Springer. | Wikipedia/Computer_and_network_surveillance |
Cryptography is the practice and study of encrypting information, or in other words, securing information from unauthorized access. There are many different cryptography laws in different nations. Some countries prohibit the export of cryptography software and/or encryption algorithms or cryptanalysis methods. Some countries require decryption keys to be recoverable in case of a police investigation.
== Overview ==
Issues regarding cryptography law fall into four categories:
Export controls, restrictions on the export of cryptographic methods and tools from a country to other countries or commercial entities. There are international export control agreements, the main one being the Wassenaar Arrangement. The Wassenaar Arrangement was created after the dissolution of COCOM (Coordinating Committee for Multilateral Export Controls), which in 1989 "decontrolled password and authentication-only cryptography."
Import controls, restrictions on using certain types of cryptography within a country.
Patent issues, concerning the use of cryptographic tools that are patented.
Search and seizure issues, on whether and under what circumstances a person can be compelled to decrypt data files or reveal an encryption key.
== Legal issues ==
=== Prohibitions ===
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.
In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China and Iran, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.
In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List. Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic. However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe.
=== Export controls ===
In the 1990s, there were several challenges to US export regulation of cryptography. After the source code for Philip Zimmermann's Pretty Good Privacy (PGP) encryption program found its way onto the Internet in June 1991, a complaint by RSA Security (then called RSA Data Security, Inc.) resulted in a lengthy criminal investigation of Zimmermann by the US Customs Service and the FBI, though no charges were ever filed. Daniel J. Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.
In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled. Cryptography exports from the US became less strictly regulated as a consequence of a major relaxation in 2000; there are no longer very many restrictions on key sizes in US-exported mass-market software. Since this relaxation in US export restrictions, and because most personal computers connected to the Internet include US-sourced web browsers such as Firefox or Internet Explorer, almost every Internet user worldwide has potential access to quality cryptography via their browsers (e.g., via Transport Layer Security). The Mozilla Thunderbird and Microsoft Outlook E-mail client programs can similarly transmit and receive emails via TLS, and can send and receive emails encrypted with S/MIME. Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
=== NSA involvement ===
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. The NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography. DES was designed to be resistant to differential cryptanalysis, a powerful and general cryptanalytic technique known to the NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s. According to Steven Levy, IBM discovered differential cryptanalysis, but kept the technique secret at the NSA's request. The technique became publicly known only when Biham and Shamir re-discovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.
Another instance of the NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. The cipher algorithm (called Skipjack) was then classified (declassified in 1998, long after the Clipper initiative lapsed). The classified cipher caused concerns that the NSA had deliberately made the cipher weak in order to assist its intelligence efforts. The whole initiative was also criticized based on its violation of Kerckhoffs's Principle, as the scheme included a special escrow key held by the government for use by law enforcement (i.e. wiretapping).
=== Digital rights management ===
Cryptography is central to digital rights management (DRM), a group of techniques for technologically controlling use of copyrighted material, being widely implemented and deployed at the behest of some copyright holders. In 1998, U.S. President Bill Clinton signed the Digital Millennium Copyright Act (DMCA), which criminalized all production, dissemination, and use of certain cryptanalytic techniques and technology (now known or later discovered); specifically, those that could be used to circumvent DRM technological schemes. This had a noticeable impact on the cryptography research community since an argument can be made that any cryptanalytic research violated the DMCA. Similar statutes have since been enacted in several countries and regions, including the implementation in the EU Copyright Directive. Similar restrictions are called for by treaties signed by World Intellectual Property Organization member-states.
The United States Department of Justice and FBI have not enforced the DMCA as rigorously as had been feared by some, but the law, nonetheless, remains a controversial one. Niels Ferguson, a well-respected cryptography researcher, has publicly stated that he will not release some of his research into an Intel security design for fear of prosecution under the DMCA. Cryptologist Bruce Schneier has argued that the DMCA encourages vendor lock-in, while inhibiting actual measures toward cyber-security. Both Alan Cox (longtime Linux kernel developer) and Edward Felten (and some of his students at Princeton) have encountered problems related to the Act. Dmitry Sklyarov was arrested during a visit to the US from Russia, and jailed for five months pending trial for alleged violations of the DMCA arising from work he had done in Russia, where the work was legal. In 2007, the cryptographic keys responsible for Blu-ray and HD DVD content scrambling were discovered and released onto the Internet. In both cases, the Motion Picture Association of America sent out numerous DMCA takedown notices, and there was a massive Internet backlash triggered by the perceived impact of such notices on fair use and free speech.
=== Forced disclosure of encryption keys ===
In the United Kingdom, the Regulation of Investigatory Powers Act gives UK police the powers to force suspects to decrypt files or hand over passwords that protect encryption keys. Failure to comply is an offense in its own right, punishable on conviction by a two-year jail sentence or up to five years in cases involving national security. Successful prosecutions have occurred under the Act; the first, in 2009, resulted in a term of 13 months' imprisonment. Similar forced disclosure laws in Australia, Finland, France, and India compel individual suspects under investigation to hand over encryption keys or passwords during a criminal investigation.
In the United States, the federal criminal case of United States v. Fricosu addressed whether a search warrant can compel a person to reveal an encryption passphrase or password. The Electronic Frontier Foundation (EFF) argued that this is a violation of the protection from self-incrimination given by the Fifth Amendment. In 2012, the court ruled that under the All Writs Act, the defendant was required to produce an unencrypted hard drive for the court.
In many jurisdictions, the legal status of forced disclosure remains unclear.
The 2016 FBI–Apple encryption dispute concerns the ability of courts in the United States to compel manufacturers' assistance in unlocking cell phones whose contents are cryptographically protected.
As a potential counter-measure to forced disclosure some cryptographic software supports plausible deniability, where the encrypted data is indistinguishable from unused random data (for example such as that of a drive which has been securely wiped).
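A rough sketch of why this works: well-encrypted data and freshly wiped (random-filled) storage both look statistically uniform, so a crude byte-entropy test cannot tell them apart. The keystream cipher below is a toy for illustration only, not a real cipher.

```python
import hashlib
import math
import secrets
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical byte entropy in bits per byte (8.0 = indistinguishable from random)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy illustration: XOR plaintext with a SHA-256-derived keystream."""
    stream = b""
    i = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(p ^ k for p, k in zip(plaintext, stream))

plaintext = b"A" * 4096                  # highly regular: entropy near 0 bits/byte
ciphertext = toy_encrypt(secrets.token_bytes(32), plaintext)
random_fill = secrets.token_bytes(4096)  # what a securely wiped drive looks like

assert shannon_entropy(plaintext) < 1.0
# Both ciphertext and random fill approach 8 bits/byte; a simple statistical
# test cannot distinguish encrypted data from unused random-looking space.
assert shannon_entropy(ciphertext) > 7.5 and shannon_entropy(random_fill) > 7.5
```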
== Cryptography law in different countries ==
=== China ===
In October 1999, the State Council promulgated the Regulations on the Administration of Commercial Cryptography. According to these regulations, commercial cryptography was treated as a state secret.
On 26 October 2019, the Standing Committee of the National People's Congress promulgated the Cryptography Law of the People's Republic of China. This law went into effect at the start of 2020. The law categorizes cryptography into three categories:
Core cryptography, which is a state secret and suitable for information up to top secret;
Ordinary cryptography, which is also a state secret and suitable for information up to secret;
Commercial cryptography, which protects information that is not a state secret.
The law also states that there should be a "mechanism of both in-process and ex-post supervision on commercial cryptography, which combines routine supervision with random inspection" (implying that the Chinese government should get access to encrypted servers). It also states that foreign providers of commercial encryption need some sort of state approval.
Cryptosystems authorized for use in China include SM2, SM3, SM4 and SM9.
=== France ===
Since 2004, the law for trust in the digital economy (French: Loi pour la confiance dans l'économie numérique; abbreviated LCEN) has mostly liberalized the use of cryptography.
As long as cryptography is only used for authentication and integrity purposes, it can be freely used. The cryptographic key or the nationality of the entities involved in the transaction do not matter. Typical e-business websites fall under this liberalized regime.
Exportation and importation of cryptographic tools to or from foreign countries must be either declared (when the other country is a member of the European Union) or requires an explicit authorization (for countries outside the EU).
=== India ===
Section 69 of the Information Technology Act, 2000 (as amended in 2008) authorizes Indian government officials or police to listen in on any phone calls, read any SMS messages or emails, or monitor the websites that anyone visits, without requiring a warrant. (However, this is a violation of article 21 of the Constitution of India.) This section also enables the central government of India or a state government of India to compel any agency to decrypt information.
According to the Information Technology (Intermediaries Guidelines) Rules, 2011, intermediaries are required to provide information to Indian government agencies for investigative or other purposes.
ISP license holders are freely allowed to use encryption keys up to 40 bits. Beyond that, they are required to obtain written permission and to deposit the decryption key with the Department of Telecommunications.
Per the 2012 SEBI Master Circular for Stock Exchange or Cash Market (issued by the Securities and Exchange Board of India), it is the responsibility of stock exchanges to maintain data reliability and confidentiality through the use of encryption. Per Reserve Bank of India guidance issued in 2001, banks must use at least 128-bit SSL to protect browser-to-bank communication; they must also encrypt sensitive data internally.
Electronics, including cryptographic products, is one of the categories of dual-use items in the Special Chemicals, Organisms, Materials, Equipment and Technologies list (SCOMET; part of the Foreign Trade (Development & Regulation) Act, 1992). However, this regulation does not specify which cryptographic products are subject to export controls.
=== United States ===
In the United States, the International Traffic in Arms Regulation restricts the export of cryptography.
== See also ==
Official Secrets Act (United Kingdom, India, Ireland, Malaysia and formerly New Zealand)
Regulation of Investigatory Powers Act 2000 (United Kingdom)
Restrictions on the import of cryptography
United States v. Boucher (2009), on the right of a criminal defendant not to reveal a passphrase
FBI–Apple encryption dispute on whether cellphone manufacturers can be compelled to assist in their unlocking
== References ==
== External links ==
Bert-Jaap Koops' Crypto Law Survey - existing and proposed laws and regulations on cryptography | Wikipedia/Cryptography_laws_in_different_nations |
Lattice-based cryptography is the generic term for constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Lattice-based constructions support important standards of post-quantum cryptography. Unlike more widely used and known public-key schemes such as RSA, Diffie–Hellman, or elliptic-curve cryptosystems — which could, theoretically, be defeated using Shor's algorithm on a quantum computer — some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. Furthermore, many lattice-based constructions are considered to be secure under the assumption that certain well-studied computational lattice problems cannot be solved efficiently.
In 2024 NIST announced the Module-Lattice-Based Digital Signature Standard for post-quantum cryptography.
== History ==
In 1996, Miklós Ajtai introduced the first lattice-based cryptographic construction whose security could be based on the hardness of well-studied lattice problems, and Cynthia Dwork showed that a certain average-case lattice problem, known as short integer solutions (SIS), is at least as hard to solve as a worst-case lattice problem. She then showed a cryptographic hash function whose security is equivalent to the computational hardness of SIS.
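The flavor of an Ajtai-style SIS-based hash can be sketched as follows, with tiny, insecure parameters chosen purely for illustration: the hash is a matrix-vector product modulo q, and a collision would yield a short nonzero vector x - x' with A·(x - x') ≡ 0 (mod q), i.e. a solution to the SIS problem for A.

```python
import secrets

# Toy Ajtai-style hash (illustration only; real parameters are far larger):
# h_A(x) = A·x mod q for a public random matrix A and a binary input x.
n, m, q = 4, 16, 97

A = [[secrets.randbelow(q) for _ in range(m)] for _ in range(n)]

def hash_sis(x):
    """Compress m input bits into n residues mod q."""
    assert len(x) == m and all(bit in (0, 1) for bit in x)
    return tuple(sum(A[i][j] * x[j] for j in range(m)) % q for i in range(n))

x = [secrets.randbelow(2) for _ in range(m)]
digest = hash_sis(x)   # m input bits map to n values mod q
assert len(digest) == n
```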
In 1998, Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman introduced a lattice-based public-key encryption scheme, known as NTRU. However, their scheme is not known to be at least as hard as solving a worst-case lattice problem.
The first lattice-based public-key encryption scheme whose security was proven under worst-case hardness assumptions was introduced by Oded Regev in 2005, together with the learning with errors problem (LWE). Since then, much follow-up work has focused on improving Regev's security proof and improving the efficiency of the original scheme. Much more work has been devoted to constructing additional cryptographic primitives based on LWE and related problems. For example, in 2009, Craig Gentry introduced the first fully homomorphic encryption scheme, which was based on a lattice problem.
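A minimal sketch of Regev-style LWE encryption of one bit follows, with tiny parameters that are insecure but keep the arithmetic visible; the narrow {-1, 0, 1} error distribution is a simplification of the Gaussian noise in the actual scheme.

```python
import secrets

# Toy Regev-style LWE encryption of a single bit (tiny, insecure parameters).
n, m, q = 8, 20, 1009          # dimension, number of samples, modulus

def small_error():
    return secrets.choice([-1, 0, 1])   # noise from a narrow distribution

# Key generation: secret s; public key is m noisy inner products b_i = <a_i, s> + e_i.
s = [secrets.randbelow(q) for _ in range(n)]
A = [[secrets.randbelow(q) for _ in range(n)] for _ in range(m)]
b = [(sum(a * si for a, si in zip(row, s)) + small_error()) % q for row in A]

def encrypt(bit):
    """Sum a random subset of samples; encode the bit as 0 or q/2."""
    subset = [i for i in range(m) if secrets.randbelow(2)]
    u = [sum(A[i][j] for i in subset) % q for j in range(n)]
    v = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    """Remove <u, s>; the residue is near 0 for bit 0, near q/2 for bit 1."""
    d = (v - sum(ui * si for ui, si in zip(u, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
```

Correctness here is unconditional because the accumulated error is at most m = 20, far below the decision threshold q/4; real schemes choose parameters so decryption fails only with negligible probability.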
== Mathematical background ==
In linear algebra, a lattice
L
⊂
R
n
{\displaystyle L\subset \mathbb {R} ^{n}}
is the set of all integer linear combinations of vectors from a basis
{
b
1
,
…
,
b
n
}
{\displaystyle \{\mathbf {b} _{1},\ldots ,\mathbf {b} _{n}\}}
of
R
n
{\displaystyle \mathbb {R} ^{n}}
. In other words,
L
=
{
∑
a
i
b
i
:
a
i
∈
Z
}
.
{\displaystyle L={\Big \{}\sum a_{i}\mathbf {b} _{i}:a_{i}\in \mathbb {Z} {\Big \}}.}
For example,
Z
n
{\displaystyle \mathbb {Z} ^{n}}
is a lattice, generated by the standard basis for
R
n
{\displaystyle \mathbb {R} ^{n}}
. Crucially, the basis for a lattice is not unique. For example, the vectors
(
3
,
1
,
4
)
{\displaystyle (3,1,4)}
,
(
1
,
5
,
9
)
{\displaystyle (1,5,9)}
, and
(
2
,
−
1
,
0
)
{\displaystyle (2,-1,0)}
form an alternative basis for
Z
3
{\displaystyle \mathbb {Z} ^{3}}
.
The most important lattice-based computational problem is the shortest vector problem (SVP or sometimes GapSVP), which asks us to approximate the minimal Euclidean length of a non-zero lattice vector. This problem is thought to be hard to solve efficiently, even with approximation factors that are polynomial in
n
{\displaystyle n}
, and even with a quantum computer. Many (though not all) lattice-based cryptographic constructions are known to be secure if SVP is in fact hard in this regime.
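The claim that (3, 1, 4), (1, 5, 9), and (2, -1, 0) form an alternative basis of Z^3 can be checked directly: an integer matrix carries the standard basis to another basis of the same lattice exactly when its determinant is +1 or -1 (i.e., when it is unimodular).

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# The vectors from the text, as rows of an integer matrix.
B = [(3, 1, 4), (1, 5, 9), (2, -1, 0)]

# det(B) = 1, so B is unimodular and generates the same lattice Z^3
# as the standard basis.
assert det3(B) == 1
```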
== Selected lattice-based schemes ==
This section presents selected lattice-based schemes, grouped by primitive.
=== Encryption ===
Selected schemes for the purpose of encryption:
GGH encryption scheme, which is based on the closest vector problem (CVP). In 1999, Nguyen published a critical flaw in the scheme's design.
NTRUEncrypt.
=== Homomorphic encryption ===
Selected schemes for the purpose of homomorphic encryption:
Gentry's original scheme.
Brakerski and Vaikuntanathan.
=== Hash functions ===
Selected lattice-based cryptographic schemes for the purpose of hashing:
SWIFFT.
Lattice Based Hash Function (LASH).
=== Key exchange ===
Selected schemes for the purpose of key exchange, also called key establishment, key encapsulation and key encapsulation mechanism (KEM):
CRYSTALS-Kyber, which is built upon module learning with errors (module-LWE). Kyber was selected for standardization by the NIST in 2023. In August 2023, NIST published FIPS 203 (Initial Public Draft), and started referring to their Kyber version as Module-Lattice-based Key Encapsulation Mechanism (ML-KEM).
FrodoKEM, a scheme based on the learning with errors (LWE) problem. FrodoKEM joined the standardization call conducted by the National Institute of Standards and Technology (NIST) and made it to the third round of the process, but was then discarded due to low performance. In October 2022, the Twitter account associated with cryptologist Daniel J. Bernstein posted security issues in frodokem640.
NewHope is based on the ring learning with errors (RLWE) problem.
NTRU Prime.
Peikert's work, which is based on the ring learning with errors (RLWE) problem.
Saber, which is based on the module learning with rounding (module-LWR) problem.
=== Signing ===
This section lists a selection of lattice-based schemes for the purpose of digital signatures.
CRYSTALS-Dilithium, which is built upon module learning with errors (module-LWE) and module short integer solution (module-SIS). Dilithium was selected for standardization by the NIST. According to a message from Ray Perlner, writing on behalf of the NIST PQC team, the NIST module-LWE signing standard is to be based on version 3.1 of the Dilithium specification.
Falcon, which is built upon short integer solution (SIS) over NTRU. Falcon was selected for standardization by the NIST.
GGH signature scheme.
Güneysu, Lyubashevsky, and Pöppelmann's work, which is based on ring learning with errors (RLWE).
MITAKA, a variant of Falcon.
NTRUSign.
qTESLA, which is based on ring learning with errors (RLWE). The qTESLA scheme joined the standardization call conducted by the National Institute of Standards and Technology (NIST).
==== CRYSTALS-Dilithium ====
CRYSTALS-Dilithium or simply Dilithium is built upon module-LWE and module-SIS. Dilithium was selected by the NIST as the basis for a digital signature standard. According to a message from Ray Perlner, writing on behalf of the NIST PQC team, the NIST module-LWE signing standard is to be based on version 3.1 of the Dilithium specification. NIST's changes on Dilithium 3.1 intend to support additional randomness in signing (hedged signing) and other improvements.
Dilithium was one of the two digital signature schemes initially chosen by the NIST in their post-quantum cryptography process, the other one being SPHINCS⁺, which is not based on lattices but on hashes.
In August 2023, NIST published FIPS 204 (Initial Public Draft), and started calling Dilithium "Module-Lattice-Based Digital Signature Algorithm" (ML-DSA).
As of October 2023, ML-DSA was being implemented as a part of Libgcrypt, according to Falko Strenzke.
In August 2024, NIST officially standardized CRYSTALS-Dilithium under the name ML-DSA, establishing it as the primary standard (FIPS 204) for quantum-resistant digital signatures.
== Security ==
Lattice-based cryptographic constructions hold a great promise for public-key post-quantum cryptography. Indeed, the main alternative forms of public-key cryptography are schemes based on the hardness of factoring and related problems and schemes based on the hardness of the discrete logarithm and related problems. However, both factoring and the discrete logarithm problem are known to be solvable in polynomial time on a quantum computer. Furthermore, algorithms for factorization tend to yield algorithms for discrete logarithm, and conversely. This further motivates the study of constructions based on alternative assumptions, such as the hardness of lattice problems.
Many lattice-based cryptographic schemes are known to be secure assuming the worst-case hardness of certain lattice problems. I.e., if there exists an algorithm that can efficiently break the cryptographic scheme with non-negligible probability, then there exists an efficient algorithm that solves a certain lattice problem on any input. However, for the practical lattice-based constructions (such as schemes based on NTRU and even schemes based on LWE with efficient parameters), meaningful reduction-based guarantees of security are not known.
Assessments of the security levels provided by reduction arguments from hard problems, based on recommended parameter sizes, standard estimates of the computational complexity of the hard problems, and detailed examination of the steps in the reductions, are called concrete security and sometimes practice-oriented provable security. Some authors who have investigated concrete security for lattice-based cryptosystems have found that the provable security results for such systems do not provide any meaningful concrete security for practical values of the parameters.
== Functionality ==
For many cryptographic primitives, the only known constructions are based on lattices or closely related objects. These primitives include fully homomorphic encryption, indistinguishability obfuscation, cryptographic multilinear maps, and functional encryption.
== See also ==
Lattice problems
Learning with errors
Homomorphic encryption
Post-quantum cryptography
Ring learning with errors
Ring learning with Errors Key Exchange
== References ==
Books on cryptography have been published sporadically and with variable quality for a long time, despite the tension inherent in the subject: secrecy is of the essence in sending confidential messages (see Kerckhoffs' principle). In contrast, the revolutions in cryptography and secure communications since the 1970s are well covered in the available literature.
== Early history ==
An early example of a book about cryptography was a Roman work, now lost and known only by references. Many early cryptographic works were esoteric, mystical, and/or reputation-promoting; cryptography being mysterious, there was much opportunity for such things. At least one work by Trithemius was banned by the Catholic Church and put on the Index Librorum Prohibitorum as being about black magic or witchcraft. Many writers claimed to have invented unbreakable ciphers. None were, though it sometimes took a long while to establish this.
In the 19th century, the general standard improved somewhat (e.g., works by Auguste Kerckhoffs, Friedrich Kasiski, and Étienne Bazeries). Colonel Parker Hitt and William Friedman in the early 20th century also wrote books on cryptography. These authors, and others, mostly abandoned any mystical or magical tone.
== Open literature versus classified literature ==
With the invention of radio, much of military communications went wireless, allowing the possibility of enemy interception much more readily than tapping into a landline. This increased the need to protect communications. By the end of World War I, cryptography and its literature began to be officially limited. One exception was the 1931 book The American Black Chamber by Herbert Yardley, which gave some insight into American cryptologic success stories, including the Zimmermann telegram and the breaking of Japanese codes during the Washington Naval Conference.
== List ==
=== Overview of cryptography ===
Bertram, Linda A. / Dooble, Gunther van / et al. (Eds.): Nomenclatura: Encyclopedia of modern Cryptography and Internet Security - From AutoCrypt and Exponential Encryption to Zero-Knowledge-Proof Keys, 2019, ISBN 9783746066684.
Piper, Fred and Sean Murphy, Cryptography: A Very Short Introduction ISBN 0-19-280315-8. This book outlines the major goals, uses, methods, and developments in cryptography.
=== Significant books ===
Significant books on cryptography include:
Aumasson, Jean-Philippe (2017), Serious Cryptography: A Practical Introduction to Modern Encryption. No Starch Press, 2017, ISBN 9781593278267.[1] Presents modern cryptography in a readable way, suitable for practitioners, software engineers, and others who want to learn practice-oriented cryptography. Each chapter includes a discussion of common implementation mistakes using real-world examples and details what could go wrong and how to avoid these pitfalls.
Aumasson, Jean-Philippe (2021), Crypto Dictionary: 500 Tasty Tidbits for the Curious Cryptographer. No Starch Press, 2021, ISBN 9781718501409.[2] Ultimate desktop dictionary with hundreds of definitions organized alphabetically for all things cryptographic. The book also includes discussions of the threat that quantum computing is posing to current cryptosystems and a nod to post-quantum algorithms, such as lattice-based cryptographic schemes.
Bertram, Linda A. / Dooble, Gunther van: Transformation of Cryptography - Fundamental concepts of Encryption, Milestones, Mega-Trends and sustainable Change in regard to Secret Communications and its Nomenclatura, 2019, ISBN 978-3749450749.
Candela, Rosario (1938). The Military Cipher of Commandant Bazeries. New York: Cardanus Press, This book detailed the cracking of a famous code from 1898 created by Commandant Bazeries, a brilliant French Army Cryptanalyst.
Falconer, John (1685). Cryptomenysis Patefacta, or Art of Secret Information Disclosed Without a Key. One of the earliest English texts on cryptography.
Ferguson, Niels, and Schneier, Bruce (2003). Practical Cryptography, Wiley, ISBN 0-471-22357-3. A cryptosystem design consideration primer. Covers both algorithms and protocols. This is an in-depth consideration of one cryptographic problem, including paths not taken and some reasons why. At the time of its publication, most of the material was not otherwise available in a single source. Some was not otherwise available at all. According to the authors, it is (in some sense) a follow-up to Applied Cryptography.
Gaines, Helen Fouché (1939). Cryptanalysis, Dover, ISBN 0-486-20097-3. Considered one of the classic books on the subject, and includes many sample ciphertexts for practice. It reflects public amateur practice of the inter-war period. The book was compiled as one of the first projects of the American Cryptogram Association.
Goldreich, Oded (2001 and 2004). Foundations of Cryptography. Cambridge University Press. Presents the theoretical foundations of cryptography in a detailed and comprehensive manner. A must-read for anyone interested in the theory of cryptography.
Katz, Jonathan and Lindell, Yehuda (2007 and 2014). Introduction to Modern Cryptography, CRC Press. Presents modern cryptography at a level appropriate for undergraduates, graduate students, or practitioners. Assumes mathematical maturity but presents all the necessary mathematical and computer science background.
Konheim, Alan G. (1981). Cryptography: A Primer, John Wiley & Sons, ISBN 0-471-08132-9. Written by one of the IBM team who developed DES.
Mao, Wenbo (2004). Modern Cryptography Theory and Practice ISBN 0-13-066943-1. An up-to-date book on cryptography. Touches on provable security, and written with students and practitioners in mind.
Mel, H.X., and Baker, Doris (2001). Cryptography Decrypted, Addison Wesley ISBN 0-201-61647-5. This technical overview of basic cryptographic components (including extensive diagrams and graphics) explains the evolution of cryptography from the simplest concepts to some modern concepts. It details the basics of symmetric key, and asymmetric key ciphers, MACs, SSL, secure mail and IPsec. No math background is required, though there's some coverage of the mathematics underlying public key/private key crypto in the appendix.
A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone (1996) Handbook of Applied Cryptography ISBN 0-8493-8523-7. Equivalent to Applied Cryptography in many ways, but somewhat more mathematical. For the technically inclined. Covers few meta-cryptographic topics, such as crypto system design. This is currently (2004) regarded as the standard reference work in technical cryptography.
Paar, Christof and Jan Pelzl (2009). Understanding Cryptography: A Textbook for Students and Practitioners, Springer, ISBN 978-3-642-04100-6. Very accessible introduction to applied cryptography which covers most schemes of practical relevance. The focus is on being a textbook, i.e., it has pedagogical approach, many problems and further reading sections. The main target audience are readers without a background in pure mathematics.
Patterson, Wayne (1987). Mathematical Cryptology for Computer Scientists and Mathematicians, Rowman & Littlefield, ISBN 0-8476-7438-X
Rosulek, Mike (2018). The Joy of Cryptography Presents modern cryptography at a level appropriate for undergraduates.
Schneier, Bruce (1996). Applied Cryptography, 2 ed, Wiley, (ISBN 0-471-11709-9). Survey of mostly obsolete cryptography with some commentary on 1990s legal environment. Aimed at engineers without mathematical background, including source code for obsolete ciphers. Lacks guidance for choosing cryptographic components and combining them into protocols and engineered systems. Contemporaneously influential on a generation of engineers, hackers, and cryptographers. Supplanted by Cryptography Engineering.
Smart, Nigel (2004). Cryptography: An introduction ISBN 0-07-709987-7. Similar in intent to Applied Cryptography but less comprehensive. Covers more modern material and is aimed at undergraduates covering topics such as number theory and group theory not generally covered in cryptography books.
Stinson, Douglas (2005). Cryptography: Theory and Practice ISBN 1-58488-508-4. Covers topics in a textbook style but with more mathematical detail than is usual.
Young, Adam L. and Moti Yung (2004). Malicious Cryptography: Exposing Cryptovirology, ISBN 0764568469, ISBN 9780764568466, John Wiley & Sons. Covers the use of cryptography as an attack tool in systems, as introduced in the 1990s: kleptography, which deals with hidden subversion of cryptosystems, and, more generally, cryptovirology, which anticipated ransomware, in which cryptography is used to disable computing systems in a way that is reversible only by the attacker, generally requiring ransom payment.
Washington, Lawrence C. (2003). Elliptic Curves: Number Theory and Cryptography ISBN 1-58488-365-0. A book focusing on elliptic curves, beginning at an undergraduate level (at least for those who have had a course on abstract algebra), and progressing into much more advanced topics, even at the end touching on Andrew Wiles' proof of the Taniyama–Shimura conjecture which led to the proof of Fermat's Last Theorem.
Welsh, Dominic (1988). Codes and Cryptography, Oxford University Press, A brief textbook intended for undergraduates. Some coverage of fundamental information theory. Requires some mathematical maturity; is well written, and otherwise accessible.
==== The Codebreakers ====
From the end of World War II until the early 1980s most aspects of modern cryptography were regarded as the special concern of governments and the military and were protected by custom and, in some cases, by statute. The most significant work to be published on cryptography in this period is undoubtedly David Kahn's The Codebreakers, which was published at a time (mid-1960s) when virtually no information on the modern practice of cryptography was available. Kahn has said that over ninety percent of its content was previously unpublished.
The book caused serious concern at the NSA despite its lack of coverage of specific modern cryptographic practice, so much so that, after failing to prevent the book's publication, NSA staff were instructed not even to acknowledge its existence if asked. In the US military, mere possession of a copy by cryptographic personnel was grounds for considerable suspicion. Perhaps the book's single greatest importance was the impact it had on the next generation of cryptographers. Whitfield Diffie has commented in interviews about the effect it had on him.
=== Cryptographic environment/context or security ===
Schneier, Bruce – Secrets and Lies, Wiley, ISBN 0-471-25311-1, a discussion of the context within which cryptography and cryptosystems work. Practical Cryptography also includes some contextual material in the discussion of crypto system design.
Schneier, Bruce – Beyond Fear: Thinking Sensibly About Security in an Uncertain World, Wiley, ISBN 0-387-02620-7
Anderson, Ross – Security Engineering, Wiley, ISBN 0-471-38922-6 (online version), advanced coverage of computer security issues, including cryptography. Covers much more than merely cryptography. Brief on most topics due to the breadth of coverage. Well written, especially compared to the usual standard.
Edney, Jon and Arbaugh, William A – Real 802.11 Security: Wi-Fi Protected Access and 802.11i, Addison-Wesley, ISBN 0-321-13620-9, covers the use of cryptography in Wi-Fi networks. Includes details on Wi-Fi Protected Access (which is based on the IEEE 802.11i specification). The book is slightly out of date as it was written before IEEE 802.11i was finalized but much of the content is still useful for those who want to find out how encryption and authentication is done in a Wi-Fi network.
=== Declassified works ===
Boak, David G. A History of U.S. Communications Security (Volumes I and II); the David G. Boak Lectures, National Security Agency (NSA), 1973, A frank, detailed, and often humorous series of lectures delivered to new NSA hires by a long time insider, largely declassified as of 2015.
Callimahos, Lambros D. and Friedman, William F. Military Cryptanalytics. A (partly) declassified text intended as a training manual for NSA cryptanalysts.
Friedman, William F., Six Lectures on Cryptology, National Cryptology School, U.S. National Security Agency, 1965, declassified 1977, 1984
Friedman, William F. (October 14, 1940). "Preliminary Historical Report on the Solution of the Type "B" Machine" (PDF). Archived from the original (PDF) on April 4, 2013. (How the Japanese Purple cipher was broken, declassified 2001)
=== History of cryptography ===
Bamford, James, The Puzzle Palace: A Report on America's Most Secret Agency (1982)(ISBN 0-14-006748-5), and the more recent Body of Secrets: Anatomy of the Ultra-Secret National Security Agency (2001). The first is one of a very few books about the US Government's NSA. The second is also about NSA but concentrates more on its history. There is some very interesting material in Body of Secrets about US attempts (the TICOM mission) to investigate German cryptographic efforts immediately as WW II wound down.
Gustave Bertrand, Enigma ou la plus grande énigme de la guerre 1939–1945 (Enigma: the Greatest Enigma of the War of 1939–1945), Paris, 1973. The first public disclosure in the West of the breaking of Enigma, by the chief of French military cryptography prior to WW II. The first public disclosure anywhere was made in the first edition of Bitwa o tajemnice by the late Władysław Kozaczuk.
James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001: an overview of major 20th-century episodes in cryptology and espionage, particularly strong regarding the misappropriation of credit for conspicuous achievements.
Kahn, David – The Codebreakers (1967) (ISBN 0-684-83130-9) A single-volume source for cryptographic history, at least for events up to the mid-'60s (i.e., to just before DES and the public release of asymmetric key cryptography). The added chapter on more recent developments (in the most recent edition) is quite thin. Kahn has written other books and articles on cryptography, and on cryptographic history. They are very highly regarded.
Kozaczuk, Władysław, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War II, edited and translated by Christopher Kasparek, Frederick, MD, 1984: a history of cryptological efforts against Enigma, concentrating on the contributions of Polish mathematicians Marian Rejewski, Jerzy Różycki and Henryk Zygalski; of particular interest to specialists will be several technical appendices by Rejewski.
Levy, Steven – Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age (2001) (ISBN 0-14-024432-8): a journalistic overview of the development of public cryptographic techniques and the US regulatory context for cryptography. This is an account of a major policy conflict.
Singh, Simon, The Code Book (ISBN 1-85702-889-9): an anecdotal introduction to the history of cryptography. Covers more recent material than does even the revised edition of Kahn's The Codebreakers. Clearly written and quite readable. The included cryptanalytic contest has been won and the prize awarded, but the cyphertexts are still worth attempting.
Bauer, F. L., Decrypted Secrets. This book is unusual: it is both a history of cryptography and a discussion of mathematical topics related to cryptography. In his review, David Kahn said he thought it the best book he'd read on the subject. It is essentially two books, in more or less alternating chapters. Originally in German, and the translation shows it in places. Some surprising content, e.g., in the discussion of President Herbert Hoover's Secretary of State, Henry Stimson.
Budiansky, Stephen, Battle of Wits: a one-volume history of cryptography in WW II. It is well written, well researched, and responsible. Technical material (e.g., a description of the cryptanalysis of Enigma) is limited, but clearly presented.
Budiansky, Stephen, Code Warriors: NSA's Codebreakers and the Secret Intelligence War Against the Soviet Union (Knopf, 2016). (ISBN 0385352662): A sweeping, in-depth history of NSA, whose famous “cult of silence” has left the agency shrouded in mystery for decades.
Prados, John – Combined Fleet Decoded, an account of cryptography in the Pacific Theatre of World War II, with special emphasis on the Japanese side. Reflects extensive research in Japanese sources and recently available US material. Contains material previously inaccessible or unavailable elsewhere.
Marks, Leo, Between Silk and Cyanide: a Codemaker's Story, 1941–1945, (HarperCollins, 1998). (ISBN 0-684-86780-X). A humorous but informative account of code-making and -breaking in Britain's WWII Special Operations Executive.
Mundy, Liza, Code Girls, (Hachette Books, 2017) (ISBN 978-0-316-35253-6) An account of some of the thousands of women recruited for U.S. cryptologic work before and during World War II, including top analysts such as Elizebeth Smith Friedman and Agnes Meyer Driscoll, lesser known but outstanding contributors like Genevieve Grotjan Feinstein and Ann Zeilinger Caracristi, and many others, and how the women made a strategic difference in the war.
Yardley, Herbert, The American Black Chamber (ISBN 0-345-29867-5), a classic 1931 account of American code-breaking during and after World War I; and Chinese Black Chamber: An Adventure in Espionage (ISBN 0-395-34648-7), about Yardley's work with the Chinese government in the years just before World War II. Yardley has an enduring reputation for embellishment, and some of the material in these books is less than reliable. The American Black Chamber was written after the New York operation Yardley ran was shut down by Secretary of State Henry L. Stimson and the US Army, on the grounds that "gentlemen don't read each other's mail".
=== Historic works ===
Abu Yusuf Yaqub ibn Ishaq al-Sabbah Al-Kindi, A Manuscript on Deciphering Cryptographic Messages, 9th century; contains the first known explanation of cryptanalysis by frequency analysis.
Michel de Nostredame, (16th century prophet famed since 1555 for prognostications), known widely for his "Les Propheties" sets of quatrains composed from four languages into a ciphertext, deciphered in a series called "Rise to Consciousness" (Deschausses, M., Outskirts Press, Denver, CO, Nov 2008).
Roger Bacon (English friar and polymath), Epistle on the secret Works of Art and Nullity of Magic, 13th century, possibly the first European work on cryptography since Classical times, written in Latin and not widely available then or now
Johannes Trithemius, Steganographia ("Hidden Writing"), written ca. 1499; pub 1606, banned by the Catholic Church 1609 as alleged discussion of magic, see Polygraphiae (below).
Johannes Trithemius, Polygraphiae Libri Sex ("Six Books on Polygraphy"), 1518, first printed book on cryptography (thought to really be about magic by some observers at the time)
Giovan Battista Bellaso, La cifra del. Sig. Giovan Battista Bellaso, 1553, first publication of the cipher widely misattributed to Vigenère.
Giambattista della Porta, De Furtivis Literarum Notis ("On concealed characters in writing"), 1563.
Blaise de Vigenère, Traicte de Chiffres, 1585.
Gustavus Selenus, Cryptomenytics, 1624, (modern era English trans by J W H Walden)
John Wilkins, Mercury, 1647, earliest printed book in English about cryptography
Johann Ludwig Klüber, Kryptographik Lehrbuch der Geheimschreibekunst ("Cryptology: Instruction Book on the Art of Secret Writing"), 1809.
Friedrich Kasiski, Die Geheimschriften und die Dechiffrierkunst ("Secret writing and the Art of Deciphering"), pub 1863, contained the first public description of a technique for cryptanalyzing polyalphabetic cyphers.
Etienne Bazeries, Les Chiffres secrets dévoilés ("Secret ciphers unveiled") about 1900.
Émile Victor Théodore Myszkowski, Cryptographie indéchiffrable: basée sur de nouvelles combinaisons rationelles ("Unbreakable cryptography"), published 1902.
William F. Friedman and others, the Riverbank Publications, a series of pamphlets written during and after World War I that are considered seminal to modern cryptanalysis, including no. 22 on the Index of Coincidence.
=== Fiction ===
Neal Stephenson – Cryptonomicon (1999) (ISBN 0-06-051280-6) The adventures of some World War II codebreakers and their modern-day progeny.
Edgar Allan Poe – "The Gold-Bug" (1843) An eccentric man discovers an ancient parchment which contains a cryptogram which, when solved, leads to the discovery of buried treasure. Includes a lengthy discourse on a method of solving a simple cypher.
Sir Arthur Conan Doyle – The Dancing Men. Holmes becomes involved in a case which features messages left lying around. They are written in a substitution cypher, which Holmes promptly discerns. Solving the cypher leads to solving the case.
Ken Follett – The Key to Rebecca (1980), World War II spy novel whose plot revolves around the heroes' efforts to cryptanalyze a book cipher with time running out.
Clifford B. Hicks – Alvin's Secret Code (1963), a children's novel which introduces some basics of cryptography and cryptanalysis.
Robert Harris – Enigma (1995) (ISBN 0-09-999200-0) Novel partly set in Britain's World War II codebreaking centre at Bletchley Park.
Ari Juels – Tetraktys (2009) (ISBN 0-9822837-0-9) Pits a classicist turned cryptographer against an ancient Pythagorean cult. Written by RSA Labs chief scientist.
Dan Brown – Digital Fortress (1998), a thriller that plunges into the NSA's cryptology wing, giving readers a modern, technology-oriented view of codebreaking.
Max Hernandez – Thieves Emporium (2013), a novel that examines how the world would change if cryptography made fully bi-directional anonymous communications possible. Technically accurate, it shows the effects of crypto from the citizen's standpoint rather than the NSA's.
Barry Eisler, Fault Line (2009) ISBN 978-0-345-50508-8. A thriller about a race to nab software (of the cryptovirology type) which is capable of shutting down cyberspace.
== References ==
== External links ==
Listing and reviews for a large number of books in cryptography
A long list of works of fiction where the use of cryptology is a significant plot element. The list is in English.
List of where cryptography features in literature, presented in German; it draws on the English list above.
Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformations, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret key.
== History ==
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). Biham and Shamir noted that DES was surprisingly resistant to differential cryptanalysis, but that small modifications to the algorithm would make it much more susceptible.
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique. IBM kept some secrets, as Coppersmith explains: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography." Within IBM, differential cryptanalysis was known as the "T-attack" or "Tickle attack".
While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. By contrast, differential cryptanalysis of the full 16-round DES requires an effort on the order of 2^47 chosen plaintexts.
== Attack mechanics ==
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintexts related by a constant difference. Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a differential. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials
(Δx, Δy), where Δy = S(x ⊕ Δx) ⊕ S(x) (and ⊕ denotes exclusive or), for each such S-box S. In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
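These S-box differentials can be tabulated exhaustively for any concrete S-box. The sketch below is a minimal illustration using the 4-bit S-box of the PRESENT block cipher as an arbitrary real example; it builds the difference distribution table whose (Δx, Δy) entry counts the inputs x with S(x ⊕ Δx) ⊕ S(x) = Δy.

```python
# 4-bit S-box of the PRESENT block cipher, used here purely as an example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference distribution table: table[dx][dy] counts inputs x
    with sbox[x ^ dx] ^ sbox[x] == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x ^ dx] ^ sbox[x]] += 1
    return table

table = ddt(SBOX)
# The largest entry for dx != 0 gives the best single-S-box differential;
# for a 4-bit S-box, each row sums to 16, so a uniform spread would be 1 per cell.
best = max(max(row) for row in table[1:])
print("max differential count:", best)   # 4 for this S-box (4-uniform)
```

The attacker looks for rows of this table with large entries: those (Δx, Δy) pairs hold with unusually high probability and can be chained into multi-round characteristics.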
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least r − 1 rounds, where r is the total number of rounds. The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key.
For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm's internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a differential characteristic.
Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against it.
== Attack in detail ==
The attack relies primarily on the fact that a given input/output difference pattern occurs only for certain values of inputs. The attack is usually applied, in essence, to the cipher's non-linear components as if they were standalone components (in practice these are usually look-up tables or S-boxes). Observing the desired output difference (between two chosen or known plaintext inputs) suggests possible key values.
For example, if a differential of 1 => 1 (a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability 4/256 (possible with the non-linear function in the AES cipher, for instance), then the differential is possible for only 4 values (2 pairs) of inputs. Suppose we have a non-linear function where the key is XORed before evaluation and the values that allow the differential are {2, 3} and {4, 5}. If the attacker sends in the values {6, 7} and observes the correct output difference, it means that 6 ⊕ K is one of the four values that admit the differential, so the key K must be one of 2, 3, 4, or 5.
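The filtering step described above can be made concrete. The sketch below uses the 4-bit S-box of the PRESENT cipher (an arbitrary real example, not the hypothetical function in the text) and a toy one-S-box "cipher" c = S(p ⊕ k); given one chosen-plaintext pair with input difference 1, it lists the keys consistent with the observed output difference.

```python
# 4-bit S-box of the PRESENT block cipher (illustrative choice).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def encrypt(p, k):
    # Toy one-round "cipher": XOR the key, then apply the S-box.
    return SBOX[p ^ k]

def candidate_keys(p0, p1, c0, c1):
    """Keys consistent with the observed input/output differences."""
    dx, dy = p0 ^ p1, c0 ^ c1
    return {x ^ p0 for x in range(16) if SBOX[x] ^ SBOX[x ^ dx] == dy}

secret = 9
p0, p1 = 0x6, 0x7                      # chosen plaintexts with difference 1
cands = candidate_keys(p0, p1, encrypt(p0, secret), encrypt(p1, secret))
print(sorted(cands))                   # → [8, 9, 10, 11]; the true key survives
```

Repeating this with further chosen pairs (different input differences) intersects the candidate sets and quickly isolates the true key.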
In essence, to protect a cipher from the attack, for an n-bit non-linear function one would ideally seek a maximum differential probability as close to 2^−(n − 1) as possible (so-called differential uniformity). When this is achieved, the differential attack requires as much work to determine the key as simply brute-forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2). This means that in theory one could determine the key with half as much work as brute force; however, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much weaker non-linear function. The very high branch number (active S-box count) of 25 over 4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, so the probability of success does not exceed Pr[attack] ≤ Pr[best attack on S-box]^50. For example, with the current S-box, AES emits no fixed differential with a probability higher than (4/256)^50, or 2^−300, which is far lower than the required threshold of 2^−128 for a 128-bit block cipher. This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been only 2^−200.
There exist no bijections on even-sized inputs/outputs with 2-uniformity. Such functions do exist in odd-dimension fields (such as GF(2^7)), using either cubing or inversion (there are other exponents that can be used as well). For instance, S(x) = x^3 in any odd binary field is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in their 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks, they lose to algebraic attacks: they can be described and solved via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
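The claim about cubing in odd-sized fields can be checked directly in a small case. The sketch below verifies that S(x) = x^3 over GF(2^3) (with the reduction polynomial x^3 + x + 1, one standard choice) is a bijection with differential uniformity 2, i.e. every nonzero input difference leads to each output difference at most twice.

```python
def gf8_mul(a, b):
    """Multiplication in GF(2^3) with reduction polynomial x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:          # a degree-3 term appeared: reduce it
            a ^= 0b1011
    return r

def cube(x):
    return gf8_mul(gf8_mul(x, x), x)

# Differential uniformity: max over dx != 0 of the largest count of
# cube(x) ^ cube(x ^ dx) == dy over all x.
uniformity = max(
    max(sum(1 for x in range(8) if cube(x) ^ cube(x ^ dx) == dy)
        for dy in range(8))
    for dx in range(1, 8)
)
print(uniformity)   # → 2, the best possible for any function
```

Since inputs pair up (x and x ⊕ dx give the same output difference), 2 is the minimum achievable value, so cubing here is optimal against the basic differential attack.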
== Specialized types ==
Higher-order differential cryptanalysis
Truncated differential cryptanalysis
Impossible differential cryptanalysis
Boomerang attack
== See also ==
Cryptography
Integral cryptanalysis
Linear cryptanalysis
Differential equations of addition
== References ==
== Further reading ==
== External links ==
A tutorial on differential (and linear) cryptanalysis
Helger Lipmaa's links on differential cryptanalysis
A description of the attack applied to DES at the Wayback Machine (archived October 19, 2007)