Hierarchy theory is an approach to studying ecological systems in which the relationships among components are highly complex. Hierarchy theory focuses on levels of organization and issues of scale, with a specific focus on the role of the observer in the definition of the system. Complexity in this context does not refer to an intrinsic property of the system but to the possibility of representing the system in a plurality of non-equivalent ways, depending on the pre-analytical choices of the observer. Instead of analyzing the whole structure, hierarchy theory refers to the analysis of hierarchical levels and the interactions between them.
== See also ==
Biological organisation
Timothy F. H. Allen
Deep history
Big History
Deep time
Deep ecology
Infrastructure-based development
World-systems theory
Structuralist economics
Dependency theory
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants.
The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a preferred method for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
As an example, ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones to direct each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect.
This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several variants have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms.
== Overview ==
In the natural world, ants of some species (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely to stop travelling at random and instead follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).
Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, is marched over more frequently, and thus the pheromone density becomes higher on shorter paths than longer ones. Pheromone evaporation also has the advantage of avoiding the convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained. The influence of pheromone evaporation in real ant systems is unclear, but it is very important in artificial systems.
The overall result is that when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leads to many ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to be solved.
=== Ambient networks of intelligent objects ===
New concepts are required since “intelligence” is no longer centralized but can be found throughout all minuscule objects. Anthropocentric concepts have been known to lead to the production of IT systems in which data processing, control units and calculating power are centralized. These centralized units have continually increased their performance and can be compared to the human brain. The model of the brain has become the ultimate vision of computers. Ambient networks of intelligent objects and, sooner or later, a new generation of information systems that are even more diffused and based on nanotechnology, will profoundly change this concept. Small devices that can be compared to insects do not possess a high intelligence on their own. Indeed, their intelligence can be classed as fairly limited. It is, for example, impossible to integrate a high performance calculator with the power to solve any kind of mathematical problem into a biochip that is implanted into the human body or integrated in an intelligent tag designed to trace commercial articles. However, once those objects are interconnected they develop a form of intelligence that can be compared to a colony of ants or bees. In the case of certain problems, this type of intelligence can be superior to the reasoning of a centralized system similar to the brain.
Nature offers several examples of how minuscule organisms, if they all follow the same basic rule, can create a form of collective intelligence on the macroscopic level. Colonies of social insects perfectly illustrate this model which greatly differs from human societies. This model is based on the cooperation of independent units with simple and unpredictable behavior. They move through their surrounding area to carry out certain tasks and only possess a very limited amount of information to do so. A colony of ants, for example, represents numerous qualities that can also be applied to a network of ambient objects. Colonies of ants have a very high capacity to adapt themselves to changes in the environment, as well as great strength in dealing with situations where one individual fails to carry out a given task. This kind of flexibility would also be very useful for mobile networks of objects which are perpetually developing. Parcels of information that move from a computer to a digital object behave in the same way as ants would do. They move through the network and pass from one node to the next with the objective of arriving at their final destination as quickly as possible.
=== Artificial pheromone system ===
Pheromone-based communication is one of the most effective ways of communication widely observed in nature. Pheromones are used by social insects such as bees, ants, and termites, both for inter-agent and agent-swarm communication. Due to its feasibility, artificial pheromones have been adopted in multi-robot and swarm robotic systems. Pheromone-based communication has been implemented by different means, such as chemical or physical (RFID tags, light, sound) ones. However, these implementations have not been able to replicate all the aspects of pheromones as seen in nature.
The use of projected light was presented in a 2007 IEEE paper by Garnier, Simon, et al. as an experimental setup to study pheromone-based communication with micro autonomous robots. Another study presented a system in which pheromones were implemented via a horizontal LCD screen on which the robots moved, with the robots having downward-facing light sensors to register the patterns beneath them.
== Algorithm and formula ==
In the ant colony optimization algorithms, an artificial ant is a simple computational agent that searches for good solutions to a given optimization problem. To apply an ant colony algorithm, the optimization problem needs to be converted into the problem of finding the shortest path on a weighted graph. In the first step of each iteration, each ant stochastically constructs a solution, i.e. the order in which the edges in the graph should be followed. In the second step, the paths found by the different ants are compared. The last step consists of updating the pheromone levels on each edge.
procedure ACO_MetaHeuristic is
    while not terminated do
        generateSolutions()
        daemonActions()
        pheromoneUpdate()
    end while
end procedure
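The loop above can be sketched in Python; the three callbacks are problem-specific stand-ins (hypothetical names, not part of any standard library), and the daemon step is represented here by best-so-far bookkeeping:

```python
def aco_metaheuristic(construct_solution, evaluate, update_pheromones,
                      n_ants=10, n_iterations=100):
    """Skeleton of the ACO metaheuristic loop (illustrative sketch)."""
    best, best_cost = None, float("inf")
    for _ in range(n_iterations):
        # generateSolutions(): each ant builds one candidate solution
        solutions = [construct_solution() for _ in range(n_ants)]
        # daemonActions(): centralized bookkeeping, here best-so-far tracking
        for s in solutions:
            cost = evaluate(s)
            if cost < best_cost:
                best, best_cost = s, cost
        # pheromoneUpdate(): reinforce edges used by good solutions
        update_pheromones(solutions)
    return best, best_cost
```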
=== Edge selection ===
Each ant needs to construct a solution to move through the graph. To select the next edge in its tour, an ant will consider the length of each edge available from its current position, as well as the corresponding pheromone level. At each step of the algorithm, each ant moves from a state $x$ to a state $y$, corresponding to a more complete intermediate solution. Thus, each ant $k$ computes a set $A_k(x)$ of feasible expansions to its current state in each iteration, and moves to one of these in probability. For ant $k$, the probability $p_{xy}^{k}$ of moving from state $x$ to state $y$ depends on the combination of two values: the attractiveness $\eta_{xy}$ of the move, as computed by some heuristic indicating the a priori desirability of that move, and the trail level $\tau_{xy}$ of the move, indicating how proficient it has been in the past to make that particular move. The trail level represents an a posteriori indication of the desirability of that move.
In general, the $k$th ant moves from state $x$ to state $y$ with probability

$$p_{xy}^{k} = \frac{(\tau_{xy}^{\alpha})(\eta_{xy}^{\beta})}{\sum_{z \in \mathrm{allowed}_{y}} (\tau_{xz}^{\alpha})(\eta_{xz}^{\beta})}$$
where $\tau_{xy}$ is the amount of pheromone deposited for transition from state $x$ to $y$, $\alpha \geq 0$ is a parameter to control the influence of $\tau_{xy}$, $\eta_{xy}$ is the desirability of state transition $xy$ (a priori knowledge, typically $1/d_{xy}$, where $d$ is the distance), and $\beta \geq 1$ is a parameter to control the influence of $\eta_{xy}$. $\tau_{xz}$ and $\eta_{xz}$ represent the trail level and attractiveness for the other possible state transitions.
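As a sketch, the transition rule above can be implemented directly; the dictionary-based representation of $\tau$ and $\eta$ and the default parameter values are assumptions for illustration:

```python
import random

def transition_probabilities(tau, eta, current, allowed, alpha=1.0, beta=2.0):
    """Compute p_xy^k for each feasible next state y from state x.
    tau and eta map (x, y) edges to pheromone level and attractiveness."""
    weights = {y: (tau[(current, y)] ** alpha) * (eta[(current, y)] ** beta)
               for y in allowed}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

def choose_next(tau, eta, current, allowed, alpha=1.0, beta=2.0):
    """Roulette-wheel selection of the next state."""
    probs = transition_probabilities(tau, eta, current, allowed, alpha, beta)
    states = list(probs)
    return random.choices(states, weights=[probs[y] for y in states])[0]
```

With $\alpha = \beta = 1$, an edge three times as attractive as its only alternative is chosen with probability 0.75.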
=== Pheromone update ===
Trails are usually updated when all ants have completed their solution, increasing or decreasing the level of trails corresponding to moves that were part of "good" or "bad" solutions, respectively. An example of a global pheromone updating rule is

$$\tau_{xy} \leftarrow (1-\rho)\tau_{xy} + \sum_{k=1}^{m} \Delta\tau_{xy}^{k}$$
where $\tau_{xy}$ is the amount of pheromone deposited for a state transition $xy$, $\rho$ is the pheromone evaporation coefficient, $m$ is the number of ants, and $\Delta\tau_{xy}^{k}$ is the amount of pheromone deposited by the $k$th ant, typically given for a TSP problem (with moves corresponding to arcs of the graph) by

$$\Delta\tau_{xy}^{k} = \begin{cases} Q/L_{k} & \text{if ant } k \text{ uses curve } xy \text{ in its tour} \\ 0 & \text{otherwise} \end{cases}$$

where $L_{k}$ is the cost of the $k$th ant's tour (typically length) and $Q$ is a constant.
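A minimal sketch of this update rule, assuming pheromone levels are stored in a dictionary keyed by directed edges:

```python
def update_pheromones(tau, ant_tours, rho=0.5, Q=1.0):
    """Global pheromone update: evaporate every edge by factor (1 - rho),
    then deposit Q/L_k along each ant's tour (TSP-style).
    ant_tours is a list of (tour, length) pairs, tour being a node list."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)            # evaporation on every edge
    for tour, length in ant_tours:
        deposit = Q / length                # shorter tours deposit more
        for x, y in zip(tour, tour[1:]):
            tau[(x, y)] = tau.get((x, y), 0.0) + deposit
    return tau
```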
== Common extensions ==
Here are some of the most popular variations of ACO algorithms.
=== Ant system (AS) ===
The ant system is the first ACO algorithm. This algorithm corresponds to the one presented above. It was developed by Dorigo.
=== Ant colony system (ACS) ===
In the ant colony system algorithm, the original ant system was modified in three aspects:
The edge selection is biased towards exploitation (i.e. favoring the probability of selecting the shortest edges with a large amount of pheromone);
While building a solution, ants change the pheromone level of the edges they are selecting by applying a local pheromone updating rule;
At the end of each iteration, only the best ant is allowed to update the trails by applying a modified global pheromone updating rule.
=== Elitist ant system ===
In this algorithm, the global best solution deposits pheromone on its trail after every iteration (even if this trail has not been revisited), along with all the other ants. The objective of the elitist strategy is to direct the search of all ants toward constructing solutions that contain links of the current best route.
=== Max-min ant system (MMAS) ===
This algorithm controls the maximum and minimum pheromone amounts on each trail. Only the global best tour or the iteration best tour is allowed to add pheromone to its trail. To avoid stagnation of the search algorithm, the range of possible pheromone amounts on each trail is limited to an interval [τmin, τmax]. All edges are initialized to τmax to force a higher exploration of solutions. The trails are reinitialized to τmax when nearing stagnation.
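A sketch of an MMAS-style update under these rules; the NumPy matrix representation and parameter values are assumptions:

```python
import numpy as np

def mmas_update(tau, best_tour, best_length, rho=0.02,
                tau_min=0.01, tau_max=10.0, Q=1.0):
    """MAX-MIN style update: evaporate everywhere, deposit only along
    the best tour, then clamp every entry to [tau_min, tau_max]."""
    tau = (1.0 - rho) * tau                 # evaporation (returns a copy)
    for x, y in zip(best_tour, best_tour[1:]):
        tau[x, y] += Q / best_length        # only the best tour deposits
    return np.clip(tau, tau_min, tau_max)
```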
=== Rank-based ant system (ASrank) ===
All solutions are ranked according to their length. Only a fixed number of the best ants in this iteration are allowed to update their trails. The amount of pheromone deposited is weighted for each solution, such that solutions with shorter paths deposit more pheromone than solutions with longer paths.
=== Parallel ant colony optimization (PACO) ===
An ant colony system (ACS) with communication strategies is developed. The artificial ants are partitioned into several groups. Seven communication methods for updating the pheromone level between groups in ACS are proposed and applied to the traveling salesman problem.
=== Continuous orthogonal ant colony (COAC) ===
The pheromone deposit mechanism of COAC is to enable ants to search for solutions collaboratively and effectively. By using an orthogonal design method, ants in the feasible domain can explore their chosen regions rapidly and efficiently, with enhanced global search capability and accuracy. The orthogonal design method and the adaptive radius adjustment method can also be extended to other optimization algorithms for delivering wider advantages in solving practical problems.
=== Recursive ant colony optimization ===
It is a recursive form of ant system which divides the whole search domain into several sub-domains and solves the objective on these subdomains. The results from all the subdomains are compared and the best few of them are promoted for the next level. The subdomains corresponding to the selected results are further subdivided and the process is repeated until an output of desired precision is obtained. This method has been tested on ill-posed geophysical inversion problems and works well.
== Convergence ==
For some versions of the algorithm, it is possible to prove that it is convergent (i.e., it is able to find the global optimum in finite time). The first evidence of convergence for an ant colony algorithm was provided in 2000 for the graph-based ant system algorithm, and later for the ACS and MMAS algorithms. Like most metaheuristics, it is very difficult to estimate the theoretical speed of convergence. A performance analysis of a continuous ant colony algorithm with respect to its various parameters (edge selection strategy, distance measure metric, and pheromone evaporation rate) showed that its performance and rate of convergence are sensitive to the chosen parameter values, and especially to the value of the pheromone evaporation rate. In 2004, Zlochin and his colleagues showed that ACO-type algorithms are closely related to stochastic gradient descent, the cross-entropy method and estimation of distribution algorithms. They proposed the umbrella term "model-based search" to describe this class of metaheuristics.
== Applications ==
Ant colony optimization algorithms have been applied to many combinatorial optimization problems, ranging from quadratic assignment to protein folding or routing vehicles and a lot of derived methods have been adapted to dynamic problems in real variables, stochastic problems, multi-targets and parallel implementations.
It has also been used to produce near-optimal solutions to the travelling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches of similar problems when the graph may change dynamically; the ant colony algorithm can be run continuously and adapt to changes in real time. This is of interest in network routing and urban transportation systems.
The first ACO algorithm was called the ant system and it was aimed to solve the travelling salesman problem, in which the goal is to find the shortest round-trip to link a series of cities. The general algorithm is relatively simple and based on a set of ants, each making one of the possible round-trips along the cities. At each stage, the ant chooses to move from one city to another according to some rules:
It must visit each city exactly once;
A distant city has less chance of being chosen (the visibility);
The more intense the pheromone trail laid out on an edge between two cities, the greater the probability that that edge will be chosen;
Having completed its journey, the ant deposits more pheromones on all edges it traversed, if the journey is short;
After each iteration, trails of pheromones evaporate.
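The rules above can be combined into a compact ant-system sketch for the TSP; all parameter values and helper names here are illustrative:

```python
import math, random

def aco_tsp(coords, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
            rho=0.5, Q=1.0, seed=0):
    """Minimal ant-system sketch for the TSP: visit each city once,
    prefer nearby and pheromone-rich cities, deposit Q/L after the
    tour, evaporate each iteration. Illustrative only."""
    rng = random.Random(seed)
    n = len(coords)
    # distance matrix; the diagonal guard avoids division by zero
    d = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)]
         for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:                      # visit each city once
                x = tour[-1]
                cand = list(unvisited)
                w = [tau[x][y] ** alpha * (1.0 / d[x][y]) ** beta
                     for y in cand]               # visibility = 1/distance
                tour.append(rng.choices(cand, weights=w)[0])
                unvisited.remove(tour[-1])
            tour.append(tour[0])                  # close the round-trip
            length = sum(d[a][b] for a, b in zip(tour, tour[1:]))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                        # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:                # deposit Q/L per edge
            for a, b in zip(tour, tour[1:]):
                tau[a][b] += Q / length
                tau[b][a] += Q / length
    return best_tour, best_len
```

Run on a handful of cities, pheromone tends to accumulate on the edges of short round-trips, concentrating later ants on them.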
=== Scheduling problem ===
Sequential ordering problem (SOP)
Job-shop scheduling problem (JSP)
Open-shop scheduling problem (OSP)
Permutation flow shop problem (PFSP)
Single machine total tardiness problem (SMTTP)
Single machine total weighted tardiness problem (SMTWTP)
Resource-constrained project scheduling problem (RCPSP)
Group-shop scheduling problem (GSP)
Single-machine total tardiness problem with sequence dependent setup times (SMTTPDST)
Multistage flowshop scheduling problem (MFSP) with sequence dependent setup/changeover times
Assembly sequence planning (ASP) problems
=== Vehicle routing problem ===
Capacitated vehicle routing problem (CVRP)
Multi-depot vehicle routing problem (MDVRP)
Period vehicle routing problem (PVRP)
Split delivery vehicle routing problem (SDVRP)
Stochastic vehicle routing problem (SVRP)
Vehicle routing problem with pick-up and delivery (VRPPD)
Vehicle routing problem with time windows (VRPTW)
Time dependent vehicle routing problem with time windows (TDVRPTW)
Vehicle routing problem with time windows and multiple service workers (VRPTWMS)
=== Assignment problem ===
Quadratic assignment problem (QAP)
Generalized assignment problem (GAP)
Frequency assignment problem (FAP)
Redundancy allocation problem (RAP)
=== Set problem ===
Set cover problem (SCP)
Partition problem (SPP)
Weight constrained graph tree partition problem (WCGTPP)
Arc-weighted l-cardinality tree problem (AWlCTP)
Multiple knapsack problem (MKP)
Maximum independent set problem (MIS)
=== Device sizing problem in nanoelectronics physical design ===
Ant colony optimization (ACO) based optimization of a 45 nm CMOS sense amplifier circuit was able to converge to optimal solutions in minimal time.
Ant colony optimization (ACO) based reversible circuit synthesis could improve efficiency significantly.
=== Antennas optimization and synthesis ===
Ant colony algorithms can be used to optimize the form of antennas. Examples include RFID-tag antennas based on ant colony algorithms (ACO), using loopback and unloopback vibrators of size 10×10.
=== Image processing ===
The ACO algorithm is used in image processing for image edge detection and edge linking.
Edge detection:
The graph here is the 2-D image, and the ants traverse from one pixel to another, depositing pheromone. The movement of ants from one pixel to another is directed by the local variation of the image's intensity values. This movement causes the highest density of pheromone to be deposited at the edges.
The following are the steps involved in edge detection using ACO:
Step 1: Initialization. Randomly place $K$ ants on the image $I_{M_{1}M_{2}}$, where $K = (M_{1} \cdot M_{2})^{1/2}$. The pheromone matrix $\tau_{(i,j)}$ is initialized with a random value. The major challenge in the initialization process is determining the heuristic matrix.
There are various methods to determine the heuristic matrix. For the example below, the heuristic matrix was calculated based on the local statistics at the pixel position $(i,j)$:

$$\eta_{(i,j)} = \tfrac{1}{Z}\, Vc(I_{(i,j)}),$$
where $I$ is the image of size $M_{1} \cdot M_{2}$,

$$Z = \sum_{i=1:M_{1}} \sum_{j=1:M_{2}} Vc(I_{i,j})$$

is a normalization factor, and

$$\begin{aligned} Vc(I_{i,j}) = f\big( & \left| I_{(i-2,j-1)} - I_{(i+2,j+1)} \right| + \left| I_{(i-2,j+1)} - I_{(i+2,j-1)} \right| \\ & + \left| I_{(i-1,j-2)} - I_{(i+1,j+2)} \right| + \left| I_{(i-1,j-1)} - I_{(i+1,j+1)} \right| \\ & + \left| I_{(i-1,j)} - I_{(i+1,j)} \right| + \left| I_{(i-1,j+1)} - I_{(i-1,j-1)} \right| \\ & + \left| I_{(i-1,j+2)} - I_{(i-1,j-2)} \right| + \left| I_{(i,j-1)} - I_{(i,j+1)} \right| \big) \end{aligned}$$
$f(\cdot)$ can be calculated using one of the following functions:
$$f(x) = \lambda x, \quad \text{for } x \geq 0 \quad (1)$$

$$f(x) = \lambda x^{2}, \quad \text{for } x \geq 0 \quad (2)$$

$$f(x) = \begin{cases} \sin\left(\frac{\pi x}{2\lambda}\right), & \text{for } 0 \leq x \leq \lambda \\ 0, & \text{else} \end{cases} \quad (3)$$

$$f(x) = \begin{cases} \pi x \sin\left(\frac{\pi x}{2\lambda}\right), & \text{for } 0 \leq x \leq \lambda \\ 0, & \text{else} \end{cases} \quad (4)$$
The parameter $\lambda$ in each of the above functions adjusts their respective shapes.
Step 2: Construction process. The ant's movement is based on 4-connected or 8-connected pixels. The probability with which the ant moves is given by the probability equation $P_{x,y}$.
Step 3 and step 5: Update process. The pheromone matrix is updated twice: in step 3 the trail of the ant (given by $\tau_{(x,y)}$) is updated, whereas in step 5 the evaporation rate of the trail is updated, which is given by

$$\tau_{new} \leftarrow (1-\psi)\tau_{old} + \psi\tau_{0},$$

where $\psi$ is the pheromone decay coefficient, $0 < \psi < 1$.
Step 7: Decision process. Once the K ants have moved a fixed distance L for N iteration, the decision whether it is an edge or not is based on the threshold T on the pheromone matrix τ. Threshold for the below example is calculated based on Otsu's method.
Image edge detected using ACO: the images below are generated using the different functions given by equations (1) to (4).
Edge linking: ACO has also proven effective in edge linking algorithms.
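The heuristic-matrix computation in the edge-detection procedure above can be sketched as follows, reading $\eta_{(i,j)}$ as $Vc(I_{i,j})/Z$ and taking $f$ as the identity (equation (1) with $\lambda = 1$); function names and the border handling are assumptions:

```python
import numpy as np

def local_variation(img, i, j):
    """Vc(I_{i,j}) from the text: sum of absolute intensity differences
    over the eight opposing pixel pairs. Assumes (i, j) is at least
    2 pixels away from the image border."""
    pairs = [((i-2, j-1), (i+2, j+1)), ((i-2, j+1), (i+2, j-1)),
             ((i-1, j-2), (i+1, j+2)), ((i-1, j-1), (i+1, j+1)),
             ((i-1, j),   (i+1, j)),   ((i-1, j+1), (i-1, j-1)),
             ((i-1, j+2), (i-1, j-2)), ((i,   j-1), (i,   j+1))]
    return sum(abs(float(img[a]) - float(img[b])) for a, b in pairs)

def heuristic_matrix(img):
    """eta_(i,j) = Vc(I_{i,j}) / Z over the valid interior (sketch)."""
    M1, M2 = img.shape
    eta = np.zeros((M1, M2))
    for i in range(2, M1 - 2):
        for j in range(2, M2 - 2):
            eta[i, j] = local_variation(img, i, j)
    Z = eta.sum() or 1.0        # guard against a perfectly flat image
    return eta / Z
```

On a flat image every Vc is zero, so η is zero everywhere; on an image with an intensity edge, η concentrates near the edge and sums to 1.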
=== Other applications ===
Bankruptcy prediction
Classification
Connection-oriented network routing
Connectionless network routing
Data mining
Discounted cash flows in project scheduling
Distributed information retrieval
Energy and electricity network design
Grid workflow scheduling problem
Inhibitory peptide design for protein protein interactions
Intelligent testing system
Power electronic circuit design
Protein folding
System identification
== Definition difficulty ==
With an ACO algorithm, the shortest path in a graph, between two points A and B, is built from a combination of several paths. It is not easy to give a precise definition of what algorithm is or is not an ant colony, because the definition may vary according to the authors and uses. Broadly speaking, ant colony algorithms are regarded as populated metaheuristics with each solution represented by an ant moving in the search space. Ants mark the best solutions and take account of previous markings to optimize their search. They can be seen as probabilistic multi-agent algorithms using a probability distribution to make the transition between each iteration. In their versions for combinatorial problems, they use an iterative construction of solutions.

According to some authors, the thing which distinguishes ACO algorithms from other relatives (such as algorithms to estimate the distribution or particle swarm optimization) is precisely their constructive aspect. In combinatorial problems, it is possible that the best solution is eventually found, even though no single ant proves effective. Thus, in the example of the travelling salesman problem, it is not necessary that an ant actually travels the shortest route: the shortest route can be built from the strongest segments of the best solutions. However, this definition can be problematic in the case of problems in real variables, where no structure of 'neighbours' exists.

The collective behaviour of social insects remains a source of inspiration for researchers. The wide variety of algorithms (for optimization or not) seeking self-organization in biological systems has led to the concept of "swarm intelligence", which is a very general framework in which ant colony algorithms fit.
== Stigmergy algorithms ==
There is in practice a large number of algorithms claiming to be "ant colonies", without always sharing the general framework of optimization by canonical ant colonies. In practice, the use of an exchange of information between ants via the environment (a principle called "stigmergy") is deemed enough for an algorithm to belong to the class of ant colony algorithms. This principle has led some authors to create the term "value" to organize methods and behavior based on search of food, sorting larvae, division of labour and cooperative transportation.
== Related methods ==
Genetic algorithms (GA)
These maintain a pool of solutions rather than just one. The process of finding superior solutions mimics that of evolution, with solutions being combined or mutated to alter the pool of solutions, with solutions of inferior quality being discarded.
Estimation of distribution algorithm (EDA)
An evolutionary algorithm that substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as probabilistic graphical models, from which new solutions can be sampled or generated from guided-crossover.
Simulated annealing (SA)
A related global optimization technique which traverses the search space by generating neighboring solutions of the current solution. A superior neighbor is always accepted. An inferior neighbor is accepted probabilistically based on the difference in quality and a temperature parameter. The temperature parameter is modified as the algorithm progresses to alter the nature of the search.
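The acceptance rule described here can be sketched as a Metropolis-style test (names and the equal-cost tie-break are illustrative):

```python
import math, random

def sa_accept(current_cost, neighbor_cost, temperature, rng=random):
    """Simulated-annealing acceptance: a better (or equal) neighbor is
    always accepted; a worse one with probability exp(-delta / T)."""
    if neighbor_cost <= current_cost:
        return True
    delta = neighbor_cost - current_cost
    return rng.random() < math.exp(-delta / temperature)
```

As the temperature is lowered, the probability of accepting a worse neighbor shrinks toward zero, shifting the search from exploration to exploitation.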
Reactive search optimization
Focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution.
Tabu search (TS)
Similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest fitness of those generated. To prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
Artificial immune system (AIS)
Modeled on vertebrate immune systems.
Particle swarm optimization (PSO)
A swarm intelligence method.
Intelligent water drops (IWD)
A swarm-based optimization algorithm based on natural water drops flowing in rivers.
Gravitational search algorithm (GSA)
A swarm intelligence method.
Ant colony clustering method (ACCM)
A method that makes use of a clustering approach, extending ACO.
Stochastic diffusion search (SDS)
An agent-based probabilistic global search and optimization technique best suited to problems where the objective function can be decomposed into multiple independent partial-functions.
== History ==
Chronology of ant colony optimization algorithms.
1959, Pierre-Paul Grassé invented the theory of stigmergy to explain the behavior of nest building in termites;
1983, Deneubourg and his colleagues studied the collective behavior of ants;
1988, Moyson and Manderick published an article on self-organization among ants;
1989, the work of Goss, Aron, Deneubourg and Pasteels on the collective behavior of Argentine ants, which gave the idea of ant colony optimization algorithms;
1989, implementation of a model of foraging behavior by Ebling and his colleagues;
1991, M. Dorigo proposed the ant system in his doctoral thesis (which was published in 1992). A technical report extracted from the thesis and co-authored by V. Maniezzo and A. Colorni was published five years later;
1994, Appleby and Steward of British Telecommunications Plc published the first application to telecommunications networks
1995, Gambardella and Dorigo proposed Ant-Q, the preliminary version of ant colony system and the first extension of ant system;
1996, Gambardella and Dorigo proposed ant colony system;
1996, publication of the article on ant system;
1997, Dorigo and Gambardella proposed ant colony system hybridized with local search;
1997, Schoonderwoerd and his colleagues published an improved application to telecommunication networks;
1998, Dorigo launches first conference dedicated to the ACO algorithms;
1998, Stützle proposes initial parallel implementations;
1999, Gambardella, Taillard and Agazzi proposed MACS-VRPTW, the first multiple ant colony system, applied to vehicle routing problems with time windows;
1999, Bonabeau, Dorigo and Theraulaz publish a book dealing mainly with artificial ants
2000, special issue of the Future Generation Computer Systems journal on ant algorithms
2000, Hoos and Stützle invent the max-min ant system;
2000, first applications to scheduling, sequential ordering and constraint satisfaction;
2000, Gutjahr provides the first proof of convergence for an ant colony algorithm;
2001, the first use of ACO algorithms by companies (Eurobios and AntOptima);
2001, Iredi and his colleagues published the first multi-objective algorithm
2002, first applications to schedule design and Bayesian networks;
2002, Bianchi and her colleagues suggested the first algorithm for stochastic problems;
2004, Dorigo and Stützle publish the Ant Colony Optimization book with MIT Press
2004, Zlochin and Dorigo show that some algorithms are equivalent to stochastic gradient descent, the cross-entropy method and estimation of distribution algorithms;
2005, first applications to protein folding problems.
2012, Prabhakar and colleagues publish research relating to the operation of individual ants communicating in tandem without pheromones, mirroring the principles of computer network organization. The communication model has been compared to the Transmission Control Protocol.
2016, first application to peptide sequence design.
2017, successful integration of the multi-criteria decision-making method PROMETHEE into the ACO algorithm (HUMANT algorithm).
== References ==
== Publications (selected) ==
M. Dorigo, 1992. Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italy.
M. Dorigo, V. Maniezzo & A. Colorni, 1996. "Ant System: Optimization by a Colony of Cooperating Agents", IEEE Transactions on Systems, Man, and Cybernetics–Part B, 26 (1): 29–41.
M. Dorigo & L. M. Gambardella, 1997. "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem". IEEE Transactions on Evolutionary Computation, 1 (1): 53–66.
M. Dorigo, G. Di Caro & L. M. Gambardella, 1999. "Ant Algorithms for Discrete Optimization". Artificial Life, 5 (2): 137–172.
E. Bonabeau, M. Dorigo et G. Theraulaz, 1999. Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press. ISBN 0-19-513159-2
M. Dorigo & T. Stützle, 2004. Ant Colony Optimization, MIT Press. ISBN 0-262-04219-3
M. Dorigo, 2007. "Ant Colony Optimization". Scholarpedia.
C. Blum, 2005 "Ant colony optimization: Introduction and recent trends". Physics of Life Reviews, 2: 353-373
M. Dorigo, M. Birattari & T. Stützle, 2006 Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique. TR/IRIDIA/2006-023
Mohd Murtadha Mohamad,"Articulated Robots Motion Planning Using Foraging Ant Strategy", Journal of Information Technology - Special Issues in Artificial Intelligence, Vol. 20, No. 4 pp. 163–181, December 2008, ISSN 0128-3790.
N. Monmarché, F. Guinand & P. Siarry (eds), "Artificial Ants", August 2010 Hardback 576 pp. ISBN 978-1-84821-194-0.
A. Kazharov, V. Kureichik, 2010. "Ant colony optimization algorithms for solving transportation problems", Journal of Computer and Systems Sciences International, Vol. 49. No. 1. pp. 30–43.
C-M. Pintea, 2014, Advances in Bio-inspired Computing for Combinatorial Optimization Problem, Springer ISBN 978-3-642-40178-7
K. Saleem, N. Fisal, M. A. Baharudin, A. A. Ahmed, S. Hafizah and S. Kamilah, "Ant colony inspired self-optimized routing protocol based on cross layer architecture for wireless sensor networks", WSEAS Trans. Commun., vol. 9, no. 10, pp. 669–678, 2010. ISBN 978-960-474-200-4
K. Saleem and N. Fisal, "Enhanced Ant Colony algorithm for self-optimized data assured routing in wireless sensor networks", Networks (ICON) 2012 18th IEEE International Conference on, pp. 422–427. ISBN 978-1-4673-4523-1
Abolmaali S, Roodposhti FR. Portfolio Optimization Using Ant Colony Method a Case Study on Tehran Stock Exchange. Journal of Accounting. 2018 Mar;8(1).
== External links ==
Scholarpedia Ant Colony Optimization page
Ant Colony Optimization Home Page
"Ant Colony Optimization" - Russian scientific and research community
AntSim - Simulation of Ant Colony Algorithms
MIDACO-Solver General purpose optimization software based on ant colony optimization (Matlab, Excel, VBA, C/C++, R, C#, Java, Fortran and Python)
University of Kaiserslautern, Germany, AG Wehn: Ant Colony Optimization Applet Visualization of Traveling Salesman solved by ant system with numerous options and parameters (Java Applet)
Ant algorithm simulation (Java Applet)
Java Ant Colony System Framework
Ant Colony Optimization Algorithm Implementation (Python Notebook)
Network motifs are recurrent and statistically significant subgraphs or patterns of a larger graph. All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs.
Network motifs are sub-graphs that repeat themselves in a specific network or even among various networks. Each of these sub-graphs, defined by a particular pattern of interactions between vertices, may reflect a framework in which particular functions are achieved efficiently. Indeed, motifs are of notable importance largely because they may reflect functional properties. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Although network motifs may provide a deep insight into the network's functional abilities, their detection is computationally challenging.
== Definitions ==
Let G = (V, E) and G′ = (V′, E′) be two graphs. Graph G′ is a sub-graph of graph G (written as G′ ⊆ G) if V′ ⊆ V and E′ ⊆ E ∩ (V′ × V′). If G′ ⊆ G and G′ contains all of the edges ⟨u, v⟩ ∈ E with u, v ∈ V′, then G′ is an induced sub-graph of G. We call G′ and G isomorphic (written as G′ ↔ G), if there exists a bijection (one-to-one correspondence) f:V′ → V with ⟨u, v⟩ ∈ E′ ⇔ ⟨f(u), f(v)⟩ ∈ E for all u, v ∈ V′. The mapping f is called an isomorphism between G and G′.
When G″ ⊂ G and there exists an isomorphism between the sub-graph G″ and a graph G′, this mapping represents an appearance of G′ in G. The number of appearances of graph G′ in G is called the frequency FG(G′) of G′ in G. A graph is called recurrent (or frequent) in G when its frequency FG(G′) is above a predefined threshold or cut-off value. The terms pattern and frequent sub-graph are used interchangeably here. There is an ensemble Ω(G) of random graphs corresponding to the null-model associated with G. We choose N random graphs uniformly from Ω(G) and calculate the frequency of a particular frequent sub-graph G′ in each of them. If the frequency of G′ in G is higher than its arithmetic mean frequency in the N random graphs Ri, where 1 ≤ i ≤ N, we call this recurrent pattern significant and hence treat G′ as a network motif for G. For a small graph G′, the network G, and a set of randomized networks R(G) ⊆ Ω(G), where |R(G)| = N, the Z-score of the frequency of G′ is given by
{\displaystyle Z(G^{\prime })={\frac {F_{G}(G^{\prime })-\mu _{R}(G^{\prime })}{\sigma _{R}(G^{\prime })}}}
where μR(G′) and σR(G′) stand for the mean and standard deviation of the frequency in set R(G), respectively. The larger the Z(G′), the more significant is the sub-graph G′ as a motif. Alternatively, another measurement in statistical hypothesis testing that can be considered in motif detection is the p-value, given as the probability of FR(G′) ≥ FG(G′) (as its null-hypothesis), where FR(G′) indicates the frequency of G' in a randomized network. A sub-graph with p-value less than a threshold (commonly 0.01 or 0.05) will be treated as a significant pattern. The p-value for the frequency of G′ is defined as
{\displaystyle P(G^{\prime })={\frac {1}{N}}\sum _{i=1}^{N}\delta (c(i))\quad c(i):F_{R}^{i}(G^{\prime })\geq F_{G}(G^{\prime })}
where N indicates the number of randomized networks, i is defined over an ensemble of randomized networks, and the Kronecker delta function δ(c(i)) is one if the condition c(i) holds. The concentration of a particular n-size sub-graph G′ in network G refers to the ratio of the sub-graph appearance in the network to the total n-size non-isomorphic sub-graphs' frequencies, which is formulated by
{\displaystyle C_{G}(G^{\prime })={\frac {F_{G}(G^{\prime })}{\sum _{i}F_{G}(G_{i})}}}
where index i is defined over the set of all non-isomorphic n-size graphs. Another statistical measurement is defined for evaluating network motifs, but it is rarely used in known algorithms. This measurement was introduced by Picard et al. in 2008 and uses the Poisson distribution, rather than the Gaussian normal distribution that is implicitly used above.
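Given these definitions, the Z-score and empirical p-value of a candidate sub-graph can be computed directly from its frequency in the original network and in the randomized ensemble (a minimal sketch; the frequency counts in the example are purely illustrative):

```python
import statistics

def motif_scores(f_original, f_random):
    """Z-score and empirical p-value of a sub-graph's frequency.

    f_original: frequency F_G(G') in the original network.
    f_random:   list of frequencies F_Ri(G') in the N randomized networks.
    """
    mu = statistics.mean(f_random)
    sigma = statistics.pstdev(f_random)
    z = (f_original - mu) / sigma if sigma > 0 else float("inf")
    # p-value: fraction of randomized networks in which the sub-graph
    # appears at least as often as in the original network.
    p = sum(1 for f in f_random if f >= f_original) / len(f_random)
    return z, p

# Hypothetical counts: the sub-graph occurs 40 times in G but far less
# often in each of the randomized networks, so it looks like a motif.
z, p = motif_scores(40, [10, 12, 9, 11, 10, 13, 8, 12, 11, 10])
```

A large Z and a p-value below the usual 0.01 or 0.05 threshold would mark the sub-graph as significant.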
In addition, three specific concepts of sub-graph frequency have been proposed. As the figure illustrates, the first frequency concept F1 considers all matches of a graph in the original network. This definition is similar to what we have introduced above. The second concept F2 is defined as the maximum number of edge-disjoint instances of a given graph in the original network. And finally, the frequency concept F3 entails matches with disjoint edges and nodes. Therefore, the two concepts F2 and F3 restrict the usage of elements of the graph, and as can be inferred, the frequency of a sub-graph declines by imposing restrictions on network element usage. As a result, a network motif detection algorithm would pass over more candidate sub-graphs if we insist on frequency concepts F2 and F3.
== History ==
The study of network motifs was pioneered by Holland and Leinhardt who introduced the concept of a triad census of networks. They introduced methods to enumerate various types of subgraph configurations, and test whether the subgraph counts are statistically different from those expected in random networks.
This idea was further generalized in 2002 by Uri Alon and his group when network motifs were discovered in the gene regulation (transcription) network of the bacteria E. coli and then in a large set of natural networks. Since then, a considerable number of studies have been conducted on the subject. Some of these studies focus on the biological applications, while others focus on the computational theory of network motifs.
The biological studies endeavor to interpret the motifs detected for biological networks. For example, in work following, the network motifs found in E. coli were discovered in the transcription networks of other bacteria as well as yeast and higher organisms. A distinct set of network motifs were identified in other types of biological networks such as neuronal networks and protein interaction networks.
The computational research has focused on improving existing motif detection tools to assist the biological investigations and allow larger networks to be analyzed. Several different algorithms have been provided so far, which are elaborated in the next section in chronological order.
Most recently, the acc-MOTIF tool to detect network motifs was released.
== Motif discovery algorithms ==
Various solutions have been proposed for the challenging problem of network motif (NM) discovery. These algorithms can be classified under various paradigms such as exact counting methods, sampling methods, pattern growth methods and so on. However, motif discovery problem comprises two main steps: first, calculating the number of occurrences of a sub-graph and then, evaluating the sub-graph significance. The recurrence is significant if it is detectably far more than expected. Roughly speaking, the expected number of appearances of a sub-graph can be determined by a Null-model, which is defined by an ensemble of random networks with some of the same properties as the original network.
Until 2004, the only exact counting method for NM detection was the brute-force one proposed by Milo et al. This algorithm was successful for discovering small motifs, but using this method for finding even size-5 or size-6 motifs was not computationally feasible. Hence, a new approach to this problem was needed.
Here, a review on computational aspects of major algorithms is given and their related benefits and drawbacks from an algorithmic perspective are discussed.
=== Classification of algorithms ===
The table below lists the motif discovery algorithms that will be described in this section. They can be divided into two general categories: those based on exact counting and those using statistical sampling and estimation instead. Because the second group does not count all the occurrences of a subgraph in the main network, the algorithms belonging to this group are faster, but they might yield biased and unrealistic results.
In the next level, the exact counting algorithms can be classified into network-centric and subgraph-centric methods. The algorithms of the first class search the given network for all subgraphs of a given size, while the algorithms falling into the second class first generate different possible non-isomorphic graphs of the given size, and then explore the network for each generated subgraph separately. Each approach has its advantages and disadvantages, which are discussed below.
The table also indicates whether an algorithm can be used for directed or undirected networks as well as induced or non-induced subgraphs.
=== mfinder ===
Kashtan et al. published mfinder, the first motif-mining tool, in 2004. It implements two kinds of motif finding algorithms: a full enumeration and the first sampling method.
Their sampling discovery algorithm was based on edge sampling throughout the network. This algorithm estimates concentrations of induced sub-graphs and can be utilized for motif discovery in directed or undirected networks. The sampling procedure of the algorithm starts from an arbitrary edge of the network that leads to a sub-graph of size two, and then expands the sub-graph by choosing a random edge that is incident to the current sub-graph. After that, it continues choosing random neighboring edges until a sub-graph of size n is obtained. Finally, the sampled sub-graph is expanded to include all of the edges that exist in the network between these n nodes. When an algorithm uses a sampling approach, taking unbiased samples is the most important issue that the algorithm must address. The sampling procedure, however, does not take samples uniformly, and therefore Kashtan et al. proposed a weighting scheme that assigns different weights to the different sub-graphs within the network. The underlying principle of weight allocation is exploiting the information of the sampling probability for each sub-graph, i.e. probable sub-graphs obtain comparatively smaller weights than improbable ones; hence, the algorithm must calculate the sampling probability of each sub-graph that has been sampled. This weighting technique helps mfinder determine sub-graph concentrations impartially.
In sharp contrast to exhaustive search, the computational time of the algorithm is, surprisingly, asymptotically independent of the network size. An analysis of the computational time of the algorithm has shown that it takes O(n^n) for each sample of a sub-graph of size n from the network. On the other hand, there is no analysis of the classification time of sampled sub-graphs, which requires solving the graph isomorphism problem for each sub-graph sample. Additionally, an extra computational effort is imposed on the algorithm by the sub-graph weight calculation. The algorithm may also sample the same sub-graph multiple times, spending time without gathering any information. In conclusion, by taking advantage of sampling, the algorithm performs more efficiently than an exhaustive search algorithm; however, it only determines sub-graph concentrations approximately. The algorithm can find motifs only up to size 6 because of its main implementation, and it returns the most significant motif rather than all of the others. Also, this tool has no option of visual presentation.
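The edge-sampling step described above can be sketched as follows (a hypothetical helper, not the actual mfinder implementation; the network is assumed to be given as an undirected edge list):

```python
import random

def sample_subgraph(edges, n):
    """One edge-sampling step: grow a connected node set of size n by
    repeatedly picking a random edge incident to the current sub-graph,
    then return the sub-graph induced by those n nodes.

    edges: list of undirected edges (u, v).
    Returns None if the growth reaches a dead end (component < n nodes).
    """
    adjacency = {}
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    u, v = random.choice(edges)       # arbitrary starting edge: size-2 sub-graph
    nodes = {u, v}
    while len(nodes) < n:
        # edges of the network incident to the current sub-graph
        frontier = [(a, b) for a in nodes for b in adjacency[a] if b not in nodes]
        if not frontier:
            return None
        a, b = random.choice(frontier)
        nodes.add(b)
    # final expansion: include every network edge between the sampled nodes
    return {(a, b) for a, b in edges if a in nodes and b in nodes}
```

Note that, as discussed above, such samples are not uniform, which is why mfinder re-weights each sampled sub-graph by its sampling probability.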
=== FPF (Mavisto) ===
Schreiber and Schwöbbermeyer proposed an algorithm named flexible pattern finder (FPF) for extracting frequent sub-graphs of an input network and implemented it in a system named Mavisto. Their algorithm exploits the downward closure property which is applicable for frequency concepts F2 and F3. The downward closure property asserts that the frequency for sub-graphs decrease monotonically by increasing the size of sub-graphs; however, this property does not hold necessarily for frequency concept F1. FPF is based on a pattern tree (see figure) consisting of nodes that represents different graphs (or patterns), where the parent of each node is a sub-graph of its children nodes; in other words, the corresponding graph of each pattern tree's node is expanded by adding a new edge to the graph of its parent node.
At first, the FPF algorithm enumerates and maintains the information of all matches of a sub-graph located at the root of the pattern tree. Then, one-by-one it builds child nodes of the previous node in the pattern tree by adding one edge supported by a matching edge in the target graph, and tries to expand all of the previous information about matches to the new sub-graph (child node). In next step, it decides whether the frequency of the current pattern is lower than a predefined threshold or not. If it is lower and if downward closure holds, FPF can abandon that path and not traverse further in this part of the tree; as a result, unnecessary computation is avoided. This procedure is continued until there is no remaining path to traverse.
The advantage of the algorithm is that it does not consider infrequent sub-graphs and tries to finish the enumeration process as soon as possible; therefore, it only spends time on promising nodes in the pattern tree and discards all other nodes. As an added bonus, the pattern tree notion permits FPF to be implemented and executed in a parallel manner, since each path of the pattern tree can be traversed independently. However, FPF is most useful for frequency concepts F2 and F3, because downward closure is not applicable to F1. Nevertheless, the pattern tree is still practical for F1 if the algorithm runs in parallel. Another advantage of the algorithm is that its implementation has no limitation on motif size, which makes it more amenable to improvements.
=== ESU (FANMOD) ===
The sampling bias of Kashtan et al. provided great impetus for designing better algorithms for the NM discovery problem. Although Kashtan et al. tried to settle this drawback by means of a weighting scheme, this method imposed an undesired overhead on the running time as well as a more complicated implementation. The FANMOD tool described here is one of the most useful ones, as it supports visual options and is also an efficient algorithm with respect to time. But it has a limitation on motif size: it does not allow searching for motifs of size 9 or higher, because of the way the tool is implemented.
Wernicke introduced an algorithm named RAND-ESU that provides a significant improvement over mfinder. This algorithm, which is based on the exact enumeration algorithm ESU, has been implemented as an application called FANMOD. RAND-ESU is a NM discovery algorithm applicable for both directed and undirected networks, effectively exploits an unbiased node sampling throughout the network, and prevents counting sub-graphs more than once. Furthermore, RAND-ESU uses a novel analytical approach called DIRECT for determining sub-graph significance instead of using an ensemble of random networks as a Null-model. The DIRECT method estimates the sub-graph concentration without explicitly generating random networks. Empirically, the DIRECT method is more efficient than the random network ensemble in the case of sub-graphs with a very low concentration; however, the classical Null-model is faster than the DIRECT method for highly concentrated sub-graphs. In the following, we detail the ESU algorithm and then we show how this exact algorithm can be modified efficiently to RAND-ESU, which estimates sub-graph concentrations.
The algorithms ESU and RAND-ESU are fairly simple, and hence easy to implement. ESU first finds the set of all induced sub-graphs of size k; let Sk be this set. ESU can be implemented as a recursive function; the running of this function can be displayed as a tree-like structure of depth k, called the ESU-Tree (see figure). Each of the ESU-Tree nodes indicates the status of the recursive function, which entails two consecutive sets SUB and EXT. SUB refers to nodes in the target network that are adjacent and establish a partial sub-graph of size |SUB| ≤ k. If |SUB| = k, the algorithm has found a complete induced sub-graph, so Sk = Sk ∪ {SUB}. However, if |SUB| < k, the algorithm must expand SUB to achieve cardinality k. This is done by the EXT set, which contains all the nodes that satisfy two conditions: first, each of the nodes in EXT must be adjacent to at least one of the nodes in SUB; second, their numerical labels must be larger than the label of the first element in SUB. The first condition makes sure that the expansion of SUB nodes yields a connected graph and the second condition causes ESU-Tree leaves (see figure) to be distinct; as a result, it prevents overcounting. Note that the EXT set is not static, so in each step it may expand by some new nodes that do not breach the two conditions. The next step of ESU involves classification of sub-graphs placed in the ESU-Tree leaves into non-isomorphic size-k graph classes; consequently, ESU determines sub-graph frequencies and concentrations. This stage has been implemented simply by employing McKay's nauty algorithm, which classifies each sub-graph by performing a graph isomorphism test. Therefore, ESU finds the set of all induced k-size sub-graphs in a target graph by a recursive algorithm and then determines their frequency using an efficient tool.
The procedure of implementing RAND-ESU is quite straightforward and is one of the main advantages of FANMOD. One can change the ESU algorithm to explore just a portion of the ESU-Tree leaves by applying a probability value 0 ≤ pd ≤ 1 for each level d of the ESU-Tree and obliging ESU to traverse each child node of a node in level d-1 with probability pd. This new algorithm is called RAND-ESU. Evidently, when pd = 1 for all levels, RAND-ESU acts like ESU, while for pd = 0 the algorithm finds nothing. The probability of visiting each leaf of the ESU-Tree is Πd pd and is identical for all leaves; therefore, this method guarantees unbiased sampling of sub-graphs from the network. Nonetheless, the value of pd for 1 ≤ d ≤ k must be determined manually by an expert to get precise results for sub-graph concentrations. While there is no lucid prescription for this matter, Wernicke provides some general observations that may help in determining pd values. In summary, RAND-ESU is a very fast algorithm for NM discovery of induced sub-graphs that supports an unbiased sampling method. Although the main ESU algorithm, and thus the FANMOD tool, is known for discovering induced sub-graphs, a trivial modification to ESU also makes it possible to find non-induced sub-graphs.
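The exact enumeration at the heart of ESU can be sketched as follows (a minimal sketch of the recursion described above; the final isomorphism-classification step via nauty is omitted, and node labels are assumed to be comparable integers):

```python
def esu(graph, k):
    """Enumerate all connected induced sub-graphs of size k (ESU).

    graph: dict mapping each node to the set of its neighbors.
    Returns the node sets of the sub-graphs; each appears exactly once.
    """
    results = []

    def extend(sub, ext, v):
        if len(sub) == k:
            results.append(frozenset(sub))   # a complete size-k sub-graph
            return
        while ext:
            w = ext.pop()
            # exclusive neighbors of w: not in SUB, not adjacent to SUB,
            # and with label larger than v (prevents overcounting)
            new_ext = ext | {u for u in graph[w]
                             if u > v and u not in sub
                             and all(u not in graph[s] for s in sub)}
            extend(sub | {w}, new_ext, v)

    for v in graph:
        ext = {u for u in graph[v] if u > v}
        extend({v}, ext, v)
    return results
```

Classifying the returned node sets into non-isomorphic size-k classes (via nauty, as described above) then yields the sub-graph frequencies; RAND-ESU would descend into each recursive call only with probability pd.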
=== NeMoFinder ===
Chen et al. introduced a new NM discovery algorithm called NeMoFinder, which adapts the idea in SPIN to extract frequent trees and then expands them into non-isomorphic graphs. NeMoFinder utilizes frequent size-n trees to partition the input network into a collection of size-n graphs, afterward finding frequent size-n sub-graphs by expansion of frequent trees edge-by-edge until getting a complete size-n graph Kn. The algorithm finds NMs in undirected networks and is not limited to extracting only induced sub-graphs. Furthermore, NeMoFinder is an exact enumeration algorithm and is not based on a sampling method. As Chen et al. claim, NeMoFinder is applicable for detecting relatively large NMs, for instance, finding NMs up to size 12 in the whole S. cerevisiae (yeast) PPI network.
NeMoFinder consists of three main steps: first, finding frequent size-n trees; then utilizing repeated size-n trees to divide the entire network into a collection of size-n graphs; finally, performing sub-graph join operations to find frequent size-n sub-graphs. In the first step, the algorithm detects all non-isomorphic size-n trees and mappings from a tree to the network. In the second step, the ranges of these mappings are employed to partition the network into size-n graphs. Up to this step, there is no distinction between NeMoFinder and an exact enumeration method. However, a large portion of non-isomorphic size-n graphs still remains. NeMoFinder exploits a heuristic to enumerate non-tree size-n graphs by the information obtained from the preceding steps. The main advantage of the algorithm is the third step, which generates candidate sub-graphs from previously enumerated sub-graphs. This generation of new size-n sub-graphs is done by joining each previous sub-graph with derivative sub-graphs from itself called cousin sub-graphs. These new sub-graphs contain one additional edge in comparison to the previous sub-graphs. However, there exist some problems in generating new sub-graphs: there is no clear method to derive cousins from a graph, joining a sub-graph with its cousins leads to redundancy in generating a particular sub-graph more than once, and cousin determination is done by a canonical representation of the adjacency matrix, which is not closed under the join operation. NeMoFinder is an efficient network motif finding algorithm for motifs up to size 12, but only for protein–protein interaction networks, which are presented as undirected graphs. It is not able to work on directed networks, which are important in the field of complex and biological networks.
=== Grochow–Kellis ===
Grochow and Kellis proposed an exact algorithm for enumerating sub-graph appearances. The algorithm is based on a motif-centric approach, which means that the frequency of a given sub-graph, called the query graph, is exhaustively determined by searching for all possible mappings from the query graph into the larger network. It is claimed that a motif-centric method in comparison to network-centric methods has some beneficial features. First of all it avoids the increased complexity of sub-graph enumeration. Also, by using mapping instead of enumerating, it enables an improvement in the isomorphism test. To improve the performance of the algorithm, since it is an inefficient exact enumeration algorithm, the authors introduced a fast method which is called symmetry-breaking conditions. During straightforward sub-graph isomorphism tests, a sub-graph may be mapped to the same sub-graph of the query graph multiple times. In the Grochow–Kellis (GK) algorithm symmetry-breaking is used to avoid such multiple mappings. Here we introduce the GK algorithm and the symmetry-breaking condition which eliminates redundant isomorphism tests.
The GK algorithm discovers the whole set of mappings of a given query graph to the network in two major steps. It starts with the computation of symmetry-breaking conditions of the query graph. Next, by means of a branch-and-bound method, the algorithm tries to find every possible mapping from the query graph to the network that meets the associated symmetry-breaking conditions. An example of the usage of symmetry-breaking conditions in the GK algorithm is demonstrated in the figure.
As mentioned above, the symmetry-breaking technique is a simple mechanism that precludes spending time finding a sub-graph more than once due to its symmetries. Note that computing symmetry-breaking conditions requires finding all automorphisms of a given query graph. Even though there is no known polynomial-time algorithm for the graph automorphism problem, this problem can be tackled efficiently in practice by McKay's tools. As claimed, using symmetry-breaking conditions in NM detection saves a great deal of running time. Moreover, it can be inferred from the results that using the symmetry-breaking conditions yields high efficiency, particularly for directed networks in comparison to undirected networks. The symmetry-breaking conditions used in the GK algorithm are similar to the restriction which the ESU algorithm applies to the labels in the EXT and SUB sets. In conclusion, the GK algorithm computes the exact number of appearances of a given query graph in a large complex network, and exploiting symmetry-breaking conditions improves the algorithm's performance. Also, the GK algorithm is one of the known algorithms having no limitation on motif size in its implementation, and potentially it can find motifs of any size.
=== Color-coding approach ===
Most algorithms in the field of NM discovery are used to find induced sub-graphs of a network. In 2008, Noga Alon et al. introduced an approach for finding non-induced sub-graphs too. Their technique works on undirected networks such as PPI ones. Also, it counts non-induced trees and bounded treewidth sub-graphs. This method is applied for sub-graphs of size up to 10.
This algorithm counts the number of non-induced occurrences of a tree T with k = O(log n) vertices in a network G with n vertices as follows:
Color coding. Color each vertex of input network G independently and uniformly at random with one of the k colors.
Counting. Apply a dynamic programming routine to count the number of non-induced occurrences of T in which each vertex has a unique color. For more details on this step, see the original paper.
Repeat the above two steps O(e^k) times and add up the number of occurrences of T to get an estimate of the number of its occurrences in G.
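The steps above can be sketched for the simplest tree, a k-vertex path, using a dynamic program over color subsets (a minimal sketch of the color-coding idea, not the bounded-treewidth routine of Alon et al.; the graph representation is illustrative):

```python
import math
import random

def count_colorful_paths(graph, colors, k):
    """Count simple k-vertex paths whose vertices all received distinct
    colors, by dynamic programming over the set of colors used."""
    # dp maps (endpoint, frozenset of colors) -> number of colorful
    # paths ending at that node and using exactly that color set
    dp = {(v, frozenset([colors[v]])): 1 for v in graph}
    for _ in range(k - 1):
        ndp = {}
        for (u, s), cnt in dp.items():
            for v in graph[u]:
                if colors[v] not in s:          # extend only with a new color
                    key = (v, s | {colors[v]})
                    ndp[key] = ndp.get(key, 0) + cnt
        dp = ndp
    return sum(dp.values()) // 2                # each path found from both ends

def estimate_path_count(graph, k, trials=500):
    """Color-coding estimate of the number of non-induced occurrences of
    the k-vertex path in an undirected graph (dict of neighbor sets)."""
    hits = 0
    for _ in range(trials):
        coloring = {v: random.randrange(k) for v in graph}  # step 1: color
        hits += count_colorful_paths(graph, coloring, k)     # step 2: count
    # a fixed occurrence becomes colorful with probability k!/k^k
    return hits / trials / (math.factorial(k) / k ** k)
```

Repeating the coloring-and-counting step and dividing by the colorfulness probability k!/k^k is exactly the estimation step of the method; the triangle, for example, contains three distinct 3-vertex paths.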
As available PPI networks are far from complete and error-free, this approach is suitable for NM discovery in such networks. As the Grochow–Kellis algorithm and this approach are the ones popular for non-induced sub-graphs, it is worth mentioning that the algorithm introduced by Alon et al. is less time-consuming than the Grochow–Kellis algorithm.
=== MODA ===
Omidi et al. introduced a new algorithm for motif detection named MODA, which is applicable to induced and non-induced NM discovery in undirected networks. It is based on the motif-centric approach discussed in the Grochow–Kellis algorithm section. Motif-centric algorithms such as MODA and the GK algorithm are notable for their ability to work as query-finding algorithms. This feature allows them to find a single motif query, or a small number of motif queries (not all possible sub-graphs of a given size), of larger sizes. As the number of possible non-isomorphic sub-graphs increases exponentially with sub-graph size, network-centric algorithms, which look for all possible sub-graphs, face a problem for large motifs (even those larger than 10). Although motif-centric algorithms also have problems in discovering all possible large sub-graphs, their ability to find small numbers of them is sometimes a significant property.
Using a hierarchical structure called an expansion tree, the MODA algorithm is able to extract NMs of a given size systematically and, similar to FPF, avoids enumerating unpromising sub-graphs; MODA takes into consideration potential queries (or candidate sub-graphs) that would result in frequent sub-graphs. Although MODA resembles FPF in using a tree-like structure, the expansion tree is applicable merely for computing frequency concept F1. As discussed next, the advantage of this algorithm is that it does not carry out the sub-graph isomorphism test for non-tree query graphs. Additionally, it utilizes a sampling method to speed up the running time of the algorithm.
Here is the main idea: by a simple criterion one can generalize a mapping of a k-size graph into the network to its same-size supergraphs. For example, suppose there is a mapping f(G) of a graph G with k nodes into the network, and a same-size graph G′ with one more edge ⟨u, v⟩; f(G) will map G′ into the network if there is an edge ⟨f(G)(u), f(G)(v)⟩ in the network. As a result, we can exploit the mapping set of a graph to determine the frequencies of its same-order supergraphs in O(1) time, without carrying out sub-graph isomorphism testing. The algorithm starts ingeniously with minimally connected query graphs of size k and finds their mappings in the network via sub-graph isomorphism. After that, preserving the graph size, it expands the previously considered query graphs edge by edge and computes the frequency of the expanded graphs as described above. The expansion process continues until reaching a complete graph Kk (fully connected, with k(k−1)/2 edges).
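The constant-time mapping-extension criterion can be sketched as follows. This is an illustrative reconstruction; the function and variable names are not from the MODA paper:

```python
def mappings_of_supergraph(mappings, network_edges, new_edge):
    """Given the mapping set of a k-node query graph G, keep only the
    mappings that also realize G' = G plus the edge (u, v): a mapping f
    survives iff (f(u), f(v)) is an edge of the network -- an O(1)
    check per mapping instead of a fresh sub-graph isomorphism test."""
    u, v = new_edge
    return [f for f in mappings if frozenset((f[u], f[v])) in network_edges]
```

The surviving mappings immediately give the frequency of the expanded query graph, which is why MODA never runs isomorphism tests for non-tree queries.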
As discussed above, the algorithm starts by computing sub-tree frequencies in the network and then expands sub-trees edge by edge. One way to implement this idea is through a structure called the expansion tree Tk for each k. The figure shows the expansion tree for size-4 sub-graphs. Tk organizes the running process and provides query graphs in a hierarchical manner. Strictly speaking, the expansion tree Tk is simply a directed acyclic graph (DAG), with its root number k indicating the graph size existing in the expansion tree, and each of its other nodes containing the adjacency matrix of a distinct k-size query graph. Nodes in the first level of Tk are all distinct k-size trees, and by traversing Tk in depth, query graphs expand by one edge at each level. A query graph in a node is a sub-graph of the query graph in the node's child, with a one-edge difference. The longest path in Tk consists of (k² − 3k + 4)/2 edges and is the path from the root to the leaf node holding the complete graph. Expansion trees can be generated by a simple routine, described in the original paper.
MODA traverses Tk, and when it extracts query trees from the first level of Tk it computes their mapping sets and saves these mappings for the next step. For non-tree queries from Tk, the algorithm extracts the mappings associated with the parent node in Tk and determines which of these mappings can support the current query graphs. The process continues until the algorithm obtains the complete query graph. The query tree mappings are extracted using the Grochow–Kellis algorithm. For computing the frequency of non-tree query graphs, the algorithm employs a simple routine that takes O(1) steps. In addition, MODA exploits a sampling method in which the probability of sampling each node in the network is linearly proportional to the node degree; this probability distribution is similar to the well-known Barabási–Albert preferential attachment model in the field of complex networks. This approach generates approximations; however, the results are almost stable across different executions, since sub-graphs aggregate around highly connected nodes. Pseudocode for MODA is given in the original publication.
=== Kavosh ===
A more recently introduced algorithm named Kavosh aims at improved main memory usage. Kavosh can be used to detect NMs in both directed and undirected networks. The main idea of the enumeration is similar to the GK and MODA algorithms: first find all k-size sub-graphs in which a particular node participates, then remove that node, and repeat this process for the remaining nodes.
For counting the sub-graphs of size k that include a particular node, trees with a maximum depth of k, rooted at this node and based on the neighborhood relationship, are implicitly built. Children of each node include both incoming and outgoing adjacent nodes. To descend the tree, a child is chosen at each level with the restriction that a particular child can be included only if it has not been included at any upper level. After having descended to the lowest level possible, the tree is ascended again and the process is repeated, with the stipulation that nodes visited in earlier paths of a descendant are now considered unvisited nodes. A final restriction in building the trees is that all children in a particular tree must have numerical labels larger than the label of the root of the tree. The restrictions on the labels of the children are similar to the conditions the GK and ESU algorithms use to avoid overcounting sub-graphs.
The protocol for extracting sub-graphs makes use of the compositions of an integer. For the extraction of sub-graphs of size k, all possible compositions of the integer k − 1 must be considered. The compositions of k − 1 consist of all possible ways of expressing k − 1 as a sum of positive integers; summations in which the order of the summands differs are considered distinct. A composition can be expressed as k2, k3, ..., km where k2 + k3 + ... + km = k − 1. To count sub-graphs based on a composition, ki nodes are selected from the i-th level of the tree to be nodes of the sub-graph (i = 2, 3, ..., m). The k − 1 selected nodes, along with the node at the root, define a sub-graph within the network. After discovering a sub-graph that counts as a match in the target network, in order to evaluate the size of each class with respect to the target network, Kavosh employs the nauty algorithm in the same way as FANMOD. The enumeration part of the Kavosh algorithm is given as pseudocode in the original publication.
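The compositions used in this extraction step can be enumerated with a short recursive routine (an illustrative sketch, not Kavosh's implementation):

```python
def compositions(n):
    """All ordered compositions of the integer n (order of summands
    matters), as used by Kavosh for n = k - 1."""
    if n == 0:
        return [[]]
    result = []
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            result.append([first] + rest)
    return result

# For sub-graphs of size k = 4, the compositions of 3 describe how many
# nodes to pick from each tree level: [1,1,1], [1,2], [2,1], [3].
```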
Recently, a Cytoscape plugin called CytoKavosh has been developed for this software.
=== G-Tries ===
In 2010, Pedro Ribeiro and Fernando Silva proposed a novel data structure for storing a collection of sub-graphs, called a g-trie. This data structure, conceptually akin to a prefix tree, stores sub-graphs according to their structures and finds occurrences of each of these sub-graphs in a larger graph. A noticeable aspect of this data structure is that, when it comes to network motif discovery, only the sub-graphs that occur in the main network need to be evaluated; there is no need to look in the random networks for sub-graphs that are not in the main network. This can be one of the time-consuming parts of the algorithms that derive all sub-graphs in random networks.
A g-trie is a multiway tree that can store a collection of graphs. Each tree node contains information about a single graph vertex and its corresponding edges to ancestor nodes. A path from the root to a leaf corresponds to one single graph. Descendants of a g-trie node share a common sub-graph. Constructing a g-trie is well described in the original paper. After a g-trie is constructed, the counting takes place: the main idea of the counting process is to backtrack through all possible sub-graphs while performing the isomorphism tests at the same time. This backtracking technique is essentially the same technique employed by other motif-centric approaches such as the MODA and GK algorithms. The approach takes advantage of common substructures in the sense that, at a given time, there is a partial isomorphic match for several different candidate sub-graphs.
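The prefix-sharing idea can be illustrated with a toy g-trie whose nodes store adjacency-matrix rows under a fixed vertex insertion order. This is a simplified sketch; the real data structure keeps more information per node (such as symmetry-breaking conditions):

```python
class GTrieNode:
    """Minimal sketch of a g-trie node; each stored graph is encoded
    as the successive rows of its adjacency matrix (an assumption for
    illustration, not the authors' exact encoding)."""
    def __init__(self):
        self.children = {}      # row (tuple of bits) -> child node
        self.is_graph = False   # True if a stored graph ends at this node

def gtrie_insert(root, rows):
    """Insert a graph so that graphs sharing a common sub-structure
    (identical leading rows) share a common path prefix."""
    node = root
    for row in rows:
        node = node.children.setdefault(tuple(row), GTrieNode())
    node.is_graph = True
    return root
```

Because the triangle and the 3-vertex path share their first two rows, they share a path prefix in the trie, and a partial match in the network serves both candidates at once.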
Among the algorithms mentioned, G-Tries is the fastest. However, its excessive use of memory is a drawback, which might limit the size of motifs discoverable on a personal computer with average memory.
=== ParaMODA and NemoMap ===
ParaMODA and NemoMap are fast algorithms published in 2017 and 2018, respectively. They are not as scalable as many of the other algorithms.
=== Comparison ===
Tables and figure below show the results of running the mentioned algorithms on different standard networks. These results are taken from the corresponding sources, thus they should be treated individually.
== Well-established motifs and their functions ==
Much experimental work has been devoted to understanding network motifs in gene regulatory networks. These networks control which genes are expressed in the cell in response to biological signals. The network is defined such that genes are nodes, and directed edges represent the control of one gene by a transcription factor (a regulatory protein that binds DNA) encoded by another gene. Thus, network motifs are patterns of genes regulating each other's transcription rate. When transcription networks are analyzed, the same network motifs are seen to appear again and again in diverse organisms, from bacteria to humans. The transcription networks of E. coli and yeast, for example, are made of three main motif families that make up almost the entire network. The leading hypothesis is that network motifs were independently selected by evolutionary processes in a converging manner, since the creation or elimination of regulatory interactions is fast on an evolutionary time scale relative to the rate at which genes change. Furthermore, experiments on the dynamics generated by network motifs in living cells indicate that they have characteristic dynamical functions. This suggests that network motifs serve as building blocks in gene regulatory networks that are beneficial to the organism.
The functions associated with common network motifs in transcription networks have been explored and demonstrated by several research projects, both theoretically and experimentally. Below are some of the most common network motifs and their associated functions.
=== Negative auto-regulation (NAR) ===
One of the simplest and most abundant network motifs in E. coli is negative auto-regulation, in which a transcription factor (TF) represses its own transcription. This motif was shown to perform two important functions. The first is response acceleration: NAR was shown to speed up the response to signals both theoretically and experimentally, first in a synthetic transcription network and later in the natural context of the SOS DNA repair system of E. coli. The second is increased stability of the auto-regulated gene product concentration against stochastic noise, reducing variations in protein levels between different cells.
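The response-acceleration effect can be illustrated with a minimal simulation comparing simple regulation to a strongly repressed NAR circuit. This is a sketch under illustrative parameters, not a model of the cited experiments:

```python
def rise_time(production, alpha=1.0, dt=0.001, t_end=20.0):
    """Time for X to reach half of its steady-state level, by Euler
    integration of dX/dt = production(X) - alpha*X."""
    X, t, trace = 0.0, 0.0, []
    while t < t_end:
        X += (production(X) - alpha * X) * dt
        t += dt
        trace.append((t, X))
    steady = trace[-1][1]
    return next(t for t, x in trace if x >= steady / 2)

# Simple regulation: constant production, steady state ~1.0.
simple = rise_time(lambda X: 1.0)
# Strong NAR approximated as a logic function: a strong promoter that
# shuts off once X passes the repression threshold. The same steady
# level is reached, but the threshold is hit much sooner (nar < simple).
nar = rise_time(lambda X: 5.0 if X < 0.5 else 0.0)
```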
=== Positive auto-regulation (PAR) ===
Positive auto-regulation (PAR) occurs when a transcription factor enhances its own rate of production. In contrast to the NAR motif, PAR slows the response time compared to simple regulation. In the case of strong PAR, the motif may lead to a bimodal distribution of protein levels in cell populations.
=== Feed-forward loops (FFL) ===
This motif is commonly found in many gene systems and organisms. The motif consists of three genes and three regulatory interactions. The target gene C is regulated by two TFs, A and B, and in addition TF B is itself regulated by TF A. Since each regulatory interaction may be either positive or negative, there are eight possible types of FFL motifs. Two of those eight types, the coherent type 1 FFL (C1-FFL), in which all interactions are positive, and the incoherent type 1 FFL (I1-FFL), in which A activates C and also activates B, which represses C, are found much more frequently in the transcription networks of E. coli and yeast than the other six types. In addition to the structure of the circuitry, the way in which the signals from A and B are integrated by the C promoter should also be considered. In most cases the FFL is either an AND gate (both A and B are required for C activation) or an OR gate (either A or B is sufficient for C activation), but other input functions are also possible.
=== Coherent type 1 FFL (C1-FFL) ===
The C1-FFL with an AND gate was shown to function as a 'sign-sensitive delay' element and a persistence detector, both theoretically and experimentally in the arabinose system of E. coli. This means the motif can provide pulse filtration: short pulses of signal do not generate a response, but persistent signals generate a response after a short delay. The shut-off of the output when a persistent pulse ends is fast. The opposite behavior, with a fast response and a delayed shut-off, emerges in the case of a sum gate, as was demonstrated in the flagella system of E. coli. De novo evolution of C1-FFLs in gene regulatory networks has been demonstrated computationally in response to selection to filter out an idealized short signal pulse, but for non-idealized noise, a dynamics-based system of feed-forward regulation with a different topology was instead favored.
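The sign-sensitive delay and pulse filtration can be reproduced in a minimal simulation (all parameter names and values are illustrative, not a model of the arabinose system itself):

```python
def simulate_c1_ffl(pulse, t_end=10.0, dt=0.01, beta=1.0, alpha=1.0, K=0.5):
    """Euler simulation of a C1-FFL with an AND-gate promoter: input A
    is on during the interval `pulse`; B accumulates while A is on;
    C is produced only when A is on AND B exceeds the threshold K."""
    B = C = 0.0
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        A = 1.0 if pulse[0] <= t < pulse[1] else 0.0
        B += (beta * A - alpha * B) * dt
        gate = 1.0 if (A > 0 and B > K) else 0.0   # AND input function
        C += (beta * gate - alpha * C) * dt
        trace.append((t, A, B, C))
    return trace

# A short pulse is filtered out (B never reaches K, so C stays at zero),
# while a persistent signal turns C on after a delay of about ln(2)/alpha.
```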
=== Incoherent type 1 FFL (I1-FFL) ===
The I1-FFL is a pulse generator and response accelerator. The two signal pathways of the I1-FFL act in opposite directions: one pathway activates Z and the other represses it. When the repression is complete, this leads to pulse-like dynamics. It was also demonstrated experimentally that the I1-FFL can serve as a response accelerator in a way similar to the NAR motif; the difference is that the I1-FFL can speed up the response of any gene, not necessarily a transcription factor gene. An additional function was assigned to the I1-FFL network motif: it was shown both theoretically and experimentally that the I1-FFL can generate non-monotonic input functions in both synthetic and native systems. Finally, expression units that incorporate incoherent feedforward control of the gene product provide adaptation to the amount of DNA template and can be superior to simple combinations of constitutive promoters. Feedforward regulation displayed better adaptation than negative feedback, and circuits based on RNA interference were the most robust to variation in DNA template amounts. De novo evolution of I1-FFLs in gene regulatory networks has been demonstrated computationally in response to selection to generate a pulse, with I1-FFLs being more evolutionarily accessible, but not superior, relative to an alternative motif in which the output, rather than the input, activates the repressor.
=== Multi-output FFLs ===
In some cases the same regulators X and Y regulate several Z genes of the same system. By adjusting the strength of the interactions this motif was shown to determine the temporal order of gene activation. This was demonstrated experimentally in the flagella system of E. coli.
=== Single-input modules (SIM) ===
This motif occurs when a single regulator regulates a set of genes with no additional regulation. This is useful when the genes cooperatively carry out a specific function and therefore always need to be activated in a synchronized manner. By adjusting the strength of the interactions, it can create a temporal expression program of the genes it regulates.
In the literature, Multiple-input modules (MIM) arose as a generalization of SIM. However, the precise definitions of SIM and MIM have been a source of inconsistency. There are attempts to provide orthogonal definitions for canonical motifs in biological networks and algorithms to enumerate them, especially SIM, MIM and Bi-Fan (2x2 MIM).
=== Dense overlapping regulons (DOR) ===
This motif occurs in the case that several regulators combinatorially control a set of genes with diverse regulatory combinations. This motif was found in E. coli in various systems such as carbon utilization, anaerobic growth, and stress response. In order to better understand the function of this motif, one has to obtain more information about the way the multiple inputs are integrated by the genes. Kaplan et al. mapped the input functions of the sugar utilization genes in E. coli, showing diverse shapes.
== Activity motifs ==
An interesting generalization of network motifs, activity motifs are over-occurring patterns that can be found when nodes and edges in the network are annotated with quantitative features. For instance, when edges in metabolic pathways are annotated with the magnitude or timing of the corresponding gene expression, some patterns are over-occurring given the underlying network structure.
== Criticism ==
An assumption (sometimes more, sometimes less implicit) behind the preservation of a topological sub-structure is that it is of particular functional importance. This assumption has recently been questioned. Some authors have argued that motifs, like bi-fan motifs, might vary in function depending on the network context, and therefore the structure of a motif does not necessarily determine its function. Indeed, an analysis of motifs in the C. elegans brain connectome in terms of "uncolored nodes" (nodes without a functional tag) revealed no significant difference in motif abundance compared to chance. When nodes are assigned colors according to their functional role in the network, however (for example, different colors for sensory neurons, motor neurons, or interneurons), particular colored motifs are found to be used significantly more than expected by chance, reflecting the functional role of the motif. Certain bi-fan motifs, for example, appear with significantly enhanced frequency, while other colored bi-fan motifs do not. Because the number of colored motifs increases exponentially with the number of colors, a search for colored motifs with significant bias can only be carried out for a small number of colors (node types). Network structure certainly does not always indicate function; this is an idea that has been around for some time (for another example, see the Sin operon).
Most analyses of motif function are carried out looking at the motif operating in isolation. Recent research provides good evidence that network context, i.e. the connections of the motif to the rest of the network, is too important to draw inferences on function from local structure only; the cited paper also reviews the criticisms and alternative explanations for the observed data. The impact of a single motif module on the global dynamics of a network has also been analyzed. Yet another recent work suggests that certain topological features of biological networks naturally give rise to the common appearance of canonical motifs, thereby questioning whether frequencies of occurrence are reasonable evidence that the structures of motifs are selected for their functional contribution to the operation of networks.
== See also ==
Clique (graph theory)
Graphical model
== References ==
== External links ==
A software tool that can detect network motifs
bio-physics-wiki NETWORK MOTIFS
FANMOD: a tool for fast network motif detection
MAVisto: network motif analysis and visualisation tool
NeMoFinder
Grochow–Kellis
MODA
Kavosh
CytoKavosh
G-Tries
acc-MOTIF detection tool
Systemic therapy is a type of psychotherapy that seeks to address people in relationships, dealing with the interactions of groups and their interactional patterns and dynamics.
Early forms of systemic therapy were based on cybernetics and systems theory. Systemic therapy practically addresses stagnant behavior patterns within living systems without analyzing their cause. The therapist's role is to introduce creative "nudges" to help systems change themselves. This approach is increasingly applied in various fields like business, education, politics, psychiatry, social work, and family medicine.
== History ==
Systemic therapy has its roots in family therapy, or more precisely family systems therapy as it later came to be known. In particular, systemic therapy traces its roots to the Milan school of Mara Selvini Palazzoli, but also derives from the work of Salvador Minuchin, Murray Bowen, Ivan Boszormenyi-Nagy, as well as Virginia Satir and Jay Haley from MRI in Palo Alto. These early schools of family therapy represented therapeutic adaptations of the larger interdisciplinary field of systems theory which originated in the fields of biology and physiology.
Systemic family therapy developed from Murray Bowen's theory, based on research he conducted from the late 1940s to the early 1950s at the NIMH. The research project had families live on the research ward for extended periods, and Bowen and his staff conducted extensive observational research on each family's interactions. Bowen's theory of systemic family therapy comprises eight concepts: "Triangles", "Differentiation of Self", "Nuclear Family Emotional Process", "Family Projection Process", "Multigenerational Transmission Process", "Emotional Cutoff", "Sibling Position", and "Societal Emotional Process". In the late 1960s, he introduced the theory of family systems, which was based on the structure and behavior of the family's relationship system, as opposed to traditional individual therapy. Bowen researched the family patterns of people with schizophrenia who were receiving treatment, as well as the patterns of his own family of origin, viewing families as complex systems. The number of elements and how they are organized can alter how complex a system is. Such a system requires control and feedback mechanisms, which is where cybernetics comes into play: the mathematician Norbert Wiener coined the term cybernetics for the study of automatic control systems. Another contribution came from Gregory Bateson, who proposed the idea that the family is a system governed by cybernetic principles. Systemic theory builds on this view, explaining how individuals interact with each other, their connections to others, their patterns, and their relationships.
Early forms of systemic therapy were based on cybernetics. In the 1970s this understanding of systems theory was central to the structural (Minuchin) and strategic (Haley, Selvini Palazzoli) schools of family therapy which would later develop into systemic therapy. In the light of postmodern critique, the notion that one could control systems or say objectively "what is" came increasingly into question. Based largely on the work of anthropologists Gregory Bateson and Margaret Mead, this resulted in a shift towards what is known as "second-order cybernetics" which acknowledges the influence of the subjective observer in any study, essentially applying the principles of cybernetics to cybernetics – examining the examination.
As a result, the focus of systemic therapy (ca. 1980 and forward) has moved away from a modernist model of linear causality and understanding of reality as objective, to a postmodern understanding of reality as socially and linguistically constructed.
== Practical application ==
Systemic therapy approaches problems practically rather than analytically. It seeks to identify stagnant patterns of behavior within a living system - a group of people, such as a family. It then addresses those patterns directly, without analysing their cause. Systemic therapy does not attempt to determine past causes, such as subconscious impulses or childhood trauma, or to diagnose. Thus, it differs from psychoanalytic and psychodynamic forms of family therapy (for example, the work of Horst-Eberhard Richter).
Systemic therapies are increasingly used in personal and professional settings, and there is also evidence that they benefit children with mental disorders. Behavioral disorders that affect mood and learning abilities have working evidence supporting the implementation of systemic therapy among younger children who struggle with these issues (Retzlaff et al., 2013). The approach of reframing daily struggles helps those with mood disorders ground themselves in the practicalities of their situations. Those receiving systemic therapies are encouraged to focus on the realities of their daily lives and are offered a pragmatic perspective on problem-solving skills.
When approaching systemic therapy, a multitude of factors is considered in order to reach the desired results. The approach is determined on a case-by-case basis, considering factors such as mental disorders, the adolescent's upbringing, situational life events, stress induced by societal factors, and unconventional family dynamics (Lorås, 2017). The methodology of systemic therapy combines various data points to determine which approach might be best to implement for the individual. All contributing stress factors of the individual's reality are considered during the development of the grounded theory analysis, in order to best aid the individuals in need.
Although systemic therapy does not attempt to determine past causes, it is important to recognize that systemic therapy is used in family therapy, where it is also known as "systemic family therapy". These practices are often used with families or children affected by drug abuse, behavioral problems, chronic illness, and many other issues (Cottrell & Boston, 2002). These are some ways systemic therapy has been utilized in mental health institutions, and it continues to be practiced with patients.
A key point of this postmodern perspective is not a denial of absolutes. Instead, the therapist recognises that they do not hold the capacity to change people or systems. Their role is to introduce creative "nudges" which help systems to change themselves.
A study by Eugene K. Epstein supports the idea that a therapist does not hold the capacity to change people or systems. Epstein argues that although we cannot change systems, we can influence them. Part of postmodernism relies on our self-agency, our cultures, our practices, and so on (Epstein, 2016); these views and cultural biases therefore affect and influence the approach to therapy, in this instance systemic therapy. Therapists practicing systemic therapy can analyze and recognize patterns of emotion. People can often feel constrained by, or confused about, what they are feeling; when they can clarify and understand their emotions, this can lead to positive change (Bertrando & Arcelloni, 2014). In this way, systemic therapy also helps exercise emotional interpretation.
There are various techniques that involve systemic therapy. One is structural family therapy, in which structural family therapists intervene to move families toward well-functioning family structures. For families with complex dynamics, advised techniques include confronting unclear family boundaries and reestablishing the family structure by shifting the family's composition, for example by forming pairs of family members positioned opposite one another. These are a few procedures believed to restore the balance of positions within the family.
An additional point that helps in comprehending this approach is that the aim of this form of therapy is to bring family members closer to the model; the proper approach is therefore to use guidance and recommendations, which therapists believe is one of the most effective techniques. The therapist applies this technique through verbal communication. For instance, the therapist begins by asking a series of questions that demonstrate characteristics of authority, and the individual who raises new issues ties them to a situation or set of routines. The therapist then provides the individuals with a scenario that helps them navigate an upcoming conflict that may arise, allowing family members to engage in discussion and offer possible resolutions.
There is also additional information providing insight into the positive outcome of systemic intervention in families of children with particular difficulties, through family therapy or other family-orientated techniques. For instance, family-orientated interventions have demonstrated positive results regarding infants' sleeping issues.
Family-orientated approaches have been discussed as a proper remedy for infant wakening issues, which are among the most common issues presented during the infancy stage. In these techniques, parents are advised on how to minimize their infant's afternoon naps, construct effective nighttime practices, and eliminate parent–infant interaction during the nighttime sleeping cycle. A sleeping agenda also helped minimize the sudden awakening of infants.
The final result indicated that the systemic approach helped reduce awakenings in infants and had a positive effect on their sleeping issues.
Another technique that involves systemic therapy is conceptualization, which allows the therapist to gather the patient's symptoms in context and to look into how the patient's experiences create patterns with other individuals or family members. These forms of systemic therapy help people of any age group resolve their issues, including anger management, substance addictions, relationship problems, mood disorders, and more. Human interactions are connected to emotions and can in turn branch out to social or cultural interventions. Evidence supports that systemic interventions have a positive effect on infants and on certain emotional problems they may have, such as behavioral issues.
Systemic therapy neither attempts a 'treatment of causes' nor of symptoms; rather it gives living systems nudges that help them to develop new patterns together, taking on a new organizational structure that allows growth.
While family systems therapy addresses only families, systemic therapy, in a similar fashion to systemic hypothesising, addresses other systems as well. The systemic approach is increasingly used in business, education, politics, psychiatry, social work, and family medicine.
== See also ==
List of therapies
Systems theory
Family therapy
Systemic coaching
Systems psychology
== References ==
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, and causal inference.
== Methodology ==
=== Optimization problems ===
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits (also called bit set or bit string). Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
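The loop described above can be sketched as a minimal, self-contained program. The OneMax objective (maximize the number of 1 bits), the parameter values, and the choice of tournament selection, single-point crossover, and bit-flip mutation are illustrative assumptions, not part of the description above:

```python
import random

random.seed(0)  # for reproducibility of this sketch

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01
GENERATIONS = 100

def fitness(genome):
    return sum(genome)  # OneMax: count of 1 bits

def tournament_select(population, k=3):
    # pick the fittest of k randomly sampled individuals
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    point = random.randint(1, GENOME_LEN - 1)  # single-point crossover
    return a[:point] + b[point:]

def mutate(genome):
    return [(bit ^ 1) if random.random() < MUTATION_RATE else bit
            for bit in genome]

# initialization: a random population covering the search space
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

# generational loop: select, recombine, mutate, replace
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament_select(population),
                                   tournament_select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
```

With these settings the population typically converges to the all-ones string within the allotted generations; real applications replace `fitness` with the problem's objective function.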
==== Initialization ====
The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest.
==== Selection ====
During each successive generation, a portion of the existing population is selected to reproduce for a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
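The two families of selection methods mentioned above can be sketched as follows; fitness-proportionate (roulette-wheel) selection rates every solution, while tournament selection rates only a random sample. Function names and the population representation are illustrative:

```python
import random

def roulette_select(population, fitnesses):
    # fitness-proportionate selection: probability of being
    # chosen is proportional to fitness
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

def tournament_select(population, fitnesses, k=3):
    # rate only a random sample of k individuals
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]
```

Tournament selection avoids summing over the whole population, which is why it is often preferred when fitness evaluation or population size makes full ranking expensive.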
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
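The knapsack fitness just described translates directly into code; the item values, weights, and capacity below are made-up illustration data:

```python
# Each bit says whether the corresponding object is packed;
# invalid (overweight) solutions score 0, as described above.
values = [60, 100, 120, 30]
weights = [10, 20, 30, 15]
CAPACITY = 50

def knapsack_fitness(bits):
    total_weight = sum(w for w, b in zip(weights, bits) if b)
    if total_weight > CAPACITY:
        return 0  # invalid: exceeds knapsack capacity
    return sum(v for v, b in zip(values, bits) if b)
```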
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
==== Genetic operators ====
The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination), and mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.
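One way to recombine more than two parents is a multi-parent uniform crossover, in which each gene of the child is drawn from a randomly chosen parent. This is just one of several published multi-parent schemes, sketched here for illustration:

```python
import random

def multi_parent_crossover(parents):
    # each gene position inherits from a uniformly chosen parent
    length = len(parents[0])
    return [random.choice(parents)[i] for i in range(length)]

child = multi_parent_crossover([[0] * 8, [1] * 8, [0, 1] * 4])
```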
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the class of problems being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
==== Heuristics ====
In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.
==== Termination ====
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
A solution is found that satisfies minimum criteria
Fixed number of generations reached
Allocated budget (computation time/money) reached
The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
Manual inspection
Combinations of the above
== The building block hypothesis ==
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.
A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
"Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as a reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.
== Limitations ==
The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:
Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that amalgamation of approximate models may be one of the most promising approaches to convincingly use GA to solve complex real life problems.
Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house, or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
The "better" solution is only in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.
In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (within a niche radius) has a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
GAs cannot effectively solve problems in which the only fitness measure is a binary pass/fail outcome (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.
== Variants ==
=== Chromosome representation ===
The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered but is arguably a misnomer, because it does not correspond to the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily affected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
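The binary-reflected Gray code used for this purpose has a compact conversion: adjacent integers differ in exactly one Gray-coded bit, which is what avoids the Hamming walls described above. A sketch:

```python
def to_gray(n):
    # binary-reflected Gray code of a non-negative integer
    return n ^ (n >> 1)

def from_gray(g):
    # invert the encoding by cascading XORs
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 1 and 2 encode to 1 and 3 (one bit apart in Gray code), whereas their plain binary forms 01 and 10 differ in two bits, so a single mutation cannot move between them.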
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained by viewing the set of real values in a finite population of chromosomes as forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.
An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
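A section-wise crossover of the kind just described might look like the following sketch, where a chromosome concatenates a real-valued controller-gain section and a discrete fuzzy-rule section; the field names, operators chosen per section, and data are all illustrative assumptions:

```python
import random

def sectionwise_crossover(parent_a, parent_b):
    # recombine each section with an operator suited to its encoding
    child = {}
    # real-valued section: arithmetic (blend) crossover
    child["gains"] = [(x + y) / 2
                      for x, y in zip(parent_a["gains"], parent_b["gains"])]
    # discrete section: uniform crossover
    child["rules"] = [random.choice(pair)
                      for pair in zip(parent_a["rules"], parent_b["rules"])]
    return child

a = {"gains": [1.0, 0.5, 0.25], "rules": ["low", "mid", "high"]}
b = {"gains": [2.0, 1.5, 0.75], "rules": ["mid", "mid", "low"]}
child = sectionwise_crossover(a, b)
```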
Another important expansion of the Genetic Algorithm (GA) accessible solution space was driven by the need to make representations amenable to variable levels of knowledge about the solution states. Variable-length representations were inspired by the observation that, in nature, evolution tends to progress from simpler organisms to more complex ones—suggesting an underlying rationale for embracing flexible structures. A second, more pragmatic motivation was that most real-world engineering and knowledge-based problems do not naturally conform to rigid knowledge structures.
These early innovations in variable-length representations laid essential groundwork for the development of Genetic programming, which further extended the classical GA paradigm. Such representations required enhancements to the simplistic genetic operators used for fixed-length chromosomes, enabling the emergence of more sophisticated and adaptive GA models.
=== Elitism ===
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.
=== Parallel implementations ===
Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction.
Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.
=== Adaptive GAs ===
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.
Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain population diversity as well as to sustain convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. Other AGA variants exist: the successive zooming method is an early example of improving convergence, while in CAGA (clustering-based adaptive genetic algorithm) clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on that state. Recent approaches use more abstract variables for deciding pc and pm; examples are dominance and co-dominance principles and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropicity.
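The fitness-based adjustment of pm can be sketched as follows: mutation shrinks for above-average solutions (to preserve them) and stays high for below-average ones. The constants and exact formula here are illustrative, in the spirit of AGA rather than a reproduction of any specific published scheme:

```python
def adaptive_pm(f, f_avg, f_max, pm_high=0.05, pm_low=0.001):
    if f_max == f_avg:
        return pm_high  # population has converged: raise mutation
    if f < f_avg:
        return pm_high  # below average: mutate aggressively
    # above average: scale pm down as f approaches the current maximum
    return pm_low + (pm_high - pm_low) * (f_max - f) / (f_max - f_avg)
```

An analogous rule can adjust pc, lowering the crossover probability for solutions near the current best so that good genomes are disrupted less often.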
It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing.
This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA.
== Problem domains ==
Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).
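The remark that crossover cannot change a uniform population is easy to verify: single-point crossover of two identical parents returns the same genome regardless of the crossover point, so only mutation can introduce anything new. A small demonstration:

```python
import random

def single_point_crossover(a, b):
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

genome = [1, 0, 1, 1]
# crossing two identical parents: the child is always the parent
child = single_point_crossover(genome, genome)
```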
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, and optimal design of aerodynamic bodies in complex flowfields.
In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task:
[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.
[...]
I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs.
== History ==
In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as in 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).
Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game, artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.
=== Commercial products ===
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.
In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version. Since the 1990s, MATLAB has built in three derivative-free optimization heuristic algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).
== Related techniques ==
=== Parent fields ===
Genetic algorithms are a sub-field of:
Evolutionary algorithms
Evolutionary computing
Metaheuristics
Stochastic optimization
Optimization
=== Related fields ===
==== Evolutionary algorithms ====
Evolutionary algorithms are a sub-field of evolutionary computing.
Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
Evolutionary programming (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
Estimation of Distribution Algorithm (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover.
Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms. There are many variants of Genetic Programming, including Cartesian genetic programming, Gene expression programming, grammatical evolution, Linear genetic programming, Multi expression programming etc.
Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, like in classical GAs, to groups or subsets of items. The idea behind this GA evolution proposed by Emanuel Falkenauer is that solving some complex problems, a.k.a. clustering or partitioning problems where a set of items must be split into disjoint groups of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.
==== Swarm intelligence ====
Swarm intelligence is a sub-field of evolutionary computing.
Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas.
Although considered an Estimation of distribution algorithm, Particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than the GAs, especially in unconstrained problems with continuous variables.
==== Other evolutionary computing algorithms ====
Evolutionary computation is a sub-field of the metaheuristic methods.
Memetic algorithm (MA), often called hybrid genetic algorithm among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is no single individual that fits the whole environment, so one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.
Cultural algorithm (CA) consists of a population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
Differential evolution (DE) is inspired by the migration of superorganisms.
Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore, it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.
==== Other metaheuristic methods ====
Metaheuristic methods broadly fall within stochastic optimisation methods.
Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
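The acceptance rule and cooling schedule can be sketched as follows. The neighbor function, geometric schedule, and one-dimensional test problem are illustrative assumptions, not part of any canonical SA formulation.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, iters=2000, seed=1):
    """Minimise f ("energy"): improvements are always accepted; worsenings
    are accepted with probability exp(-delta / T) under a cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        delta = fy - fx
        # Accept if energy decreases, or probabilistically if it increases.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # decreasing temperature parameter
    return best, fbest

# One-dimensional example: minimise (x - 3)^2 starting far from the optimum.
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = simulated_annealing(lambda x: (x - 3) ** 2, x0=-10.0, neighbor=step)
```

As the temperature decays, the probability of accepting a worsening move shrinks, so the search gradually shifts from exploration to pure descent.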
Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
Extremal optimization (EO), unlike GAs, which work with a population of candidate solutions, evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.
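A minimal illustration of the component-replacement principle, on a hypothetical bit-matching task (not a problem from the EO literature): each bit is a component whose quality is 1 when it matches a target and 0 otherwise, and each step replaces one worst (mismatched) component with a random one.

```python
import random

def extremal_optimization(target, iters=500, seed=3):
    """EO sketch on a toy bit-matching problem.  The single solution x is a
    bit string; a component's quality is 1 if it matches the target, else 0.
    Each step replaces a lowest-quality component with a random value."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in target]
    for _ in range(iters):
        bad = [i for i in range(len(x)) if x[i] != target[i]]
        if not bad:
            break                   # every component already at maximal quality
        i = rng.choice(bad)         # select a worst component
        x[i] = rng.randint(0, 1)    # replace it with a random component
    return x

target = [1, 0, 1, 1, 0, 0, 1, 0]
result = extremal_optimization(target)
```

Note that, in contrast to a GA, no population and no recombination are involved: improvement emerges solely from repeatedly culling the worst component.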
==== Other stochastic optimisation methods ====
The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
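A one-dimensional sketch of this idea, using a Gaussian as the parameterized distribution: sample candidates, keep an elite fraction, and refit the distribution's parameters to the elite set. The problem, sample sizes, and iteration count are illustrative assumptions.

```python
import random
import statistics

def cross_entropy_min(f, mu=0.0, sigma=5.0, n=100, elite=10, iters=40, seed=2):
    """Minimise f by repeatedly sampling from N(mu, sigma) and refitting
    (mu, sigma) to the elite (best-scoring) samples of each iteration."""
    rng = random.Random(seed)
    for _ in range(iters):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=f)                          # best candidates first
        top = xs[:elite]
        mu = statistics.mean(top)               # Gaussian CE update: fit the
        sigma = statistics.pstdev(top) + 1e-12  # parameters to the elite set
    return mu

m = cross_entropy_min(lambda x: (x - 7) ** 2)
```

For a Gaussian sampling distribution, the cross-entropy minimization step reduces to exactly this mean/standard-deviation fit over the elite samples.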
Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.
== See also ==
Genetic programming
List of genetic algorithm applications
Genetic algorithms in signal processing (a.k.a. particle filters)
Propagation of schema
Universal Darwinism
Metaheuristics
Learning classifier system
Rule-based machine learning
== References ==
== Bibliography ==
== External links ==
=== Resources ===
[1] Provides a list of resources in the genetic algorithms field
An Overview of the History and Flavors of Evolutionary Algorithms
=== Tutorials ===
Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand. An excellent introduction to GA by John Holland, with an application to the Prisoner's Dilemma
An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints.
A Genetic Algorithm Tutorial by Darrell Whitley, Computer Science Department, Colorado State University. An excellent tutorial with much theory.
"Essentials of Metaheuristics", 2009 (225 p). Free open text by Sean Luke.
Global Optimization Algorithms – Theory and Application Archived 11 September 2008 at the Wayback Machine
Genetic Algorithms in Python Tutorial with the intuition behind GAs and Python implementation.
Genetic Algorithms evolve to solve the prisoner's dilemma. Written by Robert Axelrod.
In control theory, affect control theory proposes that individuals maintain affective meanings through their actions and interpretations of events. The activity of social institutions occurs through maintenance of culturally based affective meanings.
== Affective meaning ==
Besides a denotative meaning, every concept has an affective meaning, or connotation, that varies along three dimensions: evaluation – goodness versus badness, potency – powerfulness versus powerlessness, and activity – liveliness versus torpidity. Affective meanings can be measured with semantic differentials yielding a three-number profile indicating how the concept is positioned on evaluation, potency, and activity (EPA). Osgood demonstrated that an elementary concept conveyed by a word or idiom has a normative affective meaning within a particular culture.
A stable affective meaning derived either from personal experience or from cultural inculcation is called a sentiment, or fundamental affective meaning, in affect control theory. Affect control theory has inspired assembly of dictionaries of EPA sentiments for thousands of concepts involved in social life – identities, behaviours, settings, personal attributes, and emotions. Sentiment dictionaries have been constructed with ratings of respondents from the US, Canada, Northern Ireland, Germany, Japan, China and Taiwan.
== Impression formation ==
Each concept that is in play in a situation has a transient affective meaning in addition to an associated sentiment. The transient corresponds to an impression created by recent events.
Events modify impressions on all three EPA dimensions in complex ways that are described with non-linear equations obtained through empirical studies.
Here are two examples of impression-formation processes.
An actor who behaves disagreeably seems less good, especially if the object of the behavior is innocent and powerless, like a child.
A powerful person seems desperate when performing extremely forceful acts on another, and the object person may seem invincible.
A social action creates impressions of the actor, the object person, the behavior, and the setting.
=== Deflections ===
Deflections are the distances in the EPA space between transient and fundamental affective meanings. For example, a mother complimented by a stranger feels that the unknown individual is much nicer than a stranger is supposed to be, and a bit too potent and active as well – thus there is a moderate distance between the impression created and the mother's sentiment about strangers. High deflections in a situation produce an aura of unlikeliness or uncanniness. It is theorized that high deflections maintained over time generate psychological stress.
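In implementations of the theory, deflection is commonly computed as the sum of squared differences between fundamental and transient EPA values across the concepts in an event. The sketch below uses made-up EPA numbers for the stranger example above, not values from any published sentiment dictionary.

```python
def deflection(fundamentals, transients):
    """Deflection: squared Euclidean distance between fundamental sentiments
    and transient impressions, summed over EPA dimensions."""
    return sum((f - t) ** 2 for f, t in zip(fundamentals, transients))

# Hypothetical EPA profiles (evaluation, potency, activity) for a stranger:
# the fundamental sentiment vs. the impression after the stranger pays a compliment.
stranger_fundamental = [0.2, 0.1, 0.3]
stranger_transient = [1.5, 0.6, 0.8]   # seems much nicer, a bit more potent/active

d = deflection(stranger_fundamental, stranger_transient)  # a moderate deflection
```

With these illustrative numbers, most of the deflection comes from the evaluation dimension, matching the description that the stranger seems "much nicer" but only "a bit" more potent and active.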
The basic cybernetic idea of affect control theory can be stated in terms of deflections. An individual selects a behavior that produces the minimum deflections for concepts involved in the action. Minimization of deflections is described by equations derived with calculus from empirical impression-formation equations.
== Action ==
On entering a scene an individual defines the situation by assigning identities to each participant, frequently in accord with an encompassing social institution. While defining the situation, the individual tries to maintain the affective meaning of self through adoption of an identity whose sentiment serves as a surrogate for the individual's self-sentiment. The identities assembled in the definition of the situation determine the sentiments that the individual tries to maintain behaviorally.
Confirming sentiments associated with institutional identities – like doctor–patient, lawyer–client, or professor–student – creates institutionally relevant role behavior.
Confirming sentiments associated with negatively evaluated identities – like bully, glutton, loafer, or scatterbrain – generates deviant behavior.
Affect control theory's sentiment databases and mathematical model are combined in a computer simulation program for analyzing social interaction in various cultures.
== Emotions ==
According to affect control theory, an event generates emotions for the individuals involved in the event by changing impressions of the individuals. The emotion is a function of the impression created of the individual and of the difference between that impression and the sentiment attached to the individual's identity. Thus, for example, an event that creates a negative impression of an individual generates unpleasant emotion for that person, and the unpleasantness is worse if the individual believes she has a highly valued identity. Similarly, an event creating a positive impression generates a pleasant emotion, all the more pleasant if the individual believes he has a disvalued identity in the situation.
Non-linear equations describing how transients and fundamentals combine to produce emotions have been derived in empirical studies. Affect control theory's computer simulation program uses these equations to predict emotions that arise in social interaction, and displays the predictions via facial expressions that are computer drawn, as well as in terms of emotion words.
Based on cybernetic studies by Pavloski and Goldstein, that utilise perceptual control theory, Heise hypothesizes that emotion is distinct from stress. For example, a parent enjoying intensely pleasant emotions while interacting with an offspring suffers no stress. A homeowner attending to a sponging house guest may feel no emotion and yet be experiencing substantial stress.
== Interpretations ==
Others' behaviors are interpreted so as to minimize the deflections they cause. For example, a man turning away from another and exiting through a doorway could be engaged in several different actions, like departing from, deserting, or escaping from the other. Observers choose among the alternatives so as to minimize deflections associated with their definitions of the situation. Observers who assigned different identities to the observed individuals could have different interpretations of the behavior.
Re-definition of the situation may follow an event that causes large deflections which cannot be resolved by reinterpreting the behavior. In this case, observers assign new identities that are confirmed by the behavior. For example, seeing a father slap a son, one might re-define the father as an abusive parent, or perhaps as a strict disciplinarian; or one might re-define the son as an arrogant brat. Affect control theory's computer program predicts the plausible re-identifications, thereby providing a formal model for labeling theory.
The sentiment associated with an identity can change to befit the kinds of events in which that identity is involved, when situations keep arising where the identity is deflected in the same way, especially when identities are informal and non-institutionalized.
== Applications ==
Affect control theory has been used in research on emotions, gender, social structure, politics, deviance and law, the arts, and business. The theory was initially examined through quantitative methods, using mathematics to analyze data and interpret findings. More recent applications have explored it through qualitative research methods, obtaining data through interviews, observations, and questionnaires. For example, qualitative work has interviewed the family, friends, and loved ones of individuals who were murdered, examining how the idea of forgiveness changes based on their interpretation of the situation. Computer programs have also been an important part of understanding affect control theory, beginning with "Interact," a program designed to create social situations with the user in order to understand how an individual will react based on what is happening within the moment. "Interact" has been an essential research tool for understanding social interaction and the maintenance of affect between individuals. A bibliography of research studies in these areas is provided by David R. Heise and at the research program's website.
== Extensions ==
A probabilistic and decision theoretic extension of affect control theory generalizes the original theory in order to allow for uncertainty about identities, changing identities, and explicit non-affective goals.
== See also ==
Affect display
== References ==
== Further reading ==
Averett, Christine; Heise, David (1987), "Modified social identities: Amalgamations, attributions, and emotions", Journal of Mathematical Sociology, vol. 13, no. 1–2, pp. 103–132, doi:10.1080/0022250X.1987.9990028.
Britt, Lory; Heise, David (1992), "Impressions of self-directed action", Social Psychology Quarterly, vol. 55, no. 4, American Sociological Association, pp. 335–350, doi:10.2307/2786951, JSTOR 2786951.
Goldstein, David (1989), "Control theory applied to stress management", in Hershberger, Wayne (ed.), Volitional Action: Conation and Control, New York: Elsevier, pp. 481–492.
Gollob, Harry (1968), "Impression formation and word combination in sentences", Journal of Personality and Social Psychology, vol. 10, no. 4, pp. 341–53, doi:10.1037/h0026822.
Gollob, Harry; Rossman, B. B. (1973), "Judgments of an actor's 'Power and ability to influence others'", Journal of Experimental Social Psychology, vol. 9, no. 5, pp. 391–406, doi:10.1016/S0022-1031(73)80004-6.
Heise, David (1979), Understanding Events: Affect and the Construction of Social Action, New York: Cambridge University Press.
Heise, David (1997). Interact On-Line (Java applet).
Heise, David (2001), "Project Magellan: Collecting Cross-Cultural Affective Meanings Via the Internet", Electronic Journal of Sociology.
Heise, David (2004), "Enculturating agents with expressive role behavior", in Payr, Sabine; Trappl, Robert (eds.), Agent Culture: Human-Agent Interaction in a Multicultural World, Mahwah, NJ: Lawrence Erlbaum, pp. 127–142.
Heise, David (2006), "Sentiment formation in social interaction", in McClelland, Kent; Fararo, Thomas (eds.), Purpose, Meaning, and Action : Control Systems Theories in Sociology, pp. 189–211.
Heise, David (2007), Expressive Order: Confirming Sentiments in Social Actions, New York: Springer.
Heise, David; MacKinnon, Neil (1987), "Affective bases of likelihood perception", Journal of Mathematical Sociology, vol. 13, no. 1–2, pp. 133–151, doi:10.1080/0022250X.1987.9990029.
Heise, David; Thomas, Lisa (1989), "Predicting impressions created by combinations of emotion and social identity", Social Psychology Quarterly, vol. 52, no. 2, American Sociological Association, pp. 141–148, doi:10.2307/2786913, JSTOR 2786913.
Heise, David; Weir, Brian (1999), "A test of symbolic interactionist predictions about emotions in imagined situations", Symbolic Interaction, vol. 22, pp. 129–161.
Hoey, Jesse; Alhothali, Areej; Schroeder, Tobias (2013). "Bayesian Affect Control Theory". Proceedings of Affective Computing and Intelligent Interaction. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction. pp. 166–172. CiteSeerX 10.1.1.674.1851. doi:10.1109/ACII.2013.34. ISBN 978-0-7695-5048-0. S2CID 2839953..
Lively, Kathryn; Heise, David (2004), "Sociological Realms of Emotional Experience", American Journal of Sociology, vol. 109, no. 5, pp. 1109–36, CiteSeerX 10.1.1.378.464, doi:10.1086/381915, S2CID 145687147.
MacKinnon, Neil (1994), Symbolic Interactionism as Affect Control, Albany, NY: State University of New York Press.
Nelson, Steven (2006), "Redefining a bizarre situation: Relative concept stability in affect control theory", Social Psychology Quarterly, vol. 69, no. 3, pp. 215–234, doi:10.1177/019027250606900301, S2CID 145192242.
Osgood, Charles; May, W. H.; Miron, M. S. (1975), Cross-Cultural Universals of Affective Meaning, Urbana: University of Illinois Press.
Pavloski, Raymond (1989), "The physiological stress of thwarted intentions", in Hershberger, Wayne (ed.), Volitional Action: Conation and Control, New York: Elsevier, pp. 215–232.
Schneider, Andreas; Heise, David (1995), "Simulating Symbolic Interaction", Journal of Mathematical Sociology, vol. 20, no. 2, pp. 271–287, doi:10.1080/0022250X.1995.9990165.
Schröder, Tobias (2011), "A model of impression-formation and attribution among Germans", Journal of Language and Social Psychology, vol. 30, pp. 82–102, doi:10.1177/0261927X10387103, S2CID 145128817.
Smith-Lovin, Lynn; Heise, David (1988), Analyzing Social Interaction: Advances in Affect Control Theory, New York: Gordon and Breach. [This is a reprint of the Journal of Mathematical Sociology, Volume 13 (1-2), and it contains cited articles by Averett & Heise and Heise & MacKinnon.]
Smith, Herman; Matsuno, Takanori; Umino, Michio (1994), "How similar are impression-formation processes among Japanese and Americans?", Social Psychology Quarterly, vol. 57, no. 2, American Sociological Association, pp. 124–139, doi:10.2307/2786706, JSTOR 2786706.
Smith, Herman; Matsuno, Takanori; Ike, Shuuichirou (2001), "The affective basis of attributional processes among Japanese and Americans", Social Psychology Quarterly, vol. 64, no. 2, American Sociological Association, pp. 180–194, doi:10.2307/3090132, JSTOR 3090132.
Smith, Herman; Francis, Linda (2005), "Social versus self-directed events among Japanese and Americans: Self-actualization, emotions, moods, and trait disposition labeling", Social Forces, vol. 84, no. 2, pp. 821–830, doi:10.1353/sof.2006.0035, S2CID 143653483.
== External links ==
Affect Control Theory (copy of original site)
new ACT site
BayesACT webpage (with further links to current research on ACT)
Systems thinking is a way of making sense of the complexity of the world by looking at it in terms of wholes and relationships rather than by splitting it down into its parts. It has been used as a way of exploring and developing effective action in complex contexts, enabling systems change. Systems thinking draws on and contributes to systems theory and the system sciences.
== History ==
=== Ptolemaic system versus the Copernican system ===
The term system is polysemic: Robert Hooke (1674) used it in multiple senses in his System of the World, but also in the sense of the Ptolemaic system versus the Copernican system of the relation of the planets to the fixed stars cataloged in Hipparchus' and Ptolemy's star catalog. Hooke's claim was answered in magisterial detail by Newton's (1687) Philosophiæ Naturalis Principia Mathematica, Book Three, The System of the World (that is, the system of the world is a physical system).
Newton's approach, using dynamical systems, continues to this day. In brief, Newton's equations (a system of equations) have methods for their solution.
=== Feedback control systems ===
By 1824, the Carnot cycle presented an engineering challenge, which was how to maintain the operating temperatures of the hot and cold working fluids of the physical plant. In 1868, James Clerk Maxwell presented a framework for, and a limited solution to, the problem of controlling the rotational speed of a physical plant. Maxwell's solution echoed James Watt's (1784) centrifugal moderator (denoted as element Q) for maintaining (but not enforcing) the constant speed of a physical plant (that is, Q represents a moderator, but not a governor, by Maxwell's definition).
Maxwell's approach, which linearized the equations of motion of the system, produced a tractable method of solution. Norbert Wiener identified this approach as an influence on his studies of cybernetics during World War II, and Wiener even proposed treating some subsystems under investigation as black boxes. Methods for solution of the systems of equations then become the subject of study, as in feedback control systems, in stability theory, in constraint satisfaction problems, the unification algorithm, type inference, and so forth.
=== Applications ===
"So, how do we change the structure of systems to produce more of what we want and less of that which is undesirable? ... MIT's Jay Forrester likes to say that the average manager can ... guess with great accuracy where to look for leverage points—places in the system where a small change could lead to a large shift in behavior." — Donella Meadows (2008), Thinking in Systems: A Primer, p. 145
== Characteristics ==
...What is a system? A system is a set of things ... interconnected in such a way that they produce their own pattern of behavior over time. ... But the system's response to these forces is characteristic of itself, and that response is seldom simple in the real world.
[a system] is "an integrated whole even though composed of diverse, interacting, specialized structures and subfunctions"
Subsystems serve as parts of a larger system, but each comprises a system in its own right. Each can frequently be described reductively, with properties obeying its own laws, as in Newton's System of the World, in which entire planets, stars, and their satellites can be treated mathematically as dynamical systems, as demonstrated by Johannes Kepler's equation (1619) for the orbit of Mars before Newton's Principia appeared in 1687.
Black boxes are subsystems whose operation can be characterized by their inputs and outputs, without regard to further detail.
=== Particular systems ===
Political systems were recognized as early as the millennia before the common era.
Biological systems were recognized in Aristotle's lagoon ca. 350 BCE.
Economic systems were recognized by 1776.
Social systems were recognized by the 19th and 20th centuries of the common era.
Radar systems were developed in World War II in subsystem fashion; they were made up of transmitter, receiver, power supply, and signal processing subsystems, to defend against airborne attacks.
Dynamical systems of ordinary differential equations were shown to exhibit stable behavior given a suitable Lyapunov control function by Aleksandr Lyapunov in 1892.
Thermodynamic systems were treated as early as the eighteenth century, in which it was discovered that heat could be created without limit, but that for closed systems, laws of thermodynamics could be formulated. Ilya Prigogine (1980) has identified situations in which systems far from equilibrium can exhibit stable behavior; once a Lyapunov function has been identified, future and past can be distinguished, and scientific activity can begin.
=== Systems far from equilibrium ===
Living systems are resilient, and are far from equilibrium. Homeostasis is the analog of equilibrium for a living system; the concept was described in 1849, and the term was coined in 1926.
Resilient systems are self-organizing.
The scope of functional controls is hierarchical in a resilient system.
== Frameworks and methodologies ==
Frameworks and methodologies for systems thinking include:
Critical systems heuristics: in particular, there can be twelve boundary categories for the systems when organizing one's thinking and actions.
Critical systems thinking, including the E P I C approach.
DSRP, a framework for systems thinking that attempts to generalise all other approaches.
Ontology engineering of representation, formal naming and definition of categories, and the properties and the relations between concepts, data, and entities.
Soft systems methodology, including the CATWOE approach and rich pictures.
Systemic design, for example using the double diamond approach.
System dynamics of stocks, flows, and internal feedback loops.
Viable system model: uses 5 subsystems.
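As an illustration of the system dynamics entry above, here is a minimal stock-and-flow model with one internal (balancing) feedback loop. The bathtub-style example and its parameter values are illustrative, not drawn from any particular system dynamics text.

```python
def simulate_stock(initial, inflow, outflow_rate, steps, dt=1.0):
    """One stock with a constant inflow and an outflow proportional to the
    stock itself: the outflow term is a balancing feedback loop."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        outflow = outflow_rate * stock      # feedback: drains faster as stock grows
        stock += (inflow - outflow) * dt    # net flow accumulates into the stock
        history.append(stock)
    return history

# Bathtub example: the stock rises toward the equilibrium inflow / outflow_rate.
levels = simulate_stock(initial=0.0, inflow=10.0, outflow_rate=0.1, steps=100)
```

The stock approaches 10.0 / 0.1 = 100 regardless of its starting level; that goal-seeking behavior is the characteristic signature of a balancing feedback loop.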
== See also ==
Biogeochemistry – Study of chemical cycles of the earth that are either driven by or influence biological activity
Conceptual systems – System composed of non-physical objects, i.e. ideas or concepts
Management cybernetics – Application of cybernetics to management and organizations
Operations research – Discipline concerning the application of advanced analytical methods
Systems engineering – Interdisciplinary field of engineering
Industrial ecology – Study of matter and energy flow in industrial systems
== Notes ==
== References ==
=== Sources ===
Russell L. Ackoff (1968) "General Systems Theory and Systems Research Contrasting Conceptions of Systems Science." in: Views on a General Systems Theory: Proceedings from the Second System Symposium, Mihajlo D. Mesarovic (ed.).
A.C. Ehresmann, J.-P. Vanbremeersch (1987) "Hierarchical evolutive systems: A mathematical model for complex systems", Bulletin of Mathematical Biology, Volume 49, Issue 1, pp. 13–50
NJTA Kramer & J de Smit (1977) Systems thinking: Concepts and Notions, Springer. 148 pages
A. H. Louie (November 1983) "Categorical system theory" Bulletin of Mathematical Biology volume 45, pages 1047–1072
DonellaMeadows.org Systems Thinking Resources
Gerald Midgley (ed.) (2002) Systems Thinking, SAGE Publications. 4 volume set: 1,492 pages
Robert Rosen. (1958) “The Representation of Biological Systems from the Standpoint of the Theory of Categories". Bull. math. Biophys. 20, 317–342.
Peter Senge (1990) The Fifth Discipline
Systemic design is an interdiscipline that integrates systems thinking and design practices. It is a pluralistic field, with several dialects including systems-oriented design. Influences have included critical systems thinking and second-order cybernetics. In 2021, the Design Council (UK) began advocating for a systemic design approach and embedded it in a revision of their double diamond model.
Systemic design is closely related to sustainability, as it aims to create solutions that are not only designed to have a positive environmental impact but are also socially and economically beneficial. From a systemic design approach, the system to be designed, its context with its relationships, and its environment receive simultaneous attention. Systemic design's discourse has been developed through Relating Systems Thinking and Design, a series of symposia held annually since 2012.
== History ==
=== 1960 to 1990 ===
Systems thinking in design has a long history with origins in the design methods movement during the 1960s and 1970s, such as the idea of wicked problems developed by Horst Rittel.
Complexity theories support the management of an entire system, and the associated design approaches support the planning of divergent elements. Complexity theories developed from the observation that living systems continually draw upon external sources of energy and maintain a stable state of low entropy, building on the general systems theory of Karl Ludwig von Bertalanffy (1968). Later rationales applied these theories to artificial systems as well: complexity models of living systems also address productive models, with their organizations and management, where the relationships between parts are more important than the parts themselves.
=== 1990 to 2010 ===
Treating productive organizations as complex adaptive systems allows for new management models that address economic, social and environmental benefits (Pisek and Wilson, 2001). In that field, cluster theory (Porter, 1990) evolved into more environmentally sensitive theories, like industrial ecology (Frosh and Gallopoulos, 1989) and industrial symbiosis (Chertow, 2000). Design thinking offers a way to creatively and strategically reconfigure a design concept in a situation with systemic integration (Buchanan, 1992).
In 1994, Gunter Pauli and Heitor Gurgulino de Souza founded the research institute Zero Emission Research and Initiatives (ZERI), starting from the idea that progress should embed respect for the environment and natural techniques that will allow production processes to be part of the ecosystem.
Strong interdisciplinary and transdisciplinary approaches are critical during the design phase (Fuller, 1981), with the increasing involvement of different disciplines, including urban planning, public policy, business management and environmental sciences (Chertow et al., 2004). As an interdiscipline, systemic design joins systems thinking and design methodology to support human-centred and systems-oriented design academe and practice (Bistagnino, 2011; Sevaldson, 2011; Nelson and Stolterman, 2012; Jones, 2014; Toso et al., 2012).
=== 2010 to present ===
Numerous design projects demonstrate systemic design in their approach, including diverse topics involving food networks, industrial processes and water purification, revitalization of internal areas through art and tourism, circular economy, exhibition and fairs, social inclusion, and marginalization.
Since 2014 several scholarly journals have acknowledged systemic design with special publications, and in 2022, the Systemic Design Association launched “Contexts—The Journal of Systemic Design.” The proceedings repository, Relating Systems Thinking and Design, exceeded 1000 articles in 2023.
== Relating Systems Thinking and Design (RSD) ==
Since 2012, host organisations have held an annual symposium dedicated to systemic design, Relating Systems Thinking and Design (RSD). Proceedings are available via the searchable repository on RSDsymposium.org.
== Research groups and innovation labs ==
Academic research groups with a focus on systemic design include:
Communication, Culture & Technology lab at Georgetown University, Washington DC, hosts of RSD12 in 2023.
Policy Lab is part of the UK Civil Service, with a mission "to radically improve policy making through design, innovation and people-centred approaches".
Radical Methodologies Research Group at the University of Brighton, Brighton, UK, hosts of RSD11 in 2022.
Relating Systems Thinking and Design, a searchable repository of articles from the proceedings of the annual symposia.
Strategic Innovation Lab (sLab) at OCADU, Toronto, Canada.
Sys—Systemic Design Lab at the Politecnico di Torino, Torino, Italy.
Systemic Design and Sustainability Research Group at Oslo Metropolitan University.
Systemic Design Association, the international membership organisation.
Systems Engineering Design research group at Chalmers University of Technology, Gothenburg, Sweden.
== Academic programmes ==
Academic programmes in systemic design include:
Systems oriented design is an example of a systemic design approach being used at the Oslo School of Architecture and Design.
Politecnico di Torino: Master of Science in Systemic Design.
The Strategic Foresight and innovation master program at OCAD University Toronto.
National Institute of Design (NID) India. Systems Thinking and Design is part of the academic programme at NID.
At the University of Montreal, the Master's degree in Applied Science in Design, Design and Complexity (DESCO).
The Kunsthochschule Kassel, in Kassel (Germany), offered the "Systemdesign" degree in the Product Design programme.
== References ==
Systems ecology is an interdisciplinary field of ecology, a subset of Earth system science, that takes a holistic approach to the study of ecological systems, especially ecosystems. Systems ecology can be seen as an application of general systems theory to ecology. Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
== Overview ==
Systems ecology seeks a holistic view of the interactions and transactions within and between biological and ecological systems. Systems ecologists realise that the function of any ecosystem can be influenced by human economics in fundamental ways. They have therefore taken an additional transdisciplinary step by including economics in the consideration of ecological-economic systems. In the words of R.L. Kitching:
Systems ecology can be defined as the approach to the study of ecology of organisms using the techniques and philosophy of systems analysis: that is, the methods and tools developed, largely in engineering, for studying, characterizing and making predictions about complex entities, that is, systems.
In any study of an ecological system, an essential early procedure is to draw a diagram of the system of interest ... diagrams indicate the system's boundaries by a solid line. Within these boundaries, series of components are isolated which have been chosen to represent that portion of the world in which the systems analyst is interested ... If there are no connections across the systems' boundaries with the surrounding systems environments, the systems are described as closed. Ecological work, however, deals almost exclusively with open systems.
As a mode of scientific enquiry, a central feature of Systems Ecology is the general application of the principles of energetics to all systems at any scale. Perhaps the most notable proponent of this view was Howard T. Odum, sometimes considered the father of ecosystems ecology. In this approach the principles of energetics constitute ecosystem principles. Reasoning by formal analogy from one system to another enables the Systems Ecologist to see principles functioning in an analogous manner across system-scale boundaries. H.T. Odum commonly used the Energy Systems Language as a tool for making systems diagrams and flow charts.
The fourth of these principles, the principle of maximum power efficiency, takes central place in the analysis and synthesis of ecological systems. The fourth principle suggests that the most evolutionarily advantageous system function occurs when the environmental load matches the internal resistance of the system. The further the environmental load is from matching the internal resistance, the further the system is away from its sustainable steady state. Therefore, the systems ecologist engages in a task of resistance and impedance matching in ecological engineering, just as the electronic engineer would do.
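The load-matching claim borrows directly from the electrical maximum power transfer theorem. The sketch below (illustrative values, not ecological data) computes the power delivered to a range of loads and confirms that the peak occurs where the load matches the internal resistance:

```python
# Electrical analogy for the matching principle: for a source of voltage V
# with internal resistance R_int, power delivered to a load R_load is
# P = V^2 * R_load / (R_int + R_load)^2, which peaks when R_load == R_int.
# All numbers here are illustrative.

def delivered_power(v: float, r_int: float, r_load: float) -> float:
    return v ** 2 * r_load / (r_int + r_load) ** 2

V, R_INT = 12.0, 50.0
loads = [10.0, 25.0, 50.0, 100.0, 200.0]
best = max(loads, key=lambda r: delivered_power(V, R_INT, r))
print(best)  # 50.0 — the matched load delivers the most power
```

The further a candidate load is from 50.0 in either direction, the less power it receives, mirroring the claim that mismatched systems sit further from their sustainable steady state.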
== Closely related fields ==
=== Earth systems engineering and management ===
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion."
=== Ecological economics ===
Ecological economics is a transdisciplinary field of academic research that addresses the dynamic and spatial interdependence between human economies and natural ecosystems. Ecological economics brings together and connects different disciplines, within the natural and social sciences but especially between these broad areas. As the name suggests, the field is made up of researchers with a background in economics and ecology. An important motivation for the emergence of ecological economics has been criticism on the assumptions and approaches of traditional (mainstream) environmental and resource economics.
=== Ecological energetics ===
Ecological energetics is the quantitative study of the flow of energy through ecological systems. It aims to uncover the principles which describe the propensity of such energy flows through the trophic, or 'energy availing' levels of ecological networks. In systems ecology the principles of ecosystem energy flows or "ecosystem laws" (i.e. principles of ecological energetics) are considered formally analogous to the principles of energetics.
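The propensity of energy flows through trophic levels can be sketched by assuming a fixed transfer efficiency between levels; the 10% figure is a common textbook rule of thumb, used here purely for illustration:

```python
# Energy flow across trophic levels under an assumed, constant transfer
# efficiency. Real ecosystems vary; this is an illustrative sketch only.

def trophic_flows(primary_production: float, efficiency: float, levels: int):
    """Return the energy available at each successive trophic level."""
    flows = [primary_production]
    for _ in range(levels - 1):
        flows.append(flows[-1] * efficiency)
    return flows

# 10,000 kcal of primary production through four levels at 10% efficiency:
print(trophic_flows(10_000.0, 0.10, 4))  # approximately [10000, 1000, 100, 10]
```

The steep geometric decay is why ecological networks support far fewer top predators than primary producers.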
=== Ecological humanities ===
Ecological humanities aims to bridge the divides between the sciences and the humanities, and between Western, Eastern and Indigenous ways of knowing nature. Like ecocentric political theory, the ecological humanities are characterised by a connectivity ontology and a commitment to two fundamental axioms relating to the need to submit to ecological laws and to see humanity as part of a larger living system.
=== Ecosystem ecology ===
Ecosystem ecology is the integrated study of biotic and abiotic components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals. Ecosystem ecology examines physical and biological structure and examines how these ecosystem characteristics interact.
The relationship between systems ecology and ecosystem ecology is complex. Much of systems ecology can be considered a subset of ecosystem ecology. Ecosystem ecology also utilizes methods that have little to do with the holistic approach of systems ecology. However, systems ecology more actively considers external influences such as economics that usually fall outside the bounds of ecosystem ecology. Whereas ecosystem ecology can be defined as the scientific study of ecosystems, systems ecology is more of a particular approach to the study of ecological systems and phenomena that interact with these systems.
=== Industrial ecology ===
Industrial ecology is the study of the shifting of industrial processes from linear (open loop) systems, in which resource and capital investments move through the system to become waste, to closed loop systems where wastes become inputs for new processes.
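The contrast between the two loop types can be sketched with simple material bookkeeping; the demand, cycle count, and recycling fraction below are illustrative assumptions:

```python
# Open-loop vs. closed-loop material flow: in the closed loop, a fraction
# of each cycle's waste re-enters the process as input, reducing the
# virgin material required. All numbers are illustrative.

def total_virgin_input(demand: float, cycles: int, recycle_fraction: float) -> float:
    """Virgin material needed to meet the same demand each cycle."""
    virgin, recovered = 0.0, 0.0
    for _ in range(cycles):
        virgin += demand - recovered           # top up with virgin material
        recovered = demand * recycle_fraction  # waste recaptured as input
    return virgin

print(total_virgin_input(100.0, 3, 0.0))  # open loop: 300.0
print(total_virgin_input(100.0, 3, 0.6))  # closed loop: 100 + 40 + 40 = 180.0
```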
== See also ==
== References ==
== Bibliography ==
== External links ==
Organisations
Systems Ecology Department at the Stockholm University.
Systems Ecology Department at the University of Amsterdam.
Systems ecology Lab at SUNY-ESF.
Systems Ecology program at the University of Florida
Systems Ecology program at the University of Montana
Terrestrial Systems Ecology of ETH Zürich.
Ecological systems theory is a broad term used to capture the theoretical contributions of developmental psychologist Urie Bronfenbrenner. Bronfenbrenner developed the foundations of the theory throughout his career, published a major statement of the theory in American Psychologist, articulated it in a series of propositions and hypotheses in his most cited book, The Ecology of Human Development, and further developed it in The Bioecological Model of Human Development and later writings. A primary contribution of ecological systems theory was to systemically examine contextual variability in development processes. As the theory evolved, it placed increasing emphasis on the role of the developing person as an active agent in development and on understanding developmental process rather than "social addresses" (e.g., gender, ethnicity) as explanatory mechanisms.
== Overview ==
Ecological systems theory describes a scientific approach to studying lifespan development that emphasizes the interrelationship of different developmental processes (e.g., cognitive, social, biological). It is characterized by its emphasis on naturalistic and quasi-experimental studies, although several important studies using this framework use experimental methodology. Although developmental processes are thought to be universal, they are thought to (a) show contextual variability in their likelihood of occurring, (b) occur in different constellations in different settings and (c) affect different people differently. Because of this variability, scientists working within this framework use individual and contextual variability to provide insight into these universal processes.
The foundations of ecological systems theory can be seen throughout Bronfenbrenner's career. For example, in the 1950s he analyzed historical and social class variations in parenting practices, in the 1960s he wrote an analysis of gender differences focusing on the different cultural meanings of the same parenting practices for boys and girls, and in the 1970s he compared childrearing in the US and USSR, focusing on how cultural differences in the concordance of values across social institutions change parent influences.
The formal development of ecological systems theory occurred in three major stages. A major statement of the theory was published in American Psychologist. Bronfenbrenner critiqued then-current methods of studying children in laboratories as providing a limited window on development, calling it "the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time" (p. 513) and calling for more "ecologically valid" studies of developing individuals in their natural environment. For example, he argued that laboratory studies of children provided insight into their behavior in an unfamiliar ("strange") setting that had limited generalizability to their behavior in more familiar environments, such as home or school. The Ecology of Human Development articulated a series of definitions, propositions and hypotheses that could be used to study human development. This work categorized developmental processes, beginning with genetic and personal characteristics, through proximal influences that the developing person interacted with directly (e.g., social relationships), to influences such as parents' work, government policies or cultural value systems that affected them indirectly. As the theory evolved, it placed increasing emphasis on the role of the developing person as an active agent in development and on understanding developmental process rather than "social addresses" (e.g., gender, ethnicity) as explanatory mechanisms. The final form of the theory, developed in conjunction with Stephen Ceci, was called the Bioecological Model of Human Development and addresses critiques that previous statements of the theory under-emphasized individual differences and efficacy. Developmental processes were conceived of as co-occurring in niches that were lawfully defined and reinforcing.
Because of this, Bronfenbrenner was a strong proponent of using social policy interventions as both a way of using science to improve child well-being and as an important scientific tool. Early examples of the application of ecological systems theory are evident in Head Start.
== The five systems ==
Microsystem: Refers to the institutions and groups that most immediately and directly impact the child's development including: family, school, siblings, neighborhood, and peers.
Mesosystem: Consists of interconnections between the microsystems, for example between the family and teachers or between the child's peers and the family.
Exosystem: Involves links between social settings that do not involve the child. For example, a child's experience at home may be influenced by their parent's experiences at work. A parent might receive a promotion that requires more travel, which in turn increases conflict with the other parent resulting in changes in their patterns of interaction with the child.
Macrosystem: Describes the overarching culture that influences the developing child, as well as the microsystems and mesosystems embedded in those cultures. Cultural contexts can differ based on geographic location, socioeconomic status, poverty, and ethnicity. Members of a cultural group often share a common identity, heritage, and values. Macrosystems evolve across time and from generation to generation.
Chronosystem: Consists of the pattern of environmental events and transitions over the life course, as well as changing socio-historical circumstances. For example, researchers have found that the negative effects of divorce on children often peak in the first year after the divorce. By two years after the divorce, family interaction is less chaotic and more stable. An example of changing sociohistorical circumstances is the increase in opportunities for women to pursue a career during the last thirty years.
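The nesting of the five systems, from most immediate to most encompassing, can be sketched as a simple data structure; the example influences and the helper `containing_systems` are illustrative, not part of Bronfenbrenner's formal theory:

```python
# The five systems as an ordered mapping, innermost to outermost, with
# illustrative example influences drawn from the descriptions above.
ECOLOGICAL_SYSTEMS = [
    ("microsystem", ["family", "school", "peers"]),
    ("mesosystem", ["family-school interconnections"]),
    ("exosystem", ["parent's workplace"]),
    ("macrosystem", ["cultural values"]),
    ("chronosystem", ["life transitions", "socio-historical change"]),
]

def containing_systems(level: str):
    """Return the named level plus every broader system that encloses it."""
    names = [name for name, _ in ECOLOGICAL_SYSTEMS]
    return names[names.index(level):]

print(containing_systems("exosystem"))
# ['exosystem', 'macrosystem', 'chronosystem']
```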
Later work by Bronfenbrenner considered the role of biology in this model as well; thus the theory has sometimes been called the bioecological model.
Per this theoretical construction, each system contains roles, norms and rules which may shape psychological development. For example, an inner-city family faces many challenges which an affluent family in a gated community does not, and vice versa. The inner-city family is more likely to experience environmental hardships, like crime and squalor. On the other hand, the sheltered family is more likely to lack the nurturing support of extended family.
Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of his groundbreaking work in human ecology, these environments—from the family to economic and political structures—have come to be viewed as part of the life course from childhood through adulthood.
Bronfenbrenner has identified Soviet developmental psychologist Lev Vygotsky and German-born psychologist Kurt Lewin as important influences on his theory.
Bronfenbrenner's work provides one of the foundational elements of the ecological counseling perspective, as espoused by Robert K. Conyne, Ellen Cook, and the University of Cincinnati Counseling Program.
There are many different theories related to human development. Human ecology theory emphasizes environmental factors as central to development.
== See also ==
Bioecological model
Ecosystem
Ecosystem ecology
Systems ecology
Systems psychology
Theoretical ecology
Urie Bronfenbrenner
== References ==
The diagram of the ecosystemic model was created by Buehler (2000) as part of a dissertation on assessing interactions between a child, their family, and the school and medical systems.
== Further reading ==
Urie Bronfenbrenner. (2009). The Ecology of Human Development: Experiments by Nature and Design. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-22457-4
Dede Paquette & John Ryan. (2001). Bronfenbrenner’s Ecological Systems Theory
Woodside, Arch G.; Caldwell, Marylouise; Spurr, Ray (2006). "Advancing Ecological Systems Theory in Lifestyle, Leisure, and Travel Research". Journal of Travel Research. 44 (3): 259–272. doi:10.1177/0047287505282945. S2CID 154292561.
Marlowe E. Trance, Kerstin O. Flores. (2014). "Child and Adolescent Development". Vol. 32, no. 5. 9407.
Ecological Systems Review
The ecological framework facilitates organizing information about people and their environment in order to understand their interconnectedness. Individuals move through a series of life transitions, all of which necessitate environmental support and coping skills. Social problems involving health care, family relations, inadequate income, mental health difficulties, conflicts with law enforcement agencies, unemployment, educational difficulties, and so on can all be subsumed under the ecological model, which would enable practitioners to assess factors that are relevant to such problems (Hepworth, Rooney, Rooney, Strom-Gottfried, & Larsen, 2010, p. 16). Thus, examining the ecological contexts of parenting success of children with disabilities is particularly important. Utilizing Bronfenbrenner's (1977, 1979) ecological framework, this article explores parenting success factors at the micro- (i.e., parenting practice, parent-child relations), meso- (i.e., caregivers' marital relations, religious social support), and macro-system levels (i.e., cultural variations, racial and ethnic disparities, and health care delivery system) of practice.
In computer science, robustness is the ability of a computer system to cope with errors during execution and with erroneous input. Robustness can encompass many areas of computer science, such as robust programming, robust machine learning, and Robust Security Network. Formal techniques, such as fuzz testing, are essential to showing robustness since this type of testing involves invalid or unexpected inputs. Alternatively, fault injection can be used to test robustness. Various commercial products perform robustness testing of software.
== Introduction ==
In general, building robust systems that encompass every point of possible failure is difficult because of the vast quantity of possible inputs and input combinations. Since all inputs and input combinations would require too much time to test, developers cannot run through all cases exhaustively. Instead, the developer will try to generalize such cases. For example, imagine inputting some integer values. Some selected inputs might consist of a negative number, zero, and a positive number. When using these numbers to test software in this way, the developer generalizes the set of all integers into three representative values. This is a more efficient and manageable method, but more prone to failure. Generalizing test cases is one technique for dealing with failure, specifically failure due to invalid user input. Systems may also fail for other reasons, such as disconnecting from a network.
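The generalization described above corresponds to testing one representative input per equivalence class. A minimal sketch, where the function under test, `classify_sign`, is a hypothetical stand-in:

```python
# Generalizing integer inputs into representative classes: instead of
# testing every integer, test one negative value, zero, and one positive
# value. `classify_sign` is an illustrative stand-in for real software.

def classify_sign(n: int) -> str:
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# One representative per class instead of exhaustive input:
cases = {-7: "negative", 0: "zero", 42: "positive"}
for value, expected in cases.items():
    assert classify_sign(value) == expected
print("all representative cases pass")
```

The trade-off noted above is visible here: three representatives are manageable, but any bug triggered only by, say, the minimum representable integer would escape this suite.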
Regardless, complex systems should still handle any errors encountered gracefully. There are many examples of such successful systems. Some of the most robust systems are evolvable and can be easily adapted to new situations.
== Challenges ==
Programs and software are tools focused on a very specific task, and thus are not generalized and flexible. However, observations in systems such as the internet or biological systems demonstrate adaptation to their environments. One of the ways biological systems adapt to environments is through the use of redundancy. Many organs are redundant in humans. The kidney is one such example. Humans generally only need one kidney, but having a second kidney allows room for failure. This same principle may be taken to apply to software, but there are some challenges. When applying the principle of redundancy to computer science, blindly adding code is not suggested. Blindly adding code introduces more errors, makes the system more complex, and renders it harder to understand. Code that does not provide any reinforcement to the already existing code is unwanted. The new code must instead possess equivalent functionality, so that if a function is broken, another providing the same function can replace it, using manual or automated software diversity. To do so, the new code must know how and when to accommodate the failure point. This means more logic needs to be added to the system. But as a system adds more logic, components, and increases in size, it becomes more complex. Thus, when making a more redundant system, the system also becomes more complex and developers must consider balancing redundancy with complexity.
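The kidney analogy can be sketched as redundant, functionally equivalent implementations with the fallback logic the paragraph describes; the fault below is simulated for illustration:

```python
# Redundancy via functionally equivalent implementations: if the primary
# routine fails, a diverse backup providing the same function takes over.
# The failure here is simulated; a real system would detect real faults.

def primary_sum(xs):
    raise RuntimeError("simulated fault in the primary implementation")

def backup_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

def robust_sum(xs):
    for implementation in (primary_sum, backup_sum):
        try:
            return implementation(xs)
        except RuntimeError:
            continue  # fall through to the next redundant implementation
    raise RuntimeError("all redundant implementations failed")

print(robust_sum([1, 2, 3]))  # 6 — the backup silently takes over
```

Even this tiny example shows the complexity cost the paragraph warns about: the dispatch loop and its error handling are extra logic that exists only to manage redundancy.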
Currently, computer science practices do not focus on building robust systems. Rather, they tend to focus on scalability and efficiency. One of the main reasons why there is no focus on robustness today is because it is hard to do in a general way.
== Areas ==
=== Robust programming ===
Robust programming is a style of programming that focuses on handling unexpected termination and unexpected actions. It requires code to handle these terminations and actions gracefully by displaying accurate and unambiguous error messages. These error messages allow the user to more easily debug the program.
==== Principles ====
Paranoia
When building software, the programmer assumes users are out to break their code. The programmer also assumes that their own written code may fail or work incorrectly.
Stupidity
The programmer assumes users will try incorrect, bogus and malformed inputs. As a consequence, the programmer returns to the user an unambiguous, intuitive error message that does not require looking up error codes. The error message should try to be as accurate as possible without being misleading to the user, so that the problem can be fixed with ease.
Dangerous implements
Users should not gain access to libraries, data structures, or pointers to data structures. This information should be hidden from the user so that the user does not accidentally modify them and introduce a bug in the code. When such interfaces are correctly built, users use them without finding loopholes to modify the interface. The interface should already be correctly implemented, so the user does not need to make modifications. The user therefore focuses solely on their own code.
Can't happen
Very often, code is modified and may introduce a possibility that an "impossible" case occurs. Impossible cases are therefore assumed to be highly unlikely instead. The developer thinks about how to handle the case that is highly unlikely, and implements the handling accordingly.
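A minimal sketch combining the four principles above in one routine; the `Counter` class and its messages are illustrative, not a prescribed pattern:

```python
# Illustrative application of the principles: distrust input ("paranoia"/
# "stupidity"), hide internal state ("dangerous implements"), and guard
# the "can't happen" case explicitly.

class Counter:
    def __init__(self):
        self.__count = 0  # name-mangled: not reachable as plain .count

    def increment(self, step) -> int:
        # Paranoia/stupidity: validate before acting, and explain failures
        # in plain language rather than with an error code.
        if not isinstance(step, int):
            raise TypeError(f"step must be a whole number, got {step!r}")
        if step <= 0:
            raise ValueError(f"step must be positive, got {step}")
        self.__count += step
        # "Can't happen": the count should never be negative here, but
        # guard anyway in case later modifications break the invariant.
        if self.__count < 0:
            raise AssertionError("internal counter went negative")
        return self.__count

c = Counter()
print(c.increment(3))  # 3
```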
=== Robust machine learning ===
Robust machine learning typically refers to the robustness of machine learning algorithms. For a machine learning algorithm to be considered robust, either the testing error has to be consistent with the training error, or the performance has to be stable after adding some noise to the dataset. Recently, in step with their rise in popularity, there has been increasing interest in the robustness of neural networks, particularly due to their vulnerability to adversarial attacks.
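The noise-stability criterion can be sketched with a simple least-squares fit: perturb the data slightly, refit, and check that the error barely moves. All data and thresholds below are illustrative:

```python
# Stability under input noise as a robustness check: fit a line, add small
# noise to the targets, refit, and compare errors. A robust learner's
# performance should change little. All numbers are illustrative.
import random

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def mean_sq_error(xs, ys, slope, intercept):
    return sum((y - (slope * x + intercept)) ** 2
               for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
xs = [float(i) for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]

slope, intercept = fit_line(xs, ys)
clean_err = mean_sq_error(xs, ys, slope, intercept)

noisy_ys = [y + random.gauss(0, 0.1) for y in ys]  # small added noise
n_slope, n_intercept = fit_line(xs, noisy_ys)
noisy_err = mean_sq_error(xs, noisy_ys, n_slope, n_intercept)

print(abs(noisy_err - clean_err) < 0.5)  # True: error is stable under noise
```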
=== Robust network design ===
Robust network design is the study of network design in the face of variable or uncertain demands. In a sense, robustness in network design is broad just like robustness in software design because of the vast possibilities of changes or inputs.
=== Robust algorithms ===
There exist algorithms that tolerate errors in the input.
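One simple illustration of input-error tolerance is a robust estimator: the median shrugs off a grossly corrupted value that dominates the mean. The sensor readings below are invented for illustration:

```python
# Error-tolerant estimation: the median is insensitive to a few corrupted
# inputs, while the mean can be pulled arbitrarily far off by one of them.
from statistics import mean, median

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]
corrupted = readings + [1000.0]  # one grossly erroneous input

print(round(median(corrupted), 1))  # 10.0 — essentially unmoved
print(round(mean(corrupted), 1))    # 151.4 — dominated by the bad value
```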
== See also ==
Fault tolerance
Defensive programming
Non-functional requirement
== References == | Wikipedia/Robustness_(computer_science) |
Systems theory in anthropology is an interdisciplinary, non-representative, non-referential, and non-Cartesian approach that brings together natural and social sciences to understand society in its complexity. The basic idea of systems theory in social science is to solve the classical problem of duality: mind-body, subject-object, form-content, signifier-signified, and structure-agency. Systems theory suggests that instead of closing categories into binaries (subject-object), the system should stay open so as to allow free flow of process and interactions. In this way the binaries are dissolved.
Complex systems in nature involve a dynamic interaction of many variables (e.g. animals, plants, insects and bacteria; predators and prey; climate, the seasons and the weather, etc.) These interactions can adapt to changing conditions but maintain a balance both between the various parts and as a whole; this balance is maintained through homeostasis. Human societies are also complex systems. Work to define complex systems scientifically arose first in math in the late 19th century, and was later applied to biology in the 1920s to explain ecosystems, then later to social sciences.
Anthropologist Gregory Bateson is the earliest and most influential propagator of systems theory in the social sciences. In the 1940s, as a result of the Macy conferences, he immediately recognized its application to human societies with their many variables and the flexible but sustainable balance that they maintain. Bateson describes a system as "any unit containing feedback structure and therefore competent to process information." Thus an open system allows interaction between concepts and materiality, or subject and environment, or abstract and real. In the natural sciences, systems theory has been a widely used approach. Austrian biologist Karl Ludwig von Bertalanffy developed the idea of general systems theory (GST). GST is a multidisciplinary approach to systems analysis.
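Bateson's definition of a system as a feedback-processing unit can be sketched as a negative-feedback loop that returns a perturbed state toward its set point; the parameters below are illustrative:

```python
# A minimal negative-feedback loop: the unit senses the deviation between
# its state and a set point and feeds a correction back in, holding the
# state near equilibrium. Parameters are illustrative, not from any model
# in the literature.

def regulate(state: float, set_point: float, gain: float, steps: int) -> float:
    for _ in range(steps):
        error = set_point - state  # information about the deviation
        state += gain * error      # correction fed back into the unit
    return state

# A perturbed system returns toward its set point:
print(round(regulate(state=35.0, set_point=37.0, gain=0.5, steps=10), 3))
# 36.998
```

The same loop damps perturbations in either direction, which is the homeostatic balance the article attributes to complex systems.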
== Main concepts in systems theory ==
=== Non-representational and non-referential ===
One of the central elements of systems theory is the move away from a representational system to the non-representation of things. This means that instead of imposing mental concepts onto objects, which reduce the complexity of a materiality by limiting its variations or malleability, one should trace the network of things. According to Gregory Bateson, "ethos, eidos, sociology, economics, cultural structure, social structure, and all the rest of these words refer only to scientists' ways of putting the jigsaw puzzle." Tracing, rather than projecting mental images, brings into sight the material reality that has been obscured under universalizing concepts.
=== Non-Cartesian ===
Since the European Enlightenment, Western philosophy has placed the individual, as an indispensable category, at the center of the universe. René Descartes' famous aphorism, 'I think therefore I am', posits the person as a rational subject whose capacity for thinking brings the human into existence. The Cartesian subject, therefore, is a scientific individual who imposes mental concepts on things in order to control nature, or simply what exists outside his mind. This subject-centered view has reduced the complex nature of the universe. One of the biggest challenges for systems theory is thus to displace or de-center the Cartesian subject as the center of the universe and as a rational being. The idea is not to make human beings a supreme entity but rather to situate them like any other being in the universe. Humans are not thinking Cartesian subjects; they dwell alongside nature. This brings the human back to its original place and introduces nature into the equation. Systems theory, therefore, encourages a non-unitary subject in opposition to the Cartesian subject.
=== Complexity ===
Once the Cartesian individual is dissolved, the social sciences will move away from a subject-centered view of the world. The challenge is then how to non-represent empirical reality without reducing the complexity of a system. To put it simply: instead of representing things ourselves, let the things speak through us. These questions led materialist philosophers such as Deleuze and Guattari to develop a "science" for understanding reality without imposing our mental projections. The approach they encourage is tracing rather than projecting conceptual ideas. Tracing requires one to connect disparate assemblages or appendages not into a unified center but rather into a rhizome, or open system.
=== Open system and closed system ===
Ludwig von Bertalanffy describes two types of systems: open systems and closed systems. Open systems are systems that allow interactions between their internal elements and the environment. An open system is defined as a "system in exchange of matter with its environment, presenting import and export, building-up and breaking-down of its material components" – a living organism, for example. Closed systems, on the other hand, are considered to be isolated from their environment; classical thermodynamics, for instance, applies to closed systems.
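The distinction can be sketched as simple compartment bookkeeping; the exchange rates below are illustrative assumptions:

```python
# Open vs. closed systems in Bertalanffy's sense: a closed system only
# redistributes matter internally, while an open system also imports from
# and exports to its environment. All rates are illustrative.

def step(internal, environment, import_rate=0.0, export_rate=0.0):
    moved_in = environment * import_rate
    moved_out = internal * export_rate
    return internal + moved_in - moved_out, environment - moved_in + moved_out

# Closed: no exchange across the boundary, so the internal total is fixed.
closed, env = step(100.0, 1000.0)
print(closed)  # 100.0

# Open: matter crosses the boundary in both directions.
open_sys, env = step(100.0, 1000.0, import_rate=0.05, export_rate=0.2)
print(open_sys)  # 100 + 50 - 20 = 130.0
```

Note that the combined total of system plus environment is conserved in both cases; only the boundary-crossing flows distinguish the open system.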
== Tracing "systems theory" in anthropology ==
=== Marx–Weber debates ===
Although the term 'system theory' is never mentioned in the work of Karl Marx and Max Weber, the fundamental idea behind systems theory does penetrate deeply into their understanding of social reality. One can easily see the challenges that both Marx and Weber faced in their work. Breaking away from Hegelian speculative philosophy, Marx developed a social theory based on historical materialism, arguing that it is not consciousness that determines being, but in fact, it is social being that determines consciousness. More specifically, it is human beings' social activity, labor, that causes, shapes, and informs human thinking. Based on labor, Marx develops his entire social theory that specifically questions reified, bourgeois capitalism. Labor, class conflict, commodity, value, surplus-value, bourgeoisie, and proletariat are thus central concepts in Marxian social theory. In contrast to the Cartesian "pure and rational subjectivity," Marx introduced social activity as the force that produces rationality. He was interested in finding sophisticated, scientific universal laws of society, though contrary to positivist mechanistic approaches which take facts as given and then develop causal relationships out of them.
Max Weber found Marxist ideas useful, but limited in explaining complex societal practices and activities. Drawing on the hermeneutic tradition, Weber introduced multiple rationalities into the modern schema of thinking and used an interpretive approach to understanding the meaning of a phenomenon placed in webs of significance. Contrary to Marx, who was searching for universal laws of society, Weber attempts an interpretive understanding of social action in order to arrive at a "causal explanation of its course and effects." Here the word course signifies Weber's non-deterministic approach to a phenomenon. Social actions have subjective meanings that should be understood in their given context. Weber's interpretive approach to understanding the meaning of an action in relation to its environment delineated a contextualized social framework for cultural relativism.
Since we exist in webs of significance and objective analysis would detach us from the concrete reality of which we are all part, Weber suggested ideal-types: analytical and conceptual constructs "formed by the accentuation of one or more points of view and by the synthesis of a great many diffuse, discrete, more or less present, and occasionally absent concrete individual phenomena, which are arranged according to those one-sidedly emphasized viewpoints into a unified analytical construct." Although they are analytical concepts, they serve as reference points in interpreting the meaning of society's heterogeneous and polymorphous activities. In other words, ideal-types are simplified and typified empirical reality, but they are not reality in themselves. Bureaucracy, authority, religion, etc. are all ideal-types, according to Weber, and do not exist in the real world. They assist social scientists in selecting culturally significant elements of a larger whole which can be contrasted with each other to demonstrate their interrelationship, patterns of formation, and similar societal functions. Weber's selected ideal-types – bureaucracy, religion, and capitalism – are culturally significant variables through which he demonstrated multiple functionalities of social behavior.
Similarly, Weber emphasizes that Marxist laws are also ideal-types. The concepts of class, economy, capitalism, proletariat and bourgeoisie, revolution and state, along with other Marxian models, are heuristic tools for understanding a society in its context. Thus, according to Weber, Marxist ideal-types could prove fruitful only if used to access a given society. However, Weber warns that Marxist ideal-types become dangerous, even pernicious, when taken as empirical reality. The reason is that Marxist practitioners have imposed analytical concepts as ahistorical and universal categories, reducing concrete processes and polymorphous activities to a simplified phenomenon. This renders social phenomena not only ahistorical but also devoid of spatio-temporal rigour and decontextualized, and it categorizes chaos and ruptures under the general label of bourgeois exploitation. In effect, history emerges as a metanarrative of class struggle, moving in chronological order, and the future is anticipated as a revolutionary overthrow of state apparatuses by the workers. For instance, the state as an ideal-type imported into the physical world has deceived and diverted political activism away from the real sites of power such as corporations and discourses.
Similarly, class as an ideal-type, projected onto a society, which is an ensemble of population, becomes dangerous because it marginalizes and undermines organic linkages of kinship, language, race, and ethnicity. This is a significant point because society is not composed of two conflicting classes, bourgeoisie and proletariat, and its vicissitudes do not run along economic lines alone. It does not exist in binaries, as Marxist ideal-types would suppose. In fact, it is a reality in which people of various denominations – class backgrounds, religious affiliations, kinship and family ties, gender, and ethnic and linguistic differences – not only experience conflict but also practice cooperation in everyday life. Thus, when one inserts ideal-types into this concrete dynamic process, one does categorical violence to the multifariousness of the population and likewise reduces feelings, emotions, and non-economic social standing such as honor and status, as Weber describes, to economism. Moreover, ideal-types should be treated as relevant only to a context that defines and delimits their parameters.
Weber's intervention came at the right moment, when Marxism – particularly vulgar Marxism – reduced "non-economic" practices and beliefs, the superstructure, to a determined base, the mode of production. Similarly, speculative philosophy imposed its own metaphysical categories on diverse concrete realities, thus making a particular instance ahistorical. Weber regards both methods, materialist and purely idealist, as "equally possible, but each, if it does not serve as the preparation, but as the conclusion of an investigation." To prove this point, Weber demonstrated how ethics and morality played a significant role in the rise of modern capitalism. The Protestant work ethic, for instance, functioned as a sophisticated mechanism that encouraged the population to "care for the self," which served as an underpinning social activity for bourgeois capitalism. Of course, the work ethic was not the only element; utilitarian philosophy equally contributed to forming a bureaucratic work culture whose side effects are all too well known to the modern world.
In response to the reductive approach of economism, or vulgar Marxism as it is also known, Louis Althusser and Raymond Williams introduced new understandings into Marxist thought. Althusser and Williams introduced politics and culture as new entry points, alongside the mode of production, in Marxist methodology. However, there is a sharp contrast between the two scholars' arguments. Williams criticizes the mechanistic approach to Marxism that encourages a close reading of Marxian concepts. Concepts such as being, consciousness, class, capital, labor, labor power, commodity, economy, and politics are not closed categories but rather interactive, engaging, and open practices, or praxis. Althusser, on the other hand, proposes 'overdetermination': the interplay of multiple forces rather than an isolated single force or mode of production. However, he argues that the economy is "determinant in the last instance."
== Closed systems ==
In anthropology, the term 'system' is widely used to describe socio-cultural phenomena of a given society in a holistic way – for instance, the kinship system, marriage system, cultural system, religious system, totemic system, and so on. This systemic approach to a society reflects the anxiety of the earliest anthropologists to capture reality without reducing the complexity of a given community. In their quest for the underlying pattern of reality, they "discovered" the kinship system as a fundamental structure of the natives. However, their systems are closed systems, because they reduce complexity and fluidity by imposing anthropological concepts such as genealogy, kinship, heredity, and marriage.
=== Cultural relativism ===
Franz Boas was the first anthropologist to problematize the notion of culture. Challenging the modern hegemony of culture, Boas introduced the idea of cultural relativism (understanding a culture in its own context). Drawing on his extensive fieldwork in the northwestern United States and British Columbia, Boas treated culture as separate from the physical environment and biology and, most importantly, discarded evolutionary models that represent civilization as a progressive entity following a chronological development. Moreover, cultural boundaries, according to Boas, are not barriers to intermixing and should not be seen as an obstacle to multiculturalism. In fact, boundaries must be seen as "porous and permeable," and "pluralized." His critique of the concepts of modern race and culture had political implications for the racial politics of the United States in the 1920s. In his chapter "The Race Problem in Modern Society," one can feel Boas' intellectual effort toward separating the natural from the social sciences and setting up the space for genuine political solutions to race relations.
=== Structural-functionalism ===
A. R. Radcliffe-Brown developed the structural-functionalist approach in anthropology. He believed that concrete reality is "not any sort of entity but a process, the process of social life." Radcliffe-Brown emphasized studying social forms, especially the kinship systems of primitive societies. One can study the pattern of life by conceptually delineating the relations determined by kinship or marriage, "and that we can give a general analytical description of them as constituting a system." Systems consist of structures, where structure refers to "some sort of ordered arrangement of parts or components." The intervening variable between process and structure is function. The three concepts of process, structure, and function are thus "components of a single theory as a scheme of interpretation of human social systems." Most importantly, the function of a part "is the part it plays in, the contribution it makes to, the life of the organism as a whole." Thus the parts of the system function together to maintain harmony, or internal consistency.
British anthropologist E. R. Leach went beyond the instrumentalist argument of Radcliffe-Brown's structural-functionalism, which approached social norms, kinship, etc. in functionalist terms rather than as social fields, or arenas of contestation. According to Leach, "the nicely ordered ranking of lineage seniority conceals a vicious element of competition." In fact, Leach was sensitive to "the essential difference between the ritual description of structural relations and the anthropologist's scientific description." For instance, in his book, Leach argues that "the question whether a particular community is gumlao, or gumsa, or Shan is not necessarily ascertainable in the realm of empirical facts; it is a question, in part at any rate, of the attitudes and ideas of particular individuals at a particular time." Thus, Leach separated conceptual categories from empirical realities.
=== Structural anthropology ===
Swiss linguist Ferdinand de Saussure, in search of universal laws of language, formulated a general science of linguistics by bifurcating language into langue, the abstract system of language, and parole, utterance or speech. Phonemes, the fundamental units of sound, are the basic structures of a language. The linguistic community gives a social dimension to a language. Moreover, linguistic signs are arbitrary, and change comes only with time and not by individual will. Drawing on structural linguistics, Claude Lévi-Strauss transformed the world into a text and thus subjected social phenomena to linguistic laws as formulated by Saussure. For instance, "primitive systems" such as kinship, magic, mythology, and ritual are scrutinized under the same linguistic dichotomy of abstract normative system (objective) and utterance (subjective). The division not only split social actions but also conditioned them to the categories of abstract systems made up of deep structures. For example, Lévi-Strauss suggests, "Kinship phenomena are of the same type as linguistic phenomena." As Saussure discovered phonemes as the basic structures of language, Lévi-Strauss identified (1) consanguinity, (2) affinity, and (3) descent as the deep structures of kinship. These "microsociological" levels serve "to discover the most general structural laws." The deep structures acquire meaning only with respect to the system they constitute: "Like phonemes, kinship terms are elements of meaning; like phonemes, they acquire meaning only if they are integrated into systems." Like the langue–parole distinction in language, a kinship system consists of (1) a system of terminology (vocabulary), through which relationships are expressed, and (2) a system of attitudes (psychological or social), which functions for social cohesion.
To elaborate the dynamic interdependence between systems of terminology and systems of attitudes, Lévi-Strauss rejected Radcliffe-Brown's idea that a system of attitudes is merely the manifestation of a system of terminology on the affective level. He turned to the concept of the avunculate as a part of a whole consisting of three types of relationship: consanguinity, affinity, and descent. Thus, Lévi-Strauss identified complex avuncular relationships, contrary to atomism and the simplified labels of the avunculate associated with matrilineal descent. Furthermore, he suggested that a kinship system "exists only in human consciousness; it is an arbitrary system of representations, not the spontaneous development of a real situation." The meaning of an element (the avunculate) exists only in relation to a kinship structure.
Lévi-Strauss elaborates the point about meaning and structure further in his essay "The Sorcerer and His Magic." The sorcerer, patient, and group, according to Lévi-Strauss, comprise a shaman complex, which makes social consensus the underlying pattern for understanding. The work of a sorcerer is to reintegrate the divergent expressions or feelings of patients into "patterns present in the group's culture. The assimilation of such patterns is the only means of objectivizing subjective states, of formulating inexpressible feelings, and of integrating inarticulated experiences into a system." The three examples that Lévi-Strauss mentions relate to magic, a practice reached as a social consensus by a group of people including sorcerer and patient. It seems that people make sense of certain activities through beliefs created by social consensus, and not through the effectiveness of magical practices. The community's belief in the social consensus thus determines social roles and sets rules and categories for attitudes. Perhaps, in this essay, magic is the system of terminology, the langue, whereas individual behavior is the system of attitudes, the parole. Attitudes make sense, or acquire meaning, through magic. Here, magic is a language.
=== Interpretive anthropology ===
Influenced by the hermeneutic tradition, Clifford Geertz developed an interpretive anthropology aimed at understanding the meanings of a society. The hermeneutic approach allows Geertz to close the distance between an ethnographer and a given culture, akin to the relationship between a reader and a text: the reader reads a text and generates his or her own meaning. Instead of imposing concepts to represent reality, ethnographers should read the culture and interpret the multiplicities of meaning expressed or hidden in the society. In his influential essay "Thick Description: Toward an Interpretive Theory of Culture," Geertz argues that "man is an animal suspended in webs of significance he himself has spun."
=== Practice theory ===
French sociologist Pierre Bourdieu challenges the same duality of phenomenology (subjective) and structuralism (objective) through his practice theory. The idea precisely challenges the reductive approach of economism, which places symbolic interests in opposition to economic interests. Similarly, it rejects a subject-centered view of the world. Bourdieu attempts to close this gap by developing the concept of symbolic capital – prestige, for instance – as readily convertible back into economic capital and hence 'the most valuable form of accumulation.' Therefore, the economic and the symbolic work together and should be studied as a general science of the economy of practices.
== System theory: Gregory Bateson ==
British anthropologist Gregory Bateson is one of the earliest and most influential founders of systems theory in anthropology. He developed an interdisciplinary approach that included communication theory, cybernetics, and mathematical logic. In his collection of essays, A Sacred Unity, Bateson argues that there are "ecological systems, social systems, and the individual organism plus the environment with which it interacts is itself a system in this technical sense." By including the environment with systems, Bateson closes the gap between dualities such as subject and object. "Playing upon the differences between formalization and process, or crystallization and randomness, Bateson sought to transcend other dualisms – mind versus nature, organism versus environment, concept versus context, and subject versus object." Bateson set out the general rule of systems theory. He says:
The basic rule of systems theory is that, if you want to understand some phenomenon or appearance, you must consider that phenomenon within the context of all completed circuits which are relevant to it. The emphasis is on the concept of the completed communicational circuit and implicit in the theory is the expectation that all units containing completed circuits will show mental characteristics. The mind, in other words, is immanent in the circuitry. We are accustomed to thinking of the mind as somehow contained within the skin of an organism, but the circuitry is not contained within the skin.
=== Poststructuralist influence ===
Bateson's work influenced major poststructuralist scholars especially Gilles Deleuze and Félix Guattari. In fact, the very word 'plateau' in Deleuze and Guattari's magnum opus, A Thousand Plateaus, came from Bateson's work on Balinese culture. They wrote: "Gregory Bateson uses the word plateau to designate something very special: a continuous, self-vibrating region of intensities whose development avoids any orientation toward a culmination point or external end." Bateson pioneered an interdisciplinary approach in anthropology. He coined the term "ecology of mind" to demonstrate that what "goes on in one's head and in one's behavior" is interlocked and constitutes a network. Guattari wrote:
Gregory Bateson has clearly shown that what he calls the "ecology of ideas" cannot be contained within the domain of the psychology of the individual, but organizes itself into systems or "minds", the boundaries of which no longer coincide with the participant individuals.
=== Posthumanist turn and ethnographic writing ===
In anthropology, the task of representing the native point of view has been a challenging one. The idea behind ethnographic writing is to understand the complexity of people's everyday life without undermining or reducing the native account. Historically, as mentioned above, ethnographers inserted raw data, collected in fieldwork, into the writing "machine". The output is usually neat categories of ethnicity, identity, class, kinship, genealogy, religion, culture, violence, and numerous others. With the posthumanist turn, however, the art of ethnographic writing has faced serious challenges. Anthropologists are now experimenting with new styles of writing, for instance writing with natives or multiple authorship.
== See also ==
Complex systems
Social systems
Systems science
Systems theory
== References ==
== Further reading ==
Gregory Bateson, A Sacred Unity: Further Steps to an Ecology of Mind
Ludwig von Bertalanffy. General System Theory: Foundations, Development, Applications. Revised edition. New York: George Braziller. ISBN 978-0-8076-0453-3
Rosi Braidotti. Nomadic Subjects: Embodiment and Sexual Difference in Contemporary Feminist Theory. New York: Columbia UP 1994. ISBN 0-231-08235-5
---. Transpositions: On Nomadic Ethics. Cambridge, UK; Malden, MA: Polity, 2006. ISBN 978-0-7456-3596-5
Georges Canguilhem. The Normal and the Pathological. Trans. Carolyn R. Fawcett. New York: Zone, 1991. ISBN 978-0-942299-59-5
Lilie Chouliaraki and Norman Fairclough. Discourse in Late Modernity: Rethinking Critical Discourse Analysis. Edinburgh: Edinburgh UP, 2000. ISBN 978-0-7486-1082-2
Manuel De Landa, A Thousand Years of Nonlinear History. New York: Zone Books. 1997. ISBN 0-942299-32-9
---. A New Philosophy of Society: Assemblage Theory and Social Complexity. New York: Continuum, 2006. ISBN 978-0-8264-9169-5
Gilles Deleuze and Félix Guattari. Anti-Œdipus: Capitalism and Schizophrenia. Minneapolis: U of Minnesota P, 1987. ISBN 978-0-8166-1402-8
---. A Thousand Plateaus. Minneapolis: U of Minnesota P, 1987. ISBN 978-0-8166-1402-8
Jürgen Habermas. Theory of Communicative Action, Vol. 1. Trans. Thomas McCarthy. Boston: Beacon, 1985. ISBN 978-0-8070-1507-0
---. Theory of Communicative Action, Vol. 2. Trans. Thomas McCarthy. Boston: Beacon, 1985. ISBN 978-0-8070-1401-1
Stuart Hall, ed. Representation: Cultural Representations and Signifying Practices. Thousand Oaks, CA: Sage, 1997. ISBN 978-0-7619-5432-3
Donna Haraway. "A Cyborg Manifesto." Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, 1991. 149–181
Julia Kristeva. "The System and the Speaking Subject." The Kristeva Reader. Ed. Toril Moi. Oxford: Basil Blackwell, 1986. 24–33. (see also <http://www.kristeva.fr/> & <http://www.phillwebb.net/History/TwentiethCentury/continental/(post)structuralisms/StructuralistPsychoanalysis/Kristeva/Kristeva.htm>)
Ervin Laszlo. The Systems View of the World: A Holistic Vision for Our Time. New York: Hampton Press, 1996. ISBN 978-1-57273-053-3
Bruno Latour. Reassembling the Social: An Introduction to Actor-Network Theory. New York: Oxford UP, 2007 ISBN 978-0-19-925605-1
Niklas Luhmann. Art as a Social System. Trans. Eva M. Knodt. Stanford, CA: Stanford UP, 2000. ISBN 978-0-8047-3907-8
---. Social Systems. Trans. John Bednarz, Jr., with Dirk Baecker. Stanford, CA: Stanford UP, 1996. ISBN 978-0-8047-2625-2
Nina Lykke and Rosi Braidotti, eds. Monsters, Goddesses and Cyborgs: Feminist Confrontations with Science, Medicine and Cyberspace. London: Zed Books, 1996. ISBN 978-1-85649-382-6
Humberto Maturana and Bernhard Pörksen. From Being to Doing: The Origins of the Biology of Cognition. Trans. Wolfram Karl Koeck and Alison Rosemary Koeck. Heidelberg: Carl-Auer Verlag, 2004. ISBN 978-3-89670-448-1
Humberto Maturana and F. J. Varela. Autopoiesis and Cognition: The Realization of the Living. New York: Springer, 1991. ISBN 978-90-277-1016-1
Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. London, New York: Verso, 2005.
Paul R. Samson and David Pitt, eds. The Biosphere and Noosphere Reader: Global Environment, Society and Change. London, New York: Routledge, 2002 [1999]. ISBN 0-415-16645-4
John Tresch (1998). "Heredity is an Open System: Gregory Bateson as Descendant and Ancestor". In: Anthropology Today, Vol. 14, No. 6 (Dec., 1998), pp. 3–6.
Vladimir I. Vernadsky. The Biosphere. Trans. David B. Langmuir. New York: Copernicus/Springer Verlag, 1997.
== External links ==
New England Complex System Institute
Commonwealth Scientific and Industrial Research Organisation (CSIRO)
In mathematics, set-theoretic topology is a subject that combines set theory and general topology. It focuses on topological questions that can be solved using set-theoretic methods, for example, Suslin's problem.
== Objects studied in set-theoretic topology ==
=== Dowker spaces ===
In the mathematical field of general topology, a Dowker space is a topological space that is T4 but not countably paracompact.
Dowker conjectured that there were no Dowker spaces, and the conjecture was not resolved until M. E. Rudin constructed one in 1971. Rudin's counterexample is a very large space (of cardinality {\displaystyle \aleph _{\omega }^{\aleph _{0}}}) and is generally not well-behaved. Zoltán Balogh gave the first ZFC construction of a small (cardinality continuum) example, which was more well-behaved than Rudin's. Using PCF theory, M. Kojman and S. Shelah constructed a subspace of Rudin's Dowker space of cardinality {\displaystyle \aleph _{\omega +1}} that is also Dowker.
=== Normal Moore spaces ===
A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC.
=== Cardinal functions ===
Cardinal functions are widely used in topology as a tool for describing various topological properties. Below are some examples. (Note: some authors, arguing that "there are no finite cardinal numbers in general topology", prefer to define the cardinal functions listed below so that they never take on finite cardinal numbers as values; this requires modifying some of the definitions given below, e.g. by adding "{\displaystyle \;\;+\;\aleph _{0}}" to the right-hand side of the definitions, etc.)
Perhaps the simplest cardinal invariants of a topological space X are its cardinality and the cardinality of its topology, denoted respectively by |X| and o(X).
The weight w(X ) of a topological space X is the smallest possible cardinality of a base for X. When {\displaystyle w(X)\leq \aleph _{0}} the space X is said to be second countable.
The {\displaystyle \pi }-weight of a space X is the smallest cardinality of a {\displaystyle \pi }-base for X. (A {\displaystyle \pi }-base is a collection of nonempty open sets such that every nonempty open set contains a member of the collection.)
The character of a topological space X at a point x, denoted {\displaystyle \chi (x,X)}, is the smallest cardinality of a local base at x. The character of the space X is {\displaystyle \chi (X)=\sup\{\chi (x,X):x\in X\}}. When {\displaystyle \chi (X)\leq \aleph _{0}} the space X is said to be first countable.
The density d(X ) of a space X is the smallest cardinality of a dense subset of X. When {\displaystyle {\rm {{d}(X)\leq \aleph _{0}}}} the space X is said to be separable.
The Lindelöf number L(X ) of a space X is the smallest infinite cardinality such that every open cover has a subcover of cardinality no more than L(X ). When {\displaystyle {\rm {{L}(X)=\aleph _{0}}}} the space X is said to be a Lindelöf space.
The cellularity c(X) of a space X is the supremum of the cardinalities of families of pairwise disjoint nonempty open subsets of X. The hereditary cellularity (sometimes called spread) is the least upper bound of the cellularities of its subsets, {\displaystyle s(X)=\sup\{c(Y):Y\subseteq X\}}, or equivalently the supremum of the cardinalities of the discrete subspaces of X.
The tightness t(x, X) of a topological space X at a point {\displaystyle x\in X} is the smallest cardinal number {\displaystyle \alpha } such that, whenever {\displaystyle x\in {\rm {cl}}_{X}(Y)} for some subset Y of X, there exists a subset Z of Y, with |Z | ≤ {\displaystyle \alpha }, such that {\displaystyle x\in {\rm {cl}}_{X}(Z)}. The tightness of the space X is {\displaystyle t(X)=\sup\{t(x,X):x\in X\}}. When t(X) = {\displaystyle \aleph _{0}} the space X is said to be countably generated or countably tight.
The augmented tightness {\displaystyle t^{+}(X)} of a space X is the smallest regular cardinal {\displaystyle \alpha } such that for any {\displaystyle Y\subseteq X} with {\displaystyle x\in {\rm {cl}}_{X}(Y)}, there is a subset Z of Y with cardinality less than {\displaystyle \alpha }, such that {\displaystyle x\in {\rm {cl}}_{X}(Z)}.
=== Martin's axiom ===
For any cardinal k, we define a statement, denoted by MA(k):
For any partial order P satisfying the countable chain condition (hereafter ccc) and any family D of dense sets in P such that |D| ≤ k, there is a filter F on P such that F ∩ d is non-empty for every d in D.
Since it is a theorem of ZFC that MA(c) fails, Martin's axiom is stated as:
Martin's axiom (MA): For every k < c, MA(k) holds.
In this case (for application of ccc), an antichain is a subset A of P such that any two distinct members of A are incompatible (two elements are said to be compatible if there exists a common element below both of them in the partial order). This differs from, for example, the notion of antichain in the context of trees.
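The compatibility and antichain notions above can be made concrete on a small finite partial order. The following Python sketch is purely illustrative (the example poset – the nonempty subsets of a three-element set ordered by inclusion – and all helper names are ours, not from the article); it checks that two conditions are compatible exactly when some element lies below both, and that the singletons form an antichain:

```python
from itertools import combinations

# Toy partial order: nonempty subsets of {0,1,2}, ordered by inclusion.
# With the empty set excluded, not every pair has a common lower bound.
elements = [frozenset(s) for r in range(1, 4)
            for s in combinations(range(3), r)]

def leq(a, b):
    """a <= b in the poset means a is a subset of b."""
    return a <= b

def compatible(a, b):
    """Two conditions are compatible if some element lies below both."""
    return any(leq(c, a) and leq(c, b) for c in elements)

def is_antichain(subset):
    """An antichain: any two distinct members are incompatible."""
    return all(not compatible(a, b) for a, b in combinations(subset, 2))

singletons = [frozenset({i}) for i in range(3)]
print(is_antichain(singletons))  # True: the singletons are pairwise incompatible
```

Since the poset is finite it trivially satisfies the ccc; the point of the sketch is only the definition of compatibility, which is what distinguishes this notion of antichain from the one used for trees.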
MA({\displaystyle 2^{\aleph _{0}}}) is false: [0, 1] is a compact Hausdorff space, which is separable and so ccc. It has no isolated points, so points in it are nowhere dense, but it is the union of {\displaystyle 2^{\aleph _{0}}} many points.
An equivalent formulation is: If X is a compact Hausdorff topological space which satisfies the ccc then X is not the union of k or fewer nowhere dense subsets.
Martin's axiom has a number of other interesting combinatorial, analytic and topological consequences:
The union of k or fewer null sets in an atomless σ-finite Borel measure on a Polish space is null. In particular, the union of k or fewer subsets of R of Lebesgue measure 0 also has Lebesgue measure 0.
A compact Hausdorff space X with |X| < 2^k is sequentially compact, i.e., every sequence has a convergent subsequence.
No non-principal ultrafilter on N has a base of cardinality < k.
Equivalently for any x in βN\N we have χ(x) ≥ k, where χ is the character of x, and so χ(βN) ≥ k.
MA({\displaystyle \aleph _{1}}) implies that a product of ccc topological spaces is ccc (this in turn implies there are no Suslin lines).
MA + ¬CH implies that there exists a Whitehead group that is not free; Shelah used this to show that the Whitehead problem is independent of ZFC.
=== Forcing ===
Forcing is a technique invented by Paul Cohen for proving consistency and independence results. It was first used, in 1963, to prove the independence of the axiom of choice and the continuum hypothesis from Zermelo–Fraenkel set theory. Forcing was considerably reworked and simplified in the 1960s, and has proven to be an extremely powerful technique both within set theory and in areas of mathematical logic such as recursion theory.
Intuitively, forcing consists of expanding the set-theoretical universe V to a larger universe V*. In this bigger universe, for example, one might have many new subsets of ω = {0,1,2,…} that were not there in the old universe, and thereby violate the continuum hypothesis. While impossible on the face of it, this is just another version of Cantor's paradox about infinity. In principle, one could consider {\displaystyle V^{*}=V\times \{0,1\}}, identify {\displaystyle x\in V} with {\displaystyle (x,0)}, and then introduce an expanded membership relation involving the "new" sets of the form {\displaystyle (x,1)}. Forcing is a more elaborate version of this idea, reducing the expansion to the existence of one new set, and allowing for fine control over the properties of the expanded universe.
See the main articles for applications such as random reals.
== References ==
== Further reading ==
Kenneth Kunen; Jerry E. Vaughan, eds. (1984). Handbook of Set-Theoretic Topology. North-Holland. ISBN 0-444-86580-2.
In mathematics, the term homology, originally introduced in algebraic topology, has three primary, closely-related usages. The most direct usage of the term is to take the homology of a chain complex, resulting in a sequence of abelian groups called homology groups. This operation, in turn, allows one to associate various named homologies or homology theories to various other types of mathematical objects. Lastly, since there are many homology theories for topological spaces that produce the same answer, one also often speaks of the homology of a topological space. (This latter notion of homology admits more intuitive descriptions for 1- or 2-dimensional topological spaces, and is sometimes referenced in popular mathematics.) There is also a related notion of the cohomology of a cochain complex, giving rise to various cohomology theories, in addition to the notion of the cohomology of a topological space.
== Homology of chain complexes ==
To take the homology of a chain complex, one starts with a chain complex, which is a sequence {\displaystyle (C_{\bullet },d_{\bullet })} of abelian groups {\displaystyle C_{n}} (whose elements are called chains) and group homomorphisms {\displaystyle d_{n}} (called boundary maps) such that the composition of any two consecutive maps is zero:

{\displaystyle C_{\bullet }:\cdots \longrightarrow C_{n+1}{\stackrel {d_{n+1}}{\longrightarrow }}C_{n}{\stackrel {d_{n}}{\longrightarrow }}C_{n-1}{\stackrel {d_{n-1}}{\longrightarrow }}\cdots ,\quad d_{n}\circ d_{n+1}=0.}
The {\displaystyle n}th homology group {\displaystyle H_{n}} of this chain complex is then the quotient group {\displaystyle H_{n}=Z_{n}/B_{n}} of cycles modulo boundaries, where the {\displaystyle n}th group of cycles {\displaystyle Z_{n}} is given by the kernel subgroup {\displaystyle Z_{n}:=\ker d_{n}:=\{c\in C_{n}\,|\;d_{n}(c)=0\}}, and the {\displaystyle n}th group of boundaries {\displaystyle B_{n}} is given by the image subgroup {\displaystyle B_{n}:=\mathrm {im} \,d_{n+1}:=\{d_{n+1}(c)\,|\;c\in C_{n+1}\}}. One can optionally endow chain complexes with additional structure, for example by additionally taking the groups {\displaystyle C_{n}} to be modules over a coefficient ring {\displaystyle R}, and taking the boundary maps {\displaystyle d_{n}} to be {\displaystyle R}-module homomorphisms, resulting in homology groups {\displaystyle H_{n}} that are also quotient modules. Tools from homological algebra can be used to relate homology groups of different chain complexes.
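When the coefficient ring is a field such as the rationals, the homology groups above are vector spaces and are determined by dimension counts: dim H_n = dim ker d_n − rank d_{n+1}, which by rank–nullity equals (dim C_n − rank d_n) − rank d_{n+1}. As a minimal illustration (our own sketch, not from the article), the following Python code computes these Betti numbers over Q from the boundary matrices, using exact Gaussian elimination; the example complex is the boundary of a triangle:

```python
from fractions import Fraction

def matrix_rank(mat):
    """Rank of a rational matrix via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk, rows, cols = 0, len(m), len(m[0]) if m else 0
    for col in range(cols):
        pivot = next((r for r in range(rk, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for r in range(rows):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

def betti_numbers(dims, boundary):
    """dims[n] = dim C_n over Q; boundary[n] = matrix of d_n : C_n -> C_{n-1}.
    dim H_n = (dim C_n - rank d_n) - rank d_{n+1} by rank-nullity."""
    def rk(n):
        return matrix_rank(boundary[n]) if n in boundary else 0
    return {n: dims[n] - rk(n) - rk(n + 1) for n in dims}

# Chain complex of the boundary of a triangle: three vertices, three edges,
# with d1(e_ij) = v_j - v_i encoded as the columns of a 3x3 matrix.
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]
print(betti_numbers({0: 3, 1: 3}, {1: d1}))  # {0: 1, 1: 1}
```

The result reflects one connected component (dim H_0 = 1) and one independent 1-cycle, the triangle itself (dim H_1 = 1). Over a general ring the torsion part of the homology is lost by this dimension count, which is why the full quotient-group construction above is needed.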
== Homology theories ==
To associate a homology theory to other types of mathematical objects, one first gives a prescription for associating chain complexes to that object, and then takes the homology of such a chain complex. For the homology theory to be valid, all such chain complexes associated to the same mathematical object must have the same homology. The resulting homology theory is often named according to the type of chain complex prescribed. For example, singular homology, Morse homology, Khovanov homology, and Hochschild homology are respectively obtained from singular chain complexes, Morse complexes, Khovanov complexes, and Hochschild complexes. In other cases, such as for group homology, there are multiple common methods to compute the same homology groups.
In the language of category theory, a homology theory is a type of functor from the category of the mathematical object being studied to the category of abelian groups and group homomorphisms, or more generally to the category corresponding to the associated chain complexes. One can also formulate homology theories as derived functors on appropriate abelian categories, measuring the failure of an appropriate functor to be exact. One can describe this latter construction explicitly in terms of resolutions, or more abstractly from the perspective of derived categories or model categories.
Regardless of how they are formulated, homology theories help provide information about the structure of the mathematical objects to which they are associated, and can sometimes help distinguish different objects.
== Homology of a topological space ==
Perhaps the most familiar usage of the term homology is for the homology of a topological space. For sufficiently nice topological spaces and compatible choices of coefficient rings, any homology theory satisfying the Eilenberg-Steenrod axioms yields the same homology groups as the singular homology (see below) of that topological space, with the consequence that one often simply refers to the "homology" of that space, instead of specifying which homology theory was used to compute the homology groups in question.
For 1-dimensional topological spaces, probably the simplest homology theory to use is graph homology, which could be regarded as a 1-dimensional special case of simplicial homology, the latter of which involves a decomposition of the topological space into simplices. (Simplices are a generalization of triangles to arbitrary dimension; for example, an edge in a graph is homeomorphic to a one-dimensional simplex, and a triangle-based pyramid is a 3-simplex.) Simplicial homology can in turn be generalized to singular homology, which allows more general maps of simplices into the topological space. Replacing simplices with disks of various dimensions results in a related construction called cellular homology.
There are also other ways of computing these homology groups, for example via Morse homology, or by taking the output of the Universal Coefficient Theorem when applied to a cohomology theory such as Čech cohomology or (in the case of real coefficients) De Rham cohomology.
=== Inspirations for homology (informal discussion) ===
One of the ideas that led to the development of homology was the observation that certain low-dimensional shapes can be topologically distinguished by examining their "holes." For instance, a figure-eight shape has more holes than a circle {\displaystyle S^{1}}, and a 2-torus {\displaystyle T^{2}} (a 2-dimensional surface shaped like an inner tube) has different holes from a 2-sphere {\displaystyle S^{2}} (a 2-dimensional surface shaped like a basketball).
Studying topological features such as these led to the notion of the cycles that represent homology classes (the elements of homology groups). For example, the two embedded circles in a figure-eight shape provide examples of one-dimensional cycles, or 1-cycles, and the 2-torus {\displaystyle T^{2}} and 2-sphere {\displaystyle S^{2}} represent 2-cycles. Cycles form a group under the operation of formal addition, which refers to adding cycles symbolically rather than combining them geometrically. Any formal sum of cycles is again called a cycle.
=== Cycles and boundaries (informal discussion) ===
Explicit constructions of homology groups are somewhat technical. As mentioned above, an explicit realization of the homology groups {\displaystyle H_{n}(X)} of a topological space {\displaystyle X} is defined in terms of the cycles and boundaries of a chain complex {\displaystyle (C_{\bullet },d_{\bullet })} associated to {\displaystyle X}, where the type of chain complex depends on the choice of homology theory in use. These cycles and boundaries are elements of abelian groups, and are defined in terms of the boundary homomorphisms {\displaystyle d_{n}:C_{n}\to C_{n-1}} of the chain complex, where each {\displaystyle C_{n}} is an abelian group, and the {\displaystyle d_{n}} are group homomorphisms that satisfy {\displaystyle d_{n-1}\circ d_{n}=0} for all {\displaystyle n}.
Since such constructions are somewhat technical, informal discussions of homology sometimes focus instead on topological notions that parallel some of the group-theoretic aspects of cycles and boundaries.
For example, in the context of chain complexes, a boundary is any element of the image {\displaystyle B_{n}:=\mathrm {im} \,d_{n+1}:=\{d_{n+1}(c)\,|\;c\in C_{n+1}\}} of the boundary homomorphism {\displaystyle d_{n}:C_{n}\to C_{n-1}}, for some {\displaystyle n}. In topology, the boundary of a space is technically obtained by taking the space's closure minus its interior, but it is also a notion familiar from examples, e.g., the boundary of the unit disk is the unit circle, or more topologically, the boundary of {\displaystyle D^{2}} is {\displaystyle S^{1}}.
Topologically, the boundary of the closed interval {\displaystyle [0,1]} is given by the disjoint union {\displaystyle \{0\}\,\amalg \,\{1\}}, and with respect to suitable orientation conventions, the oriented boundary of {\displaystyle [0,1]} is given by the union of a positively oriented {\displaystyle \{1\}} with a negatively oriented {\displaystyle \{0\}.} The simplicial chain complex analog of this statement is that {\displaystyle d_{1}([0,1])=\{1\}-\{0\}}. (Since {\displaystyle d_{1}} is a homomorphism, this implies {\displaystyle d_{1}(k\cdot [0,1])=k\cdot \{1\}-k\cdot \{0\}} for any integer {\displaystyle k}.)
In the context of chain complexes, a cycle is any element of the kernel {\displaystyle Z_{n}:=\ker d_{n}:=\{c\in C_{n}\,|\;d_{n}(c)=0\}}, for some {\displaystyle n}. In other words, {\displaystyle c\in C_{n}} is a cycle if and only if {\displaystyle d_{n}(c)=0}. The closest topological analog of this idea would be a shape that has "no boundary," in the sense that its boundary is the empty set. For example, since {\displaystyle S^{1},S^{2}}, and {\displaystyle T^{2}} have no boundary, one can associate cycles to each of these spaces. However, the chain complex notion of cycles (elements whose boundary is a "zero chain") is more general than the topological notion of a shape with no boundary.
It is this topological notion of no boundary that people generally have in mind when they claim that cycles can intuitively be thought of as detecting holes. The idea is that for boundaryless shapes like {\displaystyle S^{1}}, {\displaystyle S^{2}}, and {\displaystyle T^{2}}, it is possible in each case to glue on a larger shape for which the original shape is the boundary. For instance, starting with a circle {\displaystyle S^{1}}, one could glue a 2-dimensional disk {\displaystyle D^{2}} to that {\displaystyle S^{1}} such that the {\displaystyle S^{1}} is the boundary of that {\displaystyle D^{2}}. Similarly, given a two-sphere {\displaystyle S^{2}}, one can glue a ball {\displaystyle B^{3}} to that {\displaystyle S^{2}} such that the {\displaystyle S^{2}} is the boundary of that {\displaystyle B^{3}}. This phenomenon is sometimes described as saying that {\displaystyle S^{2}} has a {\displaystyle B^{3}}-shaped "hole" or that it could be "filled in" with a {\displaystyle B^{3}}.
More generally, any shape with no boundary can be "filled in" with a cone, since if a given space {\displaystyle Y} has no boundary, then the boundary of the cone on {\displaystyle Y} is given by {\displaystyle Y}, and so if one "filled in" {\displaystyle Y} by gluing the cone on {\displaystyle Y} onto {\displaystyle Y}, then {\displaystyle Y} would be the boundary of that cone. (For example, a cone on {\displaystyle S^{1}} is homeomorphic to a disk {\displaystyle D^{2}} whose boundary is that {\displaystyle S^{1}}.) However, it is sometimes desirable to restrict to nicer spaces such as manifolds, and not every cone is homeomorphic to a manifold. Embedded representatives of 1-cycles, 3-cycles, and oriented 2-cycles all admit manifold-shaped holes, but, for example, the real projective plane {\displaystyle \mathbb {RP} ^{2}} and complex projective plane {\displaystyle \mathbb {CP} ^{2}} have nontrivial cobordism classes and therefore cannot be "filled in" with manifolds.
On the other hand, the boundaries discussed in the homology of a topological space {\displaystyle X} are different from the boundaries of "filled in" holes, because the homology of a topological space {\displaystyle X} has to do with the original space {\displaystyle X}, and not with new shapes built from gluing extra pieces onto {\displaystyle X}. For example, any embedded circle {\displaystyle C} in {\displaystyle S^{2}} already bounds some embedded disk {\displaystyle D} in {\displaystyle S^{2}}, so such a {\displaystyle C} gives rise to a boundary class in the homology of {\displaystyle S^{2}}. By contrast, no embedding of {\displaystyle S^{1}} into one of the two lobes of the figure-eight shape gives a boundary, despite the fact that it is possible to glue a disk onto a figure-eight lobe.
=== Homology groups ===
Given a sufficiently nice topological space {\displaystyle X}, a choice of appropriate homology theory, and a chain complex {\displaystyle (C_{\bullet },d_{\bullet })} associated to {\displaystyle X} that is compatible with that homology theory, the {\displaystyle n}th homology group {\displaystyle H_{n}(X)} is then given by the quotient group {\displaystyle H_{n}(X)=Z_{n}/B_{n}} of {\displaystyle n}-cycles ({\displaystyle n}-dimensional cycles) modulo {\displaystyle n}-dimensional boundaries. In other words, the elements of {\displaystyle H_{n}(X)}, called homology classes, are equivalence classes whose representatives are {\displaystyle n}-cycles, and any two cycles are regarded as equal in {\displaystyle H_{n}(X)} if and only if they differ by the addition of a boundary. This also implies that the "zero" element of {\displaystyle H_{n}(X)} is given by the group of {\displaystyle n}-dimensional boundaries, which also includes formal sums of such boundaries.
== Informal examples ==
The homology of a topological space X is a set of topological invariants of X represented by its homology groups {\displaystyle H_{0}(X),H_{1}(X),H_{2}(X),\ldots } where the {\displaystyle k^{\rm {th}}} homology group {\displaystyle H_{k}(X)} describes, informally, the number of holes in X with a k-dimensional boundary. A 0-dimensional-boundary hole is simply a gap between two components. Consequently, {\displaystyle H_{0}(X)} describes the path-connected components of X.
A one-dimensional sphere {\displaystyle S^{1}} is a circle. It has a single connected component and a one-dimensional-boundary hole, but no higher-dimensional holes. The corresponding homology groups are given as
{\displaystyle H_{k}\left(S^{1}\right)={\begin{cases}\mathbb {Z} &k=0,1\\\{0\}&{\text{otherwise}}\end{cases}}}
where {\displaystyle \mathbb {Z} } is the group of integers and {\displaystyle \{0\}} is the trivial group. The group {\displaystyle H_{1}\left(S^{1}\right)=\mathbb {Z} } is a finitely generated abelian group, with a single generator representing the one-dimensional hole contained in a circle.
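The rank computation behind this answer can be sketched in a few lines of Python (a hypothetical illustration, assuming a simplicial model of the circle as a hollow triangle): the Betti numbers recover the ranks of {\displaystyle H_{0}\left(S^{1}\right)} and {\displaystyle H_{1}\left(S^{1}\right)}.

```python
import numpy as np

# Hollow triangle: a simplicial model of the circle S^1
# (three vertices, three edges, no 2-simplices).
# Columns of d1 are the edges [0,1], [0,2], [1,2]; rows are the vertices.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])   # boundary map C_1 -> C_0

r = np.linalg.matrix_rank(d1)
b0 = 3 - r          # dim C_0 - rank d_1  (d_0 is the zero map)
b1 = (3 - r) - 0    # dim ker d_1; im d_2 = 0 since there are no 2-simplices
print(b0, b1)       # 1 1 -- one connected component, one 1-dimensional hole
```

Since these homology groups are torsion-free, the ranks 1 and 1 determine them completely as {\displaystyle \mathbb {Z} } in degrees 0 and 1.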
A two-dimensional sphere {\displaystyle S^{2}} has a single connected component, no one-dimensional-boundary holes, a two-dimensional-boundary hole, and no higher-dimensional holes. The corresponding homology groups are
{\displaystyle H_{k}\left(S^{2}\right)={\begin{cases}\mathbb {Z} &k=0,2\\\{0\}&{\text{otherwise}}\end{cases}}}
In general, for an n-dimensional sphere {\displaystyle S^{n},} the homology groups are
{\displaystyle H_{k}\left(S^{n}\right)={\begin{cases}\mathbb {Z} &k=0,n\\\{0\}&{\text{otherwise}}\end{cases}}}
A two-dimensional ball {\displaystyle B^{2}} is a solid disc. It has a single path-connected component but, in contrast to the circle, no higher-dimensional holes. The corresponding homology groups are all trivial except for {\displaystyle H_{0}\left(B^{2}\right)=\mathbb {Z} }. In general, for an n-dimensional ball {\displaystyle B^{n},}
{\displaystyle H_{k}\left(B^{n}\right)={\begin{cases}\mathbb {Z} &k=0\\\{0\}&{\text{otherwise}}\end{cases}}}
The torus is defined as the product of two circles, {\displaystyle T^{2}=S^{1}\times S^{1}}. The torus has a single path-connected component, two independent one-dimensional holes, and one two-dimensional hole as the interior of the torus. The corresponding homology groups are
{\displaystyle H_{k}(T^{2})={\begin{cases}\mathbb {Z} &k=0,2\\\mathbb {Z} \times \mathbb {Z} &k=1\\\{0\}&{\text{otherwise}}\end{cases}}}
If the n-fold product of a topological space X is written as {\displaystyle X^{n}}, then in general, for an n-dimensional torus {\displaystyle T^{n}=(S^{1})^{n}},
{\displaystyle H_{k}(T^{n})={\begin{cases}\mathbb {Z} ^{\binom {n}{k}}&0\leq k\leq n\\\{0\}&{\text{otherwise}}\end{cases}}}
(see Torus § n-dimensional torus and Betti number § More examples for more details).
The two independent 1-dimensional holes form independent generators in a finitely generated abelian group, expressed as the product group {\displaystyle \mathbb {Z} \times \mathbb {Z} .}
For the projective plane P, a simple computation shows (where {\displaystyle \mathbb {Z} _{2}} is the cyclic group of order 2):
{\displaystyle H_{k}(P)={\begin{cases}\mathbb {Z} &k=0\\\mathbb {Z} _{2}&k=1\\\{0\}&{\text{otherwise}}\end{cases}}}
{\displaystyle H_{0}(P)=\mathbb {Z} } corresponds, as in the previous examples, to the fact that there is a single connected component. {\displaystyle H_{1}(P)=\mathbb {Z} _{2}} is a new phenomenon: intuitively, it corresponds to the fact that there is a single non-contractible "loop," but traversing the loop twice yields something contractible to zero. This phenomenon is called torsion.
== Construction of homology groups ==
The following text describes a general algorithm for constructing the homology groups. It may be easier for the reader to look at some simple examples first: graph homology and simplicial homology.
The general construction begins with an object such as a topological space X, on which one first defines a chain complex C(X) encoding information about X. A chain complex is a sequence of abelian groups or modules {\displaystyle C_{0},C_{1},C_{2},\ldots } connected by homomorphisms {\displaystyle \partial _{n}:C_{n}\to C_{n-1},} which are called boundary operators. That is,
{\displaystyle \dotsb {\overset {\partial _{n+1}}{\longrightarrow \,}}C_{n}{\overset {\partial _{n}}{\longrightarrow \,}}C_{n-1}{\overset {\partial _{n-1}}{\longrightarrow \,}}\dotsb {\overset {\partial _{2}}{\longrightarrow \,}}C_{1}{\overset {\partial _{1}}{\longrightarrow \,}}C_{0}{\overset {\partial _{0}}{\longrightarrow \,}}0}
where 0 denotes the trivial group and {\displaystyle C_{i}\equiv 0} for i < 0. It is also required that the composition of any two consecutive boundary operators be trivial. That is, for all n,
{\displaystyle \partial _{n}\circ \partial _{n+1}=0_{n+1,n-1},}
i.e., the constant map sending every element of {\displaystyle C_{n+1}} to the group identity in {\displaystyle C_{n-1}.}
The statement that the boundary of a boundary is trivial is equivalent to the statement that {\displaystyle \mathrm {im} (\partial _{n+1})\subseteq \ker(\partial _{n})}, where {\displaystyle \mathrm {im} (\partial _{n+1})} denotes the image of the boundary operator and {\displaystyle \ker(\partial _{n})} its kernel. Elements of {\displaystyle B_{n}(X)=\mathrm {im} (\partial _{n+1})} are called boundaries and elements of {\displaystyle Z_{n}(X)=\ker(\partial _{n})} are called cycles.
Since each chain group Cn is abelian, all its subgroups are normal. Then, because {\displaystyle \ker(\partial _{n})} is a subgroup of Cn, it is abelian, and since {\displaystyle \mathrm {im} (\partial _{n+1})\subseteq \ker(\partial _{n})}, the image {\displaystyle \mathrm {im} (\partial _{n+1})} is a normal subgroup of {\displaystyle \ker(\partial _{n})}. One can then form the quotient group
{\displaystyle H_{n}(X):=\ker(\partial _{n})/\mathrm {im} (\partial _{n+1})=Z_{n}(X)/B_{n}(X),}
called the nth homology group of X. The elements of Hn(X) are called homology classes. Each homology class is an equivalence class of cycles, and two cycles in the same homology class are said to be homologous.
A chain complex is said to be exact if the image of the (n+1)th map is always equal to the kernel of the nth map. The homology groups of X therefore measure "how far" the chain complex associated to X is from being exact.
The reduced homology groups of a chain complex C(X) are defined as the homologies of the augmented chain complex
{\displaystyle \dotsb {\overset {\partial _{n+1}}{\longrightarrow \,}}C_{n}{\overset {\partial _{n}}{\longrightarrow \,}}C_{n-1}{\overset {\partial _{n-1}}{\longrightarrow \,}}\dotsb {\overset {\partial _{2}}{\longrightarrow \,}}C_{1}{\overset {\partial _{1}}{\longrightarrow \,}}C_{0}{\overset {\epsilon }{\longrightarrow \,}}\mathbb {Z} {\longrightarrow \,}0}
where the boundary operator {\displaystyle \epsilon } is
{\displaystyle \epsilon \left(\sum _{i}n_{i}\sigma _{i}\right)=\sum _{i}n_{i}}
for a combination {\displaystyle \sum n_{i}\sigma _{i}} of points {\displaystyle \sigma _{i}}, which are the fixed generators of C0. The reduced homology groups {\displaystyle {\tilde {H}}_{i}(X)} coincide with {\displaystyle H_{i}(X)} for {\displaystyle i\neq 0.} The extra {\displaystyle \mathbb {Z} } in the chain complex represents the unique map {\displaystyle [\emptyset ]\longrightarrow X} from the empty simplex to X.
Computing the cycle groups {\displaystyle Z_{n}(X)} and boundary groups {\displaystyle B_{n}(X)} is usually rather difficult since they have a very large number of generators. On the other hand, there are tools which make the task easier.
The simplicial homology groups Hn(X) of a simplicial complex X are defined using the simplicial chain complex C(X), with Cn(X) the free abelian group generated by the n-simplices of X. See simplicial homology for details.
The singular homology groups Hn(X) are defined for any topological space X, and agree with the simplicial homology groups for a simplicial complex.
Cohomology groups are formally similar to homology groups: one starts with a cochain complex, which is the same as a chain complex but whose arrows, now denoted {\displaystyle d^{n},} point in the direction of increasing n rather than decreasing n; then the groups {\displaystyle \ker \left(d^{n}\right)=Z^{n}(X)} of cocycles and {\displaystyle \mathrm {im} \left(d^{n-1}\right)=B^{n}(X)} of coboundaries follow from the same description. The nth cohomology group of X is then the quotient group
{\displaystyle H^{n}(X)=Z^{n}(X)/B^{n}(X),}
in analogy with the nth homology group.
== Homology vs. homotopy ==
The nth homotopy group {\displaystyle \pi _{n}(X)} of a topological space {\displaystyle X} is the group of homotopy classes of basepoint-preserving maps from the {\displaystyle n}-sphere {\displaystyle S^{n}} to {\displaystyle X}, under the group operation of concatenation. The most fundamental homotopy group is the fundamental group {\displaystyle \pi _{1}(X)}. For connected {\displaystyle X}, the Hurewicz theorem describes a homomorphism {\displaystyle h_{*}:\pi _{n}(X)\to H_{n}(X)} called the Hurewicz homomorphism. For {\displaystyle n>1}, this homomorphism can be complicated, but when {\displaystyle n=1}, the Hurewicz homomorphism coincides with abelianization. That is, {\displaystyle h_{*}:\pi _{1}(X)\to H_{1}(X)} is surjective and its kernel is the commutator subgroup of {\displaystyle \pi _{1}(X)}, with the consequence that {\displaystyle H_{1}(X)} is isomorphic to the abelianization of {\displaystyle \pi _{1}(X)}. Higher homotopy groups are sometimes difficult to compute. For instance, the homotopy groups of spheres are poorly understood and are not known in general, in contrast to the straightforward description given above for the homology groups.
For an {\displaystyle n=1} example, suppose {\displaystyle X} is the figure eight. As usual, its first homotopy group, or fundamental group, {\displaystyle \pi _{1}(X)} is the group of homotopy classes of directed loops starting and ending at a predetermined point (e.g. its center). It is isomorphic to the free group of rank 2, {\displaystyle \pi _{1}(X)\cong \mathbb {Z} *\mathbb {Z} }, which is not commutative: looping around the lefthand cycle and then around the righthand cycle is different from looping around the righthand cycle and then around the lefthand cycle. By contrast, the figure eight's first homology group {\displaystyle H_{1}(X)\cong \mathbb {Z} \times \mathbb {Z} } is abelian. To express this explicitly in terms of homology classes of cycles, one could take the homology class {\displaystyle l} of the lefthand cycle and the homology class {\displaystyle r} of the righthand cycle as basis elements of {\displaystyle H_{1}(X)}, allowing us to write {\displaystyle H_{1}(X)=\{a_{l}l+a_{r}r\,|\;a_{l},a_{r}\in \mathbb {Z} \}}.
== Types of homology ==
The different types of homology theory arise from functors mapping from various categories of mathematical objects to the category of chain complexes. In each case the composition of the functor from objects to chain complexes and the functor from chain complexes to homology groups defines the overall homology functor for the theory.
=== Simplicial homology ===
The motivating example comes from algebraic topology: the simplicial homology of a simplicial complex X. Here the chain group Cn is the free abelian group or free module whose generators are the n-dimensional oriented simplexes of X. The orientation is captured by ordering the complex's vertices and expressing an oriented simplex {\displaystyle \sigma } as an (n+1)-tuple {\displaystyle (\sigma [0],\sigma [1],\dots ,\sigma [n])} of its vertices listed in increasing order (i.e. {\displaystyle \sigma [0]<\sigma [1]<\cdots <\sigma [n]} in the complex's vertex ordering, where {\displaystyle \sigma [i]} is the {\displaystyle i}th vertex appearing in the tuple). The mapping {\displaystyle \partial _{n}} from Cn to Cn−1 is called the boundary mapping and sends the simplex {\displaystyle \sigma =(\sigma [0],\sigma [1],\dots ,\sigma [n])} to the formal sum
{\displaystyle \partial _{n}(\sigma )=\sum _{i=0}^{n}(-1)^{i}\left(\sigma [0],\dots ,\sigma [i-1],\sigma [i+1],\dots ,\sigma [n]\right),}
which is evaluated as 0 if {\displaystyle n=0.}
This behavior on the generators induces a homomorphism on all of Cn as follows. Given an element {\displaystyle c\in C_{n}}, write it as the sum of generators {\textstyle c=\sum _{\sigma _{i}\in X_{n}}m_{i}\sigma _{i},} where {\displaystyle X_{n}} is the set of n-simplexes in X and the mi are coefficients from the ring over which Cn is defined (usually the integers, unless otherwise specified). Then define
{\displaystyle \partial _{n}(c)=\sum _{\sigma _{i}\in X_{n}}m_{i}\partial _{n}(\sigma _{i}).}
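This boundary formula can be implemented directly on formal sums. The sketch below is a hypothetical illustration (not from the original article), representing a chain as a Counter mapping increasing vertex tuples to integer coefficients; it also verifies that applying the boundary twice gives zero.

```python
from collections import Counter

def boundary(chain):
    """Simplicial boundary operator, extended linearly to formal sums.

    Each n-simplex (a tuple of vertices in increasing order) maps to the
    alternating sum of its faces; 0-simplices map to 0.
    """
    out = Counter()
    for simplex, coeff in chain.items():
        if len(simplex) == 1:          # boundary of a point is 0
            continue
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # delete the ith vertex
            out[face] += (-1) ** i * coeff
    # Drop terms whose coefficients cancelled to zero.
    return Counter({s: c for s, c in out.items() if c != 0})

sigma = Counter({(0, 1, 2): 1})          # a single oriented 2-simplex
print(boundary(sigma))                   # (1,2) - (0,2) + (0,1)
print(boundary(boundary(sigma)))         # Counter() -- the boundary of a boundary is 0
```

The cancellation in the second application is exactly the identity {\displaystyle \partial _{n}\circ \partial _{n+1}=0} that makes the simplicial chain groups into a chain complex.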
The dimension of the n-th homology of X turns out to be the number of "holes" in X at dimension n. It may be computed by putting matrix representations of these boundary mappings in Smith normal form.
=== Singular homology ===
Using simplicial homology example as a model, one can define a singular homology for any topological space X. A chain complex for X is defined by taking Cn to be the free abelian group (or free module) whose generators are all continuous maps from n-dimensional simplices into X. The homomorphisms ∂n arise from the boundary maps of simplices.
=== Group homology ===
In abstract algebra, one uses homology to define derived functors, for example the Tor functors. Here one starts with some covariant additive functor F and some module X. The chain complex for X is defined as follows: first find a free module {\displaystyle F_{1}} and a surjective homomorphism {\displaystyle p_{1}:F_{1}\to X.} Then one finds a free module {\displaystyle F_{2}} and a surjective homomorphism {\displaystyle p_{2}:F_{2}\to \ker \left(p_{1}\right).} Continuing in this fashion, a sequence of free modules {\displaystyle F_{n}} and homomorphisms {\displaystyle p_{n}} can be defined. By applying the functor F to this sequence, one obtains a chain complex; the homology {\displaystyle H_{n}} of this complex depends only on F and X and is, by definition, the nth derived functor of F applied to X.
A common use of group (co)homology {\displaystyle H^{2}(G,M)} is to classify the possible extension groups E which contain a given G-module M as a normal subgroup and have a given quotient group G, so that {\displaystyle G=E/M.}
=== Other homology theories ===
== Homology functors ==
Chain complexes form a category: a morphism from the chain complex ({\displaystyle d_{n}:A_{n}\to A_{n-1}}) to the chain complex ({\displaystyle e_{n}:B_{n}\to B_{n-1}}) is a sequence of homomorphisms {\displaystyle f_{n}:A_{n}\to B_{n}} such that {\displaystyle f_{n-1}\circ d_{n}=e_{n}\circ f_{n}} for all n. The nth homology Hn can be viewed as a covariant functor from the category of chain complexes to the category of abelian groups (or modules).
If the chain complex depends on the object X in a covariant manner (meaning that any morphism
X
→
Y
{\displaystyle X\to Y}
induces a morphism from the chain complex of X to the chain complex of Y), then the Hn are covariant functors from the category that X belongs to into the category of abelian groups (or modules).
The only difference between homology and cohomology is that in cohomology the chain complexes depend in a contravariant manner on X, and that therefore the homology groups (which are called cohomology groups in this context and denoted by Hn) form contravariant functors from the category that X belongs to into the category of abelian groups or modules.
== Properties ==
If ({\displaystyle d_{n}:A_{n}\to A_{n-1}}) is a chain complex such that all but finitely many An are zero, and the others are finitely generated abelian groups (or finite-dimensional vector spaces), then we can define the Euler characteristic
{\displaystyle \chi =\sum (-1)^{n}\,\mathrm {rank} (A_{n})}
(using the rank in the case of abelian groups and the Hamel dimension in the case of vector spaces). It turns out that the Euler characteristic can also be computed on the level of homology:
{\displaystyle \chi =\sum (-1)^{n}\,\mathrm {rank} (H_{n})}
and, especially in algebraic topology, this provides two ways to compute the important invariant {\displaystyle \chi } for the object X which gave rise to the chain complex.
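The agreement of the two formulas can be checked on a small example. The following Python snippet (a hypothetical check, not part of the original text) computes both alternating sums for the simplicial chain complex of a solid triangle and confirms they coincide.

```python
import numpy as np

# Chain complex of a solid triangle: rank(C_0, C_1, C_2) = (3, 3, 1).
d1 = np.array([[-1, -1, 0], [1, 0, -1], [0, 1, 1]])   # C_1 -> C_0
d2 = np.array([[1], [-1], [1]])                        # C_2 -> C_1

# Euler characteristic from the chain groups...
chi_chain = 3 - 3 + 1

# ...and from homology: Betti numbers over the rationals.
r1, r2 = np.linalg.matrix_rank(d1), np.linalg.matrix_rank(d2)
b0 = 3 - r1             # rank H_0
b1 = (3 - r1) - r2      # rank H_1
b2 = 1 - r2             # rank H_2 (no 3-cells, so im d_3 = 0)
chi_homology = b0 - b1 + b2

assert chi_chain == chi_homology == 1   # the disk is contractible, chi = 1
```

The equality holds in general because passing to homology removes rank from a chain group and its neighbor in matched pairs, which cancel in the alternating sum.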
Every short exact sequence
{\displaystyle 0\rightarrow A\rightarrow B\rightarrow C\rightarrow 0}
of chain complexes gives rise to a long exact sequence of homology groups
{\displaystyle \cdots \to H_{n}(A)\to H_{n}(B)\to H_{n}(C)\to H_{n-1}(A)\to H_{n-1}(B)\to H_{n-1}(C)\to H_{n-2}(A)\to \cdots }
All maps in this long exact sequence are induced by the maps between the chain complexes, except for the maps {\displaystyle H_{n}(C)\to H_{n-1}(A)}. The latter are called connecting homomorphisms and are provided by the zig-zag lemma. This lemma can be applied to homology in numerous ways that aid in calculating homology groups, such as the theories of relative homology and Mayer–Vietoris sequences.
== Applications ==
=== Application in pure mathematics ===
Notable theorems proved using homology include the following:
The Brouwer fixed point theorem: if f is any continuous map from the ball Bn to itself, then there is a fixed point {\displaystyle a\in B^{n}} with {\displaystyle f(a)=a.}
Invariance of domain: if U is an open subset of {\displaystyle \mathbb {R} ^{n}} and {\displaystyle f:U\to \mathbb {R} ^{n}} is an injective continuous map, then {\displaystyle V=f(U)} is open and f is a homeomorphism between U and V.
The hairy ball theorem: any continuous vector field on the 2-sphere (or more generally, the 2k-sphere for any {\displaystyle k\geq 1}) vanishes at some point.
The Borsuk–Ulam theorem: any continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. (Two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.)
Invariance of dimension: if non-empty open subsets {\displaystyle U\subseteq \mathbb {R} ^{m}} and {\displaystyle V\subseteq \mathbb {R} ^{n}} are homeomorphic, then {\displaystyle m=n.}
=== Application in science and engineering ===
In topological data analysis, data sets are regarded as a point cloud sampling of a manifold or algebraic variety embedded in Euclidean space. By linking nearest neighbor points in the cloud into a triangulation, a simplicial approximation of the manifold is created and its simplicial homology may be calculated. Finding techniques to robustly calculate homology using various triangulation strategies over multiple length scales is the topic of persistent homology.
In sensor networks, sensors may communicate information via an ad-hoc network that dynamically changes in time. To understand the global context of this set of local measurements and communication paths, it is useful to compute the homology of the network topology to evaluate, for instance, holes in coverage.
In dynamical systems theory in physics, Poincaré was one of the first to consider the interplay between the invariant manifold of a dynamical system and its topological invariants. Morse theory relates the dynamics of a gradient flow on a manifold to, for example, its homology. Floer homology extended this to infinite-dimensional manifolds. The KAM theorem established that periodic orbits can follow complex trajectories; in particular, they may form braids that can be investigated using Floer homology.
In one class of finite element methods, boundary-value problems for differential equations involving the Hodge-Laplace operator may need to be solved on topologically nontrivial domains, for example, in electromagnetic simulations. In these simulations, solution is aided by fixing the cohomology class of the solution based on the chosen boundary conditions and the homology of the domain. FEM domains can be triangulated, from which the simplicial homology can be calculated.
== Software ==
Various software packages have been developed for the purposes of computing homology groups of finite cell complexes. Linbox is a C++ library for performing fast matrix operations, including Smith normal form; it interfaces with both GAP and Maple. CHomP, CAPD::RedHom and Perseus are also written in C++. All three implement pre-processing algorithms based on simple-homotopy equivalence and discrete Morse theory to perform homology-preserving reductions of the input cell complexes before resorting to matrix algebra. Kenzo is written in Lisp, and in addition to homology it may also be used to generate presentations of homotopy groups of finite simplicial complexes. Gmsh includes a homology solver for finite element meshes, which can generate cohomology bases directly usable by finite element software.
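The matrix step these packages share can be sketched in a few lines: bring an integer boundary matrix to diagonal form by row and column operations, then read off free ranks and torsion. The reduction below is a simplified Smith-style diagonalization (it does not enforce the divisibility chain d1 | d2 | …, which suffices for small examples), applied to one standard choice of second boundary matrix for a Δ-complex on the Klein bottle:

```python
def diagonalize(M):
    """Diagonalize an integer matrix by elementary row/column operations.

    A simplified Smith-normal-form reduction: the result is diagonal,
    which is enough to read off ranks and torsion in small examples
    (the divisibility chain of true Smith normal form is not enforced).
    """
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # pick the entry of smallest nonzero magnitude as the pivot
        pivot = None
        for i in range(t, m):
            for j in range(t, n):
                if A[i][j] and (pivot is None
                                or abs(A[i][j]) < abs(A[pivot[0]][pivot[1]])):
                    pivot = (i, j)
        if pivot is None:
            break
        i, j = pivot
        A[t], A[i] = A[i], A[t]            # move pivot to position (t, t)
        for row in A:
            row[t], row[j] = row[j], row[t]
        if A[t][t] < 0:
            A[t] = [-x for x in A[t]]
        clean = True
        for i in range(t + 1, m):          # clear column t below the pivot
            q = A[i][t] // A[t][t]
            if q:
                A[i] = [x - q * y for x, y in zip(A[i], A[t])]
            clean = clean and A[i][t] == 0
        for j in range(t + 1, n):          # clear row t right of the pivot
            q = A[t][j] // A[t][t]
            if q:
                for i in range(m):
                    A[i][j] -= q * A[i][t]
            clean = clean and A[t][j] == 0
        if clean:                          # otherwise repeat with a smaller pivot
            t += 1
    return A

# d2 for a Klein bottle Delta-complex: edges a, b, c; triangles U, L
# with boundaries dU = a + b - c and dL = a - b + c (one standard choice).
d2 = [[1, 1], [1, -1], [-1, 1]]
D = diagonalize(d2)
print(D)  # [[1, 0], [0, 2], [0, 0]] -> H1 = Z + Z/2
```

The diagonal entries 1 and 2, with one zero row left over, recover exactly the free rank 1 and torsion coefficient 2 of the Klein bottle's first homology discussed below.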
== Background ==
=== Origins ===
Homology theory can be said to start with the Euler polyhedron formula, or Euler characteristic. This was followed by Riemann's definition of genus and n-fold connectedness numerical invariants in 1857 and Betti's proof in 1871 of the independence of "homology numbers" from the choice of basis.
=== Surfaces ===
On the ordinary sphere S2, the curve b in the diagram can be shrunk to the pole, and even the equatorial great circle a can be shrunk in the same way. In fact, any closed curve such as c can be similarly shrunk to a point. This implies that S2 has trivial fundamental group, and as a consequence it also has trivial first homology group.
The torus T2 has closed curves which cannot be continuously deformed into each other; for example, in the diagram none of the cycles a, b or c can be deformed into one another. In particular, cycles a and b cannot be shrunk to a point whereas cycle c can.
If the torus surface is cut along both a and b, it can be opened out and flattened into a rectangle or, more conveniently, a square. One opposite pair of sides represents the cut along a, and the other opposite pair represents the cut along b.
The edges of the square may then be glued back together in different ways. The square can be twisted to allow edges to meet in the opposite direction, as shown by the arrows in the diagram. The various ways of gluing the sides yield just four topologically distinct surfaces:
K2 is the Klein bottle, which is a torus with a twist in it (in the square diagram, the twist can be seen as the reversal of the bottom arrow). It is a theorem that the re-glued surface must self-intersect (when immersed in Euclidean 3-space). Like the torus, cycles a and b cannot be shrunk while c can be. But unlike the torus, following b forwards right round and back reverses left and right, because b happens to cross over the twist given to one join. If an equidistant cut on one side of b is made, it returns on the other side and goes round the surface a second time before returning to its starting point, cutting out a twisted Möbius strip. Because local left and right can be arbitrarily re-oriented in this way, the surface as a whole is said to be non-orientable.
The projective plane P2 has both joins twisted. The uncut form, generally represented as the Boy surface, is visually complex, so a hemispherical embedding is shown in the diagram, in which antipodal points around the rim such as A and A′ are identified as the same point. Again, a is non-shrinkable while c is. If b were only wound once, it would also be non-shrinkable and reverse left and right. However, it is wound a second time, which swaps right and left back again; it can be shrunk to a point and is homologous to c.
Cycles can be joined or added together, as a and b on the torus were when it was cut open and flattened down. In the Klein bottle diagram, a goes round one way and −a goes round the opposite way. If a is thought of as a cut, then −a can be thought of as a gluing operation. Making a cut and then re-gluing it does not change the surface, so a + (−a) = 0.
But now consider two a-cycles. Since the Klein bottle is nonorientable, you can transport one of them all the way round the bottle (along the b-cycle), and it will come back as −a. This is because the Klein bottle is made from a cylinder, whose a-cycle ends are glued together with opposite orientations. Hence 2a = a + a = a + (−a) = 0. This phenomenon is called torsion. Similarly, in the projective plane, following the unshrinkable cycle b round twice remarkably creates a trivial cycle which can be shrunk to a point; that is, b + b = 0. Because b must be followed around twice to achieve a zero cycle, the surface is said to have a torsion coefficient of 2. However, following a b-cycle around twice in the Klein bottle gives simply b + b = 2b, since this cycle lives in a torsion-free homology class. This corresponds to the fact that in the fundamental polygon of the Klein bottle, only one pair of sides is glued with a twist, whereas in the projective plane both sides are twisted.
A square is a contractible topological space, which implies that it has trivial homology. Consequently, additional cuts disconnect it. The square is not the only shape in the plane that can be glued into a surface. Gluing opposite sides of an octagon, for example, produces a surface with two holes. In fact, all closed surfaces can be produced by gluing the sides of some polygon and all even-sided polygons (2n-gons) can be glued to make different manifolds. Conversely, a closed surface with n non-zero classes can be cut into a 2n-gon. Variations are also possible, for example a hexagon may also be glued to form a torus.
The first recognisable theory of homology was published by Henri Poincaré in his seminal paper "Analysis situs", J. Ecole polytech. (2) 1. 1–121 (1895). The paper introduced homology classes and relations. The possible configurations of orientable cycles are classified by the Betti numbers of the manifold (Betti numbers are a refinement of the Euler characteristic). Classifying the non-orientable cycles requires additional information about torsion coefficients.
The complete classification of 1- and 2-manifolds is given in the table.
Notes
For a non-orientable surface, a hole is equivalent to two cross-caps.
Any closed 2-manifold can be realised as the connected sum of g tori and c projective planes, where the 2-sphere S2 is regarded as the empty connected sum. Homology is preserved by the operation of connected sum.
In a search for increased rigour, Poincaré went on to develop the simplicial homology of a triangulated manifold and to create what is now called a simplicial chain complex. Chain complexes (since greatly generalized) form the basis for most modern treatments of homology.
Emmy Noether and, independently, Leopold Vietoris and Walther Mayer further developed the theory of algebraic homology groups in the period 1925–28. The new combinatorial topology formally treated topological classes as abelian groups. Homology groups are finitely generated abelian groups, and homology classes are elements of these groups. The Betti numbers of the manifold are the rank of the free part of the homology group, and in the special case of surfaces, the torsion part of the homology group only occurs for non-orientable cycles.
The subsequent spread of homology groups brought a change of terminology and viewpoint from "combinatorial topology" to "algebraic topology". Algebraic homology remains the primary method of classifying manifolds.
== See also ==
Betti number
Cycle space
De Rham cohomology
Eilenberg–Steenrod axioms
Extraordinary homology theory
Homological algebra
Homological conjectures in commutative algebra
Homological connectivity
Homological dimension
Homotopy group
Künneth theorem
List of cohomology theories – also has a list of homology theories
Poincaré duality
== References ==
== Further reading ==
Cartan, Henri Paul; Eilenberg, Samuel (1956). Homological Algebra. Princeton mathematical series. Vol. 19. Princeton University Press. ISBN 9780674079779. OCLC 529171.
Edelsbrunner, Herbert; Harer, John L. (2010). Computational Topology: An Introduction. American Mathematical Society.
Eilenberg, Samuel; Moore, J.C. (1965). Foundations of relative homological algebra. Memoirs of the American Mathematical Society number. Vol. 55. American Mathematical Society. ISBN 9780821812556. OCLC 1361982.
Gowers, Timothy; Barrow-Green, June; Leader, Imre, eds. (2010), The Princeton Companion to Mathematics, Princeton University Press, ISBN 9781400830398.
Hatcher, A. (2002), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0. Detailed discussion of homology theories for simplicial complexes and manifolds, singular homology, etc.
Hilton, Peter (1988), "A Brief, Subjective History of Homology and Homotopy Theory in This Century", Mathematics Magazine, 60 (5), Mathematical Association of America: 282–291, doi:10.1080/0025570X.1988.11977391, JSTOR 2689545
Kaczynski, Tomasz; Mischaikow, Konstantin; Mrozek, Marian (2004). Computational Homology. Springer. ISBN 9780387215976.
Richeson, D. (2008), Euler's Gem: The Polyhedron Formula and the Birth of Topology, Princeton University.
Spanier, Edwin H. (1966), Algebraic Topology, Springer, p. 155, ISBN 0-387-90646-0.
Stillwell, John (1993), "Homology Theory and Abelianization", Classical Topology and Combinatorial Group Theory, Graduate Texts in Mathematics, vol. 72, Springer, pp. 169–184, doi:10.1007/978-1-4612-4372-4_6, ISBN 978-0-387-97970-0.
Teicher, M., ed. (1999), The Heritage of Emmy Noether, Israel Mathematical Conference Proceedings, Bar-Ilan University/American Mathematical Society/Oxford University Press, ISBN 978-0-19-851045-1, OCLC 223099225
Weibel, Charles A. (1999), "28. History of Homological Algebra" (PDF), in James, I. M. (ed.), History of Topology, Elsevier, ISBN 9780080534077.
== External links ==
Homology group at Encyclopaedia of Mathematics
Allen Hatcher, Algebraic Topology – Chapter 2 on homology
In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function.
Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem.
The lemma is named after the mathematician Pavel Samuilovich Urysohn.
== Discussion ==
Two subsets A and B of a topological space X are said to be separated by neighbourhoods if there are neighbourhoods U of A and V of B that are disjoint. In particular A and B are necessarily disjoint.
Two plain subsets A and B are said to be separated by a continuous function if there exists a continuous function f : X → [0, 1] from X into the unit interval [0, 1] such that f(a) = 0 for all a ∈ A and f(b) = 1 for all b ∈ B. Any such function is called a Urysohn function for A and B. In particular A and B are necessarily disjoint.
It follows that if two subsets A and B are separated by a function then so are their closures. Also it follows that if two subsets A and B are separated by a function then A and B are separated by neighbourhoods.
A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function.
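In a metric space there is a direct formula with no need for the dyadic construction of the general proof: f(x) = d(x, A) / (d(x, A) + d(x, B)) is continuous, equal to 0 on A and 1 on B. A sketch for finite subsets of the real line (the finite sets and the choice of R are illustrative assumptions):

```python
def dist(x, S):
    """Distance from the point x to the finite set S."""
    return min(abs(x - s) for s in S)

def urysohn(x, A, B):
    """Urysohn function for disjoint closed sets in a metric space:
    continuous, 0 on A, 1 on B. The denominator is nonzero because
    A and B are disjoint and closed (here: finite)."""
    dA, dB = dist(x, A), dist(x, B)
    return dA / (dA + dB)

A, B = {0.0}, {1.0}
print(urysohn(0.0, A, B), urysohn(1.0, A, B), urysohn(0.25, A, B))
# 0.0 1.0 0.25
```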
The sets A and B need not be precisely separated by f, i.e., it is not required, and in general not guaranteed, that f(x) ≠ 0 and f(x) ≠ 1 for x outside A and B.
A topological space X in which every two disjoint closed subsets A and B are precisely separated by a continuous function is perfectly normal.
Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff.
== Formal statement ==
A topological space X is normal if and only if, for any two non-empty closed disjoint subsets A and B of X, there exists a continuous map f : X → [0, 1] such that f(A) = {0} and f(B) = {1}.
== Proof sketch ==
The proof proceeds by repeatedly applying the following alternate characterization of normality. If X is a normal space, Z is an open subset of X, and Y ⊆ Z is closed, then there exists an open U and a closed V such that Y ⊆ U ⊆ V ⊆ Z.
Let A and B be disjoint closed subsets of X. The main idea of the proof is to repeatedly apply this characterization of normality to A and B∁, continuing with the new sets built on every step.
The sets we build are indexed by dyadic fractions. For every dyadic fraction r ∈ (0, 1), we construct an open subset U(r) and a closed subset V(r) of X such that:
A ⊆ U(r) and V(r) ⊆ B∁ for all r,
U(r) ⊆ V(r) for all r,
For r < s, V(r) ⊆ U(s).
Intuitively, the sets U(r) and V(r) expand outwards in layers from A:

A ⊆ B∁
A ⊆ U(1/2) ⊆ V(1/2) ⊆ B∁
A ⊆ U(1/4) ⊆ V(1/4) ⊆ U(1/2) ⊆ V(1/2) ⊆ U(3/4) ⊆ V(3/4) ⊆ B∁
This construction proceeds by mathematical induction. For the base step, we define two extra sets U(1) = B∁ and V(0) = A.
Now assume that n ≥ 0 and that the sets U(k/2^n) and V(k/2^n) have already been constructed for k ∈ {1, …, 2^n − 1}. Note that this is vacuously satisfied for n = 0. Since X is normal, for any a ∈ {0, 1, …, 2^n − 1}, we can find an open set and a closed set such that
V(a/2^n) ⊆ U((2a+1)/2^(n+1)) ⊆ V((2a+1)/2^(n+1)) ⊆ U((a+1)/2^n)
The above three conditions are then verified.
Once we have these sets, we define f(x) = 1 if x ∉ U(r) for every r; otherwise f(x) = inf{r : x ∈ U(r)} for every x ∈ X, where inf denotes the infimum. Using the fact that the dyadic rationals are dense, it is then not too hard to show that f is continuous and has the property f(A) ⊆ {0} and f(B) ⊆ {1}. This step requires the V(r) sets in order to work.
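The infimum definition can be simulated in a concrete case. Take X = [0, 1], A = {0}, B = {1}, and the (purely illustrative) nested family U(r) = [0, r); then f(x) = inf{r : x ∈ U(r)} = x. A sketch approximating the infimum over dyadic fractions of a fixed depth:

```python
def f_dyadic(x, depth=10):
    """Approximate f(x) = inf{ r : x in U(r) } with U(r) = [0, r),
    using dyadic fractions k / 2**depth together with r = 1.
    Returns 1 when x lies in no U(r), as in the construction."""
    candidates = [k / 2**depth for k in range(1, 2**depth)] + [1.0]
    hits = [r for r in candidates if x < r]   # x in U(r) = [0, r)
    return min(hits) if hits else 1.0

# f_dyadic recovers f(x) = x up to the dyadic grid spacing 2**-depth
print(f_dyadic(0.3), f_dyadic(0.0), f_dyadic(1.0))
```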
The Mizar project has completely formalised and automatically checked a proof of Urysohn's lemma in the URYSOHN3 file.
== See also ==
Mollifier
== Notes ==
== References ==
Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
== External links ==
"Urysohn lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Mizar system proof: https://www.mizar.org/version/current/html/urysohn3.html#T20 | Wikipedia/Urysohn's_lemma |
In the mathematical field of point-set topology, a continuum (plural: "continua") is a nonempty compact connected metric space, or, less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua.
== Definitions ==
A continuum that contains more than one point is called nondegenerate.
A subset A of a continuum X such that A itself is a continuum is called a subcontinuum of X. A space homeomorphic to a subcontinuum of the Euclidean plane R2 is called a planar continuum.
A continuum X is homogeneous if for every two points x and y in X, there exists a homeomorphism h: X → X such that h(x) = y.
A Peano continuum is a continuum that is locally connected at each point.
An indecomposable continuum is a continuum that cannot be represented as the union of two proper subcontinua. A continuum X is hereditarily indecomposable if every subcontinuum of X is indecomposable.
The dimension of a continuum usually means its topological dimension. A one-dimensional continuum is often called a curve.
== Examples ==
An arc is a space homeomorphic to the closed interval [0,1]. If h: [0,1] → X is a homeomorphism and h(0) = p and h(1) = q then p and q are called the endpoints of X; one also says that X is an arc from p to q. An arc is the simplest and most familiar type of a continuum. It is one-dimensional, arcwise connected, and locally connected.
The topologist's sine curve is a subset of the plane that is the union of the graph of the function f(x) = sin(1/x), 0 < x ≤ 1 with the segment −1 ≤ y ≤ 1 of the y-axis. It is a one-dimensional continuum that is not arcwise connected, and it is locally disconnected at the points along the y-axis.
The Warsaw circle is obtained by "closing up" the topologist's sine curve by an arc connecting (0,−1) and (1,sin(1)). It is a one-dimensional continuum whose homotopy groups are all trivial, but it is not a contractible space.
An n-cell is a space homeomorphic to the closed ball in the Euclidean space Rn. It is contractible and is the simplest example of an n-dimensional continuum.
An n-sphere is a space homeomorphic to the standard n-sphere in the (n + 1)-dimensional Euclidean space. It is an n-dimensional homogeneous continuum that is not contractible, and therefore different from an n-cell.
The Hilbert cube is an infinite-dimensional continuum.
Solenoids are among the simplest examples of indecomposable homogeneous continua. They are neither arcwise connected nor locally connected.
The Sierpinski carpet, also known as the Sierpinski universal curve, is a one-dimensional planar Peano continuum that contains a homeomorphic image of any one-dimensional planar continuum.
The pseudo-arc is a homogeneous hereditarily indecomposable planar continuum.
== Properties ==
There are two fundamental techniques for constructing continua, by means of nested intersections and inverse limits.
If {Xn} is a nested family of continua, i.e. Xn ⊇ Xn+1, then their intersection is a continuum.
If {(Xn, fn)} is an inverse sequence of continua Xn, called the coordinate spaces, together with continuous maps fn: Xn+1 → Xn, called the bonding maps, then its inverse limit is a continuum.
A finite or countable product of continua is a continuum.
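The inverse-limit construction above can be made concrete for the dyadic solenoid: take every coordinate space to be the circle R/Z and every bonding map to be the doubling map f(x) = 2x mod 1, so a point of the solenoid is a sequence (x0, x1, …) with x_n = f(x_{n+1}). A sketch checking finite prefixes of such threads (the doubling map and the tolerance are illustrative choices):

```python
def is_thread(xs, tol=1e-12):
    """Check that a finite prefix (x0, ..., xk) of a sequence of circle
    points (reals mod 1) is consistent with the doubling bonding map:
    x_n must equal 2 * x_{n+1} mod 1 for every n."""
    return all(abs((2 * xs[n + 1]) % 1.0 - xs[n]) < tol
               for n in range(len(xs) - 1))

# Two distinct threads over the same base point 0.5 -- each point has
# two preimages at every stage, which is why solenoid fibres are Cantor sets.
print(is_thread([0.5, 0.25, 0.125]),
      is_thread([0.5, 0.75, 0.875]),
      is_thread([0.5, 0.3]))  # True True False
```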
== See also ==
Linear continuum
Menger sponge
Shape theory (mathematics)
== References ==
== Sources ==
Sam B. Nadler, Jr, Continuum theory. An introduction. Pure and Applied Mathematics, Marcel Dekker. ISBN 0-8247-8659-9.
== External links ==
Open problems in continuum theory
Examples in continuum theory
Continuum Theory and Topological Dynamics, M. Barge and J. Kennedy, in Open Problems in Topology, J. van Mill and G.M. Reed (Editors) Elsevier Science Publishers B.V. (North-Holland), 1990.
Hyperspacewiki | Wikipedia/Continuum_(topology) |
Algebraic K-theory is a subject area in mathematics with connections to geometry, topology, ring theory, and number theory. Geometric, algebraic, and arithmetic objects are assigned objects called K-groups. These are groups in the sense of abstract algebra. They contain detailed information about the original object but are notoriously difficult to compute; for example, an important outstanding problem is to compute the K-groups of the integers.
K-theory was discovered in the late 1950s by Alexander Grothendieck in his study of intersection theory on algebraic varieties. In the modern language, Grothendieck defined only K0, the zeroth K-group, but even this single group has plenty of applications, such as the Grothendieck–Riemann–Roch theorem. Intersection theory is still a motivating force in the development of (higher) algebraic K-theory through its links with motivic cohomology and specifically Chow groups. The subject also includes classical number-theoretic topics like quadratic reciprocity and embeddings of number fields into the real numbers and complex numbers, as well as more modern concerns like the construction of higher regulators and special values of L-functions.
The lower K-groups were discovered first, in the sense that adequate descriptions of these groups in terms of other algebraic structures were found. For example, if F is a field, then K0(F) is isomorphic to the integers Z and is closely related to the notion of vector space dimension. For a commutative ring R, the group K0(R) is related to the Picard group of R, and when R is the ring of integers in a number field, this generalizes the classical construction of the class group. The group K1(R) is closely related to the group of units R×, and if R is a field, it is exactly the group of units. For a number field F, the group K2(F) is related to class field theory, the Hilbert symbol, and the solvability of quadratic equations over completions. In contrast, finding the correct definition of the higher K-groups of rings was a difficult achievement of Daniel Quillen, and many of the basic facts about the higher K-groups of algebraic varieties were not known until the work of Robert Thomason.
== History ==
The history of K-theory was detailed by Charles Weibel.
=== The Grothendieck group K0 ===
In the 19th century, Bernhard Riemann and his student Gustav Roch proved what is now known as the Riemann–Roch theorem. If X is a Riemann surface, then the sets of meromorphic functions and meromorphic differential forms on X form vector spaces. A line bundle on X determines subspaces of these vector spaces, and if X is projective, then these subspaces are finite dimensional. The Riemann–Roch theorem states that the difference in dimensions between these subspaces is equal to the degree of the line bundle (a measure of twistedness) plus one minus the genus of X. In the mid-20th century, the Riemann–Roch theorem was generalized by Friedrich Hirzebruch to all algebraic varieties. In Hirzebruch's formulation, the Hirzebruch–Riemann–Roch theorem, the theorem became a statement about Euler characteristics: The Euler characteristic of a vector bundle on an algebraic variety (which is the alternating sum of the dimensions of its cohomology groups) equals the Euler characteristic of the trivial bundle plus a correction factor coming from characteristic classes of the vector bundle. This is a generalization because on a projective Riemann surface, the Euler characteristic of a line bundle equals the difference in dimensions mentioned previously, the Euler characteristic of the trivial bundle is one minus the genus, and the only nontrivial characteristic class is the degree.
The subject of K-theory takes its name from a 1957 construction of Alexander Grothendieck which appeared in the Grothendieck–Riemann–Roch theorem, his generalization of Hirzebruch's theorem. Let X be a smooth algebraic variety. To each vector bundle on X, Grothendieck associates an invariant, its class. The set of all classes on X was called K(X) from the German Klasse. By definition, K(X) is a quotient of the free abelian group on isomorphism classes of vector bundles on X, and so it is an abelian group. If the basis element corresponding to a vector bundle V is denoted [V], then for each short exact sequence of vector bundles:
0 → V′ → V → V″ → 0,
Grothendieck imposed the relation [V] = [V′] + [V″]. These generators and relations define K(X), and they imply that it is the universal way to assign invariants to vector bundles in a way compatible with exact sequences.
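Over a point, every short exact sequence of vector spaces splits, so K of a point is the group completion of the monoid (N, +) of dimensions. A sketch of group completion by formal differences (the pair representation is the standard construction; natural-number dimensions stand in for isomorphism classes of bundles):

```python
def normalize(p):
    """Canonical representative of the formal difference a - b of
    natural numbers, i.e. of the class [V] - [W] with dim V = a, dim W = b."""
    a, b = p
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    return normalize((p[0] + q[0], p[1] + q[1]))

def neg(p):
    """The inverse that group completion adds to the monoid."""
    return normalize((p[1], p[0]))

# [k^5] - [k^2] behaves like the integer 3, and classes cancel:
x = normalize((5, 2))
print(x, add(x, neg(x)))  # (3, 0) (0, 0)
```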
Grothendieck took the perspective that the Riemann–Roch theorem is a statement about morphisms of varieties, not the varieties themselves. He proved that there is a homomorphism from K(X) to the Chow groups of X coming from the Chern character and Todd class of X. Additionally, he proved that a proper morphism f : X → Y to a smooth variety Y determines a homomorphism f* : K(X) → K(Y) called the pushforward. This gives two ways of determining an element in the Chow group of Y from a vector bundle on X: Starting from X, one can first compute the pushforward in K-theory and then apply the Chern character and Todd class of Y, or one can first apply the Chern character and Todd class of X and then compute the pushforward for Chow groups. The Grothendieck–Riemann–Roch theorem says that these are equal. When Y is a point, a vector bundle is a vector space, the class of a vector space is its dimension, and the Grothendieck–Riemann–Roch theorem specializes to Hirzebruch's theorem.
The group K(X) is now known as K0(X). Upon replacing vector bundles by projective modules, K0 also became defined for non-commutative rings, where it had applications to group representations. Atiyah and Hirzebruch quickly transported Grothendieck's construction to topology and used it to define topological K-theory. Topological K-theory was one of the first examples of an extraordinary cohomology theory: It associates to each topological space X (satisfying some mild technical constraints) a sequence of groups Kn(X) which satisfy all the Eilenberg–Steenrod axioms except the normalization axiom. The setting of algebraic varieties, however, is much more rigid, and the flexible constructions used in topology were not available. While the group K0 seemed to satisfy the necessary properties to be the beginning of a cohomology theory of algebraic varieties and of non-commutative rings, there was no clear definition of the higher Kn(X). Even as such definitions were developed, technical issues surrounding restriction and gluing usually forced Kn to be defined only for rings, not for varieties.
=== K0, K1, and K2 ===
A group closely related to K1 for group rings was earlier introduced by J.H.C. Whitehead. Henri Poincaré had attempted to define the Betti numbers of a manifold in terms of a triangulation. His methods, however, had a serious gap: Poincaré could not prove that two triangulations of a manifold always yielded the same Betti numbers. It was clearly true that Betti numbers were unchanged by subdividing the triangulation, and therefore it was clear that any two triangulations that shared a common subdivision had the same Betti numbers. What was not known was that any two triangulations admitted a common subdivision. This hypothesis became a conjecture known as the Hauptvermutung (roughly "main conjecture"). The fact that triangulations were stable under subdivision led J.H.C. Whitehead to introduce the notion of simple homotopy type. A simple homotopy equivalence is defined in terms of adding simplices or cells to a simplicial complex or cell complex in such a way that each additional simplex or cell deformation retracts into a subdivision of the old space. Part of the motivation for this definition is that a subdivision of a triangulation is simple homotopy equivalent to the original triangulation, and therefore two triangulations that share a common subdivision must be simple homotopy equivalent. Whitehead proved that simple homotopy equivalence is a finer invariant than homotopy equivalence by introducing an invariant called the torsion. The torsion of a homotopy equivalence takes values in a group now called the Whitehead group and denoted Wh(π), where π is the fundamental group of the target complex. Whitehead found examples of non-trivial torsion and thereby proved that some homotopy equivalences were not simple. The Whitehead group was later discovered to be a quotient of K1(Zπ), where Zπ is the integral group ring of π. Later John Milnor used Reidemeister torsion, an invariant related to Whitehead torsion, to disprove the Hauptvermutung.
The first adequate definition of K1 of a ring was made by Hyman Bass and Stephen Schanuel. In topological K-theory, K1 is defined using vector bundles on a suspension of the space. All such vector bundles come from the clutching construction, where two trivial vector bundles on two halves of a space are glued along a common strip of the space. This gluing data is expressed using the general linear group, but elements of that group coming from elementary matrices (matrices corresponding to elementary row or column operations) define equivalent gluings. Motivated by this, the Bass–Schanuel definition of K1 of a ring R is GL(R) / E(R), where GL(R) is the infinite general linear group (the union of all GLn(R)) and E(R) is the subgroup of elementary matrices. They also provided a definition of K0 of a homomorphism of rings and proved that K0 and K1 could be fit together into an exact sequence similar to the relative homology exact sequence.
Work in K-theory from this period culminated in Bass' book Algebraic K-theory. In addition to providing a coherent exposition of the results then known, Bass improved many of the statements of the theorems. Of particular note is that Bass, building on his earlier work with Murthy, provided the first proof of what is now known as the fundamental theorem of algebraic K-theory. This is a four-term exact sequence relating K0 of a ring R to K1 of R, the polynomial ring R[t], and the localization R[t, t−1]. Bass recognized that this theorem provided a description of K0 entirely in terms of K1. By applying this description recursively, he produced negative K-groups K−n(R). In independent work, Max Karoubi gave another definition of negative K-groups for certain categories and proved that his definitions yielded the same groups as those of Bass.
The next major development in the subject came with the definition of K2. Steinberg studied the universal central extensions of a Chevalley group over a field and gave an explicit presentation of this group in terms of generators and relations. In the case of the group En(k) of elementary matrices, the universal central extension is now written Stn(k) and called the Steinberg group. In the spring of 1967, John Milnor defined K2(R) to be the kernel of the homomorphism St(R) → E(R). The group K2 further extended some of the exact sequences known for K1 and K0, and it had striking applications to number theory. Hideya Matsumoto's 1968 thesis showed that for a field F, K2(F) was isomorphic to:
{\displaystyle F^{\times }\otimes _{\mathbf {Z} }F^{\times }/\langle x\otimes (1-x)\colon x\in F\setminus \{0,1\}\rangle .}
This relation is also satisfied by the Hilbert symbol, which expresses the solvability of quadratic equations over local fields. In particular, John Tate was able to prove that the computation of K2(Q) essentially reduces to the law of quadratic reciprocity.
=== Higher K-groups ===
In the late 1960s and early 1970s, several definitions of higher K-theory were proposed. Swan and Gersten both produced definitions of Kn for all n, and Gersten proved that his and Swan's theories were equivalent, but the two theories were not known to satisfy all the expected properties. Nobile and Villamayor also proposed a definition of higher K-groups. Karoubi and Villamayor defined well-behaved K-groups for all n, but their equivalent of K1 was sometimes a proper quotient of the Bass–Schanuel K1. Their K-groups are now called KVn and are related to homotopy-invariant modifications of K-theory.
Inspired in part by Matsumoto's theorem, Milnor made a definition of the higher K-groups of a field. He referred to his definition as "purely ad hoc", and it neither appeared to generalize to all rings nor did it appear to be the correct definition of the higher K-theory of fields. Much later, it was discovered by Nesterenko and Suslin and by Totaro that Milnor K-theory is actually a direct summand of the true K-theory of the field. Specifically, K-groups have a filtration called the weight filtration, and the Milnor K-theory of a field is the highest weight-graded piece of the K-theory. Additionally, Thomason discovered that there is no analog of Milnor K-theory for a general variety.
The first definition of higher K-theory to be widely accepted was Daniel Quillen's. As part of Quillen's work on the Adams conjecture in topology, he had constructed maps from the classifying spaces BGL(Fq) to the homotopy fiber of ψq − 1, where ψq is the qth Adams operation acting on the classifying space BU. This map is acyclic, and after modifying BGL(Fq) slightly to produce a new space BGL(Fq)+, the map became a homotopy equivalence. This modification was called the plus construction. The Adams operations had been known to be related to Chern classes and to K-theory since the work of Grothendieck, and so Quillen was led to define the K-theory of R as the homotopy groups of BGL(R)+. Not only did this recover K1 and K2, the relation of K-theory to the Adams operations allowed Quillen to compute the K-groups of finite fields.
The classifying space BGL is connected, so Quillen's definition failed to give the correct value for K0. Additionally, it did not give any negative K-groups. Since K0 had a known and accepted definition it was possible to sidestep this difficulty, but it remained technically awkward. Conceptually, the problem was that the definition sprang from GL, which was classically the source of K1. Because GL knows only about gluing vector bundles, not about the vector bundles themselves, it was impossible for it to describe K0.
Inspired by conversations with Quillen, Segal soon introduced another approach to constructing algebraic K-theory under the name of Γ-objects. Segal's approach is a homotopy analog of Grothendieck's construction of K0. Where Grothendieck worked with isomorphism classes of bundles, Segal worked with the bundles themselves and used isomorphisms of the bundles as part of his data. This results in a spectrum whose homotopy groups are the higher K-groups (including K0). However, Segal's approach was only able to impose relations for split exact sequences, not general exact sequences. In the category of projective modules over a ring, every short exact sequence splits, and so Γ-objects could be used to define the K-theory of a ring. However, there are non-split short exact sequences in the category of vector bundles on a variety and in the category of all modules over a ring, so Segal's approach did not apply to all cases of interest.
In the spring of 1972, Quillen found another approach to the construction of higher K-theory which was to prove enormously successful. This new definition began with an exact category, a category satisfying certain formal properties similar to, but slightly weaker than, the properties satisfied by a category of modules or vector bundles. From this he constructed an auxiliary category using a new device called his "Q-construction." Like Segal's Γ-objects, the Q-construction has its roots in Grothendieck's definition of K0. Unlike Grothendieck's definition, however, the Q-construction builds a category, not an abelian group, and unlike Segal's Γ-objects, the Q-construction works directly with short exact sequences. If C is an abelian category, then QC is a category with the same objects as C but whose morphisms are defined in terms of short exact sequences in C. The K-groups of the exact category are the homotopy groups of ΩBQC, the loop space of the geometric realization (taking the loop space corrects the indexing). Quillen additionally proved his "+ = Q theorem" that his two definitions of K-theory agreed with each other. This yielded the correct K0 and led to simpler proofs, but still did not yield any negative K-groups.
All abelian categories are exact categories, but not all exact categories are abelian. Because Quillen was able to work in this more general situation, he was able to use exact categories as tools in his proofs. This technique allowed him to prove many of the basic theorems of algebraic K-theory. Additionally, it was possible to prove that the earlier definitions of Swan and Gersten were equivalent to Quillen's under certain conditions.
K-theory now appeared to be a homology theory for rings and a cohomology theory for varieties. However, many of its basic theorems carried the hypothesis that the ring or variety in question was regular. One of the basic expected relations was a long exact sequence (called the "localization sequence") relating the K-theory of a variety X and an open subset U. Quillen was unable to prove the existence of the localization sequence in full generality. He was, however, able to prove its existence for a related theory called G-theory (or sometimes K′-theory). G-theory had been defined early in the development of the subject by Grothendieck. Grothendieck defined G0(X) for a variety X to be the free abelian group on isomorphism classes of coherent sheaves on X, modulo relations coming from exact sequences of coherent sheaves. In the categorical framework adopted by later authors, the K-theory of a variety is the K-theory of its category of vector bundles, while its G-theory is the K-theory of its category of coherent sheaves. Not only could Quillen prove the existence of a localization exact sequence for G-theory, he could prove that for a regular ring or variety, K-theory equaled G-theory, and therefore K-theory of regular varieties had a localization exact sequence. Since this sequence was fundamental to many of the facts in the subject, regularity hypotheses pervaded early work on higher K-theory.
=== Applications of algebraic K-theory in topology ===
The earliest application of algebraic K-theory to topology was Whitehead's construction of Whitehead torsion. A closely related construction was found by C. T. C. Wall in 1963. Wall found that a space X dominated by a finite complex has a generalized Euler characteristic taking values in a quotient of K0(Zπ), where π is the fundamental group of the space. This invariant is called Wall's finiteness obstruction because X is homotopy equivalent to a finite complex if and only if the invariant vanishes. Laurent Siebenmann in his thesis found an invariant similar to Wall's that gives an obstruction to an open manifold being the interior of a compact manifold with boundary. If two manifolds with boundary M and N have isomorphic interiors (in TOP, PL, or DIFF as appropriate), then the isomorphism between them defines an h-cobordism between M and N.
Whitehead torsion was eventually reinterpreted in a more directly K-theoretic way. This reinterpretation happened through the study of h-cobordisms. Two n-dimensional manifolds M and N are h-cobordant if there exists an (n + 1)-dimensional manifold with boundary W whose boundary is the disjoint union of M and N and for which the inclusions of M and N into W are homotopy equivalences (in the categories TOP, PL, or DIFF). Stephen Smale's h-cobordism theorem asserted that if n ≥ 5, W is compact, and M, N, and W are simply connected, then W is isomorphic to the cylinder M × [0, 1] (in TOP, PL, or DIFF as appropriate). This theorem proved the Poincaré conjecture for n ≥ 5.
If M and N are not assumed to be simply connected, then an h-cobordism need not be a cylinder. The s-cobordism theorem, due independently to Mazur, Stallings, and Barden, explains the general situation: An h-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion M ⊂ W vanishes. This generalizes the h-cobordism theorem because the simple connectedness hypotheses imply that the relevant Whitehead group is trivial. In fact the s-cobordism theorem implies that there is a bijective correspondence between isomorphism classes of h-cobordisms and elements of the Whitehead group.
An obvious question associated with the existence of h-cobordisms is their uniqueness. The natural notion of equivalence is isotopy. Jean Cerf proved that for simply connected smooth manifolds M of dimension at least 5, isotopy of h-cobordisms is the same as a weaker notion called pseudo-isotopy. Hatcher and Wagoner studied the components of the space of pseudo-isotopies and related it to a quotient of K2(Zπ).
The proper context for the s-cobordism theorem is the classifying space of h-cobordisms. If M is a CAT manifold, then HCAT(M) is a space that classifies bundles of h-cobordisms on M. The s-cobordism theorem can be reinterpreted as the statement that the set of connected components of this space is the Whitehead group of π1(M). This space contains strictly more information than the Whitehead group; for example, the connected component of the trivial cobordism describes the possible cylinders on M and in particular is the obstruction to the uniqueness of a homotopy between a manifold and M × [0, 1]. Consideration of these questions led Waldhausen to introduce his algebraic K-theory of spaces. The algebraic K-theory of M is a space A(M) which is defined so that it plays essentially the same role for higher K-groups as K1(Zπ1(M)) does for M. In particular, Waldhausen showed that there is a map from A(M) to a space Wh(M) which generalizes the map K1(Zπ1(M)) → Wh(π1(M)) and whose homotopy fiber is a homology theory.
In order to fully develop A-theory, Waldhausen made significant technical advances in the foundations of K-theory. Waldhausen introduced Waldhausen categories, and for a Waldhausen category C he introduced a simplicial category S⋅C (the S is for Segal) defined in terms of chains of cofibrations in C. This freed the foundations of K-theory from the need to invoke analogs of exact sequences.
=== Algebraic topology and algebraic geometry in algebraic K-theory ===
Quillen suggested to his student Kenneth Brown that it might be possible to create a theory of sheaves of spectra of which K-theory would provide an example. The sheaf of K-theory spectra would, to each open subset of a variety, associate the K-theory of that open subset. Brown developed such a theory for his thesis. Simultaneously, Gersten had the same idea. At a Seattle conference in autumn of 1972, they together discovered a spectral sequence converging from the sheaf cohomology of
{\displaystyle {\mathcal {K}}_{n}}
, the sheaf of Kn-groups on X, to the K-group of the total space. This is now called the Brown–Gersten spectral sequence.
Spencer Bloch, influenced by Gersten's work on sheaves of K-groups, proved that on a regular surface, the cohomology group
{\displaystyle H^{2}(X,{\mathcal {K}}_{2})}
is isomorphic to the Chow group CH2(X) of codimension 2 cycles on X. Inspired by this, Gersten conjectured that for a regular local ring R with fraction field F, Kn(R) injects into Kn(F) for all n. Soon Quillen proved that this is true when R contains a field, and using this he proved that
{\displaystyle H^{p}(X,{\mathcal {K}}_{p})\cong \operatorname {CH} ^{p}(X)}
for all p. This is known as Bloch's formula. While progress has been made on Gersten's conjecture since then, the general case remains open.
Lichtenbaum conjectured that special values of the zeta function of a number field could be expressed in terms of the K-groups of the ring of integers of the field. These special values were known to be related to the étale cohomology of the ring of integers. Quillen therefore generalized Lichtenbaum's conjecture, predicting the existence of a spectral sequence like the Atiyah–Hirzebruch spectral sequence in topological K-theory. Quillen's proposed spectral sequence would start from the étale cohomology of a ring R and, in high enough degrees and after completing at a prime l invertible in R, abut to the l-adic completion of the K-theory of R. In the case studied by Lichtenbaum, the spectral sequence would degenerate, yielding Lichtenbaum's conjecture.
The necessity of localizing at a prime l suggested to Browder that there should be a variant of K-theory with finite coefficients. He introduced K-theory groups Kn(R; Z/lZ) which were Z/lZ-vector spaces, and he found an analog of the Bott element in topological K-theory. Soulé used this theory to construct "étale Chern classes", an analog of topological Chern classes which took elements of algebraic K-theory to classes in étale cohomology. Unlike algebraic K-theory, étale cohomology is highly computable, so étale Chern classes provided an effective tool for detecting the existence of elements in K-theory. William G. Dwyer and Eric Friedlander then invented an analog of K-theory for the étale topology called étale K-theory. For varieties defined over the complex numbers, étale K-theory is isomorphic to topological K-theory. Moreover, étale K-theory admitted a spectral sequence similar to the one conjectured by Quillen. Thomason proved around 1980 that after inverting the Bott element, algebraic K-theory with finite coefficients became isomorphic to étale K-theory.
Throughout the 1970s and early 1980s, K-theory on singular varieties still lacked adequate foundations. While it was believed that Quillen's K-theory gave the correct groups, it was not known that these groups had all of the envisaged properties. For this, algebraic K-theory had to be reformulated. This was done by Thomason in a lengthy monograph which he co-credited to his dead friend Thomas Trobaugh, who he said gave him a key idea in a dream. Thomason combined Waldhausen's construction of K-theory with the foundations of intersection theory described in volume six of Grothendieck's Séminaire de Géométrie Algébrique du Bois Marie. There, K0 was described in terms of complexes of sheaves on algebraic varieties. Thomason discovered that if one worked in the derived category of sheaves, there was a simple description of when a complex of sheaves could be extended from an open subset of a variety to the whole variety. By applying Waldhausen's construction of K-theory to derived categories, Thomason was able to prove that algebraic K-theory had all the expected properties of a cohomology theory.
In 1976, R. Keith Dennis discovered an entirely novel technique for computing K-theory based on Hochschild homology. This was based around the existence of the Dennis trace map, a homomorphism from K-theory to Hochschild homology. While the Dennis trace map seemed to be successful for calculations of K-theory with finite coefficients, it was less successful for rational calculations. Goodwillie, motivated by his "calculus of functors", conjectured the existence of a theory intermediate to K-theory and Hochschild homology. He called this theory topological Hochschild homology because its ground ring should be the sphere spectrum (considered as a ring whose operations are defined only up to homotopy). In the mid-1980s, Bokstedt gave a definition of topological Hochschild homology that satisfied nearly all of Goodwillie's conjectural properties, and this made possible further computations of K-groups. Bokstedt's version of the Dennis trace map was a transformation of spectra K → THH. This transformation factored through the fixed points of a circle action on THH, which suggested a relationship with cyclic homology. In the course of proving an algebraic K-theory analog of the Novikov conjecture, Bokstedt, Hsiang, and Madsen introduced topological cyclic homology, which bore the same relationship to topological Hochschild homology as cyclic homology did to Hochschild homology.
The Dennis trace map to topological Hochschild homology factors through topological cyclic homology, providing an even more detailed tool for calculations. In 1996, Dundas, Goodwillie, and McCarthy proved that topological cyclic homology has in a precise sense the same local structure as algebraic K-theory, so that if a calculation in K-theory or topological cyclic homology is possible, then many other "nearby" calculations follow.
== Lower K-groups ==
The lower K-groups were discovered first, and given various ad hoc descriptions, which remain useful. Throughout, let A be a ring.
=== K0 ===
The functor K0 takes a ring A to the Grothendieck group of the set of isomorphism classes of its finitely generated projective modules, regarded as a monoid under direct sum. Any ring homomorphism A → B gives a map K0(A) → K0(B) by mapping (the class of) a projective A-module M to M ⊗A B, making K0 a covariant functor.
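The group-completion step can be made concrete. The following Python sketch (class name hypothetical) models classes of finitely generated projective modules over a field by their ranks, a cancellative monoid under direct sum, and completes it to a group by passing to formal differences; the result behaves like Z, matching K0(k) ≅ Z.

```python
# Grothendieck group completion of a commutative monoid, modeled on the
# monoid of ranks of finitely generated free modules over a field k
# (direct sum = addition of ranks).  A pair (plus, minus) stands for the
# formal difference [k^plus] - [k^minus].

class K0Class:
    """Element of the group completion; (m1, n1) ~ (m2, n2) iff
    m1 + n2 == m2 + n1 (valid because the rank monoid is cancellative)."""

    def __init__(self, plus, minus=0):
        self.plus, self.minus = plus, minus

    def __add__(self, other):
        return K0Class(self.plus + other.plus, self.minus + other.minus)

    def __neg__(self):
        # inverses exist only after completion: negate a formal difference
        return K0Class(self.minus, self.plus)

    def __eq__(self, other):
        return self.plus + other.minus == other.plus + self.minus

x, y = K0Class(3), K0Class(5)          # classes of k^3 and k^5
assert x + y == K0Class(8)             # [k^3] + [k^5] = [k^8]
assert x + (-x) == K0Class(0)          # every class acquires an inverse
assert (-x) + y == K0Class(2)          # the completion behaves like Z
```

For a general ring the monoid of isomorphism classes need not be cancellative, which is why the completion is defined by the equivalence with a cancellation witness rather than by subtraction of ranks.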
If the ring A is commutative, we can define a subgroup of K0(A) as the set
{\displaystyle {\tilde {K}}_{0}\left(A\right)=\bigcap \limits _{{\mathfrak {p}}{\text{ prime ideal of }}A}\mathrm {Ker} \dim _{\mathfrak {p}},}
where dim𝔭 : K0(A) → Z is the map sending every (class of a) finitely generated projective A-module M to the rank of the free A𝔭-module M𝔭 (this module is indeed free, as any finitely generated projective module over a local ring is free). This subgroup K̃0(A) is known as the reduced zeroth K-theory of A.
If B is a ring without an identity element, we can extend the definition of K0 as follows. Let A = B⊕Z be the extension of B to a ring with unity obtained by adjoining an identity element (0,1). There is a short exact sequence B → A → Z and we define K0(B) to be the kernel of the corresponding map K0(A) → K0(Z) = Z.
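The unitalization A = B ⊕ Z can be checked directly: with multiplication (b1, n1)(b2, n2) = (b1b2 + n1b2 + n2b1, n1n2), the adjoined element (0, 1) is a two-sided identity. A small Python check (B modeled as the rng 2Z of even integers, which has no identity of its own):

```python
# Adjoining an identity: A = B (+) Z with multiplication
# (b1, n1)(b2, n2) = (b1*b2 + n1*b2 + n2*b1, n1*n2).  B is modeled as the
# rng 2Z of even integers, which has no identity element of its own.

def mul(p, q):
    b1, n1 = p
    b2, n2 = q
    return (b1 * b2 + n1 * b2 + n2 * b1, n1 * n2)

one = (0, 1)                    # the adjoined identity element of A
x = (4, -3)                     # a typical element of A = B (+) Z
assert mul(one, x) == x and mul(x, one) == x    # (0, 1) is a two-sided unit
assert mul((2, 0), (6, 0)) == (12, 0)           # B is closed: an ideal of A
```

The copy of B inside A is the kernel of the projection (b, n) ↦ n, which is exactly the map K0(A) → K0(Z) used in the definition.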
==== Examples ====
(Projective) modules over a field k are vector spaces and K0(k) is isomorphic to Z, by dimension.
Finitely generated projective modules over a local ring A are free and so in this case once again K0(A) is isomorphic to Z, by rank.
For A a Dedekind domain, K0(A) = Pic(A) ⊕ Z, where Pic(A) is the Picard group of A.
An algebro-geometric variant of this construction is applied to the category of algebraic varieties; it associates with a given algebraic variety X the Grothendieck group of the category of locally free sheaves (or coherent sheaves) on X. Given a compact topological space X, the topological K-theory Ktop(X) of (real) vector bundles over X coincides with K0 of the ring of continuous real-valued functions on X.
==== Relative K0 ====
Let I be an ideal of A and define the "double" to be a subring of the Cartesian product A×A:
{\displaystyle D(A,I)=\{(x,y)\in A\times A:x-y\in I\}\ .}
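That D(A, I) is closed under the ring operations follows from I being an ideal (for products, xx′ − yy′ = x(x′ − y′) + (x − y)y′ ∈ I). A quick numeric check for A = Z and I = 5Z (helper name hypothetical):

```python
# The double D(A, I) = {(x, y) in A x A : x - y in I} is a subring because
# I is an ideal: (x - y) + (x' - y') lies in I, and
# x*x' - y*y' = x*(x' - y') + (x - y)*y' lies in I.  Check for A = Z, I = 5Z:

def in_double(pair, n=5):
    x, y = pair
    return (x - y) % n == 0

a, b = (7, 2), (11, 1)                        # 7 - 2 = 5, 11 - 1 = 10
assert in_double(a) and in_double(b)
assert in_double((a[0] + b[0], a[1] + b[1]))  # closed under addition
assert in_double((a[0] * b[0], a[1] * b[1]))  # closed under multiplication
```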
The relative K-group is defined in terms of the "double"
{\displaystyle K_{0}(A,I)=\ker \left({K_{0}(D(A,I))\rightarrow K_{0}(A)}\right)\ .}
where the map is induced by projection along the first factor.
The relative K0(A,I) is isomorphic to K0(I), regarding I as a ring without identity. The independence from A is an analogue of the excision theorem in homology.
==== K0 as a ring ====
If A is a commutative ring, then the tensor product of projective modules is again projective, and so tensor product induces a multiplication turning K0 into a commutative ring with the class [A] as identity. The exterior product similarly induces a λ-ring structure.
The Picard group embeds as a subgroup of the group of units K0(A)∗.
=== K1 ===
Hyman Bass provided this definition, which generalizes the group of units of a ring: K1(A) is the abelianization of the infinite general linear group:
{\displaystyle K_{1}(A)=\operatorname {GL} (A)^{\mbox{ab}}=\operatorname {GL} (A)/[\operatorname {GL} (A),\operatorname {GL} (A)]}
Here GL(A) = lim→ GL(n, A) is the direct limit of the groups GL(n, A), each of which embeds in GL(n + 1, A) as the upper left block matrix, and [GL(A), GL(A)] is its commutator subgroup. Define an elementary matrix to be one which is the sum of an identity matrix and a single off-diagonal element (this is a subset of the elementary matrices used in linear algebra). Then Whitehead's lemma states that the group E(A) generated by elementary matrices equals the commutator subgroup [GL(A), GL(A)]. Indeed, the group GL(A)/E(A) was first defined and studied by Whitehead, and is called the Whitehead group of the ring A.
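The elementary matrices just described are the matrices eij(a) = I + a·Eij with i ≠ j; left multiplication by eij(a) performs the row operation rowi += a·rowj, and each has determinant 1, so E(A) sits inside SL(A). A small Python illustration (helper names hypothetical):

```python
# An elementary matrix e_ij(a) = I + a*E_ij (i != j) has determinant 1, and
# left multiplication by it performs the row operation row_i += a * row_j;
# hence E(A) is contained in SL(A).

def elementary(n, i, j, a):
    m = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    m[i][j] = a
    return m

def matmul(x, y):
    n = len(x)
    return [[sum(x[r][k] * y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

e = elementary(2, 0, 1, 3)
assert e[0][0] * e[1][1] - e[0][1] * e[1][0] == 1   # det e_01(3) = 1
m = [[2, 5], [1, 3]]
assert matmul(e, m) == [[5, 14], [1, 3]]            # row_0 += 3 * row_1
```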
==== Relative K1 ====
The relative K-group is defined in terms of the "double"
{\displaystyle K_{1}(A,I)=\ker \left({K_{1}(D(A,I))\rightarrow K_{1}(A)}\right)\ .}
There is a natural exact sequence
{\displaystyle K_{1}(A,I)\rightarrow K_{1}(A)\rightarrow K_{1}(A/I)\rightarrow K_{0}(A,I)\rightarrow K_{0}(A)\rightarrow K_{0}(A/I)\ .}
==== Commutative rings and fields ====
For a commutative ring A, one can define a determinant det : GL(A) → A× to the group of units of A, which vanishes on E(A) and thus descends to a map det : K1(A) → A×. As E(A) ◁ SL(A), one can also define the special Whitehead group SK1(A) = SL(A)/E(A). The determinant splits via the map A× = GL(1, A) → K1(A) (unit in the upper left corner), and hence is onto, and has the special Whitehead group as kernel, yielding the split short exact sequence:
{\displaystyle 1\to SK_{1}(A)\to K_{1}(A)\to A^{*}\to 1,}
which is a quotient of the usual split short exact sequence defining the special linear group, namely
{\displaystyle 1\to \operatorname {SL} (A)\to \operatorname {GL} (A)\to A^{*}\to 1.}
The determinant is split by including the group of units A× = GL(1, A) into the general linear group GL(A), so K1(A) splits as the direct sum of the group of units and the special Whitehead group: K1(A) ≅ A× ⊕ SK1(A).
When A is a Euclidean domain (e.g. a field, or the integers), SK1(A) vanishes, and the determinant map is an isomorphism from K1(A) to A×. This is false in general for PIDs, thus providing one of the rare mathematical features of Euclidean domains that do not generalize to all PIDs. An explicit PID with nonzero SK1 was given by Ischebeck in 1980 and by Grayson in 1981. If A is a Dedekind domain whose quotient field is an algebraic number field (a finite extension of the rationals), then Milnor (1971, corollary 16.3) shows that SK1(A) vanishes.
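For A = Z the vanishing of SK1 can be seen concretely: the Euclidean algorithm reduces a matrix in SL2(Z) to the identity by elementary row operations. A hedged Python sketch (function names hypothetical) covering the generic case that ends with +1s on the diagonal; reducing to −I would additionally use the standard factorization of −I into elementary matrices:

```python
# Over the Euclidean domain Z, the Euclidean algorithm reduces a matrix in
# SL_2(Z) to the identity by elementary row operations row_i += a * row_j,
# illustrating SK_1(Z) = 0 (the determinant detects all of K_1(Z)).

def reduce_sl2(m):
    """Destructively reduce m (a 2x2 integer matrix of determinant 1)
    to the identity, returning the elementary operations used."""
    ops = []
    def rowop(i, j, a):                 # row_i += a * row_j
        if a:
            for col in (0, 1):
                m[i][col] += a * m[j][col]
            ops.append((i, j, a))
    while m[1][0] != 0:                 # Euclid's algorithm in column 0
        if m[0][0] == 0:
            rowop(0, 1, 1)
        elif abs(m[0][0]) <= abs(m[1][0]):
            rowop(1, 0, -(m[1][0] // m[0][0]))
        else:
            rowop(0, 1, -(m[0][0] // m[1][0]))
    assert m[0][0] == 1 and m[1][1] == 1    # generic (sign-free) case only
    rowop(0, 1, -m[0][1])               # clear the last off-diagonal entry
    return ops

m = [[2, 5], [1, 3]]                    # det = 2*3 - 5*1 = 1
reduce_sl2(m)
assert m == [[1, 0], [0, 1]]
```

The recorded operations, inverted and reversed, express the original matrix as a product of elementary matrices, i.e. exhibit it as an element of E(Z).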
The vanishing of SK1(A) can be interpreted as saying that K1 is generated by the image of GL1 in GL. When this fails, one can ask whether K1 is generated by the image of GL2. For a Dedekind domain, this is the case: indeed, K1 is generated by the images of GL1 and SL2 in GL. The subgroup of SK1 generated by SL2 may be studied by Mennicke symbols. For Dedekind domains in which all quotients by maximal ideals are finite, SK1 is a torsion group.
For a non-commutative ring, the determinant cannot in general be defined, but the map GL(A) → K1(A) is a generalisation of the determinant.
==== Central simple algebras ====
In the case of a central simple algebra A over a field F, the reduced norm provides a generalisation of the determinant, giving a map K1(A) → F×, and SK1(A) may be defined as the kernel. Wang's theorem states that if A has prime degree then SK1(A) is trivial, and this may be extended to square-free degree. Wang also showed that SK1(A) is trivial for any central simple algebra over a number field, but Platonov has given examples of algebras of degree prime squared for which SK1(A) is non-trivial.
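For the rational quaternion algebra (−1, −1/Q), a degree-2 central simple algebra, the reduced norm of q = a + bi + cj + dk is a² + b² + c² + d², and the determinant of left multiplication by q is Nrd(q)² (the regular representation of a degree-n algebra has determinant the nth power of the reduced norm). A Python check (helper names hypothetical):

```python
# Reduced norm on the rational quaternions (-1, -1 / Q): for
# q = a + b*i + c*j + d*k, Nrd(q) = a^2 + b^2 + c^2 + d^2, and the 4x4
# matrix of left multiplication by q has determinant Nrd(q)^2.

def left_mult_matrix(a, b, c, d):
    # columns are the images of the basis 1, i, j, k under x |-> q*x
    return [[a, -b, -c, -d],
            [b,  a, -d,  c],
            [c,  d,  a, -b],
            [d, -c,  b,  a]]

def det(rows):
    # Laplace expansion along the first row (fine for a 4x4 example)
    if len(rows) == 1:
        return rows[0][0]
    return sum((-1) ** c * rows[0][c] *
               det([r[:c] + r[c + 1:] for r in rows[1:]])
               for c in range(len(rows)))

a, b, c, d = 1, 2, 3, 4
nrd = a * a + b * b + c * c + d * d               # reduced norm: 30
assert det(left_mult_matrix(a, b, c, d)) == nrd ** 2
assert det(left_mult_matrix(1, 0, 0, 0)) == 1     # Nrd(1) = 1
```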
=== K2 ===
John Milnor found the right definition of K2: it is the center of the Steinberg group St(A) of A.
It can also be defined as the kernel of the map
{\displaystyle \varphi \colon \operatorname {St} (A)\to \mathrm {GL} (A),}
or as the Schur multiplier of the group of elementary matrices.
For a field, K2 is determined by Steinberg symbols: this leads to Matsumoto's theorem.
One can compute that K2 is zero for any finite field. The computation of K2(Q) is complicated: Tate proved
{\displaystyle K_{2}(\mathbf {Q} )=(\mathbf {Z} /4)^{*}\times \prod _{p{\text{ odd prime}}}(\mathbf {Z} /p)^{*}\ }
and remarked that the proof followed Gauss's first proof of the Law of Quadratic Reciprocity.
For non-Archimedean local fields, the group K2(F) is the direct sum of a finite cyclic group of order m, say, and a divisible group K2(F)m.
We have K2(Z) = Z/2, and in general K2 is finite for the ring of integers of a number field.
We further have K2(Z/n) = Z/2 if n is divisible by 4, and otherwise zero.
==== Matsumoto's theorem ====
Matsumoto's theorem states that for a field k, the second K-group is given by
{\displaystyle K_{2}(k)=k^{\times }\otimes _{\mathbf {Z} }k^{\times }/\langle a\otimes (1-a)\mid a\not =0,1\rangle .}
Matsumoto's original theorem is even more general: For any root system, it gives a presentation for the unstable K-theory. This presentation is different from the one given here only for symplectic root systems. For non-symplectic root systems, the unstable second K-group with respect to the root system is exactly the stable K-group for GL(A). Unstable second K-groups (in this context) are defined by taking the kernel of the universal central extension of the Chevalley group of universal type for a given root system. This construction yields the kernel of the Steinberg extension for the root systems An (n > 1) and, in the limit, stable second K-groups.
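Matsumoto's presentation makes K2 of a small finite field directly computable: Fq× is cyclic of order n = q − 1, so Fq× ⊗Z Fq× ≅ Z/n via discrete logarithms, and K2(Fq) is the quotient of Z/n by the subgroup generated by the images of the elements a ⊗ (1 − a). A Python sketch for prime q (a primitive root must be supplied; function name hypothetical), consistent with the vanishing of K2 for finite fields:

```python
from math import gcd

# K_2 of a finite field via Matsumoto's presentation: F_q^x is cyclic of
# order n = q - 1, so the tensor square of F_q^x is Z/n via discrete logs
# (a (x) b  |->  log a * log b mod n), and K_2(F_q) is Z/n modulo the
# subgroup generated by the images of the Steinberg elements a (x) (1 - a).

def k2_order(q, g):
    """Order of K_2(F_q) for prime q, given a primitive root g mod q."""
    n = q - 1
    log = {pow(g, e, q): e for e in range(n)}   # discrete logarithm table
    d = n                                       # gcd accumulator: relations
    for a in range(2, q):                       # a runs over F_q \ {0, 1}
        d = gcd(d, (log[a] * log[(1 - a) % q]) % n)
    return d                                    # index of relation subgroup

assert k2_order(7, 3) == 1                      # K_2(F_7) = 0
assert k2_order(11, 2) == 1                     # K_2(F_11) = 0
```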
==== Long exact sequences ====
If A is a Dedekind domain with field of fractions F then there is a long exact sequence
{\displaystyle K_{2}F\rightarrow \oplus _{\mathbf {p} }K_{1}A/{\mathbf {p} }\rightarrow K_{1}A\rightarrow K_{1}F\rightarrow \oplus _{\mathbf {p} }K_{0}A/{\mathbf {p} }\rightarrow K_{0}A\rightarrow K_{0}F\rightarrow 0\ }
where p runs over all nonzero prime ideals of A.
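For A = Z and F = Q, the component of the boundary map K2(Q) → K1(Z/p) = Fp× at an odd prime p is given by the tame symbol {a, b} ↦ (−1)^{v(a)v(b)} a^{v(b)} b^{−v(a)} mod p, where v is the p-adic valuation. A Python sketch (function names hypothetical):

```python
# Tame symbol at an odd prime p: the boundary K_2(Q) -> K_1(F_p) = F_p^x in
# the localization sequence for Z sends a Steinberg symbol {a, b} to
# (-1)^{v(a)v(b)} * a^{v(b)} * b^{-v(a)} mod p, with v the p-adic valuation.

def split_valuation(x, p):
    """Return (v_p(x), unit part of x) for a nonzero integer x."""
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v, x

def tame_symbol(a, b, p):
    va, ua = split_valuation(a, p)
    vb, ub = split_valuation(b, p)
    sign = -1 if (va * vb) % 2 else 1
    return (sign * pow(ua, vb, p) * pow(ub, -va, p)) % p

p = 7
assert tame_symbol(p, p, p) == p - 1     # {7, 7} maps to -1 in F_7^x
# multiplicativity in the first argument:
assert tame_symbol(14, 3, p) == (tame_symbol(2, 3, p) * tame_symbol(7, 3, p)) % p
```

(The three-argument `pow` with a negative exponent, used for the modular inverse, requires Python 3.8 or later.)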
There is also an extension of the exact sequence for relative K1 and K0:
{\displaystyle K_{2}(A)\rightarrow K_{2}(A/I)\rightarrow K_{1}(A,I)\rightarrow K_{1}(A)\cdots \ .}
==== Pairing ====
There is a pairing on K1 with values in K2. Given commuting matrices X and Y over A, take elements x and y in the Steinberg group with X,Y as images. The commutator
{\displaystyle xyx^{-1}y^{-1}}
is an element of K2. This pairing is not always surjective.
== Milnor K-theory ==
The above expression for K2 of a field k led Milnor to the following definition of "higher" K-groups by
{\displaystyle K_{*}^{M}(k):=T^{*}(k^{\times })/(a\otimes (1-a)),}
thus as graded parts of a quotient of the tensor algebra of the multiplicative group k× by the two-sided ideal generated by the
{\displaystyle \left\{a\otimes (1-a):\ a\neq 0,1\right\}.}
For n = 0, 1, 2 these coincide with the K-groups defined below, but for n ≥ 3 they differ in general. For example, KMn(Fq) = 0 for n ≥ 2, but KnFq is nonzero for odd n (see below).
The tensor product on the tensor algebra induces a product
{\displaystyle K_{m}\times K_{n}\rightarrow K_{m+n}}
making
{\displaystyle K_{*}^{M}(F)}
a graded ring which is graded-commutative.
The images of elements {\displaystyle a_{1}\otimes \cdots \otimes a_{n}} in {\displaystyle K_{n}^{M}(k)} are termed symbols, denoted {\displaystyle \{a_{1},\ldots ,a_{n}\}}. For an integer m invertible in k there is a map
{\displaystyle \partial :k^{*}\rightarrow H^{1}(k,\mu _{m})}
where {\displaystyle \mu _{m}} denotes the group of m-th roots of unity in some separable extension of k. This extends to
{\displaystyle \partial ^{n}:k^{*}\times \cdots \times k^{*}\rightarrow H^{n}\left({k,\mu _{m}^{\otimes n}}\right)\ }
satisfying the defining relations of the Milnor K-group. Hence {\displaystyle \partial ^{n}} may be regarded as a map on {\displaystyle K_{n}^{M}(k)}, called the Galois symbol map.
The relation between étale (or Galois) cohomology of the field and Milnor K-theory modulo 2 is the Milnor conjecture, proven by Vladimir Voevodsky. The analogous statement for odd primes is the Bloch–Kato conjecture, proved by Voevodsky, Rost, and others.
== Higher K-theory ==
The accepted definitions of higher K-groups were given by Quillen (1973), after a few years during which several incompatible definitions were suggested. The object of the program was to find definitions of K(R) and K(R,I) in terms of classifying spaces so that
R ↦ K(R) and (R,I) ↦ K(R,I) are functors into a homotopy category of spaces and the long exact sequence for relative K-groups arises as the long exact homotopy sequence of a fibration K(R,I) → K(R) → K(R/I).
Quillen gave two constructions, the "plus-construction" and the "Q-construction", the latter subsequently modified in different ways. The two constructions yield the same K-groups.
=== The +-construction ===
One possible definition of higher algebraic K-theory of rings was given by Quillen:
{\displaystyle K_{n}(R)=\pi _{n}(B\operatorname {GL} (R)^{+}).}
Here πn is a homotopy group, GL(R) is the direct limit of the general linear groups over R for the size of the matrix tending to infinity, B is the classifying space construction of homotopy theory, and the + is Quillen's plus construction. He originally found this idea while studying the group cohomology of
{\displaystyle GL_{n}(\mathbb {F} _{q})} and noted some of his calculations were related to {\displaystyle K_{1}(\mathbb {F} _{q})}.
This definition only holds for n > 0 so one often defines the higher algebraic K-theory via
{\displaystyle K_{n}(R)=\pi _{n}(B\operatorname {GL} (R)^{+}\times K_{0}(R))}
Since BGL(R)+ is path connected and K0(R) discrete, this definition doesn't differ in higher degrees and also holds for n = 0.
=== The Q-construction ===
The Q-construction gives the same results as the +-construction, but it applies in more general situations. Moreover, the definition is more direct in the sense that the K-groups, defined via the Q-construction are functorial by definition. This fact is not automatic in the plus-construction.
Suppose {\displaystyle P} is an exact category; associated to {\displaystyle P} a new category {\displaystyle QP} is defined, whose objects are those of {\displaystyle P} and whose morphisms from M′ to M″ are isomorphism classes of diagrams
{\displaystyle M'\longleftarrow N\longrightarrow M'',}
where the first arrow is an admissible epimorphism and the second arrow is an admissible monomorphism. Note the morphisms in {\displaystyle QP} are analogous to the definitions of morphisms in the category of motives, where morphisms are given as correspondences {\displaystyle Z\subset X\times Y} such that
{\displaystyle X\leftarrow Z\rightarrow Y}
is a diagram where the arrow on the left is a covering map (hence surjective) and the arrow on the right is injective. This category can then be turned into a topological space using the classifying space construction {\displaystyle BQP}, defined to be the geometric realisation of the nerve of {\displaystyle QP}. The i-th K-group of the exact category {\displaystyle P} is then defined as
{\displaystyle K_{i}(P)=\pi _{i+1}(\mathrm {BQ} P,0)}
with a fixed zero-object {\displaystyle 0}. Note that the classifying space of a groupoid {\displaystyle B{\mathcal {G}}} moves the homotopy groups up one degree, hence the shift in degrees for {\displaystyle K_{i}} being {\displaystyle \pi _{i+1}} of a space.
This definition coincides with the above definition of K0(P). If P is the category of finitely generated projective R-modules, this definition agrees with the above BGL+ definition of Kn(R) for all n.
More generally, for a scheme X, the higher K-groups of X are defined to be the K-groups of (the exact category of) locally free coherent sheaves on X.
The following variant of this is also used: instead of finitely generated projective (= locally free) modules, take finitely generated modules. The resulting K-groups are usually written Gn(R). When R is a noetherian regular ring, then G- and K-theory coincide. Indeed, the global dimension of regular rings is finite, i.e. any finitely generated module has a finite projective resolution P* → M, and a simple argument shows that the canonical map K0(R) → G0(R) is an isomorphism, with [M]=Σ ± [Pn]. This isomorphism extends to the higher K-groups, too.
=== The S-construction ===
A third construction of K-theory groups is the S-construction, due to Waldhausen. It applies to categories with cofibrations (also called Waldhausen categories). This is a more general concept than exact categories.
== Examples ==
While the Quillen algebraic K-theory has provided deep insight into various aspects of algebraic geometry and topology, the K-groups have proved particularly difficult to compute except in a few isolated but interesting cases. (See also: K-groups of a field.)
=== Algebraic K-groups of finite fields ===
The first and one of the most important calculations of the higher algebraic K-groups of a ring were made by Quillen himself for the case of finite fields:
If Fq is the finite field with q elements, then:
K0(Fq) = Z,
K2i(Fq) = 0 for i ≥ 1,
K2i−1(Fq) = Z/(q^i − 1)Z for i ≥ 1.
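Quillen's formulas above make these groups completely explicit. As an illustrative sketch (the helper name `k_group_order` is my own, not from the source), the order of each group can be computed directly:

```python
def k_group_order(n, q):
    """Order of K_n(F_q) per Quillen's computation.

    Returns None for the infinite group K_0(F_q) = Z.
    K_{2i}(F_q) = 0 for i >= 1 (order 1),
    K_{2i-1}(F_q) = Z/(q^i - 1)Z (order q^i - 1).
    """
    if n == 0:
        return None          # K_0(F_q) = Z is infinite cyclic
    if n % 2 == 0:
        return 1             # even positive degrees vanish
    i = (n + 1) // 2         # n = 2i - 1
    return q**i - 1

# K_1(F_q) = F_q^* has order q - 1, as expected
print(k_group_order(1, 5))   # 4
print(k_group_order(3, 5))   # q^2 - 1 = 24
print(k_group_order(4, 5))   # 1 (trivial group)
```

Note that for n = 1 the formula recovers the familiar K1(Fq) = Fq× of order q − 1.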
Rick Jardine (1993) reproved Quillen's computation using different methods.
=== Algebraic K-groups of rings of integers ===
Quillen proved that if A is the ring of algebraic integers in an algebraic number field F (a finite extension of the rationals), then the algebraic K-groups of A are finitely generated. Armand Borel used this to calculate Ki(A) and Ki(F) modulo torsion. For example, for the integers Z, Borel proved that (modulo torsion)
Ki(Z)/tors. = 0 for positive i unless i = 4k + 1 with k positive,
K4k+1(Z)/tors. = Z for positive k.
The torsion subgroups of K2i+1(Z), and the orders of the finite groups K4k+2(Z) have recently been determined, but whether the latter groups are cyclic, and whether the groups K4k(Z) vanish depends upon Vandiver's conjecture about the class groups of cyclotomic integers. See Quillen–Lichtenbaum conjecture for more details.
== Applications and open questions ==
Algebraic K-groups are used in conjectures on special values of L-functions and the formulation of a non-commutative main conjecture of Iwasawa theory and in construction of higher regulators.
Parshin's conjecture concerns the higher algebraic K-groups for smooth varieties over finite fields, and states that in this case the groups vanish up to torsion.
Another fundamental conjecture due to Hyman Bass (Bass' conjecture) says that all of the groups Gn(A) are finitely generated when A is a finitely generated Z-algebra. (The groups Gn(A) are the K-groups of the category of finitely generated A-modules.)
== See also ==
Additive K-theory
Bloch's formula
Fundamental theorem of algebraic K-theory
Basic theorems in algebraic K-theory
K-theory
K-theory of a category
K-group of a field
K-theory spectrum
Redshift conjecture
Topological K-theory
Rigidity (K-theory)
== Notes ==
== References ==
Bass, Hyman (1968), Algebraic K-theory, Mathematics Lecture Note Series, New York-Amsterdam: W.A. Benjamin, Inc., Zbl 0174.30302
Friedlander, Eric; Grayson, Daniel, eds. (2005), Handbook of K-Theory, Berlin, New York: Springer-Verlag, doi:10.1007/3-540-27855-9, ISBN 978-3-540-30436-4, MR 2182598
Friedlander, Eric M.; Weibel, Charles W. (1999), An overview of algebraic K-theory, World Sci. Publ., River Edge, NJ, pp. 1–119, MR 1715873
Gille, Philippe; Szamuely, Tamás (2006), Central simple algebras and Galois cohomology, Cambridge Studies in Advanced Mathematics, vol. 101, Cambridge: Cambridge University Press, ISBN 978-0-521-86103-8, Zbl 1137.12001
Gras, Georges (2003), Class field theory. From theory to practice, Springer Monographs in Mathematics, Berlin: Springer-Verlag, ISBN 978-3-540-44133-5, Zbl 1019.11032
Jardine, John Frederick (1993), "The K-theory of finite fields, revisited", K-Theory, 7 (6): 579–595, doi:10.1007/BF00961219, MR 1268594
Lam, Tsit-Yuen (2005), Introduction to Quadratic Forms over Fields, Graduate Studies in Mathematics, vol. 67, American Mathematical Society, ISBN 978-0-8218-1095-8, MR 2104929, Zbl 1068.11023
Lemmermeyer, Franz (2000), Reciprocity laws. From Euler to Eisenstein, Springer Monographs in Mathematics, Berlin: Springer-Verlag, doi:10.1007/978-3-662-12893-0, ISBN 978-3-540-66957-9, MR 1761696, Zbl 0949.11002
Milnor, John Willard (1970), "Algebraic K-theory and quadratic forms", Inventiones Mathematicae, 9 (4): 318–344, Bibcode:1970InMat...9..318M, doi:10.1007/BF01425486, ISSN 0020-9910, MR 0260844
Milnor, John Willard (1971), Introduction to algebraic K-theory, Annals of Mathematics Studies, vol. 72, Princeton, NJ: Princeton University Press, MR 0349811, Zbl 0237.18005 (lower K-groups)
Quillen, Daniel (1973), "Higher algebraic K-theory. I", Algebraic K-theory, I: Higher K-theories (Proc. Conf., Battelle Memorial Inst., Seattle, Wash., 1972), Lecture Notes in Math, vol. 341, Berlin, New York: Springer-Verlag, pp. 85–147, doi:10.1007/BFb0067053, ISBN 978-3-540-06434-3, MR 0338129
Quillen, Daniel (1975), "Higher algebraic K-theory", Proceedings of the International Congress of Mathematicians (Vancouver, B. C., 1974), Vol. 1, Montreal, Quebec: Canad. Math. Congress, pp. 171–176, MR 0422392 (Quillen's Q-construction)
Quillen, Daniel (1974), "Higher K-theory for categories with exact sequences", New developments in topology (Proc. Sympos. Algebraic Topology, Oxford, 1972), London Math. Soc. Lecture Note Ser., vol. 11, Cambridge University Press, pp. 95–103, MR 0335604 (relation of Q-construction to plus-construction)
Rosenberg, Jonathan (1994), Algebraic K-theory and its applications, Graduate Texts in Mathematics, vol. 147, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-4314-4, ISBN 978-0-387-94248-3, MR 1282290, Zbl 0801.19001. Errata
Seiler, Wolfgang (1988), "λ-Rings and Adams Operations in Algebraic K-Theory", in Rapoport, M.; Schneider, P.; Schappacher, N. (eds.), Beilinson's Conjectures on Special Values of L-Functions, Boston, MA: Academic Press, ISBN 978-0-12-581120-0
Silvester, John R. (1981), Introduction to algebraic K-theory, Chapman and Hall Mathematics Series, London, New York: Chapman and Hall, ISBN 978-0-412-22700-4, Zbl 0468.18006
Weibel, Charles (2005), "Algebraic K-theory of rings of integers in local and global fields" (PDF), Handbook of K-theory, Berlin, New York: Springer-Verlag, pp. 139–190, doi:10.1007/3-540-27855-9_5, ISBN 978-3-540-23019-9, MR 2181823 (survey article)
Weibel, Charles (1999), "The development of algebraic 𝐾-theory before 1980", The development of algebraic K-theory before 1980, Contemporary Mathematics, vol. 243, Providence, RI: American Mathematical Society, pp. 211–238, doi:10.1090/conm/243/03695, ISBN 978-0-8218-1087-3, MR 1732049
== Further reading ==
Lluis-Puebla, Emilio; Loday, Jean-Louis; Gillet, Henri; Soulé, Christophe; Snaith, Victor (1992), Higher algebraic K-theory: an overview, Lecture Notes in Mathematics, vol. 1491, Berlin, Heidelberg: Springer-Verlag, ISBN 978-3-540-55007-5, Zbl 0746.19001
Magurn, Bruce A. (2009), An algebraic introduction to K-theory, Encyclopedia of Mathematics and its Applications, vol. 87 (corrected paperback ed.), Cambridge University Press, ISBN 978-0-521-10658-0
Srinivas, V. (2008), Algebraic K-theory, Modern Birkhäuser Classics (Paperback reprint of the 1996 2nd ed.), Boston, MA: Birkhäuser, ISBN 978-0-8176-4736-0, Zbl 1125.19300
Weibel, C., The K-book: An introduction to algebraic K-theory
=== Pedagogical references ===
Higher Algebraic K-Theory: an overview
Rosenberg, Jonathan (1994), Algebraic K-theory and its applications, Graduate Texts in Mathematics, vol. 147, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-4314-4, ISBN 978-0-387-94248-3, MR 1282290, Zbl 0801.19001. Errata
Weibel, Charles (2013), The K-book: an introduction to Algebraic K-theory, Graduate Studies in Mathematics, vol. 145, AMS
=== Historical references ===
Atiyah, Michael F.; Hirzebruch, Friedrich (1961), Vector bundles and homogeneous spaces, Proc. Sympos. Pure Math., vol. 3, American Mathematical Society, pp. 7–38
Barden, Dennis (1964), On the Structure and Classification of Differential Manifolds (Thesis), Cambridge University
Bass, Hyman; Murthy, M.P. (1967), "Grothendieck groups and Picard groups of abelian group rings", Annals of Mathematics, 86 (1): 16–73, doi:10.2307/1970360, JSTOR 1970360
Bass, Hyman; Schanuel, S. (1962), "The homotopy theory of projective modules", Bulletin of the American Mathematical Society, 68 (4): 425–428, doi:10.1090/s0002-9904-1962-10826-x
Bass, Hyman (1968), Algebraic K-theory, Benjamin
Bloch, Spencer (1974), "K2 of algebraic cycles", Annals of Mathematics, 99 (2): 349–379, doi:10.2307/1970902, JSTOR 1970902
Bokstedt, M., Topological Hochschild homology. Preprint, Bielefeld, 1986.
Bokstedt, M., Hsiang, W. C., Madsen, I., The cyclotomic trace and algebraic K-theory of spaces. Invent. Math., 111(3) (1993), 465–539.
Borel, Armand; Serre, Jean-Pierre (1958), "Le theoreme de Riemann–Roch", Bulletin de la Société Mathématique de France, 86: 97–136, doi:10.24033/bsmf.1500
Browder, William (1978), Algebraic K-theory with coefficients Z/p, Lecture Notes in Mathematics, vol. 657, Springer–Verlag, pp. 40–84
Brown, K., Gersten, S., Algebraic K-theory as generalized sheaf cohomology, Algebraic K-theory I, Lecture Notes in Math., vol. 341, Springer-Verlag, 1973, pp. 266–292.
Cerf, Jean (1970), "La stratification naturelle des espaces de fonctions differentiables reelles et le theoreme de la pseudo-isotopie", Publications Mathématiques de l'IHÉS, 39: 5–173, doi:10.1007/BF02684687
Dennis, R. K., Higher algebraic K-theory and Hochschild homology, unpublished preprint (1976).
Gersten, S (1971), "On the functor K2", J. Algebra, 17 (2): 212–237, doi:10.1016/0021-8693(71)90030-5
Grothendieck, Alexander, Classes de fasiceaux et theoreme de Riemann–Roch, mimeographed notes, Princeton 1957.
Hatcher, Allen; Wagoner, John (1973), "Pseudo-isotopies of compact manifolds", Astérisque, 6, MR 0353337
Karoubi, Max (1968), "Foncteurs derives et K-theorie. Categories filtres", Comptes Rendus de l'Académie des Sciences, Série A-B, 267: A328 – A331
Karoubi, Max; Villamayor, O. (1971), "K-theorie algebrique et K-theorie topologique", Math. Scand., 28: 265–307, doi:10.7146/math.scand.a-11024
Matsumoto, Hideya (1969), "Sur les sous-groupes aritmetiques des groupes semi-simples deployes", Annales Scientifiques de l'École Normale Supérieure, 2: 1–62, doi:10.24033/asens.1174
Mazur, Barry (1963), "Differential topology from the point of view of simple homotopy theory" (PDF), Publications Mathématiques de l'IHÉS, 15: 5–93
Milnor, J (1970), "Algebraic K-theory and Quadratic Forms", Invent. Math., 9 (4): 318–344, Bibcode:1970InMat...9..318M, doi:10.1007/bf01425486
Milnor, J., Introduction to Algebraic K-theory, Princeton Univ. Press, 1971.
Nobile, A., Villamayor, O., Sur la K-theorie algebrique, Annales Scientifiques de l'École Normale Supérieure, 4e serie, 1, no. 3, 1968, 581–616.
Quillen, Daniel, Cohomology of groups, Proc. ICM Nice 1970, vol. 2, Gauthier-Villars, Paris, 1971, 47–52.
Quillen, Daniel, Higher algebraic K-theory I, Algebraic K-theory I, Lecture Notes in Math., vol. 341, Springer Verlag, 1973, 85–147.
Quillen, Daniel, Higher algebraic K-theory, Proc. Intern. Congress Math., Vancouver, 1974, vol. I, Canad. Math. Soc., 1975, pp. 171–176.
Segal, Graeme (1974), "Categories and cohomology theories", Topology, 13 (3): 293–312, doi:10.1016/0040-9383(74)90022-6
Siebenmann, Larry, The Obstruction to Finding a Boundary for an Open Manifold of Dimension Greater than Five, Thesis, Princeton University (1965).
Smale, S (1962), "On the structure of manifolds", Amer. J. Math., 84 (3): 387–399, doi:10.2307/2372978, JSTOR 2372978
Steinberg, R., Generateurs, relations et revetements de groupes algebriques, Colloq. Theorie des Groupes Algebriques, Gauthier-Villars, Paris, 1962, pp. 113–127. (French)
Swan, Richard, Nonabelian homological algebra and K-theory, Proc. Sympos. Pure Math., vol. XVII, 1970, pp. 88–123.
Thomason, R. W., Algebraic K-theory and étale cohomology, Ann. Scient. Ec. Norm. Sup. 18, 4e serie (1985), 437–552; erratum 22 (1989), 675–677.
Thomason, R. W., Le principe de sciendage et l'inexistence d'une K-theorie de Milnor globale, Topology 31, no. 3, 1992, 571–588.
Thomason, Robert W.; Trobaugh, Thomas (1990), "Higher Algebraic K-Theory of Schemes and of Derived Categories", The Grothendieck Festschrift Volume III, Progr. Math., vol. 88, Boston, MA: Birkhäuser Boston, pp. 247–435, doi:10.1007/978-0-8176-4576-2_10, ISBN 978-0-8176-3487-2, MR 1106918
Waldhausen, F., Algebraic K-theory of topological spaces. I, in Algebraic and geometric topology (Proc. Sympos. Pure Math., Stanford Univ., Stanford, Calif., 1976), Part 1, pp. 35–60, Proc. Sympos. Pure Math., XXXII, Amer. Math. Soc., Providence, R.I., 1978.
Waldhausen, F., Algebraic K-theory of spaces, in Algebraic and geometric topology (New Brunswick, N.J., 1983), Lecture Notes in Mathematics, vol. 1126 (1985), 318–419.
Wall, C. T. C. (1965), "Finiteness conditions for CW-complexes", Annals of Mathematics, 81 (1): 56–69, doi:10.2307/1970382, JSTOR 1970382
Whitehead, J.H.C. (1941), "On incidence matrices, nuclei and homotopy types", Annals of Mathematics, 42 (5): 1197–1239, doi:10.2307/1970465, JSTOR 1970465
Whitehead, J.H.C. (1950), "Simple homotopy types", Amer. J. Math., 72 (1): 1–57, doi:10.2307/2372133, JSTOR 2372133
Whitehead, J.H.C. (1939), "Simplicial spaces, nuclei and m-groups", Proc. London Math. Soc., 45: 243–327, doi:10.1112/plms/s2-45.1.243
== External links ==
The K-Theory Foundation
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
== History ==
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of
{\displaystyle y=f(x)}
as follows: an infinitely small increment {\displaystyle \alpha } of the independent variable x always produces an infinitely small change {\displaystyle f(x+\alpha )-f(x)}
of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work was not published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
== Real functions ==
=== Definition ===
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of
{\displaystyle f(x),} as x tends to c, is equal to {\displaystyle f(c).}
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. A function that is continuous on the interval
{\displaystyle (-\infty ,+\infty )}
(the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function {\displaystyle f(x)={\sqrt {x}}} is continuous on its whole domain, which is the closed interval {\displaystyle [0,+\infty ).}
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function
{\textstyle x\mapsto {\frac {1}{x}}} and the tangent function {\displaystyle x\mapsto \tan x.}
When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions
{\textstyle x\mapsto {\frac {1}{x}}} and {\textstyle x\mapsto \sin({\frac {1}{x}})}
are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let {\textstyle f:D\to \mathbb {R} } be a function whose domain {\displaystyle D} is contained in the set {\displaystyle \mathbb {R} } of real numbers.
Some (but not all) possibilities for {\displaystyle D} are:
{\displaystyle D} is the whole real line; that is, {\displaystyle D=\mathbb {R} }
{\displaystyle D} is a closed interval of the form {\displaystyle D=[a,b]=\{x\in \mathbb {R} \mid a\leq x\leq b\},} where a and b are real numbers
{\displaystyle D} is an open interval of the form {\displaystyle D=(a,b)=\{x\in \mathbb {R} \mid a<x<b\},} where a and b are real numbers
In the case of an open interval, {\displaystyle a} and {\displaystyle b} do not belong to {\displaystyle D}, and the values {\displaystyle f(a)} and {\displaystyle f(b)} need not be defined; even if they are, they do not matter for continuity on {\displaystyle D}.
==== Definition in terms of limits of functions ====
The function f is continuous at some point c of its domain if the limit of {\displaystyle f(x),} as x approaches c through the domain of f, exists and is equal to {\displaystyle f(c).} In mathematical notation, this is written as
{\displaystyle \lim _{x\to c}{f(x)}=f(c).}
In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit of that equation has to exist. Third, the value of this limit must equal {\displaystyle f(c).}
(Here, we have assumed that the domain of f does not have any isolated points.)
==== Definition in terms of neighborhoods ====
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point {\displaystyle f(c)} as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood {\displaystyle N_{1}(f(c))} there is a neighborhood {\displaystyle N_{2}(c)} in its domain such that {\displaystyle f(x)\in N_{1}(f(c))} whenever {\displaystyle x\in N_{2}(c).}
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
==== Definition in terms of limits of sequences ====
One can instead require that for any sequence {\displaystyle (x_{n})_{n\in \mathbb {N} }} of points in the domain which converges to c, the corresponding sequence {\displaystyle \left(f(x_{n})\right)_{n\in \mathbb {N} }} converges to {\displaystyle f(c).} In mathematical notation,
{\displaystyle \forall (x_{n})_{n\in \mathbb {N} }\subset D:\lim _{n\to \infty }x_{n}=c\Rightarrow \lim _{n\to \infty }f(x_{n})=f(c)\,.}
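As an informal numerical illustration of the sequential criterion (my own sketch, not from the source; sampling a single sequence can refute continuity but never prove it), one can evaluate f along a particular sequence converging to c and compare against f(c):

```python
def sequential_check(f, c, n_terms=50):
    """Evaluate f along x_n = c + 1/n and return the final term,
    which approximates lim f(x_n).

    This samples only one sequence, so it can provide evidence
    against continuity but never a proof of it.
    """
    xs = [c + 1.0 / n for n in range(1, n_terms + 1)]
    return f(xs[-1])

# continuous: f(x) = x^2 at c = 2, image sequence approaches f(2) = 4
approx = sequential_check(lambda x: x * x, 2.0)
print(abs(approx - 4.0) < 0.2)       # True

# discontinuous: step function at c = 0, right-hand sequence gives 1, not step(0) = 0
step = lambda x: 1.0 if x > 0 else 0.0
print(sequential_check(step, 0.0))   # 1.0
```

For the step function the sequence x_n = 1/n converges to 0 while f(x_n) stays at 1 ≠ f(0), exhibiting the failure of the criterion.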
==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ====
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function {\displaystyle f:D\to \mathbb {R} } as above and an element {\displaystyle x_{0}} of the domain {\displaystyle D}, {\displaystyle f} is said to be continuous at the point {\displaystyle x_{0}} when the following holds: For any positive real number {\displaystyle \varepsilon >0,} however small, there exists some positive real number {\displaystyle \delta >0} such that for all {\displaystyle x} in the domain of {\displaystyle f} with {\displaystyle x_{0}-\delta <x<x_{0}+\delta ,} the value of {\displaystyle f(x)} satisfies
{\displaystyle f\left(x_{0}\right)-\varepsilon <f(x)<f(x_{0})+\varepsilon .}
Alternatively written, continuity of {\displaystyle f:D\to \mathbb {R} } at {\displaystyle x_{0}\in D} means that for every {\displaystyle \varepsilon >0,} there exists a {\displaystyle \delta >0} such that for all {\displaystyle x\in D}:
{\displaystyle \left|x-x_{0}\right|<\delta ~~{\text{ implies }}~~|f(x)-f(x_{0})|<\varepsilon .}
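To make the quantifier structure concrete, here is a small sketch (my own illustration, not part of the source) that, for a sampled f, searches a short list of candidate δ values for one that works at x₀ for a given ε; grid sampling can only give evidence, not a proof:

```python
def find_delta(f, x0, eps, candidates=(1.0, 0.5, 0.1, 0.01, 0.001)):
    """Return the first candidate delta such that |x - x0| < delta
    (checked on a sample grid) implies |f(x) - f(x0)| < eps, else None.
    """
    for delta in candidates:
        xs = [x0 + delta * (k / 100.0) for k in range(-99, 100)]
        if all(abs(f(x) - f(x0)) < eps for x in xs):
            return delta
    return None

# f(x) = 3x + 1 is continuous at x0 = 2: some delta works for each epsilon
print(find_delta(lambda x: 3 * x + 1, 2.0, eps=0.1))   # 0.01

# the step function fails at 0 for eps = 0.5: no candidate delta works
step = lambda x: 1.0 if x > 0 else 0.0
print(find_delta(step, 0.0, eps=0.5))                  # None
```

The nested loop mirrors the quantifiers: the outer loop plays "there exists δ", the inner `all(...)` plays "for all x with |x − x₀| < δ".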
More intuitively, we can say that if we want to get all the {\displaystyle f(x)} values to stay in some small neighborhood around {\displaystyle f\left(x_{0}\right),} we need to choose a small enough neighborhood for the {\displaystyle x} values around {\displaystyle x_{0}.} If we can do that no matter how small the {\displaystyle f(x_{0})} neighborhood is, then {\displaystyle f} is continuous at {\displaystyle x_{0}.}
In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval {\displaystyle x_{0}-\delta <x<x_{0}+\delta } be entirely within the domain {\displaystyle D}, but Jordan removed that restriction.
==== Definition in terms of control of the remainder ====
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function {\displaystyle C:[0,\infty )\to [0,\infty ]} is called a control function if
C is non-decreasing
{\displaystyle \inf _{\delta >0}C(\delta )=0}
A function {\displaystyle f:D\to R} is C-continuous at {\displaystyle x_{0}} if there exists a neighbourhood {\textstyle N(x_{0})} such that
{\displaystyle |f(x)-f(x_{0})|\leq C\left(\left|x-x_{0}\right|\right){\text{ for all }}x\in D\cap N(x_{0})}
A function is continuous at {\displaystyle x_{0}} if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions {\displaystyle {\mathcal {C}}} a function is {\displaystyle {\mathcal {C}}}-continuous if it is {\displaystyle C}-continuous for some {\displaystyle C\in {\mathcal {C}}.}
For example, the Lipschitz, the Hölder continuous functions of exponent α and the uniformly continuous functions below are defined by the set of control functions
C
L
i
p
s
c
h
i
t
z
=
{
C
:
C
(
δ
)
=
K
|
δ
|
,
K
>
0
}
{\displaystyle {\mathcal {C}}_{\mathrm {Lipschitz} }=\{C:C(\delta )=K|\delta |,\ K>0\}}
C
Hölder
−
α
=
{
C
:
C
(
δ
)
=
K
|
δ
|
α
,
K
>
0
}
{\displaystyle {\mathcal {C}}_{{\text{Hölder}}-\alpha }=\{C:C(\delta )=K|\delta |^{\alpha },\ K>0\}}
C
uniform cont.
=
{
C
:
C
(
0
)
=
0
}
{\displaystyle {\mathcal {C}}_{\text{uniform cont.}}=\{C:C(0)=0\}}
respectively.
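As a concrete illustration (not from the article): the square-root function on $[0,\infty)$ is Hölder continuous of exponent $1/2$ with $K=1$, since $|\sqrt{b}-\sqrt{c}|\leq \sqrt{|b-c|}$. A minimal numerical sketch of that control-function inequality in Python:

```python
import math
import random

def holder_control(delta, K=1.0, alpha=0.5):
    """Control function C(delta) = K * |delta|**alpha (Hölder of exponent alpha)."""
    return K * abs(delta) ** alpha

# Check |sqrt(b) - sqrt(c)| <= C(|b - c|) on random sample points in [0, 100].
random.seed(0)
for _ in range(1000):
    b, c = random.uniform(0, 100), random.uniform(0, 100)
    assert abs(math.sqrt(b) - math.sqrt(c)) <= holder_control(b - c)
```

The inequality holds exactly because $(\sqrt{b}-\sqrt{c})^{2}=b+c-2\sqrt{bc}\leq |b-c|$.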
==== Definition using oscillation ====
Continuity can also be defined in terms of oscillation: a function f is continuous at a point $x_{0}$ if and only if its oscillation at that point is zero; in symbols, $\omega _{f}(x_{0})=0.$
A benefit of this definition is that it quantifies discontinuity: the oscillation measures how discontinuous the function is at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than $\varepsilon$ (hence a $G_{\delta }$ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the $\varepsilon$-$\delta$ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given $\varepsilon _{0}$ there is no $\delta$ that satisfies the $\varepsilon$-$\delta$ definition, then the oscillation is at least $\varepsilon _{0},$ and conversely if for every $\varepsilon$ there is a desired $\delta ,$ the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
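As a rough numerical illustration (not the article's own construction), the oscillation at a point can be approximated as the difference between the supremum and infimum of f over a small sampled neighborhood:

```python
def oscillation(f, x0, h=1e-6, n=1000):
    """Approximate omega_f(x0): sup - inf of f over (x0 - h, x0 + h),
    sampled at n points. The true oscillation is the limit as h -> 0."""
    xs = [x0 - h + 2 * h * k / (n - 1) for k in range(n)]
    ys = [f(x) for x in xs]
    return max(ys) - min(ys)

sign = lambda x: (x > 0) - (x < 0)
assert oscillation(sign, 0.0) == 2              # jump discontinuity: oscillation 2
assert oscillation(lambda x: x * x, 0.0) < 1e-10  # continuous at 0: oscillation ~ 0
```

The sign function has oscillation 2 at the origin (values jump between −1 and 1), while a function continuous at the point has oscillation 0 there.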
==== Definition using the hyperreals ====
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.
A real-valued function f is continuous at x if its natural extension to the hyperreals has the property that for every infinitesimal dx, the difference f(x+dx) − f(x) is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
=== Rules for continuity ===
Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules:
Every constant function is continuous
The identity function $f(x)=x$ is continuous
Addition and multiplication: If the functions $f$ and $g$ are continuous on their respective domains $D_{f}$ and $D_{g}$, then their sum $f+g$ and their product $f\cdot g$ are continuous on the intersection $D_{f}\cap D_{g}$, where $f+g$ and $fg$ are defined by $(f+g)(x)=f(x)+g(x)$ and $(f\cdot g)(x)=f(x)\cdot g(x)$.
Reciprocal: If the function $f$ is continuous on the domain $D_{f}$, then its reciprocal ${\tfrac {1}{f}}$, defined by $({\tfrac {1}{f}})(x)={\tfrac {1}{f(x)}}$, is continuous on the domain $D_{f}\setminus f^{-1}(0)$, that is, the domain $D_{f}$ from which the points $x$ such that $f(x)=0$ are removed.
Function composition: If the functions $f$ and $g$ are continuous on their respective domains $D_{f}$ and $D_{g}$, then the composition $g\circ f$, defined by $(g\circ f)(x)=g(f(x))$, is continuous on $D_{f}\cap f^{-1}(D_{g})$, that is, the part of $D_{f}$ that is mapped by $f$ inside $D_{g}$.
The sine and cosine functions ($\sin x$ and $\cos x$) are continuous everywhere.
The exponential function $e^{x}$ is continuous everywhere.
The natural logarithm $\ln x$ is continuous on the domain formed by all positive real numbers $\{x\mid x>0\}$.
These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by $\operatorname {sinc} (0)=1$ and $\operatorname {sinc} (x)={\tfrac {\sin x}{x}}$ for $x\neq 0$. The above rules show immediately that the function is continuous for $x\neq 0$, but, for proving the continuity at $0$, one has to prove
$$\lim _{x\to 0}{\frac {\sin x}{x}}=1.$$
As this is true, one gets that the sinc function is a continuous function on all real numbers.
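A small numerical sketch (illustrative, not a proof) showing that values of sinc near 0 approach sinc(0) = 1, consistent with the limit above; the bound $|\sin x/x-1|\leq x^{2}/6$ used in the comment follows from the Taylor series of sine:

```python
import math

def sinc(x):
    """sinc(0) = 1; sinc(x) = sin(x)/x otherwise."""
    return 1.0 if x == 0 else math.sin(x) / x

# sin(x)/x -> 1 as x -> 0, so values near 0 approach sinc(0) = 1.
for x in [1e-1, 1e-3, 1e-6]:
    assert abs(sinc(x) - sinc(0)) < x * x  # since |sin(x)/x - 1| <= x^2/6
```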
=== Examples of discontinuous functions ===
An example of a discontinuous function is the Heaviside step function $H$, defined by
$$H(x)={\begin{cases}1&{\text{ if }}x\geq 0\\0&{\text{ if }}x<0\end{cases}}$$
Pick for instance $\varepsilon =1/2$. Then there is no $\delta$-neighborhood around $x=0$, i.e. no open interval $(-\delta ,\delta )$ with $\delta >0,$ that will force all the $H(x)$ values to be within the $\varepsilon$-neighborhood of $H(0)$, i.e. within $(1/2,\,3/2)$. Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
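A small sketch (illustrative) making this argument concrete: for $\varepsilon = 1/2$, every candidate $\delta$-neighborhood of 0 contains a negative point where H equals 0, which is at distance 1 from $H(0)=1$, so continuity fails:

```python
def H(x):
    """Heaviside step function."""
    return 1 if x >= 0 else 0

eps = 0.5
for delta in [1.0, 0.1, 1e-3, 1e-9]:
    x = -delta / 2                    # a point inside (-delta, delta)
    assert abs(H(x) - H(0)) >= eps    # the jump of size 1 exceeds eps
```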
Similarly, the signum or sign function
$$\operatorname {sgn}(x)={\begin{cases}\ 1&{\text{ if }}x>0\\\ 0&{\text{ if }}x=0\\-1&{\text{ if }}x<0\end{cases}}$$
is discontinuous at $x=0$ but continuous everywhere else. Yet another example: the function
$$f(x)={\begin{cases}\sin \left(x^{-2}\right)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}$$
is continuous everywhere apart from $x=0$.
Besides plausible continuities and discontinuities like the above, there are also functions with a behavior often coined pathological, for example, Thomae's function,
$$f(x)={\begin{cases}1&{\text{ if }}x=0\\{\frac {1}{q}}&{\text{ if }}x={\frac {p}{q}}{\text{ (in lowest terms) is a rational number}}\\0&{\text{ if }}x{\text{ is irrational}},\end{cases}}$$
which is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
$$D(x)={\begin{cases}0&{\text{ if }}x{\text{ is irrational }}(\in \mathbb {R} \setminus \mathbb {Q} )\\1&{\text{ if }}x{\text{ is rational }}(\in \mathbb {Q} )\end{cases}}$$
is nowhere continuous.
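Thomae's function can be evaluated exactly on rational inputs; a sketch using Python's exact rational arithmetic (irrational inputs cannot be represented this way, so the code only covers the rational branch):

```python
from fractions import Fraction

def thomae(x):
    """Thomae's function on rational inputs: f(0) = 1, f(p/q) = 1/q with
    p/q in lowest terms; irrational arguments (not representable as a
    Fraction) would map to 0."""
    x = Fraction(x)
    if x == 0:
        return Fraction(1)
    return Fraction(1, x.denominator)   # Fraction normalizes to lowest terms

assert thomae(Fraction(1, 2)) == Fraction(1, 2)
assert thomae(Fraction(2, 4)) == Fraction(1, 2)   # 2/4 reduces to 1/2
assert thomae(Fraction(3, 7)) == Fraction(1, 7)
```

Near an irrational point, rational approximations p/q require large q, so nearby values 1/q are small; this is the intuition behind continuity at the irrationals.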
=== Properties ===
==== A useful lemma ====
Let $f(x)$ be a function that is continuous at a point $x_{0},$ and let $y_{0}$ be a value such that $f\left(x_{0}\right)\neq y_{0}.$ Then $f(x)\neq y_{0}$ throughout some neighbourhood of $x_{0}.$
Proof: By the definition of continuity, take $\varepsilon ={\frac {|y_{0}-f(x_{0})|}{2}}>0$; then there exists $\delta >0$ such that
$$\left|f(x)-f(x_{0})\right|<{\frac {\left|y_{0}-f(x_{0})\right|}{2}}\quad {\text{ whenever }}\quad |x-x_{0}|<\delta .$$
Suppose there is a point in the neighbourhood $|x-x_{0}|<\delta$ for which $f(x)=y_{0};$ then we have the contradiction
$$\left|f(x_{0})-y_{0}\right|<{\frac {\left|f(x_{0})-y_{0}\right|}{2}}.$$
==== Intermediate value theorem ====
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval $[a,b],$ and k is some number between $f(a)$ and $f(b),$ then there is some number $c\in [a,b]$ such that $f(c)=k.$
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on $[a,b]$ and $f(a)$ and $f(b)$ differ in sign, then, at some point $c\in [a,b],$ $f(c)$ must equal zero.
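The sign-change consequence is the basis of the bisection method: repeatedly halving a bracketing interval homes in on a zero whose existence the intermediate value theorem guarantees. A minimal sketch:

```python
def bisect_root(f, a, b, tol=1e-12):
    """If f is continuous on [a, b] and f(a), f(b) differ in sign, the
    intermediate value theorem guarantees a zero; bisection locates it."""
    fa = f(a)
    assert fa * f(b) < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:    # sign change (or zero) in [a, m]
            b = m
        else:                 # sign change in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2

root = bisect_root(lambda x: x * x - 2, 0.0, 2.0)
assert abs(root - 2 ** 0.5) < 1e-9   # finds sqrt(2)
```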
==== Extreme value theorem ====
The extreme value theorem states that if a function f is defined on a closed interval $[a,b]$ (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists $c\in [a,b]$ with $f(c)\geq f(x)$ for all $x\in [a,b].$
The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval $(a,b)$ (or any set that is not both closed and bounded), as, for example, the continuous function $f(x)={\frac {1}{x}},$ defined on the open interval (0,1), does not attain a maximum, being unbounded above.
==== Relation to differentiability and integrability ====
Every differentiable function $f:(a,b)\to \mathbb {R}$ is continuous, as can be shown. The converse does not hold: for example, the absolute value function
$$f(x)=|x|={\begin{cases}\ x&{\text{ if }}x\geq 0\\-x&{\text{ if }}x<0\end{cases}}$$
is everywhere continuous. However, it is not differentiable at $x=0$ (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted $C^{1}((a,b)).$ More generally, the set of functions $f:\Omega \to \mathbb {R}$ (from an open interval (or open subset of $\mathbb {R}$) $\Omega$ to the reals) such that f is $n$ times differentiable and such that the $n$-th derivative of f is continuous is denoted $C^{n}(\Omega ).$ See differentiability class. In the field of computer graphics, properties related (but not identical) to $C^{0},C^{1},C^{2}$ are sometimes called $G^{0}$ (continuity of position), $G^{1}$ (continuity of tangency), and $G^{2}$ (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function $f:[a,b]\to \mathbb {R}$ is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
==== Pointwise and uniform limits ====
Given a sequence $f_{1},f_{2},\dotsc :I\to \mathbb {R}$ of functions such that the limit $f(x):=\lim _{n\to \infty }f_{n}(x)$ exists for all $x\in I,$ the resulting function $f(x)$ is referred to as the pointwise limit of the sequence of functions $\left(f_{n}\right)_{n\in N}.$ The pointwise limit function need not be continuous, even if all functions $f_{n}$ are continuous, as the animation at the right shows. However, f is continuous if all functions $f_{n}$ are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
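The classical example of a discontinuous pointwise limit is $f_{n}(x)=x^{n}$ on $[0,1]$: each $f_{n}$ is continuous, yet the limit is 0 for $x<1$ and 1 at $x=1$. A numerical sketch (illustrative):

```python
# f_n(x) = x**n on [0, 1]: every f_n is continuous, but the pointwise
# limit is 0 for x < 1 and jumps to 1 at x = 1.
def f(n, x):
    return x ** n

def pointwise_limit(x, n=10_000):
    """Approximate the pointwise limit by taking a large n."""
    return f(n, x)

assert pointwise_limit(0.5) < 1e-9   # limit 0 for x < 1
assert pointwise_limit(1.0) == 1.0   # limit 1 at x = 1
# The convergence is not uniform: sup |f_n - limit| stays 1 for every n,
# which is why the uniform convergence theorem does not apply here.
```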
=== Directional continuity ===
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number $\varepsilon >0$ however small, there exists some number $\delta >0$ such that for all x in the domain with $c<x<c+\delta ,$ the value of $f(x)$ will satisfy $|f(x)-f(c)|<\varepsilon .$ This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with $c-\delta <x<c$ yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
=== Semicontinuity ===
A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any $\varepsilon >0,$ there exists some number $\delta >0$ such that for all x in the domain with $|x-c|<\delta ,$ the value of $f(x)$ satisfies $f(x)\geq f(c)-\varepsilon .$ The reverse condition is upper semi-continuity.
== Continuous functions between metric spaces ==
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set $X$ equipped with a function (called metric) $d_{X},$ that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function $d_{X}:X\times X\to \mathbb {R}$ that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces $\left(X,d_{X}\right)$ and $\left(Y,d_{Y}\right)$ and a function $f:X\to Y$ then $f$ is continuous at the point $c\in X$ (with respect to the given metrics) if for any positive real number $\varepsilon >0,$ there exists a positive real number $\delta >0$ such that all $x\in X$ satisfying $d_{X}(x,c)<\delta$ will also satisfy $d_{Y}(f(x),f(c))<\varepsilon .$
As in the case of real functions above, this is equivalent to the condition that for every sequence $\left(x_{n}\right)$ in $X$ with limit $\lim x_{n}=c,$ we have $\lim f\left(x_{n}\right)=f(c).$
The latter condition can be weakened as follows: $f$ is continuous at the point $c$ if and only if for every convergent sequence $\left(x_{n}\right)$ in $X$ with limit $c$, the sequence $\left(f\left(x_{n}\right)\right)$ is a Cauchy sequence, and $c$ is in the domain of $f$.
The set of points at which a function between metric spaces is continuous is a $G_{\delta }$ set – this follows from the $\varepsilon$-$\delta$ definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator $T:V\to W$ between normed vector spaces $V$ and $W$ (which are vector spaces equipped with a compatible norm, denoted $\|x\|$) is continuous if and only if it is bounded, that is, there is a constant $K$ such that $\|T(x)\|\leq K\|x\|$ for all $x\in V.$
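A finite-dimensional sketch of boundedness (illustrative; the matrix and test vectors are made up): for a matrix operator $T$ on $\mathbb{R}^{2}$, the bound $\|Tx\|\leq K\|x\|$ holds with $K$ any upper estimate of the operator norm, such as the Frobenius norm:

```python
import math

def matvec(T, x):
    """Apply the linear operator given by matrix T to vector x."""
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

def norm(v):
    """Euclidean norm."""
    return math.sqrt(sum(c * c for c in v))

T = [[2.0, 1.0], [0.0, 3.0]]
# The Frobenius norm bounds the operator norm from above, so it serves as K.
K = math.sqrt(sum(c * c for row in T for c in row))

for x in [[1.0, 0.0], [0.3, -4.0], [5.0, 5.0]]:
    assert norm(matvec(T, x)) <= K * norm(x) + 1e-12
```

Every linear map on a finite-dimensional normed space is bounded; unbounded (hence discontinuous) linear operators arise only in infinite dimensions.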
=== Uniform, Hölder and Lipschitz continuity ===
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way $\delta$ depends on $\varepsilon$ and c in the definition above. Intuitively, a function f as above is uniformly continuous if the $\delta$ does not depend on the point c. More precisely, it is required that for every real number $\varepsilon >0$ there exists $\delta >0$ such that for every $c,b\in X$ with $d_{X}(b,c)<\delta ,$ we have that $d_{Y}(f(b),f(c))<\varepsilon .$ Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all $b,c\in X,$ the inequality
$$d_{Y}(f(b),f(c))\leq K\cdot (d_{X}(b,c))^{\alpha }$$
holds. Any Hölder continuous function is uniformly continuous. The particular case $\alpha =1$ is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality
$$d_{Y}(f(b),f(c))\leq K\cdot d_{X}(b,c)$$
holds for any $b,c\in X.$
The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
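As a concrete illustration (not from the article): sine is Lipschitz continuous with constant $K=1$, since $|\sin'|=|\cos|\leq 1$, and is therefore uniformly continuous on the whole real line. A quick numerical check of the inequality:

```python
import math
import random

def lipschitz_holds(f, K, b, c):
    """Check d(f(b), f(c)) <= K * d(b, c), with a tiny float tolerance."""
    return abs(f(b) - f(c)) <= K * abs(b - c) + 1e-15

# sin satisfies the Lipschitz condition with K = 1 on random sample pairs.
random.seed(1)
for _ in range(1000):
    b, c = random.uniform(-10, 10), random.uniform(-10, 10)
    assert lipschitz_holds(math.sin, 1.0, b, c)
```

By contrast, $x\mapsto x^{2}$ fails this condition with $K=1$ on pairs far from the origin, since its slope grows without bound.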
== Continuous functions between topological spaces ==
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function $f:X\to Y$ between two topological spaces X and Y is continuous if for every open set $V\subseteq Y,$ the inverse image
$$f^{-1}(V)=\{x\in X\;|\;f(x)\in V\}$$
is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology $T_{X}$), but the continuity of f depends on the topologies used on X and Y.
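For finite spaces the open-set definition can be checked exhaustively. A hypothetical sketch in Python (the spaces, topologies, and maps here are made up for illustration):

```python
# X = {1, 2, 3} with opens {}, {1}, X; Y = {'a', 'b'} with opens {}, {'a'}, Y.
X_points = {1, 2, 3}
X_open = [frozenset(), frozenset({1}), frozenset({1, 2, 3})]
Y_open = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]

def preimage(f, V):
    """Inverse image f^{-1}(V) of a subset V of Y, for f given as a dict."""
    return frozenset(x for x in X_points if f[x] in V)

def is_continuous(f):
    """f is continuous iff the preimage of every open set of Y is open in X."""
    return all(preimage(f, V) in X_open for V in Y_open)

f = {1: 'a', 2: 'b', 3: 'b'}   # preimage of {'a'} is {1}: open, so continuous
g = {1: 'b', 2: 'a', 3: 'b'}   # preimage of {'a'} is {2}: not open
assert is_continuous(f)
assert not is_continuous(g)
```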
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions $f:X\to T$ to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
=== Continuity at a point ===
The translation in the language of neighborhoods of the $(\varepsilon ,\delta )$-definition of continuity leads to the following definition of the continuity at a point:
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and $f^{-1}(V)$ is the largest subset U of X such that $f(U)\subseteq V,$ this definition may be simplified into:
As an open set is a set that is a neighborhood of all its points, a function $f:X\to Y$ is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above $\varepsilon$-$\delta$ definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given $x\in X,$ a map $f:X\to Y$ is continuous at $x$ if and only if whenever ${\mathcal {B}}$ is a filter on $X$ that converges to $x$ in $X,$ which is expressed by writing ${\mathcal {B}}\to x,$ then necessarily $f({\mathcal {B}})\to f(x)$ in $Y.$ If ${\mathcal {N}}(x)$ denotes the neighborhood filter at $x$ then $f:X\to Y$ is continuous at $x$ if and only if $f({\mathcal {N}}(x))\to f(x)$ in $Y.$ Moreover, this happens if and only if the prefilter $f({\mathcal {N}}(x))$ is a filter base for the neighborhood filter of $f(x)$ in $Y.$
=== Alternative definitions ===
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
==== Sequences and nets ====
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function $f:X\to Y$ is sequentially continuous if whenever a sequence $\left(x_{n}\right)$ in $X$ converges to a limit $x,$ the sequence $\left(f\left(x_{n}\right)\right)$ converges to $f(x).$ Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If $X$ is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if $X$ is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
For instance, consider the case of real-valued functions of one real variable: a function $f:A\subseteq \mathbb {R} \to \mathbb {R}$ is continuous at a point of its domain if and only if it is sequentially continuous at that point.
==== Closure operator and interior operator definitions ====
In terms of the interior and closure operators, we have the following equivalences,
If we declare that a point $x$ is close to a subset $A\subseteq X$ if $x\in \operatorname {cl} _{X}A,$ then this terminology allows for a plain English description of continuity: $f$ is continuous if and only if for every subset $A\subseteq X,$ $f$ maps points that are close to $A$ to points that are close to $f(A).$ Similarly, $f$ is continuous at a fixed given point $x\in X$ if and only if whenever $x$ is close to a subset $A\subseteq X,$ then $f(x)$ is close to $f(A).$
Instead of specifying topological spaces by their open subsets, any topology on $X$ can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset $A$ of a topological space $X$ to its topological closure $\operatorname {cl} _{X}A$ satisfies the Kuratowski closure axioms. Conversely, for any closure operator $A\mapsto \operatorname {cl} A$ there exists a unique topology $\tau$ on $X$ (specifically, $\tau :=\{X\setminus \operatorname {cl} A:A\subseteq X\}$) such that for every subset $A\subseteq X,$ $\operatorname {cl} A$ is equal to the topological closure $\operatorname {cl} _{(X,\tau )}A$ of $A$ in $(X,\tau ).$ If the sets $X$ and $Y$ are each associated with closure operators (both denoted by $\operatorname {cl}$) then a map $f:X\to Y$ is continuous if and only if $f(\operatorname {cl} A)\subseteq \operatorname {cl} (f(A))$ for every subset $A\subseteq X.$
Similarly, the map that sends a subset $A$ of $X$ to its topological interior $\operatorname {int} _{X}A$ defines an interior operator. Conversely, any interior operator $A\mapsto \operatorname {int} A$ induces a unique topology $\tau$ on $X$ (specifically, $\tau :=\{\operatorname {int} A:A\subseteq X\}$) such that for every $A\subseteq X,$ $\operatorname {int} A$ is equal to the topological interior $\operatorname {int} _{(X,\tau )}A$ of $A$ in $(X,\tau ).$ If the sets $X$ and $Y$ are each associated with interior operators (both denoted by $\operatorname {int}$) then a map $f:X\to Y$ is continuous if and only if $f^{-1}(\operatorname {int} B)\subseteq \operatorname {int} \left(f^{-1}(B)\right)$ for every subset $B\subseteq Y.$
==== Filters and prefilters ====
Continuity can also be characterized in terms of filters. A function $f:X\to Y$ is continuous if and only if whenever a filter ${\mathcal {B}}$ on $X$ converges in $X$ to a point $x\in X,$ then the prefilter $f({\mathcal {B}})$ converges in $Y$ to $f(x).$ This characterization remains true if the word "filter" is replaced by "prefilter."
=== Properties ===
If $f:X\to Y$ and $g:Y\to Z$ are continuous, then so is the composition $g\circ f:X\to Z.$
If $f:X\to Y$ is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology $\tau _{1}$ is said to be coarser than another topology $\tau _{2}$ (notation: $\tau _{1}\subseteq \tau _{2}$) if every open subset with respect to $\tau _{1}$ is also open with respect to $\tau _{2}.$ Then, the identity map $\operatorname {id} _{X}:\left(X,\tau _{2}\right)\to \left(X,\tau _{1}\right)$ is continuous if and only if $\tau _{1}\subseteq \tau _{2}$ (see also comparison of topologies). More generally, a continuous function $\left(X,\tau _{X}\right)\to \left(Y,\tau _{Y}\right)$ stays continuous if the topology $\tau _{Y}$ is replaced by a coarser topology and/or $\tau _{X}$ is replaced by a finer topology.
=== Homeomorphisms ===
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function $f^{-1}$ need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
=== Defining topologies via continuous functions ===
Given a function $f:X\to S,$ where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which $f^{-1}(A)$ is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that $A=f^{-1}(U)$ for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
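Dually, on a finite set the initial topology is just the collection of preimages of open sets. A sketch with an invented map into the two-point Sierpiński space (the names and sets are illustrative, not from the article):

```python
# Initial topology on S induced by g : S -> X: its open sets are exactly
# the preimages g^{-1}(U) of the open sets U of X.
X = {0, 1}
tau_X = {frozenset(), frozenset({0}), frozenset({0, 1})}   # Sierpinski space
S = {'p', 'q', 'r'}
g = {'p': 0, 'q': 0, 'r': 1}

initial_topology = {frozenset(s for s in S if g[s] in U) for U in tau_X}
# Three open sets: {}, {'p', 'q'}, and all of S -- the coarsest topology
# on S making g continuous.
```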
A topology on a set S is uniquely determined by the class of all continuous functions $S\to X$ into all topological spaces X. Dually, a similar idea can be applied to maps $X\to S.$
== Related notions ==
If $f:S\to Y$ is a continuous function from some subset $S$ of a topological space $X$ then a continuous extension of $f$ to $X$ is any continuous function $F:X\to Y$ such that $F(s)=f(s)$ for every $s\in S,$ a condition often written as $f=F{\big \vert }_{S}.$ In words, it is any continuous function $F:X\to Y$ that restricts to $f$ on $S.$
This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If $f:S\to Y$ is not continuous, then it cannot have a continuous extension. If $Y$ is a Hausdorff space and $S$ is a dense subset of $X$ then a continuous extension of $f:S\to Y$ to $X,$ if one exists, will be unique. The Blumberg theorem states that if $f:\mathbb {R} \to \mathbb {R}$ is an arbitrary function then there exists a dense subset $D$ of $\mathbb {R}$ such that the restriction $f{\big \vert }_{D}:D\to \mathbb {R}$ is continuous; in other words, every function $\mathbb {R} \to \mathbb {R}$ can be restricted to some dense subset on which it is continuous.
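The Dirichlet function (the indicator of the rationals) is nowhere continuous, yet its restriction to the dense set ℚ is constant, hence continuous. The sketch below is only a crude illustration, not a proof: rationality is modeled by the value's type, an assumption of the example rather than anything in the theorem.

```python
from fractions import Fraction

def dirichlet(x):
    # 1 on (type-modeled) rationals, 0 otherwise; nowhere continuous on R.
    return 1 if isinstance(x, Fraction) else 0

# On the dense subset Q the function is constantly 1, hence continuous there.
samples = [Fraction(p, q) for p in range(-5, 6) for q in range(1, 6)]
values = {dirichlet(x) for x in samples}
print(values)  # {1}
```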
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function $f:X\to Y$ between particular types of partially ordered sets $X$ and $Y$ is continuous if for each directed subset $A$ of $X,$ we have $\sup f(A)=f(\sup A).$ Here $\sup$ is the supremum with respect to the orderings in $X$ and $Y,$ respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
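In a finite chain every nonempty subset is directed and its supremum is its maximum, so order-continuity can be checked exhaustively. A toy check (the chain and the map are invented for illustration):

```python
from itertools import combinations

chain = range(8)
f = lambda x: x // 2     # an order-preserving map on the finite chain 0..7

# Order-continuity here reduces to: sup f(A) == f(sup A), i.e.
# max f(A) == f(max A), for every nonempty (hence directed) subset A.
order_continuous = all(
    max(f(a) for a in A) == f(max(A))
    for r in range(1, len(chain) + 1)
    for A in combinations(chain, r)
)
print(order_continuous)  # True: monotone maps between chains preserve maxima
```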
In category theory, a functor $F:{\mathcal {C}}\to {\mathcal {D}}$ between two categories is called continuous if it commutes with small limits. That is to say, $\varprojlim _{i\in I}F(C_{i})\cong F\left(\varprojlim _{i\in I}C_{i}\right)$ for any small (that is, indexed by a set $I,$ as opposed to a class) diagram of objects in ${\mathcal {C}}$.
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.
In measure theory, a function $f:E\to \mathbb {R} ^{k}$ defined on a Lebesgue measurable set $E\subseteq \mathbb {R} ^{n}$ is called approximately continuous at a point $x_{0}\in E$ if the approximate limit of $f$ at $x_{0}$ exists and equals $f(x_{0})$. This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov–Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
== See also ==
Direction-preserving function - an analog of a continuous function in discrete spaces.
== References ==
== Bibliography ==
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
"Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Continuity_(topology) |
In mathematics, combinatorial topology was an older name for algebraic topology, dating from the time when topological invariants of spaces (for example the Betti numbers) were regarded as derived from combinatorial decompositions of spaces, such as decomposition into simplicial complexes. After the proof of the simplicial approximation theorem this approach provided rigour.
The change of name reflected the move to organise topological classes such as cycles-modulo-boundaries explicitly into abelian groups. This point of view is often attributed to Emmy Noether, and so the change of title may reflect her influence. The transition is also attributed to the work of Heinz Hopf, who was influenced by Noether, and to Leopold Vietoris and Walther Mayer, who independently defined homology.
A fairly precise date can be supplied in the internal notes of the Bourbaki group. While this kind of topology was still "combinatorial" in 1942, it had become "algebraic" by 1944. This corresponds also to the period where homological algebra and category theory were introduced for the study of topological spaces, and largely supplanted combinatorial methods.
More recently, the term combinatorial topology has been revived for investigations that treat topological objects as composed of pieces, as in the older combinatorial topology; this approach has again proved useful.
Azriel Rosenfeld (1973) proposed digital topology for a type of image processing that can be considered as a new development of combinatorial topology. The digital forms of the Euler characteristic theorem and the Gauss–Bonnet theorem were obtained by Li Chen and Yongwu Rong. A 2D grid cell topology already appeared in the Alexandrov–Hopf book Topologie I (1935).
Gottfried Wilhelm Leibniz had envisioned a form of combinatorial topology as early as 1679 in his work Characteristica Geometrica.
== See also ==
Hauptvermutung
Topological combinatorics
Topological graph theory
== Notes ==
== References ==
Alexandrov, Pavel S. (1956), Combinatorial Topology Vols. I, II, III, translated by Horace Komm, Graylock Press, MR 1643155
Hilton, Peter (1988), "A Brief, Subjective History of Homology and Homotopy Theory in This Century", Mathematics Magazine, 60 (5), Mathematical Association of America: 282–291, doi:10.1080/0025570X.1988.11977391, JSTOR 2689545
Teicher, Mina, ed. (1999), The Heritage of Emmy Noether, Israel Mathematical Conference Proceedings, Bar-Ilan University/American Mathematical Society/Oxford University Press, ISBN 978-0-19-851045-1, OCLC 223099225
Novikov, Sergei P. (2001) [1994], "Combinatorial topology", Encyclopedia of Mathematics, EMS Press | Wikipedia/Combinatorial_topology |
In mathematics, higher category theory is the part of category theory at a higher order, which means that some equalities are replaced by explicit arrows in order to be able to explicitly study the structure behind those equalities. Higher category theory is often applied in algebraic topology (especially in homotopy theory), where one studies algebraic invariants of spaces, such as the fundamental weak ∞-groupoid.
In higher category theory, the concept of higher categorical structures, such as (∞-categories), allows for a more robust treatment of homotopy theory, enabling one to capture finer homotopical distinctions, such as differentiating two topological spaces that have the same fundamental group but differ in their higher homotopy groups. This approach is particularly valuable when dealing with spaces with intricate topological features, such as the Eilenberg-MacLane space.
== Strict higher categories ==
An ordinary category has objects and morphisms, which are called 1-morphisms in the context of higher category theory. A 2-category generalizes this by also including 2-morphisms between the 1-morphisms. Continuing this up to n-morphisms between (n − 1)-morphisms gives an n-category.
Just as the category known as Cat, which is the category of small categories and functors, is actually a 2-category with natural transformations as its 2-morphisms, the category n-Cat of (small) n-categories is actually an (n + 1)-category.
An n-category is defined by induction on n by:
A 0-category is a set,
An (n + 1)-category is a category enriched over the category n-Cat.
So a 1-category is just a (locally small) category.
The monoidal structure of Set is the one given by the cartesian product as tensor and a singleton as unit. In fact any category with finite products can be given a monoidal structure. The recursive construction of n-Cat works fine because if a category C has finite products, the category of C-enriched categories has finite products too.
While this concept is too strict for some purposes, for example in homotopy theory, where "weak" structures arise in the form of higher categories, strict cubical higher homotopy groupoids have also arisen as giving a new foundation for algebraic topology on the border between homology and homotopy theory; see the article Nonabelian algebraic topology, referenced in the book below.
== Weak higher categories ==
In weak n-categories, the associativity and identity conditions are no longer strict (that is, they are not given by equalities), but rather are satisfied up to an isomorphism of the next level. An example in topology is the composition of paths, where the identity and association conditions hold only up to reparameterization, and hence up to homotopy, which is the 2-isomorphism for this 2-category. These n-isomorphisms must behave well between hom-sets, and expressing this coherence is the difficulty in the definition of weak n-categories. Weak 2-categories, also called bicategories, were the first to be defined explicitly. A particularity of these is that a bicategory with one object is exactly a monoidal category, so that bicategories can be said to be "monoidal categories with many objects." Weak 3-categories, also called tricategories, and higher-level generalizations are increasingly harder to define explicitly. Several definitions have been given, and telling when they are equivalent, and in what sense, has become a new object of study in category theory.
== Quasi-categories ==
Weak Kan complexes, or quasi-categories, are simplicial sets satisfying a weak version of the Kan condition. André Joyal showed that they are a good foundation for higher category theory by constructing the Joyal model structure on the category of simplicial sets, whose fibrant objects are exactly quasi-categories. In 2009, the theory was systematized further by Jacob Lurie, who simply calls them infinity categories, though the latter term is also a generic term for all models of (infinity, k)-categories for any k.
== Simplicially enriched categories ==
Simplicially enriched categories, or simplicial categories, are categories enriched over simplicial sets. However, when we look at them as a model for (infinity, 1)-categories, then many categorical notions (e.g., limits) do not agree with the corresponding notions in the sense of enriched categories. The same for other enriched models like topologically enriched categories.
== Topologically enriched categories ==
Topologically enriched categories (sometimes simply called topological categories) are categories enriched over some convenient category of topological spaces, e.g. the category of compactly generated Hausdorff spaces.
== Segal categories ==
These are models of higher categories introduced by Hirschowitz and Simpson in 1998, partly inspired by results of Graeme Segal in 1974.
== See also ==
Higher-dimensional algebra
General abstract nonsense
Categorification
Coherency (homotopy theory)
== Notes ==
== References ==
Baez, John C.; Dolan, James (1998). "Categorification". arXiv:math/9802029.
Leinster, Tom (2004). Higher Operads, Higher Categories. Cambridge University Press. arXiv:math.CT/0305049. ISBN 0-521-53215-9.
Simpson, Carlos (2010). "Homotopy theory of higher categories". arXiv:1001.4071 [math.CT]. Draft of a book. Alternative PDF with hyperlinks.
Lurie, Jacob (2009). Higher Topos Theory. Princeton University Press. arXiv:math.CT/0608040. ISBN 978-0-691-14048-3. As PDF.
nLab, the collective and open wiki notebook project on higher category theory and applications in physics, mathematics and philosophy
Joyal's Catlab, a wiki dedicated to polished expositions of categorical and higher categorical mathematics with proofs
Brown, Ronald; Higgins, Philip J.; Sivera, Rafael (2011). Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids. Tracts in Mathematics. Vol. 15. European Mathematical Society. ISBN 978-3-03719-083-8.
== Further reading ==
John C. Baez and Michael Shulman, Lectures on 𝑛-categories and cohomology, Towards higher categories, IMA Vol. Math. Appl., vol.152, Springer, New York, 2010, pp. 1–68. MR2664619
Notes on tetracategories by Todd Trimble.
https://ncatlab.org/nlab/show/higher+category+theory
https://ncatlab.org/nlab/show/geometric+shape+for+higher+structures
== External links ==
Baez, John (24 February 1996). "Week 73: Tale of n-Categories".
The n-Category Cafe — a group blog devoted to higher category theory.
Leinster, Tom (8 March 2010). "A Perspective on Higher Category Theory". | Wikipedia/Higher_category_theory |
In algebraic topology, the cellular approximation theorem states that a map between CW-complexes can always be taken to be of a specific type. Concretely, if X and Y are CW-complexes, and f : X → Y is a continuous map, then f is said to be cellular, if f takes the n-skeleton of X to the n-skeleton of Y for all n, i.e. if
$f(X^{n})\subseteq Y^{n}$
for all n. The content of the cellular approximation theorem is then that any continuous map f : X → Y between CW-complexes X and Y is homotopic to a cellular map, and if f is already cellular on a subcomplex A of X, then we can furthermore choose the homotopy to be stationary on A. From an algebraic topological viewpoint, any map between CW-complexes can thus be taken to be cellular.
== Idea of proof ==
The proof can be given by induction on n, with the statement that f is cellular on the skeleton Xn. For the base case n = 0, notice that every path-component of Y must contain a 0-cell. The image under f of a 0-cell of X can thus be connected to a 0-cell of Y by a path, and this gives a homotopy from f to a map which is cellular on the 0-skeleton of X.
Assume inductively that f is cellular on the (n − 1)-skeleton of X, and let en be an n-cell of X. The closure of en is compact in X, being the image of the characteristic map of the cell, and hence the image of the closure of en under f is also compact in Y. It is a general result on CW-complexes that any compact subspace of a CW-complex meets (that is, intersects non-trivially) only finitely many cells of the complex. Thus f(en) meets at most finitely many cells of Y, so we can take $e^{k}\subseteq Y$ to be a cell of highest dimension meeting f(en). If $k\leq n$, the map f is already cellular on en, since in this case only cells of the n-skeleton of Y meet f(en), so we may assume that k > n. It is then a technical, non-trivial result (see Hatcher) that the restriction of f to $X^{n-1}\cup e^{n}$ can be homotoped relative to Xn−1 to a map missing a point p ∈ ek. Since Yk − {p} deformation retracts onto the subspace Yk − ek, we can further homotope the restriction of f to $X^{n-1}\cup e^{n}$ to a map, say g, with the property that g(en) misses the cell ek of Y, still relative to Xn−1. Since f(en) met only finitely many cells of Y to begin with, we can repeat this process finitely many times to make $f(e^{n})$ miss all cells of Y of dimension larger than n.
We repeat this process for every n-cell of X, fixing cells of the subcomplex A on which f is already cellular, and we thus obtain a homotopy (relative to the (n − 1)-skeleton of X and the n-cells of A) of the restriction of f to Xn to a map cellular on all cells of X of dimension at most n. Using then the homotopy extension property to extend this to a homotopy on all of X, and patching these homotopies together, will finish the proof. For details, consult Hatcher.
== Applications ==
=== Some homotopy groups ===
The cellular approximation theorem can be used to immediately calculate some homotopy groups. In particular, if $n<k,$ then $\pi _{n}(S^{k})=0.$ Give $S^{n}$ and $S^{k}$ their canonical CW-structure, with one 0-cell each, and with one n-cell for $S^{n}$ and one k-cell for $S^{k}.$ Any base-point preserving map $f\colon S^{n}\to S^{k}$ is then homotopic to a map whose image lies in the n-skeleton of $S^{k},$ which consists of the base point only. That is, any such map is nullhomotopic.
=== Cellular approximation for pairs ===
Let f : (X,A) → (Y,B) be a map of CW-pairs, that is, f is a map from X to Y, and the image of $A\subseteq X$ under f sits inside B. Then f is homotopic to a cellular map (X,A) → (Y,B). To see this, restrict f to A and use cellular approximation to obtain a homotopy of f to a cellular map on A. Use the homotopy extension property to extend this homotopy to all of X, and apply cellular approximation again to obtain a map cellular on X, but without violating the cellular property on A.
As a consequence, a CW-pair (X,A) is n-connected if all cells of $X-A$ have dimension strictly greater than n: if $i\leq n$, then any map $(D^{i},\partial D^{i})\to (X,A)$ is homotopic to a cellular map of pairs, and since the n-skeleton of X sits inside A, any such map is homotopic to a map whose image is in A, and hence it is 0 in the relative homotopy group $\pi _{i}(X,A)$.
We have in particular that $(X,X^{n})$ is n-connected, so it follows from the long exact sequence of homotopy groups for the pair $(X,X^{n})$ that we have isomorphisms $\pi _{i}(X^{n})\to \pi _{i}(X)$ for all $i<n$ and a surjection $\pi _{n}(X^{n})\to \pi _{n}(X)$.
=== CW approximation ===
For every space X one can construct a CW complex Z and a weak homotopy equivalence $f\colon Z\to X$, which is called a CW approximation to X. A CW approximation, being a weak homotopy equivalence, induces isomorphisms on homology and cohomology groups of X. Thus one can often use CW approximation to reduce a general statement to a simpler version that only concerns CW complexes.
The CW approximation is constructed by induction on the skeleta $Z_{i}$ of $Z$, so that the maps $(f_{i})_{*}\colon \pi _{k}(Z_{i})\to \pi _{k}(X)$ are isomorphisms for $k<i$ and are onto for $k=i$ (for any basepoint). Then $Z_{i+1}$ is built from $Z_{i}$ by attaching (i+1)-cells that (for all basepoints)
are attached by the mappings $S^{i}\to Z_{i}$ that generate the kernel of $\pi _{i}(Z_{i})\to \pi _{i}(X)$ (and are mapped to X by the contraction of the corresponding spheroids), or
are attached by constant mappings and are mapped to X to generate $\pi _{i+1}(X)$ (or $\pi _{i+1}(X)/(f_{i})_{*}(\pi _{i+1}(Z_{i}))$).
The cellular approximation theorem then ensures that attaching (i+1)-cells does not affect $\pi _{k}(Z_{i}){\stackrel {\cong }{\to }}\pi _{k}(X)$ for $k<i$, while $\pi _{i}(Z_{i})$ gets factored by the classes of the attaching maps $S^{i}\to Z_{i}$ of these cells, giving $\pi _{i}(Z_{i+1}){\stackrel {\cong }{\to }}\pi _{i}(X)$. Surjectivity of $\pi _{i+1}(Z_{i+1})\to \pi _{i+1}(X)$ is evident from the second step of the construction.
== References ==
Hatcher, Allen (2005), Algebraic topology, Cambridge University Press, ISBN 978-0-521-79540-1 | Wikipedia/Cellular_approximation_theorem |
In the mathematical field of geometric topology, the Poincaré conjecture (UK: , US: , French: [pwɛ̃kaʁe]) is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space.
Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century.
The eventual proof built upon Richard S. Hamilton's program of using the Ricci flow to solve the problem. By developing a number of new techniques and results in the theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In papers posted to the arXiv repository in 2002 and 2003, Perelman presented his work proving the Poincaré conjecture (and the more powerful geometrization conjecture of William Thurston). Over the next several years, several mathematicians studied his papers and produced detailed formulations of his work.
Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize in 2011 and the Leroy P. Steele Prize for Seminal Contribution to Research in 2009. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006. The Clay Mathematics Institute, having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$1 million in 2010 for the conjecture's resolution. He declined the award, saying that Hamilton's contribution had been equal to his own.
== Overview ==
The Poincaré conjecture was a mathematical problem in the field of geometric topology. In terms of the vocabulary of that field, it says the following:
Poincaré conjecture. Every three-dimensional topological manifold which is closed, connected, and has trivial fundamental group is homeomorphic to the three-dimensional sphere.
Familiar shapes, such as the surface of a ball (which is known in mathematics as the two-dimensional sphere) or of a torus, are two-dimensional. The surface of a ball has trivial fundamental group, meaning that any loop drawn on the surface can be continuously deformed to a single point. By contrast, the surface of a torus has nontrivial fundamental group, as there are loops on the surface which cannot be so deformed. Both are topological manifolds which are closed (meaning that they have no boundary and take up a finite region of space) and connected (meaning that they consist of a single piece). Two closed manifolds are said to be homeomorphic when it is possible for the points of one to be reallocated to the other in a continuous way. Because the (non)triviality of the fundamental group is known to be invariant under homeomorphism, it follows that the two-dimensional sphere and torus are not homeomorphic.
The two-dimensional analogue of the Poincaré conjecture says that any two-dimensional topological manifold which is closed and connected but non-homeomorphic to the two-dimensional sphere must possess a loop which cannot be continuously contracted to a point. (This is illustrated by the example of the torus, as above.) This analogue is known to be true via the classification of closed and connected two-dimensional topological manifolds, which was understood in various forms since the 1860s. In higher dimensions, the closed and connected topological manifolds do not have a straightforward classification, precluding an easy resolution of the Poincaré conjecture.
== History ==
=== Poincaré's question ===
In the 1800s, Bernhard Riemann and Enrico Betti initiated the study of topological invariants of manifolds. They introduced the Betti numbers, which associate to any manifold a list of nonnegative integers. Riemann showed that a closed connected two-dimensional manifold is fully characterized by its Betti numbers. As part of his 1895 paper Analysis Situs (announced in 1892), Poincaré showed that Riemann's result does not extend to higher dimensions. To do this he introduced the fundamental group as a novel topological invariant, and was able to exhibit examples of three-dimensional manifolds which have the same Betti numbers but distinct fundamental groups. He posed the question of whether the fundamental group is sufficient to topologically characterize a manifold (of given dimension), although he made no attempt to pursue the answer, saying only that it would "demand lengthy and difficult study".
The primary purpose of Poincaré's paper was the interpretation of the Betti numbers in terms of his newly-introduced homology groups, along with the Poincaré duality theorem on the symmetry of Betti numbers. Following criticism of the completeness of his arguments, he released a number of subsequent "supplements" to enhance and correct his work. The closing remark of his second supplement, published in 1900, said:
In order to avoid making this work too prolonged, I confine myself to stating the following theorem, the proof of which will require further developments:
Each polyhedron which has all its Betti numbers equal to 1 and all its tables Tq orientable is simply connected, i.e., homeomorphic to a hypersphere.
(In a modern language, taking note of the fact that Poincaré is using the terminology of simple-connectedness in an unusual way, this says that a closed connected oriented manifold with the homology of a sphere must be homeomorphic to a sphere.) This modified his negative generalization of Riemann's work in two ways. Firstly, he was now making use of the full homology groups and not only the Betti numbers. Secondly, he narrowed the scope of the problem from asking if an arbitrary manifold is characterized by topological invariants to asking whether the sphere can be so characterized.
However, after publication he found his announced theorem to be incorrect. In his fifth and final supplement, published in 1904, he proved this with the counterexample of the Poincaré homology sphere, which is a closed connected three-dimensional manifold which has the homology of the sphere but whose fundamental group has 120 elements. This example made it clear that homology is not powerful enough to characterize the topology of a manifold. In the closing remarks of the fifth supplement, Poincaré modified his erroneous theorem to use the fundamental group instead of homology:
One question remains to be dealt with: is it possible for the fundamental group of V to reduce to the identity without V being simply connected? [...] However, this question would carry us too far away.
In this remark, as in the closing remark of the second supplement, Poincaré used the term "simply connected" in a way which is at odds with modern usage, as well as his own 1895 definition of the term. (According to modern usage, Poincaré's question is a tautology, asking if it is possible for a manifold to be simply connected without being simply connected.) However, as can be inferred from context, Poincaré was asking whether the triviality of the fundamental group uniquely characterizes the sphere.
Throughout the work of Riemann, Betti, and Poincaré, the topological notions in question are not defined or used in a way that would be recognized as precise from a modern perspective. Even the key notion of a "manifold" was not used in a consistent way in Poincaré's own work, and there was frequent confusion between the notion of a topological manifold, a PL manifold, and a smooth manifold. For this reason, it is not possible to read Poincaré's questions unambiguously. It is only through the formalization and vocabulary of topology as developed by later mathematicians that Poincaré's closing question has been understood as the "Poincaré conjecture" as stated in the preceding section.
However, despite its usual phrasing in the form of a conjecture, proposing that all manifolds of a certain type are homeomorphic to the sphere, Poincaré only posed an open-ended question, without venturing to conjecture one way or the other. Moreover, there is no evidence as to which way he believed his question would be answered.
=== Solutions ===
In the 1930s, J. H. C. Whitehead claimed a proof but then retracted it. In the process, he discovered some examples of simply-connected (indeed contractible, i.e. homotopically equivalent to a point) non-compact 3-manifolds not homeomorphic to $\mathbb {R} ^{3}$, the prototype of which is now called the Whitehead manifold.
In the 1950s and 1960s, other mathematicians attempted proofs of the conjecture only to discover that they contained flaws. Influential mathematicians such as Georges de Rham, R. H. Bing, Wolfgang Haken, Edwin E. Moise, and Christos Papakyriakopoulos attempted to prove the conjecture. In 1958, R. H. Bing proved a weak version of the Poincaré conjecture: if every simple closed curve of a compact 3-manifold is contained in a 3-ball, then the manifold is homeomorphic to the 3-sphere. Bing also described some of the pitfalls in trying to prove the Poincaré conjecture.
Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true.
Over time, the conjecture gained the reputation of being particularly tricky to tackle. John Milnor commented that sometimes the errors in false proofs can be "rather subtle and difficult to detect". Work on the conjecture improved understanding of 3-manifolds. Experts in the field were often reluctant to announce proofs and tended to view any such announcement with skepticism. The 1980s and 1990s witnessed some well-publicized fallacious proofs (which were not actually published in peer-reviewed form).
An exposition of attempts to prove this conjecture can be found in the non-technical book Poincaré's Prize by George Szpiro.
=== Dimensions ===
The classification of closed surfaces gives an affirmative answer to the analogous question in two dimensions. For dimensions greater than three, one can pose the Generalized Poincaré conjecture: is a homotopy n-sphere homeomorphic to the n-sphere? A stronger assumption than simply-connectedness is necessary; in dimensions four and higher there are simply-connected, closed manifolds which are not homotopy equivalent to an n-sphere.
Historically, while the conjecture in dimension three seemed plausible, the generalized conjecture was thought to be false. In 1961, Stephen Smale shocked mathematicians by proving the Generalized Poincaré conjecture for dimensions greater than four and extended his techniques to prove the fundamental h-cobordism theorem. In 1982, Michael Freedman proved the Poincaré conjecture in four dimensions. Freedman's work left open the possibility that there is a smooth four-manifold homeomorphic to the four-sphere which is not diffeomorphic to the four-sphere. This so-called smooth Poincaré conjecture, in dimension four, remains open and is thought to be very difficult. Milnor's exotic spheres show that the smooth Poincaré conjecture is false in dimension seven, for example.
These earlier successes in higher dimensions left the case of three dimensions in limbo. The Poincaré conjecture was essentially true in both dimension four and all higher dimensions for substantially different reasons. In dimension three, the conjecture had an uncertain reputation until the geometrization conjecture put it into a framework governing all 3-manifolds. John Morgan wrote:
It is my view that before Thurston's work on hyperbolic 3-manifolds and … the Geometrization conjecture there was no consensus among the experts as to whether the Poincaré conjecture was true or false. After Thurston's work, notwithstanding the fact that it had no direct bearing on the Poincaré conjecture, a consensus developed that the Poincaré conjecture (and the Geometrization conjecture) were true.
=== Hamilton's program and solution ===
Hamilton's program was started in his 1982 paper in which he introduced the Ricci flow on a manifold and showed how to use it to prove some special cases of the Poincaré conjecture. In the following years, he extended this work but was unable to prove the conjecture. The actual solution was not found until Grigori Perelman published his papers.
In late 2002 and 2003, Perelman posted three papers on arXiv. In these papers, he sketched a proof of the Poincaré conjecture and a more general conjecture, Thurston's geometrization conjecture, completing the Ricci flow program outlined earlier by Richard S. Hamilton.
From May to July 2006, several groups presented papers that filled in the details of Perelman's proof of the Poincaré conjecture, as follows:
Bruce Kleiner and John W. Lott posted a paper on arXiv in May 2006 which filled in the details of Perelman's proof of the geometrization conjecture, following partial versions which had been publicly available since 2003. Their manuscript was published in the journal Geometry and Topology in 2008. A small number of corrections were made in 2011 and 2013; for instance, the first version of their published paper made use of an incorrect version of Hamilton's compactness theorem for Ricci flow.
Huai-Dong Cao and Xi-Ping Zhu published a paper in the June 2006 issue of the Asian Journal of Mathematics with an exposition of the complete proof of the Poincaré and geometrization conjectures. The opening paragraph of their paper stated
In this paper, we shall present the Hamilton-Perelman theory of Ricci flow. Based on it, we shall give the first written account of a complete proof of the Poincaré conjecture and the geometrization conjecture of Thurston. While the complete work is an accumulated efforts of many geometric analysts, the major contributors are unquestionably Hamilton and Perelman.
Some observers interpreted Cao and Zhu as taking credit for Perelman's work. They later posted a revised version, with new wording, on arXiv. In addition, a page of their exposition was essentially identical to a page in one of Kleiner and Lott's early publicly available drafts; this was also amended in the revised version, together with an apology by the journal's editorial board.
John Morgan and Gang Tian posted a paper on arXiv in July 2006 which gave a detailed proof of just the Poincaré Conjecture (which is somewhat easier than the full geometrization conjecture) and expanded this to a book.
All three groups found that the gaps in Perelman's papers were minor and could be filled in using his own techniques.
On August 22, 2006, the ICM awarded Perelman the Fields Medal for his work on the Ricci flow, but Perelman refused the medal. John Morgan spoke at the ICM on the Poincaré conjecture on August 24, 2006, declaring that "in 2003, Perelman solved the Poincaré Conjecture".
In December 2006, the journal Science honored the proof of the Poincaré conjecture as the Breakthrough of the Year and featured it on its cover.
== Ricci flow with surgery ==
Hamilton's program for proving the Poincaré conjecture involves first putting a Riemannian metric on the unknown simply connected closed 3-manifold. The basic idea is to try to "improve" this metric; for example, if the metric can be improved enough so that it has constant positive curvature, then according to classical results in Riemannian geometry, it must be the 3-sphere. Hamilton prescribed the "Ricci flow equations" for improving the metric;
{\displaystyle \partial _{t}g_{ij}=-2R_{ij}}
where g is the metric and R its Ricci curvature, and one hopes that, as the time t increases, the manifold becomes easier to understand. Ricci flow expands the negative curvature part of the manifold and contracts the positive curvature part.
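For intuition about the contraction of positively curved regions, one can track a round n-sphere of radius r under this flow: its Ricci curvature is (n − 1)/r² times the metric, so the flow reduces to the ODE d(r²)/dt = −2(n − 1) and the sphere collapses in finite time. The sketch below is illustrative only (the helper names are not from the sources discussed here):

```python
# Under the Ricci flow d/dt g = -2 Ric, a round n-sphere stays round:
# writing the metric as r(t)^2 * g_unit with Ric = (n - 1)/r^2 * g,
# the flow reduces to the ODE d(r^2)/dt = -2(n - 1).
def sphere_radius_squared(r0, n, t):
    """r(t)^2 for a round n-sphere of initial radius r0 under Ricci flow."""
    return r0 ** 2 - 2 * (n - 1) * t

def collapse_time(r0, n):
    """Finite time T at which the sphere shrinks to a point."""
    return r0 ** 2 / (2 * (n - 1))

# A unit 3-sphere becomes extinct at T = 1/4.
assert collapse_time(1.0, 3) == 0.25
```

The finite collapse time is the simplest instance of the bounded parameter interval [0, T) appearing in Hamilton's theorem below.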
In some cases, Hamilton was able to show that this works; for example, his original breakthrough was to show that if the Riemannian manifold has positive Ricci curvature everywhere, then the above procedure can only be followed for a bounded interval of parameter values, {\displaystyle t\in [0,T)} with {\displaystyle T<\infty }, and more significantly, that there are numbers {\displaystyle c_{t}} such that as {\displaystyle t\nearrow T}, the Riemannian metrics {\displaystyle c_{t}g(t)} smoothly converge to one of constant positive curvature. According to classical Riemannian geometry, the only simply-connected compact manifold which can support a Riemannian metric of constant positive curvature is the sphere. So, in effect, Hamilton showed a special case of the Poincaré conjecture: if a compact simply-connected 3-manifold supports a Riemannian metric of positive Ricci curvature, then it must be diffeomorphic to the 3-sphere.
If, instead, one only has an arbitrary Riemannian metric, the Ricci flow must lead to more complicated singularities. Perelman's major achievement was to show that, from a certain perspective, any singularities that appear in finite time can only look like shrinking spheres or cylinders. With a quantitative understanding of this phenomenon, he cuts the manifold along the singularities, splitting it into several pieces, and then continues the Ricci flow on each of these pieces. This procedure is known as Ricci flow with surgery.
Perelman provided a separate argument based on curve shortening flow to show that, on a simply-connected compact 3-manifold, any solution of the Ricci flow with surgery becomes extinct in finite time. An alternative argument, based on the min-max theory of minimal surfaces and geometric measure theory, was provided by Tobias Colding and William Minicozzi. Hence, in the simply-connected context, the above finite-time phenomenon of Ricci flow with surgery is all that is relevant. In fact, this is even true if the fundamental group is a free product of finite groups and cyclic groups.
This condition on the fundamental group turns out to be necessary and sufficient for finite time extinction. It is equivalent to saying that the prime decomposition of the manifold has no acyclic components and turns out to be equivalent to the condition that all geometric pieces of the manifold have geometries based on the two Thurston geometries S2 × R and S3. In the context that one makes no assumption about the fundamental group whatsoever, Perelman made a further technical study of the limit of the manifold for infinitely large times, and in so doing, proved Thurston's geometrization conjecture: at large times, the manifold has a thick-thin decomposition, whose thick piece has a hyperbolic structure, and whose thin piece is a graph manifold. Due to Perelman's and Colding and Minicozzi's results, however, these further results are unnecessary in order to prove the Poincaré conjecture.
== Solution ==
On November 11, 2002, Russian mathematician Grigori Perelman posted the first of a series of three eprints on arXiv outlining a solution of the Poincaré conjecture. Perelman's proof uses a modified version of a Ricci flow program developed by Richard S. Hamilton. In August 2006, Perelman was awarded, but declined, the Fields Medal (worth $15,000 CAD) for his work on the Ricci flow. On March 18, 2010, the Clay Mathematics Institute awarded Perelman the $1 million Millennium Prize in recognition of his proof. Perelman rejected that prize as well.
Perelman proved the conjecture by deforming the manifold using the Ricci flow (which behaves similarly to the heat equation that describes the diffusion of heat through an object). The Ricci flow usually deforms the manifold towards a rounder shape, except for some cases where it stretches the manifold apart from itself towards what are known as singularities. Perelman and Hamilton then chop the manifold at the singularities (a process called "surgery"), causing the separate pieces to form into ball-like shapes. Major steps in the proof involve showing how manifolds behave when they are deformed by the Ricci flow, examining what sort of singularities develop, determining whether this surgery process can be completed, and establishing that the surgery need not be repeated infinitely many times.
The first step is to deform the manifold using the Ricci flow. The Ricci flow was defined by Richard S. Hamilton as a way to deform manifolds. The formula for the Ricci flow is an imitation of the heat equation, which describes the way heat flows in a solid. Like the heat flow, Ricci flow tends towards uniform behavior. Unlike the heat flow, the Ricci flow could run into singularities and stop functioning. A singularity in a manifold is a place where it is not differentiable: like a corner or a cusp or a pinching. The Ricci flow was only defined for smooth differentiable manifolds. Hamilton used the Ricci flow to prove that some compact manifolds were diffeomorphic to spheres, and he hoped to apply it to prove the Poincaré conjecture. He needed to understand the singularities.
Hamilton created a list of possible singularities that could form, but he was concerned that some singularities might lead to difficulties. He wanted to cut the manifold at the singularities and paste in caps and then run the Ricci flow again, so he needed to understand the singularities and show that certain kinds of singularities do not occur. Perelman discovered that the singularities were all very simple: a cylinder is formed by 'stretching' a circle along a line in another dimension, and repeating that process with spheres instead of circles essentially gives the form of the singularities. Perelman proved this using something called the "Reduced Volume", which is closely related to an eigenvalue of a certain elliptic equation.
Sometimes, an otherwise complicated operation reduces to multiplication by a scalar (a number). Such numbers are called eigenvalues of that operation. Eigenvalues are closely related to vibration frequencies and are used in analyzing a famous problem: can you hear the shape of a drum? Essentially, an eigenvalue is like a note being played by the manifold. Perelman proved this note goes up as the manifold is deformed by the Ricci flow. This helped him eliminate some of the more troublesome singularities that had concerned Hamilton, particularly the cigar soliton solution, which looked like a strand sticking out of a manifold with nothing on the other side. In essence, Perelman showed that all the strands that form can be cut and capped and none stick out on one side only.
Completing the proof, Perelman takes any compact, simply connected, three-dimensional manifold without boundary and starts to run the Ricci flow. This deforms the manifold into round pieces with strands running between them. He cuts the strands and continues deforming the manifold until, eventually, he is left with a collection of round three-dimensional spheres. Then, he rebuilds the original manifold by connecting the spheres together with three-dimensional cylinders, morphs them into a round shape, and sees that, despite all the initial confusion, the manifold was, in fact, homeomorphic to a sphere.
One immediate question was how one could be sure that infinitely many cuts are not necessary, i.e., that the cutting does not progress forever. Perelman proved this cannot happen by using minimal surfaces on the manifold. A minimal surface is one on which any local deformation increases area; a familiar example is a soap film spanning a bent loop of wire. Hamilton had shown that the area of a minimal surface decreases as the manifold undergoes Ricci flow. Perelman verified what happened to the area of the minimal surface when the manifold was sliced. He proved that, eventually, the area becomes so small that any cut made after that point can only chop off three-dimensional spheres and not more complicated pieces. This is described as a battle with a Hydra by Sormani in Szpiro's book cited below. This last part of the proof appeared in Perelman's third and final paper on the subject.
== See also ==
Manifold Destiny
== References ==
== Further reading ==
Kleiner, Bruce; Lott, John (2008). "Notes on Perelman's papers". Geometry & Topology. 12 (5): 2587–2855. arXiv:math/0605667. doi:10.2140/gt.2008.12.2587. MR 2460872. S2CID 119133773.
Huai-Dong Cao; Xi-Ping Zhu (December 3, 2006). "Hamilton-Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture". arXiv:math.DG/0612069.
Morgan, John W.; Tian, Gang (2007). Ricci Flow and the Poincaré Conjecture. Clay Mathematics Monographs. Vol. 3. Providence, RI: American Mathematical Society. arXiv:math/0607607. ISBN 978-0-8218-4328-4. MR 2334563.
O'Shea, Donal (2007). The Poincaré Conjecture: In Search of the Shape of the Universe. Walker & Company. ISBN 978-0-8027-1654-5.
Perelman, Grisha (November 11, 2002). "The entropy formula for the Ricci flow and its geometric applications". arXiv:math.DG/0211159.
Perelman, Grisha (March 10, 2003). "Ricci flow with surgery on three-manifolds". arXiv:math.DG/0303109.
Perelman, Grisha (July 17, 2003). "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds". arXiv:math.DG/0307245.
Szpiro, George (2008). Poincaré's Prize: The Hundred-Year Quest to Solve One of Math's Greatest Puzzles. Plume. ISBN 978-0-452-28964-2.
Stillwell, John (2012). "Poincaré and the early history of 3-manifolds". Bulletin of the American Mathematical Society. 49 (4): 555–576. doi:10.1090/S0273-0979-2012-01385-X. MR 2958930.
Yau, Shing-Tung; Nadis, Steve (2019). The Shape of a Life: One Mathematician's Search for the Universe's Hidden Geometry. New Haven, CT: Yale University Press. ISBN 978-0-300-23590-6. MR 3930611.
== External links ==
"The Poincaré Conjecture" – BBC Radio 4 programme In Our Time, 2 November 2006. Contributors June Barrow-Green, Lecturer in the History of Mathematics at the Open University, Ian Stewart, Professor of Mathematics at the University of Warwick, Marcus du Sautoy, Professor of Mathematics at the University of Oxford, and presenter Melvyn Bragg.
In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category.
A model category is a category with classes of morphisms called weak equivalences, fibrations, and cofibrations, satisfying several axioms. The associated homotopy category of a model category has the same objects, but the morphisms are changed in order to make the weak equivalences into isomorphisms. It is a useful observation that the associated homotopy category depends only on the weak equivalences, not on the fibrations and cofibrations.
== Topological spaces ==
Model categories were defined by Quillen as an axiomatization of homotopy theory that applies to topological spaces, but also to many other categories in algebra and geometry. The example that started the subject is the category of topological spaces with Serre fibrations as fibrations and weak homotopy equivalences as weak equivalences (the cofibrations for this model structure can be described as the retracts of relative cell complexes X ⊆ Y). By definition, a continuous mapping f: X → Y of spaces is called a weak homotopy equivalence if the induced function on sets of path components
{\displaystyle f_{*}\colon \pi _{0}(X)\to \pi _{0}(Y)}
is bijective, and for every point x in X and every n ≥ 1, the induced homomorphism
{\displaystyle f_{*}\colon \pi _{n}(X,x)\to \pi _{n}(Y,f(x))}
on homotopy groups is bijective. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.)
For simply connected topological spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the induced homomorphism f*: Hn(X,Z) → Hn(Y,Z) on singular homology groups is bijective for all n. Likewise, for simply connected spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the pullback homomorphism f*: Hn(Y,Z) → Hn(X,Z) on singular cohomology is bijective for all n.
Example: Let X be the set of natural numbers {0, 1, 2, ...} and let Y be the set {0} ∪ {1, 1/2, 1/3, ...}, both with the subspace topology from the real line. Define f: X → Y by mapping 0 to 0 and n to 1/n for positive integers n. Then f is continuous, and in fact a weak homotopy equivalence, but it is not a homotopy equivalence.
The homotopy category of topological spaces (obtained by inverting the weak homotopy equivalences) greatly simplifies the category of topological spaces. Indeed, this homotopy category is equivalent to the category of CW complexes with morphisms being homotopy classes of continuous maps.
Many other model structures on the category of topological spaces have also been considered. For example, in the Strøm model structure on topological spaces, the fibrations are the Hurewicz fibrations and the weak equivalences are the homotopy equivalences.
== Chain complexes ==
Some other important model categories involve chain complexes. Let A be a Grothendieck abelian category, for example the category of modules over a ring or the category of sheaves of abelian groups on a topological space. Define a category C(A) with objects the complexes X of objects in A,
{\displaystyle \cdots \to X_{1}\to X_{0}\to X_{-1}\to \cdots ,}
and morphisms the chain maps. (It is equivalent to consider "cochain complexes" of objects of A, where the numbering is written as
{\displaystyle \cdots \to X^{-1}\to X^{0}\to X^{1}\to \cdots ,}
simply by defining Xi = X−i.)
The category C(A) has a model structure in which the cofibrations are the monomorphisms and the weak equivalences are the quasi-isomorphisms. By definition, a chain map f: X → Y is a quasi-isomorphism if the induced homomorphism
{\displaystyle f_{*}\colon H_{n}(X)\to H_{n}(Y)}
on homology is an isomorphism for all integers n. (Here Hn(X) is the object of A defined as the kernel of Xn → Xn−1 modulo the image of Xn+1 → Xn.) The resulting homotopy category is called the derived category D(A).
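Over a field, these homology objects reduce to dimension counts, so a quasi-isomorphism can be checked from ranks of the boundary matrices. The following sketch uses hypothetical two-term complexes chosen purely for illustration; it exhibits a chain map that is a quasi-isomorphism without being an isomorphism of complexes:

```python
from fractions import Fraction

def rank(matrix):
    """Rank of an integer matrix over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# X: 0 -> Q^2 --dX--> Q^2 -> 0 with dX(a, b) = (a, 0);
# Y: 0 -> Q   --dY--> Q   -> 0 with dY = 0.
dX = [[1, 0], [0, 0]]
dY = [[0]]

# Chain map f: project onto the second coordinate in both degrees.
f1 = [[0, 1]]
f0 = [[0, 1]]
assert matmul(f0, dX) == matmul(dY, f1)   # the chain map condition f0.dX = dY.f1

# Homology dimensions: H1 = ker d (no incoming map), H0 = coker d.
bettiX = (2 - rank(dX), 2 - rank(dX))
bettiY = (1 - rank(dY), 1 - rank(dY))
# Both are (1, 1), and this particular f induces isomorphisms on homology,
# so f is a quasi-isomorphism although X and Y are not isomorphic complexes.
```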
== Trivial fibrations and trivial cofibrations ==
In any model category, a fibration that is also a weak equivalence is called a trivial (or acyclic) fibration. A cofibration that is also a weak equivalence is called a trivial (or acyclic) cofibration.
== Notes ==
== References ==
Beke, Tibor (2000), "Sheafifiable homotopy model categories", Mathematical Proceedings of the Cambridge Philosophical Society, 129: 447–473, arXiv:math/0102087, Bibcode:2000MPCPS.129..447B, doi:10.1017/S0305004100004722, MR 1780498
Hatcher, Allen (2002), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0, MR 1867354
Hovey, Mark (1999), Model Categories (PDF), American Mathematical Society, ISBN 0-8218-1359-5, MR 1650134
Strøm, Arne (1972), "The homotopy category is a homotopy category", Archiv der Mathematik, 23: 435–441, doi:10.1007/BF01304912, MR 0321082
In algebraic topology, a k-chain
is a formal linear combination of the k-cells in a cell complex. In simplicial complexes (respectively, cubical complexes), k-chains are combinations of k-simplices (respectively, k-cubes), but not necessarily connected. Chains are used in homology; the elements of a homology group are equivalence classes of chains.
== Definition ==
For a simplicial complex {\displaystyle X}, the group {\displaystyle C_{n}(X)} of {\displaystyle n}-chains of {\displaystyle X} is given by:
{\displaystyle C_{n}(X)=\left\{\sum \limits _{i}m_{i}\sigma _{i}|m_{i}\in \mathbb {Z} \right\}}
where {\displaystyle \sigma _{i}} are singular {\displaystyle n}-simplices of {\displaystyle X}. Note that an element in {\displaystyle C_{n}(X)} is not necessarily a connected simplicial complex.
== Integration on chains ==
Integration is defined on chains by taking the linear combination of integrals over the simplices in the chain with coefficients (which are typically integers).
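A sketch of this definition in the simplest setting: integrating a function over a 1-chain of oriented real intervals. The helper names and the use of the trapezoid rule are illustrative choices, not part of the formal theory:

```python
# A 1-chain in R: a formal integer combination of oriented intervals.
# Integration over the chain is the matching combination of ordinary
# integrals; reversing an interval's orientation flips the sign.
def integrate_segment(f, a, b, steps=10000):
    """Trapezoid-rule integral of f over the oriented segment from a to b."""
    h = (b - a) / steps
    total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, steps))
    return total * h

def integrate_chain(f, chain):
    """chain: list of (coefficient, (a, b)) pairs."""
    return sum(m * integrate_segment(f, a, b) for m, (a, b) in chain)

# The chain [0,1] + [1,2] integrates x^2 like the single segment [0,2],
# i.e. close to 8/3; the reversed interval (1, 0) gives minus the integral.
square = lambda x: x * x
value = integrate_chain(square, [(1, (0.0, 1.0)), (1, (1.0, 2.0))])
```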
The set of all k-chains forms a group and the sequence of these groups is called a chain complex.
== Boundary operator on chains ==
The boundary of a chain is the linear combination of boundaries of the simplices in the chain. The boundary of a k-chain is a (k−1)-chain. Note that the boundary of a simplex is not a simplex, but a chain with coefficients 1 or −1 – thus chains are the closure of simplices under the boundary operator.
Example 1: The boundary of a path is the formal difference of its endpoints: it is a telescoping sum. To illustrate, if the 1-chain {\displaystyle c=t_{1}+t_{2}+t_{3}} is a path from point {\displaystyle v_{1}} to point {\displaystyle v_{4}}, where {\displaystyle t_{1}=[v_{1},v_{2}]}, {\displaystyle t_{2}=[v_{2},v_{3}]} and {\displaystyle t_{3}=[v_{3},v_{4}]} are its constituent 1-simplices, then
{\displaystyle {\begin{aligned}\partial _{1}c&=\partial _{1}(t_{1}+t_{2}+t_{3})\\&=\partial _{1}(t_{1})+\partial _{1}(t_{2})+\partial _{1}(t_{3})\\&=\partial _{1}([v_{1},v_{2}])+\partial _{1}([v_{2},v_{3}])+\partial _{1}([v_{3},v_{4}])\\&=([v_{2}]-[v_{1}])+([v_{3}]-[v_{2}])+([v_{4}]-[v_{3}])\\&=[v_{4}]-[v_{1}].\end{aligned}}}
Example 2: The boundary of the triangle is a formal sum of its edges with signs arranged to make the traversal of the boundary counterclockwise.
A chain is called a cycle when its boundary is zero. A chain that is the boundary of another chain is called a boundary. Boundaries are cycles, so chains form a chain complex, whose homology groups (cycles modulo boundaries) are called simplicial homology groups.
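For instance, the simplicial homology of the hollow triangle (a combinatorial circle) can be computed from the rank of its boundary matrix. The sketch below is illustrative and works over the rationals rather than the integers, which suffices for Betti numbers:

```python
from fractions import Fraction

def rank(matrix):
    """Rank of an integer matrix over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hollow triangle: vertices v0, v1, v2 and edges
# e0 = [v0, v1], e1 = [v1, v2], e2 = [v0, v2].
# Column j of d1 holds the vertex coefficients of the boundary of edge j.
d1 = [[-1,  0, -1],
      [ 1, -1,  0],
      [ 0,  1,  1]]

betti0 = 3 - rank(d1)          # dim C0 minus rank d1: one connected component
betti1 = (3 - rank(d1)) - 0    # dim ker d1 minus rank d2 (no 2-simplices): one cycle
```

The single independent cycle (the triangle's perimeter) is not a boundary, reflecting that the hollow triangle is homotopy equivalent to a circle.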
Example 3: The plane punctured at the origin has nontrivial 1-homology group since the unit circle is a cycle, but not a boundary.
In differential geometry, the duality between the boundary operator on chains and the exterior derivative is expressed by the general Stokes' theorem.
== References ==
Digital topology deals with properties and features of two-dimensional (2D) or three-dimensional (3D) digital images that correspond to topological properties (e.g., connectedness) or topological features (e.g., boundaries) of objects. Concepts and results of digital topology are used to specify and justify important (low-level) image analysis algorithms, including algorithms for thinning, border or surface tracing, counting of components or tunnels, or region-filling.
== History ==
Digital topology was first studied in the late 1960s by the computer image analysis researcher Azriel Rosenfeld (1931–2004), whose publications on the subject played a major role in establishing and developing the field. The term "digital topology" was itself invented by Rosenfeld, who used it in a 1973 publication for the first time.
A related work called the grid cell topology, which could be considered as a link to classic combinatorial topology, appeared in the book of Pavel Alexandrov and Heinz Hopf, Topologie I (1935). Rosenfeld et al. proposed digital connectivity such as 4-connectivity and 8-connectivity in two dimensions as well as 6-connectivity and 26-connectivity in three dimensions. The labeling method for inferring a connected component was studied in the 1970s. Theodosios Pavlidis (1982) suggested the use of graph-theoretic algorithms such as the depth-first search method for finding connected components. Vladimir A. Kovalevsky (1989) extended the Alexandrov–Hopf 2D grid cell topology to three and higher dimensions. He also proposed (2008) a more general axiomatic theory of locally finite topological spaces and abstract cell complexes formerly suggested by Ernst Steinitz (1908). It is the Alexandrov topology. The book from 2008 contains new definitions of topological balls and spheres independent of a metric and numerous applications to digital image analysis.
In the early 1980s, digital surfaces were studied. David Morgenthaler and Rosenfeld (1981) gave a mathematical definition of surfaces in three-dimensional digital space. This definition contains a total of nine types of digital surfaces. The digital manifold was studied in the 1990s. A recursive definition of the digital k-manifold was proposed intuitively by Chen and Zhang in 1993. Many applications were found in image processing and computer vision.
== Basic results ==
A basic (early) result in digital topology says that 2D binary images require the alternative use of 4- or 8-adjacency or "pixel connectivity" (for "object" or "non-object" pixels) to ensure the basic topological duality of separation and connectedness. This alternative use corresponds to open or closed sets in the 2D grid cell topology, and the result generalizes to 3D: the alternative use of 6- or 26-adjacency corresponds to open or closed sets in the 3D grid cell topology. Grid cell topology also applies to multilevel (e.g., color) 2D or 3D images, for example based on a total order of possible image values and applying a 'maximum-label rule' (see the book by Klette and Rosenfeld, 2004).
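The role of the adjacency choice can be seen directly in a small sketch (a hypothetical breadth-first-search component counter, not an algorithm from the works cited here): a diagonal pair of pixels forms one component under 8-adjacency but two under 4-adjacency.

```python
from collections import deque

def components(pixels, adjacency=4):
    """Count connected components of a set of (row, col) object pixels
    under 4- or 8-adjacency."""
    if adjacency == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        nbrs = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]
    todo, count = set(pixels), 0
    while todo:
        count += 1
        queue = deque([todo.pop()])   # breadth-first search from a seed pixel
        while queue:
            r, c = queue.popleft()
            for dr, dc in nbrs:
                if (r + dr, c + dc) in todo:
                    todo.remove((r + dr, c + dc))
                    queue.append((r + dr, c + dc))
    return count

# A diagonal pair: one object under 8-adjacency, two under 4-adjacency.
diag = {(0, 0), (1, 1)}
assert components(diag, 8) == 1
assert components(diag, 4) == 2
```

This is why object and background pixels must use opposite adjacencies: otherwise a closed digital curve could fail to separate the plane, or a disconnected one could.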
Digital topology is highly related to combinatorial topology. The main differences between them are: (1) digital topology mainly studies digital objects that are formed by grid cells (the cells of integer lattices), rather than more general cell complexes, and (2) digital topology also deals with non-Jordan manifolds.
A combinatorial manifold is a kind of manifold which is a discretization of a manifold. It usually means a piecewise linear manifold made by simplicial complexes. A digital manifold is a special kind of combinatorial manifold which is defined in digital space i.e. grid cell space.
A digital form of the Gauss–Bonnet theorem is: Let M be a closed digital 2D manifold in direct adjacency (i.e., a (6,26)-surface in 3D).
The formula for genus is
{\displaystyle g=1+(M_{5}+2M_{6}-M_{3})/8},
where {\displaystyle M_{i}} indicates the number of surface points each of which has i adjacent points on the surface (Chen and Rong, ICPR 2008).
If M is simply connected, i.e., {\displaystyle g=0}, then {\displaystyle M_{3}=8+M_{5}+2M_{6}}. (See also Euler characteristic.)
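As a quick sanity check, the two formulas above can be wrapped in a short helper (the function name is illustrative, not code from the cited paper):

```python
def genus(m3, m5, m6):
    """Genus of a closed digital 2-manifold in direct adjacency, from the
    counts m_i of surface points with i adjacent surface points
    (digital Gauss-Bonnet formula of Chen and Rong, 2008)."""
    assert (m5 + 2 * m6 - m3) % 8 == 0, "counts inconsistent with a closed surface"
    return 1 + (m5 + 2 * m6 - m3) // 8

# The simply connected identity M3 = 8 + M5 + 2*M6 forces genus 0:
assert genus(8, 0, 0) == 0
assert genus(8 + 4 + 2 * 2, 4, 2) == 0
```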
== See also ==
Digital geometry
Combinatorial topology
Computational geometry
Computational topology
Topological data analysis
Topology
Discrete mathematics
Geospatial topology
== References ==
Herman, Gabor T. (1998). Geometry of Digital Spaces. Applied and Numerical Harmonic Analysis. Boston, MA: Birkhäuser Boston, Inc. ISBN 978-0-8176-3897-9. MR 1711168.
Kong, Tat Yung; Rosenfeld, Azriel, eds. (1996). Topological Algorithms for Digital Image Processing. Elsevier. ISBN 0-444-89754-2.
Voss, Klaus (1993). Discrete Images, Objects, and Functions in {\displaystyle \mathbb {Z} ^{n}}. Algorithms and Combinatorics. Vol. 11. Berlin: Springer-Verlag. doi:10.1007/978-3-642-46779-0. ISBN 0-387-55943-4. MR 1224678.
Chen, L. (2004). Discrete Surfaces and Manifolds: A Theory of Digital-Discrete Geometry and Topology. SP Computing. ISBN 0-9755122-1-8.
Klette, R.; Rosenfeld, Azriel (2004). Digital Geometry. Morgan Kaufmann. ISBN 1-55860-861-3.
Morgenthaler, David G.; Rosenfeld, Azriel (1981). "Surfaces in three-dimensional digital images". Information and Control. 51 (3): 227–247. doi:10.1016/S0019-9958(81)90290-4. MR 0686842.
Pavlidis, Theo (1982). Algorithms for graphics and image processing. Lecture Notes in Mathematics. Vol. 877. Rockville, MD: Computer Science Press. ISBN 0-914894-65-X. MR 0643798.
Vladimir A. Kovalevsky (2008). Geometry of Locally Finite Spaces. Berlin: Publishing House Dr. Baerbel Kovalevski. ISBN 978-3-9812252-0-4.
In mathematics, especially (higher) category theory, higher-dimensional algebra is the study of categorified structures. It has applications in nonabelian algebraic topology, and generalizes abstract algebra.
== Higher-dimensional categories ==
A first step towards defining higher dimensional algebras is the concept of 2-category of higher category theory, followed by the more 'geometric' concept of double category.
A higher level concept is thus defined as a category of categories, or super-category, which generalises to higher dimensions the notion of category – regarded as any structure which is an interpretation of Lawvere's axioms of the elementary theory of abstract categories (ETAC). Thus, a supercategory can be regarded as a natural extension of the concepts of meta-category, multicategory, and multi-graph, k-partite graph, or colored graph (see its definition in graph theory).
Supercategories were first introduced in 1970, and were subsequently developed for applications in theoretical physics (especially quantum field theory and topological quantum field theory) and mathematical biology or mathematical biophysics.
Other pathways in higher-dimensional algebra involve: bicategories, homomorphisms of bicategories, variable categories (also known as indexed or parametrized categories), topoi, effective descent, and enriched and internal categories.
== Double groupoids ==
In higher-dimensional algebra (HDA), a double groupoid is a generalisation of a one-dimensional groupoid to two dimensions, and the latter groupoid can be considered as a special case of a category with all invertible arrows, or morphisms.
Double groupoids are often used to capture information about geometrical objects such as higher-dimensional manifolds (or n-dimensional manifolds). In general, an n-dimensional manifold is a space that locally looks like an n-dimensional Euclidean space, but whose global structure may be non-Euclidean.
Double groupoids were first introduced by Ronald Brown in Double groupoids and crossed modules (1976), and were further developed towards applications in nonabelian algebraic topology. A related, 'dual' concept is that of a double algebroid, and the more general concept of R-algebroid.
== Nonabelian algebraic topology ==
See Nonabelian algebraic topology
== Applications ==
=== Theoretical physics ===
In quantum field theory, there exist quantum categories and quantum double groupoids. One can consider quantum double groupoids to be fundamental groupoids defined via a 2-functor, which allows one to think about the physically interesting case of quantum fundamental groupoids (QFGs) in terms of the bicategory Span(Groupoids), and then constructing 2-Hilbert spaces and 2-linear maps for manifolds and cobordisms. At the next step, one obtains cobordisms with corners via natural transformations of such 2-functors. A claim was then made that, with the gauge group SU(2), "the extended TQFT, or ETQFT, gives a theory equivalent to the Ponzano–Regge model of quantum gravity"; similarly, the Turaev–Viro model would be then obtained with representations of SUq(2). Therefore, one can describe the state space of a gauge theory – or many kinds of quantum field theories (QFTs) and local quantum physics – in terms of the transformation groupoids given by symmetries, as for example in the case of a gauge theory, by the gauge transformations acting on states that are, in this case, connections. In the case of symmetries related to quantum groups, one would obtain structures that are representation categories of quantum groupoids, instead of the 2-vector spaces that are representation categories of groupoids.
=== Quantum physics ===
== See also ==
== Notes ==
== Further reading ==
Brown, R.; Higgins, P.J.; Sivera, R. (2011). Nonabelian Algebraic Topology: filtered spaces, crossed complexes, cubical homotopy groupoids. Vol. Tracts Vol 15. European Mathematical Society. arXiv:math/0407275. doi:10.4171/083. ISBN 978-3-03719-083-8. (Downloadable PDF available)
Brown, R.; Mosa, G.H. (1999). "Double categories, thin structures and connections". Theory and Applications of Categories. 5: 163–175. CiteSeerX 10.1.1.438.8991.
Brown, R. (2002). Categorical Structures for Descent and Galois Theory. Fields Institute.
Brown, R. (1987). "From groups to groupoids: a brief survey" (PDF). Bulletin of the London Mathematical Society. 19 (2): 113–134. CiteSeerX 10.1.1.363.1859. doi:10.1112/blms/19.2.113. hdl:10338.dmlcz/140413. This gives some of the history of groupoids, namely the origins in work of Heinrich Brandt on quadratic forms, and an indication of later work up to 1987, with 160 references.
Brown, Ronald (2018). "Higher Dimensional Group Theory". groupoids.org.uk. Bangor University. A web article with many references explaining how the groupoid concept has led to notions of higher-dimensional groupoids, not available in group theory, with applications in homotopy theory and in group cohomology.
Brown, R.; Higgins, P.J. (1981). "On the algebra of cubes". Journal of Pure and Applied Algebra. 21 (3): 233–260. doi:10.1016/0022-4049(81)90018-9.
Mackenzie, K.C.H. (2005). General theory of Lie groupoids and Lie algebroids. London Mathematical Society Lecture Note Series. Vol. 213. Cambridge University Press. ISBN 978-0-521-49928-6. Archived from the original on 2005-03-10.
Brown, R. (2006). Topology and Groupoids. Booksurge. ISBN 978-1-4196-2722-4. Revised and extended edition of a book previously published in 1968 and 1988. E-version available from website.
Borceux, F.; Janelidze, G. (2001). Galois theories. Cambridge University Press. ISBN 978-0-521-07041-6. OCLC 1167627177. Archived from the original on 2012-12-23. Shows how generalisations of Galois theory lead to Galois groupoids.
Baez, J.; Dolan, J. (1998). "Higher-Dimensional Algebra III. n-Categories and the Algebra of Opetopes". Advances in Mathematics. 135 (2): 145–206. arXiv:q-alg/9702014. Bibcode:1997q.alg.....2014B. doi:10.1006/aima.1997.1695. S2CID 18857286.
Baianu, I.C. (1970). "Organismic Supercategories: II. On Multistable Systems" (PDF). The Bulletin of Mathematical Biophysics. 32 (4): 539–61. doi:10.1007/BF02476770. PMID 4327361.
Baianu, I.C.; Marinescu, M. (1974). "On A Functorial Construction of (M, R)-Systems". Revue Roumaine de Mathématiques Pures et Appliquées. 19: 388–391.
Baianu, I.C. (1987). "Computer Models and Automata Theory in Biology and Medicine". In M. Witten (ed.). Mathematical Models in Medicine. Vol. 7. Pergamon Press. pp. 1513–77. ISBN 978-0-08-034692-2. OCLC 939260427. CERN Preprint No. EXT-2004-072. ASIN 0080346928.
"Higher dimensional Homotopy". PlanetPhysics. Archived from the original on 2009-08-13.
Janelidze, George (1990). "Pure Galois theory in categories". Journal of Algebra. 132 (2): 270–286. doi:10.1016/0021-8693(90)90130-G.
Janelidze, George (1993). "Galois theory in variable categories". Applied Categorical Structures. 1: 103–110. doi:10.1007/BF00872989. S2CID 22258886.
Invariant theory is a branch of abstract algebra dealing with actions of groups on algebraic varieties, such as vector spaces, from the point of view of their effect on functions. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. For example, if we consider the action of the special linear group SLn on the space of n by n matrices by left multiplication, then the determinant is an invariant of this action because the determinant of A X equals the determinant of X, when A is in SLn.
== Introduction ==
Let G be a group, and V a finite-dimensional vector space over a field k (which in classical invariant theory was usually assumed to be the complex numbers). A representation of G in V is a group homomorphism π : G → GL(V), which induces a group action of G on V. If k[V] is the space of polynomial functions on V, then the group action of G on V produces an action on k[V] by the following formula:
(g ⋅ f)(x) := f(g⁻¹(x)) for all x ∈ V, g ∈ G, f ∈ k[V].
With this action it is natural to consider the subspace of all polynomial functions which are invariant under this group action, in other words the set of polynomials f such that g ⋅ f = f for all g ∈ G. This space of invariant polynomials is denoted k[V]^G.
First problem of invariant theory: Is k[V]^G a finitely generated algebra over k?
For example, if G = SLn and V = Mn, the space of square matrices, and the action of G on V is given by left multiplication, then k[V]^G is isomorphic to a polynomial algebra in one variable, generated by the determinant. In other words, in this case, every invariant polynomial is a linear combination of powers of the determinant polynomial. So in this case, k[V]^G is finitely generated over k.
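This example can be sanity-checked numerically. The sketch below (the matrix A and the helper names are illustrative choices, not from the source) verifies det(AX) = det(X) for a sample A in SL2 acting by left multiplication on random rational 2 × 2 matrices X:

```python
from fractions import Fraction
import random

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

# A sample element of SL_2: integer entries with determinant 1.
A = [[1, 3], [2, 7]]          # det = 1*7 - 3*2 = 1
assert det2(A) == 1

# Left multiplication by A leaves the determinant unchanged: det(AX) = det(X).
random.seed(0)
for _ in range(100):
    X = [[Fraction(random.randint(-9, 9)) for _ in range(2)] for _ in range(2)]
    assert det2(matmul2(A, X)) == det2(X)
print("det(AX) == det(X) for all sampled X")
```

Exact rational arithmetic is used so the check involves no floating-point tolerance.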
If the answer is yes, then the next question is to find a minimal basis, and ask whether the module of polynomial relations between the basis elements (known as the syzygies) is finitely generated over k[V].
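Such relations can often be computed mechanically by Gröbner-basis elimination. As an illustrative sketch (assuming SymPy is available; the variable names are arbitrary), the single relation among the generators x², xy, y² of the Z/2Z-invariants from the Examples section is recovered as follows:

```python
from sympy import symbols, groebner, expand

x, y, a, b, c = symbols('x y a b c')

# Generators of the invariant ring C[x, y]^(Z/2Z): a = x^2, b = x*y, c = y^2.
# A lex Groebner basis of (a - x^2, b - x*y, c - y^2) with x, y ordered first
# eliminates x and y; the basis elements free of x and y are the relations.
G = groebner([a - x**2, b - x*y, c - y**2], x, y, a, b, c, order='lex')
relations = [p for p in G.exprs if not (p.free_symbols & {x, y})]
print(relations)  # the relation a*c - b**2 (up to sign)
```

The same elimination strategy works for invariant rings with more generators, though the Gröbner computation can become expensive.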
Invariant theory of finite groups has intimate connections with Galois theory. One of the first major results was the main theorem on the symmetric functions that described the invariants of the symmetric group Sn acting on the polynomial ring R[x1, …, xn] by permutations of the variables. More generally, the Chevalley–Shephard–Todd theorem characterizes finite groups whose algebra of invariants is a polynomial ring. Modern research in invariant theory of finite groups emphasizes "effective" results, such as explicit bounds on the degrees of the generators. The case of positive characteristic, ideologically close to modular representation theory, is an area of active study, with links to algebraic topology.
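The main theorem on symmetric functions can be illustrated concretely: every symmetric polynomial is a polynomial in the elementary symmetric polynomials. A small sketch with SymPy (assumed available; the specific polynomial is an arbitrary example) checks this for the power sum x1² + x2² + x3²:

```python
from itertools import permutations
from sympy import symbols, expand

x1, x2, x3 = xs = symbols('x1 x2 x3')

# A symmetric polynomial: invariant under every permutation of the variables.
f = x1**2 + x2**2 + x3**2
assert all(f.subs(list(zip(xs, perm)), simultaneous=True) == f
           for perm in permutations(xs))

# Elementary symmetric polynomials in three variables.
e1 = x1 + x2 + x3
e2 = x1*x2 + x1*x3 + x2*x3

# Instance of the main theorem: f is a polynomial in the elementary
# symmetric polynomials; here f = e1**2 - 2*e2.
assert expand(e1**2 - 2*e2) == f
print("x1^2 + x2^2 + x3^2 == e1^2 - 2*e2")
```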
Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially, the theories of quadratic forms and determinants. Another subject with strong mutual influence was projective geometry, where invariant theory was expected to play a major role in organizing the material. One of the highlights of this relationship is the symbolic method. Representation theory of semisimple Lie groups has its roots in invariant theory.
David Hilbert's work on the question of the finite generation of the algebra of invariants (1890) resulted in the creation of a new mathematical discipline, abstract algebra. A later paper of Hilbert (1893) dealt with the same questions in more constructive and geometric ways, but remained virtually unknown until David Mumford brought these ideas back to life in the 1960s, in a considerably more general and modern form, in his geometric invariant theory. In large measure due to the influence of Mumford, the subject of invariant theory is seen to encompass the theory of actions of linear algebraic groups on affine and projective varieties. A distinct strand of invariant theory, going back to the classical constructive and combinatorial methods of the nineteenth century, has been developed by Gian-Carlo Rota and his school. A prominent example of this circle of ideas is given by the theory of standard monomials.
== Examples ==
Simple examples of invariant theory come from computing the invariant monomials from a group action. For example, consider the Z/2Z-action on C[x, y] sending
x ↦ −x, y ↦ −y.
Then, since x², xy, y² are the lowest degree monomials which are invariant, we have that
C[x, y]^(Z/2Z) ≅ C[x², xy, y²] ≅ C[a, b, c] / (ac − b²)
This example forms the basis for doing many computations.
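The computation is easy to verify directly. The following sketch (assuming SymPy; the names a, b, c mirror the presentation above) checks that the three generators are invariant under the Z/2Z-action and satisfy the relation ac − b² = 0:

```python
from sympy import symbols, expand

x, y = symbols('x y')

def act(f):
    """The nontrivial element of Z/2Z acting by x -> -x, y -> -y."""
    return expand(f.subs({x: -x, y: -y}, simultaneous=True))

# The three lowest-degree invariant monomials a = x^2, b = x*y, c = y^2:
a, b, c = x**2, x*y, y**2
assert act(a) == a and act(b) == b and act(c) == c

# x and y themselves are not invariant:
assert act(x) == -x and act(y) == -y

# The single relation cutting out the quotient ring: a*c - b^2 vanishes.
assert expand(a*c - b**2) == 0
print("x^2, xy, y^2 are invariant and satisfy ac - b^2 = 0")
```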
== The nineteenth-century origins ==
Cayley first established invariant theory in his "On the Theory of Linear Transformations" (1845). In the opening of his paper, Cayley credits an 1841 paper of George Boole, "investigations were suggested to me by a very elegant paper on the same subject... by Mr Boole." (Boole's paper was Exposition of a General Theory of Linear Transformations, Cambridge Mathematical Journal.)
Classically, the term "invariant theory" refers to the study of invariant algebraic forms (equivalently, symmetric tensors) for the action of linear transformations. This was a major field of study in the latter part of the nineteenth century. Current theories relating to the symmetric group and symmetric functions, commutative algebra, moduli spaces and the representations of Lie groups are rooted in this area.
In greater detail, given a finite-dimensional vector space V of dimension n we can consider the symmetric algebra S(S^r(V)) of the polynomials of degree r over V, and the action on it of GL(V). It is actually more accurate to consider the relative invariants of GL(V), or representations of SL(V), if we are going to speak of invariants: that is because a scalar multiple of the identity will act on a tensor of rank r in S(V) through the r-th power 'weight' of the scalar. The point is then to define the subalgebra of invariants I(S^r(V)) for the action. We are, in classical language, looking at invariants of n-ary r-ics, where n is the dimension of V. (This is not the same as finding invariants of GL(V) on S(V); this is an uninteresting problem as the only such invariants are constants.) The case that was most studied was invariants of binary forms where n = 2.
Other work included that of Felix Klein in computing the invariant rings of finite group actions on C² (the binary polyhedral groups, classified by the ADE classification); these are the coordinate rings of du Val singularities.
The work of David Hilbert, proving that I(V) was finitely presented in many cases, almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later. Explicit calculations for particular purposes have been known in modern times (for example Shioda, with the binary octavics).
== Hilbert's theorems ==
Hilbert (1890) proved that if V is a finite-dimensional representation of the complex algebraic group G = SLn(C) then the ring of invariants of G acting on the ring of polynomials R = S(V) is finitely generated. His proof used the Reynolds operator ρ from R to R^G with the properties
ρ(1) = 1
ρ(a + b) = ρ(a) + ρ(b)
ρ(ab) = a ρ(b) whenever a is an invariant.
Hilbert constructed the Reynolds operator explicitly using Cayley's omega process Ω, though now it is more common to construct ρ indirectly as follows: for compact groups G, the Reynolds operator is given by taking the average over G, and non-compact reductive groups can be reduced to the case of compact groups using Weyl's unitarian trick.
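For a finite group the Reynolds operator is just the average over the group, and its defining properties can be checked on a small example. A sketch (assuming SymPy; the group Z/2Z acting by sign changes is an illustrative choice, not Hilbert's setting of SLn(C)):

```python
from sympy import symbols, expand, sympify

x, y = symbols('x y')

# Finite group G = Z/2Z acting by (x, y) -> (-x, -y).  For a finite group the
# Reynolds operator is the average of f over the group.
group = [{x: x, y: y}, {x: -x, y: -y}]

def reynolds(f):
    f = sympify(f)
    return expand(sum(f.subs(s, simultaneous=True) for s in group) / len(group))

# rho(1) = 1 and rho is additive:
assert reynolds(1) == 1
assert reynolds(x**2 + x) == reynolds(x**2) + reynolds(x)

# rho projects onto the invariants (odd-degree terms average away):
assert reynolds(x**3 + x**2 + x*y + y) == x**2 + x*y

# rho(a*b) = a*rho(b) whenever a is invariant:
a, b = x*y, x + y**2
assert reynolds(a*b) == expand(a * reynolds(b))
print("group averaging satisfies the Reynolds-operator identities")
```

For compact groups the sum becomes an integral against Haar measure, as described above.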
Given the Reynolds operator, Hilbert's theorem is proved as follows. The ring R is a polynomial ring so is graded by degrees, and the ideal I is defined to be the ideal generated by the homogeneous invariants of positive degrees. By Hilbert's basis theorem the ideal I is finitely generated (as an ideal). Hence, I is finitely generated by finitely many invariants of G (because if we are given any – possibly infinite – subset S that generates a finitely generated ideal I, then I is already generated by some finite subset of S). Let i1,...,in be a finite set of invariants of G generating I (as an ideal). The key idea is to show that these generate the ring R^G of invariants. Suppose that x is some homogeneous invariant of degree d > 0. Then
x = a1i1 + ... + anin
for some aj in the ring R because x is in the ideal I. We can assume that aj is homogeneous of degree d − deg ij for every j (otherwise, we replace aj by its homogeneous component of degree d − deg ij; if we do this for every j, the equation x = a1i1 + ... + anin will remain valid). Now, applying the Reynolds operator to x = a1i1 + ... + anin gives
x = ρ(a1)i1 + ... + ρ(an)in
We are now going to show that x lies in the R-algebra generated by i1,...,in.
First, let us do this in the case when the elements ρ(ak) all have degree less than d. In this case, they are all in the R-algebra generated by i1,...,in (by our induction assumption). Therefore, x is also in this R-algebra (since x = ρ(a1)i1 + ... + ρ(an)in).
In the general case, we cannot be sure that the elements ρ(ak) all have degree less than d. But we can replace each ρ(ak) by its homogeneous component of degree d − deg ik. As a result, these modified ρ(ak) are still G-invariants (because every homogeneous component of a G-invariant is a G-invariant) and have degree less than d (since deg ik > 0). The equation x = ρ(a1)i1 + ... + ρ(an)in still holds for our modified ρ(ak), so we can again conclude that x lies in the R-algebra generated by i1,...,in.
Hence, by induction on the degree, all elements of R^G are in the R-algebra generated by i1,...,in.
== Geometric invariant theory ==
The modern formulation of geometric invariant theory is due to David Mumford, and emphasizes the construction of a quotient by the group action that should capture invariant information through its coordinate ring. It is a subtle theory, in that success is obtained by excluding some 'bad' orbits and identifying others with 'good' orbits. In a separate development the symbolic method of invariant theory, an apparently heuristic combinatorial notation, has been rehabilitated.
One motivation was to construct moduli spaces in algebraic geometry as quotients of schemes parametrizing marked objects. In the 1970s and 1980s the theory developed
interactions with symplectic geometry and equivariant topology, and was used to construct moduli spaces of objects in differential geometry, such as instantons and monopoles.
== See also ==
== References ==
Dieudonné, Jean A.; Carrell, James B. (1970), "Invariant theory, old and new", Advances in Mathematics, 4 (1): 1–80, doi:10.1016/0001-8708(70)90015-0, ISSN 0001-8708, MR 0255525 Reprinted as Dieudonné, Jean A.; Carrell, James B. (1971), Invariant theory, old and new, Boston, MA: Academic Press, ISBN 978-0-12-215540-6, MR 0279102
Dolgachev, Igor (2003), Lectures on invariant theory, London Mathematical Society Lecture Note Series, vol. 296, Cambridge University Press, doi:10.1017/CBO9780511615436, ISBN 978-0-521-52548-0, MR 2004511
Grace, J. H.; Young, Alfred (1903), The algebra of invariants, Cambridge: Cambridge University Press
Grosshans, Frank D. (1997), Algebraic homogeneous spaces and invariant theory, New York: Springer, ISBN 3-540-63628-5
Hilbert, David (1890), "Ueber die Theorie der algebraischen Formen", Mathematische Annalen, 36 (4): 473–534, doi:10.1007/BF01208503, ISSN 0025-5831
Hilbert, D. (1893), "Über die vollen Invariantensysteme (On Full Invariant Systems)", Math. Annalen, 42 (3): 313, doi:10.1007/BF01444162
Kung, Joseph P. S.; Rota, Gian-Carlo (1984), "The invariant theory of binary forms", Bulletin of the American Mathematical Society, New Series, 10 (1): 27–85, doi:10.1090/S0273-0979-1984-15188-7, ISSN 0002-9904, MR 0722856
Neusel, Mara D.; Smith, Larry (2002), Invariant Theory of Finite Groups, Providence, RI: American Mathematical Society, ISBN 0-8218-2916-5 A recent resource for learning about modular invariants of finite groups.
Olver, Peter J. (1999), Classical invariant theory, Cambridge: Cambridge University Press, ISBN 0-521-55821-2 An undergraduate level introduction to the classical theory of invariants of binary forms, including the Omega process starting at page 87.
Popov, V.L. (2001) [1994], "Invariants, theory of", Encyclopedia of Mathematics, EMS Press
Springer, T. A. (1977), Invariant Theory, New York: Springer, ISBN 0-387-08242-5 An older but still useful survey.
Sturmfels, Bernd (1993), Algorithms in Invariant Theory, New York: Springer, ISBN 0-387-82445-6 A beautiful introduction to the theory of invariants of finite groups and techniques for computing them using Gröbner bases.
Weyl, Hermann (1939), The Classical Groups. Their Invariants and Representations, Princeton University Press, ISBN 978-0-691-05756-9, MR 0000255 {{citation}}: ISBN / Date incompatibility (help)
Weyl, Hermann (1939b), "Invariants", Duke Mathematical Journal, 5 (3): 489–502, doi:10.1215/S0012-7094-39-00540-5, ISSN 0012-7094, MR 0000030
== External links ==
H. Kraft, C. Procesi, Classical Invariant Theory, a Primer
V. L. Popov, E. B. Vinberg, "Invariant Theory", in Algebraic geometry. IV. Encyclopaedia of Mathematical Sciences, 55 (translated from 1989 Russian edition) Springer-Verlag, Berlin, 1994; vi+284 pp.; ISBN 3-540-54682-0
In mathematics, the algebraic topology on the set of group representations from G to a topological group H is the topology of pointwise convergence, i.e. a sequence of representations pi converges to p if pi(g) converges to p(g) for every g in G.
This terminology is often used in the case of the algebraic topology on the set of discrete, faithful representations of a Kleinian group into PSL(2,C). Another topology, the geometric topology (also called the Chabauty topology), can be put on the set of images of the representations, and its closure can include extra Kleinian groups that are not images of points in the closure in the algebraic topology. This fundamental distinction is behind the phenomenon of hyperbolic Dehn surgery and plays an important role in the general theory of hyperbolic 3-manifolds.
== References ==
William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978–1981). | Wikipedia/Algebraic_topology_(object) |
In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life, such as those in shoelaces and rope, a mathematical knot differs in that the ends are joined so it cannot be undone, the simplest knot being a ring (or "unknot"). In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, E³. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R³ upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting it or passing it through itself.
Knots can be described in various ways. Using different description methods, there may be more than one description of the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram, in which any knot can be drawn in many different ways. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot.
A complete algorithmic solution to this problem exists, which has unknown complexity. In practice, knots are often distinguished using a knot invariant, a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants.
The original motivation for the founders of knot theory was to create a table of knots and links, which are knots of several components entangled with each other. More than six billion knots and links have been tabulated since the beginnings of knot theory in the 19th century.
To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see knot (mathematics). For example, a higher-dimensional knot is an n-dimensional sphere embedded in (n+2)-dimensional Euclidean space.
== History ==
Archaeologists have discovered that knot tying dates back to prehistoric times. Besides their uses such as recording information and tying objects together, knots have interested humans for their aesthetics and spiritual symbolism. Knots appear in various forms of Chinese artwork dating from several centuries BC (see Chinese knotting). The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often representing strength in unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork.
A mathematical theory of knots was first developed in 1771 by Alexandre-Théophile Vandermonde who explicitly noted the importance of topological features when discussing the properties of knots related to the geometry of position. Mathematical studies of knots began in the 19th century with Carl Friedrich Gauss, who defined the linking integral (Silver 2006). In the 1860s, Lord Kelvin's theory that atoms were knots in the aether led to Peter Guthrie Tait's creation of the first knot tables for complete classification. Tait, in 1885, published a table of knots with up to ten crossings, and what came to be known as the Tait conjectures. This record motivated the early knot theorists, but knot theory eventually became part of the emerging subject of topology.
These topologists in the early part of the 20th century—Max Dehn, J. W. Alexander, and others—studied knots from the point of view of the knot group and invariants from homology theory such as the Alexander polynomial. This would be the main approach to knot theory until a series of breakthroughs transformed the subject.
In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. Many knots were shown to be hyperbolic knots, enabling the use of geometry in defining new, powerful knot invariants. The discovery of the Jones polynomial by Vaughan Jones in 1984 (Sossinsky 2002, pp. 71–89), and subsequent contributions from Edward Witten, Maxim Kontsevich, and others, revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. A plethora of knot invariants have been invented since then, utilizing sophisticated tools such as quantum groups and Floer homology.
In the last several decades of the 20th century, scientists became interested in studying physical knots in order to understand knotting phenomena in DNA and other polymers. Knot theory can be used to determine if a molecule is chiral (has a "handedness") or not (Simon 1986). Tangles, strings with both ends fixed in place, have been effectively used in studying the action of topoisomerase on DNA (Flapan 2000). Knot theory may be crucial in the construction of quantum computers, through the model of topological quantum computation (Collins 2006).
== Knot equivalence ==
A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop (Adams 2004) (Sossinsky 2002). Simply, we can say a knot K is a "simple closed curve" (see Curve) — that is: a "nearly" injective and continuous function K : [0, 1] → R³, with the only "non-injectivity" being K(0) = K(1). Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot.
The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A formal mathematical definition is that two knots K1, K2 are equivalent if there is an orientation-preserving homeomorphism h : R³ → R³ with h(K1) = K2.
What this definition of knot equivalence means is that two knots are equivalent when there is a continuous family of homeomorphisms {ht : R³ → R³ for 0 ≤ t ≤ 1} of space onto itself, such that the last one of them carries the first knot onto the second knot. (In detail: Two knots K1 and K2 are equivalent if there exists a continuous mapping H : R³ × [0, 1] → R³ such that a) for each t ∈ [0, 1] the mapping taking x ∈ R³ to H(x, t) ∈ R³ is a homeomorphism of R³ onto itself; b) H(x, 0) = x for all x ∈ R³; and c) H(K1, 1) = K2. Such a function H is known as an ambient isotopy.)
These two notions of knot equivalence agree exactly about which knots are equivalent: Two knots that are equivalent under the orientation-preserving homeomorphism definition are also equivalent under the ambient isotopy definition, because any orientation-preserving homeomorphism of R³ to itself is the final stage of an ambient isotopy starting from the identity. Conversely, two knots equivalent under the ambient isotopy definition are also equivalent under the orientation-preserving homeomorphism definition, because the t = 1 (final) stage of the ambient isotopy must be an orientation-preserving homeomorphism carrying one knot to the other.
The basic problem of knot theory, the recognition problem, is determining the equivalence of two knots. Algorithms exist to solve this problem, with the first given by Wolfgang Haken in the late 1960s (Hass 1998). Nonetheless, these algorithms can be extremely time-consuming, and a major issue in the theory is to understand how hard this problem really is (Hass 1998). The special case of recognizing the unknot, called the unknotting problem, is of particular interest (Hoste 2005). In February 2021 Marc Lackenby announced a new unknot recognition algorithm that runs in quasi-polynomial time.
== Knot diagrams ==
A useful way to visualise and manipulate knots is to project the knot onto a plane—think of the knot casting a shadow on the wall. A small change in the direction of projection will ensure that it is one-to-one except at the double points, called crossings, where the "shadow" of the knot crosses itself once transversely (Rolfsen 1976). At each crossing, to be able to recreate the original knot, the over-strand must be distinguished from the under-strand. This is often done by creating a break in the strand going underneath. The resulting diagram is an immersed plane curve with the additional data of which strand is over and which is under at each crossing. (These diagrams are called knot diagrams when they represent a knot and link diagrams when they represent a link.) Analogously, knotted surfaces in 4-space can be related to immersed surfaces in 3-space.
A reduced diagram is a knot diagram in which there are no reducible crossings (also nugatory or removable crossings), or in which all of the reducible crossings have been removed. A petal projection is a type of projection in which, instead of forming double points, all strands of the knot meet at a single crossing point, connected to it by loops forming non-nested "petals".
=== Reidemeister moves ===
In 1927, working with this diagrammatic form of knots, J. W. Alexander and Garland Baird Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram, shown below. These operations, now called the Reidemeister moves, are:
The proof that diagrams of equivalent knots are connected by Reidemeister moves relies on an analysis of what happens under the planar projection of the movement taking one knot to another. The movement can be arranged so that almost all of the time the projection will be a knot diagram, except at finitely many times when an "event" or "catastrophe" occurs, such as when more than two strands cross at a point or multiple strands become tangent at a point. A close inspection will show that complicated events can be eliminated, leaving only the simplest events: (1) a "kink" forming or being straightened out; (2) two strands becoming tangent at a point and passing through; and (3) three strands crossing at a point. These are precisely the Reidemeister moves (Sossinsky 2002, ch. 3) (Lickorish 1997, ch. 1).
== Knot invariants ==
A knot invariant is a "quantity" that is the same for equivalent knots (Adams 2004) (Lickorish 1997) (Rolfsen 1976). For example, if the invariant is computed from a knot diagram, it should give the same value for two knot diagrams representing equivalent knots. An invariant may take the same value on two different knots, so by itself may be incapable of distinguishing all knots. An elementary invariant is tricolorability.
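Tricolorability is simple enough to compute by brute force. In the sketch below, the encoding of a diagram as (over, under-in, under-out) arc triples is an assumption of this example, not a convention from the source; it counts Fox 3-colorings, so a diagram is tricolorable exactly when it admits more than the three constant colorings:

```python
from itertools import product

def count_3colorings(num_arcs, crossings):
    """Count Fox 3-colorings of a knot diagram.

    Each crossing is a triple (over, under_in, under_out) of arc indices.
    A coloring assigns an element of Z/3 to each arc so that at every
    crossing 2*over == under_in + under_out (mod 3).
    """
    count = 0
    for colors in product(range(3), repeat=num_arcs):
        if all((2 * colors[o] - colors[u1] - colors[u2]) % 3 == 0
               for (o, u1, u2) in crossings):
            count += 1
    return count

def tricolorable(num_arcs, crossings):
    # Tricolorable: strictly more colorings than the 3 constant ones.
    return count_3colorings(num_arcs, crossings) > 3

# Standard trefoil diagram: 3 arcs, 3 crossings.
trefoil = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(count_3colorings(3, trefoil))   # 9
print(count_3colorings(1, []))        # 3 (crossingless unknot diagram)

assert tricolorable(3, trefoil)
assert not tricolorable(1, [])
```

Since the count of 3-colorings is unchanged by Reidemeister moves, this already distinguishes the trefoil from the unknot.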
"Classical" knot invariants include the knot group, which is the fundamental group of the knot complement, and the Alexander polynomial, which can be computed from the Alexander invariant, a module constructed from the infinite cyclic cover of the knot complement (Lickorish 1997) (Rolfsen 1976). In the late 20th century, invariants such as "quantum" knot polynomials, Vassiliev invariants and hyperbolic invariants were discovered. These aforementioned invariants are only the tip of the iceberg of modern knot theory.
=== Knot polynomials ===
A knot polynomial is a knot invariant that is a polynomial. Well-known examples include the Jones polynomial, the Alexander polynomial, and the Kauffman polynomial. A variant of the Alexander polynomial, the Alexander–Conway polynomial, is a polynomial in the variable z with integer coefficients (Lickorish 1997).
The Alexander–Conway polynomial is actually defined in terms of links, which consist of one or more knots entangled with each other. The concepts explained above for knots, e.g. diagrams and Reidemeister moves, also hold for links.
Consider an oriented link diagram, i.e. one in which every component of the link has a preferred direction indicated by an arrow. For a given crossing of the diagram, let L+, L−, L0 be the oriented link diagrams resulting from changing the diagram as indicated in the figure:
The original diagram might be either L+ or L−, depending on the chosen crossing's configuration. Then the Alexander–Conway polynomial, C(z), is recursively defined according to the rules:
C(O) = 1 (where O is any diagram of the unknot)
C(L+) = C(L−) + z C(L0).
The second rule is what is often referred to as a skein relation. To check that these rules give an invariant of an oriented link, one should determine that the polynomial does not change under the three Reidemeister moves. Many important knot polynomials can be defined in this way.
The following is an example of a typical computation using a skein relation. It computes the Alexander–Conway polynomial of the trefoil knot. The yellow patches indicate where the relation is applied.
C(trefoil) = C(unknot) + z C(Hopf link)
gives the unknot and the Hopf link. Applying the relation to the Hopf link where indicated,
C(Hopf link) = C(unlink of two components) + z C(unknot)
gives a link deformable to one with 0 crossings (it is actually the unlink of two components) and an unknot. The unlink takes a bit of sneakiness:
C(unknot) = C(unknot) + z C(unlink of two components)
which implies that C(unlink of two components) = 0, since the first two polynomials are of the unknot and thus equal.
Putting all this together will show:
{\displaystyle C(\mathrm {trefoil} )=1+z(0+z)=1+z^{2}}
Since the Alexander–Conway polynomial is a knot invariant, this shows that the trefoil is not equivalent to the unknot. So the trefoil really is "knotted".
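The recursion just carried out can be sketched in a few lines of Python. Only the polynomial bookkeeping is automated; the diagram-level steps (which crossing to switch or smooth) are the ones taken from the figures above, and the helper names are illustrative, not standard library functions.

```python
# Each Alexander-Conway polynomial is stored as a coefficient list
# [c0, c1, c2, ...], meaning c0 + c1*z + c2*z^2 + ...

def poly_add(p, q):
    """Coefficient-wise sum of two polynomials."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def shift(p):
    """Multiply a polynomial by z (shift coefficients up by one)."""
    return [0] + list(p)

# Base cases from the defining rules:
unknot = [1]      # C(O) = 1
unlink2 = [0]     # C(two-component unlink) = 0, derived in the text

# Skein relation applied to the Hopf link:
#   C(Hopf) = C(unlink2) + z * C(unknot) = z
hopf = poly_add(unlink2, shift(unknot))

# Skein relation applied to the trefoil:
#   C(trefoil) = C(unknot) + z * C(Hopf) = 1 + z^2
trefoil = poly_add(unknot, shift(hopf))

print(hopf)      # [0, 1]     i.e. z
print(trefoil)   # [1, 0, 1]  i.e. 1 + z^2
```

Since the resulting coefficient list [1, 0, 1] differs from the unknot's [1], the two knots are distinguished, matching the conclusion above.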
Actually, there are two trefoil knots, called the right and left-handed trefoils, which are mirror images of each other (take a diagram of the trefoil given above and change each crossing to the other way to get the mirror image). These are not equivalent to each other, meaning that they are not amphichiral. This was shown by Max Dehn, before the invention of knot polynomials, using group theoretical methods (Dehn 1914). But the Alexander–Conway polynomial of each kind of trefoil will be the same, as can be seen by going through the computation above with the mirror image. The Jones polynomial can in fact distinguish between the left- and right-handed trefoil knots (Lickorish 1997).
=== Hyperbolic invariants ===
William Thurston proved many knots are hyperbolic knots, meaning that the knot complement (i.e., the set of points of 3-space not on the knot) admits a geometric structure, in particular that of hyperbolic geometry. The hyperbolic structure depends only on the knot so any quantity computed from the hyperbolic structure is then a knot invariant (Adams 2004).
Geometry lets us visualize what the inside of a knot or link complement looks like by imagining light rays as traveling along the geodesics of the geometry. An example is provided by the picture of the complement of the Borromean rings. The inhabitant of this link complement is viewing the space from near the red component. The balls in the picture are views of horoball neighborhoods of the link. By thickening the link in a standard way, the horoball neighborhoods of the link components are obtained. Even though the boundary of a neighborhood is a torus, when viewed from inside the link complement, it looks like a sphere. Each link component shows up as infinitely many spheres (of one color) as there are infinitely many light rays from the observer to the link component. The fundamental parallelogram (which is indicated in the picture), tiles both vertically and horizontally and shows how to extend the pattern of spheres infinitely.
This pattern, the horoball pattern, is itself a useful invariant. Other hyperbolic invariants include the shape of the fundamental parallelogram, length of shortest geodesic, and volume. Modern knot and link tabulation efforts have utilized these invariants effectively. Fast computers and clever methods of obtaining these invariants make calculating these invariants, in practice, a simple task (Adams, Hildebrand & Weeks 1991).
== Higher dimensions ==
A knot in three dimensions can be untied when placed in four-dimensional space. This is done by changing crossings. Suppose one strand is behind another as seen from a chosen point. Lift it into the fourth dimension, so there is no obstacle (the front strand having no component there); then slide it forward, and drop it back, now in front. Analogies for the plane would be lifting a string up off the surface, or removing a dot from inside a circle.
In fact, in four dimensions, any non-intersecting closed loop of one-dimensional string is equivalent to an unknot. First "push" the loop into a three-dimensional subspace, which is always possible, though technical to explain.
Four-dimensional space occurs in classical knot theory, however, and an important topic is the study of slice knots and ribbon knots. A notorious open problem asks whether every slice knot is also ribbon.
=== Knotting spheres of higher dimension ===
Since a knot can be considered topologically a 1-dimensional sphere, the next generalization is to consider a two-dimensional sphere ({\displaystyle \mathbb {S} ^{2}}) embedded in 4-dimensional Euclidean space ({\displaystyle \mathbb {R} ^{4}}). Such an embedding is knotted if there is no homeomorphism of {\displaystyle \mathbb {R} ^{4}} onto itself taking the embedded 2-sphere to the standard "round" embedding of the 2-sphere. Suspended knots and spun knots are two typical families of such 2-sphere knots.
The mathematical technique called "general position" implies that for a given n-sphere in m-dimensional Euclidean space, if m is large enough (depending on n), the sphere should be unknotted. In general, piecewise-linear n-spheres form knots only in (n + 2)-dimensional space (Zeeman 1963), although this is no longer a requirement for smoothly knotted spheres. In fact, there are smoothly knotted {\displaystyle (4k-1)}-spheres in 6k-dimensional space; e.g., there is a smoothly knotted 3-sphere in {\displaystyle \mathbb {R} ^{6}} (Haefliger 1962) (Levine 1965). Thus the codimension of a smooth knot can be arbitrarily large when not fixing the dimension of the knotted sphere; however, any smooth k-sphere embedded in {\displaystyle \mathbb {R} ^{n}} with {\displaystyle 2n-3k-3>0} is unknotted. The notion of a knot has further generalisations in mathematics, see: Knot (mathematics), isotopy classification of embeddings.
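The numerical criterion 2n − 3k − 3 > 0 quoted above is easy to probe directly. The function name below is a hypothetical helper for this sketch, not an established API:

```python
# Unknotting criterion from the text: every smooth k-sphere embedded in
# R^n is unknotted whenever 2n - 3k - 3 > 0.

def smooth_unknotting_guaranteed(k, n):
    """True when the criterion guarantees every smooth k-sphere in R^n is unknotted."""
    return 2 * n - 3 * k - 3 > 0

# Classical knots (1-spheres in R^3): 2*3 - 3*1 - 3 = 0, so the criterion
# does not apply, consistent with the existence of nontrivial knots.
print(smooth_unknotting_guaranteed(1, 3))   # False

# Haefliger's knotted 3-sphere lives in R^6: 2*6 - 3*3 - 3 = 0, again just
# outside the guaranteed range, and indeed knotting occurs there.
print(smooth_unknotting_guaranteed(3, 6))   # False

# One dimension higher: 2*7 - 3*3 - 3 = 2 > 0, so every smooth 3-sphere
# in R^7 is unknotted.
print(smooth_unknotting_guaranteed(3, 7))   # True
```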
Every knot in the n-sphere {\displaystyle \mathbb {S} ^{n}} is the link of a real-algebraic set with isolated singularity in {\displaystyle \mathbb {R} ^{n+1}} (Akbulut & King 1981).
An n-knot is a single {\displaystyle \mathbb {S} ^{n}} embedded in {\displaystyle \mathbb {R} ^{m}}. An n-link consists of k copies of {\displaystyle \mathbb {S} ^{n}} embedded in {\displaystyle \mathbb {R} ^{m}}, where k is a natural number. Both the {\displaystyle m=n+2} and the {\displaystyle m>n+2} cases are well studied, and so is the {\displaystyle n>1} case.
== Adding knots ==
Two knots can be added by cutting both knots and joining the pairs of ends. The operation is called the knot sum, or sometimes the connected sum or composition of two knots. This can be formally defined as follows (Adams 2004): consider a planar projection of each knot and suppose these projections are disjoint. Find a rectangle in the plane where one pair of opposite sides are arcs along each knot while the rest of the rectangle is disjoint from the knots. Form a new knot by deleting the first pair of opposite sides and adjoining the other pair of opposite sides. The resulting knot is a sum of the original knots. Depending on how this is done, two different knots (but no more) may result. This ambiguity in the sum can be eliminated by regarding the knots as oriented, i.e. as having a preferred direction of travel along the knot, and requiring that the arcs of the knots in the sum be oriented consistently with the oriented boundary of the rectangle.
The knot sum of oriented knots is commutative and associative. A knot is prime if it is non-trivial and cannot be written as the knot sum of two non-trivial knots. A knot that can be written as such a sum is composite. There is a prime decomposition for knots, analogous to prime and composite numbers (Schubert 1949). For oriented knots, this decomposition is also unique. Higher-dimensional knots can also be added but there are some differences. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions, at least when one considers smooth knots in codimension at least 3.
Knots can also be constructed using the circuit topology approach. This is done by combining basic units called soft contacts using five operations (Parallel, Series, Cross, Concerted, and Sub). The approach is applicable to open chains as well and can also be extended to include the so-called hard contacts.
== Tabulating knots ==
Traditionally, knots have been catalogued in terms of crossing number. Knot tables generally include only prime knots, and only one entry for a knot and its mirror image (even if they are different) (Hoste, Thistlethwaite & Weeks 1998). The number of nontrivial knots of a given crossing number increases rapidly, making tabulation computationally difficult (Hoste 2005, p. 20). Tabulation efforts have succeeded in enumerating over 6 billion knots and links (Hoste 2005, p. 28). The sequence of the number of prime knots of a given crossing number, up to crossing number 16, is 0, 0, 1, 1, 2, 3, 7, 21, 49, 165, 552, 2176, 9988, 46972, 253293, 1388705... (sequence A002863 in the OEIS). While exponential upper and lower bounds for this sequence are known, it has not been proven that this sequence is strictly increasing (Adams 2004).
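The quoted counts can be checked mechanically for the growth behaviour described; the list below is copied from the sequence given above, and beyond it only exponential bounds are known:

```python
# Prime-knot counts for crossing numbers 1 through 16, as quoted above.
counts = [0, 0, 1, 1, 2, 3, 7, 21, 49, 165, 552, 2176,
          9988, 46972, 253293, 1388705]

# Over the known range the sequence is nondecreasing (strict monotonicity
# is unproven in general, as noted in the text).
print(counts == sorted(counts))   # True

# Growth is rapid: successive ratios keep climbing toward the high end.
ratios = [b / a for a, b in zip(counts, counts[1:]) if a > 0]
print(round(max(ratios), 2))
```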
The first knot tables by Tait, Little, and Kirkman used knot diagrams, although Tait also used a precursor to the Dowker notation. Different notations have been invented for knots which allow more efficient tabulation (Hoste 2005).
The early tables attempted to list all knots of at most 10 crossings, and all alternating knots of 11 crossings (Hoste, Thistlethwaite & Weeks 1998). The development of knot theory due to Alexander, Reidemeister, Seifert, and others eased the task of verification and tables of knots up to and including 9 crossings were published by Alexander–Briggs and Reidemeister in the late 1920s.
The first major verification of this work was done in the 1960s by John Horton Conway, who not only developed a new notation but also the Alexander–Conway polynomial (Conway 1970) (Doll & Hoste 1991). This verified the list of knots of at most 11 crossings and a new list of links up to 10 crossings. Conway found a number of omissions but only one duplication in the Tait–Little tables; however he missed the duplicates called the Perko pair, which would only be noticed in 1974 by Kenneth Perko (Perko 1974). This famous error would propagate when Dale Rolfsen added a knot table in his influential text, based on Conway's work. Conway's 1970 paper on knot theory also contains a typographical duplication on its non-alternating 11-crossing knots page and omits 4 examples — 2 previously listed in D. Lombardero's 1968 Princeton senior thesis and 2 more subsequently discovered by Alain Caudron. [see Perko (1982), Primality of certain knots, Topology Proceedings] Less famous is the duplicate in his 10 crossing link table: 2.-2.-20.20 is the mirror of 8*-20:-20. [See Perko (2016), Historical highlights of non-cyclic knot theory, J. Knot Theory Ramifications].
In the late 1990s Hoste, Thistlethwaite, and Weeks tabulated all the knots through 16 crossings (Hoste, Thistlethwaite & Weeks 1998). In 2003 Rankin, Flint, and Schermann, tabulated the alternating knots through 22 crossings (Hoste 2005). In 2020 Burton tabulated all prime knots with up to 19 crossings (Burton 2020).
=== Alexander–Briggs notation ===
This is the most traditional notation, due to the 1927 paper of James W. Alexander and Garland B. Briggs and later extended by Dale Rolfsen in his knot table (see image above and List of prime knots). The notation simply organizes knots by their crossing number. One writes the crossing number with a subscript to denote its order amongst all knots with that crossing number. This order is arbitrary and so has no special significance (though in each number of crossings the twist knot comes after the torus knot). Links are written by the crossing number with a superscript to denote the number of components and a subscript to denote its order within the links with the same number of components and crossings. Thus the trefoil knot is notated 3₁ and the Hopf link is 2²₁. Alexander–Briggs names in the range 10₁₆₂ to 10₁₆₆ are ambiguous, due to the discovery of the Perko pair in Charles Newton Little's original and subsequent knot tables, and differences in approach to correcting this error in knot tables and other publications created after this point.
=== Dowker–Thistlethwaite notation ===
The Dowker–Thistlethwaite notation, also called the Dowker notation or code, for a knot is a finite sequence of even integers. The numbers are generated by following the knot and marking the crossings with consecutive integers. Since each crossing is visited twice, this creates a pairing of even integers with odd integers. An appropriate sign is given to indicate over and undercrossing. For example, in this figure the knot diagram has crossings labelled with the pairs (1,6) (3,−12) (5,2) (7,8) (9,−4) and (11,−10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6, −12, 2, 8, −4, −10. A knot diagram has more than one possible Dowker notation, and there is a well-understood ambiguity when reconstructing a knot from a Dowker–Thistlethwaite notation.
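Reading off a Dowker–Thistlethwaite code from the labelled crossings amounts to listing each odd label's even partner in order. A minimal sketch using the crossing pairs from the example above (the function name is illustrative):

```python
# Crossing pairs (odd label, signed even label) from the example figure.
pairs = [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]

def dowker_code(pairs):
    """List the even partners in increasing order of their odd labels."""
    by_odd = sorted(pairs, key=lambda p: abs(p[0]))
    return [even for _odd, even in by_odd]

print(dowker_code(pairs))   # [6, -12, 2, 8, -4, -10]
```

The output matches the sequence given in the text for this diagram.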
=== Conway notation ===
The Conway notation for knots and links, named after John Horton Conway, is based on the theory of tangles (Conway 1970). The advantage of this notation is that it reflects some properties of the knot or link.
The notation describes how to construct a particular link diagram of the link. Start with a basic polyhedron, a 4-valent connected planar graph with no digon regions. Such a polyhedron is denoted first by the number of vertices then a number of asterisks which determine the polyhedron's position on a list of basic polyhedra. For example, 10** denotes the second 10-vertex polyhedron on Conway's list.
Each vertex then has an algebraic tangle substituted into it (each vertex is oriented so there is no arbitrary choice in substitution). Each such tangle has a notation consisting of numbers and + or − signs.
An example is 1*2 −3 2. The 1* denotes the only 1-vertex basic polyhedron. The 2 −3 2 is a sequence describing the continued fraction associated to a rational tangle. One inserts this tangle at the vertex of the basic polyhedron 1*.
A more complicated example is 8*3.1.2 0.1.1.1.1.1 Here again 8* refers to a basic polyhedron with 8 vertices. The periods separate the notation for each tangle.
Any link admits such a description, and it is clear this is a very compact notation even for very large crossing number. There are some further shorthands usually used. The last example is usually written 8*3:2 0, where the ones are omitted while the number of dots is kept (except for the dots at the end). For an algebraic knot such as in the first example, 1* is often omitted.
Conway's pioneering paper on the subject lists the basic polyhedra of up to 10 vertices, which he uses to tabulate links, and these have become standard for those links. For a further listing of higher-vertex polyhedra, there are nonstandard choices available.
=== Gauss code ===
Gauss code, similar to the Dowker–Thistlethwaite notation, represents a knot with a sequence of integers. However, rather than every crossing being represented by two different numbers, crossings are labeled with only one number. When the crossing is an overcrossing, a positive number is listed. At an undercrossing, a negative number. For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3
Gauss code is limited in its ability to identify knots. This problem is partially addressed by the extended Gauss code.
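A basic sanity check on a Gauss code follows from the description above: each crossing label must occur exactly twice, once positively (over) and once negatively (under). A small sketch, with an illustrative function name:

```python
from collections import Counter

def is_valid_gauss_code(code):
    """Check that every label appears exactly twice, with opposite signs."""
    counts = Counter(abs(e) for e in code)
    signs = Counter()
    for entry in code:
        signs[abs(entry)] += 1 if entry > 0 else -1
    return (all(c == 2 for c in counts.values())
            and all(s == 0 for s in signs.values()))

trefoil = [1, -2, 3, -1, 2, -3]   # Gauss code of the trefoil, from the text
print(is_valid_gauss_code(trefoil))          # True
print(is_valid_gauss_code([1, -2, 3, -1]))   # False: labels 2 and 3 unpaired
```

Note this only checks well-formedness; as the text says, distinct knots can still share a Gauss code.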
== See also ==
Arithmetic rope
Circuit topology
Lamp cord trick
Legendrian submanifolds and knots
List of knot theory topics
Molecular knot
Necktie § Knots
Quantum topology
Ribbon theory
== References ==
=== Sources ===
Adams, Colin (2004), The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots, American Mathematical Society, ISBN 978-0-8218-3678-1
Adams, Colin; Crawford, Thomas; DeMeo, Benjamin; Landry, Michael; Lin, Alex Tong; Montee, MurphyKate; Park, Seojung; Venkatesh, Saraswathi; Yhee, Farrah (2015), "Knot projections with a single multi-crossing", Journal of Knot Theory and Its Ramifications, 24 (3): 1550011, 30, arXiv:1208.5742, doi:10.1142/S021821651550011X, MR 3342136, S2CID 119320887
Adams, Colin; Hildebrand, Martin; Weeks, Jeffrey (1991), "Hyperbolic invariants of knots and links", Transactions of the American Mathematical Society, 326 (1): 1–56, doi:10.1090/s0002-9947-1991-0994161-2, JSTOR 2001854
Akbulut, Selman; King, Henry C. (1981), "All knots are algebraic", Comment. Math. Helv., 56 (3): 339–351, doi:10.1007/BF02566217, S2CID 120218312
Bar-Natan, Dror (1995), "On the Vassiliev knot invariants", Topology, 34 (2): 423–472, doi:10.1016/0040-9383(95)93237-2
Burton, Benjamin A. (2020). "The Next 350 Million Knots". 36th International Symposium on Computational Geometry (SoCG 2020). Leibniz Int. Proc. Inform. Vol. 164. Schloss Dagstuhl–Leibniz-Zentrum für Informatik. pp. 25:1–25:17. doi:10.4230/LIPIcs.SoCG.2020.25.
Collins, Graham (April 2006), "Computing with Quantum Knots", Scientific American, 294 (4): 56–63, Bibcode:2006SciAm.294d..56C, doi:10.1038/scientificamerican0406-56, PMID 16596880
Dehn, Max (1914), "Die beiden Kleeblattschlingen", Mathematische Annalen, 75 (3): 402–413, doi:10.1007/BF01563732, S2CID 120452571
Conway, John H. (1970), "An enumeration of knots and links, and some of their algebraic properties", Computational Problems in Abstract Algebra, Pergamon, pp. 329–358, doi:10.1016/B978-0-08-012975-4.50034-5, ISBN 978-0-08-012975-4
Doll, Helmut; Hoste, Jim (1991), "A tabulation of oriented links. With microfiche supplement", Math. Comp., 57 (196): 747–761, Bibcode:1991MaCom..57..747D, doi:10.1090/S0025-5718-1991-1094946-4
Flapan, Erica (2000), When topology meets chemistry: A topological look at molecular chirality, Outlook, Cambridge University Press, ISBN 978-0-521-66254-3
Haefliger, André (1962), "Knotted (4k − 1)-spheres in 6k-space", Annals of Mathematics, Second Series, 75 (3): 452–466, doi:10.2307/1970208, JSTOR 1970208
Haken, Wolfgang (1962), "Über das Homöomorphieproblem der 3-Mannigfaltigkeiten. I", Mathematische Zeitschrift, 80: 89–120, doi:10.1007/BF01162369, ISSN 0025-5874, MR 0160196
Hass, Joel (1998), "Algorithms for recognizing knots and 3-manifolds", Chaos, Solitons and Fractals, 9 (4–5): 569–581, arXiv:math/9712269, Bibcode:1998CSF.....9..569H, doi:10.1016/S0960-0779(97)00109-4, S2CID 7381505
Hoste, Jim; Thistlethwaite, Morwen; Weeks, Jeffrey (1998), "The First 1,701,935 Knots", Math. Intelligencer, 20 (4): 33–48, doi:10.1007/BF03025227, S2CID 18027155
Hoste, Jim (2005). "The Enumeration and Classification of Knots and Links". Handbook of Knot Theory. pp. 209–232. doi:10.1016/B978-044451452-3/50006-X. ISBN 978-0-444-51452-3.
Levine, Jerome (1965), "A classification of differentiable knots", Annals of Mathematics, Second Series, 1982 (1): 15–50, doi:10.2307/1970561, JSTOR 1970561
Kontsevich, M. (1993). "Vassiliev's knot invariants". I. M. Gelfand Seminar. ADVSOV. Vol. 16. pp. 137–150. doi:10.1090/advsov/016.2/04. ISBN 978-0-8218-4117-4.
Lickorish, W. B. Raymond (1997), An Introduction to Knot Theory, Graduate Texts in Mathematics, vol. 175, Springer-Verlag, doi:10.1007/978-1-4612-0691-0, ISBN 978-0-387-98254-0, S2CID 122824389
Perko, Kenneth (1974), "On the classification of knots", Proceedings of the American Mathematical Society, 45 (2): 262–6, doi:10.2307/2040074, JSTOR 2040074
Rolfsen, Dale (1976), Knots and Links, Mathematics Lecture Series, vol. 7, Berkeley, California: Publish or Perish, ISBN 978-0-914098-16-4, MR 0515288
Schubert, Horst (1949). Die eindeutige Zerlegbarkeit eines Knotens in Primknoten. doi:10.1007/978-3-642-45813-2. ISBN 978-3-540-01419-5.
Silver, Daniel (2006). "Knot Theory's Odd Origins". American Scientist. 94 (2): 158. doi:10.1511/2006.2.158.
Simon, Jonathan (1986), "Topological chirality of certain molecules", Topology, 25 (2): 229–235, doi:10.1016/0040-9383(86)90041-8
Sossinsky, Alexei (2002), Knots, mathematics with a twist, Harvard University Press, ISBN 978-0-674-00944-8
Turaev, Vladimir G. (2016). Quantum Invariants of Knots and 3-Manifolds. doi:10.1515/9783110435221. ISBN 978-3-11-043522-1. S2CID 118682559.
Weisstein, Eric W. (2013). "Reduced Knot Diagram". MathWorld. Wolfram. Retrieved 8 May 2013.
Weisstein, Eric W. (2013a). "Reducible Crossing". MathWorld. Wolfram. Retrieved 8 May 2013.
Witten, Edward (1989), "Quantum field theory and the Jones polynomial", Comm. Math. Phys., 121 (3): 351–399, Bibcode:1989CMaPh.121..351W, doi:10.1007/BF01217730, S2CID 14951363
Zeeman, Erik C. (1963), "Unknotting combinatorial balls", Annals of Mathematics, Second Series, 78 (3): 501–526, doi:10.2307/1970538, JSTOR 1970538
=== Footnotes ===
== Further reading ==
=== Introductory textbooks ===
There are a number of introductions to knot theory. A classical introduction for graduate students or advanced undergraduates is (Rolfsen 1976). Other good texts from the references are (Adams 2004) and (Lickorish 1997). Adams is informal and accessible for the most part to high schoolers. Lickorish is a rigorous introduction for graduate students, covering a nice mix of classical and modern topics. (Cromwell 2004) is suitable for undergraduates who know point-set topology; knowledge of algebraic topology is not required.
Burde, Gerhard; Zieschang, Heiner (1985), Knots, De Gruyter Studies in Mathematics, vol. 5, Walter de Gruyter, ISBN 978-3-11-008675-1
Crowell, Richard H.; Fox, Ralph (1977). Introduction to Knot Theory. Springer. ISBN 978-0-387-90272-2.
Kauffman, Louis H. (1987), On Knots, Princeton University Press, ISBN 978-0-691-08435-0
Kauffman, Louis H. (2013), Knots and Physics (4th ed.), World Scientific, ISBN 978-981-4383-00-4
Cromwell, Peter R. (2004), Knots and Links, Cambridge University Press, ISBN 978-0-521-54831-1
=== Surveys ===
Menasco, William W.; Thistlethwaite, Morwen, eds. (2005), Handbook of Knot Theory, Elsevier, ISBN 978-0-444-51452-3
Menasco and Thistlethwaite's handbook surveys a mix of topics relevant to current research trends in a manner accessible to advanced undergraduates but of interest to professional researchers.
Livio, Mario (2009), "Ch. 8: Unreasonable Effectiveness?", Is God a Mathematician?, Simon & Schuster, pp. 203–218, ISBN 978-0-7432-9405-8
== External links ==
"Mathematics and Knots" This is an online version of an exhibition developed for the 1989 Royal Society "PopMath RoadShow". Its aim was to use knots to present methods of mathematics to the general public.
=== History ===
Thomson, Sir William (1867), "On Vortex Atoms", Proceedings of the Royal Society of Edinburgh, VI: 94–105
Silliman, Robert H. (December 1963), "William Thomson: Smoke Rings and Nineteenth-Century Atomism", Isis, 54 (4): 461–474, doi:10.1086/349764, JSTOR 228151, S2CID 144988108
Movie of a modern recreation of Tait's smoke ring experiment
History of knot theory (on the home page of Andrew Ranicki)
=== Knot tables and software ===
KnotInfo: Table of Knot Invariants and Knot Theory Resources
The Knot Atlas — detailed info on individual knots in knot tables
KnotPlot — software to investigate geometric properties of knots
Knotscape — software to create images of knots
Knoutilus — online database and image generator of knots
KnotData.html — Wolfram Mathematica function for investigating knots
Regina — software for low-dimensional topology with native support for knots and links. Tables of prime knots with up to 19 crossings | Wikipedia/Knot_theory |
In mathematics, specifically in topology,
the interior of a subset S of a topological space X is the union of all subsets of S that are open in X.
A point that is in the interior of S is an interior point of S.
The interior of S is the complement of the closure of the complement of S.
In this sense interior and closure are dual notions.
The exterior of a set S is the complement of the closure of S; it consists of the points that are in neither the set nor its boundary.
The interior, boundary, and exterior of a subset together partition the whole space into three blocks (or fewer when one or more of these is empty).
The interior and exterior of a closed curve are a slightly different concept; see the Jordan curve theorem.
== Definitions ==
=== Interior point ===
If S is a subset of a Euclidean space, then x is an interior point of S if there exists an open ball centered at x which is completely contained in S. (This is illustrated in the introductory section to this article.)
This definition generalizes to any subset S of a metric space X with metric d: x is an interior point of S if there exists a real number {\displaystyle r>0} such that y is in S whenever the distance {\displaystyle d(x,y)<r.}
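For subsets of the real line built from intervals, the metric-space definition can be probed numerically. The sketch below is a heuristic test over a few candidate radii, not an exact decision procedure, and all names in it are illustrative:

```python
def in_S(y, intervals):
    """Membership in a union of closed intervals [a, b]."""
    return any(a <= y <= b for a, b in intervals)

def looks_interior(x, intervals, radii=(1.0, 0.5, 0.25, 0.125, 0.0625)):
    """True if some tested radius r gives an open ball around x inside S.

    Samples the left edge, centre, and right edge of the ball, which is
    enough when each candidate ball meets a single interval.
    """
    for r in radii:
        if all(in_S(y, intervals) for y in (x - r * 0.999, x, x + r * 0.999)):
            return True
    return False

S = [(0.0, 1.0)]                 # the closed interval [0, 1]
print(looks_interior(0.5, S))    # True: e.g. r = 0.25 works
print(looks_interior(0.0, S))    # False: every ball sticks out to the left
```

This matches the example below that the interior of [0, 1] in the standard topology is (0, 1): the endpoint 0 belongs to the set but is not an interior point.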
This definition generalizes to topological spaces by replacing "open ball" with "open set".
If S is a subset of a topological space X, then x is an interior point of S in X if x is contained in an open subset of X that is completely contained in S. (Equivalently, x is an interior point of S if S is a neighbourhood of x.)
=== Interior of a set ===
The interior of a subset S of a topological space X, denoted by {\displaystyle \operatorname {int} _{X}S} or {\displaystyle \operatorname {int} S} or {\displaystyle S^{\circ },} can be defined in any of the following equivalent ways:
{\displaystyle \operatorname {int} S} is the largest open subset of X contained in S.
{\displaystyle \operatorname {int} S} is the union of all open sets of X contained in S.
{\displaystyle \operatorname {int} S} is the set of all interior points of S.
If the space X is understood from context then the shorter notation {\displaystyle \operatorname {int} S} is usually preferred to {\displaystyle \operatorname {int} _{X}S.}
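In a finite topological space the second characterization ("union of all open sets contained in S") can be computed directly. The topology below is a hypothetical example chosen just for illustration:

```python
def interior(S, topology):
    """Union of all open sets of the topology contained in S."""
    S = frozenset(S)
    result = set()
    for U in topology:
        if U <= S:        # U is an open set contained in S
            result |= U
    return result

X = frozenset({1, 2, 3, 4})
# A topology on X: contains the empty set and X, and (being a chain) is
# closed under unions and intersections.
topology = [frozenset(), frozenset({1}), frozenset({1, 2}),
            frozenset({1, 2, 3}), X]

print(interior({1, 2, 4}, topology))   # {1, 2}: largest open subset
print(interior({2, 3, 4}, topology))   # set(): no nonempty open set fits
```

The first result is also the largest open set inside {1, 2, 4}, illustrating the equivalence of the first two definitions.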
== Examples ==
In any space, the interior of the empty set is the empty set.
In any space X, if {\displaystyle S\subseteq X,} then {\displaystyle \operatorname {int} S\subseteq S.}
If X is the real line {\displaystyle \mathbb {R} } (with the standard topology), then {\displaystyle \operatorname {int} ([0,1])=(0,1)} whereas the interior of the set {\displaystyle \mathbb {Q} } of rational numbers is empty: {\displaystyle \operatorname {int} \mathbb {Q} =\varnothing .}
If X is the complex plane {\displaystyle \mathbb {C} ,} then {\displaystyle \operatorname {int} (\{z\in \mathbb {C} :|z|\leq 1\})=\{z\in \mathbb {C} :|z|<1\}.}
In any Euclidean space, the interior of any finite set is the empty set.
On the set of real numbers, one can put other topologies rather than the standard one:
If X is the real numbers {\displaystyle \mathbb {R} } with the lower limit topology, then {\displaystyle \operatorname {int} ([0,1])=[0,1).}
If one considers on {\displaystyle \mathbb {R} } the topology in which every set is open, then {\displaystyle \operatorname {int} ([0,1])=[0,1].}
If one considers on {\displaystyle \mathbb {R} } the topology in which the only open sets are the empty set and {\displaystyle \mathbb {R} } itself, then {\displaystyle \operatorname {int} ([0,1])} is the empty set.
These examples show that the interior of a set depends upon the topology of the underlying space.
The last two examples are special cases of the following.
In any discrete space, since every set is open, every set is equal to its interior.
In any indiscrete space X, since the only open sets are the empty set and X itself, {\displaystyle \operatorname {int} X=X} and for every proper subset S of X, {\displaystyle \operatorname {int} S} is the empty set.
== Properties ==
Let X be a topological space and let S and T be subsets of X.
{\displaystyle \operatorname {int} S} is open in X.
If T is open in X then {\displaystyle T\subseteq S} if and only if {\displaystyle T\subseteq \operatorname {int} S.}
{\displaystyle \operatorname {int} S} is an open subset of S when S is given the subspace topology.
S is an open subset of X if and only if {\displaystyle \operatorname {int} S=S.}
Intensive: {\displaystyle \operatorname {int} S\subseteq S.}
Idempotence: {\displaystyle \operatorname {int} (\operatorname {int} S)=\operatorname {int} S.}
Preserves/distributes over binary intersection: {\displaystyle \operatorname {int} (S\cap T)=(\operatorname {int} S)\cap (\operatorname {int} T).}
However, the interior operator does not distribute over unions since only {\displaystyle \operatorname {int} (S\cup T)~\supseteq ~(\operatorname {int} S)\cup (\operatorname {int} T)} is guaranteed in general and equality might not hold. For example, if {\displaystyle X=\mathbb {R} ,S=(-\infty ,0],} and {\displaystyle T=(0,\infty )} then {\displaystyle (\operatorname {int} S)\cup (\operatorname {int} T)=(-\infty ,0)\cup (0,\infty )=\mathbb {R} \setminus \{0\}} is a proper subset of {\displaystyle \operatorname {int} (S\cup T)=\operatorname {int} \mathbb {R} =\mathbb {R} .}
Monotone/nondecreasing with respect to {\displaystyle \subseteq }: If {\displaystyle S\subseteq T} then {\displaystyle \operatorname {int} S\subseteq \operatorname {int} T.}
Other properties include:
If S is closed in X and {\displaystyle \operatorname {int} T=\varnothing } then {\displaystyle \operatorname {int} (S\cup T)=\operatorname {int} S.}
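Both behaviours (distribution over intersections, failure over unions) can be observed in a small finite topology; the space and topology below are illustrative assumptions, not taken from the text:

```python
def interior(S, topology):
    """Union of all open sets of the topology contained in S."""
    S = frozenset(S)
    out = set()
    for U in topology:
        if U <= S:
            out |= U
    return out

X = frozenset({1, 2, 3})
topology = [frozenset(), frozenset({1}), frozenset({3}),
            frozenset({1, 3}), X]

S, T = {1, 2}, {2, 3}
int_S, int_T = interior(S, topology), interior(T, topology)

# Distributes over the intersection (both sides are empty here):
print(interior(S & T, topology) == (int_S & int_T))   # True

# Fails for the union: S ∪ T = X is open, so its interior is all of X,
# but int S ∪ int T misses the point 2.
print(interior(S | T, topology))   # {1, 2, 3}
print(int_S | int_T)               # {1, 3}
```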
Relationship with closure
The above statements will remain true if all instances of the symbols/words
"interior", "int", "open", "subset", and "largest"
are respectively replaced by
"closure", "cl", "closed", "superset", and "smallest"
and the following symbols are swapped:
"{\displaystyle \subseteq }" swapped with "{\displaystyle \supseteq }"
"{\displaystyle \cup }" swapped with "{\displaystyle \cap }"
For more details on this matter, see interior operator below or the article Kuratowski closure axioms.
== Interior operator ==
The interior operator int_X is dual to the closure operator, which is denoted by cl_X or by an overline, in the sense that
{\displaystyle \operatorname {int} _{X}S=X\setminus {\overline {(X\setminus S)}}}
and also
{\displaystyle {\overline {S}}=X\setminus \operatorname {int} _{X}(X\setminus S),}
where X is the topological space containing S, and the backslash ∖ denotes set-theoretic difference.
Therefore, the abstract theory of closure operators and the Kuratowski closure axioms can be readily translated into the language of interior operators, by replacing sets with their complements in X.
In general, the interior operator does not commute with unions. However, in a complete metric space a stronger result does hold, which implies that every complete metric space is a Baire space.
== Exterior of a set ==
The exterior of a subset S of a topological space X, denoted by ext_X S or simply ext S, is the largest open set disjoint from S; namely, it is the union of all open sets in X that are disjoint from S.
The exterior is the interior of the complement, which is the same as the complement of the closure; in formulas,
{\displaystyle \operatorname {ext} S=\operatorname {int} (X\setminus S)=X\setminus {\overline {S}}.}
Similarly, the interior is the exterior of the complement:
{\displaystyle \operatorname {int} S=\operatorname {ext} (X\setminus S).}
The interior, boundary, and exterior of a set S together partition the whole space into three blocks (or fewer when one or more of these is empty):
{\displaystyle X=\operatorname {int} S\cup \partial S\cup \operatorname {ext} S,}
where ∂S denotes the boundary of S.
The interior and exterior are always open, while the boundary is closed.
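As a concrete check of these operators and the partition identity, here is a small sketch on a finite topological space; the space, its list of open sets, and the helper names are all our own illustration:

```python
# A finite topological space on X = {1, 2, 3, 4} with its open sets
# listed explicitly, used to check X = int S ∪ ∂S ∪ ext S.

X = frozenset({1, 2, 3, 4})
OPEN_SETS = [frozenset(s) for s in
             [(), (1,), (1, 2), (3, 4), (1, 3, 4), (1, 2, 3, 4)]]

def interior(S):
    """Union of all open sets contained in S."""
    return frozenset().union(*[U for U in OPEN_SETS if U <= S])

def closure(S):
    """Via the duality formula: cl S = X \\ int(X \\ S)."""
    return X - interior(X - S)

def exterior(S):
    """ext S = int(X \\ S) = X \\ cl S."""
    return interior(X - S)

def boundary(S):
    return closure(S) - interior(S)

S = frozenset({1, 3})
parts = (interior(S), boundary(S), exterior(S))

# The three parts cover X and are pairwise disjoint.
assert parts[0] | parts[1] | parts[2] == X
assert parts[0].isdisjoint(parts[1])
assert parts[1].isdisjoint(parts[2])
assert parts[0].isdisjoint(parts[2])
```

Here int S = {1}, cl S = X, so ∂S = {2, 3, 4} and ext S = ∅ — one of the "fewer than three blocks" cases mentioned above.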
Some of the properties of the exterior operator are unlike those of the interior operator:
The exterior operator reverses inclusions: if S ⊆ T, then ext T ⊆ ext S.
The exterior operator is not idempotent, but it does have the property that
{\displaystyle \operatorname {int} S\subseteq \operatorname {ext} \left(\operatorname {ext} S\right).}
== Interior-disjoint shapes ==
Two shapes a and b are called interior-disjoint if the intersection of their interiors is empty.
Interior-disjoint shapes may or may not intersect in their boundary.
== See also ==
Algebraic interior – Generalization of topological interior
DE-9IM – Topological model
Interior algebra – Algebraic structure
Jordan curve theorem – A closed curve divides the plane into two regions
Quasi-relative interior – Generalization of algebraic interior
Relative interior – Generalization of topological interior
== References ==
== Bibliography ==
Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129.
Dixmier, Jacques (1984). General Topology. Undergraduate Texts in Mathematics. Translated by Berberian, S. K. New York: Springer-Verlag. ISBN 978-0-387-90972-1. OCLC 10277303.
Császár, Ákos (1978). General topology. Translated by Császár, Klára. Bristol England: Adam Hilger Ltd. ISBN 0-85274-275-4. OCLC 4146011.
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
Joshi, K. D. (1983). Introduction to General Topology. New York: John Wiley and Sons Ltd. ISBN 978-0-85226-444-7. OCLC 9218750.
Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153.
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. (accessible to patrons with print disabilities)
Schubert, Horst (1968). Topology. London: Macdonald & Co. ISBN 978-0-356-02077-8. OCLC 463753.
Wilansky, Albert (17 October 2008) [1970]. Topology for Analysis. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-46903-4. OCLC 227923899.
Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
== External links ==
Interior at PlanetMath.
In mathematics, low-dimensional topology is the branch of topology that studies manifolds, or more generally topological spaces, of four or fewer dimensions. Representative topics are the structure theory of 3-manifolds and 4-manifolds, knot theory, and braid groups. This can be regarded as a part of geometric topology. It may also be used to refer to the study of topological spaces of dimension 1, though this is more typically considered part of continuum theory.
== History ==
A number of advances starting in the 1960s had the effect of emphasising low dimensions in topology. The solution by Stephen Smale, in 1961, of the Poincaré conjecture in five or more dimensions made dimensions three and four seem the hardest; and indeed they required new methods, while the freedom of higher dimensions meant that questions could be reduced to computational methods available in surgery theory. Thurston's geometrization conjecture, formulated in the late 1970s, offered a framework that suggested geometry and topology were closely intertwined in low dimensions, and Thurston's proof of geometrization for Haken manifolds utilized a variety of tools from previously only weakly linked areas of mathematics. Vaughan Jones' discovery of the Jones polynomial in the early 1980s not only led knot theory in new directions but gave rise to still mysterious connections between low-dimensional topology and mathematical physics. In 2002, Grigori Perelman announced a proof of the three-dimensional Poincaré conjecture, using Richard S. Hamilton's Ricci flow, an idea belonging to the field of geometric analysis.
Overall, this progress has led to better integration of the field into the rest of mathematics.
== Two dimensions ==
A surface is a two-dimensional topological manifold. The most familiar examples are those that arise as the boundaries of solid objects in ordinary three-dimensional Euclidean space R³—for example, the surface of a ball. On the other hand, there are surfaces, such as the Klein bottle, that cannot be embedded in three-dimensional Euclidean space without introducing singularities or self-intersections.
=== Classification of surfaces ===
The classification theorem of closed surfaces states that any connected closed surface is homeomorphic to some member of one of these three families:
the sphere;
the connected sum of g tori, for g ≥ 1;
the connected sum of k real projective planes, for k ≥ 1.
The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number g of tori involved is called the genus of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of g tori is 2 − 2g.
The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of k of them is 2 − k.
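The two Euler-characteristic formulas just quoted can be written as one-line helpers (the function names are our own, illustrative):

```python
# Euler characteristics from the classification of closed surfaces:
# orientable:    connected sum of g tori has          chi = 2 - 2g
# nonorientable: connected sum of k projective planes has chi = 2 - k

def euler_char_orientable(genus: int) -> int:
    return 2 - 2 * genus

def euler_char_nonorientable(k: int) -> int:
    return 2 - k

assert euler_char_orientable(0) == 2     # sphere
assert euler_char_orientable(1) == 0     # torus
assert euler_char_nonorientable(1) == 1  # real projective plane
assert euler_char_nonorientable(2) == 0  # Klein bottle
```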
=== Teichmüller space ===
In mathematics, the Teichmüller space T_X of a (real) topological surface X is a space that parameterizes complex structures on X up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Each point in T_X may be regarded as an isomorphism class of 'marked' Riemann surfaces, where a 'marking' is an isotopy class of homeomorphisms from X to itself.
The Teichmüller space is the universal covering orbifold of the (Riemann) moduli space.
Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics. The underlying topological space of Teichmüller space was studied by Fricke, and the Teichmüller metric on it was introduced by Oswald Teichmüller (1940).
=== Uniformization theorem ===
In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of the three domains: the open unit disk, the complex plane, or the Riemann sphere. In particular it admits a Riemannian metric of constant curvature. This classifies Riemannian surfaces as elliptic (positively curved—rather, admitting a constant positively curved metric), parabolic (flat), and hyperbolic (negatively curved) according to their universal cover.
The uniformization theorem is a generalization of the Riemann mapping theorem from proper simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces.
== Three dimensions ==
A topological space X is a 3-manifold if every point in X has a neighbourhood that is homeomorphic to Euclidean 3-space.
The topological, piecewise-linear, and smooth categories are all equivalent in three dimensions, so little distinction is made in whether we are dealing with say, topological 3-manifolds, or smooth 3-manifolds.
Phenomena in three dimensions can be strikingly different from phenomena in other dimensions, and so there is a prevalence of very specialized techniques that do not generalize to dimensions greater than three. This special role has led to the discovery of close connections to a diversity of other fields, such as knot theory, geometric group theory, hyperbolic geometry, number theory, Teichmüller theory, topological quantum field theory, gauge theory, Floer homology, and partial differential equations. 3-manifold theory is considered a part of low-dimensional topology or geometric topology.
=== Knot and braid theory ===
Knot theory is the study of mathematical knots. While inspired by knots that appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R³ (since we're using topology, a circle isn't bound to the classical geometric concept, but to all of its homeomorphisms). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R³ upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself.
Knot complements are frequently-studied 3-manifolds. The knot complement of a tame knot K is the three-dimensional space surrounding the knot. To make this precise, suppose that K is a knot in a three-manifold M (most often, M is the 3-sphere). Let N be a tubular neighborhood of K; so N is a solid torus. The knot complement is then the complement of N,
{\displaystyle X_{K}=M-{\mbox{interior}}(N).}
A related topic is braid theory. Braid theory is an abstract geometric theory studying the everyday braid concept, and some generalizations. The idea is that braids can be organized into groups, in which the group operation is 'do the first braid on a set of strings, and then follow it with a second on the twisted strings'. Such groups may be described by explicit presentations, as was shown by Emil Artin (1947). For an elementary treatment along these lines, see the article on braid groups. Braid groups may also be given a deeper mathematical interpretation: as the fundamental group of certain configuration spaces.
=== Hyperbolic 3-manifolds ===
A hyperbolic 3-manifold is a 3-manifold equipped with a complete Riemannian metric of constant sectional curvature -1. In other words, it is the quotient of three-dimensional hyperbolic space by a subgroup of hyperbolic isometries acting freely and properly discontinuously. See also Kleinian model.
Its thick-thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and/or ends that are the product of a Euclidean surface and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact. In this case, the ends are of the form torus cross the closed half-ray and are called cusps. Knot complements are the most commonly studied cusped manifolds.
=== Poincaré conjecture and geometrization ===
Thurston's geometrization conjecture states that certain three-dimensional topological spaces each have a unique geometric structure that can be associated with them. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply-connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic).
In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston (1982), and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
== Four dimensions ==
A 4-manifold is a 4-dimensional topological manifold. A smooth 4-manifold is a 4-manifold with a smooth structure. In dimension four, in marked contrast with lower dimensions, topological and smooth manifolds are quite different. There exist some topological 4-manifolds that admit no smooth structure and even if there exists a smooth structure it need not be unique (i.e. there are smooth 4-manifolds that are homeomorphic but not diffeomorphic).
4-manifolds are of importance in physics because, in General Relativity, spacetime is modeled as a pseudo-Riemannian 4-manifold.
=== Exotic R⁴ ===
An exotic R⁴ is a differentiable manifold that is homeomorphic but not diffeomorphic to the Euclidean space R⁴. The first examples were found in the early 1980s by Michael Freedman, by using the contrast between Freedman's theorems about topological 4-manifolds, and Simon Donaldson's theorems about smooth 4-manifolds. There is a continuum of non-diffeomorphic differentiable structures of R⁴, as was shown first by Clifford Taubes.
Prior to this construction, non-diffeomorphic smooth structures on spheres—exotic spheres—were already known to exist, although the question of the existence of such structures for the particular case of the 4-sphere remained open (and still remains open to this day). For any positive integer n other than 4, there are no exotic smooth structures on Rⁿ; in other words, if n ≠ 4 then any smooth manifold homeomorphic to Rⁿ is diffeomorphic to Rⁿ.
=== Other special phenomena in four dimensions ===
There are several fundamental theorems about manifolds that can be proved by low-dimensional methods in dimensions at most 3, and by completely different high-dimensional methods in dimension at least 5, but which are false in four dimensions. Here are some examples:
In dimensions other than 4, the Kirby–Siebenmann invariant provides the obstruction to the existence of a PL structure; in other words a compact topological manifold has a PL structure if and only if its Kirby–Siebenmann invariant in H⁴(M, Z/2Z) vanishes. In dimension 3 and lower, every topological manifold admits an essentially unique PL structure. In dimension 4 there are many examples with vanishing Kirby–Siebenmann invariant but no PL structure.
In any dimension other than 4, a compact topological manifold has only a finite number of essentially distinct PL or smooth structures. In dimension 4, compact manifolds can have a countably infinite number of non-diffeomorphic smooth structures.
Four is the only dimension n for which Rⁿ can have an exotic smooth structure. R⁴ has an uncountable number of exotic smooth structures; see exotic R⁴.
The solution to the smooth Poincaré conjecture is known in all dimensions other than 4 (it is usually false in dimensions at least 7; see exotic sphere). The Poincaré conjecture for PL manifolds has been proved for all dimensions other than 4, but it is not known whether it is true in 4 dimensions (it is equivalent to the smooth Poincaré conjecture in 4 dimensions).
The smooth h-cobordism theorem holds for cobordisms provided that neither the cobordism nor its boundary has dimension 4. It can fail if the boundary of the cobordism has dimension 4 (as shown by Donaldson). If the cobordism has dimension 4, then it is unknown whether the h-cobordism theorem holds.
A topological manifold of dimension not equal to 4 has a handlebody decomposition. Manifolds of dimension 4 have a handlebody decomposition if and only if they are smoothable.
There are compact 4-dimensional topological manifolds that are not homeomorphic to any simplicial complex. In dimension at least 5 the existence of topological manifolds not homeomorphic to a simplicial complex was an open problem. In 2013, Ciprian Manolescu posted a preprint on arXiv showing that in each dimension greater than or equal to 5 there are manifolds that are not homeomorphic to a simplicial complex.
== A few typical theorems that distinguish low-dimensional topology ==
There are several theorems that in effect state that many of the most basic tools used to study high-dimensional manifolds do not apply to low-dimensional manifolds, such as:
Steenrod's theorem states that an orientable 3-manifold has a trivial tangent bundle. Stated another way, the only characteristic class of a 3-manifold is the obstruction to orientability.
Any closed 3-manifold is the boundary of a 4-manifold. This theorem is due independently to several people: it follows from the Dehn–Lickorish theorem via a Heegaard splitting of the 3-manifold. It also follows from René Thom's computation of the cobordism ring of closed manifolds.
The existence of exotic smooth structures on R⁴. This was originally observed by Michael Freedman, based on the work of Simon Donaldson and Andrew Casson. It has since been elaborated by Freedman, Robert Gompf, Clifford Taubes and Laurence Taylor to show there exists a continuum of non-diffeomorphic smooth structures on R⁴. Meanwhile, Rⁿ is known to have exactly one smooth structure up to diffeomorphism provided n ≠ 4.
== See also ==
List of geometric topology topics
== References ==
== External links ==
Rob Kirby's Problems in Low-Dimensional Topology – gzipped postscript file (1.4 MB)
Mark Brittenham's links to low dimensional topology – lists of homepages, conferences, etc.
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
Further progress was not made until the 14th century, when Madhava of Sangamagrama developed approximations correct to eleven and then thirteen digits. Jamshīd al-Kāshī achieved sixteen digits next. Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the end of the 18th century (Jurij Vega).
The record of manual approximation of π is held by William Shanks, who calculated 527 decimals correctly in 1853. Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers (for a comprehensive account, see Chronology of computation of π). On April 2, 2025, the current record was established by Linus Media Group and Kioxia with Alexander Yee's y-cruncher with 300 trillion (3×10¹⁴) digits.
== Early history ==
The best known approximations to π dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.
Some Egyptologists have claimed that the ancient Egyptians used an approximation of π as 22⁄7 = 3.142857 (about 0.04% too high) from as early as the Old Kingdom (c. 2700–2200 BC), but this claim has been met with skepticism.
Babylonian mathematics usually approximated π as 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible). The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better approximation of π as 25⁄8 = 3.125, about 0.528% below the exact value.
At about the same time, the Egyptian Rhind Mathematical Papyrus (dated to the Second Intermediate Period, c. 1600 BCE, although stated to be a copy of an older, Middle Kingdom text) implies an approximation of π as 256⁄81 ≈ 3.16 (accurate to 0.6 percent) by calculating the area of a circle via approximation with the octagon.
Astronomical calculations in the Shatapatha Brahmana (c. 6th century BCE) use a fractional approximation of 339⁄108 ≈ 3.139.
The Mahabharata (500 BCE – 300 CE) offers an approximation of 3, in the ratios given in Bhishma Parva verses 6.12.40–45.
...
The Moon is handed down by memory to be eleven thousand yojanas in diameter. Its peripheral circle happens to be thirty three thousand yojanas when calculated.
...
The Sun is eight thousand yojanas and another two thousand yojanas in diameter. From that its peripheral circle comes to be equal to thirty thousand yojanas.
...
In the 3rd century BCE, Archimedes proved the sharp inequalities 223⁄71 < π < 22⁄7, by means of regular 96-gons (accuracies of 2·10⁻⁴ and 4·10⁻⁴, respectively).
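A numeric sketch of these 96-gon bounds: for a circle of diameter 1, let a and b be the perimeters of the circumscribed and inscribed regular n-gons, doubled from the hexagon via the classical mean recurrences (equivalent in spirit to, though not literally, Archimedes' procedure):

```python
import math

# Start from the hexagon: circumscribed perimeter 2*sqrt(3), inscribed 3.
a, b, n = 2 * math.sqrt(3), 3.0, 6
while n < 96:
    a = 2 * a * b / (a + b)   # harmonic mean -> circumscribed 2n-gon
    b = math.sqrt(a * b)      # geometric mean -> inscribed 2n-gon
    n *= 2

# The 96-gon perimeters bracket pi, inside Archimedes' published bounds.
assert b < math.pi < a
assert 223 / 71 < b and a < 22 / 7
```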
In the 2nd century CE, Ptolemy used the value 377⁄120, the first known approximation accurate to three decimal places (accuracy 2·10⁻⁵). It is equal to
{\displaystyle 3+8/60+30/60^{2},}
which is accurate to two sexagesimal digits.
The Chinese mathematician Liu Hui in 263 CE computed π to between 3.141024 and 3.142708 by inscribing a 96-gon and 192-gon; the average of these two values is 3.141866 (accuracy 9·10⁻⁵).
He also suggested that 3.14 was a good enough approximation for practical purposes. He has also frequently been credited with a later and more accurate result, π ≈ 3927⁄1250 = 3.1416 (accuracy 2·10⁻⁶), although some scholars instead believe that this is due to the later (5th-century) Chinese mathematician Zu Chongzhi.
Zu Chongzhi is known to have computed π to be between 3.1415926 and 3.1415927, which was correct to seven decimal places. He also gave two other approximations of π: π ≈ 22⁄7 and π ≈ 355⁄113, which are not as accurate as his decimal result. The latter fraction is the best possible rational approximation of π using fewer than five decimal digits in the numerator and denominator. Zu Chongzhi's results surpass the accuracy reached in Hellenistic mathematics, and would remain without improvement for close to a millennium.
In Gupta-era India (6th century), mathematician Aryabhata, in his astronomical treatise Āryabhaṭīya stated:
Add 4 to 100, multiply by 8 and add to 62,000. This is 'approximately' the circumference of a circle whose diameter is 20,000.
Approximating π to four decimal places: π ≈ 62832⁄20000 = 3.1416, Aryabhata stated that his result "approximately" (āsanna "approaching") gave the circumference of a circle. His 15th-century commentator Nilakantha Somayaji (Kerala school of astronomy and mathematics) has argued that the word means not only that this is an approximation, but that the value is incommensurable (irrational).
== Middle Ages ==
Further progress was not made for nearly a millennium, until the 14th century, when Indian mathematician and astronomer Madhava of Sangamagrama, founder of the Kerala school of astronomy and mathematics, found the Maclaurin series for arctangent, and then two infinite series for π. One of them is now known as the Madhava–Leibniz series, based on
{\displaystyle \pi =4\arctan(1):}
{\displaystyle \pi =4\left(1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots \right)}
The other was based on
{\displaystyle \pi =6\arctan(1/{\sqrt {3}}):}
{\displaystyle \pi ={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-3)^{-k}}{2k+1}}={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-{\frac {1}{3}})^{k}}{2k+1}}={\sqrt {12}}\left(1-{1 \over 3\cdot 3}+{1 \over 5\cdot 3^{2}}-{1 \over 7\cdot 3^{3}}+\cdots \right)}
He used the first 21 terms to compute an approximation of π correct to 11 decimal places as 3.14159265359.
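Summing the first 21 terms of the √12 series above reproduces this accuracy (double-precision floats are just sufficient for the check):

```python
import math

# Partial sum of sqrt(12) * sum_{k=0}^{20} (-1/3)^k / (2k+1).
total = 0.0
for k in range(21):
    total += (-1 / 3) ** k / (2 * k + 1)
approx = math.sqrt(12) * total

# The 21-term sum agrees with pi to roughly 11 decimal places.
assert abs(approx - math.pi) < 1e-10
```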
He also improved the formula based on arctan(1) by including a correction:
{\displaystyle \pi /4\approx 1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots -{\frac {(-1)^{n}}{2n-1}}\pm {\frac {n^{2}+1}{4n^{3}+5n}}}
It is not known how he came up with this correction. Using this he found an approximation of π to 13 decimal places of accuracy when n = 75.
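The effect of the correction can be checked directly; in this sketch the correction is applied with the sign opposite to the last term taken, which is how the ± in the formula resolves:

```python
import math

# Leibniz partial sum with Madhava's rational correction term at n = 75.
n = 75
partial = sum((-1) ** (i + 1) / (2 * i - 1) for i in range(1, n + 1))
correction = (n * n + 1) / (4 * n ** 3 + 5 * n)
approx = 4 * (partial + (-1) ** n * correction)

# Roughly 13 decimal places as stated, versus about 2 for the raw sum.
assert abs(approx - math.pi) < 1e-9
assert abs(4 * partial - math.pi) > 1e-3
```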
Indian mathematician Bhaskara II used regular polygons with up to 384 sides to obtain a close approximation of π, calculating it as 3.141666.
Jamshīd al-Kāshī (Kāshānī), a Persian astronomer and mathematician, correctly computed the fractional part of 2π to 9 sexagesimal digits in 1424, and translated this into 16 decimal digits after the decimal point:
{\displaystyle 2\pi \approx 6.2831853071795864,}
which gives 16 correct digits for π after the decimal point:
{\displaystyle \pi \approx 3.1415926535897932}
He achieved this level of accuracy by calculating the perimeter of a regular polygon with 3 × 2²⁸ sides.
== 16th to 19th centuries ==
In the second half of the 16th century, the French mathematician François Viète discovered an infinite product that converged on π known as Viète's formula.
The German-Dutch mathematician Ludolph van Ceulen (circa 1600) computed the first 35 decimal places of π with a 2⁶²-gon. He was so proud of this accomplishment that he had them inscribed on his tombstone.
In Cyclometricus (1621), Willebrord Snellius demonstrated that the perimeter of the inscribed polygon converges on the circumference twice as fast as does the perimeter of the corresponding circumscribed polygon. This was proved by Christiaan Huygens in 1654. Snellius was able to obtain seven digits of π from a 96-sided polygon.
In 1656, John Wallis published the Wallis product:
{\displaystyle {\frac {\pi }{2}}=\prod _{n=1}^{\infty }{\frac {4n^{2}}{4n^{2}-1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)={\Big (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\Big )}\cdot {\Big (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\Big )}\cdot {\Big (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\Big )}\cdot {\Big (}{\frac {8}{7}}\cdot {\frac {8}{9}}{\Big )}\cdot \;\cdots }
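A quick numerical sketch of the partial products shows why the Wallis product was never used for digit hunting: convergence is very slow (the error shrinks only like 1/n):

```python
import math

def wallis(n_terms: int) -> float:
    """2 * product of the first n_terms factors 4n^2 / (4n^2 - 1)."""
    prod = 1.0
    for n in range(1, n_terms + 1):
        prod *= (2 * n) * (2 * n) / ((2 * n - 1) * (2 * n + 1))
    return 2 * prod

assert abs(wallis(10) - math.pi) > abs(wallis(100) - math.pi)
assert abs(wallis(100_000) - math.pi) < 1e-4  # 100,000 factors, ~5 digits
```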
In 1706, John Machin used Gregory's series (the Taylor series for arctangent) and the identity
{\textstyle {\tfrac {1}{4}}\pi =4\operatorname {arccot} 5-\operatorname {arccot} 239}
to calculate 100 digits of π (see § Machin-like formula below). In 1719, Thomas de Lagny used a similar identity to calculate 127 digits (of which 112 were correct). In 1789, the Slovene mathematician Jurij Vega improved John Machin's formula to calculate the first 140 digits, of which the first 126 were correct. In 1841, William Rutherford calculated 208 digits, of which the first 152 were correct.
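Machin's identity (equivalently π/4 = 4 arctan(1/5) − arctan(1/239)) can be evaluated to 100 digits with Python's big integers used as fixed-point arithmetic; this is a sketch of the kind of computation involved, not Machin's hand method, and the helper names are ours:

```python
DIGITS = 110          # guard digits beyond the 100 checked below
SCALE = 10 ** DIGITS

def arctan_inv(x: int) -> int:
    """arctan(1/x) * SCALE via the Gregory series, for integer x > 1."""
    total = 0
    power = SCALE // x          # floor((1/x)^n * SCALE), starting at n = 1
    n, sign = 1, 1
    while power:
        total += sign * (power // n)
        power //= x * x
        n += 2
        sign = -sign
    return total

pi_fixed = 4 * (4 * arctan_inv(5) - arctan_inv(239))

# First 100 significant digits of pi:
assert str(pi_fixed).startswith(
    "31415926535897932384626433832795028841971693993751"
    "05820974944592307816406286208998628034825342117067")
```

The arctan(1/5) series gains about 1.4 digits per term, which is what made Machin's formula so much more practical than the raw Gregory series.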
The magnitude of such precision (152 decimal places) can be put into context by the fact that the circumference of the largest known object, the observable universe, can be calculated from its diameter (93 billion light-years) to a precision of less than one Planck length (at 1.6162×10⁻³⁵ meters, the shortest unit of length expected to be directly measurable) using π expressed to just 62 decimal places.
The English amateur mathematician William Shanks calculated π to 530 decimal places in January 1853, of which the first 527 were correct (the last few likely being incorrect due to round-off errors). He subsequently expanded his calculation to 607 decimal places in April 1853, but an error introduced right at the 530th decimal place rendered the rest of his calculation erroneous; due to the nature of Machin's formula, the error propagated back to the 528th decimal place, leaving only the first 527 digits correct once again. Twenty years later, Shanks expanded his calculation to 707 decimal places in April 1873. Due to this being an expansion of his previous calculation, most of the new digits were incorrect as well. Shanks was said to have calculated new digits all morning and would then spend all afternoon checking his morning's work. This was the longest expansion of π until the advent of the electronic digital computer three-quarters of a century later.
== 20th and 21st centuries ==
In 1910, the Indian mathematician Srinivasa Ramanujan found several rapidly converging infinite series of π, including
{\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}}
which computes a further eight decimal places of π with each term in the series. His series are now the basis for the fastest algorithms currently used to calculate π. Evaluating the first term alone yields a value correct to seven decimal places:
{\displaystyle \pi \approx {\frac {9801}{2206{\sqrt {2}}}}\approx 3.14159273}
See Ramanujan–Sato series.
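The roughly-eight-digits-per-term behavior is easy to observe with Python's decimal module (an illustrative sketch; the helper names are ours):

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50

def ramanujan_pi(terms: int) -> Decimal:
    """Sum the first `terms` terms of the 1103 + 26390k series."""
    s = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k) * (1103 + 26390 * k))
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        s += num / den
    inv_pi = Decimal(2) * Decimal(2).sqrt() / 9801 * s
    return 1 / inv_pi

PI_30 = Decimal("3.14159265358979323846264338327950288")
assert abs(ramanujan_pi(1) - PI_30) < Decimal("1e-6")   # ~7 decimals
assert abs(ramanujan_pi(3) - PI_30) < Decimal("1e-22")  # ~8 more per term
```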
From the mid-20th century onwards, all improvements in calculation of π have been done with the help of calculators or computers.
In 1944–45, D. F. Ferguson, with the aid of a mechanical desk calculator, found that William Shanks had made a mistake in the 528th decimal place, and that all succeeding digits were incorrect.
In the early years of the computer, an expansion of π to 100,000 decimal places was computed by Maryland mathematician Daniel Shanks (no relation to the aforementioned William Shanks) and his team at the United States Naval Research Laboratory in Washington, D.C. In 1961, Shanks and his team used two different power series for calculating the digits of π. For one, it was known that any error would produce a value slightly high, and for the other, it was known that any error would produce a value slightly low. And hence, as long as the two series produced the same digits, there was a very high confidence that they were correct. The first 100,265 digits of π were published in 1962. The authors outlined what would be needed to calculate π to 1 million decimal places and concluded that the task was beyond that day's technology, but would be possible in five to seven years.
In 1989, the Chudnovsky brothers computed π to over 1 billion decimal places on the supercomputer IBM 3090 using the following variation of Ramanujan's infinite series of π:
{\displaystyle {\frac {1}{\pi }}=12\sum _{k=0}^{\infty }{\frac {(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}.}
Records since then have all been accomplished using the Chudnovsky algorithm.
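A direct, unoptimized sketch of this series with the decimal module shows the roughly 14 digits contributed per term; record-setting runs use far more elaborate implementations (binary splitting, fast multiplication), and the helper names here are ours:

```python
from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_pi(terms: int, prec: int = 60) -> Decimal:
    """Sum the first `terms` terms of the Chudnovsky series."""
    getcontext().prec = prec
    total = Decimal(0)
    for k in range(terms):
        num = Decimal((-1) ** k * factorial(6 * k)
                      * (13591409 + 545140134 * k))
        den = (Decimal(factorial(3 * k)) * Decimal(factorial(k)) ** 3
               * Decimal(640320) ** (3 * k))
        total += num / den
    # Pull the constant 640320^(3/2) = 640320 * sqrt(640320) out of the sum.
    inv_pi = total * 12 / (Decimal(640320) * Decimal(640320).sqrt())
    return 1 / inv_pi

PI_40 = Decimal("3.141592653589793238462643383279502884197169399375")
assert abs(chudnovsky_pi(1) - PI_40) < Decimal("1e-13")  # ~14 digits/term
assert abs(chudnovsky_pi(2) - PI_40) < Decimal("1e-26")
```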
In 1999, Yasumasa Kanada and his team at the University of Tokyo computed π to over 200 billion decimal places on the supercomputer HITACHI SR8000/MPP (128 nodes) using another variation of Ramanujan's infinite series of π.
In November 2002, Yasumasa Kanada and a team of 9 others used the Hitachi SR8000, a 64-node supercomputer with 1 terabyte of main memory, to calculate π to roughly 1.24 trillion digits in around 600 hours (25 days).
=== Recent records ===
In August 2009, a Japanese supercomputer called the T2K Open Supercomputer more than doubled the previous record by calculating π to roughly 2.6 trillion digits in approximately 73 hours and 36 minutes.
In December 2009, Fabrice Bellard used a home computer to compute 2.7 trillion decimal digits of π. Calculations were performed in base 2 (binary), then the result was converted to base 10 (decimal). The calculation, conversion, and verification steps took a total of 131 days.
In August 2010, Shigeru Kondo used Alexander Yee's y-cruncher to calculate 5 trillion digits of π. This was the world record for any type of calculation, but significantly it was performed on a home computer built by Kondo. The calculation was done between 4 May and 3 August, with the primary and secondary verifications taking 64 and 66 hours respectively.
In October 2011, Shigeru Kondo broke his own record by computing ten trillion (10¹³) and fifty digits using the same method but with better hardware.
In December 2013, Kondo broke his own record for a second time when he computed 12.1 trillion digits of π.
In October 2014, Sandon Van Ness, going by the pseudonym "houkouonchi" used y-cruncher to calculate 13.3 trillion digits of π.
In November 2016, Peter Trueb and his sponsors computed on y-cruncher and fully verified 22.4 trillion digits of π (22,459,157,718,361, approximately π^e × 10¹²). The computation took (with three interruptions) 105 days to complete, the limitation of further expansion being primarily storage space.
In March 2019, Emma Haruka Iwao, an employee at Google, computed 31.4 (approximately 10π) trillion digits of pi using y-cruncher and Google Cloud machines. This took 121 days to complete.
In January 2020, Timothy Mullican announced the computation of 50 trillion digits over 303 days.
On 14 August 2021, a team (DAViS) at the University of Applied Sciences of the Grisons announced completion of the computation of π to 62.8 (approximately 20π) trillion digits.
On 8 June 2022, Emma Haruka Iwao announced on the Google Cloud Blog the computation of 100 trillion (10¹⁴) digits of π over 158 days using Alexander Yee's y-cruncher.
On 14 March 2024, Jordan Ranous, Kevin O’Brien and Brian Beeler computed π to 105 trillion digits, also using y-cruncher.
On 28 June 2024, the StorageReview Team computed π to 202 trillion digits, also using y-cruncher.
On 2 April 2025, Linus Media Group and Kioxia computed π to 300 trillion digits, also using y-cruncher.
== Practical approximations ==
Depending on the purpose of a calculation, π can be approximated by using fractions for ease of calculation. The most notable such approximations are 22⁄7 (relative error of about 4·10⁻⁴) and 355⁄113 (relative error of about 8·10⁻⁸).
In Chinese mathematics, the fractions 22/7 and 355/113 are known as Yuelü (约率; yuēlǜ; 'approximate ratio') and Milü (密率; mìlǜ; 'close ratio').
== Non-mathematical "definitions" of π ==
Of some notability are legal or historical texts purportedly "defining π" to have some rational value, such as the "Indiana Pi Bill" of 1897, which stated "the ratio of the diameter and circumference is as five-fourths to four" (which would imply "π = 3.2") and a passage in the Hebrew Bible that implies that π = 3.
=== Indiana bill ===
The so-called "Indiana Pi Bill" from 1897 has often been characterized as an attempt to "legislate the value of Pi". Rather, the bill dealt with a purported solution to the problem of geometrically "squaring the circle".
The bill was nearly passed by the Indiana General Assembly in the U.S., and has been claimed to imply a number of different values for π, although the closest it comes to explicitly asserting one is the wording "the ratio of the diameter and circumference is as five-fourths to four", which would make π = 16⁄5 = 3.2, a discrepancy of nearly 2 percent. A mathematics professor who happened to be present the day the bill was brought up for consideration in the Senate, after it had passed in the House, helped to stop the passage of the bill on its second reading, after which the assembly thoroughly ridiculed it before postponing it indefinitely.
=== Imputed biblical value ===
It is sometimes claimed that the Hebrew Bible implies that "π equals three", based on a passage in 1 Kings 7:23 and 2 Chronicles 4:2 giving measurements for the round basin located in front of the Temple in Jerusalem as having a diameter of 10 cubits and a circumference of 30 cubits.
The issue is discussed in the Talmud and in Rabbinic literature. Among the many explanations and comments are these:
Rabbi Nehemiah explained this in his Mishnat ha-Middot (the earliest known Hebrew text on geometry, ca. 150 CE) by saying that the diameter was measured from the outside rim while the circumference was measured along the inner rim. This interpretation implies a brim about 0.225 cubit (or, assuming an 18-inch "cubit", some 4 inches), or one and a third "handbreadths", thick.
Maimonides states (ca. 1168 CE) that π can only be known approximately, so the value 3 was given as accurate enough for religious purposes. This is taken by some as the earliest assertion that π is irrational.
There is still some debate on this passage in biblical scholarship. Many reconstructions of the basin show a wider brim (or flared lip) extending outward from the bowl itself by several inches to match the description given. In the succeeding verses, the rim is described as "a handbreadth thick; and the brim thereof was wrought like the brim of a cup, like the flower of a lily: it received and held three thousand baths", which suggests a shape that can be encompassed with a string shorter than the total length of the brim, e.g., a lily flower or a teacup.
== Development of efficient formulae ==
=== Polygon approximation to a circle ===
Archimedes, in his Measurement of a Circle, created the first algorithm for the calculation of π based on the idea that the perimeter of any (convex) polygon inscribed in a circle is less than the circumference of the circle, which, in turn, is less than the perimeter of any circumscribed polygon. He started with inscribed and circumscribed regular hexagons, whose perimeters are readily determined. He then shows how to calculate the perimeters of regular polygons of twice as many sides that are inscribed and circumscribed about the same circle. This is a recursive procedure which would be described today as follows: Let pk and Pk denote the perimeters of regular polygons of k sides that are inscribed and circumscribed about the same circle, respectively. Then,
{\displaystyle P_{2n}={\frac {2p_{n}P_{n}}{p_{n}+P_{n}}},\quad \quad p_{2n}={\sqrt {p_{n}P_{2n}}}.}
Archimedes uses this to successively compute P12, p12, P24, p24, P48, p48, P96 and p96. Using these last values he obtains
{\displaystyle 3{\frac {10}{71}}<\pi <3{\frac {1}{7}}.}
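Starting from regular hexagons around a circle of diameter 1 (inscribed perimeter 3, circumscribed perimeter 2√3) and applying the doubling recurrence four times reproduces Archimedes' 96-gon bounds. A minimal sketch (floating point here, whereas Archimedes worked with rational bounds on the square roots):

```python
import math

# Circle of diameter 1, so its circumference is exactly pi.
p = 3.0                  # perimeter of the inscribed regular hexagon
P = 2.0 * math.sqrt(3)   # perimeter of the circumscribed regular hexagon

for _ in range(4):       # 6 -> 12 -> 24 -> 48 -> 96 sides
    P = 2 * p * P / (p + P)   # circumscribed perimeter after doubling
    p = math.sqrt(p * P)      # inscribed perimeter after doubling

# p and P now bracket pi between Archimedes' bounds 3 10/71 and 3 1/7.
print(p, P)
```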
It is not known why Archimedes stopped at a 96-sided polygon; it only takes patience to extend the computations. Heron reports in his Metrica (about 60 CE) that Archimedes continued the computation in a now lost book, but then attributes an incorrect value to him.
Archimedes uses no trigonometry in this computation and the difficulty in applying the method lies in obtaining good approximations for the square roots that are involved. Trigonometry, in the form of a table of chord lengths in a circle, was probably used by Claudius Ptolemy of Alexandria to obtain the value of π given in the Almagest (circa 150 CE).
Advances in the approximation of π (when the methods are known) were made by increasing the number of sides of the polygons used in the computation. A trigonometric improvement by Willebrord Snell (1621) obtains better bounds from a pair of bounds obtained from the polygon method. Thus, more accurate results were obtained from polygons with fewer sides. Viète's formula, published by François Viète in 1593, was derived by Viète using a closely related polygonal method, but with areas rather than perimeters of polygons whose numbers of sides are powers of two.
The last major attempt to compute π by this method was carried out by Christoph Grienberger in 1630, who calculated 39 decimal places of π using Snell's refinement.
=== Machin-like formula ===
For fast calculations, one may use formulae such as Machin's:
{\displaystyle {\frac {\pi }{4}}=4\arctan {\frac {1}{5}}-\arctan {\frac {1}{239}}}
together with the Taylor series expansion of the function arctan(x). This formula is most easily verified using polar coordinates of complex numbers, producing:
{\displaystyle (5+i)^{4}\cdot (239-i)=2^{2}\cdot 13^{4}(1+i).}
((x, y) = (239, 13²) is a solution to the Pell equation x² − 2y² = −1.)
Formulae of this kind are known as Machin-like formulae. Machin's particular formula was used well into the computer era for calculating record numbers of digits of π, but more recently other similar formulae have been used as well.
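Machin's identity can be evaluated to any fixed precision with integer arithmetic alone, applying the Taylor series for arctan to the two small arguments. A sketch (the scaled-integer representation and the function names are illustrative choices, not part of Machin's original method):

```python
def arctan_inv(x: int, one: int) -> int:
    """Return arctan(1/x) * one, via arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    power = one // x        # (1/x)^(2n+1) scaled by `one`, starting at n = 0
    total = power
    n = 1
    while power:
        power //= x * x
        term = power // (2 * n + 1)
        total += -term if n % 2 == 1 else term
        n += 1
    return total

def machin_pi(digits: int) -> str:
    one = 10 ** (digits + 10)   # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi)[:1] + "." + str(pi)[1:digits]
```

For example, `machin_pi(20)` returns π truncated to 19 decimal places.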
For instance, Shanks and his team used the following Machin-like formula in 1961 to compute the first 100,000 digits of π:
{\displaystyle {\frac {\pi }{4}}=6\arctan {\frac {1}{8}}+2\arctan {\frac {1}{57}}+\arctan {\frac {1}{239}}}
and they used another Machin-like formula,
{\displaystyle {\frac {\pi }{4}}=12\arctan {\frac {1}{18}}+8\arctan {\frac {1}{57}}-5\arctan {\frac {1}{239}}}
as a check.
The record as of December 2002 by Yasumasa Kanada of Tokyo University stood at 1,241,100,000,000 digits. The following Machin-like formulae were used for this:
{\displaystyle {\frac {\pi }{4}}=12\arctan {\frac {1}{49}}+32\arctan {\frac {1}{57}}-5\arctan {\frac {1}{239}}+12\arctan {\frac {1}{110443}}}
K. Takano (1982).
{\displaystyle {\frac {\pi }{4}}=44\arctan {\frac {1}{57}}+7\arctan {\frac {1}{239}}-12\arctan {\frac {1}{682}}+24\arctan {\frac {1}{12943}}}
F. C. M. Størmer (1896).
=== Other classical formulae ===
Other formulae that have been used to compute estimates of π include:
Liu Hui (see also Viète's formula):
{\displaystyle {\begin{aligned}\pi &\approx 768{\sqrt {2-{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+1}}}}}}}}}}}}}}}}}}\\&\approx 3.141590463236763.\end{aligned}}}
Madhava:
{\displaystyle \pi ={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-3)^{-k}}{2k+1}}={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-{\frac {1}{3}})^{k}}{2k+1}}={\sqrt {12}}\left({1 \over 1\cdot 3^{0}}-{1 \over 3\cdot 3^{1}}+{1 \over 5\cdot 3^{2}}-{1 \over 7\cdot 3^{3}}+\cdots \right)}
Newton / Euler Convergence Transformation:
{\displaystyle {\begin{aligned}\arctan x&={\frac {x}{1+x^{2}}}\sum _{k=0}^{\infty }{\frac {(2k)!!\,x^{2k}}{(2k+1)!!\,(1+x^{2})^{k}}}={\frac {x}{1+x^{2}}}+{\frac {2}{3}}{\frac {x^{3}}{(1+x^{2})^{2}}}+{\frac {2\cdot 4}{3\cdot 5}}{\frac {x^{5}}{(1+x^{2})^{3}}}+\cdots \\[10mu]{\frac {\pi }{2}}&=\sum _{k=0}^{\infty }{\frac {k!}{(2k+1)!!}}=\sum _{k=0}^{\infty }{\cfrac {2^{k}k!^{2}}{(2k+1)!}}=1+{\frac {1}{3}}\left(1+{\frac {2}{5}}\left(1+{\frac {3}{7}}\left(1+\cdots \right)\right)\right)\end{aligned}}}
where m!! is the double factorial, the product of the positive integers up to m with the same parity.
Euler:
{\displaystyle {\pi }=20\arctan {\frac {1}{7}}+8\arctan {\frac {3}{79}}}
(Evaluated using the preceding series for arctan.)
Ramanujan:
{\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}}
David Chudnovsky and Gregory Chudnovsky:
{\displaystyle {\frac {1}{\pi }}=12\sum _{k=0}^{\infty }{\frac {(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}}
Ramanujan's work is the basis for the Chudnovsky algorithm, the fastest algorithms used, as of the turn of the millennium, to calculate π.
=== Modern algorithms ===
Extremely long decimal expansions of π are typically computed with iterative formulae like the Gauss–Legendre algorithm and Borwein's algorithm. The latter, found in 1985 by Jonathan and Peter Borwein, converges extremely quickly:
For
{\displaystyle y_{0}={\sqrt {2}}-1,\ a_{0}=6-4{\sqrt {2}}}
and
{\displaystyle y_{k+1}=(1-f(y_{k}))/(1+f(y_{k}))~,~a_{k+1}=a_{k}(1+y_{k+1})^{4}-2^{2k+3}y_{k+1}(1+y_{k+1}+y_{k+1}^{2})}
where
{\displaystyle f(y)=(1-y^{4})^{1/4}}
, the sequence
{\displaystyle 1/a_{k}}
converges quartically to π, giving about 100 digits in three steps and over a trillion digits after 20 steps. Even though the Chudnovsky series is only linearly convergent, the Chudnovsky algorithm might be faster than the iterative algorithms in practice; that depends on technological factors such as memory sizes and access times. For breaking world records, the iterative algorithms are used less commonly than the Chudnovsky algorithm since they are memory-intensive.
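The iteration can be followed directly with fixed-precision decimal arithmetic, taking the fourth root as two successive square roots. A sketch (the function name and precision handling are illustrative assumptions):

```python
from decimal import Decimal, getcontext

def borwein_quartic(steps: int, digits: int = 100) -> Decimal:
    """Borweins' quartically convergent iteration; 1/a_k tends to pi."""
    getcontext().prec = digits + 20
    two = Decimal(2)
    sqrt2 = two.sqrt()
    y = sqrt2 - 1            # y_0
    a = 6 - 4 * sqrt2        # a_0
    for k in range(steps):
        f = (1 - y ** 4).sqrt().sqrt()   # f(y) = (1 - y^4)^(1/4)
        y = (1 - f) / (1 + f)
        a = a * (1 + y) ** 4 - two ** (2 * k + 3) * y * (1 + y + y * y)
    return 1 / a
```

Three steps already yield around 100 correct digits, consistent with the quartic (digit-quadrupling) convergence described above.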
The first one million digits of π and 1⁄π are available from Project Gutenberg. A former calculation record (December 2002) by Yasumasa Kanada of Tokyo University stood at 1.24 trillion digits, which were computed in September 2002 on a 64-node Hitachi supercomputer with 1 terabyte of main memory, which carried out 2 trillion operations per second, nearly twice as many as the computer used for the previous record (206 billion digits). The following Machin-like formulae were used for this:
{\displaystyle {\frac {\pi }{4}}=12\arctan {\frac {1}{49}}+32\arctan {\frac {1}{57}}-5\arctan {\frac {1}{239}}+12\arctan {\frac {1}{110443}}}
(Kikuo Takano (1982))
{\displaystyle {\frac {\pi }{4}}=44\arctan {\frac {1}{57}}+7\arctan {\frac {1}{239}}-12\arctan {\frac {1}{682}}+24\arctan {\frac {1}{12943}}}
(F. C. M. Størmer (1896)).
These approximations have so many digits that they are no longer of any practical use, except for testing new supercomputers. Properties like the potential normality of π will always depend on the infinite string of digits on the end, not on any finite computation.
=== Miscellaneous approximations ===
As well as the formulae and approximations such as
{\displaystyle {\tfrac {22}{7}}}
and
{\displaystyle {\tfrac {355}{113}}}
discussed elsewhere in this article, the following expressions have been used to estimate π:
Accurate to three digits:
{\displaystyle {\sqrt {2}}+{\sqrt {3}}=3.146^{+}.}
Karl Popper conjectured that Plato knew this expression, that he believed it to be exactly π, and that this is responsible for some of Plato's confidence in the universal power of geometry and for Plato's repeated discussion of special right triangles that are either isosceles or halves of equilateral triangles.
Accurate to four digits:
{\displaystyle 1+e-\gamma =3.1410^{+},}
where {\displaystyle e} is the base of the natural logarithm and {\displaystyle \gamma } is the Euler–Mascheroni constant, and
{\displaystyle {\sqrt[{3}]{31}}=3.1413^{+}.}
Accurate to four digits (or five significant figures):
{\displaystyle {\sqrt {7+{\sqrt {6+{\sqrt {5}}}}}}=3.1416^{+}.}
An approximation by Ramanujan, accurate to 4 digits (or five significant figures):
{\displaystyle {\frac {9}{5}}+{\sqrt {\frac {9}{5}}}=3.1416^{+}.}
Accurate to five digits:
{\displaystyle {\frac {7^{7}}{4^{9}}}=3.14156^{+},}
{\displaystyle {\sqrt[{5}]{306}}=3.14155^{+},}
and (by Kochański)
{\displaystyle {\sqrt {{40 \over 3}-2{\sqrt {3}}\ }}=3.14153^{+}.}
Accurate to six digits:
{\displaystyle \left(2-{\frac {\sqrt {2{\sqrt {2}}-2}}{2^{2}}}\right)^{2}=3.14159\ 6^{+}.}
Accurate to eight digits:
{\displaystyle \left({\frac {\sqrt {58}}{4}}-{\frac {37{\sqrt {2}}}{33}}\right)^{-1}={\frac {66{\sqrt {2}}}{33{\sqrt {29}}-148}}=3.14159\ 263^{+}}
This is the case that cannot be obtained from Ramanujan's approximation (22).
Accurate to nine digits:
{\displaystyle {\sqrt[{4}]{3^{4}+2^{4}+{\frac {1}{2+({\frac {2}{3}})^{2}}}}}={\sqrt[{4}]{\frac {2143}{22}}}=3.14159\ 2652^{+}}
This is from Ramanujan, who claimed the Goddess of Namagiri appeared to him in a dream and told him the true value of π.
Accurate to ten digits (or eleven significant figures):
{\displaystyle {\sqrt[{193}]{\frac {10^{100}}{11222.11122}}}=3.14159\ 26536^{+}}
This approximation follows the observation that the 193rd power of 1/π yields the sequence 1122211125... Replacing 5 by 2 completes the symmetry without reducing the correct digits of π, while inserting a central decimal point remarkably fixes the accompanying magnitude at 10100.
Accurate to 12 decimal places:
{\displaystyle \left({\frac {\sqrt {163}}{6}}-{\frac {181}{\sqrt {10005}}}\right)^{-1}=3.14159\ 26535\ 89^{+}}
This is obtained from the Chudnovsky series (truncate the series (1.4) at the first term and let E₆(τ₁₆₃)²/E₄(τ₁₆₃)³ = 151931373056001/151931373056000 ≈ 1).
Accurate to 16 digits:
{\displaystyle {\frac {2510613731736{\sqrt {2}}}{1130173253125}}=3.14159\ 26535\ 89793\ 9^{+}}
- inverse of sum of first two terms of Ramanujan series.
165707065
52746197
=
3.14159
26535
89793
4
+
{\displaystyle {\frac {165707065}{52746197}}=3.14159\ 26535\ 89793\ 4^{+}}
Accurate to 18 digits:
{\displaystyle \left({\frac {\sqrt {253}}{4}}-{\frac {643{\sqrt {11}}}{903}}-{\frac {223}{172}}\right)^{-1}=3.14159\ 26535\ 89793\ 2387^{+}}
This is the approximation (22) in Ramanujan's paper with n = 253.
Accurate to 19 digits:
{\displaystyle {\frac {3949122332{\sqrt {2}}}{1777729635}}=3.14159\ 26535\ 89793\ 2382^{+}}
- improved inverse of sum of first two terms of Ramanujan series.
Accurate to 24 digits:
{\displaystyle {\frac {2286635172367940241408{\sqrt {2}}}{1029347477390786609545}}=3.14159\ 26535\ 89793\ 23846\ 2649^{+}}
- inverse of sum of first three terms of Ramanujan series.
Accurate to 25 decimal places:
{\displaystyle {\frac {1}{10}}\ln \left({\frac {2^{21}}{({\sqrt[{4}]{5}}-1)^{24}}}+24\right)=3.14159\ 26535\ 89793\ 23846\ 26433\ 9^{+}}
This is derived from Ramanujan's class invariant g₁₀₀ = 2^(5/8)/(5^(1/4) − 1).
Accurate to 30 decimal places:
{\displaystyle {\frac {\ln(640320^{3}+744)}{\sqrt {163}}}=3.14159\ 26535\ 89793\ 23846\ 26433\ 83279^{+}}
Derived from the closeness of the Ramanujan constant to the integer 640320³ + 744. This does not admit obvious generalizations in the integers, because there are only finitely many Heegner numbers and negative discriminants d with class number h(−d) = 1, and d = 163 is the largest one in absolute value.
Accurate to 52 decimal places:
{\displaystyle {\frac {\ln(5280^{3}(236674+30303{\sqrt {61}})^{3}+744)}{\sqrt {427}}}}
Like the one above, a consequence of the j-invariant. Among negative discriminants with class number 2, this d is the largest in absolute value.
Accurate to 52 decimal places:
{\displaystyle {\frac {\ln(2^{-30}((3+{\sqrt {5}})({\sqrt {5}}+{\sqrt {7}})({\sqrt {7}}+{\sqrt {11}})({\sqrt {11}}+3))^{12}-24)}{{\sqrt {5}}{\sqrt {7}}{\sqrt {11}}}}}
This is derived from Ramanujan's class invariant G385.
Accurate to 161 decimal places:
{\displaystyle {\frac {\ln {\big (}(2u)^{6}+24{\big )}}{\sqrt {3502}}}}
where u is a product of four simple quartic units,
{\displaystyle u=(a+{\sqrt {a^{2}-1}})^{2}(b+{\sqrt {b^{2}-1}})^{2}(c+{\sqrt {c^{2}-1}})(d+{\sqrt {d^{2}-1}})}
and,
{\displaystyle {\begin{aligned}a&={\tfrac {1}{2}}(23+4{\sqrt {34}})\\b&={\tfrac {1}{2}}(19{\sqrt {2}}+7{\sqrt {17}})\\c&=(429+304{\sqrt {2}})\\d&={\tfrac {1}{2}}(627+442{\sqrt {2}})\end{aligned}}}
Based on one found by Daniel Shanks. Similar to the previous two, but this time it is a quotient of a modular form, namely the Dedekind eta function, with an argument involving {\displaystyle \tau ={\sqrt {-3502}}}. The discriminant d = 3502 has h(−d) = 16.
Accurate to 256 digits:
{\displaystyle {\frac {15261343909396942111177730086852826352374060766771618308167575028500999}{48590509502030754798379641288876701245663220023884870402810360529259}}...}
{\displaystyle ...{\frac {551152789881364457516133280872003443353677807669620554743{\sqrt {10005}}}{3134188302895457201473978137944378665098227220269702217081111}}}
- improved inverse of sum of the first nineteen terms of Chudnovsky series.
The continued fraction representation of π can be used to generate successive best rational approximations. These approximations are the best possible rational approximations of π relative to the size of their denominators. Here is a list of the first twelve of these:
{\displaystyle {\frac {3}{1}},{\frac {22}{7}},{\frac {333}{106}},{\frac {355}{113}},{\frac {103993}{33102}},{\frac {104348}{33215}},{\frac {208341}{66317}},{\frac {312689}{99532}},{\frac {833719}{265381}},{\frac {1146408}{364913}},{\frac {4272943}{1360120}},{\frac {5419351}{1725033}}}
Of these,
{\displaystyle {\frac {355}{113}}}
is the only fraction in this sequence that gives more exact digits of π (i.e. 7) than the number of digits needed to approximate it (i.e. 6). The accuracy can be improved by using other fractions with larger numerators and denominators, but, for most such fractions, more digits are required in the approximation than correct significant figures achieved in the result.
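The convergents listed above follow from the standard recurrence hₙ = aₙhₙ₋₁ + hₙ₋₂, kₙ = aₙkₙ₋₁ + kₙ₋₂ applied to the continued fraction terms. A sketch that seeds the computation with the first known terms [3; 7, 15, 1, 292, ...] rather than recomputing them:

```python
from fractions import Fraction

# First terms of the simple continued fraction of pi (known values).
terms = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1]

def convergents(cf):
    """Yield successive convergents h_n/k_n via the standard recurrence."""
    h_prev, h = 1, cf[0]
    k_prev, k = 0, 1
    yield Fraction(h, k)
    for a in cf[1:]:
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        yield Fraction(h, k)

# First four convergents: 3, 22/7, 333/106, 355/113.
print(list(convergents(terms))[:4])
```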
=== Summing a circle's area ===
Pi can be obtained from a circle if its radius and area are known using the relationship:
{\displaystyle A=\pi r^{2}.}
If a circle with radius r is drawn with its center at the point (0, 0), any point whose distance from the origin is less than r will fall inside the circle. The Pythagorean theorem gives the distance from any point (x, y) to the center:
{\displaystyle d={\sqrt {x^{2}+y^{2}}}.}
Mathematical "graph paper" is formed by imagining a 1×1 square centered around each cell (x, y), where x and y are integers between −r and r. Squares whose center resides inside or exactly on the border of the circle can then be counted by testing whether, for each cell (x, y),
{\displaystyle {\sqrt {x^{2}+y^{2}}}\leq r.}
The total number of cells satisfying that condition thus approximates the area of the circle, which then can be used to calculate an approximation of π. Closer approximations can be produced by using larger values of r.
Mathematically, this formula can be written:
{\displaystyle \pi =\lim _{r\to \infty }{\frac {1}{r^{2}}}\sum _{x=-r}^{r}\;\sum _{y=-r}^{r}{\begin{cases}1&{\text{if }}{\sqrt {x^{2}+y^{2}}}\leq r\\0&{\text{if }}{\sqrt {x^{2}+y^{2}}}>r.\end{cases}}}
In other words, begin by choosing a value for r. Consider all cells (x, y) in which both x and y are integers between −r and r. Starting at 0, add 1 for each cell whose distance to the origin (0, 0) is less than or equal to r. When finished, divide the sum, representing the area of a circle of radius r, by r2 to find the approximation of π.
For example, if r is 5, then the cells considered are:
The 12 cells (0, ±5), (±5, 0), (±3, ±4), (±4, ±3) are exactly on the circle, and 69 cells are completely inside, so the approximate area is 81, and π is calculated to be approximately 3.24 because 81/52 = 3.24. Results for some values of r are shown in the table below:
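The counting procedure above can be sketched directly (comparing x² + y² with r² avoids the square root):

```python
def lattice_pi(r: int) -> float:
    """Count unit cells whose centers lie inside or on the circle of radius r."""
    inside = 0
    for x in range(-r, r + 1):
        for y in range(-r, r + 1):
            if x * x + y * y <= r * r:   # equivalent to sqrt(x^2 + y^2) <= r
                inside += 1
    return inside / r ** 2

print(lattice_pi(5))   # 81 cells / 25 = 3.24, as in the example above
```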
Similarly, the more complex approximations of π given below involve repeated calculations of some sort, yielding closer and closer approximations with increasing numbers of calculations.
=== Continued fractions ===
Besides its simple continued fraction representation [3; 7, 15, 1, 292, 1, 1, ...], which displays no discernible pattern, π has many generalized continued fraction representations generated by a simple rule, including these two.
{\displaystyle \pi ={3+{\cfrac {1^{2}}{6+{\cfrac {3^{2}}{6+{\cfrac {5^{2}}{6+\ddots \,}}}}}}}}
{\displaystyle \pi ={\cfrac {4}{1+{\cfrac {1^{2}}{3+{\cfrac {2^{2}}{5+{\cfrac {3^{2}}{7+{\cfrac {4^{2}}{9+\ddots }}}}}}}}}}=3+{\cfrac {1^{2}}{5+{\cfrac {4^{2}}{7+{\cfrac {3^{2}}{9+{\cfrac {6^{2}}{11+{\cfrac {5^{2}}{13+\ddots }}}}}}}}}}}
The remainder of the Madhava–Leibniz series can be expressed as generalized continued fraction as follows.
{\displaystyle \pi =4\sum _{n=1}^{m}{\frac {(-1)^{n-1}}{2n-1}}+{\cfrac {2(-1)^{m}}{2m+{\cfrac {1^{2}}{2m+{\cfrac {2^{2}}{2m+{\cfrac {3^{2}}{2m+\ddots }}}}}}}}\qquad (m=1,2,3,\ldots )}
Note that Madhava's correction term is
{\displaystyle {\frac {2}{2m+{\frac {1^{2}}{2m+{\frac {2^{2}}{2m}}}}}}=4{\frac {m^{2}+1}{4m^{3}+5m}}}.
The well-known values 22/7 and 355/113 are respectively the second and fourth continued fraction approximations to π.
=== Trigonometry ===
==== Gregory–Leibniz series ====
The Gregory–Leibniz series
{\displaystyle \pi =4\sum _{n=0}^{\infty }{\cfrac {(-1)^{n}}{2n+1}}=4\left({\frac {1}{1}}-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+-\cdots \right)}
is the power series for arctan(x) specialized to x = 1. It converges too slowly to be of practical interest. However, the power series converges much faster for smaller values of
{\displaystyle x}, which leads to formulae where {\displaystyle \pi } arises as the sum of small angles with rational tangents, known as Machin-like formulae.
==== Arctangent ====
Knowing that 4 arctan 1 = π, the formula can be simplified to get:
{\displaystyle {\begin{aligned}\pi &=2\left(1+{\cfrac {1}{3}}+{\cfrac {1\cdot 2}{3\cdot 5}}+{\cfrac {1\cdot 2\cdot 3}{3\cdot 5\cdot 7}}+{\cfrac {1\cdot 2\cdot 3\cdot 4}{3\cdot 5\cdot 7\cdot 9}}+{\cfrac {1\cdot 2\cdot 3\cdot 4\cdot 5}{3\cdot 5\cdot 7\cdot 9\cdot 11}}+\cdots \right)\\&=2\sum _{n=0}^{\infty }{\cfrac {n!}{(2n+1)!!}}=\sum _{n=0}^{\infty }{\cfrac {2^{n+1}n!^{2}}{(2n+1)!}}=\sum _{n=0}^{\infty }{\cfrac {2^{n+1}}{{\binom {2n}{n}}(2n+1)}}\\&=2+{\frac {2}{3}}+{\frac {4}{15}}+{\frac {4}{35}}+{\frac {16}{315}}+{\frac {16}{693}}+{\frac {32}{3003}}+{\frac {32}{6435}}+{\frac {256}{109395}}+{\frac {256}{230945}}+\cdots \end{aligned}}}
with a convergence such that each additional 10 terms yields at least three more digits.
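A direct summation of the series π = 2 Σ n!/(2n+1)!! illustrates the stated convergence rate: the ratio of successive terms tends to 1/2, so the error roughly halves with each term. A sketch:

```python
from math import factorial

def pi_from_series(terms: int) -> float:
    """pi = 2 * sum_{n>=0} n! / (2n+1)!!, the double factorial of odd numbers."""
    total = 0.0
    for n in range(terms):
        double_fact = 1
        for j in range(1, 2 * n + 2, 2):   # (2n+1)!! = 1*3*5*...*(2n+1)
            double_fact *= j
        total += factorial(n) / double_fact
    return 2 * total
```

Sixty terms suffice to reach the limit of double precision.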
{\displaystyle \pi =2+{\frac {1}{3}}\left(2+{\frac {2}{5}}\left(2+{\frac {3}{7}}\left(2+\cdots \right)\right)\right)}
This series is the basis for a decimal spigot algorithm by Rabinowitz and Wagon.
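A closely related unbounded spigot, due to Gibbons and building on the Rabinowitz–Wagon idea (this is Gibbons' streaming variant, not the original bounded algorithm), emits decimal digits one at a time using only integer arithmetic. A sketch:

```python
def pi_digit_stream():
    """Gibbons' unbounded spigot: yields 3, 1, 4, 1, 5, 9, ... indefinitely."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n                    # the next digit is now certain
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:                          # absorb another term of the series
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

Unlike the bounded version, no digit count has to be fixed in advance.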
Another formula for {\displaystyle \pi } involving the arctangent function is given by
{\displaystyle {\frac {\pi }{2^{k+1}}}=\arctan {\frac {\sqrt {2-a_{k-1}}}{a_{k}}},\qquad \qquad k\geq 2,}
where
{\displaystyle a_{k}={\sqrt {2+a_{k-1}}}}
such that
{\displaystyle a_{1}={\sqrt {2}}}. Approximations can be made by using, for example, the rapidly convergent Euler formula
{\displaystyle \arctan(x)=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}\;{\frac {x^{2n+1}}{(1+x^{2})^{n+1}}}.}
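Euler's series can be summed by carrying the ratio between consecutive terms, which works out to 2(n+1)/(2n+3) · x²/(1+x²); combined with Machin's identity it recovers π quickly. A sketch (function name and term count are illustrative):

```python
def euler_arctan(x: float, terms: int = 60) -> float:
    """arctan(x) via Euler's series sum_n 2^(2n)(n!)^2/(2n+1)! * x^(2n+1)/(1+x^2)^(n+1)."""
    y = x * x / (1 + x * x)
    term = x / (1 + x * x)          # n = 0 term
    total = 0.0
    for n in range(terms):
        total += term
        # ratio of term n+1 to term n: 2(n+1)/(2n+3) * x^2/(1+x^2)
        term *= y * (2 * (n + 1)) / (2 * (n + 1) + 1)
    return total
```

For example, 4·(4·arctan(1/5) − arctan(1/239)) reproduces π to double precision.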
Alternatively, the following simple expansion series of the arctangent function can be used
{\displaystyle \arctan(x)=2\sum _{n=1}^{\infty }{{\frac {1}{2n-1}}{\frac {{{a}_{n}}\left(x\right)}{a_{n}^{2}\left(x\right)+b_{n}^{2}\left(x\right)}}},}
where
{\displaystyle {\begin{aligned}&a_{1}(x)=2/x,\\&b_{1}(x)=1,\\&a_{n}(x)=a_{n-1}(x)\,\left(1-4/x^{2}\right)+4b_{n-1}(x)/x,\\&b_{n}(x)=b_{n-1}(x)\,\left(1-4/x^{2}\right)-4a_{n-1}(x)/x,\end{aligned}}}
to approximate {\displaystyle \pi } with even more rapid convergence. Convergence in this arctangent formula for {\displaystyle \pi } improves as the integer {\displaystyle k} increases.
The constant {\displaystyle \pi } can also be expressed by an infinite sum of arctangent functions as
{\displaystyle {\frac {\pi }{2}}=\sum _{n=0}^{\infty }\arctan {\frac {1}{F_{2n+1}}}=\arctan {\frac {1}{1}}+\arctan {\frac {1}{2}}+\arctan {\frac {1}{5}}+\arctan {\frac {1}{13}}+\cdots }
and
{\displaystyle {\frac {\pi }{4}}=\sum _{k\geq 2}\arctan {\frac {\sqrt {2-a_{k-1}}}{a_{k}}},}
where {\displaystyle F_{n}} is the n-th Fibonacci number. However, these two formulae for {\displaystyle \pi } converge much more slowly because of the set of arctangent functions involved in the computation.
==== Arcsine ====
Observing an equilateral triangle and noting that
{\displaystyle \sin \left({\frac {\pi }{6}}\right)={\frac {1}{2}}}
yields
{\displaystyle {\begin{aligned}\pi &=6\sin ^{-1}\left({\frac {1}{2}}\right)=6\left({\frac {1}{2}}+{\frac {1}{2\cdot 3\cdot 2^{3}}}+{\frac {1\cdot 3}{2\cdot 4\cdot 5\cdot 2^{5}}}+{\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6\cdot 7\cdot 2^{7}}}+\cdots \!\right)\\&={\frac {3}{16^{0}\cdot 1}}+{\frac {6}{16^{1}\cdot 3}}+{\frac {18}{16^{2}\cdot 5}}+{\frac {60}{16^{3}\cdot 7}}+\cdots \!=\sum _{n=0}^{\infty }{\frac {3\cdot {\binom {2n}{n}}}{16^{n}(2n+1)}}\\&=3+{\frac {1}{8}}+{\frac {9}{640}}+{\frac {15}{7168}}+{\frac {35}{98304}}+{\frac {189}{2883584}}+{\frac {693}{54525952}}+{\frac {429}{167772160}}+\cdots \end{aligned}}}
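The binomial form of this sum can be coded directly; a short Python sketch (requires Python 3.8+ for math.comb, and the function name is illustrative):

```python
import math

def pi_arcsin_series(terms):
    # pi = sum_{n>=0} 3 * C(2n, n) / (16^n * (2n + 1))
    return sum(3 * math.comb(2*n, n) / (16**n * (2*n + 1))
               for n in range(terms))
```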
with a convergence such that each additional five terms yields at least three more digits.
== Digit extraction methods ==
The Bailey–Borwein–Plouffe formula (BBP) for calculating π was discovered in 1995 by Simon Plouffe. Using a spigot algorithm, the formula can compute any particular base 16 digit of π—returning the hexadecimal value of the digit—without computing the intervening digits.
{\displaystyle \pi =\sum _{n=0}^{\infty }\left({\frac {4}{8n+1}}-{\frac {2}{8n+4}}-{\frac {1}{8n+5}}-{\frac {1}{8n+6}}\right)\left({\frac {1}{16}}\right)^{n}}
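The digit-extraction property comes from splitting the series at the target position and keeping only fractional parts, with the head terms computed by modular exponentiation. The following Python sketch of this standard spigot technique is illustrative, not Plouffe's original implementation:

```python
def bbp_hex_digit(d):
    """d-th hexadecimal digit of pi after the point (d >= 1).
    Only fractional parts are tracked, so the digit is obtained
    without computing the preceding ones."""
    def frac_series(j):
        # fractional part of sum_k 16^(d-1-k) / (8k + j)
        s = 0.0
        for k in range(d):                      # head: modular exponentiation
            s = (s + pow(16, d - 1 - k, 8*k + j) / (8*k + j)) % 1.0
        k, t = d, 0.0                           # tail: terms already < 1
        while 16.0 ** (d - 1 - k) / (8*k + j) > 1e-17:
            t += 16.0 ** (d - 1 - k) / (8*k + j)
            k += 1
        return (s + t) % 1.0

    x = (4*frac_series(1) - 2*frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return int(x * 16)
```

The hexadecimal expansion of π begins 3.243F6A88…, which gives a direct check on the first few extracted digits.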
In 1996, Plouffe derived an algorithm to extract the nth decimal digit of π (using base 10 math to extract a base 10 digit), which can do so in O(n^3 (log n)^3) time. The algorithm does not require memory for storage of a full n-digit result, so the one-millionth digit of π could in principle be computed using a pocket calculator. (However, it would be quite tedious and impractical to do so.)
{\displaystyle \pi +3=\sum _{n=1}^{\infty }{\frac {n2^{n}n!^{2}}{(2n)!}}}
The calculation speed of Plouffe's formula was improved to O(n2) by Fabrice Bellard, who derived an alternative formula (albeit only in base 2 math) for computing π.
{\displaystyle \pi ={\frac {1}{2^{6}}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2^{10n}}}\left(-{\frac {2^{5}}{4n+1}}-{\frac {1}{4n+3}}+{\frac {2^{8}}{10n+1}}-{\frac {2^{6}}{10n+3}}-{\frac {2^{2}}{10n+5}}-{\frac {2^{2}}{10n+7}}+{\frac {1}{10n+9}}\right)}
== Efficient methods ==
Many other expressions for π were developed and published by Indian mathematician Srinivasa Ramanujan. He worked with mathematician Godfrey Harold Hardy in England for a number of years.
Extremely long decimal expansions of π are typically computed with the Gauss–Legendre algorithm and Borwein's algorithm; the Salamin–Brent algorithm, which was invented in 1976, has also been used.
In 1997, David H. Bailey, Peter Borwein and Simon Plouffe published a paper (Bailey, 1997) on a new formula for π as an infinite series:
{\displaystyle \pi =\sum _{k=0}^{\infty }{\frac {1}{16^{k}}}\left({\frac {4}{8k+1}}-{\frac {2}{8k+4}}-{\frac {1}{8k+5}}-{\frac {1}{8k+6}}\right).}
This formula permits one to fairly readily compute the kth binary or hexadecimal digit of π, without having to compute the preceding k − 1 digits. Bailey's website contains the derivation as well as implementations in various programming languages. The PiHex project computed 64 bits around the quadrillionth bit of π (which turns out to be 0).
Fabrice Bellard further improved on BBP with his formula:
{\displaystyle \pi ={\frac {1}{2^{6}}}\sum _{n=0}^{\infty }{\frac {{(-1)}^{n}}{2^{10n}}}\left(-{\frac {2^{5}}{4n+1}}-{\frac {1}{4n+3}}+{\frac {2^{8}}{10n+1}}-{\frac {2^{6}}{10n+3}}-{\frac {2^{2}}{10n+5}}-{\frac {2^{2}}{10n+7}}+{\frac {1}{10n+9}}\right)}
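Bellard's series gains roughly three decimal digits per term thanks to the 2^-10n factor; a direct floating-point evaluation in Python (the function name is illustrative):

```python
import math

def pi_bellard(terms=10):
    """Sum the first few terms of Bellard's series for pi."""
    s = 0.0
    for n in range(terms):
        s += ((-1)**n / 2.0**(10*n)) * (
            -32.0/(4*n + 1) - 1.0/(4*n + 3)
            + 256.0/(10*n + 1) - 64.0/(10*n + 3)
            - 4.0/(10*n + 5) - 4.0/(10*n + 7) + 1.0/(10*n + 9))
    return s / 64.0  # the 1/2^6 prefactor
```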
Other formulae that have been used to compute estimates of π include:
{\displaystyle {\frac {\pi }{2}}=\sum _{k=0}^{\infty }{\frac {k!}{(2k+1)!!}}=\sum _{k=0}^{\infty }{\frac {2^{k}k!^{2}}{(2k+1)!}}=1+{\frac {1}{3}}\left(1+{\frac {2}{5}}\left(1+{\frac {3}{7}}\left(1+\cdots \right)\right)\right)}
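Newton's series can be summed with a running term, because consecutive terms differ by the simple ratio (k + 1)/(2k + 3). A Python sketch (the function name is illustrative):

```python
import math

def pi_newton(terms=60):
    # pi/2 = sum_{k>=0} k! / (2k+1)!!  (double factorial of odd numbers)
    half_pi, term = 0.0, 1.0          # k = 0 term is 0!/1!! = 1
    for k in range(terms):
        half_pi += term
        term *= (k + 1) / (2*k + 3)   # ratio term_{k+1} / term_k
    return 2.0 * half_pi
```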
Newton.
{\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}}
Srinivasa Ramanujan.
This converges extraordinarily rapidly. Ramanujan's work is the basis for the fastest algorithms used, as of the turn of the millennium, to calculate π.
In 1988, David Chudnovsky and Gregory Chudnovsky found an even faster-converging series (the Chudnovsky algorithm):
{\displaystyle {\frac {1}{\pi }}={\frac {1}{426880{\sqrt {10005}}}}\sum _{k=0}^{\infty }{\frac {(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}(-640320)^{3k}}}}
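Each term of the Chudnovsky series contributes about 14 decimal digits. The following Python sketch sums the series term by term with the decimal module; it is a simple illustration, not the binary-splitting implementation used for records, and the function name is illustrative:

```python
from decimal import Decimal, getcontext

def pi_chudnovsky(digits):
    """Sum the Chudnovsky series to roughly `digits` decimal digits."""
    getcontext().prec = digits + 10          # guard digits
    C = 426880 * Decimal(10005).sqrt()
    K, M, X, L = 6, 1, 1, 13591409           # state for the k = 0 term
    S = Decimal(L)
    for k in range(1, digits // 14 + 2):
        M = M * (K**3 - 16*K) // k**3        # exact integer recurrence
        K += 12
        L += 545140134
        X *= -262537412640768000             # (-640320)^3
        S += Decimal(M * L) / X
    return C / S
```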
The speed of various algorithms for computing π to n correct digits is shown below in descending order of asymptotic complexity. M(n) is the complexity of the multiplication algorithm employed.
== Projects ==
=== Pi Hex ===
Pi Hex was a project to compute three specific binary digits of π using a distributed network of several hundred computers. In 2000, after two years, the project finished computing the five trillionth (5×10^12), the forty trillionth, and the quadrillionth (10^15) bits. All three of them turned out to be 0.
== Software for calculating π ==
Over the years, several programs have been written for calculating π to many digits on personal computers.
=== General purpose ===
Most computer algebra systems can calculate π and other common mathematical constants to any desired precision.
Functions for calculating π are also included in many general libraries for arbitrary-precision arithmetic, for instance Class Library for Numbers, MPFR and SymPy.
=== Special purpose ===
Programs designed for calculating π may have better performance than general-purpose mathematical software. They typically implement checkpointing and efficient disk swapping to facilitate extremely long-running and memory-expensive computations.
TachusPi by Fabrice Bellard is the program Bellard used to compute the world record number of digits of π in 2009.
y-cruncher by Alexander Yee is the program that every world record holder since Shigeru Kondo in 2010 has used to compute world record numbers of digits. y-cruncher can also be used to calculate other constants and holds world records for several of them.
PiFast by Xavier Gourdon was the fastest program for Microsoft Windows in 2003. According to its author, it can compute one million digits in 3.5 seconds on a 2.4 GHz Pentium 4. PiFast can also compute other irrational numbers like e and √2. It can also work at lesser efficiency with very little memory (down to a few tens of megabytes to compute well over a billion (109) digits). This tool is a popular benchmark in the overclocking community. PiFast 4.4 is available from Stu's Pi page. PiFast 4.3 is available from Gourdon's page.
QuickPi by Steve Pagliarulo for Windows is faster than PiFast for runs of under 400 million digits. Version 4.5 is available on Stu's Pi Page below. Like PiFast, QuickPi can also compute other irrational numbers like e, √2, and √3. The software may be obtained from the Pi-Hacks Yahoo! forum, or from Stu's Pi page.
Super PI by Kanada Laboratory at the University of Tokyo is a program for Microsoft Windows for runs from 16,000 to 33,550,000 digits. It can compute one million digits in 40 minutes, two million digits in 90 minutes and four million digits in 220 minutes on a Pentium 90 MHz. Super PI version 1.9 is available from the Super PI 1.9 page.
== See also ==
Diophantine approximation
Milü
Madhava's correction term
Pi is 3
== Notes ==
== References ==
Bailey, David H.; Borwein, Peter B. & Plouffe, Simon (April 1997). "On the Rapid Computation of Various Polylogarithmic Constants" (PDF). Mathematics of Computation. 66 (218): 903–913. Bibcode:1997MaCom..66..903B. doi:10.1090/S0025-5718-97-00856-9.
Beckmann, Petr (1971). A History of π. New York: St. Martin's Press. ISBN 978-0-88029-418-8. MR 0449960.
Eves, Howard (1992). An Introduction to the History of Mathematics (6th ed.). Saunders College Publishing. ISBN 978-0-03-029558-4.
Joseph, George G. (2000). The Crest of the Peacock: Non-European Roots of Mathematics (New ed., London : Penguin ed.). London: Penguin. ISBN 978-0-14-027778-4.
Jackson, K; Stamp, J. (2002). Pyramid: Beyond Imagination. Inside the Great Pyramid of Giza. London: BBC. ISBN 9780563488033.
Berggren, Lennart; Borwein, Jonathan M.; Borwein, Peter B. (2004). Pi: a source book (3rd ed.). New York: Springer Science + Business Media LLC. ISBN 978-1-4757-4217-6.
In mathematics, especially in the field of algebra, a polynomial ring or polynomial algebra is a ring formed from the set of polynomials in one or more indeterminates (traditionally also called variables) with coefficients in another ring, often a field.
Often, the term "polynomial ring" refers implicitly to the special case of a polynomial ring in one indeterminate over a field. The importance of such polynomial rings relies on the many properties that they have in common with the ring of integers.
Polynomial rings occur and are often fundamental in many parts of mathematics such as number theory, commutative algebra, and algebraic geometry. In ring theory, many classes of rings, such as unique factorization domains, regular rings, group rings, rings of formal power series, Ore polynomials, graded rings, have been introduced for generalizing some properties of polynomial rings.
A closely related notion is that of the ring of polynomial functions on a vector space, and, more generally, ring of regular functions on an algebraic variety.
== Definition (univariate case) ==
Let K be a field or (more generally) a commutative ring.
The polynomial ring in X over K, which is denoted K[X], can be defined in several equivalent ways. One of them is to define K[X] as the set of expressions, called polynomials in X, of the form
{\displaystyle p=p_{0}+p_{1}X+p_{2}X^{2}+\cdots +p_{m-1}X^{m-1}+p_{m}X^{m},}
where p0, p1, …, pm, the coefficients of p, are elements of K, pm ≠ 0 if m > 0, and X, X2, …, are symbols, which are considered as "powers" of X, and follow the usual rules of exponentiation: X0 = 1, X1 = X, and
{\displaystyle X^{k}\,X^{l}=X^{k+l}}
for any nonnegative integers k and l. The symbol X is called an indeterminate or variable. (The term "variable" comes from the terminology of polynomial functions. However, here, X has no value (other than itself) and cannot vary, being a constant in the polynomial ring.)
Two polynomials are equal when the corresponding coefficients of each Xk are equal.
One can think of the ring K[X] as arising from K by adding one new element X that is external to K, commutes with all elements of K, and has no other specific properties. This can be used for an equivalent definition of polynomial rings.
The polynomial ring in X over K is equipped with an addition, a multiplication and a scalar multiplication that make it a commutative algebra. These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if
{\displaystyle p=p_{0}+p_{1}X+p_{2}X^{2}+\cdots +p_{m}X^{m},}
and
{\displaystyle q=q_{0}+q_{1}X+q_{2}X^{2}+\cdots +q_{n}X^{n},}
then
{\displaystyle p+q=r_{0}+r_{1}X+r_{2}X^{2}+\cdots +r_{k}X^{k},}
and
{\displaystyle pq=s_{0}+s_{1}X+s_{2}X^{2}+\cdots +s_{l}X^{l},}
where k = max(m, n), l = m + n,
{\displaystyle r_{i}=p_{i}+q_{i}}
and
{\displaystyle s_{i}=p_{0}q_{i}+p_{1}q_{i-1}+\cdots +p_{i}q_{0}.}
In these formulas, the polynomials p and q are extended by adding "dummy terms" with zero coefficients, so that all pi and qi that appear in the formulas are defined. Specifically, if m < n, then pi = 0 for m < i ≤ n.
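These addition and convolution rules translate directly into code on coefficient lists, where index i holds the coefficient of Xi. A Python sketch (function names are illustrative):

```python
def poly_add(p, q):
    """Add two polynomials given as coefficient lists [p0, p1, ...]."""
    k = max(len(p), len(q))
    p = p + [0] * (k - len(p))   # pad with "dummy" zero coefficients
    q = q + [0] * (k - len(q))
    return [pi + qi for pi, qi in zip(p, q)]

def poly_mul(p, q):
    """Multiply: s_i = sum over j of p_j * q_(i-j) (Cauchy product)."""
    s = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            s[i + j] += pi * qj
    return s
```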
The scalar multiplication is the special case of the multiplication where p = p0 is reduced to its constant term (the term that is independent of X); that is
{\displaystyle p_{0}\left(q_{0}+q_{1}X+\dots +q_{n}X^{n}\right)=p_{0}q_{0}+\left(p_{0}q_{1}\right)X+\cdots +\left(p_{0}q_{n}\right)X^{n}}
It is straightforward to verify that these three operations satisfy the axioms of a commutative algebra over K. Therefore, polynomial rings are also called polynomial algebras.
Another equivalent definition is often preferred, although less intuitive, because it is easier to make it completely rigorous, which consists in defining a polynomial as an infinite sequence (p0, p1, p2, …) of elements of K, having the property that only a finite number of the elements are nonzero, or equivalently, a sequence for which there is some m so that pn = 0 for n > m. In this case, p0 and X are considered as alternate notations for
the sequences (p0, 0, 0, …) and (0, 1, 0, 0, …), respectively. A straightforward use of the operation rules shows that the expression
{\displaystyle p_{0}+p_{1}X+p_{2}X^{2}+\cdots +p_{m}X^{m}}
is then an alternate notation for the sequence
(p0, p1, p2, …, pm, 0, 0, …).
=== Terminology ===
Let
{\displaystyle p=p_{0}+p_{1}X+p_{2}X^{2}+\cdots +p_{m-1}X^{m-1}+p_{m}X^{m},}
be a nonzero polynomial with pm ≠ 0.
The constant term of p is p0.
It is zero in the case of the zero polynomial.
The degree of p, written deg(p), is m, the largest k such that the coefficient of Xk is not zero.
The leading coefficient of p is pm.
In the special case of the zero polynomial, all of whose coefficients are zero, the leading coefficient is undefined, and the degree has been variously left undefined, defined to be −1, or defined to be −∞.
A constant polynomial is either the zero polynomial, or a polynomial of degree zero.
A nonzero polynomial is monic if its leading coefficient is 1.
Given two polynomials p and q, if the degree of the zero polynomial is defined to be −∞, one has
{\displaystyle \deg(p+q)\leq \max(\deg(p),\deg(q)),}
and, over a field, or more generally an integral domain,
{\displaystyle \deg(pq)=\deg(p)+\deg(q).}
It follows immediately that, if K is an integral domain, then so is K[X].
It follows also that, if K is an integral domain, a polynomial is a unit (that is, it has a multiplicative inverse) if and only if it is constant and is a unit in K.
Two polynomials are associated if either one is the product of the other by a unit.
Over a field, every nonzero polynomial is associated to a unique monic polynomial.
Given two polynomials, p and q, one says that p divides q, p is a divisor of q, or q is a multiple of p, if there is a polynomial r such that q = pr.
A polynomial is irreducible if it is not the product of two non-constant polynomials, or equivalently, if its divisors are either constant polynomials or have the same degree.
=== Polynomial evaluation ===
Let K be a field or, more generally, a commutative ring, and R a ring containing K. For any polynomial P in K[X] and any element a in R, the substitution of X with a in P defines an element of R, which is denoted P(a). This element is obtained by carrying out in R, after the substitution, the operations indicated by the expression of the polynomial. This computation is called the evaluation of P at a. For example, if we have
{\displaystyle P=X^{2}-1,}
we have
{\displaystyle {\begin{aligned}P(3)&=3^{2}-1=8,\\P(X^{2}+1)&=\left(X^{2}+1\right)^{2}-1=X^{4}+2X^{2}\end{aligned}}}
(in the first example R = K, and in the second one R = K[X]). Substituting X for itself results in
{\displaystyle P=P(X),}
explaining why the sentences "Let P be a polynomial" and "Let P(X) be a polynomial" are equivalent.
The polynomial function defined by a polynomial P is the function from K into K that is defined by
{\displaystyle x\mapsto P(x).}
If K is an infinite field, two different polynomials define different polynomial functions, but this property is false for finite fields. For example, if K is a field with q elements, then the polynomials 0 and Xq − X both define the zero function.
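This finite-field phenomenon is easy to check directly: over GF(q) every element satisfies xq = x (Fermat's little theorem), so Xq − X vanishes at every point without being the zero polynomial. For q = 5 in Python:

```python
q = 5
# Evaluate the polynomial X^q - X at every element of GF(q).
# It vanishes everywhere, although it is not the zero polynomial.
values = [(x**q - x) % q for x in range(q)]
```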
For every a in R, the evaluation at a, that is, the map
{\displaystyle P\mapsto P(a)}
defines an algebra homomorphism from K[X] to R, which is the unique homomorphism from K[X] to R that fixes K, and maps X to a. In other words, K[X] has the following universal property:
For every ring R containing K, and every element a of R, there is a unique algebra homomorphism from K[X] to R that fixes K, and maps X to a.
As for all universal properties, this defines the pair (K[X], X) up to a unique isomorphism, and can therefore be taken as a definition of K[X].
The image of the map P ↦ P(a), that is, the subset of R obtained by substituting a for X in elements of K[X], is denoted K[a]. For example,
{\displaystyle \mathbb {Z} [{\sqrt {2}}]=\{P({\sqrt {2}})\mid P(X)\in \mathbb {Z} [X]\}}
, and the simplification rules for the powers of a square root imply
{\displaystyle \mathbb {Z} [{\sqrt {2}}]=\{a+b{\sqrt {2}}\mid a\in \mathbb {Z} ,b\in \mathbb {Z} \}.}
== Univariate polynomials over a field ==
If K is a field, the polynomial ring K[X] has many properties that are similar to those of the ring of integers ℤ.
Most of these similarities result from the similarity between the long division of integers and the long division of polynomials.
Most of the properties of K[X] that are listed in this section do not remain true if K is not a field, or if one considers polynomials in several indeterminates.
Like for integers, the Euclidean division of polynomials has a property of uniqueness. That is, given two polynomials a and b ≠ 0 in K[X], there is a unique pair (q, r) of polynomials such that a = bq + r, and either r = 0 or deg(r) < deg(b). This makes K[X] a Euclidean domain. However, most other Euclidean domains (except integers) do not have any property of uniqueness for the division nor an easy algorithm (such as long division) for computing the Euclidean division.
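Euclidean division can be carried out by repeatedly cancelling the leading term of the dividend, which is exactly where division by the leading coefficient of b (and hence the field assumption) enters. A Python sketch over the rationals, with polynomials as coefficient lists, constant term first (the function name is illustrative):

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Euclidean division a = b*q + r with deg(r) < deg(b).
    Coefficient lists, constant term first; b must have a
    nonzero leading coefficient."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        coeff = a[-1] / b[-1]          # cancel the leading term of a
        q[shift] = coeff
        for i, bc in enumerate(b):
            a[i + shift] -= coeff * bc
        a.pop()                        # leading coefficient is now zero
    return q, (a if a else [Fraction(0)])
```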
The Euclidean division is the basis of the Euclidean algorithm for polynomials that computes a polynomial greatest common divisor of two polynomials. Here, "greatest" means "having a maximal degree" or, equivalently, being maximal for the preorder defined by the degree. Given a greatest common divisor of two polynomials, the other greatest common divisors are obtained by multiplication by a nonzero constant (that is, all greatest common divisors of a and b are associated). In particular, two polynomials that are not both zero have a unique greatest common divisor that is monic (leading coefficient equal to 1).
The extended Euclidean algorithm allows computing (and proving) Bézout's identity. In the case of K[X], it may be stated as follows. Given two polynomials p and q of respective degrees m and n, if their monic greatest common divisor g has the degree d, then there is a unique pair (a, b) of polynomials such that
{\displaystyle ap+bq=g,}
and
{\displaystyle \deg(a)\leq n-d,\quad \deg(b)<m-d.}
(For making this true in the limiting case where m = d or n = d, one has to define as negative the degree of the zero polynomial. Moreover, the equality deg(a) = n − d can occur only if p and q are associated.) The uniqueness property is rather specific to K[X]. In the case of the integers the same property is true, if degrees are replaced by absolute values, but, for having uniqueness, one must require a > 0.
Euclid's lemma applies to K[X]. That is, if a divides bc, and is coprime with b, then a divides c. Here, coprime means that the monic greatest common divisor is 1. Proof: By hypothesis and Bézout's identity, there are e, p, and q such that ae = bc and 1 = ap + bq. So
{\displaystyle c=c(ap+bq)=cap+aeq=a(cp+eq).}
The unique factorization property results from Euclid's lemma. In the case of integers, this is the fundamental theorem of arithmetic. In the case of K[X], it may be stated as: every non-constant polynomial can be expressed in a unique way as the product of a constant, and one or several irreducible monic polynomials; this decomposition is unique up to the order of the factors. In other terms K[X] is a unique factorization domain. If K is the field of complex numbers, the fundamental theorem of algebra asserts that a univariate polynomial is irreducible if and only if its degree is one. In this case the unique factorization property can be restated as: every non-constant univariate polynomial over the complex numbers can be expressed in a unique way as the product of a constant, and one or several polynomials of the form X − r; this decomposition is unique up to the order of the factors. For each factor, r is a root of the polynomial, and the number of occurrences of a factor is the multiplicity of the corresponding root.
=== Derivation ===
The (formal) derivative of the polynomial
{\displaystyle a_{0}+a_{1}X+a_{2}X^{2}+\cdots +a_{n}X^{n}}
is the polynomial
{\displaystyle a_{1}+2a_{2}X+\cdots +na_{n}X^{n-1}.}
In the case of polynomials with real or complex coefficients, this is the standard derivative. The above formula defines the derivative of a polynomial even if the coefficients belong to a ring on which no notion of limit is defined. The derivative makes the polynomial ring a differential algebra.
The existence of the derivative is one of the main properties of a polynomial ring that is not shared with integers, and makes some computations easier on a polynomial ring than on integers.
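On coefficient lists (constant term first), the formal derivative is a one-line transformation; for instance, in Python (the function name is illustrative):

```python
def poly_derivative(p):
    """Formal derivative of [a0, a1, ..., an]: [a1, 2*a2, ..., n*an].
    The zero polynomial (derivative of a constant) comes out as []."""
    return [k * c for k, c in enumerate(p)][1:]
```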
==== Square-free factorization ====
A polynomial with coefficients in a field or integral domain is square-free if it does not have a multiple root in the algebraically closed field containing its coefficients. In particular, a polynomial of degree n with real or complex coefficients is square-free if it has n distinct complex roots. Equivalently, a polynomial over a field is square-free if and only if the greatest common divisor of the polynomial and its derivative is 1.
A square-free factorization of a polynomial is an expression for that polynomial as a product of powers of pairwise relatively prime square-free factors. Over the real numbers (or any other field of characteristic 0), such a factorization can be computed efficiently by Yun's algorithm. Less efficient algorithms are known for square-free factorization of polynomials over finite fields.
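The gcd criterion yields a direct square-freeness test over the rationals. The sketch below pairs a naive remainder routine with the Euclidean algorithm on coefficient lists (constant term first); it assumes a non-constant input, and the helper names are illustrative:

```python
from fractions import Fraction

def _rem(a, b):
    """Remainder of a modulo b (coefficient lists, constant term first)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b) and any(a):
        coeff, shift = a[-1] / b[-1], len(a) - len(b)
        for i, bc in enumerate(b):
            a[i + shift] -= coeff * bc
        a.pop()
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a if a else [Fraction(0)]

def is_squarefree(p):
    """A non-constant polynomial over a field is square-free
    iff gcd(p, p') is a nonzero constant."""
    a, b = p, [k * c for k, c in enumerate(p)][1:]   # b = formal derivative
    while len(b) > 1 or b[0] != 0:                   # Euclidean algorithm
        a, b = b, _rem(a, b)
    return len(a) == 1                               # gcd has degree 0
```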
==== Lagrange interpolation ====
Given a finite set of ordered pairs (xj, yj) with entries in a field and distinct values xj, among the polynomials f(x) that interpolate these points (so that f(xj) = yj for all j), there is a unique polynomial of smallest degree. This is the Lagrange interpolation polynomial L(x). If there are k ordered pairs, the degree of L(x) is at most k − 1. The polynomial L(x) can be computed explicitly in terms of the input data (xj, yj).
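The explicit formula is a sum of basis polynomials, each vanishing at every node except one; evaluating with exact rationals avoids any rounding. A Python sketch (the function name is illustrative):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial of smallest degree at x."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(points):
        term = Fraction(yj)                  # basis polynomial scaled by yj
        for m, (xm, _) in enumerate(points):
            if m != j:
                term *= Fraction(x - xm, xj - xm)
        total += term
    return total
```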
==== Polynomial decomposition ====
A decomposition of a polynomial is a way of expressing it as a composition of other polynomials of degree larger than 1. A polynomial that cannot be decomposed is indecomposable. Ritt's polynomial decomposition theorem asserts that if
{\displaystyle f=g_{1}\circ g_{2}\circ \cdots \circ g_{m}=h_{1}\circ h_{2}\circ \cdots \circ h_{n}}
are two different decompositions of the polynomial f, then m = n and the degrees of the indecomposables in one decomposition are the same as the degrees of the indecomposables in the other decomposition (though not necessarily in the same order).
=== Factorization ===
Except for factorization, all previous properties of K[X] are effective, since their proofs, as sketched above, are associated with algorithms for testing the property and computing the polynomials whose existence is asserted. Moreover, these algorithms are efficient, as their computational complexity is a quadratic function of the input size.
The situation is completely different for factorization: the proof of the unique factorization does not give any hint for a method for factorizing. Already for the integers, there is no known algorithm running on a classical (non-quantum) computer for factorizing them in polynomial time. This is the basis of the RSA cryptosystem, widely used for secure Internet communications.
In the case of K[X], the factors, and the methods for computing them, depend strongly on K. Over the complex numbers, the irreducible factors (those that cannot be factorized further) are all of degree one, while, over the real numbers, there are irreducible polynomials of degree 2, and, over the rational numbers, there are irreducible polynomials of any degree. For example, the polynomial
{\displaystyle X^{4}-2}
is irreducible over the rational numbers, is factored as
{\displaystyle (X-{\sqrt[{4}]{2}})(X+{\sqrt[{4}]{2}})(X^{2}+{\sqrt {2}})}
over the real numbers, and as
{\displaystyle (X-{\sqrt[{4}]{2}})(X+{\sqrt[{4}]{2}})(X-i{\sqrt[{4}]{2}})(X+i{\sqrt[{4}]{2}})}
over the complex numbers.
The existence of a factorization algorithm depends also on the ground field. In the case of the real or complex numbers, the Abel–Ruffini theorem shows that the roots of some polynomials, and thus the irreducible factors, cannot be computed exactly. Therefore, a factorization algorithm can compute only approximations of the factors. Various algorithms have been designed for computing such approximations; see Root finding of polynomials.
There is an example of a field K such that there exist exact algorithms for the arithmetic operations of K, but there cannot exist any algorithm for deciding whether a polynomial of the form
{\displaystyle X^{p}-a}
is irreducible or is a product of polynomials of lower degree.
On the other hand, over the rational numbers and over finite fields, the situation is better than for integer factorization, as there are factorization algorithms that have a polynomial complexity. They are implemented in most general purpose computer algebra systems.
=== Minimal polynomial ===
If θ is an element of an associative K-algebra L, the polynomial evaluation at θ is the unique algebra homomorphism φ from K[X] into L that maps X to θ and does not affect the elements of K itself (it is the identity map on K). It consists of substituting X with θ in every polynomial. That is,
{\displaystyle \varphi \left(a_{m}X^{m}+a_{m-1}X^{m-1}+\cdots +a_{1}X+a_{0}\right)=a_{m}\theta ^{m}+a_{m-1}\theta ^{m-1}+\cdots +a_{1}\theta +a_{0}.}
The image of this evaluation homomorphism is the subalgebra generated by θ, which is necessarily commutative.
If φ is injective, the subalgebra generated by θ is isomorphic to K[X]. In this case, this subalgebra is often denoted by K[θ]. The notation ambiguity is generally harmless, because of the isomorphism.
If the evaluation homomorphism is not injective, this means that its kernel is a nonzero ideal, consisting of all polynomials that become zero when X is substituted with θ. This ideal consists of all multiples of some monic polynomial, that is called the minimal polynomial of θ. The term minimal is motivated by the fact that its degree is minimal among the degrees of the elements of the ideal.
There are two main cases where minimal polynomials are considered.
In field theory and number theory, an element θ of an extension field L of K is algebraic over K if it is a root of some polynomial with coefficients in K. The minimal polynomial over K of θ is thus the monic polynomial of minimal degree that has θ as a root. Because L is a field, this minimal polynomial is necessarily irreducible over K. For example, the minimal polynomial (over the reals as well as over the rationals) of the complex number i is
{\displaystyle X^{2}+1}. The cyclotomic polynomials are the minimal polynomials of the roots of unity.
In linear algebra, the n×n square matrices over K form an associative K-algebra of finite dimension (as a vector space). Therefore the evaluation homomorphism cannot be injective, and every matrix has a minimal polynomial (not necessarily irreducible). By the Cayley–Hamilton theorem, the evaluation homomorphism maps to zero the characteristic polynomial of a matrix. It follows that the minimal polynomial divides the characteristic polynomial, and therefore that the degree of the minimal polynomial is at most n.
=== Quotient ring ===
In the case of K[X], the quotient ring by an ideal can be built, as in the general case, as a set of equivalence classes. However, as each equivalence class contains exactly one polynomial of minimal degree, another construction is often more convenient.
Given a polynomial p of degree d, the quotient ring of K[X] by the ideal generated by p can be identified with the vector space of the polynomials of degrees less than d, with the "multiplication modulo p" as a multiplication, the multiplication modulo p consisting of the remainder under the division by p of the (usual) product of polynomials. This quotient ring is variously denoted as
{\displaystyle K[X]/pK[X],}
{\displaystyle K[X]/\langle p\rangle ,}
{\displaystyle K[X]/(p),}
or simply
{\displaystyle K[X]/p.}
The ring {\displaystyle K[X]/(p)} is a field if and only if p is an irreducible polynomial. In fact, if p is irreducible, every nonzero polynomial q of lower degree is coprime with p, and Bézout's identity allows computing r and s such that sp + qr = 1; so, r is the multiplicative inverse of q modulo p. Conversely, if p is reducible, then there exist polynomials a, b of degrees lower than deg(p) such that ab = p; so a, b are nonzero zero divisors modulo p, and cannot be invertible.
For example, the standard definition of the field of the complex numbers can be summarized by saying that it is the quotient ring
C = R[X]/(X^2 + 1),
and that the image of X in C is denoted by i. In fact, by the above description, this quotient consists of all polynomials of degree at most one in i, which have the form a + bi, with a and b in R. The remainder of the Euclidean division that is needed for multiplying two elements of the quotient ring is obtained by replacing i^2 by −1 in their product as polynomials (this is exactly the usual definition of the product of complex numbers).
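As an illustrative sketch (not part of the article), reducing products modulo X^2 + 1 reproduces complex multiplication exactly:

```python
# Sketch: multiplication in R[X]/(X^2 + 1) reproduces complex multiplication.
# An element a + bX is stored as the pair (a, b); X plays the role of i.

def mul_mod_x2_plus_1(p, q):
    """(a + bX)(c + dX) reduced modulo X^2 + 1, i.e. with X^2 replaced by -1."""
    a, b = p
    c, d = q
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd*X^2  ->  (ac - bd) + (ad + bc)X
    return (a * c - b * d, a * d + b * c)

# Matches Python's built-in complex product: (1 + 2i)(3 + 4i) = -5 + 10i
z = mul_mod_x2_plus_1((1, 2), (3, 4))
assert complex(*z) == (1 + 2j) * (3 + 4j)
```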
Let θ be an algebraic element in a K-algebra A. By algebraic, one means that θ has a minimal polynomial p. The first ring isomorphism theorem asserts that the substitution homomorphism induces an isomorphism of
K[X]/(p) onto the image K[θ] of the substitution homomorphism. In particular, if A is a simple extension of K generated by θ, this allows identifying A and K[X]/(p).
This identification is widely used in algebraic number theory.
=== Modules ===
The structure theorem for finitely generated modules over a principal ideal domain applies to
K[X], when K is a field. This means that every finitely generated module over K[X] may be decomposed into a direct sum of a free module and finitely many modules of the form
K[X]/⟨P^k⟩, where P is an irreducible polynomial over K and k a positive integer.
== Definition (multivariate case) ==
Given n symbols
X1, …, Xn,
called indeterminates, a monomial (also called power product)
X1^α1 ⋯ Xn^αn
is a formal product of these indeterminates, possibly raised to a nonnegative power. As usual, exponents equal to one and factors with a zero exponent can be omitted. In particular,
X1^0 ⋯ Xn^0 = 1.
The tuple of exponents α = (α1, …, αn) is called the multidegree or exponent vector of the monomial. For a less cumbersome notation, the abbreviation
X^α = X1^α1 ⋯ Xn^αn
is often used. The degree of a monomial Xα, frequently denoted deg α or |α|, is the sum of its exponents:
deg α = α1 + ⋯ + αn.
A polynomial in these indeterminates, with coefficients in a field K, or more generally a ring, is a finite linear combination of monomials
p = ∑α pα X^α
with coefficients in K. The degree of a nonzero polynomial is the maximum of the degrees of its monomials with nonzero coefficients.
The set of polynomials in
X1, …, Xn, denoted K[X1, …, Xn],
is thus a vector space (or a free module, if K is a ring) that has the monomials as a basis.
K[X1, …, Xn]
is naturally equipped (see below) with a multiplication that makes it a ring, and an associative algebra over K, called the polynomial ring in n indeterminates over K (the definite article the reflects that it is uniquely defined up to the name and the order of the indeterminates). If the ring K is commutative,
K[X1, …, Xn]
is also a commutative ring.
=== Operations in K[X1, ..., Xn] ===
Addition and scalar multiplication of polynomials are those of a vector space or free module equipped with a specific basis (here the basis of the monomials). Explicitly, let
p = ∑α∈I pα X^α, q = ∑β∈J qβ X^β,
where I and J are finite sets of exponent vectors.
The scalar multiplication of p and a scalar
c ∈ K is cp = ∑α∈I c pα X^α.
The addition of p and q is
p + q = ∑α∈I∪J (pα + qα) X^α,
where
pα = 0 if α ∉ I, and qβ = 0 if β ∉ J.
Moreover, if one has
pα + qα = 0 for some α ∈ I ∩ J,
the corresponding zero term is removed from the result.
The multiplication is
pq = ∑γ∈I+J (∑α,β∣α+β=γ pα qβ) X^γ, where I + J
is the set of the sums of one exponent vector in I and one other in J (usual sum of vectors). In particular, the product of two monomials is a monomial whose exponent vector is the sum of the exponent vectors of the factors.
The verification of the axioms of an associative algebra is straightforward.
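The formulas above translate directly into code when a polynomial is stored as a finite map from exponent vectors to coefficients. The sketch below (illustrative, not part of the article) implements addition and multiplication exactly as written, including the removal of zero terms:

```python
# Sketch: polynomials in K[X1, ..., Xn] as dicts mapping exponent tuples
# to coefficients; addition and multiplication follow the formulas above.

def poly_add(p, q):
    """Coefficient-wise sum over the union of exponent vectors; zero terms dropped."""
    out = dict(p)
    for alpha, c in q.items():
        out[alpha] = out.get(alpha, 0) + c
        if out[alpha] == 0:
            del out[alpha]
    return out

def poly_mul(p, q):
    """Coefficient of X^gamma is the sum of p_alpha * q_beta over alpha + beta = gamma."""
    out = {}
    for alpha, c in p.items():
        for beta, d in q.items():
            gamma = tuple(a + b for a, b in zip(alpha, beta))
            out[gamma] = out.get(gamma, 0) + c * d
            if out[gamma] == 0:
                del out[gamma]
    return out

# (X1 + X2)^2 = X1^2 + 2*X1*X2 + X2^2
p = {(1, 0): 1, (0, 1): 1}
assert poly_mul(p, p) == {(2, 0): 1, (1, 1): 2, (0, 2): 1}
```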
=== Polynomial expression ===
A polynomial expression is an expression built with scalars (elements of K), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers.
As all these operations are defined in
K[X1, …, Xn], a polynomial expression represents a polynomial, that is, an element of K[X1, …, Xn].
The definition of a polynomial as a linear combination of monomials is a particular polynomial expression, which is often called the canonical form, normal form, or expanded form of the polynomial.
Given a polynomial expression, one can compute the expanded form of the represented polynomial by expanding with the distributive law all the products that have a sum among their factors, and then using commutativity (except for the product of two scalars), and associativity for transforming the terms of the resulting sum into products of a scalar and a monomial; then one gets the canonical form by regrouping the like terms.
The distinction between a polynomial expression and the polynomial that it represents is relatively recent, and mainly motivated by the rise of computer algebra, where, for example, the test whether two polynomial expressions represent the same polynomial may be a nontrivial computation.
=== Categorical characterization ===
If K is a commutative ring, the polynomial ring K[X1, …, Xn] has the following universal property: for every commutative K-algebra A, and every n-tuple (x1, …, xn) of elements of A, there is a unique algebra homomorphism from K[X1, …, Xn] to A that maps each
Xi to the corresponding xi. This homomorphism is the evaluation homomorphism that consists in substituting Xi with xi in every polynomial.
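On the dict representation of a polynomial, the evaluation homomorphism is a one-liner (an illustrative sketch, not part of the article):

```python
# Sketch: the evaluation homomorphism K[X1, ..., Xn] -> A that substitutes
# a value x_i for each indeterminate X_i.  Polynomials are dicts from
# exponent tuples to coefficients.
from math import prod

def evaluate(p, point):
    """Substitute point[i] for X_{i+1} in every monomial and sum the results."""
    return sum(c * prod(x ** e for x, e in zip(point, alpha))
               for alpha, c in p.items())

# p = X1^2 + 3*X1*X2 at (x1, x2) = (2, 5):  4 + 30 = 34
assert evaluate({(2, 0): 1, (1, 1): 3}, (2, 5)) == 34
```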
As it is the case for every universal property, this characterizes the pair
(K[X1, …, Xn], (X1, …, Xn))
up to a unique isomorphism.
This may also be interpreted in terms of adjoint functors. More precisely, let SET and ALG be respectively the categories of sets and commutative K-algebras (here, and in the following, the morphisms are trivially defined). There is a forgetful functor
F : ALG → SET
that maps algebras to their underlying sets. On the other hand, the map
X ↦ K[X]
defines a functor
POL : SET → ALG
in the other direction. (If X is infinite, K[X] is the set of all polynomials in a finite number of elements of X.)
The universal property of the polynomial ring means that F and POL are adjoint functors. That is, there is a bijection
Hom_SET(X, F(A)) ≅ Hom_ALG(K[X], A).
This may be expressed also by saying that polynomial rings are free commutative algebras, since they are free objects in the category of commutative algebras. Similarly, a polynomial ring with integer coefficients is the free commutative ring over its set of variables, since commutative rings and commutative algebras over the integers are the same thing.
== Graded structure ==
Every polynomial ring is a graded ring: one can write the polynomial ring
R = K[X1, …, Xn]
as a direct sum
R = R0 ⊕ R1 ⊕ R2 ⊕ ⋯
where Ri is the subspace consisting of all homogeneous polynomials of degree i (along with the zero polynomial); then for any elements f ∈ Ri and g ∈ Rj, their product fg belongs to Ri+j.
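The grading can be checked concretely (an illustrative sketch, not part of the article) by splitting a polynomial, stored as a dict from exponent tuples to coefficients, into its homogeneous components:

```python
# Sketch: decomposing a polynomial (dict of exponent tuple -> coefficient)
# into its homogeneous components R_i, i.e. grouping terms by total degree.

def homogeneous_components(p):
    """Group the terms of p by total degree: the direct-sum decomposition."""
    comps = {}
    for alpha, c in p.items():
        comps.setdefault(sum(alpha), {})[alpha] = c
    return comps

# p = X1 + X1*X2 + X2^3 splits into components of degrees 1, 2 and 3
p = {(1, 0): 1, (1, 1): 1, (0, 3): 1}
comps = homogeneous_components(p)
assert sorted(comps) == [1, 2, 3]
assert comps[2] == {(1, 1): 1}
```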
== Univariate over a ring vs. multivariate ==
A polynomial in
K[X1, …, Xn] can be considered as a univariate polynomial in the indeterminate Xn over the ring K[X1, …, Xn−1], by regrouping the terms that contain the same power of Xn,
that is, by using the identity
∑(α1,…,αn)∈I cα1,…,αn X1^α1 ⋯ Xn^αn = ∑i (∑(α1,…,αn−1)∣(α1,…,αn−1,i)∈I cα1,…,αn−1,i X1^α1 ⋯ Xn−1^αn−1) Xn^i,
which results from the distributivity and associativity of ring operations.
This means that one has an algebra isomorphism
K[X1, …, Xn] ≅ (K[X1, …, Xn−1])[Xn]
that maps each indeterminate to itself. (This isomorphism is often written as an equality, which is justified by the fact that polynomial rings are defined up to a unique isomorphism.)
In other words, a multivariate polynomial ring can be considered as a univariate polynomial over a smaller polynomial ring. This is commonly used for proving properties of multivariate polynomial rings, by induction on the number of indeterminates.
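The regrouping can be sketched concretely (illustrative code, not part of the article): collecting the terms of a dict-represented polynomial by their power of Xn yields a univariate polynomial whose coefficients are themselves polynomials in the remaining indeterminates.

```python
# Sketch: viewing p in K[X1, ..., Xn] as a univariate polynomial in Xn
# over K[X1, ..., X_{n-1}], by grouping terms on the last exponent.

def regroup_last(p):
    """Map i -> coefficient of Xn^i, itself a dict over the first n-1 exponents."""
    out = {}
    for alpha, c in p.items():
        *rest, i = alpha
        out.setdefault(i, {})[tuple(rest)] = c
    return out

# X1^2*X2 + X1*X2 + X1  ->  (X1)*Xn^0 + (X1^2 + X1)*Xn^1   (here Xn = X2)
p = {(2, 1): 1, (1, 1): 1, (1, 0): 1}
assert regroup_last(p) == {0: {(1,): 1}, 1: {(2,): 1, (1,): 1}}
```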
The main such properties are listed below.
=== Properties that pass from R to R[X] ===
In this section, R is a commutative ring, K is a field, X denotes a single indeterminate, and, as usual,
Z
is the ring of integers. Here is the list of the main ring properties that remain true when passing from R to R[X].
If R is an integral domain then the same holds for R[X] (since the leading coefficient of a product of polynomials is, if not zero, the product of the leading coefficients of the factors).
In particular,
K[X1, …, Xn] and Z[X1, …, Xn]
are integral domains.
If R is a unique factorization domain then the same holds for R[X]. This results from Gauss's lemma and the unique factorization property of
L[X],
where L is the field of fractions of R.
In particular,
K[X1, …, Xn] and Z[X1, …, Xn]
are unique factorization domains.
If R is a Noetherian ring, then the same holds for R[X].
In particular,
K[X1, …, Xn] and Z[X1, …, Xn]
are Noetherian rings; this is Hilbert's basis theorem.
If R is a Noetherian ring, then
dim R[X] = 1 + dim R, where "dim" denotes the Krull dimension.
In particular,
dim K[X1, …, Xn] = n and dim Z[X1, …, Xn] = n + 1.
If R is a regular ring, then the same holds for R[X]; in this case, one has
gl dim R[X] = dim R[X] = 1 + gl dim R = 1 + dim R, where "gl dim" denotes the global dimension.
In particular,
K[X1, …, Xn] and Z[X1, …, Xn] are regular rings, gl dim Z[X1, …, Xn] = n + 1, and gl dim K[X1, …, Xn] = n.
The latter equality is Hilbert's syzygy theorem.
== Several indeterminates over a field ==
Polynomial rings in several variables over a field are fundamental in invariant theory and algebraic geometry. Some of their properties, such as those described above, can be reduced to the case of a single indeterminate, but this is not always the case. In particular, because of the geometric applications, many interesting properties must be invariant under affine or projective transformations of the indeterminates. This often implies that one cannot select one of the indeterminates for a recurrence on the indeterminates.
Bézout's theorem, Hilbert's Nullstellensatz and Jacobian conjecture are among the most famous properties that are specific to multivariate polynomials over a field.
=== Hilbert's Nullstellensatz ===
The Nullstellensatz (German for "zero-locus theorem") is a theorem, first proved by David Hilbert, which extends to the multivariate case some aspects of the fundamental theorem of algebra. It is foundational for algebraic geometry, as establishing a strong link between the algebraic properties of
K[X1, …, Xn]
and the geometric properties of algebraic varieties, which are (roughly speaking) sets of points defined by implicit polynomial equations.
The Nullstellensatz has three main versions, each of which is a corollary of any other. Two of these versions are given below. For the third version, the reader is referred to the main article on the Nullstellensatz.
The first version generalizes the fact that a nonzero univariate polynomial has a complex zero if and only if it is not a constant. The statement is: a set of polynomials S in
K[X1, …, Xn]
has a common zero in an algebraically closed field containing K, if and only if 1 does not belong to the ideal generated by S, that is, if 1 is not a linear combination of elements of S with polynomial coefficients.
The second version generalizes the fact that the irreducible univariate polynomials over the complex numbers are the associates of the polynomials of the form X − α.
The statement is: If K is algebraically closed, then the maximal ideals of
K[X1, …, Xn]
have the form
⟨X1 − α1, …, Xn − αn⟩.
=== Bézout's theorem ===
Bézout's theorem may be viewed as a multivariate generalization of the version of the fundamental theorem of algebra that asserts that a univariate polynomial of degree n has n complex roots, if they are counted with their multiplicities.
In the case of bivariate polynomials, it states that two polynomials of degrees d and e in two variables, which have no common factors of positive degree, have exactly de common zeros in an algebraically closed field containing the coefficients, if the zeros are counted with their multiplicity and include the zeros at infinity.
For stating the general case, and not considering "zero at infinity" as special zeros, it is convenient to work with homogeneous polynomials, and consider zeros in a projective space. In this context, a projective zero of a homogeneous polynomial
P(X0, …, Xn)
is, up to a scaling, a (n + 1)-tuple
(x0, …, xn)
of elements of K that is different from (0, …, 0), and such that
P(x0, …, xn) = 0. Here, "up to a scaling" means that
(x0, …, xn)
and
(λx0, …, λxn)
are considered as the same zero for any nonzero
λ ∈ K.
In other words, a zero is a set of homogeneous coordinates of a point in a projective space of dimension n.
Then, Bézout's theorem states: Given n homogeneous polynomials of degrees
d1, …, dn
in n + 1 indeterminates, which have only a finite number of common projective zeros in an algebraically closed extension of K, the sum of the multiplicities of these zeros is the product
d1 ⋯ dn.
=== Jacobian conjecture ===
The Jacobian conjecture asserts that, over a field of characteristic zero, a polynomial map from K^n to itself whose Jacobian determinant is a nonzero constant has an inverse that is itself a polynomial map. It remains unproved for every n ≥ 2.
== Generalizations ==
Polynomial rings can be generalized in a great many ways, including polynomial rings with generalized exponents, power series rings, noncommutative polynomial rings, skew polynomial rings, and polynomial rigs.
=== Infinitely many variables ===
One slight generalization of polynomial rings is to allow for infinitely many indeterminates. Each monomial still involves only a finite number of indeterminates (so that its degree remains finite), and each polynomial is still a (finite) linear combination of monomials. Thus, any individual polynomial involves only finitely many indeterminates, and any finite computation involving polynomials remains inside some subring of polynomials in finitely many indeterminates. This generalization has the same property as usual polynomial rings of being the free commutative algebra; the only difference is that it is a free object over an infinite set.
One can also consider a strictly larger ring, by defining as a generalized polynomial an infinite (or finite) formal sum of monomials with a bounded degree. This ring is larger than the usual polynomial ring, as it includes infinite sums of variables. However, it is smaller than the ring of power series in infinitely many variables. Such a ring is used for constructing the ring of symmetric functions over an infinite set.
=== Generalized exponents ===
A simple generalization only changes the set from which the exponents on the variable are drawn. The formulas for addition and multiplication make sense as long as one can add exponents: Xi ⋅ Xj = Xi+j. A set for which addition makes sense (is closed and associative) is called a monoid. The set of functions from a monoid N to a ring R which are nonzero at only finitely many places can be given the structure of a ring known as R[N], the monoid ring of N with coefficients in R. The addition is defined component-wise, so that if c = a + b, then cn = an + bn for every n in N. The multiplication is defined as the Cauchy product, so that if c = a ⋅ b, then for each n in N, cn is the sum of all aibj where i, j range over all pairs of elements of N which sum to n.
When N is commutative, it is convenient to denote the function a in R[N] as the formal sum:
∑n∈N an X^n
and then the formulas for addition and multiplication are the familiar:
(∑n∈N an X^n) + (∑n∈N bn X^n) = ∑n∈N (an + bn) X^n
and
(∑n∈N an X^n) ⋅ (∑n∈N bn X^n) = ∑n∈N (∑i+j=n ai bj) X^n
where the latter sum is taken over all i, j in N that sum to n.
Some authors such as (Lang 2002, II,§3) go so far as to take this monoid definition as the starting point, and regular single variable polynomials are the special case where N is the monoid of non-negative integers. Polynomials in several variables simply take N to be the direct product of several copies of the monoid of non-negative integers.
Several interesting examples of rings and groups are formed by taking N to be the additive monoid of non-negative rational numbers, (Osborne 2000, §4.4). See also Puiseux series.
=== Power series ===
Power series generalize the choice of exponent in a different direction by allowing infinitely many nonzero terms. This requires various hypotheses on the monoid N used for the exponents, to ensure that the sums in the Cauchy product are finite sums. Alternatively, a topology can be placed on the ring, and then one restricts to convergent infinite sums. For the standard choice of N, the non-negative integers, there is no trouble, and the ring of formal power series is defined as the set of functions from N to a ring R with addition component-wise, and multiplication given by the Cauchy product. The ring of power series can also be seen as the ring completion of the polynomial ring with respect to the ideal generated by X.
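For the standard choice of N, each sum over i + j = n in the Cauchy product is finite, so it can be computed directly on truncated series (an illustrative sketch, not part of the article):

```python
# Sketch: Cauchy product of formal power series truncated at a fixed order.
# A series is a list of coefficients a[0], a[1], ... up to degree order-1.

def cauchy_product(a, b, order):
    """c[n] = sum of a[i]*b[n-i] over i = 0..n, for each n < order."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(order)]

# (1/(1-x)) * (1/(1-x)) = 1/(1-x)^2 = 1 + 2x + 3x^2 + ...
geom = [1, 1, 1, 1, 1]
assert cauchy_product(geom, geom, 5) == [1, 2, 3, 4, 5]
```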
=== Noncommutative polynomial rings ===
For polynomial rings of more than one variable, the products X⋅Y and Y⋅X are simply defined to be equal. A more general notion of polynomial ring is obtained when the distinction between these two formal products is maintained. Formally, the polynomial ring in n noncommuting variables with coefficients in the ring R is the monoid ring R[N], where the monoid N is the free monoid on n letters, also known as the set of all strings over an alphabet of n symbols, with multiplication given by concatenation. Neither the coefficients nor the variables need commute amongst themselves, but the coefficients and variables commute with each other.
Just as the polynomial ring in n variables with coefficients in the commutative ring R is the free commutative R-algebra of rank n, the noncommutative polynomial ring in n variables with coefficients in the commutative ring R is the free associative, unital R-algebra on n generators, which is noncommutative when n > 1.
=== Differential and skew-polynomial rings ===
Other generalizations of polynomials are differential and skew-polynomial rings.
A differential polynomial ring is a ring of differential operators formed from a ring R and a derivation δ of R into R. This derivation operates on R, and will be denoted X, when viewed as an operator. The elements of R also operate on R by multiplication. The composition of operators is denoted as the usual multiplication. It follows that the relation δ(ab) = aδ(b) + δ(a)b may be rewritten
as X⋅a = a⋅X + δ(a).
This relation may be extended to define a skew multiplication between two polynomials in X with coefficients in R, which makes them a noncommutative ring.
The standard example, called a Weyl algebra, takes R to be a (usual) polynomial ring k[Y ], and δ to be the standard polynomial derivative
∂/∂Y
. Taking a = Y in the above relation, one gets the canonical commutation relation, X⋅Y − Y⋅X = 1. Extending this relation by associativity and distributivity allows explicitly constructing the Weyl algebra. (Lam 2001, §1,ex1.9).
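The canonical commutation relation can be verified on concrete polynomials (an illustrative sketch, not part of the article): with X acting as ∂/∂Y and Y acting by multiplication on coefficient lists, the operator X∘Y − Y∘X is the identity.

```python
# Sketch: the Weyl-algebra relation X.Y - Y.X = 1, checked on k[Y].
# Polynomials in Y are coefficient lists, constant term first; X acts as
# d/dY and Y acts by multiplication by Y.

def d_dY(p):
    """Derivative: the coefficient of Y^k in dp/dY is (k+1)*p[k+1]."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

def mul_Y(p):
    """Multiplication by Y: shift all coefficients up one degree."""
    return [0] + p

def commutator(p):
    """(X.Y - Y.X) applied to p, i.e. d/dY(Y*p) - Y*(dp/dY)."""
    a = d_dY(mul_Y(p))
    b = mul_Y(d_dY(p))
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

# On p = 3 + 2Y + Y^2 the commutator returns p itself:  X.Y - Y.X = 1
p = [3, 2, 1]
assert commutator(p) == p
```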
The skew-polynomial ring is defined similarly for a ring R and a ring endomorphism f of R, by extending the multiplication from the relation X⋅r = f(r)⋅X to produce an associative multiplication that distributes over the standard addition. More generally, given a homomorphism F from the monoid N of the positive integers into the endomorphism ring of R, the formula Xn⋅r = F(n)(r)⋅Xn allows constructing a skew-polynomial ring. (Lam 2001, §1,ex 1.11) Skew polynomial rings are closely related to crossed product algebras.
=== Polynomial rigs ===
The definition of a polynomial ring can be generalised by relaxing the requirement that the algebraic structure R be a field or a ring to the requirement that R only be a semifield or rig; the resulting polynomial structure/extension R[X] is a polynomial rig. For example, the set of all multivariate polynomials with natural number coefficients is a polynomial rig.
== See also ==
Additive polynomial
Laurent polynomial
== Notes ==
== References ==
Hall, F. M. (1969), "Section 3.6", An Introduction to Abstract Algebra, vol. 2, Cambridge University Press, ISBN 0521084849
Herstein, I. N. (1975), "Section 3.9", Topics in Algebra, Wiley, ISBN 0471010901, polynomial ring.
Lam, Tsit-Yuen (2001), A First Course in Noncommutative Rings, Springer-Verlag, ISBN 978-0-387-95325-0
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
Osborne, M. Scott (2000), Basic homological algebra, Graduate Texts in Mathematics, vol. 196, Springer-Verlag, doi:10.1007/978-1-4612-1278-2, ISBN 978-0-387-98934-1, MR 1757274 | Wikipedia/Polynomial_algebra |
A timeline of number theory.
== Before 1000 BCE ==
ca. 20,000 BCE — Nile Valley, Ishango Bone: possibly the earliest reference to prime numbers and Egyptian multiplication, although this is disputed.
== About 300 BCE ==
300 BCE — Euclid proves the number of prime numbers is infinite.
== 1st millennium AD ==
250 — Diophantus writes Arithmetica, one of the earliest treatises on algebra.
500 — Aryabhata solves the general linear diophantine equation.
628 — Brahmagupta gives Brahmagupta's identity and solves the so-called Pell's equation using his composition method.
ca. 650 — Mathematicians in India create the Hindu–Arabic numeral system we use, including the zero, the decimals and negative numbers.
== 1000–1500 ==
895 — Thabit ibn Qurra gives a theorem by which pairs of amicable numbers can be found (i.e., two numbers such that each is the sum of the proper divisors of the other).
975 — The earliest triangle of binomial coefficients (Pascal triangle) occurs in the 10th century in commentaries on the Chandas Shastra.
ca. 1000 — Abu-Mahmud al-Khujandi first states a special case of Fermat's Last Theorem.
1150 — Bhaskara II gives the first general method for solving Pell's equation.
1260 — Al-Farisi gave a new proof of Thābit ibn Qurra's theorem, introducing important new ideas concerning factorization and combinatorial methods. He also gave the pair of amicable numbers 17296 and 18416 which have also been jointly attributed to Fermat as well as Thabit ibn Qurra.
== 17th century ==
1637 — Pierre de Fermat claims to have proven Fermat's Last Theorem in his copy of Diophantus' Arithmetica.
== 18th century ==
1742 — Christian Goldbach conjectures that every even number greater than two can be expressed as the sum of two primes, now known as Goldbach's conjecture.
1770 — Joseph Louis Lagrange proves the four-square theorem, that every positive integer is the sum of four squares of integers. In the same year, Edward Waring conjectures Waring's problem, that for any positive integer k, every positive integer is the sum of a fixed number of kth powers.
1796 — Adrien-Marie Legendre conjectures the prime number theorem.
== 19th century ==
1801 — Disquisitiones Arithmeticae, Carl Friedrich Gauss's number theory treatise, is published in Latin.
1825 — Peter Gustav Lejeune Dirichlet and Adrien-Marie Legendre prove Fermat's Last Theorem for n = 5.
1832 — Lejeune Dirichlet proves Fermat's Last Theorem for n = 14.
1835 — Lejeune Dirichlet proves Dirichlet's theorem about prime numbers in arithmetic progressions.
1859 — Bernhard Riemann formulates the Riemann hypothesis which has strong implications about the distribution of prime numbers.
1896 — Jacques Hadamard and Charles Jean de la Vallée-Poussin independently prove the prime number theorem.
1896 — Hermann Minkowski presents Geometry of numbers.
== 20th century ==
1903 — Edmund Georg Hermann Landau gives a considerably simpler proof of the prime number theorem.
1909 — David Hilbert proves Waring's problem.
1912 — Josip Plemelj publishes simplified proof for the Fermat's Last Theorem for exponent n = 5.
1913 — Srinivasa Aaiyangar Ramanujan sends a long list of complex theorems without proofs to G. H. Hardy.
1914 — Srinivasa Aaiyangar Ramanujan publishes Modular Equations and Approximations to π.
1910s — Srinivasa Aaiyangar Ramanujan develops over 3000 theorems, including properties of highly composite numbers, the partition function and its asymptotics, and mock theta functions. He also makes major breakthroughs and discoveries in the areas of gamma functions, modular forms, divergent series, hypergeometric series and prime number theory.
1919 — Viggo Brun defines Brun's constant B2 for twin primes.
1937 — I. M. Vinogradov proves Vinogradov's theorem that every sufficiently large odd integer is the sum of three primes, a close approach to proving Goldbach's weak conjecture.
1949 — Atle Selberg and Paul Erdős give the first elementary proof of the prime number theorem.
1966 — Chen Jingrun proves Chen's theorem, a close approach to proving the Goldbach conjecture.
1967 — Robert Langlands formulates the influential Langlands program of conjectures relating number theory and representation theory.
1983 — Gerd Faltings proves the Mordell conjecture and thereby shows that there are only finitely many whole number solutions for each exponent of Fermat's Last Theorem.
1994 — Andrew Wiles proves part of the Taniyama–Shimura conjecture and thereby proves Fermat's Last Theorem.
1999 — the full Taniyama–Shimura conjecture is proved.
== 21st century ==
2002 — Manindra Agrawal, Nitin Saxena, and Neeraj Kayal of IIT Kanpur present an unconditional deterministic polynomial time algorithm to determine whether a given number is prime.
2002 — Preda Mihăilescu proves Catalan's conjecture.
2004 — Ben Green and Terence Tao prove the Green–Tao theorem, which states that the sequence of prime numbers contains arbitrarily long arithmetic progressions.
== References == | Wikipedia/Timeline_of_number_theory |
The following timeline of algorithms outlines the development of algorithms (mainly "mathematical recipes") since their inception.
== Antiquity ==
Before – writing about "recipes" (on cooking, rituals, agriculture and other themes)
c. 1700–2000 BC – Egyptians develop earliest known algorithms for multiplying two numbers
c. 1600 BC – Babylonians develop earliest known algorithms for factorization and finding square roots
c. 300 BC – Euclid's algorithm
c. 200 BC – the Sieve of Eratosthenes
263 AD – Gaussian elimination described by Liu Hui
== Medieval Period ==
628 – Chakravala method described by Brahmagupta
c. 820 – Al-Khawarizmi described algorithms for solving linear equations and quadratic equations in his Algebra; the word algorithm comes from his name
825 – Al-Khawarizmi described the algorism, algorithms for using the Hindu–Arabic numeral system, in his treatise On the Calculation with Hindu Numerals, which was translated into Latin as Algoritmi de numero Indorum, where "Algoritmi", the translator's rendition of the author's name gave rise to the word algorithm (Latin algorithmus) with a meaning "calculation method"
c. 850 – cryptanalysis and frequency analysis algorithms developed by Al-Kindi (Alkindus) in A Manuscript on Deciphering Cryptographic Messages, which contains algorithms on breaking encryptions and ciphers
c. 1025 – Ibn al-Haytham (Alhazen), was the first mathematician to derive the formula for the sum of the fourth powers, and in turn, he develops an algorithm for determining the general formula for the sum of any integral powers
c. 1400 – Ahmad al-Qalqashandi gives a list of ciphers in his Subh al-a'sha which include both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter; he also gives an exposition on and worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which can not occur together in one word
== Before 1940 ==
1540 – Lodovico Ferrari discovered a method to find the roots of a quartic polynomial
1545 – Gerolamo Cardano published Cardano's method for finding the roots of a cubic polynomial
1614 – John Napier develops method for performing calculations using logarithms
1671 – Newton–Raphson method developed by Isaac Newton
1690 – Newton–Raphson method independently developed by Joseph Raphson
1706 – John Machin develops a quickly converging inverse-tangent series for π and computes π to 100 decimal places
1768 – Leonhard Euler publishes his method for numerical integration of ordinary differential equations in problem 85 of Institutiones calculi integralis
1789 – Jurij Vega improves Machin's formula and computes π to 140 decimal places,
1805 – FFT-like algorithm known by Carl Friedrich Gauss
1842 – Ada Lovelace writes the first algorithm for a computing engine
1903 – A fast Fourier transform algorithm presented by Carle David Tolmé Runge
1918 - Soundex
1926 – Borůvka's algorithm
1926 – Primary decomposition algorithm presented by Grete Hermann
1927 – Hartree–Fock method developed for simulating a quantum many-body system in a stationary state.
1934 – Delaunay triangulation developed by Boris Delaunay
1936 – Turing machine, an abstract machine developed by Alan Turing; this and other contemporaneous work established the modern notion of the algorithm.
== 1940s ==
1942 – A fast Fourier transform algorithm developed by G.C. Danielson and Cornelius Lanczos
1945 – Merge sort developed by John von Neumann
1947 – Simplex algorithm developed by George Dantzig
== 1950s ==
1950 – Hamming codes developed by Richard Hamming
1952 – Huffman coding developed by David A. Huffman
1953 – Metropolis algorithm, the sampling method later underlying simulated annealing, introduced by Nicholas Metropolis
1954 – Radix sort computer algorithm developed by Harold H. Seward
1958 – Box–Muller transform for fast generation of normally distributed numbers published by George Edward Pelham Box and Mervin Edgar Muller. Independently pre-discovered by Raymond E. A. C. Paley and Norbert Wiener in 1934.
1956 – Kruskal's algorithm developed by Joseph Kruskal
1956 – Ford–Fulkerson algorithm developed and published by L. R. Ford Jr. and D. R. Fulkerson
1957 – Prim's algorithm developed by Robert Prim
1957 – Bellman–Ford algorithm developed by Richard E. Bellman and L. R. Ford, Jr.
1959 – Dijkstra's algorithm developed by Edsger Dijkstra
1959 – Shell sort developed by Donald L. Shell
1959 – De Casteljau's algorithm developed by Paul de Casteljau
1959 – QR factorization algorithm developed independently by John G.F. Francis and Vera Kublanovskaya
1959 – Rabin–Scott powerset construction for converting NFA into DFA published by Michael O. Rabin and Dana Scott
== 1960s ==
1960 – Karatsuba multiplication
1961 – CRC (Cyclic redundancy check) invented by W. Wesley Peterson
1962 – AVL trees
1962 – Quicksort developed by C. A. R. Hoare
1962 – Bresenham's line algorithm developed by Jack E. Bresenham
1962 – Gale–Shapley 'stable-marriage' algorithm developed by David Gale and Lloyd Shapley
1964 – Heapsort developed by J. W. J. Williams
1964 – multigrid methods first proposed by R. P. Fedorenko
1965 – Cooley–Tukey algorithm rediscovered by James Cooley and John Tukey
1965 – Levenshtein distance developed by Vladimir Levenshtein
1965 – Cocke–Younger–Kasami (CYK) algorithm independently developed by Tadao Kasami
1965 – Buchberger's algorithm for computing Gröbner bases developed by Bruno Buchberger
1965 – LR parsers invented by Donald Knuth
1966 – Dantzig algorithm for shortest path in a graph with negative edges
1967 – Viterbi algorithm proposed by Andrew Viterbi
1967 – Cocke–Younger–Kasami (CYK) algorithm independently developed by Daniel H. Younger
1968 – A* graph search algorithm described by Peter Hart, Nils Nilsson, and Bertram Raphael
1968 – Risch algorithm for indefinite integration developed by Robert Henry Risch
1969 – Strassen algorithm for matrix multiplication developed by Volker Strassen
== 1970s ==
1970 – Dinic's algorithm for computing maximum flow in a flow network by Yefim (Chaim) A. Dinitz
1970 – Knuth–Bendix completion algorithm developed by Donald Knuth and Peter B. Bendix
1970 – BFGS method of the quasi-Newton class
1970 – Needleman–Wunsch algorithm published by Saul B. Needleman and Christian D. Wunsch
1972 – Edmonds–Karp algorithm published by Jack Edmonds and Richard Karp, essentially identical to Dinic's algorithm from 1970
1972 – Graham scan developed by Ronald Graham
1972 – Red–black trees and B-trees discovered
1973 – RSA encryption algorithm discovered by Clifford Cocks
1973 – Jarvis march algorithm developed by R. A. Jarvis
1973 – Hopcroft–Karp algorithm developed by John Hopcroft and Richard Karp
1974 – Pollard's p − 1 algorithm developed by John Pollard
1974 – Quadtree developed by Raphael Finkel and J.L. Bentley
1975 – Genetic algorithms popularized by John Holland
1975 – Pollard's rho algorithm developed by John Pollard
1975 – Aho–Corasick string matching algorithm developed by Alfred V. Aho and Margaret J. Corasick
1975 – Cylindrical algebraic decomposition developed by George E. Collins
1976 – Salamin–Brent algorithm independently discovered by Eugene Salamin and Richard Brent
1976 – Knuth–Morris–Pratt algorithm developed by Donald Knuth and Vaughan Pratt and independently by J. H. Morris
1977 – Boyer–Moore string-search algorithm for searching the occurrence of a string into another string.
1977 – RSA encryption algorithm rediscovered by Ron Rivest, Adi Shamir, and Len Adleman
1977 – LZ77 algorithm developed by Abraham Lempel and Jacob Ziv
1977 – multigrid methods developed independently by Achi Brandt and Wolfgang Hackbusch
1978 – LZ78 algorithm developed from LZ77 by Abraham Lempel and Jacob Ziv
1978 – Bruun's algorithm proposed for powers of two by Georg Bruun
1979 – Khachiyan's ellipsoid method developed by Leonid Khachiyan
1979 – ID3 decision tree algorithm developed by Ross Quinlan
== 1980s ==
1980 – Brent's cycle detection algorithm developed by Richard P. Brent
1981 – Quadratic sieve developed by Carl Pomerance
1981 – Smith–Waterman algorithm developed by Temple F. Smith and Michael S. Waterman
1983 – Simulated annealing developed by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi
1983 – Classification and regression tree (CART) algorithm developed by Leo Breiman, et al.
1984 – LZW algorithm developed from LZ78 by Terry Welch
1984 – Karmarkar's interior-point algorithm developed by Narendra Karmarkar
1984 – ACORN PRNG discovered by Roy Wikramaratna and used privately
1985 – Simulated annealing independently developed by V. Cerny
1985 – Car–Parrinello molecular dynamics developed by Roberto Car and Michele Parrinello
1985 – Splay trees discovered by Sleator and Tarjan
1986 – Blum Blum Shub proposed by L. Blum, M. Blum, and M. Shub
1986 – Push relabel maximum flow algorithm by Andrew Goldberg and Robert Tarjan
1986 – Barnes–Hut tree method developed by Josh Barnes and Piet Hut for fast approximate simulation of n-body problems
1987 – Fast multipole method developed by Leslie Greengard and Vladimir Rokhlin
1988 – Special number field sieve developed by John Pollard
1989 – ACORN PRNG published by Roy Wikramaratna
1989 – Paxos protocol developed by Leslie Lamport
1989 – Skip list discovered by William Pugh
== 1990s ==
1990 – General number field sieve developed from SNFS by Carl Pomerance, Joe Buhler, Hendrik Lenstra, and Leonard Adleman
1990 – Coppersmith–Winograd algorithm developed by Don Coppersmith and Shmuel Winograd
1990 – BLAST algorithm developed by Stephen Altschul, Warren Gish, Webb Miller, Eugene Myers, and David J. Lipman from National Institutes of Health
1991 – Wait-free synchronization developed by Maurice Herlihy
1992 – Deutsch–Jozsa algorithm proposed by D. Deutsch and Richard Jozsa
1992 – C4.5 algorithm, a descendant of ID3 decision tree algorithm, was developed by Ross Quinlan
1993 – Apriori algorithm developed by Rakesh Agrawal and Ramakrishnan Srikant
1993 – Karger's algorithm to compute the minimum cut of a connected graph by David Karger
1994 – Shor's algorithm developed by Peter Shor
1994 – Burrows–Wheeler transform developed by Michael Burrows and David Wheeler
1994 – Bootstrap aggregating (bagging) developed by Leo Breiman
1995 – AdaBoost algorithm, the first practical boosting algorithm, was introduced by Yoav Freund and Robert Schapire
1995 – soft-margin support vector machine algorithm published by Vladimir Vapnik and Corinna Cortes. It adds a soft-margin idea to the 1992 algorithm by Boser, Guyon, and Vapnik, and is the algorithm that people usually mean when saying SVM
1995 – Ukkonen's algorithm for construction of suffix trees
1996 – Bruun's algorithm generalized to arbitrary even composite sizes by H. Murakami
1996 – Grover's algorithm developed by Lov K. Grover
1996 – RIPEMD-160 developed by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel
1997 – Mersenne Twister, a pseudorandom number generator, developed by Makoto Matsumoto and Takuji Nishimura
1998 – PageRank algorithm was published by Larry Page
1998 – rsync algorithm developed by Andrew Tridgell
1999 – gradient boosting algorithm developed by Jerome H. Friedman
1999 – Yarrow algorithm designed by Bruce Schneier, John Kelsey, and Niels Ferguson
== 2000s ==
2000 – Hyperlink-induced topic search (HITS), a hyperlink analysis algorithm, developed by Jon Kleinberg
2001 – Lempel–Ziv–Markov chain algorithm for compression developed by Igor Pavlov
2001 – Viola–Jones algorithm for real-time face detection was developed by Paul Viola and Michael Jones.
2001 – DHT (Distributed hash table) is invented by multiple people from academia and application systems
2001 – BitTorrent, a first fully decentralized peer-to-peer file distribution system, is published
2001 – LOBPCG Locally Optimal Block Preconditioned Conjugate Gradient method finding extreme eigenvalues of symmetric eigenvalue problems by Andrew Knyazev
2002 – AKS primality test developed by Manindra Agrawal, Neeraj Kayal and Nitin Saxena
2002 – Girvan–Newman algorithm to detect communities in complex systems
2002 – Packrat parser, which parses PEGs (parsing expression grammars) in linear time, developed by Bryan Ford
2009 – Bitcoin, a first trust-less decentralized cryptocurrency system, is published
== 2010s ==
2013 – Raft consensus protocol published by Diego Ongaro and John Ousterhout
2015 – YOLO (“You Only Look Once”) is an effective real-time object recognition algorithm, first described by Joseph Redmon et al.
== References == | Wikipedia/Timeline_of_algorithms |
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables.
For example,
{\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}}
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple
{\displaystyle (x,y,z)=(1,-2,-2),}
since it makes all three equations valid.
Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry.
== Elementary examples ==
=== Trivial example ===
The system of one equation in one unknown
{\displaystyle 2x=4}
has the solution
{\displaystyle x=2.}
However, most interesting linear systems have at least two equations.
=== Simple nontrivial example ===
The simplest kind of nontrivial linear system involves two equations and two variables:
{\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}}
One method for solving such a system is as follows. First, solve the top equation for x in terms of y:
{\displaystyle x=3-{\frac {3}{2}}y.}
Now substitute this expression for x into the bottom equation:
{\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.}
This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
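The substitution steps above can be mirrored in code. The following is a minimal sketch using exact rational arithmetic; the intermediate expressions follow the worked example, and the variable names are illustrative:

```python
from fractions import Fraction

# System: 2x + 3y = 6 and 4x + 9y = 15.
# Solve the first equation for x:  x = 3 - (3/2) y.
# Substitute into the second:  4*(3 - (3/2) y) + 9y = 15, then collect y terms.
y = Fraction(15 - 4 * 3, 9 - 4 * Fraction(3, 2))  # (15 - 12) / (9 - 6) = 1
x = 3 - Fraction(3, 2) * y                        # back-substitute: 3/2

assert 2 * x + 3 * y == 6   # both original equations are satisfied
assert 4 * x + 9 * y == 15
```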
== General form ==
A general system of m linear equations with n unknowns and coefficients can be written as
{\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}}
where {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms.
Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
=== Vector equation ===
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination.
{\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}}
This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side (RHS), and otherwise not guaranteed.
=== Matrix equation ===
The vector equation is equivalent to a matrix equation of the form
{\displaystyle A\mathbf {x} =\mathbf {b} }
where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries.
{\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.}
The number of vectors in a basis for the span is now expressed as the rank of the matrix.
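In code, the matrix form amounts to a matrix–vector product. The following is a small pure-Python sketch checking that the solution quoted in the introduction satisfies A x = b; the helper name `matvec` is illustrative:

```python
from fractions import Fraction

def matvec(A, x):
    """Multiply an m-by-n matrix (a list of rows) by an n-vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Coefficient matrix and right-hand side of the introductory system.
A = [[3, 2, -1],
     [2, -2, 4],
     [-1, Fraction(1, 2), -1]]
b = [1, -2, 0]

x = [1, -2, -2]           # the solution triple given in the text
assert matvec(A, x) == b  # A x = b holds componentwise
```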
== Solution set ==
A solution of a linear system is an assignment of values to the variables {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set.
A linear system may behave in any one of three possible ways:
The system has infinitely many solutions.
The system has a unique solution.
The system has no solution.
=== Geometric interpretation ===
For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set.
For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points.
For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n.
=== General behavior ===
In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations.
In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system.
In general, a system with the same number of equations and unknowns has a single unique solution.
In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system.
In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations.
The following pictures illustrate this trichotomy in the case of two variables:
The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point.
It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point).
A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns.
== Properties ==
=== Independence ===
The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.
For example, the equations
{\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12}
are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.
For a more complicated example, the equations
{\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}}
are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.
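The dependence can be checked directly: the third equation is the coefficient-wise sum of the first two. A minimal sketch (the representation of each equation as an [a, b, rhs] list is illustrative):

```python
# Each equation ax + by = c is stored as the list [a, b, c].
eq1 = [1, -2, -1]   # x - 2y = -1
eq2 = [3, 5, 8]     # 3x + 5y = 8
eq3 = [4, 3, 7]     # 4x + 3y = 7

# Adding eq1 and eq2 term by term reproduces eq3 exactly,
# so eq3 carries no new information about the variables.
assert [p + q for p, q in zip(eq1, eq2)] == eq3
```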
=== Consistency ===
A linear system is inconsistent if it has no solution; otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, which may always be rewritten as the statement 0 = 1.
For example, the equations
{\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12}
are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines.
It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations
{\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}}
are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations.
In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.
Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1.
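The Rouché–Capelli test can be sketched in code by comparing the rank of the coefficient matrix with the rank of the augmented matrix. This is a minimal illustration using exact rational arithmetic; the helper name `rank` is illustrative:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix, computed by forward elimination over the rationals."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        # Find a pivot row at or below row r in this column.
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# The inconsistent system x + y = 1, 2x + y = 1, 3x + 2y = 3 from the text.
A = [[1, 1], [2, 1], [3, 2]]
b = [1, 1, 3]
augmented = [row + [rhs] for row, rhs in zip(A, b)]

# Augmented rank exceeds coefficient rank, so there is no solution.
assert rank(A) == 2 and rank(augmented) == 3
```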
=== Equivalence ===
Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set.
== Solving a linear system ==
There are several algorithms for solving a system of linear equations.
=== Describing the solution ===
When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example
{\displaystyle (x=3,\;y=-2,\;z=6)}
. When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like
{\displaystyle (3,\,-2,\,6)}
for the previous example.
To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.
For example, consider the following system:
{\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}}
The solution set to this system can be described by the following equations:
{\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}}
Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y.
Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. A solution set with two or more free variables describes a plane or a higher-dimensional set.
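The parametrization by the free variable z can be checked by sampling: every value of z yields a point satisfying both original equations. A minimal sketch (the function name `solution` is illustrative):

```python
from fractions import Fraction

# Solutions of  x + 3y - 2z = 5  and  3x + 5y + 6z = 7,
# with z free:  x = -7z - 1,  y = 3z + 2.
def solution(z):
    z = Fraction(z)
    return (-7 * z - 1, 3 * z + 2, z)

for t in range(-3, 4):       # any sample values of the free variable
    x, y, z = solution(t)
    assert x + 3 * y - 2 * z == 5
    assert 3 * x + 5 * y + 6 * z == 7
```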
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:
{\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}}
Here x is the free variable, and y and z are dependent.
=== Elimination of variables ===
The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:
In the first equation, solve for one of the variables in terms of the others.
Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown.
Repeat steps 1 and 2 until the system is reduced to a single linear equation.
Solve this equation, and then back-substitute until the entire solution is found.
For example, consider the following system:
{\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}}
Solving the first equation for x gives
{\displaystyle x=5+2z-3y}
, and plugging this into the second and third equation yields
{\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}}
Since the left-hand sides of both of these equations equal y, we can equate their right-hand sides:
{\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}}
Substituting z = 2 into the second or third equation gives y = 8, and the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple
{\displaystyle (x,y,z)=(-15,8,2)}.
=== Row reduction ===
In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix
{\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}}
This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:
Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.
Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above:
{\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}}
The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
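The row reduction above can be sketched as a short Gauss–Jordan elimination over exact rationals. This is a minimal illustration, not an optimized solver; the function name is illustrative:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Return the reduced row echelon form of an augmented matrix."""
    M = [[Fraction(v) for v in row] for row in aug]
    r = 0
    for col in range(len(M[0]) - 1):        # last column is the RHS
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]          # type 1: swap rows
        M[r] = [v / M[r][col] for v in M[r]]  # type 2: scale pivot row to 1
        for i in range(len(M)):
            if i != r and M[i][col] != 0:     # type 3: clear the rest of the column
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

aug = [[1, 3, -2, 5],
       [3, 5, 6, 7],
       [2, 4, 3, 8]]
rref = gauss_jordan(aug)
assert [row[-1] for row in rref] == [-15, 8, 2]   # x, y, z from the text
```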
=== Cramer's rule ===
Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system
{\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}}
is given by
{\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.}
For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.)
Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.
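Cramer's rule for the 3×3 system above can be sketched directly: each variable is the ratio of two determinants, with one column of the coefficient matrix replaced by the constants. The helper names below are illustrative, and cofactor expansion is used only because the matrices are tiny:

```python
from fractions import Fraction

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(M, j, col):
    """Copy of M with column j replaced by the vector col."""
    return [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(M)]

A = [[1, 3, -2],
     [3, 5, 6],
     [2, 4, 3]]
b = [5, 7, 8]

d = det3(A)  # determinant of the coefficient matrix (here -4, nonzero)
solution = [Fraction(det3(replace_col(A, j, b)), d) for j in range(3)]
assert solution == [-15, 8, 2]   # matches the row-reduction result
```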
=== Matrix solution ===
If the equation system is expressed in the matrix form
{\displaystyle A\mathbf {x} =\mathbf {b} }
, the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by
{\displaystyle \mathbf {x} =A^{-1}\mathbf {b} }
where
{\displaystyle A^{-1}}
is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted
{\displaystyle A^{+}}
, as follows:
{\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} }
where
{\displaystyle \mathbf {w} }
is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using
{\displaystyle \mathbf {w} =\mathbf {0} }
satisfy
{\displaystyle A\mathbf {x} =\mathbf {b} }
— that is, that
{\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .}
If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank,
{\displaystyle A^{+}}
simply equals
{\displaystyle A^{-1}}
and the general solution equation simplifies to
{\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} }
as previously stated, where
{\displaystyle \mathbf {w} }
has completely dropped out of the solution, leaving only a single solution. In other cases, though,
{\displaystyle \mathbf {w} }
remains and hence an infinitude of potential values of the free parameter vector
{\displaystyle \mathbf {w} }
give an infinitude of solutions of the equation.
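A minimal numerical sketch of the general formula, using the underdetermined system [1 1]x = [2]: since this A has full row rank, its pseudoinverse has the closed form Aᵀ(AAᵀ)⁻¹, which works out to the column (0.5, 0.5). The function name `solution` is illustrative only.

```python
# Sketch of the general solution x = A+ b + (I - A+ A) w for the
# underdetermined system [1 1] [x1 x2]^T = [2]. Here A A^T is the
# scalar 2, so A+ = A^T (A A^T)^{-1} = [[0.5], [0.5]].

def solution(w1, w2):
    # A+ b = (1, 1);  I - A+ A = [[0.5, -0.5], [-0.5, 0.5]]
    x1 = 1.0 + 0.5 * w1 - 0.5 * w2
    x2 = 1.0 - 0.5 * w1 + 0.5 * w2
    return x1, x2

# Every choice of the free parameter vector w solves the original equation:
for w in [(0, 0), (1, 0), (-3, 7)]:
    x1, x2 = solution(*w)
    assert x1 + x2 == 2.0  # A x = b

print(solution(0, 0))  # (1.0, 1.0), the w = 0 (minimum-norm) solution
```

Because A here is not square, w does not drop out, and varying it sweeps out the whole (infinite) solution set.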
=== Other methods ===
While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b.
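The elimination-with-pivoting strategy sketched above can be written compactly in Python; this is an illustrative toy solver (the name `solve` is arbitrary), not the LU-based routine a numerical library would actually use.

```python
# Gaussian elimination with partial pivoting: at each step the row with
# the largest pivot (in absolute value) is swapped up, to avoid dividing
# by small numbers. Illustrative sketch only.

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        # Partial pivoting: pick the largest |entry| in column k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[1, 3, -2], [3, 5, 6], [2, 4, 3]]
b = [5, 7, 8]
print(solve(A, b))  # approximately [-15, 8, 2]
```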
If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications.
A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. One example of an iterative method is the Jacobi method, where the matrix
{\displaystyle A}
is split into its diagonal component
{\displaystyle D}
and its non-diagonal component
{\displaystyle L+U}
. An initial guess
{\displaystyle {\mathbf {x}}^{(0)}}
is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation:
{\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})}
When the difference between guesses
{\displaystyle {\mathbf {x}}^{(k)}}
and
{\displaystyle {\mathbf {x}}^{(k+1)}}
is sufficiently small, the algorithm is said to have converged on the solution.
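A short sketch of the Jacobi iteration just described, for a small diagonally dominant system (diagonal dominance is a standard sufficient condition for convergence); the function name `jacobi` is illustrative.

```python
# Jacobi iteration: split A into its diagonal D and off-diagonal part
# L + U, then iterate x^(k+1) = D^{-1} (b - (L + U) x^(k)).

def jacobi(A, b, x0, iterations):
    n = len(A)
    x = x0[:]
    for _ in range(iterations):
        # Each component uses only the PREVIOUS guess (unlike Gauss-Seidel).
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]  # diagonally dominant, so Jacobi converges
b = [1.0, 2.0]
x = jacobi(A, b, [0.0, 0.0], 50)
print(x)  # converges toward the exact solution (1/11, 7/11)
```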
There is also a quantum algorithm for linear systems of equations.
== Homogeneous systems ==
A system of linear equations is homogeneous if all of the constant terms are zero:
{\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}}
A homogeneous system is equivalent to a matrix equation of the form
{\displaystyle A\mathbf {x} =\mathbf {0} }
where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.
=== Homogeneous solution set ===
Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:
If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system.
If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system.
These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A.
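The two subspace properties can be checked directly for a tiny homogeneous system, say x − 2y = 0, whose solutions are exactly the multiples of (2, 1):

```python
# Closure of the homogeneous solution set under addition and scaling,
# for the single equation x - 2y = 0 (coefficient matrix A = [1, -2]).

A = [1, -2]

def is_solution(v):
    return A[0] * v[0] + A[1] * v[1] == 0

u, v = (2, 1), (-4, -2)  # two solutions
assert is_solution(u) and is_solution(v)
assert is_solution((u[0] + v[0], u[1] + v[1]))  # closed under addition
r = 5
assert is_solution((r * u[0], r * u[1]))        # closed under scalar multiples
print("subspace properties hold")
```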
=== Relation to nonhomogeneous systems ===
There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system:
{\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .}
Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as
{\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.}
Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p.
This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A.
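For example, with A = [1 1] and b = 3, one particular solution is p = (3, 0) and the homogeneous solutions are the vectors (t, −t); the translation structure can be verified directly:

```python
# Every solution of A x = b is a particular solution p plus a
# homogeneous solution v, illustrated for x1 + x2 = 3.

def Ax(x):
    return x[0] + x[1]

p = (3, 0)           # one particular solution
assert Ax(p) == 3
for t in (-2, 0, 1, 10):
    v = (t, -t)      # solves A x = 0
    assert Ax(v) == 0
    x = (p[0] + v[0], p[1] + v[1])
    assert Ax(x) == 3  # p + v solves A x = b
print("solution set is p translated by the nullspace")
```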
== See also ==
Arrangement of hyperplanes
Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations
Coates graph – A mathematical graph for solution of linear equations
LAPACK – Software library for numerical linear algebra
Linear equation over a ring
Linear least squares – Least squares approximation of linear functions to data
Matrix decomposition – Representation of a matrix as a product
Matrix splitting – Representation of a matrix as a sum
NAG Numerical Library – Software library of numerical-analysis algorithms
Rybicki Press algorithm – An algorithm for inverting a matrix
Simultaneous equations – Set of equations to be solved together
== References ==
== Bibliography ==
Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X
Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3
Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8
Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993
Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3
Whitelaw, T. A. (1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5
== Further reading ==
Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0.
Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7.
Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001.
Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3.
Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International.
Leon, Steven J. (2006). Linear Algebra With Applications (7th ed.). Pearson Prentice Hall.
Strang, Gilbert (2005). Linear Algebra and Its Applications.
Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679.
== External links ==
Media related to System of linear equations at Wikimedia Commons
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations.
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.
== History ==
A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of algebra of concepts. Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets.
Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure. For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets.
In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably.
Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification.
Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic.
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity.
== Values ==
Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits, 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented).
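These arithmetic encodings are easy to confirm exhaustively, since each variable takes only the values 0 and 1:

```python
# Checking the arithmetic encodings above over the bits 0 and 1:
# AND is multiplication, XOR is addition mod 2, inclusive OR is
# x + y - xy, and NOT is 1 - x.

for x in (0, 1):
    assert (1 - x) == (not x)                   # negation
    for y in (0, 1):
        assert x * y == (x and y)               # conjunction
        assert (x + y) % 2 == (x != y)          # exclusive or
        assert x + y - x * y == (x or y)        # inclusive or
print("arithmetic encodings agree with the logical operations")
```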
Boolean algebra also deals with functions which have their values in the set {0,1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. The most general example is the set of elements of a Boolean algebra, with all of the foregoing being instances thereof.
As with elementary algebra, the purely equational part of the theory may be developed, without considering explicit values for the variables.
== Operations ==
=== Basic operations ===
While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND ({\displaystyle \land }) and OR ({\displaystyle \lor }) and the unary operator NOT ({\displaystyle \neg }), collectively referred to as Boolean operators. Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables. They are used to store either true or false values. The basic operations on Boolean variables x and y are defined as follows:
Alternatively, the values of x ∧ y, x ∨ y, and ¬x can be expressed by tabulating their values with truth tables as follows:
When used in expressions, the operators are applied according to the precedence rules. As with elementary algebra, expressions in parentheses are evaluated first, following the precedence rules.
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions:
{\displaystyle {\begin{aligned}x\wedge y&=xy=\min(x,y)\\x\vee y&=x+y-xy=x+y(1-x)=\max(x,y)\\\neg x&=1-x\end{aligned}}}
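The min/max formulations can likewise be verified over all four input combinations:

```python
# The min/max forms of the basic operations, checked exhaustively
# over the two truth values 0 and 1.

for x in (0, 1):
    assert 1 - x == (0 if x else 1)            # negation
    for y in (0, 1):
        assert x * y == min(x, y)              # conjunction
        assert x + y - x * y == max(x, y)      # disjunction
        assert x + y * (1 - x) == max(x, y)    # the equivalent form above
print("min/max forms verified")
```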
One might consider that only negation and one of the two other operations are basic because of the following identities that allow one to define conjunction in terms of negation and the disjunction, and vice versa (De Morgan's laws):
{\displaystyle {\begin{aligned}x\wedge y&=\neg (\neg x\vee \neg y)\\x\vee y&=\neg (\neg x\wedge \neg y)\end{aligned}}}
=== Secondary operations ===
Operations composed from the basic operations include, among others, the following:
These definitions give rise to the following truth tables giving the values of these operations for all four possible inputs.
Material conditional
The first operation, x → y, or Cxy, is called material implication. If x is true, then the result of expression x → y is taken to be that of y (e.g. if x is true and y is false, then x → y is also false). But if x is false, then the value of y can be ignored; however, the operation must return some Boolean value and there are only two choices. So by definition, x → y is true when x is false (relevance logic rejects this definition, by viewing an implication with a false premise as something other than either true or false).
Exclusive OR (XOR)
The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true (e.g. see table): if both are true then the result is false. Defined in terms of arithmetic, it is addition mod 2, where 1 + 1 = 0.
Logical equivalence
The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when x and y are different. The counterpart of x ⊕ y in arithmetic mod 2 is x + y, while equivalence's counterpart is x + y + 1.
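The secondary operations can be built from the basic ones and checked against their mod-2 arithmetic counterparts; this sketch uses the standard identity x → y = ¬x ∨ y, which agrees with the case analysis above (the uppercase function names are invented for the example):

```python
# The three secondary operations, defined from the basics, with their
# mod-2 arithmetic counterparts checked exhaustively.

def NOT(x):        return 1 - x
def AND(x, y):     return x * y
def OR(x, y):      return x + y - x * y
def IMPLIES(x, y): return OR(NOT(x), y)                 # material conditional
def XOR(x, y):     return AND(OR(x, y), NOT(AND(x, y)))
def EQUIV(x, y):   return NOT(XOR(x, y))

for x in (0, 1):
    for y in (0, 1):
        assert XOR(x, y) == (x + y) % 2         # addition mod 2
        assert EQUIV(x, y) == (x + y + 1) % 2   # its complement
        assert IMPLIES(x, y) == (1 if x <= y else 0)
print("secondary operations verified")
```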
== Laws ==
A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra).
=== Monotone laws ===
Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular the following laws are common to both kinds of algebra:
The following laws hold in Boolean algebra, but not in ordinary algebra:
Taking x = 2 in the third law above shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left hand side would be 1(1 + 1) = 2, while the right hand side would be 1 (and so on).
All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0. Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows.
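Monotonicity can be tested exhaustively, since a binary Boolean operation has only four input combinations (`monotone` is an illustrative helper name):

```python
# An operation is monotone if raising an input from 0 to 1 never
# drops the output from 1 to 0; check all four input combinations.

def monotone(op):
    for x in (0, 1):
        for y in (0, 1):
            if op(x, y) > op(1, y) or op(x, y) > op(x, 1):
                return False
    return True

assert monotone(lambda x, y: x & y)       # AND is monotone
assert monotone(lambda x, y: x | y)       # OR is monotone
assert not monotone(lambda x, y: 1 - x)   # complement is not
print("monotonicity verified")
```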
=== Nonmonotone laws ===
The complement operation is defined by the following two laws.
{\displaystyle {\begin{aligned}&{\text{Complementation 1}}&x\wedge \neg x&=0\\&{\text{Complementation 2}}&x\vee \neg x&=1\end{aligned}}}
All properties of negation including the laws below follow from the above two laws alone.
In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law)
{\displaystyle {\begin{aligned}&{\text{Double negation}}&\neg {(\neg {x})}&=x\end{aligned}}}
But whereas ordinary algebra satisfies the two laws
{\displaystyle {\begin{aligned}(-x)(-y)&=xy\\(-x)+(-y)&=-(x+y)\end{aligned}}}
Boolean algebra satisfies De Morgan's laws:
{\displaystyle {\begin{aligned}&{\text{De Morgan 1}}&\neg x\wedge \neg y&=\neg {(x\vee y)}\\&{\text{De Morgan 2}}&\neg x\vee \neg y&=\neg {(x\wedge y)}\end{aligned}}}
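Because a Boolean identity in n variables holds exactly when it holds for all 2ⁿ assignments of 0 and 1, the nonmonotone laws above can be verified by brute force:

```python
# Brute-force verification of Boolean laws: an identity in n variables
# holds iff it holds for every assignment of 0s and 1s.

from itertools import product

def holds(law, n):
    return all(law(*bits) for bits in product((0, 1), repeat=n))

NOT = lambda x: 1 - x
AND = lambda x, y: x * y
OR  = lambda x, y: x + y - x * y

assert holds(lambda x: AND(x, NOT(x)) == 0, 1)                      # complementation 1
assert holds(lambda x: OR(x, NOT(x)) == 1, 1)                       # complementation 2
assert holds(lambda x: NOT(NOT(x)) == x, 1)                         # double negation
assert holds(lambda x, y: AND(NOT(x), NOT(y)) == NOT(OR(x, y)), 2)  # De Morgan 1
assert holds(lambda x, y: OR(NOT(x), NOT(y)) == NOT(AND(x, y)), 2)  # De Morgan 2
print("all laws verified")
```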
=== Completeness ===
The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in § Boolean algebras.
Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras.
This axiomatization is by no means the only one, or even necessarily the most natural given that attention was not paid as to whether some of the axioms followed from others, but there was simply a choice to stop when enough laws had been noticed, treated further in § Axiomatizing Boolean algebra. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent.
=== Duality principle ===
Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set.
There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences.
But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used.
But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial.
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged.
One change not needed to make as part of this interchange was to complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t.
The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.: 21–22
== Diagrammatic representations ==
=== Venn diagrams ===
A Venn diagram can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x corresponds respectively to the values 1 (true) and 0 (false) for variable x. The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention).
The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x.
For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations.
The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle.
While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation.
Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry.
Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨.
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x∧y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle.
The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle.
To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes.
The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged.
The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle.
=== Digital logic gates ===
Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows:
The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground," while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports.
Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port.
The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged.
More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y.
== Boolean algebras ==
The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion.
=== Concrete Boolean algebras ===
A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X.
(Historically X itself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations— 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.)
Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable.
Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide.
Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite. Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers.
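The closure claims of example 3 can be verified computationally by representing each finite or cofinite set with a finite amount of data. A sketch, assuming a tagged-pair encoding chosen for illustration:

```python
# Represent a finite or cofinite set of integers by a tag and a finite set:
# ("fin", S) is the set S itself; ("cofin", S) is every integer NOT in S.
def complement(a):
    tag, s = a
    return ("cofin" if tag == "fin" else "fin", s)

def union(a, b):
    (ta, sa), (tb, sb) = a, b
    if ta == "fin" and tb == "fin":
        return ("fin", sa | sb)        # finite ∪ finite is finite
    if ta == "cofin" and tb == "cofin":
        return ("cofin", sa & sb)      # only common exclusions survive
    # cofinite ∪ anything is cofinite: exclusions not covered remain
    (tc, sc), (tf, sf) = ((ta, sa), (tb, sb)) if ta == "cofin" else ((tb, sb), (ta, sa))
    return ("cofin", sc - sf)

# Intersection via De Morgan, so closure under ∩ follows from ∪ and ¬.
def intersection(a, b):
    return complement(union(complement(a), complement(b)))
```

For instance, the union of the finite set {1, 2} with the cofinite set excluding {2, 3} is the cofinite set excluding {3}, and the intersection of the finite sets {1, 2} and {2, 3} is the finite set {2}.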
Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves.
=== Subsets as bit vectors ===
A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,...,31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if
X = {a, b, c}
where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111. Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]).
From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010∧0110 = 0010, 1010∨0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively.
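The bitwise realizations of the three operations can be sketched on bit vectors represented as strings (the helper names are illustrative):

```python
# Bitwise ∧, ∨, ¬ on bit vectors written as strings of "0" and "1".
def AND(s, t): return "".join("1" if a == b == "1" else "0" for a, b in zip(s, t))
def OR(s, t):  return "".join("1" if "1" in (a, b) else "0" for a, b in zip(s, t))
def NOT(s):    return "".join("0" if a == "1" else "1" for a in s)

# The worked examples from the text:
assert AND("1010", "0110") == "0010"   # intersection
assert OR("1010", "0110") == "1110"    # union
assert NOT("1010") == "0101"           # complement relative to X

# Identifying subsets of X = {a, b, c} with 3-bit vectors, a leftmost:
X = ["a", "b", "c"]
def to_bits(subset):
    return "".join("1" if e in subset else "0" for e in X)

assert to_bits({"a", "c"}) == "101"
```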
=== Prototypical Boolean algebra ===
The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation.
The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.
This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector.
The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete.
=== Boolean algebras: the definition ===
The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra.
Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition.
A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws.
For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra.
Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition.
A Boolean algebra is a complemented distributive lattice.
The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition.
=== Representable Boolean algebras ===
Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions.
However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion.
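The claim that the divisors of a square-free n satisfy the Boolean laws under gcd, lcm, and division into n can be spot-checked exhaustively for a small case such as n = 30:

```python
from math import gcd

n = 30  # square-free: 30 = 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

def meet(x, y): return gcd(x, y)            # ∧ is greatest common divisor
def join(x, y): return x * y // gcd(x, y)   # ∨ is least common multiple
def neg(x):     return n // x               # ¬x is division into n

# Check the complement laws and a De Morgan law over all divisors.
for x in divisors:
    assert meet(x, neg(x)) == 1             # 1 plays the role of 0 (bottom)
    assert join(x, neg(x)) == n             # n plays the role of 1 (top)
    for y in divisors:
        assert neg(meet(x, y)) == join(neg(x), neg(y))
```

Square-freeness matters: for n = 12 and x = 2, gcd(2, 12/2) = 2 ≠ 1, so the complement law fails.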
A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra.
The next question is answered positively as follows.
Every Boolean algebra is representable.
That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability.
The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra.
It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras.
== Axiomatizing Boolean algebra ==
The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold.
In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based.
Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice.
By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom
((a ∣ b) ∣ c) ∣ (a ∣ ((a ∣ c) ∣ a)) = c
is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra.
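That this Sheffer-stroke identity holds in the two-element algebra can be confirmed by brute force over all eight assignments (a short check, not a proof that it axiomatizes Boolean algebra):

```python
def nand(a, b):
    return 1 - (a & b)  # the Sheffer stroke a | b

# Verify ((a|b)|c) | (a|((a|c)|a)) = c for every a, b, c in {0, 1}.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            lhs = nand(nand(nand(a, b), c),
                       nand(a, nand(nand(a, c), a)))
            assert lhs == c
```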
== Propositional logic ==
Propositional logic is a logical system that is intimately connected to Boolean algebra. Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in such a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra.
Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ... Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or T. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions.
The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two element Boolean algebra).
These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra. Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used.
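This translation can be sketched for the two-element algebra, where a tautology is a formula evaluating to 1 under every truth assignment (the function names are illustrative):

```python
from itertools import product

def is_tautology(f, nvars):
    """True if f evaluates to 1 under every assignment over {0, 1}."""
    return all(f(*v) == 1 for v in product((0, 1), repeat=nvars))

IMP = lambda p, q: (1 - p) | q   # material implication as a Boolean term

# The tautology P -> P corresponds to the Boolean theorem ¬P ∨ P = 1.
assert is_tautology(lambda p: IMP(p, p), 1)

# Conversely, a theorem Φ = Ψ yields the tautology (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ);
# e.g. the De Morgan theorem ¬(x ∧ y) = ¬x ∨ ¬y:
phi = lambda x, y: 1 - (x & y)
psi = lambda x, y: (1 - x) | (1 - y)
eqv = lambda x, y: (phi(x, y) & psi(x, y)) | ((1 - phi(x, y)) & (1 - psi(x, y)))
assert is_tautology(eqv, 2)
```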
=== Applications ===
One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language. Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "if P, then P," or in the language of Boolean algebra, P → P.
Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4.
Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P).
(The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.)
=== Deductive systems for propositional logic ===
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.
==== Sequent calculus ====
Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent.
Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus.
== Applications ==
Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics.
=== Computers ===
In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. (Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.)
Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low.
Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second.
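The distinction between logical and numeric operations on the same bit sequences, including the carry, can be illustrated directly:

```python
a, b = 0b01101000, 0b11010110   # two 8-bit registers (values 104 and 214)

# Logical operations compare counterpart bits independently; no carries.
assert a & b == 0b01000000      # conjunction, bit by bit
assert a | b == 0b11111110      # disjunction, bit by bit
assert a ^ b == 0b10111110      # exclusive or, bit by bit

# Numeric addition treats the same bits as binary numbers: carries
# propagate, here past the top bit into a ninth bit.
assert a + b == 0b100111110     # 104 + 214 = 318
```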
=== Two-valued logic ===
Other areas where two values is a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right.
A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low.
Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.
Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
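The algebraic replacements just described can be sketched as functions on the unit interval:

```python
def f_not(x): return 1 - x
def f_and(x, y): return x * y
def f_or(x, y): return 1 - (1 - x) * (1 - y)   # OR via De Morgan's law

# On the endpoints {0, 1} these agree with two-valued logic...
for x in (0, 1):
    for y in (0, 1):
        assert f_and(x, y) == (x & y)
        assert f_or(x, y) == (x | y)

# ...while intermediate values behave as degrees of truth, or as
# probabilities of independent events.
assert f_and(0.5, 0.5) == 0.25
assert f_or(0.5, 0.5) == 0.75
```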
=== Boolean operations ===
The original application for Boolean operations was mathematical logic, where they combine the truth values, true or false, of individual formulas.
==== Natural language ====
Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). The phrase "but not" is synonymous with "and not". When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, these logical connectives often carry the meanings of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity; for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union, while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea, which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility. "Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P", the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them.
==== Digital logic ====
Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0,1}. When a vector of n identical binary gates is used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2^n elements.
==== Naive set theory ====
Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on.
==== Video cards ====
The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 2^(2^3) = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x60 in the (SRC^DST)&MSK example, 0x66 if just SRC^DST, etc. At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression.
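Because any expression in SRC, DST, and MSK is itself a byte, the raster-operation code can be computed in any language with bitwise operators:

```python
# The three generator constants of the free Boolean algebra on
# three variables, as truth-table bytes.
SRC, DST, MSK = 0b10101010, 0b11001100, 0b11110000

# Evaluating a Boolean expression over these constants yields the
# one-byte raster-operation code for that expression.
op = (SRC ^ DST) & MSK

assert SRC ^ DST == 0x66
assert op == 0x60
```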
==== Modeling and CAD ====
Solid modeling systems for computer aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference, remove the elements of y from those of x. Thus given two shapes one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference.
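A minimal sketch of the machining idea, with shapes as sets of voxel coordinates (the shape names here are invented for illustration):

```python
# A 4x4x4 block of material, as a set of voxel coordinates.
block = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}

# The material removed by a drilling operation: a vertical column.
drill = {(1, 1, z) for z in range(4)}

# Machining is the Boolean operation x ∧ ¬y, i.e. set difference.
machined = block - drill

assert len(machined) == 64 - 4
assert (1, 1, 0) not in machined
assert (0, 0, 0) in machined
```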
==== Boolean searches ====
Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set." The following examples use a syntax supported by Google.
Doublequotes are used to combine whitespace-separated words into a single search term.
Whitespace is used to specify logical AND, as it is the default operator for joining search terms:
"Search term 1" "Search term 2"
The OR keyword is used for logical OR:
"Search term 1" OR "Search term 2"
A prefixed minus sign is used for logical NOT:
"Search term 1" −"Search term 2"
== See also ==
== Notes ==
== References ==
== Further reading ==
Mano, Morris; Ciletti, Michael D. (2013). Digital Design. Pearson. ISBN 978-0-13-277420-8.
Whitesitt, J. Eldon (1995). Boolean algebra and its applications. Courier Dover Publications. ISBN 978-0-486-68483-3.
Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg, Germany: Physica Verlag.
Sikorski, Roman (1969). Boolean Algebras (3 ed.). Berlin, Germany: Springer-Verlag. ISBN 978-0-387-04469-9.
Bocheński, Józef Maria (1959). A Précis of Mathematical Logic. Translated from the French and German editions by Otto Bird. Dordrecht, South Holland: D. Reidel.
=== Historical perspective ===
Boole, George (1848). "The Calculus of Logic". Cambridge and Dublin Mathematical Journal. III: 183–198.
Hailperin, Theodore (1986). Boole's logic and probability: a critical exposition from the standpoint of contemporary algebra, logic, and probability theory (2 ed.). Elsevier. ISBN 978-0-444-87952-3.
Gabbay, Dov M.; Woods, John, eds. (2004). The rise of modern logic: from Leibniz to Frege. Handbook of the History of Logic. Vol. 3. Elsevier. ISBN 978-0-444-51611-4., several relevant chapters by Hailperin, Valencia, and Grattan-Guinness
Badesa, Calixto (2004). "Chapter 1. Algebra of Classes and Propositional Calculus". The birth of model theory: Löwenheim's theorem in the frame of the theory of relatives. Princeton University Press. ISBN 978-0-691-05853-5.
Stanković, Radomir S. [in German]; Astola, Jaakko Tapio [in Finnish] (2011). Written at Niš, Serbia & Tampere, Finland. From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.). Berlin & Heidelberg, Germany: Springer-Verlag. pp. xviii + 212. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. Retrieved 2022-10-25.
"The Algebra of Logic Tradition" entry by Burris, Stanley in the Stanford Encyclopedia of Philosophy, 21 February 2012
== External links ==
In the history of calculus, the calculus controversy (German: Prioritätsstreit, lit. 'priority dispute') was an argument between mathematicians Isaac Newton and Gottfried Wilhelm Leibniz over who had first discovered calculus. The question was a major intellectual controversy, beginning in 1699 and reaching its peak in 1712. Leibniz had published his work on calculus first, but Newton's supporters accused Leibniz of plagiarizing Newton's unpublished ideas. The modern consensus is that the two men independently developed their ideas. Their creation of calculus has been called "the greatest advance in mathematics that had taken place since the time of Archimedes."
Newton stated he had begun working on a form of calculus (which he called "The Method of Fluxions and Infinite Series") in 1666, at the age of 23, but did not publish it until 1737 as a minor annotation in the back of one of his works decades later (a relevant Newton manuscript of October 1666 is now published among his mathematical papers). Gottfried Leibniz began working on his variant of calculus in 1674, and in 1684 published his first paper employing it, "Nova Methodus pro Maximis et Minimis". L'Hôpital published a text on Leibniz's calculus in 1696 (in which he recognized that Newton's Principia of 1687 was "nearly all about this calculus"). Meanwhile, Newton, though he explained his (geometrical) form of calculus in Section I of Book I of the Principia of 1687, did not explain his eventual fluxional notation for the calculus in print until 1693 (in part) and 1704 (in full).
The prevailing opinion in the 18th century was against Leibniz (in Britain, though not in the German-speaking world). Today the consensus is that Leibniz and Newton independently invented and described calculus in Europe in the 17th century; their work was more than just a "synthesis of previously distinct pieces of mathematical technique", though "it was certainly this in part".
It was certainly Isaac Newton who first devised a new infinitesimal calculus and elaborated it into a widely extensible algorithm, whose potentialities he fully understood; of equal certainty, differential and integral calculus, the fount of great developments flowing continuously from 1684 to the present day, was created independently by Gottfried Leibniz.
One author has identified the dispute as being about "profoundly different" methods:
Despite ... points of resemblance, the methods [of Newton and Leibniz] are profoundly different, so making the priority row a nonsense.
On the other hand, other authors have emphasized the equivalences and mutual translatability of the methods: here N Guicciardini (2003) appears to confirm L'Hôpital (1696) (already cited):
the Newtonian and Leibnizian schools shared a common mathematical method. They adopted two algorithms, the analytical method of fluxions, and the differential and integral calculus, which were translatable one into the other.
== Scientific priority in the 17th century ==
In the 17th century the question of scientific priority was of great importance to scientists; however, during this period scientific journals had only just begun to appear, and a generally accepted mechanism for fixing priority when publishing discoveries had not yet been established. Among the methods used by scientists were anagrams, sealed envelopes placed in a safe place, correspondence with other scientists, or private messages. A letter to the founder of the French Academy of Sciences, Marin Mersenne, for a French scientist, or to the secretary of the Royal Society of London, Henry Oldenburg, for an English one, had essentially the status of a published article. The discoverer could "time-stamp" the moment of his discovery, and prove that he knew of it at the point the letter was sealed and had not copied it from anything subsequently published; nevertheless, where an idea was subsequently published in conjunction with its use in a particularly valuable context, this might take priority over an earlier discoverer's work, which had no obvious application. Further, a mathematician's claim could be undermined by counter-claims that he had not truly invented an idea, but merely improved on someone else's idea, an improvement that required little skill and was based on facts that were already known.
A series of high-profile disputes about scientific priority in the 17th century—the era that the American science historian D. Meli called "the golden age of the mud-slinging priority disputes"—is associated with Leibniz. The first of them occurred at the beginning of 1673, during his first visit to London, when in the presence of the famous mathematician John Pell he presented his method of approximating series by differences. To Pell's remark that this discovery had already been made by François Regnaud and published in 1670 in Lyon by Gabriel Mouton, Leibniz answered the next day in a letter to Oldenburg: having looked at Mouton's book, he conceded that Pell was correct, but offered his draft notes, which contained nuances not found by Regnaud and Mouton. Leibniz's integrity was thus vindicated on this occasion, but the episode was recalled later. On the same visit to London, Leibniz found himself in the opposite position. On February 1, 1673, at a meeting of the Royal Society of London, he demonstrated his mechanical calculator. The curator of experiments of the Society, Robert Hooke, carefully examined the device and even removed the back cover. A few days later, in the absence of Leibniz, Hooke criticized the German scientist's machine, saying that he could make a simpler model. Leibniz, who learned about this after returning to Paris, categorically rejected Hooke's claim in a letter to Oldenburg and formulated principles of correct scientific behaviour: "We know that respectable and modest people prefer it when, having thought of something consistent with what someone else has discovered, they ascribe their own improvements and additions to the discoverer, so as not to arouse suspicions of intellectual dishonesty; the desire for true generosity should guide them, instead of a lying thirst for dishonest profit."
To illustrate the proper behaviour, Leibniz gives an example of Nicolas-Claude Fabri de Peiresc and Pierre Gassendi, who performed astronomical observations similar to those made earlier by Galileo Galilei and Johannes Hevelius, respectively. Learning they did not make their discoveries first, the French scientists passed on their data to the discoverers.
Newton's approach to the priority problem can be illustrated by the example of the discovery of the inverse-square law as applied to the dynamics of bodies moving under the influence of gravity. Based on an analysis of Kepler's laws and his own calculations, Robert Hooke made the assumption that motion under such conditions should occur along orbits similar to ellipses. Unable to rigorously prove this claim, he reported it to Newton. Without further entering into correspondence with Hooke, Newton solved this problem, as well as the inverse to it, proving that the law of inverse squares follows from the ellipticity of the orbits. This discovery was set forth in his famous work Philosophiæ Naturalis Principia Mathematica without mentioning Hooke. At the insistence of astronomer Edmund Halley, to whom the manuscript was handed over for editing and publication, the phrase was included in the text that the compliance of Kepler's first law with the law of inverse squares was "independently approved by Wren, Hooke and Halley."
According to the remark of Vladimir Arnold, Newton, choosing between refusal to publish his discoveries and constant struggle for priority, chose both of them.
== Background ==
=== Invention of Differential and Integral Calculus ===
By the time of Newton and Leibniz, European mathematicians had already made a significant contribution to the formation of the ideas of mathematical analysis. The Dutchman Simon Stevin (1548–1620), the Italian Luca Valerio (1553–1618), and the German Johannes Kepler (1571–1630) were engaged in the development of the ancient "method of exhaustion" for calculating areas and volumes. The latter's ideas apparently influenced – directly or through Galileo Galilei – the "method of indivisibles" developed by Bonaventura Cavalieri (1598–1647).
The last years of Leibniz's life, 1710–1716, were embittered by a long controversy with John Keill, Newton, and others, over whether Leibniz had discovered calculus independently of Newton, or whether he had merely invented another notation for ideas that were fundamentally Newton's. No participant doubted that Newton had already developed his method of fluxions when Leibniz began working on the differential calculus, yet there was seemingly no proof beyond Newton's word. He had published a calculation of a tangent with the note: "This is only a special case of a general method whereby I can calculate curves and determine maxima, minima, and centers of gravity." How this was done he explained to a pupil a full 20 years later, when Leibniz's articles were already well-read. Newton's manuscripts came to light only after his death.
The infinitesimal calculus can be expressed either in the notation of fluxions or in that of differentials, or, as noted above, it was also expressed by Newton in geometrical form, as in the Principia of 1687. Newton employed fluxions as early as 1666, but did not publish an account of his notation until 1693. The earliest use of differentials in Leibniz's notebooks may be traced to 1675. He employed this notation in a 1677 letter to Newton. The differential notation also appeared in Leibniz's memoir of 1684.
The claim that Leibniz invented the calculus independently of Newton rests on the basis that Leibniz:
1. Published a description of his method some years before Newton printed anything on fluxions,
2. Always alluded to the discovery as being his own invention (this statement went unchallenged for some years),
3. Enjoyed the strong presumption that he acted in good faith, and
4. Demonstrated in his private papers his development of the ideas of calculus in a manner independent of the path taken by Newton.
According to Leibniz's detractors, the fact that Leibniz's claim went unchallenged for some years is immaterial. To rebut this case it is sufficient to show that he:
1. Saw some of Newton's papers on the subject in or before 1675 or at least 1677, and
2. Obtained the fundamental ideas of the calculus from those papers.
No attempt was made to rebut #4, which was not known at the time, but which provides the strongest evidence that Leibniz came to the calculus independently of Newton. This evidence, however, is still questionable based on the discovery, during the inquest and after, that Leibniz both back-dated and changed fundamentals of his "original" notes, not only in this intellectual conflict, but in several others. He also published "anonymous" slanders of Newton regarding their controversy, of which he initially tried to deny authorship.
If good faith is nevertheless assumed, however, Leibniz's notes as presented to the inquest came first to integration, which he saw as a generalization of the summation of infinite series, whereas Newton began from derivatives. However, to view the development of calculus as entirely independent between the work of Newton and Leibniz misses that both had some knowledge of the methods of the other (though Newton did develop most fundamentals before Leibniz began) and worked together on a few aspects, in particular power series, as is shown in a letter to Henry Oldenburg dated 24 October 1676, where Newton remarks that Leibniz had developed a number of methods, one of which was new to him. Both Leibniz and Newton could see the other was far along towards inventing calculus (Leibniz in particular mentions it) but only Leibniz was prodded thereby into publication.
That Leibniz saw some of Newton's manuscripts had always been likely. In 1849, C. I. Gerhardt, while going through Leibniz's manuscripts, found extracts from Newton's De Analysi per Equationes Numero Terminorum Infinitas (published in 1704 as part of the De Quadratura Curvarum but also previously circulated among mathematicians starting with Newton giving a copy to Isaac Barrow in 1669 and Barrow sending it to John Collins) in Leibniz's handwriting, the existence of which had been previously unsuspected, along with notes re-expressing the content of these extracts in Leibniz's differential notation. Hence when these extracts were made becomes all-important. It is known that a copy of Newton's manuscript had been sent to Ehrenfried Walther von Tschirnhaus in May 1675, a time when he and Leibniz were collaborating; it is not impossible that these extracts were made then. It is also possible that they may have been made in 1676, when Leibniz discussed analysis by infinite series with Collins and Oldenburg. It is probable that they would have then shown him Newton's manuscript on the subject, a copy of which one or both of them surely possessed. On the other hand, it may be supposed that Leibniz made the extracts from the printed copy in or after 1704. Shortly before his death, Leibniz admitted in a letter to Abbé Antonio Schinella Conti, that in 1676 Collins had shown him some of Newton's papers, but Leibniz also implied that they were of little or no value. Presumably he was referring to Newton's letters of 13 June and 24 October 1676, and to the letter of 10 December 1672, on the method of tangents, extracts from which accompanied the letter of 13 June.
Whether Leibniz made use of the manuscript from which he had copied extracts, or whether he had previously invented the calculus, are questions on which no direct evidence is available at present. It is, however, worth noting that the unpublished Portsmouth Papers show that when Newton entered into the dispute in 1711, he picked this manuscript as the one which had likely fallen into Leibniz's hands. At that time there was no direct evidence that Leibniz had seen Newton's manuscript before it was printed in 1704; hence Newton's conjecture was not published. But Gerhardt's discovery of a copy made by Leibniz appears to confirm its accuracy. Those who question Leibniz's good faith allege that to a man of his ability, the manuscript, especially if supplemented by the letter of 10 December 1672, sufficed to give him a clue as to the methods of the calculus. Since Newton's work at issue did employ the fluxional notation, anyone building on that work would have to invent a notation, but some deny this.
== Development ==
The quarrel was a retrospective affair. In 1696, already some years later than the events that became the subject of the quarrel, the position still looked potentially peaceful: Newton and Leibniz had each made limited acknowledgements of the other's work, and L'Hôpital's 1696 book about the calculus from a Leibnizian point of view had also acknowledged Newton's published work of the 1680s as "nearly all about this calculus" ("presque tout de ce calcul"), while expressing preference for the convenience of Leibniz's notation.
At first, there was no reason to suspect Leibniz's good faith. In 1699, Nicolas Fatio de Duillier, a Swiss mathematician known for his work on the zodiacal light problem, publicly accused Leibniz of plagiarizing Newton, having already privately accused Leibniz of plagiarism twice in letters to Christiaan Huygens in 1692. It was not until the 1704 publication of an anonymous review of Newton's tract on quadrature, which implied that Newton had borrowed the idea of the fluxional calculus from Leibniz, that any responsible mathematician doubted that Leibniz had invented the calculus independently of Newton. With respect to the review of Newton's quadrature work, all admit that there was no justification or authority for the statements made therein, which were rightly attributed to Leibniz. But the subsequent discussion led to a critical examination of the whole question, and doubts emerged: had Leibniz derived the fundamental idea of the calculus from Newton? The case against Leibniz, as it appeared to Newton's friends, was summed up in the Commercium Epistolicum of 1712, which referenced all allegations. This document was thoroughly shaped by Newton.
No such summary (with facts, dates, and references) of the case for Leibniz was issued by his friends; but Johann Bernoulli attempted to indirectly weaken the evidence by attacking the personal character of Newton in a letter dated 7 June 1713. When pressed for an explanation, Bernoulli most solemnly denied having written the letter. In accepting the denial, Newton added in a private letter to Bernoulli the following remarks, giving his claimed reasons for taking part in the controversy. He said, "I have never grasped at fame among foreign nations, but I am very desirous to preserve my character for honesty, which the author of that epistle, as if by the authority of a great judge, had endeavoured to wrest from me. Now that I am old, I have little pleasure in mathematical studies, and I have never tried to propagate my opinions over the world, but I have rather taken care not to involve myself in disputes on account of them."
Leibniz explained his silence as follows, in a letter to Conti dated 9 April 1716:
In order to respond point by point to all the work published against me, I would have to go into much minutiae that occurred thirty, forty years ago, of which I remember little: I would have to search my old letters, of which many are lost. Moreover, in most cases, I did not keep a copy, and when I did, the copy is buried in a great heap of papers, which I could sort through only with time and patience. I have enjoyed little leisure, being so weighted down of late with occupations of a totally different nature.
In any event, a bias favouring Newton tainted the whole affair from the outset. The Royal Society, of which Isaac Newton was president at the time, set up a committee to pronounce on the priority dispute, in response to a letter it had received from Leibniz. That committee never asked Leibniz to give his version of the events. The report of the committee, finding in favour of Newton, was written and published as "Commercium Epistolicum" (mentioned above) by Newton early in 1713. But Leibniz did not see it until the autumn of 1714.
=== Leibniz's death and end of dispute ===
Leibniz never agreed to acknowledge Newton's priority in inventing calculus. He attempted to write his own version of the history of differential calculus, but, as with his history of the rulers of Braunschweig, he did not complete it. At the end of 1715, Leibniz accepted Johann Bernoulli's offer to organize another mathematical competition, in which the different approaches had to prove their worth. This time the problem was taken from the area later called the calculus of variations: it was required to construct a tangent line to a family of curves. A letter was written on 25 November and transmitted to Newton in London through Abate Conti. The problem was formulated in unclear terms, and only later did it become evident that a general solution was required, not a particular one as Newton had understood. After the British side published their solution, Leibniz published his, which was more general, and thus formally won the competition. For his part, Newton stubbornly sought to destroy his opponent. Not having achieved this with the "Report", he continued his research, spending hundreds of hours on it. His next study, entitled "Observations upon the preceding Epistle", was inspired by a letter from Leibniz to Conti in March 1716 criticizing Newton's philosophical views; no new facts were given in this document.
== See also ==
Possibility of transmission of Kerala School results to Europe
List of scientific priority disputes
== References ==
This article incorporates text from this source, which is in the public domain: Ball, W. W. Rouse (1908). A Short Account of the History of Mathematics. New York: MacMillan.
== Sources ==
Арнольд, В. И. (1989). Гюйгенс и Барроу, Ньютон и Гук - Первые шаги математического анализа и теории катастроф. М.: Наука. p. 98. ISBN 5-02-013935-1.
Arnold, Vladimir (1990). Huygens and Barrow, Newton and Hooke: Pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Translated by Primrose, Eric J.F. Birkhäuser Verlag. ISBN 3-7643-2383-3.
W. W. Rouse Ball (1908) A Short Account of the History of Mathematics, 4th ed.
Bardi, Jason Socrates (2006). The Calculus Wars: Newton, Leibniz, and the Greatest Mathematical Clash of All Time. New York: Thunder's Mouth Press. ISBN 978-1-56025-992-3.
Boyer, C. B. (1949). The History of the Calculus and its conceptual development. Dover Publications, inc.
Richard C. Brown (2012) Tangled origins of the Leibnitzian Calculus: A case study of mathematical revolution, World Scientific ISBN 9789814390804
Ivor Grattan-Guinness (1997) The Norton History of the Mathematical Sciences. W W Norton.
Hall, A. R. (1980). Philosophers at War: The Quarrel between Newton and Leibniz. Cambridge University Press. p. 356. ISBN 0-521-22732-1.
Stephen Hawking (1988) A Brief History of Time From the Big Bang to Black Holes. Bantam Books.
Kandaswamy, Anand. The Newton/Leibniz Conflict in Context.
Meli, D. B. (1993). Equivalence and Priority: Newton versus Leibniz: Including Leibniz's Unpublished Manuscripts on the Principia. Clarendon Press. p. 318. ISBN 0-19-850143-9.
== External links ==
Gottfried Wilhelm Leibniz, Sämtliche Schriften und Briefe, Reihe VII: Mathematische Schriften, vol. 5: Infinitesimalmathematik 1674-1676, Berlin: Akademie Verlag, 2008, pp. 288–295 ("Analyseos tetragonisticae pars secunda", 29 October 1675) and 321–331 ("Methodi tangentium inversae exempla", 11 November 1675).
Gottfried Wilhelm Leibniz, "Nova Methodus pro Maximis et Minimis...", 1684 (Latin original) (English translation)
Isaac Newton, "Newton's Waste Book (Part 3) (Normalized Version)": 16 May 1666 entry (The Newton Project)
Isaac Newton, "De Analysi per Equationes Numero Terminorum Infinitas (Of the Quadrature of Curves and Analysis by Equations of an Infinite Number of Terms)", in: Sir Isaac Newton's Two Treatises, James Bettenham, 1745. | Wikipedia/Leibniz–Newton_calculus_controversy |
In mathematics, an affine algebraic plane curve is the zero set of a polynomial in two variables. A projective algebraic plane curve is the zero set in a projective plane of a homogeneous polynomial in three variables. An affine algebraic plane curve can be completed in a projective algebraic plane curve by homogenizing its defining polynomial. Conversely, a projective algebraic plane curve of homogeneous equation h(x, y, t) = 0 can be restricted to the affine algebraic plane curve of equation h(x, y, 1) = 0. These two operations are each inverse to the other; therefore, the phrase algebraic plane curve is often used without specifying explicitly whether it is the affine or the projective case that is considered.
If the defining polynomial of a plane algebraic curve is irreducible, then one has an irreducible plane algebraic curve. Otherwise, the algebraic curve is the union of one or several irreducible curves, called its components, that are defined by the irreducible factors.
More generally, an algebraic curve is an algebraic variety of dimension one. In some contexts, an algebraic set of dimension one is also called an algebraic curve, but this will not be the case in this article. Equivalently, an algebraic curve is an algebraic variety that is birationally equivalent to an irreducible algebraic plane curve. If the curve is contained in an affine space or a projective space, one can take a projection for such a birational equivalence.
These birational equivalences reduce most of the study of algebraic curves to the study of algebraic plane curves. However, some properties are not kept under birational equivalence and must be studied on non-plane curves. This is, in particular, the case for the degree and smoothness. For example, there exist smooth curves of genus 0 and degree greater than two, but any plane projection of such curves has singular points (see Genus–degree formula).
A non-plane curve is often called a space curve or a skew curve.
== In Euclidean geometry ==
An algebraic curve in the Euclidean plane is the set of the points whose coordinates are the solutions of a bivariate polynomial equation p(x, y) = 0. This equation is often called the implicit equation of the curve, in contrast to the curves that are the graph of a function defining explicitly y as a function of x.
With a curve given by such an implicit equation, the first problems are to determine the shape of the curve and to draw it. These problems are not as easy to solve as in the case of the graph of a function, for which y may easily be computed for various values of x. The fact that the defining equation is a polynomial implies that the curve has some structural properties that may help in solving these problems.
Every algebraic curve may be uniquely decomposed into a finite number of smooth monotone arcs (also called branches), sometimes connected by points called "remarkable points", and possibly a finite number of isolated points called acnodes. A smooth monotone arc is the graph of a smooth function which is defined and monotone on an open interval of the x-axis. In each direction, an arc is either unbounded (usually called an infinite arc) or has an endpoint which is either a singular point (this will be defined below) or a point with a tangent parallel to one of the coordinate axes.
For example, for the Tschirnhausen cubic, there are two infinite arcs having the origin (0, 0) as an endpoint. This point is the only singular point of the curve. There are also two arcs having this singular point as one endpoint and having a second endpoint with a horizontal tangent. Finally, there are two other arcs each having one of these points with horizontal tangent as the first endpoint and having the unique point with vertical tangent as the second endpoint. In contrast, the sinusoid is certainly not an algebraic curve, having an infinite number of monotone arcs.
To draw an algebraic curve, it is important to know the remarkable points and their tangents, the infinite branches and their asymptotes (if any) and the way in which the arcs connect them. It is also useful to consider the inflection points as remarkable points. When all this information is drawn on a sheet of paper, the shape of the curve usually appears rather clearly. If not, it suffices to add a few other points and their tangents to get a good description of the curve.
The methods for computing the remarkable points and their tangents are described below in the section Remarkable points of a plane curve.
== Plane projective curves ==
It is often desirable to consider curves in the projective space. An algebraic curve in the projective plane or plane projective curve is the set of the points in a projective plane whose projective coordinates are zeros of a homogeneous polynomial in three variables P(x, y, z).
Every affine algebraic curve of equation p(x, y) = 0 may be completed into the projective curve of equation
{\displaystyle ^{h}p(x,y,z)=0,}
where
{\displaystyle ^{h}p(x,y,z)=z^{\deg(p)}p\left({\frac {x}{z}},{\frac {y}{z}}\right)}
is the result of the homogenization of p. Conversely, if P(x, y, z) = 0 is the homogeneous equation of a projective curve, then P(x, y, 1) = 0 is the equation of an affine curve, which consists of the points of the projective curve whose third projective coordinate is not zero. These two operations are reciprocal one to the other, as
{\displaystyle ^{h}p(x,y,1)=p(x,y)}
and, if p is defined by
{\displaystyle p(x,y)=P(x,y,1)}
, then
{\displaystyle ^{h}p(x,y,z)=P(x,y,z),}
provided that the homogeneous polynomial P is not divisible by z.
For example, the projective curve of equation x2 + y2 − z2 = 0 is the projective completion of the unit circle of equation x2 + y2 − 1 = 0.
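The homogenization and restriction described above are mechanical enough to sketch in code. Below is a minimal illustration using Python's SymPy library; the helper names `homogenize` and `dehomogenize` are my own, not terminology from the article.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def homogenize(p):
    """Return the homogenization ^h p(x, y, z) = z**deg(p) * p(x/z, y/z)."""
    d = sp.Poly(p, x, y).total_degree()
    return sp.expand(z**d * p.subs({x: x/z, y: y/z}))

def dehomogenize(P):
    """Restrict a homogeneous P(x, y, z) to the affine chart z = 1."""
    return P.subs(z, 1)

p = x**2 + y**2 - 1        # the unit circle
P = homogenize(p)          # its projective completion: x**2 + y**2 - z**2
print(P)
print(sp.expand(dehomogenize(P) - p))  # 0: the two operations are mutually inverse here
```

Restricting and then re-homogenizing recovers the original polynomial whenever P is not divisible by z, matching the condition stated above.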
This implies that an affine curve and its projective completion are the same curves, or, more precisely that the affine curve is a part of the projective curve that is large enough to well define the "complete" curve. This point of view is commonly expressed by calling "points at infinity" of the affine curve the points (in finite number) of the projective completion that do not belong to the affine part.
Projective curves are frequently studied for themselves. They are also useful for the study of affine curves. For example, if p(x, y) is the polynomial defining an affine curve, besides the partial derivatives
{\displaystyle p'_{x}}
and
{\displaystyle p'_{y}}
, it is useful to consider the derivative at infinity
{\displaystyle p'_{\infty }(x,y)={^{h}p'_{z}(x,y,1)}.}
For example, the equation of the tangent of the affine curve of equation p(x, y) = 0 at a point (a, b) is
{\displaystyle xp'_{x}(a,b)+yp'_{y}(a,b)+p'_{\infty }(a,b)=0.}
== Remarkable points of a plane curve ==
In this section, we consider a plane algebraic curve defined by a bivariate polynomial p(x, y) and its projective completion, defined by the homogenization
{\displaystyle P(x,y,z)={}^{h}p(x,y,z)}
of p.
=== Intersection with a line ===
Knowing the points of intersection of a curve with a given line is frequently useful. The intersection with the axes of coordinates and the asymptotes are useful to draw the curve. Intersecting with a line parallel to the axes allows one to find at least one point in each branch of the curve. If an efficient root-finding algorithm is available, this allows one to draw the curve by plotting the intersection points with all the lines parallel to the y-axis that pass through each pixel on the x-axis.
If the polynomial defining the curve has degree d, any line cuts the curve in at most d points. Bézout's theorem asserts that this number is exactly d, if the points are searched in the projective plane over an algebraically closed field (for example the complex numbers), and counted with their multiplicity. The method of computation that follows proves this theorem again, in this simple case.
To compute the intersection of the curve defined by the polynomial p with the line of equation ax+by+c = 0, one solves the equation of the line for x (or for y if a = 0). Substituting the result in p, one gets a univariate equation q(y) = 0 (or q(x) = 0, if the equation of the line has been solved in y), each of whose roots is one coordinate of an intersection point. The other coordinate is deduced from the equation of the line. The multiplicity of an intersection point is the multiplicity of the corresponding root. There is an intersection point at infinity if the degree of q is lower than the degree of p; the multiplicity of such an intersection point at infinity is the difference of the degrees of p and q.
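The substitution method just described can be sketched with SymPy. This is an illustration only: the function name `line_intersections` and its parameters are mine, and the sketch assumes a ≠ 0 so the line can be solved for x (swap the roles of x and y otherwise).

```python
import sympy as sp

x, y = sp.symbols('x y')

def line_intersections(p, a, b, c):
    """Intersect the curve p(x, y) = 0 with the line a*x + b*y + c = 0.

    Solve the line for x, substitute into p to get a univariate q(y),
    and read the intersection points off its roots (with multiplicity).
    A drop in degree from deg(p) to deg(q) signals an intersection at
    infinity of that multiplicity."""
    x_on_line = (-b*y - c) / a                        # the line solved for x
    q = sp.Poly(sp.expand(p.subs(x, x_on_line)), y)   # univariate polynomial in y
    d = sp.Poly(p, x, y).total_degree()
    points = [(x_on_line.subs(y, r), r, m) for r, m in sp.roots(q).items()]
    return points, d - q.degree()

# unit circle cut by the line x - y = 0: two simple real intersection points
pts, mult_at_infinity = line_intersections(x**2 + y**2 - 1, 1, -1, 0)
print(pts, mult_at_infinity)
```

Since the circle has degree 2 and q(y) = 2y² − 1 also has degree 2, there is no intersection at infinity in this example.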
=== Tangent at a point ===
The tangent at a point (a, b) of the curve is the line of equation
{\displaystyle (x-a)p'_{x}(a,b)+(y-b)p'_{y}(a,b)=0}
, as for every differentiable curve defined by an implicit equation. In the case of polynomials, another formula for the tangent has a simpler constant term and is more symmetric:
{\displaystyle xp'_{x}(a,b)+yp'_{y}(a,b)+p'_{\infty }(a,b)=0,}
where
{\displaystyle p'_{\infty }(x,y)=P'_{z}(x,y,1)}
is the derivative at infinity. The equivalence of the two equations results from Euler's homogeneous function theorem applied to P.
If
{\displaystyle p'_{x}(a,b)=p'_{y}(a,b)=0,}
the tangent is not defined and the point is a singular point.
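The symmetric tangent formula and the singularity test can be combined in a short SymPy sketch. The function name `tangent_at` is illustrative, not from the article; the derivative at infinity is obtained from the homogenization exactly as defined above.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def tangent_at(p, a, b):
    """Tangent to p(x, y) = 0 at (a, b), as x*p'_x(a,b) + y*p'_y(a,b) + p'_inf(a,b) = 0.

    Returns None when both partial derivatives vanish at (a, b), i.e. at a
    singular point, where the tangent is not defined."""
    d = sp.Poly(p, x, y).total_degree()
    P = sp.expand(z**d * p.subs({x: x/z, y: y/z}))    # homogenization ^h p
    p_inf = sp.diff(P, z).subs(z, 1)                  # derivative at infinity
    px = sp.diff(p, x).subs({x: a, y: b})
    py = sp.diff(p, y).subs({x: a, y: b})
    if px == 0 and py == 0:
        return None                                   # singular point
    return sp.Eq(x*px + y*py + p_inf.subs({x: a, y: b}), 0)

print(tangent_at(x**2 + y**2 - 1, 1, 0))   # tangent to the circle at (1, 0): x = 1
print(tangent_at(y**2 - x**3, 0, 0))       # None: the cuspidal cubic is singular at the origin
```

For the circle, the constant term p'_inf = −2 is indeed simpler than the constant (−a·p'_x − b·p'_y) of the first tangent formula, as the text notes.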
This extends immediately to the projective case: the equation of the tangent at the point of projective coordinates (a:b:c) of the projective curve of equation P(x, y, z) = 0 is
{\displaystyle xP'_{x}(a,b,c)+yP'_{y}(a,b,c)+zP'_{z}(a,b,c)=0,}
and the points of the curves that are singular are the points such that
{\displaystyle P'_{x}(a,b,c)=P'_{y}(a,b,c)=P'_{z}(a,b,c)=0.}
(The condition P(a, b, c) = 0 is implied by these conditions, by Euler's homogeneous function theorem.)
=== Asymptotes ===
Every infinite branch of an algebraic curve corresponds to a point at infinity on the curve, that is, a point of the projective completion of the curve that does not belong to its affine part. The corresponding asymptote is the tangent of the curve at that point. The general formula for a tangent to a projective curve may apply, but it is worth making it explicit in this case.
Let
{\displaystyle p=p_{d}+\cdots +p_{0}}
be the decomposition of the polynomial defining the curve into its homogeneous parts, where pi is the sum of the monomials of p of degree i. It follows that
{\displaystyle P={^{h}p}=p_{d}+zp_{d-1}+\cdots +z^{d}p_{0}}
and
{\displaystyle P'_{z}(a,b,0)=p_{d-1}(a,b).}
A point at infinity of the curve is a zero of p of the form (a, b, 0). Equivalently, (a, b) is a zero of pd. The fundamental theorem of algebra implies that, over an algebraically closed field (typically, the field of complex numbers), pd factors into a product of linear factors. Each factor defines a point at infinity on the curve: if bx − ay is such a factor, then it defines the point at infinity (a, b, 0). Over the reals, pd factors into linear and quadratic factors. The irreducible quadratic factors define non-real points at infinity, and the real points are given by the linear factors.
If (a, b, 0) is a point at infinity of the curve, one says that (a, b) is an asymptotic direction. Setting q = pd, the equation of the corresponding asymptote is
{\displaystyle xq'_{x}(a,b)+yq'_{y}(a,b)+p_{d-1}(a,b)=0.}
If
{\displaystyle q'_{x}(a,b)=q'_{y}(a,b)=0}
and
{\displaystyle p_{d-1}(a,b)\neq 0,}
the asymptote is the line at infinity, and, in the real case, the curve has a branch that looks like a parabola. In this case one says that the curve has a parabolic branch. If
{\displaystyle q'_{x}(a,b)=q'_{y}(a,b)=p_{d-1}(a,b)=0,}
the curve has a singular point at infinity and may have several asymptotes. They may be computed by the method of computing the tangent cone of a singular point.
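The recipe above (factor the top-degree part pd, then plug each asymptotic direction into the tangent-like formula) can be carried out mechanically when pd splits into distinct linear factors. The following sketch uses sympy; the hyperbola and the helper function are illustrative choices, not part of the original text.

```python
import sympy as sp

x, y = sp.symbols('x y')
p = x**2 - y**2 - 1   # hyperbola; its asymptotes should be y = x and y = -x

def homogeneous_part(expr, deg):
    """Sum of the monomials of expr of total degree deg."""
    poly = sp.Poly(expr, x, y)
    return sp.sympify(sum(c * x**i * y**j
                          for (i, j), c in poly.terms() if i + j == deg))

d = sp.Poly(p, x, y).total_degree()
q = homogeneous_part(p, d)          # q = p_d, the top-degree part
pd1 = homogeneous_part(p, d - 1)    # p_{d-1}

asymptotes = []
for fac, _ in sp.factor_list(q)[1]:
    # a linear factor b*x - a*y of p_d gives the asymptotic direction (a, b)
    a, b = -fac.coeff(y), fac.coeff(x)
    qx = sp.diff(q, x).subs({x: a, y: b})
    qy = sp.diff(q, y).subs({x: a, y: b})
    asymptotes.append(qx * x + qy * y + pd1.subs({x: a, y: b}))
```

For the hyperbola the two computed lines reduce to y = x and y = −x, as expected.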
=== Singular points ===
The singular points of a curve of degree d defined by a polynomial p(x,y) of degree d are the solutions of the system of equations:
{\displaystyle p'_{x}(x,y)=p'_{y}(x,y)=p(x,y)=0.}
In characteristic zero, this system is equivalent to
{\displaystyle p'_{x}(x,y)=p'_{y}(x,y)=p'_{\infty }(x,y)=0,}
where, with the notation of the preceding section,
{\displaystyle p'_{\infty }(x,y)=P'_{z}(x,y,1).}
The systems are equivalent because of Euler's homogeneous function theorem. The latter system has the advantage of having its third polynomial of degree d-1 instead of d.
Similarly, for a projective curve defined by a homogeneous polynomial P(x,y,z) of degree d, the singular points are the solutions of the system
{\displaystyle P'_{x}(x,y,z)=P'_{y}(x,y,z)=P'_{z}(x,y,z)=0}
as homogeneous coordinates. (In positive characteristic, the equation
{\displaystyle P(x,y,z)=0}
has to be added to the system.)
This implies that the number of singular points is finite, as long as p(x,y) or P(x,y,z) is square free. Bézout's theorem thus implies that the number of singular points is at most (d − 1)2, but this bound is not sharp, because the system of equations is overdetermined. If reducible polynomials are allowed, the sharp bound is d(d − 1)/2; this value is reached when the polynomial factors into linear factors, that is, if the curve is the union of d lines. For irreducible curves and polynomials, the number of singular points is at most (d − 1)(d − 2)/2, because of the formula expressing the genus in terms of the singularities (see below). The maximum is reached by the curves of genus zero all of whose singularities have multiplicity two and distinct tangents (see below).
The equation of the tangents at a singular point is given by the nonzero homogeneous part of the lowest degree in the Taylor series of the polynomial at the singular point. When one changes the coordinates to put the singular point at the origin, the equation of the tangents at the singular point is thus the nonzero homogeneous part of the lowest degree of the polynomial, and the multiplicity of the singular point is the degree of this homogeneous part.
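The system above, and the reading-off of tangents and multiplicity from the lowest-degree homogeneous part, are easy to automate. The following sympy sketch uses a nodal cubic as an illustrative example.

```python
import sympy as sp

x, y = sp.symbols('x y')
p = y**2 - x**3 - x**2          # nodal cubic

# Singular points: common zeros of p, p'_x and p'_y.
singular = sp.solve([p, sp.diff(p, x), sp.diff(p, y)], [x, y], dict=True)
# The only singular point is the origin (x = -2/3 kills the derivatives
# but not p itself).

# Tangents at the origin: nonzero homogeneous part of lowest degree.
terms = sp.Poly(p, x, y).terms()
m = min(i + j for (i, j), _ in terms)                    # multiplicity
cone = sum(c * x**i * y**j for (i, j), c in terms if i + j == m)
# cone = y**2 - x**2 = (y - x)(y + x): two distinct tangents, so a node.
```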
== Analytic structure ==
The study of the analytic structure of an algebraic curve in the neighborhood of a singular point provides accurate information on the topology of singularities. In fact, near a singular point, a real algebraic curve is the union of a finite number of branches that intersect only at the singular point and look like either a cusp or a smooth curve.
Near a regular point, one of the coordinates of the curve may be expressed as an analytic function of the other coordinate. This is a corollary of the analytic implicit function theorem, and implies that the curve is smooth near the point. Near a singular point, the situation is more complicated and involves Puiseux series, which provide analytic parametric equations of the branches.
For describing a singularity, it is convenient to translate the curve so that the singularity is at the origin. This consists of a change of variables of the form
{\displaystyle X=x-a,Y=y-b,}
where
{\displaystyle a,b}
are the coordinates of the singular point. In the following, the singular point under consideration is always supposed to be at the origin.
The equation of an algebraic curve is
{\displaystyle f(x,y)=0,}
where f is a polynomial in x and y. This polynomial may be considered as a polynomial in y, with coefficients in the algebraically closed field of the Puiseux series in x. Thus f may be factored into factors of the form
{\displaystyle y-P(x),}
where P is a Puiseux series. These factors are all different if f is an irreducible polynomial, because this implies that f is square-free, a property which is independent of the field of coefficients.
The Puiseux series that occur here have the form
{\displaystyle P(x)=\sum _{n=n_{0}}^{\infty }a_{n}x^{n/d},}
where d is a positive integer, and
{\displaystyle n_{0}}
is an integer that may also be supposed to be positive, because we consider only the branches of the curve that pass through the origin. Without loss of generality, we may suppose that d is coprime with the greatest common divisor of the n such that
{\displaystyle a_{n}\neq 0}
(otherwise, one could choose a smaller common denominator for the exponents).
Let
{\displaystyle \omega _{d}}
be a primitive dth root of unity. If the above Puiseux series occurs in the factorization of
{\displaystyle f(x,y)=0}
, then the d series
{\displaystyle P_{i}(x)=\sum _{n=n_{0}}^{\infty }a_{n}\omega _{d}^{i}x^{n/d}}
occur also in the factorization (a consequence of Galois theory). These d series are said to be conjugate, and are considered as a single branch of the curve, of ramification index d.
In the case of a real curve, that is, a curve defined by a polynomial with real coefficients, three cases may occur. If no
{\displaystyle P_{i}(x)}
has real coefficients, then one has a non-real branch. If some
{\displaystyle P_{i}(x)}
has real coefficients, then one may choose it as
{\displaystyle P_{0}(x)}
. If d is odd, then every real value of x provides a real value of
{\displaystyle P_{0}(x)}
, and one has a real branch that looks regular, although it is singular if d > 1. If d is even, then
{\displaystyle P_{0}(x)}
and
{\displaystyle P_{d/2}(x)}
have real values, but only for x ≥ 0. In this case, the real branch looks like a cusp (or is a cusp, depending on the definition of a cusp that is used).
For example, the ordinary cusp has only one branch. If it is defined by the equation
{\displaystyle y^{2}-x^{3}=0,}
then the factorization is
{\displaystyle (y-x^{3/2})(y+x^{3/2});}
the ramification index is 2, and the two factors are real and each define a half branch. If the cusp is rotated, its equation becomes
{\displaystyle y^{3}-x^{2}=0,}
and the factorization is
{\displaystyle (y-x^{2/3})(y-j^{2}x^{2/3})(y-(j^{2})^{2}x^{2/3}),}
with
{\displaystyle j=(1+{\sqrt {-3}})/2}
(the coefficient
{\displaystyle (j^{2})^{2}}
has not been simplified to j, to show how the above definition of
{\displaystyle P_{i}(x)}
is specialized). Here the ramification index is 3, and only one factor is real; this shows that, in the first case, the two factors must be considered as defining the same branch.
== Non-plane algebraic curves ==
An algebraic curve is an algebraic variety of dimension one. This implies that an affine curve in an affine space of dimension n is defined by, at least, n − 1 polynomials in n variables. To define a curve, these polynomials must generate a prime ideal of Krull dimension 1. This condition is not easy to test in practice. Therefore, the following way to represent non-plane curves may be preferred.
Let
{\displaystyle f,g_{0},g_{3},\ldots ,g_{n}}
be n polynomials in two variables x1 and x2 such that f is irreducible. The points in the affine space of dimension n whose coordinates satisfy the equations and inequations
{\displaystyle {\begin{aligned}&f(x_{1},x_{2})=0\\&g_{0}(x_{1},x_{2})\neq 0\\x_{3}&={\frac {g_{3}(x_{1},x_{2})}{g_{0}(x_{1},x_{2})}}\\&{}\ \vdots \\x_{n}&={\frac {g_{n}(x_{1},x_{2})}{g_{0}(x_{1},x_{2})}}\end{aligned}}}
are all the points of an algebraic curve from which a finite number of points have been removed. This curve is defined by a system of generators of the ideal of the polynomials h such that there exists an integer k such that
{\displaystyle g_{0}^{k}h}
belongs to the ideal generated by
{\displaystyle f,x_{3}g_{0}-g_{3},\ldots ,x_{n}g_{0}-g_{n}}.
This representation is a birational equivalence between the curve and the plane curve defined by f. Every algebraic curve may be represented in this way. However, a linear change of variables may be needed to make the projection onto the first two variables almost always injective. When a change of variables is needed, almost every change is suitable, provided it is defined over an infinite field.
This representation allows us to deduce easily any property of a non-plane algebraic curve, including its graphical representation, from the corresponding property of its plane projection.
For a curve defined by its implicit equations, the above representation of the curve may easily be deduced from a Gröbner basis for a block ordering such that the block of the smaller variables is (x1, x2). The polynomial f is the unique polynomial in the basis that depends only on x1 and x2. The fractions gi/g0 are obtained by choosing, for i = 3, ..., n, a polynomial in the basis that is linear in xi and depends only on x1, x2 and xi. If these choices are not possible, this means either that the equations define an algebraic set that is not a variety, or that the variety is not of dimension one, or that one must change coordinates. The latter case occurs when f exists and is unique, and, for i = 3, …, n, there exist polynomials whose leading monomial depends only on x1, x2 and xi.
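This extraction from a Gröbner basis can be tried on a small example. The following sympy sketch uses the twisted cubic, with a plain lex order (larger variables first) standing in for the block ordering described above; the specific ideal is an illustrative choice.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The twisted cubic, given by implicit equations in 3-space.
G = sp.groebner([y - x**2, z - x**3], z, y, x, order='lex')

# f: the unique basis element depending only on the small variables x, y
# (the plane projection of the curve, here the parabola y = x^2).
f = [g for g in G.exprs if not g.has(z)][0]

# An element linear in z and depending only on x, y, z gives z = g3/g0.
gz = [g for g in G.exprs if g.has(z)][0]
```

As a sanity check, the ideal also contains x·z − y², another classical equation of the twisted cubic.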
== Algebraic function fields ==
The study of algebraic curves can be reduced to the study of irreducible algebraic curves: those curves that cannot be written as the union of two smaller curves. Up to birational equivalence, the irreducible curves over a field F are categorically equivalent to algebraic function fields in one variable over F. Such an algebraic function field is a field extension K of F that contains an element x which is transcendental over F, and such that K is a finite algebraic extension of F(x), which is the field of rational functions in the indeterminate x over F.
For example, consider the field C of complex numbers, over which we may define the field C(x) of rational functions in C. If y2 = x3 − x − 1, then the field C(x, y) is an elliptic function field. The element x is not uniquely determined; the field can also be regarded, for instance, as an extension of C(y). The algebraic curve corresponding to the function field is simply the set of points (x, y) in C2 satisfying y2 = x3 − x − 1.
If the field F is not algebraically closed, the point of view of function fields is a little more general than that of considering the locus of points, since we include, for instance, "curves" with no points on them. For example, if the base field F is the field R of real numbers, then x2 + y2 = −1 defines an algebraic extension field of R(x), but the corresponding curve considered as a subset of R2 has no points. The equation x2 + y2 = −1 does define an irreducible algebraic curve over R in the scheme sense (an integral, separated, one-dimensional scheme of finite type over R). In this sense, the one-to-one correspondence between irreducible algebraic curves over F (up to birational equivalence) and algebraic function fields in one variable over F holds in general.
Two curves can be birationally equivalent (i.e. have isomorphic function fields) without being isomorphic as curves. The situation becomes easier when dealing with nonsingular curves, i.e. those that lack any singularities. Two nonsingular projective curves over a field are isomorphic if and only if their function fields are isomorphic.
Tsen's theorem is about the function field of an algebraic curve over an algebraically closed field.
== Complex curves and real surfaces ==
A complex projective algebraic curve resides in n-dimensional complex projective space CPn. This has complex dimension n, but topological dimension, as a real manifold, 2n, and is compact, connected, and orientable. An algebraic curve over C likewise has topological dimension two; in other words, it is a surface.
The topological genus of this surface, that is, the number of handles or donut holes, is equal to the geometric genus of the algebraic curve, which may be computed by algebraic means. In short, if one considers a plane projection of a nonsingular curve that has degree d and only ordinary singularities (singularities of multiplicity two with distinct tangents), then the genus is (d − 1)(d − 2)/2 − k, where k is the number of these singularities.
=== Compact Riemann surfaces ===
A Riemann surface is a connected complex analytic manifold of one complex dimension, which makes it a connected real manifold of two dimensions. It is compact if it is compact as a topological space.
There is a triple equivalence of categories between the category of smooth irreducible projective algebraic curves over C (with non-constant regular maps as morphisms), the category of compact Riemann surfaces (with non-constant holomorphic maps as morphisms), and the opposite of the category of algebraic function fields in one variable over C (with field homomorphisms that fix C as morphisms). This means that in studying these three subjects we are in a sense studying one and the same thing. It allows complex analytic methods to be used in algebraic geometry, and algebraic-geometric methods in complex analysis and field-theoretic methods to be used in both. This is characteristic of a much wider class of problems in algebraic geometry.
See also algebraic geometry and analytic geometry for a more general theory.
== Singularities ==
Using the intrinsic concept of tangent space, points P on an algebraic curve C are classified as smooth (synonymous: non-singular), or else singular. Given n − 1 homogeneous polynomials in n + 1 variables, we may find the Jacobian matrix as the (n − 1)×(n + 1) matrix of the partial derivatives. If the rank of this matrix is n − 1, then the polynomials define an algebraic curve (otherwise they define an algebraic variety of higher dimension). If the rank remains n − 1 when the Jacobian matrix is evaluated at a point P on the curve, then the point is a smooth or regular point; otherwise it is a singular point. In particular, if the curve is a plane projective algebraic curve, defined by a single homogeneous polynomial equation f(x,y,z) = 0, then the singular points are precisely the points P where the rank of the 1×(n + 1) matrix is zero, that is, where
{\displaystyle {\frac {\partial f}{\partial x}}(P)={\frac {\partial f}{\partial y}}(P)={\frac {\partial f}{\partial z}}(P)=0.}
Since f is a polynomial, this definition is purely algebraic and makes no assumption about the nature of the field F, which in particular need not be the real or complex numbers. It should, of course, be recalled that (0,0,0) is not a point of the curve and hence not a singular point.
Similarly, for an affine algebraic curve defined by a single polynomial equation f(x,y) = 0, the singular points are precisely the points P of the curve where the rank of the 1×n Jacobian matrix is zero, that is, where
{\displaystyle f(P)={\frac {\partial f}{\partial x}}(P)={\frac {\partial f}{\partial y}}(P)=0.}
The singularities of a curve are not birational invariants. However, locating and classifying the singularities of a curve is one way of computing the genus, which is a birational invariant. For this to work, we should consider the curve projectively and require F to be algebraically closed, so that all the singularities which belong to the curve are considered.
=== Classification of singularities ===
Singular points include multiple points where the curve crosses over itself, and also various types of cusp, for example that shown by the curve with equation x3 = y2 at (0,0).
A curve C has at most a finite number of singular points. If it has none, it can be called smooth or non-singular. Commonly, this definition is understood over an algebraically closed field and for a curve C in a projective space (i.e., complete in the sense of algebraic geometry). For example, the plane curve of equation
{\displaystyle y-x^{3}=0}
is considered singular, as it has a singular point (a cusp) at infinity.
In the remainder of this section, one considers a plane curve C defined as the zero set of a bivariate polynomial f(x, y). Some of the results, but not all, may be generalized to non-plane curves.
The singular points are classified by means of several invariants. The multiplicity m is defined as the maximum integer such that all derivatives of f of order up to m − 1 vanish at P (it is also the minimal intersection number between the curve and a straight line at P).
Intuitively, a singular point has delta invariant δ if it concentrates δ ordinary double points at P. To make this precise, the blow up process produces so-called infinitely near points, and summing m(m − 1)/2 over the infinitely near points, where m is their multiplicity, produces δ.
For an irreducible and reduced curve and a point P we can define δ algebraically as the length of
{\displaystyle {\widetilde {\mathcal {O}}}_{P}/{\mathcal {O}}_{P}}
where
{\displaystyle {\mathcal {O}}_{P}}
is the local ring at P and
{\displaystyle {\widetilde {\mathcal {O}}}_{P}}
is its integral closure.
The Milnor number μ of a singularity is the degree of the mapping grad f(x,y)/|grad f(x,y)| on the small sphere of radius ε, in the sense of the topological degree of a continuous mapping, where grad f is the (complex) gradient vector field of f. It is related to δ and r by the Milnor–Jung formula, μ = 2δ − r + 1.
Here, the branching number r of P is the number of locally irreducible branches at P. For example, r = 1 at an ordinary cusp, and r = 2 at an ordinary double point. The multiplicity m is at least r, and P is singular if and only if m is at least 2. Moreover, δ is at least m(m − 1)/2.
Computing the delta invariants of all of the singularities allows the genus g of the curve to be determined; if d is the degree, then
{\displaystyle g={\frac {1}{2}}(d-1)(d-2)-\sum _{P}\delta _{P},}
where the sum is taken over all singular points P of the complex projective plane curve. It is called the genus formula.
Assign the invariants [m, δ, r] to a singularity, where m is the multiplicity, δ is the delta-invariant, and r is the branching number. Then an ordinary cusp is a point with invariants [2,1,1] and an ordinary double point is a point with invariants [2,1,2], and an ordinary m-multiple point is a point with invariants [m, m(m − 1)/2, m].
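The genus formula lends itself to a tiny helper; the function name below is our own, and the sample values follow directly from the invariants listed above.

```python
def genus(d, deltas=()):
    """Genus of a degree-d plane curve whose singular points have the
    given delta invariants (the genus formula above)."""
    return (d - 1) * (d - 2) // 2 - sum(deltas)

# A smooth quartic has genus 3; a cubic with one ordinary double point
# (delta = 1) has genus 0, i.e. it is rational.
assert genus(4) == 3
assert genus(3, (1,)) == 0

# An ordinary m-multiple point has delta = m(m - 1)/2, so a quintic with
# one ordinary triple point (delta = 3) has genus 6 - 3 = 3.
assert genus(5, (3,)) == 3
```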
== Examples of curves ==
=== Rational curves ===
A rational curve, also called a unicursal curve, is any curve which is birationally equivalent to a line, which we may take to be a projective line; accordingly, we may identify the function field of the curve with the field of rational functions in one indeterminate F(x). If F is algebraically closed, this is equivalent to a curve of genus zero; however, the field of all real algebraic functions defined on the real algebraic variety x2 + y2 = −1 is a field of genus zero which is not a rational function field.
Concretely, a rational curve embedded in an affine space of dimension n over F can be parameterized (except for isolated exceptional points) by means of n rational functions of a single parameter t; by reducing these rational functions to the same denominator, the n+1 resulting polynomials define a polynomial parametrization of the projective completion of the curve in the projective space. An example is the
rational normal curve, where all these polynomials are monomials.
Any conic section defined over F with a rational point in F is a rational curve. It can be parameterized by drawing a line with slope t through the rational point and taking its second intersection with the conic; this gives a polynomial with F-rational coefficients and one F-rational root, hence the other root is F-rational (i.e., belongs to F) as well.
For example, consider the ellipse x2 + xy + y2 = 1, where (−1, 0) is a rational point. Drawing a line with slope t from (−1,0), y = t(x + 1), substituting it in the equation of the ellipse, factoring, and solving for x, we obtain
{\displaystyle x={\frac {1-t^{2}}{1+t+t^{2}}}.}
Then the equation for y is
{\displaystyle y=t(x+1)={\frac {t(t+2)}{1+t+t^{2}}}\,,}
which defines a rational parameterization of the ellipse, and hence shows that the ellipse is a rational curve. All points of the ellipse are given, except for (−1,1), which corresponds to t = ∞; the entire curve is therefore parameterized by the real projective line.
Such a rational parameterization may be considered in the projective space by equating the first projective coordinates to the numerators of the parameterization and the last one to the common denominator. As the parameter is defined in a projective line, the polynomials in the parameter should be homogenized. For example, the projective parameterization of the above ellipse is
{\displaystyle X=U^{2}-T^{2},\quad Y=T\,(T+2\,U),\quad Z=T^{2}+TU+U^{2}.}
Eliminating T and U between these equations we get again the projective equation of the ellipse
{\displaystyle X^{2}+X\,Y+Y^{2}=Z^{2},}
which may be easily obtained directly by homogenizing the above equation.
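Both the affine and the projective parameterizations can be verified symbolically; a quick sympy check (variable names are ours):

```python
import sympy as sp

t, T, U = sp.symbols('t T U')

# Affine parameterization of the ellipse x^2 + x*y + y^2 = 1.
xt = (1 - t**2) / (1 + t + t**2)
yt = t * (t + 2) / (1 + t + t**2)
assert sp.simplify(xt**2 + xt * yt + yt**2 - 1) == 0

# Projective version, obtained by homogenizing with t = T/U.
X, Y, Z = U**2 - T**2, T * (T + 2 * U), T**2 + T * U + U**2
assert sp.expand(X**2 + X * Y + Y**2 - Z**2) == 0
```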
Many of the curves on Wikipedia's list of curves are rational and hence have similar rational parameterizations.
=== Rational plane curves ===
Rational plane curves are rational curves embedded into
{\displaystyle \mathbb {P} ^{2}}
. Given generic sections
{\displaystyle s_{1},s_{2},s_{3}\in \Gamma (\mathbb {P} ^{1},{\mathcal {O}}(d))}
of degree
{\displaystyle d}
homogeneous polynomials in two coordinates,
{\displaystyle x,y}
, there is a map
{\displaystyle s:\mathbb {P} ^{1}\to \mathbb {P} ^{2}}
given by
{\displaystyle s([x:y])=[s_{1}([x:y]):s_{2}([x:y]):s_{3}([x:y])]}
defining a rational plane curve of degree
{\displaystyle d}
. There is an associated moduli space
{\displaystyle {\mathcal {M}}={\overline {\mathcal {M}}}_{0,0}(\mathbb {P} ^{2},d\cdot [H])}
(where
{\displaystyle [H]}
is the hyperplane class) parametrizing all such stable curves. A dimension count can be made to determine the moduli space's dimension: there are
{\displaystyle d+1}
parameters in
{\displaystyle \Gamma (\mathbb {P} ^{1},{\mathcal {O}}(d))}
giving
{\displaystyle 3d+3}
parameters total for each of the sections. Then, since they are considered up to a projective quotient in
{\displaystyle \mathbb {P} ^{2}}
there is
{\displaystyle 1}
less parameter in
{\displaystyle {\mathcal {M}}}
. Furthermore, there is a three-dimensional group of automorphisms of
{\displaystyle \mathbb {P} ^{1}}
, hence
{\displaystyle {\mathcal {M}}}
has dimension
{\displaystyle 3d+3-1-3=3d-1}
. This moduli space can be used to count the number
{\displaystyle N_{d}}
of degree
{\displaystyle d}
rational plane curves intersecting
{\displaystyle 3d-1}
points using Gromov–Witten theory. It is given by the recursive relation
{\displaystyle N_{d}=\sum _{d_{A}+d_{B}=d}N_{d_{A}}N_{d_{B}}d_{A}^{2}d_{B}\left(d_{B}{\binom {3d-4}{3d_{A}-2}}-d_{A}{\binom {3d-4}{3d_{A}-1}}\right)}
where
{\displaystyle N_{1}=N_{2}=1}.
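The recursion is straightforward to evaluate; a short memoized sketch (the function name is ours) reproduces the classical values N3 = 12 (nodal cubics through 8 general points), N4 = 620 and N5 = 87304.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def N(d):
    """Number of degree-d rational plane curves through 3d - 1 general
    points, via Kontsevich's recursion."""
    if d in (1, 2):
        return 1
    return sum(
        N(dA) * N(d - dA) * dA**2 * (d - dA)
        * ((d - dA) * comb(3*d - 4, 3*dA - 2) - dA * comb(3*d - 4, 3*dA - 1))
        for dA in range(1, d)          # dB = d - dA
    )

assert N(3) == 12 and N(4) == 620 and N(5) == 87304
```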
=== Elliptic curves ===
An elliptic curve may be defined as any curve of genus one with a rational point: a common model is a nonsingular cubic curve, which suffices to model any genus-one curve. In this model, the distinguished point is commonly taken to be an inflection point at infinity; this amounts to requiring that the curve can be written in Tate–Weierstrass form, which in its projective version is
{\displaystyle y^{2}z+a_{1}xyz+a_{3}yz^{2}=x^{3}+a_{2}x^{2}z+a_{4}xz^{2}+a_{6}z^{3}.}
If the characteristic of the field is different from 2 and 3, then a linear change of coordinates allows putting
{\displaystyle a_{1}=a_{2}=a_{3}=0,}
which gives the classical Weierstrass form
{\displaystyle y^{2}=x^{3}+px+q.}
Elliptic curves carry the structure of an abelian group with the distinguished point as the identity of the group law. In a plane cubic model three points sum to zero in the group if and only if they are collinear. For an elliptic curve defined over the complex numbers the group is isomorphic to the additive group of the complex plane modulo the period lattice of the corresponding elliptic functions.
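The chord-and-tangent group law on the short Weierstrass form can be written out explicitly. A minimal sketch over the rationals (the helper is our own and ignores some degenerate configurations; None stands for the point at infinity):

```python
from fractions import Fraction

def ec_add(P, Q, a):
    """Group law on y^2 = x^3 + a*x + b (b cancels out of the formulas).
    None represents the point at infinity, the identity element."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:          # vertical chord: P + (-P) = O
        return None
    if P == Q:                          # tangent line at P
        lam = Fraction(3 * x1**2 + a, 2 * y1)
    else:                               # chord through P and Q
        lam = Fraction(y2 - y1) / (x2 - x1)
    x3 = lam**2 - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

# On y^2 = x^3 + 1 (a = 0): (0, 1) + (2, 3) = (-1, 0), and 2*(2, 3) = (0, 1).
assert ec_add((0, 1), (2, 3), 0) == (-1, 0)
assert ec_add((2, 3), (2, 3), 0) == (0, 1)
```

The three points (0, 1), (2, 3) and (−1, 0) used above are collinear up to the reflection built into the law, illustrating the "three collinear points sum to zero" rule.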
The intersection of two quadric surfaces is, in general, a nonsingular curve of genus one and degree four, and thus an elliptic curve, if it has a rational point. In special cases, the intersection may be a rational singular quartic or may decompose into curves of smaller degrees, which are not always distinct (either a cubic curve and a line, or two conics, or a conic and two lines, or four lines).
=== Curves of genus greater than one ===
Curves of genus greater than one differ markedly from both rational and elliptic curves. Such curves defined over the rational numbers, by Faltings's theorem, can have only a finite number of rational points, and they may be viewed as having a hyperbolic geometry structure. Examples are the hyperelliptic curves, the Klein quartic curve, and the Fermat curve xn + yn = zn when n is greater than three. Also projective plane curves in
{\displaystyle \mathbb {P} ^{2}}
and curves in
{\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}}
provide many useful examples.
==== Projective plane curves ====
Plane curves
{\displaystyle C\subset \mathbb {P} ^{2}}
of degree
{\displaystyle k}
, which can be constructed as the vanishing locus of a generic section
{\displaystyle s\in \Gamma (\mathbb {P} ^{2},{\mathcal {O}}(k))}
, have genus
{\displaystyle {\frac {(k-1)(k-2)}{2}}}
which can be computed using coherent sheaf cohomology. Here is a brief summary of the curves' genera relative to their degree k:
k: 1 2 3 4 5 6 7
genus: 0 0 1 3 6 10 15
For example, the curve
{\displaystyle x^{4}+y^{4}+z^{4}}
defines a curve of genus
{\displaystyle 3}
which is smooth since the differentials
{\displaystyle 4x^{3},4y^{3},4z^{3}}
have no common zeros with the curve. A non-example of a generic section is the curve
{\displaystyle x(x^{2}+y^{2}+z^{2})}
which, by Bézout's theorem, has components meeting in at most
{\displaystyle 2}
points; it is the union of two rational curves
{\displaystyle C_{1}\cup C_{2}}
intersecting at two points. Note
{\displaystyle C_{1}}
is given by the vanishing locus of
{\displaystyle x}
and
{\displaystyle C_{2}}
is given by the vanishing locus of
{\displaystyle x^{2}+y^{2}+z^{2}}
. These can be found explicitly: a point lies in both if
{\displaystyle x=0}
. So the two solutions are the points
{\displaystyle [0:y:z]}
such that
{\displaystyle y^{2}+z^{2}=0}
, which are
{\displaystyle [0:1:-{\sqrt {-1}}]}
and
{\displaystyle [0:1:{\sqrt {-1}}]}.
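The smoothness claim for the Fermat quartic can be checked mechanically: the partial derivatives vanish simultaneously only at the origin, which is not a point of the projective plane. A sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**4 + y**4 + z**4

# The curve f = 0 is smooth iff the partials have no common projective zero.
grads = [sp.diff(f, v) for v in (x, y, z)]
common = sp.solve(grads, [x, y, z], dict=True)
# The only common zero is (0, 0, 0), which is not a projective point.
assert common == [{x: 0, y: 0, z: 0}]

# Genus check: (k - 1)(k - 2)/2 with k = 4 gives 3.
k = 4
assert (k - 1) * (k - 2) // 2 == 3
```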
==== Curves in product of projective lines ====
Curves
{\displaystyle C\subset \mathbb {P} ^{1}\times \mathbb {P} ^{1}}
given by the vanishing locus of
{\displaystyle s\in \Gamma (\mathbb {P} ^{1}\times \mathbb {P} ^{1},{\mathcal {O}}(a,b))}
, for
{\displaystyle a,b\geq 2}
, give curves of genus
{\displaystyle ab-a-b+1}
which can be checked using coherent sheaf cohomology. If
{\displaystyle a=2}
, then they define curves of genus
{\displaystyle 2b-2-b+1=b-1}
, hence a curve of any genus can be constructed as a curve in
{\displaystyle \mathbb {P} ^{1}\times \mathbb {P} ^{1}}
. Their genera, equal to ab − a − b + 1 = (a − 1)(b − 1), are easy to tabulate: for a = 2 the genus is b − 1, and for a = 3 it is 2(b − 1).
== See also ==
=== Classical algebraic geometry ===
=== Modern algebraic geometry ===
=== Geometry of Riemann surfaces ===
== Notes ==
== References ==
Brieskorn, Egbert; Knörrer, Horst (2013). Plane Algebraic Curves. Translated by Stillwell, John. Birkhäuser. ISBN 978-3-0348-5097-1.
Chevalley, Claude (1951). Introduction to the Theory of Algebraic Functions of One Variable. Mathematical surveys. Vol. 6. American Mathematical Society. ISBN 978-0-8218-1506-9.
Coolidge, Julian L. (2004) [1931]. A Treatise on Algebraic Plane Curves. Dover. ISBN 978-0-486-49576-7.
Farkas, H. M.; Kra, I. (2012) [1980]. Riemann Surfaces. Graduate Texts in Mathematics. Vol. 71. Springer. ISBN 978-1-4684-9930-8.
Fulton, William (1989). Algebraic Curves: An Introduction to Algebraic Geometry. Mathematics lecture note series. Vol. 30 (3rd ed.). Addison-Wesley. ISBN 978-0-201-51010-2.
Gibson, C.G. (1998). Elementary Geometry of Algebraic Curves: An Undergraduate Introduction. Cambridge University Press. ISBN 978-0-521-64641-3.
Griffiths, Phillip A. (1985). Introduction to Algebraic Curves. Translation of Mathematical Monographs. Vol. 70 (3rd ed.). American Mathematical Society. ISBN 9780821845370.
Hartshorne, Robin (2013) [1977]. Algebraic Geometry. Graduate Texts in Mathematics. Vol. 52. Springer. ISBN 978-1-4757-3849-0.
Iitaka, Shigeru (2011) [1982]. Algebraic Geometry: An Introduction to Birational Geometry of Algebraic Varieties. Graduate Texts in Mathematics. Vol. 76. Springer New York. ISBN 978-1-4613-8121-1.
Milnor, John (1968). Singular Points of Complex Hypersurfaces. Princeton University Press. ISBN 0-691-08065-8.
Serre, Jean-Pierre (2012) [1988]. Algebraic Groups and Class Fields. Graduate Texts in Mathematics. Vol. 117. Springer. ISBN 978-1-4612-1035-1.
Kötter, Ernst (1887). "Grundzüge einer rein geometrischen Theorie der algebraischen ebenen Curven" [Fundamentals of a purely geometrical theory of algebraic plane curves]. Transactions of the Royal Academy of Berlin. — gained the 1886 Academy prize
In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.
As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + x/4 = 15. This is solved by false position. First, guess that x = 4 to obtain, on the left, 4 + 4/4 = 5. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value which is three times too small. To compensate, multiply x (currently set to 4) by 3 and substitute again to get 12 + 12/4 = 15, verifying that the solution is x = 12.
Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained, and if it can, how fast can the approximation be found.
== Two historical types ==
Two basic types of false position method can be distinguished historically, simple false position and double false position.
Simple false position is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine x such that
$$ax = b,$$
if a and b are known. The method begins by using a test input value x′ and finding the corresponding output value b′ by multiplication: ax′ = b′. The correct answer is then found by proportional adjustment, x = (b/b′)·x′.
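The proportional adjustment can be sketched in C (a minimal illustration; the function name and test values are our own, not from any historical source):

```c
/* Simple false position for a*x = b: try a test input x_test, then
   rescale it in proportion to the output it produced. */
double simple_false_position(double a, double b, double x_test) {
    double b_test = a * x_test;   /* output of the "false" guess  */
    return x_test * b / b_test;   /* adjustment x = (b/b') * x'   */
}
```

Applied to Rhind papyrus problem 26 (a = 5/4, b = 15, guess x′ = 4, so b′ = 5), this yields 4 · 15/5 = 12, the solution worked out above.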
Double false position is aimed at solving more difficult problems that can be written algebraically in the form: determine x such that
$$f(x) = ax + c = 0,$$
if it is known that
$$f(x_1) = b_1, \qquad f(x_2) = b_2.$$
Double false position is mathematically equivalent to linear interpolation. By using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm given by,
$$x = \frac{b_1 x_2 - b_2 x_1}{b_1 - b_2},$$
would be memorized and carried out by rote. Robert Recorde gives a verse form of the rule in his Ground of Artes (c. 1542).
For an affine linear function,
$$f(x) = ax + c,$$
double false position provides the exact solution, while for a nonlinear function f it provides an approximation that can be successively improved by iteration.
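The classical formula translates directly into code; the following sketch (names and test values our own) returns the exact root when f is affine:

```c
/* Double false position: given two guesses x1, x2 with residuals
   b1 = f(x1), b2 = f(x2), return the classical weighted estimate
   x = (b1*x2 - b2*x1) / (b1 - b2). Exact when f(x) = a*x + c. */
double double_false_position(double x1, double b1, double x2, double b2) {
    return (b1 * x2 - b2 * x1) / (b1 - b2);
}
```

For example, for f(x) = 2x − 6 with guesses x1 = 0 (b1 = −6) and x2 = 5 (b2 = 4), the estimate is (−6·5 − 4·0)/(−6 − 4) = 3, the exact root.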
== History ==
The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics.
Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art (九章算術), dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition:
Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.
Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the Book of the Two Errors (Kitāb al-khaṭāʾayn). The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as hisāb al-khaṭāʾayn ("reckoning by two errors"). It was used for centuries to solve practical problems such as commercial and juridical questions (estate partitions according to rules of Quranic inheritance), as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin.
Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book Liber Abaci (AD 1202) to explaining and demonstrating the uses of double false position, terming the method regulis elchatayn after the al-khaṭāʾayn method that he had learned from Arab sources. In 1494, Pacioli used the term el cataym in his book Summa de arithmetica, probably taking the term from Fibonacci. Other European writers would follow Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556. Pacioli's term nearly disappeared in the 16th century European works and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". Regula Falsi appears as the Latinized version of Rule of False as early as 1690.
Several 16th century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth. For instance, in 1568 Humphrey Baker says:
The Rule of falsehoode is so named not for that it teacheth anye deceyte or falsehoode, but that by fayned numbers taken at all aduentures, it teacheth to finde out the true number that is demaunded, and this of all the vulgar Rules which are in practise) is ye most excellence.
== Numerical analysis ==
The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques.
Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. That trial-and-error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated-estimate and regula falsi provides one of these.
Given an equation, move all of its terms to one side so that it has the form, f (x) = 0, where f is some function of the unknown variable x. A value c that satisfies this equation, that is, f (c) = 0, is called a root or zero of the function f and is a solution of the original equation. If f is a continuous function and there exist two points a0 and b0 such that f (a0) and f (b0) are of opposite signs, then, by the intermediate value theorem, the function f has a root in the interval (a0, b0).
There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and it may be computationally costly since it requires a computation of the function's derivative. Other methods are needed and one general class of methods are the two-point bracketing methods. These methods proceed by producing a sequence of shrinking intervals [ak, bk], at the kth step, such that (ak, bk) contains a root of f.
=== Two-point bracketing methods ===
These methods start with two x-values, initially found by trial-and-error, at which f (x) has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where f (x) has the sign opposite that of f (c). In the improbable case that f (c) = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.
The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate.
Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root finding methods such as Newton's method or the secant method.
The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k, the current bracketing interval is [ak, bk], then the new solution estimate ck is obtained by,
$$c_k = \frac{a_k + b_k}{2}.$$
This ensures that ck is between ak and bk, thereby guaranteeing convergence toward the solution.
Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every 3 iterations the method gains approximately a factor of $2^3$, i.e. roughly one decimal place, in accuracy.
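A minimal bisection sketch in C (assumed helper names; the sample target function is our own choice):

```c
#include <math.h>

/* Bisection: repeatedly halve a bracket [a, b] on which f changes
   sign, keeping whichever half still brackets the root. */
double bisect(double (*f)(double), double a, double b, double tol) {
    double fa = f(a);
    while (b - a > tol) {
        double c = 0.5 * (a + b);
        double fc = f(c);
        if (fc == 0.0)
            return c;                       /* exact root found      */
        if ((fc < 0.0) == (fa < 0.0)) {
            a = c;  fa = fc;                /* root lies in [c, b]   */
        } else {
            b = c;                          /* root lies in [a, c]   */
        }
    }
    return 0.5 * (a + b);
}

/* Sample target: f(x) = x*x - 2, whose positive root is sqrt(2). */
double f_sample(double x) { return x * x - 2.0; }
```

Calling `bisect(f_sample, 1.0, 2.0, 1e-9)` narrows the bracket until it is shorter than the tolerance, illustrating the halving-per-step behavior described above.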
=== The regula falsi (false position) method ===
The convergence rate of the bisection method could possibly be improved by using a different solution estimate.
The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment.
More precisely, suppose that in the k-th iteration the bracketing interval is (ak, bk). Construct the line through the points (ak, f (ak)) and (bk, f (bk)), as illustrated. This line is a secant or chord of the graph of the function f. In point-slope form, its equation is given by
$$y - f(b_k) = \frac{f(b_k) - f(a_k)}{b_k - a_k}(x - b_k).$$
Now choose ck to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain
$$f(b_k) + \frac{f(b_k) - f(a_k)}{b_k - a_k}(c_k - b_k) = 0.$$
Solving this equation for ck gives:
$$c_k = b_k - f(b_k)\,\frac{b_k - a_k}{f(b_k) - f(a_k)} = \frac{a_k f(b_k) - b_k f(a_k)}{f(b_k) - f(a_k)}.$$
This last symmetrical form has a computational advantage:
As a solution is approached, ak and bk will be very close together, and nearly always of the same sign. Such a subtraction can lose significant digits. Because f (bk) and f (ak) are always of opposite sign the “subtraction” in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too).
At iteration number k, the number ck is calculated as above and then, if f (ak) and f (ck) have the same sign, set ak + 1 = ck and bk + 1 = bk, otherwise set ak + 1 = ak and bk + 1 = ck. This process is repeated until the root is approximated sufficiently well.
The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge.
The fact that regula falsi always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method.
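The iteration described above can be sketched in C (a minimal illustration using the symmetrical update formula; names and the sample function are our own):

```c
#include <math.h>

/* Plain regula falsi: compute c = (a*f(b) - b*f(a)) / (f(b) - f(a))
   and replace the endpoint whose function value shares the sign of
   f(c), so the bracket always contains a root. */
double regula_falsi(double (*f)(double), double a, double b, int steps) {
    double fa = f(a), fb = f(b);
    double c = a;
    for (int i = 0; i < steps; ++i) {
        c = (a * fb - b * fa) / (fb - fa);
        double fc = f(c);
        if (fc == 0.0)
            break;                          /* exact root found     */
        if ((fc < 0.0) == (fa < 0.0)) {
            a = c;  fa = fc;                /* c replaces end a     */
        } else {
            b = c;  fb = fc;                /* c replaces end b     */
        }
    }
    return c;
}

/* Sample target bracketed by [1, 2]: f(x) = x*x - 2. */
double g_sample(double x) { return x * x - 2.0; }
```

On this convex sample the endpoint b = 2 stays fixed (as the Analysis section below explains for functions without an inflection point), while the other endpoint converges linearly to the root.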
== Analysis ==
Since the initial end-points a0 and b0 are chosen such that f (a0) and f (b0) are of opposite signs, at each step one of the end-points will get closer to a root of f. If the second derivative of f is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where f also has the same sign) will remain fixed for all subsequent iterations while the converging endpoint becomes updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(f ) = −sign(f ″)). As a consequence, the linear approximation to f (x), which is used to pick the false position, does not improve as rapidly as possible.
One example of this phenomenon is the function
$$f(x) = 2x^3 - 4x^2 + 3x$$
on the initial bracket [−1, 1]. The left end, −1, is never replaced (it does not change at first, and after the first three iterations f ″ is negative on the interval) and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (the number of accurate digits grows linearly, with a rate of convergence of 2/3).
For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at x = 0 for 1/x or the sign function). In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point (for example at x = 0 for the function given by f (x) = abs(x) − x2 when x ≠ 0 and by f (0) = 5, starting with the interval [-0.5, 3.0]).
It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change, for example at x = ±1 in
$$f(x) = \frac{1}{(x-1)^2} + \frac{1}{(x+1)^2}.$$
The method of bisection avoids this hypothetical convergence problem.
== Improvements in regula falsi ==
Though regula falsi always converges, usually considerably faster than bisection, there are situations that can slow its convergence – sometimes to a prohibitive degree. That problem isn't unique to regula falsi: Other than bisection, all of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method diverge instead of converging – and often do so under the same conditions that slow regula falsi's convergence.
Still, regula falsi is one of the best methods, and even in its original, unimproved version it would often be the best choice; for example, when Newton's method isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's method and successive substitutions have failed to converge.
Regula falsi's failure mode is easy to detect: The same end-point is retained twice in a row. The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to regula falsi have been proposed; two of them, the Illinois algorithm and the Anderson–Björk algorithm, are described below.
=== The Illinois algorithm ===
The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f (ck)) has the same sign as the previous one (f (ck − 1)), meaning that the end point of the previous step will be retained. Hence:
$$c_k = \frac{\tfrac{1}{2} f(b_k)\, a_k - f(a_k)\, b_k}{\tfrac{1}{2} f(b_k) - f(a_k)}$$
or
$$c_k = \frac{f(b_k)\, a_k - \tfrac{1}{2} f(a_k)\, b_k}{f(b_k) - \tfrac{1}{2} f(a_k)},$$
down-weighting one of the endpoint values to force the next ck to occur on that side of the function. The factor 1/2 used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.
The above adjustment to regula falsi is called the Illinois algorithm by some scholars. Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.
=== Anderson–Björck algorithm ===
Suppose that in the k-th iteration the bracketing interval is [ak, bk] and that the functional value of the new calculated estimate ck has the same sign as f (bk). In this case, the new bracketing interval [ak + 1, bk + 1] = [ak, ck] and the left-hand endpoint has been retained.
(So far, that's the same as ordinary Regula Falsi and the Illinois algorithm.)
But, whereas the Illinois algorithm would multiply f (ak) by 1/2, the Anderson–Björck algorithm multiplies it by m, where m has one of the two following values:
$$m' = 1 - \frac{f(c_k)}{f(b_k)}, \qquad m = \begin{cases} m' & \text{if } m' > 0, \\ \tfrac{1}{2} & \text{otherwise.} \end{cases}$$
For simple roots, Anderson–Björck performs very well in practice.
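The scaling rule above is a two-line computation; a sketch in C (the helper name is our own):

```c
/* Anderson-Bjorck scaling for the retained endpoint's stored value:
   multiply f(a_k) by m = 1 - f(c_k)/f(b_k) when that is positive,
   otherwise fall back on the Illinois factor 1/2. */
double anderson_bjorck_scale(double fc, double fb) {
    double m = 1.0 - fc / fb;
    return (m > 0.0) ? m : 0.5;
}
```

For instance, with f(ck) = 1 and f(bk) = 4 the factor is 0.75; with f(ck) = 8 and f(bk) = 4 the formula gives −1, so the Illinois factor 1/2 is used instead.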
=== ITP method ===
Given $\kappa_1 \in (0, \infty)$, $\kappa_2 \in [1, 1 + \phi)$, $n_{1/2} \equiv \lceil \log_2((b_0 - a_0)/2\epsilon) \rceil$ and $n_0 \in [0, \infty)$, where $\phi$ is the golden ratio $\tfrac{1}{2}(1 + \sqrt{5})$, in each iteration $j = 0, 1, 2, \dots$ the ITP method calculates the point $x_{\text{ITP}}$ following three steps:
[Interpolation Step] Calculate the bisection and the regula falsi points: $x_{1/2} \equiv \frac{a + b}{2}$ and $x_f \equiv \frac{b f(a) - a f(b)}{f(a) - f(b)}$;
[Truncation Step] Perturb the estimator towards the center: $x_t \equiv x_f + \sigma \delta$, where $\sigma \equiv \operatorname{sign}(x_{1/2} - x_f)$ and $\delta \equiv \min\{\kappa_1 |b - a|^{\kappa_2},\; |x_{1/2} - x_f|\}$;
[Projection Step] Project the estimator onto the minmax interval: $x_{\text{ITP}} \equiv x_{1/2} - \sigma \rho_k$, where $\rho_k \equiv \min\left\{\epsilon\, 2^{n_{1/2} + n_0 - j} - \frac{b - a}{2},\; |x_t - x_{1/2}|\right\}$.
The value of the function $f(x_{\text{ITP}})$ at this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. This three-step procedure guarantees that the estimate enjoys the minmax properties of the bisection method as well as the superlinear convergence of the secant method, and it is observed to outperform both bisection and interpolation-based methods on smooth and non-smooth functions.
== Practical considerations ==
When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods—when they're at their best and don't have a problem—bisection nevertheless is guaranteed to converge at a useful rate, roughly halving the error with each iteration – gaining roughly a decimal place of accuracy with every 3 iterations.
For manual calculation, by calculator, one tends to want to use faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation, to the desired accuracy, so rapidly that there's no need to try to save time by using a less reliable method—and every method is less reliable than bisection.
An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant.
Then, a program could start with Newton's method, and, if Newton's isn't converging, switch to regula falsi, maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.
When the change in y has become very small, and x is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence.
== Example: Growth of a bulrush ==
In chapter 7 of The Nine Chapters, a root finding problem can be translated to modern language as follows:
Excess And Deficit Problem #11:
A bulrush grew 3 units on its first day. At the end of each day, the plant is observed to have grown by 1/2 of the previous day's growth.
A club-rush grew 1 unit on its first day. At the end of each day, the plant has grown by 2 times as much as the previous day's growth.
Find the time [in fractional days] that the club-rush becomes as tall as the bulrush.
Answer:
$\left(2 + \tfrac{6}{13}\right)$ days; the height is $\left(4 + \tfrac{8}{10} + \tfrac{6}{130}\right)$ units.
Explanation:
Suppose it is day 2. The club-rush is shorter than the bulrush by 1.5 units.
Suppose it is day 3. The club-rush is taller than the bulrush by 1.75 units. ∎
To understand this, we shall model the heights of the plants on day n (n = 1, 2, 3, ...) as geometric series.
$$B(n) = \sum_{i=1}^{n} 3 \cdot \frac{1}{2^{i-1}} \qquad \text{Bulrush}$$
$$C(n) = \sum_{i=1}^{n} 1 \cdot 2^{i-1} \qquad \text{Club-rush}$$
For the sake of better notation, let $k = i - 1$. Rewrite the plant height series $B(n),\ C(n)$ in terms of k and invoke the sum formula.
$$B(n) = \sum_{k=0}^{n-1} 3 \cdot \frac{1}{2^k} = 3\left(\frac{1 - (\tfrac{1}{2})^{n}}{1 - \tfrac{1}{2}}\right) = 6\left(1 - \frac{1}{2^n}\right)$$
$$C(n) = \sum_{k=0}^{n-1} 2^k = \frac{1 - 2^n}{1 - 2} = 2^n - 1$$
Now, use regula falsi to find the root of $C(n) - B(n)$:
$$F(n) := C(n) - B(n) = \frac{6}{2^n} + 2^n - 7$$
Set $x_1 = 2$ and compute $F(x_1) = F(2)$, which equals −1.5 (the "deficit").
Set $x_2 = 3$ and compute $F(x_2) = F(3)$, which equals 1.75 (the "excess").
Estimated root (1st iteration):
$$\hat{x} = \frac{x_1 F(x_2) - x_2 F(x_1)}{F(x_2) - F(x_1)} = \frac{2 \times 1.75 + 3 \times 1.5}{1.75 + 1.5} \approx 2.4615$$
== Example code ==
This example program, written in the C programming language, is an example of the Illinois algorithm.
To find the positive number x where cos(x) = x3, the equation is transformed into a root-finding form f (x) = cos(x) − x3 = 0.
After running this code, the final answer is approximately
0.865474033101614.
== See also ==
ITP method, a variation with guaranteed minmax and superlinear convergence
Ridders' method, another root-finding method based on the false position method
Brent's method
== References ==
== Further reading ==
Burden, Richard L.; Faires, J. Douglas (2000). Numerical Analysis (7th ed.). Brooks/Cole. ISBN 0-534-38216-9.
Sigler, L.E. (2002). Fibonacci's Liber Abaci, Leonardo Pisano's Book of Calculation. Springer-Verlag. ISBN 0-387-40737-5.
Roberts, A.M. (2020). "Mathematical Philology in the Treatise on Double False Position in an Arabic Manuscript at Columbia University". Philological Encounters. 5 (3–4): 3–4. doi:10.1163/24519197-BJA10007. S2CID 229538951. (On a previously unpublished treatise on Double False Position in a medieval Arabic manuscript.)
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers).
Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, as for instance how irrational numbers can be approximated by fractions (Diophantine approximation).
Number theory is one of the oldest branches of mathematics alongside geometry. One quirk of number theory is that it deals with statements that are simple to understand but are very difficult to solve. Examples of this are Fermat's Last Theorem, which was proved 358 years after the original formulation, and Goldbach's conjecture, which remains unsolved since the 18th century. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." It was regarded as the example of pure mathematics with no applications outside mathematics until the 1970s, when it became known that prime numbers would be used as the basis for the creation of public-key cryptography algorithms.
== History ==
Number theory is the branch of mathematics that studies integers and their properties and relations. The integers comprise a set that extends the set of natural numbers
$\{1, 2, 3, \dots\}$ to include the number $0$ and the negatives of the natural numbers $\{-1, -2, -3, \dots\}$
. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers).
Number theory is closely related to arithmetic and some authors use the terms as synonyms. However, the word "arithmetic" is used today to mean the study of numerical operations and extends to the real numbers. In a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships. Traditionally, it is known as higher arithmetic. By the early twentieth century, the term number theory had been widely adopted. The term number means whole numbers, which refers to either the natural numbers or the integers.
Elementary number theory studies aspects of integers that can be investigated using elementary methods such as elementary proofs. Analytic number theory, by contrast, relies on complex numbers and techniques from analysis and calculus. Algebraic number theory employs algebraic structures such as fields and rings to analyze the properties of and relations between numbers. Geometric number theory uses concepts from geometry to study numbers. Further branches of number theory are probabilistic number theory, combinatorial number theory, computational number theory, and applied number theory, which examines the application of number theory to science and technology.
=== Origins ===
==== Ancient Mesopotamia ====
The earliest historical find of an arithmetical nature is a fragment of a table: Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC), a broken clay tablet, contains a list of "Pythagorean triples", that is, integers
$(a, b, c)$
such that
$a^2 + b^2 = c^2$
. The triples are too numerous and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..."
The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity
$$\left(\frac{1}{2}\left(x - \frac{1}{x}\right)\right)^2 + 1 = \left(\frac{1}{2}\left(x + \frac{1}{x}\right)\right)^2,$$
which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by
$c/a$
, presumably for actual use as a "table", for example, with a view to applications.
It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own many centuries later. It has been suggested instead that the table was a source of numerical examples for school problems. Plimpton 322 tablet is the only surviving evidence of what today would be called number theory within Babylonian mathematics, though a kind of Babylonian algebra was much more developed.
==== Ancient Greece ====
Although other civilizations probably influenced Greek mathematics at the beginning, all evidence of such borrowings appear relatively late, and it is likely that Greek arithmētikḗ (the theoretical or philosophical study of numbers) is an indigenous tradition. Aside from a few fragments, most of what is known about Greek mathematics in the 6th to 4th centuries BC (the Archaic and Classical periods) comes through either the reports of contemporary non-mathematicians or references from mathematical works in the early Hellenistic period. In the case of number theory, this means largely Plato, Aristotle, and Euclid.
Plato had a keen interest in mathematics, and distinguished clearly between arithmētikḗ and calculation (logistikē). Plato reports in his dialogue Theaetetus that Theodorus had proven that
$\sqrt{3}, \sqrt{5}, \dots, \sqrt{17}$
are irrational. Theaetetus, a disciple of Theodorus's, worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. Aristotle further claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: Platonem ferunt didicisse Pythagorea omnia ("They say Plato learned all things Pythagorean").
Euclid devoted part of his Elements (Books VII–IX) to topics that belong to elementary number theory, including prime numbers and divisibility. He gave an algorithm, the Euclidean algorithm, for computing the greatest common divisor of two numbers (Prop. VII.2) and a proof implying the infinitude of primes (Prop. IX.20). There is also older material likely based on Pythagorean teachings (Prop. IX.21–34), such as "odd times even is even" and "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it". This is all that is needed to prove that
{\displaystyle {\sqrt {2}}} is irrational. Pythagoreans apparently gave great importance to the odd and the even. The discovery that {\displaystyle {\sqrt {2}}}
is irrational is credited to the early Pythagoreans, sometimes assigned to Hippasus, who was expelled or split from the Pythagorean community as a result. This forced a distinction between numbers (integers and the rationals—the subjects of arithmetic) and lengths and proportions (which may be identified with real numbers, whether rational or not).
The Pythagorean tradition also spoke of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th centuries).
An epigram published by Lessing in 1773 appears to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as it is known, such equations were first successfully treated by Indian mathematicians. It is not known whether Archimedes himself had a method of solution.
===== Late Antiquity =====
Aside from the elementary work of Neopythagoreans such as Nicomachus and Theon of Smyrna, the foremost authority in arithmētikḗ in Late Antiquity was Diophantus of Alexandria, who probably lived in the 3rd century AD, approximately five hundred years after Euclid. Little is known about his life, but he wrote two works that are extant: On Polygonal Numbers, a short treatise written in the Euclidean manner on the subject, and the Arithmetica, a work on pre-modern algebra (namely, the use of algebra to solve numerical problems). Six out of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form
{\displaystyle f(x,y)=z^{2}} or {\displaystyle f(x,y,z)=w^{2}}
. In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought.
==== Asia ====
The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (between the third and fifth centuries). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.) The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early nineteenth century by the British missionary Alexander Wylie. There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere.
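As an illustration of the kind of computation involved, Sunzi's remainder exercise can be solved by successive substitution. The sketch below is illustrative (the helper name `crt` and the use of Python's built-in modular inverse, `pow(m, -1, mod)`, available since Python 3.8, are choices of this example); the modular-inverse step is precisely the part glossed over in Sunzi's solution and later supplied by the kuṭṭaka.

```python
def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise coprime moduli
    by successive substitution."""
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        # Adjust x by a multiple of m so that x ≡ r (mod mod);
        # this needs the inverse of m modulo mod.
        t = ((r - x) * pow(m, -1, mod)) % mod
        x += m * t
        m *= mod
    return x % m

# Sunzi's original exercise: a number leaving remainders 2, 3, 2
# on division by 3, 5, 7 respectively.
print(crt([2, 3, 2], [3, 5, 7]))  # → 23
```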
While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an autochthonous tradition; in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century. Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences
{\displaystyle n\equiv a_{1}{\bmod {m}}_{1}}, {\displaystyle n\equiv a_{2}{\bmod {m}}_{2}}
could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalization of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations.
Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century).
Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke.
==== Arithmetic in the Islamic golden age ====
In the early ninth century, the caliph al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta).
Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912).
Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, Al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem.
==== Western Europe in the Middle Ages ====
Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' Arithmetica.
=== Early modern number theory ===
==== Fermat ====
Pierre de Fermat (1607–1665) never published his writings but communicated through correspondence instead. Accordingly, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. Although he drew inspiration from classical sources, in his notes and letters Fermat scarcely wrote any proofs—he had no models in the area.
Over his lifetime, Fermat made the following contributions to the field:
One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day.
In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer.
Fermat's little theorem (1640): if a is not divisible by a prime p, then
{\displaystyle a^{p-1}\equiv 1{\bmod {p}}.}
If a and b are coprime, then
{\displaystyle a^{2}+b^{2}} is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form {\displaystyle a^{2}+b^{2}}
. These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent.
In 1657, Fermat posed the problem of solving
{\displaystyle x^{2}-Ny^{2}=1}
as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat was not aware of this). He stated that a proof could be found by infinite descent.
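For small values of N, the equation Fermat posed can be explored by brute-force search. The sketch below is purely illustrative and is not any historical method — Wallis, Brouncker, and the earlier chakravala all relied on far more efficient ideas, and for hard cases such as N = 61 the fundamental solution is astronomically large.

```python
from math import isqrt

def pell_smallest(N):
    """Brute-force search for the fundamental solution of
    x^2 - N*y^2 = 1, with N a nonsquare positive integer."""
    y = 1
    while True:
        x2 = N * y * y + 1       # candidate value of x^2
        x = isqrt(x2)
        if x * x == x2:          # x2 is a perfect square: solution found
            return x, y
        y += 1

print(pell_smallest(7))  # → (8, 3), since 8² − 7·3² = 1
```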
Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that
{\displaystyle x^{4}+y^{4}=z^{4}}
has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that
{\displaystyle x^{3}+y^{3}=z^{3}}
has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent).
Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to
{\displaystyle x^{n}+y^{n}=z^{n}} for all {\displaystyle n\geq 3}
; this claim appears in his annotations in the margins of his copy of Diophantus.
==== Euler ====
The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following:
Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that
{\displaystyle p=x^{2}+y^{2}} if and only if {\displaystyle p\equiv 1{\bmod {4}}}
; initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to
{\displaystyle x^{4}+y^{4}=z^{2}}
(implying the case n=4 of Fermat's last theorem, the case n=3 of which Euler also proved by a related method).
Pell's equation, first misnamed by Euler. He wrote on the link between continued fractions and Pell's equation.
First steps towards analytic number theory. In his work on sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function.
Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form
{\displaystyle x^{2}+Ny^{2}}
, some of it prefiguring quadratic reciprocity.
Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated.
==== Lagrange, Legendre, and Gauss ====
Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations; for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them.) He also studied quadratic forms in full generality (as opposed to
{\displaystyle mX^{2}+nY^{2}}
), including defining their equivalence relation, showing how to put them in reduced form, etc.
Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation
{\displaystyle ax^{2}+by^{2}+cz^{2}=0}
and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for
{\displaystyle n=5}
(completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain).
Carl Friedrich Gauss (1777–1855) worked in a wide variety of fields in both mathematics and physics, including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. His Disquisitiones Arithmeticae (1801), completed in 1798 when he was 21, had an immense influence in the area of number theory and set its agenda for much of the 19th century. Gauss proved in this work the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory:
The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic.
In this way, Gauss arguably made forays towards the work of Évariste Galois and the area of algebraic number theory.
=== Maturity and division into subfields ===
Starting early in the nineteenth century, the following developments gradually took place:
The rise to self-consciousness of number theory (or higher arithmetic) as a field of study.
The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra.
The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory.
Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms).
The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize.
== Main subdivisions ==
=== Elementary number theory ===
Elementary number theory deals with the topics in number theory by means of basic methods in arithmetic. Its primary subjects of study are divisibility, factorization, and primality, as well as congruences in modular arithmetic. Other topics in elementary number theory include Diophantine equations, continued fractions, integer partitions, and Diophantine approximations.
Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithms. Multiplication, for instance, is an operation that combines two numbers, referred to as factors, to form a single number, termed the product, such as
{\displaystyle 2\times 3=6}.
Divisibility is a property between two nonzero integers related to division. An integer
a is said to be divisible by a nonzero integer b if a is a multiple of b; that is, if there exists an integer q such that {\displaystyle a=bq}. An equivalent formulation is that b divides a, denoted by a vertical bar: {\displaystyle b|a}. Conversely, if this were not the case, then a would not be divided evenly by b, resulting in a remainder. Euclid's division lemma asserts that a and b can generally be written as {\displaystyle a=bq+r}, where the remainder {\displaystyle 0\leq r<b}
accounts for the leftover quantity. Elementary number theory studies divisibility rules in order to quickly identify if a given integer is divisible by a fixed divisor. For instance, it is known that any integer is divisible by 3 if its decimal digit sum is divisible by 3.
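The digit-sum rule for divisibility by 3 can be checked directly. A minimal sketch (the function name is an illustrative choice), comparing the rule against ordinary division:

```python
def divisible_by_3(n):
    """Check divisibility by 3 via the decimal digit-sum rule."""
    return sum(int(d) for d in str(abs(n))) % 3 == 0

# The rule agrees with direct division on a range of inputs:
assert all(divisible_by_3(n) == (n % 3 == 0) for n in range(1000))
print(divisible_by_3(123456))  # → True (1+2+3+4+5+6 = 21, divisible by 3)
```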
A common divisor of several nonzero integers is an integer that divides all of them. The greatest common divisor (gcd) is the largest such divisor. Two integers are said to be coprime or relatively prime if their greatest common divisor is 1 (equivalently, if 1 is their only common positive divisor). The Euclidean algorithm computes the greatest common divisor of two integers a and b by repeatedly applying the division lemma and shifting the divisor and remainder after every step. The algorithm can be extended to solve a special case of linear Diophantine equations, {\displaystyle ax+by=1}
. A Diophantine equation is an equation with several unknowns and integer coefficients. Another kind of Diophantine equation is described in the Pythagorean theorem,
{\displaystyle x^{2}+y^{2}=z^{2}}
, whose solutions are called Pythagorean triples if they are all integers.
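The extension of the Euclidean algorithm mentioned above is the standard extended Euclidean algorithm, sketched here in a recursive Python form with illustrative names:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b),
    by running the Euclidean algorithm and back-substituting."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # g = b*x + (a % b)*y  rearranges to  g = a*y + b*(x - (a // b)*y)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g)                         # gcd(240, 46) = 2
assert 240 * x + 46 * y == g     # Bézout identity holds
```

When gcd(a, b) = 1, the returned coefficients solve ax + by = 1 directly.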
Elementary number theory studies the divisibility properties of integers such as parity (even and odd numbers), prime numbers, and perfect numbers. Important number-theoretic functions include the divisor-counting function, the divisor summatory function and its modifications, and Euler's totient function. A prime number is an integer greater than 1 whose only positive divisors are 1 and the number itself. A positive integer greater than 1 that is not prime is called a composite number. Euclid's theorem demonstrates that the set of prime numbers, {2, 3, 5, 7, 11, ...}, is infinite. The sieve of Eratosthenes is an efficient algorithm for identifying all primes up to a given natural number by eliminating all composite numbers.
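A minimal version of the sieve of Eratosthenes, with illustrative naming: each prime found strikes out its multiples, and whatever survives is prime.

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Strike out multiples of p, starting at p*p (smaller
            # multiples were already struck out by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(sieve(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```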
Factorization is a method of expressing a number as a product. Specifically in number theory, integer factorization is the decomposition of an integer into a product of integers. The process of repeatedly applying this procedure until all factors are prime is known as prime factorization. A fundamental property of primes is given by Euclid's lemma: if a prime divides a product of integers, then that prime divides at least one of the factors in the product. The unique factorization theorem, also known as the fundamental theorem of arithmetic, states that every integer greater than 1 can be factorised into a product of prime numbers and that this factorisation is unique up to the order of the factors. For example,
{\displaystyle 120} is expressed uniquely as {\displaystyle 2\times 2\times 2\times 3\times 5} or simply {\displaystyle 2^{3}\times 3\times 5}.
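Prime factorization by repeated trial division can be sketched as follows; this naive method is practical only for small inputs, and real factorization algorithms are far more sophisticated.

```python
def prime_factors(n):
    """Prime factorisation of n > 1 by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # divide out d as many times as possible
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(120))  # → [2, 2, 2, 3, 5]
```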
Modular arithmetic works with finite sets of integers and introduces the concepts of congruence and residue classes. A congruence of two integers
a and b modulo n
(a positive integer called the modulus) is an equivalence relation whereby
{\displaystyle n|(a-b)}
is true. Performing Euclidean division on both
a and n, and on b and n
yields the same remainder. This is written as {\textstyle a\equiv b{\pmod {n}}}
. In a manner analogous to the 12-hour clock, the sum of 4 and 9 is equal to 13, yet congruent to 1 modulo 12. A residue class modulo n is a set that contains all integers congruent to a specified r modulo n
. For example,
{\displaystyle 6\mathbb {Z} +1}
contains all multiples of 6 incremented by 1. Modular arithmetic provides a range of formulas for rapidly solving congruences of very large powers. An influential theorem is Fermat's little theorem, which states that if a prime
p is coprime to some integer a, then {\textstyle a^{p-1}\equiv 1{\pmod {p}}}
is true. Euler's theorem extends this to assert that any integer a coprime to n satisfies the congruence {\displaystyle a^{\varphi (n)}\equiv 1{\pmod {n}},}
where Euler's totient function φ counts all positive integers up to n that are coprime to n
. Modular arithmetic also provides formulas that are used to solve congruences with unknowns in a similar vein to equation solving in algebra, such as the Chinese remainder theorem.
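Both congruence facts are easy to check numerically. A small sketch (the naive `totient` helper is written here for illustration; Python's three-argument `pow` performs fast modular exponentiation):

```python
from math import gcd

def totient(n):
    """Euler's totient: count of 1 <= k <= n coprime to n (naive)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p not dividing a.
assert pow(7, 13 - 1, 13) == 1

# Euler's generalisation: a^φ(n) ≡ 1 (mod n) when gcd(a, n) = 1.
n, a = 10, 3
assert pow(a, totient(n), n) == 1   # φ(10) = 4 and 3^4 = 81 ≡ 1 (mod 10)
print(totient(10))  # → 4
```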
=== Analytic number theory ===
Analytic number theory, in contrast to elementary number theory, relies on complex numbers and techniques from analysis and calculus. Analytic number theory may be defined
in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or
in terms of its concerns, as the study within number theory of estimates on the size and density of certain numbers (e.g., primes), as opposed to identities.
It studies the distribution of primes, the behavior of number-theoretic functions, and irrational numbers.
Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, many of the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture, the twin prime conjecture, the Hardy–Littlewood conjectures, the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory.
Analysis is the branch of mathematics that studies the limit, defined as the value to which a sequence or function tends as the argument (or index) approaches a specific value. For example, the limit of the sequence 0.9, 0.99, 0.999, ... is 1. In the context of functions, the limit of
{\textstyle {\frac {1}{x}}} as x
approaches infinity is 0. The complex numbers extend the real numbers with the imaginary unit
i, defined as the solution to {\displaystyle i^{2}=-1}
. Every complex number can be expressed as
{\displaystyle x+iy}, where x is called the real part and y
is called the imaginary part.
The distribution of primes, described by the function
{\displaystyle \pi }
that counts all primes up to a given real number, is unpredictable and is a major subject of study in number theory. Elementary formulas for partial sequences of primes, including Euler's prime-generating polynomials, have been developed, but these cease to work as the primes become too large. The prime number theorem in analytic number theory formalises the notion that prime numbers appear less commonly as their numerical value increases. It states, informally, that the function
{\displaystyle {\frac {x}{\log(x)}}} approximates {\displaystyle \pi (x)}
. Another approximation involves the offset logarithmic integral, which converges to {\displaystyle \pi (x)}
more quickly.
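The quality of the x/log(x) estimate can be checked numerically. The `prime_pi` helper below is a naive trial-division implementation written for illustration, adequate only for small x:

```python
from math import log

def prime_pi(x):
    """π(x): count the primes up to x by trial division."""
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(2, x + 1) if is_prime(n))

x = 10_000
print(prime_pi(x))        # → 1229
print(round(x / log(x)))  # → 1086, an undercount that improves as x grows
```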
The zeta function has been demonstrated to be connected to the distribution of primes. It is defined as the series
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+\cdots }
that converges if
s
is greater than 1. Euler demonstrated a link involving the infinite product over all prime numbers, expressed as the identity
{\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}\left(1-{\frac {1}{p^{s}}}\right)^{-1}.}
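The identity can be checked numerically at s = 2, where ζ(2) = π²/6 ≈ 1.6449; the truncation limits below are arbitrary illustrative choices, so the two finite approximations agree only roughly.

```python
from math import pi

# Truncated Dirichlet series for ζ(2).
s = 2
series = sum(1 / n**s for n in range(1, 100_000))

# Truncated Euler product over the primes below 100.
primes = [p for p in range(2, 100) if all(p % d for d in range(2, p))]
product = 1.0
for p in primes:
    product *= 1 / (1 - 1 / p**s)

# Both approach π²/6 as more terms/factors are included.
print(series, product, pi**2 / 6)
```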
Riemann extended the definition to a complex variable and conjectured that all nontrivial cases (
{\displaystyle 0<\Re (s)<1}
) where the function returns a zero are those in which the real part of
s is equal to {\textstyle {\frac {1}{2}}}
. He established a connection between the nontrivial zeroes and the prime-counting function. This conjecture, now known as the Riemann hypothesis, remains unsolved; a proof would have direct consequences for understanding the distribution of primes.
One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.
Elementary number theory works with elementary proofs, a term that excludes the use of complex numbers but may include basic analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous. For example, proofs based on complex Tauberian theorems, such as Wiener–Ikehara, are often seen as quite enlightening but not elementary despite using Fourier analysis, not complex analysis. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a more advanced proof.
Some subjects generally considered to be part of analytic number theory (e.g., sieve theory) are better covered by the second rather than the first definition. Small sieves, for instance, use little analysis and yet still belong to analytic number theory.
=== Algebraic number theory ===
An algebraic number is any complex number that is a solution to some polynomial equation
{\displaystyle f(x)=0}
with rational coefficients; for example, every solution
x of {\displaystyle x^{5}+(11/2)x^{3}-7x^{2}+9=0}
is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or number fields for short. Algebraic number theory studies algebraic number fields.
It could be argued that the simplest kind of number fields, namely quadratic fields, were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones Arithmeticae can be restated in terms of ideals and
norms in quadratic fields. (A quadratic field consists of all numbers of the form {\displaystyle a+b{\sqrt {d}}}, where a and b are rational numbers and d
is a fixed rational number whose square root is not rational.)
For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such.
The grounds of the subject were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorization in algebraic number fields. (For example, in the field generated by the rationals
and {\displaystyle {\sqrt {-5}}}, the number 6 can be factorised both as {\displaystyle 6=2\cdot 3} and {\displaystyle 6=(1+{\sqrt {-5}})(1-{\sqrt {-5}})}; all of 2, 3, {\displaystyle 1+{\sqrt {-5}}} and {\displaystyle 1-{\sqrt {-5}}}
are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalizations of quadratic reciprocity.
Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K.
(For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.)
Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood.
Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950.
An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields.
=== Diophantine geometry ===
The central problem of Diophantine geometry is to determine when a Diophantine equation has integer or rational solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object.
For example, an equation in two variables defines a curve in the plane. More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or
integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely
or infinitely many rational points on a given curve or surface.
Consider, for instance, the Pythagorean equation
{\displaystyle x^{2}+y^{2}=1}
. One would like to know its rational solutions, namely
{\displaystyle (x,y)}
such that x and y are both rational. This is the same as asking for all integer solutions
to {\displaystyle a^{2}+b^{2}=c^{2}}
; any solution to the latter equation gives us a solution
{\displaystyle x=a/c}, {\displaystyle y=b/c}
to the former. It is also the
same as asking for all points with rational coordinates on the curve described by
{\displaystyle x^{2}+y^{2}=1}
(a circle of radius 1 centered on the origin).
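The rational points on this circle admit a classical parametrisation by lines of rational slope through (−1, 0). A small sketch using exact rational arithmetic (the helper name is an illustrative choice):

```python
from fractions import Fraction

def circle_point(t):
    """Rational parametrisation of x² + y² = 1: the second
    intersection of the circle with the line through (−1, 0)
    of slope t. Every rational t yields a rational point."""
    t = Fraction(t)
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    return x, y

x, y = circle_point(Fraction(1, 2))
print(x, y)                  # → 3/5 4/5
assert x * x + y * y == 1
# Clearing denominators recovers the Pythagorean triple (3, 4, 5).
```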
The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation
{\displaystyle f(x,y)=0}, where f
is a polynomial in two variables) depends crucially on the genus of the curve. A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial.
There is also the closely linked area of Diophantine approximations: given a number
x
, determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call
a
/
q
{\displaystyle a/q}
(with
gcd
(
a
,
q
)
=
1
{\displaystyle \gcd(a,q)=1}
) a good approximation to
x
{\displaystyle x}
if
|
x
−
a
/
q
|
<
1
q
c
{\displaystyle |x-a/q|<{\frac {1}{q^{c}}}}
, where
c
{\displaystyle c}
is large. This question is of special interest if
x
{\displaystyle x}
is an algebraic number. If
x
{\displaystyle x}
cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental.
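As an illustrative sketch (not part of the article itself), Python's standard-library fractions module can search for good rational approximations by bounding the denominator; the classical approximations 22/7 and 355/113 to π emerge this way:

```python
from fractions import Fraction
from math import pi

# Best rational approximation to pi with a bounded denominator.
# limit_denominator(n) returns the closest fraction with denominator <= n.
for bound in (10, 1000):
    approx = Fraction(pi).limit_denominator(bound)
    print(approx, abs(pi - approx))
```

With bound 10 this finds 22/7, and with bound 1000 it finds 355/113, whose error is already below 10^−6.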
Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry is a contemporary term for the same domain covered by Diophantine geometry, particularly when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations.
=== Other subfields ===
Probabilistic number theory starts with questions such as the following: Take an integer n at random between one and a million. How likely is it to be prime? (This is just another way of asking how many primes there are between one and a million.) How many prime divisors will n have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average?
Combinatorics in number theory starts with questions like the following: Does a fairly "thick" infinite set A contain many elements in arithmetic progression: a, a + b, a + 2b, a + 3b, …, a + 10b? Is it possible to write large integers as sums of elements of A?
There are two main questions: "Can this be computed?" and "Can it be computed rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring is known.
== Applications ==
For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory.
This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. Schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators.
In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations".
Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to the continuous in numerical analysis.
Number theory has now several modern applications spanning diverse areas such as:
Computer science: The fast Fourier transform (FFT) algorithm, which is used to efficiently compute the discrete Fourier transform, has important applications in signal processing and data analysis.
Physics: The Riemann hypothesis has connections to the distribution of prime numbers and has been studied for its potential implications in physics.
Error correction codes: The theory of finite fields and algebraic geometry have been used to construct efficient error-correcting codes.
Communications: The design of cellular telephone networks requires knowledge of the theory of modular forms, which is a part of analytic number theory.
Study of musical scales: the concept of "equal temperament", which is the basis for most modern Western music, involves dividing the octave into 12 equal parts. This has been studied using number theory and in particular the properties of the 12th root of 2.
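The equal-temperament remark can be checked numerically: seven equal semitones of ratio 2^(1/12) approximate the just perfect fifth 3/2 to within about 0.1% (a small illustration, not from the article):

```python
# Twelve-tone equal temperament: each semitone multiplies frequency by 2**(1/12).
semitone = 2 ** (1 / 12)
fifth = semitone ** 7               # seven semitones, e.g. C up to G
relative_error = abs(fifth - 1.5) / 1.5
print(fifth, relative_error)        # ~1.4983, error ~0.11%
```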
== See also ==
Arithmetic dynamics
Algebraic function field
Arithmetic topology
Finite field
p-adic number
List of number theoretic algorithms
== Notes ==
== References ==
=== Sources ===
This article incorporates material from the Citizendium article "Number theory", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
== Further reading ==
Two of the most popular introductions to the subject are:
Hardy, G. H.; Wright, E. M. (2008) [1938]. An introduction to the theory of numbers (rev. by D. R. Heath-Brown and J. H. Silverman, 6th ed.). Oxford University Press. ISBN 978-0-19-921986-5.
Vinogradov, I. M. (2003) [1954]. Elements of Number Theory (reprint of the 1954 ed.). Mineola, NY: Dover Publications.
Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol 1981).
Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are:
Ivan M. Niven; Herbert S. Zuckerman; Hugh L. Montgomery (2008) [1960]. An introduction to the theory of numbers (reprint of the 5th 1991 ed.). John Wiley & Sons. ISBN 978-81-265-1811-1. Retrieved 2016-02-28.
Rosen, Kenneth H. (2010). Elementary Number Theory (6th ed.). Pearson Education. ISBN 978-0-321-71775-7. Retrieved 2016-02-28.
Popular choices for a second textbook include:
Borevich, A. I.; Shafarevich, Igor R. (1966). Number theory. Pure and Applied Mathematics. Vol. 20. Boston, MA: Academic Press. ISBN 978-0-12-117850-5. MR 0195803.
Serre, Jean-Pierre (1996) [1973]. A course in arithmetic. Graduate Texts in Mathematics. Vol. 7. Springer. ISBN 978-0-387-90040-7.
== External links ==
Number Theory entry in the Encyclopedia of Mathematics
Number Theory Web
In algebra, a cubic equation in one variable is an equation of the form ax^3 + bx^2 + cx + d = 0 in which a is not zero.
The solutions of this equation are called roots of the cubic function defined by the left-hand side of the equation. If all of the coefficients a, b, c, and d of the cubic equation are real numbers, then it has at least one real root (this is true for all odd-degree polynomial functions). All of the roots of the cubic equation can be found by the following means:
algebraically: more precisely, they can be expressed by a cubic formula involving the four coefficients, the four basic arithmetic operations, square roots, and cube roots. (This is also true of quadratic (second-degree) and quartic (fourth-degree) equations, but not for higher-degree equations, by the Abel–Ruffini theorem.)
trigonometrically
numerical approximations of the roots can be found using root-finding algorithms such as Newton's method.
The coefficients do not need to be real numbers. Much of what is covered below is valid for coefficients in any field with characteristic other than 2 and 3. The solutions of the cubic equation do not necessarily belong to the same field as the coefficients. For example, some cubic equations with rational coefficients have roots that are irrational (and even non-real) complex numbers.
== History ==
Cubic equations were known to the ancient Babylonians, Greeks, Chinese, Indians, and Egyptians. Babylonian (20th to 16th centuries BC) cuneiform tablets have been found with tables for calculating cubes and cube roots. The Babylonians could have used the tables to solve cubic equations, but no evidence exists to confirm that they did. The problem of doubling the cube involves the simplest and oldest studied cubic equation, and one for which the ancient Egyptians did not believe a solution existed. In the 5th century BC, Hippocrates reduced this problem to that of finding two mean proportionals between one line and another of twice its length, but could not solve this with a compass and straightedge construction, a task which is now known to be impossible. Methods for solving cubic equations appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BC and commented on by Liu Hui in the 3rd century.
In the 3rd century AD, the Greek mathematician Diophantus found integer or rational solutions for some bivariate cubic equations (Diophantine equations). Hippocrates, Menaechmus and Archimedes are believed to have come close to solving the problem of doubling the cube using intersecting conic sections, though historians such as Reviel Netz dispute whether the Greeks were thinking about cubic equations or just problems that can lead to cubic equations. Some others like T. L. Heath, who translated all of Archimedes's works, disagree, putting forward evidence that Archimedes really solved cubic equations using intersections of two conics, but also discussed the conditions where the roots are 0, 1 or 2.
In the 7th century, the Tang dynasty astronomer mathematician Wang Xiaotong in his mathematical treatise titled Jigu Suanjing systematically established and solved numerically 25 cubic equations of the form x3 + px2 + qx = N, 23 of them with p, q ≠ 0, and two of them with q = 0.
In the 11th century, the Persian poet-mathematician, Omar Khayyam (1048–1131), made significant progress in the theory of cubic equations. In an early paper, he discovered that a cubic equation can have more than one solution and stated that it cannot be solved using compass and straightedge constructions. He also found a geometric solution. In his later work, the Treatise on Demonstration of Problems of Algebra, he wrote a complete classification of cubic equations with general geometric solutions found by means of intersecting conic sections. Khayyam made an attempt to come up with an algebraic formula for extracting cubic roots. He wrote: “We have tried to express these roots by algebra but have failed. It may be, however, that men who come after us will succeed.”
In the 12th century, the Indian mathematician Bhaskara II attempted the solution of cubic equations without general success. However, he gave one example of a cubic equation: x3 + 12x = 6x2 + 35. In the 12th century, another Persian mathematician, Sharaf al-Dīn al-Tūsī (1135–1213), wrote the Al-Muʿādalāt (Treatise on Equations), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the Horner–Ruffini method to numerically approximate the root of a cubic equation. He also used the concepts of maxima and minima of curves in order to solve cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation to find algebraic solutions to certain types of cubic equations.
In his book Flos, Leonardo de Pisa, also known as Fibonacci (1170–1250), was able to closely approximate the positive solution to the cubic equation x3 + 2x2 + 10x = 20. Writing in Babylonian numerals he gave the result as 1,22,7,42,33,4,40 (equivalent to 1 + 22/60 + 7/602 + 42/603 + 33/604 + 4/605 + 40/606), which has a relative error of about 10−9.
In the early 16th century, the Italian mathematician Scipione del Ferro (1465–1526) found a method for solving a class of cubic equations, namely those of the form x3 + mx = n. In fact, all cubic equations can be reduced to this form if one allows m and n to be negative, but negative numbers were not known to him at that time. Del Ferro kept his achievement secret until just before his death, when he told his student Antonio Fior about it.
In 1535, Niccolò Tartaglia (1500–1557) received two problems in cubic equations from Zuanne da Coi and announced that he could solve them. He was soon challenged by Fior, which led to a famous contest between the two. Each contestant had to put up a certain amount of money and to propose a number of problems for his rival to solve. Whoever solved more problems within 30 days would get all the money. Tartaglia received questions in the form x3 + mx = n, for which he had worked out a general method. Fior received questions in the form x3 + mx2 = n, which proved to be too difficult for him to solve, and Tartaglia won the contest.
Later, Tartaglia was persuaded by Gerolamo Cardano (1501–1576) to reveal his secret for solving cubic equations. In 1539, Tartaglia did so only on the condition that Cardano would never reveal it and that if he did write a book about cubics, he would give Tartaglia time to publish. Some years later, Cardano learned about del Ferro's prior work and published del Ferro's method in his book Ars Magna in 1545, meaning Cardano gave Tartaglia six years to publish his results (with credit given to Tartaglia for an independent solution).
Cardano's promise to Tartaglia said that he would not publish Tartaglia's work, and Cardano felt he was publishing del Ferro's, so as to get around the promise. Nevertheless, this led to a challenge to Cardano from Tartaglia, which Cardano denied. The challenge was eventually accepted by Cardano's student Lodovico Ferrari (1522–1565). Ferrari did better than Tartaglia in the competition, and Tartaglia lost both his prestige and his income.
Cardano noticed that Tartaglia's method sometimes required him to extract the square root of a negative number. He even included a calculation with these complex numbers in Ars Magna, but he did not really understand it. Rafael Bombelli studied this issue in detail and is therefore often considered as the discoverer of complex numbers.
François Viète (1540–1603) independently derived the trigonometric solution for the cubic with three real roots, and René Descartes (1596–1650) extended the work of Viète.
== Factorization ==
If the coefficients of a cubic equation are rational numbers, one can obtain an equivalent equation with integer coefficients, by multiplying all coefficients by a common multiple of their denominators. Such an equation ax^3 + bx^2 + cx + d = 0, with integer coefficients, is said to be reducible if the polynomial on the left-hand side is the product of polynomials of lower degrees. By Gauss's lemma, if the equation is reducible, one can suppose that the factors have integer coefficients.
Finding the roots of a reducible cubic equation is easier than solving the general case. In fact, if the equation is reducible, one of the factors must have degree one, and thus have the form qx − p, with q and p being coprime integers. The rational root test allows finding q and p by examining a finite number of cases (because q must be a divisor of a, and p must be a divisor of d).
Thus, one root is x1 = p/q, and the other roots are the roots of the other factor, which can be found by polynomial long division. This other factor is
(a/q) x^2 + ((bq + ap)/q^2) x + (cq^2 + bpq + ap^2)/q^3.
(The coefficients seem not to be integers, but must be integers if p/q is a root.)
Then, the other roots are the roots of this quadratic polynomial and can be found by using the quadratic formula.
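A brute-force sketch of the rational root test for integer-coefficient cubics (the function name and the example polynomial are my own, and the sketch assumes d ≠ 0):

```python
from fractions import Fraction

def rational_roots(a, b, c, d):
    """Rational roots of a*x^3 + b*x^2 + c*x + d (integer coefficients, d != 0):
    any root p/q in lowest terms has p dividing d and q dividing a."""
    def divisors(n):
        n = abs(n)
        return [k for k in range(1, n + 1) if n % k == 0]
    roots = set()
    for p in divisors(d):
        for q in divisors(a):
            for x in (Fraction(p, q), Fraction(-p, q)):
                if a * x**3 + b * x**2 + c * x + d == 0:
                    roots.add(x)
    return sorted(roots)

# 2x^3 - 3x^2 - 3x + 2 = (2x - 1)(x + 1)(x - 2): roots -1, 1/2, 2.
print(rational_roots(2, -3, -3, 2))
```

Once a root p/q is found, the quadratic cofactor above can be solved with the quadratic formula, as the text describes.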
== Depressed cubic ==
Cubics of the form t^3 + pt + q are said to be depressed. They are much simpler than general cubics, but are fundamental, because the study of any cubic may be reduced by a simple change of variable to that of a depressed cubic.
Let ax^3 + bx^2 + cx + d = 0 be a cubic equation. The change of variable x = t − b/(3a) gives a cubic (in t) that has no term in t^2.
After dividing by a one gets the depressed cubic equation t^3 + pt + q = 0, with
t = x + b/(3a)
p = (3ac − b^2)/(3a^2)
q = (2b^3 − 9abc + 27a^2 d)/(27a^3).
The roots x1, x2, x3 of the original equation are related to the roots t1, t2, t3 of the depressed equation by the relations xi = ti − b/(3a), for i = 1, 2, 3.
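The substitution is mechanical; here is a minimal sketch (function name mine) that computes p and q for a given cubic:

```python
def depress(a, b, c, d):
    """Substitute x = t - b/(3a) into a*x^3 + b*x^2 + c*x + d = 0 and divide
    by a, yielding the depressed form t^3 + p*t + q = 0."""
    p = (3 * a * c - b ** 2) / (3 * a ** 2)
    q = (2 * b ** 3 - 9 * a * b * c + 27 * a ** 2 * d) / (27 * a ** 3)
    return p, q

# x^3 - 6x^2 + 11x - 6 = 0 (roots 1, 2, 3) depresses via x = t + 2:
p, q = depress(1, -6, 11, -6)
print(p, q)  # -1.0 0.0, i.e. t^3 - t = 0 with roots -1, 0, 1, so x = 1, 2, 3
```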
== Discriminant and nature of the roots ==
The nature (real or not, distinct or not) of the roots of a cubic can be determined without computing them explicitly, by using the discriminant.
=== Discriminant ===
The discriminant of a polynomial is a function of its coefficients that is zero if and only if the polynomial has a multiple root, or, equivalently, if it is divisible by the square of a non-constant polynomial. In other words, the discriminant is nonzero if and only if the polynomial is square-free.
If r1, r2, r3 are the three roots (not necessarily distinct nor real) of the cubic ax^3 + bx^2 + cx + d, then the discriminant is
a^4 (r1 − r2)^2 (r1 − r3)^2 (r2 − r3)^2.
The discriminant of the depressed cubic t^3 + pt + q is −(4p^3 + 27q^2).
The discriminant of the general cubic ax^3 + bx^2 + cx + d is
18abcd − 4b^3 d + b^2 c^2 − 4ac^3 − 27a^2 d^2.
It is the product of a^4 and the discriminant of the corresponding depressed cubic. Using the formula relating the general cubic and the associated depressed cubic, this implies that the discriminant of the general cubic can be written as
(4(b^2 − 3ac)^3 − (2b^3 − 9abc + 27a^2 d)^2) / (27a^2).
It follows that one of these two discriminants is zero if and only if the other is also zero, and, if the coefficients are real, the two discriminants have the same sign. In summary, the same information can be deduced from either one of these two discriminants.
To prove the preceding formulas, one can use Vieta's formulas to express everything as polynomials in r1, r2, r3, and a. The proof then results in the verification of the equality of two polynomials.
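A small sketch (function name mine) that evaluates the coefficient form of the discriminant and checks it against the root-difference form on an example:

```python
def cubic_discriminant(a, b, c, d):
    # Discriminant of a*x^3 + b*x^2 + c*x + d (exact for integer inputs).
    return (18 * a * b * c * d - 4 * b**3 * d + b**2 * c**2
            - 4 * a * c**3 - 27 * a**2 * d**2)

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3; the root-difference formula gives
# a^4 (1-2)^2 (1-3)^2 (2-3)^2 = 4, matching the coefficient formula.
print(cubic_discriminant(1, -6, 11, -6))  # 4 (positive: three distinct real roots)
print(cubic_discriminant(1, 0, 0, 1))     # -27 (negative: one real root)
```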
=== Nature of the roots ===
If the coefficients of a polynomial are real numbers, and its discriminant Δ is not zero, there are two cases:
If Δ > 0, the cubic has three distinct real roots.
If Δ < 0, the cubic has one real root and two non-real complex conjugate roots.
This can be proved as follows. First, if r is a root of a polynomial with real coefficients, then its complex conjugate is also a root. So the non-real roots, if any, occur as pairs of complex conjugate roots. As a cubic polynomial has three roots (not necessarily distinct) by the fundamental theorem of algebra, at least one root must be real.
As stated above, if r1, r2, r3 are the three roots of the cubic ax^3 + bx^2 + cx + d, then the discriminant is
Δ = a^4 (r1 − r2)^2 (r1 − r3)^2 (r2 − r3)^2.
If the three roots are real and distinct, the discriminant is a product of positive reals, that is Δ > 0.
If only one root, say r1, is real, then r2 and r3 are complex conjugates, which implies that r2 − r3 is a purely imaginary number, and thus that (r2 − r3)^2 is real and negative. On the other hand, r1 − r2 and r1 − r3 are complex conjugates, and their product is real and positive. Thus the discriminant is the product of a single negative number and several positive ones. That is Δ < 0.
=== Multiple root ===
If the discriminant of a cubic is zero, the cubic has a multiple root. If furthermore its coefficients are real, then all of its roots are real.
The discriminant of the depressed cubic t^3 + pt + q is zero if 4p^3 + 27q^2 = 0.
If p is also zero, then p = q = 0, and 0 is a triple root of the cubic. If 4p^3 + 27q^2 = 0 and p ≠ 0, then the cubic has a simple root t1 = 3q/p and a double root t2 = t3 = −3q/(2p).
In other words,
t^3 + pt + q = (t − 3q/p)(t + 3q/(2p))^2.
This result can be proved by expanding the latter product or retrieved by solving the rather simple system of equations resulting from Vieta's formulas.
By using the reduction of a depressed cubic, these results can be extended to the general cubic. This gives: If the discriminant of the cubic ax^3 + bx^2 + cx + d is zero, then
either, if b^2 = 3ac, the cubic has a triple root x1 = x2 = x3 = −b/(3a), and ax^3 + bx^2 + cx + d = a(x + b/(3a))^3;
or, if b^2 ≠ 3ac, the cubic has a double root x2 = x3 = (9ad − bc)/(2(b^2 − 3ac)) and a simple root x1 = (4abc − 9a^2 d − b^3)/(a(b^2 − 3ac)), and thus ax^3 + bx^2 + cx + d = a(x − x1)(x − x2)^2.
=== Characteristic 2 and 3 ===
The above results are valid when the coefficients belong to a field of characteristic other than 2 or 3, but must be modified for characteristic 2 or 3, because of the involved divisions by 2 and 3.
The reduction to a depressed cubic works for characteristic 2, but not for characteristic 3. However, in both cases, it is simpler to establish and state the results for the general cubic. The main tool for that is the fact that a multiple root is a common root of the polynomial and its formal derivative. In these characteristics, if the derivative is not a constant, it is a linear polynomial in characteristic 3, and is the square of a linear polynomial in characteristic 2. Therefore, for either characteristic 2 or 3, the derivative has only one root. This allows computing the multiple root, and the third root can be deduced from the sum of the roots, which is provided by Vieta's formulas.
A difference with other characteristics is that, in characteristic 2, the formula for a double root involves a square root, and, in characteristic 3, the formula for a triple root involves a cube root.
== Cardano's formula ==
Gerolamo Cardano is credited with publishing the first formula for solving cubic equations, attributing it to Scipione del Ferro and Niccolò Fontana Tartaglia. The formula applies to depressed cubics, but, as shown in § Depressed cubic, it allows solving all cubic equations.
Cardano's result is that if t^3 + pt + q = 0 is a cubic equation such that p and q are real numbers such that q^2/4 + p^3/27 is positive (this implies that the discriminant of the equation is negative) then the equation has the real root
∛u1 + ∛u2,
where u1 and u2 are the two numbers
u1 = −q/2 + √(q^2/4 + p^3/27) and u2 = −q/2 − √(q^2/4 + p^3/27).
See § Derivation of the roots, below, for several methods for getting this result.
As shown in § Nature of the roots, the two other roots are non-real complex conjugate numbers, in this case. It was later shown (Cardano did not know complex numbers) that the two other roots are obtained by multiplying one of the cube roots by the primitive cube root of unity ε1 = (−1 + i√3)/2, and the other cube root by the other primitive cube root of unity ε2 = ε1^2 = (−1 − i√3)/2. That is, the other roots of the equation are
ε1 ∛u1 + ε2 ∛u2 and ε2 ∛u1 + ε1 ∛u2.
If 4p^3 + 27q^2 < 0, there are three real roots, but Galois theory allows proving that, if there is no rational root, the roots cannot be expressed by an algebraic expression involving only real numbers. Therefore, the equation cannot be solved in this case with the knowledge of Cardano's time. This case has thus been called casus irreducibilis, meaning irreducible case in Latin.
In casus irreducibilis, Cardano's formula can still be used, but some care is needed in the use of cube roots. A first method is to define the symbols √ and ∛ as representing the principal values of the root function (that is, the root that has the largest real part). With this convention Cardano's formula for the three roots remains valid, but is not purely algebraic, as the definition of a principal part is not purely algebraic, since it involves inequalities for comparing real parts. Also, the use of the principal cube root may give a wrong result if the coefficients are non-real complex numbers. Moreover, if the coefficients belong to another field, the principal cube root is not defined in general.
The second way of making Cardano's formula always correct is to remark that the product of the two cube roots must be −p/3. It results that a root of the equation is
C − p/(3C), with C = ∛(−q/2 + √(q^2/4 + p^3/27)).
In this formula, the symbols √ and ∛ denote any square root and any cube root. The other roots of the equation are obtained either by changing the choice of cube root or, equivalently, by multiplying the cube root by a primitive cube root of unity, that is (−1 ± √−3)/2.
This formula for the roots is always correct except when p = q = 0, with the proviso that if p = 0, the square root is chosen so that C ≠ 0. However, Cardano's formula is useless if p = 0, as the roots are then simply the cube roots of −q. Similarly, the formula is also useless in the cases where no cube root is needed, that is when the cubic polynomial is not irreducible; this includes the case 4p^3 + 27q^2 = 0.
This formula is also correct when p and q belong to any field of characteristic other than 2 or 3.
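A numerical sketch of Cardano's formula in the single-real-root case (helper names mine; the cube-root helper avoids Python's complex result when raising a negative number to the power 1/3):

```python
from math import sqrt, copysign

def real_cbrt(v):
    # Real cube root, valid for negative v (v ** (1/3) would go complex).
    return copysign(abs(v) ** (1 / 3), v)

def cardano_real_root(p, q):
    """One real root of t^3 + p*t + q = 0 when q^2/4 + p^3/27 > 0
    (negative discriminant, hence exactly one real root)."""
    delta = q**2 / 4 + p**3 / 27
    assert delta > 0, "this form of the formula needs a negative discriminant"
    return real_cbrt(-q / 2 + sqrt(delta)) + real_cbrt(-q / 2 - sqrt(delta))

# Cardano's classic example: t^3 + 6t = 20, i.e. t^3 + 6t - 20 = 0, root t = 2.
print(cardano_real_root(6, -20))  # ~2.0
```

In casus irreducibilis (4p^3 + 27q^2 < 0) this real-arithmetic version fails by design, since the intermediate square root becomes imaginary; complex arithmetic or the trigonometric method below is then needed.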
== General cubic formula ==
A cubic formula for the roots of the general cubic equation (with a ≠ 0) ax^3 + bx^2 + cx + d = 0
can be deduced from every variant of Cardano's formula by reduction to a depressed cubic. The variant that is presented here is valid not only for complex coefficients, but also for coefficients a, b, c, d belonging to any algebraically closed field of characteristic other than 2 or 3. If the coefficients are real numbers, the formula covers all complex solutions, not just real ones.
Because the formula is rather complicated, it is worth splitting it into smaller formulas.
Let
Δ0 = b^2 − 3ac,
Δ1 = 2b^3 − 9abc + 27a^2 d.
(Both Δ0 and Δ1 can be expressed as resultants of the cubic and its derivatives: Δ1 is −1/(8a) times the resultant of the cubic and its second derivative, and Δ0 is −1/(12a) times the resultant of the first and second derivatives of the cubic polynomial.)
Then let
C = ∛((Δ1 ± √(Δ1^2 − 4Δ0^3)) / 2),
where the symbols √ and ∛ are interpreted as any square root and any cube root, respectively (every nonzero complex number has two square roots and three cube roots). The sign "±" before the square root is either "+" or "−"; the choice is almost arbitrary, and changing it amounts to choosing a different square root. However, if a choice yields C = 0 (this occurs if Δ0 = 0), then the other sign must be selected instead. If both choices yield C = 0, that is, if Δ0 = Δ1 = 0, a fraction 0/0 occurs in the following formulas; this fraction must be interpreted as equal to zero (see the end of this section).
With these conventions, one of the roots is
x = −(b + C + Δ0/C) / (3a).
The other two roots can be obtained by changing the choice of the cube root in the definition of C, or, equivalently, by multiplying C by a primitive cube root of unity, that is (−1 ± √−3)/2. In other words, the three roots are
xk = −(b + ξ^k C + Δ0/(ξ^k C)) / (3a), for k ∈ {0, 1, 2},
where ξ = (−1 + √−3)/2.
As for the special case of a depressed cubic, this formula applies but is useless when the roots can be expressed without cube roots. In particular, if Δ0 = Δ1 = 0, the formula gives that the three roots equal −b/(3a), which means that the cubic polynomial can be factored as a(x + b/(3a))^3. A straightforward computation allows verifying that the existence of this factorization is equivalent with Δ0 = Δ1 = 0.
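The general formula translates directly into complex arithmetic; this sketch (function name mine) returns all three roots, using the principal complex cube root and switching the sign choice when it would make C vanish:

```python
import cmath

def cubic_roots(a, b, c, d):
    """All three complex roots of a*x^3 + b*x^2 + c*x + d = 0 (a != 0), via
    x_k = -(b + xi**k * C + D0 / (xi**k * C)) / (3a) for k = 0, 1, 2."""
    d0 = b**2 - 3 * a * c
    d1 = 2 * b**3 - 9 * a * b * c + 27 * a**2 * d
    inner = cmath.sqrt(d1**2 - 4 * d0**3)
    C = ((d1 + inner) / 2) ** (1 / 3)     # principal complex cube root
    if C == 0:                            # wrong sign choice: try the other one
        C = ((d1 - inner) / 2) ** (1 / 3)
    if C == 0:                            # d0 = d1 = 0: triple root
        return [-b / (3 * a)] * 3
    xi = complex(-0.5, 3 ** 0.5 / 2)      # primitive cube root of unity
    return [-(b + xi**k * C + d0 / (xi**k * C)) / (3 * a) for k in range(3)]

roots = cubic_roots(1, -6, 11, -6)  # roots 1, 2, 3 in some order
print(sorted(r.real for r in roots))
```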
== Trigonometric and hyperbolic solutions ==
=== Trigonometric solution for three real roots ===
When a cubic equation with real coefficients has three real roots, the formulas expressing these roots in terms of radicals involve complex numbers. Galois theory allows proving that when the three roots are real, and none is rational (casus irreducibilis), one cannot express the roots in terms of real radicals. Nevertheless, purely real expressions of the solutions may be obtained using trigonometric functions, specifically in terms of cosines and arccosines. More precisely, the roots of the depressed cubic
{\displaystyle t^{3}+pt+q=0}
are
{\displaystyle t_{k}=2\,{\sqrt {-{\frac {p}{3}}}}\,\cos \left[\,{\frac {1}{3}}\arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\,\right)-{\frac {2\pi k}{3}}\,\right]\qquad {\text{for }}k=0,1,2.}
This formula is due to François Viète. It is purely real when the equation has three real roots (that is
{\displaystyle 4p^{3}+27q^{2}<0}
). Otherwise, it is still correct but involves complex cosines and arccosines when there is only one real root, and it is nonsensical (division by zero) when p = 0.
This formula can be straightforwardly transformed into a formula for the roots of a general cubic equation, using the back-substitution described in § Depressed cubic.
The formula can be proved as follows: Starting from the equation t3 + pt + q = 0, let us set t = u cos θ. The idea is to choose u to make the equation coincide with the identity
{\displaystyle 4\cos ^{3}\theta -3\cos \theta -\cos(3\theta )=0.}
For this, choose
{\displaystyle u=2\,{\sqrt {-{\frac {p}{3}}}}\,,}
and divide the equation by
{\displaystyle {\frac {u^{3}}{4}}.}
This gives
{\displaystyle 4\cos ^{3}\theta -3\cos \theta -{\frac {3q}{2p}}\,{\sqrt {\frac {-3}{p}}}=0.}
Combining with the above identity, one gets
{\displaystyle \cos(3\theta )={\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\,,}
and the roots are thus
{\displaystyle t_{k}=2\,{\sqrt {-{\frac {p}{3}}}}\,\cos \left[{\frac {1}{3}}\arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\right)-{\frac {2\pi k}{3}}\right]\qquad {\text{for }}k=0,1,2.}
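Viète's formula is directly computable with real arithmetic. A minimal sketch (the function name is ours) for the three-real-roots case:

```python
import math

def depressed_real_roots(p, q):
    """Viete's trigonometric formula for the three real roots of
    t^3 + p*t + q = 0, valid when 4p^3 + 27q^2 < 0 (which forces p < 0)."""
    assert 4*p**3 + 27*q**2 < 0
    m = 2 * math.sqrt(-p / 3)                               # scale factor
    theta = math.acos(3*q / (2*p) * math.sqrt(-3 / p)) / 3  # one third of the angle
    return [m * math.cos(theta - 2*math.pi*k/3) for k in range(3)]
```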
=== Hyperbolic solution for one real root ===
When there is only one real root (and p ≠ 0), this root can be similarly represented using hyperbolic functions, as
{\displaystyle {\begin{aligned}t_{0}&=-2{\frac {|q|}{q}}{\sqrt {-{\frac {p}{3}}}}\cosh \left[{\frac {1}{3}}\operatorname {arcosh} \left({\frac {-3|q|}{2p}}{\sqrt {\frac {-3}{p}}}\right)\right]\qquad {\text{if }}~4p^{3}+27q^{2}>0~{\text{ and }}~p<0,\\t_{0}&=-2{\sqrt {\frac {p}{3}}}\sinh \left[{\frac {1}{3}}\operatorname {arsinh} \left({\frac {3q}{2p}}{\sqrt {\frac {3}{p}}}\right)\right]\qquad {\text{if }}~p>0.\end{aligned}}}
If p ≠ 0 and the inequalities on the right are not satisfied (the case of three real roots), the formulas remain valid but involve complex quantities.
When p = ±3, the above values of t0 are sometimes called the Chebyshev cube root. More precisely, the values involving cosines and hyperbolic cosines define, when p = −3, the same analytic function denoted C1/3(q), which is the proper Chebyshev cube root. The value involving hyperbolic sines is similarly denoted S1/3(q), when p = 3.
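The two hyperbolic formulas above translate directly into code. A sketch (function name ours) for the single-real-root case, assuming p ≠ 0:

```python
import math

def depressed_single_real_root(p, q):
    """The unique real root of t^3 + p*t + q = 0 when 4p^3 + 27q^2 > 0,
    via hyperbolic functions (requires p != 0)."""
    if p > 0:
        return -2 * math.sqrt(p/3) * math.sinh(
            math.asinh(3*q / (2*p) * math.sqrt(3/p)) / 3)
    # here p < 0 and 4p^3 + 27q^2 > 0, so q != 0
    return -2 * math.copysign(1, q) * math.sqrt(-p/3) * math.cosh(
        math.acosh(-3*abs(q) / (2*p) * math.sqrt(-3/p)) / 3)
```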
== Geometric solutions ==
=== Omar Khayyám's solution ===
For solving the cubic equation x3 + m2x = n where n > 0, Omar Khayyám constructed the parabola y = x2/m, the circle that has as a diameter the line segment [0, n/m2] on the positive x-axis, and a vertical line through the point where the circle and the parabola intersect above the x-axis. The solution is given by the length of the horizontal line segment from the origin to the intersection of the vertical line and the x-axis (see the figure).
A simple modern proof is as follows. Multiplying the equation by x/m2 and regrouping the terms gives
{\displaystyle {\frac {x^{4}}{m^{2}}}=x\left({\frac {n}{m^{2}}}-x\right).}
The left-hand side is the value of y2 on the parabola. The equation of the circle being y2 + x(x − n/m2) = 0, the right hand side is the value of y2 on the circle.
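This proof can be checked numerically: substituting the parabola's y = x²/m into the circle's equation and locating the positive intersection by bisection recovers the root of the cubic. The helper below is our own sketch, not from the article:

```python
def khayyam_intersection(m, n, tol=1e-12):
    """Positive abscissa where the parabola y = x^2/m meets the circle
    y^2 + x(x - n/m^2) = 0; by Khayyam's construction it solves
    x^3 + m^2*x = n."""
    # circle equation with y taken from the parabola
    g = lambda x: (x*x/m)**2 + x*(x - n/m**2)
    lo, hi = 1e-12, n / m**2          # the intersection lies in (0, n/m^2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```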
=== Solution with angle trisector ===
A cubic equation with real coefficients can be solved geometrically using compass, straightedge, and an angle trisector if and only if it has three real roots.
A cubic equation can be solved by compass-and-straightedge construction (without trisector) if and only if it has a rational root. This implies that the old problems of angle trisection and doubling the cube, set by ancient Greek mathematicians, cannot be solved by compass-and-straightedge construction.
== Geometric interpretation of the roots ==
=== Three real roots ===
Viète's trigonometric expression of the roots in the three-real-roots case lends itself to a geometric interpretation in terms of a circle. When the cubic is written in depressed form (2), t3 + pt + q = 0, as shown above, the solution can be expressed as
{\displaystyle t_{k}=2{\sqrt {-{\frac {p}{3}}}}\cos \left({\frac {1}{3}}\arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\right)-k{\frac {2\pi }{3}}\right)\quad {\text{for}}\quad k=0,1,2\,.}
Here
{\displaystyle \arccos \left({\frac {3q}{2p}}{\sqrt {\frac {-3}{p}}}\right)}
is an angle in the unit circle; taking 1/3 of that angle corresponds to taking a cube root of a complex number; adding −k2π/3 for k = 1, 2 finds the other cube roots; and multiplying the cosines of these resulting angles by
{\displaystyle 2{\sqrt {-{\frac {p}{3}}}}}
corrects for scale.
For the non-depressed case (1) (shown in the accompanying graph), the depressed case as indicated previously is obtained by defining t such that x = t − b/3a so t = x + b/3a. Graphically this corresponds to simply shifting the graph horizontally when changing between the variables t and x, without changing the angle relationships. This shift moves the point of inflection and the centre of the circle onto the y-axis. Consequently, the roots of the equation in t sum to zero.
=== One real root ===
==== In the Cartesian plane ====
When the graph of a cubic function is plotted in the Cartesian plane, if there is only one real root, it is the abscissa (x-coordinate) of the horizontal intercept of the curve (point R on the figure). Further, if the complex conjugate roots are written as g ± hi, then the real part g is the abscissa of the tangency point H of the tangent line to the cubic that passes through the x-intercept R of the cubic (that is the signed length OM, negative on the figure). The imaginary parts ±h are the square roots of the tangent of the angle between this tangent line and the horizontal axis.
==== In the complex plane ====
With one real and two complex roots, the three roots can be represented as points in the complex plane, as can the two roots of the cubic's derivative. There is an interesting geometrical relationship among all these roots.
The points in the complex plane representing the three roots serve as the vertices of an isosceles triangle. (The triangle is isosceles because one root is on the horizontal (real) axis and the other two roots, being complex conjugates, appear symmetrically above and below the real axis.) Marden's theorem says that the points representing the roots of the derivative of the cubic are the foci of the Steiner inellipse of the triangle—the unique ellipse that is tangent to the triangle at the midpoints of its sides. If the angle at the vertex on the real axis is less than π/3 then the major axis of the ellipse lies on the real axis, as do its foci and hence the roots of the derivative. If that angle is greater than π/3, the major axis is vertical and its foci, the roots of the derivative, are complex conjugates. And if that angle is π/3, the triangle is equilateral, the Steiner inellipse is simply the triangle's incircle, its foci coincide with each other at the incenter, which lies on the real axis, and hence the derivative has duplicate real roots.
== Galois group ==
Given a cubic irreducible polynomial over a field K of characteristic different from 2 and 3, the Galois group over K is the group of automorphisms of the splitting field (the smallest extension of K over which the polynomial splits into linear factors) that fix K. As these automorphisms must permute the roots of the polynomial, this group is either the group S3 of all six permutations of the three roots, or the group A3 of the three circular permutations.
The discriminant Δ of the cubic is the square of
{\displaystyle {\sqrt {\Delta }}=a^{2}(r_{1}-r_{2})(r_{1}-r_{3})(r_{2}-r_{3}),}
where a is the leading coefficient of the cubic, and r1, r2 and r3 are the three roots of the cubic. As {\displaystyle {\sqrt {\Delta }}} changes sign if two roots are exchanged, {\displaystyle {\sqrt {\Delta }}} is fixed by the Galois group only if the Galois group is A3. In other words, the Galois group is A3 if and only if the discriminant is the square of an element of K.
As most integers are not squares, when working over the field Q of the rational numbers, the Galois group of most irreducible cubic polynomials is the group S3 with six elements. An example of a Galois group A3 with three elements is given by p(x) = x3 − 3x − 1, whose discriminant is 81 = 92.
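For a depressed cubic t³ + pt + q the discriminant is −4p³ − 27q², so the example above is easy to check; the helper name below is ours:

```python
def depressed_discriminant(p, q):
    """Discriminant of t^3 + p*t + q, namely -4p^3 - 27q^2."""
    return -4*p**3 - 27*q**2

# x^3 - 3x - 1 (p = -3, q = -1) has discriminant 81 = 9^2, a rational
# square, so its Galois group over Q is A3; x^3 - 2 (p = 0, q = -2)
# has discriminant -108, not a square, so its Galois group is S3.
```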
== Derivation of the roots ==
This section regroups several methods for deriving Cardano's formula.
=== Cardano's method ===
This method is due to Scipione del Ferro and Tartaglia, but is named after Gerolamo Cardano who first published it in his book Ars Magna (1545).
This method applies to a depressed cubic t3 + pt + q = 0. The idea is to introduce two variables u and v such that u + v = t and to substitute this in the depressed cubic, giving
{\displaystyle u^{3}+v^{3}+(3uv+p)(u+v)+q=0.}
At this point Cardano imposed the condition
{\displaystyle 3uv+p=0.}
This removes the third term in the previous equality, leading to the system of equations
{\displaystyle {\begin{aligned}u^{3}+v^{3}&=-q\\uv&=-{\frac {p}{3}}.\end{aligned}}}
Knowing the sum and the product of u3 and v3, one deduces that they are the two solutions of the quadratic equation
{\displaystyle {\begin{aligned}0&=(x-u^{3})(x-v^{3})\\&=x^{2}-(u^{3}+v^{3})x+u^{3}v^{3}\\&=x^{2}-(u^{3}+v^{3})x+(uv)^{3}\end{aligned}}}
so
{\displaystyle x^{2}+qx-{\frac {p^{3}}{27}}=0.}
The discriminant of this equation is
{\displaystyle \Delta =q^{2}+{\frac {4p^{3}}{27}},}
and, assuming it is positive, the real solutions of this equation are (after folding division by 4 under the square root):
{\displaystyle -{\frac {q}{2}}\pm {\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}.}
So (without loss of generality in choosing u or v):
{\displaystyle u={\sqrt[{3}]{-{\frac {q}{2}}+{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}}}.}
{\displaystyle v={\sqrt[{3}]{-{\frac {q}{2}}-{\sqrt {{\frac {q^{2}}{4}}+{\frac {p^{3}}{27}}}}}}.}
As u + v = t, the sum of these cube roots is a root of the depressed cubic:
{\displaystyle t={\sqrt[{3}]{-{q \over 2}+{\sqrt {{q^{2} \over 4}+{p^{3} \over 27}}}}}+{\sqrt[{3}]{-{q \over 2}-{\sqrt {{q^{2} \over 4}+{p^{3} \over 27}}}}}}
This is Cardano's formula.
This works well when
{\displaystyle 4p^{3}+27q^{2}>0,}
but, if
{\displaystyle 4p^{3}+27q^{2}<0,}
the square root appearing in the formula is not real. As a complex number has three cube roots, using Cardano's formula without care would provide nine roots, while a cubic equation cannot have more than three roots. This was clarified first by Rafael Bombelli in his book L'Algebra (1572). The solution is to use the fact that
{\displaystyle uv=-{\frac {p}{3}},}
that is,
{\displaystyle v={\frac {-p}{3u}}.}
This means that only one cube root needs to be computed, and leads to the second formula given in § Cardano's formula.
The other roots of the equation can be obtained by changing the choice of cube root, or, equivalently, by multiplying the cube root by each of the two primitive cube roots of unity, which are
{\displaystyle {\frac {-1\pm {\sqrt {-3}}}{2}}.}
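Bombelli's remedy, computing a single cube root u and recovering v as −p/(3u), can be sketched as follows (function name ours; when the square root is not real, complex arithmetic keeps the sum u + v real):

```python
import cmath

def cardano_depressed(p, q):
    """One root of t^3 + p*t + q = 0 by Cardano's method: compute a single
    cube root u and recover v = -p/(3u)."""
    s = cmath.sqrt(q*q/4 + p**3/27)
    u = (-q/2 + s) ** (1/3)      # principal cube root, possibly complex
    if u == 0:                   # happens only when p = q = 0
        return 0.0
    v = -p / (3*u)               # uv = -p/3, so u^3 + v^3 = -q
    return u + v
```

The returned value may be any one of the three roots (and may carry a negligible imaginary part from rounding); the other two follow by multiplying u by the primitive cube roots of unity.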
=== Vieta's substitution ===
Vieta's substitution is a method introduced by François Viète (Vieta is his Latin name) in a text published posthumously in 1615, which provides directly the second formula of § Cardano's method, and avoids the problem of computing two different cube roots.
Starting from the depressed cubic t3 + pt + q = 0, Vieta's substitution is t = w − p/3w, which transforms the depressed cubic into
{\displaystyle w^{3}+q-{\frac {p^{3}}{27w^{3}}}=0.}
Multiplying by w3, one gets a quadratic equation in w3:
{\displaystyle (w^{3})^{2}+q(w^{3})-{\frac {p^{3}}{27}}=0.}
Let
{\displaystyle W=-{\frac {q}{2}}\pm {\sqrt {{\frac {p^{3}}{27}}+{\frac {q^{2}}{4}}}}}
be any nonzero root of this quadratic equation. If w1, w2 and w3 are the three cube roots of W, then the roots of the original depressed cubic are w1 − p/3w1, w2 − p/3w2, and w3 − p/3w3. The other root of the quadratic equation is
{\displaystyle \textstyle -{\frac {p^{3}}{27W}}.}
This implies that changing the sign of the square root exchanges wi and − p/3wi for i = 1, 2, 3, and therefore does not change the roots. This method only fails when both roots of the quadratic equation are zero, that is when p = q = 0, in which case the only root of the depressed cubic is 0.
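Vieta's substitution needs only one cube root of one quadratic root W. A minimal sketch under those assumptions (function name ours):

```python
import cmath, math

def vieta_roots(p, q):
    """All three roots of t^3 + p*t + q = 0 via Vieta's substitution
    t = w - p/(3w): w^3 is a nonzero root W of z^2 + q*z - p^3/27 = 0."""
    if p == 0 and q == 0:
        return [0.0, 0.0, 0.0]                  # only case where both W are zero
    W = -q/2 + cmath.sqrt(q*q/4 + p**3/27)
    if W == 0:                                  # take the other quadratic root
        W = -q/2 - cmath.sqrt(q*q/4 + p**3/27)
    w = W ** (1/3)                              # one cube root of W
    xi = cmath.exp(2j * math.pi / 3)            # primitive cube root of unity
    return [xi**k * w - p / (3 * xi**k * w) for k in range(3)]
```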
=== Lagrange's method ===
In his paper Réflexions sur la résolution algébrique des équations ("Thoughts on the algebraic solving of equations"), Joseph Louis Lagrange introduced a new method to solve equations of low degree in a uniform way, with the hope that he could generalize it for higher degrees. This method works well for cubic and quartic equations, but Lagrange did not succeed in applying it to a quintic equation, because it requires solving a resolvent polynomial of degree at least six.
Apart from the fact that nobody had previously succeeded, this was the first indication of the non-existence of an algebraic formula for degrees 5 and higher, as was later proved by the Abel–Ruffini theorem. Nevertheless, modern methods for solving solvable quintic equations are mainly based on Lagrange's method.
In the case of cubic equations, Lagrange's method gives the same solution as Cardano's. Lagrange's method can be applied directly to the general cubic equation ax3 + bx2 + cx + d = 0, but the computation is simpler with the depressed cubic equation, t3 + pt + q = 0.
Lagrange's main idea was to work with the discrete Fourier transform of the roots instead of with the roots themselves. More precisely, let ξ be a primitive third root of unity, that is a number such that ξ3 = 1 and ξ2 + ξ + 1 = 0 (when working in the space of complex numbers, one has
{\displaystyle \textstyle \xi ={\frac {-1\pm i{\sqrt {3}}}{2}}=e^{2i\pi /3},}
but this complex interpretation is not used here). Denoting x0, x1 and x2 the three roots of the cubic equation to be solved, let
{\displaystyle {\begin{aligned}s_{0}&=x_{0}+x_{1}+x_{2},\\s_{1}&=x_{0}+\xi x_{1}+\xi ^{2}x_{2},\\s_{2}&=x_{0}+\xi ^{2}x_{1}+\xi x_{2},\end{aligned}}}
be the discrete Fourier transform of the roots. If s0, s1 and s2 are known, the roots may be recovered from them with the inverse Fourier transform consisting of inverting this linear transformation; that is,
{\displaystyle {\begin{aligned}x_{0}&={\tfrac {1}{3}}(s_{0}+s_{1}+s_{2}),\\x_{1}&={\tfrac {1}{3}}(s_{0}+\xi ^{2}s_{1}+\xi s_{2}),\\x_{2}&={\tfrac {1}{3}}(s_{0}+\xi s_{1}+\xi ^{2}s_{2}).\end{aligned}}}
By Vieta's formulas, s0 is known to be zero in the case of a depressed cubic, and −b/a for the general cubic. So, only s1 and s2 need to be computed. They are not symmetric functions of the roots (exchanging x1 and x2 exchanges also s1 and s2), but some simple symmetric functions of s1 and s2 are also symmetric in the roots of the cubic equation to be solved. Thus these symmetric functions can be expressed in terms of the (known) coefficients of the original cubic, and this allows eventually expressing the si as roots of a polynomial with known coefficients. This works well for every degree, but, in degrees higher than four, the resulting polynomial that has the si as roots has a degree higher than that of the initial polynomial, and is therefore unhelpful for solving. This is the reason for which Lagrange's method fails in degrees five and higher.
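The transform and its inverse above are a size-3 discrete Fourier transform, which is easy to verify numerically; the helper names below are ours:

```python
import cmath, math

xi = cmath.exp(2j * math.pi / 3)   # primitive cube root of unity

def dft3(x0, x1, x2):
    """Discrete Fourier transform of the three roots, as in Lagrange's method."""
    return (x0 + x1 + x2,
            x0 + xi*x1 + xi**2*x2,
            x0 + xi**2*x1 + xi*x2)

def idft3(s0, s1, s2):
    """Inverse transform recovering the roots from s0, s1, s2."""
    return ((s0 + s1 + s2) / 3,
            (s0 + xi**2*s1 + xi*s2) / 3,
            (s0 + xi*s1 + xi**2*s2) / 3)
```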
In the case of a cubic equation, P = s1s2 and S = s13 + s23 are such symmetric polynomials (see below). It follows that s13 and s23 are the two roots of the quadratic equation
{\displaystyle z^{2}-Sz+P^{3}=0.}
Thus the resolution of the equation may be finished exactly as with Cardano's method, with s1 and s2 in place of u and v.
In the case of the depressed cubic, one has x0 = (1/3)(s1 + s2) and s1s2 = −3p, while in Cardano's method we have set x0 = u + v and uv = −(1/3)p. Thus, up to the exchange of u and v, we have s1 = 3u and s2 = 3v.
In other words, in this case, Cardano's method and Lagrange's method compute exactly the same things, up to a factor of three in the auxiliary variables, the main difference being that Lagrange's method explains why these auxiliary variables appear in the problem.
==== Computation of S and P ====
A straightforward computation using the relations ξ3 = 1 and ξ2 + ξ + 1 = 0 gives
{\displaystyle {\begin{aligned}P&=s_{1}s_{2}=x_{0}^{2}+x_{1}^{2}+x_{2}^{2}-(x_{0}x_{1}+x_{1}x_{2}+x_{2}x_{0}),\\S&=s_{1}^{3}+s_{2}^{3}=2(x_{0}^{3}+x_{1}^{3}+x_{2}^{3})-3(x_{0}^{2}x_{1}+x_{1}^{2}x_{2}+x_{2}^{2}x_{0}+x_{0}x_{1}^{2}+x_{1}x_{2}^{2}+x_{2}x_{0}^{2})+12x_{0}x_{1}x_{2}.\end{aligned}}}
This shows that P and S are symmetric functions of the roots. Using Newton's identities, it is straightforward to express them in terms of the elementary symmetric functions of the roots, giving
{\displaystyle {\begin{aligned}P&=e_{1}^{2}-3e_{2},\\S&=2e_{1}^{3}-9e_{1}e_{2}+27e_{3},\end{aligned}}}
with e1 = 0, e2 = p and e3 = −q in the case of a depressed cubic, and e1 = −b/a, e2 = c/a and e3 = −d/a, in the general case.
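These two identities can be checked numerically for any choice of roots; the check function below is our own sketch:

```python
import cmath, math

xi = cmath.exp(2j * math.pi / 3)   # primitive cube root of unity

def check_S_P(x0, x1, x2):
    """Verify P = e1^2 - 3*e2 and S = 2*e1^3 - 9*e1*e2 + 27*e3
    against direct computation of s1, s2 from three given roots."""
    s1 = x0 + xi*x1 + xi**2*x2
    s2 = x0 + xi**2*x1 + xi*x2
    # elementary symmetric functions of the roots
    e1, e2, e3 = x0 + x1 + x2, x0*x1 + x1*x2 + x2*x0, x0*x1*x2
    assert abs(s1*s2 - (e1**2 - 3*e2)) < 1e-8
    assert abs(s1**3 + s2**3 - (2*e1**3 - 9*e1*e2 + 27*e3)) < 1e-8
```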
== Applications ==
Cubic equations arise in various other contexts.
=== In mathematics ===
Angle trisection and doubling the cube are two ancient problems of geometry that have been proved to not be solvable by straightedge and compass construction, because they are equivalent to solving a cubic equation.
Marden's theorem states that the foci of the Steiner inellipse of any triangle can be found by using the cubic function whose roots are the coordinates in the complex plane of the triangle's three vertices. The roots of the first derivative of this cubic are the complex coordinates of those foci.
The area of a regular heptagon can be expressed in terms of the roots of a cubic. Further, the ratios of the long diagonal to the side, the side to the short diagonal, and the negative of the short diagonal to the long diagonal all satisfy a particular cubic equation. In addition, the ratio of the inradius to the circumradius of a heptagonal triangle is one of the solutions of a cubic equation. The values of trigonometric functions of angles related to 2π/7 satisfy cubic equations.
Given the cosine (or other trigonometric function) of an arbitrary angle, the cosine of one-third of that angle is one of the roots of a cubic.
The solution of the general quartic equation relies on the solution of its resolvent cubic.
The eigenvalues of a 3×3 matrix are the roots of a cubic polynomial which is the characteristic polynomial of the matrix.
The characteristic equation of a third-order constant coefficients or Cauchy–Euler (equidimensional variable coefficients) linear differential equation or difference equation is a cubic equation.
Intersection points of a cubic Bézier curve and a straight line can be computed using a cubic equation that represents the Bézier curve.
Critical points of a quartic function are found by solving a cubic equation (the derivative set equal to zero).
Inflection points of a quintic function are the solutions of a cubic equation (the second derivative set equal to zero).
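The eigenvalue item above can be made concrete: expanding det(xI − M) for a 3×3 matrix M gives a cubic whose coefficients are (up to sign) the trace, the sum of the principal 2×2 minors, and the determinant. A minimal sketch (function name ours):

```python
def char_poly_coeffs(M):
    """Coefficients (1, c2, c1, c0) of det(x*I - M) for a 3x3 matrix M,
    so the eigenvalues are the roots of x^3 + c2*x^2 + c1*x + c0."""
    (a, b, c), (d, e, f), (g, h, i) = M
    tr = a + e + i
    # sum of the three principal 2x2 minors
    m = (e*i - f*h) + (a*i - c*g) + (a*e - b*d)
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return (1, -tr, m, -det)
```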
=== In other sciences ===
In analytical chemistry, the Charlot equation, which can be used to find the pH of buffer solutions, can be solved using a cubic equation.
In thermodynamics, equations of state (which relate pressure, volume, and temperature of a substance), e.g. the Van der Waals equation of state, are cubic in the volume.
Kinematic equations involving linear rates of acceleration are cubic.
The speed of seismic Rayleigh waves is a solution of the Rayleigh wave cubic equation.
The steady state speed of a vehicle moving on a slope with air friction for a given input power is solved by a depressed cubic equation.
Kepler's third law of planetary motion is cubic in the semi-major axis.
== See also ==
Quartic equation
Quintic equation
Tschirnhaus transformation
Principal equation form
== Notes ==
== References ==
Guilbeau, Lucye (1930), "The History of the Solution of the Cubic Equation", Mathematics News Letter, 5 (4): 8–12, doi:10.2307/3027812, JSTOR 3027812
== Further reading ==
Anglin, W. S.; Lambek, Joachim (1995), "Mathematics in the Renaissance", The Heritage of Thales, Springer, pp. 125–131, ISBN 978-0-387-94544-6 Ch. 24.
Dence, T. (November 1997), "Cubics, chaos and Newton's method", Mathematical Gazette, 81 (492), Mathematical Association: 403–408, doi:10.2307/3619617, ISSN 0025-5572, JSTOR 3619617, S2CID 125196796
Dunnett, R. (November 1994), "Newton–Raphson and the cubic", Mathematical Gazette, 78 (483), Mathematical Association: 347–348, doi:10.2307/3620218, ISSN 0025-5572, JSTOR 3620218, S2CID 125643035
Jacobson, Nathan (2009), Basic algebra, vol. 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1
Mitchell, D. W. (November 2007), "Solving cubics by solving triangles", Mathematical Gazette, 91, Mathematical Association: 514–516, doi:10.1017/S0025557200182178, ISSN 0025-5572, S2CID 124710259
Mitchell, D. W. (November 2009), "Powers of φ as roots of cubics", Mathematical Gazette, 93, Mathematical Association, doi:10.1017/S0025557200185237, ISSN 0025-5572, S2CID 126286653
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 5.6 Quadratic and Cubic Equations", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
Rechtschaffen, Edgar (July 2008), "Real roots of cubics: Explicit formula for quasi-solutions", Mathematical Gazette, 92, Mathematical Association: 268–276, doi:10.1017/S0025557200183147, ISSN 0025-5572, S2CID 125870578
Zucker, I. J. (July 2008), "The cubic equation – a new look at the irreducible case", Mathematical Gazette, 92, Mathematical Association: 264–268, doi:10.1017/S0025557200183135, ISSN 0025-5572, S2CID 125986006
== External links ==
"Cardano formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
History of quadratic, cubic and quartic equations on MacTutor archive.
500 years of NOT teaching THE CUBIC FORMULA. What is it they think you can't handle? – YouTube video by Mathologer about the history of cubic equations and Cardano's solution, as well as Ferrari's solution to quartic equations | Wikipedia/Cubic_equations |
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers.
Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change. This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory.
Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum).
== Cantor's argument ==
Cantor's first proof that infinite sets can have different cardinalities was published in 1874. This proof demonstrates that the set of natural numbers and the set of real numbers have different cardinalities. It uses the theorem that a bounded increasing sequence of real numbers has a limit, which can be proved by using Cantor's or Richard Dedekind's construction of the irrational numbers. Because Leopold Kronecker did not accept these constructions, Cantor was motivated to develop a new proof.
In 1891, he published "a much simpler proof ... which does not depend on considering the irrational numbers." His new proof uses his diagonal argument to prove that there exists an infinite set with a larger number of elements (or greater cardinality) than the set of natural numbers N = {1, 2, 3, ...}. This larger set consists of the elements (x1, x2, x3, ...), where each xn is either m or w. Each of these elements corresponds to a subset of N—namely, the element (x1, x2, x3, ...) corresponds to {n ∈ N: xn = w}. So Cantor's argument implies that the set of all subsets of N has greater cardinality than N. The set of all subsets of N is denoted by P(N), the power set of N.
Cantor generalized his argument to an arbitrary set A and the set consisting of all functions from A to {0, 1}. Each of these functions corresponds to a subset of A, so his generalized argument implies the theorem: The power set P(A) has greater cardinality than A. This is known as Cantor's theorem.
The argument below is a modern version of Cantor's argument that uses power sets (for his original argument, see Cantor's diagonal argument). By presenting a modern argument, it is possible to see which assumptions of axiomatic set theory are used. The first part of the argument proves that N and P(N) have different cardinalities:
There exists at least one infinite set. This assumption (not formally specified by Cantor) is captured in formal set theory by the axiom of infinity. This axiom implies that N, the set of all natural numbers, exists.
P(N), the set of all subsets of N, exists. In formal set theory, this is implied by the power set axiom, which says that for every set there is a set of all of its subsets.
The concept of "having the same number" or "having the same cardinality" can be captured by the idea of one-to-one correspondence. This (purely definitional) assumption is sometimes known as Hume's principle. As Frege said, "If a waiter wishes to be certain of laying exactly as many knives on a table as plates, he has no need to count either of them; all he has to do is to lay immediately to the right of every plate a knife, taking care that every knife on the table lies immediately to the right of a plate. Plates and knives are thus correlated one to one." Sets in such a correlation are called equinumerous, and the correlation is called a one-to-one correspondence.
A set cannot be put into one-to-one correspondence with its power set. This implies that N and P(N) have different cardinalities. It depends on very few assumptions of set theory, and, as John P. Mayberry puts it, is a "simple and beautiful argument" that is "pregnant with consequences". Here is the argument:
Let A be a set and P(A) be its power set. The following theorem will be proved: If f is a function from A to P(A), then it is not onto. This theorem implies that there is no one-to-one correspondence between A and P(A) since such a correspondence must be onto. Proof of theorem: Define the diagonal subset D = {x ∈ A : x ∉ f(x)}. Since D ∈ P(A), proving that for all x ∈ A, D ≠ f(x) will imply that f is not onto. Let x ∈ A. Then x ∈ D ⇔ x ∉ f(x), which implies x ∉ D ⇔ x ∈ f(x). So if x ∈ D, then x ∉ f(x); and if x ∉ D, then x ∈ f(x). Since one of these sets contains x and the other does not, D ≠ f(x). Therefore, D is not in the image of f, so f is not onto.
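For a finite set the diagonal construction can be checked exhaustively; the sketch below (names ours) enumerates every function from a two-element set to its power set and confirms that the diagonal set is never in the image:

```python
from itertools import product

def diagonal_witness(A, f):
    """Given a set A and a map f from A to subsets of A (as frozensets),
    return D = {x in A : x not in f(x)}, which is never equal to any f(x)."""
    return frozenset(x for x in A if x not in f(x))

A = (0, 1)
subsets = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

# every function A -> P(A) is a choice of an image subset for each element
for images in product(subsets, repeat=len(A)):
    f = dict(zip(A, images)).__getitem__
    D = diagonal_witness(A, f)
    assert all(D != f(x) for x in A)   # D witnesses that f is not onto
```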
Next Cantor shows that A is equinumerous with a subset of P(A). From this and the fact that P(A) and A have different cardinalities, he concludes that P(A) has greater cardinality than A. This conclusion uses his 1878 definition: If A and B have different cardinalities, then either B is equinumerous with a subset of A (in this case, B has less cardinality than A) or A is equinumerous with a subset of B (in this case, B has greater cardinality than A). This definition leaves out the case where A and B are each equinumerous with a subset of the other set—that is, A is equinumerous with a subset of B and B is equinumerous with a subset of A. Because Cantor implicitly assumed that cardinalities are linearly ordered, this case cannot occur. After using his 1878 definition, Cantor stated that in an 1883 article he proved that cardinalities are well-ordered, which implies they are linearly ordered. This proof used his well-ordering principle "every set can be well-ordered", which he called a "law of thought". The well-ordering principle is equivalent to the axiom of choice.
Around 1895, Cantor began to regard the well-ordering principle as a theorem and attempted to prove it. In 1895, Cantor also gave a new definition of "greater than" that correctly defines this concept without the aid of his well-ordering principle. By using Cantor's new definition, the modern argument that P(N) has greater cardinality than N can be completed using weaker assumptions than his original argument:
The concept of "having greater cardinality" can be captured by Cantor's 1895 definition: B has greater cardinality than A if (1) A is equinumerous with a subset of B, and (2) B is not equinumerous with a subset of A. Clause (1) says B is at least as large as A, which is consistent with our definition of "having the same cardinality". Clause (2) implies that the case where A and B are equinumerous with a subset of the other set is false. Since clause (2) says that A is not at least as large as B, the two clauses together say that B is larger (has greater cardinality) than A.
The power set P(A) has greater cardinality than A, which implies that P(N) has greater cardinality than N. Here is the proof:
Define the subset P1 = { y ∈ P(A) : ∃x ∈ A (y = {x}) }. Define f(x) = {x}, which maps A onto P1. Since f(x1) = f(x2) implies x1 = x2, f is a one-to-one correspondence from A to P1. Therefore, A is equinumerous with a subset of P(A).
Using proof by contradiction, assume that A1, a subset of A, is equinumerous with P(A). Then there is a one-to-one correspondence g from A1 to P(A). Define h from A to P(A): if x ∈ A1, then h(x) = g(x); if x ∈ A ∖ A1, then h(x) = { }. Since g maps A1 onto P(A), h maps A onto P(A), contradicting the theorem above stating that a function from A to P(A) is not onto. Therefore, P(A) is not equinumerous with a subset of A.
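The theorem invoked in this proof, that no function from A to P(A) is onto, can be checked exhaustively when A is finite: for every f, the diagonal set D = {x ∈ A : x ∉ f(x)} lies outside the image of f. A minimal Python sketch of this check (the choice A = {0, 1, 2} is illustrative, not from the article):

```python
from itertools import combinations, product

def powerset(A):
    """All subsets of A, as frozensets."""
    s = list(A)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

A = {0, 1, 2}
P = powerset(A)

# Enumerate every function f : A -> P(A) and check that the diagonal set
# D = {x in A : x not in f(x)} is never a value of f, so f is not onto.
for values in product(P, repeat=len(A)):
    f = dict(zip(sorted(A), values))
    D = frozenset(x for x in A if x not in f[x])
    assert D not in f.values()

print("checked", len(P) ** len(A), "functions; none is onto; |P(A)| =", len(P))
```

Since D differs from f(x) at the element x itself, the assertion can never fail; the same diagonal set drives the proof for infinite A.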
Besides the axioms of infinity and power set, the axioms of separation, extensionality, and pairing were used in the modern argument. For example, the axiom of separation was used to define the diagonal subset D, the axiom of extensionality was used to prove D ≠ f(x), and the axiom of pairing was used in the definition of the subset P1.
== Reception of the argument ==
Initially, Cantor's theory was controversial among mathematicians and (later) philosophers. Logician Wilfrid Hodges (1998) has commented on the energy devoted to refuting this "harmless little argument" (i.e. Cantor's diagonal argument), asking "what had it done to anyone to make them angry with it?" Mathematician Solomon Feferman has referred to Cantor's theories as "simply not relevant to everyday mathematics".
Before Cantor, the notion of infinity was often taken as a useful abstraction which helped mathematicians reason about the finite world; for example the use of infinite limit cases in calculus. The infinite was deemed to have at most a potential existence, rather than an actual existence. "Actual infinity does not exist. What we call infinite is only the endless possibility of creating new objects no matter how many exist already". Carl Friedrich Gauss's views on the subject can be paraphrased as: "Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics." In other words, the only access we have to the infinite is through the notion of limits, and hence, we must not treat infinite sets as if they have an existence exactly comparable to the existence of finite sets.
Cantor's ideas ultimately were largely accepted, strongly supported by David Hilbert, amongst others. Hilbert predicted: "No one will drive us from the paradise which Cantor created for us." To which Wittgenstein replied "if one person can see it as a paradise of mathematicians, why should not another see it as a joke?" The rejection of Cantor's infinitary ideas influenced the development of schools of mathematics such as constructivism and intuitionism.
Wittgenstein did not object to mathematical formalism wholesale, but had a finitist view on what Cantor's proof meant. The philosopher maintained that belief in infinities arises from confusing the intensional nature of mathematical laws with the extensional nature of sets, sequences, symbols etc. A series of symbols is, in his view, finite. In Wittgenstein's words: "...A curve is not composed of points, it is a law that points obey, or again, a law according to which points can be constructed."
He also described the diagonal argument as "hocus pocus" and not proving what it purports to do.
== Objection to the axiom of infinity ==
A common objection to Cantor's theory of infinite number involves the axiom of infinity (which is, indeed, an axiom and not a logical truth). Mayberry has noted that "the set-theoretical axioms that sustain modern mathematics are self-evident in differing degrees. One of them—indeed, the most important of them, namely Cantor's Axiom, the so-called Axiom of Infinity—has scarcely any claim to self-evidence at all".
Another objection is that the use of infinite sets is not adequately justified by analogy to finite sets. Hermann Weyl wrote:
... classical logic was abstracted from the mathematics of finite sets and their subsets …. Forgetful of this limited origin, one afterwards mistook that logic for something above and prior to all mathematics, and finally applied it, without justification, to the mathematics of infinite sets. This is the Fall and original sin of [Cantor's] set theory
The difficulty with finitism is to develop foundations of mathematics using finitist assumptions that incorporate what everyone reasonably regards as mathematics (for example, real analysis).
== See also ==
Preintuitionism
== Notes ==
== References ==
Bishop, Errett; Bridges, Douglas S. (1985), Constructive Analysis, Grundlehren Der Mathematischen Wissenschaften, Springer, ISBN 978-0-387-15066-6
Cantor, Georg (1878), "Ein Beitrag zur Mannigfaltigkeitslehre", Journal für die Reine und Angewandte Mathematik, 84: 242–248
Cantor, Georg (1891), "Ueber eine elementare Frage der Mannigfaltigkeitslehre" (PDF), Jahresbericht der Deutschen Mathematiker-Vereinigung, 1: 75–78
Cantor, Georg (1895), "Beiträge zur Begründung der transfiniten Mengenlehre (1)", Mathematische Annalen, 46 (4): 481–512, doi:10.1007/bf02124929, S2CID 177801164, archived from the original on April 23, 2014
Cantor, Georg; Philip Jourdain (trans.) (1954) [1915], Contributions to the Founding of the Theory of Transfinite Numbers, Dover, ISBN 978-0-486-60045-1
Dauben, Joseph (1979), Georg Cantor: His Mathematics and Philosophy of the Infinite, Harvard University Press, ISBN 0-674-34871-0
Dunham, William (1991), Journey through Genius: The Great Theorems of Mathematics, Penguin Books, ISBN 978-0140147391
Ewald, William B., ed. (1996), From Immanuel Kant to David Hilbert: A Source Book in the Foundations of Mathematics, Volume 2, Oxford University Press, ISBN 0-19-850536-1
Frege, Gottlob; J.L. Austin (trans.) (1884), The Foundations of Arithmetic (2nd ed.), Northwestern University Press, ISBN 978-0-8101-0605-5
Hallett, Michael (1984), Cantorian Set Theory and Limitation of Size, Clarendon Press, ISBN 0-19-853179-6
Hilbert, David (1926), "Über das Unendliche", Mathematische Annalen, vol. 95, pp. 161–190, doi:10.1007/BF01206605, JFM 51.0044.02, S2CID 121888793
"Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können."
Translated in Van Heijenoort, Jean, On the infinite, Harvard University Press
Kline, Morris (1982), Mathematics: The Loss of Certainty, Oxford, ISBN 0-19-503085-0
Mayberry, J.P. (2000), The Foundations of Mathematics in the Theory of Sets, Encyclopedia of Mathematics and its Applications, vol. 82, Cambridge University Press
Moore, Gregory H. (1982), Zermelo's Axiom of Choice: Its Origins, Development & Influence, Springer, ISBN 978-1-4613-9480-8
Poincaré, Henri (1908), The Future of Mathematics (PDF), Revue generale des Sciences pures et appliquees, vol. 23, archived from the original (PDF) on 2003-06-29 (address to the Fourth International Congress of Mathematicians)
Sainsbury, R.M. (1979), Russell, London
Weyl, Hermann (1946), "Mathematics and logic: A brief survey serving as a preface to a review of The Philosophy of Bertrand Russell", American Mathematical Monthly, vol. 53, pp. 2–13, doi:10.2307/2306078, JSTOR 2306078
Wittgenstein, Ludwig; A. J. P. Kenny (trans.) (1974), Philosophical Grammar, Oxford
Wittgenstein; R. Hargreaves (trans.); R. White (trans.) (1964), Philosophical Remarks, Oxford
Wittgenstein (2001), Remarks on the Foundations of Mathematics (3rd ed.), Oxford
== External links ==
Doron Zeilberger's 68th Opinion
Philosopher Hartley Slater's argument against the idea of "number" that underpins Cantor's set theory
Wolfgang Mueckenheim: Transfinity - A Source Book
Hodges "An editor recalls some hopeless papers"
A timeline of calculus and mathematical analysis.
== 500BC to 1600 ==
5th century BC - Zeno's paradoxes,
5th century BC - Antiphon attempts to square the circle,
5th century BC - Democritus finds that the volume of a cone is 1/3 the volume of a cylinder with the same base and height,
4th century BC - Eudoxus of Cnidus develops the method of exhaustion,
3rd century BC - Archimedes displays geometric series in The Quadrature of the Parabola. He further develops the method of exhaustion.
3rd century BC - Archimedes develops a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems using methods now termed as integral calculus. Archimedes also derives several formulae for determining the area and volume of various solids including sphere, cone, paraboloid and hyperboloid.
Before 50 BC - Babylonian cuneiform tablets show use of the trapezoid rule to calculate the position of Jupiter.
3rd century - Liu Hui rediscovers the method of exhaustion in order to find the area of a circle.
4th century - Pappus's centroid theorem,
5th century - Zu Chongzhi establishes a method that would later be called Cavalieri's principle to find the volume of a sphere.
600 - Liu Zhuo is the first person to use second-order interpolation for computing the positions of the sun and the moon.
665 - Brahmagupta discovers a second-order Newton–Stirling interpolation for sin(x + ε),
862 - The Banu Musa brothers write the "Book on the Measurement of Plane and Spherical Figures",
9th century - Thābit ibn Qurra discusses the quadrature of the parabola and the volume of different types of conic sections.
12th century - Bhāskara II discovers a rule equivalent to Rolle's theorem for sin x,
14th century - Nicole Oresme proves the divergence of the harmonic series,
14th century - Madhava discovers the power series expansions for sin x, cos x, arctan x and π/4. This theory is now well known in the Western world as the Taylor series or infinite series.
14th century - Parameshvara discovers a third-order Taylor interpolation for sin(x + ε),
1445 - Nicholas of Cusa attempts to square the circle,
1501 - Nilakantha Somayaji writes the Tantrasamgraha, which contains the Madhava's discoveries,
1548 - Francesco Maurolico attempted to calculate the barycenter of various bodies (pyramid, paraboloid, etc.),
1550 - Jyeshtadeva writes the Yuktibhāṣā, a commentary to Nilakantha's Tantrasamgraha,
1560 - Sankara Variar writes the Kriyakramakari,
1565 - Federico Commandino publishes De centro Gravitati,
1588 - Commandino's translation of Pappus' Collectio gets published,
1593 - François Viète discovers the first infinite product in the history of mathematics,
== 17th century ==
1606 - Luca Valerio applies methods of Archimedes to find volumes and centres of gravity of solid bodies,
1609 - Johannes Kepler computes the integral {\displaystyle \int _{0}^{\theta }\sin x\ dx=1-\cos \theta },
1611 - Thomas Harriot discovers an interpolation formula similar to Newton's interpolation formula,
1615 - Johannes Kepler publishes Nova stereometria doliorum,
1620 - Grégoire de Saint-Vincent discovers that the area under a hyperbola represented a logarithm,
1624 - Henry Briggs publishes Arithmetica Logarithmica,
1629 - Pierre de Fermat discovers his method of maxima and minima, precursor of the derivative concept,
1634 - Gilles de Roberval shows that the area under a cycloid is three times the area of its generating circle,
1635 - Bonaventura Cavalieri publishes Geometria Indivisibilibus,
1637 - René Descartes publishes La Géométrie,
1638 - Galileo Galilei publishes Two New Sciences,
1644 - Evangelista Torricelli publishes Opera geometrica,
1644 - Fermat's methods of maxima and minima published by Pierre Hérigone,
1647 - Cavalieri computes the integral {\displaystyle \int _{0}^{a}x^{n}\ dx={\frac {1}{n+1}}a^{n+1}},
1647 - Grégoire de Saint-Vincent publishes Opus Geometricum,
1650 - Pietro Mengoli proves the divergence of the harmonic series,
1654 - Johannes Hudde discovers the power series expansion for ln(1 + x),
1656 - John Wallis publishes Arithmetica Infinitorum,
1658 - Christopher Wren shows that the length of a cycloid is four times the diameter of its generating circle,
1659 - Second edition of Van Schooten's Latin translation of Descartes' Geometry with appendices by Hudde and Heuraet,
1665 - Isaac Newton discovers the generalized binomial theorem and develops his version of infinitesimal calculus,
1667 - James Gregory publishes Vera circuli et hyperbolae quadratura,
1668 - Nicholas Mercator publishes Logarithmotechnia,
1668 - James Gregory computes the integral of the secant function,
1669 - Newton invents Newton's method for the computation of roots of a function,
1670 - Newton rediscovers the power series expansions for sin x and cos x (originally discovered by Madhava),
1670 - Isaac Barrow publishes Lectiones Geometricae,
1671 - James Gregory rediscovers the power series expansions for arctan x and π/4 (originally discovered by Madhava),
1672 - René-François de Sluse publishes A Method of Drawing Tangents to All Geometrical Curves,
1673 - Gottfried Leibniz also develops his version of infinitesimal calculus,
1675 - Leibniz uses the modern notation for an integral for the first time,
1677 - Leibniz discovers the rules for differentiating products, quotients, and the function of a function.
1683 - Jacob Bernoulli discovers the number e,
1684 - Leibniz publishes his first paper on calculus,
1685 - Newton formulates and solves Newton's minimal resistance problem, giving birth to the field of calculus of variations,
1686 - The first appearance in print of the ∫ notation for integrals,
1687 - Isaac Newton publishes Philosophiæ Naturalis Principia Mathematica,
1691 - The first proof of Rolle's theorem is given by Michel Rolle,
1691 - Leibniz discovers the technique of separation of variables for ordinary differential equations,
1694 - Johann Bernoulli discovers L'Hôpital's rule,
1696 - Guillaume de L'Hôpital publishes Analyse des Infiniment Petits, the first calculus textbook,
1696 - Jakob Bernoulli and Johann Bernoulli solve the brachistochrone problem.
== 18th century ==
1711 - Isaac Newton publishes De analysi per aequationes numero terminorum infinitas,
1712 - Brook Taylor develops Taylor series,
1722 - Roger Cotes computes the derivative of sine function in his Harmonia Mensurarum,
1730 - James Stirling publishes The Differential Method,
1734 - George Berkeley publishes The Analyst,
1734 - Leonhard Euler introduces the integrating factor technique for solving first-order ordinary differential equations,
1735 - Leonhard Euler solves the Basel problem, relating an infinite series to π,
1736 - Newton's Method of Fluxions posthumously published,
1737 - Thomas Simpson publishes Treatise of Fluxions,
1739 - Leonhard Euler solves the general homogeneous linear ordinary differential equation with constant coefficients,
1742 - Modern definition of logarithm by William Gardiner,
1742 - Colin Maclaurin publishes Treatise on Fluxions,
1748 - Euler publishes Introductio in analysin infinitorum,
1748 - Maria Gaetana Agnesi discusses analysis in Instituzioni Analitiche ad Uso della Gioventu Italiana,
1762 - Joseph Louis Lagrange discovers the divergence theorem,
1797 - Lagrange publishes Théorie des fonctions analytiques,
== 19th century ==
1807 - Joseph Fourier announces his discoveries about the trigonometric decomposition of functions,
1811 - Carl Friedrich Gauss discusses the meaning of integrals with complex limits and briefly examines the dependence of such integrals on the chosen path of integration,
1815 - Siméon Denis Poisson carries out integrations along paths in the complex plane,
1817 - Bernard Bolzano presents the intermediate value theorem — a continuous function which is negative at one point and positive at another point must be zero for at least one point in between,
1822 - Augustin-Louis Cauchy presents the Cauchy integral theorem for integration around the boundary of a rectangle in the complex plane,
1825 - Augustin-Louis Cauchy presents the Cauchy integral theorem for general integration paths—he assumes the function being integrated has a continuous derivative, and he introduces the theory of residues in complex analysis,
1825 - André-Marie Ampère discovers Stokes' theorem,
1828 - George Green introduces Green's theorem,
1831 - Mikhail Vasilievich Ostrogradsky rediscovers and gives the first proof of the divergence theorem earlier described by Lagrange, Gauss and Green,
1841 - Karl Weierstrass discovers but does not publish the Laurent expansion theorem,
1843 - Pierre-Alphonse Laurent discovers and presents the Laurent expansion theorem,
1850 - Victor Alexandre Puiseux distinguishes between poles and branch points and introduces the concept of essential singular points,
1850 - George Gabriel Stokes rediscovers and proves Stokes' theorem,
1861 - Karl Weierstrass starts to use the language of epsilons and deltas,
1873 - Georg Frobenius presents his method for finding series solutions to linear differential equations with regular singular points,
== 20th century ==
1908 - Josip Plemelj solves the Riemann problem about the existence of a differential equation with a given monodromy group and uses the Sokhotski–Plemelj formulae,
1966 - Abraham Robinson presents non-standard analysis.
1985 - Louis de Branges de Bourcia proves the Bieberbach conjecture,
== See also ==
Timeline of ancient Greek mathematicians
Timeline of geometry – Notable events in the history of geometry
Timeline of mathematical logic
Timeline of mathematics
== References ==
This is a timeline of category theory and related mathematics. Its scope ("related mathematics") is taken as:
Categories of abstract algebraic structures including representation theory and universal algebra;
Homological algebra;
Homotopical algebra;
Topology using categories, including algebraic topology, categorical topology, quantum topology, low-dimensional topology;
Categorical logic and set theory in the categorical context such as algebraic set theory;
Foundations of mathematics building on categories, for instance topos theory;
Abstract geometry, including algebraic geometry, categorical noncommutative geometry, etc.
Quantization related to category theory, in particular categorical quantization;
Categorical physics relevant for mathematics.
In this article, and in category theory in general, ∞ = ω.
== Timeline to 1945: before the definitions ==
== 1945–1970 ==
== 1971–1980 ==
== 1981–1990 ==
== 1991–2000 ==
== 2001–present ==
== See also ==
EGA
FGA
SGA
== Notes ==
== References ==
nLab, something like a higher-dimensional Wikipedia, started in late 2008; see nLab
Zhaohua Luo; Categorical geometry homepage
John Baez, Aaron Lauda; A prehistory of n-categorical physics
Ross Street; An Australian conspectus of higher categories
Elaine Landry, Jean-Pierre Marquis; Categories in context: historical, foundational, and philosophical
Jim Stasheff; A survey of cohomological physics
John Bell; The development of categorical logic
Jean Dieudonné; The historical development of algebraic geometry
Charles Weibel; History of homological algebra
Peter Johnstone; The point of pointless topology
Stasheff, Jim (January 21, 1996). "The Pre-History Of Operads". In Loday, Jean-Louis; Stasheff, James D.; Voronov, Alexander A. (eds.). Operads: Proceedings of Renaissance Conferences. Contemporary Mathematics. Vol. 202. Providence, Rhode Island: American Mathematical Society. pp. 9–14. CiteSeerX 10.1.1.25.5089. doi:10.1090/conm/202/02592. ISBN 0-8218-0513-4. ISSN 0271-4132. LCCN 96-37049. MR 1436913. Retrieved 2021-12-08.
George Whitehead; Fifty years of homotopy theory
Haynes Miller; The origin of sheaf theory
In mathematics, specifically in differential topology, Morse theory enables one to analyze the topology of a manifold by studying differentiable functions on that manifold. According to the basic insights of Marston Morse, a typical differentiable function on a manifold will reflect the topology quite directly. Morse theory allows one to find CW structures and handle decompositions on manifolds and to obtain substantial information about their homology.
Before Morse, Arthur Cayley and James Clerk Maxwell had developed some of the ideas of Morse theory in the context of topography. Morse originally applied his theory to geodesics (critical points of the energy functional on the space of paths). These techniques were used in Raoul Bott's proof of his periodicity theorem.
The analogue of Morse theory for complex manifolds is Picard–Lefschetz theory.
== Basic concepts ==
To illustrate, consider a mountainous landscape surface M (more generally, a manifold). If f is the function M → ℝ giving the elevation of each point, then the inverse image of a point in ℝ is a contour line (more generally, a level set). Each connected component of a contour line is either a point, a simple closed curve, or a closed curve with double point(s). Contour lines may also have points of higher order (triple points, etc.), but these are unstable and may be removed by a slight deformation of the landscape. Double points in contour lines occur at saddle points, or passes, where the surrounding landscape curves up in one direction and down in the other.
Imagine flooding this landscape with water. When the water reaches elevation a, the underwater surface is M^a = f^{−1}(−∞, a], the points with elevation a or below. Consider how the topology of this surface changes as the water rises. It appears unchanged except when a passes the height of a critical point, where the gradient of f is 0 (more generally, the Jacobian matrix acting as a linear map between tangent spaces does not have maximal rank). In other words, the topology of M^a does not change except when the water either (1) starts filling a basin, (2) covers a saddle (a mountain pass), or (3) submerges a peak.
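The claim that the topology of the sublevel set changes only at critical values can be illustrated numerically in one dimension. The double-well function f(x) = x⁴ − x² (an illustrative choice, not from the article) has critical values −1/4 (two basins) and 0 (a pass); counting connected components of M^a on a sample grid shows the jumps:

```python
import numpy as np

def f(x):
    # double well: minima at x = ±1/sqrt(2) with f = -1/4, a pass at x = 0
    return x**4 - x**2

def components(a, xs):
    """Estimate the number of connected components of the sublevel set
    {x : f(x) <= a} by counting runs of consecutive grid points inside it."""
    inside = f(xs) <= a
    return int(inside[0]) + int(np.sum(inside[1:] & ~inside[:-1]))

xs = np.linspace(-2.0, 2.0, 100001)
print(components(-0.5, xs))  # below both minima: empty set, 0 components
print(components(-0.1, xs))  # between -1/4 and 0: two disjoint intervals
print(components(0.1, xs))   # above the pass: the intervals have merged into one
```

The component count jumps exactly when a crosses a critical value, never in between, matching rule (1)/(2) above; there is no rule (3) event because a one-dimensional "peak" only occurs on a compact domain.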
To these three types of critical points (basins, passes, and peaks, i.e. minima, saddles, and maxima) one associates a number called the index, the number of independent directions in which f decreases from the point. More precisely, the index of a non-degenerate critical point p of f is the dimension of the largest subspace of the tangent space to M at p on which the Hessian of f is negative definite. The indices of basins, passes, and peaks are 0, 1, and 2, respectively.
Considering a more general surface, let M be a torus oriented as in the picture, with f again taking a point to its height above the plane. One can again analyze how the topology of the underwater surface M^a changes as the water level a rises.
Starting from the bottom of the torus, let p, q, r, and s be the four critical points of index 0, 1, 1, and 2, corresponding to the basin, two saddles, and peak, respectively. When a is less than f(p) = 0, then M^a is the empty set. After a passes the level of p, when 0 < a < f(q), then M^a is a disk, which is homotopy equivalent to a point (a 0-cell) which has been "attached" to the empty set. Next, when a exceeds the level of q, and f(q) < a < f(r), then M^a is a cylinder, and is homotopy equivalent to a disk with a 1-cell attached (image at left). Once a passes the level of r, and f(r) < a < f(s), then M^a is a torus with a disk removed, which is homotopy equivalent to a cylinder with a 1-cell attached (image at right). Finally, when a is greater than the critical level of s, M^a is a torus, i.e. a torus with a disk (a 2-cell) removed and re-attached.
This illustrates the following rule: the topology of M^a does not change except when a passes the height of a critical point; at this point, a γ-cell is attached to M^a, where γ is the index of the point. This does not address what happens when two critical points are at the same height, which can be resolved by a slight perturbation of f. In the case of a landscape or a manifold embedded in Euclidean space, this perturbation might simply be tilting slightly, rotating the coordinate system.
One must take care to make the critical points non-degenerate. To see what can pose a problem, let M = ℝ and let f(x) = x³. Then 0 is a critical point of f, but the topology of M^a does not change when a passes 0. The problem is that the second derivative is f″(0) = 0; that is, the Hessian of f vanishes and the critical point is degenerate. This situation is unstable, since by slightly deforming f to f(x) = x³ + εx, the degenerate critical point is either removed (ε > 0) or breaks up into two non-degenerate critical points (ε < 0).
== Formal development ==
For a real-valued smooth function f : M → ℝ on a differentiable manifold M, the points where the differential of f vanishes are called critical points of f and their images under f are called critical values. If at a critical point p the matrix of second partial derivatives (the Hessian matrix) is non-singular, then p is called a non-degenerate critical point; if the Hessian is singular then p is a degenerate critical point.
For the functions f(x) = a + bx + cx² + dx³ + ⋯ from ℝ to ℝ, f has a critical point at the origin if b = 0, which is non-degenerate if c ≠ 0 (that is, f is of the form a + cx² + ⋯) and degenerate if c = 0 (that is, f is of the form a + dx³ + ⋯). A less trivial example of a degenerate critical point is the origin of the monkey saddle.
The index of a non-degenerate critical point p of f is the dimension of the largest subspace of the tangent space to M at p on which the Hessian is negative definite. This corresponds to the intuitive notion that the index is the number of directions in which f decreases. The degeneracy and index of a critical point are independent of the choice of the local coordinate system used, as shown by Sylvester's Law.
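In coordinates this definition is directly computable: the index is the number of negative eigenvalues of the Hessian, counted with multiplicity. A sketch using SymPy (the sample functions, including the monkey saddle, are illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def morse_index(f, point):
    """Number of negative Hessian eigenvalues (with multiplicity) at a
    critical point; returns None if the Hessian is singular (degenerate)."""
    H = sp.hessian(f, (x, y)).subs(point)
    if H.det() == 0:
        return None  # singular Hessian: Morse index is undefined
    return sum(m for ev, m in H.eigenvals().items() if ev < 0)

origin = {x: 0, y: 0}
print(morse_index(x**2 + y**2, origin))      # basin: index 0
print(morse_index(x**2 - y**2, origin))      # pass: index 1
print(morse_index(-x**2 - y**2, origin))     # peak: index 2
print(morse_index(x**3 - 3*x*y**2, origin))  # monkey saddle: None (degenerate)
```

Because the count of negative eigenvalues is invariant under congruence of the Hessian (Sylvester's law of inertia), the result does not depend on the chosen coordinates.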
=== Morse lemma ===
Let p be a non-degenerate critical point of f : M → ℝ. Then there exists a chart (x_1, x_2, …, x_n) in a neighborhood U of p such that x_i(p) = 0 for all i and
{\displaystyle f(x)=f(p)-x_{1}^{2}-\cdots -x_{\gamma }^{2}+x_{\gamma +1}^{2}+\cdots +x_{n}^{2}}
throughout U. Here γ is equal to the index of f at p. As a corollary of the Morse lemma, one sees that non-degenerate critical points are isolated. (Regarding an extension to the complex domain see Complex Morse Lemma. For a generalization, see Morse–Palais lemma.)
=== Fundamental theorems ===
A smooth real-valued function on a manifold M is a Morse function if it has no degenerate critical points. A basic result of Morse theory says that almost all functions are Morse functions. Technically, the Morse functions form an open, dense subset of all smooth functions M → ℝ in the C² topology. This is sometimes expressed as "a typical function is Morse" or "a generic function is Morse".
As indicated before, we are interested in the question of when the topology of M^a = f^{−1}(−∞, a] changes as a varies. Half of the answer to this question is given by the following theorem.
Theorem. Suppose f is a smooth real-valued function on M, a < b, f^{−1}[a, b] is compact, and there are no critical values between a and b. Then M^a is diffeomorphic to M^b, and M^b deformation retracts onto M^a.
It is also of interest to know how the topology of M^a changes when a passes a critical point. The following theorem answers that question.
Theorem. Suppose f is a smooth real-valued function on M and p is a non-degenerate critical point of f of index γ, and that f(p) = q. Suppose f^{−1}[q − ε, q + ε] is compact and contains no critical points besides p. Then M^{q+ε} is homotopy equivalent to M^{q−ε} with a γ-cell attached.
These results generalize and formalize the 'rule' stated in the previous section.
Using the two previous results and the fact that there exists a Morse function on any differentiable manifold, one can prove that any differentiable manifold is a CW complex with an n-cell for each critical point of index n.
To do this, one needs the technical fact that one can arrange to have a single critical point on each critical level, which is usually proven by using gradient-like vector fields to rearrange the critical points.
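As an illustration of this cell counting (a standard textbook example, not taken from this article), one can check the CW description of the 2-torus numerically. The function f(θ, φ) = cos θ + cos φ on the flat torus is assumed here as a convenient Morse function; the Morse index at each critical point is the number of negative eigenvalues of its (diagonal) Hessian:

```python
import math

# Worked example (standard, not from the article): f(theta, phi) =
# cos(theta) + cos(phi) on the flat torus R^2/(2*pi*Z)^2. Critical points
# occur where both sines vanish, i.e. both angles lie in {0, pi}.

def morse_index(theta, phi):
    # The Hessian of f is diag(-cos(theta), -cos(phi)); the Morse index
    # is the number of negative eigenvalues.
    eigenvalues = (-math.cos(theta), -math.cos(phi))
    return sum(1 for e in eigenvalues if e < 0)

critical_points = [(a, b) for a in (0.0, math.pi) for b in (0.0, math.pi)]
counts = {0: 0, 1: 0, 2: 0}
for point in critical_points:
    counts[morse_index(*point)] += 1

# One 0-cell, two 1-cells, one 2-cell: the usual CW structure of the torus.
euler = counts[0] - counts[1] + counts[2]  # Euler characteristic of T^2
```

The counts (one index-0 point, two index-1 points, one index-2 point) match the standard CW structure of the torus, and the alternating sum recovers χ(T²) = 0.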
=== Morse inequalities ===
Morse theory can be used to prove some strong results on the homology of manifolds. The number of critical points of index γ of f : M → R is equal to the number of γ-cells in the CW structure on M obtained from "climbing" f.
The alternating sum of the ranks of the homology groups of a topological space equals the alternating sum of the ranks of the chain groups from which the homology is computed. Using the cellular chain groups (see cellular homology), it follows that the Euler characteristic χ(M) is equal to the sum
\sum_{\gamma} (-1)^{\gamma} C^{\gamma} = \chi(M),
where C^γ is the number of critical points of index γ.
Also by cellular homology, the rank of the n-th homology group of a CW complex M is less than or equal to the number of n-cells in M. Therefore, the rank of the γ-th homology group, that is, the Betti number b_γ(M), is less than or equal to the number of critical points of index γ of a Morse function on M.
These facts can be strengthened to obtain the Morse inequalities:
C^{\gamma} - C^{\gamma-1} \pm \cdots + (-1)^{\gamma} C^{0} \geq b_{\gamma}(M) - b_{\gamma-1}(M) \pm \cdots + (-1)^{\gamma} b_{0}(M).
In particular, for any γ ∈ {0, …, n = dim M}, one has
C^{\gamma} \geq b_{\gamma}(M).
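As a quick illustration (a standard example, not from the text above): the 2-torus has b₀ = 1, b₁ = 2, b₂ = 1, so the weak Morse inequalities already force every Morse function on T² to have at least four critical points:

```latex
C^{0} \geq b_{0}(T^{2}) = 1, \qquad
C^{1} \geq b_{1}(T^{2}) = 2, \qquad
C^{2} \geq b_{2}(T^{2}) = 1,
\qquad\text{hence}\qquad
C^{0} + C^{1} + C^{2} \geq 4 .
```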
This gives a powerful tool to study manifold topology. Suppose on a closed manifold there exists a Morse function f : M → R with precisely k critical points. In what way does the existence of the function f restrict M? The case k = 2 was studied by Georges Reeb in 1952; the Reeb sphere theorem states that M is homeomorphic to a sphere S^n.
The case k = 3 is possible only in a small number of low dimensions, and M is homeomorphic to an Eells–Kuiper manifold.
In 1982 Edward Witten developed an analytic approach to the Morse inequalities by considering the de Rham complex for the perturbed operator
d_t = e^{-tf}\, d\, e^{tf}.
=== Application to classification of closed 2-manifolds ===
Morse theory has been used to classify closed 2-manifolds up to diffeomorphism. If M is oriented, then M is classified by its genus g and is diffeomorphic to a sphere with g handles: thus if g = 0, M is diffeomorphic to the 2-sphere; and if g > 0, M is diffeomorphic to the connected sum of g 2-tori. If N is unorientable, it is classified by a number g > 0 and is diffeomorphic to the connected sum of g real projective spaces RP².
In particular two closed 2-manifolds are homeomorphic if and only if they are diffeomorphic.
=== Morse homology ===
Morse homology is a particularly easy way to understand the homology of smooth manifolds. It is defined using a generic choice of Morse function and Riemannian metric. The basic theorem is that the resulting homology is an invariant of the manifold (that is, independent of the function and metric) and isomorphic to the singular homology of the manifold; this implies that the Morse and singular Betti numbers agree and gives an immediate proof of the Morse inequalities. An infinite dimensional analog of Morse homology in symplectic geometry is known as Floer homology.
== Morse–Bott theory ==
The notion of a Morse function can be generalized to consider functions that have nondegenerate manifolds of critical points. A Morse–Bott function is a smooth function on a manifold whose critical set is a closed submanifold and whose Hessian is non-degenerate in the normal direction. (Equivalently, the kernel of the Hessian at a critical point equals the tangent space to the critical submanifold.) A Morse function is the special case where the critical manifolds are zero-dimensional (so the Hessian at critical points is non-degenerate in every direction, that is, has no kernel).
The index is most naturally thought of as a pair (i₋, i₊), where i₋ is the dimension of the unstable manifold at a given point of the critical manifold, and i₊ is equal to i₋ plus the dimension of the critical manifold. If the Morse–Bott function is perturbed by a small function on the critical locus, the index of all critical points of the perturbed function on a critical manifold of the unperturbed function will lie between i₋ and i₊.
Morse–Bott functions are useful because generic Morse functions are difficult to work with; the functions one can visualize, and with which one can easily calculate, typically have symmetries. They often lead to positive-dimensional critical manifolds. Raoul Bott used Morse–Bott theory in his original proof of the Bott periodicity theorem.
Round functions are examples of Morse–Bott functions, where the critical sets are (disjoint unions of) circles.
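For instance (a standard example, stated under the usual conventions rather than taken from this article): the height function on a torus lying flat on a table is a round function whose critical set consists of two circles, a bottom circle that is a minimum in the normal direction and a top circle that is a maximum in the normal direction. Its index pairs are

```latex
\left(i_{-}, i_{+}\right)_{\text{bottom}} = (0, 1),
\qquad
\left(i_{-}, i_{+}\right)_{\text{top}} = (1, 2),
```

since each critical circle has dimension 1, so i₊ = i₋ + 1 in both cases.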
Morse homology can also be formulated for Morse–Bott functions; the differential in Morse–Bott homology is computed by a spectral sequence. Frederic Bourgeois sketched an approach in the course of his work on a Morse–Bott version of symplectic field theory, but this work was never published due to substantial analytic difficulties.
== See also ==
== References ==
== Further reading ==
Bott, Raoul (1988). "Morse Theory Indomitable". Publications Mathématiques de l'IHÉS. 68: 99–114. doi:10.1007/bf02698544. S2CID 54005577.
Bott, Raoul (1982). "Lectures on Morse theory, old and new". Bulletin of the American Mathematical Society. (N.S.). 7 (2): 331–358. doi:10.1090/s0273-0979-1982-15038-8.
Cayley, Arthur (1859). "On Contour and Slope Lines" (PDF). The Philosophical Magazine. 18 (120): 264–268.
Guest, Martin (2001). "Morse Theory in the 1990s". arXiv:math/0104155.
Hirsch, M. (1994). Differential Topology (2nd ed.). Springer.
Kosinski, Antoni A. (19 October 2007). Differential Manifolds. Dover Book on Mathematics (Reprint of 1993 ed.). Mineola, New York: Dover Publications. ISBN 978-0-486-46244-8. OCLC 853621933.
Lang, Serge (1999). Fundamentals of Differential Geometry. Graduate Texts in Mathematics. Vol. 191. New York: Springer-Verlag. ISBN 978-0-387-98593-0. OCLC 39379395.
Matsumoto, Yukio (2002). An Introduction to Morse Theory.
Maxwell, James Clerk (1870). "On Hills and Dales" (PDF). The Philosophical Magazine. 40 (269): 421–427.
Milnor, John (1963). Morse Theory. Princeton University Press. ISBN 0-691-08008-9. A classic advanced reference in mathematics and mathematical physics.
Milnor, John (1965). Lectures on the h-cobordism theorem (PDF).
Morse, Marston (1934). The Calculus of Variations in the Large. American Mathematical Society Colloquium Publication. Vol. 18. New York.
Schwarz, Matthias (1993). Morse Homology. Birkhäuser. ISBN 9780817629045.
In mathematics, Hodge theory, named after W. V. D. Hodge, is a method for studying the cohomology groups of a smooth manifold M using partial differential equations. The key observation is that, given a Riemannian metric on M, every cohomology class has a canonical representative, a differential form that vanishes under the Laplacian operator of the metric. Such forms are called harmonic.
The theory was developed by Hodge in the 1930s to study algebraic geometry, and it built on the work of Georges de Rham on de Rham cohomology. It has major applications in two settings—Riemannian manifolds and Kähler manifolds. Hodge's primary motivation, the study of complex projective varieties, is encompassed by the latter case. Hodge theory has become an important tool in algebraic geometry, particularly through its connection to the study of algebraic cycles.
While Hodge theory is intrinsically dependent upon the real and complex numbers, it can be applied to questions in number theory. In arithmetic situations, the tools of p-adic Hodge theory have given alternative proofs of, or analogous results to, classical Hodge theory.
== History ==
The field of algebraic topology was still nascent in the 1920s. It had not yet developed the notion of cohomology, and the interaction between differential forms and topology was poorly understood. In 1928, Élie Cartan published a note, "Sur les nombres de Betti des espaces de groupes clos", in which he suggested—but did not prove—that differential forms and topology should be linked. Upon reading it, Georges de Rham, then a student, was inspired. In his 1931 thesis, he proved a result now called de Rham's theorem. By Stokes' theorem, integration of differential forms along singular chains induces, for any compact smooth manifold M, a bilinear pairing as shown below:
H_{k}(M;\mathbf{R}) \times H_{\text{dR}}^{k}(M;\mathbf{R}) \to \mathbf{R}.
As originally stated, de Rham's theorem asserts that this is a perfect pairing, and that therefore the two terms on the left-hand side are vector space duals of one another. In contemporary language, de Rham's theorem is more often phrased as the statement that singular cohomology with real coefficients is isomorphic to de Rham cohomology:
H_{\text{sing}}^{k}(M;\mathbf{R}) \cong H_{\text{dR}}^{k}(M;\mathbf{R}).
De Rham's original statement is then a consequence of the fact that over the reals, singular cohomology is the dual of singular homology.
Separately, a 1927 paper of Solomon Lefschetz used topological methods to reprove theorems of Riemann. In modern language, if ω1 and ω2 are holomorphic differentials on an algebraic curve C, then their wedge product is necessarily zero because C has only one complex dimension; consequently, the cup product of their cohomology classes is zero, and when made explicit, this gave Lefschetz a new proof of the Riemann relations. Additionally, if ω is a non-zero holomorphic differential, then
{\sqrt{-1}}\,\omega \wedge {\bar{\omega}}
is a positive volume form, from which Lefschetz was able to rederive Riemann's inequalities. In 1929, W. V. D. Hodge learned of Lefschetz's paper. He immediately observed that similar principles applied to algebraic surfaces. More precisely, if ω is a non-zero holomorphic form on an algebraic surface, then
{\sqrt{-1}}\,\omega \wedge {\bar{\omega}}
is positive, so the cup product of ω and its conjugate ω̄ must be non-zero. It follows that ω itself must represent a non-zero cohomology class, so its periods cannot all be zero. This resolved a question of Severi.
Hodge felt that these techniques should be applicable to higher dimensional varieties as well. His colleague Peter Fraser recommended de Rham's thesis to him. In reading de Rham's thesis, Hodge realized that the real and imaginary parts of a holomorphic 1-form on a Riemann surface were in some sense dual to each other. He suspected that there should be a similar duality in higher dimensions; this duality is now known as the Hodge star operator. He further conjectured that each cohomology class should have a distinguished representative with the property that both it and its dual vanish under the exterior derivative operator; these are now called harmonic forms. Hodge devoted most of the 1930s to this problem. His earliest published attempt at a proof appeared in 1933, but he considered it "crude in the extreme". Hermann Weyl, one of the most brilliant mathematicians of the era, found himself unable to determine whether Hodge's proof was correct or not. In 1936, Hodge published a new proof. While Hodge considered the new proof much superior, a serious flaw was discovered by Bohnenblust. Independently, Hermann Weyl and Kunihiko Kodaira modified Hodge's proof to repair the error. This established Hodge's sought-for isomorphism between harmonic forms and cohomology classes.
In retrospect it is clear that the technical difficulties in the existence theorem did not really require any significant new ideas, but merely a careful extension of classical methods. The real novelty, which was Hodge’s major contribution, was in the conception of harmonic integrals and their relevance to algebraic geometry. This triumph of concept over technique is reminiscent of a similar episode in the work of Hodge’s great predecessor Bernhard Riemann.
—M. F. Atiyah, William Vallance Douglas Hodge, 17 June 1903 – 7 July 1975, Biographical Memoirs of Fellows of the Royal Society, vol. 22, 1976, pp. 169–192.
== Hodge theory for real manifolds ==
=== De Rham cohomology ===
Hodge theory builds on the de Rham complex. Let M be a smooth manifold. For a non-negative integer k, let Ωk(M) be the real vector space of smooth differential forms of degree k on M. The de Rham complex is the sequence of differential operators
0 \to \Omega^{0}(M) \xrightarrow{d_{0}} \Omega^{1}(M) \xrightarrow{d_{1}} \cdots \xrightarrow{d_{n-1}} \Omega^{n}(M) \xrightarrow{d_{n}} 0,
where dk denotes the exterior derivative on Ωk(M). This is a cochain complex in the sense that dk+1 ∘ dk = 0 (also written d2 = 0). De Rham's theorem says that the singular cohomology of M with real coefficients is computed by the de Rham complex:
H^{k}(M,\mathbf{R}) \cong \frac{\ker d_{k}}{\operatorname{im} d_{k-1}}.
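The cochain identity d_{k+1} ∘ d_k = 0 can be checked numerically in the simplest case (a sketch, not from the article): for a 0-form f on R², the coefficient of dx∧dy in d(df) is the difference of the two mixed second partials, which vanishes by symmetry of second derivatives.

```python
import math

# Sketch (not from the article): verify d(df) = 0 for a 0-form f on R^2.
# The dx^dy coefficient of d(df) is d²f/dydx - d²f/dxdy, which is zero
# by equality of mixed partial derivatives.

def f(x, y):
    return math.sin(x) * math.exp(y) + x * y**3

def partial(g, x, y, var, h=1e-4):
    # Central finite difference in the chosen variable.
    if var == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

def mixed(g, x, y, first, second):
    # Differentiate first in `first`, then in `second`.
    return partial(lambda a, b: partial(g, a, b, first), x, y, second)

x0, y0 = 0.7, -0.3
coeff = mixed(f, x0, y0, 'x', 'y') - mixed(f, x0, y0, 'y', 'x')
# coeff is zero up to discretization error, reflecting d∘d = 0
```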
=== Operators in Hodge theory ===
Choose a Riemannian metric g on M and recall that:
\Omega^{k}(M) = \Gamma\left(\bigwedge\nolimits^{k} T^{*}(M)\right).
The metric yields an inner product on each fiber ⋀^k(T_p^*(M)) by extending (see Gramian matrix) the inner product induced by g from each cotangent fiber T_p^*(M) to its k-th exterior power ⋀^k(T_p^*(M)). The inner product on Ω^k(M) is then defined as the integral of the pointwise inner product of a given pair of k-forms over M with respect to the volume form σ associated with g. Explicitly, given some ω, τ ∈ Ω^k(M) we have
(\omega, \tau) \mapsto \langle \omega, \tau \rangle := \int_{M} \langle \omega(p), \tau(p) \rangle_{p}\,\sigma.
The above inner product induces a norm. When that norm is finite on some fixed k-form,
\langle \omega, \omega \rangle = \|\omega\|^{2} < \infty,
the pointwise norm p ↦ ‖ω(p)‖_p is a real-valued, square-integrable function on M, that is, an element of L²(M).
Consider the adjoint operator of d with respect to these inner products:
\delta : \Omega^{k+1}(M) \to \Omega^{k}(M).
Then the Laplacian on forms is defined by
\Delta = d\delta + \delta d.
This is a second-order linear differential operator, generalizing the Laplacian for functions on R^n. By definition, a form on M is harmonic if its Laplacian is zero:
\mathcal{H}_{\Delta}^{k}(M) = \{\alpha \in \Omega^{k}(M) \mid \Delta \alpha = 0\}.
The Laplacian appeared first in mathematical physics. In particular, Maxwell's equations say that the electromagnetic field in a vacuum, i.e. absent any charges, is represented by a 2-form F such that ΔF = 0 on spacetime, viewed as Minkowski space of dimension 4.
Every harmonic form α on a closed Riemannian manifold is closed, meaning that dα = 0. As a result, there is a canonical mapping
\varphi : \mathcal{H}_{\Delta}^{k}(M) \to H^{k}(M,\mathbf{R}).
The Hodge theorem states that φ is an isomorphism of vector spaces. In other words, each real cohomology class on M has a unique harmonic representative. Concretely, the harmonic representative is the unique closed form of minimum L² norm that represents a given cohomology class. The Hodge theorem was proved using the theory of elliptic partial differential equations, with Hodge's initial arguments completed by Kodaira and others in the 1940s.
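A minimal example (standard, and not specific to this article): on the flat torus T² = R²/(2πZ)² with the flat metric, the harmonic 1-forms are exactly the constant-coefficient forms, so

```latex
\mathcal{H}_{\Delta}^{1}(T^{2})
  = \left\{\, a\,d\theta + b\,d\varphi \;:\; a, b \in \mathbf{R} \,\right\}
  \cong \mathbf{R}^{2}
  \cong H^{1}(T^{2}, \mathbf{R}),
```

matching b₁(T²) = 2, as the Hodge theorem predicts.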
For example, the Hodge theorem implies that the cohomology groups with real coefficients of a closed manifold are finite-dimensional. (Admittedly, there are other ways to prove this.) Indeed, the operators Δ are elliptic, and the kernel of an elliptic operator on a closed manifold is always a finite-dimensional vector space. Another consequence of the Hodge theorem is that a Riemannian metric on a closed manifold M determines a real-valued inner product on the integral cohomology of M modulo torsion. It follows, for example, that the image of the isometry group of M in the general linear group GL(H∗(M, Z)) is finite (because the group of isometries of a lattice is finite).
A variant of the Hodge theorem is the Hodge decomposition. This says that there is a unique decomposition of any differential form ω on a closed Riemannian manifold as a sum of three parts in the form
\omega = d\alpha + \delta\beta + \gamma,
in which γ is harmonic: Δγ = 0. In terms of the L² metric on differential forms, this gives an orthogonal direct sum decomposition:
\Omega^{k}(M) \cong \operatorname{im} d_{k-1} \oplus \operatorname{im} \delta_{k+1} \oplus \mathcal{H}_{\Delta}^{k}(M).
The Hodge decomposition is a generalization of the Helmholtz decomposition for the de Rham complex.
=== Hodge theory of elliptic complexes ===
Atiyah and Bott defined elliptic complexes as a generalization of the de Rham complex. The Hodge theorem extends to this setting, as follows. Let E_0, E_1, …, E_N be vector bundles, equipped with metrics, on a closed smooth manifold M with a volume form dV. Suppose that
L_{i} : \Gamma(E_{i}) \to \Gamma(E_{i+1})
are linear differential operators acting on C∞ sections of these vector bundles, and that the induced sequence
0 \to \Gamma(E_{0}) \to \Gamma(E_{1}) \to \cdots \to \Gamma(E_{N}) \to 0
is an elliptic complex. Introduce the direct sums:
\mathcal{E}^{\bullet} = \bigoplus\nolimits_{i} \Gamma(E_{i}), \qquad L = \bigoplus\nolimits_{i} L_{i} : \mathcal{E}^{\bullet} \to \mathcal{E}^{\bullet},
and let L∗ be the adjoint of L. Define the elliptic operator Δ = LL∗ + L∗L. As in the de Rham case, this yields the vector space of harmonic sections
\mathcal{H} = \{ e \in \mathcal{E}^{\bullet} \mid \Delta e = 0 \}.
Let H : \mathcal{E}^{\bullet} \to \mathcal{H} be the orthogonal projection, and let G be the Green's operator for Δ. The Hodge theorem then asserts the following:
H and G are well-defined.
Id = H + ΔG = H + GΔ
LG = GL, L∗G = GL∗
The cohomology of the complex is canonically isomorphic to the space of harmonic sections, H(E_{j}) \cong \mathcal{H}(E_{j}), in the sense that each cohomology class has a unique harmonic representative.
There is also a Hodge decomposition in this situation, generalizing the statement above for the de Rham complex.
== Hodge theory for complex projective varieties ==
Let X be a smooth complex projective manifold, meaning that X is a closed complex submanifold of some complex projective space CPN. By Chow's theorem, complex projective manifolds are automatically algebraic: they are defined by the vanishing of homogeneous polynomial equations on CPN. The standard Riemannian metric on CPN induces a Riemannian metric on X which has a strong compatibility with the complex structure, making X a Kähler manifold.
For a complex manifold X and a natural number r, every C∞ r-form on X (with complex coefficients) can be written uniquely as a sum of forms of type (p, q) with p + q = r, meaning forms that can locally be written as a finite sum of terms, with each term taking the form
f\,dz_{1} \wedge \cdots \wedge dz_{p} \wedge d\overline{w_{1}} \wedge \cdots \wedge d\overline{w_{q}}
with f a C∞ function and the zs and ws holomorphic functions. On a Kähler manifold, the (p, q) components of a harmonic form are again harmonic. Therefore, for any compact Kähler manifold X, the Hodge theorem gives a decomposition of the cohomology of X with complex coefficients as a direct sum of complex vector spaces:
H^{r}(X,\mathbf{C}) = \bigoplus_{p+q=r} H^{p,q}(X).
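For example (a standard case, not drawn from the text above): for a one-dimensional complex torus E = C/Λ, i.e. an elliptic curve, the decomposition in degree 1 reads

```latex
H^{1}(E, \mathbf{C}) = H^{1,0}(E) \oplus H^{0,1}(E)
  = \mathbf{C}\,[dz] \oplus \mathbf{C}\,[d{\bar{z}}],
```

with dz holomorphic and dz̄ antiholomorphic, so h^{1,0} = h^{0,1} = 1 and b₁ = 2.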
This decomposition is in fact independent of the choice of Kähler metric (but there is no analogous decomposition for a general compact complex manifold). On the other hand, the Hodge decomposition genuinely depends on the structure of X as a complex manifold, whereas the group Hr(X, C) depends only on the underlying topological space of X.
Taking wedge products of these harmonic representatives corresponds to the cup product in cohomology, so the cup product with complex coefficients is compatible with the Hodge decomposition:
\smile \colon H^{p,q}(X) \times H^{p',q'}(X) \to H^{p+p',q+q'}(X).
The piece Hp,q(X) of the Hodge decomposition can be identified with a coherent sheaf cohomology group, which depends only on X as a complex manifold (not on the choice of Kähler metric):
H^{p,q}(X) \cong H^{q}(X,\Omega^{p}),
where Ωp denotes the sheaf of holomorphic p-forms on X. For example, Hp,0(X) is the space of holomorphic p-forms on X. (If X is projective, Serre's GAGA theorem implies that a holomorphic p-form on all of X is in fact algebraic.)
On the other hand, the integral can be written as the cap product of the homology class of Z and the cohomology class represented by α. By Poincaré duality, the homology class of Z is dual to a cohomology class which we will call [Z], and the cap product can be computed by taking the cup product of [Z] and α and capping with the fundamental class of X.
Because [Z] is a cohomology class, it has a Hodge decomposition. By the computation we did above, if we cup this class with any class of type (p, q) ≠ (k, k), then we get zero. Because H^{2n}(X, C) = H^{n,n}(X), we conclude that [Z] must lie in H^{n−k,n−k}(X).
The Hodge number hp,q(X) is the dimension of the complex vector space Hp,q(X). These are important invariants of a smooth complex projective variety; they do not change when the complex structure of X is varied continuously, and yet they are in general not topological invariants. Among the properties of Hodge numbers are Hodge symmetry hp,q = hq,p (because Hp,q(X) is the complex conjugate of Hq,p(X)) and hp,q = hn−p,n−q (by Serre duality).
The Hodge numbers of a smooth complex projective variety (or compact Kähler manifold) can be listed in the Hodge diamond (shown in the case of complex dimension 2):
For example, every smooth projective curve of genus g has Hodge diamond
For another example, every K3 surface has Hodge diamond
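The two diamonds referred to above are standard and can be written out explicitly (the rows sum to the Betti numbers): for a smooth projective curve of genus g (left) and for a K3 surface (right),

```latex
\begin{array}{ccc}
  & 1 &   \\
g &   & g \\
  & 1 &
\end{array}
\qquad\qquad
\begin{array}{ccccc}
  &   & 1  &   &   \\
  & 0 &    & 0 &   \\
1 &   & 20 &   & 1 \\
  & 0 &    & 0 &   \\
  &   & 1  &   &
\end{array}
```

so, for example, b₂ of a K3 surface is 1 + 20 + 1 = 22.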
The Betti numbers of X are the sum of the Hodge numbers in a given row. A basic application of Hodge theory is then that the odd Betti numbers b2a+1 of a smooth complex projective variety (or compact Kähler manifold) are even, by Hodge symmetry. This is not true for compact complex manifolds in general, as shown by the example of the Hopf surface, which is diffeomorphic to S1 × S3 and hence has b1 = 1.
The "Kähler package" is a powerful set of restrictions on the cohomology of smooth complex projective varieties (or compact Kähler manifolds), building on Hodge theory. The results include the Lefschetz hyperplane theorem, the hard Lefschetz theorem, and the Hodge–Riemann bilinear relations. Many of these results follow from fundamental technical tools which may be proven for compact Kähler manifolds using Hodge theory, including the Kähler identities and the
∂∂̄-lemma.
Hodge theory and extensions such as non-abelian Hodge theory also give strong restrictions on the possible fundamental groups of compact Kähler manifolds.
== Algebraic cycles and the Hodge conjecture ==
Let X be a smooth complex projective variety. A complex subvariety Y in X of codimension p defines an element of the cohomology group H^{2p}(X, Z). Moreover, the resulting class has a special property: its image in the complex cohomology H^{2p}(X, C) lies in the middle piece of the Hodge decomposition, H^{p,p}(X). The Hodge conjecture predicts a converse: every element of H^{2p}(X, Z) whose image in complex cohomology lies in the subspace H^{p,p}(X) should have a positive integral multiple that is a Z-linear combination of classes of complex subvarieties of X. (Such a linear combination is called an algebraic cycle on X.)
A crucial point is that the Hodge decomposition is a decomposition of cohomology with complex coefficients that usually does not come from a decomposition of cohomology with integral (or rational) coefficients. As a result, the intersection
(H^{2p}(X,\mathbb{Z})/\text{torsion}) \cap H^{p,p}(X) \subseteq H^{2p}(X,\mathbb{C})
may be much smaller than the whole group H^{2p}(X, Z)/torsion, even if the Hodge number h^{p,p} is big. In short, the Hodge conjecture predicts that the possible "shapes" of complex subvarieties of X (as described by cohomology) are determined by the Hodge structure of X (the combination of integral cohomology with the Hodge decomposition of complex cohomology).
The Lefschetz (1,1)-theorem says that the Hodge conjecture is true for p = 1 (even integrally, that is, without the need for a positive integral multiple in the statement).
The Hodge structure of a variety X describes the integrals of algebraic differential forms on X over homology classes in X. In this sense, Hodge theory is related to a basic issue in calculus: there is in general no "formula" for the integral of an algebraic function. In particular, definite integrals of algebraic functions, known as periods, can be transcendental numbers. The difficulty of the Hodge conjecture reflects the lack of understanding of such integrals in general.
Example: For a smooth complex projective K3 surface X, the group H²(X, Z) is isomorphic to Z²², and H^{1,1}(X) is isomorphic to C²⁰. Their intersection can have rank anywhere between 1 and 20; this rank is called the Picard number of X. The moduli space of all projective K3 surfaces has a countably infinite set of components, each of complex dimension 19. The subspace of K3 surfaces with Picard number a has dimension 20 − a. (Thus, for most projective K3 surfaces, the intersection of H²(X, Z) with H^{1,1}(X) is isomorphic to Z, but for "special" K3 surfaces the intersection can be bigger.)
This example suggests several different roles played by Hodge theory in complex algebraic geometry. First, Hodge theory gives restrictions on which topological spaces can have the structure of a smooth complex projective variety. Second, Hodge theory gives information about the moduli space of smooth complex projective varieties with a given topological type. The best case is when the Torelli theorem holds, meaning that the variety is determined up to isomorphism by its Hodge structure. Finally, Hodge theory gives information about the Chow group of algebraic cycles on a given variety. The Hodge conjecture is about the image of the cycle map from Chow groups to ordinary cohomology, but Hodge theory also gives information about the kernel of the cycle map, for example using the intermediate Jacobians which are built from the Hodge structure.
== Generalizations ==
Mixed Hodge theory, developed by Pierre Deligne, extends Hodge theory to all complex algebraic varieties, not necessarily smooth or compact. Namely, the cohomology of any complex algebraic variety has a more general type of decomposition, a mixed Hodge structure.
A different generalization of Hodge theory to singular varieties is provided by intersection homology. Namely, Morihiko Saito showed that the intersection homology of any complex projective variety (not necessarily smooth) has a pure Hodge structure, just as in the smooth case. In fact, the whole Kähler package extends to intersection homology.
A fundamental aspect of complex geometry is that there are continuous families of non-isomorphic complex manifolds (which are all diffeomorphic as real manifolds). Phillip Griffiths's notion of a variation of Hodge structure describes how the Hodge structure of a smooth complex projective variety X varies when X varies. In geometric terms, this amounts to studying the period mapping associated to a family of varieties. Saito's theory of Hodge modules is a generalization. Roughly speaking, a mixed Hodge module on a variety X is a sheaf of mixed Hodge structures over X, as would arise from a family of varieties which need not be smooth or compact.
== See also ==
Potential theory
Serre duality
Helmholtz decomposition
Local invariant cycle theorem
Arakelov theory
Hodge–Arakelov theory
ddbar lemma, a key consequence of Hodge theory for compact Kähler manifolds.
== Notes ==
== References ==
Arapura, Donu, Computing Some Hodge Numbers (PDF)
Griffiths, Phillip; Harris, Joseph (1994) [1978]. Principles of Algebraic Geometry. Wiley Classics Library. Wiley Interscience. ISBN 0-471-05059-8. MR 0507725.
Hodge, W. V. D. (1941), The Theory and Applications of Harmonic Integrals, Cambridge University Press, ISBN 978-0-521-35881-1, MR 0003947
Huybrechts, Daniel (2005), Complex Geometry: An Introduction, Springer, ISBN 3-540-21290-6, MR 2093043
Voisin, Claire (2007) [2002], Hodge Theory and Complex Algebraic Geometry (2 vols.), Cambridge University Press, doi:10.1017/CBO9780511615344, ISBN 978-0-521-71801-1, MR 1967689
Warner, Frank (1983) [1971], Foundations of Differentiable Manifolds and Lie Groups, Springer, ISBN 0-387-90894-3, MR 0722297
Wells Jr., Raymond O. (2008) [1973], Differential Analysis on Complex Manifolds, Graduate Texts in Mathematics, vol. 65 (3rd ed.), Springer, doi:10.1007/978-0-387-73892-5, hdl:10338.dmlcz/141778, ISBN 978-0-387-73891-8, MR 2359489
Python code for computing Hodge numbers of hypersurfaces on GitHub
In the part of mathematics referred to as topology, a surface is a two-dimensional manifold. Some surfaces arise as the boundaries of three-dimensional solid figures; for example, the sphere is the boundary of the solid ball. Other surfaces arise as graphs of functions of two variables; see the figure at right. However, surfaces can also be defined abstractly, without reference to any ambient space. For example, the Klein bottle is a surface that cannot be embedded in three-dimensional Euclidean space.
Topological surfaces are sometimes equipped with additional information, such as a Riemannian metric or a complex structure, that connects them to other disciplines within mathematics, such as differential geometry and complex analysis. The various mathematical notions of surface can be used to model surfaces in the physical world.
== In general ==
In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3, such as spheres. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface.
== Definitions and first examples ==
A (topological) surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a (coordinate) chart. It is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates and these homeomorphisms lead us to describe surfaces as being locally Euclidean.
In most writings on the subject, it is often assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second-countable, and Hausdorff. It is also often assumed that the surfaces under consideration are connected.
The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second-countable, and connected.
More generally, a (topological) surface with boundary is a Hausdorff topological space in which every point has an open neighbourhood homeomorphic to some open subset of the closure of the upper half-plane H2 in C. These homeomorphisms are also known as (coordinate) charts. The boundary of the upper half-plane is the x-axis. A point on the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface, which is necessarily a one-manifold, that is, a union of closed curves. On the other hand, a point mapped above the x-axis is an interior point. The collection of interior points is the interior of the surface, which is always non-empty. The closed disk is a simple example of a surface with boundary; its boundary is a circle.
The term surface used without qualification refers to surfaces without boundary. In particular, a surface with empty boundary is a surface in the usual sense. A surface with empty boundary which is compact is known as a 'closed' surface. The two-dimensional sphere, the two-dimensional torus, and the real projective plane are examples of closed surfaces.
The Möbius strip is a surface on which the distinction between clockwise and counterclockwise can be defined locally, but not globally. In general, a surface is said to be orientable if it does not contain a homeomorphic copy of the Möbius strip; intuitively, it has two distinct "sides". For example, the sphere and torus are orientable, while the real projective plane is not (because the real projective plane with one point removed is homeomorphic to the open Möbius strip).
In differential and algebraic geometry, extra structure is added upon the topology of the surface. This added structure can be a smoothness structure (making it possible to define differentiable maps to and from the surface), a Riemannian metric (making it possible to define length and angles on the surface), a complex structure (making it possible to define holomorphic maps to and from the surface—in which case the surface is called a Riemann surface), or an algebraic structure (making it possible to detect singularities, such as self-intersections and cusps, that cannot be described solely in terms of the underlying topology).
== Extrinsically defined surfaces and embeddings ==
Historically, surfaces were initially defined as subspaces of Euclidean spaces. Often, these surfaces were the locus of zeros of certain functions, usually polynomial functions. Such a definition considered the surface as part of a larger (Euclidean) space, and as such was termed extrinsic.
In the previous section, a surface is defined as a topological space with certain properties, namely Hausdorff and locally Euclidean. This topological space is not considered a subspace of another space. In this sense, the definition given above, which is the definition that mathematicians use at present, is intrinsic.
An intrinsically defined surface is not required to satisfy the added constraint of being a subspace of Euclidean space. It may seem possible for some surfaces defined intrinsically not to be surfaces in the extrinsic sense. However, the Whitney embedding theorem asserts that every surface can in fact be embedded homeomorphically into Euclidean space, in fact into E4: the extrinsic and intrinsic approaches turn out to be equivalent.
In fact, any compact surface that is either orientable or has a boundary can be embedded in E3; on the other hand, the real projective plane, which is compact, non-orientable and without boundary, cannot be embedded into E3 (see Gramain). Steiner surfaces, including Boy's surface, the Roman surface and the cross-cap, are models of the real projective plane in E3, but only the Boy surface is an immersed surface. All these models are singular at points where they intersect themselves.
The Alexander horned sphere is a well-known pathological embedding of the two-sphere into the three-sphere.
The chosen embedding (if any) of a surface into another space is regarded as extrinsic information; it is not essential to the surface itself. For example, a torus can be embedded into E3 in the "standard" manner (which looks like a bagel) or in a knotted manner (see figure). The two embedded tori are homeomorphic, but not isotopic: They are topologically equivalent, but their embeddings are not.
The image of a continuous, injective function from R2 to higher-dimensional Rn is said to be a parametric surface. Such an image is so-called because the x- and y- directions of the domain R2 are 2 variables that parametrize the image. A parametric surface need not be a topological surface. A surface of revolution can be viewed as a special kind of parametric surface.
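As an illustration (not part of the standard exposition), the torus is both a parametric surface and a surface of revolution: a short Python sketch of its parametrization, with illustrative radii R and r chosen here as examples.

```python
import math

def torus_point(u, v, R=2.0, r=1.0):
    """Parametrize the torus as a surface of revolution: a circle of radius r
    (traced by v) revolved about the z-axis at distance R (traced by u)."""
    x = (R + r * math.cos(v)) * math.cos(u)
    y = (R + r * math.cos(v)) * math.sin(u)
    z = r * math.sin(v)
    return (x, y, z)

# Every point of the image satisfies the implicit torus equation
# (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2, confirming the parametrization.
p = torus_point(0.7, 2.1)
lhs = (math.hypot(p[0], p[1]) - 2.0) ** 2 + p[2] ** 2
print(abs(lhs - 1.0) < 1e-12)  # True
```

The two parameters u and v are exactly the "two degrees of freedom" of a moving point on the surface.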
If f is a smooth function from R3 to R whose gradient is nowhere zero, then the locus of zeros of f does define a surface, known as an implicit surface. If the condition of non-vanishing gradient is dropped, then the zero locus may develop singularities.
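A minimal Python sketch of the implicit-surface condition, using the unit sphere as the example: the gradient of f is checked to be nonzero at a sample point of the zero locus.

```python
import math

def f(x, y, z):
    """Unit sphere as an implicit surface: the zero locus of f."""
    return x * x + y * y + z * z - 1.0

def grad_f(x, y, z):
    # Gradient of f; it vanishes only at the origin, which is not on the locus.
    return (2 * x, 2 * y, 2 * z)

# Sample a point on the zero locus and confirm the gradient is nonzero there,
# so the locus is a genuine smooth surface near that point.
p = (math.sqrt(1.0 / 3.0),) * 3
gnorm = math.sqrt(sum(g * g for g in grad_f(*p)))
print(round(gnorm, 12))  # 2.0
```

Dropping the non-vanishing-gradient condition allows singular examples such as the zero locus of x² + y² − z² (a cone, singular at the origin, where the gradient vanishes).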
== Construction from polygons ==
Each closed surface can be constructed from an oriented polygon with an even number of sides, called a fundamental polygon of the surface, by pairwise identification of its edges. For example, in each polygon below, attaching the sides with matching labels (A with A, B with B), so that the arrows point in the same direction, yields the indicated surface.
Any fundamental polygon can be written symbolically as follows. Begin at any vertex, and proceed around the perimeter of the polygon in either direction until returning to the starting vertex. During this traversal, record the label on each edge in order, with an exponent of -1 if the edge points opposite to the direction of traversal. The four models above, when traversed clockwise starting at the upper left, yield
sphere: {\displaystyle ABB^{-1}A^{-1}}
real projective plane: {\displaystyle ABAB}
torus: {\displaystyle ABA^{-1}B^{-1}}
Klein bottle: {\displaystyle ABAB^{-1}}.
Note that the sphere and the projective plane can both be realized as quotients of the 2-gon, while the torus and Klein bottle require a 4-gon (square).
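A small Python sketch (illustrative, using a standard fact not spelled out above): the surface glued from a fundamental polygon is orientable exactly when no edge label appears twice with the same exponent in the edge word.

```python
from collections import Counter

def orientable_from_word(word):
    """A closed surface glued from a fundamental polygon is orientable iff
    no edge label appears twice with the same exponent in its edge word.
    `word` is a sequence of (label, exponent) pairs, exponent +1 or -1."""
    counts = Counter(word)  # counts each (label, exponent) pair
    return all(c < 2 for c in counts.values())

sphere       = [("A", 1), ("B", 1), ("B", -1), ("A", -1)]  # A B B^-1 A^-1
proj_plane   = [("A", 1), ("B", 1), ("A", 1), ("B", 1)]    # A B A B
torus        = [("A", 1), ("B", 1), ("A", -1), ("B", -1)]  # A B A^-1 B^-1
klein_bottle = [("A", 1), ("B", 1), ("A", 1), ("B", -1)]   # A B A B^-1

print([orientable_from_word(w) for w in (sphere, proj_plane, torus, klein_bottle)])
# [True, False, True, False]
```

This reproduces the expected split: the sphere and torus are orientable, the real projective plane and Klein bottle are not.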
The expression thus derived from a fundamental polygon of a surface turns out to be the sole relation in a presentation of the fundamental group of the surface with the polygon edge labels as generators. This is a consequence of the Seifert–van Kampen theorem.
Gluing edges of polygons is a special kind of quotient space process. The quotient concept can be applied in greater generality to produce new or alternative constructions of surfaces. For example, the real projective plane can be obtained as the quotient of the sphere by identifying all pairs of opposite points on the sphere. Another example of a quotient is the connected sum.
== Connected sums ==
The connected sum of two surfaces M and N, denoted M # N, is obtained by removing a disk from each of them and gluing them along the boundary components that result. The boundary of a disk is a circle, so these boundary components are circles. The Euler characteristic {\displaystyle \chi } of M # N is the sum of the Euler characteristics of the summands, minus two:
{\displaystyle \chi (M{\mathbin {\#}}N)=\chi (M)+\chi (N)-2.}
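The Euler-characteristic formula for connected sums can be sketched in a few lines of Python (an illustration, using the standard values χ(sphere) = 2, χ(torus) = 0, χ(projective plane) = 1):

```python
def chi_connected_sum(chi_m, chi_n):
    # chi(M # N) = chi(M) + chi(N) - 2: removing an open disk from each
    # summand lowers each chi by one (a disk has chi = 1), and the gluing
    # circle has chi = 0, so nothing is added back.
    return chi_m + chi_n - 2

CHI = {"sphere": 2, "torus": 0, "projective plane": 1}

# The sphere is the identity: chi(S # M) = chi(M).
print(chi_connected_sum(CHI["sphere"], CHI["torus"]))  # 0
# Genus-2 surface = torus # torus:
print(chi_connected_sum(CHI["torus"], CHI["torus"]))   # -2
# Klein bottle = P # P:
print(chi_connected_sum(CHI["projective plane"], CHI["projective plane"]))  # 0
```

The last line recovers χ(Klein bottle) = 0, consistent with K = P # P in the monoid section below.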
The sphere S is an identity element for the connected sum, meaning that S # M = M. This is because deleting a disk from the sphere leaves a disk, which simply replaces the disk deleted from M upon gluing.
Connected summation with the torus T is also described as attaching a "handle" to the other summand M. If M is orientable, then so is T # M. The connected sum is associative, so the connected sum of a finite collection of surfaces is well-defined.
The connected sum of two real projective planes, P # P, is the Klein bottle K. The connected sum of the real projective plane and the Klein bottle is homeomorphic to the connected sum of the real projective plane with the torus; in a formula, P # K = P # T. Thus, the connected sum of three real projective planes is homeomorphic to the connected sum of the real projective plane with the torus. Any connected sum involving a real projective plane is nonorientable.
== Closed surfaces ==
A closed surface is a surface that is compact and without boundary. Examples of closed surfaces include the sphere, the torus and the Klein bottle. Examples of non-closed surfaces include an open disk (which is a sphere with a puncture), an open cylinder (which is a sphere with two punctures), and the Möbius strip.
A surface embedded in three-dimensional space is closed if and only if it is the boundary of a solid. As with any closed manifold, a surface embedded in Euclidean space that is closed with respect to the inherited Euclidean topology is not necessarily a closed surface; for example, a disk embedded in {\displaystyle \mathbb {R} ^{3}} that contains its boundary is a surface that is topologically closed but not a closed surface.
=== Classification of closed surfaces ===
The classification theorem of closed surfaces states that any connected closed surface is homeomorphic to some member of one of these three families:
the sphere,
the connected sum of g tori for g ≥ 1,
the connected sum of k real projective planes for k ≥ 1.
The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number g of tori involved is called the genus of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of g tori is 2 − 2g.
The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of k of them is 2 − k.
It follows that a closed surface is determined, up to homeomorphism, by two pieces of information: its Euler characteristic, and whether it is orientable or not. In other words, Euler characteristic and orientability completely classify closed surfaces up to homeomorphism.
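The classification statement can be turned into a small lookup, sketched here in Python: given the Euler characteristic and orientability, name the homeomorphism type using χ = 2 − 2g (orientable) or χ = 2 − k (non-orientable).

```python
def classify_closed_surface(chi, orientable):
    """Name a connected closed surface from its Euler characteristic and
    orientability, via chi = 2 - 2g (orientable) or chi = 2 - k (not)."""
    if orientable:
        g, rem = divmod(2 - chi, 2)
        if rem or g < 0:
            raise ValueError("no orientable closed surface has this chi")
        return "sphere" if g == 0 else f"connected sum of {g} tori"
    k = 2 - chi
    if k < 1:
        raise ValueError("no non-orientable closed surface has this chi")
    return f"connected sum of {k} projective planes"

print(classify_closed_surface(2, True))   # sphere
print(classify_closed_surface(-2, True))  # connected sum of 2 tori
print(classify_closed_surface(0, False))  # connected sum of 2 projective planes
```

The last case is the Klein bottle, which has χ = 0 like the torus; the two are distinguished by orientability alone.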
Closed surfaces with multiple connected components are classified by the class of each of their connected components, and thus one generally assumes that the surface is connected.
=== Monoid structure ===
Relating this classification to connected sums, the closed surfaces up to homeomorphism form a commutative monoid under the operation of connected sum, as indeed do manifolds of any fixed dimension. The identity is the sphere, while the real projective plane and the torus generate this monoid, with a single relation P # P # P = P # T, which may also be written P # K = P # T, since K = P # P. This relation is sometimes known as Dyck's theorem after Walther von Dyck, who proved it in (Dyck 1888), and the triple cross surface P # P # P is accordingly called Dyck's surface.
Geometrically, connect-sum with a torus (# T) adds a handle with both ends attached to the same side of the surface, while connect-sum with a Klein bottle (# K) adds a handle with the two ends attached to opposite sides of an orientable surface; in the presence of a projective plane (# P), the surface is not orientable (there is no notion of side), so there is no difference between attaching a torus and attaching a Klein bottle, which explains the relation.
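The monoid structure and Dyck's relation can be sketched in Python (an illustration; surfaces are represented here by their counts of torus and projective-plane summands):

```python
def normal_form(tori, proj_planes):
    """Normal form in the monoid of closed surfaces under connected sum.
    Generators: T (torus) and P (projective plane); sole relation
    P # T = P # P # P, so in the presence of any P each T becomes P # P."""
    if proj_planes == 0:
        return ("orientable", tori)  # the sphere if tori == 0
    return ("non-orientable", proj_planes + 2 * tori)

# Dyck's theorem: P # P # P = P # T (both normalize to 3 projective planes).
print(normal_form(0, 3))  # ('non-orientable', 3)
print(normal_form(1, 1))  # ('non-orientable', 3)
# Since K = P # P, the relation P # K = P # T gives the same surface:
print(normal_form(0, 3) == normal_form(1, 1))  # True
```

Equal normal forms mean homeomorphic surfaces, mirroring the classification by orientability and genus.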
=== Proof ===
The classification of closed surfaces has been known since the 1860s, and today a number of proofs exist.
Topological and combinatorial proofs in general rely on the difficult result that every compact 2-manifold is homeomorphic to a simplicial complex, which is of interest in its own right. The most common proof of the classification is (Seifert & Threlfall 1980), which brings every triangulated surface to a standard form. A simplified proof, which avoids a standard form, was discovered by John H. Conway circa 1992, which he called the "Zero Irrelevancy Proof" or "ZIP proof" and is presented in (Francis & Weeks 1999).
A geometric proof, which yields a stronger geometric result, is the uniformization theorem. This was originally proven only for Riemann surfaces in the 1880s and 1900s by Felix Klein, Paul Koebe, and Henri Poincaré.
== Surfaces with boundary ==
Compact surfaces, possibly with boundary, are simply closed surfaces with a finite number of holes (open discs that have been removed). Thus, a connected compact surface is classified by the number of boundary components and the genus of the corresponding closed surface – equivalently, by the number of boundary components, the orientability, and Euler characteristic. The genus of a compact surface is defined as the genus of the corresponding closed surface.
This classification follows almost immediately from the classification of closed surfaces: removing an open disc from a closed surface yields a compact surface with a circle for boundary component, and removing k open discs yields a compact surface with k disjoint circles for boundary components. The precise locations of the holes are irrelevant, because the homeomorphism group acts k-transitively on any connected manifold of dimension at least 2.
Conversely, the boundary of a compact surface is a closed 1-manifold, and is therefore the disjoint union of a finite number of circles; filling these circles with disks (formally, taking the cone) yields a closed surface.
The unique compact orientable surface of genus g and with k boundary components is often denoted {\displaystyle \Sigma _{g,k}}, for example in the study of the mapping class group.
== Non-compact surfaces ==
Non-compact surfaces are more difficult to classify. As a simple example, a non-compact surface can be obtained by puncturing (removing a finite set of points from) a closed manifold. On the other hand, any open subset of a compact surface is itself a non-compact surface; consider, for example, the complement of a Cantor set in the sphere, otherwise known as the Cantor tree surface. However, not every non-compact surface is a subset of a compact surface; two canonical counterexamples are the Jacob's ladder and the Loch Ness monster, which are non-compact surfaces with infinite genus.
A non-compact surface M has a non-empty space of ends E(M), which informally speaking describes the ways that the surface "goes off to infinity". The space E(M) is always topologically equivalent to a closed subspace of the Cantor set. M may have a finite or countably infinite number Nh of handles, as well as a finite or countably infinite number Np of projective planes. If both Nh and Np are finite, then these two numbers, and the topological type of space of ends, classify the surface M up to topological equivalence. If either or both of Nh and Np is infinite, then the topological type of M depends not only on these two numbers but also on how the infinite one(s) approach the space of ends. In general the topological type of M is determined by the four subspaces of E(M) that are limit points of infinitely many handles and infinitely many projective planes, limit points of only handles, limit points of only projective planes, and limit points of neither.
== Assumption of second-countability ==
If one removes the assumption of second-countability from the definition of a surface, there exist (necessarily non-compact) topological surfaces having no countable base for their topology. Perhaps the simplest example is the Cartesian product of the long line with the space of real numbers.
Another surface having no countable base for its topology, but not requiring the Axiom of Choice to prove its existence, is the Prüfer manifold, which can be described by simple equations that show it to be a real-analytic surface. The Prüfer manifold may be thought of as the upper half plane together with one additional "tongue" Tx hanging down from it directly below the point (x,0), for each real x.
In 1925, Tibor Radó proved that all Riemann surfaces (i.e., one-dimensional complex manifolds) are necessarily second-countable (Radó's theorem). By contrast, if one replaces the real numbers in the construction of the Prüfer surface by the complex numbers, one obtains a two-dimensional complex manifold (which is necessarily a 4-dimensional real manifold) with no countable base.
== Surfaces in geometry ==
Polyhedra, such as the boundary of a cube, are among the first surfaces encountered in geometry. It is also possible to define smooth surfaces, in which each point has a neighborhood diffeomorphic to some open set in E2. This elaboration allows calculus to be applied to surfaces to prove many results.
Two smooth surfaces are diffeomorphic if and only if they are homeomorphic. (The analogous result does not hold for higher-dimensional manifolds.) Thus closed surfaces are classified up to diffeomorphism by their Euler characteristic and orientability.
Smooth surfaces equipped with Riemannian metrics are of foundational importance in differential geometry. A Riemannian metric endows a surface with notions of geodesic, distance, angle, and area. It also gives rise to Gaussian curvature, which describes how curved or bent the surface is at each point. Curvature is a rigid, geometric property, in that it is not preserved by general diffeomorphisms of the surface. However, the famous Gauss–Bonnet theorem for closed surfaces states that the integral of the Gaussian curvature K over the entire surface S is determined by the Euler characteristic:
{\displaystyle \int _{S}K\;dA=2\pi \chi (S).}
This result exemplifies the deep relationship between the geometry and topology of surfaces (and, to a lesser extent, higher-dimensional manifolds).
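For a round sphere of radius r, the Gauss–Bonnet theorem can be checked in closed form: K = 1/r² is constant and the area is 4πr², so the total curvature is 4π = 2πχ(S) with χ(S) = 2, independently of r. A minimal Python check:

```python
import math

# Gauss-Bonnet on a round sphere of radius r: K = 1/r^2 is constant,
# the area is 4*pi*r^2, so the total curvature is 4*pi = 2*pi*chi, chi = 2.
for r in (0.5, 1.0, 3.0):
    K = 1.0 / r**2
    area = 4.0 * math.pi * r**2
    chi = (K * area) / (2.0 * math.pi)
    print(round(chi, 10))  # 2.0 for every radius
```

The radius drops out exactly, illustrating that the left-hand side is a topological invariant even though K itself is not.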
Another way in which surfaces arise in geometry is by passing into the complex domain. A complex one-manifold is a smooth oriented surface, also called a Riemann surface. Any complex nonsingular algebraic curve viewed as a complex manifold is a Riemann surface. In fact, every compact orientable surface is realizable as a Riemann surface. Thus compact Riemann surfaces are characterized topologically by their genus: 0, 1, 2, .... On the other hand, the genus does not characterize the complex structure. For example, there are uncountably many non-isomorphic compact Riemann surfaces of genus 1 (the elliptic curves).
Complex structures on a closed oriented surface correspond to conformal equivalence classes of Riemannian metrics on the surface. One version of the uniformization theorem (due to Poincaré) states that any Riemannian metric on an oriented, closed surface is conformally equivalent to an essentially unique metric of constant curvature. This provides a starting point for one of the approaches to Teichmüller theory, which provides a finer classification of Riemann surfaces than the topological one by Euler characteristic alone.
A complex surface is a complex two-manifold and thus a real four-manifold; it is not a surface in the sense of this article. Neither are algebraic curves defined over fields other than the complex numbers, nor are algebraic surfaces defined over fields other than the real numbers.
== See also ==
Boundary (topology)
Volume form, for volumes of surfaces in En
Poincaré metric, for metric properties of Riemann surfaces
Roman surface
Boy's surface
Tetrahemihexahedron
Crumpled surface, a non-differentiable surface obtained by deforming (crumpling) a differentiable surface
== Notes ==
== References ==
Dyck, Walther (1888), "Beiträge zur Analysis situs I", Math. Ann., 32 (4): 459–512, doi:10.1007/bf01443580, S2CID 118123073
=== Simplicial proofs of classification up to homeomorphism ===
Seifert, Herbert; Threlfall, William (1980), A textbook of topology, Pure and Applied Mathematics, vol. 89, Academic Press, ISBN 0126348502, English translation of 1934 classic German textbook
Ahlfors, Lars V.; Sario, Leo (1960), Riemann surfaces, Princeton Mathematical Series, vol. 26, Princeton University Press, Chapter I
Maunder, C. R. F. (1996), Algebraic topology, Dover Publications, ISBN 0486691314, Cambridge undergraduate course
Massey, William S. (1991). A Basic Course in Algebraic Topology. Springer-Verlag. ISBN 0-387-97430-X.
Bredon, Glen E. (1993). Topology and Geometry. Springer-Verlag. ISBN 0-387-97926-3.
Jost, Jürgen (2006), Compact Riemann surfaces: an introduction to contemporary mathematics (3rd ed.), Springer, ISBN 3540330658, for closed oriented Riemannian manifolds
=== Morse theoretic proofs of classification up to diffeomorphism ===
Hirsch, M. (1994), Differential topology (2nd ed.), Springer
Gauld, David B. (1982), Differential topology: an introduction, Monographs and Textbooks in Pure and Applied Mathematics, vol. 72, Marcel Dekker, ISBN 0824717090
Shastri, Anant R. (2011), Elements of differential topology, CRC Press, ISBN 9781439831601, careful proof aimed at undergraduates
Gramain, André (1984). Topology of Surfaces. BCS Associates. ISBN 0-914351-01-X. (Original 1969-70 Orsay course notes in French for "Topologie des Surfaces")
A. Champanerkar; et al., Classification of surfaces via Morse Theory (PDF), an exposition of Gramain's notes
=== Other proofs ===
Lawson, Terry (2003), Topology: a geometric approach, Oxford University Press, ISBN 0-19-851597-9, similar to Morse theoretic proof using sliding of attached handles
Francis, George K.; Weeks, Jeffrey R. (May 1999), "Conway's ZIP Proof" (PDF), American Mathematical Monthly, 106 (5): 393, doi:10.2307/2589143, JSTOR 2589143; page discussing the paper: On Conway's ZIP Proof
Thomassen, Carsten (1992), "The Jordan–Schönflies theorem and the classification of surfaces", Amer. Math. Monthly, 99 (2): 116–130, doi:10.2307/2324180, JSTOR 2324180, short elementary proof using spanning graphs
Prasolov, V.V. (2006), Elements of combinatorial and differential topology, Graduate Studies in Mathematics, vol. 74, American Mathematical Society, ISBN 0821838091, contains short account of Thomassen's proof
== External links ==
Classification of Compact Surfaces in Mathifold Project
The Classification of Surfaces and the Jordan Curve Theorem in Home page of Andrew Ranicki
Math Surfaces Gallery, with 60 surfaces and a Java applet for live rotation viewing
Math Surfaces Animation, with JavaScript (HTML Canvas) rotation viewing of dozens of surfaces
The Classification of Surfaces Lecture Notes by Z.Fiedorowicz
History and Art of Surfaces and their Mathematical Models
2-manifolds at the Manifold Atlas
In mathematics, genus (pl.: genera) has a few different, but closely related, meanings. Intuitively, the genus is the number of "holes" of a surface. A sphere has genus 0, while a torus has genus 1.
== Topology ==
=== Orientable surfaces ===
The genus of a connected, orientable surface is an integer representing the maximum number of cuttings along non-intersecting closed simple curves without rendering the resultant manifold disconnected. It is equal to the number of handles on it. Alternatively, it can be defined in terms of the Euler characteristic {\displaystyle \chi }, via the relationship {\displaystyle \chi =2-2g} for closed surfaces, where {\displaystyle g} is the genus. For surfaces with {\displaystyle b} boundary components, the equation reads {\displaystyle \chi =2-2g-b}.
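Solving the relation χ = 2 − 2g − b for the genus gives a one-line computation, sketched here in Python:

```python
def genus_orientable(chi, boundary_components=0):
    # Solve chi = 2 - 2g - b for g; the division must be exact.
    g, rem = divmod(2 - chi - boundary_components, 2)
    if rem:
        raise ValueError("inconsistent chi/boundary data for an orientable surface")
    return g

print(genus_orientable(2))      # 0: the sphere
print(genus_orientable(0))      # 1: the torus
print(genus_orientable(1, 1))   # 0: the disc (chi = 1, one boundary circle)
print(genus_orientable(-1, 1))  # 1: the one-holed torus
```

The same data in the other direction recovers χ: for example a genus-2 closed surface has χ = 2 − 2·2 = −2.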
In layman's terms, the genus is the number of "holes" an object has ("holes" interpreted in the sense of doughnut holes; a hollow sphere would be considered as having zero holes in this sense). A torus has 1 such hole, while a sphere has 0. The green surface pictured above has 2 holes of the relevant sort.
For instance:
The sphere {\displaystyle S^{2}} and a disc both have genus zero.
A torus has genus one, as does the surface of a coffee mug with a handle. This is the source of the joke "topologists are people who can't tell their donut from their coffee mug."
Explicit construction of surfaces of genus g is given in the article on the fundamental polygon.
Genus of orientable surfaces
=== Non-orientable surfaces ===
The non-orientable genus, demigenus, or Euler genus of a connected, non-orientable closed surface is a positive integer representing the number of cross-caps attached to a sphere. Alternatively, it can be defined for a closed surface in terms of the Euler characteristic χ, via the relationship χ = 2 − k, where k is the non-orientable genus.
For instance:
A real projective plane has non-orientable genus 1.
A Klein bottle has non-orientable genus 2.
=== Knot ===
The genus of a knot K is defined as the minimal genus of all Seifert surfaces for K. A Seifert surface of a knot is, however, a manifold with boundary, the boundary being the knot, i.e. homeomorphic to the unit circle. The genus of such a surface is defined to be the genus of the two-manifold obtained by gluing the unit disk along the boundary.
=== Handlebody ===
The genus of a 3-dimensional handlebody is an integer representing the maximum number of cuttings along embedded disks without rendering the resultant manifold disconnected. It is equal to the number of handles on it.
For instance:
A ball has genus 0.
A solid torus D2 × S1 has genus 1.
=== Graph theory ===
The genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n handles (i.e. an oriented surface of genus n). Thus, a planar graph has genus 0, because it can be drawn on a sphere without self-crossing.
The non-orientable genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps (i.e. a non-orientable surface of (non-orientable) genus n). (This number is also called the demigenus.)
The Euler genus is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps or on a sphere with n/2 handles.
In topological graph theory there are several definitions of the genus of a group. Arthur T. White introduced the following concept. The genus of a group G is the minimum genus of a (connected, undirected) Cayley graph for G.
The graph genus problem is NP-complete.
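Although the general problem is NP-complete, closed formulas are known for special families. For instance, the Ringel–Youngs theorem (a standard result not stated above) gives the orientable genus of the complete graph K_n, sketched here in Python:

```python
def genus_complete_graph(n):
    """Orientable genus of the complete graph K_n (Ringel-Youngs theorem):
    genus(K_n) = ceil((n-3)(n-4)/12) for n >= 3."""
    if n < 3:
        return 0
    # Integer ceiling division avoids floating-point rounding.
    return ((n - 3) * (n - 4) + 11) // 12

print(genus_complete_graph(4))  # 0: K4 is planar
print(genus_complete_graph(5))  # 1: K5 needs a torus
print(genus_complete_graph(7))  # 1: K7 also embeds in the torus
print(genus_complete_graph(8))  # 2
```

That K5 has genus 1 reflects Kuratowski's theorem (K5 is non-planar) together with an explicit embedding of K5 in the torus.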
== Algebraic geometry ==
There are two related definitions of genus of any projective algebraic scheme {\displaystyle X}: the arithmetic genus and the geometric genus. When {\displaystyle X} is an algebraic curve with field of definition the complex numbers, and if {\displaystyle X} has no singular points, then these definitions agree and coincide with the topological definition applied to the Riemann surface of {\displaystyle X} (its manifold of complex points). For example, the definition of an elliptic curve from algebraic geometry is a connected non-singular projective curve of genus 1 with a given rational point on it.
By the Riemann–Roch theorem, an irreducible plane curve of degree $d$ given by the vanishing locus of a section $s \in \Gamma(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(d))$ has geometric genus

$g = \frac{(d-1)(d-2)}{2} - s,$

where $s$ is the number of singularities when properly counted.
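The genus–degree formula above is a one-line computation; the following sketch (function name ours) evaluates it for a few standard curves:

```python
def plane_curve_genus(d, s=0):
    """Geometric genus of an irreducible plane curve of degree d with
    s properly counted singularities: g = (d-1)(d-2)/2 - s."""
    return (d - 1) * (d - 2) // 2 - s

print(plane_curve_genus(3))      # smooth cubic: genus 1 (an elliptic curve)
print(plane_curve_genus(3, 1))   # nodal cubic: genus 0 (a rational curve)
print(plane_curve_genus(4))      # smooth quartic: genus 3
```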
== Differential geometry ==
In differential geometry, a genus of an oriented manifold $M$ may be defined as a complex number $\Phi(M)$ subject to the conditions

$\Phi(M_1 \amalg M_2) = \Phi(M_1) + \Phi(M_2)$

$\Phi(M_1 \times M_2) = \Phi(M_1) \cdot \Phi(M_2)$

$\Phi(M_1) = \Phi(M_2)$ if $M_1$ and $M_2$ are cobordant.
In other words, $\Phi$ is a ring homomorphism $R \to \mathbb{C}$, where $R$ is Thom's oriented cobordism ring.
The genus $\Phi$ is multiplicative for all bundles on spinor manifolds with a connected compact structure if $\log_\Phi$ is an elliptic integral such as

$\log_\Phi(x) = \int_0^x (1 - 2\delta t^2 + \varepsilon t^4)^{-1/2}\,dt$

for some $\delta, \varepsilon \in \mathbb{C}.$
This genus is called an elliptic genus.
The Euler characteristic $\chi(M)$ is not a genus in this sense, since it is not invariant under cobordisms.
== Biology ==
Genus can be also calculated for the graph spanned by the net of chemical interactions in nucleic acids or proteins. In particular, one may study the growth of the genus along the chain. Such a function (called the genus trace) shows the topological complexity and domain structure of biomolecules.
== See also ==
Group (mathematics)
Arithmetic genus
Geometric genus
Genus of a multiplicative sequence
Genus of a quadratic form
Spinor genus
== Citations ==
== References ==
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
If x0 is an interior point in the domain of a function f, then f is said to be differentiable at x0 if the derivative $f'(x_0)$ exists. In other words, the graph of f has a non-vertical tangent line at the point (x0, f(x0)). f is said to be differentiable on U if it is differentiable at every point of U. f is said to be continuously differentiable if its derivative is also a continuous function over the domain of f. Generally speaking, f is said to be of class $C^k$ if its first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ exist and are continuous over the domain of f.
For a function of several variables, differentiability is a stronger and more subtle condition than the mere existence of its partial derivatives, as discussed below.
== Differentiability of real functions of one variable ==
A function $f : U \to \mathbb{R}$, defined on an open set $U \subset \mathbb{R}$, is said to be differentiable at $a \in U$ if the derivative

$f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$

exists. This implies that the function is continuous at a.
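The limit definition above can be probed numerically by evaluating the difference quotient at shrinking step sizes. A sketch (names ours): the quotient converges for a differentiable function, while for $|x|$ at 0 the two one-sided quotients disagree, so no derivative exists there.

```python
def diff_quotient(f, a, h):
    """The difference quotient (f(a+h) - f(a)) / h, whose limit as
    h -> 0 (when it exists) is the derivative f'(a)."""
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2             # f'(x) = 2x, so f'(3) = 6
for h in (1e-1, 1e-3, 1e-6):
    print(diff_quotient(f, 3.0, h))   # approaches 6 as h shrinks

g = abs                          # |x| is continuous but not differentiable at 0
print(diff_quotient(g, 0.0, 1e-6), diff_quotient(g, 0.0, -1e-6))  # 1.0 -1.0
```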
This function f is said to be differentiable on U if it is differentiable at every point of U. In this case, the derivative of f is thus a function from U into $\mathbb{R}$.
A continuous function is not necessarily differentiable, but a differentiable function is necessarily continuous (at every point where it is differentiable) as is shown below (in the section Differentiability and continuity). A function is said to be continuously differentiable if its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the section Differentiability classes).
=== Semi-differentiability ===
The above definition can be extended to define the derivative at boundary points. The derivative of a function $f : A \to \mathbb{R}$ defined on a closed subset $A \subsetneq \mathbb{R}$ of the real numbers, evaluated at a boundary point $c$, can be defined as the following one-sided limit, where the argument $x$ approaches $c$ such that it is always within $A$:

$f'(c) = \lim_{x \to c,\; x \in A} \frac{f(x) - f(c)}{x - c}.$
For $x$ to remain within $A$, which is a subset of the reals, this limit reduces to one of the one-sided limits

$f'(c) = \lim_{x \to c^+} \frac{f(x) - f(c)}{x - c} \quad \text{or} \quad f'(c) = \lim_{x \to c^-} \frac{f(x) - f(c)}{x - c}.$
== Differentiability and continuity ==
If f is differentiable at a point x0, then f must also be continuous at x0. In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly.
Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
== Differentiability classes ==
A function $f$ is said to be continuously differentiable if the derivative $f'(x)$ exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function

$f(x) = \begin{cases} x^2 \sin(1/x) & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}$
is differentiable at 0, since

$f'(0) = \lim_{\varepsilon \to 0} \left( \frac{\varepsilon^2 \sin(1/\varepsilon) - 0}{\varepsilon} \right) = 0$
exists. However, for $x \neq 0$, differentiation rules imply

$f'(x) = 2x \sin(1/x) - \cos(1/x),$

which has no limit as $x \to 0$.
Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem.
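The behaviour just described can be checked numerically, using nothing beyond the formulas above (variable names ours): the difference quotients at 0 are squeezed to 0, while the derivative keeps oscillating between values near ±1 arbitrarily close to 0.

```python
import math

def f(x):
    """x^2 sin(1/x) extended by f(0) = 0."""
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# Difference quotients at 0 equal h*sin(1/h), squeezed to 0: f'(0) = 0 exists.
for h in (1e-2, 1e-4, 1e-6):
    print((f(h) - f(0)) / h)          # tends to 0

# But f'(x) = 2x sin(1/x) - cos(1/x) keeps oscillating near 0,
# so the derivative is not continuous at 0.
fprime = lambda x: 2 * x * math.sin(1 / x) - math.cos(1 / x)
print(fprime(1 / (2 * math.pi)))      # approximately -1
print(fprime(1 / math.pi))            # approximately +1
```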
Similarly to how continuous functions are said to be of class $C^0$, continuously differentiable functions are sometimes said to be of class $C^1$. A function is of class $C^2$ if the first and second derivatives of the function both exist and are continuous. More generally, a function is said to be of class $C^k$ if the first $k$ derivatives $f'(x), f''(x), \ldots, f^{(k)}(x)$ all exist and are continuous. If derivatives $f^{(n)}$ exist for all positive integers $n$, the function is smooth or, equivalently, of class $C^\infty$.
== Differentiability in higher dimensions ==
A function of several real variables f: Rm → Rn is said to be differentiable at a point x0 if there exists a linear map J: Rm → Rn such that
$\lim_{\mathbf{h} \to \mathbf{0}} \frac{\| \mathbf{f}(\mathbf{x_0} + \mathbf{h}) - \mathbf{f}(\mathbf{x_0}) - \mathbf{J}(\mathbf{h}) \|_{\mathbb{R}^n}}{\| \mathbf{h} \|_{\mathbb{R}^m}} = 0.$
If a function is differentiable at x0, then all of the partial derivatives exist at x0, and the linear map J is given by the Jacobian matrix, an n × m matrix in this case. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus.
If all the partial derivatives of a function exist in a neighborhood of a point x0 and are continuous at the point x0, then the function is differentiable at that point x0.
However, the existence of the partial derivatives (or even of all the directional derivatives) does not guarantee that a function is differentiable at a point. For example, the function f: R2 → R defined by
$f(x,y) = \begin{cases} x & \text{if } y \neq x^2 \\ 0 & \text{if } y = x^2 \end{cases}$
is not differentiable at (0, 0), but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function
$f(x,y) = \begin{cases} y^3/(x^2+y^2) & \text{if } (x,y) \neq (0,0) \\ 0 & \text{if } (x,y) = (0,0) \end{cases}$
is not differentiable at (0, 0), but again all of the partial derivatives and directional derivatives exist.
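A numerical sketch of this last example (helper names ours): the difference quotient along every direction converges, but the resulting directional derivatives fail the linearity in $(u, v)$ that differentiability at the origin would force.

```python
import math

def f(x, y):
    return y ** 3 / (x ** 2 + y ** 2) if (x, y) != (0.0, 0.0) else 0.0

def directional_derivative(u, v, t=1e-6):
    """Difference quotient of f at the origin in the direction (u, v)."""
    return (f(t * u, t * v) - f(0.0, 0.0)) / t

# Along a unit direction (u, v) the directional derivative is v^3/(u^2+v^2):
print(directional_derivative(0.0, 1.0))                            # ~1.0
print(directional_derivative(1 / math.sqrt(2), 1 / math.sqrt(2)))  # ~0.3536

# If f were differentiable at (0,0), the map (u,v) -> D_(u,v) f would be
# linear: u*f_x + v*f_y = v (here f_x = 0, f_y = 1). Along the diagonal
# that predicts 1/sqrt(2) ~ 0.707, not the observed ~0.354.
```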
== Differentiability in complex analysis ==
In complex analysis, complex-differentiability is defined using the same definition as for single-variable real functions. This is allowed by the possibility of dividing complex numbers. So, a function $f : \mathbb{C} \to \mathbb{C}$ is said to be differentiable at $x = a$ when

$f'(a) = \lim_{h \to 0,\; h \in \mathbb{C}} \frac{f(a+h) - f(a)}{h}.$
Although this definition looks similar to the differentiability of single-variable real functions, it is a more restrictive condition. A function $f : \mathbb{C} \to \mathbb{C}$ that is complex-differentiable at a point $x = a$ is automatically differentiable at that point, when viewed as a function $f : \mathbb{R}^2 \to \mathbb{R}^2$. This is because complex-differentiability implies that

$\lim_{h \to 0,\; h \in \mathbb{C}} \frac{|f(a+h) - f(a) - f'(a)h|}{|h|} = 0.$
However, a function $f : \mathbb{C} \to \mathbb{C}$ can be differentiable as a multi-variable function while not being complex-differentiable. For example,

$f(z) = \frac{z + \overline{z}}{2}$

is differentiable at every point when viewed as the 2-variable real function $f(x,y) = x$, but it is not complex-differentiable at any point, because the limit $\lim_{h \to 0} \frac{h + \bar{h}}{2h}$ gives different values for different approaches to 0.
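This direction dependence is easy to observe numerically (names ours): approaching 0 along the real axis gives quotient 1, along the imaginary axis it gives 0, so the limit cannot exist.

```python
def quotient(f, a, h):
    """Complex difference quotient (f(a+h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

f = lambda z: (z + z.conjugate()) / 2   # f(z) = Re(z)

a = 1 + 2j
print(quotient(f, a, 1e-6 + 0j))   # real approach: approximately 1
print(quotient(f, a, 1e-6j))       # imaginary approach: 0j
```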
Any function that is complex-differentiable in a neighborhood of a point is called holomorphic at that point. Such a function is necessarily infinitely differentiable, and in fact analytic.
== Differentiable functions on manifolds ==
If M is a differentiable manifold, a real or complex-valued function f on M is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate chart defined around p. If M and N are differentiable manifolds, a function f: M → N is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate charts defined around p and f(p).
== See also ==
Generalizations of the derivative
Semi-differentiability
Differentiable programming
== References ==
Symplectic geometry is a branch of differential geometry and differential topology that studies symplectic manifolds; that is, differentiable manifolds equipped with a closed, nondegenerate 2-form. Symplectic geometry has its origins in the Hamiltonian formulation of classical mechanics where the phase space of certain classical systems takes on the structure of a symplectic manifold.
The term "symplectic", introduced by Hermann Weyl, is a calque of "complex"; previously, the "symplectic group" had been called the "line complex group". "Complex" comes from the Latin com-plexus, meaning "braided together" (co- + plexus), while symplectic comes from the corresponding Greek sym-plektikos (συμπλεκτικός); in both cases the stem comes from the Indo-European root *pleḱ-. The name reflects the deep connections between complex and symplectic structures.
By Darboux's theorem, symplectic manifolds are isomorphic to the standard symplectic vector space locally, hence only have global (topological) invariants. "Symplectic topology," which studies global properties of symplectic manifolds, is often used interchangeably with "symplectic geometry".
== Overview ==
A symplectic geometry is defined on a smooth even-dimensional space that is a differentiable manifold. On this space is defined a geometric object, the symplectic 2-form, that allows for the measurement of sizes of two-dimensional objects in the space. The symplectic form in symplectic geometry plays a role analogous to that of the metric tensor in Riemannian geometry. Where the metric tensor measures lengths and angles, the symplectic form measures oriented areas.
Symplectic geometry arose from the study of classical mechanics, and an example of a symplectic structure is the motion of an object in one dimension. To specify the trajectory of the object, one requires both the position q and the momentum p, which form a point (p, q) in the Euclidean plane $\mathbb{R}^2$. In this case, the symplectic form is

$\omega = dp \wedge dq$

and is an area form that measures the area A of a region S in the plane through integration:

$A = \int_S \omega.$
The area is important because as conservative dynamical systems evolve in time, this area is invariant.
Higher-dimensional symplectic geometries are defined analogously. A 2n-dimensional symplectic geometry is formed of pairs of directions

$((x_1, x_2), (x_3, x_4), \ldots, (x_{2n-1}, x_{2n}))$

in a 2n-dimensional manifold along with a symplectic form

$\omega = dx_1 \wedge dx_2 + dx_3 \wedge dx_4 + \cdots + dx_{2n-1} \wedge dx_{2n}.$
This symplectic form yields the size of a 2n-dimensional region V in the space as the sum of the areas of the projections of V onto each of the planes formed by the pairs of directions:

$A = \int_V \omega = \int_V dx_1 \wedge dx_2 + \int_V dx_3 \wedge dx_4 + \cdots + \int_V dx_{2n-1} \wedge dx_{2n}.$
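On a pair of tangent vectors of $\mathbb{R}^{2n}$, the standard form just described evaluates to a sum of 2×2 determinants, one per coordinate pair: the signed areas of the parallelogram's projections onto the $(x_1,x_2)$-, $(x_3,x_4)$-, ... planes. A short sketch (function name ours):

```python
def standard_symplectic_form(u, v):
    """omega(u, v) for the standard form on R^(2n): the sum over
    coordinate pairs of the 2x2 determinants |u_i v_i; u_{i+1} v_{i+1}|."""
    assert len(u) == len(v) and len(u) % 2 == 0
    return sum(u[i] * v[i + 1] - u[i + 1] * v[i] for i in range(0, len(u), 2))

# In R^2 with coordinates (p, q), omega(u, v) is the signed area they span:
print(standard_symplectic_form([1, 0], [0, 1]))              # 1
# In R^4, the projected areas add up:
print(standard_symplectic_form([1, 0, 2, 0], [0, 1, 0, 1]))  # 1 + 2 = 3
# Antisymmetry: omega(u, u) = 0 for every u.
print(standard_symplectic_form([3, 4], [3, 4]))              # 0
```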
== Comparison with Riemannian geometry ==
Symplectic geometry has a number of similarities with and differences from Riemannian geometry, which is the study of differentiable manifolds equipped with nondegenerate, symmetric 2-tensors (called metric tensors). Unlike in the Riemannian case, symplectic manifolds have no local invariants such as curvature. This is a consequence of Darboux's theorem, which states that a neighborhood of any point of a 2n-dimensional symplectic manifold is isomorphic to the standard symplectic structure on an open set of $\mathbb{R}^{2n}$. Another difference from Riemannian geometry is that not every differentiable manifold need admit a symplectic form; there are certain topological restrictions. For example, every symplectic manifold is even-dimensional and orientable. Additionally, if M is a closed symplectic manifold, then the 2nd de Rham cohomology group H2(M) is nontrivial; this implies, for example, that the only n-sphere that admits a symplectic form is the 2-sphere. A parallel that one can draw between the two subjects is the analogy between geodesics in Riemannian geometry and pseudoholomorphic curves in symplectic geometry: geodesics are curves of shortest length (locally), while pseudoholomorphic curves are surfaces of minimal area. Both concepts play a fundamental role in their respective disciplines.
== Examples and structures ==
Every Kähler manifold is also a symplectic manifold. Well into the 1970s, symplectic experts were unsure whether any compact non-Kähler symplectic manifolds existed, but since then many examples have been constructed (the first was due to William Thurston); in particular, Robert Gompf has shown that every finitely presented group occurs as the fundamental group of some symplectic 4-manifold, in marked contrast with the Kähler case.
Most symplectic manifolds, one can say, are not Kähler, and so do not have an integrable complex structure compatible with the symplectic form. Mikhail Gromov, however, made the important observation that symplectic manifolds do admit an abundance of compatible almost complex structures, so that they satisfy all the axioms for a Kähler manifold except the requirement that the transition maps be holomorphic.
Gromov used the existence of almost complex structures on symplectic manifolds to develop a theory of pseudoholomorphic curves, which has led to a number of advancements in symplectic topology, including a class of symplectic invariants now known as Gromov–Witten invariants. Later, using the pseudoholomorphic curve technique, Andreas Floer invented another important tool in symplectic geometry known as Floer homology.
== See also ==
== Notes ==
== References ==
Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. ISBN 978-0-8053-0102-1.
Arnol'd, V. I. (1986). "Первые шаги симплектической топологии" [First steps in symplectic topology]. Успехи математических наук (in Russian). 41 (6(252)): 3–18. doi:10.1070/RM1986v041n06ABEH004221. ISSN 0036-0279. S2CID 250908036 – via Russian Mathematical Surveys, 1986, 41:6, 1–21.
McDuff, Dusa; Salamon, D. (1998). Introduction to Symplectic Topology. Oxford University Press. ISBN 978-0-19-850451-1.
Fomenko, A. T. (1995). Symplectic Geometry (2nd ed.). Gordon and Breach. ISBN 978-2-88124-901-3. (An undergraduate level introduction.)
de Gosson, Maurice A. (2006). Symplectic Geometry and Quantum Mechanics. Basel: Birkhäuser Verlag. ISBN 978-3-7643-7574-4.
Weinstein, Alan (1981). "Symplectic Geometry" (PDF). Bulletin of the American Mathematical Society. 5 (1): 1–13. doi:10.1090/s0273-0979-1981-14911-9.
Weyl, Hermann (1939). The Classical Groups. Their Invariants and Representations. Reprinted by Princeton University Press (1997). ISBN 0-691-05756-7. MR0000255.
== External links ==
Media related to Symplectic geometry at Wikimedia Commons
"Symplectic structure", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint non-empty open subsets. Connectedness is one of the principal topological properties that distinguish topological spaces.
A subset of a topological space $X$ is a connected set if it is a connected space when viewed as a subspace of $X$.
Some related but stronger conditions are path connected, simply connected, and $n$-connected. Another related notion is locally connected, which neither implies nor follows from connectedness.
== Formal definition ==
A topological space $X$ is said to be disconnected if it is the union of two disjoint non-empty open sets. Otherwise, $X$ is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with its unique topology) as a connected space, but this article does not follow that practice.
For a topological space $X$ the following conditions are equivalent:
$X$ is connected, that is, it cannot be divided into two disjoint non-empty open sets.
The only subsets of $X$ which are both open and closed (clopen sets) are $X$ and the empty set.
The only subsets of $X$ with empty boundary are $X$ and the empty set.
$X$ cannot be written as the union of two non-empty separated sets (sets for which each is disjoint from the other's closure).
All continuous functions from $X$ to $\{0, 1\}$ are constant, where $\{0, 1\}$ is the two-point space endowed with the discrete topology.
Historically, this modern formulation of the notion of connectedness (in terms of no partition of $X$ into two separated sets) first appeared (independently) with N.J. Lennes, Frigyes Riesz, and Felix Hausdorff at the beginning of the 20th century. See (Wilder 1978) for details.
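For a finite topological space, the clopen-set characterization of connectedness turns into a direct search over the open sets. A sketch (names ours), using the Sierpiński space and the discrete two-point space as test cases:

```python
def is_connected(points, opens):
    """A finite topological space is connected iff the only subsets that
    are both open and closed are the empty set and the whole space."""
    points = frozenset(points)
    opens = {frozenset(o) for o in opens}
    for o in opens:
        if o not in (frozenset(), points) and points - o in opens:
            return False   # found a nontrivial clopen set: disconnected
    return True

# Sierpinski space {a, b} with open sets {}, {a}, {a, b}: connected.
print(is_connected({'a', 'b'}, [set(), {'a'}, {'a', 'b'}]))          # True
# Discrete two-point space: {a} and {b} are both clopen: disconnected.
print(is_connected({'a', 'b'}, [set(), {'a'}, {'b'}, {'a', 'b'}]))   # False
```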
=== Connected components ===
Given some point $x$ in a topological space $X$, the union of any collection of connected subsets such that each contains $x$ will once again be a connected subset.
The connected component of a point $x$ in $X$ is the union of all connected subsets of $X$ that contain $x$; it is the unique largest (with respect to $\subseteq$) connected subset of $X$ that contains $x$.
The maximal connected subsets (ordered by inclusion $\subseteq$) of a non-empty topological space are called the connected components of the space.
The components of any topological space $X$ form a partition of $X$: they are disjoint, non-empty, and their union is the whole space.
Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets (singletons), which are not open. Proof: Any two distinct rational numbers $q_1 < q_2$ are in different components. Take an irrational number $q_1 < r < q_2$, and then set $A = \{q \in \mathbb{Q} : q < r\}$ and $B = \{q \in \mathbb{Q} : q > r\}$. Then $(A, B)$ is a separation of $\mathbb{Q}$, and $q_1 \in A$, $q_2 \in B$. Thus each component is a one-point set.
Let $\Gamma_x$ be the connected component of $x$ in a topological space $X$, and let $\Gamma_x'$ be the intersection of all clopen sets containing $x$ (called the quasi-component of $x$). Then $\Gamma_x \subset \Gamma_x'$, where the equality holds if $X$ is compact Hausdorff or locally connected.
=== Disconnected spaces ===
A space in which all components are one-point sets is called totally disconnected. Related to this property, a space $X$ is called totally separated if, for any two distinct elements $x$ and $y$ of $X$, there exist disjoint open sets $U$ containing $x$ and $V$ containing $y$ such that $X$ is the union of $U$ and $V$. Clearly, any totally separated space is totally disconnected, but the converse does not hold. For example, take two copies of the rational numbers $\mathbb{Q}$, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff.
== Examples ==
The interval $[0, 2)$ in the standard subspace topology is connected; although it can, for example, be written as the union of $[0, 1)$ and $[1, 2)$, the second set is not open in the chosen topology of $[0, 2)$.
The union of $[0, 1)$ and $(1, 2]$ is disconnected; both of these intervals are open in the standard topological space $[0, 1) \cup (1, 2]$.
$(0, 1) \cup \{3\}$ is disconnected.
A convex subset of $\mathbb{R}^n$ is connected; it is actually simply connected.
A Euclidean plane excluding the origin $(0, 0)$ is connected, but is not simply connected. The three-dimensional Euclidean space without the origin is connected, and even simply connected. In contrast, the one-dimensional Euclidean space without the origin is not connected.
A Euclidean plane with a straight line removed is not connected since it consists of two half-planes.
$\mathbb{R}$, the space of real numbers with the usual topology, is connected.
The Sorgenfrey line is disconnected.
If even a single point is removed from $\mathbb{R}$, the remainder is disconnected. However, if even a countable infinity of points are removed from $\mathbb{R}^n$, where $n \geq 2$, the remainder is connected. If $n \geq 3$, then $\mathbb{R}^n$ remains simply connected after removal of countably many points.
Any topological vector space, e.g. any Hilbert space or Banach space, over a connected field (such as $\mathbb{R}$ or $\mathbb{C}$), is simply connected.
Every discrete topological space with at least two elements is disconnected, in fact such a space is totally disconnected. The simplest example is the discrete two-point space.
On the other hand, a finite set might be connected. For example, the spectrum of a discrete valuation ring consists of two points and is connected. It is an example of a Sierpiński space.
The Cantor set is totally disconnected; since the set contains uncountably many points, it has uncountably many components.
If a space $X$ is homotopy equivalent to a connected space, then $X$ is itself connected.
The topologist's sine curve is an example of a set that is connected but is neither path connected nor locally connected.
The general linear group $\operatorname{GL}(n, \mathbb{R})$ (that is, the group of $n$-by-$n$ real, invertible matrices) consists of two connected components: the one with matrices of positive determinant and the other of negative determinant. In particular, it is not connected. In contrast, $\operatorname{GL}(n, \mathbb{C})$ is connected. More generally, the set of invertible bounded operators on a complex Hilbert space is connected.
The spectra of commutative local rings and of integral domains are connected. More generally, the following are equivalent:
The spectrum of a commutative ring $R$ is connected.
Every finitely generated projective module over $R$ has constant rank.
$R$ has no idempotent $\neq 0, 1$ (i.e., $R$ is not a product of two rings in a nontrivial way).
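The idempotent criterion can be tested directly in small rings such as $\mathbb{Z}/n\mathbb{Z}$; a sketch (function name ours):

```python
def nontrivial_idempotents(n):
    """Idempotents e != 0, 1 in Z/nZ, i.e. e*e = e (mod n). By the criterion
    above, Spec(Z/nZ) is connected exactly when this list is empty."""
    return [e for e in range(2, n) if (e * e) % n == e]

print(nontrivial_idempotents(8))   # []: Z/8 is not a nontrivial product
print(nontrivial_idempotents(6))   # [3, 4]: Z/6 ~ Z/2 x Z/3, Spec disconnected
```

Via the Chinese remainder theorem, a nontrivial factorization of n into coprime factors is exactly what produces such idempotents.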
An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space.
== Path connectedness ==
A path-connected space is a stronger notion of connectedness, requiring the structure of a path. A path from a point $x$ to a point $y$ in a topological space $X$ is a continuous function $f$ from the unit interval $[0, 1]$ to $X$ with $f(0) = x$ and $f(1) = y$. A path-component of $X$ is an equivalence class of $X$ under the equivalence relation which makes $x$ equivalent to $y$ if and only if there is a path from $x$ to $y$. The space $X$ is said to be path-connected (or pathwise connected or $0$-connected) if there is exactly one path-component. For non-empty spaces, this is equivalent to the statement that there is a path joining any two points in $X$. Again, many authors exclude the empty space.
Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line $L^*$ and the topologist's sine curve.
Subsets of the real line $\mathbb{R}$ are connected if and only if they are path-connected; these subsets are the intervals and rays of $\mathbb{R}$.
Also, open subsets of $\mathbb{R}^n$ or $\mathbb{C}^n$ are connected if and only if they are path-connected.
Additionally, connectedness and path-connectedness are the same for finite topological spaces.
== Arc connectedness ==
A space $X$ is said to be arc-connected or arcwise connected if any two topologically distinguishable points can be joined by an arc, which is an embedding $f : [0, 1] \to X$. An arc-component of $X$ is a maximal arc-connected subset of $X$; or equivalently, an equivalence class of the equivalence relation of whether two points can be joined by an arc or by a path whose points are topologically indistinguishable.
Every Hausdorff space that is path-connected is also arc-connected; more generally this is true for a $\Delta$-Hausdorff space, which is a space where each image of a path is closed. An example of a space which is path-connected but not arc-connected is given by the line with two origins; its two copies of $0$ can be connected by a path but not by an arc.
Intuition for path-connected spaces does not readily transfer to arc-connected spaces. Let X be the line with two origins. The following are facts whose analogues hold for path-connected spaces, but do not hold for arc-connected spaces:
The continuous image of an arc-connected space may not be arc-connected: for example, a quotient of an arc-connected space with countably many (at least 2) topologically distinguishable points cannot be arc-connected, because its cardinality is too small to contain an arc.
Arc-components may not be disjoint. For example, X has two overlapping arc-components.
An arc-connected product space may not be a product of arc-connected spaces. For example, X × ℝ is arc-connected, but X is not.
Arc-components of a product space may not be products of arc-components of the marginal spaces. For example, X × ℝ has a single arc-component, but X has two arc-components.
If arc-connected subsets have a non-empty intersection, then their union may not be arc-connected. For example, the arc-components of X intersect, but their union is not arc-connected.
== Local connectedness ==
A topological space is said to be locally connected at a point x if every neighbourhood of x contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space X is locally connected if and only if every component of every open set of X is open.
Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets.
An open subset of a locally path-connected space is connected if and only if it is path-connected.
This generalizes the earlier statement about ℝⁿ and ℂⁿ, each of which is locally path-connected. More generally, any topological manifold is locally path-connected.
Locally connected does not imply connected, nor does locally path-connected imply path-connected. A simple example of a locally connected (and locally path-connected) space that is not connected (or path-connected) is the union of two separated intervals in ℝ, such as (0, 1) ∪ (2, 3).
A classic example of a connected space that is not locally connected is the so-called topologist's sine curve, defined as T = {(0, 0)} ∪ {(x, sin(1/x)) : x ∈ (0, 1]}, with the Euclidean topology induced by inclusion in ℝ².
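The oscillation can be illustrated numerically (a sketch, not part of any proof): sampling the curve at the hypothetical points x_k = 1/(kπ + π/2) shows full-amplitude oscillation persisting arbitrarily close to x = 0, which is what defeats local connectedness at the origin.

```python
import math

# Sample the topologist's sine curve at x_k = 1/(k*pi + pi/2),
# where sin(1/x_k) = sin(k*pi + pi/2) = (-1)^k.
xs = [1.0 / (k * math.pi + math.pi / 2) for k in range(1, 100)]
ys = [math.sin(1.0 / x) for x in xs]

print(max(ys))  # ≈ 1.0: peaks persist near x = 0
print(min(ys))  # ≈ -1.0: troughs persist near x = 0
print(min(xs))  # the sample points accumulate at x = 0
```

Every neighbourhood of (0, 0) in T therefore contains points at heights ±1, so no small connected open neighbourhood of the origin exists.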
== Set operations ==
The intersection of connected sets is not necessarily connected.
The union of connected sets is not necessarily connected, as can be seen by considering X = (0, 1) ∪ (1, 2).
Each ellipse is a connected set, but the union is not connected, since it can be partitioned into two disjoint open sets U and V.
This means that, if the union X is disconnected, then the collection {Xᵢ} can be partitioned into two sub-collections such that the unions of the sub-collections are disjoint and open in X (see picture). This implies that in several cases, a union of connected sets is necessarily connected. In particular:
If the common intersection of all sets is not empty (⋂ Xᵢ ≠ ∅), then obviously they cannot be partitioned into collections with disjoint unions. Hence the union of connected sets with non-empty intersection is connected.
If the intersection of each pair of sets is not empty (∀ i, j : Xᵢ ∩ Xⱼ ≠ ∅), then again they cannot be partitioned into collections with disjoint unions, so their union must be connected.
If the sets can be ordered as a "linked chain", i.e. indexed by integer indices with ∀ i : Xᵢ ∩ Xᵢ₊₁ ≠ ∅, then again their union must be connected.
If the sets are pairwise disjoint and the quotient space X / {Xᵢ} is connected, then X must be connected. Otherwise, if U ∪ V is a separation of X, then q(U) ∪ q(V) is a separation of the quotient space (since q(U) and q(V) are disjoint and open in the quotient space).
The set difference of connected sets is not necessarily connected. However, if X ⊇ Y and their difference X ∖ Y is disconnected (and thus can be written as a union of two open sets X₁ and X₂), then the union of Y with each such component is connected (i.e. Y ∪ Xᵢ is connected for all i).
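The positive cases above share one mechanism: the union is connected whenever the "overlap graph" (one node per set, an edge when two sets intersect) is connected. The sketch below illustrates that criterion with finite point sets standing in for connected subsets; the helper name `union_is_connected` is made up for this illustration.

```python
def union_is_connected(sets):
    """Union-find over the overlap graph of the given sets.

    Returns True when the sets form a single overlap component,
    which is the condition under which a union of connected sets
    is guaranteed to be connected.
    """
    n = len(sets)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # Merge any two sets that intersect.
    for i in range(n):
        for j in range(i + 1, n):
            if sets[i] & sets[j]:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)}) == 1

# A "linked chain": consecutive sets overlap, so the union is connected.
print(union_is_connected([{1, 2}, {2, 3}, {3, 4}]))  # True
# Two disjoint pieces: the overlap graph is disconnected.
print(union_is_connected([{1}, {2}]))  # False
```

The common-intersection and pairwise-intersection cases are special instances in which the overlap graph is automatically complete, hence connected.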
== Theorems ==
Main theorem of connectedness: Let X and Y be topological spaces and let f : X → Y be a continuous function. If X is (path-)connected then the image f(X) is (path-)connected. This result can be considered a generalization of the intermediate value theorem.
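For instance, the intermediate value theorem follows by taking X = [a, b] and Y = ℝ:

```latex
% f : [a,b] \to \mathbb{R} continuous, and [a,b] is connected,
% so f([a,b]) is a connected subset of \mathbb{R}, hence an interval.
% An interval containing f(a) and f(b) contains everything between them:
\forall\, c \in \bigl(\min\{f(a),f(b)\},\, \max\{f(a),f(b)\}\bigr)
\quad \exists\, x \in [a,b] : \; f(x) = c.
```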
Every path-connected space is connected.
In a locally path-connected space, every open connected set is path-connected.
Every locally path-connected space is locally connected.
A locally path-connected space is path-connected if and only if it is connected.
The closure of a connected subset is connected. Furthermore, any subset between a connected subset and its closure is connected.
The connected components are always closed (but in general not open).
The connected components of a locally connected space are also open.
The connected components of a space are disjoint unions of the path-connected components (which in general are neither open nor closed).
Every quotient of a connected (resp. locally connected, path-connected, locally path-connected) space is connected (resp. locally connected, path-connected, locally path-connected).
Every product of a family of connected (resp. path-connected) spaces is connected (resp. path-connected).
Every open subset of a locally connected (resp. locally path-connected) space is locally connected (resp. locally path-connected).
Every manifold is locally path-connected.
An arc-wise connected space is path-connected, but a path-wise connected space may not be arc-wise connected.
The continuous image of an arc-wise connected set is arc-wise connected.
== Graphs ==
Graphs have path-connected subsets, namely those subsets for which every pair of points has a path of edges joining them.
However, it is not always possible to find a topology on the set of points which induces the same connected sets. The 5-cycle graph (and any n-cycle with n > 3 odd) is one such example.
As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets (Muscat & Buhagiar 2006). Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs.
However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space.
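Graph-theoretic connectedness itself is easy to test. The sketch below (illustrative; the helper name `is_connected` is made up) checks by breadth-first search that the 5-cycle is connected as a graph, even though, as noted above, no topology on its five points induces exactly its connected sets.

```python
from collections import deque

def is_connected(n, edges):
    """BFS check that an undirected graph on vertices 0..n-1 is connected."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = {0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

# The 5-cycle C5: edges 0-1, 1-2, 2-3, 3-4, 4-0.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_connected(5, c5))  # True
```

Under the unit-interval topology described above, this graph-theoretic check agrees with topological connectedness of the associated space.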
== Stronger forms of connectedness ==
There are stronger forms of connectedness for topological spaces, for instance:
If there exist no two disjoint non-empty open sets in a topological space X, then X must be connected, and thus hyperconnected spaces are also connected.
Since a simply connected space is, by definition, also required to be path connected, any simply connected space is also connected. If the "path connectedness" requirement is dropped from the definition of simple connectivity, a simply connected space does not need to be connected.
Yet stronger versions of connectivity include the notion of a contractible space. Every contractible space is path connected and thus also connected.
In general, any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist's sine curve.
== See also ==
Connected component (graph theory) – Maximal subgraph whose vertices can reach each other
Connectedness locus
Domain (mathematical analysis) – Connected open subset of a topological space
Extremally disconnected space – Topological space in which the closure of every open set is open
Locally connected space – Property of topological spaces
n-connected
Uniformly connected space – Type of uniform space
Pixel connectivity
== References ==
Wilder, R.L. (1978). "Evolution of the Topological Concept of "Connected"". American Mathematical Monthly. 85 (9): 720–726. doi:10.2307/2321676. JSTOR 2321676.
== Further reading == | Wikipedia/Connected_(topology) |
In mathematics, Thurston's geometrization conjecture (now a theorem) states that each of certain three-dimensional topological spaces has a unique geometric structure that can be associated with it. It is an analogue of the uniformization theorem for two-dimensional surfaces, which states that every simply connected Riemann surface can be given one of three geometries (Euclidean, spherical, or hyperbolic).
In three dimensions, it is not always possible to assign a single geometry to a whole topological space. Instead, the geometrization conjecture states that every closed 3-manifold can be decomposed in a canonical way into pieces that each have one of eight types of geometric structure. The conjecture was proposed by William Thurston (1982) as part of his 24 questions, and implies several other conjectures, such as the Poincaré conjecture and Thurston's elliptization conjecture.
Thurston's hyperbolization theorem implies that Haken manifolds satisfy the geometrization conjecture. Thurston announced a proof in the 1980s, and since then, several complete proofs have appeared in print.
Grigori Perelman announced a proof of the full geometrization conjecture in 2003 using Ricci flow with surgery in two papers posted at the arxiv.org preprint server. Perelman's papers were studied by several independent groups that produced books and online manuscripts filling in the complete details of his arguments. Verification was essentially complete in time for Perelman to be awarded the 2006 Fields Medal for his work, and in 2010 the Clay Mathematics Institute awarded him its 1 million USD prize for solving the Poincaré conjecture, though Perelman declined both awards.
The Poincaré conjecture and the spherical space form conjecture are corollaries of the geometrization conjecture, although there are shorter proofs of the former that do not lead to the geometrization conjecture.
== The conjecture ==
A 3-manifold is called closed if it is compact – without "punctures" or "missing endpoints" – and has no boundary ("edge").
Every closed 3-manifold has a prime decomposition: this means it is the connected sum ("a gluing together") of prime 3-manifolds. This reduces much of the study of 3-manifolds to the case of prime 3-manifolds: those that cannot be written as a non-trivial connected sum.
Here is a statement of Thurston's conjecture:
Every oriented prime closed 3-manifold can be cut along tori, so that the interior of each of the resulting manifolds has a geometric structure with finite volume.
There are 8 possible geometric structures in 3 dimensions. There is a unique minimal way of cutting an irreducible oriented 3-manifold along tori into pieces that are Seifert manifolds or atoroidal called the JSJ decomposition, which is not quite the same as the decomposition in the geometrization conjecture, because some of the pieces in the JSJ decomposition might not have finite volume geometric structures. (For example, the mapping torus of an Anosov map of a torus has a finite volume solv structure, but its JSJ decomposition cuts it open along one torus to produce a product of a torus and a unit interval, and the interior of this has no finite volume geometric structure.)
For non-oriented manifolds the easiest way to state a geometrization conjecture is to first take the oriented double cover. It is also possible to work directly with non-orientable manifolds, but this gives some extra complications: it may be necessary to cut along projective planes and Klein bottles as well as spheres and tori, and manifolds with a projective plane boundary component usually have no geometric structure.
In 2 dimensions, every closed surface has a geometric structure consisting of a metric with constant curvature; it is not necessary to cut the manifold up first. Specifically, every closed surface is diffeomorphic to a quotient of S2, E2, or H2.
== The eight Thurston geometries ==
A model geometry is a simply connected smooth manifold X together with a transitive action of a Lie group G on X with compact stabilizers.
A model geometry is called maximal if G is maximal among groups acting smoothly and transitively on X with compact stabilizers. Sometimes this condition is included in the definition of a model geometry.
A geometric structure on a manifold M is a diffeomorphism from M to X/Γ for some model geometry X, where Γ is a discrete subgroup of G acting freely on X ; this is a special case of a complete (G,X)-structure. If a given manifold admits a geometric structure, then it admits one whose model is maximal.
A 3-dimensional model geometry X is relevant to the geometrization conjecture if it is maximal and if there is at least one compact manifold with a geometric structure modelled on X. Thurston classified the 8 model geometries satisfying these conditions; they are listed below and are sometimes called Thurston geometries. (There are also uncountably many model geometries without compact quotients.)
There is some connection with the Bianchi groups: the 3-dimensional Lie groups. Most Thurston geometries can be realized as a left invariant metric on a Bianchi group. However S2 × R cannot be, Euclidean space corresponds to two different Bianchi groups, and there are an uncountable number of solvable non-unimodular Bianchi groups, most of which give model geometries with no compact representatives.
=== Spherical geometry S3 ===
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O(4, R), with 2 components. The corresponding manifolds are exactly the closed 3-manifolds with finite fundamental group. Examples include the 3-sphere, the Poincaré homology sphere, Lens spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type IX. Manifolds with this geometry are all compact, orientable, and have the structure of a Seifert fiber space (often in several ways). The complete list of such manifolds is given in the article on spherical 3-manifolds. Under Ricci flow, manifolds with this geometry collapse to a point in finite time.
=== Euclidean geometry E3 ===
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group R3 × O(3, R), with 2 components. Examples are the 3-torus, and more generally the mapping torus of a finite-order automorphism of the 2-torus; see torus bundle. There are exactly 10 finite closed 3-manifolds with this geometry, 6 orientable and 4 non-orientable. This geometry can be modeled as a left invariant metric on the Bianchi groups of type I or VII0. Finite volume manifolds with this geometry are all compact, and have the structure of a Seifert fiber space (sometimes in two ways). The complete list of such manifolds is given in the article on Seifert fiber spaces. Under Ricci flow, manifolds with Euclidean geometry remain invariant.
=== Hyperbolic geometry H3 ===
The point stabilizer is O(3, R), and the group G is the 6-dimensional Lie group O+(1, 3, R), with 2 components. There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold. Other examples are given by the Seifert–Weber space, or "sufficiently complicated" Dehn surgeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V or VIIh≠0. Under Ricci flow, manifolds with hyperbolic geometry expand.
=== The geometry of S2 × R ===
The point stabilizer is O(2, R) × Z/2Z, and the group G is O(3, R) × R × Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S2 × S1, the mapping torus of the antipode map of S2, the connected sum of two copies of 3-dimensional projective space, and the product of S1 with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group. Finite volume manifolds with this geometry are all compact and have the structure of a Seifert fiber space (often in several ways). Under normalized Ricci flow manifolds with this geometry converge to a 1-dimensional manifold.
=== The geometry of H2 × R ===
The point stabilizer is O(2, R) × Z/2Z, and the group G is O+(1, 2, R) × R × Z/2Z, with 4 components. Examples include the product of a hyperbolic surface with a circle, or more generally the mapping torus of an isometry of a hyperbolic surface. Finite volume manifolds with this geometry have the structure of a Seifert fiber space if they are orientable. (If they are not orientable the natural fibration by circles is not necessarily a Seifert fibration: the problem is that some fibers may "reverse orientation"; in other words their neighborhoods look like fibered solid Klein bottles rather than solid tori.) The classification of such (oriented) manifolds is given in the article on Seifert fiber spaces. This geometry can be modeled as a left invariant metric on the Bianchi group of type III. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
=== The geometry of the universal cover of SL(2, R) ===
The universal cover of SL(2, R) is denoted SL~(2, R). It fibers over H2, and the space is sometimes called "Twisted H2 × R". The group G has 2 components. Its identity component has the structure (R × SL~(2, R))/Z. The point stabilizer is O(2, R).
Examples of these manifolds include: the manifold of unit vectors of the tangent bundle of a hyperbolic surface, and more generally the Brieskorn homology spheres (excepting the 3-sphere and the Poincaré dodecahedral space). This geometry can be modeled as a left invariant metric on the Bianchi group of type VIII or III. Finite volume manifolds with this geometry are orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow manifolds with this geometry converge to a 2-dimensional manifold.
=== Nil geometry ===
This fibers over E2, and so is sometimes known as "Twisted E2 × R". It is the geometry of the Heisenberg group. The point stabilizer is O(2, R). The group G has 2 components, and is a semidirect product of the 3-dimensional Heisenberg group by the group O(2, R) of isometries of a circle. Compact manifolds with this geometry include the mapping torus of a Dehn twist of a 2-torus, or the quotient of the Heisenberg group by the "integral Heisenberg group". This geometry can be modeled as a left invariant metric on the Bianchi group of type II. Finite volume manifolds with this geometry are compact and orientable and have the structure of a Seifert fiber space. The classification of such manifolds is given in the article on Seifert fiber spaces. Under normalized Ricci flow, compact manifolds with this geometry converge to R2 with the flat metric.
=== Sol geometry ===
This geometry (also called Solv geometry) fibers over the line with fiber the plane, and is the geometry of the identity component of the group G. The point stabilizer is the dihedral group of order 8. The group G has 8 components, and is the group of maps from 2-dimensional Minkowski space to itself that are either isometries or multiply the metric by −1. The identity component has a normal subgroup R2 with quotient R, where R acts on R2 with 2 (real) eigenspaces, with distinct real eigenvalues of product 1. This is the Bianchi group of type VI0 and the geometry can be modeled as a left invariant metric on this group. All finite volume manifolds with solv geometry are compact. The compact manifolds with solv geometry are either the mapping torus of an Anosov map of the 2-torus (such a map is an automorphism of the 2-torus given by an invertible 2 by 2 matrix whose eigenvalues are real and distinct, such as (2 1; 1 1)), or quotients of these by groups of order at most 8. The eigenvalues of the automorphism of the torus generate an order of a real quadratic field, and the solv manifolds can be classified in terms of the units and ideal classes of this order.
Under normalized Ricci flow compact manifolds with this geometry converge (rather slowly) to R1.
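The eigenvalue conditions for the standard Anosov matrix (2 1; 1 1) can be checked directly from its trace and determinant (a quick numerical sketch, not part of the classification):

```python
import math

# The matrix (2 1; 1 1) has trace 3 and determinant 1, so its
# eigenvalues solve l^2 - 3*l + 1 = 0: real, distinct, product 1,
# exactly the conditions for a solv-geometry mapping torus.
trace, det = 3, 1
disc = trace * trace - 4 * det      # 5 > 0, hence real distinct roots
l1 = (trace + math.sqrt(disc)) / 2
l2 = (trace - math.sqrt(disc)) / 2
print(l1, l2)                        # ≈ 2.618 and ≈ 0.382
print(abs(l1 * l2 - det) < 1e-12)   # product of eigenvalues = det = 1
```

The eigenvalues (3 ± √5)/2 generate an order in the real quadratic field Q(√5), matching the classification statement above.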
== Uniqueness ==
A closed 3-manifold has a geometric structure of at most one of the 8 types above, but finite volume non-compact 3-manifolds can occasionally have more than one type of geometric structure. (Nevertheless, a manifold can have many different geometric structures of the same type; for example, a surface of genus at least 2 has a continuum of different hyperbolic metrics.) More precisely, if M is a manifold with a finite volume geometric structure, then the type of geometric structure is almost determined as follows, in terms of the fundamental group π1(M):
If π1(M) is finite then the geometric structure on M is spherical, and M is compact.
If π1(M) is virtually cyclic but not finite then the geometric structure on M is S2×R, and M is compact.
If π1(M) is virtually abelian but not virtually cyclic then the geometric structure on M is Euclidean, and M is compact.
If π1(M) is virtually nilpotent but not virtually abelian then the geometric structure on M is nil geometry, and M is compact.
If π1(M) is virtually solvable but not virtually nilpotent then the geometric structure on M is solv geometry, and M is compact.
If π1(M) has an infinite normal cyclic subgroup but is not virtually solvable then the geometric structure on M is either H2×R or the universal cover of SL(2, R). The manifold M may be either compact or non-compact. If it is compact, then the 2 geometries can be distinguished by whether or not π1(M) has a finite index subgroup that splits as a semidirect product of the normal cyclic subgroup and something else. If the manifold is non-compact, then the fundamental group cannot distinguish the two geometries, and there are examples (such as the complement of a trefoil knot) where a manifold may have a finite volume geometric structure of either type.
If π1(M) has no infinite normal cyclic subgroup and is not virtually solvable then the geometric structure on M is hyperbolic, and M may be either compact or non-compact.
Infinite volume manifolds can have many different types of geometric structure: for example, R3 can have 6 of the different geometric structures listed above, as 6 of the 8 model geometries are homeomorphic to it. Moreover if the volume does not have to be finite there are an infinite number of new geometric structures with no compact models; for example, the geometry of almost any non-unimodular 3-dimensional Lie group.
There can be more than one way to decompose a closed 3-manifold into pieces with geometric structures. For example:
Taking connected sums with several copies of S3 does not change a manifold.
The connected sum of two projective 3-spaces has a S2×R geometry, and is also the connected sum of two pieces with S3 geometry.
The product of a surface of negative curvature and a circle has a geometric structure, but can also be cut along tori to produce smaller pieces that also have geometric structures. There are many similar examples for Seifert fiber spaces.
It is possible to choose a "canonical" decomposition into pieces with geometric structure, for example by first cutting the manifold into prime pieces in a minimal way, then cutting these up using the smallest possible number of tori. However this minimal decomposition is not necessarily the one produced by Ricci flow; in fact, the Ricci flow can cut up a manifold into geometric pieces in many inequivalent ways, depending on the choice of initial metric.
== History ==
The Fields Medal was awarded to Thurston in 1982 partially for his proof of the geometrization conjecture for Haken manifolds.
In 1982, Richard S. Hamilton showed that given a closed 3-manifold with a metric of positive Ricci curvature, the Ricci flow would collapse the manifold to a point in finite time, which proves the geometrization conjecture for this case as the metric becomes "almost round" just before the collapse. He later developed a program to prove the geometrization conjecture by Ricci flow with surgery. The idea is that the Ricci flow will in general produce singularities, but one may be able to continue the Ricci flow past the singularity by using surgery to change the topology of the manifold. Roughly speaking, the Ricci flow contracts positive curvature regions and expands negative curvature regions, so it should kill off the pieces of the manifold with the "positive curvature" geometries S3 and S2 × R, while what is left at large times should have a thick–thin decomposition into a "thick" piece with hyperbolic geometry and a "thin" graph manifold.
In 2003, Grigori Perelman announced a proof of the geometrization conjecture by showing that the Ricci flow can indeed be continued past the singularities, and has the behavior described above.
One component of Perelman's proof was a novel collapsing theorem in Riemannian geometry. Perelman did not release any details on the proof of this result (Theorem 7.4 in the preprint 'Ricci flow with surgery on three-manifolds'). Beginning with Shioya and Yamaguchi, there are now several different proofs of Perelman's collapsing theorem, or variants thereof. Shioya and Yamaguchi's formulation was used in the first fully detailed formulations of Perelman's work.
A second route to the last part of Perelman's proof of geometrization is the method of Laurent Bessières and co-authors, which uses Thurston's hyperbolization theorem for Haken manifolds and Gromov's norm for 3-manifolds. A book by the same authors with complete details of their version of the proof has been published by the European Mathematical Society.
== Higher dimensions ==
In four dimensions, only a rather restricted class of closed 4-manifolds admit a geometric decomposition. However, lists of maximal model geometries can still be given.
The four-dimensional maximal model geometries were classified by Richard Filipkiewicz in 1983. They number eighteen, plus one countably infinite family: their usual names are E4, Nil4, Nil3 × E1, Sol4m,n (a countably infinite family), Sol40, Sol41, H3 × E1, SL~ × E1, H2 × E2, H2 × H2, H4, H2(C) (a complex hyperbolic space), F4 (the tangent bundle of the hyperbolic plane), S2 × E2, S2 × H2, S3 × E1, S4, CP2 (the complex projective plane), and S2 × S2. No closed manifold admits the geometry F4, but there are manifolds with proper decomposition including an F4 piece.
The five-dimensional maximal model geometries were classified by Andrew Geng in 2016. There are 53 individual geometries and six infinite families. Some new phenomena not observed in lower dimensions occur, including two uncountable families of geometries and geometries with no compact quotients.
== Footnotes ==
== Notes ==
== References ==
L. Bessieres, G. Besson, M. Boileau, S. Maillot, J. Porti, 'Geometrisation of 3-manifolds', EMS Tracts in Mathematics, volume 13. European Mathematical Society, Zurich, 2010. [1]
M. Boileau Geometrization of 3-manifolds with symmetries
F. Bonahon Geometric structures on 3-manifolds Handbook of Geometric Topology (2002) Elsevier.
Cao, Huai-Dong; Zhu, Xi-Ping (2006). "A complete proof of the Poincaré and geometrization conjectures—application of the Hamilton–Perelman theory of the Ricci flow". Asian Journal of Mathematics. 10 (2): 165–492. doi:10.4310/ajm.2006.v10.n2.a2. MR 2233789. Zbl 1200.53057.– – (2006). "Erratum". Asian Journal of Mathematics. 10 (4): 663–664. doi:10.4310/AJM.2006.v10.n4.e2. MR 2282358.– – (2006). "Hamilton–Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture". arXiv:math/0612069.
Allen Hatcher: Notes on Basic 3-Manifold Topology 2000
J. Isenberg, M. Jackson, Ricci flow of locally homogeneous geometries on a Riemannian manifold, J. Diff. Geom. 35 (1992) no. 3 723–741.
Kleiner, Bruce; Lott, John (2008). "Notes on Perelman's papers". Geometry & Topology. 12 (5). Updated for corrections in 2011 & 2013: 2587–2855. arXiv:math/0605667. doi:10.2140/gt.2008.12.2587. MR 2460872. Zbl 1204.53033.
John W. Morgan. Recent progress on the Poincaré conjecture and the classification of 3-manifolds. Bulletin Amer. Math. Soc. 42 (2005) no. 1, 57–78 (expository article explains the eight geometries and geometrization conjecture briefly, and gives an outline of Perelman's proof of the Poincaré conjecture)
Morgan, John W.; Fong, Frederick Tsz-Ho (2010). Ricci Flow and Geometrization of 3-Manifolds. University Lecture Series. ISBN 978-0-8218-4963-7. Retrieved 2010-09-26.
Morgan, John; Tian, Gang (2014). The geometrization conjecture. Clay Mathematics Monographs. Vol. 5. Cambridge, MA: Clay Mathematics Institute. ISBN 978-0-8218-5201-9. MR 3186136.
Perelman, Grisha (2002). "The entropy formula for the Ricci flow and its geometric applications". arXiv:math/0211159.
Perelman, Grisha (2003). "Ricci flow with surgery on three-manifolds". arXiv:math/0303109.
Perelman, Grisha (2003). "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds". arXiv:math/0307245.
Scott, Peter The geometries of 3-manifolds. (errata) Bull. London Math. Soc. 15 (1983), no. 5, 401–487.
Thurston, William P. (1982). "Three-dimensional manifolds, Kleinian groups and hyperbolic geometry". Bulletin of the American Mathematical Society. New Series. 6 (3): 357–381. doi:10.1090/S0273-0979-1982-15003-0. ISSN 0002-9904. MR 0648524. This gives the original statement of the conjecture.
William Thurston. Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, NJ, 1997. x+311 pp. ISBN 0-691-08304-5 (in depth explanation of the eight geometries and the proof that there are only eight)
William Thurston. The Geometry and Topology of Three-Manifolds, 1980 Princeton lecture notes on geometric structures on 3-manifolds.
== External links ==
"The Geometry of 3-Manifolds (video)". Archived from the original on January 27, 2010. Retrieved January 20, 2010. A public lecture on the Poincaré and geometrization conjectures, given by C. McMullen at Harvard in 2006. | Wikipedia/Geometrization_conjecture |
In mathematics, specifically in geometric topology, surgery theory is a collection of techniques used to produce one finite-dimensional manifold from another in a 'controlled' way, introduced by John Milnor (1961). Milnor called this technique surgery, while Andrew Wallace called it spherical modification. The "surgery" on a differentiable manifold M of dimension {\displaystyle n=p+q+1} could be described as removing an embedded sphere of dimension p from M. Originally developed for differentiable (or, smooth) manifolds, surgery techniques also apply to piecewise linear (PL-) and topological manifolds.
Surgery refers to cutting out parts of the manifold and replacing it with a part of another manifold, matching up along the cut or boundary. This is closely related to, but not identical with, handlebody decompositions.
More technically, the idea is to start with a well-understood manifold M and perform surgery on it to produce a manifold M′ having some desired property, in such a way that the effects on the homology, homotopy groups, or other invariants of the manifold are known. A relatively easy argument using Morse theory shows that a manifold can be obtained from another one by a sequence of spherical modifications if and only if those two belong to the same cobordism class.
The classification of exotic spheres by Michel Kervaire and Milnor (1963) led to the emergence of surgery theory as a major tool in high-dimensional topology.
== Surgery on a manifold ==
=== A basic observation ===
If X, Y are manifolds with boundary, then the boundary of the product manifold is
{\displaystyle \partial (X\times Y)=(\partial X\times Y)\cup (X\times \partial Y).}
The basic observation which justifies surgery is that the space {\displaystyle S^{p}\times S^{q-1}} can be understood either as the boundary of {\displaystyle D^{p+1}\times S^{q-1}} or as the boundary of {\displaystyle S^{p}\times D^{q}}. In symbols,
{\displaystyle \partial \left(S^{p}\times D^{q}\right)=S^{p}\times S^{q-1}=\partial \left(D^{p+1}\times S^{q-1}\right),}
where {\displaystyle D^{q}} is the q-dimensional disk, i.e., the set of points in {\displaystyle \mathbb {R} ^{q}} that are at distance one or less from a given fixed point (the center of the disk); for example, {\displaystyle D^{1}} is homeomorphic to the unit interval, while {\displaystyle D^{2}} is a circle together with the points in its interior.
=== Surgery ===
Now, given a manifold M of dimension {\displaystyle n=p+q} and an embedding {\displaystyle \phi \colon S^{p}\times D^{q}\to M}, define another n-dimensional manifold {\displaystyle M'} to be
{\displaystyle M':=\left(M\setminus \operatorname {int} (\operatorname {im} (\phi ))\right)\;\cup _{\phi |_{S^{p}\times S^{q-1}}}\left(D^{p+1}\times S^{q-1}\right).}
Since {\displaystyle \operatorname {im} (\phi )=\phi (S^{p}\times D^{q})} and, by the equation from the basic observation above,
{\displaystyle \phi \left(\partial \left(S^{p}\times D^{q}\right)\right)=\phi \left(S^{p}\times S^{q-1}\right),}
the gluing along the boundary is justified.
One says that the manifold M′ is produced by a surgery cutting out {\displaystyle S^{p}\times D^{q}} and gluing in {\displaystyle D^{p+1}\times S^{q-1}}, or by a p-surgery if one wants to specify the number p. Strictly speaking, M′ is a manifold with corners, but there is a canonical way to smooth them out. Notice that the submanifold that was replaced in M was of the same dimension as M (it was of codimension 0).
=== Attaching handles and cobordisms ===
Surgery is closely related to (but not the same as) handle attaching. Given an {\displaystyle (n+1)}-manifold with boundary {\displaystyle (L,\partial L)} and an embedding {\displaystyle \phi \colon S^{p}\times D^{q}\to \partial L}, where {\displaystyle n=p+q}, define another {\displaystyle (n+1)}-manifold with boundary L′ by
{\displaystyle L':=L\;\cup _{\phi }\left(D^{p+1}\times D^{q}\right).}
The manifold L′ is obtained by "attaching a {\displaystyle (p+1)}-handle", with {\displaystyle \partial L'} obtained from {\displaystyle \partial L} by a p-surgery
{\displaystyle \partial L'=(\partial L\setminus \operatorname {int} (\operatorname {im} (\phi )))\;\cup _{\phi |_{S^{p}\times S^{q-1}}}\left(D^{p+1}\times S^{q-1}\right).}
A surgery on M not only produces a new manifold M′, but also a cobordism W between M and M′. The trace of the surgery is the cobordism {\displaystyle (W;M,M')}, with
{\displaystyle W:=(M\times I)\;\cup _{\phi \times \{1\}}\left(D^{p+1}\times D^{q}\right)}
the {\displaystyle (n+1)}-dimensional manifold with boundary {\displaystyle \partial W=M\cup M'} obtained from the product {\displaystyle M\times I} by attaching a {\displaystyle (p+1)}-handle {\displaystyle D^{p+1}\times D^{q}}.
Surgery is symmetric in the sense that the manifold M can be re-obtained from M′ by a {\displaystyle (q-1)}-surgery, the trace of which coincides with the trace of the original surgery, up to orientation.
In most applications, the manifold M comes with additional geometric structure, such as a map to some reference space, or additional bundle data. One then wants the surgery process to endow M′ with the same kind of additional structure. For instance, a standard tool in surgery theory is surgery on normal maps: such a process changes a normal map to another normal map within the same bordism class.
=== Examples ===
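A standard first example, stated here as a sketch in the notation above (the identification of the result is classical, and depends on the choice of gluing):

```latex
% 0-surgery on the 2-sphere: n = 2, p = 0, q = 2.
% Cut out S^0 x D^2 (two disjoint disks) and glue in D^1 x S^1 (a tube),
% which is possible because both pieces share the boundary
\partial\left(S^{0}\times D^{2}\right)=S^{0}\times S^{1}=\partial\left(D^{1}\times S^{1}\right).
% With the orientation-compatible gluing the result is the torus
% (the orientation-reversing gluing instead yields the Klein bottle):
M' = \left(S^{2}\setminus \operatorname{int}\left(S^{0}\times D^{2}\right)\right)
     \cup_{S^{0}\times S^{1}} \left(D^{1}\times S^{1}\right) \;\cong\; T^{2}.
```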
=== Effects on homotopy groups, and comparison to cell-attachment ===
Intuitively, the process of surgery is the manifold analog of attaching a cell to a topological space, where the embedding {\displaystyle \phi } takes the place of the attaching map. A simple attachment of a {\displaystyle (p+1)}-cell to an n-manifold would destroy the manifold structure for dimension reasons, so it has to be thickened by crossing with another cell.
Up to homotopy, the process of surgery on an embedding {\displaystyle \phi \colon S^{p}\times D^{q}\to M} can be described as the attaching of a {\displaystyle (p+1)}-cell, giving the homotopy type of the trace, and the detaching of a q-cell to obtain M′. The necessity of the detaching process can be understood as an effect of Poincaré duality.
In the same way as a cell can be attached to a space to kill an element in some homotopy group of the space, a p-surgery on a manifold M can often be used to kill an element {\displaystyle \alpha \in \pi _{p}(M)}. Two points are important however: Firstly, the element {\displaystyle \alpha \in \pi _{p}(M)} has to be representable by an embedding {\displaystyle \phi \colon S^{p}\times D^{q}\to M} (which means embedding the corresponding sphere with a trivial normal bundle). For instance, it is not possible to perform surgery on an orientation-reversing loop. Secondly, the effect of the detaching process has to be considered, since it might also have an effect on the homotopy group under consideration. Roughly speaking, this second point is only important when p is at least of the order of half the dimension of M.
== Application to classification of manifolds ==
The origin and main application of surgery theory lies in the classification of manifolds of dimension greater than four. Loosely, the organizing questions of surgery theory are:
Is X a manifold?
Is f a diffeomorphism?
More formally, one asks these questions up to homotopy:
Does a space X have the homotopy type of a smooth manifold of a given dimension?
Is a homotopy equivalence {\displaystyle f\colon M\to N} between two smooth manifolds homotopic to a diffeomorphism?
It turns out that the second ("uniqueness") question is a relative version of a question of the first ("existence") type; thus both questions can be treated with the same methods.
Note that surgery theory does not give a complete set of invariants to these questions. Instead, it is obstruction-theoretic: there is a primary obstruction, and a secondary obstruction called the surgery obstruction which is only defined if the primary obstruction vanishes, and which depends on the choice made in verifying that the primary obstruction vanishes.
=== The surgery approach ===
In the classical approach, as developed by William Browder, Sergei Novikov, Dennis Sullivan, and C. T. C. Wall, surgery is done on normal maps of degree one. Using surgery, the question "Is the normal map {\displaystyle f\colon M\to X} of degree one cobordant to a homotopy equivalence?" can be translated (in dimensions greater than four) to an algebraic statement about some element in an L-group of the group ring {\displaystyle \mathbb {Z} [\pi _{1}(X)]}. More precisely, the question has a positive answer if and only if the surgery obstruction {\displaystyle \sigma (f)\in L_{n}(\mathbb {Z} [\pi _{1}(X)])} is zero, where n is the dimension of M.
For example, consider the case where the dimension n = 4k is a multiple of four, and {\displaystyle \pi _{1}(X)=0}. It is known that {\displaystyle L_{4k}(\mathbb {Z} )} is isomorphic to the integers {\displaystyle \mathbb {Z} }; under this isomorphism the surgery obstruction of f is proportional to the difference of the signatures {\displaystyle \sigma (X)-\sigma (M)} of X and M. Hence a normal map of degree one is cobordant to a homotopy equivalence if and only if the signatures of domain and codomain agree.
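Under the standard normalization (stated here as a sketch; sign conventions differ between sources), the proportionality constant is one eighth:

```latex
% Simply connected surgery obstruction in dimension n = 4k:
\sigma(f) \;=\; \tfrac{1}{8}\left(\sigma(M)-\sigma(X)\right)
\;\in\; L_{4k}(\mathbb{Z})\cong \mathbb{Z},
% which vanishes precisely when the signatures of M and X agree.
```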
Coming back to the "existence" question from above, we see that a space X has the homotopy type of a smooth manifold if and only if it receives a normal map of degree one whose surgery obstruction vanishes. This leads to a multi-step obstruction process: In order to speak of normal maps, X must satisfy an appropriate version of Poincaré duality which turns it into a Poincaré complex. Supposing that X is a Poincaré complex, the Pontryagin–Thom construction shows that a normal map of degree one to X exists if and only if the Spivak normal fibration of X has a reduction to a stable vector bundle. If normal maps of degree one to X exist, their bordism classes (called normal invariants) are classified by the set of homotopy classes {\displaystyle [X,G/O]}. Each of these normal invariants has a surgery obstruction; X has the homotopy type of a smooth manifold if and only if one of these obstructions is zero. Stated differently, this means that there is a choice of normal invariant with zero image under the surgery obstruction map
{\displaystyle [X,G/O]\to L_{n}\left(\mathbb {Z} \left[\pi _{1}(X)\right]\right).}
=== Structure sets and surgery exact sequence ===
The concept of structure set is the unifying framework for both questions of existence and uniqueness. Roughly speaking, the structure set of a space X consists of homotopy equivalences M → X from some manifold to X, where two maps are identified under a bordism-type relation. A necessary (but not in general sufficient) condition for the structure set of a space X to be non-empty is that X be an n-dimensional Poincaré complex, i.e. that the homology and cohomology groups be related by isomorphisms {\displaystyle H^{*}(X)\cong H_{n-*}(X)} of an n-dimensional manifold, for some integer n. Depending on the precise definition and the category of manifolds (smooth, PL, or topological), there are various versions of structure sets. Since, by the s-cobordism theorem, certain bordisms between manifolds are isomorphic (in the respective category) to cylinders, the concept of structure set allows a classification even up to diffeomorphism.
The structure set and the surgery obstruction map are brought together in the surgery exact sequence. This sequence makes it possible to determine the structure set of a Poincaré complex once the surgery obstruction map (and a relative version of it) are understood. In important cases, the smooth or topological structure set can be computed by means of the surgery exact sequence. Examples are the classification of exotic spheres, and the proofs of the Borel conjecture for negatively curved manifolds and manifolds with hyperbolic fundamental group.
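For a closed smooth n-manifold X with n ≥ 5, the sequence takes the following standard shape (a sketch; see the references for the precise decorations):

```latex
\cdots \longrightarrow L_{n+1}\left(\mathbb{Z}[\pi_{1}(X)]\right)
\longrightarrow \mathcal{S}^{\mathrm{Diff}}(X)
\longrightarrow [X,\,G/O]
\xrightarrow{\;\sigma\;} L_{n}\left(\mathbb{Z}[\pi_{1}(X)]\right)
```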
In the topological category, the surgery exact sequence is the long exact sequence induced by a fibration sequence of spectra. This implies that all the sets involved in the sequence are in fact abelian groups. On the spectrum level, the surgery obstruction map is an assembly map whose fiber is the block structure space of the corresponding manifold.
== See also ==
s-cobordism theorem
h-cobordism theorem
Whitehead torsion
Dehn surgery
Manifold decomposition
Orientation character
Plumbing (mathematics)
== Citations ==
== References ==
== External links ==
Surgery Theory for Amateurs
Edinburgh Surgery Theory Study Group
2012 Oberwolfach Seminar on Surgery theory on the Manifold Atlas Project
2012 Regensburg Blockseminar on Surgery theory on the Manifold Atlas Project
Jacob Lurie's 2011 Harvard surgery course Lecture notes
Andrew Ranicki's homepage
Shmuel Weinberger's homepage | Wikipedia/Surgery_theory |
In mathematics, the rank of a differentiable map {\displaystyle f:M\to N} between differentiable manifolds at a point {\displaystyle p\in M} is the rank of the derivative of {\displaystyle f} at {\displaystyle p}. Recall that the derivative of {\displaystyle f} at {\displaystyle p} is a linear map
{\displaystyle d_{p}f:T_{p}M\to T_{f(p)}N}
from the tangent space at p to the tangent space at f(p). As a linear map between vector spaces it has a well-defined rank, which is just the dimension of the image in Tf(p)N:
{\displaystyle \operatorname {rank} (f)_{p}=\dim(\operatorname {im} (d_{p}f)).}
== Constant rank maps ==
A differentiable map f : M → N is said to have constant rank if the rank of f is the same for all p in M. Constant rank maps have a number of nice properties and are an important concept in differential topology.
Three special cases of constant rank maps occur. A constant rank map f : M → N is
an immersion if rank f = dim M (i.e. the derivative is everywhere injective),
a submersion if rank f = dim N (i.e. the derivative is everywhere surjective),
a local diffeomorphism if rank f = dim M = dim N (i.e. the derivative is everywhere bijective).
The map f itself need not be injective, surjective, or bijective for these conditions to hold; only the behavior of the derivative is important. For example, there are injective maps which are not immersions and immersions which are not injections. However, if f : M → N is a smooth map of constant rank then
if f is injective it is an immersion,
if f is surjective it is a submersion,
if f is bijective it is a diffeomorphism.
Constant rank maps have a nice description in terms of local coordinates. Suppose M and N are smooth manifolds of dimensions m and n respectively, and f : M → N is a smooth map with constant rank k. Then for all p in M there exist coordinates (x1, ..., xm) centered at p and coordinates (y1, ..., yn) centered at f(p) such that f is given by
{\displaystyle f(x^{1},\ldots ,x^{m})=(x^{1},\ldots ,x^{k},0,\ldots ,0)}
in these coordinates.
== Examples ==
Maps whose rank is generically maximal, but drops at certain singular points, occur frequently in coordinate systems. For example, in spherical coordinates, the rank of the map from the two angles to a point on the sphere (formally, a map T2 → S2 from the torus to the sphere) is 2 at regular points, but is only 1 at the north and south poles (zenith and nadir).
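The rank drop at the poles can be checked symbolically. A minimal sketch using sympy (the parametrization below is one common convention, not something fixed by the article):

```python
# Rank of the angles-to-sphere map (theta, phi) |-> (sin t cos p, sin t sin p, cos t):
# the Jacobian has rank 2 at regular points but only 1 where sin(theta) = 0.
from sympy import symbols, sin, cos, Matrix

theta, phi = symbols('theta phi')
f = Matrix([sin(theta) * cos(phi),
            sin(theta) * sin(phi),
            cos(theta)])
J = f.jacobian([theta, phi])      # 3x2 matrix of partial derivatives

print(J.rank())                   # 2 at a generic point
print(J.subs(theta, 0).rank())    # 1 at the north pole: the phi-column vanishes
```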
A subtler example occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, due to 3-dimensional rotations being heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space RP3, and it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simple, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus T3 of three angles to the real projective space RP3 of rotations, but this map does not have rank 3 at all points (formally because it cannot be a covering map, as the only (non-trivial) covering space is the hypersphere S3), and the phenomenon of the rank dropping to 2 at certain points is referred to in engineering as gimbal lock.
== References ==
Lee, John (2003). Introduction to Smooth Manifolds. Graduate Texts in Mathematics 218. New York: Springer. ISBN 978-0-387-95495-0. | Wikipedia/Rank_(differential_topology) |
In mathematics, deformation theory is the study of infinitesimal conditions associated with varying a solution P of a problem to slightly different solutions Pε, where ε is a small number, or a vector of small quantities. The infinitesimal conditions are the result of applying the approach of differential calculus to solving a problem with constraints. The name is an analogy to non-rigid structures that deform slightly to accommodate external forces.
Some characteristic phenomena are: the derivation of first-order equations by treating the ε quantities as having negligible squares; the possibility of isolated solutions, in that varying a solution may not be possible, or does not bring anything new; and the question of whether the infinitesimal constraints actually 'integrate', so that their solution does provide small variations. In some form these considerations have a history of centuries in mathematics, but also in physics and engineering. For example, in the geometry of numbers a class of results called isolation theorems was recognised, with the topological interpretation of an open orbit (of a group action) around a given solution. Perturbation theory also looks at deformations, in general of operators.
== Deformations of complex manifolds ==
The most salient deformation theory in mathematics has been that of complex manifolds and algebraic varieties. This was put on a firm basis by foundational work of Kunihiko Kodaira and Donald C. Spencer, after deformation techniques had received a great deal of more tentative application in the Italian school of algebraic geometry. One expects, intuitively, that deformation theory of the first order should equate the Zariski tangent space with a moduli space. The phenomena turn out to be rather subtle, though, in the general case.
In the case of Riemann surfaces, one can explain that the complex structure on the Riemann sphere is isolated (no moduli). For genus 1, an elliptic curve has a one-parameter family of complex structures, as shown in elliptic function theory. The general Kodaira–Spencer theory identifies as the key to the deformation theory the sheaf cohomology group
{\displaystyle H^{1}(\Theta )}
where Θ is (the sheaf of germs of sections of) the holomorphic tangent bundle. There is an obstruction in the H2 of the same sheaf, which is always zero in the case of a curve, for general reasons of dimension. In the case of genus 0 the H1 vanishes also. For genus 1 the dimension is the Hodge number h1,0, which is therefore 1. It is known that all curves of genus one have equations of the form y2 = x3 + ax + b. These obviously depend on two parameters, a and b, whereas the isomorphism classes of such curves have only one parameter. Hence there must be an equation relating those a and b which describe isomorphic elliptic curves. It turns out that curves for which b2a−3 has the same value describe isomorphic curves. That is, varying a and b is one way to deform the structure of the curve y2 = x3 + ax + b, but not all variations of a, b actually change the isomorphism class of the curve.
One can go further with the case of genus g > 1, using Serre duality to relate the H1 to
{\displaystyle H^{0}(\Omega ^{[2]})}
where Ω is the holomorphic cotangent bundle and the notation Ω[2] means the tensor square (not the second exterior power). In other words, deformations are regulated by holomorphic quadratic differentials on a Riemann surface, again something known classically. The dimension of the moduli space, called Teichmüller space in this case, is computed as 3g − 3, by the Riemann–Roch theorem.
These examples are the beginning of a theory applying to holomorphic families of complex manifolds, of any dimension. Further developments included: the extension by Spencer of the techniques to other structures of differential geometry; the assimilation of the Kodaira–Spencer theory into the abstract algebraic geometry of Grothendieck, with a consequent substantive clarification of earlier work; and deformation theory of other structures, such as algebras.
== Deformations and flat maps ==
The most general form of a deformation is a flat map
{\displaystyle f:X\to S}
of complex-analytic spaces, schemes, or germs of functions on a space. Grothendieck was the first to find this far-reaching generalization for deformations and developed the theory in that context. The general idea is there should exist a universal family
{\displaystyle {\mathfrak {X}}\to B}
such that any deformation can be found as a unique pullback square
{\displaystyle {\begin{matrix}X&\to &{\mathfrak {X}}\\\downarrow &&\downarrow \\S&\to &B\end{matrix}}}
In many cases, this universal family is either a Hilbert scheme or Quot scheme, or a quotient of one of them. For example, in the construction of the moduli of curves, it is constructed as a quotient of the smooth curves in the Hilbert scheme. If the pullback square is not unique, then the family is only versal.
== Deformations of germs of analytic algebras ==
One of the useful and readily computable areas of deformation theory comes from the deformation theory of germs of complex spaces, such as Stein manifolds, complex manifolds, or complex analytic varieties. Note that this theory can be globalized to complex manifolds and complex analytic spaces by considering the sheaves of germs of holomorphic functions, tangent spaces, etc. Such algebras are of the form
{\displaystyle A\cong {\frac {\mathbb {C} \{z_{1},\ldots ,z_{n}\}}{I}}}
where {\displaystyle \mathbb {C} \{z_{1},\ldots ,z_{n}\}} is the ring of convergent power-series and {\displaystyle I} is an ideal. For example, many authors study the germs of functions of a singularity, such as the algebra
{\displaystyle A\cong {\frac {\mathbb {C} \{x,y\}}{(y^{2}-x^{n})}}}
representing a plane-curve singularity. A germ of analytic algebras is then an object in the opposite category of such algebras. Then, a deformation of a germ of analytic algebras
{\displaystyle X_{0}} is given by a flat map of germs of analytic algebras {\displaystyle f:X\to S} where {\displaystyle S} has a distinguished point {\displaystyle 0} such that {\displaystyle X_{0}} fits into the pullback square
{\displaystyle {\begin{matrix}X_{0}&\to &X\\\downarrow &&\downarrow \\*&{\xrightarrow[{0}]{}}&S\end{matrix}}}
These deformations have an equivalence relation given by commutative squares
{\displaystyle {\begin{matrix}X'&\to &X\\\downarrow &&\downarrow \\S'&\to &S\end{matrix}}}
where the horizontal arrows are isomorphisms. For example, there is a deformation of the plane curve singularity given by the opposite diagram of the commutative diagram of analytic algebras
{\displaystyle {\begin{matrix}{\frac {\mathbb {C} \{x,y\}}{(y^{2}-x^{n})}}&\leftarrow &{\frac {\mathbb {C} \{x,y,s\}}{(y^{2}-x^{n}+s)}}\\\uparrow &&\uparrow \\\mathbb {C} &\leftarrow &\mathbb {C} \{s\}\end{matrix}}}
In fact, Milnor studied such deformations, where a singularity is deformed by a constant; hence the fiber over a non-zero {\displaystyle s} is called the Milnor fiber.
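Smoothness of the nonzero fibers can be verified symbolically; a small sketch for the cusp case n = 3 (the choice of exponent is ours):

```python
# The family y^2 - x^3 + s has a singular fiber only over s = 0:
# a singular point requires f = f_x = f_y = 0 simultaneously.
from sympy import symbols, solve

x, y, s = symbols('x y s')
f = y**2 - x**3 + s

critical = solve([f, f.diff(x), f.diff(y)], [x, y, s])
print(critical)   # only (0, 0, 0): every Milnor fiber with s != 0 is smooth
```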
=== Cohomological interpretation of deformations ===
It should be clear there could be many deformations of a single germ of analytic functions. Because of this, there are some book-keeping devices required to organize all of this information. These organizational devices are constructed using tangent cohomology. This is formed by using the Koszul–Tate resolution, and potentially modifying it by adding additional generators for non-regular algebras {\displaystyle A}. In the case of analytic algebras these resolutions are called Tjurina resolutions, after the mathematician who first studied such objects, Galina Tyurina. This is a graded-commutative differential graded algebra {\displaystyle (R_{\bullet },s)} such that {\displaystyle R_{0}\to A} is a surjective map of analytic algebras, and this map fits into an exact sequence
{\displaystyle \cdots \xrightarrow {s} R_{-2}\xrightarrow {s} R_{-1}\xrightarrow {s} R_{0}\xrightarrow {p} A\to 0}
Then, by taking the differential graded module of derivations {\displaystyle ({\text{Der}}(R_{\bullet }),d)}, its cohomology forms the tangent cohomology of the germ of analytic algebras {\displaystyle A}. These cohomology groups are denoted {\displaystyle T^{k}(A)}. The group {\displaystyle T^{1}(A)} contains information about all of the deformations of {\displaystyle A} and can be readily computed using the exact sequence
{\displaystyle 0\to T^{0}(A)\to {\text{Der}}(R_{0})\xrightarrow {d} {\text{Hom}}_{R_{0}}(I,A)\to T^{1}(A)\to 0}
If {\displaystyle A} is isomorphic to the algebra
{\displaystyle {\frac {\mathbb {C} \{z_{1},\ldots ,z_{n}\}}{(f_{1},\ldots ,f_{m})}}}
then its deformations are equal to
{\displaystyle T^{1}(A)\cong {\frac {A^{m}}{df\cdot A^{n}}}}
where {\displaystyle df} is the Jacobian matrix of {\displaystyle f=(f_{1},\ldots ,f_{m}):\mathbb {C} ^{n}\to \mathbb {C} ^{m}}.
For example, the hypersurface given by {\displaystyle f} has the deformations
{\displaystyle T^{1}(A)\cong {\frac {A^{n}}{\left({\frac {\partial f}{\partial z_{1}}},\ldots ,{\frac {\partial f}{\partial z_{n}}}\right)}}}
For the singularity {\displaystyle y^{2}-x^{3}} this is the module
{\displaystyle {\frac {A^{2}}{(y,x^{2})}}}
hence the only deformations are given by adding constants or linear factors, so a general deformation of {\displaystyle f(x,y)=y^{2}-x^{3}} is {\displaystyle F(x,y,a_{1},a_{2})=y^{2}-x^{3}+a_{1}+a_{2}x} where the {\displaystyle a_{i}} are deformation parameters.
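The quotient can be checked with a Gröbner basis computation; a minimal sketch (sympy is our tooling choice) confirming that only the monomials 1 and x survive modulo the Jacobian ideal:

```python
# Deformations of the cusp f = y^2 - x^3: reduce monomials modulo the
# Jacobian ideal (f_x, f_y) = (-3x^2, 2y).
from sympy import symbols, groebner

x, y = symbols('x y')
f = y**2 - x**3
G = groebner([f.diff(x), f.diff(y)], x, y, order='grevlex')

print(G.reduce(x**2)[1])   # 0 -> x^2 lies in the ideal
print(G.reduce(y)[1])      # 0 -> y lies in the ideal
print(G.reduce(x)[1])      # x -> x survives, matching a_1 + a_2*x
```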
== Functorial description ==
Another method for formalizing deformation theory is using functors on the category {\displaystyle {\text{Art}}_{k}} of local Artin algebras over a field. A pre-deformation functor is defined as a functor
{\displaystyle F:{\text{Art}}_{k}\to {\text{Sets}}}
such that {\displaystyle F(k)} is a point. The idea is that we want to study the infinitesimal structure of some moduli space around the point over which the space of interest lies. It is typically the case that it is easier to describe the functor for a moduli problem instead of finding an actual space. For example, if we want to consider the moduli-space of hypersurfaces of degree {\displaystyle d} in {\displaystyle \mathbb {P} ^{n}}, then we could consider the functor
{\displaystyle F:{\text{Sch}}\to {\text{Sets}}}
where
{\displaystyle F(S)=\left\{{\begin{matrix}X\\\downarrow \\S\end{matrix}}:{\text{ each fiber is a degree }}d{\text{ hypersurface in }}\mathbb {P} ^{n}\right\}}
In general, though, it is more convenient (and sometimes necessary) to work with functors of groupoids instead of sets; this is the case for moduli of curves.
=== Technical remarks about infinitesimals ===
Infinitesimals have long been in use by mathematicians for non-rigorous arguments in calculus. The idea is that if we consider polynomials {\displaystyle F(x,\varepsilon )} with an infinitesimal {\displaystyle \varepsilon }, then only the first order terms really matter; that is, we can consider
{\displaystyle F(x,\varepsilon )\equiv f(x)+\varepsilon g(x)+O(\varepsilon ^{2})}
A simple application of this is that we can find the derivatives of monomials using infinitesimals:
{\displaystyle (x+\varepsilon )^{3}=x^{3}+3x^{2}\varepsilon +O(\varepsilon ^{2})}
The {\displaystyle \varepsilon } term contains the derivative of the monomial, demonstrating its use in calculus. We could also interpret this equation as the first two terms of the Taylor expansion of the monomial. Infinitesimals can be made rigorous using nilpotent elements in local artin algebras. In the ring {\displaystyle k[y]/(y^{2})} we see that arguments with infinitesimals can work. This motivates the notation {\displaystyle k[\varepsilon ]=k[y]/(y^{2})}, which is called the ring of dual numbers.
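This arithmetic is easy to mechanize; a minimal sketch (the class name and API are ours, not standard) that recovers the derivative of x³ from the ε-coefficient:

```python
# Dual numbers a + b*eps with eps^2 = 0: the eps-coefficient of f(x + eps)
# is automatically f'(x), with no truncation error.
class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b          # represents a + b*eps

    def __mul__(self, other):
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + a2 b1) eps
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __pow__(self, n):
        out = Dual(1, 0)
        for _ in range(n):
            out = out * self
        return out

x = Dual(2, 1)         # 2 + eps, i.e. evaluate at x = 2
cube = x ** 3
print(cube.a, cube.b)  # 8 12: x^3 = 8 and d/dx x^3 = 3*2^2 = 12
```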
Moreover, if we want to consider higher-order terms of a Taylor approximation then we could consider the artin algebras {\displaystyle k[y]/(y^{k})}. For our monomial, suppose we want to write out the second order expansion, then
{\displaystyle (x+\varepsilon )^{3}=x^{3}+3x^{2}\varepsilon +3x\varepsilon ^{2}+\varepsilon ^{3}}
Recall that a Taylor expansion (at zero) can be written out as
{\displaystyle f(x)=f(0)+{\frac {f^{(1)}(0)}{1!}}x+{\frac {f^{(2)}(0)}{2!}}x^{2}+{\frac {f^{(3)}(0)}{3!}}x^{3}+\cdots }
hence the previous two equations show that the second derivative of {\displaystyle x^{3}} is {\displaystyle 6x}.
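The same bookkeeping works symbolically; a short sketch extracting the second derivative from the ε²-coefficient of a truncated expansion:

```python
# In k[eps]/(eps^3) the coefficient of eps^2 in f(x + eps) equals f''(x)/2!.
from sympy import symbols, expand, factorial, diff

x, eps = symbols('x epsilon')
expansion = expand((x + eps)**3)          # x^3 + 3*x^2*eps + 3*x*eps^2 + eps^3

second = factorial(2) * expansion.coeff(eps, 2)
print(second)                             # 6*x, matching diff(x**3, x, 2)
```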
In general, since we want to consider arbitrary order Taylor expansions in any number of variables, we will consider the category of all local artin algebras over a field.
=== Motivation ===
To motivate the definition of a pre-deformation functor, consider the projective hypersurface over a field
{\displaystyle {\begin{matrix}\operatorname {Proj} \left({\dfrac {\mathbb {C} [x_{0},x_{1},x_{2},x_{3}]}{(x_{0}^{4}+x_{1}^{4}+x_{2}^{4}+x_{3}^{4})}}\right)\\\downarrow \\\operatorname {Spec} (k)\end{matrix}}}
If we want to consider an infinitesimal deformation of this space, then we could write down a Cartesian square
{\displaystyle {\begin{matrix}\operatorname {Proj} \left({\dfrac {\mathbb {C} [x_{0},x_{1},x_{2},x_{3}]}{(x_{0}^{4}+x_{1}^{4}+x_{2}^{4}+x_{3}^{4})}}\right)&\to &\operatorname {Proj} \left({\dfrac {\mathbb {C} [x_{0},x_{1},x_{2},x_{3}][\varepsilon ]}{(x_{0}^{4}+x_{1}^{4}+x_{2}^{4}+x_{3}^{4}+\varepsilon x_{0}^{a_{0}}x_{1}^{a_{1}}x_{2}^{a_{2}}x_{3}^{a_{3}})}}\right)\\\downarrow &&\downarrow \\\operatorname {Spec} (k)&\to &\operatorname {Spec} (k[\varepsilon ])\end{matrix}}}
where
{\displaystyle a_{0}+a_{1}+a_{2}+a_{3}=4}
. Then the space in the upper right-hand corner is an example of an infinitesimal deformation: the extra scheme-theoretic structure of the nilpotent elements in
{\displaystyle \operatorname {Spec} (k[\varepsilon ])}
(which is topologically a point) allows us to organize this infinitesimal data. Since we want to consider all possible expansions, we will let our pre-deformation functor be defined on objects as
{\displaystyle F(A)=\left\{{\begin{matrix}\operatorname {Proj} \left({\dfrac {\mathbb {C} [x_{0},x_{1},x_{2},x_{3}]}{(x_{0}^{4}+x_{1}^{4}+x_{2}^{4}+x_{3}^{4})}}\right)&\to &{\mathfrak {X}}\\\downarrow &&\downarrow \\\operatorname {Spec} (k)&\to &\operatorname {Spec} (A)\end{matrix}}\right\}}
where
{\displaystyle A}
is a local Artin
{\displaystyle k}
-algebra.
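The first-order perturbations written down in the motivation are indexed by the exponent vectors with a0 + a1 + a2 + a3 = 4. A brief enumeration (a self-contained sketch, not tied to any computer algebra system) confirms there are C(7,3) = 35 such monomials.

```python
from itertools import product

# Exponent vectors (a0, a1, a2, a3) of the degree-4 monomials
# x0^a0 x1^a1 x2^a2 x3^a3 that can perturb the Fermat quartic to
# first order, as in the Cartesian square above.
exponents = [a for a in product(range(5), repeat=4) if sum(a) == 4]

# Stars and bars: the number of such monomials is C(4 + 3, 3) = 35.
print(len(exponents))  # 35
```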
=== Smooth pre-deformation functors ===
A pre-deformation functor is called smooth if for any surjection
{\displaystyle A'\to A}
such that the square of any element in the kernel is zero, there is a surjection
{\displaystyle F(A')\to F(A)}
This is motivated by the following question: given a deformation
{\displaystyle {\begin{matrix}X&\to &{\mathfrak {X}}\\\downarrow &&\downarrow \\\operatorname {Spec} (k)&\to &\operatorname {Spec} (A)\end{matrix}}}
does there exist an extension of this Cartesian diagram to the Cartesian diagrams
{\displaystyle {\begin{matrix}X&\to &{\mathfrak {X}}&\to &{\mathfrak {X}}'\\\downarrow &&\downarrow &&\downarrow \\\operatorname {Spec} (k)&\to &\operatorname {Spec} (A)&\to &\operatorname {Spec} (A')\end{matrix}}}
The name smooth comes from the lifting criterion of a smooth morphism of schemes.
=== Tangent space ===
Recall that the tangent space of a scheme
{\displaystyle X}
can be described as the
{\displaystyle \operatorname {Hom} }
-set
{\displaystyle TX:=\operatorname {Hom} _{{\text{Sch}}/k}(\operatorname {Spec} (k[\varepsilon ]),X)}
where the source is the spectrum of the ring of dual numbers. Since we are considering the tangent space of a point of some moduli space, we can define the tangent space of our (pre-)deformation functor as
{\displaystyle T_{F}:=F(k[\varepsilon ]).}
== Applications of deformation theory ==
=== Dimension of moduli of curves ===
One of the first properties of the moduli of algebraic curves
{\displaystyle {\mathcal {M}}_{g}}
can be deduced using elementary deformation theory. Its dimension can be computed as
{\displaystyle \dim({\mathcal {M}}_{g})=\dim H^{1}(C,T_{C})}
for an arbitrary smooth curve of genus
{\displaystyle g}
because the deformation space is the tangent space of the moduli space. Using Serre duality the tangent space is isomorphic to
{\displaystyle {\begin{aligned}H^{1}(C,T_{C})&\cong H^{0}(C,T_{C}^{*}\otimes \omega _{C})^{\vee }\\&\cong H^{0}(C,\omega _{C}^{\otimes 2})^{\vee }\end{aligned}}}
Hence the Riemann–Roch theorem gives
{\displaystyle {\begin{aligned}h^{0}(C,\omega _{C}^{\otimes 2})-h^{1}(C,\omega _{C}^{\otimes 2})&=2(2g-2)-g+1\\&=3g-3\end{aligned}}}
For curves of genus
{\displaystyle g\geq 2}
we have
{\displaystyle h^{1}(C,\omega _{C}^{\otimes 2})=0}
because
{\displaystyle h^{1}(C,\omega _{C}^{\otimes 2})=h^{0}(C,(\omega _{C}^{\otimes 2})^{\vee }\otimes \omega _{C})}
where the degree is
{\displaystyle {\begin{aligned}{\text{deg}}((\omega _{C}^{\otimes 2})^{\vee }\otimes \omega _{C})&=4-4g+2g-2\\&=2-2g\end{aligned}}}
and
{\displaystyle h^{0}(L)=0}
for line bundles of negative degree. Therefore the dimension of the moduli space is
{\displaystyle 3g-3}.
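The bookkeeping above is elementary enough to check mechanically. The sketch below (the helper name is our own) applies Riemann–Roch in the form h⁰ − h¹ = deg − g + 1 to ω_C^⊗2 and verifies that the answer is 3g − 3 for small genera.

```python
def euler_characteristic(d, g):
    # Riemann-Roch for a degree-d line bundle on a genus-g curve:
    # h0 - h1 = d - g + 1.
    return d - g + 1

for g in range(2, 10):
    deg_omega_sq = 2 * (2 * g - 2)   # deg(omega_C^{tensor 2}) = 4g - 4
    chi = euler_characteristic(deg_omega_sq, g)
    # By Serre duality h1(omega^2) = h0((omega^2)^dual tensor omega),
    # a line bundle of degree 2 - 2g < 0, so h1 = 0 and chi = h0.
    assert 2 - 2 * g < 0
    assert chi == 3 * g - 3
print("dim M_g = 3g - 3 verified for g = 2..9")
```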
=== Bend-and-break ===
Deformation theory was famously applied in birational geometry by Shigefumi Mori to study the existence of rational curves on varieties. For a Fano variety of positive dimension Mori showed that there is a rational curve passing through every point. The method of the proof later became known as Mori's bend-and-break. The rough idea is to start with some curve C through a chosen point and keep deforming it until it breaks into several components. Replacing C by one of the components has the effect of decreasing either the genus or the degree of C. After several repetitions of the procedure, we eventually obtain a curve of genus 0, i.e. a rational curve. The existence and the properties of deformations of C require arguments from deformation theory and a reduction to positive characteristic.
=== Arithmetic deformations ===
One of the major applications of deformation theory is in arithmetic. It can be used to answer the following question: if we have a variety
{\displaystyle X/\mathbb {F} _{p}}
, what are the possible extensions
{\displaystyle {\mathfrak {X}}/\mathbb {Z} _{p}}
? If our variety is a curve, then the vanishing of
{\displaystyle H^{2}}
implies that every deformation induces a variety over
{\displaystyle \mathbb {Z} _{p}}
; that is, if we have a smooth curve
{\displaystyle {\begin{matrix}X\\\downarrow \\\operatorname {Spec} (\mathbb {F} _{p})\end{matrix}}}
and a deformation
{\displaystyle {\begin{matrix}X&\to &{\mathfrak {X}}_{2}\\\downarrow &&\downarrow \\\operatorname {Spec} (\mathbb {F} _{p})&\to &\operatorname {Spec} (\mathbb {Z} /(p^{2}))\end{matrix}}}
then we can always extend it to a diagram of the form
{\displaystyle {\begin{matrix}X&\to &{\mathfrak {X}}_{2}&\to &{\mathfrak {X}}_{3}&\to \cdots \\\downarrow &&\downarrow &&\downarrow &\\\operatorname {Spec} (\mathbb {F} _{p})&\to &\operatorname {Spec} (\mathbb {Z} /(p^{2}))&\to &\operatorname {Spec} (\mathbb {Z} /(p^{3}))&\to \cdots \end{matrix}}}
This implies that we can construct a formal scheme
{\displaystyle {\mathfrak {X}}=\operatorname {Spet} ({\mathfrak {X}}_{\bullet })}
giving a curve over
{\displaystyle \mathbb {Z} _{p}}.
=== Deformations of abelian schemes ===
The Serre–Tate theorem asserts, roughly speaking, that the deformations of an abelian scheme A are controlled by deformations of the p-divisible group
{\displaystyle A[p^{\infty }]}
consisting of its p-power torsion points.
=== Galois deformations ===
Another application of deformation theory is to Galois deformations. It allows us to answer the question: if we have a Galois representation
{\displaystyle G\to \operatorname {GL} _{n}(\mathbb {F} _{p})}
how can we extend it to a representation
{\displaystyle G\to \operatorname {GL} _{n}(\mathbb {Z} _{p}){\text{?}}}
== Relationship to string theory ==
The so-called Deligne conjecture arising in the context of algebras (and Hochschild cohomology) stimulated much interest in deformation theory in relation to string theory (roughly speaking, to formalise the idea that a string theory can be regarded as a deformation of a point-particle theory). This is now accepted as proved, after some hitches with early announcements. Maxim Kontsevich is among those who have offered a generally accepted proof of this.
== See also ==
Kodaira–Spencer map
Dual number
Schlessinger's theorem
Exalcomm
Cotangent complex
Gromov–Witten invariant
Moduli of algebraic curves
Degeneration (algebraic geometry)
== Notes ==
== Sources ==
"deformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Gerstenhaber, Murray and Stasheff, James, eds. (1992). Deformation Theory and Quantum Groups with Applications to Mathematical Physics, American Mathematical Society (Google eBook) ISBN 0821851411
=== Pedagogical ===
Palamodov, V. P., III. Deformations of complex spaces. Complex Variables IV (very down to earth intro)
Course Notes on Deformation Theory (Artin)
Studying Deformation Theory of Schemes
Sernesi, Eduardo, Deformations of Algebraic Schemes
Hartshorne, Robin, Deformation Theory
Notes from Hartshorne's Course on Deformation Theory
MSRI – Deformation Theory and Moduli in Algebraic Geometry
=== Survey articles ===
Mazur, Barry (2004), "Perturbations, Deformations, and Variations (and "Near-Misses" in Geometry, Physics, and Number Theory" (PDF), Bulletin of the American Mathematical Society, 41 (3): 307–336, doi:10.1090/S0273-0979-04-01024-9, MR 2058289
Anel, M., Why deformations are cohomological (PDF)
== External links ==
"A glimpse of deformation theory" (PDF), lecture notes by Brian Osserman | Wikipedia/Deformation_theory
In gauge theory and mathematical physics, a topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants.
While TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory and the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for mathematical work related to topological field theory.
In condensed matter physics, topological quantum field theories are the low-energy effective theories of topologically ordered states, such as fractional quantum Hall states, string-net condensed states, and other strongly correlated quantum liquid states.
== Overview ==
In a topological field theory, correlation functions do not depend on the metric of spacetime. This means that the theory is not sensitive to changes in the shape of spacetime; if spacetime warps or contracts, the correlation functions do not change. Consequently, they are topological invariants.
Topological field theories are not very interesting on the flat Minkowski spacetime used in particle physics. Minkowski space can be contracted to a point, so a TQFT applied to Minkowski space results in trivial topological invariants. Consequently, TQFTs are usually applied to curved spacetimes, such as Riemann surfaces. Most of the known topological field theories are defined on spacetimes of dimension less than five. It seems that a few higher-dimensional theories exist, but they are not very well understood.
Quantum gravity is believed to be background-independent (in some suitable sense), and TQFTs provide examples of background independent quantum field theories. This has prompted ongoing theoretical investigations into this class of models.
(Caveat: It is often said that TQFTs have only finitely many degrees of freedom. This is not a fundamental property. It happens to be true in most of the examples that physicists and mathematicians study, but it is not necessary. A topological sigma model targets infinite-dimensional projective space, and if such a thing could be defined it would have countably infinitely many degrees of freedom.)
== Specific models ==
The known topological field theories fall into two general classes: Schwarz-type TQFTs and Witten-type TQFTs. Witten TQFTs are also sometimes referred to as cohomological field theories. See (Schwarz 2000).
=== Schwarz-type TQFTs ===
In Schwarz-type TQFTs, the correlation functions or partition functions of the system are computed by the path integral of metric-independent action functionals. For instance, in the BF model, the spacetime is a two-dimensional manifold M, the observables are constructed from a two-form F, an auxiliary scalar B, and their derivatives. The action (which determines the path integral) is
{\displaystyle S=\int \limits _{M}BF}
The spacetime metric does not appear anywhere in the theory, so the theory is explicitly topologically invariant. The first example appeared in 1977 and is due to A. Schwarz; its action functional is:
{\displaystyle S=\int \limits _{M}A\wedge dA.}
Another, more famous example is Chern–Simons theory, which can be applied to knot invariants. In general, partition functions depend on a metric, but the above examples are metric-independent.
=== Witten-type TQFTs ===
The first example of Witten-type TQFTs appeared in Witten's paper in 1988 (Witten 1988a), i.e. topological Yang–Mills theory in four dimensions. Though its action functional contains the spacetime metric gαβ, after a topological twist it turns out to be metric independent. The independence of the stress-energy tensor Tαβ of the system from the metric depends on whether the BRST-operator is closed. Following Witten's example many other examples can be found in string theory.
Witten-type TQFTs arise if the following conditions are satisfied:
The action
{\displaystyle S}
of the TQFT has a symmetry, i.e. if
{\displaystyle \delta }
denotes a symmetry transformation (e.g. a Lie derivative) then
{\displaystyle \delta S=0}
holds.
The symmetry transformation is exact, i.e.
{\displaystyle \delta ^{2}=0}
There exist observables
{\displaystyle O_{1},\dots ,O_{n}}
which satisfy
{\displaystyle \delta O_{i}=0}
for all
{\displaystyle i\in \{1,\dots ,n\}}.
The stress-energy tensor (or similar physical quantities) is of the form
{\displaystyle T^{\alpha \beta }=\delta G^{\alpha \beta }}
for an arbitrary tensor
{\displaystyle G^{\alpha \beta }}.
As an example (Linker 2015): Given a 2-form field
{\displaystyle B}
with the differential operator
{\displaystyle \delta }
which satisfies
{\displaystyle \delta ^{2}=0}
, then the action
{\displaystyle S=\int \limits _{M}B\wedge \delta B}
has a symmetry if
{\displaystyle \delta B\wedge \delta B=0}
since
{\displaystyle \delta S=\int \limits _{M}\delta (B\wedge \delta B)=\int \limits _{M}\delta B\wedge \delta B+\int \limits _{M}B\wedge \delta ^{2}B=0.}
Further, the following holds (under the condition that
{\displaystyle \delta }
is independent of
{\displaystyle B}
and acts similarly to a functional derivative):
{\displaystyle {\frac {\delta }{\delta B^{\alpha \beta }}}S=\int \limits _{M}{\frac {\delta }{\delta B^{\alpha \beta }}}B\wedge \delta B+\int \limits _{M}B\wedge \delta {\frac {\delta }{\delta B^{\alpha \beta }}}B=\int \limits _{M}{\frac {\delta }{\delta B^{\alpha \beta }}}B\wedge \delta B-\int \limits _{M}\delta B\wedge {\frac {\delta }{\delta B^{\alpha \beta }}}B=-2\int \limits _{M}\delta B\wedge {\frac {\delta }{\delta B^{\alpha \beta }}}B.}
The expression
{\displaystyle {\frac {\delta }{\delta B^{\alpha \beta }}}S}
is proportional to
{\displaystyle \delta G}
with another 2-form
{\displaystyle G}.
Now any averages of observables
{\displaystyle \left\langle O_{i}\right\rangle :=\int d\mu O_{i}e^{iS}}
for the corresponding Haar measure
{\displaystyle \mu }
are independent of the "geometric" field
{\displaystyle B}
and are therefore topological:
{\displaystyle {\frac {\delta }{\delta B}}\left\langle O_{i}\right\rangle =\int d\mu O_{i}i{\frac {\delta }{\delta B}}Se^{iS}\propto \int d\mu O_{i}\delta Ge^{iS}=\delta \left(\int d\mu O_{i}Ge^{iS}\right)=0}.
The third equality uses the fact that
{\displaystyle \delta O_{i}=\delta S=0}
and the invariance of the Haar measure under symmetry transformations. Since
{\displaystyle \int d\mu O_{i}Ge^{iS}}
is only a number, its Lie derivative vanishes.
== Mathematical formulations ==
=== Original Atiyah–Segal axioms ===
Atiyah suggested a set of axioms for topological quantum field theory, inspired by Segal's proposed axioms for conformal field theory (subsequently, Segal's idea was summarized in Segal (2001)), and Witten's geometric meaning of supersymmetry in Witten (1982). Atiyah's axioms are constructed by gluing the boundary with a differentiable (topological or continuous) transformation, while Segal's axioms are for conformal transformations. These axioms have been relatively useful for mathematical treatments of Schwarz-type QFTs, although it isn't clear that they capture the whole structure of Witten-type QFTs. The basic idea is that a TQFT is a functor from a certain category of cobordisms to the category of vector spaces.
There are in fact two different sets of axioms which could reasonably be called the Atiyah axioms. These axioms differ basically in whether or not they apply to a TQFT defined on a single fixed n-dimensional Riemannian / Lorentzian spacetime M or a TQFT defined on all n-dimensional spacetimes at once.
Let Λ be a commutative ring with 1 (for almost all real-world purposes we will have Λ = Z, R or C). Atiyah originally proposed the axioms of a topological quantum field theory (TQFT) in dimension d defined over a ground ring Λ as follows:
A finitely generated Λ-module Z(Σ) associated to each oriented closed smooth d-dimensional manifold Σ (corresponding to the homotopy axiom),
An element Z(M) ∈ Z(∂M) associated to each oriented smooth (d + 1)-dimensional manifold (with boundary) M (corresponding to an additive axiom).
These data are subject to the following axioms (4 and 5 were added by Atiyah):
Z is functorial with respect to orientation preserving diffeomorphisms of Σ and M,
Z is involutory, i.e. Z(Σ*) = Z(Σ)* where Σ* is Σ with opposite orientation and Z(Σ)* denotes the dual module,
Z is multiplicative.
Z(∅) = Λ for the d-dimensional empty manifold and Z(∅) = 1 for the (d + 1)-dimensional empty manifold.
Z(M*) = Z(M) (the hermitian axiom). If
{\displaystyle \partial M=\Sigma _{0}^{*}\cup \Sigma _{1}}
so that Z(M) can be viewed as a linear transformation between hermitian vector spaces, then this is equivalent to Z(M*) being the adjoint of Z(M).
Remark. If for a closed manifold M we view Z(M) as a numerical invariant, then for a manifold with a boundary we should think of Z(M) ∈ Z(∂M) as a "relative" invariant. Let f : Σ → Σ be an orientation-preserving diffeomorphism, and identify opposite ends of Σ × I by f. This gives a manifold Σf and our axioms imply
{\displaystyle Z(\Sigma _{f})=\operatorname {Trace} \ \Sigma (f)}
where Σ(f) is the induced automorphism of Z(Σ).
Remark. For a manifold M with boundary Σ we can always form the double
{\displaystyle M\cup _{\Sigma }M^{*}}
which is a closed manifold. The fifth axiom shows that
{\displaystyle Z\left(M\cup _{\Sigma }M^{*}\right)=|Z(M)|^{2}}
where on the right we compute the norm in the hermitian (possibly indefinite) metric.
=== Relation to physics ===
Physically (2) + (4) are related to relativistic invariance while (3) + (5) are indicative of the quantum nature of the theory.
Σ is meant to indicate the physical space (usually, d = 3 for standard physics) and the extra dimension in Σ × I is "imaginary" time. The space Z(Σ) is the Hilbert space of the quantum theory and a physical theory, with a Hamiltonian H, will have a time evolution operator eitH or an "imaginary time" operator e−tH. The main feature of topological QFTs is that H = 0, which implies that there is no real dynamics or propagation along the cylinder Σ × I. However, there can be non-trivial "propagation" (or tunneling amplitudes) from Σ0 to Σ1 through an intervening manifold M with
{\displaystyle \partial M=\Sigma _{0}^{*}\cup \Sigma _{1}}
; this reflects the topology of M.
If ∂M = Σ, then the distinguished vector Z(M) in the Hilbert space Z(Σ) is thought of as the vacuum state defined by M. For a closed manifold M the number Z(M) is the vacuum expectation value. In analogy with statistical mechanics it is also called the partition function.
The reason why a theory with a zero Hamiltonian can be sensibly formulated resides in the Feynman path integral approach to QFT. This incorporates relativistic invariance (which applies to general (d + 1)-dimensional "spacetimes") and the theory is formally defined by a suitable Lagrangian—a functional of the classical fields of the theory. A Lagrangian which involves only first derivatives in time formally leads to a zero Hamiltonian, but the Lagrangian itself may have non-trivial features which relate to the topology of M.
=== Atiyah's examples ===
In 1988, M. Atiyah published a paper in which he described many new examples of topological quantum field theory that were considered at that time (Atiyah 1988a, 1988b). It contains some new topological invariants along with some new ideas: Casson invariant, Donaldson invariant, Gromov's theory, Floer homology and Jones–Witten theory.
==== d = 0 ====
In this case Σ consists of finitely many points. To a single point we associate a vector space V = Z(point) and to n-points the n-fold tensor product: V⊗n = V ⊗ … ⊗ V. The symmetric group Sn acts on V⊗n. A standard way to get the quantum Hilbert space is to start with a classical symplectic manifold (or phase space) and then quantize it. Let us extend Sn to a compact Lie group G and consider "integrable" orbits for which the symplectic structure comes from a line bundle, then quantization leads to the irreducible representations V of G. This is the physical interpretation of the Borel–Weil theorem or the Borel–Weil–Bott theorem. The Lagrangian of these theories is the classical action (holonomy of the line bundle). Thus topological QFTs with d = 0 relate naturally to the classical representation theory of Lie groups and the symmetric group.
==== d = 1 ====
We should consider periodic boundary conditions given by closed loops in a compact symplectic manifold X. Following Witten (1982), the holonomy of such loops, used in the d = 0 case as a Lagrangian, is then used to modify the Hamiltonian. For a closed surface M the invariant Z(M) of the theory is the number of pseudo holomorphic maps f : M → X in the sense of Gromov (they are ordinary holomorphic maps if X is a Kähler manifold). If this number becomes infinite, i.e. if there are "moduli", then we must fix further data on M. This can be done by picking some points Pi and then looking at holomorphic maps f : M → X with f(Pi) constrained to lie on a fixed hyperplane. Witten (1988b) has written down the relevant Lagrangian for this theory. Floer has given a rigorous treatment, i.e. Floer homology, based on Witten's Morse theory ideas; for the case when the boundary conditions are over the interval instead of being periodic, the path initial and end-points lie on two fixed Lagrangian submanifolds. This theory has been developed as Gromov–Witten invariant theory.
Another example is Holomorphic Conformal Field Theory. This might not have been considered strictly topological quantum field theory at the time because Hilbert spaces are infinite dimensional. The conformal field theories are also related to the compact Lie group G in which the classical phase consists of a central extension of the loop group (LG). Quantizing these produces the Hilbert spaces of the theory of irreducible (projective) representations of LG. The group Diff+(S1) now substitutes for the symmetric group and plays an important role. As a result, the partition function in such theories depends on complex structure, thus it is not purely topological.
==== d = 2 ====
Jones–Witten theory is the most important theory in this case. Here the classical phase space, associated with a closed surface Σ is the moduli space of a flat G-bundle over Σ. The Lagrangian is an integer multiple of the Chern–Simons function of a G-connection on a 3-manifold (which has to be "framed"). The integer multiple k, called the level, is a parameter of the theory and k → ∞ gives the classical limit. This theory can be naturally coupled with the d = 0 theory to produce a "relative" theory. The details have been described by Witten who shows that the partition function for a (framed) link in the 3-sphere is just the value of the Jones polynomial for a suitable root of unity. The theory can be defined over the relevant cyclotomic field, see Atiyah (1988b). By considering a Riemann surface with boundary, we can couple it to the d = 1 conformal theory instead of coupling d = 2 theory to d = 0. This has developed into Jones–Witten theory and has led to the discovery of deep connections between knot theory and quantum field theory.
==== d = 3 ====
Donaldson has defined the integer invariant of smooth 4-manifolds by using moduli spaces of SU(2)-instantons. These invariants are polynomials on the second homology. Thus 4-manifolds should have extra data consisting of the symmetric algebra of H2. Witten (1988a) has produced a super-symmetric Lagrangian which formally reproduces the Donaldson theory. Witten's formula might be understood as an infinite-dimensional analogue of the Gauss–Bonnet theorem. At a later date, this theory was further developed and became the Seiberg–Witten gauge theory which reduces SU(2) to U(1) in N = 2, d = 4 gauge theory. The Hamiltonian version of the theory has been developed by Andreas Floer in terms of the space of connections on a 3-manifold. Floer uses the Chern–Simons function, which is the Lagrangian of Jones–Witten theory to modify the Hamiltonian. For details, see Atiyah (1988b). Witten (1988a) has also shown how one can couple the d = 3 and d = 1 theories together: this is quite analogous to the coupling between d = 2 and d = 0 in Jones–Witten theory.
Now, topological field theory is viewed as a functor, not on a fixed dimension but on all dimensions at the same time.
=== Case of a fixed spacetime ===
Let BordM be the category whose morphisms are n-dimensional submanifolds of M and whose objects are connected components of the boundaries of such submanifolds. Regard two morphisms as equivalent if they are homotopic via submanifolds of M, and so form the quotient category hBordM: The objects in hBordM are the objects of BordM, and the morphisms of hBordM are homotopy equivalence classes of morphisms in BordM. A TQFT on M is a symmetric monoidal functor from hBordM to the category of vector spaces.
Note that cobordisms can, if their boundaries match, be sewn together to form a new bordism. This is the composition law for morphisms in the cobordism category. Since functors are required to preserve composition, this says that the linear map corresponding to a sewn together morphism is just the composition of the linear map for each piece.
There is an equivalence of categories between the category of 2-dimensional topological quantum field theories and the category of commutative Frobenius algebras.
=== All n-dimensional spacetimes at once ===
To consider all spacetimes at once, it is necessary to replace hBordM by a larger category. So let Bordn be the category of bordisms, i.e. the category whose morphisms are n-dimensional manifolds with boundary, and whose objects are the connected components of the boundaries of n-dimensional manifolds. (Note that any (n−1)-dimensional manifold may appear as an object in Bordn.) As above, regard two morphisms in Bordn as equivalent if they are homotopic, and form the quotient category hBordn. Bordn is a monoidal category under the operation which maps two bordisms to the bordism made from their disjoint union. A TQFT on n-dimensional manifolds is then a functor from hBordn to the category of vector spaces, which maps disjoint unions of bordisms to their tensor product.
For example, for (1 + 1)-dimensional bordisms (2-dimensional bordisms between 1-dimensional manifolds), the map associated with a pair of pants gives a product or coproduct, depending on how the boundary components are grouped – which is commutative or cocommutative, while the map associated with a disk gives a counit (trace) or unit (scalars), depending on the grouping of boundary components, and thus (1+1)-dimension TQFTs correspond to Frobenius algebras.
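As an illustration of this correspondence, the closed-surface invariants of a (1+1)-dimensional TQFT can be computed directly from a Frobenius algebra. The sketch below uses the toy semisimple algebra A = Cⁿ with idempotent basis e_i and counit ε(e_i) = λ_i (the λ_i are parameters chosen purely for illustration); each handle then scales e_i by 1/λ_i, so a closed genus-g surface evaluates to Σ_i λ_i^(1−g).

```python
def surface_invariant(lams, genus):
    # Z(closed genus-g surface) for the Frobenius algebra C^n with
    # counit eps(e_i) = lams[i]: each handle contributes a factor
    # 1/lams[i] on the i-th idempotent, giving sum_i lams[i]^(1 - g).
    return sum(l ** (1 - genus) for l in lams)

lams = [2.0, 5.0]                      # a rank-2 toy example
print(surface_invariant(lams, 0))      # sphere: 2 + 5 = 7.0
print(surface_invariant(lams, 1))      # torus: dim A = 2.0
```

In particular, the torus always evaluates to the dimension of the algebra, independently of the chosen λ_i, consistent with the trace formula in the axioms above.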
Furthermore, we can consider simultaneously 4-dimensional, 3-dimensional and 2-dimensional manifolds related by the above bordisms, and from them we can obtain ample and important examples.
=== Development at a later time ===
Looking at the development of topological quantum field theory, we should consider its many applications to Seiberg–Witten gauge theory, topological string theory, the relationship between knot theory and quantum field theory, and quantum knot invariants. Furthermore, it has generated topics of great interest in both mathematics and physics. Also of important recent interest are non-local operators in TQFT (Gukov & Kapustin (2013)). If string theory is viewed as fundamental, then non-local TQFTs can be viewed as non-physical models that provide a computationally efficient approximation to local string theory.
=== Witten-type TQFTs and dynamical systems ===
Stochastic (partial) differential equations (SDEs) are the foundation for models of everything in nature above the scale of quantum degeneracy and coherence and are essentially Witten-type TQFTs. All SDEs possess topological or BRST supersymmetry,
{\displaystyle \delta }
, which in the operator representation of stochastic dynamics is the exterior derivative, commuting with the stochastic evolution operator. This supersymmetry preserves the continuity of phase space by continuous flows, and the phenomenon of supersymmetric spontaneous breakdown by a global non-supersymmetric ground state encompasses such well-established physical concepts as chaos, turbulence, 1/f and crackling noises, self-organized criticality etc. The topological sector of the theory for any SDE can be recognized as a Witten-type TQFT.
== See also ==
== References ==
Atiyah, Michael (1988a). "New invariants of three and four dimensional manifolds". The Mathematical Heritage of Hermann Weyl. Proceedings of Symposia in Pure Mathematics. Vol. 48. American Mathematical Society. pp. 285–299. doi:10.1090/pspum/048/974342. ISBN 9780821814826.
Atiyah, Michael (1988b). "Topological quantum field theories" (PDF). Publications Mathématiques de l'IHÉS. 68 (68): 175–186. doi:10.1007/BF02698547. MR 1001453. S2CID 121647908.
Gukov, Sergei; Kapustin, Anton (2013). "Topological Quantum Field Theory, Nonlocal Operators, and Gapped Phases of Gauge Theories". arXiv:1307.4793 [hep-th].
Linker, Patrick (2015). "Topological Dipole Field Theory" (PDF). The Winnower. 2: e144311.19292. doi:10.15200/winn.144311.19292.
Lurie, Jacob (2009). "On the Classification of Topological Field Theories". arXiv:0905.0465 [math.CT].
Schwarz, Albert (2000). "Topological quantum field theories". arXiv:hep-th/0011260.
Segal, Graeme (2001). "Topological structures in string theory". Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences. 359 (1784): 1389–1398. Bibcode:2001RSPTA.359.1389S. doi:10.1098/rsta.2001.0841. S2CID 120834154.
Witten, Edward (1982). "Super-symmetry and Morse Theory". Journal of Differential Geometry. 17 (4): 661–692. doi:10.4310/jdg/1214437492.
Witten, Edward (1988a). "Topological quantum field theory". Communications in Mathematical Physics. 117 (3): 353–386. Bibcode:1988CMaPh.117..353W. doi:10.1007/BF01223371. MR 0953828. S2CID 43230714.
Witten, Edward (1988b). "Topological sigma models". Communications in Mathematical Physics. 118 (3): 411–449. Bibcode:1988CMaPh.118..411W. doi:10.1007/bf01466725. S2CID 34042140.
In the mathematical area of topology, the generalized Poincaré conjecture is a statement that a manifold that is a homotopy sphere is a sphere. More precisely, one fixes a category of manifolds: topological (Top), piecewise linear (PL), or differentiable (Diff). Then the statement is
Every homotopy sphere (a closed n-manifold which is homotopy equivalent to the n-sphere) in the chosen category (i.e. topological manifolds, PL manifolds, or smooth manifolds) is isomorphic in the chosen category (i.e. homeomorphic, PL-isomorphic, or diffeomorphic) to the standard n-sphere.
The name derives from the Poincaré conjecture, which was made for (topological or PL) manifolds of dimension 3, where being a homotopy sphere is equivalent to being simply connected and closed. The generalized Poincaré conjecture is known to be true or false in a number of instances, due to the work of many distinguished topologists, including the Fields medal awardees John Milnor, Steve Smale, Michael Freedman, and Grigori Perelman.
== Status ==
Here is a summary of the status of the generalized Poincaré conjecture in various settings.
Top: True in all dimensions.
PL: True in dimensions other than 4; unknown in dimension 4, where it is equivalent to Diff.
Diff: False generally, with the first known counterexample in dimension 7. True in some dimensions, including 1, 2, 3, 5, 6, 12, 56 and 61. This list includes all odd dimensions for which the conjecture is true. For even dimensions, it is true only for those on the list, possibly dimension 4, and possibly some additional dimensions ≥ 64 (though it is conjectured that there are none such). The case of dimension 4 is equivalent to PL.
Thus the veracity of the Poincaré conjectures is different in each category Top, PL, and Diff. In general, the notion of isomorphism differs among the categories, but it is the same in dimension 3 and below. In dimension 4, PL and Diff agree, but Top differs. In dimensions above 6 they all differ. In dimensions 5 and 6 every PL manifold admits an infinitely differentiable structure that is so-called Whitehead compatible.
== History ==
The cases n = 1 and 2 have long been known by the classification of manifolds in those dimensions.
For a PL or smooth homotopy n-sphere, in 1960 Stephen Smale proved for n ≥ 7 that it was homeomorphic to the n-sphere and subsequently extended his proof to n ≥ 5; he received a Fields Medal for his work in 1966. Shortly after Smale's announcement of a proof, John Stallings gave a different proof for dimensions at least 7 that a PL homotopy n-sphere was homeomorphic to the n-sphere, using the notion of "engulfing". E. C. Zeeman modified Stallings's construction to work in dimensions 5 and 6. In 1962, Smale proved that a PL homotopy n-sphere is PL-isomorphic to the standard PL n-sphere for n at least 5. In 1966, M. H. A. Newman extended PL engulfing to the topological situation and proved that for n ≥ 5 a topological homotopy n-sphere is homeomorphic to the n-sphere.
Michael Freedman solved the topological case n = 4 in 1982 and received a Fields Medal in 1986. The initial proof consisted of a 50-page outline, with many details missing. Freedman gave a series of lectures at the time, convincing experts that the proof was correct. A project to produce a written version of the proof with background and all details filled in began in 2013, with Freedman's support. The project's output, edited by Stefan Behrens, Boldizsar Kalmar, Min Hoon Kim, Mark Powell, and Arunima Ray, with contributions from 20 mathematicians, was published in August 2021 in the form of a 496-page book, The Disc Embedding Theorem.
Grigori Perelman solved the case n = 3 (where the topological, PL, and differentiable cases all coincide) in 2003 in a sequence of three papers. He was offered a Fields Medal in August 2006 and the Millennium Prize from the Clay Mathematics Institute in March 2010, but declined both.
== Exotic spheres ==
The generalized Poincaré conjecture is true topologically, but false smoothly in most dimensions. In fact, for odd dimensions, the smooth Poincaré conjecture is only true in dimensions 1, 3, 5 and 61. In even dimensions it is known that the smooth Poincaré conjecture is true in dimensions 2, 6, 12 and 56. This results from the construction of the exotic spheres, manifolds that are homeomorphic, but not diffeomorphic, to the standard sphere, which can be interpreted as non-standard smooth structures on the standard (topological) sphere.
Thus the homotopy spheres that John Milnor produced are homeomorphic (Top-isomorphic, and indeed piecewise linearly homeomorphic) to the standard sphere Sⁿ, but are not diffeomorphic (Diff-isomorphic) to it, and thus are exotic spheres.
Michel Kervaire and Milnor showed that the oriented 7-sphere has 28 different smooth structures (or 15 ignoring orientations), and in higher dimensions there are usually many different smooth structures on a sphere. It is suspected that certain differentiable structures on the 4-sphere, called Gluck twists, are not isomorphic to the standard one, but at the moment there are no known topological invariants capable of distinguishing different smooth structures on a 4-sphere.
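The count of 28 can be reproduced from the Kervaire–Milnor formula for the order of the cyclic group bP₄ₖ of homotopy (4k − 1)-spheres bounding parallelizable manifolds: |bP₄ₖ| = 2^(2k−2)(2^(2k−1) − 1) · numerator(4Bₖ/k) for k ≥ 2, where Bₖ is a Bernoulli number in the convention B₁ = 1/6, B₂ = 1/30, … . The sketch below hard-codes the first few Bernoulli numbers and is illustrative only; for k = 2 the group bP₈ is all of Θ₇, giving the 28 smooth structures on the 7-sphere.

```python
from fractions import Fraction

# First few Bernoulli numbers in the convention B_1 = 1/6, B_2 = 1/30, ...
# (the absolute values |B_{2k}| in the standard indexing).
B = {2: Fraction(1, 30), 3: Fraction(1, 42), 4: Fraction(1, 30)}

def bP_order(k):
    """Order of bP_{4k}, the cyclic group of homotopy (4k-1)-spheres that
    bound parallelizable manifolds (Kervaire-Milnor formula), for k >= 2."""
    return 2 ** (2 * k - 2) * (2 ** (2 * k - 1) - 1) * (4 * B[k] / k).numerator

print(bP_order(2))  # 28  -- all of Theta_7: the 28 smooth structures on S^7
print(bP_order(3))  # 992 -- likewise the order of Theta_11
```

The k = 3 value, 992, matches the number of smooth structures on the 11-sphere, a consistency check on the formula.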
== PL ==
For piecewise linear manifolds, the Poincaré conjecture is true except possibly in dimension 4, where the answer is unknown, and equivalent to the smooth case.
In other words, every compact PL manifold of dimension not equal to 4 that is homotopy equivalent to a sphere is PL isomorphic to a sphere.
== See also ==
OEIS: A001676
== References ==
In topology and mathematics in general, the boundary of a subset S of a topological space X is the set of points in the closure of S not belonging to the interior of S. An element of the boundary of S is called a boundary point of S. The term boundary operation refers to finding or taking the boundary of a set. Notations used for the boundary of a set S include bd(S), fr(S), and ∂S.
Some authors (for example Willard, in General Topology) use the term frontier instead of boundary in an attempt to avoid confusion with a different definition used in algebraic topology and the theory of manifolds. Despite widespread acceptance of the meaning of the terms boundary and frontier, they have sometimes been used to refer to other sets. For example, Metric Spaces by E. T. Copson uses the term boundary to refer to Hausdorff's border, which is defined as the intersection of a set with its boundary. Hausdorff also introduced the term residue, which is defined as the intersection of a set with the closure of the border of its complement.
== Definitions ==
There are several equivalent definitions for the boundary of a subset S ⊆ X of a topological space X, which will be denoted by ∂_X S, Bd_X S, or simply ∂S if X is understood:
It is the closure of S minus the interior of S in X: ∂S := cl_X S ∖ int_X S, where cl_X S (also written S̄) denotes the closure of S in X and int_X S denotes the topological interior of S in X.
It is the intersection of the closure of S with the closure of its complement: ∂S := cl_X S ∩ cl_X(X ∖ S).
It is the set of points p ∈ X such that every neighborhood of p contains at least one point of S and at least one point not of S: ∂S := {p ∈ X : for every neighborhood O of p, O ∩ S ≠ ∅ and O ∩ (X ∖ S) ≠ ∅}.
It is the set of all points in X that are in neither the interior nor the exterior of S: ∂S := X ∖ (int_X S ∪ ext_X S), where int_X S denotes the interior of S in X and ext_X S denotes the exterior of S in X.
A boundary point of a set is any element of that set's boundary. The boundary ∂_X S defined above is sometimes called the set's topological boundary to distinguish it from other similarly named notions such as the boundary of a manifold with boundary or the boundary of a manifold with corners, to name just a few examples.
A connected component of the boundary of S is called a boundary component of S.
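On a finite topological space, the equivalent definitions above can be checked by brute force. The following minimal sketch uses an illustrative four-point space and topology (not from the article): it computes ∂S as cl(S) ∖ int(S) and verifies, for every subset S, agreement with cl(S) ∩ cl(X ∖ S) and with the neighborhood characterization, along with two of the properties stated below.

```python
from itertools import combinations

# Illustrative topology on a four-point space (closed under unions and
# intersections, so `interior` and `closure` below are well defined).
X = frozenset("abcd")
opens = [frozenset(), frozenset("a"), frozenset("ab"), frozenset("abc"), X]

def interior(S):
    return max((U for U in opens if U <= S), key=len)          # largest open subset

def closure(S):
    return min((X - U for U in opens if X - U >= S), key=len)  # smallest closed superset

def boundary(S):
    return closure(S) - interior(S)                            # definition 1: cl(S) \ int(S)

for r in range(len(X) + 1):
    for combo in combinations(X, r):
        S = frozenset(combo)
        # definition 2: cl(S) ∩ cl(X \ S)
        assert boundary(S) == closure(S) & closure(X - S)
        # definition 3: every open neighborhood meets both S and X \ S
        nbhd = frozenset(p for p in X
                         if all(U & S and U & (X - S)
                                for U in opens if p in U))
        assert boundary(S) == nbhd
        # properties: cl(S) = S ∪ ∂S, and the interior/boundary/exterior trichotomy
        assert closure(S) == S | boundary(S)
        assert interior(S) | boundary(S) | interior(X - S) == X

print(sorted(boundary(frozenset("b"))))  # ['b', 'c', 'd']
```

Since every neighborhood of a point contains an open neighborhood, checking only the open sets containing p in definition 3 is enough.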
== Properties ==
The closure of a set S equals the union of the set with its boundary: cl_X S = S ∪ ∂_X S, where cl_X S denotes the closure of S in X.
A set is closed if and only if it contains its boundary, and open if and only if it is disjoint from its boundary. The boundary of a set is closed; this follows from the formula ∂_X S = cl_X S ∩ cl_X(X ∖ S), which expresses ∂_X S as the intersection of two closed subsets of X.
("Trichotomy") Given any subset S ⊆ X, each point of X lies in exactly one of the three sets int_X S, ∂_X S, and int_X(X ∖ S). Said differently, X = (int_X S) ∪ (∂_X S) ∪ (int_X(X ∖ S)), and these three sets are pairwise disjoint. Consequently, if these sets are nonempty then they form a partition of X.
A point p ∈ X is a boundary point of a set if and only if every neighborhood of p contains at least one point in the set and at least one point not in the set.
The boundary of the interior of a set as well as the boundary of the closure of a set are both contained in the boundary of the set.
== Examples ==
=== Characterizations and general examples ===
A set and its complement have the same boundary: ∂_X S = ∂_X(X ∖ S).
A set U is a dense open subset of X if and only if ∂_X U = X ∖ U.
The interior of the boundary of a closed set is empty.
Consequently, the interior of the boundary of the closure of a set is empty.
The interior of the boundary of an open set is also empty.
Consequently, the interior of the boundary of the interior of a set is empty.
In particular, if S ⊆ X is a closed or open subset of X, then there does not exist any nonempty subset U ⊆ ∂_X S that is open in X. This fact is important for the definition and use of nowhere dense subsets, meager subsets, and Baire spaces.
A set is the boundary of some open set if and only if it is closed and nowhere dense.
The boundary of a set is empty if and only if the set is both closed and open (that is, a clopen set).
=== Concrete examples ===
Consider the real line ℝ with the usual topology (that is, the topology whose basis sets are open intervals) and ℚ, the subset of rational numbers (whose topological interior in ℝ is empty). Then:
∂(0, 5) = ∂[0, 5) = ∂(0, 5] = ∂[0, 5] = {0, 5}
∂∅ = ∅
∂ℚ = ℝ
∂(ℚ ∩ [0, 1]) = [0, 1]
These last two examples illustrate the fact that the boundary of a dense set with empty interior is its closure. They also show that it is possible for the boundary ∂S of a subset S to contain a nonempty open subset of X := ℝ; that is, for the interior of ∂S in X to be nonempty. However, a closed subset's boundary always has an empty interior.
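The interval computations above can be reproduced with SymPy, whose set objects expose `boundary` and `closure` properties; the small sketch below covers only the interval cases (the boundary of ℚ is not attempted here).

```python
from sympy import Interval, FiniteSet

# All four intervals with endpoints 0 and 5 share the boundary {0, 5}.
for I in (Interval.open(0, 5),    # (0, 5)
          Interval.Ropen(0, 5),   # [0, 5)
          Interval.Lopen(0, 5),   # (0, 5]
          Interval(0, 5)):        # [0, 5]
    assert I.boundary == FiniteSet(0, 5)

# The closure of the open interval is the closed one.
assert Interval.open(0, 5).closure == Interval(0, 5)
print("interval boundaries check out")
```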
In the space of rational numbers with the usual topology (the subspace topology of ℝ), the boundary of (−∞, a), where a is irrational, is empty.
The boundary of a set is a topological notion and may change if one changes the topology. For example, given the usual topology on ℝ², the boundary of a closed disk Ω = {(x, y) : x² + y² ≤ 1} is the disk's surrounding circle: ∂Ω = {(x, y) : x² + y² = 1}. If the disk is viewed as a set in ℝ³ with its own usual topology, that is, Ω = {(x, y, 0) : x² + y² ≤ 1}, then the boundary of the disk is the disk itself: ∂Ω = Ω. If the disk is viewed as its own topological space (with the subspace topology of ℝ²), then the boundary of the disk is empty.
=== Boundary of an open ball vs. its surrounding sphere ===
This example demonstrates that the topological boundary of an open ball of radius r > 0 is not necessarily equal to the corresponding sphere of radius r (centered at the same point); it also shows that the closure of an open ball of radius r > 0 is not necessarily equal to the closed ball of radius r (again centered at the same point).
Denote the usual Euclidean metric on ℝ² by d((a, b), (x, y)) := √((x − a)² + (y − b)²), which induces on ℝ² the usual Euclidean topology.
Let X ⊆ ℝ² denote the union of the y-axis Y := {0} × ℝ with the unit circle S¹ := {p ∈ ℝ² : d(p, 0) = 1} = {(x, y) ∈ ℝ² : x² + y² = 1} centered at the origin 0 := (0, 0) ∈ ℝ²; that is, X := Y ∪ S¹, which is a topological subspace of ℝ² whose topology is equal to that induced by (the restriction of) the metric d.
In particular, the sets Y, S¹, Y ∩ S¹ = {(0, ±1)}, and {0} × [−1, 1] are all closed subsets of ℝ² and thus also closed subsets of its subspace X.
Henceforth, unless clearly indicated otherwise, every open ball, closed ball, and sphere is assumed to be centered at the origin 0 = (0, 0), and only the metric space (X, d) will be considered (not its superspace (ℝ², d)); this is a path-connected and locally path-connected complete metric space.
Denote the open ball of radius r > 0 in (X, d) by B_r := {p ∈ X : d(p, 0) < r}, so that when r = 1, the open unit ball B₁ = {0} × (−1, 1) is the open sub-interval of the y-axis strictly between y = −1 and y = 1.
The unit sphere in (X, d) ("unit" meaning that its radius is r = 1) is {p ∈ X : d(p, 0) = 1} = S¹, while the closed unit ball in (X, d) is the union of the open unit ball and the unit sphere centered at this same point: {p ∈ X : d(p, 0) ≤ 1} = S¹ ∪ ({0} × [−1, 1]).
However, the topological boundary ∂_X B₁ and topological closure cl_X B₁ in X of the open unit ball B₁ are: ∂_X B₁ = {(0, 1), (0, −1)} and cl_X B₁ = B₁ ∪ ∂_X B₁ = B₁ ∪ {(0, 1), (0, −1)} = {0} × [−1, 1].
In particular, the open unit ball's topological boundary ∂_X B₁ = {(0, 1), (0, −1)} is a proper subset of the unit sphere {p ∈ X : d(p, 0) = 1} = S¹ in (X, d). And the open unit ball's topological closure cl_X B₁ = B₁ ∪ {(0, 1), (0, −1)} is a proper subset of the closed unit ball {p ∈ X : d(p, 0) ≤ 1} = S¹ ∪ ({0} × [−1, 1]) in (X, d).
The point (1, 0) ∈ X, for instance, cannot belong to cl_X B₁ because there does not exist a sequence in B₁ = {0} × (−1, 1) that converges to it; the same reasoning generalizes to explain why no point in X outside of the closed sub-interval {0} × [−1, 1] belongs to cl_X B₁. Because the topological boundary of the set B₁ is always a subset of B₁'s closure, ∂_X B₁ must also be a subset of {0} × [−1, 1].
In any metric space (M, ρ), the topological boundary in M of an open ball of radius r > 0 centered at a point c ∈ M is always a subset of the sphere of radius r centered at that same point c; that is, ∂_M({m ∈ M : ρ(m, c) < r}) ⊆ {m ∈ M : ρ(m, c) = r} always holds.
Moreover, the unit sphere in (X, d) contains X ∖ Y = S¹ ∖ {(0, ±1)}, which is an open subset of X. This shows, in particular, that the unit sphere {p ∈ X : d(p, 0) = 1} = S¹ in (X, d) contains a nonempty open subset of X.
== Boundary of a boundary ==
For any set S, ∂S ⊇ ∂∂S, with equality holding if and only if the boundary of S has no interior points, which will be the case, for example, if S is either closed or open. Since the boundary of a set is closed, ∂∂S = ∂∂∂S for any set S. Thus the boundary operator satisfies a weakened kind of idempotence.
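Both identities can be verified exhaustively on a small finite space. The sketch below uses an illustrative four-point topology (not from the article) and checks ∂∂S ⊆ ∂S and ∂∂S = ∂∂∂S for every subset, plus equality ∂∂S = ∂S when S is open.

```python
from itertools import combinations

# Illustrative four-point space with a five-open-set topology (closed
# under unions and intersections).
X = frozenset("abcd")
opens = [frozenset(), frozenset("a"), frozenset("ab"), frozenset("abc"), X]
closed = [X - U for U in opens]

def boundary(S):
    cl_S = min((C for C in closed if C >= S), key=len)         # closure of S
    cl_comp = min((C for C in closed if C >= X - S), key=len)  # closure of X \ S
    return cl_S & cl_comp

for r in range(len(X) + 1):
    for combo in combinations(X, r):
        S = frozenset(combo)
        assert boundary(boundary(S)) <= boundary(S)            # ∂∂S ⊆ ∂S
        assert boundary(boundary(boundary(S))) == boundary(boundary(S))  # ∂∂∂S = ∂∂S

for U in opens:                                                # equality for open S
    assert boundary(boundary(U)) == boundary(U)

print("boundary-of-boundary identities hold")
```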
In discussing boundaries of manifolds or simplexes and their simplicial complexes, one often meets the assertion that the boundary of the boundary is always empty. Indeed, the construction of the singular homology rests critically on this fact. The explanation for the apparent incongruity is that the topological boundary (the subject of this article) is a slightly different concept from the boundary of a manifold or of a simplicial complex. For example, the boundary of an open disk viewed as a manifold is empty, as is its topological boundary viewed as a subset of itself, while its topological boundary viewed as a subset of the real plane is the circle surrounding the disk. Conversely, the boundary of a closed disk viewed as a manifold is the bounding circle, as is its topological boundary viewed as a subset of the real plane, while its topological boundary viewed as a subset of itself is empty. In particular, the topological boundary depends on the ambient space, while the boundary of a manifold is invariant.
== See also ==
See the discussion of boundary in topological manifold for more details.
Boundary of a manifold
Bounding point – Mathematical concept related to subsets of vector spaces
Closure (topology) – All points and limit points in a subset of a topological space
Exterior (topology) – Largest open set disjoint from some given set
Interior (topology) – Largest open subset of some given set
Nowhere dense set – Mathematical set whose closure has empty interior
Lebesgue's density theorem – Theorem in analysis, for measure-theoretic characterization and properties of boundary
Surface (topology) – Two-dimensional manifold
== Notes ==
== Citations ==
== References ==
Munkres, J. R. (2000). Topology. Prentice-Hall. ISBN 0-13-181629-2.
Willard, S. (1970). General Topology. Addison-Wesley. ISBN 0-201-08707-3.
van den Dries, L. (1998). Tame Topology. Cambridge University Press. ISBN 978-0521598385.
In the mathematical discipline of graph theory, the dual graph of a planar graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph.
Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized combinatorially by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces.
These notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph.
The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs.
Graph duality can help explain the structure of mazes and of drainage basins. Dual graphs have also been applied in computer vision, computational geometry, mesh generation, and the design of integrated circuits.
== Examples ==
=== Cycles and dipoles ===
The unique planar embedding of a cycle graph divides the plane into only two regions, the inside and outside of the cycle, by the Jordan curve theorem. However, in an n-cycle, these two regions are separated from each other by n different edges. Therefore, the dual graph of the n-cycle is a multigraph with two vertices (dual to the regions), connected to each other by n dual edges. Such a graph is called a multiple edge, linkage, or sometimes a dipole graph. Conversely, the dual to an n-edge dipole graph is an n-cycle.
=== Dual polyhedra ===
According to Steinitz's theorem, every polyhedral graph (the graph formed by the vertices and edges of a three-dimensional convex polyhedron) must be planar and 3-vertex-connected, and every 3-vertex-connected planar graph comes from a convex polyhedron in this way. Every three-dimensional convex polyhedron has a dual polyhedron; the dual polyhedron has a vertex for every face of the original polyhedron, with two dual vertices adjacent whenever the corresponding two faces share an edge. Whenever two polyhedra are dual, their graphs are also dual. For instance the Platonic solids come in dual pairs, with the octahedron dual to the cube, the dodecahedron dual to the icosahedron, and the tetrahedron dual to itself. Polyhedron duality can also be extended to duality of higher dimensional polytopes, but this extension of geometric duality does not have clear connections to graph-theoretic duality.
=== Self-dual graphs ===
A plane graph is said to be self-dual if it is isomorphic to its dual graph. The wheel graphs provide an infinite family of self-dual graphs coming from self-dual polyhedra (the pyramids). However, there also exist self-dual graphs that are not polyhedral, such as the one shown. Servatius & Christopher (1992) describe two operations, adhesion and explosion, that can be used to construct a self-dual graph containing a given planar graph; for instance, the self-dual graph shown can be constructed as the adhesion of a tetrahedron with its dual.
It follows from Euler's formula that every self-dual graph with n vertices has exactly 2n − 2 edges. Every simple self-dual planar graph contains at least four vertices of degree three, and every self-dual embedding has at least four triangular faces.
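Self-duality can be checked computationally from a combinatorial embedding. The sketch below hard-codes a rotation system for the wheel W₅ (an illustrative choice: hub plus 4-cycle), traces its faces, builds the dual from the faces on either side of each edge, and confirms both the 2n − 2 edge count and the isomorphism with the dual.

```python
from itertools import permutations

# Combinatorial embedding of the wheel W_5: hub 0 plus the 4-cycle 1-2-3-4.
# rotation[v] lists the neighbors of v in counterclockwise order.
rotation = {0: [1, 2, 3, 4], 1: [2, 0, 4], 2: [3, 0, 1],
            3: [4, 0, 2], 4: [1, 0, 3]}

def faces(rotation):
    # Face tracing: from half-edge (u, v), the next half-edge of the same
    # face is (v, w), where w follows u in the rotation at v.
    out, seen = [], set()
    for u in rotation:
        for v in rotation[u]:
            if (u, v) in seen:
                continue
            face, e = [], (u, v)
            while e not in seen:
                seen.add(e)
                face.append(e)
                a, b = e
                nbrs = rotation[b]
                e = (b, nbrs[(nbrs.index(a) + 1) % len(nbrs)])
            out.append(face)
    return out

F = faces(rotation)
V = len(rotation)
E = sum(len(n) for n in rotation.values()) // 2
assert V - E + len(F) == 2            # Euler's formula: 5 - 8 + 5 = 2

# Dual graph: one vertex per face; each primal edge {u, v} joins the faces
# containing the half-edges (u, v) and (v, u).
side = {e: i for i, f in enumerate(F) for e in f}
dual = {i: [] for i in range(len(F))}
for u in rotation:
    for v in rotation[u]:
        if u < v:
            dual[side[(u, v)]].append(side[(v, u)])
            dual[side[(v, u)]].append(side[(u, v)])

wheel = {v: sorted(rotation[v]) for v in rotation}

def isomorphic(g, h):
    gv, hv = sorted(g), sorted(h)
    return any(all(sorted(p[w] for w in g[v]) == sorted(h[p[v]]) for v in g)
               for p in (dict(zip(gv, perm)) for perm in permutations(hv)))

assert E == 2 * V - 2                 # a self-dual graph has 2n - 2 edges
assert isomorphic(dual, wheel)        # W_5 is isomorphic to its dual
print("W_5 is self-dual")
```

Tracing with the successor rule yields the four triangles and the outer 4-cycle; the outer face is adjacent to all four triangles, and each triangle to its two neighbors, which is again a wheel.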
== Properties ==
Many natural and important concepts in graph theory correspond to other equally natural but different concepts in the dual graph. Because the dual of the dual of a connected plane graph is isomorphic to the primal graph, each of these pairings is bidirectional: if concept X in a planar graph corresponds to concept Y in the dual graph, then concept Y in a planar graph corresponds to concept X in the dual.
=== Simple graphs versus multigraphs ===
The dual of a simple graph need not be simple: it may have self-loops (an edge with both endpoints at the same vertex) or multiple edges connecting the same two vertices, as was already evident in the example of dipole multigraphs being dual to cycle graphs. As a special case of the cut-cycle duality discussed below,
the bridges of a planar graph G are in one-to-one correspondence with the self-loops of the dual graph. For the same reason, a pair of parallel edges in a dual multigraph (that is, a length-2 cycle) corresponds to a 2-edge cutset in the primal graph (a pair of edges whose deletion disconnects the graph). Therefore, a planar graph is simple if and only if its dual has no 1- or 2-edge cutsets; that is, if it is 3-edge-connected. The simple planar graphs whose duals are simple are exactly the 3-edge-connected simple planar graphs. This class of graphs includes, but is not the same as, the class of 3-vertex-connected simple planar graphs. For instance, the figure showing a self-dual graph is 3-edge-connected (and therefore its dual is simple) but is not 3-vertex-connected.
=== Uniqueness ===
Because the dual graph depends on a particular embedding, the dual graph of a planar graph is not unique, in the sense that the same planar graph can have non-isomorphic dual graphs. In the picture, the blue graphs are isomorphic but their dual red graphs are not. The upper red dual has a vertex with degree 6 (corresponding to the outer face of the blue graph) while in the lower red graph all degrees are less than 6.
Hassler Whitney showed that if the graph is 3-connected then the embedding, and thus the dual graph, is unique. By Steinitz's theorem, these graphs are exactly the polyhedral graphs, the graphs of convex polyhedra. A planar graph is 3-vertex-connected if and only if its dual graph is 3-vertex-connected. Moreover, a planar biconnected graph has a unique embedding, and therefore also a unique dual, if and only if it is a subdivision of a 3-vertex-connected planar graph (a graph formed from a 3-vertex-connected planar graph by replacing some of its edges by paths).
For some planar graphs that are not 3-vertex-connected, such as the complete bipartite graph K2,4, the embedding is not unique, but all embeddings are isomorphic. When this happens, correspondingly, all dual graphs are isomorphic.
Because different embeddings may lead to different dual graphs, testing whether one graph is a dual of another (without already knowing their embeddings) is a nontrivial algorithmic problem. For biconnected graphs, it can be solved in polynomial time by using the SPQR trees of the graphs to construct a canonical form for the equivalence relation of having a shared mutual dual. For instance, the two red graphs in the illustration are equivalent according to this relation. However, for planar graphs that are not biconnected, this relation is not an equivalence relation and the problem of testing mutual duality is NP-complete.
=== Cuts and cycles ===
A cutset in an arbitrary connected graph is a subset of edges defined from a partition of the vertices into two subsets, by including an edge in the subset when it has one endpoint on each side of the partition. Removing the edges of a cutset necessarily splits the graph into at least two connected components. A minimal cutset (also called a bond) is a cutset with the property that no proper subset of it is itself a cutset. A minimal cutset of a connected graph necessarily separates its graph into exactly two components, and consists of the set of edges that have one endpoint in each component. A simple cycle is a connected subgraph in which each vertex of the cycle is incident to exactly two edges of the cycle.
In a connected planar graph G, every simple cycle of G corresponds to a minimal cutset in the dual of G, and vice versa. This can be seen as a form of the Jordan curve theorem: each simple cycle separates the faces of G into the faces in the interior of the cycle and the faces of the exterior of the cycle, and the duals of the cycle edges are exactly the edges that cross from the interior to the exterior. The girth of any planar graph (the size of its smallest cycle) equals the edge connectivity of its dual graph (the size of its smallest cutset).
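The girth/edge-connectivity relationship can be checked directly on a small dual pair. The following Python sketch (illustrative code, not from any cited source) hard-codes the cube graph and its plane dual, the octahedron, computes girth by breadth-first search, and finds edge connectivity by brute force over small edge sets:

```python
from collections import deque
from itertools import combinations

# Plane-dual pair: cube (girth 4, 3-edge-connected) and octahedron
# (girth 3, 4-edge-connected).  Cube vertices are the 3-bit integers
# 0..7; octahedron vertices are 0..5 with v and 5 - v opposite.
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
octa = {v: [u for u in range(6) if u != v and u + v != 5] for v in range(6)}

def girth(adj):
    """Length of a shortest cycle, via BFS from every vertex."""
    best = float("inf")
    for s in adj:
        dist, parent, q = {s: 0}, {s: None}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:   # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

def edge_connectivity(adj):
    """Smallest number of edges whose removal disconnects the graph
    (brute force; fine for these 12-edge examples)."""
    edges = list({frozenset((u, w)) for u in adj for w in adj[u]})
    def connected(removed):
        start = next(iter(adj))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if frozenset((u, w)) not in removed and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(adj)
    k = 1
    while True:
        if any(not connected(set(cut)) for cut in combinations(edges, k)):
            return k
        k += 1

# Girth of each graph equals the edge connectivity of its dual.
assert girth(cube) == edge_connectivity(octa) == 4
assert girth(octa) == edge_connectivity(cube) == 3
```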
This duality extends from individual cutsets and cycles to vector spaces defined from them. The cycle space of a graph is defined as the family of all subgraphs that have even degree at each vertex; it can be viewed as a vector space over the two-element finite field, with the symmetric difference of two sets of edges acting as the vector addition operation in the vector space. Similarly, the cut space of a graph is defined as the family of all cutsets, with vector addition defined in the same way. Then the cycle space of any planar graph and the cut space of its dual graph are isomorphic as vector spaces. Thus, the rank of a planar graph (the dimension of its cut space) equals the cyclomatic number of its dual (the dimension of its cycle space) and vice versa. A cycle basis of a graph is a set of simple cycles that form a basis of the cycle space (every even-degree subgraph can be formed in exactly one way as a symmetric difference of some of these cycles). For edge-weighted planar graphs (with sufficiently general weights that no two cycles have the same weight) the minimum-weight cycle basis of the graph is dual to the Gomory–Hu tree of the dual graph, a collection of nested cuts that together include a minimum cut separating each pair of vertices in the graph. Each cycle in the minimum-weight cycle basis has a set of edges that are dual to the edges of one of the cuts in the Gomory–Hu tree. When cycle weights may be tied, the minimum-weight cycle basis may not be unique, but in this case it is still true that the Gomory–Hu tree of the dual graph corresponds to one of the minimum-weight cycle bases of the graph.
In directed planar graphs, simple directed cycles are dual to directed cuts (partitions of the vertices into two subsets such that all edges go in one direction, from one subset to the other). Strongly oriented planar graphs (graphs whose underlying undirected graph is connected, and in which every edge belongs to a cycle) are dual to directed acyclic graphs in which no edge belongs to a cycle. To put this another way, the strong orientations of a connected planar graph (assignments of directions to the edges of the graph that result in a strongly connected graph) are dual to acyclic orientations (assignments of directions that produce a directed acyclic graph). In the same way, dijoins (sets of edges that include an edge from each directed cut) are dual to feedback arc sets (sets of edges that include an edge from each cycle).
=== Spanning trees ===
A spanning tree may be defined as a set of edges that, together with all of the vertices of the graph, forms a connected and acyclic subgraph. But, by cut-cycle duality, if a set S of edges in a planar graph G is acyclic (has no cycles), then the set of edges dual to S has no cuts, from which it follows that the complementary set of dual edges (the duals of the edges that are not in S) forms a connected subgraph. Symmetrically, if S is connected, then the edges dual to the complement of S form an acyclic subgraph. Therefore, when S has both properties – it is connected and acyclic – the same is true for the complementary set in the dual graph. That is, each spanning tree of G is complementary to a spanning tree of the dual graph, and vice versa. Thus, the edges of any planar graph and its dual can together be partitioned (in multiple different ways) into two spanning trees, one in the primal and one in the dual, that together extend to all the vertices and faces of the graph but never cross each other. In particular, the minimum spanning tree of G is complementary to the maximum spanning tree of the dual graph. However, this does not work for shortest path trees, even approximately: there exist planar graphs such that, for every pair of a spanning tree in the graph and a complementary spanning tree in the dual graph, at least one of the two trees has distances that are significantly longer than the distances in its graph.
An example of this type of decomposition into interdigitating trees can be seen in some simple types of mazes, with a single entrance and no disconnected components of its walls. In this case both the maze walls and the space between the walls take the form of a mathematical tree. If the free space of the maze is partitioned into simple cells (such as the squares of a grid) then this system of cells can be viewed as an embedding of a planar graph, in which the tree structure of the walls forms a spanning tree of the graph and the tree structure of the free space forms a spanning tree of the dual graph. Similar pairs of interdigitating trees can also be seen in the tree-shaped pattern of streams and rivers within a drainage basin and the dual tree-shaped pattern of ridgelines separating the streams.
This partition of the edges and their duals into two trees leads to a simple proof of Euler's formula V − E + F = 2 for planar graphs with V vertices, E edges, and F faces. Any spanning tree and its complementary dual spanning tree partition the edges into two subsets of V − 1 and F − 1 edges respectively, and adding the sizes of the two subsets gives the equation
E = (V − 1) + (F − 1)
which may be rearranged to form Euler's formula. According to Duncan Sommerville, this proof of Euler's formula is due to K. G. C. von Staudt's Geometrie der Lage (Nürnberg, 1847).
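The complementary-spanning-tree argument can be checked on a small example. The sketch below (illustrative Python, with a hand-coded plane embedding of the cube) derives the primal-dual edge correspondence from the face list, takes a BFS spanning tree of the primal graph, and verifies that the duals of the remaining edges form a spanning tree of the faces, yielding Euler's formula:

```python
from collections import deque

# The cube as a plane graph: vertices are 3-bit integers; the six faces
# are given as the 4-cycles bounding its squares.
faces = [
    (0, 1, 3, 2), (4, 5, 7, 6),   # bottom, top
    (0, 1, 5, 4), (2, 3, 7, 6),   # front, back
    (0, 2, 6, 4), (1, 3, 7, 5),   # left, right
]

# Each primal edge separates exactly two faces; that face pair is the
# corresponding dual edge.
edge_to_faces = {}
for f, cycle in enumerate(faces):
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        edge_to_faces.setdefault(frozenset((a, b)), set()).add(f)
assert all(len(fs) == 2 for fs in edge_to_faces.values())

def spanning_tree(vertices, edges):
    """BFS spanning tree, returned as a set of edges."""
    adj = {v: [] for v in vertices}
    for e in edges:
        a, b = tuple(e)
        adj[a].append((b, e))
        adj[b].append((a, e))
    start = next(iter(adj))
    seen, tree, q = {start}, set(), deque([start])
    while q:
        u = q.popleft()
        for w, e in adj[u]:
            if w not in seen:
                seen.add(w)
                tree.add(e)
                q.append(w)
    return tree

primal_edges = set(edge_to_faces)
tree = spanning_tree(range(8), primal_edges)

# The duals of the complementary edges form a spanning tree of the faces.
dual_tree = {frozenset(edge_to_faces[e]) for e in primal_edges - tree}
dual_span = spanning_tree(range(6), dual_tree)
assert len(tree) == 8 - 1 and len(dual_tree) == 6 - 1
assert dual_span == dual_tree             # connected and acyclic: a tree
# Euler's formula in the form E = (V - 1) + (F - 1).
assert len(primal_edges) == (8 - 1) + (6 - 1)
```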
In nonplanar surface embeddings the set of dual edges complementary to a spanning tree is not a dual spanning tree. Instead this set of edges is the union of a dual spanning tree with a small set of extra edges whose number is determined by the genus of the surface on which the graph is embedded. The extra edges, in combination with paths in the spanning trees, can be used to generate the fundamental group of the surface.
=== Additional properties ===
Any counting formula involving vertices and faces that is valid for all planar graphs may be transformed by planar duality into an equivalent formula in which the roles of the vertices and faces have been swapped. Euler's formula, which is self-dual, is one example. Another given by Harary involves the handshaking lemma, according to which the sum of the degrees of the vertices of any graph equals twice the number of edges. In its dual form, this lemma states that in a plane graph, the sum of the numbers of sides of the faces of the graph equals twice the number of edges.
The medial graph of a plane graph is isomorphic to the medial graph of its dual. Two planar graphs can have isomorphic medial graphs only if they are dual to each other.
A planar graph with four or more vertices is maximal (no more edges can be added while preserving planarity) if and only if its dual graph is both 3-vertex-connected and 3-regular.
A connected planar graph is Eulerian (has even degree at every vertex) if and only if its dual graph is bipartite. A Hamiltonian cycle in a planar graph G corresponds to a partition of the vertices of the dual graph into two subsets (the interior and exterior of the cycle) whose induced subgraphs are both trees. In particular, Barnette's conjecture on the Hamiltonicity of cubic bipartite polyhedral graphs is equivalent to the conjecture that every Eulerian maximal planar graph can be partitioned into two induced trees.
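The Eulerian-bipartite duality can be illustrated (in a hypothetical Python sketch, not from any cited source) with the octahedron, which is Eulerian, and its dual, the cube, which is bipartite:

```python
from collections import deque

octa = {v: [u for u in range(6) if u != v and u + v != 5] for v in range(6)}
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}

def all_even_degrees(adj):
    """A connected graph is Eulerian iff every degree is even."""
    return all(len(ns) % 2 == 0 for ns in adj.values())

def is_bipartite(adj):
    """Standard BFS 2-colouring test."""
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    q.append(w)
                elif colour[w] == colour[u]:
                    return False
    return True

# The 4-regular octahedron is Eulerian and its dual cube is bipartite;
# conversely the 3-regular cube is not Eulerian and the octahedron,
# full of triangles, is not bipartite.
assert all_even_degrees(octa) and is_bipartite(cube)
assert not all_even_degrees(cube) and not is_bipartite(octa)
```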
If a planar graph G has Tutte polynomial TG(x,y), then the Tutte polynomial of its dual graph is obtained by swapping x and y. For this reason, if some particular value of the Tutte polynomial provides information about certain types of structures in G, then swapping the arguments to the Tutte polynomial will give the corresponding information for the dual structures. For instance, the number of strong orientations is TG(0,2) and the number of acyclic orientations is TG(2,0). For bridgeless planar graphs, graph colorings with k colors correspond to nowhere-zero flows modulo k on the dual graph. For instance, the four color theorem (the existence of a 4-coloring for every planar graph) can be expressed equivalently as stating that the dual of every bridgeless planar graph has a nowhere-zero 4-flow. The number of k-colorings is counted (up to an easily computed factor) by the Tutte polynomial value TG(1 − k,0) and dually the number of nowhere-zero k-flows is counted by TG(0,1 − k).
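The x-y swap can be verified on a tiny dual pair with a deletion-contraction evaluator (an illustrative sketch with exponential running time, suitable only for very small graphs). Here the triangle K3 and its plane dual, a two-vertex multigraph with three parallel edges, are hard-coded as edge lists:

```python
def contract(edges, u, v):
    """Identify vertex v with u; parallel u-v edges become loops."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def reachable(u, v, edges):
    """Whether v can be reached from u using the given edges."""
    seen, stack = {u}, [u]
    while stack:
        a = stack.pop()
        for p, q in edges:
            if p == a and q not in seen:
                seen.add(q); stack.append(q)
            elif q == a and p not in seen:
                seen.add(p); stack.append(p)
    return v in seen

def tutte(edges):
    """Tutte polynomial by deletion-contraction, returned as a dict
    {(i, j): coeff} meaning coeff * x**i * y**j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                    # loop: factor of y
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    if not reachable(u, v, rest):                 # bridge: factor of x
        return {(i + 1, j): c
                for (i, j), c in tutte(contract(rest, u, v)).items()}
    result = dict(tutte(rest))                    # delete the edge
    for ij, c in tutte(contract(rest, u, v)).items():   # and contract it
        result[ij] = result.get(ij, 0) + c
    return result

triangle = [(0, 1), (1, 2), (0, 2)]     # K3: T = x^2 + x + y
theta = [(0, 1), (0, 1), (0, 1)]        # its plane dual: 3 parallel edges
swapped = {(j, i): c for (i, j), c in tutte(triangle).items()}
assert tutte(triangle) == {(2, 0): 1, (1, 0): 1, (0, 1): 1}
assert tutte(theta) == swapped          # the dual swaps x and y
```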
An st-planar graph is a connected planar graph together with a bipolar orientation of that graph, an orientation that makes it acyclic with a single source and a single sink, both of which are required to be on the same face as each other. Such a graph may be made into a strongly connected graph by adding one more edge, from the sink back to the source, through the outer face. The dual of this augmented planar graph is itself the augmentation of another st-planar graph.
== Variations ==
=== Directed graphs ===
In a directed plane graph, the dual graph may be made directed as well, by orienting each dual edge by a 90° clockwise turn from the corresponding primal edge. Strictly speaking, this construction is not a duality of directed planar graphs, because starting from a graph G and taking the dual twice does not return to G itself, but instead constructs a graph isomorphic to the transpose graph of G, the graph formed from G by reversing all of its edges. Taking the dual four times returns to the original graph.
=== Weak dual ===
The weak dual of a plane graph is the subgraph of the dual graph whose vertices correspond to the bounded faces of the primal graph. A plane graph is outerplanar if and only if its weak dual is a forest. For any plane graph G, let G+ be the plane multigraph formed by adding a single new vertex v in the unbounded face of G, and connecting v to each vertex of the outer face (multiple times, if a vertex appears multiple times on the boundary of the outer face); then, G is the weak dual of the (plane) dual of G+.
=== Infinite graphs and tessellations ===
The concept of duality applies as well to infinite graphs embedded in the plane as it does to finite graphs. However, care is needed to avoid topological complications such as points of the plane that are neither part of an open region disjoint from the graph nor part of an edge or vertex of the graph. When all faces are bounded regions surrounded by a cycle of the graph, an infinite planar graph embedding can also be viewed as a tessellation of the plane, a covering of the plane by closed disks (the tiles of the tessellation) whose interiors (the faces of the embedding) are disjoint open disks. Planar duality gives rise to the notion of a dual tessellation, a tessellation formed by placing a vertex at the center of each tile and connecting the centers of adjacent tiles.
The concept of a dual tessellation can also be applied to partitions of the plane into finitely many regions. It is closely related to but not quite the same as planar graph duality in this case. For instance, the Voronoi diagram of a finite set of point sites is a partition of the plane into polygons within which one site is closer than any other. The sites on the convex hull of the input give rise to unbounded Voronoi polygons, two of whose sides are infinite rays rather than finite line segments. The dual of this diagram is the Delaunay triangulation of the input, a planar graph that connects two sites by an edge whenever there exists a circle that contains those two sites and no other sites. The edges of the convex hull of the input are also edges of the Delaunay triangulation, but they correspond to rays rather than line segments of the Voronoi diagram. This duality between Voronoi diagrams and Delaunay triangulations can be turned into a duality between finite graphs in either of two ways: by adding an artificial vertex at infinity to the Voronoi diagram, to serve as the other endpoint for all of its rays, or by treating the bounded part of the Voronoi diagram as the weak dual of the Delaunay triangulation. Although the Voronoi diagram and Delaunay triangulation are dual, their embedding in the plane may have additional crossings beyond the crossings of dual pairs of edges. Each vertex of the Delaunay triangulation is positioned within its corresponding face of the Voronoi diagram. Each vertex of the Voronoi diagram is positioned at the circumcenter of the corresponding triangle of the Delaunay triangulation, but this point may lie outside its triangle.
=== Nonplanar embeddings ===
The concept of duality can be extended to graph embeddings on two-dimensional manifolds other than the plane. The definition is the same: there is a dual vertex for each connected component of the complement of the graph in the manifold, and a dual edge for each graph edge connecting the two dual vertices on either side of the edge. In most applications of this concept, it is restricted to embeddings with the property that each face is a topological disk; this constraint generalizes the requirement for planar graphs that the graph be connected. With this constraint, the dual of any surface-embedded graph has a natural embedding on the same surface, such that the dual of the dual is isomorphic to, and isomorphically embedded as, the original graph. For instance, the complete graph K7 is a toroidal graph: it is not planar but can be embedded in a torus, with each face of the embedding being a triangle. This embedding has the Heawood graph as its dual graph.
The same concept works equally well for non-orientable surfaces. For instance, K6 can be embedded in the projective plane with ten triangular faces as the hemi-icosahedron, whose dual is the Petersen graph embedded as the hemi-dodecahedron.
Even planar graphs may have nonplanar embeddings, with duals derived from those embeddings that differ from their planar duals. For instance, the four Petrie polygons of a cube (hexagons formed by removing two opposite vertices of the cube) form the hexagonal faces of an embedding of the cube in a torus. The dual graph of this embedding has four vertices forming a complete graph K4 with doubled edges. In the torus embedding of this dual graph, the six edges incident to each vertex, in cyclic order around that vertex, cycle twice through the three other vertices. In contrast to the situation in the plane, this embedding of the cube and its dual is not unique; the cube graph has several other torus embeddings, with different duals.
Many of the equivalences between primal and dual graph properties of planar graphs fail to generalize to nonplanar duals, or require additional care in their generalization.
Another operation on surface-embedded graphs is the Petrie dual, which uses the Petrie polygons of the embedding as the faces of a new embedding. Unlike the usual dual graph, it has the same vertices as the original graph, but generally lies on a different surface.
Surface duality and Petrie duality are two of the six Wilson operations, and together generate the group of these operations.
=== Matroids and algebraic duals ===
An algebraic dual of a connected graph G is a graph G* such that G and G* have the same set of edges, any cycle of G is a cut of G*, and any cut of G is a cycle of G*. Every planar graph has an algebraic dual, which is in general not unique (any dual defined by a plane embedding will do). The converse is also true, as established by Hassler Whitney in Whitney's planarity criterion:
A connected graph G is planar if and only if it has an algebraic dual.
The same fact can be expressed in the theory of matroids. If M is the graphic matroid of a graph G, then a graph G* is an algebraic dual of G if and only if the graphic matroid of G* is the dual matroid of M. Then Whitney's planarity criterion can be rephrased as stating that the dual matroid of a graphic matroid M is itself a graphic matroid if and only if the underlying graph G of M is planar. If G is planar, the dual matroid is the graphic matroid of the dual graph of G. In particular, all dual graphs, for all the different planar embeddings of G, have isomorphic graphic matroids.
For nonplanar surface embeddings, unlike planar duals, the dual graph is not generally an algebraic dual of the primal graph. And for a non-planar graph G, the dual matroid of the graphic matroid of G is not itself a graphic matroid. However, it is still a matroid whose circuits correspond to the cuts in G, and in this sense can be thought of as a combinatorially generalized algebraic dual of G.
The duality between Eulerian and bipartite planar graphs can be extended to binary matroids (which include the graphic matroids derived from planar graphs): a binary matroid is Eulerian if and only if its dual matroid is bipartite.
The two dual concepts of girth and edge connectivity are unified in matroid theory by matroid girth: the girth of the graphic matroid of a planar graph is the same as the graph's girth, and the girth of the dual matroid (the graphic matroid of the dual graph) is the edge connectivity of the graph.
== Applications ==
Along with its use in graph theory, the duality of planar graphs has applications in several other areas of mathematical and computational study.
In geographic information systems, flow networks (such as the networks showing how water flows in a system of streams and rivers) are dual to cellular networks describing drainage divides. This duality can be explained by modeling the flow network as a spanning tree on a grid graph of an appropriate scale, and modeling the drainage divide as the complementary spanning tree of ridgelines on the dual grid graph.
In computer vision, digital images are partitioned into small square pixels, each of which has its own color. The dual graph of this subdivision into squares has a vertex per pixel and an edge between pairs of pixels that share an edge; it is useful for applications including clustering of pixels into connected regions of similar colors.
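A minimal sketch of this kind of pixel clustering (illustrative Python, with a toy character-valued "image" standing in for real pixel data) finds connected regions of equal colour by flood fill over the 4-neighbour pixel adjacency graph:

```python
# A tiny "image": each character is a pixel colour.  The pixel adjacency
# graph has one vertex per pixel and an edge between side-sharing pixels.
image = ["aabb",
         "aabb",
         "ccbb"]

def colour_regions(image):
    """Connected components of same-coloured pixels (flood fill)."""
    h, w = len(image), len(image[0])
    seen, components = set(), []
    for y in range(h):
        for x in range(w):
            if (x, y) in seen:
                continue
            seen.add((x, y))
            component, stack = [], [(x, y)]
            while stack:
                cx, cy = stack.pop()
                component.append((cx, cy))
                for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                               (cx, cy + 1), (cx, cy - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and (nx, ny) not in seen
                            and image[ny][nx] == image[cy][cx]):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            components.append(component)
    return components

regions = colour_regions(image)
assert sorted(len(r) for r in regions) == [2, 4, 6]   # c, a, b regions
```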
In computational geometry, the duality between Voronoi diagrams and Delaunay triangulations implies that any algorithm for constructing a Voronoi diagram can be immediately converted into an algorithm for the Delaunay triangulation, and vice versa. The same duality can also be used in finite element mesh generation. Lloyd's algorithm, a method based on Voronoi diagrams for moving a set of points on a surface to more evenly spaced positions, is commonly used as a way to smooth a finite element mesh described by the dual Delaunay triangulation. This method improves the mesh by making its triangles more uniformly sized and shaped.
In the synthesis of CMOS circuits, the function to be synthesized is represented as a formula in Boolean algebra. Then this formula is translated into two series–parallel multigraphs. These graphs can be interpreted as circuit diagrams in which the edges of the graphs represent transistors, gated by the inputs to the function. One circuit computes the function itself, and the other computes its complement. One of the two circuits is derived by converting the conjunctions and disjunctions of the formula into series and parallel compositions of graphs, respectively. The other circuit reverses this construction, converting the conjunctions and disjunctions of the formula into parallel and series compositions of graphs. These two circuits, augmented by an additional edge connecting the input of each circuit to its output, are planar dual graphs.
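The complementarity of the two circuits can be sketched abstractly (illustrative Python, ignoring all electrical detail): series composition conducts when both parts conduct, parallel when either does, and the dual construction with complemented devices computes the complement of the function:

```python
from itertools import product

def evaluate(expr, env):
    """Boolean value of a formula given as nested ('and'|'or', l, r)
    tuples with string variable names at the leaves."""
    if isinstance(expr, str):
        return env[expr]
    op, l, r = expr
    a, b = evaluate(l, env), evaluate(r, env)
    return (a and b) if op == "and" else (a or b)

def conducts(expr, env, dual):
    """Whether the transistor network for expr conducts.
    dual=False: AND -> series, OR -> parallel, devices conduct on True.
    dual=True:  the reversed construction with complemented devices."""
    if isinstance(expr, str):
        return (not env[expr]) if dual else env[expr]
    op, l, r = expr
    a, b = conducts(l, env, dual), conducts(r, env, dual)
    series = (op == "and") != dual     # the dual swaps series and parallel
    return (a and b) if series else (a or b)

formula = ("or", ("and", "a", "b"), "c")
for bits in product([False, True], repeat=3):
    env = dict(zip("abc", bits))
    # One network conducts exactly when the function is true,
    # the dual network exactly when it is false.
    assert conducts(formula, env, dual=False) == evaluate(formula, env)
    assert conducts(formula, env, dual=True) == (not evaluate(formula, env))
```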
== History ==
The duality of convex polyhedra was recognized by Johannes Kepler in his 1619 book Harmonices Mundi.
Recognizable planar dual graphs, outside the context of polyhedra, appeared as early as 1725, in Pierre Varignon's posthumously published work, Nouvelle Méchanique ou Statique. This was even before Leonhard Euler's 1736 work on the Seven Bridges of Königsberg that is often taken to be the first work on graph theory. Varignon analyzed the forces on static systems of struts by drawing a graph dual to the struts, with edge lengths proportional to the forces on the struts; this dual graph is a type of Cremona diagram. In connection with the four color theorem, the dual graphs of maps (subdivisions of the plane into regions) were mentioned by Alfred Kempe in 1879, and extended to maps on non-planar surfaces by Lothar Heffter in 1891. Duality as an operation on abstract planar graphs was introduced by Hassler Whitney in 1931.
== Notes ==
== External links ==
Weisstein, Eric W., "Dual graph", MathWorld
A graph database (GDB) is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A key concept of the system is the graph (or edge or relationship). The graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. Graph databases treat the relationships between data as a priority. Querying relationships is fast because they are persistently stored in the database. Relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data.
Graph databases are commonly referred to as a NoSQL database. Graph databases are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction and lack easy traversal over a chain of edges.
The underlying storage mechanism of graph databases can vary. Relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. Some depend on a relational engine and store the graph data in a table (although a table is a logical element, therefore this approach imposes a level of abstraction between the graph database management system and physical storage devices). Others use a key–value store or document-oriented database for storage, making them inherently NoSQL structures.
As of 2021, no graph query language has been universally adopted in the same way as SQL was for relational databases, and there are a wide variety of systems, many of which are tightly tied to one product. Some early standardization efforts led to multi-vendor query languages like Gremlin, SPARQL, and Cypher. In September 2019 a proposal for a project to create a new standard graph query language (ISO/IEC 39075 Information Technology — Database Languages — GQL) was approved by members of ISO/IEC Joint Technical Committee 1 (ISO/IEC JTC 1). GQL is intended to be a declarative database query language, like SQL. In addition to having query language interfaces, some graph databases are accessed through application programming interfaces (APIs).
Graph databases differ from graph compute engines. Graph databases are the graph counterparts of relational online transaction processing (OLTP) databases. Graph compute engines, on the other hand, are used in online analytical processing (OLAP) for bulk analysis. Graph databases attracted considerable attention in the 2000s, due to the successes of major technology corporations in using proprietary graph databases, along with the introduction of open-source graph databases.
One study concluded that an RDBMS was "comparable" in performance to existing graph analysis engines at executing graph queries.
== History ==
In the mid-1960s, navigational databases such as IBM's IMS supported tree-like structures in its hierarchical model, but the strict tree structure could be circumvented with virtual records.
Graph structures could be represented in network model databases from the late 1960s. CODASYL, which had defined COBOL in 1959, defined the Network Database Language in 1969.
Labeled graphs could be represented in graph databases from the mid-1980s, such as the Logical Data Model.
Commercial object databases (ODBMSs) emerged in the early 1990s. In 2000, the Object Data Management Group published a standard language for defining object and relationship (graph) structures in their ODMG'93 publication.
Several improvements to graph databases appeared in the early 1990s, accelerating in the late 1990s with endeavors to index web pages.
In the mid-to-late 2000s, commercial graph databases with ACID guarantees such as Neo4j and Oracle Spatial and Graph became available.
In the 2010s, commercial ACID graph databases that could be scaled horizontally became available. Further, SAP HANA brought in-memory and columnar technologies to graph databases. Also in the 2010s, multi-model databases that supported graph models (and other models such as relational database or document-oriented database) became available, such as OrientDB, ArangoDB, and MarkLogic (starting with its 7.0 version). During this time, graph databases of various types have become especially popular with social network analysis with the advent of social media companies. Also during the decade, cloud-based graph databases such as Amazon Neptune and Neo4j AuraDB became available.
== Background ==
Graph databases portray data as it is viewed conceptually. This is accomplished by mapping the data items to nodes and their relationships to edges.
A graph database is a database that is based on graph theory. It consists of a set of objects, which can be a node or an edge.
Nodes represent entities or instances such as people, businesses, accounts, or any other item to be tracked. They are roughly the equivalent of a record, relation, or row in a relational database, or a document in a document-store database.
Edges, also termed relationships, are the lines that connect nodes to other nodes, representing the relationship between them. Meaningful patterns emerge when examining the connections and interconnections of nodes, properties and edges. The edges can either be directed or undirected. In an undirected graph, an edge connecting two nodes has a single meaning. In a directed graph, the edges connecting two different nodes have different meanings, depending on their direction. Edges are the key concept in graph databases, representing an abstraction that is not directly implemented in a relational model or a document-store model.
Properties are information associated with nodes. For example, if Wikipedia were one of the nodes, it might be tied to properties such as website, reference material, or words that start with the letter w, depending on which aspects of Wikipedia are germane to a given database.
== Graph models ==
=== Labeled-property graph ===
A labeled-property graph model is represented by a set of nodes, relationships, properties, and labels. Both nodes of data and their relationships are named and can store properties represented by key–value pairs. Nodes can be labelled so that they can be grouped. The edges representing the relationships have two qualities: they always have a start node and an end node, and are directed, making the graph a directed graph. Relationships can also have properties. This is useful in providing additional metadata and semantics to relationships of the nodes. Direct storage of relationships allows a constant-time traversal.
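The ingredients of the model can be sketched in a few lines of Python (an illustrative toy store, not the API of any real graph database): nodes carry labels and key-value properties, relationships are directed, typed, and may carry properties, and traversal walks a stored adjacency list directly:

```python
class PropertyGraph:
    """Minimal in-memory labelled-property graph (illustrative only)."""

    def __init__(self):
        self.nodes = {}   # node id -> {"labels": set, "props": dict}
        self.out = {}     # node id -> [(relationship type, props, target id)]

    def add_node(self, node_id, *labels, **props):
        self.nodes[node_id] = {"labels": set(labels), "props": props}
        self.out[node_id] = []

    def add_relationship(self, src, rel_type, dst, **props):
        # Relationships are directed, typed, and may carry properties.
        self.out[src].append((rel_type, props, dst))

    def neighbours(self, node_id, rel_type=None):
        # Constant time per hop: just walk the stored adjacency list.
        return [dst for t, _, dst in self.out[node_id]
                if rel_type is None or t == rel_type]

g = PropertyGraph()
g.add_node("alice", "Person", name="Alice")
g.add_node("bob", "Person", name="Bob")
g.add_relationship("alice", "KNOWS", "bob", since=2010)

assert g.neighbours("alice", "KNOWS") == ["bob"]
assert g.nodes["bob"]["props"]["name"] == "Bob"
```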
=== Resource Description Framework (RDF) ===
In an RDF graph model, each addition of information is represented with a separate node. For example, imagine a scenario where a user has to add a name property for a person represented as a distinct node in the graph. In a labeled-property graph model, this would be done with an addition of a name property into the node of the person. However, in RDF, the user has to add a separate node called hasName connecting it to the original person node. Specifically, an RDF graph model is composed of nodes and arcs. An RDF graph notation or a statement is represented by: a node for the subject, a node for the object, and an arc for the predicate. A node may be blank, a literal, or identified by a URI. An arc may also be identified by a URI. A literal for a node may be of two types: plain (untyped) and typed. A plain literal has a lexical form and optionally a language tag. A typed literal is made up of a string with a URI that identifies a particular datatype. A blank node may be used to accurately illustrate the state of the data when the data does not have a URI.
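An RDF dataset reduces to a set of (subject, predicate, object) statements, which can be sketched directly (illustrative Python; the ex: names are hypothetical URIs, not a real vocabulary):

```python
# RDF data as a set of (subject, predicate, object) triples.  The extra
# indirection described above shows up as separate hasName statements.
triples = {
    ("ex:alice", "ex:hasName", '"Alice"'),
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:bob", "ex:hasName", '"Bob"'),
}

def match(s=None, p=None, o=None):
    """All triples matching a pattern; None acts as a wildcard."""
    return {(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

# Names of everyone Alice knows: a two-step join over the triple store.
known = {o for _, _, o in match(s="ex:alice", p="ex:knows")}
names = {o for s in known for _, _, o in match(s=s, p="ex:hasName")}
assert names == {'"Bob"'}
```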
== Properties ==
Graph databases are a powerful tool for graph-like queries, for example computing the shortest path between two nodes in the graph. Other graph-like queries can be performed over a graph database in a natural way (for example, computing the graph's diameter or detecting communities).
Graphs are flexible, meaning they allow the user to insert new data into the existing graph without loss of application functionality. There is no need for the designer of the database to plan out extensive details of the database's future use cases.
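The shortest-path query mentioned above reduces to breadth-first search over an adjacency structure. A minimal sketch (illustrative Python with made-up node names):

```python
from collections import deque

# A toy social graph as adjacency lists.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": ["frank"],
    "erin": ["frank"],
    "frank": [],
}

def shortest_path(adj, start, goal):
    """Fewest-hops path from start to goal, by breadth-first search."""
    prev, q = {start: None}, deque([start])
    while q:
        u = q.popleft()
        if u == goal:                      # walk predecessors back to start
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for w in adj[u]:
            if w not in prev:
                prev[w] = u
                q.append(w)
    return None

path = shortest_path(graph, "alice", "frank")
assert path[0] == "alice" and path[-1] == "frank" and len(path) == 4
```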
=== Storage ===
The underlying storage mechanism of graph databases can vary. Some depend on a relational engine and "store" the graph data in a table (although a table is a logical element, therefore this approach imposes another level of abstraction between the graph database, the graph database management system and the physical devices where the data is actually stored). Others use a key–value store or document-oriented database for storage, making them inherently NoSQL structures. A node is represented like any other document, but an edge linking two different nodes holds special attributes inside its document: a _from and a _to attribute.
=== Index-free adjacency ===
Data lookup performance is dependent on the access speed from one particular node to another. Because index-free adjacency requires each node to store direct references (physical memory addresses) to its adjacent nodes, retrieval is fast. A native graph system with index-free adjacency does not have to move through any other type of data structure to find links between the nodes. Directly related nodes in a graph are stored in the cache once one of the nodes is retrieved, making the data lookup even faster than the first time a user fetches a node. However, this advantage comes at a cost. Index-free adjacency sacrifices the efficiency of queries that do not use graph traversals. Native graph databases use index-free adjacency to process CRUD operations on the stored data.
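The idea can be sketched in Python (an illustrative analogy: object references play the role of the direct physical addresses, so each hop is a pointer dereference rather than an index lookup):

```python
class Node:
    """Node holding direct references to its neighbours, so traversal
    follows pointers instead of consulting a global index."""
    __slots__ = ("key", "neighbours")

    def __init__(self, key):
        self.key = key
        self.neighbours = []   # direct references to adjacent Node objects

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours.append(b)
b.neighbours.append(c)

def two_hop_keys(start):
    # Each hop is a reference dereference; no per-hop index lookup.
    return [n2.key for n1 in start.neighbours for n2 in n1.neighbours]

assert two_hop_keys(a) == ["c"]
```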
== Applications ==
Multiple categories of graphs, classified by the kind of data they model, have been recognised. Gartner suggests five broad categories of graphs:
Social graph: this is about the connections between people; examples include Facebook, Twitter, and the idea of six degrees of separation.
Intent graph: this deals with reasoning and motivation.
Consumption graph: also known as the "payment graph", the consumption graph is heavily used in the retail industry. E-commerce companies such as Amazon, eBay and Walmart use consumption graphs to track the consumption of individual customers.
Interest graph: this maps a person's interests and is often complemented by a social graph. It has the potential to follow the previous revolution of web organization by mapping the web by interest rather than indexing webpages.
Mobile graph: this is built from mobile data. Mobile data in the future may include data from the web, applications, digital wallets, GPS, and Internet of Things (IoT) devices.
== Comparison with relational databases ==
Since Edgar F. Codd's 1970 paper on the relational model, relational databases have been the de facto industry standard for large-scale data storage systems. Relational models require a strict schema and data normalization which separates data into many tables and removes any duplicate data within the database. Data is normalized in order to preserve data consistency and support ACID transactions. However, this imposes limitations on how relationships can be queried.
One of the relational model's design motivations was to achieve fast row-by-row access. Problems arise when there is a need to form complex relationships between the stored data. Although relationships can be analyzed with the relational model, complex queries performing many join operations on many different attributes over several tables are required. Foreign key constraints must also be considered when retrieving relationships, causing additional overhead.
Compared with relational databases, graph databases are often faster for associative data sets and map more directly to the structure of object-oriented applications. They can scale more naturally to large datasets as they do not typically need join operations, which can often be expensive. As they depend less on a rigid schema, they are marketed as more suitable to manage ad hoc and changing data with evolving schemas.
Conversely, relational database management systems are typically faster at performing the same operation on large numbers of data elements, permitting the manipulation of the data in its natural structure. Despite the graph databases' advantages and recent popularity over relational databases, it is recommended that the graph model itself not be the sole reason to replace an existing relational database. A graph database may become relevant if there is evidence of a performance improvement by orders of magnitude and lower latency.
=== Examples ===
The relational model gathers data together using information in the data. For example, one might look for all the "users" whose phone number contains the area code "311". This would be done by searching selected datastores, or tables, looking in the selected phone number fields for the string "311". This can be a time-consuming process in large tables, so relational databases offer indexes, which allow data to be stored in a smaller sub-table, containing only the selected data and a unique key (or primary key) of the record. If the phone numbers are indexed, the same search would occur in the smaller index table, gathering the keys of matching records, and then looking in the main data table for the records with those keys. Usually, a table is stored in a way that allows a lookup via a key to be very fast.
Relational databases do not inherently contain the idea of fixed relationships between records. Instead, related data is linked to each other by storing one record's unique key in another record's data. For example, a table containing email addresses for users might hold a data item called userpk, which contains the primary key of the user record it is associated with. In order to link users and their email addresses, the system first looks up the selected user records' primary keys, looks for those keys in the userpk column in the email table (or, more likely, an index of them), extracts the email data, and then links the user and email records to make composite records containing all the selected data. This operation, termed a join, can be computationally expensive. Depending on the complexity of the query, the number of joins, and the indexing of various keys, the system may have to search through multiple tables and indexes and then sort the results to match them together.
In contrast, graph databases directly store the relationships between records. Instead of an email address being found by looking up its user's key in the userpk column, the user record contains a pointer that directly refers to the email address record. That is, having selected a user, the pointer can be followed directly to the email records; there is no need to search the email table to find the matching records. This can eliminate the costly join operations. For example, if one searches for all of the email addresses for users in area code "311", the engine would first perform a conventional search to find the users in "311", but then retrieve the email addresses by following the links found in those records. A relational database would first find all the users in "311", extract a list of the primary keys, perform another search for any records in the email table with those primary keys, and link the matching records together. For these types of common operations, graph databases would theoretically be faster.
The true value of the graph approach becomes evident when one performs searches that are more than one level deep. For example, consider a search for users who have "subscribers" (a table linking users to other users) in the "311" area code. In this case a relational database has to first search for all the users with an area code in "311", then search the subscribers table for any of those users, and then finally search the users table to retrieve the matching users. In contrast, a graph database would search for all the users in "311", then follow the backlinks through the subscriber relationship to find the subscriber users. This avoids several searches, look-ups, and the memory usage involved in holding all of the temporary data from multiple records needed to construct the output. In terms of big O notation, this query would take O(log n) + O(1) time, i.e., proportional to the logarithm of the size of the data. In contrast, the relational version would require multiple O(log n) lookups, plus the O(n) time needed to join all of the data records.
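The contrast described above can be sketched in a few lines of Python. The data, names, and helper functions here are hypothetical illustrations, not any particular database's implementation: the relational style connects records by key lookups and joins, while the graph style follows pointers stored in the records themselves.

```python
# Hypothetical in-memory data contrasting the two retrieval styles.

# Relational style: records reference each other only by key, so the
# engine must scan and join tables to connect them.
users = {1: {"name": "Ann", "area": "311"},
         2: {"name": "Bob", "area": "415"}}
subscribers = [(1, 2)]  # (user_id, subscriber_user_id) rows

def subscribers_of_area_relational(area):
    # 1) scan users for the area, 2) "join" against the subscribers table,
    # 3) look each subscriber id back up in the users table
    ids = {uid for uid, u in users.items() if u["area"] == area}
    return [users[s] for uid, s in subscribers if uid in ids]

# Graph style: each node holds direct references to its neighbours,
# so the join and the second lookup collapse into pointer-following.
ann = {"name": "Ann", "area": "311", "subscribers": []}
bob = {"name": "Bob", "area": "415", "subscribers": []}
ann["subscribers"].append(bob)  # the edge is stored as a direct pointer

def subscribers_of_area_graph(nodes, area):
    return [s for n in nodes if n["area"] == area for s in n["subscribers"]]

print([u["name"] for u in subscribers_of_area_relational("311")])  # -> ['Bob']
print([s["name"] for s in subscribers_of_area_graph([ann, bob], "311")])  # -> ['Bob']
```

Both functions return the same answer; the difference is that the graph version never consults the subscribers table at all.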
The relative advantage of graph retrieval grows with the complexity of a query. For example, one might want to know "that movie about submarines with the actor who was in that movie with that other actor that played the lead in Gone With the Wind". This first requires the system to find the actors in Gone With the Wind, find all the movies they were in, find all the actors in all of those movies who were not the lead in Gone With the Wind, and then find all of the movies they were in, finally filtering that list to those with descriptions containing "submarine". In a relational database, this would require several separate searches through the movies and actors tables, doing another search on submarine movies, finding all the actors in those movies, and then comparing the (large) collected results. In contrast, the graph database would walk from Gone With the Wind to Clark Gable, gather the links to the movies he has been in, gather the links out of those movies to other actors, and then follow the links out of those actors back to the list of movies. The resulting list of movies can then be searched for "submarine". All of this can be done via one search.
Properties add another layer of abstraction to this structure that also improves many common queries. Properties are essentially labels that can be applied to any record, or in some cases, edges as well. For example, one might label Clark Gable as "actor", which would then allow the system to quickly find all the records that are actors, as opposed to director or camera operator. If labels on edges are allowed, one could also label the relationship between Gone With the Wind and Clark Gable as "lead", and by performing a search on people that are "lead" "actor" in the movie Gone With the Wind, the database would produce Vivien Leigh, Olivia de Havilland and Clark Gable. The equivalent SQL query would have to rely on added data in the table linking people and movies, adding more complexity to the query syntax. These sorts of labels may improve search performance under certain circumstances, but are generally more useful in providing added semantic data for end users.
Relational databases are very well suited to flat data layouts, where relationships between data are only one or two levels deep. For example, an accounting database might need to look up all the line items for all the invoices for a given customer, a three-join query. Graph databases are aimed at datasets that contain many more links. They are especially well suited to social networking systems, where the "friends" relationship is essentially unbounded. These properties make graph databases naturally suited to types of searches that are increasingly common in online systems, and in big data environments. For this reason, graph databases are becoming very popular for large online systems like Facebook, Google, Twitter, and similar systems with deep links between records.
To further illustrate, imagine a relational model with two tables: a people table (which has a person_id and person_name column) and a friend table (with friend_id and person_id, which is a foreign key from the people table). In this case, searching for all of Jack's friends would result in the following SQL query.
The same query may be translated into:
Cypher, a graph database query language
SPARQL, an RDF graph database query language standardized by W3C and used in multiple RDF Triple and Quad stores
SPASQL, a hybrid database query language, that extends SQL with SPARQL
The above examples are a simple illustration of a basic relationship query. They illustrate how the query complexity of relational models increases with the total amount of data. In comparison, a graph database query is easily able to sort through the relationship graph to present the results.
There are also results indicating that the simple, condensed, and declarative queries of graph databases do not necessarily provide good performance in comparison to relational databases. While graph databases offer an intuitive representation of data, relational databases offer better results when set operations are needed.
== List of graph databases ==
The following is a list of notable graph databases:
== Graph query-programming languages ==
AQL (ArangoDB Query Language): a SQL-like query language used in ArangoDB for both documents and graphs
Cypher Query Language (Cypher): a declarative graph query language for Neo4j that enables ad hoc and programmatic (SQL-like) access to the graph
GQL: proposed ISO standard graph query language
GraphQL: an open-source data query and manipulation language for APIs. Dgraph implements a modified GraphQL language called DQL (formerly GraphQL+-)
Gremlin: a graph programming language that is a part of Apache TinkerPop open-source project
SPARQL: a query language for RDF databases that can retrieve and manipulate data stored in RDF format
Regular path queries: a theoretical language for queries on graph databases
== See also ==
Graph transformation
Hierarchical database model
Datalog
Vadalog
Object database
RDF Database
Structured storage
Text graph
Wikidata is a Wikipedia sister project that stores data in a graph database. Ordinary web browsing allows for viewing nodes, following edges, and running SPARQL queries.
== References ==
In the mathematical field of graph theory, a path graph (or linear graph) is a graph whose vertices can be listed in the order v1, v2, ..., vn such that the edges are {vi, vi+1} where i = 1, 2, ..., n − 1. Equivalently, a path with at least two vertices is connected and has two terminal vertices (vertices of degree 1), while all others (if any) have degree 2.
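A minimal sketch of the definition, confirming the degree characterization just stated for a small path (the vertex labels 0 through n − 1 are an arbitrary choice):

```python
# Build the path graph P_n as the edge list {(v_i, v_{i+1})} and check that
# the two terminal vertices have degree 1 while all others have degree 2.
def path_graph(n):
    return [(i, i + 1) for i in range(n - 1)]  # vertices 0 .. n-1

def degrees(n, edges):
    d = [0] * n
    for u, v in edges:
        d[u] += 1
        d[v] += 1
    return d

d = degrees(5, path_graph(5))
print(d)  # -> [1, 2, 2, 2, 1]
```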
Paths are often important in their role as subgraphs of other graphs, in which case they are called paths in that graph. A path is a particularly simple example of a tree, and in fact the paths are exactly the trees in which no vertex has degree 3 or more. A disjoint union of paths is called a linear forest.
Paths are fundamental concepts of graph theory, described in the introductory sections of most graph theory texts. See, for example, Bondy and Murty (1976), Gibbons (1985), or Diestel (2005).
== As Dynkin diagrams ==
In algebra, path graphs appear as the Dynkin diagrams of type A. As such, they classify the root system of type A and the Weyl group of type A, which is the symmetric group.
== See also ==
Path (graph theory)
Ladder graph
Caterpillar tree
Complete graph
Null graph
Path decomposition
Cycle (graph theory)
== References ==
Bondy, J. A.; Murty, U. S. R. (1976). Graph Theory with Applications. North Holland. pp. 12–21. ISBN 0-444-19451-7.
Diestel, Reinhard (2005). Graph Theory (3rd ed.). Graduate Texts in Mathematics, vol. 173, Springer-Verlag. pp. 6–9. ISBN 3-540-26182-6.
== External links ==
Weisstein, Eric W. "Path Graph". MathWorld.
In the mathematical field of graph theory, a complete graph is a simple undirected graph in which every pair of distinct vertices is connected by a unique edge. A complete digraph is a directed graph in which every pair of distinct vertices is connected by a pair of unique edges (one in each direction).
Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete graphs, with their vertices placed on the points of a regular polygon, had already appeared in the 13th century, in the work of Ramon Llull. Such a drawing is sometimes referred to as a mystic rose.
== Properties ==
The complete graph on n vertices is denoted by Kn. Some sources claim that the letter K in this notation stands for the German word komplett, but the German name for a complete graph, vollständiger Graph, does not contain the letter K, and other sources state that the notation honors the contributions of Kazimierz Kuratowski to graph theory.
Kn has n(n − 1)/2 edges (a triangular number), and is a regular graph of degree n − 1. All complete graphs are their own maximal cliques. They are maximally connected as the only vertex cut which disconnects the graph is the complete set of vertices. The complement graph of a complete graph is an empty graph.
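The edge count and regularity stated above can be checked directly for a small case (K6 here is an arbitrary choice):

```python
from itertools import combinations

# K_n: every pair of distinct vertices is an edge.
def complete_graph(n):
    return list(combinations(range(n), 2))

n = 6
edges = complete_graph(n)
assert len(edges) == n * (n - 1) // 2              # triangular number: 15 for K_6
degree = [sum(1 for e in edges if v in e) for v in range(n)]
assert all(d == n - 1 for d in degree)             # K_n is (n-1)-regular
print(len(edges))  # -> 15
```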
If the edges of a complete graph are each given an orientation, the resulting directed graph is called a tournament.
Kn can be decomposed into n trees Ti such that Ti has i vertices. Ringel's conjecture asks if the complete graph K2n+1 can be decomposed into copies of any tree with n edges. This is known to be true for sufficiently large n.
The number of all distinct paths between a specific pair of vertices in Kn+2 is given by
wn+2 = n! · en = ⌊e · n!⌋,
where e refers to Euler's number and en = 1/0! + 1/1! + ⋯ + 1/n!.
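The formula can be checked numerically by brute-force enumeration of simple paths between a fixed pair of vertices (shown for n = 4, that is, K6):

```python
import math
from itertools import permutations

def paths_in_complete_graph(n_plus_2):
    # Paths from vertex 0 to vertex 1 visit any subset of the remaining
    # vertices in any order; every such ordering is a path in K_{n+2},
    # since all edges exist.
    others = range(2, n_plus_2)
    count = 0
    for r in range(len(others) + 1):
        for _mid in permutations(others, r):
            count += 1
    return count

n = 4
brute = paths_in_complete_graph(n + 2)
e_n = sum(1 / math.factorial(k) for k in range(n + 1))
assert brute == round(math.factorial(n) * e_n) == math.floor(math.e * math.factorial(n))
print(brute)  # -> 65
```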
The number of matchings of the complete graphs are given by the telephone numbers
1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, 35696, 140152, 568504, 2390480, 10349536, 46206736, ... (sequence A000085 in the OEIS).
These numbers give the largest possible value of the Hosoya index for an n-vertex graph. The number of perfect matchings of the complete graph Kn (with n even) is given by the double factorial (n − 1)!!.
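Both counts can be verified with the standard recurrence T(n) = T(n − 1) + (n − 1)·T(n − 2) for the telephone numbers, together with a direct double factorial:

```python
# Telephone numbers: T(n) counts all matchings (including the empty one)
# of the complete graph K_n.
def telephone(n):
    a, b = 1, 1  # T(0), T(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b if n >= 1 else 1

assert [telephone(n) for n in range(9)] == [1, 1, 2, 4, 10, 26, 76, 232, 764]

# (n-1)!! counts the perfect matchings of K_n for even n.
def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

assert double_factorial(6 - 1) == 15  # K_6 has 15 perfect matchings
print(telephone(8))  # -> 764
```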
The crossing numbers up to K27 are known, with K28 requiring either 7233 or 7234 crossings. Further values are collected by the Rectilinear Crossing Number project. Rectilinear crossing numbers for Kn are
0, 0, 0, 0, 1, 3, 9, 19, 36, 62, 102, 153, 229, 324, 447, 603, 798, 1029, 1318, 1657, 2055, 2528, 3077, 3699, 4430, 5250, 6180, ... (sequence A014540 in the OEIS).
== Geometry and topology ==
A complete graph with n nodes represents the edges of an (n − 1)-simplex. Geometrically K3 forms the edge set of a triangle, K4 a tetrahedron, etc. The Császár polyhedron, a nonconvex polyhedron with the topology of a torus, has the complete graph K7 as its skeleton. Every neighborly polytope in four or more dimensions also has a complete skeleton.
K1 through K4 are all planar graphs. However, every planar drawing of a complete graph with five or more vertices must contain a crossing, and the nonplanar complete graph K5 plays a key role in the characterizations of planar graphs: by Kuratowski's theorem, a graph is planar if and only if it contains neither K5 nor the complete bipartite graph K3,3 as a subdivision, and by Wagner's theorem the same result holds for graph minors in place of subdivisions. As part of the Petersen family, K6 plays a similar role as one of the forbidden minors for linkless embedding. In other words, and as Conway and Gordon proved, every embedding of K6 into three-dimensional space is intrinsically linked, with at least one pair of linked triangles. Conway and Gordon also showed that any three-dimensional embedding of K7 contains a Hamiltonian cycle that is embedded in space as a nontrivial knot.
== Examples ==
Complete graphs on n vertices, for n between 1 and 12, are shown below along with the numbers of edges:
== See also ==
Fully connected network, in computer networking
Complete bipartite graph (or biclique), a special bipartite graph where every vertex on one side of the bipartition is connected to every vertex on the other side
The simplex, which is identical to a complete graph of n + 1 vertices, where n is the dimension of the simplex.
== References ==
== External links ==
Weisstein, Eric W. "Complete Graph". MathWorld.
In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into two or more isolated subgraphs. It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network.
== Connected vertices and graphs ==
In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1 (that is, they are the endpoints of a single edge), the vertices are called adjacent.
A graph is said to be connected if every pair of vertices in the graph is connected. This means that there is a path between every pair of vertices. An undirected graph that is not connected is called disconnected. An undirected graph G is therefore disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected or unilateral (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v.
== Components and cuts ==
A connected component is a maximal connected subgraph of an undirected graph. Each vertex belongs to exactly one connected component, as does each edge. A graph is connected if and only if it has exactly one connected component.
The strong components are the maximal strongly connected subgraphs of a directed graph.
A vertex cut or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The vertex connectivity κ(G) (where G is not a complete graph) is the size of a smallest vertex cut. A graph is called k-vertex-connected or k-connected if its vertex connectivity is k or greater.
More precisely, any graph G (complete or not) is said to be k-vertex-connected if it contains at least k + 1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but κ(Kn) = n − 1.
A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v.
2-connectivity is also called biconnectivity and 3-connectivity is also called triconnectivity. A graph G which is connected but not 2-connected is sometimes called separable.
Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric. A graph is called k-edge-connected if its edge connectivity is k or greater.
A graph is said to be maximally connected if its connectivity equals its minimum degree. A graph is said to be maximally edge-connected if its edge-connectivity equals its minimum degree.
=== Super- and hyper-connectivity ===
A graph is said to be super-connected or super-κ if every minimum vertex cut isolates a vertex. A graph is said to be hyper-connected or hyper-κ if the deletion of each minimum vertex cut creates exactly two components, one of which is an isolated vertex. A graph is semi-hyper-connected or semi-hyper-κ if any minimum vertex cut separates the graph into exactly two components.
More precisely: a connected graph G is said to be super-connected or super-κ if all minimum vertex-cuts consist of the vertices adjacent to one (minimum-degree) vertex.
A connected graph G is said to be super-edge-connected or super-λ if all minimum edge-cuts consist of the edges incident on some (minimum-degree) vertex.
A cutset X of G is called a non-trivial cutset if X does not contain the neighborhood N(u) of any vertex u ∉ X. Then the superconnectivity κ1 of G is κ1(G) = min{|X| : X is a non-trivial cutset}.
A non-trivial edge-cut and the edge-superconnectivity λ1(G) are defined analogously.
== Menger's theorem ==
One of the most important facts about connectivity in graphs is Menger's theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices.
If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex (other than u and v themselves). Similarly, the collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v).
Menger's theorem asserts that for distinct vertices u,v, λ(u, v) equals λ′(u, v), and if u is also not adjacent to v then κ(u, v) equals κ′(u, v). This fact is actually a special case of the max-flow min-cut theorem.
== Computational aspects ==
The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected (for example, by using a disjoint-set data structure), or to count the number of connected components. A simple algorithm might be written in pseudo-code as follows:
Begin at any arbitrary node of the graph G.
Proceed from that node using either depth-first or breadth-first search, counting all nodes reached.
Once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected; otherwise it is disconnected.
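The three steps above can be sketched as a breadth-first search (the adjacency-dict representation is an arbitrary choice):

```python
from collections import deque

def is_connected(adj):
    """adj: dict mapping each vertex to an iterable of its neighbours."""
    if not adj:
        return True
    start = next(iter(adj))          # 1. begin at an arbitrary node
    seen = {start}
    queue = deque([start])
    while queue:                     # 2. breadth-first search, counting nodes
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)     # 3. compare with the total node count

print(is_connected({0: [1], 1: [0, 2], 2: [1]}))  # -> True
print(is_connected({0: [1], 1: [0], 2: []}))      # -> False
```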
By Menger's theorem, for any two vertices u and v in a connected graph G, the numbers κ(u, v) and λ(u, v) can be determined efficiently using the max-flow min-cut algorithm. The connectivity and edge-connectivity of G can then be computed as the minimum values of κ(u, v) and λ(u, v), respectively.
In computational complexity theory, SL is the class of problems log-space reducible to the problem of determining whether two vertices in a graph are connected, which was proved to be equal to L by Omer Reingold in 2004. Hence, undirected graph connectivity may be solved in O(log n) space.
The problem of computing the probability that a Bernoulli random graph is connected is called network reliability and the problem of computing whether two given vertices are connected the ST-reliability problem. Both of these are #P-hard.
=== Number of connected graphs ===
The number of distinct connected labeled graphs with n nodes is tabulated in the On-Line Encyclopedia of Integer Sequences as sequence A001187. The first few non-trivial terms are
== Examples ==
The vertex- and edge-connectivities of a disconnected graph are both 0.
1-connectedness is equivalent to connectedness for graphs of at least two vertices.
The complete graph on n vertices has edge-connectivity equal to n − 1. Every other simple graph on n vertices has strictly smaller edge-connectivity.
In a tree, the local edge-connectivity between any two distinct vertices is 1.
== Bounds on connectivity ==
The vertex-connectivity of a graph is less than or equal to its edge-connectivity. That is, κ(G) ≤ λ(G).
The edge-connectivity for a graph with at least 2 vertices is less than or equal to the minimum degree of the graph because removing all the edges that are incident to a vertex of minimum degree will disconnect that vertex from the rest of the graph.
For a vertex-transitive graph of degree d, we have: 2(d + 1)/3 ≤ κ(G) ≤ λ(G) = d.
For a vertex-transitive graph of degree d ≤ 4, or for any (undirected) minimal Cayley graph of degree d, or for any symmetric graph of degree d, both kinds of connectivity are equal: κ(G) = λ(G) = d.
== Other properties ==
Connectedness is preserved by graph homomorphisms.
If G is connected then its line graph L(G) is also connected.
A graph G is 2-edge-connected if and only if it has an orientation that is strongly connected.
Balinski's theorem states that the polytopal graph (1-skeleton) of a k-dimensional convex polytope is a k-vertex-connected graph. Steinitz's previous theorem that any 3-vertex-connected planar graph is a polytopal graph (Steinitz's theorem) gives a partial converse.
According to a theorem of G. A. Dirac, if a graph is k-connected for k ≥ 2, then for every set of k vertices in the graph there is a cycle that passes through all the vertices in the set. The converse is true when k = 2.
== See also ==
Algebraic connectivity
Cheeger constant (graph theory)
Dynamic connectivity, Disjoint-set data structure
Expander graph
Strength of a graph
== References ==
In graph theory, an orientation of an undirected graph is an assignment of a direction to each edge, turning the initial graph into a directed graph.
== Oriented graphs ==
A directed graph is called an oriented graph if none of its pairs of vertices is linked by two mutually symmetric edges. Among directed graphs, the oriented graphs are the ones that have no 2-cycles (that is, at most one of (x, y) and (y, x) may be an arrow of the graph).
A tournament is an orientation of a complete graph. A polytree is an orientation of an undirected tree. Sumner's conjecture states that every tournament with 2n – 2 vertices contains every polytree with n vertices.
The number of non-isomorphic oriented graphs with n vertices (for n = 1, 2, 3, …) is
1, 2, 7, 42, 582, 21480, 2142288, 575016219, 415939243032, … (sequence A001174 in the OEIS).
Tournaments are in one-to-one correspondence with complete directed graphs (graphs in which there is a directed edge in one or both directions between every pair of distinct vertices). A complete directed graph can be converted to an oriented graph by removing every 2-cycle, and conversely an oriented graph can be converted to a complete directed graph by adding a 2-cycle between every pair of vertices that are not endpoints of an edge; these correspondences are bijective. Therefore, the same sequence of numbers also solves the graph enumeration problem for complete digraphs. There is an explicit but complicated formula for the numbers in this sequence.
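One direction of the correspondence described above, stripping every 2-cycle from a directed graph, is a simple set operation. A minimal sketch (the example arcs are arbitrary):

```python
# Remove every 2-cycle from a set of arcs, leaving an oriented graph.
def remove_two_cycles(arcs):
    s = set(arcs)
    return {(u, v) for (u, v) in s if (v, u) not in s}

# Complete digraph on 3 vertices, with one direction of two of the pairs
# dropped beforehand so that some one-way arcs survive the stripping:
complete_digraph = {(u, v) for u in range(3) for v in range(3) if u != v}
arcs = remove_two_cycles(complete_digraph - {(1, 0), (2, 1)})
print(sorted(arcs))  # -> [(0, 1), (1, 2)]
```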
== Constrained orientations ==
A strong orientation is an orientation that results in a strongly connected graph. The closely related totally cyclic orientations are orientations in which every edge belongs to at least one simple cycle. An orientation of an undirected graph G is totally cyclic if and only if it is a strong orientation of every connected component of G. Robbins' theorem states that a graph has a strong orientation if and only if it is 2-edge-connected; disconnected graphs may have totally cyclic orientations, but only if they have no bridges.
An acyclic orientation is an orientation that results in a directed acyclic graph. Every graph has an acyclic orientation; all acyclic orientations may be obtained by placing the vertices into a sequence, and then directing each edge from the earlier of its endpoints in the sequence to the later endpoint. The Gallai–Hasse–Roy–Vitaver theorem states that a graph has an acyclic orientation in which the longest path has at most k vertices if and only if it can be colored with at most k colors. Acyclic orientations and totally cyclic orientations are related to each other by planar duality. An acyclic orientation with a single source and a single sink is called a bipolar orientation.
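The ordering construction described above is easy to state in code (the vertex names and the example ordering are arbitrary):

```python
# Direct each undirected edge from its earlier endpoint in the given
# vertex ordering to its later endpoint; the result is always acyclic,
# since every arc points "forwards" in the ordering.
def acyclic_orientation(edges, order):
    pos = {v: i for i, v in enumerate(order)}
    return [(u, v) if pos[u] < pos[v] else (v, u) for u, v in edges]

# Triangle on {0, 1, 2} with the ordering 2, 0, 1:
arcs = acyclic_orientation([(0, 1), (1, 2), (0, 2)], [2, 0, 1])
print(arcs)  # -> [(0, 1), (2, 1), (2, 0)]
```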
A transitive orientation is an orientation such that the resulting directed graph is its own transitive closure. The graphs with transitive orientations are called comparability graphs; they may be defined from a partially ordered set by making two elements adjacent whenever they are comparable in the partial order. A transitive orientation, if one exists, can be found in linear time. However, testing whether the resulting orientation (or any given orientation) is actually transitive requires more time, as it is equivalent in complexity to matrix multiplication.
An Eulerian orientation of an undirected graph is an orientation in which each vertex has equal in-degree and out-degree. Eulerian orientations of grid graphs arise in statistical mechanics in the theory of ice-type models.
A Pfaffian orientation has the property that certain even-length cycles in the graph have an odd number of edges oriented in each of the two directions around the cycle. They always exist for planar graphs, but not for certain other graphs. They are used in the FKT algorithm for counting perfect matchings.
== See also ==
Connex relation
== References ==
== External links ==
Weisstein, Eric W., "Graph Orientation", MathWorld
Weisstein, Eric W., "Oriented Graph", MathWorld
In graph theory, a cograph, or complement-reducible graph, or P4-free graph, is a graph that can be generated from the single-vertex graph K1 by complementation and disjoint union. That is, the family of cographs is the smallest class of graphs that includes K1 and is closed under complementation and disjoint union.
Cographs have been discovered independently by several authors since the 1970s; early references include Jung (1978), Lerchs (1971), Seinsche (1974), and Sumner (1974). They have also been called D*-graphs, hereditary Dacey graphs (after the related work of James C. Dacey Jr. on orthomodular lattices), and 2-parity graphs.
They have a simple structural decomposition involving disjoint union and complement graph operations that can be represented concisely by a labeled tree and used algorithmically to efficiently solve many problems, such as finding a maximum clique, that are hard on more general graph classes.
Special types of cograph include complete graphs, complete bipartite graphs, cluster graphs, and threshold graphs. Cographs are, in turn, special cases of the distance-hereditary graphs, permutation graphs, comparability graphs, and perfect graphs.
== Definition ==
=== Recursive construction ===
Any cograph may be constructed using the following rules:
any single-vertex graph is a cograph;
if G is a cograph, so is its complement, G̅;
if G and H are cographs, so is their disjoint union, G ∪ H.
The cographs may be defined as the graphs that can be constructed using these operations, starting from the single-vertex graphs.
Alternatively, instead of using the complement operation, one can use the join operation, which consists of forming the disjoint union G ∪ H and then adding an edge between every pair of a vertex from G and a vertex from H.
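The two primitive operations of this construction can be sketched in a few lines; a minimal illustration with hypothetical helper names, representing a graph as a (vertex-set, edge-set) pair with edges as frozensets:

```python
def single(v):
    """Single-vertex cograph K1."""
    return ({v}, set())

def disjoint_union(g, h):
    """Disjoint union of two graphs with disjoint vertex sets."""
    (vg, eg), (vh, eh) = g, h
    assert not vg & vh, "vertex sets must be disjoint"
    return (vg | vh, eg | eh)

def join(g, h):
    """Join: disjoint union plus all edges between the two vertex sets."""
    (vg, eg), (vh, eh) = g, h
    cross = {frozenset((u, v)) for u in vg for v in vh}
    return (vg | vh, eg | eh | cross)

# The complete graph K3 is the join of three single vertices:
k3 = join(join(single(1), single(2)), single(3))
assert k3[0] == {1, 2, 3} and len(k3[1]) == 3
```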
=== Other characterizations ===
Several alternative characterizations of cographs can be given. Among them:
A cograph is a graph which does not contain the path P4 on 4 vertices (and hence of length 3) as an induced subgraph. That is, a graph is a cograph if and only if for any four vertices v₁, v₂, v₃, v₄, if {v₁, v₂}, {v₂, v₃} and {v₃, v₄} are edges of the graph then at least one of {v₁, v₃}, {v₁, v₄} or {v₂, v₄} is also an edge.
A cograph is a graph all of whose induced subgraphs have the property that any maximal clique intersects any maximal independent set in a single vertex.
A cograph is a graph in which every nontrivial induced subgraph has at least two vertices with the same neighbourhoods.
A cograph is a graph in which every connected induced subgraph has a disconnected complement.
A cograph is a graph all of whose connected induced subgraphs have diameter at most 2.
A cograph is a graph in which every connected component is a distance-hereditary graph with diameter at most 2.
A cograph is a graph with clique-width at most 2.
A cograph is a comparability graph of a series-parallel partial order.
A cograph is a permutation graph of a separable permutation.
A cograph is a graph all of whose minimal chordal completions are trivially perfect graphs.
A cograph is a hereditarily well-colored graph, a graph such that every greedy coloring of every induced subgraph uses an optimal number of colors.
A graph is a cograph if and only if every vertex order of the graph is a perfect order, since having no P4 means that no obstruction to a perfect order will exist in any vertex order.
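The first characterization above (no induced P4) yields a brute-force recognition test; a sketch that is far slower than the linear-time recognition algorithms but adequate for small graphs (the (vertex-set, edge-set) encoding is an illustrative assumption):

```python
from itertools import combinations, permutations

def has_induced_p4(vertices, edges):
    """Brute-force search for an induced path on 4 vertices."""
    adj = lambda u, v: frozenset((u, v)) in edges
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            # a-b-c-d must be edges; a-c, a-d, b-d must be non-edges
            if adj(a, b) and adj(b, c) and adj(c, d) and \
               not adj(a, c) and not adj(a, d) and not adj(b, d):
                return True
    return False

def is_cograph(vertices, edges):
    return not has_induced_p4(vertices, edges)

# The 4-vertex path is not a cograph; the 4-cycle (= K2,2) is.
p4 = ({1, 2, 3, 4}, {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]})
c4 = ({1, 2, 3, 4}, {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]})
assert not is_cograph(*p4)
assert is_cograph(*c4)
```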
== Cotrees ==
A cotree is a tree in which the internal nodes are labeled with the numbers 0 and 1. Every cotree T defines a cograph G having the leaves of T as vertices, and in which the subtree rooted at each node of T corresponds to the induced subgraph in G defined by the set of leaves descending from that node:
A subtree consisting of a single leaf node corresponds to an induced subgraph with a single vertex.
A subtree rooted at a node labeled 0 corresponds to the union of the subgraphs defined by the children of that node.
A subtree rooted at a node labeled 1 corresponds to the join of the subgraphs defined by the children of that node; that is, we form the union and add an edge between every two vertices corresponding to leaves in different subtrees. Alternatively, the join of a set of graphs can be viewed as formed by complementing each graph, forming the union of the complements, and then complementing the resulting union.
An equivalent way of describing the cograph formed from a cotree is that two vertices are connected by an edge if and only if the lowest common ancestor of the corresponding leaves is labeled by 1. Conversely, every cograph can be represented in this way by a cotree. If we require the labels on any root-leaf path of this tree to alternate between 0 and 1, this representation is unique.
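The correspondence between a cotree and its cograph can be sketched as follows, with a cotree encoded as nested tuples (label, children) and leaves as plain vertex names (a hypothetical encoding, not a standard API); two vertices become adjacent exactly when they first meet under a 1-node:

```python
def cograph_edges(node):
    """Return (leaves, edges) of the cograph defined by a cotree node.
    A node is either a leaf (vertex name) or a tuple (label, children)
    with label 0 (union) or 1 (join)."""
    if not isinstance(node, tuple):          # leaf
        return {node}, set()
    label, children = node
    leaves, edges = set(), set()
    for child in children:
        cl, ce = cograph_edges(child)
        if label == 1:                       # join: connect to earlier children
            edges |= {frozenset((u, v)) for u in leaves for v in cl}
        leaves |= cl
        edges |= ce
    return leaves, edges

# K2,2 is the join of two independent pairs:
leaves, edges = cograph_edges((1, [(0, ['a', 'b']), (0, ['c', 'd'])]))
assert leaves == {'a', 'b', 'c', 'd'} and len(edges) == 4
```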
== Computational properties ==
Cographs may be recognized in linear time, and a cotree representation constructed, using modular decomposition, partition refinement, LexBFS, or split decomposition. Once a cotree representation has been constructed, many familiar graph problems may be solved via simple bottom-up calculations on the cotrees.
For instance, to find the maximum clique in a cograph, compute in bottom-up order the maximum clique in each subgraph represented by a subtree of the cotree. For a node labeled 0, the maximum clique is the maximum among the cliques computed for that node's children. For a node labeled 1, the maximum clique is the union of the cliques computed for that node's children, and has size equal to the sum of the children's clique sizes. Thus, by alternately maximizing and summing values stored at each node of the cotree, we may compute the maximum clique size, and by alternately maximizing and taking unions, we may construct the maximum clique itself. Similar bottom-up tree computations allow the maximum independent set, vertex coloring number, maximum clique cover, and Hamiltonicity (that is the existence of a Hamiltonian cycle) to be computed in linear time from a cotree representation of a cograph. Because cographs have bounded clique-width, Courcelle's theorem may be used to test any property in the monadic second-order logic of graphs (MSO1) on cographs in linear time.
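The alternating maximize/sum computation for the maximum clique size can be sketched with a nested-tuple cotree encoding (leaves are plain vertex names, internal nodes are (label, children) tuples; an illustrative encoding):

```python
def max_clique_size(node):
    """Bottom-up maximum clique size on a cotree:
    take the maximum at 0-nodes (union), the sum at 1-nodes (join)."""
    if not isinstance(node, tuple):          # leaf: clique of size 1
        return 1
    label, children = node
    sizes = [max_clique_size(c) for c in children]
    return sum(sizes) if label == 1 else max(sizes)

# K2,2 = join of two independent pairs: maximum clique size 2.
assert max_clique_size((1, [(0, ['a', 'b']), (0, ['c', 'd'])])) == 2
# A disjoint union of two vertices has maximum clique size 1.
assert max_clique_size((0, ['x', 'y'])) == 1
```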
The problem of testing whether a given graph is k vertices away and/or t edges away from a cograph is fixed-parameter tractable. Deciding if a graph can be k-edge-deleted to a cograph can be solved in O*(2.562k) time, and k-edge-edited to a cograph in O*(4.612k). If the largest induced cograph subgraph of a graph can be found by deleting k vertices from the graph, it can be found in O*(3.115k) time.
Two cographs are isomorphic if and only if their cotrees (in the canonical form with no two adjacent vertices with the same label) are isomorphic. Because of this equivalence, one can determine in linear time whether two cographs are isomorphic, by constructing their cotrees and applying a linear time isomorphism test for labeled trees.
If H is an induced subgraph of a cograph G, then H is itself a cograph; the cotree for H may be formed by removing some of the leaves from the cotree for G and then suppressing nodes that have only one child. It follows from Kruskal's tree theorem that the relation of being an induced subgraph is a well-quasi-ordering on the cographs. Thus, if a subfamily of the cographs (such as the planar cographs) is closed under induced subgraph operations then it has a finite number of forbidden induced subgraphs. Computationally, this means that testing membership in such a subfamily may be performed in linear time, by using a bottom-up computation on the cotree of a given graph to test whether it contains any of these forbidden subgraphs. However, when the sizes of two cographs are both variable, testing whether one of them is an induced subgraph of the other is NP-complete.
Cographs play a key role in algorithms for recognizing read-once functions.
Some counting problems also become tractable when the input is restricted to be a cograph. For instance, there are polynomial-time algorithms to count the number of cliques or the number of maximum cliques in a cograph.
== Enumeration ==
The number of connected cographs with n vertices, for n = 1, 2, 3, ..., is:
1, 1, 2, 5, 12, 33, 90, 261, 766, 2312, 7068, 21965, 68954, ... (sequence A000669 in the OEIS)
For n > 1 there are the same number of disconnected cographs, because for every cograph exactly one of it or its complement graph is connected.
== Related graph families ==
=== Subclasses ===
Every complete graph Kn is a cograph, with a cotree consisting of a single 1-node and n leaves. Similarly, every complete bipartite graph Ka,b is a cograph. Its cotree is rooted at a 1-node with two 0-node children, one having a leaves and the other having b leaves as its children.
A Turán graph may be formed by the join of a family of equal-sized independent sets; thus, it also is a cograph, with a cotree rooted at a 1-node that has a child 0-node for each independent set.
Every threshold graph is also a cograph. A threshold graph may be formed by repeatedly adding one vertex, either connected to all previous vertices or to none of them; each such operation is one of the disjoint union or join operations by which a cotree may be formed.
=== Superclasses ===
The characterization of cographs by the property that every maximal clique and every maximal independent set have a nonempty intersection is a stronger version of the defining property of strongly perfect graphs, in which every induced subgraph contains an independent set that intersects all maximal cliques. In a cograph, every maximal independent set intersects all maximal cliques. Thus, every cograph is strongly perfect.
The fact that cographs are P4-free implies that they are perfectly orderable. In fact, every vertex order of a cograph is a perfect order, which further implies that a maximum clique and a minimum colouring can be found in linear time with any greedy colouring, without the need for a cotree decomposition.
Every cograph is a distance-hereditary graph, meaning that every induced path in a cograph is a shortest path. The cographs may be characterized among the distance-hereditary graphs as having diameter at most two in each connected component.
Every cograph is also a comparability graph of a series-parallel partial order, obtained by replacing the disjoint union and join operations by which the cograph was constructed by disjoint union and ordinal sum operations on partial orders.
Because strongly perfect graphs, perfectly orderable graphs, distance-hereditary graphs, and comparability graphs are all perfect graphs, cographs are also perfect.
== Notes ==
== References ==
== External links ==
"Cograph graphs", Information System on Graph Class Inclusions
Weisstein, Eric W., "Cograph", MathWorld | Wikipedia/Cograph |
In computational biology, power graph analysis is a method for the analysis and representation of complex networks. Power graph analysis is the computation, analysis and visual representation of a power graph from a graph (network).
Power graph analysis can be thought of as a lossless compression algorithm for graphs. It extends graph syntax with representations of cliques, bicliques and stars. Compression levels of up to 95% have been obtained for complex biological networks.
Hypergraphs are a generalization of graphs in which edges are not just couples of nodes but arbitrary n-tuples. Power graphs are not another generalization of graphs, but instead a novel representation of graphs that proposes a shift from the "node and edge" language to one using cliques, bicliques and stars as primitives.
== Power graphs ==
=== Graphical representation ===
Graphs are drawn with circles or points that represent nodes and lines connecting pairs of nodes that represent edges. Power graphs extend the syntax of graphs with power nodes, which are drawn as a circle enclosing nodes or other power nodes, and power edges, which are lines between power nodes.
Bicliques are two sets of nodes with an edge between every member of one set and every member of the other set. In a power graph, a biclique is represented as an edge between two power nodes.
Cliques are a set of nodes with an edge between every pair of nodes. In a power graph, a clique is represented by a power node with a loop.
Stars are a set of nodes with an edge between every member of that set and a single node outside the set. In a power graph, a star is represented by a power edge between a regular node and a power node.
=== Formal definition ===
Given a graph G = (V, E) where V = {v₀, …, vₙ} is the set of nodes and E ⊆ V × V is the set of edges, a power graph G′ = (V′, E′) is a graph defined on the power set V′ ⊆ P(V) of power nodes, connected to each other by power edges: E′ ⊆ V′ × V′. Hence power graphs are defined on the power set of nodes as well as on the power set of edges of the graph G.
The semantics of power graphs are as follows: if two power nodes are connected by a power edge, this means that all nodes of the first power node are connected to all nodes of the second power node. Similarly, if a power node is connected to itself by a power edge, this signifies that all nodes in the power node are connected to each other by edges.
The following two conditions are required:
Power node hierarchy condition: Any two power nodes are either disjoint, or one is included in the other.
Power edge disjointness condition: There is an onto mapping from edges of the original graph to power edges.
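The semantics above amount to a decompression routine from power edges back to plain edges; a minimal sketch in which the dict-based encoding and the names are illustrative assumptions (a self-pair stands for a power node with a loop, i.e. a clique):

```python
def expand(power_nodes, power_edges):
    """Expand a power graph to the plain graph it represents.
    power_nodes: dict name -> set of underlying nodes;
    power_edges: set of (name, name) pairs."""
    edges = set()
    for a, b in power_edges:
        na, nb = power_nodes[a], power_nodes[b]
        if a == b:   # power node connected to itself: a clique on its nodes
            edges |= {frozenset((u, v)) for u in na for v in na if u != v}
        else:        # biclique between the two node sets
            edges |= {frozenset((u, v)) for u in na for v in nb}
    return edges

# A biclique {1,2} x {3,4} plus a clique on {3,4,5}:
pn = {"A": {1, 2}, "B": {3, 4}, "C": {3, 4, 5}}
pe = {("A", "B"), ("C", "C")}
assert len(expand(pn, pe)) == 4 + 3
```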
== Analogy to Fourier analysis ==
The Fourier analysis of a function can be seen as a rewriting of the function in terms of harmonic functions instead of t ↦ x pairs. This transformation changes the point of view from time domain
to frequency domain and enables many interesting applications in signal analysis, data compression,
and filtering.
Similarly, Power graph analysis is a rewriting or decomposition of a network using bicliques, cliques and stars
as primitive elements (just as harmonic functions for Fourier analysis).
It can be used to analyze, compress and filter networks.
There are, however, several key differences. First, in Fourier analysis the two spaces (time and frequency domains) are the same function space, whereas, stricto sensu, power graphs are not graphs.
Second, there is not a unique power graph representing a given graph. Yet a particularly interesting class of power graphs is that of minimal power graphs, which have the fewest power edges and power nodes necessary to represent a given graph.
== Minimal power graphs ==
In general, there is no unique minimal power graph for a given graph.
In this example (right) a graph of four nodes and five edges admits two minimal power graphs of two power edges each.
The main difference between these two minimal power graphs is the higher nesting level of the second power graph as well as a loss of symmetry with respect to the underlying graph.
Loss of symmetry is only a problem in small toy examples since complex networks rarely exhibit such symmetries in the first place.
Additionally, one can minimize the nesting level but even then, there is in general not a unique minimal power graph of minimal nesting level.
== Power graph greedy algorithm ==
The power graph greedy algorithm relies on two simple steps to perform the decomposition:
The first step identifies candidate power nodes through a hierarchical clustering of the nodes in the network
based on the similarity of their neighboring nodes. The similarity of two sets of neighbors is taken as the Jaccard index
of the two sets.
The second step performs a greedy search for possible power edges between candidate power nodes.
Power edges abstracting the most edges in the original network are added first to the power graph.
Thus bicliques, cliques and stars are incrementally replaced with power edges, until all remaining single edges are also added.
Candidate power nodes that are not the end point of any power edge are ignored.
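The Jaccard-based candidate step of the first phase can be sketched as follows; the 0.5 threshold, the node names, and the pairwise (rather than hierarchical) grouping are illustrative assumptions, not values from the original algorithm:

```python
def jaccard(a, b):
    """Jaccard index of two neighbour sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Neighbour sets in a small network; nodes whose neighbourhoods are
# similar enough become candidate power nodes.
neigh = {"u": {1, 2, 3}, "v": {2, 3, 4}, "w": {8, 9}}
nodes = sorted(neigh)
candidates = [
    {a, b}
    for i, a in enumerate(nodes)
    for b in nodes[i + 1:]
    if jaccard(neigh[a], neigh[b]) >= 0.5
]
assert candidates == [{"u", "v"}]
```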
== Modular decomposition ==
Modular decomposition can be used to compute a power graph by using
the strong modules of the modular decomposition.
Modules in modular decomposition are groups of nodes in a graph that have identical neighbors outside the group. A strong module is a module that does not overlap with another module.
However, in complex networks strong modules are more the exception than the rule. Therefore, the power graphs obtained through modular decomposition are far from minimal.
The main difference between modular decomposition and power graph analysis is that power graph analysis decomposes graphs using not only modules of nodes but also modules of edges (cliques, bicliques). Indeed, power graph analysis can be seen as a lossless simultaneous clustering of both nodes and edges.
== Applications ==
=== Biological networks ===
Power graph analysis has been shown to be useful for the analysis of several types of biological networks such as protein-protein interaction networks, domain-peptide binding motifs, gene regulatory networks and homology/paralogy networks.
Also, a network of significant disease-trait pairs has recently been visualized and analyzed with power graphs.
Network compression, a new measure derived from power graphs, has been proposed as a quality measure for protein interaction networks.
=== Drug repositioning ===
Power graphs have also been applied to the analysis of drug-target-disease networks for drug repositioning.
=== Social networks ===
Power graphs have been applied to large-scale data in social networks, for community mining or for modeling author types.
== See also ==
Computational biology
Networks/Graph
Complex networks
Modular decomposition
== References ==
== External links ==
Power Graph Analysis tools (CyOog v2.8.2) and example applications
Power Graph Analysis with CyOog v2.6
In graph theory, a loop (also called a self-loop or a buckle) is an edge that connects a vertex to itself. A simple graph contains no loops.
Depending on the context, a graph or a multigraph may be defined so as to either allow or disallow the presence of loops (often in concert with allowing or disallowing multiple edges between the same vertices):
Where graphs are defined so as to allow loops and multiple edges, a graph without loops or multiple edges is often distinguished from other graphs by calling it a simple graph.
Where graphs are defined so as to disallow loops and multiple edges, a graph that does have loops or multiple edges is often distinguished from the graphs that satisfy these constraints by calling it a multigraph or pseudograph.
In a graph with one vertex, all edges must be loops. Such a graph is called a bouquet.
== Degree ==
For an undirected graph, the degree of a vertex is equal to the number of adjacent vertices.
A special case is a loop, which adds two to the degree. This can be understood by letting each connection of the loop edge count as its own adjacent vertex. In other words, a vertex with a loop "sees" itself as an adjacent vertex from both ends of the edge thus adding two, not one, to the degree.
For a directed graph, a loop adds one to the in degree and one to the out degree.
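Both counting rules, undirected (a loop adds two) and directed (a loop adds one to each of in- and out-degree), can be sketched in a few lines; the edge-list encoding is an illustrative assumption:

```python
def degree(vertex, edges, directed=False):
    """Degree of a vertex in an edge list. A loop (v, v) counts twice
    in an undirected graph; if directed, return (in-degree, out-degree)."""
    if directed:
        indeg = sum(1 for u, v in edges if v == vertex)
        outdeg = sum(1 for u, v in edges if u == vertex)
        return indeg, outdeg
    return sum((u == vertex) + (v == vertex) for u, v in edges)

edges = [(1, 2), (2, 2)]          # one ordinary edge and one loop at 2
assert degree(2, edges) == 3      # 1 from (1,2) + 2 from the loop
assert degree(2, edges, directed=True) == (2, 1)
```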
== See also ==
=== In graph theory ===
Cycle (graph theory)
Graph theory
Glossary of graph theory
=== In topology ===
Möbius ladder
Möbius strip
Strange loop
Klein bottle
== References ==
Balakrishnan, V. K.; Graph Theory, McGraw-Hill; 1st edition (February 1, 1997). ISBN 0-07-005489-4.
Bollobás, Béla; Modern Graph Theory, Springer; 1st edition (August 12, 2002). ISBN 0-387-98488-7.
Diestel, Reinhard; Graph Theory, Springer; 2nd edition (February 18, 2000). ISBN 0-387-98976-5.
Gross, Jonathon L, and Yellen, Jay; Graph Theory and Its Applications, CRC Press (December 30, 1998). ISBN 0-8493-3982-0.
Gross, Jonathon L, and Yellen, Jay; (eds); Handbook of Graph Theory. CRC (December 29, 2003). ISBN 1-58488-090-2.
Zwillinger, Daniel; CRC Standard Mathematical Tables and Formulae, Chapman & Hall/CRC; 31st edition (November 27, 2002). ISBN 1-58488-291-3.
== External links ==
This article incorporates public domain material from Paul E. Black. "Self loop". Dictionary of Algorithms and Data Structures. NIST. | Wikipedia/Loop_(graph_theory) |
In the mathematical field of graph theory, an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge–vertex connectivity.
Formally, an automorphism of a graph G = (V, E) is a permutation σ of the vertex set V, such that the pair of vertices (u, v) form an edge if and only if the pair (σ(u), σ(v)) also form an edge. That is, it is a graph isomorphism from G to itself. Automorphisms may be defined in this way both for directed graphs and for undirected graphs.
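This definition translates directly into a brute-force enumeration; a sketch that is only viable for tiny graphs, since it tries all n! vertex permutations (encoding and names are illustrative assumptions):

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """All automorphisms of a small undirected graph, by brute force:
    keep each permutation sigma that maps the edge set onto itself."""
    vs = sorted(vertices)
    es = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(vs):
        sigma = dict(zip(vs, perm))
        if {frozenset((sigma[u], sigma[v])) for u, v in edges} == es:
            autos.append(sigma)
    return autos

# The path 1-2-3 has exactly two automorphisms: identity and the flip.
assert len(automorphisms({1, 2, 3}, [(1, 2), (2, 3)])) == 2
```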
The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph, under the composition operation, forms a group, the automorphism group of the graph. In the opposite direction, by Frucht's theorem, all groups can be represented as the automorphism group of a connected graph – indeed, of a cubic graph.
== Computational complexity ==
Constructing the automorphism group of a graph, in the form of a list of generators, is polynomial-time equivalent to the graph isomorphism problem, and therefore solvable in quasi-polynomial time, that is with running time 2^(O((log n)^c)) for some fixed c > 0.
Consequently, like the graph isomorphism problem, the problem of finding a graph's automorphism group is known to belong to the complexity class NP, but not known to be in P nor to be NP-complete, and therefore may be NP-intermediate.
The easier problem of testing whether a graph has any symmetries (nontrivial automorphisms), known as the graph automorphism problem, also has no known polynomial time solution.
There is a polynomial time algorithm for solving the graph automorphism problem for graphs where vertex degrees are bounded by a constant.
The graph automorphism problem is polynomial-time many-one reducible to the graph isomorphism problem, but the converse reduction is unknown. By contrast, hardness is known when the automorphisms are constrained in a certain fashion; for instance, determining the existence of a fixed-point-free automorphism (an automorphism that fixes no vertex) is NP-complete, and the problem of counting such automorphisms is ♯P-complete.
== Algorithms, software and applications ==
While no worst-case polynomial-time algorithms are known for the general Graph Automorphism problem, finding the automorphism group (and printing out an irredundant set of generators) for many large graphs arising in applications is rather easy. Several open-source software tools are available for this task, including NAUTY, BLISS and SAUCY. SAUCY and BLISS are particularly efficient for sparse graphs, e.g., SAUCY processes some graphs with millions of vertices in mere seconds. However, BLISS and NAUTY can also produce Canonical Labeling, whereas SAUCY is currently optimized for solving Graph Automorphism. An important observation is that for a graph on n vertices, the automorphism group can be specified by no more than
n − 1 generators, and the above software packages are guaranteed to satisfy this bound as a side-effect of their algorithms (minimal sets of generators are harder to find and are not particularly useful in practice). It also appears that the total support (i.e., the number of vertices moved) of all generators is limited by a linear function of n, which is important in runtime analysis of these algorithms. However, this has not been established as fact, as of March 2012.
Practical applications of Graph Automorphism include graph drawing and other visualization tasks, solving structured instances of Boolean Satisfiability arising in the context of Formal verification and Logistics. Molecular symmetry can predict or explain chemical properties.
== Symmetry display ==
Several graph drawing researchers have investigated algorithms for drawing graphs in such a way that the automorphisms of the graph become visible as symmetries of the drawing. This may be done either by using a method that is not designed around symmetries, but that automatically generates symmetric drawings when possible, or by explicitly identifying symmetries and using them to guide vertex placement in the drawing. It is not always possible to display all symmetries of the graph simultaneously, so it may be necessary to choose which symmetries to display and which to leave unvisualized.
== Graph families defined by their automorphisms ==
Several families of graphs are defined by having certain types of automorphisms:
An asymmetric graph is an undirected graph with only the trivial automorphism.
A vertex-transitive graph is an undirected graph in which every vertex may be mapped by an automorphism into any other vertex.
An edge-transitive graph is an undirected graph in which every edge may be mapped by an automorphism into any other edge.
A symmetric graph is a graph such that every pair of adjacent vertices may be mapped by an automorphism into any other pair of adjacent vertices.
A distance-transitive graph is a graph such that every pair of vertices may be mapped by an automorphism into any other pair of vertices that are the same distance apart.
A semi-symmetric graph is a graph that is edge-transitive but not vertex-transitive.
A half-transitive graph is a graph that is vertex-transitive and edge-transitive but not symmetric.
A skew-symmetric graph is a directed graph together with a permutation σ on the vertices that maps edges to edges but reverses the direction of each edge. Additionally, σ is required to be an involution.
Inclusion relationships between these families are indicated by the following table:
== See also ==
Algebraic graph theory
Distinguishing coloring
== References ==
== External links ==
Weisstein, Eric W. "Graph automorphism". MathWorld. | Wikipedia/Graph_automorphism |
In graph theory, a connected graph is k-edge-connected if it remains connected whenever fewer than k edges are removed.
The edge-connectivity of a graph is the largest k for which the graph is k-edge-connected.
Edge connectivity and the enumeration of k-edge-connected graphs was studied by Camille Jordan in 1869.
== Formal definition ==
Let G = (V, E) be an arbitrary graph.
If the subgraph G′ = (V, E ∖ X) is connected for all X ⊆ E where |X| < k, then G is said to be k-edge-connected.
The edge connectivity of G is the maximum value k such that G is k-edge-connected. The smallest set X whose removal disconnects G is a minimum cut in G.
The edge connectivity version of Menger's theorem provides an alternative and equivalent characterization, in terms of edge-disjoint paths in the graph: G is k-edge-connected if and only if every two vertices of G form the endpoints of k paths, no two of which share an edge with each other. In one direction this is easy: if such a system of paths exists, then every set X of fewer than k edges is disjoint from at least one of the paths, and the pair of vertices remains connected to each other even after X is deleted. In the other direction, the existence of a system of paths for each pair of vertices in a graph that cannot be disconnected by the removal of few edges can be proven using the max-flow min-cut theorem from the theory of network flows.
== Related concepts ==
Minimum vertex degree gives a trivial upper bound on edge-connectivity. That is, if a graph G = (V, E) is k-edge-connected then it is necessary that k ≤ δ(G), where δ(G) is the minimum degree of any vertex v ∈ V. Deleting all edges incident to a vertex v would disconnect v from the graph.
Edge connectivity is the dual concept to girth, the length of the shortest cycle in a graph, in the sense that the girth of a planar graph is the edge connectivity of its dual graph, and vice versa. These concepts are unified in matroid theory by the girth of a matroid, the size of the smallest dependent set in the matroid. For a graphic matroid, the matroid girth equals the girth of the underlying graph, while for a co-graphic matroid it equals the edge connectivity.
The 2-edge-connected graphs can also be characterized by the absence of bridges, by the existence of an ear decomposition, or by Robbins' theorem according to which these are exactly the graphs that have a strong orientation.
== Computational aspects ==
There is a polynomial-time algorithm to determine the largest k for which a graph G is k-edge-connected. A simple algorithm would, for every pair (u,v), determine the maximum flow from u to v with the capacity of all edges in G set to 1 for both directions. A graph is k-edge-connected if and only if the maximum flow from u to v is at least k for any pair (u,v), so k is the least u-v-flow among all (u,v).
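A sketch of this flow-based approach, using Edmonds–Karp augmentation with unit capacities in both directions; fixing one endpoint u and varying only the other already suffices for an undirected graph, and all names are illustrative assumptions:

```python
from collections import deque

def max_flow(n, edges, s, t):
    """Edmonds-Karp max flow; each undirected edge gets capacity 1
    in each direction. Vertices are 0..n-1."""
    cap = {}
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap[(v, u)] = cap.get((v, u), 0) + 1
    adj = {i: set() for i in range(n)}
    for u, v in cap:
        adj[u].add(v)
    flow = 0
    while True:
        parent = {s: None}                 # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t                              # augment by one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def edge_connectivity(n, edges):
    """Edge connectivity as the least max-flow from vertex 0 to any other."""
    return min(max_flow(n, edges, 0, v) for v in range(1, n))

# A 4-cycle is 2-edge-connected:
assert edge_connectivity(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```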
If n is the number of vertices in the graph, this simple algorithm would perform O(n²) iterations of the maximum flow problem, which can be solved in O(n³) time. Hence the complexity of the simple algorithm described above is O(n⁵) in total.
An improved algorithm will solve the maximum flow problem for every pair (u,v) where u is arbitrarily fixed while v varies over all vertices. This reduces the complexity to O(n⁴) and is sound since, if a cut of capacity less than k exists, it is bound to separate u from some other vertex. It can be further improved by an algorithm of Gabow that runs in worst case O(n³) time.
The Karger–Stein variant of Karger's algorithm provides a faster randomized algorithm for determining the connectivity, with expected runtime O(n² log³ n).
A related problem, finding the minimum k-edge-connected spanning subgraph of G (that is, selecting as few edges as possible in G such that the selection is k-edge-connected), is NP-hard for k ≥ 2.
== See also ==
k-vertex-connected graph
Connectivity (graph theory)
Matching preclusion
== References == | Wikipedia/K-edge-connected_graph |
In mathematics, a Cayley graph, also known as a Cayley color graph, Cayley diagram, group diagram, or color group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem (named after Arthur Cayley), and uses a specified set of generators for the group. It is a central tool in combinatorial and geometric group theory. The structure and symmetry of Cayley graphs make them particularly good candidates for constructing expander graphs.
== Definition ==
Let G be a group and S be a generating set of G. The Cayley graph Γ = Γ(G, S) is an edge-colored directed graph constructed as follows:
Each element g of G is assigned a vertex: the vertex set of Γ is identified with G.
Each element s of S is assigned a color cₛ.
For every g ∈ G and s ∈ S, there is a directed edge of color cₛ from the vertex corresponding to g to the one corresponding to gs.
Not every convention requires that S generate the group. If S is not a generating set for G, then Γ is disconnected and each connected component represents a coset of the subgroup generated by S.
If an element s of S is its own inverse, s = s⁻¹, then it is typically represented by an undirected edge.
The set S is often assumed to be finite, especially in geometric group theory, which corresponds to Γ being locally finite and G being finitely generated.
The set S is sometimes assumed to be symmetric (S = S⁻¹) and not containing the group identity element. In this case, the uncolored Cayley graph can be represented as a simple undirected graph.
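The construction can be sketched generically: one generator-labelled arc g → g·s per group element g and generator s. The function and variable names are illustrative assumptions:

```python
def cayley_edges(elements, op, gens):
    """Directed, generator-labelled edges (g, g*s, s) of the Cayley
    graph of a group given by its elements, operation, and generators."""
    return {(g, op(g, s), s) for g in elements for s in gens}

# Z6 with the symmetric generating set {1, 5} (5 = -1 mod 6): the cycle
# C6, with each undirected cycle edge appearing once in each direction.
n = 6
edges = cayley_edges(range(n), lambda a, b: (a + b) % n, [1, 5])
assert len(edges) == 12
assert (0, 1, 1) in edges and (0, 5, 5) in edges
```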
== Examples ==
Suppose that $G=\mathbb {Z}$ is the infinite cyclic group and the set $S$ consists of the standard generator 1 and its inverse ($-1$ in the additive notation); then the Cayley graph is an infinite path.
Similarly, if $G=\mathbb {Z} _{n}$ is the finite cyclic group of order $n$ and the set $S$ consists of two elements, the standard generator of $G$ and its inverse, then the Cayley graph is the cycle $C_{n}$. More generally, the Cayley graphs of finite cyclic groups are exactly the circulant graphs.
The Cayley graph of the direct product of groups (with the cartesian product of generating sets as a generating set) is the cartesian product of the corresponding Cayley graphs. Thus the Cayley graph of the abelian group $\mathbb {Z} ^{2}$ with the set of generators consisting of the four elements $(\pm 1,0),(0,\pm 1)$ is the infinite grid on the plane $\mathbb {R} ^{2}$, while for the direct product $\mathbb {Z} _{n}\times \mathbb {Z} _{m}$ with similar generators the Cayley graph is the $n\times m$ finite grid on a torus.
A Cayley graph of the dihedral group $D_{4}$ on two generators $a$ and $b$ is depicted to the left. Red arrows represent composition with $a$. Since $b$ is self-inverse, the blue lines, which represent composition with $b$, are undirected. Therefore the graph is mixed: it has eight vertices, eight arrows, and four edges. The Cayley table of the group $D_{4}$ can be derived from the group presentation $\langle a,b\mid a^{4}=b^{2}=e,\ ab=ba^{3}\rangle .$
A different Cayley graph of $D_{4}$ is shown on the right. $b$ is still the horizontal reflection and is represented by blue lines, and $c$ is a diagonal reflection and is represented by pink lines. As both reflections are self-inverse, the Cayley graph on the right is completely undirected. This graph corresponds to the presentation $\langle b,c\mid b^{2}=c^{2}=e,\ bcbc=cbcb\rangle .$
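This second presentation can be checked concretely by realizing $b$ and $c$ as permutations of the square's vertices; the particular vertex numbering and helper below are our own encoding, not from the article.

```python
# Verify the relations b^2 = c^2 = e and bcbc = cbcb for two reflections
# of a square with vertices 0, 1, 2, 3 in cyclic order (our encoding).

def compose(p, q):
    """Permutation composition: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

b = (1, 0, 3, 2)   # reflection swapping 0<->1 and 2<->3
c = (0, 3, 2, 1)   # diagonal reflection fixing 0 and 2
e = (0, 1, 2, 3)   # identity

bc = compose(b, c) # product of the two reflections: a quarter rotation
```

Here $bc$ comes out as the 4-cycle rotation $(1,2,3,0)$, so $bcbc$ and $cbcb$ both equal the half rotation, confirming the defining relation.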
The Cayley graph of the free group on two generators $a$ and $b$ corresponding to the set $S=\{a,b,a^{-1},b^{-1}\}$ is depicted at the top of the article, with $e$ being the identity. Travelling along an edge to the right represents right multiplication by $a$, while travelling along an edge upward corresponds to multiplication by $b$. Since the free group has no relations, the Cayley graph has no cycles: it is the 4-regular infinite tree. It is a key ingredient in the proof of the Banach–Tarski paradox.
More generally, the Bethe lattice or Cayley tree is the Cayley graph of the free group on $n$ generators. A presentation of a group $G$ by $n$ generators corresponds to a surjective homomorphism from the free group on $n$ generators to the group $G$, defining a map from the Cayley tree to the Cayley graph of $G$. Interpreting graphs topologically as one-dimensional simplicial complexes, the simply connected infinite tree is the universal cover of the Cayley graph; and the kernel of the mapping is the fundamental group of the Cayley graph.
A Cayley graph of the discrete Heisenberg group
$$\left\{{\begin{pmatrix}1&x&z\\0&1&y\\0&0&1\end{pmatrix}},\ x,y,z\in \mathbb {Z} \right\}$$
is depicted to the right. The generators used in the picture are the three matrices $X,Y,Z$ given by the three permutations of 1, 0, 0 for the entries $x,y,z$. They satisfy the relations $Z=XYX^{-1}Y^{-1},\ XZ=ZX,\ YZ=ZY$, which can also be understood from the picture. This is a non-commutative infinite group, and despite being embedded in a three-dimensional space, the Cayley graph has four-dimensional volume growth.
== Characterization ==
The group $G$ acts on itself by left multiplication (see Cayley's theorem). This may be viewed as the action of $G$ on its Cayley graph. Explicitly, an element $h\in G$ maps a vertex $g\in V(\Gamma )$ to the vertex $hg\in V(\Gamma )$. The set of edges of the Cayley graph and their colors is preserved by this action: the edge $(g,gs)$ is mapped to the edge $(hg,hgs)$, both having color $c_{s}$. In fact, all automorphisms of the colored directed graph $\Gamma$ are of this form, so that $G$ is isomorphic to the symmetry group of $\Gamma$.
The left multiplication action of a group on itself is simply transitive; in particular, Cayley graphs are vertex-transitive. The following, known as Sabidussi's theorem, is a kind of converse to this: an (unlabeled and uncolored) connected graph is a Cayley graph of a group $G$ if and only if it admits a simply transitive action of $G$ by graph automorphisms.
To recover the group $G$ and the generating set $S$ from the unlabeled directed graph $\Gamma$, select a vertex $v_{1}\in V(\Gamma )$ and label it by the identity element of the group. Then label each vertex $v$ of $\Gamma$ by the unique element of $G$ that maps $v_{1}$ to $v$. The set $S$ of generators of $G$ that yields $\Gamma$ as the Cayley graph $\Gamma (G,S)$ is the set of labels of the out-neighbors of $v_{1}$. Since $\Gamma$ is uncolored, it might have more directed graph automorphisms than the left multiplication maps, for example group automorphisms of $G$ which permute $S$.
== Elementary properties ==
The Cayley graph $\Gamma (G,S)$ depends in an essential way on the choice of the set $S$ of generators. For example, if the generating set $S$ has $k$ elements then each vertex of the Cayley graph has $k$ incoming and $k$ outgoing directed edges. In the case of a symmetric generating set $S$ with $r$ elements, the Cayley graph is a regular directed graph of degree $r$.
Cycles (or closed walks) in the Cayley graph indicate relations among the elements of $S$. In the more elaborate construction of the Cayley complex of a group, closed paths corresponding to relations are "filled in" by polygons. This means that the problem of constructing the Cayley graph of a given presentation $\mathcal{P}$ is equivalent to solving the word problem for $\mathcal{P}$.
If $f:G'\to G$ is a surjective group homomorphism and the images of the elements of the generating set $S'$ for $G'$ are distinct, then it induces a covering of graphs
$${\bar {f}}:\Gamma (G',S')\to \Gamma (G,S),$$
where $S=f(S')$. In particular, if a group $G$ has $k$ generators, all of order different from 2, and the set $S$ consists of these generators together with their inverses, then the Cayley graph $\Gamma (G,S)$ is covered by the infinite regular tree of degree $2k$ corresponding to the free group on the same set of generators.
For any finite Cayley graph, considered as undirected, the vertex connectivity is at least equal to 2/3 of the degree of the graph. If the generating set is minimal (removal of any element and, if present, its inverse from the generating set leaves a set which is not generating), the vertex connectivity is equal to the degree. The edge connectivity is in all cases equal to the degree.
If $\rho _{\text{reg}}(g)(x)=gx$ is the left-regular representation with $|G|\times |G|$ matrix form denoted $[\rho _{\text{reg}}(g)]$, the adjacency matrix of $\Gamma (G,S)$ is
$$A=\sum _{s\in S}[\rho _{\text{reg}}(s)].$$
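This identity is easy to check numerically; the choice of $G=\mathbb{Z}_8$ and $S=\{\pm 1\}$ below is our own example, not from the article.

```python
# Check that the adjacency matrix of Γ(G, S) equals the sum of the
# left-regular permutation matrices, for G = Z_8 and S = {1, -1}.
import numpy as np

n = 8

def reg(g):
    """Matrix of left multiplication by g on Z_n: basis vector x -> g + x."""
    M = np.zeros((n, n), dtype=int)
    for x in range(n):
        M[(g + x) % n, x] = 1
    return M

A = reg(1) + reg(n - 1)          # S = {1, -1 mod 8}

# A should equal the adjacency matrix of the 8-cycle C_8.
expected = np.zeros((n, n), dtype=int)
for x in range(n):
    expected[(x + 1) % n, x] = 1
    expected[(x - 1) % n, x] = 1
```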
Every group character $\chi$ of the group $G$ induces an eigenvector of the adjacency matrix of $\Gamma (G,S)$. The associated eigenvalue is
$$\lambda _{\chi }=\sum _{s\in S}\chi (s),$$
which, when $G$ is abelian, takes the form $\sum _{s\in S}e^{2\pi ijs/|G|}$ for integers $j=0,1,\dots ,|G|-1.$
In particular, the associated eigenvalue of the trivial character (the one sending every element to 1) is the degree of $\Gamma (G,S)$, that is, the order of $S$. If $G$ is an abelian group, there are exactly $|G|$ characters, determining all eigenvalues. The corresponding orthonormal basis of eigenvectors is given by
$$v_{j}={\tfrac {1}{\sqrt {|G|}}}{\begin{pmatrix}1&e^{2\pi ij/|G|}&e^{2\cdot 2\pi ij/|G|}&e^{3\cdot 2\pi ij/|G|}&\cdots &e^{(|G|-1)2\pi ij/|G|}\end{pmatrix}}.$$
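The character eigenvectors and their eigenvalues can be verified directly for a small cyclic group; the choice $G=\mathbb{Z}_7$, $S=\{\pm 1\}$ is our own test case.

```python
# Verify that v_j(x) = e^{2πijx/n}/√n is an eigenvector of the Cayley
# graph of Z_7 with S = {1, 6}, with eigenvalue λ_j = Σ_{s∈S} e^{2πijs/n}.
import numpy as np

n, S = 7, (1, 6)
A = np.zeros((n, n), dtype=complex)
for g in range(n):
    for s in S:
        A[g, (g + s) % n] = 1        # directed edge from g to g + s

residuals, lams = [], []
for j in range(n):
    v = np.exp(2j * np.pi * j * np.arange(n) / n) / np.sqrt(n)
    lam = sum(np.exp(2j * np.pi * j * s / n) for s in S)
    lams.append(lam)
    residuals.append(np.abs(A @ v - lam * v).max())
```

As the text notes, the trivial character ($j=0$) yields the degree $|S|=2$ as its eigenvalue.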
It is interesting to note that this eigenbasis is independent of the generating set $S$. More generally for symmetric generating sets, take $\rho _{1},\dots ,\rho _{k}$ a complete set of irreducible representations of $G$, and let $\rho _{i}(S)=\sum _{s\in S}\rho _{i}(s)$ with eigenvalue set $\Lambda _{i}(S)$. Then the set of eigenvalues of $\Gamma (G,S)$ is exactly $\bigcup _{i}\Lambda _{i}(S)$, where eigenvalue $\lambda$ appears with multiplicity $\dim(\rho _{i})$ for each occurrence of $\lambda$ as an eigenvalue of $\rho _{i}(S)$.
== Schreier coset graph ==
If one instead takes the vertices to be right cosets of a fixed subgroup $H$, one obtains a related construction, the Schreier coset graph, which is at the basis of coset enumeration or the Todd–Coxeter process.
== Connection to group theory ==
Knowledge about the structure of the group can be obtained by studying the adjacency matrix of the graph and in particular by applying the theorems of spectral graph theory. Conversely, for symmetric generating sets, the spectral and representation theory of $\Gamma (G,S)$ are directly tied together: take $\rho _{1},\dots ,\rho _{k}$ a complete set of irreducible representations of $G$, and let $\rho _{i}(S)=\sum _{s\in S}\rho _{i}(s)$ with eigenvalues $\Lambda _{i}(S)$. Then the set of eigenvalues of $\Gamma (G,S)$ is exactly $\bigcup _{i}\Lambda _{i}(S)$, where eigenvalue $\lambda$ appears with multiplicity $\dim(\rho _{i})$ for each occurrence of $\lambda$ as an eigenvalue of $\rho _{i}(S)$.
The genus of a group is the minimum genus for any Cayley graph of that group.
=== Geometric group theory ===
For infinite groups, the coarse geometry of the Cayley graph is fundamental to geometric group theory. For a finitely generated group, this is independent of choice of finite set of generators, hence an intrinsic property of the group. This is only interesting for infinite groups: every finite group is coarsely equivalent to a point (or the trivial group), since one can choose as finite set of generators the entire group.
Formally, for a given choice of generators, one has the word metric (the natural distance on the Cayley graph), which determines a metric space. The coarse equivalence class of this space is an invariant of the group.
== Expansion properties ==
When $S=S^{-1}$, the Cayley graph $\Gamma (G,S)$ is $|S|$-regular, so spectral techniques may be used to analyze the expansion properties of the graph. In particular for abelian groups, the eigenvalues of the Cayley graph are more easily computable and given by $\lambda _{\chi }=\sum _{s\in S}\chi (s)$ with top eigenvalue equal to $|S|$, so we may use Cheeger's inequality to bound the edge expansion ratio using the spectral gap.
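As a small illustration of this spectral approach (the particular graph and numbers are ours, not the article's): for the cycle $C_n = \Gamma(\mathbb{Z}_n, \{\pm 1\})$ the eigenvalues are $2\cos(2\pi j/n)$, and the form of Cheeger's inequality bounding the expansion ratio from below by half the spectral gap gives:

```python
# Spectral gap and the resulting Cheeger lower bound for C_12,
# the Cayley graph of Z_12 with S = {1, -1} (degree d = 2).
import numpy as np

n, d = 12, 2
eigs = sorted((2 * np.cos(2 * np.pi * j / n) for j in range(n)), reverse=True)
gap = d - eigs[1]            # d is the top eigenvalue (trivial character)
cheeger_lower = gap / 2      # lower bound on the edge expansion ratio
```

The gap shrinks as $n$ grows, consistent with cycles being poor expanders.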
Representation theory can be used to construct such expanding Cayley graphs, in the form of Kazhdan property (T). The following statement holds: if a discrete group $G$ has Kazhdan's property (T), and $S$ is a finite, symmetric generating set of $G$, then there exists a constant $c>0$ depending only on $G$ and $S$ such that for any finite quotient $Q$ of $G$, the Cayley graph of $Q$ with respect to the image of $S$ is a $c$-expander. For example the group $G=\mathrm {SL} _{3}(\mathbb {Z} )$ has property (T) and is generated by elementary matrices; this gives relatively explicit examples of expander graphs.
== Integral classification ==
An integral graph is one whose eigenvalues are all integers. While the complete classification of integral graphs remains an open problem, the Cayley graphs of certain groups are always integral.
Using previous characterizations of the spectrum of Cayley graphs, note that $\Gamma (G,S)$ is integral iff the eigenvalues of $\rho (S)$ are integral for every representation $\rho$ of $G$.
=== Cayley integral simple group ===
A group $G$ is Cayley integral simple (CIS) if the connected Cayley graph $\Gamma (G,S)$ is integral exactly when the symmetric generating set $S$ is the complement of a subgroup of $G$. A result of Ahmady, Bell, and Mohar shows that all CIS groups are isomorphic to $\mathbb {Z} /p\mathbb {Z}$, $\mathbb {Z} /p^{2}\mathbb {Z}$, or $\mathbb {Z} _{2}\times \mathbb {Z} _{2}$ for primes $p$. It is important that $S$ actually generates the entire group $G$ in order for the Cayley graph to be connected. (If $S$ does not generate $G$, the Cayley graph may still be integral, but the complement of $S$ is not necessarily a subgroup.)
In the example of $G=\mathbb {Z} /5\mathbb {Z}$, the symmetric generating sets (up to graph isomorphism) are:
$S=\{1,4\}$: $\Gamma (G,S)$ is a $5$-cycle with eigenvalues $2,\ {\tfrac {{\sqrt {5}}-1}{2}},\ {\tfrac {{\sqrt {5}}-1}{2}},\ {\tfrac {-{\sqrt {5}}-1}{2}},\ {\tfrac {-{\sqrt {5}}-1}{2}}$
$S=\{1,2,3,4\}$: $\Gamma (G,S)$ is $K_{5}$ with eigenvalues $4,-1,-1,-1,-1$
The only subgroups of $\mathbb {Z} /5\mathbb {Z}$ are the whole group and the trivial group, and the only symmetric generating set $S$ that produces an integral graph is the complement of the trivial group. Therefore $\mathbb {Z} /5\mathbb {Z}$ must be a CIS group.
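The two spectra above can be confirmed numerically; the small helper for building circulant Cayley graphs is our own construction, not from the article.

```python
# Compare the spectra of the two Cayley graphs of Z/5Z: the 5-cycle
# (irrational eigenvalues) and K_5 (integral eigenvalues).
import numpy as np

def cayley_adj(n, S):
    """Adjacency matrix of the Cayley graph of Z_n with generating set S."""
    A = np.zeros((n, n))
    for g in range(n):
        for s in S:
            A[g, (g + s) % n] = 1
    return A

eig_c5 = np.sort(np.linalg.eigvalsh(cayley_adj(5, [1, 4])))
eig_k5 = np.sort(np.linalg.eigvalsh(cayley_adj(5, [1, 2, 3, 4])))
```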
The proof of the complete CIS classification uses the fact that every subgroup and homomorphic image of a CIS group is also a CIS group.
=== Cayley integral group ===
A slightly different notion is that of a Cayley integral group $G$, in which every symmetric subset $S$ produces an integral graph $\Gamma (G,S)$. Note that $S$ no longer has to generate the entire group.
The complete list of Cayley integral groups is given by $\mathbb {Z} _{2}^{n}\times \mathbb {Z} _{3}^{m}$, $\mathbb {Z} _{2}^{n}\times \mathbb {Z} _{4}^{m}$, $Q_{8}\times \mathbb {Z} _{2}^{n}$, $S_{3}$, and the dicyclic group of order $12$, where $m,n\in \mathbb {Z} _{\geq 0}$ and $Q_{8}$ is the quaternion group. The proof relies on two important properties of Cayley integral groups:
Subgroups and homomorphic images of Cayley integral groups are also Cayley integral groups.
A group is Cayley integral iff every connected Cayley graph of the group is also integral.
=== Normal and Eulerian generating sets ===
Given a general group $G$, a subset $S\subseteq G$ is normal if $S$ is closed under conjugation by elements of $G$ (generalizing the notion of a normal subgroup), and $S$ is Eulerian if for every $s\in S$, the set of elements generating the cyclic group $\langle s\rangle$ is also contained in $S$.
A 2019 result by Guo, Lytkina, Mazurov, and Revin proves that the Cayley graph $\Gamma (G,S)$ is integral for any Eulerian normal subset $S\subseteq G$, using purely representation theoretic techniques.
The proof of this result is relatively short: given $S$ an Eulerian normal subset, select $x_{1},\dots ,x_{t}\in G$ pairwise nonconjugate so that $S$ is the union of the conjugacy classes $\operatorname {Cl} (x_{i})$. Then using the characterization of the spectrum of a Cayley graph, one can show the eigenvalues of $\Gamma (G,S)$ are given by
$$\left\{\lambda _{\chi }=\sum _{i=1}^{t}{\frac {\chi (x_{i})\left|\operatorname {Cl} (x_{i})\right|}{\chi (1)}}\right\}$$
taken over irreducible characters $\chi$ of $G$. Each eigenvalue $\lambda _{\chi }$ in this set must be an element of $\mathbb {Q} (\zeta )$ for $\zeta$ a primitive $m$th root of unity (where $m$ must be divisible by the orders of each $x_{i}$). Because the eigenvalues are algebraic integers, to show they are integral it suffices to show that they are rational, and it suffices to show $\lambda _{\chi }$ is fixed under any automorphism $\sigma$ of $\mathbb {Q} (\zeta )$. There must be some $k$ relatively prime to $m$ such that $\sigma (\chi (x_{i}))=\chi (x_{i}^{k})$ for all $i$, and because $S$ is both Eulerian and normal, $\sigma (\chi (x_{i}))=\chi (x_{j})$ for some $j$. Sending $x\mapsto x^{k}$ bijects conjugacy classes, so $\operatorname {Cl} (x_{i})$ and $\operatorname {Cl} (x_{j})$ have the same size and $\sigma$ merely permutes terms in the sum for $\lambda _{\chi }$. Therefore $\lambda _{\chi }$ is fixed for all automorphisms of $\mathbb {Q} (\zeta )$, so $\lambda _{\chi }$ is rational and thus integral.
Consequently, if $G=A_{n}$ is the alternating group and $S$ is a set of permutations given by $\{(12i)^{\pm 1}\}$, then the Cayley graph $\Gamma (A_{n},S)$ is integral. (This solved a previously open problem from the Kourovka Notebook.) In addition, when $G=S_{n}$ is the symmetric group and $S$ is either the set of all transpositions or the set of transpositions involving a particular element, the Cayley graph $\Gamma (G,S)$ is also integral.
== History ==
Cayley graphs were first considered for finite groups by Arthur Cayley in 1878. Max Dehn in his unpublished lectures on group theory from 1909–10 reintroduced Cayley graphs under the name Gruppenbild (group diagram), which led to the geometric group theory of today. His most important application was the solution of the word problem for the fundamental group of surfaces with genus ≥ 2, which is equivalent to the topological problem of deciding which closed curves on the surface contract to a point.
== See also ==
Vertex-transitive graph
Generating set of a group
Lovász conjecture
Cube-connected cycles
Algebraic graph theory
Cycle graph (algebra)
== Notes ==
== External links ==
Cayley diagrams
Weisstein, Eric W. "Cayley graph". MathWorld.
In the mathematical field of graph theory, a bipartite graph (or bigraph) is a graph whose vertices can be divided into two disjoint and independent sets $U$ and $V$, that is, every edge connects a vertex in $U$ to one in $V$. Vertex sets $U$ and $V$ are usually called the parts of the graph. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles.
The two sets $U$ and $V$ may be thought of as a coloring of the graph with two colors: if one colors all nodes in $U$ blue, and all nodes in $V$ red, each edge has endpoints of differing colors, as is required in the graph coloring problem. In contrast, such a coloring is impossible in the case of a non-bipartite graph, such as a triangle: after one node is colored blue and another red, the third vertex of the triangle is connected to vertices of both colors, preventing it from being assigned either color.
One often writes $G=(U,V,E)$ to denote a bipartite graph whose partition has the parts $U$ and $V$, with $E$ denoting the edges of the graph. If a bipartite graph is not connected, it may have more than one bipartition; in this case, the $(U,V,E)$ notation is helpful in specifying one particular bipartition that may be of importance in an application. If $|U|=|V|$, that is, if the two subsets have equal cardinality, then $G$ is called a balanced bipartite graph. If all vertices on the same side of the bipartition have the same degree, then $G$ is called biregular.
== Examples ==
When modelling relations between two different classes of objects, bipartite graphs very often arise naturally. For instance, a graph of football players and clubs, with an edge between a player and a club if the player has played for that club, is a natural example of an affiliation network, a type of bipartite graph used in social network analysis.
Another example where bipartite graphs appear naturally is in the (NP-complete) railway optimization problem, in which the input is a schedule of trains and their stops, and the goal is to find a set of train stations as small as possible such that every train visits at least one of the chosen stations. This problem can be modeled as a dominating set problem in a bipartite graph that has a vertex for each train and each station and an edge for each pair of a station and a train that stops at that station.
A third example is in the academic field of numismatics. Ancient coins are made using two positive impressions of the design (the obverse and reverse). The charts numismatists produce to represent the production of coins are bipartite graphs.
More abstract examples include the following:
Every tree is bipartite.
Cycle graphs with an even number of vertices are bipartite.
Every planar graph whose faces all have even length is bipartite. Special cases of this are grid graphs and squaregraphs, in which every inner face consists of 4 edges and every inner vertex has four or more neighbors.
The complete bipartite graph on $m$ and $n$ vertices, denoted by $K_{m,n}$, is the bipartite graph $G=(U,V,E)$, where $U$ and $V$ are disjoint sets of size $m$ and $n$, respectively, and $E$ connects every vertex in $U$ with all vertices in $V$. It follows that $K_{m,n}$ has $mn$ edges. Closely related to the complete bipartite graphs are the crown graphs, formed from complete bipartite graphs by removing the edges of a perfect matching.
Hypercube graphs, partial cubes, and median graphs are bipartite. In these graphs, the vertices may be labeled by bitvectors, in such a way that two vertices are adjacent if and only if the corresponding bitvectors differ in a single position. A bipartition may be formed by separating the vertices whose bitvectors have an even number of ones from the vertices with an odd number of ones. Trees and squaregraphs form examples of median graphs, and every median graph is a partial cube.
== Properties ==
=== Characterization ===
Bipartite graphs may be characterized in several different ways:
An undirected graph is bipartite if and only if it does not contain an odd cycle.
A graph is bipartite if and only if it is 2-colorable (i.e., its chromatic number is less than or equal to 2).
A graph is bipartite if and only if every edge belongs to an odd number of bonds, minimal subsets of edges whose removal increases the number of components of the graph.
A graph is bipartite if and only if the spectrum of the graph is symmetric (about zero).
=== Kőnig's theorem and perfect graphs ===
In bipartite graphs, the size of minimum vertex cover is equal to the size of the maximum matching; this is Kőnig's theorem. An alternative and equivalent form of this theorem is that the size of the maximum independent set plus the size of the maximum matching is equal to the number of vertices. In any graph without isolated vertices the size of the minimum edge cover plus the size of a maximum matching equals the number of vertices. Combining this equality with Kőnig's theorem leads to the facts that, in bipartite graphs, the size of the minimum edge cover is equal to the size of the maximum independent set, and the size of the minimum edge cover plus the size of the minimum vertex cover is equal to the number of vertices.
Another class of related results concerns perfect graphs: every bipartite graph, the complement of every bipartite graph, the line graph of every bipartite graph, and the complement of the line graph of every bipartite graph, are all perfect. Perfection of bipartite graphs is easy to see (their chromatic number is two and their maximum clique size is also two) but perfection of the complements of bipartite graphs is less trivial, and is another restatement of Kőnig's theorem. This was one of the results that motivated the initial definition of perfect graphs. Perfection of the complements of line graphs of perfect graphs is yet another restatement of Kőnig's theorem, and perfection of the line graphs themselves is a restatement of an earlier theorem of Kőnig, that every bipartite graph has an edge coloring using a number of colors equal to its maximum degree.
According to the strong perfect graph theorem, the perfect graphs have a forbidden graph characterization resembling that of bipartite graphs: a graph is bipartite if and only if it has no odd cycle as a subgraph, and a graph is perfect if and only if it has no odd cycle or its complement as an induced subgraph. The bipartite graphs, line graphs of bipartite graphs, and their complements form four out of the five basic classes of perfect graphs used in the proof of the strong perfect graph theorem. It follows that any subgraph of a bipartite graph is also bipartite because it cannot gain an odd cycle.
=== Degree ===
For a vertex, the number of adjacent vertices is called the degree of the vertex and is denoted $\deg v$. The degree sum formula for a bipartite graph states that
$$\sum _{v\in V}\deg v=\sum _{u\in U}\deg u=|E|.$$
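The formula is easy to check on a concrete graph; the example graph $K_{3,5}$ below is chosen by us to also match the degree sequence discussed next.

```python
# Check the bipartite degree sum formula on K_{3,5}: both sides' degree
# sums equal the number of edges, |E| = 3 * 5 = 15.
U, V = range(3), range(5)
E = [(u, v) for u in U for v in V]   # every U-vertex joined to every V-vertex

deg_U = {u: sum(1 for e in E if e[0] == u) for u in U}
deg_V = {v: sum(1 for e in E if e[1] == v) for v in V}
```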
The degree sequence of a bipartite graph is the pair of lists each containing the degrees of the two parts $U$ and $V$. For example, the complete bipartite graph $K_{3,5}$ has degree sequence $(5,5,5),(3,3,3,3,3)$. Isomorphic bipartite graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a bipartite graph; in some cases, non-isomorphic bipartite graphs may have the same degree sequence.
The bipartite realization problem is the problem of finding a simple bipartite graph with the degree sequence being two given lists of natural numbers. (Trailing zeros may be ignored since they are trivially realized by adding an appropriate number of isolated vertices to the graph.)
=== Relation to hypergraphs and directed graphs ===
The biadjacency matrix of a bipartite graph $(U,V,E)$ is a (0,1) matrix of size $|U|\times |V|$ that has a one for each pair of adjacent vertices and a zero for nonadjacent vertices. Biadjacency matrices may be used to describe equivalences between bipartite graphs, hypergraphs, and directed graphs.
A hypergraph is a combinatorial structure that, like an undirected graph, has vertices and edges, but in which the edges may be arbitrary sets of vertices rather than having to have exactly two endpoints. A bipartite graph $(U,V,E)$ may be used to model a hypergraph in which $U$ is the set of vertices of the hypergraph, $V$ is the set of hyperedges, and $E$ contains an edge from a hypergraph vertex $v$ to a hypergraph edge $e$ exactly when $v$ is one of the endpoints of $e$. Under this correspondence, the biadjacency matrices of bipartite graphs are exactly the incidence matrices of the corresponding hypergraphs. As a special case of this correspondence between bipartite graphs and hypergraphs, any multigraph (a graph in which there may be two or more edges between the same two vertices) may be interpreted as a hypergraph in which some hyperedges have equal sets of endpoints, and represented by a bipartite graph that does not have multiple adjacencies and in which the vertices on one side of the bipartition all have degree two.
A similar reinterpretation of adjacency matrices may be used to show a one-to-one correspondence between directed graphs (on a given number of labeled vertices, allowing self-loops) and balanced bipartite graphs, with the same number of vertices on both sides of the bipartition. For, the adjacency matrix of a directed graph with $n$ vertices can be any (0,1) matrix of size $n\times n$, which can then be reinterpreted as the adjacency matrix of a bipartite graph with $n$ vertices on each side of its bipartition. In this construction, the bipartite graph is the bipartite double cover of the directed graph.
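A minimal sketch of this reinterpretation, with a digraph of our own choosing (including a self-loop, which becomes an ordinary edge in the bipartite graph):

```python
# Reinterpret the n x n adjacency matrix of a digraph as the biadjacency
# matrix of a balanced bipartite graph (its bipartite double cover).
import numpy as np

M = np.array([[1, 1, 0],   # digraph on vertices 0, 1, 2, self-loop at 0
              [0, 0, 1],
              [1, 0, 0]])

# Bipartite side u_0..u_2 and w_0..w_2: edge u_i -- w_j iff arc i -> j.
edges = [(i, j) for i in range(3) for j in range(3) if M[i, j]]
```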
== Algorithms ==
=== Testing bipartiteness ===
It is possible to test whether a graph is bipartite, and to return either a two-coloring (if it is bipartite) or an odd cycle (if it is not) in linear time, using depth-first search (DFS). The main idea is to assign to each vertex the color that differs from the color of its parent in the DFS forest, assigning colors in a preorder traversal of the depth-first-search forest. This will necessarily provide a two-coloring of the spanning forest consisting of the edges connecting vertices to their parents, but it may not properly color some of the non-forest edges. In a DFS forest, one of the two endpoints of every non-forest edge is an ancestor of the other endpoint, and when the depth first search discovers an edge of this type it should check that these two vertices have different colors. If they do not, then the path in the forest from ancestor to descendant, together with the miscolored edge, form an odd cycle, which is returned from the algorithm together with the result that the graph is not bipartite. However, if the algorithm terminates without detecting an odd cycle of this type, then every edge must be properly colored, and the algorithm returns the coloring together with the result that the graph is bipartite.
Alternatively, a similar procedure may be used with breadth-first search in place of DFS. Again, each node is given the opposite color to its parent in the search forest, in breadth-first order. If, when a vertex is colored, there exists an edge connecting it to a previously-colored vertex with the same color, then this edge together with the paths in the breadth-first search forest connecting its two endpoints to their lowest common ancestor forms an odd cycle. If the algorithm terminates without finding an odd cycle in this way, then it must have found a proper coloring, and can safely conclude that the graph is bipartite.
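The breadth-first variant can be sketched in a few lines of Python. This is a minimal illustration, assuming the graph is given as an adjacency dict mapping each vertex to its neighbors (the representation and function name are illustrative, not from the sources above):

```python
from collections import deque

def two_color(adj):
    """Attempt to 2-color an undirected graph given as an adjacency dict.
    Returns a dict of colors (0/1) if the graph is bipartite, or None
    if a same-colored edge (hence an odd cycle) is found."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]   # opposite color to parent
                    queue.append(v)
                elif color[v] == color[u]:    # same color on both endpoints
                    return None               # graph contains an odd cycle
    return color
```

A full implementation would also reconstruct the odd cycle from the search forest, as described above, rather than merely reporting its existence.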
For the intersection graphs of n line segments or other simple shapes in the Euclidean plane, it is possible to test whether the graph is bipartite and return either a two-coloring or an odd cycle in time O(n log n), even though the graph itself may have up to O(n^2) edges.
=== Odd cycle transversal ===
Odd cycle transversal is an NP-complete algorithmic problem that asks, given a graph G = (V,E) and a number k, whether there exists a set of k vertices whose removal from G would cause the resulting graph to be bipartite. The problem is fixed-parameter tractable, meaning that there is an algorithm whose running time can be bounded by a polynomial function of the size of the graph multiplied by a larger function of k. The name odd cycle transversal comes from the fact that a graph is bipartite if and only if it has no odd cycles. Hence, to delete vertices from a graph in order to obtain a bipartite graph, one needs to "hit all odd cycles", or find a so-called odd cycle transversal set. In the illustration, every odd cycle in the graph contains the blue (the bottommost) vertices, so removing those vertices kills all odd cycles and leaves a bipartite graph.
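NP-completeness rules out efficient exact algorithms in general, but the definition itself is easy to illustrate by brute force on small graphs: try every vertex subset of size at most k and test whether the remainder is bipartite. A Python sketch (adjacency-dict representation and function names are illustrative, and the running time is exponential):

```python
from itertools import combinations

def is_bipartite(adj, removed=frozenset()):
    """2-colorability check that ignores the vertices in `removed`."""
    color = {}
    for s in adj:
        if s in removed or s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in removed:
                    continue
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def odd_cycle_transversal(adj, k):
    """Return a set of at most k vertices whose removal leaves a
    bipartite graph, or None if no such set exists."""
    for size in range(k + 1):
        for subset in combinations(adj, size):
            if is_bipartite(adj, frozenset(subset)):
                return set(subset)
    return None
```

The fixed-parameter tractable algorithms mentioned above are far more sophisticated, typically based on iterative compression; this sketch only demonstrates what a transversal set is.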
The edge bipartization problem is the algorithmic problem of deleting as few edges as possible to make a graph bipartite and is also an important problem in graph modification algorithmics. This problem is also fixed-parameter tractable, and can be solved in time O(2^k m^2), where k is the number of edges to delete and m is the number of edges in the input graph.
=== Matching ===
A matching in a graph is a subset of its edges, no two of which share an endpoint. Polynomial time algorithms are known for many algorithmic problems on matchings, including maximum matching (finding a matching that uses as many edges as possible), maximum weight matching, and stable marriage. In many cases, matching problems are simpler to solve on bipartite graphs than on non-bipartite graphs, and many matching algorithms such as the Hopcroft–Karp algorithm for maximum cardinality matching work correctly only on bipartite inputs.
As a simple example, suppose that a set P of people are all seeking jobs from among a set J of jobs, with not all people suitable for all jobs. This situation can be modeled as a bipartite graph (P, J, E) where an edge connects each job-seeker with each suitable job. A perfect matching describes a way of simultaneously satisfying all job-seekers and filling all jobs; Hall's marriage theorem provides a characterization of the bipartite graphs which allow perfect matchings. The National Resident Matching Program applies graph matching methods to solve this problem for U.S. medical student job-seekers and hospital residency jobs.
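A maximum matching for an instance like this can be found with the classical augmenting-path method (this is the simpler O(VE) approach, not the faster Hopcroft–Karp algorithm mentioned above). A minimal Python sketch, assuming `suitable` maps each job-seeker to the jobs they can fill:

```python
def max_bipartite_matching(suitable):
    """Maximum bipartite matching by repeated augmenting-path search.
    Returns a dict mapping each matched job to its job-seeker."""
    match = {}  # job -> person currently holding it

    def try_assign(person, visited):
        for job in suitable[person]:
            if job in visited:
                continue
            visited.add(job)
            # Take the job if it is free, or if its current holder
            # can be reassigned to some other job (augmenting path).
            if job not in match or try_assign(match[job], visited):
                match[job] = person
                return True
        return False

    for person in suitable:
        try_assign(person, set())
    return match
```

If the returned matching covers all of P and all of J, it is a perfect matching; by Hall's theorem, one exists exactly when every set of job-seekers is collectively suitable for at least as many jobs.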
The Dulmage–Mendelsohn decomposition is a structural decomposition of bipartite graphs that is useful in finding maximum matchings.
== Additional applications ==
Bipartite graphs are extensively used in modern coding theory, especially to decode codewords received from the channel. Factor graphs and Tanner graphs are examples of this. A Tanner graph is a bipartite graph in which the vertices on one side of the bipartition represent digits of a codeword, and the vertices on the other side represent combinations of digits that are expected to sum to zero in a codeword without errors. A factor graph is a closely related belief network used for probabilistic decoding of LDPC and turbo codes.
In computer science, a Petri net is a mathematical modeling tool used in analysis and simulations of concurrent systems. A system is modeled as a bipartite directed graph with two sets of nodes: A set of "place" nodes that contain resources, and a set of "event" nodes which generate and/or consume resources. There are additional constraints on the nodes and edges that constrain the behavior of the system. Petri nets utilize the properties of bipartite directed graphs and other properties to allow mathematical proofs of the behavior of systems while also allowing easy implementation of simulations of the system.
In projective geometry, Levi graphs are a form of bipartite graph used to model the incidences between points and lines in a configuration. Corresponding to the geometric property of points and lines that every two lines meet in at most one point and every two points are connected by at most one line, Levi graphs necessarily do not contain any cycles of length four, so their girth must be six or more.
== See also ==
Bipartite dimension, the minimum number of complete bipartite graphs whose union is the given graph
Bipartite double cover, a way of transforming any graph into a bipartite graph by doubling its vertices
Bipartite hypergraph, a generalization of bipartiteness to hypergraphs
Bipartite matroid, a class of matroids that includes the graphic matroids of bipartite graphs
Bipartite network projection, a weighting technique for compressing information about bipartite networks
Convex bipartite graph, a bipartite graph whose vertices can be ordered so that the vertex neighborhoods are contiguous
Multipartite graph, a generalization of bipartite graphs to more than two subsets of vertices
Parity graph, a generalization of bipartite graphs in which every two induced paths between the same two points have the same parity
Quasi-bipartite graph, a type of Steiner tree problem instance in which the terminals form an independent set, allowing approximation algorithms that generalize those for bipartite graphs
Split graph, a graph in which the vertices can be partitioned into two subsets, one of which is independent and the other of which is a clique
Zarankiewicz problem on the maximum number of edges in a bipartite graph with forbidden subgraphs
== References ==
== External links ==
"Graph, bipartite", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Information System on Graph Classes and their Inclusions: bipartite graph
Weisstein, Eric W., "Bipartite Graph", MathWorld
Bipartite graphs in systems biology and medicine
In graph theory, the lexicographic product or (graph) composition G ∙ H of graphs G and H is a graph such that
the vertex set of G ∙ H is the cartesian product V(G) × V(H); and
any two vertices (u,v) and (x,y) are adjacent in G ∙ H if and only if either u is adjacent to x in G or u = x and v is adjacent to y in H.
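The definition translates directly into code. A minimal Python sketch, representing a graph as a pair of a vertex set and a set of frozenset edges (this representation is illustrative, not from the sources below):

```python
from itertools import product

def lexicographic_product(g, h):
    """Lexicographic product of two undirected graphs, each given as
    (vertices, edges) with edges stored as frozensets of endpoints."""
    gv, ge = g
    hv, he = h
    vertices = set(product(gv, hv))
    edges = set()
    for (u, v), (x, y) in product(vertices, vertices):
        if (u, v) == (x, y):
            continue
        # adjacent iff u ~ x in G, or u = x and v ~ y in H
        if frozenset((u, x)) in ge or (u == x and frozenset((v, y)) in he):
            edges.add(frozenset(((u, v), (x, y))))
    return vertices, edges
```

For example, taking G to be a single edge and H to be two isolated vertices produces the complete bipartite graph K2,2, since every vertex of one copy of H is joined to every vertex of the other.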
If the edge relations of the two graphs are order relations, then the edge relation of their lexicographic product is the corresponding lexicographic order.
The lexicographic product was first studied by Felix Hausdorff (1914). As Feigenbaum & Schäffer (1986) showed, the problem of recognizing whether a graph is a lexicographic product is equivalent in complexity to the graph isomorphism problem.
== Properties ==
The lexicographic product is in general noncommutative: G ∙ H ≠ H ∙ G. However it satisfies a distributive law with respect to disjoint union: (A + B) ∙ C = A ∙ C + B ∙ C.
In addition it satisfies an identity with respect to complementation: C(G ∙ H) = C(G) ∙ C(H). In particular, the lexicographic product of two self-complementary graphs is self-complementary.
The independence number of a lexicographic product may be easily calculated from that of its factors:
α(G ∙ H) = α(G)α(H).
The clique number of a lexicographic product is likewise multiplicative:
ω(G ∙ H) = ω(G)ω(H).
The chromatic number of a lexicographic product is equal to the b-fold chromatic number of G, for b equal to the chromatic number of H:
χ(G ∙ H) = χb(G), where b = χ(H).
The lexicographic product of two graphs is a perfect graph if and only if both factors are perfect.
== Notes ==
== References ==
Feigenbaum, J.; Schäffer, A. A. (1986), "Recognizing composite graphs is equivalent to testing graph isomorphism", SIAM Journal on Computing, 15 (2): 619–627, doi:10.1137/0215045, MR 0837609.
Geller, D.; Stahl, S. (1975), "The chromatic number and other functions of the lexicographic product", Journal of Combinatorial Theory, Series B, 19: 87–95, doi:10.1016/0095-8956(75)90076-3, MR 0392645.
Hausdorff, F. (1914), Grundzüge der Mengenlehre, Leipzig
Imrich, Wilfried; Klavžar, Sandi (2000), Product Graphs: Structure and Recognition, Wiley, ISBN 0-471-37039-8
Ravindra, G.; Parthasarathy, K. R. (1977), "Perfect product graphs", Discrete Mathematics, 20 (2): 177–186, doi:10.1016/0012-365X(77)90056-5, hdl:10338.dmlcz/102469, MR 0491304.
== External links ==
Weisstein, Eric W. "Graph Lexicographic Product". MathWorld.
In mathematics, particularly graph theory, and computer science, a directed acyclic graph (DAG) is a directed graph with no directed cycles. That is, it consists of vertices and edges (also called arcs), with each edge directed from one vertex to another, such that following those directions will never form a closed loop. A directed graph is a DAG if and only if it can be topologically ordered, by arranging the vertices as a linear ordering that is consistent with all edge directions. DAGs have numerous scientific and computational applications, ranging from biology (evolution, family trees, epidemiology) to information science (citation networks) to computation (scheduling).
Directed acyclic graphs are also called acyclic directed graphs or acyclic digraphs.
== Definitions ==
A graph is formed by vertices and by edges connecting pairs of vertices, where the vertices can be any kind of object that is connected in pairs by edges. In the case of a directed graph, each edge has an orientation, from one vertex to another vertex. A path in a directed graph is a sequence of edges having the property that the ending vertex of each edge in the sequence is the same as the starting vertex of the next edge in the sequence; a path forms a cycle if the starting vertex of its first edge equals the ending vertex of its last edge. A directed acyclic graph is a directed graph that has no cycles.
A vertex v of a directed graph is said to be reachable from another vertex u when there exists a path that starts at u and ends at v. As a special case, every vertex is considered to be reachable from itself (by a path with zero edges). If a vertex can reach itself via a nontrivial path (a path with one or more edges), then that path is a cycle, so another way to define directed acyclic graphs is that they are the graphs in which no vertex can reach itself via a nontrivial path.
== Mathematical properties ==
=== Reachability relation, transitive closure, and transitive reduction ===
The reachability relation of a DAG can be formalized as a partial order ≤ on the vertices of the DAG. In this partial order, two vertices u and v are ordered as u ≤ v exactly when there exists a directed path from u to v in the DAG; that is, when u can reach v (or v is reachable from u). However, different DAGs may give rise to the same reachability relation and the same partial order. For example, a DAG with two edges u → v and v → w has the same reachability relation as the DAG with three edges u → v, v → w, and u → w. Both of these DAGs produce the same partial order, in which the vertices are ordered as u ≤ v ≤ w.
The transitive closure of a DAG is the graph with the most edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the reachability relation ≤ of the DAG, and may therefore be thought of as a direct translation of the reachability relation ≤ into graph-theoretic terms. The same method of translating partial orders into DAGs works more generally: for every finite partially ordered set (S, ≤), the graph that has a vertex for every element of S and an edge for every pair of elements in ≤ is automatically a transitively closed DAG, and has (S, ≤) as its reachability relation. In this way, every finite partially ordered set can be represented as a DAG.
The transitive reduction of a DAG is the graph with the fewest edges that has the same reachability relation as the DAG. It has an edge u → v for every pair of vertices (u, v) in the covering relation of the reachability relation ≤ of the DAG. It is a subgraph of the DAG, formed by discarding the edges u → v for which the DAG also contains a longer directed path from u to v.
Like the transitive closure, the transitive reduction is uniquely defined for DAGs. In contrast, for a directed graph that is not acyclic, there can be more than one minimal subgraph with the same reachability relation. Transitive reductions are useful in visualizing the partial orders they represent, because they have fewer edges than other graphs representing the same orders and therefore lead to simpler graph drawings. A Hasse diagram of a partial order is a drawing of the transitive reduction in which the orientation of every edge is shown by placing the starting vertex of the edge in a lower position than its ending vertex.
=== Topological ordering ===
A topological ordering of a directed graph is an ordering of its vertices into a sequence, such that for every edge the start vertex of the edge occurs earlier in the sequence than the ending vertex of the edge. A graph that has a topological ordering cannot have any cycles, because the edge into the earliest vertex of a cycle would have to be oriented the wrong way. Therefore, every graph with a topological ordering is acyclic. Conversely, every directed acyclic graph has at least one topological ordering. The existence of a topological ordering can therefore be used as an equivalent definition of directed acyclic graphs: they are exactly the graphs that have topological orderings.
In general, this ordering is not unique; a DAG has a unique topological ordering if and only if it has a directed path containing all the vertices, in which case the ordering is the same as the order in which the vertices appear in the path.
The family of topological orderings of a DAG is the same as the family of linear extensions of the reachability relation for the DAG, so any two graphs representing the same partial order have the same set of topological orders.
=== Combinatorial enumeration ===
The graph enumeration problem of counting directed acyclic graphs was studied by Robinson (1973).
The number of DAGs on n labeled vertices, for n = 0, 1, 2, 3, … (without restrictions on the order in which these numbers appear in a topological ordering of the DAG) is
1, 1, 3, 25, 543, 29281, 3781503, … (sequence A003024 in the OEIS).
These numbers may be computed by the recurrence relation
{\displaystyle a_{n}=\sum _{k=1}^{n}(-1)^{k-1}{n \choose k}2^{k(n-k)}a_{n-k}.}
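The recurrence translates directly into a short computation. A minimal Python sketch (the function name is illustrative):

```python
from math import comb

def count_labeled_dags(n_max):
    """Numbers of DAGs on 0..n_max labeled vertices, computed from the
    recurrence a_n = sum_{k=1}^{n} (-1)^(k-1) C(n,k) 2^(k(n-k)) a_{n-k}."""
    a = [1]  # a_0 = 1: the empty graph
    for n in range(1, n_max + 1):
        a.append(sum((-1) ** (k - 1) * comb(n, k)
                     * 2 ** (k * (n - k)) * a[n - k]
                     for k in range(1, n + 1)))
    return a
```

Running it reproduces the sequence above: 1, 1, 3, 25, 543, 29281, ….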
Eric W. Weisstein conjectured, and McKay et al. (2004) proved, that the same numbers count the (0,1) matrices for which all eigenvalues are positive real numbers. The proof is bijective: a matrix A is an adjacency matrix of a DAG if and only if A + I is a (0,1) matrix with all eigenvalues positive, where I denotes the identity matrix. Because a DAG cannot have self-loops, its adjacency matrix must have a zero diagonal, so adding I preserves the property that all matrix coefficients are 0 or 1.
=== Related families of graphs ===
A multitree (also called a strongly unambiguous graph or a mangrove) is a DAG in which there is at most one directed path between any two vertices. Equivalently, it is a DAG in which the subgraph reachable from any vertex induces an undirected tree.
A polytree (also called a directed tree) is a multitree formed by orienting the edges of an undirected tree.
An arborescence is a polytree formed by orienting the edges of an undirected tree away from a particular vertex, called the root of the arborescence.
== Computational problems ==
=== Topological sorting and recognition ===
Topological sorting is the algorithmic problem of finding a topological ordering of a given DAG. It can be solved in linear time. Kahn's algorithm for topological sorting builds the vertex ordering directly. It maintains a list of vertices that have no incoming edges from other vertices that have not already been included in the partially constructed topological ordering; initially this list consists of the vertices with no incoming edges at all. Then, it repeatedly adds one vertex from this list to the end of the partially constructed topological ordering, and checks whether its neighbors should be added to the list. The algorithm terminates when all vertices have been processed in this way. Alternatively, a topological ordering may be constructed by reversing a postorder numbering of a depth-first search graph traversal.
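Kahn's algorithm as described above can be sketched as follows, assuming the DAG is given as an adjacency dict mapping each vertex to its successors (an illustrative representation):

```python
from collections import deque

def kahn_topological_sort(adj):
    """Kahn's algorithm. Returns a topological order of the vertices,
    or None if the graph contains a directed cycle."""
    indegree = {v: 0 for v in adj}
    for u in adj:
        for v in adj[u]:
            indegree[v] += 1
    # Initially, the list of vertices with no incoming edges at all.
    ready = deque(v for v in adj if indegree[v] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1          # u is now "already included"
            if indegree[v] == 0:
                ready.append(v)
    # If a cycle exists, its vertices never reach indegree zero.
    return order if len(order) == len(adj) else None
```

The final length check is exactly the recognition test described below: the algorithm orders all vertices if and only if the input is acyclic.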
It is also possible to check whether a given directed graph is a DAG in linear time, either by attempting to find a topological ordering and then testing for each edge whether the resulting ordering is valid, or alternatively, for some topological sorting algorithms, by verifying that the algorithm successfully orders all the vertices without meeting an error condition.
=== Construction from cyclic graphs ===
Any undirected graph may be made into a DAG by choosing a total order for its vertices and directing every edge from the earlier endpoint in the order to the later endpoint. The resulting orientation of the edges is called an acyclic orientation. Different total orders may lead to the same acyclic orientation, so an n-vertex graph can have fewer than n! acyclic orientations. The number of acyclic orientations is equal to |χ(−1)|, where χ is the chromatic polynomial of the given graph.
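The orientation step is a one-liner in practice. A minimal Python sketch (names are illustrative):

```python
def acyclic_orientation(vertices_in_order, undirected_edges):
    """Orient each undirected edge from its earlier endpoint to its
    later endpoint in the given total order. Every directed edge then
    goes "forward" in the order, so no directed cycle can exist."""
    position = {v: i for i, v in enumerate(vertices_in_order)}
    return {(u, v) if position[u] < position[v] else (v, u)
            for u, v in undirected_edges}
```

Different total orders can yield the same orientation, which is why the number of acyclic orientations, |χ(−1)|, can be smaller than n!.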
Any directed graph may be made into a DAG by removing a feedback vertex set or a feedback arc set, a set of vertices or edges (respectively) that touches all cycles. However, the smallest such set is NP-hard to find. An arbitrary directed graph may also be transformed into a DAG, called its condensation, by contracting each of its strongly connected components into a single supervertex. When the graph is already acyclic, its smallest feedback vertex sets and feedback arc sets are empty, and its condensation is the graph itself.
=== Transitive closure and transitive reduction ===
The transitive closure of a given DAG, with n vertices and m edges, may be constructed in time O(mn) by using either breadth-first search or depth-first search to test reachability from each vertex. Alternatively, it can be solved in time O(nω) where ω < 2.373 is the exponent for matrix multiplication algorithms; this is a theoretical improvement over the O(mn) bound for dense graphs.
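The O(mn) search-based approach amounts to one graph traversal per vertex. A minimal Python sketch using depth-first search (adjacency-dict representation is illustrative):

```python
def transitive_closure(adj):
    """Reachability sets for a DAG given as an adjacency dict.
    For each vertex, returns the set of vertices reachable from it by
    a path of one or more edges. One O(m) traversal per vertex."""
    closure = {}
    for start in adj:
        reached = set()
        stack = list(adj[start])
        while stack:
            v = stack.pop()
            if v not in reached:
                reached.add(v)
                stack.extend(adj[v])
        closure[start] = reached
    return closure
```

The closure's edge set is then {(u, v) : v in closure[u]}, matching the definition of the reachability relation given earlier.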
In all of these transitive closure algorithms, it is possible to distinguish pairs of vertices that are reachable by at least one path of length two or more from pairs that can only be connected by a length-one path. The transitive reduction consists of the edges that form length-one paths that are the only paths connecting their endpoints. Therefore, the transitive reduction can be constructed in the same asymptotic time bounds as the transitive closure.
=== Closure problem ===
The closure problem takes as input a vertex-weighted directed acyclic graph and seeks the minimum (or maximum) weight of a closure – a set of vertices C, such that no edges leave C. The problem may be formulated for directed graphs without the assumption of acyclicity, but with no greater generality, because in this case it is equivalent to the same problem on the condensation of the graph. It may be solved in polynomial time using a reduction to the maximum flow problem.
=== Path algorithms ===
Some algorithms become simpler when used on DAGs instead of general graphs, based on the principle of topological ordering. For example, it is possible to find shortest paths and longest paths from a given starting vertex in DAGs in linear time by processing the vertices in a topological order, and calculating the path length for each vertex to be the minimum or maximum length obtained via any of its incoming edges. In contrast, for arbitrary graphs the shortest path may require slower algorithms such as Dijkstra's algorithm or the Bellman–Ford algorithm, and longest paths in arbitrary graphs are NP-hard to find.
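The longest-path case can be sketched concisely. This minimal Python version counts path length in edges (the unweighted case) and computes its own topological order from a reversed depth-first postorder; the representation and names are illustrative:

```python
def dag_longest_paths(adj, source):
    """Longest-path distances (in edges) from `source` in a DAG given
    as an adjacency dict mapping vertex -> list of successors."""
    order, seen = [], set()

    def visit(u):  # depth-first search; reversed postorder is topological
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                visit(v)
        order.append(u)

    for u in adj:
        if u not in seen:
            visit(u)
    order.reverse()

    dist = {v: None for v in adj}   # None marks "not reachable"
    dist[source] = 0
    for u in order:                 # relax edges in topological order
        if dist[u] is None:
            continue
        for v in adj[u]:
            if dist[v] is None or dist[u] + 1 > dist[v]:
                dist[v] = dist[u] + 1
    return dist
```

Replacing the maximization with a minimization (and edge counts with edge weights) gives shortest paths in the same linear time, which is the simplification unavailable for arbitrary graphs.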
== Applications ==
=== Scheduling ===
Directed acyclic graph representations of partial orderings have many applications in scheduling for systems of tasks with ordering constraints.
An important class of problems of this type concern collections of objects that need to be updated, such as the cells of a spreadsheet after one of the cells has been changed, or the object files of a piece of computer software after its source code has been changed.
In this context, a dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. A cycle in this graph is called a circular dependency, and is generally not allowed, because there would be no way to consistently schedule the tasks involved in the cycle.
Dependency graphs without circular dependencies form DAGs.
For instance, when one cell of a spreadsheet changes, it is necessary to recalculate the values of other cells that depend directly or indirectly on the changed cell. For this problem, the tasks to be scheduled are the recalculations of the values of individual cells of the spreadsheet. Dependencies arise when an expression in one cell uses a value from another cell. In such a case, the value that is used must be recalculated earlier than the expression that uses it. Topologically ordering the dependency graph, and using this topological order to schedule the cell updates, allows the whole spreadsheet to be updated with only a single evaluation per cell. Similar problems of task ordering arise in makefiles for program compilation and instruction scheduling for low-level computer program optimization.
A somewhat different DAG-based formulation of scheduling constraints is used by the program evaluation and review technique (PERT), a method for management of large human projects that was one of the first applications of DAGs. In this method, the vertices of a DAG represent milestones of a project rather than specific tasks to be performed. Instead, a task or activity is represented by an edge of a DAG, connecting two milestones that mark the beginning and completion of the task. Each such edge is labeled with an estimate for the amount of time that it will take a team of workers to perform the task. The longest path in this DAG represents the critical path of the project, the one that controls the total time for the project. Individual milestones can be scheduled according to the lengths of the longest paths ending at their vertices.
=== Data processing networks ===
A directed acyclic graph may be used to represent a network of processing elements. In this representation, data enters a processing element through its incoming edges and leaves the element through its outgoing edges.
For instance, in electronic circuit design, static combinational logic blocks can be represented as an acyclic system of logic gates that computes a function of an input, where the input and output of the function are represented as individual bits. In general, the output of these blocks cannot be used as the input unless it is captured by a register or state element which maintains its acyclic properties. Electronic circuit schematics either on paper or in a database are a form of directed acyclic graphs using instances or components to form a directed reference to a lower level component. Electronic circuits themselves are not necessarily acyclic or directed.
Dataflow programming languages describe systems of operations on data streams, and the connections between the outputs of some operations and the inputs of others. These languages can be convenient for describing repetitive data processing tasks, in which the same acyclically-connected collection of operations is applied to many data items. They can be executed as a parallel algorithm in which each operation is performed by a parallel process as soon as another set of inputs becomes available to it.
In compilers, straight line code (that is, sequences of statements without loops or conditional branches) may be represented by a DAG describing the inputs and outputs of each of the arithmetic operations performed within the code. This representation allows the compiler to perform common subexpression elimination efficiently. At a higher level of code organization, the acyclic dependencies principle states that the dependencies between modules or components of a large software system should form a directed acyclic graph.
Feedforward neural networks are another example.
=== Causal structures ===
Graphs in which vertices represent events occurring at a definite time, and where the edges always point from an earlier time vertex to a later time vertex, are necessarily directed and acyclic. The lack of a cycle follows because the time associated with a vertex always increases as one follows any directed path in the graph, so one can never return to a vertex on a path. This reflects our natural intuition that causality means events can only affect the future, they never affect the past, and thus we have no causal loops. Examples of this type of directed acyclic graph are those encountered in the causal set approach to quantum gravity, though in this case the graphs considered are transitively complete. In the version history example below, each version of the software is associated with a unique time, typically the time the version was saved, committed or released. In the citation graph examples below, the documents are published at one time and can only refer to older documents.
Sometimes events are not associated with a specific physical time. Provided that pairs of events have a purely causal relationship, that is edges represent causal relations between the events, we will have a directed acyclic graph. For instance, a Bayesian network represents a system of probabilistic events as vertices in a directed acyclic graph, in which the likelihood of an event may be calculated from the likelihoods of its predecessors in the DAG. In this context, the moral graph of a DAG is the undirected graph created by adding an (undirected) edge between all parents of the same vertex (sometimes called marrying), and then replacing all directed edges by undirected edges. Another type of graph with a similar causal structure is an influence diagram, the vertices of which represent either decisions to be made or unknown information, and the edges of which represent causal influences from one vertex to another. In epidemiology, for instance, these diagrams are often used to estimate the expected value of different choices for intervention.
The converse is also true. That is, in any application represented by a directed acyclic graph there is a causal structure: either an explicit order or time in the example, or an order that can be derived from the graph structure. This follows because all directed acyclic graphs have a topological ordering, i.e. there is at least one way to put the vertices in an order such that all edges point in the same direction along that order.
=== Genealogy and version history ===
Family trees may be seen as directed acyclic graphs, with a vertex for each family member and an edge for each parent-child relationship. Despite the name, these graphs are not necessarily trees because of the possibility of marriages between relatives (so a child has a common ancestor on both the mother's and father's side) causing pedigree collapse. The graphs of matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) are trees within this graph. Because no one can become their own ancestor, family trees are acyclic.
The version history of a distributed revision control system, such as Git, generally has the structure of a directed acyclic graph, in which there is a vertex for each revision and an edge connecting pairs of revisions that were directly derived from each other. These are not trees in general due to merges.
In many randomized algorithms in computational geometry, the algorithm maintains a history DAG representing the version history of a geometric structure over the course of a sequence of changes to the structure. For instance in a randomized incremental algorithm for Delaunay triangulation, the triangulation changes by replacing one triangle by three smaller triangles when each point is added, and by "flip" operations that replace pairs of triangles by a different pair of triangles. The history DAG for this algorithm has a vertex for each triangle constructed as part of the algorithm, and edges from each triangle to the two or three other triangles that replace it. This structure allows point location queries to be answered efficiently: to find the location of a query point q in the Delaunay triangulation, follow a path in the history DAG, at each step moving to the replacement triangle that contains q. The final triangle reached in this path must be the Delaunay triangle that contains q.
=== Citation graphs ===
In a citation graph the vertices are documents with a single publication date. The edges represent the citations from the bibliography of one document to other necessarily earlier documents. The classic example comes from the citations between academic papers as pointed out in the 1965 article "Networks of Scientific Papers" by Derek J. de Solla Price who went on to produce the first model of a citation network, the Price model. In this case the citation count of a paper is just the in-degree of the corresponding vertex of the citation network. This is an important measure in citation analysis. Court judgements provide another example as judges support their conclusions in one case by recalling other earlier decisions made in previous cases. A final example is provided by patents which must refer to earlier prior art, earlier patents which are relevant to the current patent claim. By taking the special properties of directed acyclic graphs into account, one can analyse citation networks with techniques not available when analysing the general graphs considered in many studies using network analysis. For instance transitive reduction gives new insights into the citation distributions found in different applications highlighting clear differences in the mechanisms creating citations networks in different contexts. Another technique is main path analysis, which traces the citation links and suggests the most significant citation chains in a given citation graph.
The Price model is too simple to be a realistic model of a citation network but it is simple enough to allow for analytic solutions for some of its properties. Many of these can be found by using results derived from the undirected version of the Price model, the Barabási–Albert model. However, since Price's model gives a directed acyclic graph, it is a useful model when looking for analytic calculations of properties unique to directed acyclic graphs. For instance,
the length of the longest path, from the n-th node added to the network to the first node in the network, scales as ln(n).
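A sketch of how such a longest path back to the first node can be computed: since each node cites only earlier nodes, the insertion order is already a reverse topological order (the toy citation lists are invented for the example):

```python
def longest_path_lengths(cites, order):
    """cites[v] lists the earlier nodes that v points to; order is the
    order in which nodes were added (a reverse topological order)."""
    longest = {}
    for node in order:
        refs = cites[node]
        # longest path from node back to a source = 1 + best among cited nodes
        longest[node] = 1 + max(longest[r] for r in refs) if refs else 0
    return longest

# Toy network: node n cites some earlier nodes.
cites = {0: [], 1: [0], 2: [0, 1], 3: [2]}
lengths = longest_path_lengths(cites, [0, 1, 2, 3])
```

In this toy network the longest path from node 3 back to node 0 has length 3.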
=== Data compression ===
Directed acyclic graphs may also be used as a compact representation of a collection of sequences. In this type of application, one finds a DAG in which the paths form the given sequences. When many of the sequences share the same subsequences, these shared subsequences can be represented by a shared part of the DAG, allowing the representation to use less space than it would take to list out all of the sequences separately. For example, the directed acyclic word graph is a data structure in computer science formed by a directed acyclic graph with a single source and with edges labeled by letters or symbols; the paths from the source to the sinks in this graph represent a set of strings, such as English words. Any set of sequences can be represented as paths in a tree, by forming a tree vertex for every prefix of a sequence and making the parent of one of these vertices represent the sequence with one fewer element; the tree formed in this way for a set of strings is called a trie. A directed acyclic word graph saves space over a trie by allowing paths to diverge and rejoin, so that a set of words with the same possible suffixes can be represented by a single tree vertex.
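A minimal sketch of the space saving: build a trie and then merge isomorphic subtrees by interning each subtree's canonical signature, which is essentially how a directed acyclic word graph shares common suffixes (the word list and helper names are invented for the example):

```python
def make_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = {}  # end-of-word marker
    return root

def count_nodes(node):
    return 1 + sum(count_nodes(child) for child in node.values())

def minimize(node, registry):
    """Assign one integer id per distinct subtree; shared ids = shared DAWG nodes."""
    sig = tuple(sorted((ch, minimize(child, registry))
                       for ch, child in node.items()))
    return registry.setdefault(sig, len(registry))

trie = make_trie(["tap", "taps", "top", "tops"])
registry = {}
minimize(trie, registry)
# The trie has 12 nodes; merging shared suffixes leaves 6 distinct nodes.
```

The words "tap(s)" and "top(s)" share the suffix structure "p", "ps", so their subtrees collapse to a single shared part of the DAG.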
The same idea of using a DAG to represent a family of paths occurs in the binary decision diagram, a DAG-based data structure for representing binary functions. In a binary decision diagram, each non-sink vertex is labeled by the name of a binary variable, and each sink and each edge is labeled by a 0 or 1. The function value for any truth assignment to the variables is the value at the sink found by following a path, starting from the single source vertex, that at each non-sink vertex follows the outgoing edge labeled with the value of that vertex's variable. Just as directed acyclic word graphs can be viewed as a compressed form of tries, binary decision diagrams can be viewed as compressed forms of decision trees that save space by allowing paths to rejoin when they agree on the results of all remaining decisions.
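The evaluation rule described above can be sketched directly, with nodes as plain tuples (this representation is invented for illustration, not a standard BDD library API):

```python
LEAF0, LEAF1 = ("leaf", 0), ("leaf", 1)

def bdd_eval(node, assignment):
    """Follow one path from the root: at each variable node take the
    outgoing edge labeled with that variable's value; return the sink value."""
    while node[0] == "var":
        _, name, lo, hi = node
        node = hi if assignment[name] else lo
    return node[1]

# BDD for the function (x AND y); note the 0-sink is shared by both paths.
y_node = ("var", "y", LEAF0, LEAF1)
root = ("var", "x", LEAF0, y_node)
```

Sharing the 0-sink between the two variable nodes is the DAG compression at work: a decision tree for the same function would duplicate it.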
== References ==
== External links ==
Weisstein, Eric W., "Acyclic Digraph", MathWorld
DAGitty – an online tool for creating DAGs | Wikipedia/Directed_acyclic_graph |
In graph theory, a cycle graph or circular graph is a graph that consists of a single cycle, or in other words, some number of vertices (at least 3, if the graph is simple) connected in a closed chain. The cycle graph with n vertices is called Cn. The number of vertices in Cn equals the number of edges, and every vertex has degree 2; that is, every vertex has exactly two edges incident with it.
If n = 1, it is an isolated loop.
== Terminology ==
There are many synonyms for "cycle graph". These include simple cycle graph and cyclic graph, although the latter term is less often used, because it can also refer to graphs which are merely not acyclic. Among graph theorists, cycle, polygon, or n-gon are also often used. The term n-cycle is sometimes used in other settings.
A cycle with an even number of vertices is called an even cycle; a cycle with an odd number of vertices is called an odd cycle.
== Properties ==
A cycle graph is:
2-edge colorable, if and only if it has an even number of vertices
2-regular
2-vertex colorable, if and only if it has an even number of vertices. More generally, a graph is bipartite if and only if it has no odd cycles (Kőnig, 1936).
Connected
Eulerian
Hamiltonian
A unit distance graph
In addition:
As cycle graphs can be drawn as regular polygons, the symmetries of an n-cycle are the same as those of a regular polygon with n sides, the dihedral group of order 2n. In particular, there exist symmetries taking any vertex to any other vertex, and any edge to any other edge, so the n-cycle is a symmetric graph.
Similarly to the Platonic graphs, the cycle graphs form the skeletons of the dihedra. Their duals are the dipole graphs, which form the skeletons of the hosohedra.
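The 2-colorability property above is easy to check directly: a breadth-first 2-coloring succeeds exactly when the cycle has an even number of vertices (a minimal sketch; the adjacency-dict representation is an assumption of the example):

```python
from collections import deque

def cycle_graph(n):
    """Adjacency dict of the cycle graph C_n (n >= 3)."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def is_two_colorable(adj):
    """BFS 2-coloring; fails iff the graph contains an odd cycle."""
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    q.append(w)
                elif colour[w] == colour[v]:
                    return False
    return True
```

As expected, C4 and C6 are bipartite while C5 is not.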
== Directed cycle graph ==
A directed cycle graph is a directed version of a cycle graph, with all the edges being oriented in the same direction.
In a directed graph, a set of edges which contains at least one edge (or arc) from each directed cycle is called a feedback arc set. Similarly, a set of vertices containing at least one vertex from each directed cycle is called a feedback vertex set.
A directed cycle graph has uniform in-degree 1 and uniform out-degree 1.
Directed cycle graphs are Cayley graphs for cyclic groups (see e.g. Trevisan).
== See also ==
Complete bipartite graph
Complete graph
Circulant graph
Cycle graph (algebra)
Null graph
Path graph
== References ==
== Sources ==
Diestel, Reinhard (2017). Graph Theory (5 ed.). Springer. ISBN 978-3-662-53621-6.
== External links ==
Weisstein, Eric W. "Cycle Graph". MathWorld. (discussion of both 2-regular cycle graphs and the group-theoretic concept of cycle diagrams)
Luca Trevisan, Characters and Expansion. | Wikipedia/Cycle_graph |
In the mathematical field of graph theory, a distance-regular graph is a regular graph such that for any two vertices v and w, the number of vertices at distance j from v and at distance k from w depends only upon j, k, and the distance between v and w.
Some authors exclude the complete graphs and disconnected graphs from this definition.
Every distance-transitive graph is distance regular. Indeed, distance-regular graphs were introduced as a combinatorial generalization of distance-transitive graphs, having the numerical regularity properties of the latter without necessarily having a large automorphism group.
== Intersection arrays ==
The intersection array of a distance-regular graph is the array (b_0, b_1, …, b_{d−1}; c_1, …, c_d) in which d is the diameter of the graph and, for each 1 ≤ j ≤ d, b_j gives the number of neighbours of u at distance j + 1 from v and c_j gives the number of neighbours of u at distance j − 1 from v, for any pair of vertices u and v at distance j. There is also the number a_j that gives the number of neighbours of u at distance j from v. The numbers a_j, b_j, c_j are called the intersection numbers of the graph. They satisfy the equation a_j + b_j + c_j = k, where k = b_0 is the valency, i.e., the number of neighbours, of any vertex.
It turns out that a graph G of diameter d is distance regular if and only if it has an intersection array in the preceding sense.
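The defining condition can be checked mechanically: compute all pairwise distances by breadth-first search and verify that b_j and c_j depend only on j. A sketch under the assumption that the graph is given as an adjacency dict (function names are invented):

```python
from collections import deque

def distances_from(adj, s):
    """BFS distances from vertex s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def intersection_array(adj):
    """Return (b, c) if adj is a connected distance-regular graph, else None."""
    verts = list(adj)
    d = {v: distances_from(adj, v) for v in verts}
    diam = max(max(dv.values()) for dv in d.values())
    b = [None] * diam          # b_0 .. b_{diam-1}
    c = [None] * (diam + 1)    # c_j stored at index j, for j = 1 .. diam
    for v in verts:
        for u in verts:
            j = d[v][u]
            bj = sum(1 for w in adj[u] if d[v][w] == j + 1)
            cj = sum(1 for w in adj[u] if d[v][w] == j - 1)
            if j < diam:
                if b[j] is None:
                    b[j] = bj
                elif b[j] != bj:        # b_j depends on more than j: not DR
                    return None
            if j >= 1:
                if c[j] is None:
                    c[j] = cj
                elif c[j] != cj:
                    return None
    return b, c[1:]
```

For example, the 5-cycle is distance-regular with intersection array (2, 1; 1, 1), while the path on three vertices is not distance-regular.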
== Cospectral and disconnected distance-regular graphs ==
A pair of connected distance-regular graphs are cospectral if their adjacency matrices have the same spectrum. This is equivalent to their having the same intersection array.
A distance-regular graph is disconnected if and only if it is a disjoint union of cospectral distance-regular graphs.
== Properties ==
Suppose G is a connected distance-regular graph of valency k with intersection array (b_0, b_1, …, b_{d−1}; c_1, …, c_d). For each 0 ≤ j ≤ d, let k_j denote the number of vertices at distance j from any given vertex and let G_j denote the k_j-regular graph with adjacency matrix A_j formed by relating pairs of vertices of G at distance j.
=== Graph-theoretic properties ===
k_{j+1}/k_j = b_j/c_{j+1} for all 0 ≤ j < d.
b_0 > b_1 ≥ ⋯ ≥ b_{d−1} > 0 and 1 = c_1 ≤ ⋯ ≤ c_d ≤ b_0.
=== Spectral properties ===
G has d + 1 distinct eigenvalues.
The only simple eigenvalue of G is k, or both k and −k if G is bipartite.
k ≤ (m − 1)(m + 2)/2 for any eigenvalue multiplicity m > 1 of G, unless G is a complete multipartite graph.
d ≤ 3m − 4 for any eigenvalue multiplicity m > 1 of G, unless G is a cycle graph or a complete multipartite graph.
If G is strongly regular, then n ≤ 4m − 1 and k ≤ 2m − 1.
== Examples ==
Some first examples of distance-regular graphs include:
The complete graphs.
The cycle graphs.
The odd graphs.
The Moore graphs.
The collinearity graph of a regular near polygon.
The Wells graph and the Sylvester graph.
Strongly regular graphs are the distance-regular graphs of diameter 2.
== Classification of distance-regular graphs ==
There are only finitely many distinct connected distance-regular graphs of any given valency k > 2.
Similarly, there are only finitely many distinct connected distance-regular graphs with any given eigenvalue multiplicity m > 2 (with the exception of the complete multipartite graphs).
=== Cubic distance-regular graphs ===
The cubic distance-regular graphs have been completely classified.
The 13 distinct cubic distance-regular graphs are K4 (or Tetrahedral graph), K3,3, the Petersen graph, the Cubical graph, the Heawood graph, the Pappus graph, the Coxeter graph, the Tutte–Coxeter graph, the Dodecahedral graph, the Desargues graph, the Tutte 12-cage, the Biggs–Smith graph, and the Foster graph.
== References ==
== Further reading ==
Godsil, C. D. (1993). Algebraic Combinatorics. Chapman and Hall Mathematics Series. New York: Chapman and Hall. ISBN 978-0-412-04131-0. MR 1220704. | Wikipedia/Distance-regular_graph |
In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into two or more isolated subgraphs. It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network.
== Connected vertices and graphs ==
In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1 (that is, they are the endpoints of a single edge), the vertices are called adjacent.
A graph is said to be connected if every pair of vertices in the graph is connected. This means that there is a path between every pair of vertices. An undirected graph that is not connected is called disconnected. An undirected graph G is therefore disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected or unilateral (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v.
== Components and cuts ==
A connected component is a maximal connected subgraph of an undirected graph. Each vertex belongs to exactly one connected component, as does each edge. A graph is connected if and only if it has exactly one connected component.
The strong components are the maximal strongly connected subgraphs of a directed graph.
A vertex cut or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The vertex connectivity κ(G) (where G is not a complete graph) is the size of a smallest vertex cut. A graph is called k-vertex-connected or k-connected if its vertex connectivity is k or greater.
More precisely, any graph G (complete or not) is said to be k-vertex-connected if it contains at least k + 1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but κ(Kn) = n − 1.
A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v.
2-connectivity is also called biconnectivity and 3-connectivity is also called triconnectivity. A graph G which is connected but not 2-connected is sometimes called separable.
Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric. A graph is called k-edge-connected if its edge connectivity is k or greater.
A graph is said to be maximally connected if its connectivity equals its minimum degree. A graph is said to be maximally edge-connected if its edge-connectivity equals its minimum degree.
=== Super- and hyper-connectivity ===
A graph is said to be super-connected or super-κ if every minimum vertex cut isolates a vertex. A graph is said to be hyper-connected or hyper-κ if the deletion of each minimum vertex cut creates exactly two components, one of which is an isolated vertex. A graph is semi-hyper-connected or semi-hyper-κ if any minimum vertex cut separates the graph into exactly two components.
More precisely: a connected graph G is said to be super-connected or super-κ if all minimum vertex-cuts consist of the vertices adjacent to one (minimum-degree) vertex.
A connected graph G is said to be super-edge-connected or super-λ if all minimum edge-cuts consist of the edges incident on some (minimum-degree) vertex.
A cutset X of G is called a non-trivial cutset if X does not contain the neighborhood N(u) of any vertex u ∉ X. Then the superconnectivity κ_1 of G is
κ_1(G) = min{|X| : X is a non-trivial cutset}.
A non-trivial edge-cut and the edge-superconnectivity λ_1(G) are defined analogously.
== Menger's theorem ==
One of the most important facts about connectivity in graphs is Menger's theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices.
If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex (other than u and v themselves). Similarly, the collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v).
Menger's theorem asserts that for distinct vertices u,v, λ(u, v) equals λ′(u, v), and if u is also not adjacent to v then κ(u, v) equals κ′(u, v). This fact is actually a special case of the max-flow min-cut theorem.
== Computational aspects ==
The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected (for example, by using a disjoint-set data structure), or to count the number of connected components. A simple algorithm might be written in pseudo-code as follows:
Begin at any arbitrary node of the graph G.
Proceed from that node using either depth-first or breadth-first search, counting all nodes reached.
Once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected; otherwise it is disconnected.
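The three steps above translate almost directly into a breadth-first traversal (a sketch; the adjacency-dict input format is an assumption of the example):

```python
from collections import deque

def is_connected(adj):
    """adj: dict mapping each vertex to an iterable of its neighbours."""
    if not adj:
        return True
    start = next(iter(adj))          # begin at any arbitrary node
    seen = {start}
    queue = deque([start])
    while queue:                     # breadth-first search, counting nodes reached
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    # connected iff every node of the graph was reached
    return len(seen) == len(adj)
```

Swapping the deque for a stack (and `popleft` for `pop`) turns this into the depth-first variant mentioned above, with no change to the result.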
By Menger's theorem, for any two vertices u and v in a connected graph G, the numbers κ(u, v) and λ(u, v) can be determined efficiently using the max-flow min-cut algorithm. The connectivity and edge-connectivity of G can then be computed as the minimum values of κ(u, v) and λ(u, v), respectively.
In computational complexity theory, SL is the class of problems log-space reducible to the problem of determining whether two vertices in a graph are connected, which was proved to be equal to L by Omer Reingold in 2004. Hence, undirected graph connectivity may be solved in O(log n) space.
The problem of computing the probability that a Bernoulli random graph is connected is called network reliability and the problem of computing whether two given vertices are connected the ST-reliability problem. Both of these are #P-hard.
=== Number of connected graphs ===
The number of distinct connected labeled graphs with n nodes is tabulated in the On-Line Encyclopedia of Integer Sequences as sequence A001187. The first few non-trivial terms are
== Examples ==
The vertex- and edge-connectivities of a disconnected graph are both 0.
1-connectedness is equivalent to connectedness for graphs of at least two vertices.
The complete graph on n vertices has edge-connectivity equal to n − 1. Every other simple graph on n vertices has strictly smaller edge-connectivity.
In a tree, the local edge-connectivity between any two distinct vertices is 1.
== Bounds on connectivity ==
The vertex-connectivity of a graph is less than or equal to its edge-connectivity. That is, κ(G) ≤ λ(G).
The edge-connectivity for a graph with at least 2 vertices is less than or equal to the minimum degree of the graph because removing all the edges that are incident to a vertex of minimum degree will disconnect that vertex from the rest of the graph.
For a vertex-transitive graph of degree d, we have: 2(d + 1)/3 ≤ κ(G) ≤ λ(G) = d.
For a vertex-transitive graph of degree d ≤ 4, or for any (undirected) minimal Cayley graph of degree d, or for any symmetric graph of degree d, both kinds of connectivity are equal: κ(G) = λ(G) = d.
== Other properties ==
Connectedness is preserved by graph homomorphisms.
If G is connected then its line graph L(G) is also connected.
A graph G is 2-edge-connected if and only if it has an orientation that is strongly connected.
Balinski's theorem states that the polytopal graph (1-skeleton) of a k-dimensional convex polytope is a k-vertex-connected graph. Steinitz's previous theorem that any 3-vertex-connected planar graph is a polytopal graph (Steinitz's theorem) gives a partial converse.
According to a theorem of G. A. Dirac, if a graph is k-connected for k ≥ 2, then for every set of k vertices in the graph there is a cycle that passes through all the vertices in the set. The converse is true when k = 2.
== See also ==
Algebraic connectivity
Cheeger constant (graph theory)
Dynamic connectivity, Disjoint-set data structure
Expander graph
Strength of a graph
== References == | Wikipedia/Connectivity_(graph_theory) |
In graph theory, the Cartesian product G □ H of graphs G and H is a graph such that:
the vertex set of G □ H is the Cartesian product V(G) × V(H); and
two vertices (u,v) and (u' ,v' ) are adjacent in G □ H if and only if either
u = u' and v is adjacent to v' in H, or
v = v' and u is adjacent to u' in G.
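The two adjacency rules above translate directly into a construction (a sketch using dict-of-sets adjacency; the function name is invented):

```python
def cartesian_product(g, h):
    """g, h: dicts mapping each vertex to the set of its neighbours."""
    prod = {(u, v): set() for u in g for v in h}
    for (u, v) in prod:
        for u2 in g[u]:
            prod[(u, v)].add((u2, v))   # v = v' and u adjacent to u' in G
        for v2 in h[v]:
            prod[(u, v)].add((u, v2))   # u = u' and v adjacent to v' in H
    return prod

# K2 square K2 is the 4-cycle: 4 vertices, every vertex of degree 2.
k2 = {0: {1}, 1: {0}}
c4 = cartesian_product(k2, k2)
```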
The Cartesian product of graphs is sometimes called the box product of graphs [Harary 1969].
The operation is associative, as the graphs (F □ G) □ H and F □ (G □ H) are naturally isomorphic.
The operation is commutative as an operation on isomorphism classes of graphs, and more strongly the graphs G □ H and H □ G are naturally isomorphic, but it is not commutative as an operation on labeled graphs.
The notation G × H has often been used for Cartesian products of graphs, but is now more commonly used for another construction known as the tensor product of graphs. The square symbol is intended to be an intuitive and unambiguous notation for the Cartesian product, since it shows visually the four edges resulting from the Cartesian product of two edges.
== Examples ==
The Cartesian product of two edges is a cycle on four vertices: K2□K2 = C4.
The Cartesian product of K2 and a path graph is a ladder graph.
The Cartesian product of two path graphs is a grid graph.
The Cartesian product of n edges is a hypercube: (K_2)^{□n} = Q_n.
Thus, the Cartesian product of two hypercube graphs is another hypercube: Qi□Qj = Qi+j.
The Cartesian product of two median graphs is another median graph.
The graph of vertices and edges of an n-prism is the Cartesian product graph K2□Cn.
The rook's graph is the Cartesian product of two complete graphs.
== Properties ==
If a connected graph is a Cartesian product, it can be factorized uniquely as a product of prime factors, graphs that cannot themselves be decomposed as products of graphs. However, Imrich & Klavžar (2000) describe a disconnected graph that can be expressed in two different ways as a Cartesian product of prime graphs:
(K_1 + K_2 + K_2^2) □ (K_1 + K_2^3) = (K_1 + K_2^2 + K_2^4) □ (K_1 + K_2),
where the plus sign denotes disjoint union and the superscripts denote exponentiation over Cartesian products. This is related to the identity
(1 + x + x^2)(1 + x^3) = (1 + x^2 + x^4)(1 + x) = 1 + x + x^2 + x^3 + x^4 + x^5 = (1 + x)(1 + x + x^2)(1 − x + x^2).
The factors 1 + x^3 and 1 + x^2 + x^4 are not irreducible as polynomials, but their factorizations involve negative coefficients, and thus the corresponding graphs cannot be decomposed. In this sense, the failure of unique factorization on (possibly disconnected) graphs is akin to the statement that the polynomials with nonnegative integer coefficients form a semiring that fails the unique factorization property.
A Cartesian product is vertex transitive if and only if each of its factors is.
A Cartesian product is bipartite if and only if each of its factors is. More generally, the chromatic number of the Cartesian product satisfies the equation
χ(G □ H) = max{χ(G), χ(H)}.
The Hedetniemi conjecture stated a related equality for the tensor product of graphs (it was disproved by Yaroslav Shitov in 2019). The independence number of a Cartesian product is not so easily calculated, but as Vizing (1963) showed it satisfies the inequalities
α(G)α(H) + min{|V(G)| − α(G), |V(H)| − α(H)} ≤ α(G □ H) ≤ min{α(G)|V(H)|, α(H)|V(G)|}.
The Vizing conjecture states that the domination number of a Cartesian product satisfies the inequality
γ(G □ H) ≥ γ(G)γ(H).
The Cartesian product of unit distance graphs is another unit distance graph.
Cartesian product graphs can be recognized efficiently, in linear time.
== Algebraic graph theory ==
Algebraic graph theory can be used to analyse the Cartesian graph product.
If the graph G_1 has n_1 vertices and n_1 × n_1 adjacency matrix A_1, and the graph G_2 has n_2 vertices and n_2 × n_2 adjacency matrix A_2, then the adjacency matrix of the Cartesian product of both graphs is given by
A_{1□2} = A_1 ⊗ I_{n_2} + I_{n_1} ⊗ A_2,
where ⊗ denotes the Kronecker product of matrices and I_n denotes the n × n identity matrix. The adjacency matrix of the Cartesian graph product is therefore the Kronecker sum of the adjacency matrices of the factors.
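The Kronecker-sum formula can be checked numerically on the smallest case, K2 □ K2 = C4 (a pure-Python sketch of the Kronecker product for small 0/1 matrices; the helper names are invented):

```python
def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    p, q = len(b), len(b[0])
    # entry at row i*p + k, column j*q + l is a[i][j] * b[k][l]
    return [[a[i][j] * b[k][l]
             for j in range(len(a[0])) for l in range(q)]
            for i in range(len(a)) for k in range(p)]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[0, 1], [1, 0]]   # adjacency matrix of K2
I = [[1, 0], [0, 1]]
# Kronecker sum: A (x) I + I (x) A
A_prod = mat_add(kron(A, I), kron(I, A))
# A_prod is the adjacency matrix of the 4-cycle C4.
```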
== Category theory ==
Viewing a graph as a category whose objects are the vertices and whose morphisms are the paths in the graph, the cartesian product of graphs corresponds to the funny tensor product of categories. The cartesian product of graphs is one of two graph products that turn the category of graphs and graph homomorphisms into a symmetric closed monoidal category (as opposed to merely symmetric monoidal), the other being the tensor product of graphs. The internal hom [G, H] for the cartesian product of graphs has graph homomorphisms from G to H as vertices and "unnatural transformations" between them as edges.
== History ==
According to Imrich & Klavžar (2000), Cartesian products of graphs were defined in 1912 by Whitehead and Russell. They were repeatedly rediscovered later, notably by Gert Sabidussi (1960).
== Notes ==
== References ==
Aurenhammer, F.; Hagauer, J.; Imrich, W. (1992), "Cartesian graph factorization at logarithmic cost per edge", Computational Complexity, 2 (4): 331–349, doi:10.1007/BF01200428, MR 1215316.
Feigenbaum, Joan; Hershberger, John; Schäffer, Alejandro A. (1985), "A polynomial time algorithm for finding the prime factors of Cartesian-product graphs", Discrete Applied Mathematics, 12 (2): 123–138, doi:10.1016/0166-218X(85)90066-6, MR 0808453.
Hahn, Geňa; Sabidussi, Gert (1997), Graph symmetry: algebraic methods and applications, NATO Advanced Science Institutes Series, vol. 497, Springer, p. 116, ISBN 978-0-7923-4668-5.
Horvat, Boris; Pisanski, Tomaž (2010), "Products of unit distance graphs", Discrete Mathematics, 310 (12): 1783–1792, doi:10.1016/j.disc.2009.11.035, MR 2610282.
Imrich, Wilfried; Klavžar, Sandi (2000), Product Graphs: Structure and Recognition, Wiley, ISBN 0-471-37039-8.
Imrich, Wilfried; Klavžar, Sandi; Rall, Douglas F. (2008), Graphs and their Cartesian Products, A. K. Peters, ISBN 1-56881-429-1.
Imrich, Wilfried; Peterin, Iztok (2007), "Recognizing Cartesian products in linear time", Discrete Mathematics, 307 (3–5): 472–483, doi:10.1016/j.disc.2005.09.038, MR 2287488.
Kaveh, A.; Rahami, H. (2005), "A unified method for eigendecomposition of graph products", Communications in Numerical Methods in Engineering with Biomedical Applications, 21 (7): 377–388, doi:10.1002/cnm.753, MR 2151527.
Sabidussi, G. (1957), "Graphs with given group and given graph-theoretical properties", Canadian Journal of Mathematics, 9: 515–525, doi:10.4153/CJM-1957-060-7, MR 0094810.
Sabidussi, G. (1960), "Graph multiplication", Mathematische Zeitschrift, 72: 446–457, doi:10.1007/BF01162967, hdl:10338.dmlcz/102459, MR 0209177.
Vizing, V. G. (1963), "The Cartesian product of graphs", Vycisl. Sistemy, 9: 30–43, MR 0209178.
Weber, Mark (2013), "Free products of higher operad algebras", TAC, 28 (2): 24–65.
== External links ==
Weisstein, Eric W. "Graph Cartesian Product". MathWorld. | Wikipedia/Cartesian_product_of_graphs |
In graph theory and statistics, a graphon (also known as a graph limit) is a symmetric measurable function W : [0, 1]^2 → [0, 1] that is important in the study of dense graphs. Graphons arise both as a natural notion for the limit of a sequence of dense graphs, and as the fundamental defining objects of exchangeable random graph models. Graphons are tied to dense graphs by the following pair of observations: the random graph models defined by graphons give rise to dense graphs almost surely, and, by the regularity lemma, graphons capture the structure of arbitrarily large dense graphs.
== Statistical formulation ==
A graphon is a symmetric measurable function W : [0, 1]^2 → [0, 1]. Usually a graphon is understood as defining an exchangeable random graph model according to the following scheme:
Each vertex j of the graph is assigned an independent random value u_j ∼ U[0, 1].
Edge (i, j) is independently included in the graph with probability W(u_i, u_j).
A random graph model is an exchangeable random graph model if and only if it can be defined in terms of a (possibly random) graphon in this way.
The model based on a fixed graphon W is sometimes denoted G(n, W), by analogy with the Erdős–Rényi model of random graphs. A graph generated from a graphon W in this way is called a W-random graph.
It follows from this definition and the law of large numbers that, if W ≠ 0, exchangeable random graph models are dense almost surely.
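The two-step sampling scheme above can be sketched as follows (the function name is invented; with a constant graphon this reduces to the Erdős–Rényi model):

```python
import random

def sample_w_random_graph(n, w, seed=None):
    """Sample a W-random graph on n vertices from the graphon w(x, y)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]          # latent value u_j ~ U[0, 1] per vertex
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < w(u[i], u[j]):      # include edge with prob W(u_i, u_j)
                edges.add((i, j))
    return edges
```

With the constant graphon W ≡ 1 every edge appears; with W ≡ 0 none do.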
=== Examples ===
The simplest example of a graphon is W(x, y) ≡ p for some constant p ∈ [0, 1]. In this case the associated exchangeable random graph model is the Erdős–Rényi model G(n, p) that includes each edge independently with probability p.
If we instead start with a graphon that is piecewise constant by:
dividing the unit square into k × k blocks, and
setting W equal to p_{ℓm} on the (ℓ, m)th block,
the resulting exchangeable random graph model is the k-community stochastic block model, a generalization of the Erdős–Rényi model. We can interpret this as a random graph model consisting of k distinct Erdős–Rényi graphs with parameters p_{ℓℓ} respectively, with bigraphs between them where each possible edge between blocks (ℓ, ℓ) and (m, m) is included independently with probability p_{ℓm}.
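The piecewise-constant graphon of a stochastic block model can be written down directly (a sketch; the helper name is invented for the example):

```python
def block_graphon(p):
    """Given a k x k symmetric matrix p of probabilities, return the
    piecewise-constant graphon W that equals p[l][m] on block (l, m)."""
    k = len(p)
    def w(x, y):
        i = min(int(x * k), k - 1)   # which of the k intervals x falls in
        j = min(int(y * k), k - 1)
        return p[i][j]
    return w

# Two communities: dense within each block, sparse between them.
w = block_graphon([[0.9, 0.1], [0.1, 0.9]])
```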
Many other popular random graph models can be understood as exchangeable random graph models defined by some graphon, a detailed survey is included in Orbanz and Roy.
=== Jointly exchangeable adjacency matrices ===
A random graph of size n can be represented as a random n × n adjacency matrix. In order to impose consistency (in the sense of projectivity) between random graphs of different sizes, it is natural to study the sequence of adjacency matrices arising as the upper-left n × n sub-matrices of some infinite array of random variables; this allows us to generate G_n by adding a node to G_{n−1} and sampling the edges (j, n) for j < n. With this perspective, random graphs are defined as random infinite symmetric arrays (X_{ij}).
Following the fundamental importance of exchangeable sequences in classical probability, it is natural to look for an analogous notion in the random graph setting. One such notion is given by jointly exchangeable matrices, i.e. random matrices satisfying
$$(X_{ij}) \ {\overset {d}{=}}\, (X_{\sigma(i)\sigma(j)})$$
for all permutations $\sigma$ of the natural numbers, where ${\overset {d}{=}}$ means equal in distribution. Intuitively, this condition means that the distribution of the random graph is unchanged by a relabeling of its vertices: that is, the labels of the vertices carry no information.
There is a representation theorem for jointly exchangeable random adjacency matrices, analogous to de Finetti's representation theorem for exchangeable sequences. This is a special case of the Aldous–Hoover theorem for jointly exchangeable arrays and, in this setting, asserts that the random matrix $(X_{ij})$ is generated as follows:
sample $u_j \sim U[0,1]$ independently, then
set $X_{ij} = X_{ji} = 1$ independently at random with probability $W(u_i, u_j)$,
where $W : [0,1]^2 \to [0,1]$ is a (possibly random) graphon. That is, a random graph model has a jointly exchangeable adjacency matrix if and only if it is a jointly exchangeable random graph model defined in terms of some graphon.
=== Graphon estimation ===
Due to identifiability issues, it is impossible to estimate exactly either the graphon function $W$ or the node latent positions $u_i$. There are two main directions of graphon estimation: one aims at estimating $W$ up to an equivalence class, the other at estimating the probability matrix induced by $W$.
== Analytic formulation ==
Any graph on $n$ vertices $\{1, 2, \dots, n\}$ can be identified with its adjacency matrix $A_G$. This matrix corresponds to a step function $W_G : [0,1]^2 \to [0,1]$, defined by partitioning $[0,1]$ into intervals $I_1, I_2, \dots, I_n$ such that $I_j$ has interior $\left({\tfrac {j-1}{n}}, {\tfrac {j}{n}}\right)$, and for each $(x,y) \in I_i \times I_j$ setting $W_G(x,y)$ equal to the $(i,j)$th entry of $A_G$. This function $W_G$ is the associated graphon of the graph $G$.
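The associated graphon of a finite graph can be implemented directly from this definition. The following Python sketch (names are illustrative) turns an adjacency matrix into the corresponding step function on $[0,1]^2$.

```python
def associated_graphon(A):
    """Step-function graphon W_G of the graph with adjacency matrix A:
    [0, 1] is split into n equal intervals I_1, ..., I_n, and W_G takes
    the value A[i][j] on the block I_{i+1} x I_{j+1} (0-based indices)."""
    n = len(A)
    def W(x, y):
        i = min(int(x * n), n - 1)  # index of the interval containing x
        j = min(int(y * n), n - 1)  # index of the interval containing y
        return A[i][j]
    return W

# The triangle K_3 has every off-diagonal entry equal to 1.
W = associated_graphon([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```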
In general, if we have a sequence of graphs $(G_n)$ where the number of vertices of $G_n$ goes to infinity, we can analyze the limiting behavior of the sequence by considering the limiting behavior of the functions $(W_{G_n})$. If these graphs converge (according to some suitable definition of convergence), then we expect the limit of these graphs to correspond to the limit of these associated functions. This motivates the definition of a graphon (short for "graph function") as a symmetric measurable function $W : [0,1]^2 \to [0,1]$ which captures the notion of a limit of a sequence of graphs. It turns out that for sequences of dense graphs, several apparently distinct notions of convergence are equivalent, and under all of them the natural limit object is a graphon.
=== Examples ===
==== Constant graphon ====
Take a sequence $(G_n)$ of Erdős–Rényi random graphs $G_n = G(n,p)$ with some fixed parameter $p$. Intuitively, as $n$ tends to infinity, the limit of this sequence of graphs is determined solely by the edge density of these graphs. In the space of graphons, it turns out that such a sequence converges almost surely to the constant graphon $W(x,y) \equiv p$, which captures the above intuition.
==== Half graphon ====
Take the sequence $(H_n)$ of half-graphs, defined by taking $H_n$ to be the bipartite graph on $2n$ vertices $u_1, u_2, \dots, u_n$ and $v_1, v_2, \dots, v_n$ such that $u_i$ is adjacent to $v_j$ precisely when $i \leq j$. If the vertices are listed in the presented order, then the adjacency matrix $A_{H_n}$ has two corners of "half square" block matrices filled with ones, with the rest of the entries equal to zero. For example, the adjacency matrix of $H_3$ is given by
$$\begin{bmatrix}0&0&0&1&1&1\\0&0&0&0&1&1\\0&0&0&0&0&1\\1&0&0&0&0&0\\1&1&0&0&0&0\\1&1&1&0&0&0\end{bmatrix}.$$
As $n$ gets large, these corners of ones "smooth" out. Matching this intuition, the sequence $(H_n)$ converges to the half-graphon $W$ defined by $W(x,y) = 1$ when $|x - y| \geq 1/2$ and $W(x,y) = 0$ otherwise.
==== Complete bipartite graphon ====
Take the sequence $(K_{n,n})$ of complete bipartite graphs with equal sized parts. If we order the vertices by placing all vertices of one part at the beginning and placing the vertices of the other part at the end, the adjacency matrix of $K_{n,n}$ looks like a block off-diagonal matrix, with two blocks of ones and two blocks of zeros. For example, the adjacency matrix of $K_{2,2}$ is given by
$$\begin{bmatrix}0&0&1&1\\0&0&1&1\\1&1&0&0\\1&1&0&0\end{bmatrix}.$$
As $n$ gets larger, this block structure of the adjacency matrix remains constant, so that this sequence of graphs converges to a "complete bipartite" graphon $W$ defined by $W(x,y) = 1$ whenever $\min(x,y) \leq 1/2$ and $\max(x,y) > 1/2$, and $W(x,y) = 0$ otherwise.
If we instead order the vertices of $K_{n,n}$ by alternating between parts, the adjacency matrix has a chessboard structure of zeros and ones. For example, under this ordering, the adjacency matrix of $K_{2,2}$ is given by
$$\begin{bmatrix}0&1&0&1\\1&0&1&0\\0&1&0&1\\1&0&1&0\end{bmatrix}.$$
As $n$ gets larger, the adjacency matrices become a finer and finer chessboard. Despite this behavior, we still want the limit of $(K_{n,n})$ to be unique and to coincide with the complete bipartite graphon defined above. This means that when we formally define convergence for a sequence of graphs, the definition of a limit should be agnostic to relabelings of the vertices.
==== Limit of W-random graphs ====
Take a random sequence $(G_n)$ of $W$-random graphs by drawing $G_n \sim \mathbb{G}(n, W)$ for some fixed graphon $W$. Then, just as in the first example from this section, it turns out that $(G_n)$ converges to $W$ almost surely.
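As an illustrative numerical check of this convergence (a simulation sketch, not a proof; helper names are assumptions), one can sample a $W$-random graph from the half-graphon and compare its edge density with $\int\!\!\int W = 1/4$:

```python
import random

def half_graphon(x, y):
    """The half-graphon: W(x, y) = 1 when |x - y| >= 1/2, else 0."""
    return 1.0 if abs(x - y) >= 0.5 else 0.0

def sampled_edge_density(n, W, seed=0):
    """Edge density of one W-random graph on n vertices."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    edges = sum(1 for i in range(n) for j in range(i + 1, n)
                if rng.random() < W(u[i], u[j]))
    return 2 * edges / (n * (n - 1))

# For large n this concentrates near the integral of W, which is 1/4 here.
d = sampled_edge_density(400, half_graphon)
```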
=== Recovering graph parameters from graphons ===
Given a graph $G$ with associated graphon $W = W_G$, we can recover graph theoretic properties and parameters of $G$ by integrating transformations of $W$. For example, the edge density (i.e. average degree divided by number of vertices) of $G$ is given by the integral
$$\int_0^1 \int_0^1 W(x,y) \; \mathrm{d}x \, \mathrm{d}y.$$
This is because $W$ is $\{0,1\}$-valued, and each edge $(i,j)$ in $G$ corresponds to a region $I_i \times I_j$ of area $1/n^2$ where $W$ equals $1$.
Similar reasoning shows that the triangle density in $G$ is equal to
$$\frac{1}{6} \int_0^1 \int_0^1 \int_0^1 W(x,y)\,W(y,z)\,W(z,x) \; \mathrm{d}x \, \mathrm{d}y \, \mathrm{d}z.$$
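Because $W_G$ is a step function, these integrals reduce to finite sums over the adjacency matrix. A small Python sketch (illustrative names, computing the raw integrals without the $1/6$ symmetry factor):

```python
def edge_density(A):
    """Integral of W_G over the unit square = (1/n^2) * sum of all entries of A."""
    n = len(A)
    return sum(map(sum, A)) / n**2

def triangle_integral(A):
    """Integral of W_G(x,y) W_G(y,z) W_G(z,x) over the unit cube
    = (1/n^3) * sum over all triples (i, j, k) of A[i][j]*A[j][k]*A[k][i]."""
    n = len(A)
    return sum(A[i][j] * A[j][k] * A[k][i]
               for i in range(n) for j in range(n) for k in range(n)) / n**3

# For the triangle K_3: edge density 6/9, triangle integral 6/27.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```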
=== Notions of convergence ===
There are many different ways to measure the distance between two graphs. If we are interested in metrics that "preserve" extremal properties of graphs, then we should restrict our attention to metrics that identify random graphs as similar. For example, if we randomly draw two graphs independently from an Erdős–Rényi model $G(n,p)$ for some fixed $p$, the distance between these two graphs under a "reasonable" metric should be close to zero with high probability for large $n$.
Naively, given two graphs on the same vertex set, one might define their distance as the number of edges that must be added or removed to get from one graph to the other, i.e. their edit distance. However, the edit distance does not identify random graphs as similar; in fact, two graphs drawn independently from $G(n, {\tfrac {1}{2}})$ have an expected (normalized) edit distance of ${\tfrac {1}{2}}$.
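This failure of the edit distance is easy to observe numerically. The following simulation sketch (helper names are assumptions) draws two independent $G(n, \tfrac12)$ graphs and measures the fraction of vertex pairs on which they disagree:

```python
import random

def er_graph(n, p, rng):
    """Adjacency matrix of one G(n, p) sample."""
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A

def normalized_edit_distance(A, B):
    """Fraction of vertex pairs {i, j} on which the two graphs disagree."""
    n = len(A)
    diff = sum(1 for i in range(n) for j in range(i + 1, n) if A[i][j] != B[i][j])
    return diff / (n * (n - 1) // 2)

rng = random.Random(0)
d = normalized_edit_distance(er_graph(200, 0.5, rng), er_graph(200, 0.5, rng))
# d concentrates near 1/2 even though both samples come from the same model
```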
There are two natural metrics that behave well on dense random graphs in the sense that we want.
The first is a sampling metric, which says that two graphs are close if their distributions of subgraphs are close.
The second is an edge discrepancy metric, which says two graphs are close when their edge densities are close on all their corresponding subsets of vertices.
Miraculously, a sequence of graphs converges with respect to one metric precisely when it converges with respect to the other.
Moreover, the limit objects under both metrics turn out to be graphons.
The equivalence of these two notions of convergence mirrors how various notions of quasirandom graphs are equivalent.
==== Homomorphism densities ====
One way to measure the distance between two graphs $G$ and $H$ is to compare their relative subgraph counts. That is, for each graph $F$ we can compare the number of copies of $F$ in $G$ with the number of copies of $F$ in $H$. If these numbers are close for every graph $F$, then intuitively $G$ and $H$ are similar-looking graphs. Rather than dealing directly with subgraphs, however, it turns out to be easier to work with graph homomorphisms. This is fine when dealing with large, dense graphs, since in this scenario the number of subgraphs and the number of graph homomorphisms from a fixed graph are asymptotically equal.
Given two graphs $F$ and $G$, the homomorphism density $t(F,G)$ of $F$ in $G$ is defined to be the proportion of maps from the vertices of $F$ to the vertices of $G$ that are graph homomorphisms. In other words, $t(F,G)$ is the probability that a randomly chosen map from the vertices of $F$ to the vertices of $G$ sends adjacent vertices in $F$ to adjacent vertices in $G$.
Graphons offer a simple way to compute homomorphism densities. Indeed, given a graph $G$ with associated graphon $W_G$ and another graph $F$, we have
$$t(F,G)=\int \prod_{(i,j)\in E(F)} W_G(x_i,x_j)\;\left\{\mathrm{d}x_i\right\}_{i\in V(F)},$$
where the integral is multidimensional, taken over the unit hypercube $[0,1]^{V(F)}$. This follows from the definition of the associated graphon, by considering when the above integrand is equal to $1$.
We can then extend the definition of homomorphism density to arbitrary graphons $W$ by using the same integral, defining
$$t(F,W)=\int \prod_{(i,j)\in E(F)} W(x_i,x_j)\;\left\{\mathrm{d}x_i\right\}_{i\in V(F)}$$
for any graph $F$.
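For finite graphs the homomorphism density can be computed by brute force, enumerating every map between vertex sets; this is only feasible for tiny graphs, and the function name below is illustrative.

```python
from itertools import product

def hom_density(F_vertices, F_edges, A):
    """t(F, G): fraction of all maps V(F) -> V(G) that send every
    edge of F to an edge of G (G given by adjacency matrix A)."""
    n = len(A)
    homs = sum(1 for phi in product(range(n), repeat=F_vertices)
               if all(A[phi[i]][phi[j]] for i, j in F_edges))
    return homs / n**F_vertices

K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# Edge density of K_3 as a homomorphism density: t(K_2, K_3) = 6/9.
t_edge = hom_density(2, [(0, 1)], K3)
```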
Given this setup, we say a sequence of graphs $(G_n)$ is left-convergent if for every fixed graph $F$, the sequence of homomorphism densities $\left(t(F,G_n)\right)$ converges. Although not evident from the definition alone, if $(G_n)$ converges in this sense, then there always exists a graphon $W$ such that for every graph $F$ we have
$$\lim_{n\to\infty} t(F,G_n) = t(F,W)$$
simultaneously.
==== Cut distance ====
Take two graphs $G$ and $H$ on the same vertex set. Because these graphs share the same vertices, one way to measure their distance is to restrict to subsets $X, Y$ of the vertex set, and for each such pair of subsets compare the number of edges $e_G(X,Y)$ from $X$ to $Y$ in $G$ to the number of edges $e_H(X,Y)$ between $X$ and $Y$ in $H$. If these numbers are similar for every pair of subsets (relative to the total number of vertices), then that suggests $G$ and $H$ are similar graphs.
As a preliminary formalization of this notion of distance, for any pair of graphs $G$ and $H$ on the same vertex set $V$ of size $|V| = n$, define the labeled cut distance between $G$ and $H$ to be
$$d_{\square}(G,H) = \frac{1}{n^2} \max_{X,Y \subseteq V} \left| e_G(X,Y) - e_H(X,Y) \right|.$$
In other words, the labeled cut distance encodes the maximum discrepancy of the edge densities between $G$ and $H$.
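For very small graphs the labeled cut distance can be evaluated directly from this definition by enumerating all pairs of vertex subsets (an exponential-time sketch; names are illustrative):

```python
from itertools import chain, combinations

def labeled_cut_distance(A, B):
    """d_square(G, H) = (1/n^2) * max over subsets X, Y of V of
    |e_G(X, Y) - e_H(X, Y)|, where e(X, Y) counts ordered pairs
    (i, j) with i in X, j in Y, and {i, j} an edge."""
    n = len(A)
    V = range(n)
    subsets = list(chain.from_iterable(combinations(V, r) for r in range(n + 1)))
    best = 0.0
    for X in subsets:
        for Y in subsets:
            eA = sum(A[i][j] for i in X for j in Y)
            eB = sum(B[i][j] for i in X for j in Y)
            best = max(best, abs(eA - eB) / n**2)
    return best

P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # path on 3 vertices
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # triangle
```

Since the path's edges are a subset of the triangle's, the maximum discrepancy comes from the single missing edge, counted once in each direction.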
We can generalize this concept to graphons by expressing the edge density ${\tfrac {1}{n^2}} e_G(X,Y)$ in terms of the associated graphon $W_G$, giving the equality
$$d_{\square}(G,H) = \max_{X,Y \subseteq V} \left| \int_{I_X} \int_{I_Y} W_G(x,y) - W_H(x,y) \; \mathrm{d}x \, \mathrm{d}y \right|,$$
where $I_X, I_Y \subseteq [0,1]$ are unions of intervals corresponding to the vertices in $X$ and $Y$. Note that this definition can still be used even when the graphs being compared do not share a vertex set.
This motivates the following more general definition.
Definition 1. For any symmetric, measurable function $f : [0,1]^2 \to \mathbb{R}$, define the cut norm of $f$ to be the quantity
$$\lVert f \rVert_{\square} = \sup_{S,T \subseteq [0,1]} \left| \int_S \int_T f(x,y) \; \mathrm{d}x \, \mathrm{d}y \right|,$$
taken over all measurable subsets $S, T$ of the unit interval.
This captures our earlier notion of labeled cut distance, as we have the equality $\lVert W_G - W_H \rVert_{\square} = d_{\square}(G,H)$.
This distance measure still has one major limitation: it can assign nonzero distance to two isomorphic graphs.
To make sure isomorphic graphs have distance zero, we should compute the minimum cut norm over all possible "relabellings" of the vertices.
This motivates the following definition of the cut distance.
Definition 2. For any pair of graphons $U$ and $W$, define their cut distance to be
$$\delta_{\square}(U,W) = \inf_{\varphi} \lVert U - W^{\varphi} \rVert_{\square},$$
where $W^{\varphi}(x,y) = W(\varphi(x), \varphi(y))$ is the composition of $W$ with the map $\varphi$, and the infimum is taken over all measure-preserving bijections from the unit interval to itself.
The cut distance between two graphs is defined to be the cut distance between their associated graphons.
We now say that a sequence of graphs $(G_n)$ is convergent under the cut distance if it is a Cauchy sequence under the cut distance $\delta_{\square}$. Although not a direct consequence of the definition, if such a sequence of graphs is Cauchy, then it always converges to some graphon $W$.
==== Equivalence of convergence ====
As it turns out, for any sequence of graphs $(G_n)$, left-convergence is equivalent to convergence under the cut distance, and furthermore, the limit graphon $W$ is the same. We can also consider convergence of graphons themselves using the same definitions, and the same equivalence is true. In fact, both notions of convergence are related more strongly through what are called counting lemmas.
Counting Lemma. For any pair of graphons $U$ and $W$, we have
$$|t(F,U) - t(F,W)| \leq e(F)\, \delta_{\square}(U,W)$$
for all graphs $F$.
The name "counting lemma" comes from the bounds that this lemma gives on homomorphism densities $t(F,W)$, which are analogous to subgraph counts of graphs. This lemma is a generalization of the graph counting lemma that appears in the field of regularity partitions, and it immediately shows that convergence under the cut distance implies left-convergence.
Inverse Counting Lemma. For every real number $\varepsilon > 0$, there exist a real number $\eta > 0$ and a positive integer $k$ such that for any pair of graphons $U$ and $W$ with
$$|t(F,U) - t(F,W)| \leq \eta$$
for all graphs $F$ satisfying $v(F) \leq k$, we must have $\delta_{\square}(U,W) < \varepsilon$.
This lemma shows that left-convergence implies convergence under the cut distance.
=== The space of graphons ===
We can make the cut distance into a metric by taking the set of all graphons and identifying two graphons $U \sim W$ whenever $\delta_{\square}(U,W) = 0$. The resulting space of graphons is denoted ${\widetilde{\mathcal{W}}}_0$, and together with $\delta_{\square}$ it forms a metric space. This space turns out to be compact. Moreover, it contains the set of all finite graphs, represented by their associated graphons, as a dense subset. These observations show that the space of graphons is a completion of the space of graphs with respect to the cut distance. One immediate consequence of this is the following.
Corollary 1. For every real number $\varepsilon > 0$, there is an integer $N$ such that for every graphon $W$, there is a graph $G$ with at most $N$ vertices such that $\delta_{\square}(W, W_G) < \varepsilon$.
To see why, let $\mathcal{G}$ be the set of graphs. For each graph $G \in \mathcal{G}$, consider the open ball $B_{\square}(G, \varepsilon)$ containing all graphons $W$ such that $\delta_{\square}(W, W_G) < \varepsilon$. The set of open balls for all graphs covers ${\widetilde{\mathcal{W}}}_0$, so compactness implies that there is a finite subcover $\{B_{\square}(G, \varepsilon) \mid G \in \mathcal{G}_0\}$ for some finite subset $\mathcal{G}_0 \subset \mathcal{G}$. We can now take $N$ to be the largest number of vertices among the graphs in $\mathcal{G}_0$.
== Applications ==
=== Regularity lemma ===
Compactness of the space of graphons $({\widetilde{\mathcal{W}}}_0, \delta_{\square})$ can be thought of as an analytic formulation of Szemerédi's regularity lemma; in fact, it is a stronger result than the original lemma. Szemerédi's regularity lemma can be translated into the language of graphons as follows. Define a step function to be a graphon $W$ that is piecewise constant, i.e. such that for some partition $\mathcal{P}$ of $[0,1]$, $W$ is constant on $S \times T$ for all $S, T \in \mathcal{P}$. The statement that a graph $G$ has a regularity partition is equivalent to saying that its associated graphon $W_G$ is close to a step function.
The proof of compactness requires only the weak regularity lemma:
Weak Regularity Lemma for Graphons. For every graphon $W$ and $\varepsilon > 0$, there is a step function $W'$ with at most $\lceil 4^{1/\varepsilon^2} \rceil$ steps such that $\lVert W - W' \rVert_{\square} \leq \varepsilon$.
but it can be used to prove stronger regularity results, such as the strong regularity lemma:
Strong Regularity Lemma for Graphons. For every sequence $\varepsilon = (\varepsilon_0, \varepsilon_1, \dots)$ of positive real numbers, there is a positive integer $S$ such that for every graphon $W$, there is a graphon $W'$ and a step function $U$ with $k < S$ steps such that $\lVert W - W' \rVert_1 \leq \varepsilon_0$ and $\lVert W' - U \rVert_{\square} \leq \varepsilon_k$.
The proof of the strong regularity lemma is similar in concept to Corollary 1 above. It turns out that every graphon $W$ can be approximated by a step function $U$ in the $L_1$ norm, showing that the set of balls $B_1(U, \varepsilon_0)$ covers ${\widetilde{\mathcal{W}}}_0$. These sets are not open in the $\delta_{\square}$ metric, but they can be enlarged slightly to be open. We can then take a finite subcover, and one can show that the desired condition follows.
=== Sidorenko's conjecture ===
The analytic nature of graphons allows greater flexibility in attacking inequalities related to homomorphisms. For example, Sidorenko's conjecture is a major open problem in extremal graph theory, which asserts that for any graph $G$ on $n$ vertices with average degree $pn$ (for some $p \in [0,1]$) and bipartite graph $H$ on $v$ vertices and $e$ edges, the number of homomorphisms from $H$ to $G$ is at least $p^e n^v$. Since this quantity is the expected number of labeled subgraphs of $H$ in a random graph $G(n,p)$, the conjecture can be interpreted as the claim that for any bipartite graph $H$, the random graph achieves (in expectation) the minimum number of copies of $H$ over all graphs with some fixed edge density.
Many approaches to Sidorenko's conjecture formulate the problem as an integral inequality on graphons, which then allows the problem to be attacked using other analytical approaches.
== Generalizations ==
Graphons are naturally associated with dense simple graphs. There are extensions of this model to dense directed weighted graphs, often referred to as decorated graphons. There are also recent extensions to the sparse graph regime, from both the perspective of random graph models and graph limit theory.
== References ==
In graph theory, a graph property or graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph.
== Definitions ==
While graph drawing and graph representation are valid topics in graph theory, in order to focus only on the abstract structure of graphs, a graph property is defined to be a property preserved under all possible isomorphisms of a graph. In other words, it is a property of the graph itself, not of a specific drawing or representation of the graph.
Informally, the term "graph invariant" is used for properties expressed quantitatively, while "property" usually refers to descriptive characterizations of graphs. For example, the statement "the graph does not have vertices of degree 1" is a "property", while "the number of vertices of degree 1 in a graph" is an "invariant".
More formally, a graph property is a class of graphs with the property that any two isomorphic graphs either both belong to the class, or both do not belong to it. Equivalently, a graph property may be formalized using the indicator function of the class, a function from graphs to Boolean values that is true for graphs in the class and false otherwise; again, any two isomorphic graphs must have the same function value as each other. A graph invariant or graph parameter may similarly be formalized as a function from graphs to a broader class of values, such as integers, real numbers, sequences of numbers, or polynomials, that again has the same value for any two isomorphic graphs.
== Properties of properties ==
Many graph properties are well-behaved with respect to certain natural partial orders or preorders defined on graphs:
A graph property P is hereditary if every induced subgraph of a graph with property P also has property P. For instance, being a perfect graph or being a chordal graph are hereditary properties.
A graph property is monotone if every subgraph of a graph with property P also has property P. For instance, being a bipartite graph or being a triangle-free graph is monotone. Every monotone property is hereditary, but not necessarily vice versa; for instance, subgraphs of chordal graphs are not necessarily chordal, so being a chordal graph is not monotone.
A graph property is minor-closed if every graph minor of a graph with property P also has property P. For instance, being a planar graph is minor-closed. Every minor-closed property is monotone, but not necessarily vice versa; for instance, minors of triangle-free graphs are not necessarily themselves triangle-free.
These definitions may be extended from properties to numerical invariants of graphs: a graph invariant is hereditary, monotone, or minor-closed if the function formalizing the invariant forms a monotonic function from the corresponding partial order on graphs to the real numbers.
Additionally, graph invariants have been studied with respect to their behavior with regard to disjoint unions of graphs:
A graph invariant is additive if, for any two graphs G and H, the value of the invariant on the disjoint union of G and H is the sum of the values on G and on H. For instance, the number of vertices is additive.
A graph invariant is multiplicative if, for any two graphs G and H, the value of the invariant on the disjoint union of G and H is the product of the values on G and on H. For instance, the Hosoya index (number of matchings) is multiplicative.
A graph invariant is maxing if, for any two graphs G and H, the value of the invariant on the disjoint union of G and H is the maximum of the values on G and on H. For instance, the chromatic number is maxing.
In addition, graph properties can be classified according to the type of graph they describe: whether the graph is undirected or directed, whether the property applies to multigraphs, etc.
== Values of invariants ==
The target set of a function that defines a graph invariant may be one of:
A truth-value, true or false, for the indicator function of a graph property.
An integer, such as the number of vertices or chromatic number of a graph.
A real number, such as the fractional chromatic number of a graph.
A sequence of integers, such as the degree sequence of a graph.
A polynomial, such as the Tutte polynomial of a graph.
== Graph invariants and graph isomorphism ==
Easily computable graph invariants are instrumental for fast recognition of graph isomorphism, or rather non-isomorphism, since for any invariant at all, two graphs with different values cannot (by definition) be isomorphic. Two graphs with the same invariants may or may not be isomorphic, however.
A graph invariant I(G) is called complete if the identity of the invariants I(G) and I(H) implies the isomorphism of the graphs G and H. Finding an efficiently-computable such invariant (the problem of graph canonization) would imply an easy solution to the challenging graph isomorphism problem. However, even polynomial-valued invariants such as the chromatic polynomial are not usually complete. The claw graph and the path graph on 4 vertices both have the same chromatic polynomial, for example.
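This non-completeness is easy to check computationally: the claw $K_{1,3}$ and the path $P_4$ are both trees on 4 vertices, so both have chromatic polynomial $k(k-1)^3$ even though they are not isomorphic. A brute-force evaluation in Python (illustrative helper name):

```python
from itertools import product

def chromatic_value(n, edges, k):
    """Evaluate the chromatic polynomial P(G, k): the number of proper
    k-colorings of the graph on n vertices with the given edge list."""
    return sum(1 for c in product(range(k), repeat=n)
               if all(c[u] != c[v] for u, v in edges))

claw = [(0, 1), (0, 2), (0, 3)]  # K_{1,3}: center 0 joined to three leaves
path = [(0, 1), (1, 2), (2, 3)]  # P_4: path on four vertices

# Both agree with k*(k-1)^3 for every k, yet the graphs are not isomorphic.
values = [(chromatic_value(4, claw, k), chromatic_value(4, path, k))
          for k in range(1, 5)]
```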
== Examples ==
=== Properties ===
Connected graphs
Bipartite graphs
Planar graphs
Triangle-free graphs
Perfect graphs
Eulerian graphs
Hamiltonian graphs
=== Integer invariants ===
Order, the number of vertices
Size, the number of edges
Number of connected components
Circuit rank, a linear combination of the numbers of edges, vertices, and components
Diameter, the longest of the shortest path lengths between pairs of vertices
Girth, the length of the shortest cycle
Vertex connectivity, the smallest number of vertices whose removal disconnects the graph
Edge connectivity, the smallest number of edges whose removal disconnects the graph
Chromatic number, the smallest number of colors for the vertices in a proper coloring
Chromatic index, the smallest number of colors for the edges in a proper edge coloring
Choosability (or list chromatic number), the least number k such that G is k-choosable
Independence number, the largest size of an independent set of vertices
Clique number, the largest order of a complete subgraph
Arboricity
Graph genus
Pagenumber
Hosoya index
Wiener index
Colin de Verdière graph invariant
Boxicity
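Several of the invariants above can be computed directly from an edge list; a minimal sketch (the function name and graph representation are illustrative):

```python
from collections import deque

def invariants(n, edges):
    """Compute a few integer invariants of a simple undirected graph,
    given as a vertex count n and an edge list on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    # Count connected components via breadth-first search.
    seen, components = set(), 0
    for s in range(n):
        if s not in seen:
            components += 1
            queue = deque([s])
            seen.add(s)
            while queue:
                u = queue.popleft()
                for w in adj[u] - seen:
                    seen.add(w)
                    queue.append(w)

    return {
        "order": n,                                  # number of vertices
        "size": len(edges),                          # number of edges
        "components": components,
        "circuit rank": len(edges) - n + components, # E - V + C
    }

# A 4-cycle: one component, circuit rank 1.
print(invariants(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
# → {'order': 4, 'size': 4, 'components': 1, 'circuit rank': 1}
```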
=== Real number invariants ===
Clustering coefficient
Betweenness centrality
Fractional chromatic number
Algebraic connectivity
Isoperimetric number
Estrada index
Strength
=== Sequences and polynomials ===
Degree sequence
Graph spectrum
Characteristic polynomial of the adjacency matrix
Chromatic polynomial, the number of k-colorings viewed as a function of k
Tutte polynomial, a bivariate function that encodes much of the graph's connectivity
=== Edge partition ===
(a, b)-decomposition for any natural a,b
== See also ==
Hereditary property
Logic of graphs, one of several formal languages used to specify graph properties
Topological index, a closely related concept in chemical graph theory
== External links ==
List of integer invariants
== References == | Wikipedia/Graph_property |
In the mathematical field of graph theory, graph operations are operations which produce new graphs from initial ones. They include both unary (one input) and binary (two input) operations.
== Unary operations ==
Unary operations create a new graph from a single initial graph.
=== Elementary operations ===
Elementary operations or editing operations, which are also known as graph edit operations, create a new graph from one initial one by a simple local change, such as addition or deletion of a vertex or of an edge, merging and splitting of vertices, edge contraction, etc.
The graph edit distance between a pair of graphs is the minimum number of elementary operations required to transform one graph into the other.
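As an illustrative special case, when two graphs share the same labeled vertex set and only edge insertions and deletions are allowed, the edit distance reduces to the size of the symmetric difference of the edge sets; a sketch under that assumption (the function name is hypothetical):

```python
def edge_edit_distance(edges1, edges2):
    """Edit distance restricted to edge insertions/deletions on graphs that
    share the same labeled vertex set: each edge present in exactly one
    graph costs one operation."""
    e1 = {frozenset(e) for e in edges1}  # frozensets ignore edge orientation
    e2 = {frozenset(e) for e in edges2}
    return len(e1 ^ e2)

# Turning a path 0-1-2 into a triangle takes one edge insertion.
assert edge_edit_distance([(0, 1), (1, 2)], [(0, 1), (1, 2), (0, 2)]) == 1
```

The general graph edit distance, which also allows vertex operations and relabeling, is much harder to compute (it is NP-hard in general).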
=== Advanced operations ===
Advanced operations create a new graph from an initial one by a complex change, such as:
transpose graph;
complement graph;
line graph;
graph minor;
graph rewriting;
power of graph;
dual graph;
medial graph;
quotient graph;
Y-Δ transform;
Mycielskian.
== Binary operations ==
Binary operations create a new graph from two initial graphs G1 = (V1, E1) and G2 = (V2, E2), such as:
graph union: G1 ∪ G2. There are two definitions. In the most common one, the disjoint union of graphs, the union is assumed to be disjoint. Less commonly (though more consistent with the general definition of union in mathematics) the union of two graphs is defined as the graph (V1 ∪ V2, E1 ∪ E2).
graph intersection: G1 ∩ G2 = (V1 ∩ V2, E1 ∩ E2);
graph join: G1 ∇ G2, the graph obtained from the disjoint union of G1 and G2 by adding edges connecting every vertex of the first graph to every vertex of the second graph. It is a commutative operation (for unlabelled graphs);
graph products based on the cartesian product of the vertex sets:
cartesian graph product: it is a commutative and associative operation (for unlabelled graphs),
lexicographic graph product (or graph composition): it is an associative (for unlabelled graphs) and non-commutative operation,
strong graph product: it is a commutative and associative operation (for unlabelled graphs),
tensor graph product (or direct graph product, categorical graph product, cardinal graph product, Kronecker graph product): it is a commutative and associative operation (for unlabelled graphs),
replacement product,
zig-zag graph product;
graph product based on other products:
rooted graph product: it is an associative operation (for unlabelled but rooted graphs),
corona graph product: it is a non-commutative operation;
series–parallel graph composition:
parallel graph composition: it is a commutative operation (for unlabelled graphs),
series graph composition: it is a non-commutative operation,
source graph composition: it is a commutative operation (for unlabelled graphs);
Hajós construction.
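Two of these binary operations can be sketched concretely, assuming graphs are represented as (vertex list, edge list) pairs (the representation and function names are illustrative):

```python
from itertools import product

def disjoint_union(g1, g2):
    """Disjoint union: vertices are tagged so the two copies cannot collide."""
    (v1, e1), (v2, e2) = g1, g2
    verts = [(1, v) for v in v1] + [(2, v) for v in v2]
    edges = ([((1, u), (1, v)) for u, v in e1]
             + [((2, u), (2, v)) for u, v in e2])
    return verts, edges

def cartesian_product(g1, g2):
    """Cartesian graph product: (u1, u2) ~ (v1, v2) iff one coordinate is
    equal and the other pair is adjacent in its factor."""
    (v1, e1), (v2, e2) = g1, g2
    verts = list(product(v1, v2))
    adj1 = {frozenset(e) for e in e1}
    adj2 = {frozenset(e) for e in e2}
    edges = [
        (a, b)
        for i, a in enumerate(verts) for b in verts[i + 1:]
        if (a[0] == b[0] and frozenset((a[1], b[1])) in adj2)
        or (a[1] == b[1] and frozenset((a[0], b[0])) in adj1)
    ]
    return verts, edges

k2 = ([0, 1], [(0, 1)])
# K2 □ K2 is the 4-cycle: 4 vertices, 4 edges.
verts, edges = cartesian_product(k2, k2)
assert (len(verts), len(edges)) == (4, 4)
```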
== Notes == | Wikipedia/Graph_operations |
In graph theory, a regular graph is a graph where each vertex has the same number of neighbors; i.e. every vertex has the same degree or valency. A regular directed graph must also satisfy the stronger condition that the indegree and outdegree of each vertex are equal to each other. A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k.
== Special cases ==
Regular graphs of degree at most 2 are easy to classify: a 0-regular graph consists of disconnected vertices, a 1-regular graph consists of disconnected edges, and a 2-regular graph consists of a disjoint union of cycles and infinite chains.
A 3-regular graph is known as a cubic graph.
A strongly regular graph is a regular graph where every adjacent pair of vertices has the same number λ of common neighbors, and every non-adjacent pair of vertices has the same number μ of common neighbors. The smallest graphs that are regular but not strongly regular are the cycle graph and the circulant graph on 6 vertices.
The complete graph Km is strongly regular for any m.
== Existence ==
The necessary and sufficient conditions for a k-regular graph of order n to exist are that n ≥ k + 1 and that nk is even.
Proof: A complete graph has every pair of distinct vertices connected by a unique edge, so its number of edges is the maximum possible, n(n − 1)/2, and every vertex has degree n − 1. Hence k = n − 1, i.e. n = k + 1, which is the minimum n for a particular k. Also note that if any regular graph has order n, then its number of edges is nk/2, so nk has to be even.
In such cases it is easy to construct regular graphs by considering appropriate parameters for circulant graphs.
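The circulant construction can be sketched directly: connect each vertex to the nearest ⌊k/2⌋ vertices on each side, and, for odd k (n is then forced to be even), also to the antipodal vertex (the function name is illustrative):

```python
def circulant_regular(n, k):
    """Construct a k-regular graph on n vertices as a circulant graph,
    assuming n >= k + 1 and n*k even (the existence conditions)."""
    assert n >= k + 1 and (n * k) % 2 == 0, "no k-regular graph on n vertices"
    edges = set()
    for v in range(n):
        # Connect to offsets 1..k//2 in both directions around the ring ...
        for d in range(1, k // 2 + 1):
            edges.add(frozenset((v, (v + d) % n)))
        # ... and, for odd k, to the antipodal vertex (n is even here).
        if k % 2 == 1:
            edges.add(frozenset((v, (v + n // 2) % n)))
    return edges

# A 3-regular graph on 8 vertices: every vertex ends up with degree 3.
edges = circulant_regular(8, 3)
degree = {v: 0 for v in range(8)}
for e in edges:
    for v in e:
        degree[v] += 1
assert all(d == 3 for d in degree.values())
assert len(edges) == 8 * 3 // 2  # nk/2 edges
```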
== Properties ==
From the handshaking lemma, a k-regular graph with odd k has an even number of vertices.
A theorem by Nash-Williams says that every k‑regular graph on 2k + 1 vertices has a Hamiltonian cycle.
Let A be the adjacency matrix of a graph. Then the graph is regular if and only if j = (1, …, 1) is an eigenvector of A. Its eigenvalue will be the constant degree of the graph. Eigenvectors corresponding to other eigenvalues are orthogonal to j, so for such eigenvectors v = (v1, …, vn), we have v1 + ⋯ + vn = 0.
A regular graph of degree k is connected if and only if the eigenvalue k has multiplicity one. The "only if" direction is a consequence of the Perron–Frobenius theorem.
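The eigenvector property can be checked directly: multiplying A by the all-ones vector yields each vertex's degree, which is a constant multiple of the all-ones vector exactly when the graph is regular. A minimal sketch in pure Python:

```python
def adjacency_times_ones(adj):
    """Multiply the adjacency matrix (list of rows) by the all-ones vector j.
    Each entry of the result is a vertex degree, so A j = k j exactly when
    the graph is k-regular."""
    return [sum(row) for row in adj]

# Cycle C4 (2-regular): A j = 2 j, so j is an eigenvector with eigenvalue 2.
c4 = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]
assert adjacency_times_ones(c4) == [2, 2, 2, 2]
```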
There is also a criterion for regular and connected graphs: a graph is connected and regular if and only if the matrix of ones J, with Jij = 1 for all i, j, is in the adjacency algebra of the graph (meaning it is a linear combination of powers of A).
Let G be a k-regular graph with diameter D and adjacency matrix eigenvalues k = λ0 > λ1 ≥ ⋯ ≥ λn−1. If G is not bipartite, then
D ≤ log(n − 1) / log(λ0/λ1) + 1.
== Generation ==
Fast algorithms exist to generate, up to isomorphism, all regular graphs with a given degree and number of vertices.
== See also ==
Random regular graph
Strongly regular graph
Moore graph
Cage graph
Highly irregular graph
== References ==
== External links ==
Weisstein, Eric W. "Regular Graph". MathWorld.
Weisstein, Eric W. "Strongly Regular Graph". MathWorld.
GenReg software and data by Markus Meringer.
Nash-Williams, Crispin (1969), Valency Sequences which force graphs to have Hamiltonian Circuits, University of Waterloo Research Report, Waterloo, Ontario: University of Waterloo | Wikipedia/Regular_graph |
A geographic information system (GIS) consists of integrated computer hardware and software that store, manage, analyze, edit, output, and visualize geographic data. Much of this often happens within a spatial database; however, this is not essential to meet the definition of a GIS. In a broader sense, one may consider such a system also to include human users and support staff, procedures and workflows, the body of knowledge of relevant concepts and methods, and institutional organizations.
The uncounted plural, geographic information systems, also abbreviated GIS, is the most common term for the industry and profession concerned with these systems. The academic discipline that studies these systems and their underlying geographic principles may also be abbreviated as GIS, but the unambiguous GIScience is more common. GIScience is often considered a subdiscipline of geography within the branch of technical geography.
Geographic information systems are used in multiple technologies, processes, techniques, and methods. They support numerous operations and applications relating to engineering, planning, management, transport/logistics, insurance, telecommunications, and business, as well as the natural sciences such as forestry, ecology, and Earth science. For this reason, GIS and location intelligence applications are at the foundation of location-enabled services, which rely on geographic analysis and visualization.
GIS provides the ability to relate previously unrelated information through the use of location as the "key index variable". Locations and extents found in the Earth's spacetime can be recorded through the date and time of occurrence, along with x, y, and z coordinates representing longitude (x), latitude (y), and elevation (z). All Earth-based spatial–temporal location and extent references should be relatable to one another and, ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry and studies.
== History and development ==
While digital GIS dates to the mid-1960s, when Roger Tomlinson first coined the phrase "geographic information system", many of the geographic concepts and methods that GIS automates date back decades earlier.
One of the first known instances in which spatial analysis was used came from the field of epidemiology in the Rapport sur la marche et les effets du choléra dans Paris et le département de la Seine (1832). French cartographer and geographer Charles Picquet created a map outlining the forty-eight districts in Paris, using halftone color gradients, to provide a visual representation for the number of reported deaths due to cholera per every 1,000 inhabitants.
In 1854, John Snow, an epidemiologist and physician, was able to determine the source of a cholera outbreak in London through the use of spatial analysis. Snow achieved this through plotting the residence of each casualty on a map of the area, as well as the nearby water sources. Once these points were marked, he was able to identify the water source within the cluster that was responsible for the outbreak. This was one of the earliest successful uses of a geographic methodology in pinpointing the source of an outbreak in epidemiology. While the basic elements of topography and theme existed previously in cartography, Snow's map was unique due to his use of cartographic methods, not only to depict, but also to analyze clusters of geographically dependent phenomena.
The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours – drawing these was a labour-intensive task but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was initially drawn on glass plates, but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color. While the use of layers much later became one of the typical features of a contemporary GIS, the photographic process just described is not considered a GIS in itself – as the maps were just images with no database to link them to.
Two additional developments are notable in the early days of GIS: Ian McHarg's publication Design with Nature and its map overlay method and the introduction of a street network into the U.S. Census Bureau's DIME (Dual Independent Map Encoding) system.
The first publication detailing the use of computers to facilitate cartography was written by Waldo Tobler in 1959. Further computer hardware development spurred by nuclear weapon research led to more widespread general-purpose computer "mapping" applications by the early 1960s.
In 1963, the world's first true operational GIS was developed in Ottawa, Ontario, Canada, by the federal Department of Forestry and Rural Development. Developed by Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory, an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.
CGIS was an improvement over "computer mapping" applications as it provided capabilities for data storage, overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data. CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially.
In 1964, Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY, to universities, research centers and corporations worldwide. These programs were the first examples of general-purpose GIS software that was not developed for a particular installation, and was very influential on future commercial software, such as Esri ARC/INFO, released in 1983.
By the late 1970s, two public domain GIS systems (MOSS and GRASS GIS) were in development, and by the early 1980s, M&S Computing (later Intergraph) along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first-generation approach to separation of spatial and attribute information with a second-generation approach to organizing attribute data into database structures.
In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product, was released for the DOS operating system. This was renamed in 1990 to MapInfo for Windows when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment.
By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms, and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. The major trend of the 21st century has been the integration of GIS capabilities with other information technology and Internet infrastructure, such as relational databases, cloud computing, software as a service (SaaS), and mobile computing.
== GIS software ==
The distinction must be made between a singular geographic information system, which is a single installation of software and data for a particular use, along with associated hardware, staff, and institutions (e.g., the GIS for a particular city government); and GIS software, a general-purpose application program that is intended to be used in many individual geographic information systems in a variety of application domains. Starting in the late 1970s, many software packages have been created specifically for GIS applications. Esri's ArcGIS, which includes ArcGIS Pro and the legacy software ArcMap, currently dominates the GIS market. Other examples of GIS include Autodesk and MapInfo Professional and open-source programs such as QGIS, GRASS GIS, MapGuide, and Hadoop-GIS. These and other desktop GIS applications include a full suite of capabilities for entering, managing, analyzing, and visualizing geographic data, and are designed to be used on their own.
Starting in the late 1990s with the emergence of the Internet, as computer network technology progressed, GIS infrastructure and data began to move to servers, providing another mechanism for providing GIS capabilities. This was facilitated by standalone software installed on a server, similar to other server software such as HTTP servers and relational database management systems, enabling clients to have access to GIS data and processing tools without having to install specialized desktop software. These networks are known as distributed GIS. This strategy has been extended through the Internet and development of cloud-based GIS platforms such as ArcGIS Online and GIS-specialized software as a service (SaaS). The use of the Internet to facilitate distributed GIS is known as Internet GIS.
An alternative approach is the integration of some or all of these capabilities into other software or information technology architectures. One example is a spatial extension to Object-relational database software, which defines a geometry datatype so that spatial data can be stored in relational tables, and extensions to SQL for spatial analysis operations such as overlay. Another example is the proliferation of geospatial libraries and application programming interfaces (e.g., GDAL, Leaflet, D3.js) that extend programming languages to enable the incorporation of GIS data and processing into custom software, including web mapping sites and location-based services in smartphones.
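As one illustration of the kind of spatial operation such libraries expose, a point-in-polygon test can be written in a few lines of pure Python using the standard ray-casting method (this is a simplified sketch, not code from any particular library):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: cast a horizontal ray from (x, y) and count how many
    polygon edges it crosses; an odd count means the point is inside.
    `polygon` is a list of (x, y) vertex tuples."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            # x-coordinate where the edge crosses that height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert point_in_polygon(2, 2, square) is True
assert point_in_polygon(5, 2, square) is False
```

Production GIS software layers many such predicates and overlay operations on top of indexed geometry storage, but the geometric core is of this character.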
== Geospatial data management ==
The core of any GIS is a database that contains representations of geographic phenomena, modeling their geometry (location and shape) and their properties or attributes. A GIS database may be stored in a variety of forms, such as a collection of separate data files or a single spatially-enabled relational database. Collecting and managing these data usually constitutes the bulk of the time and financial resources of a project, far more than other aspects such as analysis and mapping.
=== Aspects of geographic data ===
GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time.
Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and x, y, and z coordinates representing, longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data, see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time.
Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted and represented. This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that previously had not been systematically correlated.
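The idea of location as a key index variable can be sketched with an ordinary dictionary join, where hypothetical records from two unrelated datasets share coordinates rounded to a common precision (all names and values here are invented for illustration):

```python
# Hypothetical datasets keyed by (longitude, latitude), rounded so that
# the location acts like a relational join key.
wells = {(-95.37, 29.76): {"depth_m": 120}}
soil = {(-95.37, 29.76): {"type": "clay"}, (-95.40, 29.80): {"type": "loam"}}

# Relate otherwise unrelated records through their shared location.
joined = {
    loc: {**wells[loc], **soil[loc]}
    for loc in wells.keys() & soil.keys()
}
assert joined == {(-95.37, 29.76): {"depth_m": 120, "type": "clay"}}
```

Real GIS joins use spatial predicates (containment, proximity) rather than exact key equality, but the principle of indexing by location is the same.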
=== Data modeling ===
GIS data represents phenomena that exist in the real world, such as roads, land use, elevation, trees, waterways, and states. The most common types of phenomena that are represented in data can be divided into two conceptualizations: discrete objects (e.g., a house, a road) and continuous fields (e.g., rainfall amount or population density). Other types of geographic phenomena, such as events (e.g., location of World War II battles), processes (e.g., extent of suburbanization), and masses (e.g., types of soil in an area) are represented less commonly or indirectly, or are modeled in analysis procedures rather than data.
Traditionally, there are two broad methods used to store data in a GIS for both kinds of abstractions mapping references: raster images and vector. Points, lines, and polygons represent vector data of mapped location attribute references.
A newer hybrid method of storing data is the point cloud, which combines three-dimensional points with RGB information at each point, returning a 3D color image. GIS thematic maps are thus becoming increasingly realistic and visually descriptive of what they set out to show or determine.
=== Data acquisition ===
GIS data acquisition includes several methods for gathering spatial data into a GIS database, which can be grouped into three categories: primary data capture, the direct measurement of phenomena in the field (e.g., remote sensing, the global positioning system); secondary data capture, the extraction of information from existing sources that are not in a GIS form, such as paper maps, through digitization; and data transfer, the copying of existing GIS data from external sources such as government agencies and private companies. All of these methods can consume significant time, finances, and other resources.
==== Primary data capture ====
Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) like the Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to utilize field computers with the ability to edit live data using wireless connections or disconnected editing sessions. The current trend is to utilize applications available on smartphones and PDAs in the form of mobile GIS. This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time. This eliminates the need to post process, import, and update the data in the office after fieldwork has been collected. This includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps as well as analysis directly in the field, making projects more efficient and mapping more accurate.
Remotely sensed data also plays an important role in data collection and consist of sensors attached to a platform. Sensors include cameras, digital scanners and lidar, while platforms usually consist of aircraft and satellites. In England in the mid-1990s, hybrid kite/balloons called helikites first pioneered the use of compact airborne digital cameras as airborne geo-information systems. Aircraft measurement software, accurate to 0.4 mm, was used to link the photographs and measure the ground. Helikites are inexpensive and gather more accurate data than aircraft. Helikites can be used over roads, railways and towns where unmanned aerial vehicles (UAVs) are banned.
Recently, aerial data collection has become more accessible with miniature UAVs and drones. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of 1 inch (2.54 cm) in only 12 minutes.
The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for high-quality digital cameras this step is skipped.
Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover.
==== Secondary data capture ====
The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program, and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, Helikites and UAVs), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves the tracing of geographic data directly on top of the aerial imagery instead of by the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing). Heads-down digitizing, or manual digitizing, uses a special magnetic pen, or stylus, that feeds information into a computer to create an identical, digital map. Some tablets use a mouse-like tool, called a puck, instead of a stylus. The puck has a small window with cross-hairs which allows for greater precision and pinpointing map features. Though heads-up digitizing is more commonly used, heads-down digitizing is still useful for digitizing maps of poor quality.
Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that could be further processed to produce vector data.
When data is captured, the user should consider whether it should be captured with relative or absolute accuracy, since this choice influences not only how the information will be interpreted but also the cost of data capture.
After entering data into a GIS, the data usually requires editing, to remove errors, or further processing. For vector data it must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected.
=== Projections, coordinate systems, and registration ===
The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, like North American Datum of 1983 for U.S. measurements, and the World Geodetic System for worldwide measurements.
The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.
In popular GIS software, data projected in latitude/longitude is often represented as a Geographic coordinate system. For example, data in latitude/longitude if the datum is the 'North American Datum of 1983' is denoted by 'GCS North American 1983'.
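A seven-parameter Helmert transformation in its common small-angle approximation can be sketched as follows (sign conventions and parameter values vary between published realizations, so this is an illustrative form, not any specific official transformation):

```python
import math

def helmert(point, tx, ty, tz, s_ppm, rx_as, ry_as, rz_as):
    """Seven-parameter Helmert datum transformation, small-angle form:
    translations in meters, scale in parts per million, rotations in
    arc-seconds, applied to geocentric (X, Y, Z) coordinates."""
    as_to_rad = math.pi / (180 * 3600)  # arc-seconds to radians
    rx, ry, rz = (r * as_to_rad for r in (rx_as, ry_as, rz_as))
    s = 1 + s_ppm * 1e-6
    x, y, z = point
    return (
        tx + s * (x - rz * y + ry * z),
        ty + s * (rz * x + y - rx * z),
        tz + s * (-ry * x + rx * y + z),
    )

# Identity parameters leave coordinates unchanged.
assert helmert((6378137.0, 0.0, 0.0), 0, 0, 0, 0, 0, 0, 0) == (6378137.0, 0.0, 0.0)
```

Geodetic libraries apply such transformations between datums after converting latitude/longitude/height to geocentric coordinates on the source ellipsoid and back on the target one.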
=== Data quality ===
While no digital model can be a perfect representation of the real world, it is important that GIS data be of a high quality. In keeping with the principle of homomorphism, the data must be close enough to reality so that the results of GIS procedures correctly correspond to the results of real world processes. This means that there is no single standard for data quality, because the necessary degree of quality depends on the scale and purpose of the tasks for which it is to be used. Several elements of data quality are important to GIS data:
Accuracy
The degree of similarity between a represented measurement and the actual value; conversely, error is the amount of difference between them. In GIS data, there is concern for accuracy in representations of location (positional accuracy), property (attribute accuracy), and time. For example, the US 2020 Census says that the population of Houston on April 1, 2020 was 2,304,580; if it was actually 2,310,674, this would be an error and thus a lack of attribute accuracy.
Precision
The degree of refinement in a represented value. In a quantitative property, this is the number of significant digits in the measured value. An imprecise value is vague or ambiguous, including a range of possible values. For example, if one were to say that the population of Houston on April 1, 2020 was "about 2.3 million," this statement would be imprecise, but likely accurate because the correct value (and many incorrect values) are included. As with accuracy, representations of location, property, and time can all be more or less precise. Resolution is a commonly used expression of positional precision, especially in raster data sets. Scale is closely related to precision in maps, as it dictates a desirable level of spatial precision, but is problematic in GIS, where a data set can be shown at a variety of display scales (including scales that would not be appropriate for the quality of the data).
Uncertainty
A general acknowledgement of the presence of error and imprecision in geographic data. That is, it is a degree of general doubt, given that it is difficult to know exactly how much error is present in a data set, although some form of estimate may be attempted (a confidence interval being such an estimate of uncertainty). This is sometimes used as a collective term for all or most aspects of data quality.
Vagueness or fuzziness
The degree to which an aspect (location, property, or time) of a phenomenon is inherently imprecise, rather than the imprecision being in a measured value. For example, the spatial extent of the Houston metropolitan area is vague, as there are places on the outskirts of the city that are less connected to the central city (measured by activities such as commuting) than places that are closer. Mathematical tools such as fuzzy set theory are commonly used to manage vagueness in geographic data.
Completeness
The degree to which a data set represents all of the actual features that it purports to include. For example, if a layer of "roads in Houston" is missing some actual streets, it is incomplete.
Currency
The most recent point in time at which a data set claims to be an accurate representation of reality. This is a concern for the majority of GIS applications, which attempt to represent the world "at present," in which case older data is of lower quality.
Consistency
The degree to which the representations of the many phenomena in a data set correctly correspond with each other. Topological consistency between spatial objects is an especially important aspect. For example, if all of the lines in a street network were accidentally moved 10 meters to the east, they would be inaccurate but still consistent, because they would still properly connect at each intersection, and network analysis tools such as shortest path would still give correct results.
Propagation of uncertainty
The degree to which the quality of the results of spatial analysis methods and other processing tools derives from the quality of input data. For example, interpolation is a common operation used in many ways in GIS; because it generates estimates of values between known measurements, the results will always be more precise, but less certain (as each estimate has an unknown amount of error).
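For independent random errors, one common (though not the only) propagation rule combines component errors in quadrature. A sketch, not tied to any particular GIS package:

```python
import math

def combined_rmse(*component_errors):
    """Root-sum-of-squares combination of independent error components,
    e.g. the positional error of an overlay of two layers whose
    individual RMS errors are known."""
    return math.sqrt(sum(e * e for e in component_errors))

combined_rmse(3.0, 4.0)  # layers with 3 m and 4 m RMSE combine to 5 m
```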
The quality of a dataset is very dependent upon its sources, and the methods used to create it. Land surveyors have been able to provide a high level of positional accuracy utilizing high-end GPS equipment, but GPS locations on the average smartphone are much less accurate. Common datasets such as digital terrain and aerial imagery are available in a wide variety of levels of quality, especially spatial precision. Paper maps, which have been digitized for many years as a data source, can also be of widely varying quality.
A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis. All geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict.
=== Raster-to-vector translation ===
Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion.
More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false color rendering and a variety of other techniques including use of two dimensional Fourier transforms. Since digital data is collected and stored in various ways, the two data sources may not be entirely compatible. So a GIS must be able to convert geographic data from one structure to another. In so doing, the implicit assumptions behind different ontologies and classifications require analysis. Object ontologies have gained increasing prominence as a consequence of object-oriented programming and sustained work by Barry Smith and co-workers.
=== Spatial ETL ===
Spatial ETL tools provide the data processing functionality of traditional extract, transform, load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as spreadsheets.
== Spatial analysis ==
GIS spatial analysis is a rapidly changing field, and GIS packages increasingly include analytical tools as standard built-in facilities, as optional toolsets, as add-ins, or as 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities, and/or special interfaces for developing one's own analytical tools or variants. The increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranet, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. In this broad sense, GIS itself can be described as the conversion of geographic information to a vector or other digital representation.
Geoprocessing is a GIS operation used to manipulate spatial data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for definition, management, and analysis of information used to form decisions.
=== Terrain analysis ===
Many geographic tasks involve the terrain, the shape of the surface of the earth, such as hydrology, earthworks, and biogeography. Thus, terrain data is often a core dataset in a GIS, usually in the form of a raster digital elevation model (DEM) or a triangulated irregular network (TIN). A variety of tools are available in most GIS software for analyzing terrain, often by creating derivative datasets that represent a specific aspect of the surface. Some of the most common include:
Slope or grade is the steepness or gradient of a unit of terrain, usually measured as an angle in degrees or as a percentage.
Aspect can be defined as the direction in which a unit of terrain faces. Aspect is usually expressed in degrees from north.
Cut and fill is a computation of the difference between the surface before and after an excavation project to estimate costs.
Hydrological modeling can provide a spatial element that other hydrological models lack, with the analysis of variables such as slope, aspect and watershed or catchment area. Terrain analysis is fundamental to hydrology, since water always flows down a slope. As basic terrain analysis of a digital elevation model (DEM) involves calculation of slope and aspect, DEMs are very useful for hydrological analysis. Slope and aspect can then be used to determine direction of surface runoff, and hence flow accumulation for the formation of streams, rivers and lakes. Areas of divergent flow can also give a clear indication of the boundaries of a catchment. Once a flow direction and accumulation matrix has been created, queries can be performed that show contributing or dispersal areas at a certain point. More detail can be added to the model, such as terrain roughness, vegetation types and soil types, which can influence infiltration and evapotranspiration rates, and hence surface flow. One of the main uses of hydrological modeling is in environmental contamination research. Other applications include groundwater and surface water mapping, as well as flood risk maps.
Viewshed analysis predicts the impact that terrain has on the visibility between locations, which is especially important for wireless communications.
Shaded relief is a depiction of the surface as if it were a three dimensional model lit from a given direction, which is very commonly used in maps.
Most of these are generated using algorithms that are discrete simplifications of vector calculus. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using elevation values of a cell's adjacent neighbours. Each of these is strongly affected by the level of detail in the terrain data, such as the resolution of a DEM, which should be chosen carefully.
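As a sketch of such a neighborhood operation, slope and aspect at an interior DEM cell can be approximated with central differences on the four edge-adjacent cells. The function and conventions below are illustrative, not drawn from any particular GIS package:

```python
import math

def slope_aspect(dem, row, col, cell_size):
    """Slope (degrees) and aspect (degrees clockwise from north, of the
    downslope direction) at an interior cell of a row-major DEM grid,
    where row 0 is the northern edge."""
    dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cell_size)
    dz_dy = (dem[row - 1][col] - dem[row + 1][col]) / (2 * cell_size)
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    # terrain faces downslope, i.e. opposite the gradient vector
    aspect = math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360
    return slope, aspect
```

For example, a plane rising one meter per one-meter cell toward the east has a 45 degree slope and faces west (aspect 270).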
=== Proximity analysis ===
Distance is a key part of solving many geographic tasks, usually due to the friction of distance. Thus, a wide variety of analysis tools analyze distance in some form, such as buffers, Voronoi or Thiessen polygons, cost distance analysis, and network analysis.
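A circular buffer query, reduced to its planar essentials, is just a distance test. The helper below is a hypothetical sketch that ignores projections and geodesic distance:

```python
import math

def within_buffer(features, center, radius):
    """Select (x, y) point features falling inside a circular buffer
    around center, using planar Euclidean distance."""
    cx, cy = center
    return [p for p in features if math.hypot(p[0] - cx, p[1] - cy) <= radius]

hydrants = [(1, 1), (5, 5), (2, -1)]
within_buffer(hydrants, (0, 0), 3)  # keeps (1, 1) and (2, -1)
```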
=== Data analysis ===
It is difficult to relate wetlands maps to rainfall amounts recorded at different points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information, such as the viability of water power as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region.
Additionally, from a series of three-dimensional points, or digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach, by computing all of the areas contiguous and uphill from any given point of interest. Similarly, an expected thalweg of where surface water would want to travel in intermittent and permanent streams can be computed from elevation data in the GIS.
=== Topological modeling ===
A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else).
=== Geometric networks ===
Geometric networks are linear networks of objects that can be used to represent interconnected features, and to perform special spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weight and flow assigned to their edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling.
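A shortest-path query over such a weighted network is the canonical example. A compact Dijkstra sketch over a dict-of-dicts adjacency list, where edge weights might be travel times or pipe lengths (the road graph is invented for illustration):

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Least total edge weight from start to goal in a dict-of-dicts
    adjacency list, or None if goal is unreachable (Dijkstra's algorithm)."""
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry superseded by a cheaper path
        for neighbor, weight in graph[node].items():
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return None

roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
shortest_path_cost(roads, "A", "D")  # 8, via A -> C -> B -> D
```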
=== Cartographic modeling ===
Dana Tomlin coined the term cartographic modeling in his PhD dissertation (1983); he later used it in the title of his book, Geographic Information Systems and Cartographic Modeling (1990). Cartographic modeling refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.
=== Map overlay ===
The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area.
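The three overlay types obey simple area identities, which can be checked with a toy example using axis-aligned rectangles in place of general polygons (real GIS overlays handle arbitrary geometry and also merge attribute tables):

```python
def rect_intersection(a, b):
    """Overlap of two axis-aligned rectangles (xmin, ymin, xmax, ymax),
    or None if they do not overlap; a toy stand-in for polygon overlay."""
    xmin, ymin = max(a[0], b[0]), max(a[1], b[1])
    xmax, ymax = min(a[2], b[2]), min(a[3], b[3])
    return (xmin, ymin, xmax, ymax) if xmin < xmax and ymin < ymax else None

def area(r):
    return 0 if r is None else (r[2] - r[0]) * (r[3] - r[1])

a, b = (0, 0, 4, 4), (2, 2, 6, 6)
inter = area(rect_intersection(a, b))      # intersect overlay: area 4
union = area(a) + area(b) - inter          # union overlay: area 28
sym_diff = area(a) + area(b) - 2 * inter   # symmetric difference: area 24
```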
Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset.
In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra", through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon.
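A local operation of this kind is a cell-by-cell function over co-registered grids. A minimal weighted index model might look like the following (the layer names in the usage note are illustrative):

```python
def weighted_index(rasters, weights):
    """Cell-by-cell weighted sum of co-registered, equally sized raster
    grids (lists of rows): a minimal 'index model' form of map algebra."""
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [[sum(w * r[i][j] for r, w in zip(rasters, weights))
             for j in range(cols)]
            for i in range(rows)]
```

For instance, `weighted_index([slope_grid, soils_grid], [0.7, 0.3])` would weight slope more heavily than soil type in a suitability score.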
=== Geostatistics ===
Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation, and predict values at arbitrary locations (interpolation).
When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g. traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined from the scale and distribution of the data collection.
To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is due to the limitations of the applied statistic and data collection methods, and interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable.
Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently, depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: are they abrupt or gradual? Finally, there is whether a method is global (it uses the entire data set to form the model) or local (an algorithm is repeated for each small section of terrain).
Interpolation is justified by the principle of spatial autocorrelation, which recognizes that data collected at any position will be highly similar to, and influenced by, data at locations within its immediate vicinity.
Digital elevation models, triangulated irregular networks, edge-finding algorithms, Thiessen polygons, Fourier analysis, (weighted) moving averages, inverse distance weighting, kriging, spline, and trend surface analysis are all mathematical methods to produce interpolative data.
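Of these, inverse distance weighting is among the simplest to state: each sample contributes in proportion to an inverse power of its distance from the query point. A self-contained sketch:

```python
import math

def idw(samples, x, y, power=2):
    """Inverse distance weighted estimate at (x, y) from (px, py, value)
    samples; returns a sample's value exactly if the query coincides
    with it (IDW is an exact interpolator)."""
    numerator = denominator = 0.0
    for px, py, value in samples:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return value
        w = 1.0 / d ** power
        numerator += w * value
        denominator += w
    return numerator / denominator
```

A query point equidistant from two samples receives their simple average; closer samples dominate as the distance gap grows.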
=== Address geocoding ===
Geocoding is interpolating spatial locations (X,Y coordinates) from street addresses or any other spatially referenced data such as ZIP Codes, parcel lots and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. The individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be an actually positioned space as opposed to an interpolated point. This approach is being increasingly used to provide more precise location information.
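The interpolation step amounts to placing the address at its proportional position within the segment's address range. The helper below is a hypothetical sketch; real geocoders also handle side-of-street offsets, address parity, and multiple candidate segments:

```python
def geocode_along_segment(house_number, range_start, range_end, seg_start, seg_end):
    """Estimate (x, y) for a house number by linear interpolation along a
    road centerline segment with a known address range."""
    t = (house_number - range_start) / (range_end - range_start)
    x = seg_start[0] + t * (seg_end[0] - seg_start[0])
    y = seg_start[1] + t * (seg_end[1] - seg_start[1])
    return x, y
```

As in the example above, address 500 on a segment ranging 1 to 1,000 lands almost exactly at the segment's midpoint.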
=== Reverse geocoding ===
Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range.
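Reverse geocoding inverts that interpolation: from the fraction of the way along the segment where the coordinate falls, estimate the house number. Again a toy sketch ignoring street side and parity:

```python
def estimate_house_number(t, range_start, range_end):
    """Estimated house number at fraction t (0..1) along a road segment
    whose addresses run from range_start to range_end."""
    return round(range_start + t * (range_end - range_start))

estimate_house_number(0.5, 1, 100)  # near 50, as in the example above
```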
=== Multi-criteria decision analysis ===
Coupled with GIS, multi-criteria decision analysis methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritised. GIS MCDA may reduce costs and time involved in identifying potential restoration sites.
=== GIS data mining ===
GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis.
== Data output and cartography ==
Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, although quality cartography is also produced by importing layers into a design program for refinement. Most GIS software gives the user substantial control over the appearance of the data.
Cartographic work serves two major functions:
First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web Map Servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.).
Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill.
An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map that is applied to a specific building or a part of a building. It is suited to the visual display of heat-loss data.
=== Terrain depiction ===
Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of land surface with contour lines or with shaded relief.
Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California.
The digital elevation model, consisting of surface elevations recorded on a 30-meter horizontal grid, shows high elevations as white and low elevation as black.
The accompanying Landsat Thematic Mapper image shows a false-color infrared image looking down at the same area in 30-meter pixels, or picture elements, for the same coordinate points, pixel by pixel, as the elevation information.
A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day.
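The shading itself is commonly computed with the Lambertian hillshade formula, which combines the sun's position with each cell's slope and aspect. The formula is standard; the function name and defaults (a 45 degree sun from the northwest) are ours:

```python
import math

def hillshade(slope_deg, aspect_deg, sun_altitude_deg=45, sun_azimuth_deg=315):
    """Lambertian hillshade brightness (0..1) for a cell with the given
    slope and aspect, lit from the given sun altitude and azimuth."""
    z = math.radians(90 - sun_altitude_deg)  # sun zenith angle
    az = math.radians(sun_azimuth_deg)
    s = math.radians(slope_deg)
    a = math.radians(aspect_deg)
    shade = (math.cos(z) * math.cos(s) +
             math.sin(z) * math.sin(s) * math.cos(az - a))
    return max(shade, 0.0)  # clamp fully self-shadowed cells to black
```

Flat terrain under a 45 degree sun renders at about 0.71 brightness; a cliff facing directly away from the sun clamps to 0.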
=== Web mapping ===
In recent years there has been a proliferation of free-to-use and easily accessible mapping software such as the proprietary web applications Google Maps and Bing Maps, as well as the free and open-source alternative OpenStreetMap. These services give the public access to huge amounts of geographic data, perceived by many users to be as trustworthy and usable as professional information. For example, during the COVID-19 pandemic, web maps hosted on dashboards were used to rapidly disseminate case data to the general public.
Some of them, like Google Maps and OpenLayers, expose an application programming interface (API) that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world. These mashup projects have been proven to provide a high level of value and benefit to end users beyond what is possible through traditional geographic information.
Web mapping is not without its drawbacks. Web mapping allows for the creation and distribution of maps by people without proper cartographic training. This has led to maps that ignore cartographic conventions and are potentially misleading, with one study finding that more than half of United States state government COVID-19 dashboards did not follow these conventions.
== Uses ==
Since its origin in the 1960s, GIS has been used in an ever-increasing range of applications, corroborating the widespread importance of location and aided by the continuing reduction in the barriers to adopting geospatial technology. The perhaps hundreds of different uses of GIS can be classified in several ways:
Goal: the purpose of an application can be broadly classified as either scientific research or resource management. The purpose of research, defined as broadly as possible, is to discover new knowledge; this may be performed by someone who considers themself a scientist, but may also be done by anyone who is trying to learn why the world appears to work the way it does. A study as practical as deciphering why a business location has failed would be research in this sense. Management (sometimes called operational applications), also defined as broadly as possible, is the application of knowledge to make practical decisions on how to employ the resources one has control over to achieve one's goals. These resources could be time, capital, labor, equipment, land, mineral deposits, wildlife, and so on.
Decision level: management applications have been further classified as strategic, tactical, or operational, a common classification in business management. Strategic tasks are long-term, visionary decisions about what goals one should have, such as whether a business should expand or not. Tactical tasks are medium-term decisions about how to achieve strategic goals, such as a national forest creating a grazing management plan. Operational decisions are concerned with day-to-day tasks, such as a person finding the shortest route to a pizza restaurant.
Topic: the domains in which GIS is applied largely fall into those concerned with the human world (e.g., economics, politics, transportation, education, landscape architecture, archaeology, urban planning, real estate, public health, crime mapping, national defense), and those concerned with the natural world (e.g., geology, biology, oceanography, climate). That said, one of the powerful capabilities of GIS and the spatial perspective of geography is their integrative ability to compare disparate topics, and many applications are concerned with multiple domains. Examples of integrated human-natural application domains include deep mapping, natural hazard mitigation, wildlife management, sustainable development, natural resources, and climate change response.
Institution: GIS has been implemented in a variety of different kinds of institutions: government (at all levels from municipal to international), business (of all types and sizes), non-profit organizations (even churches), as well as personal uses. The latter has become increasingly prominent with the rise of location-enabled smartphones.
Lifespan: GIS implementations may be focused on a project or an enterprise. A Project GIS is focused on accomplishing a single task: data is gathered, analysis is performed, and results are produced separately from any other projects the person may perform, and the implementation is essentially transitory. An Enterprise GIS is intended to be a permanent institution, including a database that is carefully designed to be useful for a variety of projects over many years, and is likely used by many individuals across an enterprise, with some employed full-time just to maintain it.
Integration: Traditionally, most GIS applications were standalone, using specialized GIS software, specialized hardware, specialized data, and specialized professionals. Although these remain common to the present day, integrated applications have greatly increased, as geospatial technology was merged into broader enterprise applications, sharing IT infrastructure, databases, and software, often using enterprise integration platforms such as SAP.
The implementation of a GIS is often driven by jurisdictional (such as a city), purpose, or application requirements. Generally, a GIS implementation may be custom-designed for an organization. Hence, a GIS deployment developed for an application, jurisdiction, enterprise, or purpose may not be necessarily interoperable or compatible with a GIS that has been developed for some other application, jurisdiction, enterprise, or purpose.
GIS is also diverging into location-based services, which allows GPS-enabled mobile devices to display their location in relation to fixed objects (nearest restaurant, gas station, fire hydrant) or mobile objects (friends, children, police car), or to relay their position back to a central server for display or other processing.
GIS is also used in digital marketing and SEO for audience segmentation based on location.
=== Topics ===
==== Aquatic science ====
==== Archaeology ====
==== Disaster response ====
Geospatial disaster response uses geospatial data and tools to help emergency responders, land managers, and scientists respond to disasters. Geospatial data can help save lives, reduce damage, and improve communication. Federal authorities such as FEMA can use geospatial data to create maps that show the extent of a disaster, the location of people in need, and the location of debris; to create models that estimate the number of people at risk and the amount of damage; to improve communication between emergency responders, land managers, and scientists; to help determine where to allocate resources, such as emergency medical resources or search and rescue teams; and to plan evacuation routes and identify which areas are most at risk.
In the United States, FEMA's Response Geospatial Office (RGO) is responsible for the agency's capture, analysis, and development of GIS products to enhance situational awareness and enable expeditious and effective decision making. The RGO's mission is to support decision makers in understanding the size, scope, and extent of disaster impacts so they can deliver resources to the communities most in need.
==== Environmental governance ====
==== Environmental contamination ====
==== Geological mapping ====
==== Geospatial intelligence ====
==== History ====
The use of digital maps generated by GIS has also influenced the development of an academic field known as spatial humanities.
==== Hydrology ====
==== Participatory GIS ====
==== Public health ====
==== Traditional knowledge GIS ====
== Other aspects ==
=== Open Geospatial Consortium standards ===
The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service, and Web Feature Service.
GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications.
Compliant products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, the product is automatically registered as "compliant" on the OGC website.
Implementing products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry.
=== Adding the dimension of time ===
The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS. GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years through the use of cartographic visualizations. As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.
GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced for example by the advanced very-high-resolution radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 km2 (0.39 sq mi). The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and more recently the moderate-resolution imaging spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis.
In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of data required to produce this data set would not have been possible without GIS.
Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems.
=== Semantics ===
Tools and technologies emerging from the World Wide Web Consortium's Semantic Web are proving useful for data integration problems in information systems. Correspondingly, such technologies have been proposed as a means to facilitate interoperability and data reuse among GIS applications and also to enable new analysis mechanisms.
Ontologies are a key component of this semantic approach as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the intended meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as deciduous needleleaf trees in one dataset is a specialization or subset of land cover type forest in another more roughly classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Tentative ontologies have been developed in areas related to GIS applications, for example the hydrology ontology developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies developed by NASA's Jet Propulsion Laboratory. Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group to represent geospatial data on the web. GeoSPARQL is a standard developed by the Ordnance Survey, United States Geological Survey, Natural Resources Canada, Australia's Commonwealth Scientific and Industrial Research Organisation and others to support ontology creation and reasoning using well-understood OGC literals (GML, WKT), topological relationships (Simple Features, RCC8, DE-9IM), RDF and the SPARQL database query protocols.
Recent research results in this area can be seen in the International Conference on Geospatial Semantics and the Terra Cognita – Directions to the Geospatial Semantic Web workshop at the International Semantic Web Conference.
== Societal implications ==
With the popularization of GIS in decision making, scholars have begun to scrutinize the social and political implications of GIS. GIS can also be misused to distort reality for individual and political gain. It has been argued that the production, distribution, utilization, and representation of geographic information are closely tied to the social context and have the potential to increase citizen trust in government. Other related topics include discussion on copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation.
=== In education ===
At the end of the 20th century, GIS began to be recognized as tools that could be used in the classroom. The benefits of GIS in education seem focused on developing spatial cognition, but there is not enough bibliography or statistical data to show the concrete scope of the use of GIS in education around the world, although the expansion has been faster in those countries where the curriculum mentions them.: 36
GIS seems to provide many advantages in teaching geography because it allows for analysis based on real geographic data and also helps raise research questions from teachers and students in the classroom. It also contributes to improvement in learning by developing spatial and geographical thinking and, in many cases, student motivation.: 38
Courses in GIS are also offered by educational institutions.
=== In local government ===
GIS has proven to be an organization-wide, enterprise, and enduring technology that continues to change how local government operates. Government agencies have adopted GIS technology as a method to better manage the following areas of government organization:
Economic development departments use interactive GIS mapping tools, aggregated with other data (demographics, labor force, business, industry, talent) along with a database of available commercial sites and buildings in order to attract investment and support existing business. Businesses making location decisions can use the tools to choose communities and sites that best match their criteria for success.
Public safety operations such as emergency operations centers, fire prevention, police and sheriff mobile technology and dispatch, and mapping weather risks.
Parks and recreation departments and their functions in asset inventory, land conservation, land management, and cemetery management
Public works and utilities, tracking water and stormwater drainage, electrical assets, engineering projects, and public transportation assets and trends
Fiber network management for interdepartmental network assets
School analytical and demographic data, asset management, and improvement/expansion planning
Public administration for election data, property records, and zoning/management
The open data initiative is pushing local government to take advantage of technology such as GIS technology, as it encompasses the requirements to fit the open data/open government model of transparency. With open data, local government organizations can implement citizen engagement applications and online portals, allowing citizens to see land information, report potholes and signage issues, view and sort parks by assets, view real-time crime rates and utility repairs, and much more. The push for open data within government organizations is driving the growth in local government GIS technology spending, and database management.
== See also ==
== References ==
== Further reading ==
Bolstad, P. (2019). GIS Fundamentals: A first text on Geographic Information Systems, Sixth Edition. Ann Arbor: XanEdu, 764 pp.
Burrough, P. A. and McDonnell, R. A. (1998). Principles of geographical information systems. Oxford University Press, Oxford, 327 pp.
DeMers, M. (2009). Fundamentals of Geographic Information Systems, 4th Edition. Wiley, ISBN 978-0-470-12906-7
Harvey, Francis (2008). A Primer of GIS, Fundamental geographic and cartographic concepts. The Guilford Press, 31 pp.
Heywood, I., Cornelius, S., and Carver, S. (2006). An Introduction to Geographical Information Systems. Prentice Hall. 3rd edition.
Ott, T. and Swiaczny, F. (2001). Time-integrative GIS. Management and analysis of spatio-temporal data. Berlin / Heidelberg / New York: Springer.
Thurston, J., Poiker, T.K. and J. Patrick Moore. (2003). Integrated Geospatial Technologies: A Guide to GPS, GIS, and Data Logging. Hoboken, New Jersey: Wiley.
Worboys, Michael; Duckham, Matt (2004). GIS: a computing perspective. Boca Raton: CRC Press. ISBN 978-0415283755.
== External links ==
Media related to Geographic information systems at Wikimedia Commons
In computer science, graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically. It has numerous applications, ranging from software engineering (software construction and also software verification) to layout algorithms and picture generation.
Graph transformations can be used as a computation abstraction. The basic idea is that if the state of a computation can be represented as a graph, further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.
Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L → R, with L being called the pattern graph (or left-hand side) and R being called the replacement graph (or right-hand side) of the rule. A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching, thus solving the subgraph isomorphism problem) and by replacing the found occurrence by an instance of the replacement graph. Rewrite rules can be further regulated in the case of labeled graphs, such as in string-regulated graph grammars.
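As an illustration of this process, a rewrite step on a small directed graph can be sketched as a brute-force pattern match followed by replacement. The edge-set representation and the helper names `find_match` and `apply_rule` are illustrative choices, not a standard API, and the sketch assumes the rule neither creates nor deletes nodes:

```python
from itertools import permutations

def find_match(pattern_edges, pattern_nodes, host_edges, host_nodes):
    """Brute-force pattern matching: try every injective map of pattern
    nodes into host nodes (an instance of subgraph isomorphism)."""
    for image in permutations(sorted(host_nodes), len(pattern_nodes)):
        m = dict(zip(pattern_nodes, image))
        if all((m[u], m[v]) in host_edges for (u, v) in pattern_edges):
            return m
    return None

def apply_rule(lhs, rhs, host_edges):
    """Apply L -> R once: delete the matched copy of L and add a copy
    of R under the same node mapping (the rule preserves the node set)."""
    pattern_nodes = sorted({v for e in lhs for v in e})
    host_nodes = {v for e in host_edges for v in e}
    m = find_match(lhs, pattern_nodes, host_edges, host_nodes)
    if m is None:
        return host_edges          # rule not applicable
    result = set(host_edges) - {(m[u], m[v]) for (u, v) in lhs}
    return result | {(m[u], m[v]) for (u, v) in rhs}

# Rule that reverses one edge: pattern x -> y, replacement y -> x.
host = {(1, 2), (2, 3)}
print(sorted(apply_rule([("x", "y")], [("y", "x")], host)))  # [(2, 1), (2, 3)]
```

Real tools replace the exhaustive `permutations` search with backtracking matchers, but the match-then-replace shape of a rewriting step is the same.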
Sometimes graph grammar is used as a synonym for graph rewriting system, especially in the context of formal languages; the different wording is used to emphasize the goal of constructions, like the enumeration of all graphs from some starting graph, i.e. the generation of a graph language – instead of simply transforming a given state (host graph) into a new state.
== Graph rewriting approaches ==
=== Algebraic approach ===
The algebraic approach to graph rewriting is based upon category theory. The algebraic approach is further divided into sub-approaches, the most common of which are the double-pushout (DPO) approach and the single-pushout (SPO) approach. Other sub-approaches include the sesqui-pushout and the pullback approach.
From the perspective of the DPO approach a graph rewriting rule is a pair of morphisms in the category of graphs and graph homomorphisms between them: r = (L ← K → R), also written L ⊇ K ⊆ R, where K → L is injective. The graph K is called the invariant or sometimes the gluing graph. A rewriting step or application of a rule r to a host graph G is defined by two pushout diagrams both originating in the same morphism k: K → D, where D is a context graph (this is where the name double-pushout comes from). Another graph morphism m: L → G models an occurrence of L in G and is called a match. The practical understanding of this is that L is a subgraph that is matched from G (see subgraph isomorphism problem), and after a match is found, L is replaced with R in the host graph G, where K serves as an interface containing the nodes and edges which are preserved when applying the rule. The graph K is needed to attach the pattern being matched to its context: if it is empty, the match can only designate a whole connected component of the graph G.
In contrast, a graph rewriting rule of the SPO approach is a single morphism in the category of labeled multigraphs and partial mappings that preserve the multigraph structure: r: L → R. Thus a rewriting step is defined by a single pushout diagram. The practical understanding of this is similar to the DPO approach. The difference is that there is no interface between the host graph G and the graph G′ that results from the rewriting step.
From the practical perspective, the key distinction between DPO and SPO is how they deal with the deletion of nodes with adjacent edges, in particular, how they avoid that such deletions may leave behind "dangling edges". The DPO approach only deletes a node when the rule specifies the deletion of all adjacent edges as well (this dangling condition can be checked for a given match), whereas the SPO approach simply discards the adjacent edges, without requiring an explicit specification.
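This behavioural difference can be made concrete with a small sketch over edge sets; the helper names `dpo_delete_node` and `spo_delete_node` are hypothetical simplifications of the two approaches, not real library functions:

```python
def dpo_delete_node(node, matched_edges, graph_edges):
    """DPO-style deletion: refuse (return None) if the match would leave
    a dangling edge, i.e. an edge at `node` not covered by the match."""
    dangling = [e for e in graph_edges if node in e and e not in matched_edges]
    if dangling:
        return None  # dangling condition violated: rule not applicable
    return {e for e in graph_edges if node not in e}

def spo_delete_node(node, graph_edges):
    """SPO-style deletion: simply discard all adjacent edges."""
    return {e for e in graph_edges if node not in e}

edges = {(1, 2), (2, 3)}
# Deleting node 2 with only (1, 2) matched would leave (2, 3) dangling: DPO refuses.
assert dpo_delete_node(2, {(1, 2)}, edges) is None
# SPO deletes the node together with all incident edges regardless.
assert spo_delete_node(2, edges) == set()
```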
There is also another algebraic-like approach to graph rewriting, based mainly on Boolean algebra and an algebra of matrices, called matrix graph grammars.
=== Determinate graph rewriting ===
Yet another approach to graph rewriting, known as determinate graph rewriting, came out of logic and database theory. In this approach, graphs are treated as database instances, and rewriting operations as a mechanism for defining queries and views; therefore, all rewriting is required to yield unique results (up to isomorphism), and this is achieved by applying any rewriting rule concurrently throughout the graph, wherever it applies, in such a way that the result is indeed uniquely defined.
=== Term graph rewriting ===
Another approach to graph rewriting is term graph rewriting, which involves the processing or transformation of term graphs (also known as abstract semantic graphs) by a set of syntactic rewrite rules.
Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler's operational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. Term graphs can perform automated verification and logical programming since they are well-suited to representing quantified statements in first order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings.
The TERMGRAPH conference focuses entirely on research into term graph rewriting and its applications.
== Classes of graph grammar and graph rewriting system ==
Graph rewriting systems naturally group into classes according to the kind of representation of graphs that are used and how the rewrites are expressed. The term graph grammar, otherwise equivalent to graph rewriting system or graph replacement system, is most often used in classifications. Some common types are:
Attributed graph grammars, typically formalised using either the single-pushout approach or the double-pushout approach to characterising replacements, mentioned in the above section on the algebraic approach to graph rewriting.
Hypergraph grammars, including as more restrictive subclasses port graph grammars, linear graph grammars and interaction nets.
== Implementations and applications ==
Graphs are an expressive, visual and mathematically precise formalism for modelling of objects (entities) linked by relations; objects are represented by nodes and relations between them by edges. Nodes and edges are commonly typed and attributed. Computations are described in this model by changes in the relations between the entities or by attribute changes of the graph elements. They are encoded in graph rewrite/graph transformation rules and executed by graph rewrite systems/graph transformation tools.
Tools that are application domain neutral:
AGG, the attributed graph grammar system (Java).
GP 2 is a visual rule-based graph programming language designed to facilitate formal reasoning over graph programs.
GMTE, the Graph Matching and Transformation Engine for graph matching and transformation. It is an implementation of an extension of Messmer’s algorithm using C++.
GrGen.NET, the graph rewrite generator, a graph transformation tool emitting C#-code or .NET-assemblies.
GROOVE, a Java-based tool set for editing graphs and graph transformation rules, exploring the state spaces of graph grammars, and model checking those state spaces; can also be used as a graph transformation engine.
Verigraph, a software specification and verification system based on graph rewriting (Haskell).
Tools that solve software engineering tasks (mainly MDA) with graph rewriting:
eMoflon, an EMF-compliant model-transformation tool with support for Story-Driven Modeling and Triple Graph Grammars.
EMorF, a graph rewriting system based on EMF, supporting in-place and model-to-model transformation.
Fujaba uses Story driven modelling, a graph rewrite language based on PROGRES.
Graph databases often support dynamic rewriting of graphs.
GReAT.
Gremlin, a graph-based programming language (see Graph Rewriting).
Henshin, a graph rewriting system based on EMF, supporting in-place and model-to-model transformation, critical pair analysis, and model checking.
PROGRES, an integrated environment and very high level language for PROgrammed Graph REwriting Systems.
VIATRA.
Mechanical engineering tools
GraphSynth is an interpreter and UI environment for creating unrestricted graph grammars as well as testing and searching the resultant language variant. It saves graphs and graph grammar rules as XML files and is written in C#.
Soley Studio, is an integrated development environment for graph transformation systems. Its main application focus is data analytics in the field of engineering.
Biology applications
Functional-structural plant modeling with a graph grammar based language
Multicellular development modeling with string-regulated graph grammars
Kappa is a rule-based language for modeling systems of interacting agents, primarily motivated by molecular systems biology.
Artificial Intelligence/Natural Language Processing
OpenCog provides a basic pattern matcher (on hypergraphs) which is used to implement various AI algorithms.
RelEx is an English-language parser that employs graph re-writing to convert a link parse into a dependency parse.
Computer programming language
The Clean programming language is implemented using graph rewriting.
== See also ==
Graph theory
Shape grammar
Formal grammar
Abstract rewriting — a generalization of graph rewriting
== References ==
=== Citations ===
=== Sources ===
In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices. In contrast, in an ordinary graph, an edge connects exactly two vertices.
Formally, a directed hypergraph is a pair (X, E), where X is a set of elements called nodes, vertices, points, or elements and E is a set of pairs of subsets of X. Each of these pairs (D, C) ∈ E is called an edge or hyperedge; the vertex subset D is known as its tail or domain, and C as its head or codomain.
The order of a hypergraph (X, E) is the number of vertices in X. The size of the hypergraph is the number of edges in E. The order of an edge e = (D, C) in a directed hypergraph is |e| = (|D|, |C|): that is, the number of vertices in its tail followed by the number of vertices in its head.
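As a concrete illustration of this definition, a directed hypergraph can be encoded as a vertex set plus a list of (tail, head) pairs, from which order, size, and edge orders follow directly; the frozenset encoding below is one possible choice, not a fixed convention:

```python
# A directed hypergraph (X, E): vertex set plus (tail, head) pairs of frozensets.
X = {1, 2, 3, 4}
E = [(frozenset({1, 2}), frozenset({3})),   # edge with tail {1, 2}, head {3}
     (frozenset({3}), frozenset({1, 4}))]   # edge with tail {3}, head {1, 4}

order = len(X)                                    # number of vertices
size = len(E)                                     # number of edges
edge_orders = [(len(D), len(C)) for (D, C) in E]  # |e| = (|D|, |C|)
print(order, size, edge_orders)  # 4 2 [(2, 1), (1, 2)]
```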
The definition above generalizes from a directed graph to a directed hypergraph by defining the head or tail of each edge as a set of vertices (C ⊆ X or D ⊆ X) rather than as a single vertex. A graph is then the special case where each of these sets contains only one element. Hence any standard graph theoretic concept that is independent of the edge orders |e| will generalize to hypergraph theory.
An undirected hypergraph (X, E) is an undirected graph whose edges connect not just two vertices, but an arbitrary number. An undirected hypergraph is also called a set system or a family of sets drawn from the universal set.
Hypergraphs can be viewed as incidence structures. In particular, there is a bipartite "incidence graph" or "Levi graph" corresponding to every hypergraph, and conversely, every bipartite graph can be regarded as the incidence graph of a hypergraph when it is 2-colored and it is indicated which color class corresponds to hypergraph vertices and which to hypergraph edges.
Hypergraphs have many other names. In computational geometry, an undirected hypergraph may sometimes be called a range space and then the hyperedges are called ranges.
In cooperative game theory, hypergraphs are called simple games (voting games); this notion is applied to solve problems in social choice theory. In some literature edges are referred to as hyperlinks or connectors.
The collection of hypergraphs is a category with hypergraph homomorphisms as morphisms.
== Applications ==
Undirected hypergraphs are useful in modelling such things as satisfiability problems, databases, machine learning, and Steiner tree problems. They have been extensively used in machine learning tasks as the data model and for classifier regularization. The applications include recommender systems (communities as hyperedges), image retrieval (correlations as hyperedges), and bioinformatics (biochemical interactions as hyperedges). Representative hypergraph learning techniques include hypergraph spectral clustering, which extends spectral graph theory with the hypergraph Laplacian, and hypergraph semi-supervised learning, which introduces an extra hypergraph structural cost to constrain the learning results. For large-scale hypergraphs, a distributed framework built using Apache Spark is also available.
It can be desirable to study hypergraphs where all hyperedges have the same cardinality; a k-uniform hypergraph is a hypergraph such that all its hyperedges have size k. (In other words, one such hypergraph is a collection of sets, each such set a hyperedge connecting k nodes.) So a 2-uniform hypergraph is a graph, a 3-uniform hypergraph is a collection of unordered triples, and so on.
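The uniformity condition described above is straightforward to check; a minimal sketch, assuming hyperedges are given as Python sets:

```python
def is_k_uniform(edges, k):
    """A hypergraph is k-uniform when every hyperedge has exactly k vertices."""
    return all(len(e) == k for e in edges)

triples = [{1, 2, 3}, {2, 3, 4}]
assert is_k_uniform(triples, 3)           # a 3-uniform hypergraph
assert is_k_uniform([{1, 2}, {3, 4}], 2)  # 2-uniform = an ordinary graph
assert not is_k_uniform([{1, 2}, {3, 4, 5}], 2)
```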
Directed hypergraphs can be used to model things including telephony applications, detecting money laundering, operations research, and transportation planning. They can also be used to model Horn-satisfiability.
== Generalizations of concepts from graphs ==
Many theorems and concepts involving graphs also hold for hypergraphs, in particular:
Matching in hypergraphs;
Vertex cover in hypergraphs (also known as: transversal);
Line graph of a hypergraph;
Hypergraph grammar - created by augmenting a class of hypergraphs with a set of replacement rules;
Ramsey's theorem;
Erdős–Ko–Rado theorem;
Kruskal–Katona theorem on uniform hypergraphs;
Hall-type theorems for hypergraphs.
In directed hypergraphs: transitive closure, and shortest path problems.
== Hypergraph drawing ==
Although hypergraphs are more difficult to draw on paper than graphs, several researchers have studied methods for the visualization of hypergraphs.
In one possible visual representation for hypergraphs, similar to the standard graph drawing style in which curves in the plane are used to depict graph edges, a hypergraph's vertices are depicted as points, disks, or boxes, and its hyperedges are depicted as trees that have the vertices as their leaves. If the vertices are represented as points, the hyperedges may also be shown as smooth curves that connect sets of points, or as simple closed curves that enclose sets of points.
In another style of hypergraph visualization, the subdivision model of hypergraph drawing, the plane is subdivided into regions, each of which represents a single vertex of the hypergraph. The hyperedges of the hypergraph are represented by contiguous subsets of these regions, which may be indicated by coloring, by drawing outlines around them, or both. An order-n Venn diagram, for instance, may be viewed as a subdivision drawing of a hypergraph with n hyperedges (the curves defining the diagram) and 2n − 1 vertices (represented by the regions into which these curves subdivide the plane). In contrast with the polynomial-time recognition of planar graphs, it is NP-complete to determine whether a hypergraph has a planar subdivision drawing, but the existence of a drawing of this type may be tested efficiently when the adjacency pattern of the regions is constrained to be a path, cycle, or tree.
An alternative representation of the hypergraph called PAOH is shown in the figure on top of this article. Edges are vertical lines connecting vertices. Vertices are aligned on the left. The legend on the right shows the names of the edges. It has been designed for dynamic hypergraphs but can be used for simple hypergraphs as well.
== Hypergraph coloring ==
Classic hypergraph coloring is assigning one of the colors from a set {1, 2, 3, ..., λ} to every vertex of a hypergraph in such a way that each hyperedge contains at least two vertices of distinct colors. In other words, there must be no monochromatic hyperedge with cardinality at least 2. In this sense it is a direct generalization of graph coloring. The minimum number of used distinct colors over all colorings is called the chromatic number of a hypergraph.
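A proper coloring in this classic sense can be verified mechanically; the sketch below assumes hyperedges as sets and a color assignment as a dictionary (the names are illustrative):

```python
def is_proper_coloring(edges, color):
    """Classic hypergraph coloring: no hyperedge with at least two
    vertices may be monochromatic."""
    return all(len({color[v] for v in e}) >= 2
               for e in edges if len(e) >= 2)

edges = [{1, 2, 3}, {3, 4}]
assert is_proper_coloring(edges, {1: "a", 2: "b", 3: "a", 4: "b"})
# {1, 2, 3} is monochromatic here, so the coloring is not proper:
assert not is_proper_coloring(edges, {1: "a", 2: "a", 3: "a", 4: "b"})
```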
Hypergraphs for which there exists a coloring using up to k colors are referred to as k-colorable. The 2-colorable hypergraphs are exactly the bipartite ones.
There are many generalizations of classic hypergraph coloring. One of them is the so-called mixed hypergraph coloring, when monochromatic edges are allowed. Some mixed hypergraphs are uncolorable for any number of colors. A general criterion for uncolorability is unknown. When a mixed hypergraph is colorable, then the minimum and maximum number of used colors are called the lower and upper chromatic numbers respectively.
== Properties of hypergraphs ==
A hypergraph can have various properties, such as:
Empty - has no edges.
Non-simple (or multiple) - has loops (hyperedges with a single vertex) or repeated edges, which means there can be two or more edges containing the same set of vertices.
Simple - has no loops and no repeated edges.
d-regular - every vertex has degree d, i.e., it is contained in exactly d hyperedges.
2-colorable - its vertices can be partitioned into two classes U and V in such a way that each hyperedge with cardinality at least 2 contains at least one vertex from both classes. An alternative term is Property B.
Two stronger properties are bipartite and balanced.
k-uniform - each hyperedge contains precisely k vertices.
k-partite - the vertices are partitioned into k parts, and each hyperedge contains precisely one vertex of each type.
Every k-partite hypergraph (for k ≥ 2) is both k-uniform and bipartite (and 2-colorable).
Reduced: no hyperedge is a strict subset of another hyperedge; equivalently, every hyperedge is maximal for inclusion. The reduction of a hypergraph is the reduced hypergraph obtained by removing every hyperedge which is included in another hyperedge.
Downward-closed - every subset of an undirected hypergraph's edges is a hyperedge too. A downward-closed hypergraph is usually called an abstract simplicial complex. It is generally not reduced, unless all hyperedges have cardinality 1.
An abstract simplicial complex with the augmentation property is called a matroid.
Laminar: for any two hyperedges, either they are disjoint, or one is included in the other. In other words, the set of hyperedges forms a laminar set family.
== Related hypergraphs ==
Because hypergraph links can have any cardinality, there are several notions of the concept of a subgraph, called subhypergraphs, partial hypergraphs and section hypergraphs.
Let H = (X, E) be the hypergraph consisting of vertices X = {x_i | i ∈ I_v} and having edge set E = {e_i | i ∈ I_e, e_i ⊆ X, e_i ≠ ∅}, where I_v and I_e are the index sets of the vertices and edges respectively.
A subhypergraph is a hypergraph with some vertices removed. Formally, the subhypergraph H_A induced by A ⊆ X is defined as H_A = (A, {e ∩ A | e ∈ E, e ∩ A ≠ ∅}). An alternative term is the restriction of H to A.: 468
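The induced subhypergraph can be computed directly from this definition; a minimal sketch with illustrative names:

```python
def subhypergraph(X, E, A):
    """Subhypergraph induced by A: keep the vertices in A and the
    nonempty traces e ∩ A of the hyperedges."""
    traces = [e & A for e in E]
    return A, [t for t in traces if t]

X = {1, 2, 3, 4}
E = [{1, 2, 3}, {3, 4}, {4}]
print(subhypergraph(X, E, {1, 2, 3}))  # ({1, 2, 3}, [{1, 2, 3}, {3}])
```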
An extension of a subhypergraph is a hypergraph where each hyperedge of H which is partially contained in the subhypergraph H_A is fully contained in the extension Ex(H_A). Formally, Ex(H_A) = (A ∪ A′, E′) with A′ = ⋃_{e ∈ E} e ∖ A and E′ = {e ∈ E | e ⊆ (A ∪ A′)}.
The partial hypergraph is a hypergraph with some edges removed.: 468 Given a subset J ⊂ I_e of the edge index set, the partial hypergraph generated by J is the hypergraph (X, {e_i | i ∈ J}).
Given a subset A ⊆ X, the section hypergraph is the partial hypergraph H × A = (A, {e_i | i ∈ I_e, e_i ⊆ A}).
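Both constructions follow the definitions almost verbatim; in the sketch below, edges are held in a dictionary keyed by index, mirroring the index set I_e (an illustrative encoding):

```python
def partial_hypergraph(X, E, J):
    """Partial hypergraph: keep only the edges whose index lies in J."""
    return X, {i: E[i] for i in J}

def section_hypergraph(X, E, A):
    """Section hypergraph H × A: the edges entirely contained in A, over A."""
    return A, {i: e for i, e in E.items() if e <= A}

X = {1, 2, 3, 4}
E = {0: {1, 2}, 1: {2, 3, 4}, 2: {3, 4}}
assert partial_hypergraph(X, E, {0, 2})[1] == {0: {1, 2}, 2: {3, 4}}
assert section_hypergraph(X, E, {2, 3, 4})[1] == {1: {2, 3, 4}, 2: {3, 4}}
```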
The dual H* of H is a hypergraph whose vertices and edges are interchanged, so that the vertices are given by {e_i} and whose edges are given by {X_m}, where X_m = {e_i | x_m ∈ e_i}. When a notion of equality is properly defined, as done below, the operation of taking the dual of a hypergraph is an involution, i.e., (H*)* = H.
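The dual and its involution property can be checked on a small example; here edges are stored in a dictionary keyed by their index, which is an illustrative encoding rather than a standard one:

```python
def dual(edges):
    """Dual hypergraph: vertex m of H becomes the edge X_m = {i : x_m ∈ e_i}."""
    vertices = sorted({v for e in edges.values() for v in e})
    return {v: frozenset(i for i, e in edges.items() if v in e)
            for v in vertices}

H = {0: {1, 2}, 1: {2, 3}}
H_star = dual(H)            # vertex 2 lies in both edges, so X_2 = {0, 1}
assert H_star == {1: frozenset({0}), 2: frozenset({0, 1}), 3: frozenset({1})}
assert dual(H_star) == H    # taking the dual twice recovers H (involution)
```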
A connected graph G with the same vertex set as a connected hypergraph H is a host graph for H if every hyperedge of H induces a connected subgraph in G. For a disconnected hypergraph H, G is a host graph if there is a bijection between the connected components of G and of H, such that each connected component G' of G is a host of the corresponding H'.
The 2-section (or clique graph, representing graph, primal graph, Gaifman graph) of a hypergraph is the graph with the same vertices of the hypergraph, and edges between all pairs of vertices contained in the same hyperedge.
== Incidence matrix ==
Let V = {v_1, v_2, ..., v_n} and E = {e_1, e_2, ..., e_m}. Every hypergraph has an n × m incidence matrix.
For an undirected hypergraph, I = (b_ij) where b_ij = 1 if v_i ∈ e_j and b_ij = 0 otherwise.
The transpose Iᵗ of the incidence matrix defines a hypergraph H* = (V*, E*) called the dual of H, where V* is an m-element set and E* is an n-element set of subsets of V*. For v_j* ∈ V* and e_i* ∈ E*, v_j* ∈ e_i* if and only if b_ij = 1.
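Both the incidence matrix and the duality-via-transpose observation are easy to reproduce; a minimal sketch using nested lists:

```python
def incidence_matrix(vertices, edges):
    """b[i][j] = 1 iff vertex v_i lies in hyperedge e_j."""
    return [[1 if v in e else 0 for e in edges] for v in vertices]

V = [1, 2, 3]
E = [{1, 2}, {2, 3}]
I = incidence_matrix(V, E)            # one row per vertex, one column per edge
I_t = [list(col) for col in zip(*I)]  # transpose = incidence matrix of the dual
assert I == [[1, 0], [1, 1], [0, 1]]
assert I_t == [[1, 1, 0], [0, 1, 1]]
```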
For a directed hypergraph, the heads and tails of each hyperedge e_j are denoted by H(e_j) and T(e_j) respectively. Then I = (b_ij) where b_ij = −1 if v_i ∈ T(e_j), b_ij = 1 if v_i ∈ H(e_j), and b_ij = 0 otherwise.
=== Incidence graph ===
A hypergraph H may be represented by a bipartite graph BG as follows: the sets X and E are the parts of BG, and (x1, e1) are connected with an edge if and only if vertex x1 is contained in edge e1 in H.
Conversely, any bipartite graph with fixed parts and no unconnected nodes in the second part represents some hypergraph in the manner described above. This bipartite graph is also called incidence graph.
== Adjacency matrix ==
The adjacency matrix of a hypergraph can be defined by analogy with the adjacency matrix of a graph. In the case of a graph, the adjacency matrix is a square matrix which indicates whether pairs of vertices are adjacent. Likewise, we can define the adjacency matrix $A=(a_{ij})$ for a hypergraph in general, where the hyperedges $e_{k\leq m}$ have real weights $w_{e_{k}}\in \mathbb {R}$, with
$$a_{ij}={\begin{cases}w_{e_{k}}&{\text{if }}(v_{i},v_{j})\in E\\0&{\text{otherwise}}.\end{cases}}$$
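Following this definition, the matrix can be built directly. The vertex ordering and the rule that the first matching hyperedge supplies the weight are illustrative assumptions; the definition itself does not fix a tie-break when two vertices share several hyperedges:

```python
def hypergraph_adjacency(vertices, edges, weights):
    """Weighted adjacency matrix: a_ij = w_e if some hyperedge e
    contains both v_i and v_j (i != j), else 0."""
    n = len(vertices)
    A = [[0.0] * n for _ in range(n)]
    for name, members in edges.items():
        w = weights[name]
        idx = [vertices.index(v) for v in members]
        for i in idx:
            for j in idx:
                if i != j and A[i][j] == 0.0:  # first matching edge wins
                    A[i][j] = w
    return A

A = hypergraph_adjacency(['a', 'b', 'c'],
                         {'e1': {'a', 'b'}, 'e2': {'b', 'c'}},
                         {'e1': 2.0, 'e2': 3.0})
assert A[0][1] == 2.0 and A[1][2] == 3.0 and A[0][2] == 0.0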
== Cycles ==
In contrast with ordinary undirected graphs, for which there is a single natural notion of cycles and acyclic graphs, hypergraphs admit multiple natural, non-equivalent definitions of cycles, each of which collapses to the ordinary notion of a cycle in the graph case.
=== Berge cycles ===
A first notion of cycle was introduced by Claude Berge. A Berge cycle in a hypergraph is an alternating sequence of distinct vertices and edges
$(v_{1},e_{1},\dots ,v_{n},e_{n})$ where $v_{i},v_{i+1}$ are both in $e_{i}$ for each $i\in [n]$ (with indices taken modulo $n$).
Under this definition, a hypergraph is acyclic if and only if its incidence graph (the bipartite graph defined above) is acyclic. Berge-cyclicity can therefore be tested in linear time by an exploration of the incidence graph.
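A minimal sketch of this test, assuming the hypergraph is given as a mapping from edge names to vertex sets (an illustrative representation): build the incidence graph and run a DFS with parent links to look for a cycle.

```python
def berge_acyclic(hyperedges):
    """True iff the incidence graph of the hypergraph has no cycle."""
    adj = {}
    for name, members in hyperedges.items():
        for x in members:
            adj.setdefault(('e', name), set()).add(('v', x))
            adj.setdefault(('v', x), set()).add(('e', name))
    seen = set()
    for start in adj:
        if start in seen:
            continue
        stack = [(start, None)]
        seen.add(start)
        while stack:
            node, parent = stack.pop()
            for nb in adj[node]:
                if nb == parent:
                    continue
                if nb in seen:
                    return False  # revisiting a node => cycle
                seen.add(nb)
                stack.append((nb, node))
    return True

assert berge_acyclic({'e1': {'a', 'b'}, 'e2': {'b', 'c'}})
assert not berge_acyclic({'e1': {'a', 'b'}, 'e2': {'a', 'b'}})
```

The second example is Berge-cyclic exactly as the α-acyclicity section below explains: two distinct hyperedges share two vertices.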
=== Tight cycles ===
This definition is particularly used for $k$-uniform hypergraphs, where all hyperedges have size $k$. A tight cycle of length $n$ in a hypergraph $H$ is a sequence of distinct vertices $v_{1},\dots ,v_{n}$ such that every consecutive $k$-tuple $\{v_{i},\dots ,v_{i+k-1}\}$ (indices modulo $n$) forms a hyperedge in $H$.
This notion was introduced by Katona and Kierstead and has since garnered considerable attention, particularly in the study of Hamiltonicity in extremal combinatorics.
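A small sketch of the definition, assuming the hypergraph is given as a list of vertex sets (an illustrative representation): check that every window of $k$ consecutive vertices, indices modulo $n$, is a hyperedge.

```python
def is_tight_cycle(order, hyperedges, k):
    """True iff the cyclic ordering `order` is a tight cycle in the
    k-uniform hypergraph given as a list of vertex sets."""
    n = len(order)
    if len(set(order)) != n or n < k:
        return False
    edges = {frozenset(e) for e in hyperedges}
    return all(
        frozenset(order[(i + j) % n] for j in range(k)) in edges
        for i in range(n)
    )

# A 3-uniform tight cycle on 4 vertices needs all 4 consecutive triples.
H = [{'a', 'b', 'c'}, {'b', 'c', 'd'}, {'c', 'd', 'a'}, {'d', 'a', 'b'}]
assert is_tight_cycle(['a', 'b', 'c', 'd'], H, 3)
assert not is_tight_cycle(['a', 'b', 'c', 'd'], H[:3], 3)
```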
Rödl, Szemerédi, and Ruciński showed that every $n$-vertex $k$-uniform hypergraph $H$ in which every $(k-1)$-subset of vertices is contained in at least $n/2+o(n)$ hyperedges contains a Hamilton cycle.
This is an approximate hypergraph extension of Dirac's celebrated theorem about Hamilton cycles in graphs.
The maximum number of hyperedges in a (tightly) acyclic $k$-uniform hypergraph remains unknown. The best known bounds, obtained by Sudakov and Tomon, show that every $n$-vertex $k$-uniform hypergraph with at least $n^{k-1+o(1)}$ hyperedges must contain a tight cycle. This bound is optimal up to the $o(1)$ error term.
An $\ell$-cycle generalizes the notion of a tight cycle.
It consists of a sequence of vertices $v_{1},\dots ,v_{n}$ and hyperedges $e_{1},\dots ,e_{t}$, where each $e_{i}$ consists of $k$ consecutive vertices in the sequence and $|e_{i}\cap e_{i+1}|=\ell$ for every $1\leq i\leq t$. Since every edge of the $\ell$-cycle contains exactly $k-\ell$ vertices that are not contained in the previous edge, $n$ must be divisible by $k-\ell$. Note that $\ell =k-1$ recovers the definition of a tight cycle.
=== α-acyclicity ===
The definition of Berge-acyclicity might seem very restrictive: for instance, if a hypergraph has some pair $v\neq v'$ of vertices and some pair $f\neq f'$ of hyperedges such that $v,v'\in f$ and $v,v'\in f'$, then it is Berge-cyclic.
We can define a weaker notion of hypergraph acyclicity, later termed α-acyclicity. This notion of acyclicity is equivalent to the hypergraph being conformal (every clique of the primal graph is covered by some hyperedge) and its primal graph being chordal; it is also equivalent to reducibility to the empty graph through the GYO algorithm (also known as Graham's algorithm), a confluent iterative process which removes hyperedges using a generalized definition of ears. In the domain of database theory, it is known that a database schema enjoys certain desirable properties if its underlying hypergraph is α-acyclic. Besides, α-acyclicity is also related to the expressiveness of the guarded fragment of first-order logic.
We can test in linear time if a hypergraph is α-acyclic.
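The GYO reduction mentioned above can be sketched as follows. This straightforward version is not the linear-time implementation, and the input representation (a list of vertex sets) is an illustrative assumption: repeatedly delete vertices that occur in only one remaining hyperedge and hyperedges contained in another hyperedge; the hypergraph is α-acyclic iff everything gets deleted.

```python
from collections import Counter

def alpha_acyclic(hyperedges):
    """GYO (Graham) reduction sketch; quadratic, not linear-time."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed and edges:
        changed = False
        # Step 1: drop vertices contained in exactly one remaining edge.
        count = Counter(v for e in edges for v in e)
        for e in edges:
            lone = {v for v in e if count[v] == 1}
            if lone:
                e -= lone
                changed = True
        # Step 2: drop empty edges and edges contained in another edge.
        kept = []
        for i, e in enumerate(edges):
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

# A triangle is not α-acyclic; covering it with a 3-vertex edge makes it so.
assert not alpha_acyclic([{'a', 'b'}, {'b', 'c'}, {'c', 'a'}])
assert alpha_acyclic([{'a', 'b'}, {'b', 'c'}, {'c', 'a'}, {'a', 'b', 'c'}])
```

The second assertion illustrates the counter-intuitive behavior discussed next: adding a hyperedge can make a hypergraph α-acyclic.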
Note that α-acyclicity has the counter-intuitive property that adding hyperedges to an α-cyclic hypergraph may make it α-acyclic (for instance, adding a hyperedge containing all vertices of the hypergraph will always make it α-acyclic). Motivated in part by this perceived shortcoming, Ronald Fagin defined the stronger notions of β-acyclicity and γ-acyclicity. We can state β-acyclicity as the requirement that all subhypergraphs of the hypergraph are α-acyclic, which is equivalent to an earlier definition by Graham. The notion of γ-acyclicity is a more restrictive condition which is equivalent to several desirable properties of database schemas and is related to Bachman diagrams. Both β-acyclicity and γ-acyclicity can be tested in polynomial time.
These four notions of acyclicity are comparable: γ-acyclicity implies β-acyclicity, which implies α-acyclicity; moreover, Berge-acyclicity implies all of them. None of the reverse implications holds, including the one from Berge-acyclicity. In other words, these four notions are pairwise different.
== Isomorphism, symmetry, and equality ==
A hypergraph homomorphism is a map from the vertex set of one hypergraph to the vertex set of another such that each edge maps to an edge of the second hypergraph.
A hypergraph $H=(X,E)$ is isomorphic to a hypergraph $G=(Y,F)$, written as $H\simeq G$, if there exists a bijection $\phi :X\to Y$ and a permutation $\pi$ of the index set $I$ such that $\phi (e_{i})=f_{\pi (i)}$.
The bijection $\phi$ is then called the isomorphism of the graphs. Note that $H\simeq G$ if and only if $H^{*}\simeq G^{*}$.
When the edges of a hypergraph are explicitly labeled, one has the additional notion of strong isomorphism. One says that
$H$ is strongly isomorphic to $G$ if the permutation is the identity. One then writes $H\cong G$. Note that all strongly isomorphic graphs are isomorphic, but not vice versa.
When the vertices of a hypergraph are explicitly labeled, one has the notions of equivalence, and also of equality. One says that
$H$ is equivalent to $G$, and writes $H\equiv G$, if the isomorphism $\phi$ has $\phi (x_{n})=y_{n}$ and $\phi (e_{i})=f_{\pi (i)}$. Note that $H\equiv G$ if and only if $H^{*}\cong G^{*}$.
If, in addition, the permutation $\pi$ is the identity, one says that $H$ equals $G$, and writes $H=G$. Note that, with this definition of equality, graphs are self-dual: $\left(H^{*}\right)^{*}=H$.
A hypergraph automorphism is an isomorphism from a vertex set into itself, that is a relabeling of vertices. The set of automorphisms of a hypergraph H (= (X, E)) is a group under composition, called the automorphism group of the hypergraph and written Aut(H).
=== Examples ===
Consider the hypergraph $H$ with edges
$$H=\{e_{1}=\{a,b\},\ e_{2}=\{b,c\},\ e_{3}=\{c,d\},\ e_{4}=\{d,a\},\ e_{5}=\{b,d\},\ e_{6}=\{a,c\}\}$$
and
$$G=\{f_{1}=\{\alpha ,\beta \},\ f_{2}=\{\beta ,\gamma \},\ f_{3}=\{\gamma ,\delta \},\ f_{4}=\{\delta ,\alpha \},\ f_{5}=\{\alpha ,\gamma \},\ f_{6}=\{\beta ,\delta \}\}$$
Then clearly $H$ and $G$ are isomorphic (with $\phi (a)=\alpha$, etc.), but they are not strongly isomorphic. For example, in $H$, vertex $a$ meets edges 1, 4 and 6, so that $e_{1}\cap e_{4}\cap e_{6}=\{a\}$, while in $G$ there does not exist any vertex that meets edges 1, 4 and 6: $f_{1}\cap f_{4}\cap f_{6}=\varnothing$.
In this example, $H$ and $G$ are equivalent, $H\equiv G$, and the duals are strongly isomorphic: $H^{*}\cong G^{*}$.
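The failure of strong isomorphism in this example can be checked mechanically; the dictionaries below simply restate the edge sets of $H$ and $G$ above.

```python
# Edge sets of the example, keyed by edge index.
H = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd'},
     4: {'d', 'a'}, 5: {'b', 'd'}, 6: {'a', 'c'}}
G = {1: {'α', 'β'}, 2: {'β', 'γ'}, 3: {'γ', 'δ'},
     4: {'δ', 'α'}, 5: {'α', 'γ'}, 6: {'β', 'δ'}}

# In H the edges 1, 4, 6 share the vertex a ...
assert H[1] & H[4] & H[6] == {'a'}
# ... but in G the correspondingly numbered edges share no vertex,
# so no vertex bijection can match edges with equal indices.
assert G[1] & G[4] & G[6] == set()
```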
=== Symmetry ===
The rank $r(H)$ of a hypergraph $H$ is the maximum cardinality of any of the edges in the hypergraph. If all edges have the same cardinality k, the hypergraph is said to be uniform or k-uniform, or is called a k-hypergraph. A graph is just a 2-uniform hypergraph.
The degree d(v) of a vertex v is the number of edges that contain it. H is k-regular if every vertex has degree k.
The dual of a uniform hypergraph is regular and vice versa.
Two vertices x and y of H are called symmetric if there exists an automorphism such that $\phi (x)=y$. Two edges $e_{i}$ and $e_{j}$ are said to be symmetric if there exists an automorphism such that $\phi (e_{i})=e_{j}$.
A hypergraph is said to be vertex-transitive (or vertex-symmetric) if all of its vertices are symmetric. Similarly, a hypergraph is edge-transitive if all edges are symmetric. If a hypergraph is both edge- and vertex-symmetric, then the hypergraph is simply transitive.
Because of hypergraph duality, the study of edge-transitivity is identical to the study of vertex-transitivity.
== Partitions ==
A partition theorem due to E. Dauber states that, for an edge-transitive hypergraph $H=(X,E)$, there exists a partition $(X_{1},X_{2},\cdots ,X_{K})$ of the vertex set $X$ such that the subhypergraph $H_{X_{k}}$ generated by $X_{k}$ is transitive for each $1\leq k\leq K$, and such that
$$\sum _{k=1}^{K}r\left(H_{X_{k}}\right)=r(H)$$
where $r(H)$ is the rank of $H$.
As a corollary, an edge-transitive hypergraph that is not vertex-transitive is bicolorable.
Graph partitioning (and in particular, hypergraph partitioning) has many applications to IC design and parallel computing. Efficient and scalable hypergraph partitioning algorithms are also important for processing large scale hypergraphs in machine learning tasks.
== Further generalizations ==
One possible generalization of a hypergraph is to allow edges to point at other edges. There are two variations of this generalization. In one, the edges consist not only of a set of vertices, but may also contain subsets of vertices, subsets of subsets of vertices and so on ad infinitum. In essence, every edge is just an internal node of a tree or directed acyclic graph, and vertices are the leaf nodes. A hypergraph is then just a collection of trees with common, shared nodes (that is, a given internal node or leaf may occur in several different trees). Conversely, every collection of trees can be understood as this generalized hypergraph. Since trees are widely used throughout computer science and many other branches of mathematics, one could say that hypergraphs appear naturally as well. So, for example, this generalization arises naturally as a model of term algebra; edges correspond to terms and vertices correspond to constants or variables.
For such a hypergraph, set membership then provides an ordering, but the ordering is neither a partial order nor a preorder, since it is not transitive. The graph corresponding to the Levi graph of this generalization is a directed acyclic graph. Consider, for example, the generalized hypergraph whose vertex set is
$V=\{a,b\}$ and whose edges are $e_{1}=\{a,b\}$ and $e_{2}=\{a,e_{1}\}$. Then, although $b\in e_{1}$ and $e_{1}\in e_{2}$, it is not true that $b\in e_{2}$. However, the transitive closure of set membership for such hypergraphs does induce a partial order, and "flattens" the hypergraph into a partially ordered set.
Alternately, edges can be allowed to point at other edges, irrespective of the requirement that the edges be ordered as directed, acyclic graphs. This allows graphs with edge-loops, which need not contain vertices at all. For example, consider the generalized hypergraph consisting of two edges
$e_{1}$ and $e_{2}$, and zero vertices, so that $e_{1}=\{e_{2}\}$ and $e_{2}=\{e_{1}\}$. As this loop is infinitely recursive, the sets that are the edges violate the axiom of foundation. In particular, there is no transitive closure of set membership for such hypergraphs. Although such structures may seem strange at first, they can be readily understood by noting that the equivalent generalization of their Levi graph is no longer bipartite, but is rather just some general directed graph.
The generalized incidence matrix for such hypergraphs is, by definition, a square matrix whose dimension equals the total number of vertices plus edges. Thus, for the above example, the incidence matrix is simply
$$\left[{\begin{matrix}0&1\\1&0\end{matrix}}\right].$$
== See also ==
== Notes ==
== References ==
Berge, Claude (1984). Hypergraphs: Combinatorics of Finite Sets. Elsevier. ISBN 978-0-08-088023-5.
Berge, C.; Ray-Chaudhuri, D. (2006). Hypergraph Seminar: Ohio State University, 1972. Lecture Notes in Mathematics. Vol. 411. Springer. ISBN 978-3-540-37803-7.
"Hypergraph", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Bretto, Alain (2013). Hypergraph Theory: An Introduction. Springer. ISBN 978-3-319-00080-0.
Voloshin, Vitaly I. (2002). Coloring Mixed Hypergraphs: Theory, Algorithms and Applications: Theory, Algorithms, and Applications. Fields Institute Monographs. Vol. 17. American Mathematical Society. ISBN 978-0-8218-2812-0.
Voloshin, Vitaly I. (2009). Introduction to Graph and Hypergraph Theory. Nova Science. ISBN 978-1-61470-112-5.
This article incorporates material from hypergraph on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== External links ==
PAOHVis: open-source PAOHVis system for visualizing dynamic hypergraphs. | Wikipedia/Hypergraph |
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics.
A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph, or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines); in a directed graph they are also sometimes called arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references.
A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.).
== Operations ==
The basic operations provided by a graph data structure G usually include:
adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y;
neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y;
add_vertex(G, x): adds the vertex x, if it is not there;
remove_vertex(G, x): removes the vertex x, if it is there;
add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there;
remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there;
get_vertex_value(G, x): returns the value associated with the vertex x;
set_vertex_value(G, x, v): sets the value associated with the vertex x to v.
Structures that associate values to the edges usually also provide:
get_edge_value(G, x, y): returns the value associated with the edge (x, y);
set_edge_value(G, x, y, v): sets the value associated with the edge (x, y) to v.
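The operations above can be sketched with a minimal adjacency-list class; the class name, the dict-of-dicts layout, and the choice of directed edges are illustrative assumptions, not a standard API.

```python
class Graph:
    def __init__(self):
        self._adj = {}   # vertex -> {neighbor: edge value}
        self._vval = {}  # vertex -> vertex value

    def add_vertex(self, x):
        self._adj.setdefault(x, {})

    def remove_vertex(self, x):
        self._adj.pop(x, None)
        self._vval.pop(x, None)
        for nbrs in self._adj.values():
            nbrs.pop(x, None)

    def add_edge(self, x, y, z=None):
        self.add_vertex(x)
        self.add_vertex(y)
        self._adj[x][y] = z  # directed; add the reverse pair for undirected

    def remove_edge(self, x, y):
        self._adj.get(x, {}).pop(y, None)

    def adjacent(self, x, y):
        return y in self._adj.get(x, {})

    def neighbors(self, x):
        return list(self._adj.get(x, {}))

    def get_edge_value(self, x, y):
        return self._adj[x][y]

    def set_edge_value(self, x, y, v):
        self._adj[x][y] = v

    def get_vertex_value(self, x):
        return self._vval.get(x)

    def set_vertex_value(self, x, v):
        self._vval[x] = v

g = Graph()
g.add_edge('a', 'b', 5)
assert g.adjacent('a', 'b') and not g.adjacent('b', 'a')
assert g.get_edge_value('a', 'b') == 5
```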
== Common data structures for graph representation ==
Adjacency list
Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices. This data structure allows the storage of additional data on the vertices. Additional data can be stored if edges are also stored as objects, in which case each vertex stores its incident edges and each edge stores its incident vertices.
Adjacency matrix
A two-dimensional matrix, in which the rows represent source vertices and columns represent destination vertices. Data on edges and vertices must be stored externally. Only the cost for one edge can be stored between each pair of vertices.
Incidence matrix
A two-dimensional matrix, in which the rows represent the vertices and columns represent the edges. The entries indicate the incidence relation between the vertex at a row and edge at a column.
The following table gives the time complexity cost of performing various operations on graphs, for each of these representations, with |V| the number of vertices and |E| the number of edges. In the matrix representations, the entries encode the cost of following an edge. The cost of edges that are not present are assumed to be ∞.
Adjacency lists are generally preferred for the representation of sparse graphs, while an adjacency matrix is preferred if the graph is dense; that is, when the number of edges $|E|$ is close to the number of vertices squared, $|V|^{2}$, or if one must be able to quickly look up whether there is an edge connecting two vertices.
=== More efficient representation of adjacency sets ===
The time complexity of operations in the adjacency list representation can be improved by storing the sets of adjacent vertices in more efficient data structures, such as hash tables or balanced binary search trees (the latter representation requires that vertices are identified by elements of a linearly ordered set, such as integers or character strings). A representation of adjacent vertices via hash tables leads to an amortized average time complexity of $O(1)$ to test adjacency of two given vertices and to remove an edge, and an amortized average time complexity of $O(\deg(x))$ to remove a given vertex x of degree $\deg(x)$. The time complexity of the other operations and the asymptotic space requirement do not change.
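A sketch of this representation with Python sets standing in for hash tables (the function names are illustrative): adjacency tests and edge deletions are expected O(1), and deleting a vertex x costs one O(1) update per incident edge, i.e. O(deg(x)).

```python
adj = {}  # vertex -> set of neighbors (undirected)

def add_edge(x, y):
    adj.setdefault(x, set()).add(y)
    adj.setdefault(y, set()).add(x)

def remove_vertex(x):
    for y in adj.pop(x, set()):
        adj[y].discard(x)  # one O(1) update per incident edge

add_edge(1, 2)
add_edge(1, 3)
add_edge(2, 3)
remove_vertex(1)
assert adj == {2: {3}, 3: {2}}
```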
== Parallel representations ==
The parallelization of graph problems faces significant challenges: Data-driven computations, unstructured problems, poor locality and high data access to computation ratio. The graph representation used for parallel architectures plays a significant role in facing those challenges. Poorly chosen representations may unnecessarily drive up the communication cost of the algorithm, which will decrease its scalability. In the following, shared and distributed memory architectures are considered.
=== Shared memory ===
In the case of a shared memory model, the graph representations used for parallel processing are the same as in the sequential case, since parallel read-only access to the graph representation (e.g. an adjacency list) is efficient in shared memory.
=== Distributed memory ===
In the distributed memory model, the usual approach is to partition the vertex set $V$ of the graph into $p$ sets $V_{0},\dots ,V_{p-1}$, where $p$ is the number of available processing elements (PEs). Each vertex-set partition is then distributed to the PE with matching index, together with the corresponding edges. Every PE has its own subgraph representation, where edges with an endpoint in another partition require special attention. For standard communication interfaces like MPI, the ID of the PE owning the other endpoint has to be identifiable. During computation in a distributed graph algorithm, passing information along these edges implies communication.
Partitioning the graph must be done carefully: there is a trade-off between low communication volume and evenly sized partitions. But graph partitioning is an NP-hard problem, so it is not feasible to compute optimal partitions. Instead, the following heuristics are used.
1D partitioning: Every processor gets $n/p$ vertices and the corresponding outgoing edges. This can be understood as a row-wise or column-wise decomposition of the adjacency matrix. For algorithms operating on this representation, this requires an all-to-all communication step as well as $\mathcal {O}(m)$ message buffer sizes, as each PE potentially has outgoing edges to every other PE.
2D partitioning: Every processor gets a submatrix of the adjacency matrix. Assume the processors are aligned in a rectangle $p=p_{r}\times p_{c}$, where $p_{r}$ and $p_{c}$ are the number of processing elements in each row and column, respectively. Then each processor gets a submatrix of the adjacency matrix of dimension $(n/p_{r})\times (n/p_{c})$. This can be visualized as a checkerboard pattern in a matrix. Therefore, each processing unit can only have outgoing edges to PEs in the same row and column. This bounds the number of communication partners for each PE to $p_{r}+p_{c}-1$ out of $p=p_{r}\times p_{c}$ possible ones.
== Compressed representations ==
Graphs with trillions of edges occur in machine learning, social network analysis, and other areas. Compressed graph representations have been developed to reduce I/O and memory requirements. General techniques such as Huffman coding are applicable, but the adjacency list or adjacency matrix can be processed in specific ways to increase efficiency.
== Graph traversal ==
=== Breadth first search and depth first search ===
Breadth-first search (BFS) and depth-first search (DFS) are two closely-related approaches that are used for exploring all of the nodes in a given connected component. Both start with an arbitrary node, the "root".
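Both traversals can be sketched over an adjacency-list graph; they visit exactly the connected component of the root and differ only in the order the frontier is processed (the concrete visit orders below depend on the assumed neighbor ordering).

```python
from collections import deque

def bfs(adj, root):
    seen, order, queue = {root}, [], deque([root])
    while queue:
        v = queue.popleft()  # FIFO: explore nearest vertices first
        order.append(v)
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def dfs(adj, root):
    seen, order, stack = set(), [], [root]
    while stack:
        v = stack.pop()  # LIFO: follow one branch as deep as possible
        if v in seen:
            continue
        seen.add(v)
        order.append(v)
        stack.extend(adj.get(v, []))
    return order

adj = {1: [2, 3], 2: [4], 3: [], 4: []}
assert bfs(adj, 1) == [1, 2, 3, 4]
assert dfs(adj, 1) == [1, 3, 2, 4]
```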
== See also ==
Graph traversal for more information on graph walking strategies
Graph database for graph (data structure) persistency
Graph rewriting for rule based transformations of graphs (graph data structures)
Graph drawing software for software, systems, and providers of systems for drawing graphs
== References ==
== External links ==
Boost Graph Library: a powerful C++ graph library s.a. Boost (C++ libraries)
Networkx: a Python graph library
GraphMatcher a java program to align directed/undirected graphs.
GraphBLAS A specification for a library interface for operations on graphs, with a particular focus on sparse graphs. | Wikipedia/Graph_(abstract_data_type) |
In graph theory, a branch of mathematics, the disjoint union of graphs is an operation that combines two or more graphs to form a larger graph.
It is analogous to the disjoint union of sets and is constructed by making the vertex set of the result be the disjoint union of the vertex sets of the given graphs and by making the edge set of the result be the disjoint union of the edge sets of the given graphs. Any disjoint union of two or more nonempty graphs is necessarily disconnected.
== Notation ==
The disjoint union is also called the graph sum, and may be represented either by a plus sign or a circled plus sign: if $G$ and $H$ are two graphs, then $G+H$ or $G\oplus H$ denotes their disjoint union.
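The construction can be sketched by tagging each vertex with 0 or 1 to force the two vertex sets apart; the pair-based graph representation is an illustrative assumption.

```python
def disjoint_union(g, h):
    """Disjoint union of graphs given as (vertex set, edge set) pairs."""
    (vg, eg), (vh, eh) = g, h
    verts = {(0, v) for v in vg} | {(1, v) for v in vh}
    edges = {((0, u), (0, v)) for (u, v) in eg} | \
            {((1, u), (1, v)) for (u, v) in eh}
    return verts, edges

g = ({'a', 'b'}, {('a', 'b')})
h = ({'a'}, set())  # vertex names may clash; the tagging resolves it
verts, edges = disjoint_union(g, h)
assert len(verts) == 3 and len(edges) == 1
```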
== Related graph classes ==
Certain special classes of graphs may be represented using disjoint union operations. In particular:
The forests are the disjoint unions of trees.
The cluster graphs are the disjoint unions of complete graphs.
The 2-regular graphs are the disjoint unions of cycle graphs.
More generally, every graph is the disjoint union of connected graphs, its connected components.
The cographs are the graphs that can be constructed from single-vertex graphs by a combination of disjoint union and complement operations.
== References == | Wikipedia/Disjoint_union_of_graphs |
In graph theory, a mixed graph G = (V, E, A) is a graph consisting of a set of vertices V, a set of (undirected) edges E, and a set of directed edges (or arcs) A.
== Definitions and notation ==
Consider adjacent vertices $u,v\in V$. A directed edge, called an arc, is an edge with an orientation and can be denoted as ${\overrightarrow {uv}}$ or $(u,v)$ (note that $u$ is the tail and $v$ is the head of the arc). Also, an undirected edge, or edge, is an edge with no orientation and can be denoted as $uv$ or $[u,v]$.
For the purposes of the examples below, we do not consider loops or multiple edges in mixed graphs.
A walk in a mixed graph is a sequence $v_{0},c_{1},v_{1},c_{2},v_{2},\dots ,c_{k},v_{k}$ of vertices and edges/arcs such that for every index $i$, either $c_{i}=v_{i}v_{i+1}$ is an edge of the graph or $c_{i}={\overrightarrow {v_{i}v_{i+1}}}$ is an arc of the graph. This walk is a path if it does not repeat any edges, arcs, or vertices, except possibly the first and last vertices. A walk is closed if its first and last vertices are the same, and a closed path is a cycle. A mixed graph is acyclic if it does not contain a cycle.
== Coloring ==
Mixed graph coloring can be thought of as labeling or an assignment of k different colors (where k is a positive integer) to the vertices of a mixed graph. Different colors must be assigned to vertices that are connected by an edge. The colors may be represented by the numbers from 1 to k, and for a directed arc, the tail of the arc must be colored by a smaller number than the head of the arc.
=== Example ===
For example, consider the figure to the right. Our available k-colors to color our mixed graph are {1, 2, 3}. Since u and v are connected by an edge, they must receive different colors or labelings (u and v are labelled 1 and 2, respectively). We also have an arc from v to w. Since orientation assigns an ordering, we must label the tail (v) with a smaller color (or integer from our set) than the head (w) of our arc.
=== Strong and weak coloring ===
A (strong) proper k-coloring of a mixed graph is a function c : V → [k], where [k] := {1, 2, …, k}, such that c(u) ≠ c(v) if uv ∈ E and c(u) < c(v) if ${\overrightarrow {uv}}\in A$.
A weaker condition on the arcs can be applied, and we can consider a weak proper k-coloring of a mixed graph to be a function c : V → [k], where [k] := {1, 2, …, k}, such that c(u) ≠ c(v) if uv ∈ E and c(u) ≤ c(v) if ${\overrightarrow {uv}}\in A$.
Referring back to our example, this means that we can label both the head and tail of (v,w) with the positive integer 2.
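The two conditions can be checked on the running example (one edge uv, one arc from v to w); the tuple-based representation of E and A is an illustrative assumption.

```python
def is_proper(c, E, A, strong=True):
    """Check a (strong or weak) proper coloring of the mixed graph (V, E, A)."""
    if any(c[u] == c[v] for (u, v) in E):
        return False  # edge endpoints must get different colors
    if strong:
        return all(c[u] < c[v] for (u, v) in A)   # tail strictly smaller
    return all(c[u] <= c[v] for (u, v) in A)       # weak: ties allowed

E = {('u', 'v')}
A = {('v', 'w')}
assert is_proper({'u': 1, 'v': 2, 'w': 3}, E, A, strong=True)
# Coloring both ends of the arc (v, w) with 2 is weakly but not strongly proper.
assert is_proper({'u': 1, 'v': 2, 'w': 2}, E, A, strong=False)
assert not is_proper({'u': 1, 'v': 2, 'w': 2}, E, A, strong=True)
```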
=== Counting ===
A coloring may or may not exist for a mixed graph. In order for a mixed graph to have a k-coloring, the graph cannot contain any directed cycles. If such a k-coloring exists, then we refer to the smallest k needed in order to properly color our graph as the chromatic number, denoted by χ(G). The number of proper k-colorings is a polynomial function of k called the chromatic polynomial of our graph G (by analogy with the chromatic polynomial of undirected graphs) and can be denoted by χG(k).
=== Computing weak chromatic polynomials ===
The deletion–contraction method can be used to compute weak chromatic polynomials of mixed graphs. This method involves deleting (i.e., removing) an edge or arc and possibly joining the remaining vertices incident to that edge or arc to form one vertex. After deleting an edge e from a mixed graph G = (V, E, A) we obtain the mixed graph (V, E – e, A). We denote this deletion of the edge e by G – e. Similarly, by deleting an arc a from a mixed graph, we obtain (V, E, A – a) where we denote the deletion of a by G – a. Also, we denote the contraction of e and a by G/e and G/a, respectively. From Propositions given in Beck et al. we obtain the following equations to compute the chromatic polynomial of a mixed graph:
$$\chi _{G}(k)=\chi _{G-e}(k)-\chi _{G/e}(k),$$
$$\chi _{G}(k)=\chi _{G-a}(k)+\chi _{G/a}(k)-\chi _{G_{a}}(k).$$
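The recurrences above are easiest to sanity-check against direct counts, so here is a brute-force counter of proper k-colorings for small mixed graphs (illustrative only; it enumerates all k^|V| colorings).

```python
from itertools import product

def count_colorings(vertices, E, A, k, weak=False):
    """Number of (strong or weak) proper k-colorings of the mixed graph."""
    total = 0
    for colors in product(range(1, k + 1), repeat=len(vertices)):
        c = dict(zip(vertices, colors))
        if any(c[u] == c[v] for (u, v) in E):
            continue
        total += all(c[u] <= c[v] if weak else c[u] < c[v]
                     for (u, v) in A)
    return total

# One arc u -> v: strong proper colorings are the pairs with c(u) < c(v),
# i.e. k(k-1)/2 of them; for k = 4 that is 6.
assert count_colorings(['u', 'v'], set(), {('u', 'v')}, 4) == 6
```

Evaluating such counts at several values of k pins down the chromatic polynomial of a small graph, since χ_G(k) is a polynomial in k.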
== Applications ==
=== Scheduling problem ===
Mixed graphs may be used to model job shop scheduling problems in which a collection of tasks is to be performed, subject to certain timing constraints. In this sort of problem, undirected edges may be used to model a constraint that two tasks are incompatible (they cannot be performed simultaneously). Directed edges may be used to model precedence constraints, in which one task must be performed before another. A graph defined in this way from a scheduling problem is called a disjunctive graph. The mixed graph coloring problem can be used to find a schedule of minimum length for performing all the tasks.
=== Bayesian inference ===
Mixed graphs are also used as graphical models for Bayesian inference. In this context, an acyclic mixed graph (one with no cycles of directed edges) is also called a chain graph. The directed edges of these graphs are used to indicate a causal connection between two events, in which the outcome of the first event influences the probability of the second event. Undirected edges, instead, indicate a non-causal correlation between two events. A connected component of the undirected subgraph of a chain graph is called a chain. A chain graph may be transformed into an undirected graph by constructing its moral graph, an undirected graph formed from the chain graph by adding undirected edges between pairs of vertices that have outgoing edges to the same chain, and then forgetting the orientations of the directed edges.
== Notes ==
== References ==
Beck, M.; Blado, D.; Crawford, J.; Jean-Louis, T.; Young, M. (2013), "On weak chromatic polynomials of mixed graphs", Graphs and Combinatorics, 31: 91–98, arXiv:1210.4634, doi:10.1007/s00373-013-1381-1.
Cowell, Robert G.; Dawid, A. Philip; Lauritzen, Steffen L.; Spiegelhalter, David J. (1999), Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks, Springer-Verlag New York, p. 27, doi:10.1007/0-387-22630-3 (inactive 1 November 2024), ISBN 0-387-98767-3{{citation}}: CS1 maint: DOI inactive as of November 2024 (link)
Hansen, Pierre; Kuplinsky, Julio; de Werra, Dominique (1997), "Mixed graph colorings", Mathematical Methods of Operations Research, 45 (1): 145–160, doi:10.1007/BF01194253, MR 1435900.
Ries, B. (2007), "Coloring some classes of mixed graphs", Discrete Applied Mathematics, 155 (1): 1–6, doi:10.1016/j.dam.2006.05.004, MR 2281351.
== External links ==
Weisstein, Eric W. "Mixed Graph". MathWorld.
Graph may refer to:
== Mathematics ==
Graph (discrete mathematics), a structure made of vertices and edges
Graph theory, the study of such graphs and their properties
Graph (topology), a topological space resembling a graph in the sense of discrete mathematics
Graph of a function
Graph of a relation
Graph paper
Chart, a means of representing data (also called a graph)
== Computing ==
Graph (abstract data type), an abstract data type representing relations or connections
graph (Unix), Unix command-line utility
Conceptual graph, a model for knowledge representation and reasoning
Microsoft Graph, a Microsoft API developer platform that connects multiple services and devices
== Other uses ==
HMS Graph, a submarine of the UK Royal Navy
== See also ==
Complex network
Graf
Graff (disambiguation)
Graph database
Grapheme, in linguistics
Graphemics
Graphic (disambiguation)
-graphy (suffix from the Greek for "describe," "write" or "draw")
List of information graphics software
Statistical graphics
In the mathematical area of graph theory, a chordal graph is one in which all cycles of four or more vertices have a chord, which is an edge that is not part of the cycle but connects two vertices of the cycle. Equivalently, every induced cycle in the graph should have exactly three vertices. The chordal graphs may also be characterized as the graphs that have perfect elimination orderings, as the graphs in which each minimal separator is a clique, and as the intersection graphs of subtrees of a tree. They are sometimes also called rigid circuit graphs or triangulated graphs: a chordal completion of a graph is typically called a triangulation of that graph.
Chordal graphs are a subset of the perfect graphs. They may be recognized in linear time, and several problems that are hard on other classes of graphs such as graph coloring may be solved in polynomial time when the input is chordal. The treewidth of an arbitrary graph may be characterized by the size of the cliques in the chordal graphs that contain it.
== Perfect elimination and efficient recognition ==
A perfect elimination ordering in a graph is an ordering of the vertices of the graph such that, for each vertex v, v and the neighbors of v that occur after v in the order form a clique. A graph is chordal if and only if it has a perfect elimination ordering.
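The definition can be checked directly. The following sketch (illustrative Python, with the graph given as a dict mapping each vertex to its set of neighbors) tests whether an ordering is a perfect elimination ordering:

```python
def is_perfect_elimination_ordering(order, adj):
    # For each vertex v, the neighbors of v that occur after v in the
    # order must be pairwise adjacent (i.e. form a clique with v).
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [w for w in adj[v] if pos[w] > pos[v]]
        for i, a in enumerate(later):
            for b in later[i + 1:]:
                if b not in adj[a]:
                    return False
    return True
```

For example, a 4-cycle with a chord admits such an ordering, while the chordless 4-cycle does not (no ordering works, since each vertex has two later or symmetric non-adjacent neighbors in any ordering).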
Rose, Lueker & Tarjan (1976) (see also Habib et al. 2000) show that a perfect elimination ordering of a chordal graph may be found efficiently using an algorithm known as lexicographic breadth-first search. This algorithm maintains a partition of the vertices of the graph into a sequence of sets; initially this sequence consists of a single set with all vertices. The algorithm repeatedly chooses a vertex v from the earliest set in the sequence that contains previously unchosen vertices, and splits each set S of the sequence into two smaller subsets, the first consisting of the neighbors of v in S and the second consisting of the non-neighbors. When this splitting process has been performed for all vertices, the sequence of sets has one vertex per set, in the reverse of a perfect elimination ordering.
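The partition-refinement process just described can be sketched as follows (illustrative Python; this naive version takes quadratic time, whereas the algorithm of Rose, Lueker & Tarjan achieves linear time with more careful bookkeeping):

```python
def lex_bfs(adj):
    # Lexicographic BFS by partition refinement. Returns the visit
    # order; when the graph is chordal, its reverse is a perfect
    # elimination ordering.
    sequence = [list(adj)]          # one block holding all vertices
    order = []
    while sequence:
        block = sequence[0]
        v = block.pop(0)            # earliest block, next unchosen vertex
        if not block:
            sequence.pop(0)
        order.append(v)
        new_sequence = []
        for s in sequence:
            # Split each block: neighbors of v first, non-neighbors second.
            nbrs = [w for w in s if w in adj[v]]
            rest = [w for w in s if w not in adj[v]]
            if nbrs:
                new_sequence.append(nbrs)
            if rest:
                new_sequence.append(rest)
        sequence = new_sequence
    return order
```

Reversing the returned order and running a perfect-elimination check yields a simple (if not linear-time) chordality test.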
Since both this lexicographic breadth first search process and the process of testing whether an ordering is a perfect elimination ordering can be performed in linear time, it is possible to recognize chordal graphs in linear time. The graph sandwich problem on chordal graphs is NP-complete whereas the probe graph problem on chordal graphs has polynomial-time complexity.
The set of all perfect elimination orderings of a chordal graph can be modeled as the basic words of an antimatroid; Chandran et al. (2003) use this connection to antimatroids as part of an algorithm for efficiently listing all perfect elimination orderings of a given chordal graph.
== Maximal cliques and graph coloring ==
Another application of perfect elimination orderings is finding a maximum clique of a chordal graph in polynomial time, while the same problem for general graphs is NP-complete. More generally, a chordal graph can have only linearly many maximal cliques, while non-chordal graphs may have exponentially many. This implies that the class of chordal graphs has few maximal cliques. To list all maximal cliques of a chordal graph, simply find a perfect elimination ordering, form a clique for each vertex v together with the neighbors of v that are later than v in the perfect elimination ordering, and test whether each of the resulting cliques is maximal.
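The listing procedure just described fits in a few lines (illustrative Python; the input is a perfect elimination ordering and a dict of neighbor sets):

```python
def maximal_cliques(order, adj):
    # One candidate clique per vertex: v together with its neighbors
    # that come later in the perfect elimination ordering. Keep only
    # candidates not strictly contained in another candidate.
    pos = {v: i for i, v in enumerate(order)}
    candidates = [frozenset({v} | {w for w in adj[v] if pos[w] > pos[v]})
                  for v in order]
    return [c for c in candidates
            if not any(c < d for d in candidates)]
```

On a 4-cycle with chord 1–3, the perfect elimination ordering 2, 4, 1, 3 yields exactly the two triangles {1, 2, 3} and {1, 3, 4}.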
The clique graphs of chordal graphs are the dually chordal graphs.
The largest maximal clique is a maximum clique, and, as chordal graphs are perfect, the size of this clique equals the chromatic number of the chordal graph. Chordal graphs are perfectly orderable: an optimal coloring may be obtained by applying a greedy coloring algorithm to the vertices in the reverse of a perfect elimination ordering.
The chromatic polynomial of a chordal graph is easy to compute. Find a perfect elimination ordering v1, v2, …, vn. Let Ni equal the number of neighbors of vi that come after vi in that ordering. For instance, Nn = 0. The chromatic polynomial equals
(x − N1)(x − N2) ⋯ (x − Nn).
(The last factor is simply x, so x divides the polynomial, as it should.) This computation depends on chordality: for general graphs, evaluating the chromatic polynomial is #P-hard.
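The product formula translates directly into code (illustrative Python; it evaluates the chromatic polynomial at a point x from a perfect elimination ordering):

```python
def chordal_chromatic_polynomial(order, adj, x):
    # Product of (x - N_i), where N_i counts the neighbors of the
    # i-th vertex that come after it in the perfect elimination order.
    pos = {v: i for i, v in enumerate(order)}
    result = 1
    for v in order:
        n_i = sum(1 for w in adj[v] if pos[w] > pos[v])
        result *= (x - n_i)
    return result
```

For the triangle the formula gives (x − 2)(x − 1)x, which at x = 3 correctly counts the 3! = 6 proper 3-colorings.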
== Minimal separators ==
In any graph, a vertex separator is a set of vertices the removal of which leaves the remaining graph disconnected; a separator is minimal if it has no proper subset that is also a separator. According to a theorem of Dirac (1961), chordal graphs are graphs in which each minimal separator is a clique; Dirac used this characterization to prove that chordal graphs are perfect.
The family of chordal graphs may be defined inductively as the graphs whose vertices can be divided into three nonempty subsets A, S, and B, such that A ∪ S and S ∪ B both form chordal induced subgraphs, S is a clique, and there are no edges from A to B. That is, they are the graphs that have a recursive decomposition by clique separators into smaller subgraphs. For this reason, chordal graphs have also sometimes been called decomposable graphs.
== Intersection graphs of subtrees ==
An alternative characterization of chordal graphs, due to Gavril (1974), involves trees and their subtrees.
From a collection of subtrees of a tree, one can define a subtree graph, which is an intersection graph that has one vertex per subtree and an edge connecting any two subtrees that overlap in one or more nodes of the tree. Gavril showed that the subtree graphs are exactly the chordal graphs.
A representation of a chordal graph as an intersection of subtrees forms a tree decomposition of the graph, with treewidth equal to one less than the size of the largest clique in the graph; the tree decomposition of any graph G can be viewed in this way as a representation of G as a subgraph of a chordal graph. The tree decomposition of a graph is also the junction tree of the junction tree algorithm.
== Relation to other graph classes ==
=== Subclasses ===
Interval graphs are the intersection graphs of subtrees of path graphs, a special case of trees. Therefore, they are a subfamily of chordal graphs.
Split graphs are graphs that are both chordal and the complements of chordal graphs. Bender, Richmond & Wormald (1985) showed that, in the limit as n goes to infinity, the fraction of n-vertex chordal graphs that are split approaches one.
Ptolemaic graphs are graphs that are both chordal and distance hereditary.
Quasi-threshold graphs are a subclass of Ptolemaic graphs that are both chordal and cographs. Block graphs are another subclass of Ptolemaic graphs in which every two maximal cliques have at most one vertex in common. Windmill graphs are a special case in which the common vertex is the same for every pair of cliques.
Strongly chordal graphs are graphs that are chordal and contain no n-sun (for n ≥ 3) as an induced subgraph. Here an n-sun is an n-vertex chordal graph G together with a collection of n degree-two vertices, adjacent to the edges of a Hamiltonian cycle in G.
K-trees are chordal graphs in which all maximal cliques and all maximal clique separators have the same size. Apollonian networks are chordal maximal planar graphs, or equivalently planar 3-trees. Maximal outerplanar graphs are a subclass of 2-trees, and therefore are also chordal.
=== Superclasses ===
Chordal graphs are a subclass of the well known perfect graphs.
Other superclasses of chordal graphs include weakly chordal graphs, cop-win graphs, odd-hole-free graphs, even-hole-free graphs, and Meyniel graphs. Chordal graphs are precisely the graphs that are both odd-hole-free and even-hole-free (see holes in graph theory).
Every chordal graph is a strangulated graph, a graph in which every peripheral cycle is a triangle, because peripheral cycles are a special case of induced cycles. Strangulated graphs are graphs that can be formed by clique-sums of chordal graphs and maximal planar graphs. Therefore, strangulated graphs include maximal planar graphs.
== Chordal completions and treewidth ==
If G is an arbitrary graph, a chordal completion of G (also called a fill-in or a triangulation) is a chordal graph that contains G as a subgraph. The minimum fill-in problem asks for a chordal completion with as few added edges as possible; its parameterized version is fixed parameter tractable, and moreover, is solvable in parameterized subexponential time.
The treewidth of G is one less than the number of vertices in a maximum clique of a chordal completion chosen to minimize this clique size.
The k-trees are the graphs to which no additional edges can be added without increasing their treewidth to a number larger than k.
Therefore, the k-trees are their own chordal completions, and form a subclass of the chordal graphs. Chordal completions can also be used to characterize several other related classes of graphs.
== Notes ==
== References ==
Agnarsson, Geir (2003), "On chordal graphs and their chromatic polynomials", Mathematica Scandinavica, 93 (2): 240–246, doi:10.7146/math.scand.a-14421, MR 2009583.
Bender, E. A.; Richmond, L. B.; Wormald, N. C. (1985), "Almost all chordal graphs split", J. Austral. Math. Soc., A, 38 (2): 214–221, doi:10.1017/S1446788700023077, MR 0770128.
Berge, Claude (1967), "Some Classes of Perfect Graphs", in Harary, Frank (ed.), Graph Theory and Theoretical Physics, Academic Press, pp. 155–165, MR 0232694.
Berry, Anne; Golumbic, Martin Charles; Lipshteyn, Marina (2007), "Recognizing chordal probe graphs and cycle-bicolorable graphs", SIAM Journal on Discrete Mathematics, 21 (3): 573–591, doi:10.1137/050637091.
Bodlaender, H. L.; Fellows, M. R.; Warnow, T. J. (1992), "Two strikes against perfect phylogeny" (PDF), Proc. of 19th International Colloquium on Automata Languages and Programming, Lecture Notes in Computer Science, vol. 623, pp. 273–283, doi:10.1007/3-540-55719-9_80, hdl:1874/16653.
Chandran, L. S.; Ibarra, L.; Ruskey, F.; Sawada, J. (2003), "Enumerating and characterizing the perfect elimination orderings of a chordal graph" (PDF), Theoretical Computer Science, 307 (2): 303–317, doi:10.1016/S0304-3975(03)00221-4.
Dirac, G. A. (1961), "On rigid circuit graphs", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 25 (1–2): 71–76, doi:10.1007/BF02992776, MR 0130190, S2CID 120608513.
Fomin, Fedor V.; Villanger, Yngve (2013), "Subexponential Parameterized Algorithm for Minimum Fill-In", SIAM J. Comput., 42 (6): 2197–2216, arXiv:1104.2230, doi:10.1137/11085390X, S2CID 934546.
Fulkerson, D. R.; Gross, O. A. (1965), "Incidence matrices and interval graphs", Pacific J. Math., 15 (3): 835–855, doi:10.2140/pjm.1965.15.835.
Gavril, Fănică (1974), "The intersection graphs of subtrees in trees are exactly the chordal graphs", Journal of Combinatorial Theory, Series B, 16: 47–56, doi:10.1016/0095-8956(74)90094-X.
Golumbic, Martin Charles (1980), Algorithmic Graph Theory and Perfect Graphs, Academic Press.
Habib, Michel; McConnell, Ross; Paul, Christophe; Viennot, Laurent (2000), "Lex-BFS and partition refinement, with applications to transitive orientation, interval graph recognition, and consecutive ones testing", Theoretical Computer Science, 234 (1–2): 59–84, doi:10.1016/S0304-3975(97)00241-7.
Kaplan, Haim; Shamir, Ron; Tarjan, Robert (1999), "Tractability of Parameterized Completion Problems on Chordal, Strongly Chordal, and Proper Interval Graphs", SIAM J. Comput., 28 (5): 1906–1922, doi:10.1137/S0097539796303044.
Maffray, Frédéric (2003), "On the coloration of perfect graphs", in Reed, Bruce A.; Sales, Cláudia L. (eds.), Recent Advances in Algorithms and Combinatorics, CMS Books in Mathematics, vol. 11, Springer-Verlag, pp. 65–84, doi:10.1007/0-387-22444-0_3, ISBN 0-387-95434-1.
Parra, Andreas; Scheffler, Petra (1997), "Characterizations and algorithmic applications of chordal graph embeddings", Discrete Applied Mathematics, 79 (1–3): 171–188, doi:10.1016/S0166-218X(97)00041-3, MR 1478250.
Patil, H. P. (1986), "On the structure of k-trees", Journal of Combinatorics, Information and System Sciences, 11 (2–4): 57–64, MR 0966069.
Rose, Donald J. (December 1970), "Triangulated graphs and the elimination process", Journal of Mathematical Analysis and Applications, 32 (3): 597–609, doi:10.1016/0022-247x(70)90282-9
Rose, D.; Lueker, George; Tarjan, Robert E. (1976), "Algorithmic aspects of vertex elimination on graphs", SIAM Journal on Computing, 5 (2): 266–283, doi:10.1137/0205021, MR 0408312.
Seymour, P. D.; Weaver, R. W. (1984), "A generalization of chordal graphs", Journal of Graph Theory, 8 (2): 241–251, doi:10.1002/jgt.3190080206, MR 0742878.
Szwarcfiter, J.L.; Bornstein, C.F. (1994), "Clique graphs of chordal and path graphs", SIAM Journal on Discrete Mathematics, 7 (2): 331–336, doi:10.1137/s0895480191223191, hdl:11422/1497.
== External links ==
Information System on Graph Class Inclusions: chordal graph
Weisstein, Eric W. "Chordal Graph". MathWorld.
In the mathematical discipline of graph theory, the line graph of an undirected graph G is another graph L(G) that represents the adjacencies between edges of G. L(G) is constructed in the following way: for each edge in G, make a vertex in L(G); for every two edges in G that have a vertex in common, make an edge between their corresponding vertices in L(G).
The name line graph comes from a paper by Harary & Norman (1960) although both Whitney (1932) and Krausz (1943) used the construction before this. Other terms used for the line graph include the covering graph, the derivative, the edge-to-vertex dual, the conjugate, the representative graph, and the θ-obrazom, as well as the edge graph, the interchange graph, the adjoint graph, and the derived graph.
Hassler Whitney (1932) proved that with one exceptional case the structure of a connected graph G can be recovered completely from its line graph. Many other properties of line graphs follow by translating the properties of the underlying graph from vertices into edges, and by Whitney's theorem the same translation can also be done in the other direction. Line graphs are claw-free, and the line graphs of bipartite graphs are perfect. Line graphs are characterized by nine forbidden subgraphs and can be recognized in linear time.
Various extensions of the concept of a line graph have been studied, including line graphs of line graphs, line graphs of multigraphs, line graphs of hypergraphs, and line graphs of weighted graphs.
== Formal definition ==
Given a graph G, its line graph L(G) is a graph such that
each vertex of L(G) represents an edge of G; and
two vertices of L(G) are adjacent if and only if their corresponding edges share a common endpoint ("are incident") in G.
That is, it is the intersection graph of the edges of G, representing each edge by the set of its two endpoints.
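The definition translates into a short construction (illustrative Python; each edge of G is represented by the frozenset of its two endpoints, so adjacency in L(G) is just set intersection):

```python
def line_graph(edges):
    # One line-graph vertex per edge of G; two are adjacent exactly
    # when the corresponding edges share an endpoint.
    verts = [frozenset(e) for e in edges]
    lg_edges = {frozenset((e, f))
                for i, e in enumerate(verts)
                for f in verts[i + 1:]
                if e & f}
    return verts, lg_edges
```

The path 1–2–3 gives a line graph with a single edge, and the triangle gives back a triangle.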
== Example ==
The following figures show a graph (left, with blue vertices) and its line graph (right, with green vertices). Each vertex of the line graph is shown labeled with the pair of endpoints of the corresponding edge in the original graph. For instance, the green vertex on the right labeled 1,3 corresponds to the edge on the left between the blue vertices 1 and 3. Green vertex 1,3 is adjacent to three other green vertices: 1,4 and 1,2 (corresponding to edges sharing the endpoint 1 in the blue graph) and 4,3 (corresponding to an edge sharing the endpoint 3 in the blue graph).
== Properties ==
=== Translated properties of the underlying graph ===
Properties of a graph G that depend only on adjacency between edges may be translated into equivalent properties in L(G) that depend on adjacency between vertices. For instance, a matching in G is a set of edges no two of which are adjacent, and corresponds to a set of vertices in L(G) no two of which are adjacent, that is, an independent set.
Thus,
The line graph of a connected graph is connected. If G is connected, it contains a path connecting any two of its edges, which translates into a path in L(G) containing any two of the vertices of L(G). However, a graph G that has some isolated vertices, and is therefore disconnected, may nevertheless have a connected line graph.
A line graph has an articulation point if and only if the underlying graph has a bridge for which neither endpoint has degree one.
For a graph G with n vertices and m edges, the number of vertices of the line graph L(G) is m, and the number of edges of L(G) is half the sum of the squares of the degrees of the vertices in G, minus m.
An independent set in L(G) corresponds to a matching in G. In particular, a maximum independent set in L(G) corresponds to maximum matching in G. Since maximum matchings may be found in polynomial time, so may the maximum independent sets of line graphs, despite the hardness of the maximum independent set problem for more general families of graphs. Similarly, a rainbow-independent set in L(G) corresponds to a rainbow matching in G.
The edge chromatic number of a graph G is equal to the vertex chromatic number of its line graph L(G).
The line graph of an edge-transitive graph is vertex-transitive. This property can be used to generate families of graphs that (like the Petersen graph) are vertex-transitive but are not Cayley graphs: if G is an edge-transitive graph that has at least five vertices, is not bipartite, and has odd vertex degrees, then L(G) is a vertex-transitive non-Cayley graph.
If a graph G has an Euler cycle, that is, if G is connected and has an even number of edges at each vertex, then the line graph of G is Hamiltonian. However, not all Hamiltonian cycles in line graphs come from Euler cycles in this way; for instance, the line graph of a Hamiltonian graph G is itself Hamiltonian, regardless of whether G is also Eulerian.
If two simple graphs are isomorphic then their line graphs are also isomorphic. The Whitney graph isomorphism theorem provides a converse to this for all but one pair of connected graphs.
In the context of complex network theory, the line graph of a random network preserves many of the properties of the network such as the small-world property (the existence of short paths between all pairs of vertices) and the shape of its degree distribution. Evans & Lambiotte (2009) observe that any method for finding vertex clusters in a complex network can be applied to the line graph and used to cluster its edges instead.
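Among the translated properties above, the vertex and edge counts of L(G) follow directly from the degree sequence of G; a small illustrative sketch (names are ours):

```python
def line_graph_counts(adj):
    # L(G) has m vertices (one per edge of G), and its number of
    # edges is half the sum of squared degrees of G, minus m.
    degrees = [len(nbrs) for nbrs in adj.values()]
    m = sum(degrees) // 2
    return m, sum(d * d for d in degrees) // 2 - m
```

For the star K1,3 this gives 3 vertices and 3 edges, matching L(K1,3) = K3.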
=== Whitney isomorphism theorem ===
If the line graphs of two connected graphs are isomorphic, then the underlying graphs are isomorphic, except in the case of the triangle graph K3 and the claw K1,3, which have isomorphic line graphs but are not themselves isomorphic.
As well as K3 and K1,3, there are some other exceptional small graphs with the property that their line graph has a higher degree of symmetry than the graph itself. For instance, the diamond graph K1,1,2 (two triangles sharing an edge) has four graph automorphisms but its line graph K1,2,2 has eight. In the illustration of the diamond graph shown, rotating the graph by 90 degrees is not a symmetry of the graph, but is a symmetry of its line graph. However, all such exceptional cases have at most four vertices. A strengthened version of the Whitney isomorphism theorem states that, for connected graphs with more than four vertices, there is a one-to-one correspondence between isomorphisms of the graphs and isomorphisms of their line graphs.
Analogues of the Whitney isomorphism theorem have been proven for the line graphs of multigraphs, but are more complicated in this case.
=== Strongly regular and perfect line graphs ===
The line graph of the complete graph Kn is also known as the triangular graph, the Johnson graph J(n, 2), or the complement of the Kneser graph KGn,2. Triangular graphs are characterized by their spectra, except for n = 8. They may also be characterized (again with the exception of K8) as the strongly regular graphs with parameters srg(n(n − 1)/2, 2(n − 2), n − 2, 4). The three strongly regular graphs with the same parameters and spectrum as L(K8) are the Chang graphs, which may be obtained by graph switching from L(K8).
The line graph of a bipartite graph is perfect (see Kőnig's theorem), but need not be bipartite as the example of the claw graph shows. The line graphs of bipartite graphs form one of the key building blocks of perfect graphs, used in the proof of the strong perfect graph theorem. A special case of these graphs are the rook's graphs, line graphs of complete bipartite graphs. Like the line graphs of complete graphs, they can be characterized with one exception by their numbers of vertices, numbers of edges, and number of shared neighbors for adjacent and non-adjacent points. The one exceptional case is L(K4,4), which shares its parameters with the Shrikhande graph. When both sides of the bipartition have the same number of vertices, these graphs are again strongly regular. It has been shown that, except for C3, C4, and C5, all connected strongly regular graphs can be made non-strongly regular within two line graph transformations. The extension to disconnected graphs would require that the graph is not a disjoint union of C3.
More generally, a graph G is said to be a line perfect graph if L(G) is a perfect graph. The line perfect graphs are exactly the graphs that do not contain a simple cycle of odd length greater than three. Equivalently, a graph is line perfect if and only if each of its biconnected components is either bipartite or of the form K4 (the tetrahedron) or K1,1,n (a book of one or more triangles all sharing a common edge). Every line perfect graph is itself perfect.
=== Other related graph families ===
All line graphs are claw-free graphs, graphs without an induced subgraph in the form of a three-leaf tree. As with claw-free graphs more generally, every connected line graph L(G) with an even number of edges has a perfect matching; equivalently, this means that if the underlying graph G has an even number of edges, its edges can be partitioned into two-edge paths.
The line graphs of trees are exactly the claw-free block graphs. These graphs have been used to solve a problem in extremal graph theory, of constructing a graph with a given number of edges and vertices whose largest tree induced as a subgraph is as small as possible.
All eigenvalues of the adjacency matrix A of a line graph are at least −2. The reason for this is that A can be written as A = JᵀJ − 2I, where J is the signless incidence matrix of the pre-line graph and I is the identity. In particular, A + 2I is the Gramian matrix of a system of vectors: all graphs with this property have been called generalized line graphs.
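The factorization is easy to verify numerically (illustrative Python with numpy; B plays the role of the signless incidence matrix J, rows indexed by vertices and columns by edges):

```python
import numpy as np

# Signless incidence matrix B of the path 0-1-2; the line-graph
# adjacency matrix is then A = B^T B - 2I.
edges = [(0, 1), (1, 2)]
B = np.zeros((3, len(edges)))
for j, (u, v) in enumerate(edges):
    B[u, j] = B[v, j] = 1
A = B.T @ B - 2 * np.eye(len(edges))
# B^T B is a Gram matrix, hence positive semidefinite, so every
# eigenvalue of A is at least -2.
```

Here L(G) is a single edge, so A is the 2 × 2 adjacency matrix of K2, with eigenvalues ±1 ≥ −2 as the theorem requires.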
== Characterization and recognition ==
=== Clique partition ===
For an arbitrary graph G, and an arbitrary vertex v in G, the set of edges incident to v corresponds to a clique in the line graph L(G). The cliques formed in this way partition the edges of L(G). Each vertex of L(G) belongs to exactly two of them (the two cliques corresponding to the two endpoints of the corresponding edge in G).
The existence of such a partition into cliques can be used to characterize the line graphs: A graph L is the line graph of some other graph or multigraph if and only if it is possible to find a collection of cliques in L (allowing some of the cliques to be single vertices) that partition the edges of L, such that each vertex of L belongs to exactly two of the cliques. It is the line graph of a graph (rather than a multigraph) if this set of cliques satisfies the additional condition that no two vertices of L are both in the same two cliques. Given such a family of cliques, the underlying graph G for which L is the line graph can be recovered by making one vertex in G for each clique, and an edge in G for each vertex in L with its endpoints being the two cliques containing the vertex in L. By the strong version of Whitney's isomorphism theorem, if the underlying graph G has more than four vertices, there can be only one partition of this type.
For example, this characterization can be used to show that the following graph is not a line graph:
In this example, the edges going upward, to the left, and to the right from the central degree-four vertex do not have any cliques in common. Therefore, any partition of the graph's edges into cliques would have to have at least one clique for each of these three edges, and these three cliques would all intersect in that central vertex, violating the requirement that each vertex appear in exactly two cliques. Thus, the graph shown is not a line graph.
=== Forbidden subgraphs ===
Another characterization of line graphs was proven in Beineke (1970) (and reported earlier without proof by Beineke (1968)). He showed that there are nine minimal graphs that are not line graphs, such that any graph that is not a line graph has one of these nine graphs as an induced subgraph. That is, a graph is a line graph if and only if no subset of its vertices induces one of these nine graphs. In the example above, the four topmost vertices induce a claw (that is, a complete bipartite graph K1,3), shown on the top left of the illustration of forbidden subgraphs. Therefore, by Beineke's characterization, this example cannot be a line graph. For graphs with minimum degree at least 5, only the six subgraphs in the left and right columns of the figure are needed in the characterization.
=== Algorithms ===
Roussopoulos (1973) and Lehot (1974) described linear time algorithms for recognizing line graphs and reconstructing their original graphs. Sysło (1982) generalized these methods to directed graphs. Degiorgi & Simon (1995) described an efficient data structure for maintaining a dynamic graph, subject to vertex insertions and deletions, and maintaining a representation of the input as a line graph (when it exists) in time proportional to the number of changed edges at each step.
The algorithms of Roussopoulos (1973) and Lehot (1974) are based on characterizations of line graphs involving odd triangles (triangles in the line graph with the property that there exists another vertex adjacent to an odd number of triangle vertices). However, the algorithm of Degiorgi & Simon (1995) uses only Whitney's isomorphism theorem. It is complicated by the need to recognize deletions that cause the remaining graph to become a line graph, but when specialized to the static recognition problem only insertions need to be performed, and the algorithm performs the following steps:
Construct the input graph L by adding vertices one at a time, at each step choosing a vertex to add that is adjacent to at least one previously-added vertex. While adding vertices to L, maintain a graph G for which L = L(G); if the algorithm ever fails to find an appropriate graph G, then the input is not a line graph and the algorithm terminates.
When adding a vertex v to a graph L(G) for which G has four or fewer vertices, it might be the case that the line graph representation is not unique. But in this case, the augmented graph is small enough that a representation of it as a line graph can be found by a brute force search in constant time.
When adding a vertex v to a larger graph L that equals the line graph of another graph G, let S be the subgraph of G formed by the edges that correspond to the neighbors of v in L. Check that S has a vertex cover consisting of one vertex or two non-adjacent vertices. If there are two vertices in the cover, augment G by adding an edge (corresponding to v) that connects these two vertices. If there is only one vertex in the cover, then add a new vertex to G, adjacent to this vertex.
Each step either takes constant time, or involves finding a vertex cover of constant size within a graph S whose size is proportional to the number of neighbors of v. Thus, the total time for the whole algorithm is proportional to the sum of the numbers of neighbors of all vertices, which (by the handshaking lemma) is proportional to the number of input edges.
== Iterating the line graph operator ==
van Rooij & Wilf (1965) consider the sequence of graphs G, L(G), L(L(G)), L(L(L(G))), ….
They show that, when G is a finite connected graph, only four behaviors are possible for this sequence:
If G is a cycle graph then L(G) and each subsequent graph in this sequence are isomorphic to G itself. These are the only connected graphs for which L(G) is isomorphic to G.
If G is a claw K1,3, then L(G) and all subsequent graphs in the sequence are triangles.
If G is a path graph then each subsequent graph in the sequence is a shorter path until eventually the sequence terminates with an empty graph.
In all remaining cases, the sizes of the graphs in this sequence eventually increase without bound.
If G is not connected, this classification applies separately to each component of G.
For connected graphs that are not paths, all sufficiently high numbers of iteration of the line graph operation produce graphs that are Hamiltonian.
== Generalizations ==
=== Medial graphs and convex polyhedra ===
When a planar graph G has maximum vertex degree three, its line graph is planar, and every planar embedding of G can be extended to an embedding of L(G). However, there exist planar graphs with higher degree whose line graphs are nonplanar. These include, for example, the 5-star K1,5, the gem graph formed by adding two non-crossing diagonals within a regular pentagon, and all convex polyhedra with a vertex of degree four or more.
An alternative construction, the medial graph, coincides with the line graph for planar graphs with maximum degree three, but is always planar. It has the same vertices as the line graph, but potentially fewer edges: two vertices of the medial graph are adjacent if and only if the corresponding two edges are consecutive on some face of the planar embedding. The medial graph of the dual graph of a plane graph is the same as the medial graph of the original plane graph.
For regular polyhedra or simple polyhedra, the medial graph operation can be represented geometrically by the operation of cutting off each vertex of the polyhedron by a plane through the midpoints of all its incident edges. This operation is known variously as the second truncation, degenerate truncation, or rectification.
=== Total graphs ===
The total graph T(G) of a graph G has as its vertices the elements (vertices or edges) of G, and has an edge between two elements whenever they are either incident or adjacent. The total graph may also be obtained by subdividing each edge of G and then taking the square of the subdivided graph.
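A direct construction of the total graph can be sketched as follows (illustrative Python; edges are stored as frozensets of endpoints, and we assume vertex labels are not themselves frozensets):

```python
def total_graph(vertices, edges):
    # Elements of T(G): the vertices of G plus its edges. Two
    # elements are adjacent when they are incident or adjacent in G.
    edges = [frozenset(e) for e in edges]
    elements = list(vertices) + edges
    t_edges = set()
    for i, a in enumerate(elements):
        for b in elements[i + 1:]:
            if isinstance(a, frozenset) and isinstance(b, frozenset):
                adjacent = bool(a & b)          # two incident edges
            elif isinstance(b, frozenset):
                adjacent = a in b               # vertex lying on an edge
            elif isinstance(a, frozenset):
                adjacent = b in a
            else:
                adjacent = frozenset((a, b)) in edges  # adjacent vertices
            if adjacent:
                t_edges.add(frozenset((a, b)))
    return elements, t_edges
```

For a single edge, T(K2) is a triangle: the two endpoints plus the edge, all mutually adjacent.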
=== Multigraphs ===
The concept of the line graph of G may naturally be extended to the case where G is a multigraph. In this case, the characterizations of these graphs can be simplified: the characterization in terms of clique partitions no longer needs to prevent two vertices from belonging to the same two cliques, and the characterization by forbidden graphs has seven forbidden graphs instead of nine.
However, for multigraphs, there are larger numbers of pairs of non-isomorphic graphs that have the same line graphs. For instance, a complete bipartite graph K1,n has the same line graph as the dipole graph and the Shannon multigraph with the same number of edges. Nevertheless, analogues to Whitney's isomorphism theorem can still be derived in this case.
=== Line digraphs ===
It is also possible to generalize line graphs to directed graphs. If G is a directed graph, its directed line graph or line digraph has one vertex for each edge of G. Two vertices representing directed edges from u to v and from w to x in G are connected by an edge from uv to wx in the line digraph when v = w. That is, each edge in the line digraph of G represents a length-two directed path in G. The de Bruijn graphs may be formed by repeating this process of forming directed line graphs, starting from a complete directed graph.
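The iterated construction can be sketched in a few lines of Python. Starting here from the complete directed graph with loops on two symbols (the de Bruijn graph B(2,1); using loops is an assumption matching the usual de Bruijn construction), each application of the line-digraph operation doubles the number of vertices.

```python
def line_digraph(arcs):
    # Each arc of G becomes a vertex; there is an arc a -> b in the line
    # digraph exactly when a ends where b starts (a length-two path in G).
    return [(a, b) for a in arcs for b in arcs if a[1] == b[0]]

# B(2,1): complete directed graph with loops on {0, 1}.
g = [(u, v) for u in (0, 1) for v in (0, 1)]
counts = []
for _ in range(3):
    counts.append(len(g))   # arcs of the current graph = vertices of the next
    g = line_digraph(g)
print(counts)  # [4, 8, 16]
```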
=== Weighted line graphs ===
In a line graph L(G), each vertex of degree k in the original graph G creates k(k − 1)/2 edges in the line graph. For many types of analysis this means high-degree nodes in G are over-represented in the line graph L(G). For instance, consider a random walk on the vertices of the original graph G. This will pass along some edge e with some frequency f. On the other hand, this edge e is mapped to a unique vertex, say v, in the line graph L(G). If we now perform the same type of random walk on the vertices of the line graph, the frequency with which v is visited can be completely different from f. If our edge e in G was connected to nodes of degree O(k), it will be traversed O(k2) more frequently in the line graph L(G). Put another way, the Whitney graph isomorphism theorem guarantees that the line graph almost always encodes the topology of the original graph G faithfully but it does not guarantee that dynamics on these two graphs have a simple relationship. One solution is to construct a weighted line graph, that is, a line graph with weighted edges. There are several natural ways to do this. For instance if edges d and e in the graph G are incident at a vertex v with degree k, then in the line graph L(G) the edge connecting the two vertices d and e can be given weight 1/(k − 1). In this way every edge in G (provided neither end is connected to a vertex of degree 1) will have strength 2 in the line graph L(G) corresponding to the two ends that the edge has in G. It is straightforward to extend this definition of a weighted line graph to cases where the original graph G was directed or even weighted. The principle in all cases is to ensure the line graph L(G) reflects the dynamics as well as the topology of the original graph G.
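The 1/(k − 1) weighting described above can be sketched as follows (a minimal pure-Python illustration for a simple loop-free graph; the example graph C4 is an assumption of this sketch). Each edge of G then accumulates total strength 2 in L(G), one unit from each endpoint.

```python
from collections import defaultdict
from itertools import combinations

def weighted_line_graph(edges):
    # For edges d, e meeting at a vertex v of degree k, the line-graph
    # edge {d, e} gets weight 1/(k - 1), as described in the text.
    incident = defaultdict(list)
    for e in edges:
        for v in e:
            incident[v].append(frozenset(e))
    w = defaultdict(float)
    for v, es in incident.items():
        k = len(es)
        for d, e in combinations(es, 2):
            w[frozenset({d, e})] += 1.0 / (k - 1)
    return w

# In the cycle C4 every vertex has degree 2, so every line-graph edge has
# weight 1 and every edge of G ends up with total strength 2.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
w = weighted_line_graph(edges)
strength = {frozenset(e): 0.0 for e in edges}
for pair, weight in w.items():
    for e in pair:
        strength[e] += weight
print(sorted(strength.values()))  # [2.0, 2.0, 2.0, 2.0]
```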
=== Line graphs of hypergraphs ===
The edges of a hypergraph may form an arbitrary family of sets, so the line graph of a hypergraph is the same as the intersection graph of the sets from the family.
=== Disjointness graph ===
The disjointness graph of G, denoted D(G), is constructed in the following way: for each edge in G, make a vertex in D(G); for every two edges in G that do not have a vertex in common, make an edge between their corresponding vertices in D(G). In other words, D(G) is the complement graph of L(G). A clique in D(G) corresponds to an independent set in L(G), and vice versa.
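The complement relationship is easy to verify on a small example; the sketch below (example graph chosen for illustration) builds both L(G) and D(G) edge sets and checks that together they partition all pairs of edges of G.

```python
from itertools import combinations

def line_and_disjointness(edges):
    E = [frozenset(e) for e in edges]
    L = {frozenset({d, e}) for d, e in combinations(E, 2) if d & e}
    D = {frozenset({d, e}) for d, e in combinations(E, 2) if not d & e}
    return L, D

# For the path P4 with 3 edges: two incident pairs, one disjoint pair,
# and the two graphs partition all C(3,2) = 3 pairs of edges.
L, D = line_and_disjointness([(1, 2), (2, 3), (3, 4)])
print(len(L), len(D))  # 2 1
```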
== Notes ==
== References ==
== External links ==
Line graphs, Information System on Graph Class Inclusions
Weisstein, Eric W. "Line Graph". MathWorld.
In the mathematical field of graph theory, an automorphism is a permutation of the vertices such that edges are mapped to edges and non-edges are mapped to non-edges. A graph G is a vertex-transitive graph if, given any two vertices v1 and v2 of G, there is an automorphism f such that
{\displaystyle f(v_{1})=v_{2}.}
In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical.
Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph).
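For very small graphs, vertex-transitivity can be checked by brute force over all vertex permutations; this is only a sketch for illustration (factorial cost), with the example graphs chosen here, not taken from the article.

```python
from itertools import permutations

def is_vertex_transitive(vertices, edges):
    # Enumerate all vertex permutations, keep those preserving the edge
    # set (automorphisms), and check a single orbit covers all vertices.
    vertices = list(vertices)
    E = {frozenset(e) for e in edges}
    reachable = {vertices[0]}
    for p in permutations(vertices):
        f = dict(zip(vertices, p))
        if {frozenset({f[u], f[v]}) for u, v in E} == E:
            reachable.add(f[vertices[0]])
    return reachable == set(vertices)

# The 5-cycle is vertex-transitive; the path P3 is not (ends vs. middle).
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_vertex_transitive(range(5), c5))                 # True
print(is_vertex_transitive([1, 2, 3], [(1, 2), (2, 3)]))  # False
```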
== Finite examples ==
Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices.
Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees.
== Properties ==
The edge-connectivity of a connected vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3.
If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d.
== Infinite examples ==
Infinite vertex-transitive graphs include:
infinite paths (infinite in both directions)
infinite regular trees, e.g. the Cayley graph of the free group
graphs of uniform tessellations (see a complete list of planar tessellations), including all tilings by regular polygons
infinite Cayley graphs
the Rado graph
Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.
== See also ==
Edge-transitive graph
Lovász conjecture
Semi-symmetric graph
Zero-symmetric graph
== References ==
== External links ==
Weisstein, Eric W. "Vertex-transitive graph". MathWorld.
A census of small connected cubic vertex-transitive graphs. Primož Potočnik, Pablo Spiga, Gabriel Verret, 2012.
Vertex-transitive Graphs On Fewer Than 48 Vertices. Gordon Royle and Derek Holt, 2020.
In the mathematical field of graph theory, a graph G is symmetric or arc-transitive if, given any two ordered pairs of adjacent vertices
{\displaystyle (u_{1},v_{1})} and {\displaystyle (u_{2},v_{2})} of G, there is an automorphism {\displaystyle f:V(G)\rightarrow V(G)} such that {\displaystyle f(u_{1})=u_{2}} and {\displaystyle f(v_{1})=v_{2}.}
In other words, a graph is symmetric if its automorphism group acts transitively on ordered pairs of adjacent vertices (that is, upon edges considered as having a direction). Such a graph is sometimes also called 1-arc-transitive or flag-transitive.
By definition (ignoring u1 and u2), a symmetric graph without isolated vertices must also be vertex-transitive. Since the definition above maps one edge to another, a symmetric graph must also be edge-transitive. However, an edge-transitive graph need not be symmetric, since a—b might map to c—d, but not to d—c. Star graphs are a simple example of a graph that is edge-transitive without being vertex-transitive or symmetric. As a further example, semi-symmetric graphs are edge-transitive and regular, but not vertex-transitive.
Every connected symmetric graph must thus be both vertex-transitive and edge-transitive, and the converse is true for graphs of odd degree. However, for even degree, there exist connected graphs which are vertex-transitive and edge-transitive, but not symmetric. Such graphs are called half-transitive. The smallest connected half-transitive graph is Holt's graph, with degree 4 and 27 vertices. Confusingly, some authors use the term "symmetric graph" to mean a graph which is vertex-transitive and edge-transitive, rather than an arc-transitive graph. Such a definition would include half-transitive graphs, which are excluded under the definition above.
A distance-transitive graph is one where instead of considering pairs of adjacent vertices (i.e. vertices a distance of 1 apart), the definition covers two pairs of vertices, each the same distance apart. Such graphs are automatically symmetric, by definition.
A t-arc is defined to be a sequence of t + 1 vertices, such that any two consecutive vertices in the sequence are adjacent, and with any repeated vertices being more than 2 steps apart. A t-transitive graph is a graph such that the automorphism group acts transitively on t-arcs, but not on (t + 1)-arcs. Since 1-arcs are simply edges, every symmetric graph of degree 3 or more must be t-transitive for some t, and the value of t can be used to further classify symmetric graphs. The cube is 2-transitive, for example.
Note that conventionally the term "symmetric graph" is not complementary to the term "asymmetric graph," as the latter refers to a graph that has no nontrivial symmetries at all.
== Examples ==
Two basic families of symmetric graphs for any number of vertices are the cycle graphs (of degree 2) and the complete graphs. Further symmetric graphs are formed by the vertices and edges of the regular and quasiregular polyhedra: the cube, octahedron, icosahedron, dodecahedron, cuboctahedron, and icosidodecahedron. Extension of the cube to n dimensions gives the hypercube graphs (with 2n vertices and degree n). Similarly, extension of the octahedron to n dimensions gives the graphs of the cross-polytopes; this family of graphs (with 2n vertices and degree 2n − 2) is sometimes referred to as the cocktail party graphs: they are complete graphs with a set of edges making a perfect matching removed. Additional families of symmetric graphs with an even number of vertices 2n are the evenly split complete bipartite graphs Kn,n and the crown graphs on 2n vertices. Many other symmetric graphs can be classified as circulant graphs (but not all).
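The cocktail party construction (complete graph minus a perfect matching) is simple to state in code; a small sketch, with n = 4 chosen for illustration, confirming the 2n vertices each have degree 2n − 2:

```python
from itertools import combinations

def cocktail_party_graph(n):
    # K_{2n} minus a perfect matching: pair vertex 2i with 2i+1 and
    # delete those n matching edges from the complete graph.
    matching = {frozenset({2 * i, 2 * i + 1}) for i in range(n)}
    return [e for e in map(frozenset, combinations(range(2 * n), 2))
            if e not in matching]

# For n = 4: 8 vertices, C(8,2) - 4 = 24 edges, each vertex of degree 6.
edges = cocktail_party_graph(4)
degree = {v: sum(v in e for e in edges) for v in range(8)}
print(len(edges), set(degree.values()))  # 24 {6}
```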
The Rado graph forms an example of a symmetric graph with infinitely many vertices and infinite degree.
=== Cubic symmetric graphs ===
Combining the symmetry condition with the restriction that graphs be cubic (i.e. all vertices have degree 3) yields quite a strong condition, and such graphs are rare enough to be listed. They all have an even number of vertices. The Foster census and its extensions provide such lists. The Foster census was begun in the 1930s by Ronald M. Foster while he was employed by Bell Labs, and in 1988 (when Foster was 92) the then current Foster census (listing all cubic symmetric graphs up to 512 vertices) was published in book form. The first thirteen items in the list are cubic symmetric graphs with up to 30 vertices (ten of these are also distance-transitive; the exceptions are as indicated):
Other well known cubic symmetric graphs are the Dyck graph, the Foster graph and the Biggs–Smith graph. The ten distance-transitive graphs listed above, together with the Foster graph and the Biggs–Smith graph, are the only cubic distance-transitive graphs.
== Properties ==
The vertex-connectivity of a symmetric graph is always equal to the degree d. In contrast, for vertex-transitive graphs in general, the vertex-connectivity is bounded below by 2(d + 1)/3.
A t-transitive graph of degree 3 or more has girth at least 2(t − 1). However, there are no finite t-transitive graphs of degree 3 or more for t ≥ 8. In the case of the degree being exactly 3 (cubic symmetric graphs), there are none for t ≥ 6.
== See also ==
Algebraic graph theory
Gallery of named graphs
Regular map
== References ==
== External links ==
Cubic symmetric graphs (The Foster Census). Data files for all cubic symmetric graphs up to 768 vertices, and some cubic graphs with up to 1000 vertices. Gordon Royle, updated February 2001, retrieved 2009-04-18.
Trivalent (cubic) symmetric graphs on up to 10000 vertices. Marston Conder, 2011.
In graph theory, series–parallel graphs are graphs with two distinguished vertices called terminals, formed recursively by two simple composition operations. They can be used to model series and parallel electric circuits.
== Definition and terminology ==
In this context, the term graph means multigraph.
There are several ways to define series–parallel graphs.
=== First definition ===
The following definition basically follows the one used by David Eppstein.
A two-terminal graph (TTG) is a graph with two distinguished vertices, s and t, called the source and the sink, respectively.
The parallel composition Pc = Pc(X,Y) of two TTGs X and Y is a TTG created from the disjoint union of graphs X and Y by merging the sources of X and Y to create the source of Pc and merging the sinks of X and Y to create the sink of Pc.
The series composition Sc = Sc(X,Y) of two TTGs X and Y is a TTG created from the disjoint union of graphs X and Y by merging the sink of X with the source of Y. The source of X becomes the source of Sc and the sink of Y becomes the sink of Sc.
A two-terminal series–parallel graph (TTSPG) is a graph that may be constructed by a sequence of series and parallel compositions starting from a set of copies of a single-edge graph K2 with assigned terminals.
Definition 1. Finally, a graph is called series–parallel (SP-graph), if it is a TTSPG when some two of its vertices are regarded as source and sink.
In a similar way one may define series–parallel digraphs, constructed from copies of single-arc graphs, with arcs directed from the source to the sink.
=== Second definition ===
The following definition specifies the same class of graphs.
Definition 2. A graph is an SP-graph, if it may be turned into K2 by a sequence of the following operations:
Replacement of a pair of parallel edges with a single edge that connects their common endpoints
Replacement of a pair of edges incident to a vertex of degree 2 other than s or t with a single edge.
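Definition 2 suggests a direct recognition procedure: apply the two reductions until neither fires, and accept exactly when a single s-t edge remains. Below is a minimal sketch for loop-free multigraphs (the function name and example graphs are illustrative; a production recognizer would use the linear-time algorithm mentioned later).

```python
from collections import Counter

def is_series_parallel(edges, s, t):
    # Repeatedly apply the two reductions of Definition 2; the input is an
    # SP-graph with terminals s, t iff a single s-t edge remains.
    # Edges are a multiset of unordered pairs (loop-free multigraph).
    E = Counter(frozenset(e) for e in edges)
    changed = True
    while changed:
        changed = False
        # Parallel reduction: collapse any bundle of parallel edges.
        for e in E:
            if E[e] > 1:
                E[e] = 1
                changed = True
        # Series reduction at a degree-2 vertex other than s or t.
        deg = Counter(v for e in E.elements() for v in e)
        for v, d in deg.items():
            if d == 2 and v not in (s, t):
                inc = [e for e in E.elements() if v in e]
                if len(inc) == 2 and inc[0] != inc[1]:
                    a, = inc[0] - {v}
                    b, = inc[1] - {v}
                    E[inc[0]] -= 1
                    E[inc[1]] -= 1
                    E[frozenset({a, b})] += 1
                    E = +E  # drop zero counts
                    changed = True
                    break
    return E == Counter({frozenset({s, t}): 1})

# The "diamond" (two parallel s-t paths of length 2) is series-parallel;
# K4 is not: it has no parallel edges and no internal degree-2 vertex.
diamond = [(1, 2), (2, 4), (1, 3), (3, 4)]
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(is_series_parallel(diamond, 1, 4))  # True
print(is_series_parallel(k4, 1, 4))       # False
```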
== Properties ==
Every series–parallel graph has treewidth at most 2 and branchwidth at most 2. Indeed, a graph has treewidth at most 2 if and only if it has branchwidth at most 2, if and only if every biconnected component is a series–parallel graph. The maximal series–parallel graphs, graphs to which no additional edges can be added without destroying their series–parallel structure, are exactly the 2-trees.
2-connected series–parallel graphs are characterised by having no subgraph homeomorphic to K4.
Series–parallel graphs may also be characterized by their ear decompositions.
== Computational complexity ==
SP-graphs may be recognized in linear time and their series–parallel decomposition may be constructed in linear time as well.
Besides being a model of certain types of electric networks, these graphs are of interest in computational complexity theory, because a number of standard graph problems are solvable in linear time on SP-graphs, including finding the maximum matching, the maximum independent set, the minimum dominating set, and the Hamiltonian completion. Some of these problems are NP-complete for general graphs. The solution capitalizes on the fact that if the answers for one of these problems are known for two SP-graphs, then one can quickly find the answer for their series and parallel compositions.
== Generalization ==
The generalized series–parallel graphs (GSP-graphs) are an extension of the SP-graphs with the same algorithmic efficiency for the mentioned problems. The class of GSP-graphs includes the classes of SP-graphs and outerplanar graphs.
GSP graphs may be specified by Definition 2 augmented with the third operation of deletion of a dangling vertex (vertex of degree 1). Alternatively, Definition 1 may be augmented with the following operation.
The source merge S = M(X,Y) of two TTGs X and Y is a TTG created from the disjoint union of graphs X and Y by merging the source of X with the source of Y. The source and sink of X become the source and sink of S respectively.
An SPQR tree is a tree structure that can be defined for an arbitrary 2-vertex-connected graph. It has S-nodes, which are analogous to the series composition operations in series–parallel graphs, P-nodes, which are analogous to the parallel composition operations in series–parallel graphs, and R-nodes, which do not correspond to series–parallel composition operations. A 2-connected graph is series–parallel if and only if there are no R-nodes in its SPQR tree.
== See also ==
Threshold graph
Cograph
Hanner polytope
Series-parallel partial order
== References ==
In discrete mathematics, and more specifically in graph theory, a vertex (plural vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a circle with a label, and an edge is represented by a line or arrow extending from one vertex to another.
From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects.
The two vertices forming an edge are said to be the endpoints of this edge, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v,w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v.
== Types of vertices ==
The degree of a vertex v in a graph, denoted 𝛿(v), is the number of edges incident to it. An isolated vertex is a vertex with degree zero; that is, a vertex that is not an endpoint of any edge. A leaf vertex (also pendant vertex) is a vertex with degree one. In a directed graph, one can distinguish the outdegree (number of outgoing edges), denoted 𝛿+(v), from the indegree (number of incoming edges), denoted 𝛿−(v); a source vertex is a vertex with indegree zero, while a sink vertex is a vertex with outdegree zero. A simplicial vertex is one whose closed neighborhood forms a clique: every two neighbors are adjacent. A universal vertex is a vertex that is adjacent to every other vertex in the graph.
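The directed-degree definitions above translate directly into code; a small sketch (the three-arc example digraph is chosen for illustration) finding the source and sink vertices:

```python
from collections import Counter

# Degree bookkeeping for the directed graph with arcs 1->2, 1->3, 2->3.
arcs = [(1, 2), (1, 3), (2, 3)]
outdeg = Counter(u for u, v in arcs)   # outdegree: outgoing arcs per vertex
indeg = Counter(v for u, v in arcs)    # indegree: incoming arcs per vertex
vertices = {v for a in arcs for v in a}
sources = [v for v in vertices if indeg[v] == 0]   # no incoming arcs
sinks = [v for v in vertices if outdeg[v] == 0]    # no outgoing arcs
print(sources, sinks)  # [1] [3]
```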
A cut vertex is a vertex the removal of which would disconnect the remaining graph; a vertex separator is a collection of vertices the removal of which would disconnect the remaining graph into small pieces. A k-vertex-connected graph is a graph in which removing fewer than k vertices always leaves the remaining graph connected. An independent set is a set of vertices no two of which are adjacent, and a vertex cover is a set of vertices that includes at least one endpoint of each edge in the graph. The vertex space of a graph is a vector space having a set of basis vectors corresponding with the graph's vertices.
A graph is vertex-transitive if it has symmetries that map any vertex to any other vertex. In the context of graph enumeration and graph isomorphism it is important to distinguish between labeled vertices and unlabeled vertices. A labeled vertex is a vertex that is associated with extra information that enables it to be distinguished from other labeled vertices; two graphs can be considered isomorphic only if the correspondence between their vertices pairs up vertices with equal labels. An unlabeled vertex is one that can be substituted for any other vertex based only on its adjacencies in the graph and not based on any additional information.
Vertices in graphs are analogous to, but not the same as, vertices of polyhedra: the skeleton of a polyhedron forms a graph, the vertices of which are the vertices of the polyhedron, but polyhedron vertices have additional structure (their geometric location) that is not assumed to be present in graph theory. The vertex figure of a vertex in a polyhedron is analogous to the neighborhood of a vertex in a graph.
== See also ==
Node (computer science)
Graph theory
Glossary of graph theory
== References ==
Gallo, Giorgio; Pallotino, Stefano (1988). "Shortest path algorithms". Annals of Operations Research. 13 (1): 1–79. doi:10.1007/BF02288320. S2CID 62752810.
Berge, Claude (1958). Théorie des graphes et ses applications. Collection Universitaire de Mathématiques, II. Paris: Dunod. viii+277 pp. (English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Romanian, Bucharest 1969; Chinese, Shanghai 1963; second printing of the 1962 first English edition, Dover, New York 2001.)
Chartrand, Gary (1985). Introductory graph theory. New York: Dover. ISBN 0-486-24775-9.
Biggs, Norman; Lloyd, E. H.; Wilson, Robin J. (1986). Graph theory, 1736-1936. Oxford [Oxfordshire]: Clarendon Press. ISBN 0-19-853916-9.
Harary, Frank (1969). Graph theory. Reading, Mass.: Addison-Wesley Publishing. ISBN 0-201-41033-8.
Harary, Frank; Palmer, Edgar M. (1973). Graphical enumeration. New York, Academic Press. ISBN 0-12-324245-2.
== External links ==
Weisstein, Eric W. "Graph Vertex". MathWorld.
In the mathematical field of graph theory, the term "null graph" may refer either to the order-zero graph, or alternatively, to any edgeless graph (the latter is sometimes called an "empty graph").
== Order-zero graph ==
The order-zero graph, K0, is the unique graph having no vertices (hence its order is zero). It follows that K0 also has no edges. Thus the null graph is a regular graph of degree zero. Some authors exclude K0 from consideration as a graph (either by definition, or more simply as a matter of convenience). Whether including K0 as a valid graph is useful depends on context. On the positive side, K0 follows naturally from the usual set-theoretic definitions of a graph (it is the ordered pair (V, E) for which the vertex and edge sets, V and E, are both empty), in proofs it serves as a natural base case for mathematical induction, and similarly, in recursively defined data structures K0 is useful for defining the base case for recursion (by treating the null tree as the child of missing edges in any non-null binary tree, every non-null binary tree has exactly two children). On the negative side, including K0 as a graph requires that many well-defined formulas for graph properties include exceptions for it (for example, either "counting all strongly connected components of a graph" becomes "counting all non-null strongly connected components of a graph", or the definition of connected graphs has to be modified not to include K0). To avoid the need for such exceptions, it is often assumed in literature that the term graph implies "graph with at least one vertex" unless context suggests otherwise.
In category theory, the order-zero graph is, according to some definitions of "category of graphs," the initial object in the category.
K0 does fulfill (vacuously) most of the same basic graph properties as does K1 (the graph with one vertex and no edges). As some examples, K0 is of size zero, it is equal to its complement graph K0, a forest, and a planar graph. It may be considered undirected, directed, or even both; when considered as directed, it is a directed acyclic graph. And it is both a complete graph and an edgeless graph. However, definitions for each of these graph properties will vary depending on whether context allows for K0.
== Edgeless graph ==
For each natural number n, the edgeless graph (or empty graph) Kn of order n is the graph with n vertices and zero edges. An edgeless graph is occasionally referred to as a null graph in contexts where the order-zero graph is not permitted.
It is a 0-regular graph. The notation Kn arises from the fact that the n-vertex edgeless graph is the complement of the complete graph Kn.
== See also ==
Glossary of graph theory
Cycle graph
Path graph
== Notes ==
== References ==
== External links ==
Media related to Null graphs at Wikimedia Commons
In graph theory, a strongly regular graph (SRG) is a regular graph G = (V, E) with v vertices and degree k such that for some given integers
{\displaystyle \lambda ,\mu \geq 0}
every two adjacent vertices have λ common neighbours, and
every two non-adjacent vertices have μ common neighbours.
Such a strongly regular graph is denoted by srg(v, k, λ, μ). Its complement graph is also strongly regular: it is an srg(v, v − k − 1, v − 2 − 2k + μ, v − 2k + λ).
A strongly regular graph is a distance-regular graph with diameter 2 whenever μ is non-zero. It is a locally linear graph whenever λ = 1.
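The complement-parameter formula quoted above can be checked numerically; a small sketch (function name chosen for illustration) applied to the Petersen graph parameters srg(10, 3, 0, 1), noting that complementing twice returns the original parameters:

```python
def srg_complement(v, k, lam, mu):
    # Parameters of the complement of an srg(v, k, λ, μ),
    # per the formula quoted in the text.
    return (v, v - k - 1, v - 2 - 2 * k + mu, v - 2 * k + lam)

p = (10, 3, 0, 1)          # Petersen graph
c = srg_complement(*p)
print(c)                   # (10, 6, 3, 4)
print(srg_complement(*c))  # (10, 3, 0, 1)
```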
== Etymology ==
A strongly regular graph is denoted as an srg(v, k, λ, μ) in the literature. By convention, graphs which satisfy the definition trivially are excluded from detailed studies and lists of strongly regular graphs. These include the disjoint union of one or more equal-sized complete graphs, and their complements, the complete multipartite graphs with equal-sized independent sets.
Andries Brouwer and Hendrik van Maldeghem (see References) use an alternate but fully equivalent definition of a strongly regular graph based on spectral graph theory: a strongly regular graph is a finite regular graph that has exactly three eigenvalues, only one of which is equal to the degree k, of multiplicity 1. This automatically rules out fully connected graphs (which have only two distinct eigenvalues, not three) and disconnected graphs (for which the multiplicity of the degree k is equal to the number of different connected components, which would therefore exceed one). Much of the literature, including Brouwer, refers to the larger eigenvalue as r (with multiplicity f) and the smaller one as s (with multiplicity g).
== History ==
Strongly regular graphs were introduced by R.C. Bose in 1963. Their study built upon earlier work in the 1950s in the then-new field of spectral graph theory.
== Examples ==
The cycle of length 5 is an srg(5, 2, 0, 1).
The Petersen graph is an srg(10, 3, 0, 1).
The Clebsch graph is an srg(16, 5, 0, 2).
The Shrikhande graph is an srg(16, 6, 2, 2) which is not a distance-transitive graph.
The n × n square rook's graph, i.e., the line graph of a balanced complete bipartite graph Kn,n, is an srg(n2, 2n − 2, n − 2, 2). The parameters for n = 4 coincide with those of the Shrikhande graph, but the two graphs are not isomorphic. (The vertex neighborhood for the Shrikhande graph is a hexagon, while that for the rook graph is two triangles.)
The line graph of a complete graph Kn is an
{\textstyle \operatorname {srg} \left({\binom {n}{2}},2(n-2),n-2,4\right)}.
The three Chang graphs are srg(28, 12, 6, 4), the same as the line graph of K8, but these four graphs are not isomorphic.
Every generalized quadrangle of order (s, t) gives an srg((s + 1)(st + 1), s(t + 1), s − 1, t + 1) as its line graph. For example, GQ(2, 4) gives srg(27, 10, 1, 5) as its line graph.
The Schläfli graph is an srg(27, 16, 10, 8) and is the complement of the aforementioned line graph on GQ(2, 4).
The Hoffman–Singleton graph is an srg(50, 7, 0, 1).
The Gewirtz graph is an srg(56, 10, 0, 2).
The M22 graph aka the Mesner graph is an srg(77, 16, 0, 4).
The Brouwer–Haemers graph is an srg(81, 20, 1, 6).
The Higman–Sims graph is an srg(100, 22, 0, 6).
The Local McLaughlin graph is an srg(162, 56, 10, 24).
The Cameron graph is an srg(231, 30, 9, 3).
The Berlekamp–van Lint–Seidel graph is an srg(243, 22, 1, 2).
The McLaughlin graph is an srg(275, 112, 30, 56).
The Paley graph of order q is an srg(q, (q − 1)/2, (q − 5)/4, (q − 1)/4). The smallest Paley graph, with q = 5, is the 5-cycle (above).
Self-complementary arc-transitive graphs are strongly regular.
A strongly regular graph is called primitive if both the graph and its complement are connected. All the above graphs are primitive, as otherwise μ = 0 or λ = k.
Conway's 99-graph problem asks for the construction of an srg(99, 14, 1, 2). It is unknown whether a graph with these parameters exists, and John Horton Conway offered a $1000 prize for the solution to this problem.
=== Triangle-free graphs ===
The strongly regular graphs with λ = 0 are triangle free. Apart from the complete graphs on fewer than 3 vertices and all complete bipartite graphs, the seven listed earlier (pentagon, Petersen, Clebsch, Hoffman-Singleton, Gewirtz, Mesner-M22, and Higman-Sims) are the only known ones.
=== Geodetic graphs ===
Every strongly regular graph with μ = 1 is a geodetic graph, a graph in which every two vertices have a unique unweighted shortest path. The only known strongly regular graphs with μ = 1 are those where λ is 0, and which are therefore triangle-free as well. These are called the Moore graphs and are explored below in more detail. Other combinations of parameters such as (400, 21, 2, 1) have not yet been ruled out. Despite ongoing research on the properties that a strongly regular graph with μ = 1 would have, it is not known whether any more exist or even whether their number is finite. Only the elementary result is known, that λ cannot be 1 for such a graph.
== Algebraic properties of strongly regular graphs ==
=== Basic relationship between parameters ===
The four parameters in an srg(v, k, λ, μ) are not independent: In order for an srg(v, k, λ, μ) to exist, the parameters must obey the following relation:
{\displaystyle (v-k-1)\mu =k(k-\lambda -1)}
The above relation is derived through a counting argument as follows:
Imagine the vertices of the graph to lie in three levels. Pick any vertex as the root, in Level 0. Then its k neighbors lie in Level 1, and all other vertices lie in Level 2.
Vertices in Level 1 are directly connected to the root, hence they must have λ other neighbors in common with the root, and these common neighbors must also be in Level 1. Since each vertex has degree k, there are
{\displaystyle k-\lambda -1} edges remaining for each Level 1 node to connect to vertices in Level 2. Therefore, there are {\displaystyle k(k-\lambda -1)} edges between Level 1 and Level 2.
Vertices in Level 2 are not directly connected to the root, hence they must have μ common neighbors with the root, and these common neighbors must all be in Level 1. There are
{\displaystyle (v-k-1)} vertices in Level 2, and each is connected to μ vertices in Level 1. Therefore the number of edges between Level 1 and Level 2 is {\displaystyle (v-k-1)\mu }.
Equating the two expressions for the edges between Level 1 and Level 2, the relation follows.
This relation is a necessary condition for the existence of a strongly regular graph, but not a sufficient condition. For instance, the quadruple (21,10,4,5) obeys this relation, but there does not exist a strongly regular graph with these parameters.
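The two-way count above reduces to a one-line arithmetic check; a small sketch (function name chosen for illustration) confirming the relation for the Petersen graph, the 5-cycle, and the quadruple (21, 10, 4, 5), which satisfies the relation even though no such graph exists:

```python
def srg_relation_holds(v, k, lam, mu):
    # Necessary condition (v - k - 1) μ = k (k - λ - 1) from the
    # two-way count of edges between Level 1 and Level 2.
    return (v - k - 1) * mu == k * (k - lam - 1)

print(srg_relation_holds(10, 3, 0, 1))   # True: Petersen graph
print(srg_relation_holds(5, 2, 0, 1))    # True: 5-cycle
print(srg_relation_holds(21, 10, 4, 5))  # True, yet no such srg exists
```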
=== Adjacency matrix equations ===
Let I denote the identity matrix and let J denote the matrix of ones, both matrices of order v. The adjacency matrix A of a strongly regular graph satisfies two equations.
First:
AJ = JA = kJ,
which is a restatement of the regularity requirement. This shows that k is an eigenvalue of the adjacency matrix with the all-ones eigenvector.
Second:
A² = kI + λA + μ(J − I − A)
which expresses strong regularity. The ij-th element of the left-hand side gives the number of two-step paths from i to j. The first term on the right-hand side gives the number of two-step paths from i back to i, namely k edges out and back in. The second term gives the number of two-step paths when i and j are directly connected. The third term gives the corresponding value when i and j are not connected. Since the three cases are mutually exclusive and collectively exhaustive, the simple additive equality follows.
Conversely, a graph whose adjacency matrix satisfies both of the above conditions and which is not a complete or null graph is a strongly regular graph.
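The second matrix equation can be verified directly on a known example. The sketch below (a self-contained illustration, not from the cited sources) builds the Petersen graph, srg(10, 3, 0, 1), as the Kneser graph K(5, 2) — vertices are the 2-subsets of a 5-set, adjacent exactly when disjoint — and checks A² = kI + λA + μ(J − I − A) entrywise:

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2).
verts = list(combinations(range(5), 2))
n = len(verts)
A = [[1 if not set(verts[i]) & set(verts[j]) else 0 for j in range(n)]
     for i in range(n)]

k, lam, mu = 3, 0, 1
A2 = [[sum(A[i][t] * A[t][j] for t in range(n)) for j in range(n)]
      for i in range(n)]
# Entrywise check of A^2 = kI + lam*A + mu*(J - I - A).
ok = all(
    A2[i][j] == k * (i == j) + lam * A[i][j] + mu * (1 - (i == j) - A[i][j])
    for i in range(n) for j in range(n)
)
print(ok)  # True
```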
=== Eigenvalues and graph spectrum ===
Since the adjacency matrix A is symmetric, its eigenvectors are orthogonal. One eigenvector was already observed above, the all-ones vector, corresponding to the eigenvalue k. Therefore every other eigenvector x must satisfy
Jx = 0
where J is the all-ones matrix as before. Take the previously established equation:
A² = kI + λA + μ(J − I − A)
and multiply the above equation by eigenvector x:
A²x = kIx + λAx + μ(J − I − A)x
Call the corresponding eigenvalue p (not to be confused with λ, the graph parameter) and substitute Ax = px, Jx = 0, and Ix = x:
p²x = kx + λpx − μx − μpx
Eliminate x and rearrange to get a quadratic:
p² + (μ − λ)p − (k − μ) = 0
This gives the two additional eigenvalues (1/2)[(λ − μ) ± √((λ − μ)² + 4(k − μ))]. There are thus exactly three eigenvalues for a strongly regular graph.
Conversely, a connected regular graph with only three eigenvalues is strongly regular.
Following the terminology in much of the strongly regular graph literature, the larger eigenvalue is called r with multiplicity f and the smaller one is called s with multiplicity g.
Since the sum of all the eigenvalues is the trace of the adjacency matrix, which is zero in this case, the respective multiplicities f and g can be calculated:
Eigenvalue k has multiplicity 1.
Eigenvalue r = (1/2)[(λ − μ) + √((λ − μ)² + 4(k − μ))] has multiplicity f = (1/2)[(v − 1) − (2k + (v − 1)(λ − μ))/√((λ − μ)² + 4(k − μ))]
Eigenvalue s = (1/2)[(λ − μ) − √((λ − μ)² + 4(k − μ))] has multiplicity g = (1/2)[(v − 1) + (2k + (v − 1)(λ − μ))/√((λ − μ)² + 4(k − μ))]
As the multiplicities must be integers, their expressions provide further constraints on the values of v, k, μ, and λ.
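The eigenvalues and multiplicities can be computed directly from the parameters. The sketch below (our own helper; names are not standard) evaluates r, s, f, g and illustrates the integrality constraint: the Petersen parameters pass, while (8, 3, 0, 1) fails because its multiplicities are not integers:

```python
import math

def srg_spectrum(v, k, lam, mu):
    # Eigenvalues r, s and multiplicities f, g for an srg(v, k, lam, mu),
    # from the quadratic p^2 + (mu - lam)p - (k - mu) = 0.
    d = math.sqrt((lam - mu) ** 2 + 4 * (k - mu))
    r = ((lam - mu) + d) / 2
    s = ((lam - mu) - d) / 2
    f = ((v - 1) - (2 * k + (v - 1) * (lam - mu)) / d) / 2
    g = ((v - 1) + (2 * k + (v - 1) * (lam - mu)) / d) / 2
    return r, s, f, g

# Petersen graph srg(10, 3, 0, 1): spectrum 3^1, 1^5, (-2)^4.
print(srg_spectrum(10, 3, 0, 1))  # (1.0, -2.0, 5.0, 4.0)

# (8, 3, 0, 1) yields non-integer multiplicities, so no such graph exists.
r, s, f, g = srg_spectrum(8, 3, 0, 1)
print(abs(f - round(f)) > 1e-9)  # True
```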
Strongly regular graphs for which 2k + (v − 1)(λ − μ) ≠ 0 have integer eigenvalues with unequal multiplicities.
Strongly regular graphs for which 2k + (v − 1)(λ − μ) = 0 are called conference graphs because of their connection with symmetric conference matrices. Their parameters reduce to srg(v, (1/2)(v − 1), (1/4)(v − 5), (1/4)(v − 1)). Their eigenvalues are r = (−1 + √v)/2 and s = (−1 − √v)/2, both of whose multiplicities are equal to (v − 1)/2. Further, in this case, v must equal the sum of two squares, related to the Bruck–Ryser–Chowla theorem.
Further properties of the eigenvalues and their multiplicities are:
(A − rI)(A − sI) = μJ, therefore (k − r)(k − s) = μv
λ − μ = r + s
k − μ = −rs
k ≥ r
Given an srg(v, k, λ, μ) with eigenvalues r and s, its complement srg(v, v − k − 1, v − 2 − 2k + μ, v − 2k + λ) has eigenvalues −1 − s and −1 − r.
Alternate equations for the multiplicities are f = (s + 1)k(k − s)/(μ(s − r)) and g = (r + 1)k(k − r)/(μ(r − s)).
The frame quotient condition: vk(v − k − 1) = fg(r − s)². As a corollary, v = (r − s)² if and only if {f, g} = {k, v − k − 1} in some order.
Krein conditions:
(v − k − 1)²(k² + r³) ≥ (r + 1)³k² and (v − k − 1)²(k² + s³) ≥ (s + 1)³k²
Absolute bound:
v ≤ f(f + 3)/2 and v ≤ g(g + 3)/2.
Claw bound: if r + 1 > s(s + 1)(μ + 1)/2, then μ = s² or μ = s(s + 1).
If any of the above conditions is violated for a set of parameters, then no strongly regular graph with those parameters exists. Brouwer has compiled lists of parameter sets with their existence or non-existence status, including reasons for non-existence where known. For example, there exists no srg(28, 9, 0, 4), because those parameters violate one of the Krein conditions and one of the absolute bound conditions.
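Several of these necessary conditions can be automated. The sketch below (our own, with tolerances chosen by us for floating-point comparisons) checks the basic relation, the Krein conditions, and the absolute bound; applied to (28, 9, 0, 4) it reports exactly the two violations mentioned, while the Petersen parameters pass:

```python
import math

def feasibility_checks(v, k, lam, mu):
    # Returns the names of the violated conditions (empty list: still
    # possibly feasible). Eigenvalue data is computed from the parameters.
    d = math.sqrt((lam - mu) ** 2 + 4 * (k - mu))
    r = ((lam - mu) + d) / 2
    s = ((lam - mu) - d) / 2
    f = ((v - 1) - (2 * k + (v - 1) * (lam - mu)) / d) / 2
    g = ((v - 1) + (2 * k + (v - 1) * (lam - mu)) / d) / 2
    failed = []
    if (v - k - 1) * mu != k * (k - lam - 1):
        failed.append("basic relation")
    if (v - k - 1) ** 2 * (k ** 2 + r ** 3) < (r + 1) ** 3 * k ** 2 - 1e-6:
        failed.append("Krein (r)")
    if (v - k - 1) ** 2 * (k ** 2 + s ** 3) < (s + 1) ** 3 * k ** 2 - 1e-6:
        failed.append("Krein (s)")
    if v > f * (f + 3) / 2 + 1e-6:
        failed.append("absolute bound (f)")
    if v > g * (g + 3) / 2 + 1e-6:
        failed.append("absolute bound (g)")
    return failed

print(feasibility_checks(28, 9, 0, 4))  # ['Krein (s)', 'absolute bound (g)']
print(feasibility_checks(10, 3, 0, 1))  # [] -- the Petersen graph exists
```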
=== The Hoffman–Singleton theorem ===
As noted above, the multiplicities of the eigenvalues are given by
M± = (1/2)[(v − 1) ± (2k + (v − 1)(λ − μ))/√((λ − μ)² + 4(k − μ))]
which must be integers.
In 1960, Alan Hoffman and Robert Singleton examined these expressions as applied to Moore graphs, which have λ = 0 and μ = 1. Such graphs are free of triangles (otherwise λ would exceed zero) and quadrilaterals (otherwise μ would exceed 1), hence they have a girth (smallest cycle length) of 5. Substituting the values of λ and μ in the equation
(v − k − 1)μ = k(k − λ − 1), it can be seen that v = k² + 1, and the eigenvalue multiplicities reduce to
M± = (1/2)[k² ± (2k − k²)/√(4k − 3)]
For the multiplicities to be integers, the quantity (2k − k²)/√(4k − 3) must be rational, therefore either the numerator 2k − k² is zero or the denominator √(4k − 3) is an integer.
If the numerator 2k − k² is zero, the possibilities are:
k = 0 and v = 1 yields a trivial graph with one vertex and no edges, and
k = 2 and v = 5 yields the 5-vertex cycle graph C5, usually drawn as a regular pentagon.
If the denominator √(4k − 3) is an integer t, then 4k − 3 is a perfect square t², so k = (t² + 3)/4. Substituting:
M± = (1/2)[((t² + 3)/4)² ± ((t² + 3)/2 − ((t² + 3)/4)²)/t]
32M± = (t² + 3)² ± (8(t² + 3) − (t² + 3)²)/t
= t⁴ + 6t² + 9 ± (−t⁴ + 2t² + 15)/t
= t⁴ + 6t² + 9 ± (−t³ + 2t + 15/t)
Since both sides are integers, 15/t must be an integer, therefore t is a factor of 15, namely t ∈ {±1, ±3, ±5, ±15}, therefore k ∈ {1, 3, 7, 57}. In turn:
k = 1 and v = 2 yields a trivial graph of two vertices joined by an edge,
k = 3 and v = 10 yields the Petersen graph,
k = 7 and v = 50 yields the Hoffman–Singleton graph, discovered by Hoffman and Singleton in the course of this analysis, and
k = 57 and v = 3250 predicts a famous graph that, since 1960, has neither been constructed nor proven not to exist.
The Hoffman–Singleton theorem states that there are no strongly regular girth-5 Moore graphs except the ones listed above.
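The admissible degrees can be recovered by brute force. The sketch below (our own search, not from the sources) tests, for each k, whether the multiplicities (k² ± (2k − k²)/√(4k − 3))/2 are integers, covering both the numerator-zero branch (k = 2, the pentagon) and the integer-denominator branch (k = 1, 3, 7, 57):

```python
from fractions import Fraction
import math

def moore_degrees(limit):
    # Degrees k for which a girth-5 Moore graph (lambda = 0, mu = 1,
    # v = k^2 + 1) has integer eigenvalue multiplicities.
    good = []
    for k in range(1, limit):
        disc = 4 * k - 3
        t = math.isqrt(disc)
        if 2 * k - k * k == 0:
            good.append(k)          # numerator zero: k = 2 (the cycle C5)
        elif t * t == disc:
            # multiplicities are (k^2 +/- (2k - k^2)/t) / 2
            m = Fraction(2 * k - k * k, t)
            if m.denominator == 1 and (k * k + m.numerator) % 2 == 0:
                good.append(k)
    return good

print(moore_degrees(1000))  # [1, 2, 3, 7, 57]
```

Here k = 1 is the trivial two-vertex graph and k = 2 the pentagon; the search finds no further degrees, matching the divisibility argument above.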
== See also ==
Partial geometry
Seidel adjacency matrix
Two-graph
== Notes ==
== References ==
Andries Brouwer and Hendrik van Maldeghem (2022), Strongly Regular Graphs. Cambridge: Cambridge University Press. ISBN 1316512037. ISBN 978-1316512036
A.E. Brouwer, A.M. Cohen, and A. Neumaier (1989), Distance Regular Graphs. Berlin, New York: Springer-Verlag. ISBN 3-540-50619-5, ISBN 0-387-50619-5
Chris Godsil and Gordon Royle (2004), Algebraic Graph Theory. New York: Springer-Verlag. ISBN 0-387-95241-1
== External links ==
Eric W. Weisstein, Mathworld article with numerous examples.
Gordon Royle, List of larger graphs and families.
Andries E. Brouwer, Parameters of Strongly Regular Graphs.
Brendan McKay, Some collections of graphs.
Ted Spence, Strongly regular graphs on at most 64 vertices. | Wikipedia/Strongly_regular_graph |
A conceptual graph (CG) is a formalism for knowledge representation. In the first published paper on CGs, John F. Sowa used them to represent the conceptual schemas used in database systems. The first book on CGs applied them to a wide range of topics in artificial intelligence, computer science, and cognitive science.
== Research branches ==
Since 1984, the model has been developed along three main directions: a graphical interface for first-order logic, a diagrammatic calculus of logics, and a graph-based knowledge representation and reasoning model.
=== Graphical interface for first-order logic ===
In this approach, a formula in first-order logic (predicate calculus) is represented by a labeled graph.
A linear notation, called the Conceptual Graph Interchange Format (CGIF), has been standardized in the ISO standard for common logic.
The diagram above is an example of the display form for a conceptual graph. Each box is called a concept node, and each oval is called a relation node. In CGIF, this CG would be represented by the following statement:
[Cat Elsie] [Sitting *x] [Mat *y] (agent ?x Elsie) (location ?x ?y)
In CGIF, brackets enclose the information inside the concept nodes, and parentheses enclose the information inside the relation nodes. The letters x and y, which are called coreference labels, show how the concept and relation nodes are connected. In CLIF, those letters are mapped to variables, as in the following statement:
(exists ((x Sitting) (y Mat)) (and (Cat Elsie) (agent x Elsie) (location x y)))
As this example shows, the asterisks on the coreference labels *x and *y in CGIF map to existentially quantified variables in CLIF, and the question marks on ?x and ?y map to bound variables in CLIF. A universal quantifier, represented @every*z in CGIF, would be represented forall (z) in CLIF.
Reasoning can be done by translating graphs into logical formulas, then applying a logical inference engine.
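As a toy illustration of this translation (not any standard CG tool; the data layout and function name are ours), the example graph above can be stored as concept and relation nodes and rendered as a CLIF-style formula:

```python
# Represent the example conceptual graph: existentially quantified
# concept nodes, one named individual, and two relation nodes.
concepts = {"x": "Sitting", "y": "Mat"}
constants = {"Elsie": "Cat"}
relations = [("agent", "x", "Elsie"), ("location", "x", "y")]

def to_clif(concepts, constants, relations):
    bindings = " ".join(f"({v} {t})" for v, t in concepts.items())
    atoms = [f"({t} {c})" for c, t in constants.items()]
    atoms += [f"({r} {a} {b})" for r, a, b in relations]
    return f"(exists ({bindings}) (and {' '.join(atoms)}))"

print(to_clif(concepts, constants, relations))
# (exists ((x Sitting) (y Mat)) (and (Cat Elsie) (agent x Elsie) (location x y)))
```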
=== Diagrammatic calculus of logics ===
Another research branch continues the work on existential graphs of Charles Sanders Peirce, which were one of the origins of conceptual graphs as proposed by Sowa. In this approach, developed in particular by Dau (Dau 2003), conceptual graphs are conceptual diagrams rather than graphs in the sense of graph theory, and reasoning operations are performed by operations on these diagrams.
=== Graph-based knowledge representation and reasoning model ===
Key features of GBKR, the graph-based knowledge representation and reasoning model developed by Chein and Mugnier and the Montpellier group, can be summarized as follows:
All kinds of knowledge (ontology, rules, constraints and facts) are labeled graphs, which provide an intuitive and easily understandable means to represent knowledge.
Reasoning mechanisms are based on graph notions, basically the classical notion of graph homomorphism; this allows, in particular, basic reasoning problems to be linked to other fundamental problems in computer science (e.g., problems concerning conjunctive queries in relational databases, or constraint satisfaction problems).
The formalism is logically founded, i.e., it has a semantics in first-order logic and the inference mechanisms are sound and complete with respect to deduction in first-order logic.
From a computational viewpoint, the graph homomorphism notion was recognized in the 1990s as a central notion, and complexity results and efficient algorithms have been obtained in several domains.
COGITANT and COGUI are tools that implement the GBKR model. COGITANT is a library of C++ classes that implement most of the GBKR notions and reasoning mechanisms. COGUI is a graphical user interface dedicated to the construction of a GBKR knowledge base (it integrates COGITANT and, among numerous functionalities, it contains a translator from GBKR to RDF/S and conversely).
== See also ==
Alphabet of human thought
Chunking (psychology)
Resource Description Framework (RDF)
SPARQL (Graph Query Language)
Semantic network
== References ==
=== Bibliography ===
Chein, Michel; Mugnier, Marie-Laure (2009). Graph-based Knowledge Representation: Computational Foundations of Conceptual Graphs. Springer. doi:10.1007/978-1-84800-286-9. ISBN 978-1-84800-285-2.
Dau, F. (2003). The Logic System of Concept Graphs with Negation and Its Relationship to Predicate Logic. Lecture Notes in Computer Science. Vol. 2892. Springer.
Sowa, John F. (July 1976). "Conceptual Graphs for a Data Base Interface" (PDF). IBM Journal of Research and Development. 20 (4): 336–357. doi:10.1147/rd.204.0336.
Sowa, John F. (1984). Conceptual Structures: Information Processing in Mind and Machine. Reading, MA: Addison-Wesley. ISBN 978-0-201-14472-7.
Velardi, Paola; Pazienza, Maria Teresa; De' Giovanetti, Mario (March 1988). "Conceptual graphs for the analysis and generation of sentences". IBM Journal of Research and Development. 32 (2). IBM Corp. Riverton, NJ, USA: 251–267. doi:10.1147/rd.322.0251.
== External links ==
Conceptual Graphs Home Page
Annual international conferences (ICCS) at DBLP
Conceptual Graphs on John F. Sowa's Website | Wikipedia/Conceptual_graph |
In the area of mathematics called combinatorial group theory, the Schreier coset graph is a graph associated with a group G, a generating set of G, and a subgroup of G. The Schreier graph encodes the abstract structure of the group modulo an equivalence relation formed by the cosets of the subgroup.
The graph is named after Otto Schreier, who used the term "Nebengruppenbild" (German for "coset picture"). An equivalent definition was made in an early paper of Todd and Coxeter.
== Description ==
Given a group G, a subgroup H ≤ G, and a generating set S = {si : i in I} of G, the Schreier graph Sch(G, H, S) is a graph whose vertices are the right cosets Hg = {hg : h in H} for g in G and whose edges are of the form (Hg, Hgs) for g in G and s in S.
More generally, if X is any G-set, one can define a Schreier graph Sch(G, X, S) of the action of G on X (with respect to the generating set S): its vertices are the elements of X, and its edges are of the form (x, xs) for x in X and s in S. This includes the original Schreier coset graph definition, since H\G is naturally a G-set with respect to multiplication from the right. From an algebraic-topological perspective, the graph Sch(G, X, S) has no distinguished vertex, whereas Sch(G, H, S) has the distinguished vertex H, and is thus a pointed graph.
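The general definition is short enough to sketch in code (an illustration of ours; the encoding of permutations as dicts is a convenience, not standard notation). Vertices are the points of the G-set X, with one edge (x, x·s) per point x and generator s, here for the natural action of S3 on three points:

```python
def schreier_graph(X, generators):
    # One directed edge (x, x.s) for each point x and each generator s,
    # where each generator is a permutation given as a dict on X.
    return [(x, g[x]) for x in X for g in generators]

X = [0, 1, 2]
transposition = {0: 1, 1: 0, 2: 2}   # (0 1)
three_cycle = {0: 1, 1: 2, 2: 0}     # (0 1 2)
edges = schreier_graph(X, [transposition, three_cycle])
print(edges)  # [(0, 1), (0, 1), (1, 0), (1, 2), (2, 2), (2, 0)]
```

Note the loop at 2 and the repeated edge (0, 1): Schreier graphs naturally carry loops and multiple edges. This action of S3 is equivalent to its action on the cosets of a point stabilizer.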
The Cayley graph of the group G itself is the Schreier coset graph for H = {1G} (Gross & Tucker 1987, p. 73).
A spanning tree of a Schreier coset graph corresponds to a Schreier transversal, as in Schreier's subgroup lemma (Conder 2003).
The book "Categories and Groupoids" listed below relates this to the theory of covering morphisms of groupoids. A subgroup H of a group G determines a covering morphism of groupoids
p : K → G
and if S is a generating set for G then its inverse image under p is the Schreier graph of (G, S).
== Applications ==
The graph is useful to understand coset enumeration and the Todd–Coxeter algorithm.
Coset graphs can be used to form large permutation representations of groups and were used by Graham Higman to show that the alternating groups of large enough degree are Hurwitz groups (Conder 2003).
Stallings' core graphs are retracts of Schreier graphs of free groups, and are an essential tool for computing with subgroups of a free group.
Every vertex-transitive graph is a coset graph.
== References ==
Magnus, W.; Karrass, A.; Solitar, D. (1976), Combinatorial Group Theory, Dover
Conder, Marston (2003), "Group actions on graphs, maps and surfaces with maximum symmetry", Groups St. Andrews 2001 in Oxford. Vol. I, London Math. Soc. Lecture Note Ser., vol. 304, Cambridge University Press, pp. 63–91, MR 2051519
Gross, Jonathan L.; Tucker, Thomas W. (1987), Topological graph theory, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York: John Wiley & Sons, ISBN 978-0-471-04926-5, MR 0898434
D'Angeli, Daniele; Donno, Alfredo; Matter, Michel; Nagnibeda, Tatiana, Schreier graphs of the Basilica group
Philip J. Higgins, Categories and Groupoids, van Nostrand, New York, Lecture Notes, 1971, Republished as TAC Reprint, 2005 | Wikipedia/Schreier_coset_graph |
In graph theory, the tensor product G × H of graphs G and H is a graph such that
the vertex set of G × H is the Cartesian product V(G) × V(H); and
vertices (g, h) and (g′, h′) are adjacent in G × H if and only if
g is adjacent to g′ in G, and
h is adjacent to h′ in H.
The tensor product is also called the direct product, Kronecker product, categorical product, cardinal product, relational product, weak direct product, or conjunction. As an operation on binary relations, the tensor product was introduced by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1912). It is also equivalent to the Kronecker product of the adjacency matrices of the graphs.
The notation G × H has also been used (and formerly was standard) for another construction known as the Cartesian product of graphs, but nowadays it more commonly refers to the tensor product. The cross symbol shows visually the two edges resulting from the tensor product of two edges. This product should not be confused with the strong product of graphs.
== Examples ==
The tensor product G × K2 is a bipartite graph, called the bipartite double cover of G. The bipartite double cover of the Petersen graph is the Desargues graph: K2 × G(5,2) = G(10,3). The bipartite double cover of a complete graph Kn is a crown graph (a complete bipartite graph Kn,n minus a perfect matching).
The tensor product of a complete graph with itself is the complement of a rook's graph. Its vertices can be placed in an n-by-n grid, so that each vertex is adjacent to the vertices that are not in the same row or column of the grid.
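The definition translates directly into code. The sketch below (our own construction; graphs are encoded as a vertex list plus a set of frozenset edges) builds G × H and checks the bipartite double cover of K3, which is the 6-cycle C6 (6 vertices, 6 edges, 2-regular):

```python
from itertools import product

def tensor_product(G, H):
    # G and H are (vertex list, set of frozenset edges); same form returned.
    VG, EG = G
    VH, EH = H
    V = list(product(VG, VH))
    E = {frozenset({(g, h), (g2, h2)})
         for (g, h), (g2, h2) in product(V, V)
         if frozenset({g, g2}) in EG and frozenset({h, h2}) in EH}
    return V, E

K3 = ([0, 1, 2], {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})})
K2 = (["a", "b"], {frozenset({"a", "b"})})

V, E = tensor_product(K3, K2)
# The bipartite double cover of K3 is the 6-cycle C6 (its crown graph).
print(len(V), len(E))  # 6 6
```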
== Properties ==
The tensor product is the category-theoretic product in the category of graphs and graph homomorphisms. That is, a homomorphism to G × H corresponds to a pair of homomorphisms to G and to H. In particular, a graph I admits a homomorphism into G × H if and only if it admits a homomorphism into G and into H.
To see that, in one direction, observe that a pair of homomorphisms fG : I → G and fH : I → H yields a homomorphism
f : I → G × H, f(v) = (fG(v), fH(v))
In the other direction, a homomorphism f : I → G × H can be composed with the projection homomorphisms
πG : G × H → G, πG((u, u′)) = u and πH : G × H → H, πH((u, u′)) = u′
to yield homomorphisms to G and to H.
The adjacency matrix of G × H is the Kronecker (tensor) product of the adjacency matrices of G and H.
If a graph can be represented as a tensor product, then there may be multiple different representations (tensor products do not satisfy unique factorization) but each representation has the same number of irreducible factors. Imrich (1998) gives a polynomial time algorithm for recognizing tensor product graphs and finding a factorization of any such graph.
If either G or H is bipartite, then so is their tensor product. G × H is connected if and only if both factors are connected and at least one factor is nonbipartite. In particular the bipartite double cover of G is connected if and only if G is connected and nonbipartite.
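The connectivity criterion for the bipartite double cover can be tested directly. The sketch below (our own, using a depth-first component count) shows that the double cover of the nonbipartite K3 is connected, while that of the bipartite path P3 splits into two components:

```python
from itertools import product

def double_cover_components(V, E):
    # Number of connected components of the bipartite double cover G x K2,
    # where E is a set of frozenset edges on the vertex list V.
    verts = list(product(V, [0, 1]))
    adj = {v: [] for v in verts}
    for (g, i) in verts:
        for (g2, i2) in verts:
            if frozenset({g, g2}) in E and i != i2:
                adj[(g, i)].append((g2, i2))
    seen, comps = set(), 0
    for v in verts:
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return comps

K3_edges = {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}
P3_edges = {frozenset({0, 1}), frozenset({1, 2})}
print(double_cover_components([0, 1, 2], K3_edges))  # 1: K3 is nonbipartite
print(double_cover_components([0, 1, 2], P3_edges))  # 2: P3 is bipartite
```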
The Hedetniemi conjecture, which gave a formula for the chromatic number of a tensor product, was disproved by Yaroslav Shitov (2019).
The tensor product of graphs equips the category of graphs and graph homomorphisms with the structure of a symmetric closed monoidal category. Let G0 denote the underlying set of vertices of the graph G. The internal hom [G, H] has functions f : G0 → H0 as vertices and an edge from f : G0 → H0 to f′ : G0 → H0 whenever an edge {x, y} in G implies {f(x), f′(y)} in H.
== See also ==
Graph product
Strong product of graphs
== Notes ==
== References ==
Brown, R.; Morris, I.; Shrimpton, J.; Wensley, C. D. (2008), "Graphs of Morphisms of Graphs", The Electronic Journal of Combinatorics, 15: A1.
Hahn, Geňa; Sabidussi, Gert (1997), Graph symmetry: algebraic methods and applications, NATO Advanced Science Institutes Series, vol. 497, Springer, p. 116, ISBN 978-0-7923-4668-5.
Imrich, W. (1998), "Factoring cardinal product graphs in polynomial time", Discrete Mathematics, 192: 119–144, doi:10.1016/S0012-365X(98)00069-7, MR 1656730
Imrich, Wilfried; Klavžar, Sandi (2000), Product Graphs: Structure and Recognition, Wiley, ISBN 0-471-37039-8
Shitov, Yaroslav (May 2019), Counterexamples to Hedetniemi's conjecture, arXiv:1905.02167
Weichsel, Paul M. (1962), "The Kronecker product of graphs", Proceedings of the American Mathematical Society, 13 (1): 47–52, doi:10.2307/2033769, JSTOR 2033769, MR 0133816
Whitehead, A. N.; Russell, B. (1912), Principia Mathematica, Cambridge University Press, vol. 2, p. 384
== External links ==
Nicolas Bray. "Graph Tensor Product". MathWorld. | Wikipedia/Tensor_product_of_graphs |