In topology and related areas of mathematics, the set of all possible topologies on a given set forms a partially ordered set. This order relation can be used for comparison of the topologies.
== Definition ==
A topology on a set may be defined as the collection of subsets which are considered to be "open". (An alternative definition is that it is the collection of subsets which are considered "closed". These two ways of defining the topology are essentially equivalent because the complement of an open set is closed and vice versa. In the following, it doesn't matter which definition is used.)
For definiteness the reader should think of a topology as the family of open sets of a topological space, since that is the standard meaning of the word "topology".
Let τ1 and τ2 be two topologies on a set X such that τ1 is contained in τ2:
τ1 ⊆ τ2.
That is, every element of τ1 is also an element of τ2. Then the topology τ1 is said to be a coarser (weaker or smaller) topology than τ2, and τ2 is said to be a finer (stronger or larger) topology than τ1.
If additionally
τ1 ≠ τ2,
we say τ1 is strictly coarser than τ2 and τ2 is strictly finer than τ1.
The binary relation ⊆ defines a partial ordering relation on the set of all possible topologies on X.
== Examples ==
The finest topology on X is the discrete topology; this topology makes all subsets open. The coarsest topology on X is the trivial topology; this topology admits only the empty set and the whole space as open sets.
In function spaces and spaces of measures there are often a number of possible topologies. See topologies on the set of operators on a Hilbert space for some intricate relationships.
All possible polar topologies on a dual pair are finer than the weak topology and coarser than the strong topology.
The complex vector space Cn may be equipped with either its usual (Euclidean) topology or its Zariski topology. In the latter, a subset V of Cn is closed if and only if it consists of all solutions to some system of polynomial equations. Since any such V is also a closed set in the ordinary sense, but not vice versa, the Zariski topology is strictly weaker than the ordinary one.
== Properties ==
Let τ1 and τ2 be two topologies on a set X. Then the following statements are equivalent:
τ1 ⊆ τ2
the identity map idX : (X, τ2) → (X, τ1) is a continuous map.
the identity map idX : (X, τ1) → (X, τ2) is a strongly/relatively open map.
(The identity map idX is surjective and therefore it is strongly open if and only if it is relatively open.)
Two immediate corollaries of the above equivalent statements are
A continuous map f : X → Y remains continuous if the topology on Y becomes coarser or the topology on X finer.
An open (resp. closed) map f : X → Y remains open (resp. closed) if the topology on Y becomes finer or the topology on X coarser.
One can also compare topologies using neighborhood bases. Let τ1 and τ2 be two topologies on a set X and let Bi(x) be a local base for the topology τi at x ∈ X for i = 1,2. Then τ1 ⊆ τ2 if and only if for all x ∈ X, each open set U1 in B1(x) contains some open set U2 in B2(x). Intuitively, this makes sense: a finer topology should have smaller neighborhoods.
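The subset criterion for comparing topologies can be checked mechanically on a small finite set. A minimal Python sketch, modeling topologies as sets of frozensets (the set X = {1, 2, 3} and the topology names are illustrative, not from the text):

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})

def powerset(s):
    return {frozenset(c) for c in chain.from_iterable(
        combinations(list(s), r) for r in range(len(s) + 1))}

trivial = {frozenset(), X}                 # coarsest topology on X
tau = {frozenset(), frozenset({1}), X}     # a topology in between
discrete = powerset(X)                     # finest topology on X

def coarser(t1, t2):
    """t1 is coarser than t2 iff every open set of t1 is also open in t2."""
    return t1 <= t2

print(coarser(trivial, tau), coarser(tau, discrete))   # True True
print(coarser(discrete, tau))                          # False: discrete is strictly finer
```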
== Lattice of topologies ==
The set of all topologies on a set X together with the partial ordering relation ⊆ forms a complete lattice that is also closed under arbitrary intersections. That is, any collection of topologies on X has a meet (or infimum) and a join (or supremum). The meet of a collection of topologies is the intersection of those topologies. The join, however, is not generally the union of those topologies (the union of two topologies need not be a topology) but rather the topology generated by the union.
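The meet and join can be computed explicitly for topologies on a small set. A sketch, again modeling topologies as sets of frozensets (the helper `generate` and the example topologies are illustrative):

```python
X = frozenset({'a', 'b', 'c'})

def generate(subbasis, X):
    """Topology generated by a collection of subsets of X: close the
    collection under pairwise unions and intersections (enough for finite X)."""
    opens = set(subbasis) | {frozenset(), X}
    changed = True
    while changed:
        changed = False
        for U in list(opens):
            for V in list(opens):
                for W in (U & V, U | V):
                    if W not in opens:
                        opens.add(W)
                        changed = True
    return opens

tau1 = {frozenset(), frozenset({'a'}), X}
tau2 = {frozenset(), frozenset({'b'}), X}

meet = tau1 & tau2                   # intersection: always a topology
join = generate(tau1 | tau2, X)      # generated by the union

print(meet == {frozenset(), X})                  # True: the trivial topology
print(frozenset({'a', 'b'}) in (tau1 | tau2))    # False: the bare union misses {a} ∪ {b}
print(frozenset({'a', 'b'}) in join)             # True: the join fills it in
```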
Every complete lattice is also a bounded lattice, which is to say that it has a greatest and least element. In the case of topologies, the greatest element is the discrete topology and the least element is the trivial topology.
The lattice of topologies on a set X is a complemented lattice; that is, given a topology τ on X there exists a topology τ′ on X such that the intersection τ ∩ τ′ is the trivial topology and the topology generated by the union τ ∪ τ′ is the discrete topology.
If the set X has at least three elements, the lattice of topologies on X is not modular, and hence not distributive either.
== See also ==
Initial topology, the coarsest topology on a set to make a family of mappings from that set continuous
Final topology, the finest topology on a set to make a family of mappings into that set continuous
== Notes ==
== References ==
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-181629-9. OCLC 42683260.
In mathematics, a path in a topological space X is a continuous function from a closed interval into X. Paths play an important role in the fields of topology and mathematical analysis.
For example, a topological space for which there exists a path connecting any two points is said to be path-connected. Any space may be broken up into path-connected components. The set of path-connected components of a space X is often denoted π0(X).
One can also define paths and loops in pointed spaces, which are important in homotopy theory. If X is a topological space with basepoint x0, then a path in X is one whose initial point is x0. Likewise, a loop in X is one that is based at x0.
== Definition ==
A curve in a topological space X is a continuous function f : J → X from a non-empty and non-degenerate interval J ⊆ R. A path in X is a curve f : [a, b] → X whose domain [a, b] is a compact non-degenerate interval (meaning a < b are real numbers), where f(a) is called the initial point of the path and f(b) is called its terminal point. A path from x to y is a path whose initial point is x and whose terminal point is y. Every non-degenerate compact interval [a, b] is homeomorphic to [0, 1], which is why a path is sometimes, especially in homotopy theory, defined to be a continuous function f : [0, 1] → X from the closed unit interval I := [0, 1] into X.
An arc or C0-arc in X is a path in X that is also a topological embedding.
Importantly, a path is not just a subset of X that "looks like" a curve, it also includes a parameterization. For example, the maps f(x) = x and g(x) = x2 represent two different paths from 0 to 1 on the real line.
A loop in a space X based at x ∈ X is a path from x to x. A loop may be equally well regarded as a map f : [0, 1] → X with f(0) = f(1) or as a continuous map from the unit circle S1 to X, that is, f : S1 → X. This is because S1 is the quotient space of I = [0, 1] when 0 is identified with 1. The set of all loops in X forms a space called the loop space of X.
== Homotopy of paths ==
Paths and loops are central subjects of study in the branch of algebraic topology called homotopy theory. A homotopy of paths makes precise the notion of continuously deforming a path while keeping its endpoints fixed.
Specifically, a homotopy of paths, or path-homotopy, in X is a family of paths ft : [0, 1] → X indexed by I = [0, 1] such that:
the endpoints ft(0) = x0 and ft(1) = x1 are fixed, and
the map F : [0, 1] × [0, 1] → X given by F(s, t) = ft(s) is continuous.
The paths f0 and f1 connected by a homotopy are said to be homotopic (or more precisely path-homotopic, to distinguish this from the relation defined on all continuous functions between fixed spaces). One can likewise define a homotopy of loops keeping the base point fixed.
The relation of being homotopic is an equivalence relation on paths in a topological space. The equivalence class of a path f under this relation is called the homotopy class of f, often denoted [f].
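In a convex subset of Rn, a concrete path-homotopy between two paths with the same endpoints is the straight-line homotopy F(s, t) = (1 − t)f(s) + t·g(s). A small sketch (the particular paths f and g are illustrative):

```python
def f(s):                 # path along the x-axis from (0, 0) to (1, 0)
    return (s, 0.0)

def g(s):                 # a parabolic path with the same endpoints
    return (s, s * (1 - s))

def F(s, t):
    """Straight-line homotopy: linear interpolation between f and g."""
    return tuple((1 - t) * a + t * b for a, b in zip(f(s), g(s)))

print(F(0.0, 0.5), F(1.0, 0.5))   # (0.0, 0.0) (1.0, 0.0): endpoints stay fixed for every t
print(F(0.5, 0.0) == f(0.5))      # True: t = 0 recovers f
print(F(0.5, 1.0) == g(0.5))      # True: t = 1 recovers g
```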
== Path composition ==
One can compose paths in a topological space in the following manner. Suppose f is a path from x to y and g is a path from y to z. The path fg is defined as the path obtained by first traversing f and then traversing g:
fg(s) = f(2s) for 0 ≤ s ≤ 1/2, and fg(s) = g(2s − 1) for 1/2 ≤ s ≤ 1.
Clearly, path composition is only defined when the terminal point of f coincides with the initial point of g. If one considers all loops based at a point x0, then path composition is a binary operation.
Path composition, whenever defined, is not associative due to the difference in parametrization. However, it is associative up to path-homotopy. That is, [(fg)h] = [f(gh)].
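The composition formula can be sketched directly, and the reparametrization makes the failure of strict associativity visible (the example paths in R are illustrative):

```python
def compose(f, g):
    """Concatenation fg: traverse f on [0, 1/2], then g on [1/2, 1]."""
    def fg(s):
        return f(2 * s) if s <= 0.5 else g(2 * s - 1)
    return fg

f = lambda s: s          # path from 0 to 1
g = lambda s: 1 + s      # path from 1 to 2
h = lambda s: 2 + s      # path from 2 to 3

left = compose(compose(f, g), h)    # (fg)h
right = compose(f, compose(g, h))   # f(gh)

print(left(0.0), left(1.0))         # 0.0 3.0 -- same endpoints
print(right(0.0), right(1.0))       # 0.0 3.0
# ...but different parametrizations: (fg)h has already finished f at s = 1/4.
print(left(0.25), right(0.25))      # 1.0 0.5
```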
Path composition defines a group structure on the set of homotopy classes of loops based at a point x0 in X. The resulting group is called the fundamental group of X based at x0, usually denoted π1(X, x0).
In situations calling for associativity of path composition "on the nose," a path in X may instead be defined as a continuous map from an interval [0, a] to X for any real a ≥ 0. (Such a path is called a Moore path.) A path f of this kind has a length |f| defined as a. Path composition is then defined as before with the following modification:
fg(s) = f(s) for 0 ≤ s ≤ |f|, and fg(s) = g(s − |f|) for |f| ≤ s ≤ |f| + |g|.
Whereas with the previous definition, f, g, and fg all have length 1 (the length of the domain of the map), this definition makes |fg| = |f| + |g|.
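Moore-path composition can be sketched the same way; because lengths add, (fg)h and f(gh) now agree pointwise (the `MoorePath` class is an illustrative construction, not a standard API):

```python
class MoorePath:
    """Illustrative Moore path: a map defined on [0, length]."""
    def __init__(self, func, length):
        self.func, self.length = func, length
    def __call__(self, s):
        return self.func(s)
    def __mul__(self, other):
        """Concatenation: run self on [0, |f|], then other, shifted by |f|."""
        def fg(s):
            return self(s) if s <= self.length else other(s - self.length)
        return MoorePath(fg, self.length + other.length)

f = MoorePath(lambda s: s, 1.0)          # from 0 to 1, length 1
g = MoorePath(lambda s: 1 + s, 2.0)      # from 1 to 3, length 2
h = MoorePath(lambda s: 3 + s, 0.5)      # from 3 to 3.5, length 1/2

left, right = (f * g) * h, f * (g * h)
print(left.length, right.length)                                 # 3.5 3.5
print(all(left(s) == right(s) for s in [0.0, 0.75, 1.5, 3.25]))  # True: same parametrization
```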
What made associativity fail for the previous definition is that although (fg)h and f(gh) have the same length, namely 1, the midpoint of (fg)h occurred between g and h, whereas the midpoint of f(gh) occurred between f and g. With this modified definition, (fg)h and f(gh) have the same length, namely |f| + |g| + |h|, and the same midpoint, found at (|f| + |g| + |h|)/2 in both (fg)h and f(gh); more generally, they have the same parametrization throughout.
== Fundamental groupoid ==
There is a categorical picture of paths which is sometimes useful. Any topological space X gives rise to a category where the objects are the points of X and the morphisms are the homotopy classes of paths. Since any morphism in this category is an isomorphism, this category is a groupoid, called the fundamental groupoid of X. Loops in this category are the endomorphisms (all of which are actually automorphisms). The automorphism group of a point x0 in X is just the fundamental group based at x0. More generally, one can define the fundamental groupoid on any subset A of X, using homotopy classes of paths joining points of A. This is convenient for van Kampen's theorem.
== See also ==
Curve § Topology
Locally path-connected space – Property of topological spaces
Path space (disambiguation)
Path-connected space – Topological space that is connected
== References ==
Ronald Brown, Topology and groupoids, Booksurge PLC, (2006).
J. Peter May, A concise course in algebraic topology, University of Chicago Press, (1999).
James Munkres, Topology (2nd ed.), Prentice Hall, (2000).
In topology, the cartesian product of topological spaces can be given several different topologies. One of the more natural choices is the box topology, where a base is given by the Cartesian products of open sets in the component spaces. Another possibility is the product topology, where a base is also given by the Cartesian products of open sets in the component spaces, but only finitely many of which can be unequal to the entire component space.
While the box topology has a somewhat more intuitive definition than the product topology, it satisfies fewer desirable properties. In particular, if all the component spaces are compact, the box topology on their Cartesian product will not necessarily be compact, although the product topology on their Cartesian product will always be compact. In general, the box topology is finer than the product topology, although the two agree in the case of finite direct products (or when all but finitely many of the factors are trivial).
== Definition ==
Given the (possibly infinite) Cartesian product X := ∏i∈I Xi of the topological spaces Xi, indexed by i ∈ I, the box topology on X is generated by the base
B = { ∏i∈I Ui | Ui open in Xi }.
The name box comes from the case of Rn, in which the basis sets look like boxes. The set ∏i∈I Xi endowed with the box topology is sometimes denoted □i∈I Xi.
== Properties ==
Box topology on Rω:
The box topology is completely regular
The box topology is neither compact nor connected
The box topology is not first countable (hence not metrizable)
The box topology is not separable
The box topology is paracompact (and hence normal and completely regular) if the continuum hypothesis is true
=== Example — failure of continuity ===
The following example is based on the Hilbert cube. Let Rω denote the countable cartesian product of R with itself, i.e. the set of all sequences in R. Equip R with the standard topology and Rω with the box topology. Define:
f : R → Rω, x ↦ (x, x, x, …).
So all the component functions are the identity and hence continuous; however, we will show f is not continuous. To see this, consider the open set
U = ∏n=1∞ (−1/n, 1/n).
Suppose f were continuous. Then, since f(0) = (0, 0, 0, …) ∈ U, there should exist ε > 0 such that (−ε, ε) ⊂ f−1(U). But this would imply that f(ε/2) = (ε/2, ε/2, ε/2, …) ∈ U, which is false, since ε/2 > 1/n for n > 2/ε.
Thus f is not continuous even though all its component functions are.
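The argument can be illustrated numerically by checking membership of (x, x, x, …) in finitely many factors of U (the cutoff `coords` is an artifact of the finite check, not of the mathematics):

```python
import math

def in_U(x, coords=10_000):
    """Check whether (x, x, x, ...) lies in the first `coords` factors
    of U = prod (-1/n, 1/n); a finite stand-in for the full product."""
    return all(abs(x) < 1 / n for n in range(1, coords + 1))

print(in_U(0.0))                     # True: f(0) is in U
for eps in [0.1, 0.01, 0.001]:
    n = math.floor(2 / eps) + 1      # a coordinate with 1/n < eps/2
    print(in_U(eps / 2), eps / 2 > 1 / n)   # False True: f(eps/2) escapes U at coordinate n
```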
=== Example — failure of compactness ===
Consider the countable product X = ∏i∈N Xi where, for each i, Xi = {0, 1} with the discrete topology. The box topology on X will also be the discrete topology. Since discrete spaces are compact if and only if they are finite, we immediately see that X is not compact, even though its component spaces are.
X is not sequentially compact either: consider the sequence (xn)n=1∞ given by
(xn)m = 0 for m < n, and (xn)m = 1 for m ≥ n.
Since no two points in the sequence are the same, the sequence has no limit point, and therefore X is not sequentially compact.
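The sequence (xn) can be sketched directly; truncating each term to finitely many coordinates already shows the terms are pairwise distinct:

```python
def x(n, m):
    """m-th coordinate of the n-th term: 0 for m < n, 1 for m >= n."""
    return 0 if m < n else 1

for n in range(1, 5):                 # first terms, truncated to 5 coordinates
    print([x(n, m) for m in range(1, 6)])
# [1, 1, 1, 1, 1]
# [0, 1, 1, 1, 1]
# [0, 0, 1, 1, 1]
# [0, 0, 0, 1, 1]

# Pairwise distinct: terms n < n' differ in coordinate m = n.
terms = [tuple(x(n, m) for m in range(1, 50)) for n in range(1, 40)]
print(len(set(terms)) == len(terms))  # True
```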
=== Convergence in the box topology ===
Topologies are often best understood by describing how sequences converge. In general, a Cartesian product of a space X with itself over an indexing set S is precisely the space of functions from S to X, denoted ∏s∈S X = XS. The product topology yields the topology of pointwise convergence; sequences of functions converge if and only if they converge at every point of S.
Because the box topology is finer than the product topology, convergence of a sequence in the box topology is a more stringent condition. Assuming X is Hausdorff, a sequence (fn)n of functions in XS converges in the box topology to a function f ∈ XS if and only if it converges pointwise to f and there is a finite subset S0 ⊂ S and an N such that for all n > N the sequence (fn(s))n in X is constant for all s ∈ S ∖ S0. In other words, the sequence (fn(s))n is eventually constant for nearly all s and in a uniform way.
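The contrast between the two convergence notions can be sketched with the illustrative sequence fn(s) = 1/n on a finite index set: it converges pointwise to 0 but is not eventually constant at any coordinate, so the box criterion fails (the finite horizons in the check are artifacts of the sketch):

```python
S = range(10)                          # a finite index set, for illustration
def f_n(n):
    return {s: 1.0 / n for s in S}     # f_n(s) = 1/n at every coordinate

# Pointwise (product-topology) convergence to f = 0 holds:
print(all(abs(f_n(10**6)[s]) < 1e-5 for s in S))    # True

# But (f_n(s))_n is never eventually constant at any coordinate, so the
# box criterion fails no matter which finite exceptional set S0 is chosen:
def eventually_constant(s, N, horizon=50):
    return all(f_n(n)[s] == f_n(N)[s] for n in range(N, N + horizon))

print(any(eventually_constant(s, N) for s in S for N in range(1, 30)))  # False
```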
== Comparison with product topology ==
The basis sets in the product topology have almost the same definition as the above, except with the qualification that all but finitely many Ui are equal to the component space Xi. The product topology satisfies a very desirable property for maps fi : Y → Xi into the component spaces: the product map f: Y → X defined by the component functions fi is continuous if and only if all the fi are continuous. As shown above, this does not always hold in the box topology. This actually makes the box topology very useful for providing counterexamples—many qualities such as compactness, connectedness, metrizability, etc., if possessed by the factor spaces, are not in general preserved in the product with this topology.
== See also ==
Cylinder set
List of topologies
== Notes ==
== References ==
Steen, Lynn A.; Seebach, J. Arthur Jr. (1970). Counterexamples in Topology. Holt, Rinehart and Winston. ISBN 0030794854.
Willard, Stephen (2004). General Topology. Dover Publications. ISBN 0-486-43479-6.
== External links ==
"Box topology". PlanetMath.
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.
Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.
The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.
== Description ==
An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.
The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials.
The sides of a polynomial equation contain one or more terms. For example, the equation
Ax2 + Bx + C − y = 0
has left-hand side Ax2 + Bx + C − y, which has four terms, and right-hand side 0, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).
An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount must be removed from the other pan to keep the scale in balance. More generally, an equation remains balanced if the same operation is performed on each side.
== Properties ==
Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to:
Adding or subtracting the same quantity to both sides of an equation. This shows that every equation is equivalent to an equation in which the right-hand side is zero.
Multiplying or dividing both sides of an equation by a non-zero quantity.
Applying an identity to transform one side of the equation. For example, expanding a product or factoring a sum.
For a system: adding to both sides of an equation the corresponding side of another equation, multiplied by the same quantity.
If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation
x = 1 has the solution x = 1. Raising both sides to the exponent of 2 (which means applying the function f(s) = s2 to both sides of the equation) changes the equation to x2 = 1, which not only has the previous solution but also introduces the extraneous solution x = −1.
Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.
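A brute-force check over a few candidate values makes the extraneous solution visible (the candidate list is illustrative):

```python
candidates = [-2, -1, 0, 1, 2]                     # illustrative finite check
original = [x for x in candidates if x == 1]       # solutions of x = 1
squared = [x for x in candidates if x**2 == 1]     # solutions of x^2 = 1
print(original)   # [1]
print(squared)    # [-1, 1]: x = -1 is extraneous
```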
The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.
== Examples ==
=== Analogous illustration ===
An equation is analogous to a weighing scale, balance, or seesaw.
Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).
In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.
=== Parameters and unknowns ===
Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.
An example of an equation involving x and y as unknowns and the parameter R is
x2 + y2 = R2.
When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius of 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle.
Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax2 + bx + c = 0.
The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.
A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system
3x + 5y = 2
5x + 8y = 3
has the unique solution x = −1, y = 1.
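For a 2 × 2 system like this one, the unique solution can be computed by Cramer's rule; a minimal sketch:

```python
# 3x + 5y = 2 and 5x + 8y = 3, solved by Cramer's rule (2 x 2 case).
a, b, e = 3, 5, 2
c, d, f = 5, 8, 3

det = a * d - b * c          # 3*8 - 5*5 = -1 (nonzero: unique solution)
x = (e * d - b * f) / det    # (2*8 - 5*3) / (-1)
y = (a * f - e * c) / det    # (3*3 - 2*5) / (-1)
print(x, y)                  # -1.0 1.0
```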
=== Identities ===
An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.
In algebra, an example of an identity is the difference of two squares:
x2 − y2 = (x + y)(x − y)
which is true for all x and y.
Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:
sin2(θ) + cos2(θ) = 1 and sin(2θ) = 2 sin(θ) cos(θ),
which are both true for all values of θ.
For example, to solve for the value of θ that satisfies the equation:
3 sin(θ) cos(θ) = 1,
where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:
(3/2) sin(2θ) = 1,
yielding the following solution for θ:
θ = (1/2) arcsin(2/3) ≈ 20.9°.
Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
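The computation above can be reproduced numerically (working in radians and converting for display):

```python
import math

theta = 0.5 * math.asin(2 / 3)                # radians
print(round(math.degrees(theta), 1))           # 20.9
# Check against the original equation 3 sin(θ) cos(θ) = 1:
print(abs(3 * math.sin(theta) * math.cos(theta) - 1) < 1e-12)   # True
```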
== Algebra ==
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.
=== Polynomial equations ===
In general, an algebraic equation or polynomial equation is an equation of the form
P = 0, or P = Q,
where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).
For example,
x5 − 3x + 1 = 0
is a univariate algebraic (polynomial) equation with integer coefficients and
y4 + xy/2 = x3/3 − xy2 + y2 − 1/7
is a multivariate polynomial equation over the rational numbers.
Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.
A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
=== Systems of linear equations ===
A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example,
3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by
{\displaystyle {\begin{alignedat}{2}x&\,=\,&1\\y&\,=\,&-2\\z&\,=\,&-2\end{alignedat}}}
since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually.
In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
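The three-equation system above can be solved numerically; a sketch using NumPy's linear solver (assuming, as holds here, that the coefficient matrix is nonsingular):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above.
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

# Unique solution since det(A) != 0; expected (1, -2, -2).
solution = np.linalg.solve(A, b)
```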
== Geometry ==
=== Analytic geometry ===
In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form
{\displaystyle ax+by+cz+d=0}
, where a, b, c, and d are real numbers and x, y, z are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values a, b, c are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is, as the solution set of a single linear equation with values in ℝ² or as the solution set of two linear equations with values in ℝ³.
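The fact that (a, b, c) is perpendicular to the plane can be checked numerically; a small sketch with hypothetical coefficients a = 1, b = 2, c = 3, d = −6:

```python
import numpy as np

# Hypothetical plane x + 2y + 3z - 6 = 0, so (a, b, c) = (1, 2, 3).
normal = np.array([1.0, 2.0, 3.0])

p1 = np.array([6.0, 0.0, 0.0])    # satisfies the equation: 6 - 6 = 0
p2 = np.array([0.0, 3.0, 0.0])    # satisfies the equation: 6 - 6 = 0

# A vector joining two points of the plane is perpendicular to (a, b, c).
in_plane = p2 - p1
perpendicular = np.dot(normal, in_plane)   # 0.0
```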
A conic section is the intersection of a cone with equation
{\displaystyle x^{2}+y^{2}=z^{2}}
and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the foci of a conic.
The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.
Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.
=== Cartesian equations ===
In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.
One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).
The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4.
=== Parametric equations ===
A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example,
{\displaystyle {\begin{aligned}x&=\cos t\\y&=\sin t\end{aligned}}}
are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
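One can verify symbolically that this parametric representation traces the unit circle, i.e. that every point (cos t, sin t) satisfies the Cartesian equation x² + y² = 1; a sketch with SymPy:

```python
import sympy as sp

t = sp.symbols('t')

# Parametric representation of the unit circle.
x = sp.cos(t)
y = sp.sin(t)

# Every parametric point satisfies the Cartesian equation x**2 + y**2 = 1.
residual = sp.simplify(x**2 + y**2 - 1)   # 0
```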
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).
== Number theory ==
=== Diophantine equations ===
A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is ax + by = c, where a, b, and c are constants. An exponential Diophantine equation is one in which exponents of the terms of the equation can be unknowns.
Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.
The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.
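For the linear case ax + by = c, a solution exists if and only if gcd(a, b) divides c, and one can be found with the extended Euclidean algorithm; a minimal sketch in plain Python (the helper names are illustrative):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """One integer solution of a*x + b*y == c, or None if none exists."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None          # solvable iff gcd(a, b) divides c
    k = c // g
    return x * k, y * k

# 3x + 4y = 5 has integer solutions since gcd(3, 4) = 1 divides 5.
sol = solve_linear_diophantine(3, 4, 5)
```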
=== Algebraic and transcendental numbers ===
An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.
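As a sketch, SymPy can exhibit the defining polynomial of an algebraic number and query whether a number is algebraic (it knows, for instance, that π is not):

```python
import sympy as sp

x = sp.symbols('x')

# sqrt(2) is algebraic: its minimal polynomial over the rationals is x**2 - 2.
min_poly = sp.minimal_polynomial(sp.sqrt(2), x)

# pi is transcendental, i.e. not algebraic.
sqrt2_is_algebraic = sp.sqrt(2).is_algebraic
pi_is_algebraic = sp.pi.is_algebraic
```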
=== Algebraic geometry ===
Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations.
== Differential equations ==
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
=== Ordinary differential equations ===
An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions are obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.
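For simple linear ODEs an exact closed-form solution can be obtained symbolically; a sketch with SymPy's dsolve on the illustrative equation f′(x) = x²:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# The ODE f'(x) = x**2 and its general solution f(x) = x**3/3 + C1.
ode = sp.Eq(f(x).diff(x), x**2)
solution = sp.dsolve(ode, f(x))
```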
=== Partial differential equations ===
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
== Types of equations ==
Equations can be classified according to the types of operations and quantities involved. Important types include:
An algebraic equation or polynomial equation is an equation in which both sides are polynomials (see also system of polynomial equations). These are further classified by degree:
linear equation for degree one
quadratic equation for degree two
cubic equation for degree three
quartic equation for degree four
quintic equation for degree five
sextic equation for degree six
septic equation for degree seven
octic equation for degree eight
A Diophantine equation is an equation where the unknowns are required to be integers
A transcendental equation is an equation involving a transcendental function of its unknowns
A parametric equation is an equation in which the solutions for the variables are expressed as functions of some other variables, called parameters appearing in the equations
A functional equation is an equation in which the unknowns are functions rather than simple quantities
Equations involving derivatives, integrals and finite differences:
A differential equation is a functional equation involving derivatives of the unknown functions, where the function and its derivatives are evaluated at the same point, such as
{\displaystyle f'(x)=x^{2}}
. Differential equations are subdivided into ordinary differential equations for functions of a single variable and partial differential equations for functions of multiple variables
An integral equation is a functional equation involving the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from a differential equation primarily through a change of variable substituting the function by its derivative; however, this is not the case when the integral is taken over an open surface
An integro-differential equation is a functional equation involving both the derivatives and the antiderivatives of the unknown functions. For functions of one variable, such an equation differs from integral and differential equations through a similar change of variable.
A functional differential equation, or delay differential equation, is a functional equation involving derivatives of the unknown functions, evaluated at multiple points, such as
{\displaystyle f'(x)=f(x-2)}
A difference equation is an equation where the unknown is a function f that occurs in the equation through f(x), f(x−1), ..., f(x−k), for some integer k called the order of the equation. If x is restricted to be an integer, a difference equation is the same as a recurrence relation
A stochastic differential equation is a differential equation in which one or more of the terms is a stochastic process
== See also ==
== Notes ==
== References ==
== External links ==
Winplot: General Purpose plotter that can draw and animate 2D and 3D mathematical equations.
Equation plotter: A web page for producing and downloading pdf or postscript plots of the solution sets to equations and inequations in two variables (x and y).
A system of polynomial equations (sometimes simply a polynomial system) is a set of simultaneous equations f1 = 0, ..., fh = 0 where the fi are polynomials in several variables, say x1, ..., xn, over some field k.
A solution of a polynomial system is a set of values for the xi which belong to some algebraically closed field extension K of k, and make all equations true. When k is the field of rational numbers, K is generally assumed to be the field of complex numbers, because each solution belongs to a field extension of k, which is isomorphic to a subfield of the complex numbers.
This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed for being implemented in a computer, emphasis is given on fields k in which computation (including equality testing) is easy and efficient, that is the field of rational numbers and finite fields.
Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of the solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see Diophantine equation.
== Definition ==
A simple example of a system of polynomial equations is
{\displaystyle {\begin{aligned}x^{2}+y^{2}-5&=0\\xy-2&=0.\end{aligned}}}
Its solutions are the four pairs (x, y) = (1, 2), (2, 1), (-1, -2), (-2, -1). These solutions can easily be checked by substitution, but more work is needed for proving that there are no other solutions.
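The four solutions claimed above can be recovered, and the absence of others confirmed, with a computer algebra system; a sketch using SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')

# The system x**2 + y**2 - 5 = 0, x*y - 2 = 0 from the text.
solutions = sp.solve([x**2 + y**2 - 5, x*y - 2], [x, y])
```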
The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions.
A system of polynomial equations, or polynomial system, is a collection of equations
{\displaystyle {\begin{aligned}f_{1}\left(x_{1},\ldots ,x_{m}\right)&=0\\&\;\;\vdots \\f_{n}\left(x_{1},\ldots ,x_{m}\right)&=0,\end{aligned}}}
where each fh is a polynomial in the indeterminates x1, ..., xm, with integer coefficients, or coefficients in some fixed field, often the field of rational numbers or a finite field. Other fields of coefficients, such as the real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers).
A solution of a polynomial system is a tuple of values of (x1, ..., xm) that satisfies all equations of the polynomial system. The solutions are sought in the complex numbers, or more generally in an algebraically closed field containing the coefficients. In particular, in characteristic zero, all complex solutions are sought. Searching for the real or rational solutions are much more difficult problems that are not considered in this article.
The set of solutions is not always finite; for example, the solutions of the system
{\displaystyle {\begin{aligned}x(x-1)&=0\\x(y-1)&=0\end{aligned}}}
are a point (x, y) = (1, 1) and a line x = 0. Even when the solution set is finite, there is, in general, no closed-form expression of the solutions (in the case of a single equation, this is the Abel–Ruffini theorem).
The Barth surface, shown in the figure, is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables. Some of its numerous singular points are visible on the image. They are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an overdetermined system has no solution in general (that is, if the coefficients are not specific). If it has a finite number of solutions, this number is at most 5³ = 125, by Bézout's theorem. However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and this maximum is reached by the Barth surface.
== Basic properties and definitions ==
A system is overdetermined if the number of equations is higher than the number of variables. A system is inconsistent if it has no complex solution (or, if the coefficients are not complex numbers, no solution in an algebraically closed field containing the coefficients). By Hilbert's Nullstellensatz this means that 1 is a linear combination (with polynomials as coefficients) of the first members of the equations. Most but not all overdetermined systems, when constructed with random coefficients, are inconsistent. For example, the system x³ − 1 = 0, x² − 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1.
A system is underdetermined if the number of equations is lower than the number of the variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an algebraically closed field that contains the coefficients of the equations). This is a non-trivial result of commutative algebra that involves, in particular, Hilbert's Nullstellensatz and Krull's principal ideal theorem.
A system is zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the algebraic variety of the solutions has dimension zero. A system with infinitely many solutions is said to be positive-dimensional.
A zero-dimensional system with as many equations as variables is sometimes said to be well-behaved.
Bézout's theorem asserts that a well-behaved system whose equations have degrees d1, ..., dn has at most d1⋅⋅⋅dn solutions. This bound is sharp. If all the degrees are equal to d, this bound becomes dⁿ and is exponential in the number of variables. (The fundamental theorem of algebra is the special case n = 1.)
This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound).
== What is solving? ==
The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive-dimensional. This may be done by the computation of a Gröbner basis of the left-hand sides of the equations. The system is inconsistent if this Gröbner basis is reduced to 1. The system is zero-dimensional if, for every variable, there is a leading monomial of some element of the Gröbner basis which is a pure power of this variable. For this test, the best monomial order (that is, the one which generally leads to the fastest computation) is usually the graded reverse lexicographic one (grevlex).
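Both tests just described (inconsistency and zero-dimensionality) can be sketched with SymPy's Gröbner basis routine; the example systems below are illustrative:

```python
import sympy as sp

x, y = sp.symbols('x y')

# An inconsistent system: its reduced Groebner basis is just {1}.
g_inconsistent = sp.groebner([x*y - 1, x, y], x, y, order='grevlex')

# A zero-dimensional system: the basis has leading monomials that are
# pure powers of each variable, so there are finitely many solutions.
g_zero_dim = sp.groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='grevlex')
finite = g_zero_dim.is_zero_dimensional
```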
If the system is positive-dimensional, it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted such description. In fact there are many different "relevant properties", which involve almost every subfield of algebraic geometry.
A natural example of such a question concerning positive-dimensional systems is the following: decide if a polynomial system over the rational numbers has a finite number of real solutions and compute them. A generalization of this question is to find at least one solution in each connected component of the set of real solutions of a polynomial system. The classical algorithm for solving these questions is cylindrical algebraic decomposition, which has a doubly exponential computational complexity and therefore cannot be used in practice, except for very small examples.
For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called numeric. A solution is certified if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions.
The other way of representing the solutions is said to be algebraic. It uses the fact that, for a zero-dimensional system, the solutions belong to the algebraic closure of the field k of the coefficients of the system. There are several ways to represent the solution in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem.
== Extensions ==
=== Trigonometric equations ===
A trigonometric equation is an equation g = 0 where g is a trigonometric polynomial. Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using sum and difference formulas), replacing sin(x) and cos(x) by two new variables s and c and adding the new equation s² + c² − 1 = 0.
For example, because of the identity
{\displaystyle \cos(3x)=4\cos ^{3}(x)-3\cos(x),}
solving the equation
{\displaystyle \sin ^{3}(x)+\cos(3x)=0}
is equivalent to solving the polynomial system
{\displaystyle {\begin{cases}s^{3}+4c^{3}-3c&=0\\s^{2}+c^{2}-1&=0.\end{cases}}}
For each solution (c0, s0) of this system, there is a unique solution x of the equation such that 0 ≤ x < 2π.
In the case of this simple example, it may be unclear whether the system is, or is not, easier to solve than the equation. On more complicated examples, one lacks systematic methods for solving the equation directly, while software is available for automatically solving the corresponding system.
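The conversion described above can be carried out symbolically; a sketch with SymPy, expanding cos(3x) and then renaming sin(x) and cos(x) to s and c:

```python
import sympy as sp

x, s, c = sp.symbols('x s c')

# Expand cos(3x) via the triple-angle identity, then substitute
# s = sin(x), c = cos(x) to obtain the polynomial system's first equation.
expanded = sp.expand_trig(sp.cos(3*x))          # 4*cos(x)**3 - 3*cos(x)
eq1 = (sp.sin(x)**3 + expanded).subs({sp.sin(x): s, sp.cos(x): c})
eq2 = s**2 + c**2 - 1                           # the Pythagorean constraint
```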
=== Solutions in a finite field ===
When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k. As the elements of k are exactly the solutions of the equation x^q − x = 0, it suffices, for restricting the solutions to k, to add the equation xi^q − xi = 0 for each variable xi.
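A sketch in plain Python: the identity x^q = x for all elements of GF(q) (here q = 5, by Fermat's little theorem), and a brute-force enumeration of the solutions of an illustrative equation over that field:

```python
# Every element of GF(p), p prime, satisfies x**p - x == 0 (Fermat's little
# theorem); adding such an equation per variable restricts solutions to GF(p).
p = 5
fermat_holds = all((a**p - a) % p == 0 for a in range(p))

# Brute force: solutions in GF(5) of the illustrative equation x^2 + y^2 = 1.
solutions = [(x, y) for x in range(p) for y in range(p)
             if (x*x + y*y - 1) % p == 0]
```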
=== Coefficients in a number field or in a finite field with non-prime order ===
The elements of an algebraic number field are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers.
For example, if a system contains √2, a system over the rational numbers is obtained by adding the equation r2² − 2 = 0 and replacing √2 by r2 in the other equations.
In the case of a finite field, the same transformation always allows one to suppose that the field k has prime order.
== Algebraic representation of the solutions ==
=== Regular chains ===
The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f1(x1), f2(x1, x2), ..., fn(x1, ..., xn) such that, for every i such that 1 ≤ i ≤ n
fi is a polynomial in x1, ..., xi only, which has a degree di > 0 in xi;
the coefficient of xi^di in fi is a polynomial in x1, ..., xi−1 which does not have any common zero with f1, ..., fi−1.
To such a regular chain is associated a triangular system of equations
{\displaystyle {\begin{cases}f_{1}(x_{1})=0\\f_{2}(x_{1},x_{2})=0\\\quad \vdots \\f_{n}(x_{1},x_{2},\ldots ,x_{n})=0.\end{cases}}}
The solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from fi has degree di and thus that the system has d1 ... dn solutions, provided that there is no multiple root in this resolution process (fundamental theorem of algebra).
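This back-substitution process can be sketched on a hypothetical two-variable regular chain (f1 in x only, then f2 univariate in y once a root of f1 is substituted for x):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical regular chain: f1 involves x only; f2 becomes univariate in y
# once a root of f1 is substituted for x.
f1 = x**2 - 5*x + 6          # roots x = 2 and x = 3
f2 = y - x

solutions = []
for x0 in sp.roots(f1, x):                   # solve the first, univariate equation
    for y0 in sp.roots(f2.subs(x, x0), y):   # substitute, then solve in y
        solutions.append((x0, y0))
```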
Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed, as is the case for the following system, which has three solutions.
{\displaystyle {\begin{cases}x^{2}-1=0\\(x-1)(y-1)=0\\y^{2}-1=0.\end{cases}}}
There are several algorithms for computing a triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional) into regular chains (or regular semi-algebraic systems).
There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists in computing first the Gröbner basis for the graded reverse lexicographic order (grevlex), then deducing the lexicographical Gröbner basis by FGLM algorithm and finally applying the Lextriangular algorithm.
This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of:
The output may involve huge integers which may make the computation and the use of the result problematic.
To deduce the numeric values of the solutions from the output, one has to solve univariate polynomials with approximate coefficients, which is a highly unstable problem.
The first issue has been solved by Dahan and Schost: Among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called equiprojectable decomposition, depends only on the choice of the coordinates. This allows the use of modular methods for computing efficiently the equiprojectable decomposition.
The second issue is generally solved by outputting regular chains of a special form, sometimes called shape lemma, for which all di but the first one are equal to 1. For getting such regular chains, one may have to add a further variable, called separating variable, which is given the index 0. The rational univariate representation, described below, allows computing such a special regular chain, satisfying Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis.
=== Rational univariate representation ===
The rational univariate representation or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which was introduced by F. Rouillier.
A RUR of a zero-dimensional system consists of a linear combination x0 of the variables, called the separating variable, and a system of equations
{\displaystyle {\begin{cases}h(x_{0})=0\\x_{1}=g_{1}(x_{0})/g_{0}(x_{0})\\\quad \vdots \\x_{n}=g_{n}(x_{0})/g_{0}(x_{0}),\end{cases}}}
where h is a univariate polynomial in x0 of degree D and g0, ..., gn are univariate polynomials in x0 of degree less than D.
Given a zero-dimensional polynomial system over the rational numbers, the RUR has the following properties.
All but finitely many linear combinations of the variables are separating variables.
When the separating variable is chosen, the RUR exists and is unique. In particular h and the gi are defined independently of any algorithm to compute them.
The solutions of the system are in one-to-one correspondence with the roots of h and the multiplicity of each root of h equals the multiplicity of the corresponding solution.
The solutions of the system are obtained by substituting the roots of h in the other equations.
If h does not have any multiple root then g0 is the derivative of h.
For example, for the system in the previous section, every linear combination of the variables, except the multiples of x, y, and x + y, is a separating variable. If one chooses t = (x − y)/2 as the separating variable, then the RUR is
{\displaystyle {\begin{cases}t^{3}-t=0\\x={\frac {t^{2}+2t-1}{3t^{2}-1}}\\y={\frac {t^{2}-2t-1}{3t^{2}-1}}.\\\end{cases}}}
The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference with triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities. The RUR shares with equiprojectable decomposition the property of producing an output with coefficients of relatively small size.
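The RUR above can be verified mechanically: substituting each root of t³ − t = 0 into the two rational functions must yield a solution of the original system, and the separating variable (x − y)/2 must recover the root; a sketch with SymPy:

```python
import sympy as sp

t = sp.symbols('t')

h = t**3 - t                                  # the univariate polynomial of the RUR
gx = (t**2 + 2*t - 1) / (3*t**2 - 1)          # x as a rational function of t
gy = (t**2 - 2*t - 1) / (3*t**2 - 1)          # y as a rational function of t

solutions = []
for t0 in sp.roots(h, t):
    x0 = sp.simplify(gx.subs(t, t0))
    y0 = sp.simplify(gy.subs(t, t0))
    # each root of h must give a solution of the original system ...
    assert x0**2 - 1 == 0 and (x0 - 1)*(y0 - 1) == 0 and y0**2 - 1 == 0
    # ... and the separating variable (x - y)/2 must recover t0
    assert (x0 - y0) / 2 == t0
    solutions.append((x0, y0))
```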
For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by solving a single univariate polynomial and substituting them in rational functions. This allows production of certified approximations of the solutions to any given precision.
Moreover, the univariate polynomial h(x0) of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the prime decomposition of the given ideal (that is the primary decomposition of the radical of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities.
Contrarily to triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension.
== Solving numerically ==
=== General solving algorithms ===
The general numerical algorithms which are designed for any system of nonlinear equations also work for polynomial systems. However, the specific methods will generally be preferred, as the general methods generally do not allow one to find all solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution.
Nevertheless, two methods deserve to be mentioned here.
Newton's method may be used if the number of equations is equal to the number of variables. It does not allow one to find all the solutions nor to prove that there is no solution. But it is very fast when starting from a point which is close to a solution. Therefore, it is a basic tool for the homotopy continuation method described below.
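As a sketch of how Newton's method applies to a square polynomial system, the following iterates x ← x − J⁻¹f(x) for a small 2×2 system; the system and starting point are illustrative assumptions, not taken from the text:

```python
# Newton iteration for the illustrative system
#   f1(x, y) = x^2 + y^2 - 4 = 0
#   f2(x, y) = x*y - 1 = 0
def newton_2x2(x, y, iterations=20):
    for _ in range(iterations):
        f1 = x*x + y*y - 4
        f2 = x*y - 1
        # Jacobian [[2x, 2y], [y, x]]; solve J * delta = f by Cramer's rule.
        det = 2*x*x - 2*y*y
        dx = (f1*x - f2*2*y) / det
        dy = (f2*2*x - f1*y) / det
        x, y = x - dx, y - dy
    return x, y
```

Started close to a solution, the residuals shrink quadratically, which is exactly what makes Newton's method the workhorse of the homotopy continuation described below; started far away, it may diverge or miss solutions entirely.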
Optimization is rarely used for solving polynomial systems, but it succeeded, circa 1970, in showing that a system of 81 quadratic equations in 56 variables is not inconsistent. With the other known methods, this remains beyond the possibilities of modern technology, as of 2022. This method consists simply in minimizing the sum of the squares of the equations. If zero is found as a local minimum, then it is attained at a solution. This method works for overdetermined systems, but provides no information if all the local minima that are found are positive.
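The sum-of-squares idea can be sketched with plain gradient descent; the small system, starting point, and step size below are illustrative assumptions. Driving F to zero certifies a solution; a strictly positive local minimum says nothing:

```python
# Minimize F(x, y) = f1^2 + f2^2 for the illustrative system
#   f1 = x^2 + y^2 - 4,  f2 = x*y - 1
# by plain gradient descent; reaching F = 0 means a solution was found.
def minimize_sum_of_squares(x, y, lr=0.005, steps=50000):
    for _ in range(steps):
        f1 = x*x + y*y - 4
        f2 = x*y - 1
        gx = 2*f1*2*x + 2*f2*y   # dF/dx by the chain rule
        gy = 2*f1*2*y + 2*f2*x   # dF/dy by the chain rule
        x, y = x - lr*gx, y - lr*gy
    return x, y
```
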
=== Homotopy continuation method ===
This is a semi-numeric method which supposes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in the last decades.
This method divides into three steps. First an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by at least four different methods, and the best value, say {\displaystyle N}, is kept.
In the second step, a system {\displaystyle g_{1}=0,\,\ldots ,\,g_{n}=0} of polynomial equations is generated which has exactly {\displaystyle N} solutions that are easy to compute. This new system has the same number {\displaystyle n} of variables, the same number {\displaystyle n} of equations, and the same general structure as the system to solve, {\displaystyle f_{1}=0,\,\ldots ,\,f_{n}=0}.
Then a homotopy between the two systems is considered. It consists, for example, of the straight line between the two systems, but other paths may be considered, in particular to avoid some singularities, in the system {\displaystyle (1-t)g_{1}+tf_{1}=0,\,\ldots ,\,(1-t)g_{n}+tf_{n}=0}.
The homotopy continuation consists in deforming the parameter {\displaystyle t} from 0 to 1 and following the {\displaystyle N} solutions during this deformation. This gives the desired solutions for {\displaystyle t=1}. Following means that, if {\displaystyle t_{1}<t_{2}}, the solutions for {\displaystyle t=t_{2}} are deduced from the solutions for {\displaystyle t=t_{1}} by Newton's method. The difficulty is to choose the value of {\displaystyle t_{2}-t_{1}} well: if it is too large, Newton's method may converge slowly and may even jump from one solution path to another; if it is too small, the number of steps slows down the method.
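A toy version of the scheme in one complex variable may make the three steps concrete; the start polynomial, target polynomial, and step count below are illustrative assumptions. The three roots of g(x) = x³ − 1 (the cube roots of unity) are trivial to write down, and each is tracked along the straight-line homotopy by small increments of t followed by a few Newton corrections:

```python
import cmath

def track_roots(steps=100, newton_iters=5):
    # Start system g(x) = x^3 - 1: its roots are the cube roots of unity.
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    for s in range(1, steps + 1):
        t = s / steps
        # h(x, t) = (1 - t)(x^3 - 1) + t(x^3 - 2x - 5), target f = x^3 - 2x - 5
        h  = lambda x: x**3 - (1 - t) + t * (-2*x - 5)
        dh = lambda x: 3*x**2 - 2*t
        # A few Newton corrections pull each tracked point back onto its path.
        for i, x in enumerate(roots):
            for _ in range(newton_iters):
                x = x - h(x) / dh(x)
            roots[i] = x
    return roots
```

At t = 1 the three tracked points are the three complex roots of the target polynomial, one of them real; with a coarser step the Newton corrections can jump between paths, which is the failure mode the text warns about.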
=== Numerically solving from the rational univariate representation ===
To deduce the numeric values of the solutions from a RUR seems easy: it suffices to compute the roots of the univariate polynomial and to substitute them in the other equations. This is not so easy because the evaluation of a polynomial at the roots of another polynomial is highly unstable.
The roots of the univariate polynomial thus have to be computed at a high precision, which may not be defined once and for all. There are two algorithms which fulfill this requirement.
The Aberth method, implemented in MPSolve, computes all the complex roots to any precision.
Uspensky's algorithm of Collins and Akritas, improved by Rouillier and Zimmermann and based on Descartes' rule of signs. This algorithm computes the real roots, isolated in intervals of arbitrarily small width. It is implemented in Maple (functions fsolve and RootFinding[Isolate]).
== Software packages ==
There are at least four software packages which can solve zero-dimensional systems automatically (by automatically, one means that no human intervention is needed between input and output, and thus that no knowledge of the method by the user is needed). There are also several other software packages which may be useful for solving zero-dimensional systems. Some of them are listed after the automatic solvers.
The Maple function RootFinding[Isolate] takes as input any polynomial system over the rational numbers (if some coefficients are floating point numbers, they are converted to rational numbers) and outputs the real solutions represented either (optionally) as intervals of rational numbers or as floating point approximations of arbitrary precision. If the system is not zero dimensional, this is signaled as an error.
Internally, this solver, designed by F. Rouillier, first computes a Gröbner basis and then a rational univariate representation, from which the required approximations of the solutions are deduced. It works routinely for systems having up to a few hundred complex solutions.
The rational univariate representation may be computed with Maple function Groebner[RationalUnivariateRepresentation].
To extract all the complex solutions from a rational univariate representation, one may use MPSolve, which computes the complex roots of univariate polynomials to any precision. It is recommended to run MPSolve several times, doubling the precision each time, until solutions remain stable, as the substitution of the roots in the equations of the input variables can be highly unstable.
The second solver is PHCpack, written under the direction of J. Verschelde. PHCpack implements the homotopy continuation method. This solver computes the isolated complex solutions of polynomial systems having as many equations as variables.
The third solver is Bertini, written by D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini uses numerical homotopy continuation with adaptive precision. In addition to computing zero-dimensional solution sets, both PHCpack and Bertini are capable of working with positive dimensional solution sets.
The fourth solver is the Maple library RegularChains, written by Marc Moreno-Maza and collaborators. It contains various functions for solving polynomial systems by means of regular chains.
== See also ==
Elimination theory
Systems of polynomial inequalities
Triangular decomposition
Wu's method of characteristic set
== References == | Wikipedia/System_of_polynomial_equations |
In mathematics and computer science, an algorithm ( ) is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).
In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.
As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
== Etymology ==
Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath. Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".
The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225. By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.
== Definition ==
One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure
or cook-book recipe. In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey (1974, 1999) define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
== History ==
=== Ancient algorithms ===
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes in Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD).
The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC describes the earliest division algorithm. During the Hammurabi dynasty c. 1800 – c. 1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus c. 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Examples of ancient Indian mathematics included the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
=== Computers ===
==== Weight-driven clocks ====
Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".
==== Electromechanical relay ====
Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape (c. 1870s) was in use, as were Hollerith cards (c. 1890). Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
=== Formalization ===
In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
== Representations ==
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.
=== Turing machines ===
There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.
=== Flowchart representation ===
The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
== Algorithmic analysis ==
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of {\displaystyle O(n)}, using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of {\displaystyle O(1)}; otherwise {\displaystyle O(n)} is required.
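The list-summing example reads naturally as code: one pass over the input (linear time), while the only stored state is the running total and the loop position (constant extra space):

```python
def sum_list(numbers):
    total = 0              # the only stored value besides the loop position
    for x in numbers:      # one pass over the input: O(n) time
        total += x
    return total
```
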
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost {\displaystyle O(\log n)}) outperforms a sequential search (cost {\displaystyle O(n)}) when used for table lookups on sorted lists or arrays.
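The contrast can be made visible by counting comparisons. Both functions below return the number of loop iterations needed to find the target; on a sorted list of 1024 elements, sequential search may need up to 1024 while binary search needs at most 11:

```python
def sequential_search(a, target):
    """O(n): examine elements one by one, counting comparisons."""
    for steps, x in enumerate(a, start=1):
        if x == target:
            return steps
    return -1

def binary_search(a, target):
    """O(log n) on a sorted list: halve the search range each iteration."""
    lo, hi, steps = 0, len(a) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if a[mid] == target:
            return steps
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```
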
=== Formal versus empirical ===
The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.
=== Execution efficiency ===
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
=== Best Case and Worst Case ===
The best case of an algorithm refers to the scenario or input for which the algorithm or data structure takes the least time and resources to complete its tasks. The worst case of an algorithm is the case that causes the algorithm or data structure to consume the maximum period of time and computational resources.
== Design ==
Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operation research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g., an algorithm's run-time growth as the size of its input increases.
=== Structured programming ===
Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
== Legal status ==
By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
== Classification ==
=== By implementation ===
Recursion
A recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
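The Tower of Hanoi mentioned above has a classic recursive solution: move the top n − 1 disks aside, move the largest disk, then move the n − 1 disks on top of it. The termination condition is the empty tower:

```python
def hanoi(n, source, target, spare, moves):
    """Recursive Tower of Hanoi: move n disks from source to target."""
    if n == 0:
        return                                   # termination condition
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack the rest on top
```

The minimal solution for n disks takes 2ⁿ − 1 moves; an equivalent iterative version exists, as the surrounding text notes, though it is less transparent.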
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decisions at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, the Knapsack problem, where there is a set of items, and the goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value.
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of Quantum computing such as quantum superposition or quantum entanglement.
=== By design paradigm ===
Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are:
Brute-force or exhaustive search
Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
Divide and conquer
A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sorting is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged. A simpler variant of divide and conquer, called prune and search or decrease and conquer, solves one smaller instance of itself and does not require a merge step. An example of a prune and search algorithm is the binary search algorithm.
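Merge sort, the example named above, shows both halves of the paradigm in a few lines: the divide step splits the list, and the conquer step merges two already-sorted halves:

```python
def merge_sort(a):
    """Divide and conquer: split, sort each half recursively, then merge."""
    if len(a) <= 1:
        return a                     # small enough to solve directly
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # divide ...
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # ... and merge
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```
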
Search and enumeration
Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
Randomized algorithm
Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithm for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms:
Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP.
Reduction of complexity
This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
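The selection-via-sorting example reads directly as code: the reduction delegates all the hard work to the sorting algorithm, after which extracting the median is trivial:

```python
def median_by_reduction(xs):
    """Reduce selection to sorting: sort the list (the expensive part,
    O(n log n)), then read off the middle element (the cheap part, O(1))."""
    s = sorted(xs)
    return s[len(s) // 2]
```

The reduction's total cost is dominated by the sort, as the text requires of a good reducing algorithm.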
Back tracking
In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
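The classic illustration of abandoning doomed partial solutions is the N-queens puzzle, sketched here: queens are placed row by row, and any column or diagonal conflict prunes that whole branch of the search:

```python
def count_queens(n, cols=(), diag1=frozenset(), diag2=frozenset()):
    """Backtracking: place queens row by row, abandoning any partial
    placement that attacks a previously placed queen."""
    row = len(cols)
    if row == n:
        return 1                     # a complete valid placement
    total = 0
    for col in range(n):
        if col in cols or row + col in diag1 or row - col in diag2:
            continue                 # conflict: abandon this partial solution
        total += count_queens(n, cols + (col,),
                              diag1 | {row + col}, diag2 | {row - col})
    return total
```
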
=== Optimization problems ===
For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
Linear programming
When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
Dynamic programming
When a problem shows optimal substructures—meaning the optimal solution can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, using the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial.
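The Floyd–Warshall algorithm named above is a compact instance of the paradigm: the table dist holds the subproblem answers, and each intermediate vertex k is allowed in turn, reusing the overlapping shortest-path subproblems instead of recomputing them:

```python
INF = float('inf')

def floyd_warshall(n, edges):
    """Dynamic programming over shortest paths: dist[i][j] is improved
    by allowing each intermediate vertex k in turn."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:                  # directed edge u -> v, weight w
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```
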
The greedy method
Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems, they always find the optimal solution, but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimal spanning trees of graphs without negative cycles. Huffman's, Kruskal's, Prim's, and Sollin's algorithms are greedy algorithms that can solve this optimization problem.
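Kruskal's algorithm, one of those listed, shows the greedy choice plainly: always take the cheapest remaining edge that joins two separate components, and never reconsider. A sketch on an illustrative graph, with a minimal union-find to track components:

```python
def kruskal(n, edges):
    """Greedy minimum spanning tree: repeatedly take the cheapest edge
    that joins two different components. edges are (weight, u, v)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):           # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

For minimum spanning trees this greedy choice happens to be globally optimal; for most other optimization problems a greedy strategy may stop at a local optimum, as the text notes.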
The heuristic method
In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
== Examples ==
One of the simplest algorithms finds the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as:
High-level description:
If a set of numbers is empty, then there is no highest number.
Assume the first number in the set is the largest.
For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest.
When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set.
(Quasi-)formal description:
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
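The high-level description above translates line for line into code; the following Python rendering is one possible such coding:

```python
def largest_number(numbers):
    """Return the largest number in a list, or None for an empty list."""
    if not numbers:
        return None          # an empty set has no highest number
    largest = numbers[0]     # assume the first number is the largest
    for x in numbers[1:]:    # examine each remaining number
        if x > largest:
            largest = x      # a greater number becomes the new largest
    return largest           # no unchecked numbers remain
```
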
== See also ==
== Notes ==
== Bibliography ==
Zaslavsky, C. (1970). Mathematics of the Yoruba People and of Their Neighbors in Southern Nigeria. The Two-Year College Mathematics Journal, 1(2), 76–99. https://doi.org/10.2307/3027363
== Further reading ==
== External links ==
"Algorithm". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Weisstein, Eric W. "Algorithm". MathWorld.
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology
Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Association for Computing Machinery
The Stanford GraphBase – Stanford University
Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures.
For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study.
== Basic idea ==
In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A.
=== Arity ===
An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments (also called infix notation), like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). One way of talking about an algebra, then, is by referring to it as an algebra of a certain type Ω, where Ω is an ordered sequence of natural numbers representing the arity of the operations of the algebra. However, some researchers also allow infinitary operations, such as ⋀_{α∈J} x_α where J is an infinite index set, which is an operation in the algebraic theory of complete lattices.
=== Equations ===
After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y, and z of the set A.
== Varieties ==
A collection of algebraic structures defined by identities is called a variety or equational class.
Restricting one's study to varieties rules out:
quantification, including universal quantification (∀) except before an equation, and existential quantification (∃)
logical connectives other than conjunction (∧)
relations other than equality, in particular inequalities, both a ≠ b and order relations
The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only.
Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope.
The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses of elements are defined for all non-zero elements in a field, so inversion cannot be added to the type).
One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that has finite products. For example, a topological group is just a group in the category of topological spaces.
=== Examples ===
Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way, since the usual definitions often involve quantification or inequalities.
==== Groups ====
As an example, consider the definition of a group. Usually a group is defined in terms of a single binary operation ∗, subject to the axioms:
Associativity (as in the previous section): x ∗ (y ∗ z) = (x ∗ y) ∗ z; formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z.
Identity element: There exists an element e such that for each element x, one has e ∗ x = x = x ∗ e; formally: ∃e ∀x. e∗x=x=x∗e.
Inverse element: The identity element is easily seen to be unique, and is usually denoted by e. Then for each x, there exists an element i such that x ∗ i = e = i ∗ x; formally: ∀x ∃i. x∗i=e=i∗x.
(Some authors also use the "closure" axiom that x ∗ y belongs to A whenever x and y do, but here this is already implied by calling ∗ a binary operation.)
This definition of a group does not immediately fit the point of view of universal algebra, because the axioms of the identity element and inversion are not stated purely in terms of equational laws which hold universally "for all ..." elements, but also involve the existential quantifier "there exists ...". The group axioms can be phrased as universally quantified equations by specifying, in addition to the binary operation ∗, a nullary operation e and a unary operation ~, with ~x usually written as x−1. The axioms become:
Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z.
Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e∗x=x=x∗e.
Inverse element: x ∗ (~x) = e = (~x) ∗ x; formally: ∀x. x∗~x=e=~x∗x.
To summarize, the usual definition has:
a single binary operation (signature (2))
1 equational law (associativity)
2 quantified laws (identity and inverse)
while the universal algebra definition has:
3 operations: one binary, one unary, and one nullary (signature (2, 1, 0))
3 equational laws (associativity, identity, and inverse)
no quantified laws (except outermost universal quantifiers, which are allowed in varieties)
A key point is that the extra operations do not add information, but follow uniquely from the usual definition of a group. Although the usual definition did not uniquely specify the identity element e, an easy exercise shows that it is unique, as is the inverse of each element.
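The three universally quantified equations can be checked exhaustively on a small algebra. The sketch below (a hypothetical example, not from the source) verifies them for the integers modulo 5, with addition as ∗, negation as ~, and 0 as the nullary operation e:

```python
# Exhaustive check of the three group equations for (Z_5, +, -, 0):
# one binary, one unary and one nullary operation, i.e. signature (2, 1, 0).
N = 5
A = range(N)
op  = lambda x, y: (x + y) % N    # binary operation *
inv = lambda x: (-x) % N          # unary operation ~
e   = 0                           # nullary operation (identity element)

associativity = all(op(x, op(y, z)) == op(op(x, y), z)
                    for x in A for y in A for z in A)
identity = all(op(e, x) == x == op(x, e) for x in A)
inverse  = all(op(x, inv(x)) == e == op(inv(x), x) for x in A)

print(associativity, identity, inverse)  # True True True
```

Because the laws are plain equations with outermost universal quantifiers only, checking them on a finite algebra reduces to a finite enumeration, with no search for witnesses of an existential quantifier.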
The universal algebra point of view is well adapted to category theory. For example, when defining a group object in category theory, where the object in question may not be a set, one must use equational laws (which make sense in general categories), rather than quantified laws (which refer to individual elements). Further, the inverse and identity are specified as morphisms in the category. For example, in a topological group, the inverse must not only exist element-wise, but must give a continuous mapping (a morphism). Some authors also require the identity map to be a closed inclusion (a cofibration).
==== Other examples ====
Most algebraic structures are examples of universal algebras.
Rings, semigroups, quasigroups, groupoids, magmas, loops, and others.
Vector spaces over a fixed field and modules over a fixed ring are universal algebras. These have a binary addition and a family of unary scalar multiplication operators, one for each element of the field or ring.
Examples of relational algebras include semilattices, lattices, and Boolean algebras.
== Basic constructions ==
We assume that the type, Ω, has been fixed. Then there are three basic constructions in universal algebra: homomorphic image, subalgebra, and product.
A homomorphism between two algebras A and B is a function h : A → B from the set A to the set B such that, for every operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1, ..., xn)) = fB(h(x1), ..., h(xn)). (Sometimes the subscripts on f are taken off when it is clear from context which algebra the function is from.) For example, if e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well as definitions of certain special kinds of homomorphisms, are listed under Homomorphism. In particular, we can take the homomorphic image of an algebra, h(A).
A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic structures is the cartesian product of the sets with the operations defined coordinatewise.
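The homomorphism condition can be tested directly on a small example. The following sketch (the choice of algebras is an illustrative assumption) checks that reduction modulo 3 is a homomorphism from (Z_6, +, −, 0) to (Z_3, +, −, 0), with one check per operation of the type (2, 1, 0):

```python
# Verify that h(x) = x mod 3 is a homomorphism from (Z_6, +, -, 0) to
# (Z_3, +, -, 0): it must commute with every operation of the type (2, 1, 0).
A, B = range(6), range(3)
h = lambda x: x % 3

into    = all(h(x) in B for x in A)                           # h maps A into B
binary  = all(h((x + y) % 6) == (h(x) + h(y)) % 3 for x in A for y in A)
unary   = all(h((-x) % 6) == (-h(x)) % 3 for x in A)
nullary = h(0) == 0                                           # h(e_A) = e_B

print(into, binary, unary, nullary)  # True True True True
```

The image h(A) = {0, 1, 2} is then itself an algebra of the same type, the homomorphic image mentioned above.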
== Some basic theorems ==
The isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc.
Birkhoff's HSP Theorem, which states that a class of algebras is a variety if and only if it is closed under homomorphic images, subalgebras, and arbitrary direct products.
== Motivations and applications ==
In addition to its unifying approach, universal algebra also gives deep theorems and important examples and counterexamples. It provides a useful framework for those who intend to start the study of new classes of algebras.
It can enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has also provided conceptual clarification; as J.D.H. Smith puts it, "What looks messy and complicated in a particular framework may turn out to be simple and obvious in the proper general one."
In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these classes, but with universal algebra, they can be proven once and for all for every kind of algebraic system.
The 1956 paper by Higgins referenced below has been well followed up for its framework for a range of particular algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially defined, typical examples for this being categories and groupoids. This leads on to the subject of higher-dimensional algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids.
=== Constraint satisfaction problem ===
Universal algebra provides a natural language for the constraint satisfaction problem (CSP). CSP refers to an important class of computational problems where, given a relational algebra A and an existential sentence φ over this algebra, the question is to find out whether φ can be satisfied in A. The algebra A is often fixed, so that CSP_A refers to the problem whose instance is only the existential sentence φ.
It has been proved that every computational problem can be formulated as CSP_A for some algebra A.
For example, the n-coloring problem can be stated as CSP of the algebra ({0, 1, ..., n−1}, ≠), i.e. an algebra with n elements and a single relation, inequality.
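As a minimal illustration (brute force, purely for exposition), the following sketch decides the n-coloring CSP on a tiny graph by enumerating all assignments and testing the inequality constraint on every edge:

```python
from itertools import product

# Brute-force decision procedure: does the graph admit an assignment of
# colors {0, ..., n-1} satisfying the single relation u != v on every edge?
def colorable(vertices, edges, n):
    for assignment in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
print(colorable([0, 1, 2], triangle, 2))  # False: a triangle is not 2-colorable
print(colorable([0, 1, 2], triangle, 3))  # True
```

Each edge contributes one atomic constraint over the inequality relation, so the whole instance is an existential sentence over the algebra ({0, 1, ..., n−1}, ≠).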
== Generalizations ==
Universal algebra has also been studied using the techniques of category theory. In this approach, instead of writing a list of operations and equations obeyed by those operations, one can describe an algebraic structure using categories of a special sort, known as Lawvere theories or more generally algebraic theories. Alternatively, one can describe algebraic structures using monads. The two approaches are closely related, with each having their own advantages.
In particular, every Lawvere theory gives a monad on the category of sets, while any "finitary" monad on the category of sets arises from a Lawvere theory. However, a monad describes algebraic structures within one particular category (for example the category of sets), while algebraic theories describe structure within any of a large class of categories (namely those having finite products).
A more recent development in category theory is operad theory – an operad is a set of operations, similar to a universal algebra, but restricted in that equations are only allowed between expressions with the variables, with no duplication or omission of variables allowed. Thus, rings can be described as the so-called "algebras" of some operad, but not groups, since the law gg−1 = 1 duplicates the variable g on the left side and omits it on the right side. At first this may seem to be a troublesome restriction, but the payoff is that operads have certain advantages: for example, one can hybridize the concepts of ring and vector space to obtain the concept of associative algebra, but one cannot form a similar hybrid of the concepts of group and vector space.
Another development is partial algebra where the operators can be partial functions. Certain partial functions can also be handled by a generalization of Lawvere theories known as "essentially algebraic theories".
Another generalization of universal algebra is model theory, which is sometimes described as "universal algebra + logic".
== History ==
In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.
At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." At the time George Boole's algebra of logic made a strong counterpoint to ordinary number algebra, so the term "universal" served to calm strained sensibilities.
Whitehead's early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole's algebra of logic. Whitehead wrote in his book:
"Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular. The comparative study necessarily presupposes some previous separate study, comparison being impossible without knowledge."
Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred Tarski, Andrzej Mostowski, and their students.
In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff's papers, dealing with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly Maltsev in the 1940s went unnoticed because of the war. Tarski's lecture at the 1950 International Congress of Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others.
In the late 1950s, Edward Marczewski emphasized the importance of free algebras, leading to the publication of more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski, Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others.
Starting with William Lawvere's thesis in 1963, techniques from category theory have become important in universal algebra.
== See also ==
Equational logic
Graph algebra
Term algebra
Clone
Universal algebraic geometry
Simple algebra (universal algebra)
== Footnotes ==
== References ==
== External links ==
Algebra Universalis – a journal dedicated to universal algebra.
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world.
The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems.
== Overview ==
Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem.
Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are, and hence control engineering is often viewed as a subfield of electrical engineering.
Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.
In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a proportional–integral–derivative controller (PID controller) system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved.
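As an illustrative sketch (the gains, plant model and sample time are arbitrary assumptions, not a tuned real-world design), the following loop shows the structure of a discrete PID controller regulating a first-order plant, in the spirit of the cruise-control example:

```python
# Discrete PID loop regulating a first-order plant; all numbers illustrative.
kp, ki, kd = 2.0, 0.5, 0.1     # proportional, integral, derivative gains
dt = 0.1                       # controller sample time (s)
setpoint = 1.0                 # desired output (e.g. normalized speed)

speed, integral = 0.0, 0.0
prev_error = setpoint - speed  # avoids a derivative kick on the first step
for _ in range(300):           # 30 s of simulated time
    error = setpoint - speed
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative   # control effort
    prev_error = error
    speed += (-speed + u) * dt  # first-order plant: dx/dt = -x + u

print(round(speed, 2))  # settles near the setpoint thanks to integral action
```

The integral term is what removes the steady-state error: with proportional action alone the loop above would settle below the setpoint.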
Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.
== History ==
Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the water clock of Ktesibios in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel.
This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788.
In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis.
Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes.
Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, previous to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today.
=== Mathematical modelling ===
David Quinn Mayne (1930–2024) was among the early developers of a rigorous mathematical method for analysing model predictive control (MPC) algorithms. MPC is currently used in tens of thousands of applications and is a core part of the advanced control technology offered by hundreds of process control vendors. Its major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. Mayne's work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives.
== Control systems ==
== Control theory ==
== Education ==
At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses are taught in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering, as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering programmes and departments do exist: for example, Italy has several master's programmes in Automation & Robotics fully specialised in control engineering, and there are the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy, and the Department of Control and Automation Engineering at the Istanbul Technical University.
Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domain, which requires a thorough background in elementary mathematics and Laplace transform, called classical control theory. In linear control, the student does frequency and time domain analysis. Digital control and nonlinear control courses require Z transformation and algebra respectively, and could be said to complete a basic control education.
== Careers ==
A control engineer's career typically starts with a bachelor's degree and can continue through graduate study. Control engineering degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a Control Engineering survey, most respondents worked as control engineers in some form over the course of their careers.
Few positions are titled simply "control engineer"; most are specific roles that bear some resemblance to the overarching discipline. A majority of the control engineers who took the survey in 2019 are system or product designers, or control or instrument engineers. Most of the jobs involve process engineering, production or maintenance, and all are variations of control engineering.
Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. Companies that hire control engineers include Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control engineers can earn around $66k annually at Lockheed Martin Corp. and up to $96k annually at General Motors Corporation. Process control engineers, typically found in refineries and specialty chemical plants, can earn upwards of $90k annually.
In India, control systems engineering is offered at different levels, with diploma, graduate and postgraduate programmes. These programmes require the candidate to have chosen physics, chemistry and mathematics in secondary school, or a relevant bachelor's degree for postgraduate studies.
== Recent advancement ==
Originally, control engineering was all about continuous systems. Development of computer control tools posed a requirement of discrete control system engineering because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent to the Laplace transform in the discrete domain is the Z-transform. Today, many control systems are computer controlled and consist of both digital and analog components.
Therefore, at the design stage either:
Digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or
Analog components are mapped into discrete domain and design is carried out there.
The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous systems components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers.
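The second method can be sketched for a single analog component. Assuming (for illustration) the first-order plant dx/dt = −a·x + u with a zero-order hold on its input, the exact discrete-domain equivalent has pole e^(−a·dt):

```python
import math

# Exact zero-order-hold discretization of the analog plant dx/dt = -a*x + u:
# the continuous pole s = -a maps to the discrete pole z = exp(-a*dt).
a, dt = 2.0, 0.05
phi = math.exp(-a * dt)        # discrete pole
gamma = (1 - phi) / a          # discrete input gain

x = 0.0
for _ in range(200):           # 10 s of simulated time with a constant unit input
    x = phi * x + gamma * 1.0  # discrete-domain recurrence

print(round(x, 4))  # 0.5, the continuous steady state u/a = 1/2
```

The recurrence reproduces the continuous response exactly at the sample instants, which is why this mapping is a common starting point when the design is carried out in the discrete domain.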
Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design and now to computer-automated design or CAD which has been made possible by evolutionary computation. CAD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme.
Resilient control systems extend the traditional focus of addressing only planned disturbances to frameworks and attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.
== See also ==
== References ==
== Further reading ==
D. Q. Mayne (1965). P. H. Hammond (ed.). A Gradient Method for Determining Optimal Control of Nonlinear Stochastic Systems in Proceedings of IFAC Symposium, Theory of Self-Adaptive Control Systems. Plenum Press. pp. 19–27.
Bennett, Stuart (June 1986). A history of control engineering, 1800-1930. IET. ISBN 978-0-86341-047-5.
Bennett, Stuart (1993). A history of control engineering, 1930-1955. IET. ISBN 978-0-86341-299-8.
Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3.
Arnold Zankl (2006). Milestones in Automation: From the Transistor to the Digital Factory. Wiley-VCH. ISBN 978-3-89578-259-6.
Franklin, Gene F.; Powell, J. David; Emami-Naeini, Abbas (2014). Feedback control of dynamic systems (7th ed.). Stanford Cali. U.S.: Pearson. p. 880. ISBN 9780133496598.
== External links ==
Control Labs Worldwide
The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook
Control System Integrators Association
List of control systems integrators
Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG)
Systems Science & Control Engineering: An Open Access Journal
Fractional-order control (FOC) is a field of control theory that uses the fractional-order integrator as part of the control system design toolkit. The use of fractional calculus can improve and generalize well-established control methods and strategies.
The fundamental advantage of FOC is that the fractional-order integrator weights history using a function that decays with a power-law tail, so that the influence of the entire past is taken into account at each iteration of the control algorithm. This creates a "distribution of time constants", the upshot of which is that the system has no single dominant time constant or resonance frequency.
In fact, the fractional integral operator 1/s^λ is different from any integer-order rational transfer function G_I(s), in the sense that it is a non-local operator that possesses an infinite memory and takes into account the whole history of its input signal.
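This power-law memory can be made concrete with the Grünwald–Letnikov weights of the fractional integrator (the recursion below is the standard one; the order λ = 0.5 and the horizon are arbitrary choices for illustration):

```python
# Grünwald–Letnikov weights of the fractional integrator 1/s**lam for
# lam = 0.5 (an arbitrary order in (0, 1) chosen for illustration).
lam = 0.5
w = [1.0]
for k in range(1, 1000):
    w.append(w[-1] * (k - 1 + lam) / k)   # w_k = C(k + lam - 1, k)

# The weights decay like k**(lam - 1), a power law: every past sample keeps
# a nonzero influence, unlike the exponentially fading memory of an
# integer-order filter.
print(w[100] / w[400])  # close to (400 / 100) ** 0.5 = 2
```

An integer-order filter's impulse response would shrink by a fixed factor over every equal time interval; here the ratio between weights depends only on the ratio of their indices, which is exactly the "distribution of time constants" described above.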
Fractional-order control shows promise in many controlled environments that suffer from the classical problems of overshoot and resonance, as well as time diffuse applications such as thermal dissipation and chemical mixing. Fractional-order control has also been demonstrated to be capable of suppressing chaotic behaviors in mathematical models of, for example, muscular blood vessels and robotics.
Initiated in the 1980s by the group of Alain Oustaloup, the CRONE approach is one of the most developed control-system design methodologies that use fractional-order operator properties.
== See also ==
Differintegral
Fractional calculus
Fractional-order system
== External links ==
Dr. YangQuan Chen's latest homepage for the applied fractional calculus (AFC)
Dr. YangQuan Chen's page about fractional calculus on Google Sites
== References ==
A programmable logic controller (PLC) or programmable controller is an industrial computer that has been ruggedized and adapted for the control of manufacturing processes, such as assembly lines, machines, robotic devices, or any activity that requires high reliability, ease of programming, and process fault diagnosis.
PLCs can range from small modular devices with tens of inputs and outputs (I/O), in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for many arrangements of digital and analog I/O, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact.
PLCs were first developed in the automobile manufacturing industry to provide flexible, rugged, and easily programmable controllers to replace hard-wired relay logic systems. Dick Morley, who invented the first PLC, the Modicon 084, for General Motors in 1968, is considered the father of the PLC.
A PLC is an example of a hard real-time system since output results must be produced in response to input conditions within a limited time, otherwise unintended operation may result. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
== Invention and early development ==
The PLC originated in the late 1960s in the automotive industry in the US and was designed to replace relay logic systems. Before, control logic for manufacturing was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers.
The hard-wired nature of these components made it difficult for design engineers to alter the automation process. Changes would require rewiring and careful updating of the documentation. Troubleshooting was a tedious process. When general-purpose computers became available, they were soon applied to control logic in industrial processes. These early computers were unreliable and required specialist programmers and strict control of working conditions, such as temperature, cleanliness, and power quality.
The PLC provided several advantages over earlier automation systems. It was designed to tolerate the industrial environment better than systems intended for office use, and was more reliable, compact, and required less maintenance than relay systems. It was easily expandable with additional I/O modules. While relay systems required tedious and sometimes complicated hardware changes in case of reconfiguration, a PLC can be reconfigured by loading new or modified code. This allowed for easier iteration over manufacturing process design. With a simple programming language focused on logic and switching operations, it was more user-friendly than computers using general-purpose programming languages. Early PLCs were programmed in ladder logic, which strongly resembled a schematic diagram of relay logic. It also permitted its operation to be monitored.
=== Modicon ===
In 1968, GM Hydramatic, the automatic transmission division of General Motors, issued a request for proposals for an electronic replacement for hard-wired relay systems based on a white paper written by engineer Edward R. Clark. The winning proposal came from Bedford Associates from Bedford, Massachusetts. The result, built in 1969, was the first PLC and designated the 084, because it was Bedford Associates' eighty-fourth project.
Bedford Associates started a company dedicated to developing, manufacturing, selling, and servicing this new product, which they named Modicon (standing for modular digital controller). One of the people who worked on that project was Dick Morley, who is considered to be the father of the PLC. The Modicon brand was sold in 1977 to Gould Electronics and later to Schneider Electric, its current owner. About this same time, Modicon created Modbus, a data communications protocol used with its PLCs. Modbus has since become a standard open protocol commonly used to connect many industrial electrical devices.
One of the first 084 models built is now on display at Schneider Electric's facility in North Andover, Massachusetts. It was presented to Modicon by GM, when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until after the 984 made its appearance.
=== Allen-Bradley ===
In a parallel development, Odo Josef Struger is sometimes known as the "father of the programmable logic controller" as well. He was involved in the invention of the Allen-Bradley programmable logic controller and is credited with coining the PLC acronym. Allen-Bradley (now a brand owned by Rockwell Automation) became a major PLC manufacturer in the United States during his tenure. Struger played a leadership role in developing IEC 61131-3 PLC programming language standards.
=== Early methods of programming ===
Many early PLC programming applications were not capable of graphical representation of the logic, and so it was instead represented as a series of logic expressions in some kind of Boolean format, similar to Boolean algebra. As programming terminals evolved, because ladder logic was a familiar format used for electro-mechanical control panels, it became more commonly used. Newer formats, such as state logic, function block diagrams, and structured text exist. Ladder logic remains popular because PLCs solve the logic in a predictable and repeating sequence, and ladder logic allows the person writing the logic to see any issues with the timing of the logic sequence more easily than would be possible in other formats.
Up to the mid-1990s, PLCs were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs. Some proprietary programming terminals displayed the elements of PLC programs as graphic symbols, but plain ASCII character representations of contacts, coils, and wires were common. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were minimal due to a lack of memory capacity. The oldest PLCs used magnetic-core memory.
== Architecture ==
A PLC is an industrial microprocessor-based controller with programmable memory used to store program instructions and various functions. It consists of:
A processor unit (CPU) which interprets inputs, executes the control program stored in memory and sends output signals,
A power supply unit which converts AC voltage to DC,
A memory unit storing data from inputs and program to be executed by the processor,
An input and output interface, where the controller receives and sends data from and to external devices,
A communications interface to receive and transmit data on communication networks from and to remote PLCs.
PLCs require a programming device which is used to develop and later download the created program into the memory of the controller.
Modern PLCs generally contain a real-time operating system, such as OS-9 or VxWorks.
=== Mechanical design ===
There are two types of mechanical design for PLC systems. A single box (also called a brick) is a small programmable controller that fits all units and interfaces into one compact casing, although, typically, additional expansion modules for inputs and outputs are available. The second design type – a modular PLC – has a chassis (also called a rack) that provides space for modules with different functions, such as power supply, processor, selection of I/O modules and communication interfaces – which all can be customized for the particular application. Several racks can be administered by a single processor and may have thousands of inputs and outputs. Either a special high-speed serial I/O link or comparable communication method is used so that racks can be distributed away from the processor, reducing the wiring costs for large plants.
=== Discrete and analog signals ===
Discrete (digital) signals can only take an on or off value (1 or 0, true or false). Examples of devices providing a discrete signal include limit switches and photoelectric sensors.
Analog signals can use voltage or current that is analogous to the monitored variable and can take any value within their scale. Pressure, temperature, flow, and weight are often represented by analog signals. These are typically interpreted as integer values with various ranges of accuracy depending on the device and the number of bits available to store the data. For example, an analog 0 to 10 V or 4-20 mA current loop input would be converted into an integer value of 0 to 32,767. The PLC will take this value and translate it into the desired units of the process so the operator or program can read it.
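The raw-count-to-units translation described above is a simple linear map. A sketch, with illustrative range values (a 4–20 mA input digitized to 0–32,767 counts spanning 0–100 % of the instrument's range):

```python
def raw_to_engineering(raw, raw_min=0, raw_max=32767,
                       eu_min=0.0, eu_max=100.0):
    """Linearly map an analog input's integer count to engineering
    units, e.g. 0-100 % of a flow transmitter's calibrated range.
    The default ranges here are illustrative, not vendor-specific."""
    span = (raw - raw_min) / (raw_max - raw_min)
    return eu_min + span * (eu_max - eu_min)
```

The inverse map is used on the output side when the program writes an engineering-unit value to an analog output channel.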
=== Redundancy ===
Some processes need to run continuously with minimal unplanned downtime, so the control system must be designed to be fault-tolerant. In such cases, to increase system availability in the event of hardware component failure, redundant CPU or I/O modules with the same functionality can be added to the hardware configuration to prevent a total or partial process shutdown. Other redundancy scenarios are related to safety-critical processes: for example, a large hydraulic press could require that two PLCs both turn on an output before the press can come down, in case one PLC does not behave properly.
== Programming ==
Programmable logic controllers are intended to be used by engineers without a programming background. For this reason, a graphical programming language called ladder logic was first developed. It resembles the schematic diagram of a system built with electromechanical relays and was adopted by many manufacturers and later standardized in the IEC 61131-3 control systems programming standard. As of 2015, it is still widely used, thanks to its simplicity.
As of 2015, the majority of PLC systems adhere to the IEC 61131-3 standard, which defines two textual programming languages, Structured Text (similar to Pascal) and Instruction List, as well as three graphical languages: ladder diagram, function block diagram, and sequential function chart. Instruction List was deprecated in the third edition of the standard.
Modern PLCs can be programmed in a variety of ways, from the relay-derived ladder logic to programming languages such as specially adapted dialects of BASIC and C.
While the fundamental concepts of PLC programming are common to all manufacturers, differences in I/O addressing, memory organization, and instruction sets mean that PLC programs are never perfectly interchangeable between different makers. Even within the same product line of a single manufacturer, different models may not be directly compatible.
=== Programming device ===
Manufacturers develop programming software for their PLCs. In addition to being able to program PLCs in multiple languages, they provide common features like hardware diagnostics and maintenance, software debugging, and offline simulation.
PLC programs are typically written in a programming device, which can take the form of a desktop console, special software on a personal computer, or a handheld device. The program is then downloaded to the PLC through a cable connection or over a network. It is stored either in non-volatile flash memory or battery-backed-up RAM on the PLC. In some PLCs, the program is transferred from the programming device using a programming board that writes the program into a removable chip, such as EPROM that is then inserted into the PLC.
=== Simulation ===
An incorrectly programmed PLC can result in lost productivity and dangerous conditions for programmed equipment. PLC simulation is a feature often found in PLC programming software. It allows for testing and debugging early in a project's development. Testing the project in simulation improves its quality, increases the level of safety associated with equipment and can save time during the installation and commissioning of automated control applications since many scenarios can be tried and tested before the system is activated.
== Functionality ==
The main difference compared to most other computing devices is that PLCs are intended for and therefore tolerant of more severe environmental conditions (such as dust, moisture, heat, cold), while offering extensive input/output (I/O) to connect the PLC to sensors and actuators. PLC input can include simple digital elements such as limit switches, analog variables from process sensors (such as temperature and pressure), and more complex data such as that from positioning or machine vision systems. PLC output can include elements such as indicator lamps, sirens, electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a fieldbus or computer network that plugs into the PLC.
The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems, and networking. The data handling, storage, processing power, and communication capabilities of some modern PLCs are approximately equivalent to desktop computers. PLC-like programming combined with remote I/O hardware, allows a general-purpose desktop computer to overlap some PLCs in certain applications. Desktop computer controllers have not been generally accepted in heavy industry because desktop computers run on less stable operating systems than PLCs, and because the desktop computer hardware is typically not designed to the same levels of tolerance to temperature, humidity, vibration, and longevity as the processors used in PLCs. Operating systems such as Windows do not lend themselves to deterministic logic execution, with the result that the controller may not always respond to changes of input status with the consistency in timing expected from PLCs. Desktop logic applications find use in less critical situations, such as laboratory automation and use in small facilities where the application is less demanding and critical.
=== Basic functions ===
The most basic function of a programmable logic controller is to emulate the functions of electromechanical relays. Discrete inputs are given a unique address, and a PLC instruction can test if the input state is on or off. Just as a series of relay contacts perform a logical AND function, not allowing current to pass unless all the contacts are closed, so a series of "examine if on" instructions will energize its output storage bit if all the input bits are on. Similarly, a parallel set of instructions will perform a logical OR. In an electromechanical relay wiring diagram, a group of contacts controlling one coil is called a "rung" of a "ladder diagram", and this concept is also used to describe PLC logic. Some models of PLC limit the number of series and parallel instructions in one "rung" of logic. The output of each rung sets or clears a storage bit, which may be associated with a physical output address or which may be an "internal coil" with no physical connection. Such internal coils can be used, for example, as a common element in multiple separate rungs. Unlike physical relays, there is usually no limit to the number of times an input, output or internal coil can be referenced in a PLC program.
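The series-AND / parallel-OR semantics of a rung can be mimicked in ordinary code. A minimal sketch with a hypothetical tag table; XIC and XIO are the usual mnemonics for "examine if on" and "examine if off" contacts:

```python
def evaluate_rung(branches, state):
    """Evaluate one ladder rung against an input image.  Parallel
    branches are OR'ed together; the contacts within a branch are
    AND'ed in series.  A contact is (tag, normally_open): an XIC
    (examine-if-on) contact passes power when the bit is 1, an XIO
    (examine-if-off) contact passes power when the bit is 0."""
    def contact(tag, normally_open):
        return bool(state[tag]) if normally_open else not state[tag]
    return any(all(contact(t, no) for t, no in branch) for branch in branches)

# Classic motor start/stop seal-in rung:
#   coil Motor = (Start OR Motor) AND NOT Stop
seal_in_rung = [
    [("Start", True), ("Stop", False)],
    [("Motor", True), ("Stop", False)],
]
```

The rung's result would then be written to the `Motor` storage bit, which feeds back into the second branch on the next scan, producing the latching behavior.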
Some PLCs enforce a strict left-to-right, top-to-bottom execution order for evaluating the rung logic. This is different from electro-mechanical relay contacts, which, in a sufficiently complex circuit, may either pass current left-to-right or right-to-left, depending on the configuration of surrounding contacts. The elimination of these "sneak paths" is either a bug or a feature, depending on the programming style.
More advanced instructions of the PLC may be implemented as functional blocks, which carry out some operation when enabled by a logical input and which produce outputs to signal, for example, completion or errors, while manipulating variables internally that may not correspond to discrete logic.
=== Communication ===
PLCs use built-in ports, such as USB, Ethernet, RS-232, RS-485, or RS-422 to communicate with external devices (sensors, actuators) and systems (programming software, SCADA, user interface). Communication is carried over various industrial network protocols, like Modbus, or EtherNet/IP. Many of these protocols are vendor specific.
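As a concrete protocol detail, every Modbus RTU frame ends with a CRC-16 computed with initial value 0xFFFF and the reflected polynomial 0xA001, transmitted low byte first. A sketch:

```python
def modbus_crc16(frame: bytes) -> int:
    """CRC-16 as used by Modbus RTU: init 0xFFFF, reflected
    polynomial 0xA001, no final XOR.  The two CRC bytes are appended
    to the frame low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc
```

A useful property for receivers: running the same computation over a complete frame, CRC bytes included, yields zero, which is how an incoming frame is validated.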
PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to co-ordinate over the communication link. These communication links are also often used for user interface devices such as keypads or PC-type workstations.
Formerly, some manufacturers offered dedicated communication modules as an add-on function where the processor had no network connection built-in.
=== User interface ===
PLCs may need to interact with people for the purpose of configuration, alarm reporting, or everyday control. A human-machine interface (HMI) is employed for this purpose. HMIs are also referred to as man-machine interfaces (MMIs) and graphical user interfaces (GUIs). A simple system may use buttons and lights to interact with the user. Text displays are available as well as graphical touch screens. More complex systems use programming and monitoring software installed on a computer, with the PLC connected via a communication interface.
== Process of a scan cycle ==
A PLC works in a program scan cycle, where it executes its program repeatedly. The simplest scan cycle consists of 3 steps:
Read inputs.
Execute the program.
Write outputs.
The program follows the sequence of instructions. It typically takes a time span of tens of milliseconds for the processor to evaluate all the instructions and update the status of all outputs. If the system contains remote I/O—for example, an external rack with I/O modules—then that introduces additional uncertainty in the response time of the PLC system.
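The three-step cycle above can be sketched as a loop that latches an input image, solves the whole program against that frozen image, and then writes every output at once (the function names here are illustrative):

```python
def plc_scan(read_inputs, program, write_outputs, cycles=1):
    """Minimal model of the PLC scan cycle.  Inputs are latched into
    an image table once per scan, so an input changing mid-scan is
    not seen until the next scan -- one source of the scan-time
    latency discussed above."""
    outputs = {}
    for _ in range(cycles):
        image = read_inputs()        # step 1: read inputs
        outputs = program(image)     # step 2: execute the program
        write_outputs(outputs)       # step 3: write outputs
    return outputs
```

In a real controller this loop also services communications and housekeeping between scans, which is part of why total scan time varies.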
As PLCs became more advanced, methods were developed to change the sequence of ladder execution, and subroutines were implemented.
Special-purpose I/O modules may be used where the scan time of the PLC is too long to allow predictable performance. Precision timing modules, or counter modules for use with shaft encoders, are used where the scan time would be too long to reliably count pulses or detect the sense of rotation of an encoder. This allows even a relatively slow PLC to still interpret the counted values to control a machine, as the accumulation of pulses is done by a dedicated module that is unaffected by the speed of program execution.
== Security ==
In his book from 1998, E. A. Parr pointed out that even though most programmable controllers require physical keys and passwords, the lack of strict access control and version control systems, as well as an easy-to-understand programming language make it likely that unauthorized changes to programs will happen and remain unnoticed.
Prior to the discovery of the Stuxnet computer worm in June 2010, the security of PLCs received little attention. Modern programmable controllers generally contain real-time operating systems, which can be vulnerable to exploits in a similar way as desktop operating systems, like Microsoft Windows. PLCs can also be attacked by gaining control of a computer they communicate with. Since 2011, these concerns have grown – networking is becoming more commonplace in the PLC environment, connecting the previously separated plant floor networks and office networks.
In February 2021, Rockwell Automation publicly disclosed a critical vulnerability affecting its Logix controllers family. The secret cryptographic key used to verify communication between the PLC and workstation could be extracted from the programming software (Studio 5000 Logix Designer) and used to remotely change program code and configuration of a connected controller. The vulnerability was given a severity score of 10 out of 10 on the CVSS vulnerability scale. At the time of writing, the mitigation of the vulnerability was to limit network access to affected devices.
== Safety PLCs ==
Safety PLCs can be either standalone devices or safety-rated hardware and functionality added to existing controller architectures (Allen-Bradley GuardLogix, Siemens F-series, etc.). They differ from conventional PLCs in being suitable for safety-critical applications, for which PLCs have traditionally been supplemented with hard-wired safety relays, and in having areas of memory dedicated to the safety instructions. The applicable standard of safety level is the SIL (safety integrity level).
A safety PLC might be used to control access to a robot cell with trapped-key access, or to manage the shutdown response to an emergency stop button on a conveyor production line. Such PLCs typically have a restricted regular instruction set augmented with safety-specific instructions designed to interface with emergency stop buttons, light screens, and other safety-related devices.
The flexibility that such systems offer has resulted in rapid growth of demand for these controllers.
== PLC compared with other control systems ==
PLCs are well adapted to a range of automation tasks. These are typically industrial processes in manufacturing where the cost of developing and maintaining the automation system is high relative to the total cost of the automation, and where changes to the system would be expected during its operational life. PLCs contain input and output devices compatible with industrial pilot devices and controls; little electrical design is required, and the design problem centers on expressing the desired sequence of operations. PLC applications are typically highly customized systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built controller design. On the other hand, in the case of mass-produced goods, customized control systems are economical. This is due to the lower cost of the components, which can be optimally chosen instead of a "generic" solution, and where the non-recurring engineering charges are spread over thousands or millions of units.
Programmable controllers are widely used in motion, positioning, or torque control. Some manufacturers produce motion control units to be integrated with PLC so that G-code (involving a CNC machine) can be used to instruct machine movements.
=== PLC chip / embedded controller ===
These are for small machines and systems with low or medium volume. They can execute PLC languages such as Ladder, Flow-Chart/Grafcet, etc. They are similar to traditional PLCs, but their small size allows developers to design them into custom printed circuit boards like a microcontroller, without computer programming knowledge, but with a language that is easy to use, modify and maintain. They sit between the classic PLC / micro-PLC and microcontrollers.
=== Microcontrollers ===
A microcontroller-based design would be appropriate where hundreds or thousands of units will be produced and so the development cost (design of power supplies, input/output hardware, and necessary testing and certification) can be spread over many sales, and where the end-user would not need to alter the control. Automotive applications are an example; millions of units are built each year, and very few end-users alter the programming of these controllers. However, some specialty vehicles such as transit buses economically use PLCs instead of custom-designed controls, because the volumes are low and the development cost would be uneconomical.
=== Single-board computers ===
Very complex process control, such as those used in the chemical industry, may require algorithms and performance beyond the capability of even high-performance PLCs. Very high-speed or precision controls may also require customized solutions; for example, aircraft flight controls. Single-board computers using semi-customized or fully proprietary hardware may be chosen for very demanding control applications where the high development and maintenance cost can be supported. "Soft PLCs" running on desktop-type computers can interface with industrial I/O hardware while executing programs within a version of commercial operating systems adapted for process control needs.
The rising popularity of single board computers has also had an influence on the development of PLCs. Traditional PLCs are generally closed platforms, but some newer PLCs (e.g. groov EPIC from Opto 22, ctrlX from Bosch Rexroth, PFC200 from Wago, PLCnext from Phoenix Contact, and Revolution Pi from Kunbus) provide the features of traditional PLCs on an open platform.
=== Programmable logic relays (PLR) ===
In recent years, small products called programmable logic relays (PLRs), or smart relays, have become more common and accepted. These are similar to PLCs and are used in light industry where only a few points of I/O are needed and low cost is desired. These small devices are typically made in a common physical size and shape by several manufacturers and branded by the makers of larger PLCs to fill their low-end product range. Most have 8 to 12 discrete inputs, 4 to 8 discrete outputs, and up to 2 analog inputs. Most such devices include a tiny postage-stamp-sized LCD screen for viewing simplified ladder logic (only a very small portion of the program being visible at a given time) and the status of I/O points, and typically these screens are accompanied by a 4-way rocker push-button plus four more separate push-buttons, similar to the key buttons on a VCR remote control, used to navigate and edit the logic. Most have an RS-232 or RS-485 port for connecting to a PC, so that programmers can use user-friendly software for programming instead of the small LCD and push-button set. Unlike regular PLCs, which are usually modular and greatly expandable, PLRs are usually neither modular nor expandable, but their cost can be significantly lower than that of a PLC, and they still offer robust design and deterministic execution of the logic.
A variant of PLCs, used in remote locations is the remote terminal unit or RTU. An RTU is typically a low power, ruggedized PLC whose key function is to manage the communications links between the site and the central control system (typically SCADA) or in some modern systems, "The Cloud". Unlike factory automation using wired communication protocols such as Ethernet, communications links to remote sites are often radio-based and are less reliable. To account for the reduced reliability, RTU will buffer messages or switch to alternate communications paths. When buffering messages, the RTU will timestamp each message so that a full history of site events can be reconstructed. RTUs, being PLCs, have a wide range of I/O and are fully programmable, typically with languages from the IEC 61131-3 standard that is common to many PLCs, RTUs and DCSs. In remote locations, it is common to use an RTU as a gateway for a PLC, where the PLC is performing all site control and the RTU is managing communications, time-stamping events and monitoring ancillary equipment. On sites with only a handful of I/O, the RTU may also be the site PLC and will perform both communications and control functions.
== See also ==
1-bit computing
Industrial control system
PLC technician
== References ==
=== Bibliography ===
== Further reading ==
Daniel Kandray, Programmable Automation Technologies, Industrial Press, 2010 ISBN 978-0-8311-3346-7, Chapter 8 Introduction to Programmable Logic Controllers
Walker, Mark John (2012-09-08). The Programmable Logic Controller: its prehistory, emergence and application (PDF) (PhD thesis). Department of Communication and Systems Faculty of Mathematics, Computing and Technology: The Open University. Archived (PDF) from the original on 2018-06-20. Retrieved 2018-06-20.
In cybernetics and control theory, a setpoint (SP; also set point) is the desired or target value for an essential variable, or process value (PV) of a control system, which may differ from the actual measured value of the variable. Departure of such a variable from its setpoint is one basis for error-controlled regulation using negative feedback for automatic control. A setpoint can be any physical quantity or parameter that a control system seeks to regulate, such as temperature, pressure, flow rate, position, speed, or any other measurable attribute.
In the context of PID controller, the setpoint represents the reference or goal for the controlled process variable. It serves as the benchmark against which the actual process variable (PV) is continuously compared. The PID controller calculates an error signal by taking the difference between the setpoint and the current value of the process variable. Mathematically, this error is expressed as:
{\displaystyle e(t)=SP-PV(t),}
where {\displaystyle e(t)} is the error at a given time {\displaystyle t}, {\displaystyle SP} is the setpoint, and {\displaystyle PV(t)} is the process variable at time {\displaystyle t}.
The PID controller uses this error signal to determine how to adjust the control output to bring the process variable as close as possible to the setpoint while maintaining stability and minimizing overshoot.
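A minimal discrete-time PID built around the error e(t) = SP − PV(t) might look like the following sketch; the gains, sample time, and plant model are illustrative only, not a tuning recommendation:

```python
class PID:
    """Textbook discrete PID acting on the setpoint error
    e = SP - PV.  Gains kp, ki, kd and sample time dt are
    illustrative parameters."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, sp, pv):
        error = sp - pv                     # e(t) = SP - PV(t)
        self.integral += error * self.dt    # accumulate for the I term
        if self.prev_error is None:
            deriv = 0.0                     # no derivative on first sample
        else:
            deriv = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

Driven against a simple first-order plant, the integral term removes the steady-state offset so PV settles at the setpoint.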
== Examples ==
Cruise control
The {\displaystyle SP-PV} error can be used to return a system to its norm. An everyday example is the cruise control on a road vehicle: external influences such as gradients cause speed changes (PV), and the driver can also alter the desired set speed (SP). The automatic control algorithm restores the actual speed to the desired speed in the optimum way, without delay or overshoot, by altering the power output of the vehicle's engine. In this way the {\displaystyle SP-PV} error is used to control the PV so that it equals the SP. The {\displaystyle SP-PV} error is classically used in the PID controller.
Industrial applications
Special consideration must be given in engineering applications. In industrial systems, physical or process constraints may limit the allowable set point. For example, a reactor that operates more efficiently at higher temperatures may be rated to withstand 500 °C. However, for safety reasons, the set point for the reactor temperature control loop would be kept well below this limit, even if this means the reactor runs less efficiently.
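Enforcing such a limit is typically a simple clamp applied before an operator-requested setpoint reaches the control loop; a sketch with illustrative limit values:

```python
def apply_setpoint_limits(requested_sp, sp_min, sp_max):
    """Clamp an operator-requested setpoint to the safe operating
    window, e.g. keeping a reactor temperature SP well below the
    vessel's temperature rating.  Limits are illustrative."""
    return max(sp_min, min(sp_max, requested_sp))
```

An operator requesting 520 with a safe window of 20 to 450 would have the setpoint limited to 450 before it enters the loop.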
== See also ==
Process control
Proportional–integral–derivative controller
== References ==
In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be operated repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.
Under these assumptions an optimal control scheme within the class of linear control laws can be derived by a completion-of-squares argument. This control law, which is known as the LQG controller, is unique and is simply a combination of a Kalman filter (a linear–quadratic state estimator (LQE)) with a linear–quadratic regulator (LQR). The separation principle states that the state estimator and the state feedback can be designed independently. LQG control applies to both linear time-invariant systems and linear time-varying systems, and constitutes a linear dynamic feedback control law that is easily computed and implemented: the LQG controller itself is a dynamic system, like the system it controls. Both systems have the same state dimension.
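The LQR half of the controller can be computed by iterating the Riccati recursion to its steady state. A sketch in discrete time with an illustrative double-integrator plant (in the full LQG controller this state-feedback gain would act on the Kalman-filter state estimate, per the separation principle):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Steady-state discrete-time LQR gain via backward Riccati
    iteration:  P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA,
    with  K = (R + B'PB)^-1 B'PA  the state-feedback gain."""
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        K = np.linalg.solve(R + B.T @ P @ B, BtPA)
        P = Q + A.T @ P @ A - BtPA.T @ K
    return K

# Double integrator sampled at dt = 0.1 (illustrative numbers)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

The resulting closed-loop matrix A − BK has all eigenvalues inside the unit circle, i.e. the regulated plant is stable.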
A deeper statement of the separation principle is that the LQG controller is still optimal in a wider class of possibly nonlinear controllers. That is, utilizing a nonlinear control scheme will not improve the expected value of the cost function. This version of the separation principle is a special case of the separation principle of stochastic control which states that even when the process and output noise sources are possibly non-Gaussian martingales, as long as the system dynamics are linear, the optimal control separates into an optimal state estimator (which may no longer be a Kalman filter) and an LQR regulator.
In the classical LQG setting, implementation of the LQG controller may be problematic when the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also, the solution is no longer unique. Despite these facts, numerical algorithms are available to solve the associated optimal projection equations, which constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.
LQG optimality does not automatically ensure good robustness properties. The robust stability of the closed loop system must be checked separately after the LQG controller has been designed. To promote robustness some of the system parameters may be assumed stochastic instead of deterministic. The associated more difficult control problem leads to a similar optimal controller of which only the controller parameters are different.
It is possible to compute the expected value of the cost function for the optimal gains, as well as any other set of stable gains.
The LQG controller is also used to control perturbed non-linear systems.
== Mathematical description of the problem and solution ==
=== Continuous time ===
Consider the continuous-time linear dynamic system
{\displaystyle {\dot {\mathbf {x} }}(t)=A(t)\mathbf {x} (t)+B(t)\mathbf {u} (t)+\mathbf {v} (t),}
{\displaystyle \mathbf {y} (t)=C(t)\mathbf {x} (t)+\mathbf {w} (t),}
where x(t) represents the vector of state variables of the system, u(t) the vector of control inputs and y(t) the vector of measured outputs available for feedback. Both additive white Gaussian system noise v(t) and additive white Gaussian measurement noise w(t) affect the system. Given this system the objective is to find the control input history u(t) which at every time t may depend linearly only on the past measurements y(t′), 0 ≤ t′ < t, such that the following cost function is minimized:
{\displaystyle J=\mathbb {E} \left[{\mathbf {x} ^{\mathrm {T} }}(T)F{\mathbf {x} }(T)+\int _{0}^{T}\left({\mathbf {x} ^{\mathrm {T} }}(t)Q(t){\mathbf {x} }(t)+{\mathbf {u} ^{\mathrm {T} }}(t)R(t){\mathbf {u} }(t)\right)dt\right],}
{\displaystyle F\geq 0,\quad Q(t)\geq 0,\quad R(t)>0,}
where E denotes the expected value. The final time (horizon) T may be either finite or infinite. If the horizon tends to infinity the first term xᵀ(T)Fx(T) of the cost function becomes negligible and irrelevant to the problem; also, to keep the costs finite, the cost function has to be taken to be J/T.
The LQG controller that solves the LQG control problem is specified by the following equations:
{\displaystyle {\dot {\hat {\mathbf {x} }}}(t)=A(t){\hat {\mathbf {x} }}(t)+B(t){\mathbf {u} }(t)+L(t)\left({\mathbf {y} }(t)-C(t){\hat {\mathbf {x} }}(t)\right),\quad {\hat {\mathbf {x} }}(0)=\mathbb {E} \left[{\mathbf {x} }(0)\right],}
{\displaystyle {\mathbf {u} }(t)=-K(t){\hat {\mathbf {x} }}(t).}
The matrix L(t) is called the Kalman gain of the associated Kalman filter represented by the first equation. At each time t this filter generates estimates x̂(t) of the state x(t) using the past measurements and inputs. The Kalman gain L(t) is computed from the matrices A(t) and C(t), the two intensity matrices V(t) and W(t) associated with the white Gaussian noises v(t) and w(t), and finally E[x(0)xᵀ(0)]. These five matrices determine the Kalman gain through the following associated matrix Riccati differential equation:
{\displaystyle {\dot {P}}(t)=A(t)P(t)+P(t)A^{\mathrm {T} }(t)-P(t)C^{\mathrm {T} }(t)W^{-1}(t)C(t)P(t)+V(t),}
{\displaystyle P(0)=\mathbb {E} \left[{\mathbf {x} }(0){\mathbf {x} }^{\mathrm {T} }(0)\right].}
Given the solution P(t), 0 ≤ t ≤ T, the Kalman gain equals
{\displaystyle L(t)=P(t)C^{\mathrm {T} }(t)W^{-1}(t).}
The matrix K(t) is called the feedback gain matrix. This matrix is determined by the matrices A(t), B(t), Q(t), R(t) and F through the following associated matrix Riccati differential equation:
{\displaystyle -{\dot {S}}(t)=A^{\mathrm {T} }(t)S(t)+S(t)A(t)-S(t)B(t)R^{-1}(t)B^{\mathrm {T} }(t)S(t)+Q(t),}
{\displaystyle S(T)=F.}
Given the solution S(t), 0 ≤ t ≤ T, the feedback gain equals
{\displaystyle K(t)=R^{-1}(t)B^{\mathrm {T} }(t)S(t).}
Observe the similarity of the two matrix Riccati differential equations, the first one running forward in time, the second one running backward in time. This similarity is called duality. The first matrix Riccati differential equation solves the linear–quadratic estimation problem (LQE). The second matrix Riccati differential equation solves the linear–quadratic regulator problem (LQR). These problems are dual and together they solve the linear–quadratic–Gaussian control problem (LQG). So the LQG problem separates into the LQE and LQR problem that can be solved independently. Therefore, the LQG problem is called separable.
When A(t), B(t), C(t), Q(t), R(t) and the noise intensity matrices V(t), W(t) do not depend on t and when T tends to infinity, the LQG controller becomes a time-invariant dynamic system. In that case the second matrix Riccati differential equation may be replaced by the associated algebraic Riccati equation.
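In the time-invariant case the steady-state gains can be obtained numerically, for example by integrating each Riccati differential equation until it reaches its fixed point, the solution of the corresponding algebraic Riccati equation. The sketch below is a minimal NumPy illustration, not part of the article; the scalar system and all numerical values are assumptions chosen so the answer is easy to verify by hand.

```python
import numpy as np

def steady_riccati(A, G, Q, dt=1e-3, steps=40_000):
    """Integrate dP/dt = A P + P A^T - P G P + Q by Euler steps until it
    settles at the stabilizing solution of the algebraic Riccati equation."""
    P = np.zeros_like(A)
    for _ in range(steps):
        P = P + dt * (A @ P + P @ A.T - P @ G @ P + Q)
    return P

# Illustrative scalar system (all values are assumptions, not from the text):
A = np.array([[0.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
V = np.array([[1.0]]); W = np.array([[1.0]])   # noise intensities
Q = np.array([[1.0]]); R = np.array([[1.0]])   # cost weights

# Estimator ARE: 0 = A P + P A^T - P C^T W^-1 C P + V, then L = P C^T W^-1
P = steady_riccati(A, C.T @ np.linalg.inv(W) @ C, V)
L = P @ C.T @ np.linalg.inv(W)

# Regulator ARE (the dual, note the transposes): 0 = A^T S + S A - S B R^-1 B^T S + Q
S = steady_riccati(A.T, B @ np.linalg.inv(R) @ B.T, Q)
K = np.linalg.inv(R) @ B.T @ S

print(L, K)   # for this scalar example both gains approach [[1.]]
```

Because the two equations are duals, the same integrator serves for both; only the roles of (A, C, V, W) and (Aᵀ, Bᵀ, Q, R) are swapped.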
=== Discrete time ===
Since the discrete-time LQG control problem is similar to the one in continuous-time, the description below focuses on the mathematical equations.
The discrete-time linear system equations are
{\displaystyle {\mathbf {x} }_{i+1}=A_{i}\mathbf {x} _{i}+B_{i}\mathbf {u} _{i}+\mathbf {v} _{i},}
{\displaystyle \mathbf {y} _{i}=C_{i}\mathbf {x} _{i}+\mathbf {w} _{i}.}
Here i represents the discrete time index and v_i, w_i represent discrete-time Gaussian white noise processes with covariance matrices V_i, W_i, respectively, which are independent of each other.
The quadratic cost function to be minimized is
{\displaystyle J=\mathbb {E} \left[{\mathbf {x} }_{N}^{\mathrm {T} }F{\mathbf {x} }_{N}+\sum _{i=0}^{N-1}(\mathbf {x} _{i}^{\mathrm {T} }Q_{i}\mathbf {x} _{i}+\mathbf {u} _{i}^{\mathrm {T} }R_{i}\mathbf {u} _{i})\right],}
{\displaystyle F\geq 0,\quad Q_{i}\geq 0,\quad R_{i}>0.}
The discrete-time LQG controller is
{\displaystyle {\hat {\mathbf {x} }}_{i+1}=A_{i}{\hat {\mathbf {x} }}_{i}+B_{i}{\mathbf {u} }_{i}+L_{i+1}\left({\mathbf {y} }_{i+1}-C_{i+1}\left\{A_{i}{\hat {\mathbf {x} }}_{i}+B_{i}\mathbf {u} _{i}\right\}\right),\qquad {\hat {\mathbf {x} }}_{0}=\mathbb {E} [{\mathbf {x} }_{0}],}
{\displaystyle \mathbf {u} _{i}=-K_{i}{\hat {\mathbf {x} }}_{i},}
and x̂_i corresponds to the predictive estimate {\displaystyle {\hat {\mathbf {x} }}_{i}=\mathbb {E} [\mathbf {x} _{i}|\mathbf {y} ^{i},\mathbf {u} ^{i-1}]}.
The Kalman gain equals
{\displaystyle L_{i}=P_{i}C_{i}^{\mathrm {T} }(C_{i}P_{i}C_{i}^{\mathrm {T} }+W_{i})^{-1},}
where P_i is determined by the following matrix Riccati difference equation that runs forward in time:
{\displaystyle P_{i+1}=A_{i}\left(P_{i}-P_{i}C_{i}^{\mathrm {T} }\left(C_{i}P_{i}C_{i}^{\mathrm {T} }+W_{i}\right)^{-1}C_{i}P_{i}\right)A_{i}^{\mathrm {T} }+V_{i},\qquad P_{0}=\mathbb {E} [\left({\mathbf {x} }_{0}-{\hat {\mathbf {x} }}_{0}\right)\left({\mathbf {x} }_{0}-{\hat {\mathbf {x} }}_{0}\right)^{\mathrm {T} }].}
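The forward recursion above translates directly into code. The following is a minimal NumPy sketch, not from the article; the scalar matrices are illustrative assumptions chosen so the first step can be checked by hand.

```python
import numpy as np

def kalman_gains(A, C, V, W, P0, N):
    """Run the forward Riccati difference equation and return the gains
    L_1..L_N together with the error covariances P_0..P_N."""
    P, Ps, Ls = P0, [P0], []
    for _ in range(N):
        # P_{i+1} = A (P_i - P_i C^T (C P_i C^T + W)^{-1} C P_i) A^T + V
        innov = C @ P @ C.T + W
        P = A @ (P - P @ C.T @ np.linalg.inv(innov) @ C @ P) @ A.T + V
        Ps.append(P)
        # L_{i+1} = P_{i+1} C^T (C P_{i+1} C^T + W)^{-1}
        Ls.append(P @ C.T @ np.linalg.inv(C @ P @ C.T + W))
    return Ls, Ps

# Illustrative scalar example (numbers are assumptions, not from the text):
A = C = V = W = P0 = np.array([[1.0]])
Ls, Ps = kalman_gains(A, C, V, W, P0, N=1)
print(Ps[1], Ls[0])   # P_1 = [[1.5]], L_1 = [[0.6]]
```

By hand: P₁ = (1 − 1·½·1) + 1 = 1.5 and L₁ = 1.5/(1.5 + 1) = 0.6, matching the printed values.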
The feedback gain matrix equals
{\displaystyle K_{i}=(B_{i}^{\mathrm {T} }S_{i+1}B_{i}+R_{i})^{-1}B_{i}^{\mathrm {T} }S_{i+1}A_{i},}
where S_i is determined by the following matrix Riccati difference equation that runs backward in time:
{\displaystyle S_{i}=A_{i}^{\mathrm {T} }\left(S_{i+1}-S_{i+1}B_{i}\left(B_{i}^{\mathrm {T} }S_{i+1}B_{i}+R_{i}\right)^{-1}B_{i}^{\mathrm {T} }S_{i+1}\right)A_{i}+Q_{i},\quad S_{N}=F.}
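The backward recursion is coded analogously, stepping from S_N = F down to S_0 and emitting a gain at each step. This is a minimal NumPy sketch, not from the article; the scalar values are illustrative assumptions.

```python
import numpy as np

def lqr_gains(A, B, Q, R, F, N):
    """Run the backward Riccati difference equation from S_N = F and
    return the feedback gains K_0..K_{N-1}."""
    S = F
    Ks = [None] * N
    for i in range(N - 1, -1, -1):
        inv = np.linalg.inv(B.T @ S @ B + R)
        # K_i = (B^T S_{i+1} B + R)^{-1} B^T S_{i+1} A
        Ks[i] = inv @ B.T @ S @ A
        # S_i = A^T (S_{i+1} - S_{i+1} B (B^T S_{i+1} B + R)^{-1} B^T S_{i+1}) A + Q
        S = A.T @ (S - S @ B @ inv @ B.T @ S) @ A + Q
    return Ks

# Illustrative scalar example (numbers are assumptions, not from the text):
A = B = Q = R = F = np.array([[1.0]])
Ks = lqr_gains(A, B, Q, R, F, N=2)
print(Ks)   # K_1 = [[0.5]], K_0 = [[0.6]]
```

By hand: S₂ = 1 gives K₁ = 1/2; then S₁ = 1.5 gives K₀ = 1.5/2.5 = 0.6, which mirrors the Kalman-gain numbers in the dual estimation example.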
If all the matrices in the problem formulation are time-invariant and if the horizon N tends to infinity, the discrete-time LQG controller becomes time-invariant. In that case the matrix Riccati difference equations may be replaced by their associated discrete-time algebraic Riccati equations. These determine the time-invariant linear–quadratic estimator and the time-invariant linear–quadratic regulator in discrete time. To keep the costs finite, instead of J one has to consider J/N in this case.
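One simple way to solve the discrete-time algebraic Riccati equation is to iterate the backward recursion with constant matrices until it stops changing. A small NumPy sketch, under assumed scalar values (for this particular choice the fixed point happens to be the golden ratio, which makes the result easy to check):

```python
import numpy as np

# Iterate the backward Riccati recursion with constant matrices until it
# converges to the fixed point of the discrete-time algebraic Riccati equation.
# Scalar example with A = B = Q = R = 1 (values are assumptions):
A = B = Q = R = np.array([[1.0]])
S = np.zeros((1, 1))
for _ in range(100):
    inv = np.linalg.inv(B.T @ S @ B + R)
    S = A.T @ (S - S @ B @ inv @ B.T @ S) @ A + Q
K = np.linalg.inv(B.T @ S @ B + R) @ B.T @ S @ A

print(S, K)   # S -> (1 + sqrt(5))/2 ≈ 1.618, K -> (sqrt(5) - 1)/2 ≈ 0.618
```

The fixed point satisfies S = S/(S + 1) + 1, i.e. S² − S − 1 = 0, whose positive root is (1 + √5)/2; the stationary gain is K = S/(S + 1) = 1/S.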
== See also ==
Stochastic control
Separation principle in stochastic control
Witsenhausen's counterexample
== References ==
== Further reading ==
Stengel, Robert F. (1994). Optimal Control and Estimation. New York: Dover. ISBN 0-486-68200-5.
Industrial process control (IPC) or simply process control is a system used in modern manufacturing which uses the principles of control theory and physical industrial control systems to monitor, control and optimize continuous industrial production processes using control algorithms. This ensures that the industrial machines run smoothly and safely in factories and efficiently use energy to transform raw materials into high-quality finished products with reliable consistency while reducing energy waste and economic costs, something which could not be achieved purely by human manual control.
In IPC, control theory provides the theoretical framework to understand system dynamics, predict outcomes and design control strategies to ensure predetermined objectives, utilizing concepts like feedback loops, stability analysis and controller design. On the other hand, the physical apparatus of IPC, based on automation technologies, consists of several components. Firstly, a network of sensors continuously measures various process variables (such as temperature, pressure, etc.) and product quality variables. A programmable logic controller (PLC, for smaller, less complex processes) or a distributed control system (DCS, for large-scale or geographically dispersed processes) analyzes this sensor data transmitted to it, compares it to predefined setpoints using a set of instructions or a mathematical model called the control algorithm and then, in case of any deviation from these setpoints (e.g., temperature exceeding setpoint), makes quick corrective adjustments through actuators such as valves (e.g. cooling valve for temperature control), motors or heaters to guide the process back to the desired operational range. This creates a continuous closed-loop cycle of measurement, comparison, control action, and re-evaluation which guarantees that the process remains within established parameters. The HMI (Human-Machine Interface) acts as the "control panel" for the IPC system, where a small number of human operators can monitor the process and make informed decisions regarding adjustments. IPCs can range from controlling the temperature and level of a single process vessel (a controlled tank for mixing, separating, reacting, or storing materials in industrial processes) to a complete chemical processing plant with several thousand control feedback loops.
IPC provides several critical benefits to manufacturing companies. By maintaining a tight control over key process variables, it helps reduce energy use, minimize waste and shorten downtime for peak efficiency and reduced costs. It ensures consistent and improved product quality with little variability, which satisfies the customers and strengthens the company's reputation. It improves safety by detecting and alerting human operators about potential issues early, thus preventing accidents, equipment failures, process disruptions and costly downtime. Analyzing trends and behaviors in the vast amounts of data collected real-time helps engineers identify areas of improvement, refine control strategies and continuously enhance production efficiency using a data-driven approach.
IPC is used across a wide range of industries where precise control is important. The applications can range from controlling the temperature and level of a single process vessel, to a complete chemical processing plant with several thousand control loops. In automotive manufacturing, IPC ensures consistent quality by meticulously controlling processes like welding and painting. Mining operations are optimized with IPC monitoring ore crushing and adjusting conveyor belt speeds for maximum output. Dredging benefits from precise control of suction pressure, dredging depth and sediment discharge rate by IPC, ensuring efficient and sustainable practices. Pulp and paper production leverages IPC to regulate chemical processes (e.g., pH and bleach concentration) and automate paper machine operations to control paper sheet moisture content and drying temperature for consistent quality. In chemical plants, it ensures the safe and efficient production of chemicals by controlling temperature, pressure and reaction rates. Oil refineries use it to smoothly convert crude oil into gasoline and other petroleum products. In power plants, it helps maintain stable operating conditions necessary for a continuous electricity supply. In food and beverage production, it helps ensure consistent texture, safety and quality. Pharmaceutical companies rely on it to produce life-saving drugs safely and effectively. The development of large industrial process control systems has been instrumental in enabling the design of large high volume and complex processes, which could not be otherwise economically or safely operated.
== History ==
Early process control breakthroughs came most frequently in the form of water control devices. Ktesibios of Alexandria is credited for inventing float valves to regulate water level of water clocks in the 3rd century BC. In the 1st century AD, Heron of Alexandria invented a water valve similar to the fill valve used in modern toilets.
Later process control inventions involved basic physics principles. In 1620, Cornelis Drebbel invented a bimetallic thermostat for controlling the temperature in a furnace. In 1681, Denis Papin discovered the pressure inside a vessel could be regulated by placing weights on top of the vessel lid. In 1745, Edmund Lee created the fantail to improve windmill efficiency; a fantail was a smaller windmill placed at 90° to the larger fan to keep the face of the windmill pointed directly into the oncoming wind.
With the dawn of the Industrial Revolution in the 1760s, process control inventions aimed to replace human operators with mechanized processes. In 1784, Oliver Evans created a water-powered flourmill which operated using buckets and screw conveyors. Henry Ford applied the same theory in 1910 when the assembly line was created to decrease human intervention in the automobile production process.
For continuously variable process control it was not until 1922 that a formal control law for what we now call PID control or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error, but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.
His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
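Minorsky's three terms can be sketched as a simple discrete PID loop. The snippet below is an illustration, not Minorsky's design: the plant (an integrator with a constant disturbance standing in for the "stiff gale") and all gains are assumptions. It shows the behavior described above: proportional control alone settles with a steady-state offset, while adding the integral term drives the error to zero.

```python
def simulate(kp, ki, kd, steps=4000, dt=0.01):
    """Integrating plant x' = u + d with constant disturbance d = -0.5,
    driven toward setpoint 1.0 by a discrete PID law."""
    x, integral, prev_err = 0.0, 0.0, None
    d, setpoint = -0.5, 1.0
    for _ in range(steps):
        err = setpoint - x                      # current error (SP - PV)
        integral += err * dt                    # accumulated past error
        deriv = 0.0 if prev_err is None else (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        x += dt * (u + d)                       # Euler step of the plant
    return x

p_only = simulate(kp=2.0, ki=0.0, kd=0.0)   # settles with a steady offset
pid = simulate(kp=2.0, ki=1.0, kd=0.1)      # integral term removes the offset
print(round(p_only, 3), round(pid, 3))      # → 0.75 1.0
```

With proportional action only, equilibrium requires kp·e = −d, leaving e = 0.25 here; the integrator accumulates error until the disturbance is exactly canceled.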
== Development of modern process control operations ==
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.
With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant, and communicate with the graphic display in the control room or rooms. The distributed control system (DCS) was born.
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
== Hierarchy ==
The accompanying diagram is a general model which shows functional manufacturing levels in a large process using processor and computer-based control.
Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors (process value readings - PV), and final control elements (FCE), such as control valves; Level 1 contains the industrialized Input/Output (I/O) modules, and their associated distributed electronic processors; Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens; Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets; Level 4 is the production scheduling level.
== Control model ==
To determine the fundamental model for any process, the inputs and outputs of the system are defined differently than for other chemical processes. The balance equations are defined by the control inputs and outputs rather than the material inputs. The control model is a set of equations used to predict the behavior of a system and can help determine what the response to change will be. The state variable (x) is a measurable variable that is a good indicator of the state of the system, such as temperature (energy balance), volume (mass balance) or concentration (component balance). The input variable (u) is a specified variable that commonly includes flow rates.
The entering and exiting flows are both considered control inputs. The control input can be classified as a manipulated, disturbance, or unmonitored variable. Parameters (p) are usually a physical limitation and something that is fixed for the system, such as the vessel volume or the viscosity of the material. Output (y) is the metric used to determine the behavior of the system. The control output can be classified as measured, unmeasured, or unmonitored.
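As a toy illustration of these definitions (all numbers are assumptions), consider a mass balance on a single vessel: the state x is the liquid volume, the manipulated input u is the inlet flow, a fixed outlet flow plays the role of a disturbance, and the output y is the measured level.

```python
# Toy mass balance dV/dt = q_in - q_out for a single vessel.
# State x = volume, input u = q_in, disturbance = q_out, output y = V / area.
def step_volume(V, q_in, q_out, dt):
    return V + dt * (q_in - q_out)   # Euler step of the balance equation

V, area, dt = 0.0, 2.0, 0.1
for _ in range(100):                  # 10 s of simulation
    V = step_volume(V, q_in=2.0, q_out=1.0, dt=dt)
y = V / area                          # measured level (output variable)
print(V, y)   # ≈ 10.0 and 5.0: a net inflow of 1 unit/s for 10 s
```

Here the vessel cross-sectional area is the fixed parameter (p) of the model, and q_out would be classified as a disturbance input.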
== Types ==
Processes can be characterized as batch, continuous, or hybrid. Batch applications require that specific quantities of raw materials be combined in specific ways for particular duration to produce an intermediate or end result. One example is the production of adhesives and glues, which normally require the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).
A continuous physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket, for example, is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds). Such controls use feedback, as in the PID controller; a PID controller includes proportional, integral, and derivative controller functions.
Applications having elements of batch and continuous process control are often called hybrid applications.
== Control loops ==
The fundamental building block of any industrial control system is the control loop, which controls just one process variable. An example is shown in the accompanying diagram, where the flow rate in a pipe is controlled by a PID controller, assisted by what is effectively a cascaded loop in the form of a valve servo-controller to ensure correct valve positioning.
Some large systems may have several hundreds or thousands of control loops. In complex processes the loops are interactive, so that the operation of one loop may affect the operation of another. The system diagram for representing control loops is a Piping and instrumentation diagram.
Commonly used control systems include programmable logic controller (PLC), Distributed Control System (DCS) or SCADA.
A further example is shown. If a control valve were used to hold level in a tank, the level controller would compare the equivalent reading of a level sensor to the level setpoint and determine whether more or less valve opening was necessary to keep the level constant. A cascaded flow controller could then calculate the change in the valve position.
== Economic advantages ==
The economic nature of many products manufactured in batch and continuous processes require highly efficient operation due to thin margins. The competing factor in process control is that products must meet certain specifications in order to be satisfactory. These specifications can come in two forms: a minimum and maximum for a property of the material or product, or a range within which the property must be. All loops are susceptible to disturbances and therefore a buffer must be used on process set points to ensure disturbances do not cause the material or product to go out of specifications. This buffer comes at an economic cost (i.e. additional processing, maintaining elevated or depressed process conditions, etc.).
Process efficiency can be enhanced by reducing the margins necessary to ensure product specifications are met. This can be done by improving the control of the process to minimize the effect of disturbances on the process. The efficiency is improved in a two step method of narrowing the variance and shifting the target. Margins can be narrowed through various process upgrades (i.e. equipment upgrades, enhanced control methods, etc.). Once margins are narrowed, an economic analysis can be done on the process to determine how the set point target is to be shifted. Less conservative process set points lead to increased economic efficiency. Effective process control strategies increase the competitive advantage of manufacturers who employ them.
== See also ==
== References ==
== Further reading ==
Walker, Mark John (2012-09-08). The Programmable Logic Controller: its prehistory, emergence and application (PDF) (PhD thesis). Department of Communication and Systems Faculty of Mathematics, Computing and Technology: The Open University. Archived (PDF) from the original on 2018-06-20. Retrieved 2018-06-20.
== External links ==
A Complete Guide to Statistical Process Control
The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook
PID control virtual laboratory, free video tutorials, on-line simulators, advanced process control schemes
In control engineering and system identification, a state-space representation is a mathematical model of a physical system that uses state variables to track how inputs shape system behavior over time through first-order differential equations or difference equations. These state variables change based on their current values and inputs, while outputs depend on the states and sometimes the inputs too. The state space (also called time-domain approach and equivalent to phase space in certain dynamical systems) is a geometric space where the axes are these state variables, and the system’s state is represented by a state vector.
For linear, time-invariant, and finite-dimensional systems, the equations can be written in matrix form, offering a compact alternative to the frequency domain’s Laplace transforms for multiple-input and multiple-output (MIMO) systems. Unlike the frequency domain approach, it works for systems beyond just linear ones with zero initial conditions. This approach turns systems theory into an algebraic framework, making it possible to use Kronecker structures for efficient analysis.
State-space models are applied in fields such as economics, statistics, computer science, electrical engineering, and neuroscience. In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index, identify turning points of the business cycle, and estimate GDP using latent and unobserved time series. Many applications rely on the Kalman Filter or a state observer to produce estimates of the current unknown state variables using their previous observations.
== State variables ==
The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system,
n, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable, when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit such as capacitors and inductors. The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables, or the system cannot be solved.
== Linear systems ==
The most general state-space representation of a linear system with
p inputs, q outputs and n state variables is written in the following form:
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=\mathbf {C} (t)\mathbf {x} (t)+\mathbf {D} (t)\mathbf {u} (t)}
where:
In this general formulation, all matrices are allowed to be time-variant (i.e. their elements can depend on time); however, in the common LTI case, matrices will be time invariant. The time variable
t can be continuous (e.g. t ∈ ℝ) or discrete (e.g. t ∈ ℤ). In the latter case, the time variable k is usually used instead of t. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:
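In the discrete-time case the two model equations can be stepped directly. A minimal NumPy sketch (the double-integrator matrices below are an illustrative assumption, not part of the article):

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # discrete double-integrator dynamics
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])              # measure position only
D = np.array([[0.0]])

x = np.zeros((2, 1))                    # state: [position, velocity]
u = np.array([[1.0]])                   # constant unit acceleration input
for _ in range(10):
    y = C @ x + D @ u                   # output equation at step k
    x = A @ x + B @ u                   # state update x_{k+1} = A x_k + B u_k
print(x.ravel())   # position ≈ 0.45, velocity ≈ 1.0 after 10 steps
```

The state vector carries all the memory of the system: the output at any step depends only on the current state (and input), exactly as the equations above prescribe.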
=== Example: continuous-time LTI case ===
Stability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix
A. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:
{\displaystyle \mathbf {G} (s)=k{\frac {(s-z_{1})(s-z_{2})(s-z_{3})}{(s-p_{1})(s-p_{2})(s-p_{3})(s-p_{4})}}.}
The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of sI − A,
{\displaystyle \lambda (s)=\left|s\mathbf {I} -\mathbf {A} \right|.}
The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.
The zeros found in the numerator of G(s) can similarly be used to determine whether the system is minimum phase.
The system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).
=== Controllability ===
The state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if
{\displaystyle \operatorname {rank} {\begin{bmatrix}\mathbf {B} &\mathbf {A} \mathbf {B} &\mathbf {A} ^{2}\mathbf {B} &\cdots &\mathbf {A} ^{n-1}\mathbf {B} \end{bmatrix}}=n,}
where rank is the number of linearly independent rows in a matrix, and where n is the number of state variables.
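This rank test is easy to carry out numerically. The sketch below (NumPy; the two-state matrices are illustrative, not taken from the text) assembles the controllability matrix and checks its rank:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative two-state system in companion form.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

rank = np.linalg.matrix_rank(controllability_matrix(A, B))
print(rank)  # 2 == n, so the pair (A, B) is controllable
```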
=== Observability ===
Observability is a measure for how well internal states of a system can be inferred by knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system).
A continuous time-invariant linear state-space model is observable if and only if
{\displaystyle \operatorname {rank} {\begin{bmatrix}\mathbf {C} \\\mathbf {C} \mathbf {A} \\\vdots \\\mathbf {C} \mathbf {A} ^{n-1}\end{bmatrix}}=n.}
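By the duality noted above, the observability test mirrors the controllability test with rows instead of columns. A short NumPy sketch (illustrative matrices, assumed for the example):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)] row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Illustrative system: only the first state is measured directly.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

rank = np.linalg.matrix_rank(observability_matrix(A, C))
print(rank)  # 2 == n, so the pair (A, C) is observable
```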
=== Transfer function ===
The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way:
First, taking the Laplace transform of
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t)}
yields
{\displaystyle s\mathbf {X} (s)-\mathbf {x} (0)=\mathbf {A} \mathbf {X} (s)+\mathbf {B} \mathbf {U} (s).}
Next, we solve for X(s), giving
{\displaystyle (s\mathbf {I} -\mathbf {A} )\mathbf {X} (s)=\mathbf {x} (0)+\mathbf {B} \mathbf {U} (s)}
and thus
{\displaystyle \mathbf {X} (s)=(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {x} (0)+(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {B} \mathbf {U} (s).}
Substituting for X(s) in the output equation
{\displaystyle \mathbf {Y} (s)=\mathbf {C} \mathbf {X} (s)+\mathbf {D} \mathbf {U} (s),}
giving
{\displaystyle \mathbf {Y} (s)=\mathbf {C} ((s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {x} (0)+(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {B} \mathbf {U} (s))+\mathbf {D} \mathbf {U} (s).}
Assuming zero initial conditions x(0) = 0 and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output and input
G(s) = Y(s)/U(s). For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from
{\displaystyle \mathbf {Y} (s)=\mathbf {G} (s)\mathbf {U} (s)}
using the method of equating coefficients, which yields
{\displaystyle \mathbf {G} (s)=\mathbf {C} (s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {B} +\mathbf {D} .}
Consequently, G(s) is a matrix with dimension q × p which contains transfer functions for each input-output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.
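The formula G(s) = C(sI − A)⁻¹B + D can be evaluated directly at any complex frequency. The sketch below (NumPy; the matrices are an illustrative SISO system, not from the text) uses a linear solve rather than an explicit matrix inverse, which is the numerically preferable route:

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex frequency s,
    using a linear solve instead of an explicit matrix inverse."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Illustrative SISO system realizing G(s) = 1 / (s^2 + 3s + 2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print(transfer_function(A, B, C, D, 1.0 + 0.0j))  # ~ 1/6
```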
=== Canonical realizations ===
Any given transfer function which is strictly proper can easily be transformed into state-space form by the following approach (this example is for a 4-dimensional, single-input, single-output system):
Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
{\displaystyle \mathbf {G} (s)={\frac {n_{1}s^{3}+n_{2}s^{2}+n_{3}s+n_{4}}{s^{4}+d_{1}s^{3}+d_{2}s^{2}+d_{3}s+d_{4}}}.}
The coefficients can now be inserted directly into the state-space model by the following approach:
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\-d_{4}&-d_{3}&-d_{2}&-d_{1}\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}0\\0\\0\\1\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)={\begin{bmatrix}n_{4}&n_{3}&n_{2}&n_{1}\end{bmatrix}}\mathbf {x} (t).}
This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
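This construction is mechanical enough to automate. The helper below is a sketch (its name and interface are assumptions, not a library routine): it builds the controllable canonical matrices from the coefficients of a monic, strictly proper SISO transfer function and spot-checks the result against the original function at one frequency:

```python
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical realization of a strictly proper SISO
    transfer function.  den = [1, d1, ..., dk] must be monic and
    len(num) < len(den); returns (A, B, C)."""
    k = len(den) - 1
    num = np.concatenate([np.zeros(k - len(num)), num])  # left-pad to k
    A = np.zeros((k, k))
    A[:-1, 1:] = np.eye(k - 1)               # chain of integrators
    A[-1, :] = -np.asarray(den[1:])[::-1]    # last row: [-d_k ... -d_1]
    B = np.zeros((k, 1)); B[-1, 0] = 1.0
    C = num[::-1].reshape(1, k)              # [n_k ... n_1]
    return A, B, C

# G(s) = (s + 2) / (s^2 + 2s + 1), checked at s = 2 (value 4/9).
A, B, C = controllable_canonical([1.0, 2.0], [1.0, 2.0, 1.0])
s = 2.0
G = (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]
print(abs(G - 4.0 / 9.0) < 1e-12)  # True
```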
The transfer function coefficients can also be used to construct another type of canonical form
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}0&0&0&-d_{4}\\1&0&0&-d_{3}\\0&1&0&-d_{2}\\0&0&1&-d_{1}\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}n_{4}\\n_{3}\\n_{2}\\n_{1}\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)={\begin{bmatrix}0&0&0&1\end{bmatrix}}\mathbf {x} (t).}
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
=== Proper transfer functions ===
Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant.
{\displaystyle \mathbf {G} (s)=\mathbf {G} _{\mathrm {SP} }(s)+\mathbf {G} (\infty ).}
The strictly proper transfer function can then be transformed into a canonical state-space realization using techniques shown above. The state-space realization of the constant is trivially y(t) = G(∞)u(t). Together we then get a state-space realization with matrices A, B and C determined by the strictly proper part, and matrix D determined by the constant.
Here is an example to clear things up a bit:
{\displaystyle \mathbf {G} (s)={\frac {s^{2}+3s+3}{s^{2}+2s+1}}={\frac {s+2}{s^{2}+2s+1}}+1}
which yields the following controllable realization
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}-2&-1\\1&0\\\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}1\\0\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)={\begin{bmatrix}1&2\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}1\end{bmatrix}}\mathbf {u} (t)}
Notice how the output also depends directly on the input. This is due to the G(∞) constant in the transfer function.
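As a quick numerical sanity check (a NumPy sketch), the realization above can be compared against the original transfer function at a few test frequencies:

```python
import numpy as np

# Matrices of the realization above.
A = np.array([[-2.0, -1.0],
              [1.0, 0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[1.0]])

def G_ss(s):
    """G(s) = C (sI - A)^{-1} B + D for the realization."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B) + D)[0, 0]

def G_direct(s):
    """The original transfer function (s^2 + 3s + 3)/(s^2 + 2s + 1)."""
    return (s**2 + 3 * s + 3) / (s**2 + 2 * s + 1)

for s in [0.5, 1.0, 3.0]:
    assert abs(G_ss(s) - G_direct(s)) < 1e-12
print("realization matches the transfer function")
```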
=== Feedback ===
A common method for feedback is to multiply the output by a matrix K and set this as the input to the system:
{\displaystyle \mathbf {u} (t)=K\mathbf {y} (t)}
Since the values of K are unrestricted, they can easily be negated for negative feedback. The presence of a negative sign (the common notation) is merely notational, and its absence has no impact on the end results.
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)+B\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)+D\mathbf {u} (t)}
becomes
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)+BK\mathbf {y} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)+DK\mathbf {y} (t)}
Solving the output equation for y(t) and substituting it into the state equation results in
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A+BK\left(I-DK\right)^{-1}C\right)\mathbf {x} (t)}
{\displaystyle \mathbf {y} (t)=\left(I-DK\right)^{-1}C\mathbf {x} (t)}
The advantage of this is that the eigenvalues of A can be controlled by setting K appropriately through eigendecomposition of A + BK(I − DK)^{-1}C.
This assumes that the closed-loop system is controllable or that the unstable eigenvalues of A can be made stable through appropriate choice of K.
==== Example ====
For a strictly proper system D equals zero. Another fairly common situation is when all states are outputs, i.e. y = x, which yields C = I, the identity matrix. This would then result in the simpler equations
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A+BK\right)\mathbf {x} (t)}
{\displaystyle \mathbf {y} (t)=\mathbf {x} (t)}
This reduces the necessary eigendecomposition to just A + BK.
=== Feedback with setpoint (reference) input ===
In addition to feedback, an input, r(t), can be added such that
{\displaystyle \mathbf {u} (t)=-K\mathbf {y} (t)+\mathbf {r} (t)}
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)+B\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)+D\mathbf {u} (t)}
becomes
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)-BK\mathbf {y} (t)+B\mathbf {r} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)-DK\mathbf {y} (t)+D\mathbf {r} (t)}
Solving the output equation for y(t) and substituting it into the state equation results in
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A-BK\left(I+DK\right)^{-1}C\right)\mathbf {x} (t)+B\left(I-K\left(I+DK\right)^{-1}D\right)\mathbf {r} (t)}
{\displaystyle \mathbf {y} (t)=\left(I+DK\right)^{-1}C\mathbf {x} (t)+\left(I+DK\right)^{-1}D\mathbf {r} (t)}
One fairly common simplification to this system is removing D, which reduces the equations to
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A-BKC\right)\mathbf {x} (t)+B\mathbf {r} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)}
=== Moving object example ===
A classical linear system is that of one-dimensional movement of an object (e.g., a cart).
Newton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring:
{\displaystyle m{\ddot {y}}(t)=u(t)-b{\dot {y}}(t)-ky(t)}
where y(t) is position; ẏ(t) is velocity; ÿ(t) is acceleration; u(t) is an applied force; b is the viscous friction coefficient; k is the spring constant; and m is the mass of the object.
The state equation would then become
{\displaystyle {\begin{bmatrix}{\dot {\mathbf {x} }}_{1}(t)\\{\dot {\mathbf {x} }}_{2}(t)\end{bmatrix}}={\begin{bmatrix}0&1\\-{\frac {k}{m}}&-{\frac {b}{m}}\end{bmatrix}}{\begin{bmatrix}\mathbf {x} _{1}(t)\\\mathbf {x} _{2}(t)\end{bmatrix}}+{\begin{bmatrix}0\\{\frac {1}{m}}\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=\left[{\begin{matrix}1&0\end{matrix}}\right]\left[{\begin{matrix}\mathbf {x_{1}} (t)\\\mathbf {x_{2}} (t)\end{matrix}}\right]}
where x1(t) represents the position of the object, x2(t) = ẋ1(t) is the velocity of the object, ẋ2(t) = ẍ1(t) is the acceleration of the object, and the output y(t) is the position of the object.
The controllability test is then
{\displaystyle {\begin{bmatrix}B&AB\end{bmatrix}}={\begin{bmatrix}{\begin{bmatrix}0\\{\frac {1}{m}}\end{bmatrix}}&{\begin{bmatrix}0&1\\-{\frac {k}{m}}&-{\frac {b}{m}}\end{bmatrix}}{\begin{bmatrix}0\\{\frac {1}{m}}\end{bmatrix}}\end{bmatrix}}={\begin{bmatrix}0&{\frac {1}{m}}\\{\frac {1}{m}}&-{\frac {b}{m^{2}}}\end{bmatrix}}}
which has full rank for all b and m. This means that if the initial state of the system is known (y(t), ẏ(t), ÿ(t)), and if b and m are constants, then there is a force u that could move the cart into any other position in the system.
The observability test is then
{\displaystyle {\begin{bmatrix}C\\CA\end{bmatrix}}={\begin{bmatrix}{\begin{bmatrix}1&0\end{bmatrix}}\\{\begin{bmatrix}1&0\end{bmatrix}}{\begin{bmatrix}0&1\\-{\frac {k}{m}}&-{\frac {b}{m}}\end{bmatrix}}\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}}
which also has full rank. Therefore, this system is both controllable and observable.
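The two rank tests above can be reproduced with concrete numbers substituted. The values m = 1 kg, b = 0.5 N·s/m, k = 2 N/m in this NumPy sketch are assumptions chosen for illustration:

```python
import numpy as np

# Illustrative values: m = 1 kg, b = 0.5 N*s/m, k = 2 N/m (assumptions).
m, b, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

ctrb = np.hstack([B, A @ B])   # controllability matrix [B, AB]
obsv = np.vstack([C, C @ A])   # observability matrix [C; CA]
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))  # 2 2
```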
== Nonlinear systems ==
The more general form of a state-space model can be written as two functions.
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {f} (t,x(t),u(t))}
{\displaystyle \mathbf {y} (t)=\mathbf {h} (t,x(t),u(t))}
The first is the state equation and the latter is the output equation.
If the function f(·, ·, ·) is a linear combination of states and inputs, then the equations can be written in matrix notation like above.
The u(t) argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).
=== Pendulum example ===
A classic nonlinear system is a simple unforced pendulum
{\displaystyle m\ell ^{2}{\ddot {\theta }}(t)=-m\ell g\sin \theta (t)-k\ell {\dot {\theta }}(t)}
where θ(t) is the angle of the pendulum with respect to the direction of gravity, m is the mass of the pendulum (the pendulum rod's mass is assumed to be zero), g is the gravitational acceleration, k is the coefficient of friction at the pivot point, and ℓ is the radius of the pendulum (to the center of gravity of the mass m).
The state equations are then
{\displaystyle {\dot {x}}_{1}(t)=x_{2}(t)}
{\displaystyle {\dot {x}}_{2}(t)=-{\frac {g}{\ell }}\sin {x_{1}}(t)-{\frac {k}{m\ell }}{x_{2}}(t)}
where x1(t) = θ(t) is the angle of the pendulum, x2(t) = ẋ1(t) is the rotational velocity of the pendulum, and ẋ2 = ẍ1 is the rotational acceleration of the pendulum.
Instead, the state equation can be written in the general form
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}{\dot {x}}_{1}(t)\\{\dot {x}}_{2}(t)\end{bmatrix}}=\mathbf {f} (t,x(t))={\begin{bmatrix}x_{2}(t)\\-{\frac {g}{\ell }}\sin {x_{1}}(t)-{\frac {k}{m\ell }}{x_{2}}(t)\end{bmatrix}}.}
The equilibrium/stationary points of a system are those where ẋ = 0, and so the equilibrium points of a pendulum are those that satisfy
{\displaystyle {\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}n\pi \\0\end{bmatrix}}}
for integers n.
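A short check (NumPy sketch; the parameter values g, ℓ, k, m are illustrative assumptions) confirms that every point (nπ, 0) makes the right-hand side of the state equations vanish:

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the text).
g, ell, k, m = 9.81, 1.0, 0.5, 1.0

def f(x):
    """Right-hand side of the pendulum state equations above."""
    x1, x2 = x
    return np.array([x2, -(g / ell) * np.sin(x1) - (k / (m * ell)) * x2])

# Every point (n*pi, 0) should make the vector field vanish.
for n in range(-2, 3):
    assert np.allclose(f(np.array([n * np.pi, 0.0])), 0.0, atol=1e-9)
print("all (n*pi, 0) points are equilibria")
```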
== See also ==
== References ==
== Further reading ==
== External links ==
Wolfram language functions for linear state-space models, affine state-space models, and nonlinear state-space models.
In mathematics, catastrophe theory is a branch of bifurcation theory in the study of dynamical systems; it is also a particular special case of more general singularity theory in geometry.
Bifurcation theory studies and classifies phenomena characterized by sudden shifts in behavior arising from small changes in circumstances, analysing how the qualitative nature of equation solutions depends on the parameters that appear in the equation. This may lead to sudden and dramatic changes, for example the unpredictable timing and magnitude of a landslide.
Catastrophe theory originated with the work of the French mathematician René Thom in the 1960s, and became very popular due to the efforts of Christopher Zeeman in the 1970s. It considers the special case where the long-run stable equilibrium can be identified as the minimum of a smooth, well-defined potential function (Lyapunov function).
Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system. However, examined in a larger parameter space, catastrophe theory reveals that such bifurcation points tend to occur as part of well-defined qualitative geometrical structures.
In the late 1970s, applications of catastrophe theory to areas outside its scope began to be criticized, especially in biology and social sciences. Zahler and Sussmann, in a 1977 article in Nature, referred to such applications as being "characterised by incorrect reasoning, far-fetched assumptions, erroneous consequences, and exaggerated claims". As a result, catastrophe theory has become less popular in applications.
== Elementary catastrophes ==
Catastrophe theory analyzes degenerate critical points of the potential function — points where not just the first derivative, but one or more higher derivatives of the potential function are also zero. These are called the germs of the catastrophe geometries. The degeneracy of these critical points can be unfolded by expanding the potential function as a Taylor series in small perturbations of the parameters.
When the degenerate points are not merely accidental, but are structurally stable, the degenerate points exist as organising centres for particular geometric structures of lower degeneracy, with critical features in the parameter space around them. If the potential function depends on two or fewer active variables, and four or fewer active parameters, then there are only seven generic structures for these bifurcation geometries, with corresponding standard forms into which the Taylor series around the catastrophe germs can be transformed by diffeomorphism (a smooth transformation whose inverse is also smooth). These seven fundamental types are now presented, with the names that Thom gave them.
== Potential functions of one active variable ==
Catastrophe theory studies dynamical systems that describe the evolution of a state variable x over time t:
{\displaystyle {\dot {x}}={\dfrac {dx}{dt}}=-{\dfrac {dV(u,x)}{dx}}}
In the above equation, V is referred to as the potential function, and u is often a vector or a scalar which parameterises the potential function. The value of u may change over time, and it can also be referred to as the control variable. In the following examples, parameters like a and b are such controls.
=== Fold catastrophe ===
{\displaystyle V=x^{3}+ax\,}
When a < 0, the potential V has two extrema, one stable and one unstable. If the parameter a is slowly increased, the system can follow the stable minimum point. But at a = 0 the stable and unstable extrema meet and annihilate. This is the bifurcation point. At a > 0 there is no longer a stable solution. If a physical system is followed through a fold bifurcation, one therefore finds that as a reaches 0, the stability of the a < 0 solution is suddenly lost, and the system will make a sudden transition to a new, very different behaviour. This bifurcation value of the parameter a is sometimes called the "tipping point".
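The annihilation of the two extrema can be seen numerically. The sketch below (NumPy; the sample values of a are arbitrary) finds the real roots of V′(x) = 3x² + a:

```python
import numpy as np

def critical_points(a):
    """Real critical points of V(x) = x^3 + a*x: real roots of 3x^2 + a."""
    roots = np.roots([3.0, 0.0, a])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

print(critical_points(-3.0))  # two extrema, near x = -1 and x = +1
print(critical_points(3.0))   # no real extrema: []
```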
=== Cusp catastrophe ===
{\displaystyle V=x^{4}+ax^{2}+bx\,}
The cusp geometry is very common when one explores what happens to a fold bifurcation if a second parameter, b, is added to the control space. Varying the parameters, one finds that there is now a curve (blue) of points in (a,b) space where stability is lost, where the stable solution will suddenly jump to an alternate outcome.
But in a cusp geometry the bifurcation curve loops back on itself, giving a second branch where this alternate solution itself loses stability, and will make a jump back to the original solution set. By repeatedly increasing b and then decreasing it, one can therefore observe hysteresis loops, as the system alternately follows one solution, jumps to the other, follows the other back, and then jumps back to the first.
However, this is only possible in the region of parameter space a < 0. As a is increased, the hysteresis loops become smaller and smaller, until above a = 0 they disappear altogether (the cusp catastrophe), and there is only one stable solution.
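The geometry of this region can be probed numerically by counting the real critical points of V as (a, b) varies. In this NumPy sketch the sample points are arbitrary; the boundary of the region is the fold curve satisfying 8a³ + 27b² = 0:

```python
import numpy as np

def n_critical(a, b):
    """Count real critical points of V = x^4 + a*x^2 + b*x
    (real roots of dV/dx = 4x^3 + 2ax + b)."""
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    return int(sum(abs(r.imag) < 1e-9 for r in roots))

# Inside the cusp region (a < 0, |b| small) the potential has three
# critical points (two minima and one maximum); crossing a fold curve
# leaves only one.
print(n_critical(-3.0, 0.5))   # 3
print(n_critical(-3.0, 10.0))  # 1
print(n_critical(1.0, 0.5))    # 1
```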
One can also consider what happens if one holds b constant and varies a. In the symmetrical case b = 0, one observes a pitchfork bifurcation as a is reduced, with one stable solution suddenly splitting into two stable solutions and one unstable solution as the physical system passes to a < 0 through the cusp point (0,0) (an example of spontaneous symmetry breaking). Away from the cusp point, there is no sudden change in a physical solution being followed: when passing through the curve of fold bifurcations, all that happens is an alternate second solution becomes available.
A famous suggestion is that the cusp catastrophe can be used to model the behaviour of a stressed dog, which may respond by becoming cowed or becoming angry. The suggestion is that at moderate stress (a > 0), the dog will exhibit a smooth transition of response from cowed to angry, depending on how it is provoked. But higher stress levels correspond to moving to the region (a < 0). Then, if the dog starts cowed, it will remain cowed as it is irritated more and more, until it reaches the 'fold' point, when it will suddenly, discontinuously snap through to angry mode. Once in 'angry' mode, it will remain angry, even if the direct irritation parameter is considerably reduced.
A simple mechanical system, the "Zeeman Catastrophe Machine", nicely illustrates a cusp catastrophe. In this device, smooth variations in the position of the end of a spring can cause sudden changes in the rotational position of an attached wheel.
Catastrophic failure of a complex system with parallel redundancy can be evaluated based on the relationship between local and external stresses. The model of the structural fracture mechanics is similar to the cusp catastrophe behavior. The model predicts reserve ability of a complex system.
Other applications include the outer sphere electron transfer frequently encountered in chemical and biological systems, modelling the dynamics of cloud condensation nuclei in the atmosphere, and modelling real estate prices.
Fold bifurcations and the cusp geometry are by far the most important practical consequences of catastrophe theory. They are patterns which reoccur again and again in physics, engineering and mathematical modelling.
They produce the strong gravitational lensing events and provide astronomers with one of the methods used for detecting black holes and the dark matter of the universe, via the phenomenon of gravitational lensing producing multiple images of distant quasars.
The remaining simple catastrophe geometries are very specialised in comparison.
=== Swallowtail catastrophe ===
{\displaystyle V=x^{5}+ax^{3}+bx^{2}+cx\,}
The control parameter space is three-dimensional. The bifurcation set in parameter space is made up of three surfaces of fold bifurcations, which meet in two lines of cusp bifurcations, which in turn meet at a single swallowtail bifurcation point.
As the parameters go through the surface of fold bifurcations, one minimum and one maximum of the potential function disappear. At the cusp bifurcations, two minima and one maximum are replaced by one minimum; beyond them the fold bifurcations disappear. At the swallowtail point, two minima and two maxima all meet at a single value of x. For values of a > 0, beyond the swallowtail, there is either one maximum-minimum pair, or none at all, depending on the values of b and c. Two of the surfaces of fold bifurcations, and the two lines of cusp bifurcations where they meet for a < 0, therefore disappear at the swallowtail point, to be replaced with only a single surface of fold bifurcations remaining. Salvador Dalí's last painting, The Swallow's Tail, was based on this catastrophe.
=== Butterfly catastrophe ===
{\displaystyle V=x^{6}+ax^{4}+bx^{3}+cx^{2}+dx\,}
Depending on the parameter values, the potential function may have three, two, or one different local minima, separated by the loci of fold bifurcations. At the butterfly point, the different 3-surfaces of fold bifurcations, the 2-surfaces of cusp bifurcations, and the lines of swallowtail bifurcations all meet up and disappear, leaving a single cusp structure remaining when a > 0.
== Potential functions of two active variables ==
Umbilic catastrophes are examples of corank 2 catastrophes. They can be observed in optics in the focal surfaces created by light reflecting off a surface in three dimensions and are intimately connected with the geometry of nearly spherical surfaces: umbilical point.
Thom proposed that the hyperbolic umbilic catastrophe modeled the breaking of a wave and the elliptical umbilic modeled the creation of hair-like structures.
=== Hyperbolic umbilic catastrophe ===
{\displaystyle V=x^{3}+y^{3}+axy+bx+cy\,}
=== Elliptic umbilic catastrophe ===
{\displaystyle V={\frac {x^{3}}{3}}-xy^{2}+a(x^{2}+y^{2})+bx+cy\,}
=== Parabolic umbilic catastrophe ===
{\displaystyle V=x^{2}y+y^{4}+ax^{2}+by^{2}+cx+dy\,}
== Arnold's notation ==
Vladimir Arnold gave the catastrophes the ADE classification, due to a deep connection with simple Lie groups.
A0 - a non-singular point: {\displaystyle V=x}.
A1 - a local extremum, either a stable minimum or unstable maximum, {\displaystyle V=\pm x^{2}+ax}.
A2 - the fold
A3 - the cusp
A4 - the swallowtail
A5 - the butterfly
Ak - a representative of an infinite sequence of one-variable forms {\displaystyle V=x^{k+1}+\cdots }
D4− - the elliptical umbilic
D4+ - the hyperbolic umbilic
D5 - the parabolic umbilic
Dk - a representative of an infinite sequence of further umbilic forms
E6 - the symbolic umbilic {\displaystyle V=x^{3}+y^{4}+axy^{2}+bxy+cx+dy+ey^{2}}
E7
E8
There are objects in singularity theory which correspond to most of the other simple Lie groups.
== Optics ==
As predicted by catastrophe theory, singularities are generic and stable under perturbation. This explains how the bright lines and surfaces are stable under perturbation. The caustics one sees at the bottom of a swimming pool, for example, have a distinctive texture and only a few types of singular points, even though the surface of the water is ever-changing.
The edge of the rainbow, for example, has a fold catastrophe. Due to the wave nature of light, the catastrophe has fine diffraction details described by the Airy function. This is a generic result and does not depend on the precise shape of the water droplet, and so the edge of the rainbow always has the shape of an Airy function. The same Airy function fold catastrophe can be seen in nuclear-nuclear scattering ("nuclear rainbow").
The cusp catastrophe is the next-simplest to observe. Due to the wave nature of light, the catastrophe has fine diffraction details described by the Pearcey function. Higher-order catastrophes, such as the swallowtail and the butterfly, have also been observed.
== See also ==
== References ==
== Bibliography ==
Arnold, Vladimir Igorevich (1992) Catastrophe Theory, 3rd ed. Berlin: Springer-Verlag
V. S. Afrajmovich, V. I. Arnold, et al., Bifurcation Theory And Catastrophe Theory, ISBN 3-540-65379-1
Bełej, M. and Kulesza, S. (2013) "Modeling the Real Estate Prices in Olsztyn under Instability Conditions", Folia Oeconomica Stetinensia 11(1): 61–72, ISSN (Online) 1898-0198, ISSN (Print) 1730-4237, doi:10.2478/v10031-012-0008-7
Castrigiano, Domenico P. L. and Hayes, Sandra A. (2004) Catastrophe Theory, second edition, Boulder: Westview ISBN 0-8133-4126-4
Gilmore, Robert (1993) Catastrophe Theory for Scientists and Engineers, New York: Dover
Petters, Arlie O., Levine, Harold and Wambsganss, Joachim (2001) Singularity Theory and Gravitational Lensing, Boston: Birkhäuser ISBN 0-8176-3668-4
Postle, Denis (1980) Catastrophe Theory – Predict and avoid personal disasters, Fontana Paperbacks ISBN 0-00-635559-5
Poston, Tim and Stewart, Ian (1998) Catastrophe: Theory and Its Applications, New York: Dover ISBN 0-486-69271-X
Sanns, Werner (2000) Catastrophe Theory with Mathematica: A Geometric Approach, Germany: DAV
Saunders, Peter Timothy (1980) An Introduction to Catastrophe Theory, Cambridge, England: Cambridge University Press
Thom, René (1989) Structural Stability and Morphogenesis: An Outline of a General Theory of Models, Reading, MA: Addison-Wesley ISBN 0-201-09419-3
Woodcock, Alexander Edward Richard and Davis, Monte. (1978) Catastrophe Theory, New York: E. P. Dutton ISBN 0-525-07812-6
Zeeman, E.C. (1977) Catastrophe Theory-Selected Papers 1972–1977, Reading, MA: Addison-Wesley
== External links ==
CompLexicon: Catastrophe Theory
Catastrophe teacher
Java simulation of Zeeman's catastrophe machine | Wikipedia/Catastrophe_theory |
A signal-flow graph or signal-flowgraph (SFG), invented by Claude Shannon, but often called a Mason graph after Samuel Jefferson Mason who coined the term, is a specialized flow graph, a directed graph in which nodes represent system variables, and branches (edges, arcs, or arrows) represent functional connections between pairs of nodes. Signal-flow graph theory thus builds on that of directed graphs (also called digraphs), which also includes the theory of oriented graphs. This mathematical theory of digraphs exists, of course, quite apart from its applications.
SFGs are most commonly used to represent signal flow in a physical system and its controller(s), forming a cyber-physical system. Among their other uses are the representation of signal flow in various electronic networks and amplifiers, digital filters, state-variable filters and some other types of analog filters. In nearly all literature, a signal-flow graph is associated with a set of linear equations.
== History ==
Wai-Kai Chen wrote: "The concept of a signal-flow graph was originally worked out by Shannon [1942] in dealing with analog computers. The greatest credit for the formulation of signal-flow graphs is normally extended to Mason [1953], [1956]. He showed how to use the signal-flow graph technique to solve some difficult electronic problems in a relatively simple manner. The term signal flow graph was used because of its original application to electronic problems and the association with electronic signals and flowcharts of the systems under study."
Lorens wrote: "Previous to Mason's work, C. E. Shannon worked out a number of the properties of what are now known as flow graphs. Unfortunately, the paper originally had a restricted classification and very few people had access to the material."
"The rules for the evaluation of the graph determinant of a Mason Graph were first given and proven by Shannon [1942] using mathematical induction. His work remained essentially unknown even after Mason published his classical work in 1953. Three years later, Mason [1956] rediscovered the rules and proved them by considering the value of a determinant and how it changes as variables are added to the graph. [...]"
== Domain of application ==
Robichaud et al. identify the domain of application of SFGs as follows:
"All the physical systems analogous to these networks [constructed of ideal transformers, active elements and gyrators] constitute the domain of application of the techniques developed [here]. Trent has shown that all the physical systems which satisfy the following conditions fall into this category.
The finite lumped system is composed of a number of simple parts, each of which has known dynamical properties which can be defined by equations using two types of scalar variables and parameters of the system. Variables of the first type represent quantities which can be measured, at least conceptually, by attaching an indicating instrument to two connection points of the element. Variables of the second type characterize quantities which can be measured by connecting a meter in series with the element. Relative velocities and positions, pressure differentials and voltages are typical quantities of the first class, whereas electric currents, forces, rates of heat flow, are variables of the second type. Firestone has been the first to distinguish these two types of variables with the names across variables and through variables.
Variables of the first type must obey a mesh law, analogous to Kirchhoff's voltage law, whereas variables of the second type must satisfy an incidence law analogous to Kirchhoff's current law.
Physical dimensions of appropriate products of the variables of the two types must be consistent. For the systems in which these conditions are satisfied, it is possible to draw a linear graph isomorphic with the dynamical properties of the system as described by the chosen variables. The techniques [...] can be applied directly to these linear graphs as well as to electrical networks, to obtain a signal flow graph of the system."
== Basic flow graph concepts ==
The following illustration and its meaning were introduced by Mason to illustrate basic concepts:
In the simple flow graphs of the figure, a functional dependence of a node is indicated by an incoming arrow, and the node originating this influence is the beginning of that arrow. In its most general form, the signal-flow graph indicates by incoming arrows only those nodes that influence the processing at the receiving node; at each node i, the incoming variables are processed according to a function associated with that node, say Fi. The flowgraph in (a) represents a set of explicit relationships:
{\displaystyle {\begin{aligned}x_{\mathrm {1} }&={\text{an independent variable}}\\x_{\mathrm {2} }&=F_{2}(x_{\mathrm {1} },x_{\mathrm {3} })\\x_{\mathrm {3} }&=F_{3}(x_{\mathrm {1} },x_{\mathrm {2} },x_{\mathrm {3} })\\\end{aligned}}}
Node x1 is an isolated node because no arrow is incoming; the equations for x2 and x3 have the graphs shown in parts (b) and (c) of the figure.
These relationships define for every node a function that processes the input signals it receives. Each non-source node combines the input signals in some manner, and broadcasts a resulting signal along each outgoing branch. "A flow graph, as defined originally by Mason, implies a set of functional relations, linear or not."
However, the commonly used Mason graph is more restricted, assuming that each node simply sums its incoming arrows, and that each branch involves only its initiating node. Thus, in this more restrictive approach, the node x1 is unaffected while:
{\displaystyle x_{2}=f_{21}(x_{1})+f_{23}(x_{3})}
{\displaystyle x_{3}=f_{31}(x_{1})+f_{32}(x_{2})+f_{33}(x_{3})\ ,}
and now the functions fij can be associated with the signal-flow branches ij joining the pair of nodes xi, xj, rather than having general relationships associated with each node. A contribution by a node to itself, like f33 for x3, is called a self-loop. Frequently these functions are simply multiplicative factors (often called transmittances or gains), for example, fij(xj) = cijxj, where cij is a scalar, but possibly a function of some parameter like the Laplace transform variable s. Signal-flow graphs are very often used with Laplace-transformed signals, because then they represent systems of linear differential equations. In this case the transmittance, c(s), is often called a transfer function.
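As a small numeric sketch of the linear Mason-graph convention just described, the two node equations above can be solved by substitution, with the self-loop f33 resolved algebraically. The branch gains and the source value below are illustrative, not taken from the figure:

```python
# Mason-graph sketch for the two node equations above, with linear branch
# transmittances f_ij(x_j) = c_ij * x_j (the numeric gains are illustrative).
c21, c23 = 2.0, 0.5              # branches into x2
c31, c32, c33 = 1.0, 0.25, 0.1   # branches into x3 (c33 is a self-loop)
x1 = 3.0                         # independent (source) node

# Substituting x2 into the x3 equation and solving the self-loop:
# x3 = c31*x1 + c32*(c21*x1 + c23*x3) + c33*x3
x3 = (c31 + c32 * c21) * x1 / (1.0 - c32 * c23 - c33)
x2 = c21 * x1 + c23 * x3

# Both node equations now hold:
assert abs(x2 - (c21 * x1 + c23 * x3)) < 1e-9
assert abs(x3 - (c31 * x1 + c32 * x2 + c33 * x3)) < 1e-9
```

The self-loop contributes the familiar 1/(1 − loop gain) factor that also appears later in the reduction rules and in Mason's gain formula.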
=== Choosing the variables ===
In general, there are several ways of choosing the variables in a complex system. Corresponding to each choice, a system of equations can be written, and each system of equations can be represented in a graph. This formulation of the equations becomes direct and automatic if one has at one's disposal techniques that permit drawing a graph directly from the schematic diagram of the system under study. The structure of the graphs thus obtained is related in a simple manner to the topology of the schematic diagram, and it becomes unnecessary to consider the equations, even implicitly, to obtain the graph. In some cases, one simply has to imagine the flow graph in the schematic diagram, and the desired answers can be obtained without even drawing the flow graph.
=== Non-uniqueness ===
Robichaud et al. wrote: "The signal flow graph contains the same information as the equations from which it is derived; but there does not exist a one-to-one correspondence between the graph and the system of equations. One system will give different graphs according to the order in which the equations are used to define the variable written on the left-hand side." If all equations relate all dependent variables, then there are n! possible SFGs to choose from.
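This non-uniqueness can be checked numerically. In the illustrative system below (not taken from the text), solving each equation for a different variable yields two distinct graphs with different branch gains, yet both sets of node equations are satisfied by the same solution:

```python
# Non-uniqueness sketch (illustrative system, not from the text):
#   2*x1 + x2 = 5  and  x1 + 3*x2 = 10  have the unique solution x1=1, x2=3.
# Ordering A solves equation 1 for x1 and equation 2 for x2, giving branch
# gains -1/2 (x2 -> x1) and -1/3 (x1 -> x2); ordering B solves them the
# other way round, giving gains -2 and -3: a different graph, same system.
x1, x2 = 1.0, 3.0

# Ordering A node equations:
assert abs(x1 - (5 - x2) / 2) < 1e-12
assert abs(x2 - (10 - x1) / 3) < 1e-12

# Ordering B node equations (same solution, different graph):
assert abs(x2 - (5 - 2 * x1)) < 1e-12
assert abs(x1 - (10 - 3 * x2)) < 1e-12
```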
== Linear signal-flow graphs ==
Linear signal-flow graph (SFG) methods apply only to linear time-invariant systems. When modeling a system of interest, the first step is often to determine the equations representing the system's operation without assigning causes and effects (this is called acausal modeling). A SFG is then derived from this system of equations.
A linear SFG consists of nodes indicated by dots and weighted directional branches indicated by arrows. The nodes are the variables of the equations and the branch weights are the coefficients. Signals may only traverse a branch in the direction indicated by its arrow. The elements of a SFG can only represent the operations of multiplication by a coefficient and addition, which are sufficient to represent the constrained equations. When a signal traverses a branch in its indicated direction, the signal is multiplied by the weight of the branch. When two or more branches direct into the same node, their outputs are added.
For systems described by linear algebraic or differential equations, the signal-flow graph is mathematically equivalent to the system of equations describing the system, and the equations governing the nodes are discovered for each node by summing incoming branches to that node. These incoming branches convey the contributions of the other nodes, expressed as the connected node value multiplied by the weight of the connecting branch, usually a real number or function of some parameter (for example a Laplace transform variable s).
For linear active networks, Choma writes: "By a 'signal flow representation' [or 'graph', as it is commonly referred to] we mean a diagram that, by displaying the algebraic relationships among relevant branch variables of network, paints an unambiguous picture of the way an applied input signal ‘flows’ from input-to-output ... ports."
A motivation for a SFG analysis is described by Chen:
"The analysis of a linear system reduces ultimately to the solution of a system of linear algebraic equations. As an alternative to conventional algebraic methods of solving the system, it is possible to obtain a solution by considering the properties of certain directed graphs associated with the system." [See subsection: Solving linear equations.] "The unknowns of the equations correspond to the nodes of the graph, while the linear relations between them appear in the form of directed edges connecting the nodes. ...The associated directed graphs in many cases can be set up directly by inspection of the physical system without the necessity of first formulating the associated equations..."
=== Basic components ===
A linear signal flow graph is related to a system of linear equations of the following form:
{\displaystyle {\begin{aligned}x_{\mathrm {j} }&=\sum _{\mathrm {k} =1}^{\mathrm {N} }t_{\mathrm {jk} }x_{\mathrm {k} }\end{aligned}}}
where tjk = transmittance (or gain) from xk to xj.
The figure to the right depicts various elements and constructs of a signal flow graph (SFG).
Exhibit (a) is a node. In this case, the node is labeled x. A node is a vertex representing a variable or signal.
A source node has only outgoing branches (represents an independent variable). As a special case, an input node is characterized by having one or more attached arrows pointing away from the node and no arrows pointing into the node. Any open, complete SFG will have at least one input node.
An output or sink node has only incoming branches (represents a dependent variable). Although any node can be an output, explicit output nodes are often used to provide clarity. Explicit output nodes are characterized by having one or more attached arrows pointing into the node and no arrows pointing away from the node. Explicit output nodes are not required.
A mixed node has both incoming and outgoing branches.
Exhibit (b) is a branch with a multiplicative gain of m. The meaning is that the output, at the tip of the arrow, is m times the input at the tail of the arrow. The gain can be a simple constant or a function (for example: a function of some transform variable such as s, ω, or z, for Laplace, Fourier or Z-transform relationships).
Exhibit (c) is a branch with a multiplicative gain of one. When the gain is omitted, it is assumed to be unity.
Exhibit (d): Vin is an input node. In this case, Vin is multiplied by the gain m.
Exhibit (e): Iout is an explicit output node; the incoming edge has a gain of m.
Exhibit (f) depicts addition. When two or more arrows point into a node, the signals carried by the edges are added.
Exhibit (g) depicts a simple loop. The loop gain is A × m.
Exhibit (h) depicts the expression Z = aX + bY.
Terms used in linear SFG theory also include:
Path. A path is a continuous set of branches traversed in the direction indicated by the branch arrows.
Open path. If no node is re-visited, the path is open.
Forward path. A path from an input node (source) to an output node (sink) that does not re-visit any node.
Path gain: the product of the gains of all the branches in the path.
Loop. A closed path: it originates and ends on the same node, and no node is touched more than once.
Loop gain: the product of the gains of all the branches in the loop.
Non-touching loops. Non-touching loops have no common nodes.
Graph reduction. Removal of one or more nodes from a graph using graph transformations.
Residual node. In any contemplated process of graph reduction, the nodes to be retained in the new graph are called residual nodes.
Splitting a node. Splitting a node corresponds to replacing it with two half nodes, one being a sink and the other a source.
Index: The index of a graph is the minimum number of nodes which have to be split in order to remove all the loops in a graph.
Index node. The nodes that are split to determine the index of a graph are called index nodes, and in general they are not unique.
=== Systematic reduction to sources and sinks ===
A signal-flow graph may be simplified by graph transformation rules. These simplification rules are also referred to as signal-flow graph algebra.
The purpose of this reduction is to relate the dependent variables of interest (residual nodes, sinks) to the independent variables (sources).
The systematic reduction of a linear signal-flow graph is a graphical method equivalent to the Gauss-Jordan elimination method for solving linear equations.
The rules presented below may be applied over and over until the signal flow graph is reduced to its "minimal residual form". Further reduction can require loop elimination or the use of a "reduction formula" with the goal of directly connecting sink nodes representing the dependent variables to the source nodes representing the independent variables. By these means, any signal-flow graph can be simplified by successively removing internal nodes until only the input, output and index nodes remain. Robichaud described this process of systematic flow-graph reduction:
The reduction of a graph proceeds by the elimination of certain nodes to obtain a residual graph showing only the variables of interest. This elimination of nodes is called "node absorption". This method is close to the familiar process of successive eliminations of undesired variables in a system of equations. One can eliminate a variable by removing the corresponding node in the graph. If one reduces the graph sufficiently, it is possible to obtain the solution for any variable and this is the objective which will be kept in mind in this description of the different methods of reduction of the graph. In practice, however, the techniques of reduction will be used solely to transform the graph to a residual graph expressing some fundamental relationships. Complete solutions will be more easily obtained by application of Mason's rule.
The graph itself programs the reduction process. Indeed a simple inspection of the graph readily suggests the different steps of the reduction which are carried out by elementary transformations, by loop elimination, or by the use of a reduction formula.
For digitally reducing a flow graph using an algorithm, Robichaud extends the notion of a simple flow graph to a generalized flow graph:
Before describing the process of reduction...the correspondence between the graph and a system of linear equations ... must be generalized...The generalized graphs will represent some operational relationships between groups of variables...To each branch of the generalized graph is associated a matrix giving the relationships between the variables represented by the nodes at the extremities of that branch...
The elementary transformations [defined by Robichaud in his Figure 7.2, p. 184] and the loop reduction permit the elimination of any node j of the graph by the reduction formula:[described in Robichaud's Equation 7-1]. With the reduction formula, it is always possible to reduce a graph of any order... [After reduction] the final graph will be a cascade graph in which the variables of the sink nodes are explicitly expressed as functions of the sources. This is the only method for reducing the generalized graph since Mason's rule is obviously inapplicable.
The definition of an elementary transformation varies from author to author:
Some authors only consider as elementary transformations the summation of parallel-edge gains and the multiplication of series-edge gains, but not the elimination of self-loops.
Other authors consider the elimination of a self-loop to be an elementary transformation.
Parallel edges. Replace parallel edges with a single edge having a gain equal to the sum of original gains.
The graph on the left has parallel edges between nodes. On the right, these parallel edges have been replaced with a single edge having a gain equal to the sum of the gains on each original edge.
The equations corresponding to the reduction between N and node I1 are:
{\displaystyle {\begin{aligned}N&=I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {1} }f_{\mathrm {2} }+I_{\mathrm {1} }f_{\mathrm {3} }+...\\N&=I_{\mathrm {1} }(f_{\mathrm {1} }+f_{\mathrm {2} }+f_{\mathrm {3} })+...\\\end{aligned}}}
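The parallel-edge rule is just the distributive law, which a short numeric check makes concrete (the gain values below are illustrative):

```python
# Parallel-edge reduction sketch: three branches I1 -> N with gains f1, f2,
# f3 collapse to a single branch of gain f1 + f2 + f3 (illustrative values).
f1, f2, f3 = 0.5, 1.5, -0.25
I1 = 2.0

N_parallel = I1 * f1 + I1 * f2 + I1 * f3   # separate parallel branches
N_merged = I1 * (f1 + f2 + f3)             # single merged branch
assert abs(N_parallel - N_merged) < 1e-12
```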
Outflowing edges. Replace outflowing edges with edges directly flowing from the node's sources.
The graph on the left has an intermediate node N between nodes from which it has inflows, and nodes to which it flows out.
The graph on the right shows direct flows between these node sets, without transiting via N.
For the sake of simplicity, N and its inflows are not represented. The outflows from N are eliminated.
The equations corresponding to the reduction directly relating N's input signals to its output signals are:
{\displaystyle {\begin{aligned}N&=I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} }\\O_{\mathrm {1} }&=g_{\mathrm {1} }N\\O_{\mathrm {2} }&=g_{\mathrm {2} }N\\O_{\mathrm {3} }&=g_{\mathrm {3} }N\\O_{\mathrm {1} }&=g_{\mathrm {1} }(I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} })\\O_{\mathrm {2} }&=g_{\mathrm {2} }(I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} })\\O_{\mathrm {3} }&=g_{\mathrm {3} }(I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} })\\O_{\mathrm {1} }&=I_{\mathrm {1} }f_{\mathrm {1} }g_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }g_{\mathrm {1} }+I_{\mathrm {3} }f_{\mathrm {3} }g_{\mathrm {1} }\\O_{\mathrm {2} }&=I_{\mathrm {1} }f_{\mathrm {1} }g_{\mathrm {2} }+I_{\mathrm {2} }f_{\mathrm {2} }g_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} }g_{\mathrm {2} }\\O_{\mathrm {3} }&=I_{\mathrm {1} }f_{\mathrm {1} }g_{\mathrm {3} }+I_{\mathrm {2} }f_{\mathrm {2} }g_{\mathrm {3} }+I_{\mathrm {3} }f_{\mathrm {3} }g_{\mathrm {3} }\\\end{aligned}}}
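The node-absorption step above can be verified numerically: after eliminating N, each pair (Ij, Ok) is connected by a direct branch of gain fj·gk. The gains and source values below are illustrative:

```python
# Node-absorption sketch for the intermediate node N (illustrative gains):
# after eliminating N, each pair (I_j, O_k) gets a direct branch of gain
# f_j * g_k, reproducing the equations above.
f = {'I1': 0.5, 'I2': 2.0, 'I3': -1.0}   # gains of branches into N
g = {'O1': 3.0, 'O2': 0.25, 'O3': 1.0}   # gains of branches out of N
I = {'I1': 1.0, 'I2': 2.0, 'I3': 4.0}    # source-node values

N = sum(f[j] * I[j] for j in I)          # signal via the intermediate node
with_N = {k: g[k] * N for k in g}
without_N = {k: sum(I[j] * f[j] * g[k] for j in I) for k in g}  # N absorbed

for k in g:
    assert abs(with_N[k] - without_N[k]) < 1e-12
```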
Zero-signal nodes.
Eliminate outflowing edges from a node determined to have a value of zero.
If the value of a node is zero, its outflowing edges can be eliminated.
Nodes without outflows.
Eliminate a node without outflows.
In this case, N is not a variable of interest, and it has no outgoing edges; therefore, N, and its inflowing edges, can be eliminated.
Self-looping edge. Replace looping edges by adjusting the gains on the incoming edges.
The graph on the left has a looping edge at node N, with a gain of g. On the right, the looping edge has been eliminated, and all inflowing edges have their gain divided by (1 − g). The equations corresponding to the reduction between N and all its input signals are:
{\displaystyle {\begin{aligned}N&=I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+\ldots +gN\\N&={\frac {f_{\mathrm {1} }}{1-g}}I_{\mathrm {1} }+{\frac {f_{\mathrm {2} }}{1-g}}I_{\mathrm {2} }+\ldots \\\end{aligned}}}
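The self-loop rule can be spot-checked numerically (the gains and source values below are illustrative; the rule requires g ≠ 1):

```python
# Self-loop elimination sketch (illustrative values; requires g != 1):
# with the loop, N = f1*I1 + f2*I2 + g*N; removing the loop divides every
# inflowing gain by (1 - g).
f1, f2 = 2.0, -0.5   # inflowing gains
I1, I2 = 1.0, 3.0    # source-node values
g = 0.4              # self-loop gain

N_loop = (f1 * I1 + f2 * I2) / (1.0 - g)   # solving N = f1*I1 + f2*I2 + g*N
N_reduced = (f1 / (1 - g)) * I1 + (f2 / (1 - g)) * I2
assert abs(N_loop - N_reduced) < 1e-9
```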
==== Implementations ====
The above procedure for building the SFG from an acausal system of equations, and for solving the SFG's gains, has been implemented as an add-on to MATHLAB 68, an on-line system providing machine aid for the mechanical symbolic processes encountered in analysis.
=== Solving linear equations ===
Signal flow graphs can be used to solve sets of simultaneous linear equations. The set of equations must be consistent and all equations must be linearly independent.
==== Putting the equations in "standard form" ====
For M equations with N unknowns, where each yj is a known value and each xj is an unknown value, there is an equation for each known value, of the following form.
{\displaystyle {\begin{aligned}\sum _{\mathrm {k} =1}^{\mathrm {N} }c_{\mathrm {jk} }x_{\mathrm {k} }&=y_{\mathrm {j} }\end{aligned}}}
; the usual form for simultaneous linear equations with 1 ≤ j ≤ M
Although it is feasible, particularly for simple cases, to establish a signal flow graph using the equations in this form, some rearrangement allows a general procedure that works easily for any set of equations, as now is presented. To proceed, first the equations are rewritten as
{\displaystyle {\begin{aligned}\sum _{\mathrm {k} =1}^{\mathrm {N} }c_{\mathrm {jk} }x_{\mathrm {k} }-y_{\mathrm {j} }&=0\end{aligned}}}
and further rewritten as
{\displaystyle {\begin{aligned}\sum _{\mathrm {k=1} }^{\mathrm {N} }c_{\mathrm {jk} }x_{\mathrm {k} }+x_{\mathrm {j} }-y_{\mathrm {j} }&=x_{\mathrm {j} }\end{aligned}}}
and finally rewritten as
{\displaystyle {\begin{aligned}\sum _{\mathrm {k=1} }^{\mathrm {N} }(c_{\mathrm {jk} }+\delta _{\mathrm {jk} })x_{\mathrm {k} }-y_{\mathrm {j} }&=x_{\mathrm {j} }\end{aligned}}}
; form suitable to be expressed as a signal flow graph.
where δjk is the Kronecker delta.
The signal-flow graph is now arranged by selecting one of these equations and addressing the node on the right-hand side. This node connects to itself with a branch whose weight includes a '+1', making a self-loop in the flow graph. The other terms in that equation connect this node first to the source in this equation and then to all the other branches incident on this node. Every equation is treated this way, and then each incident branch is joined to its respective emanating node. For example, the case of three variables is shown in the figure, and the first equation is:
{\displaystyle x_{1}=\left(c_{11}+1\right)x_{1}+c_{12}x_{2}+c_{13}x_{3}-y_{1}\ ,}
where the right side of this equation is the sum of the weighted arrows incident on node x1.
As there is a basic symmetry in the treatment of every node, a simple starting point is an arrangement of nodes with each node at one vertex of a regular polygon. When expressed using the general coefficients {cin}, the environment of each node is then just like all the rest apart from a permutation of indices. Such an implementation for a set of three simultaneous equations is seen in the figure.
Often the known values yj are taken to be the primary causes and the unknown values xj to be the effects, but regardless of this interpretation, the last form for the set of equations can be represented as a signal-flow graph. This point is discussed further in the subsection Interpreting 'causality'.
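The rearrangement above can be checked numerically: if x solves C x = y, then x is a fixed point of the SFG form x = (C + I) x − y. The 3×3 system below is illustrative:

```python
import numpy as np

# Standard-form sketch: C x = y rewritten as x = (C + I) x - y, the
# fixed-point form drawn as an SFG (the 3x3 system below is illustrative).
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(C, y)        # reference solution of C x = y

# SFG node equations x_j = sum_k (c_jk + delta_jk) x_k - y_j hold at x:
assert np.allclose((C + np.eye(3)) @ x - y, x)
```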
==== Applying Mason's gain formula ====
In the most general case, the values for all the xk variables can be calculated by computing Mason's gain formula for the path from each yj to each xk and using superposition.
{\displaystyle {\begin{aligned}x_{\mathrm {k} }&=\sum _{\mathrm {j} =1}^{\mathrm {M} }(G_{\mathrm {kj} })y_{\mathrm {j} }\end{aligned}}}
where Gkj = the sum of Mason's gain formula computed for all the paths from input yj to variable xk.
In general, there are N-1 paths from yj to variable xk, so the computational effort to calculate Gkj is proportional to N-1. Since there are M values of yj, Gkj must be computed M times for a single value of xk. The computational effort to calculate a single xk variable is proportional to (N-1)(M). The effort to compute all the xk variables is proportional to (N)(N-1)(M). If there are N equations and N unknowns, then the computational effort is on the order of N3.
== Relation to block diagrams ==
For some authors, a linear signal-flow graph is more constrained than a block diagram, in that the SFG rigorously describes linear algebraic equations represented by a directed graph.
For other authors, linear block diagrams and linear signal-flow graphs are equivalent ways of depicting a system, and either can be used to solve the gain.
A tabulation of the comparison between block diagrams and signal-flow graphs is provided by Bakshi & Bakshi, and another tabulation by Kumar. According to Barker et al.:
"The signal flow graph is the most convenient method for representing a dynamic system. The topology of the graph is compact and the rules for manipulating it are easier to program than the corresponding rules that apply to block diagrams."
In the figure, a simple block diagram for a feedback system is shown with two possible interpretations as a signal-flow graph. The input R(s) is the Laplace-transformed input signal; it is shown as a source node in the signal-flow graph (a source node has no input edges). The output signal C(s) is the Laplace-transformed output variable. It is represented as a sink node in the flow diagram (a sink has no output edges). G(s) and H(s) are transfer functions, with H(s) serving to feed back a modified version of the output to the input, B(s). The two flow graph representations are equivalent.
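For the feedback system just described, Mason's gain formula reduces to a single forward path and a single loop. The sketch below assumes negative feedback (loop gain −G·H) and uses illustrative numeric values in place of the transfer functions G(s) and H(s):

```python
# Closed-loop sketch for the feedback block diagram, via Mason's rule on its
# SFG: one forward path of gain G and one loop of gain -G*H (negative
# feedback assumed; the numeric values are illustrative).
def closed_loop_gain(G, H):
    # Mason: graph gain = forward-path gain / (1 - sum of loop gains)
    return G / (1.0 - (-G * H))

G, H, R = 10.0, 0.5, 1.0
C = closed_loop_gain(G, H) * R

# Spot-check against the node equations C = G*E with E = R - H*C:
E = R - H * C
assert abs(C - G * E) < 1e-9
```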
== Interpreting 'causality' ==
The term "cause and effect" was applied by Mason to SFGs:
"The process of constructing a graph is one of tracing a succession of cause and effects through the physical system. One variable is expressed as an explicit effect due to certain causes; they in turn, are recognized as effects due to still other causes."
— S.J. Mason: Section IV: Illustrative applications of flow graph technique
and has been repeated by many later authors:
"The signal flow graph is another visual tool for representing causal relationships between components of the system. It is a simplified version of a block diagram introduced by S.J. Mason as a cause-and-effect representation of linear systems."
— Arthur G.O. Mutambara: Design and Analysis of Control Systems, p.238
However, Mason's paper is concerned with showing in great detail how a set of equations is connected to an SFG, an emphasis unrelated to intuitive notions of "cause and effect". Intuitions can be helpful for arriving at an SFG or for gaining insight from an SFG, but they are inessential to the SFG. The essential connection of the SFG is to its own set of equations, as described, for example, by Ogata:
"A signal-flow graph is a diagram that represents a set of simultaneous algebraic equations. When applying the signal flow graph method to analysis of control systems, we must first transform linear differential equations into algebraic equations in [the Laplace transform variable] s."
— Katsuhiko Ogata: Modern Control Engineering, p. 104
There is no reference to "cause and effect" here, and as said by Borutzky:
"Like block diagrams, signal flow graphs represent the computational, not the physical structure of a system."
— Wolfgang Borutzky, Bond Graph Methodology, p. 10
The term "cause and effect" may be misinterpreted as it applies to the SFG, and taken incorrectly to suggest a system view of causality, rather than a computationally based meaning. To keep discussion clear, it may be advisable to use the term "computational causality", as is suggested for bond graphs:
"Bond-graph literature uses the term computational causality, indicating the order of calculation in a simulation, in order to avoid any interpretation in the sense of intuitive causality."
The term "computational causality" is explained using the example of current and voltage in a resistor:
"The computational causality of physical laws can therefore not be predetermined, but depends upon the particular use of that law. We cannot conclude whether it is the current flowing through a resistor that causes a voltage drop, or whether it is the difference in potentials at the two ends of the resistor that cause current to flow. Physically these are simply two concurrent aspects of one and the same physical phenomenon. Computationally, we may have to assume at times one position, and at other times the other."
— François Cellier & Ernesto Kofman: §1.5 Simulation software today and tomorrow, p. 15
A computer program or algorithm can be arranged to solve a set of equations using various strategies. The strategies differ in how they prioritize finding some of the variables in terms of the others, and these algorithmic decisions, which are simply about solution strategy, set up the variables expressed as dependent variables earlier in the solution as "effects", determined by the remaining variables that now are "causes", in the sense of "computational causality".
Using this terminology, it is computational causality, not system causality, that is relevant to the SFG. There exists a wide-ranging philosophical debate, not concerned specifically with the SFG, over connections between computational causality and system causality.
== Signal-flow graphs for analysis and design ==
Signal-flow graphs can be used for analysis, that is for understanding a model of an existing system, or for synthesis, that is for determining the properties of a design alternative.
=== Signal-flow graphs for dynamic systems analysis ===
When building a model of a dynamic system, a list of steps is provided by Dorf & Bishop:
Define the system and its components.
Formulate the mathematical model and list the needed assumptions.
Write the differential equations describing the model.
Solve the equations for the desired output variables.
Examine the solutions and the assumptions.
If needed, reanalyze or redesign the system.
—RC Dorf and RH Bishop, Modern Control Systems, Chapter 2, p. 2
In this workflow, equations of the physical system's mathematical model are used to derive the signal-flow graph equations.
=== Signal-flow graphs for design synthesis ===
Signal-flow graphs have been used in Design Space Exploration (DSE), as an intermediate representation towards a physical implementation. The DSE process seeks a suitable solution among different alternatives. In contrast with the typical analysis workflow, where a system of interest is first modeled with the physical equations of its components, the specification for synthesizing a design could be a desired transfer function. For example, different strategies would create different signal-flow graphs, from which implementations are derived.
Another example uses an annotated SFG as an expression of the continuous-time behavior, as input to an architecture generator.
== Shannon and Shannon-Happ formulas ==
Shannon's formula is an analytic expression for calculating the gain of an interconnected set of amplifiers in an analog computer. During World War II, while investigating the functional operation of an analog computer, Claude Shannon developed his formula. Because of wartime restrictions, Shannon's work was not published at that time, and, in 1952, Mason rediscovered the same formula.
William W. Happ generalized the Shannon formula for topologically closed systems. The Shannon-Happ formula can be used for deriving transfer functions, sensitivities, and error functions.
For a consistent set of linear unilateral relations, the Shannon-Happ formula expresses the solution using direct substitution (non-iterative).
NASA's electrical circuit software NASAP is based on the Shannon-Happ formula.
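To illustrate what "direct substitution" means here, the following sketch (illustrative only, not NASAP's actual algorithm) solves a small consistent set of linear unilateral relations by substituting each node's expression in dependency order; no iteration is required. The node names and gains are hypothetical.

```python
# Illustrative sketch: solving a consistent set of linear unilateral
# relations by direct (non-iterative) substitution in dependency order.
# Each relation gives one node signal as a weighted sum of signals that
# are already known (sources or previously computed nodes).

def solve_by_substitution(relations, sources):
    """relations: ordered dict, node -> list of (input_name, gain)."""
    values = dict(sources)
    for node, terms in relations.items():
        values[node] = sum(gain * values[name] for name, gain in terms)
    return values

# Hypothetical two-stage chain: x1 is the source, x2 = 3*x1, x3 = -2*x2.
relations = {"x2": [("x1", 3.0)], "x3": [("x2", -2.0)]}
print(solve_by_substitution(relations, {"x1": 1.0})["x3"])  # -6.0
```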
== Linear signal-flow graph examples ==
=== Simple voltage amplifier ===
The amplification of a signal V1 by an amplifier with gain a12 is described mathematically by
{\displaystyle V_{2}=a_{12}V_{1}\,.}
This relationship, represented by the signal-flow graph of Figure 1, states that V2 depends on V1 but implies no dependency of V1 on V2. See Kou page 57.
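As a minimal illustration, the single-branch graph can be coded directly; the branch is unilateral, so V2 is computed from V1 and not vice versa (the gain and signal values here are illustrative):

```python
# Figure 1 as code: node V2 depends on node V1 through branch gain a12,
# but nothing feeds back from V2 to V1 (the branch is unilateral).
a12 = 10.0      # illustrative branch gain
V1 = 0.5        # source node signal
V2 = a12 * V1   # dependent node signal
print(V2)       # 5.0
```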
=== Ideal negative feedback amplifier ===
A possible SFG for the asymptotic gain model for a negative feedback amplifier is shown in Figure 3, and leads to the equation for the gain of this amplifier as
{\displaystyle G={\frac {y_{2}}{x_{1}}}=G_{\infty }\left({\frac {T}{T+1}}\right)+G_{0}\left({\frac {1}{T+1}}\right)\ .}
The interpretation of the parameters is as follows: T = return ratio, G∞ = direct amplifier gain, G0 = feedforward (indicating the possible bilateral nature of the feedback, possibly deliberate as in the case of feedforward compensation). Figure 3 has the interesting aspect that it resembles Figure 2 for the two-port network with the addition of the extra feedback relation x2 = T y1.
From this gain expression an interpretation of the parameters G0 and G∞ is evident, namely:
{\displaystyle G_{\infty }=\lim _{T\to \infty }G\ ;\ G_{0}=\lim _{T\to 0}G\ .}
There are many possible SFGs associated with any particular gain relation. Figure 4 shows another SFG for the asymptotic gain model that can be easier to interpret in terms of a circuit. In this graph, parameter β is interpreted as a feedback factor and A as a "control parameter", possibly related to a dependent source in the circuit. Using this graph, the gain is
{\displaystyle G={\frac {y_{2}}{x_{1}}}=G_{0}+{\frac {A}{1-\beta A}}\ .}
To connect to the asymptotic gain model, parameters A and β cannot be arbitrary circuit parameters, but must relate to the return ratio T by:
{\displaystyle T=-\beta A\ ,}
and to the asymptotic gain as:
{\displaystyle G_{\infty }=\lim _{T\to \infty }G=G_{0}-{\frac {1}{\beta }}\ .}
Substituting these results into the gain expression,
{\displaystyle G=G_{0}+{\frac {1}{\beta }}{\frac {-T}{1+T}}=G_{0}+(G_{0}-G_{\infty }){\frac {-T}{1+T}}=G_{\infty }{\frac {T}{1+T}}+G_{0}{\frac {1}{1+T}}\ ,}
which is the formula of the asymptotic gain model.
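The algebra above can be verified numerically; with illustrative values for β, A, and G0, the two gain expressions agree once T = -βA and G∞ = G0 - 1/β are substituted:

```python
# Consistency check of the two SFG gain expressions with illustrative values:
# G = G0 + A/(1 - beta*A) must equal G_inf*T/(1+T) + G0/(1+T)
# once T = -beta*A and G_inf = G0 - 1/beta.
beta, A, G0 = 0.02, -1000.0, 0.5

T = -beta * A                  # return ratio
G_inf = G0 - 1.0 / beta        # asymptotic gain
g_direct = G0 + A / (1 - beta * A)
g_asymp = G_inf * T / (1 + T) + G0 / (1 + T)
print(abs(g_direct - g_asymp) < 1e-9)  # True
```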
=== Electrical circuit containing a two-port network ===
The figure to the right depicts a circuit that contains a y-parameter two-port network. Vin is the input of the circuit and V2 is the output. The two-port equations impose a set of linear constraints between its port voltages and currents. The terminal equations impose other constraints. All these constraints are represented in the SFG (signal flow graph) below the circuit. There is only one path from input to output, which is shown in a different color and has a (voltage) gain of -RLy21. There are also three loops: -Riny11, -RLy22, Riny21RLy12. Sometimes a loop indicates intentional feedback, but it can also indicate a constraint on the relationship of two variables. For example, the equation that describes a resistor says that the ratio of the voltage across the resistor to the current through the resistor is a constant, which is called the resistance. This can be interpreted to mean that the voltage is the input and the current is the output, that the current is the input and the voltage is the output, or merely that the voltage and current have a linear relationship. Virtually all passive two-terminal devices in a circuit will show up in the SFG as a loop.
The SFG and the schematic depict the same circuit, but the schematic also suggests the circuit's purpose. Compared to the schematic, the SFG is awkward but it does have the advantage that the input to output gain can be written down by inspection using Mason's rule.
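As a sketch of that inspection step, the following applies Mason's rule to the path and loops listed above (with illustrative y-parameters and terminations; only the first two loops are non-touching, and the single forward path touches every loop, so its cofactor is 1) and cross-checks the result against a direct solution of the circuit equations:

```python
# Mason's rule for the SFG above. Component values and y-parameters are
# illustrative. Loops: L1 = -Rin*y11, L2 = -RL*y22, L3 = Rin*y21*RL*y12;
# only L1 and L2 do not touch, and the single forward path P1 = -RL*y21
# touches every loop, so its cofactor is 1.
Rin, RL = 1e3, 2e3
y11, y12, y21, y22 = 1e-3, 1e-4, 2e-3, 5e-4

L1 = -Rin * y11
L2 = -RL * y22
L3 = Rin * y21 * RL * y12
delta = 1 - (L1 + L2 + L3) + L1 * L2      # graph determinant
G_mason = (-RL * y21) / delta             # Mason's gain formula

# Cross-check: solve V1*(1 + Rin*y11) = Vin - Rin*y12*V2 and
# V2*(1 + RL*y22) = -RL*y21*V1 for V2/Vin directly.
G_direct = -RL * y21 / ((1 + Rin * y11) * (1 + RL * y22) - Rin * RL * y12 * y21)
print(abs(G_mason - G_direct) < 1e-9)     # True
```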
=== Mechatronics : Position servo with multi-loop feedback ===
This example is representative of an SFG (signal-flow graph) used to represent a servo control system and illustrates several features of SFGs. Some of the loops (loop 3, loop 4 and loop 5) are extrinsic, intentionally designed feedback loops; these are shown with dotted lines. There are also intrinsic loops (loop 0, loop 1, loop 2) that are not intentional feedback loops, although they can be analyzed as though they were; these loops are shown with solid lines. Loop 3 and loop 4 are also known as minor loops because they are inside a larger loop.
The forward path begins with θC, the desired position command. This is multiplied by KP, which could be a constant or a function of frequency. KP incorporates the conversion gain of the DAC and any filtering on the DAC output. The output of KP is the velocity command VωC, which is multiplied by KV, which can be a constant or a function of frequency. The output of KV is the current command, VIC, which is multiplied by KC, which can be a constant or a function of frequency. The output of KC is the amplifier output voltage, VA. The current, IM, through the motor winding is the integral of the voltage applied to the inductance. The motor produces a torque, T, proportional to IM. Permanent magnet motors tend to have a linear current to torque function. The conversion constant of current to torque is KM. The torque, T, divided by the load moment of inertia, M, is the acceleration, α, which is integrated to give the load velocity ω, which is integrated to produce the load position, θLC.
The forward path of loop 0 asserts that acceleration is proportional to torque and the velocity is the time integral of acceleration. The backward path says that as the speed increases there is a friction or drag that counteracts the torque. Torque on the load decreases proportionately to the load velocity until the point is reached that all the torque is used to overcome friction and the acceleration drops to zero. Loop 0 is intrinsic.
Loop 1 represents the interaction of an inductor's current with its internal and external series resistance. The current through an inductance is the time integral of the voltage across the inductance. When a voltage is first applied, all of it appears across the inductor. This is shown by the forward path through {\displaystyle {\frac {1}{s\mathrm {L} _{\mathrm {M} }}}\,}. As the current increases, voltage is dropped across the inductor internal resistance RM and the external resistance RS. This reduces the voltage across the inductor and is represented by the feedback path -(RM + RS). The current continues to increase, but at a steadily decreasing rate, until the current reaches the point at which all the voltage is dropped across (RM + RS). Loop 1 is intrinsic.
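A simple Euler-integration sketch of loop 1, with illustrative component values, shows this behavior: the current rises toward VA/(RM + RS) with time constant LM/(RM + RS).

```python
# Euler-integration sketch of loop 1: the winding current is the integral
# of the voltage across the inductance, and the resistive drop is fed back.
# Values are illustrative; the current settles toward VA/(RM+RS) = 5 A
# with time constant LM/(RM+RS) = 5 ms.
VA, LM, RM, RS = 10.0, 0.01, 1.0, 1.0    # volts, henries, ohms
dt, i = 1e-6, 0.0
for _ in range(100000):                  # 0.1 s, many time constants
    v_L = VA - (RM + RS) * i             # feedback path -(RM + RS)
    i += v_L / LM * dt                   # forward path 1/(s*LM)
print(round(i, 3))                       # 5.0
```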
Loop 2 expresses the effect of the motor back EMF. Whenever a permanent magnet motor rotates, it acts like a generator and produces a voltage in its windings. It does not matter whether the rotation is caused by a torque applied to the drive shaft or by current applied to the windings. This voltage is referred to as back EMF. The conversion gain of rotational velocity to back EMF is GM. The polarity of the back EMF is such that it diminishes the voltage across the winding inductance. Loop 2 is intrinsic.
Loop 3 is extrinsic. The current in the motor winding passes through a sense resistor. The voltage, VIM, developed across the sense resistor is fed back to the negative terminal of the power amplifier KC. This feedback causes the voltage amplifier to act like a voltage-controlled current source. Since the motor torque is proportional to motor current, the sub-system from VIC to the output torque acts like a voltage-controlled torque source. This sub-system may be referred to as the "current loop" or "torque loop". Loop 3 effectively diminishes the effects of loop 1 and loop 2.
Loop 4 is extrinsic. A tachometer (actually a low-power DC generator) produces an output voltage VωM that is proportional to its angular velocity. This voltage is fed to the negative input of KV. This feedback causes the sub-system from VωC to the load angular velocity to act like a voltage-controlled velocity source. This sub-system may be referred to as the "velocity loop". Loop 4 effectively diminishes the effects of loop 0 and loop 3.
Loop 5 is extrinsic. This is the overall position feedback loop. The feedback comes from an angle encoder that produces a digital output. The output position is subtracted from the desired position by digital hardware which drives a DAC which drives KP. In the SFG, the conversion gain of the DAC is incorporated into KP.
See Mason's rule for development of Mason's Gain Formula for this example.
== Terminology and classification of signal-flow graphs ==
There is some confusion in literature about what a signal-flow graph is; Henry Paynter, inventor of bond graphs, writes: "But much of the decline of signal-flow graphs [...] is due in part to the mistaken notion that the branches must be linear and the nodes must be summative. Neither assumption was embraced by Mason, himself !"
=== Standards covering signal-flow graphs ===
IEEE Std 155-1960, IEEE Standards on Circuits: Definitions of Terms for Linear Signal Flow Graphs, 1960.
This IEEE standard defines a signal-flow graph as a network of directed branches representing dependent and independent signals as nodes. Incoming branches carry branch signals to the dependent node signals. A dependent node signal is the algebraic sum of the incoming branch signals at that node, i.e. nodes are summative.
=== State transition signal-flow graph ===
A state transition SFG or state diagram is a simulation diagram for a system of equations, including the initial conditions of the states.
=== Closed flowgraph ===
Closed flowgraphs describe closed systems and have been utilized to provide a rigorous theoretical basis for topological techniques of circuit analysis.
Terminology for closed flowgraph theory includes:
Contributive node. Summing point for two or more incoming signals resulting in only one outgoing signal.
Distributive node. Sampling point for two or more outgoing signals resulting from only one incoming signal.
Compound node. Contraction of a contributive node and a distributive node.
Strictly dependent & strictly independent node. A strictly independent node represents an independent source; a strictly dependent node represents a meter.
Open & Closed Flowgraphs. An open flowgraph contains strictly dependent or strictly independent nodes; otherwise it is a closed flowgraph.
== Nonlinear flow graphs ==
Mason introduced both nonlinear and linear flow graphs. To clarify this point, Mason wrote: "A linear flow graph is one whose associated equations are linear."
=== Examples of nonlinear branch functions ===
If we denote by xj the signal at node j, the following are examples of node functions that do not pertain to a linear time-invariant system:
{\displaystyle {\begin{aligned}x_{\mathrm {j} }&=x_{\mathrm {k} }\times x_{\mathrm {l} }\\x_{\mathrm {k} }&=abs(x_{\mathrm {j} })\\x_{\mathrm {l} }&=\log(x_{\mathrm {k} })\\x_{\mathrm {m} }&=t\times x_{\mathrm {j} }{\text{ ,where }}t{\text{ represents time}}\\\end{aligned}}}
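These branch functions can be written directly as node updates; the sketch below uses illustrative signal values, with t as the time-varying coefficient in the last relation:

```python
import math

# The nonlinear branch functions listed above, written as node updates.
# Node signals are plain numbers; the dependence on t makes the last
# relation time-variant rather than merely nonlinear.
def update(xk, xl, t):
    xj = xk * xl             # product of two node signals
    xk_new = abs(xj)         # absolute value
    xl_new = math.log(xk)    # logarithm (xk must be positive)
    xm = t * xj              # time-varying gain
    return xj, xk_new, xl_new, xm

print(update(2.0, 3.0, t=0.5))   # (6.0, 6.0, 0.693..., 3.0)
```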
=== Examples of nonlinear signal-flow graph models ===
Although they generally cannot be transformed between time-domain and frequency-domain representations for classical control theory analysis, nonlinear signal-flow graphs can be found in the electrical engineering literature.
Nonlinear signal-flow graphs can also be found in life sciences, for example, Dr Arthur Guyton's model of the cardiovascular system.
== Applications of SFG techniques in various fields of science ==
Electronic circuits
Characterizing sequential circuits of the Moore and Mealy type, obtaining regular expressions from state diagrams.
Synthesis of non-linear data converters
Control and network theory
Stochastic signal processing.
Reliability of electronic systems
Physiology and biophysics
Cardiac output regulation
Simulation
Simulation on analog computers
Neuroscience and Combinatorics
Study of Polychrony
== See also ==
Asymptotic gain model
Bond graphs
Coates graph
Control Systems/Signal Flow Diagrams in the Control Systems Wikibook
Flow graph (mathematics)
Leapfrog filter for an example of filter design using a signal flow graph
Mason's gain formula
Minor loop feedback
Noncommutative signal-flow graph
== Notes ==
== References ==
Ernest J. Henley & R. A. Williams (1973). Graph theory in modern engineering; computer aided design, control, optimization, reliability analysis. Academic Press. ISBN 978-0-08-095607-7. Book almost entirely devoted to this topic.
Kou, Benjamin C. (1967), Automatic Control Systems, Prentice Hall
Robichaud, Louis P.A.; Maurice Boisvert; Jean Robert (1962). Signal flow graphs and applications. Prentice-Hall electrical engineering series. Englewood Cliffs, N.J.: Prentice Hall. pp. xiv, 214 p.
Deo, Narsingh (1974), Graph Theory with Applications to Engineering and Computer Science, PHI Learning Pvt. Ltd., p. 418, ISBN 978-81-203-0145-0
K Thulasiramen; MNS Swarmy (2011). "§6.11 The Coates and Mason graphs". Graphs: Theory and algorithms. John Wiley & Sons. pp. 163 ff. ISBN 9781118030257.
Ogata, Katsuhiko (2002). "Section 3-9 Signal Flow Graphs". Modern Control Engineering 4th Edition. Prentice-Hal. ISBN 978-0-13-043245-2.
Phang, Khoman (2000-12-14). "2.5 An overview of Signal-flow graphs" (PDF). CMOS Optical Preamplifier Design Using Graphical Circuit Analysis (Thesis). Department of Electrical and Computer Engineering, University of Toronto.
== Further reading ==
Wai-Kai Chen (1976). Applied Graph Theory. North Holland Publishing Company. ISBN 978-0720423624. Chapter 3 for the essentials, but applications are scattered throughout the book.
Wai-Kai Chen (May 1964). "Some applications of linear graphs". Contract DA-28-043-AMC-00073 (E). Coordinated Science Laboratory, University of Illinois, Urbana. Archived from the original on January 10, 2015.
K. Thulasiraman & M. N. S. Swamy (1992). Graphs: Theory and Algorithms. John Wiley & Sons. 6.10-6.11 for the essential mathematical idea. ISBN 978-0-471-51356-8.
Shu-Park Chan (2006). "Graph theory". In Richard C. Dorf (ed.). Circuits, Signals, and Speech and Image Processing (3rd ed.). CRC Press. § 3.6. ISBN 978-1-4200-0308-6. Compares Mason and Coates graph approaches with Maxwell's k-tree approach.
RF Hoskins (2014). "Flow-graph and signal flow-graph analysis of linear systems". In SR Deards (ed.). Recent Developments in Network Theory: Proceedings of the Symposium Held at the College of Aeronautics, Cranfield, September 1961. Elsevier. ISBN 9781483223568. A comparison of the utility of the Coates flow graph and the Mason flow graph.
== External links ==
M. L. Edwards: S-parameters, signal flow graphs, and other matrix representations
H Schmid: Signal-Flow Graphs in 12 Short Lessons
Control Systems/Signal Flow Diagrams at Wikibooks
Media related to Signal flow graphs at Wikimedia Commons | Wikipedia/Signal-flow_graph |
Positive systems constitute a class of systems with the important property that their state variables are never negative, given a positive initial state. These systems appear frequently in practical applications, as their variables represent physical quantities that carry a positive sign (levels, heights, concentrations, etc.).
The fact that a system is positive has important implications for control system design. For instance, an asymptotically stable positive linear time-invariant system always admits a diagonal quadratic Lyapunov function, which makes these systems more numerically tractable in the context of Lyapunov analysis.
It is also important to take this positivity into account for state observer design, as standard observers (for example Luenberger observers) might give illogical negative values.
== Conditions for positivity ==
A continuous-time linear system
{\displaystyle {\dot {x}}=Ax}
is positive if and only if A is a Metzler matrix.
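A Metzler matrix is one whose off-diagonal entries are all nonnegative, so the positivity condition is a direct entrywise check (the matrix below is illustrative):

```python
# A Metzler matrix has nonnegative off-diagonal entries; checking this is a
# direct test for positivity of the system x' = Ax. Values are illustrative.
def is_metzler(A):
    n = len(A)
    return all(A[i][j] >= 0 for i in range(n) for j in range(n) if i != j)

A = [[-2.0, 1.0],
     [0.5, -3.0]]     # e.g. a compartmental model's rate matrix
print(is_metzler(A))  # True
```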
A discrete-time linear system
{\displaystyle x(k+1)=Ax(k)}
is positive if and only if A is a nonnegative matrix.
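The discrete-time condition can likewise be checked entrywise, and a short simulation (with an illustrative nonnegative A) shows that a nonnegative state remains nonnegative:

```python
# For x(k+1) = A x(k), entrywise nonnegativity of A guarantees that a
# nonnegative state stays nonnegative. Values are illustrative.
def is_nonnegative(A):
    return all(a >= 0 for row in A for a in row)

def step(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[0.5, 0.2],
     [0.1, 0.7]]
x = [1.0, 2.0]
for _ in range(50):
    x = step(A, x)
print(is_nonnegative(A), all(xi >= 0 for xi in x))  # True True
```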
== See also ==
Metzler matrix
Nonnegative matrix
Positive feedback
== References == | Wikipedia/Positive_systems |
A proportional–integral–derivative controller (PID controller or three-term controller) is a feedback-based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value (setpoint or SP) with the actual value of the system (process variable or PV). The difference between these two values is called the error value, denoted as
{\displaystyle e(t)}.
It then applies corrective actions automatically to bring the PV to the same value as the SP using three methods: The proportional (P) component responds to the current error value by producing an output that is directly proportional to the magnitude of the error. This provides immediate correction based on how far the system is from the desired setpoint. The integral (I) component, in turn, considers the cumulative sum of past errors to address any residual steady-state errors that persist over time, eliminating lingering discrepancies. Lastly, the derivative (D) component predicts future error by assessing the rate of change of the error, which helps to mitigate overshoot and enhance system stability, particularly when the system undergoes rapid changes. The PID output signal can directly control actuators through voltage, current, or other modulation methods, depending on the application. The PID controller reduces the likelihood of human error and improves automation.
A common example is a vehicle’s cruise control system. For instance, when a vehicle encounters a hill, its speed will decrease if the engine power output is kept constant. The PID controller adjusts the engine's power output to restore the vehicle to its desired speed, doing so efficiently with minimal delay and overshoot.
The theoretical foundation of PID controllers dates back to the early 1920s with the development of automatic steering systems for ships. This concept was later adopted for automatic process control in manufacturing, first appearing in pneumatic actuators and evolving into electronic controllers. PID controllers are widely used in numerous applications requiring accurate, stable, and optimized automatic control, such as temperature regulation, motor speed control, and industrial process management.
== Fundamental operation ==
The distinguishing feature of the PID controller is the ability to use the three control terms of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an error value
{\displaystyle e(t)}
as the difference between a desired setpoint
{\displaystyle {\text{SP}}=r(t)}
and a measured process variable
{\displaystyle {\text{PV}}=y(t)}:
{\displaystyle e(t)=r(t)-y(t)}
, and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a control variable
{\displaystyle u(t)}
, such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.
The PID controller directly generates a continuous control signal based on error, without discrete modulation.
In this model:
Term P is proportional to the current value of the SP − PV error
{\displaystyle e(t)}
. For example, if the error is large, the control output will be proportionately large by using the gain factor "Kp". Using proportional control alone will result in an error between the set point and the process value because the controller requires an error to generate the proportional output response. In steady state process conditions an equilibrium is reached, with a steady SP-PV "offset".
Term I accounts for past values of the SP − PV error and integrates them over time to produce the I term. For example, if there is a residual SP − PV error after the application of proportional control, the integral term seeks to eliminate the residual error by adding a control effect due to the historic cumulative value of the error. When the error is eliminated, the integral term will cease to grow. This will result in the proportional effect diminishing as the error decreases, but this is compensated for by the growing integral effect.
Term D is a best estimate of the future trend of the SP − PV error, based on its current rate of change. It is sometimes called "anticipatory control", as it is effectively seeking to reduce the effect of the SP − PV error by exerting a control influence generated by the rate of error change. The more rapid the change, the greater the controlling or damping effect.
Tuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response.
Control action – The mathematical model and practical loop above both use a direct control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual - desired) but is in fact the correction needed (desired - actual). The system is called reverse acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop were 100–0% valve opening for 0–100% control output, the controller action would have to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.
=== Control function ===
The overall control function is
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}
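A minimal discrete-time sketch of this control law replaces the integral with a running sum and the derivative with a backward difference; the gains and the first-order process used to exercise it are illustrative, not a prescription:

```python
# Minimal discrete-time PID: the integral term becomes a running sum and
# the derivative term a backward difference. Gains are illustrative.
def make_pid(Kp, Ki, Kd, dt):
    state = {"integral": 0.0, "prev_e": 0.0}
    def pid(e):
        state["integral"] += e * dt
        deriv = (e - state["prev_e"]) / dt
        state["prev_e"] = e
        return Kp * e + Ki * state["integral"] + Kd * deriv
    return pid

# Drive an illustrative first-order process y' = -y + u to setpoint r = 1.
dt = 0.01
pid = make_pid(Kp=2.0, Ki=1.0, Kd=0.1, dt=dt)
y, r = 0.0, 1.0
for _ in range(2000):          # 20 s of simulated time
    u = pid(r - y)             # controller output
    y += (-y + u) * dt         # Euler step of the process
print(round(y, 2))             # settles near 1.0
```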
where
{\displaystyle K_{\text{p}}}, {\displaystyle K_{\text{i}}}, and {\displaystyle K_{\text{d}}}, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D).
=== Standard form ===
In the standard form of the equation (see later in article),
{\displaystyle K_{\text{i}}} and {\displaystyle K_{\text{d}}} are respectively replaced by {\displaystyle K_{\text{p}}/T_{\text{i}}} and {\displaystyle K_{\text{p}}T_{\text{d}}}; the advantage of this being that {\displaystyle T_{\text{i}}} and {\displaystyle T_{\text{d}}} have some understandable physical meaning, as they represent an integration time and a derivative time respectively. {\displaystyle K_{\text{p}}T_{\text{d}}} is the time constant with which the controller will attempt to approach the set point. {\displaystyle K_{\text{p}}/T_{\text{i}}} determines how long the controller will tolerate the output being consistently above or below the set point.
{\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right)}
where {\displaystyle T_{\text{i}}={K_{\text{p}} \over K_{\text{i}}}} is the integration time constant, and {\displaystyle T_{\text{d}}={K_{\text{d}} \over K_{\text{p}}}} is the derivative time constant.
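The conversions between the two parameterizations follow directly from these definitions; the gain values below are illustrative:

```python
# Converting between the parallel gains (Kp, Ki, Kd) and the standard-form
# times (Ti, Td): Ti = Kp/Ki and Td = Kd/Kp. Values are illustrative.
def to_standard(Kp, Ki, Kd):
    return Kp / Ki, Kd / Kp          # (Ti, Td)

def to_parallel(Kp, Ti, Td):
    return Kp / Ti, Kp * Td          # (Ki, Kd)

Kp, Ki, Kd = 2.0, 0.5, 0.25
Ti, Td = to_standard(Kp, Ki, Kd)
print(Ti, Td)                         # 4.0 0.125
print(to_parallel(Kp, Ti, Td))        # (0.5, 0.25), a round trip
```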
=== Selective use of control terms ===
Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value.
=== Applicability ===
The use of the PID algorithm does not guarantee optimal control of the system or its control stability (see § Limitations, below). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation is required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.
== History ==
=== Origins ===
The centrifugal governor was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed.
With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt's self-designed "conical pendulum" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept.
Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper On Governors. He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. The problem was examined further in 1874 by Edward Routh, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria.
In subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor.
About this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868.
Another early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically based.
It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.
His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Trials were carried out on the USS New Mexico, with the controllers controlling the angular velocity (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve.
The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others in the 1930s.
=== Industrial control ===
The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback. This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows. The integral term was called Reset. Later the derivative term was added by a further bellows and adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–1.0 bar) was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0–100%.
In the 1950s, when high-gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10–50 mA and 4–20 mA current loop signals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.
Most modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers.
=== Electronic analog controllers ===
Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers.
== Control loop example ==
Consider a robotic arm that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, and external forces on the arm such as a load to lift or work to be done on an external object.
The sensed position is the process variable (PV).
The desired position is called the setpoint (SP).
The difference between the PV and SP is the error (e), which quantifies whether the arm is too low or too high and by how much.
The input to the process (the electric current in the motor) is the output from the PID controller. It is called either the manipulated variable (MV) or the control variable (CV).
The PID controller continuously adjusts the input current to achieve smooth motion.
By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).
=== Proportional ===
The obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but less force is needed if the error is on the up side. That's where the integral and derivative terms play their part.
=== Integral ===
An integral term increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both slow to react at the start (since the action is small at the beginning and needs time to become significant) and overly aggressive at the end (the action keeps increasing as long as the error is positive, even once the error is near zero).
Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If it decreases, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable.
=== Derivative ===
A derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but rather the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).
=== Control damping ===
In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.
=== Response to disturbances ===
If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.
=== Applications ===
In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, force, feed rate, flow rate, chemical composition (component concentrations), weight, position, speed, and practically every other variable for which a measurement exists.
== Controller theory ==
This section describes the parallel or non-interacting form of the PID controller. For other forms please see § Alternative nomenclature and forms.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining
{\displaystyle u(t)}
as the controller output, the final form of the PID algorithm is
{\displaystyle u(t)=\mathrm {MV} (t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\frac {de(t)}{dt}},}
where
{\displaystyle K_{\text{p}}} is the proportional gain, a tuning parameter,
{\displaystyle K_{\text{i}}} is the integral gain, a tuning parameter,
{\displaystyle K_{\text{d}}} is the derivative gain, a tuning parameter,
{\displaystyle e(t)=\mathrm {SP} -\mathrm {PV} (t)} is the error (SP is the setpoint, and PV(t) is the process variable),
{\displaystyle t} is the time or instantaneous time (the present),
{\displaystyle \tau } is the variable of integration (takes on values from time 0 to the present {\displaystyle t}).
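To make the control law concrete, the continuous-time algorithm can be discretized with a fixed sample time. The following is a minimal sketch, not a production algorithm; the function names and gains are illustrative:

```python
def make_pid(kp, ki, kd, dt):
    """Return a discrete PID step function implementing
    u = Kp*e + Ki*integral(e) + Kd*de/dt with sample time dt."""
    state = {"integral": 0.0, "prev_error": None}

    def step(setpoint, pv):
        error = setpoint - pv                      # e(t) = SP - PV(t)
        state["integral"] += error * dt            # rectangular integration
        derivative = 0.0
        if state["prev_error"] is not None:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# With Ki = Kd = 0 the controller reduces to pure proportional action:
p_only = make_pid(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
print(p_only(setpoint=1.0, pv=0.5))  # 2.0 * 0.5 = 1.0
```

Real implementations add refinements discussed later in this article (derivative filtering, anti-windup, bumpless transfer), but the three summed terms above are the core of the algorithm.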
Equivalently, the transfer function in the Laplace domain of the PID controller is
{\displaystyle L(s)=K_{\text{p}}+K_{\text{i}}/s+K_{\text{d}}s={\frac {K_{\text{d}}s^{2}+K_{\text{p}}s+K_{\text{i}}}{s}},}
where {\displaystyle s} is the complex angular frequency.
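The algebraic equivalence of the sum-of-terms and single-fraction forms of L(s) can be spot-checked numerically; this is only an illustrative check, with made-up gain values:

```python
# Two forms of the PID transfer function L(s); they should agree for all s != 0.
def L_parallel(s, kp, ki, kd):
    return kp + ki / s + kd * s

def L_rational(s, kp, ki, kd):
    return (kd * s**2 + kp * s + ki) / s

for s in (0.5, 1.0, 2.0, 10.0):
    a = L_parallel(s, kp=2.0, ki=0.5, kd=0.1)
    b = L_rational(s, kp=2.0, ki=0.5, kd=0.1)
    assert abs(a - b) < 1e-12
print("forms agree")
```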
=== Proportional term ===
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain constant.
The proportional term is given by
{\displaystyle P_{\text{out}}=K_{\text{p}}e(t).}
A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.
==== Steady-state error ====
The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to the proportional gain. SSE may be mitigated by adding a compensating bias term to the setpoint and output, or corrected dynamically by adding an integral term.
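The gain dependence of steady-state error can be seen in a small simulation. This sketch assumes a hypothetical first-order plant with unit static gain; all names and numbers are made-up example values, not from the original:

```python
# Illustrative only: steady-state error of pure proportional control on a
# first-order plant dy/dt = (k*u - y)/tau, simulated by Euler integration.
def p_only_sse(kp, k=1.0, tau=1.0, sp=1.0, dt=0.01, steps=20000):
    y = 0.0
    for _ in range(steps):
        u = kp * (sp - y)            # proportional action only
        y += dt * (k * u - y) / tau  # first-order plant response
    return sp - y                    # residual steady-state error

print(round(p_only_sse(kp=4.0), 3))   # ≈ 1/(1 + 4) = 0.2
print(round(p_only_sse(kp=19.0), 3))  # ≈ 1/(1 + 19) = 0.05
```

The residual error shrinks as the proportional gain grows, but never reaches zero with proportional action alone.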
=== Integral term ===
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output.
The integral term is given by
{\displaystyle I_{\text{out}}=K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau .}
The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning).
=== Derivative term ===
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain Kd, which sets the magnitude of the derivative term's contribution to the overall control action.
The derivative term is given by
{\displaystyle D_{\text{out}}=K_{\text{d}}{\frac {de(t)}{dt}}.}
Derivative action predicts system behavior and thus improves settling time and stability of the system. An ideal derivative is not causal, so that implementations of PID controllers include an additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers – because of its variable impact on system stability in real-world applications.
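The low-pass filtering commonly applied to the derivative term can be sketched as a first-order filter on the raw backward difference. The filter form and names here are illustrative, not a specific vendor algorithm:

```python
def make_filtered_derivative(kd, dt, tf):
    """Derivative term with a first-order low-pass filter of time
    constant tf, limiting high-frequency gain (a sketch; real
    implementations vary in how the filter is parameterized)."""
    state = {"d": 0.0, "prev": None}

    def step(error):
        raw = 0.0
        if state["prev"] is not None:
            raw = (error - state["prev"]) / dt    # raw backward difference
        state["prev"] = error
        alpha = tf / (tf + dt)                    # filter coefficient in [0, 1)
        state["d"] = alpha * state["d"] + (1 - alpha) * raw
        return kd * state["d"]

    return step

# With tf = 0 the filter is a pass-through and the raw derivative is recovered:
d = make_filtered_derivative(kd=1.0, dt=0.1, tf=0.0)
d(0.0)
print(d(0.5))  # raw slope = 0.5 / 0.1 = 5.0
```

Larger tf suppresses more measurement noise but also delays the derivative action, so the filter constant is itself a tuning trade-off.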
== Loop tuning ==
Tuning a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.
Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control. Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.
Some processes have a degree of nonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions).
=== Stability ===
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by excess gain, particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired.
Mathematically, the origins of instability can be seen in the Laplace domain.
The closed-loop transfer function is
{\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}},}
where {\displaystyle K(s)} is the PID transfer function, and {\displaystyle G(s)} is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some {\displaystyle s}. This happens when {\displaystyle K(s)G(s)=-1}, that is, when {\displaystyle |K(s)G(s)|=1} with a 180° phase shift. Stability is guaranteed when {\displaystyle |K(s)G(s)|<1} for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.
=== Optimal behavior ===
The optimal behavior on a process change or setpoint change varies depending on the application.
Two basic requirements are regulation (disturbance rejection – staying at a given setpoint) and command tracking (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.
=== Overview of tuning methods ===
There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.
The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.
=== Manual tuning ===
If the system must remain online, one tuning method is to first set {\displaystyle K_{i}} and {\displaystyle K_{d}} to zero. Increase {\displaystyle K_{p}} until the output of the loop oscillates; then set {\displaystyle K_{p}} to approximately half that value for a "quarter amplitude decay"-type response. Then increase {\displaystyle K_{i}} until any offset is corrected in sufficient time for the process, but not so far that instability results. Finally, increase {\displaystyle K_{d}}, if required, until the loop is acceptably quick to reach its reference after a load disturbance; too much {\displaystyle K_{d}} causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a {\displaystyle K_{p}} setting significantly less than half of the {\displaystyle K_{p}} setting that was causing oscillation.
=== Ziegler–Nichols method ===
Another heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the {\displaystyle K_{i}} and {\displaystyle K_{d}} gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain {\displaystyle K_{u}}, at which the output of the loop starts to oscillate constantly. {\displaystyle K_{u}} and the oscillation period {\displaystyle T_{u}} are then used to set the gains.
The oscillation frequency is often measured instead, in which case taking the reciprocal in each multiplication yields the same result.
These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative gains {\displaystyle K_{i}} and {\displaystyle K_{d}} depend on the oscillation period {\displaystyle T_{u}}.
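The widely quoted classic Ziegler–Nichols rules for the parallel PID form set Kp = 0.6 Ku, Ki = 1.2 Ku/Tu, and Kd = 0.075 Ku Tu. As a sketch (the helper name is illustrative):

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols tuning for the parallel PID form,
    given ultimate gain ku and oscillation period tu."""
    kp = 0.6 * ku
    ki = 1.2 * ku / tu         # equivalently kp / (tu / 2)
    kd = 0.075 * ku * tu       # equivalently kp * (tu / 8)
    return kp, ki, kd

print(ziegler_nichols_pid(ku=10.0, tu=2.0))  # (6.0, 6.0, 1.5)
```

These rules aim for quarter-amplitude decay and tend to give aggressive tunings; they are a starting point rather than a final answer.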
=== Cohen–Coon parameters ===
This method was developed in 1953 and is based on a first-order-plus-time-delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters were developed to yield a closed-loop response with a decay ratio of {\displaystyle {\tfrac {1}{4}}}. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
=== Relay (Åström–Hägglund) method ===
Published in 1984 by Karl Johan Åström and Tore Hägglund, the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. The output is switched (as if by a relay, hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.
Specifically, the ultimate period {\displaystyle T_{u}} is assumed to be equal to the observed period, and the ultimate gain is computed as
{\displaystyle K_{u}=4b/\pi a,}
where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it.
There are numerous variants on the relay method.
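The ultimate-gain formula can be applied directly to measured relay-test amplitudes. A minimal sketch with illustrative names and example numbers:

```python
import math

def relay_ultimate_gain(a, b):
    """Estimate the ultimate gain from a relay (Astrom-Hagglund) test:
    a = amplitude of the process variable oscillation,
    b = amplitude of the control output change which caused it."""
    return 4.0 * b / (math.pi * a)

# Example: a relay swing of 5 output units produces a PV oscillation
# of amplitude 0.5; the observed oscillation period is taken as Tu.
ku = relay_ultimate_gain(a=0.5, b=5.0)
print(round(ku, 3))  # 4*5 / (pi*0.5) ≈ 12.732
```

The resulting Ku and the measured period Tu can then be fed into the Ziegler–Nichols rules as described above.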
=== First-order model with dead time ===
The transfer function for a first-order process with dead time is
{\displaystyle y(s)={\frac {k_{\text{p}}e^{-\theta s}}{\tau _{\text{p}}s+1}}u(s),}
where kp is the process gain, τp is the time constant, θ is the dead time, and u(s) is a step change input. Converting this transfer function to the time domain results in
{\displaystyle y(t)=k_{\text{p}}\Delta u\left(1-e^{-(t-\theta )/\tau _{\text{p}}}\right),}
using the same parameters found above.
It is important when using this method to apply a step-change input large enough that the output can be measured; however, too large a step change can affect the process stability. A larger step change also helps ensure that the measured output change is due to the step rather than to a disturbance (for best results, try to minimize disturbances when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain (kp) is equal to the change in output divided by the change in input. The dead time θ is the amount of time between when the step change occurred and when the output first changed. The time constant (τp) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants.
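The 63.2% method can be sketched on sampled step-response data. The helper name and thresholds below are illustrative; the synthetic data is generated from the first-order-plus-dead-time model above with known parameters, so the fit can be checked:

```python
import math

def fit_foptd(t, y, u_step, y0=0.0, t_step=0.0):
    """Estimate (kp, theta, tau) of a first-order-plus-dead-time model
    from step-response samples: t (times) and y (outputs), given the
    input step size u_step applied at t_step. Sketch of the 63.2% method."""
    y_final = y[-1]                           # assume the response has settled
    kp = (y_final - y0) / u_step              # process gain = change in output / change in input
    # Dead time: time until the output first moves.
    theta = next(ti for ti, yi in zip(t, y) if abs(yi - y0) > 1e-9) - t_step
    # Time constant: time (after the dead time) to reach 63.2% of the change.
    target = y0 + 0.632 * (y_final - y0)
    t63 = next(ti for ti, yi in zip(t, y) if yi >= target)
    tau = t63 - t_step - theta
    return kp, theta, tau

# Synthetic data from y(t) = kp*du*(1 - exp(-(t - theta)/tau)), kp=2, theta=1, tau=3:
ts = [i * 0.001 for i in range(20001)]        # 0 .. 20 s
ys = [0.0 if ti < 1 else 2.0 * (1 - math.exp(-(ti - 1) / 3)) for ti in ts]
kp, theta, tau = fit_foptd(ts, ys, u_step=1.0)
print(round(kp, 1), round(theta, 1), round(tau, 1))  # → 2.0 1.0 3.0
```

As the surrounding text notes, the weakness of this approach is the wait for a settled final value when the time constant is large.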
=== Tuning software ===
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients.
Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.
Advances in automated PID loop tuning software also deliver algorithms for tuning PID loops in a dynamic or non-steady state (NSS) scenario. The software models the dynamics of a process through a disturbance and calculates PID control parameters in response.
== Limitations ==
While PID controllers are applicable to many control problems and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide optimal control. The fundamental difficulty with PID control is that it is a feedback control system with constant parameters and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for an observer that has no model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer.
PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade-off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.
The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.
=== Linearity and symmetry ===
PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded.
A nonlinear valve in a flow control application, for instance, will result in variable loop sensitivity that requires damping to prevent instability. One solution is to include a model of the valve's nonlinearity in the control algorithm to compensate for this.
An asymmetric application is, for example, temperature control in HVAC systems that use only active heating (via a heating element), with cooling being only passive. In this case overshoot of rising temperature can only be corrected slowly; active cooling is not available to force the temperature downward as a function of the control output. The PID controller could be tuned to be over-damped to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the setpoint. The inherent degradation of control quality in this application could be solved by the application of active cooling.
=== Noise in derivative term ===
A problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as a PI controller.
== Modifications to the algorithm ==
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.
=== Integral windup ===
One common problem resulting from ideal PID implementations is integral windup. Following a large change in setpoint, the integral term can accumulate an error larger than the maximal value for the regulation variable (windup); the system then overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by:
Disabling the integration until the PV has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds
Back-calculating the integral term to constrain the regulator output within feasible bounds.
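The second strategy, clamping the integral accumulator to pre-determined bounds, can be sketched as follows; the names, gains, and bounds are illustrative:

```python
def make_pi_clamped(kp, ki, dt, i_min, i_max):
    """PI controller whose integral accumulator is clamped to
    [i_min, i_max] so it cannot wind up without bound."""
    state = {"integral": 0.0}

    def step(setpoint, pv):
        error = setpoint - pv
        state["integral"] += error * dt
        # Clamp the accumulated term to the pre-determined bounds.
        state["integral"] = max(i_min, min(i_max, state["integral"]))
        return kp * error + ki * state["integral"]

    return step

# A large sustained error saturates the integrator at i_max instead of growing:
pi = make_pi_clamped(kp=1.0, ki=1.0, dt=1.0, i_min=-2.0, i_max=2.0)
for _ in range(10):
    u = pi(setpoint=10.0, pv=0.0)
print(u)  # 1.0*10 + 1.0*2.0 = 12.0 (integral held at 2.0)
```

Back-calculation schemes instead feed the difference between the saturated and unsaturated output back into the integrator, which is slightly more involved but avoids choosing fixed bounds.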
=== Overshooting from known disturbances ===
For example, suppose a PID loop is used to control the temperature of an electric resistance furnace and the system has stabilized. Now when the door is opened and something cold is put into the furnace, the temperature drops below the setpoint. The integral action of the controller tends to compensate for the error by introducing another error in the positive direction. This overshoot can be avoided by freezing the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace.
=== PI controller ===
A PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.
The controller output is given by
{\displaystyle K_{P}\Delta +K_{I}\int \Delta \,dt,}
where {\displaystyle \Delta } is the error or deviation of the actual measured value (PV) from the setpoint (SP):
{\displaystyle \Delta =SP-PV.}
A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators:
{\displaystyle C={\frac {G(1+\tau s)}{\tau s}},}
where {\displaystyle G=K_{P}} is the proportional gain and {\displaystyle {\frac {G}{\tau }}=K_{I}} is the integral gain.
Setting a value for {\displaystyle G} is often a trade-off between decreasing overshoot and increasing settling time.
The lack of derivative action may make the system more steady in the steady state in the case of noisy data. This is because derivative action is more sensitive to higher-frequency terms in the inputs.
Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.
=== Deadband ===
Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.
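The hold-within-deadband behavior can be sketched as a small wrapper around the calculated output; the names and the deadband width are illustrative:

```python
def make_deadband_output(deadband):
    """Hold the actual output unless the newly calculated output has
    moved outside the deadband around the last value sent."""
    state = {"last": None}

    def apply(calculated):
        if state["last"] is None or abs(calculated - state["last"]) > deadband:
            state["last"] = calculated       # change exceeds deadband: move
        return state["last"]                 # otherwise hold the previous output

    return apply

out = make_deadband_output(deadband=0.5)
print(out(10.0))  # 10.0 (first value passes through)
print(out(10.3))  # 10.0 (change of 0.3 is within the deadband: held)
print(out(10.8))  # 10.8 (change of 0.8 exceeds the deadband)
```

The valve or other actuator therefore only moves when the controller asks for a change large enough to matter, at the cost of a small tolerated offset.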
=== Setpoint step change ===
The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate some of the following modifications:
Setpoint ramping
In this modification, the setpoint is gradually moved from its old value to a newly specified value using a linear or first-order differential ramp function. This avoids the discontinuity present in a simple step change.
Derivative of the process variable
In this case the PID controller measures the derivative of the measured PV, rather than the derivative of the error. This quantity is always continuous (i.e., never has a step change as a result of changed setpoint). This modification is a simple case of setpoint weighting.
Setpoint weighting
Setpoint weighting adds adjustable factors (usually between 0 and 1) to the setpoint in the error in the proportional and derivative element of the controller. The error in the integral term must be the true control error to avoid steady-state control errors. These two extra parameters do not affect the response to load disturbances and measurement noise and can be tuned to improve the controller's setpoint response.
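The derivative-of-the-process-variable modification can be sketched as follows. This is a minimal illustration with made-up names; note that −d(PV)/dt equals de/dt whenever the setpoint is constant:

```python
def make_pid_d_on_pv(kp, ki, kd, dt):
    """PID variant taking the derivative of the measured PV rather than
    of the error, so a setpoint step produces no derivative kick."""
    state = {"integral": 0.0, "prev_pv": None}

    def step(setpoint, pv):
        error = setpoint - pv
        state["integral"] += error * dt
        d_pv = 0.0
        if state["prev_pv"] is not None:
            d_pv = (pv - state["prev_pv"]) / dt
        state["prev_pv"] = pv
        # -d(PV)/dt matches de/dt for a constant setpoint, but stays
        # continuous across setpoint steps.
        return kp * error + ki * state["integral"] - kd * d_pv

    return step

# A setpoint step with an unchanged PV leaves the derivative term at zero:
pid = make_pid_d_on_pv(kp=1.0, ki=0.0, kd=5.0, dt=0.1)
pid(setpoint=0.0, pv=0.0)
print(pid(setpoint=1.0, pv=0.0))  # only the proportional term: 1.0
```

A derivative-on-error controller would instead see an enormous slope at the instant of the step and kick the output accordingly.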
=== Feed-forward ===
The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.
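The motion-control example above can be sketched as follows. This is an illustrative fragment, not a production controller: kff is an assumed feed-forward scaling from desired acceleration to actuator force, and the PI feedback only corrects the residual error.

```python
def control_force(vel_sp, vel_meas, accel_sp, state, Kp, Ki, kff, dt):
    """One step of a velocity PI loop with acceleration feed-forward."""
    error = vel_sp - vel_meas
    state["integral"] += Ki * error * dt
    feedback = Kp * error + state["integral"]   # closed-loop PI part
    feedforward = kff * accel_sp                # open-loop part; unaffected by feedback
    return feedforward + feedback
```

When the measured velocity already matches the setpoint, the output is pure feed-forward, illustrating why the feed-forward path cannot by itself cause oscillation.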
=== Bumpless operation ===
PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.
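One way to realize both ideas above is sketched below (names are illustrative): the accumulator stores the integral gain already multiplied in, so changing Ki later does not jump the output, and a gain change recomputes the accumulator so that the total output at the current error is preserved.

```python
class BumplessPID:
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.acc = 0.0            # accumulates Ki*e*dt, not raw error
        self.prev_e = 0.0
    def step(self, e):
        self.acc += self.Ki * e * self.dt
        d = self.Kd * (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.Kp * e + self.acc + d
    def set_gains(self, Kp, Ki, Kd, e):
        # Recalculate the accumulator so the output at error e is continuous.
        old = self.Kp * e + self.acc + self.Kd * (e - self.prev_e) / self.dt
        self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
        new_pd = self.Kp * e + self.Kd * (e - self.prev_e) / self.dt
        self.acc = old - new_pd
```

After `set_gains`, the next output differs from the previous one only by the new integral increment rather than jumping with the proportional gain.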
=== Other improvements ===
In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic, or computational verb logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another method for improving a PID controller is to increase its degrees of freedom by using fractional-order calculus: making the order of the integrator and differentiator tunable adds flexibility to the controller.
== Cascade control ==
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the set point of the other. A PID controller acts as outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as inner loop controller, which reads the output of the outer loop controller as setpoint, usually controlling a more rapidly changing parameter, such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.
For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.
The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system it controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response.
== Alternative nomenclature and forms ==
=== Standard versus parallel (ideal) form ===
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the {\displaystyle K_{p}} gain is applied to the {\displaystyle I_{\mathrm {out} }} and {\displaystyle D_{\mathrm {out} }} terms, yielding:
{\displaystyle u(t)=K_{p}\left(e(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau +T_{d}{\frac {d}{dt}}e(t)\right)}
where
{\displaystyle T_{i}} is the integral time
{\displaystyle T_{d}} is the derivative time
In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which compensates for future and past errors. The proportional term acts on the current error. The derivative term attempts to predict the error value {\displaystyle T_{d}} seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral term adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in {\displaystyle T_{i}} seconds (or samples). The resulting compensated single error value is then scaled by the single gain {\displaystyle K_{p}} to compute the control variable.
In the parallel form, shown in the controller theory section,
{\displaystyle u(t)=K_{p}e(t)+K_{i}\int _{0}^{t}e(\tau )\,d\tau +K_{d}{\frac {d}{dt}}e(t)}
the gain parameters are related to the parameters of the standard form through {\displaystyle K_{i}=K_{p}/T_{i}} and {\displaystyle K_{d}=K_{p}T_{d}}. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors and is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.
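The relations between the two parameterizations can be captured in a pair of small conversion helpers, a direct transcription of Ki = Kp/Ti and Kd = Kp*Td from the text (function names are illustrative):

```python
def standard_to_parallel(Kp, Ti, Td):
    """Convert standard-form (Kp, Ti, Td) to parallel-form (Kp, Ki, Kd)."""
    return Kp, Kp / Ti, Kp * Td

def parallel_to_standard(Kp, Ki, Kd):
    """Convert parallel-form (Kp, Ki, Kd) back to standard-form (Kp, Ti, Td)."""
    return Kp, Kp / Ki, Kd / Kp
```

The two conversions are exact inverses of each other, so tuning results expressed in one form can be transferred to a controller implemented in the other.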
=== Reciprocal gain, a.k.a. proportional band ===
In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain {\displaystyle K_{p}} not as "output per degree", but rather in the reciprocal form of a proportional band {\displaystyle 100/K_{p}}, which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.
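As a minimal sketch of the relationship (names are illustrative), a proportional-band controller computes a 0–100% output from an error in process units, with Kp = 100/band, saturating outside the band:

```python
def p_output_percent(error, band):
    """Proportional-band controller: output in percent, Kp = 100/band."""
    Kp = 100.0 / band
    return max(0.0, min(100.0, Kp * error))
```

For example, with a 20-degree band a 5-degree error gives 25% output, and any error beyond 20 degrees saturates at 100%.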
=== Basing derivative action on PV ===
In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.
=== Basing proportional action on PV ===
Most commercial control systems offer the option of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.
Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.
{\displaystyle \mathrm {MV(t)} =K_{p}\left(\,{-PV(t)}+{\frac {1}{T_{i}}}\int _{0}^{t}{e(\tau )}\,{d\tau }-T_{d}{\frac {d}{dt}}PV(t)\right)}
King describes an effective chart-based method.
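A discrete step of the MV(t) equation above can be sketched as follows (an illustrative fragment; a real controller would also handle output saturation). Proportional and derivative action use -PV, so only the integral term responds to a setpoint change:

```python
def pv_based_pid(sp, pv, state, Kp, Ti, Td, dt):
    """One step of a PID with proportional and derivative action on PV."""
    error = sp - pv
    state["integral"] += error * dt / Ti        # integral still uses true error
    if state["prev_pv"] is None:
        dpv = 0.0
    else:
        dpv = (pv - state["prev_pv"]) / dt      # derivative of PV, not of error
    state["prev_pv"] = pv
    return Kp * (-pv + state["integral"] - Td * dpv)
```

A sudden setpoint step therefore changes the output only gradually, through the integral accumulation.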
=== Laplace form ===
Sometimes it is useful to write the PID regulator in Laplace transform form:
{\displaystyle G(s)=K_{p}+{\frac {K_{i}}{s}}+K_{d}{s}={\frac {K_{d}{s^{2}}+K_{p}{s}+K_{i}}{s}}}
Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.
=== Series/interacting form ===
Another representation of the PID controller is the series, or interacting form
{\displaystyle G(s)=K_{c}({\frac {1}{\tau _{i}{s}}}+1)(\tau _{d}{s}+1)}
where the parameters are related to the parameters of the standard form through {\displaystyle K_{p}=K_{c}\cdot \alpha }, {\displaystyle T_{i}=\tau _{i}\cdot \alpha }, and {\displaystyle T_{d}={\frac {\tau _{d}}{\alpha }}} with {\displaystyle \alpha =1+{\frac {\tau _{d}}{\tau _{i}}}}.
This form essentially consists of a PD and PI controller in series. Because the integral term is used to calculate the controller's bias, this form makes it possible to track an external bias value, which is required for the proper implementation of multi-controller advanced control schemes.
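The series-to-standard conversion above can be written directly from the alpha relations (function name is illustrative):

```python
def series_to_standard(Kc, tau_i, tau_d):
    """Convert series/interacting (Kc, tau_i, tau_d) to standard (Kp, Ti, Td)."""
    alpha = 1.0 + tau_d / tau_i
    return Kc * alpha, tau_i * alpha, tau_d / alpha
```

Note that alpha >= 1, so the standard-form Kp and Ti are never smaller than the series-form Kc and tau_i.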
=== Discrete implementation ===
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretized. Approximations for first-order derivatives are made by backward finite differences.
{\displaystyle u(t)} and {\displaystyle e(t)} are discretized with a sampling period {\displaystyle \Delta t}, where k is the sample index.
Differentiating both sides of PID equation using Newton's notation gives:
{\displaystyle {\dot {u}}(t)=K_{p}{\dot {e}}(t)+K_{i}e(t)+K_{d}{\ddot {e}}(t)}
Derivative terms are approximated as,
{\displaystyle {\dot {f}}(t_{k})={\dfrac {df(t_{k})}{dt}}={\dfrac {f(t_{k})-f(t_{k-1})}{\Delta t}}}
So,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\dot {e}}(t_{k})-{\dot {e}}(t_{k-1})}{\Delta t}}}
Applying backward difference again gives,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}-{\frac {e(t_{k-1})-e(t_{k-2})}{\Delta t}}}{\Delta t}}}
By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in a MCU is finally obtained:
{\displaystyle u(t_{k})=u(t_{k-1})+\left(K_{p}+K_{i}\Delta t+{\dfrac {K_{d}}{\Delta t}}\right)e(t_{k})+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {K_{d}}{\Delta t}}e(t_{k-2})}
or:
{\displaystyle u(t_{k})=u(t_{k-1})+K_{p}\left[\left(1+{\dfrac {\Delta t}{T_{i}}}+{\dfrac {T_{d}}{\Delta t}}\right)e(t_{k})+\left(-1-{\dfrac {2T_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {T_{d}}{\Delta t}}e(t_{k-2})\right]}
with {\displaystyle T_{i}=K_{p}/K_{i},T_{d}=K_{d}/K_{p}}
Note: This method in fact solves
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}+u_{0}}
where {\displaystyle u_{0}} is a constant independent of t. This constant is useful for start-and-stop control of the regulation loop. For instance, setting Kp, Ki, and Kd to 0 will keep u(t) constant. Likewise, when starting a regulation on a system where the error is already close to 0 with u(t) non-null, it prevents the output from being driven to 0.
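The recurrence derived above translates directly into a few lines of code. The following is a minimal sketch (names are illustrative) of the velocity-form update u(t_k) = u(t_{k-1}) + A0*e(t_k) + A1*e(t_{k-1}) + A2*e(t_{k-2}):

```python
def make_discrete_pid(Kp, Ki, Kd, dt, u0=0.0):
    """Discretized (velocity-form) PID with output initialized to u0."""
    A0 = Kp + Ki * dt + Kd / dt
    A1 = -Kp - 2.0 * Kd / dt
    A2 = Kd / dt
    e = [0.0, 0.0, 0.0]          # e(t_k), e(t_{k-1}), e(t_{k-2})
    state = {"u": u0}
    def step(error):
        e[2], e[1], e[0] = e[1], e[0], error
        state["u"] += A0 * e[0] + A1 * e[1] + A2 * e[2]
        return state["u"]
    return step
```

Setting all three gains to 0 leaves the output pinned at u0, demonstrating the constant-offset property discussed above.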
== Pseudocode ==
Here is a simple, explicit pseudocode implementation of the basic PID loop:
Kp - proportional gain
Ki - integral gain
Kd - derivative gain
dt - loop interval time (assumes reasonable scale)
previous_error := 0
integral := 0
loop:
error := setpoint − measured_value
proportional := error
integral := integral + error × dt
derivative := (error - previous_error) / dt
output := Kp × proportional + Ki × integral + Kd × derivative
previous_error := error
wait(dt)
goto loop
The pseudocode below illustrates how to implement a PID controller as an IIR filter. The Z-transform of a PID can be written as ({\displaystyle \Delta _{t}} is the sampling time):
{\displaystyle C(z)=K_{p}+K_{i}\Delta _{t}{\frac {z}{z-1}}+{\frac {K_{d}}{\Delta _{t}}}{\frac {z-1}{z}}}
and expressed in an IIR form (in agreement with the discrete implementation shown above):
{\displaystyle C(z)={\frac {\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)z^{-1}+{\dfrac {K_{d}}{\Delta _{t}}}z^{-2}}{1-z^{-1}}}}
We can then deduce the recursive iteration often found in FPGA implementations:
{\displaystyle u[n]=u[n-1]+\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)\epsilon [n]+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)\epsilon [n-1]+{\dfrac {K_{d}}{\Delta _{t}}}\epsilon [n-2]}
A0 := Kp + Ki*dt + Kd/dt
A1 := -Kp - 2*Kd/dt
A2 := Kd/dt
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
wait(dt)
goto loop
Here, Kp is a dimensionless number, Ki is expressed in {\displaystyle s^{-1}} and Kd is expressed in s. When performing a regulation where the actuator and the measured value are not in the same unit (e.g. temperature regulation using a motor controlling a valve), Kp, Ki, and Kd may be corrected by a unit conversion factor. It may also be useful to use Ki in its reciprocal form (integration time). The above implementation allows an I-only controller to be realized, which may be useful in some cases.
In the real world, this is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation, the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.
Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm.
A common issue when using {\displaystyle K_{d}} is the response to the derivative of a rising or falling edge of the setpoint. A typical workaround is to filter the derivative action using a low-pass filter of time constant {\displaystyle \tau _{d}/N} where {\displaystyle 3\leq N\leq 10}.
A variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative:
A0 := Kp + Ki*dt
A1 := -Kp
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
A0d := Kd/dt
A1d := - 2.0*Kd/dt
A2d := Kd/dt
N := 5
tau := Kd / (Kp*N) // IIR filter time constant
alpha := dt / (2*tau)
d0 := 0
d1 := 0
fd0 := 0
fd1 := 0
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
// PI
output := output + A0 * error[0] + A1 * error[1]
// Filtered D
d1 := d0
d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]
fd1 := fd0
fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
output := output + fd0
wait(dt)
goto loop
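The filtered-derivative variant above can be translated into Python as a sketch (names are illustrative): the PI part runs in velocity form, and the derivative increment passes through a first-order bilinear (Tustin) low-pass with time constant tau = Kd/(Kp*N):

```python
def make_filtered_pid(Kp, Ki, Kd, dt, N=5, u0=0.0):
    """Velocity-form PI plus a low-pass-filtered derivative increment."""
    A0, A1 = Kp + Ki * dt, -Kp                       # PI coefficients
    A0d, A1d, A2d = Kd / dt, -2.0 * Kd / dt, Kd / dt  # raw derivative increment
    tau = Kd / (Kp * N)                              # IIR filter time constant
    alpha = dt / (2.0 * tau)
    st = {"e": [0.0, 0.0, 0.0], "u": u0, "d0": 0.0, "fd0": 0.0}
    def step(error):
        e = st["e"]
        e[2], e[1], e[0] = e[1], e[0], error
        st["u"] += A0 * e[0] + A1 * e[1]             # PI increment
        d1, st["d0"] = st["d0"], A0d * e[0] + A1d * e[1] + A2d * e[2]
        # Bilinear low-pass applied to the derivative increment
        st["fd0"] = (alpha / (alpha + 1.0)) * (st["d0"] + d1) \
                    - ((alpha - 1.0) / (alpha + 1.0)) * st["fd0"]
        st["u"] += st["fd0"]
        return st["u"]
    return step
```

On a setpoint edge the filtered derivative rises over a few samples instead of producing a single large spike, then decays while the error stays constant.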
== See also ==
Control theory
Active disturbance rejection control
== Notes ==
== References ==
== Further reading ==
== External links ==
PID tuning using Mathematica
PID tuning using Python
Principles of PID Control and Tuning
Introduction to the key terms associated with PID Temperature Control
=== PID tutorials ===
PID Control in MATLAB/Simulink and Python with TCLab
What's All This P-I-D Stuff, Anyhow? Article in Electronic Design
Shows how to build a PID controller with basic electronic components (pg. 22)
PID Without a PhD
PID Control with MATLAB and Simulink
PID with single Operational Amplifier
Proven Methods and Best Practices for PID Control
PID Tuning Guide: A Best-Practices Approach to Understanding and Tuning PID Controllers
Michael Barr (2002-07-30), Introduction to Closed-Loop Control, Embedded Systems Programming, archived from the original on 2010-02-09
Jinghua Zhong, Mechanical Engineering, Purdue University (Spring 2006). "PID Controller Tuning: A Short Tutorial" (PDF). Archived from the original (PDF) on 2015-04-21. Retrieved 2013-12-04.{{cite web}}: CS1 maint: multiple names: authors list (link)
Introduction to P,PI,PD & PID Controller with MATLAB
Improving The Beginners PID
A fuzzy control system is a control system based on fuzzy logic – a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).
Fuzzy logic is widely used in machine control. The term "fuzzy" refers to the fact that the logic involved can deal with concepts that cannot be expressed as "true" or "false" but rather as "partially true". Although alternative approaches such as genetic algorithms and neural networks can perform just as well as fuzzy logic in many cases, fuzzy logic has the advantage that the solution to the problem can be cast in terms that human operators can understand, so that their experience can be used in the design of the controller. This makes it easier to mechanize tasks that are already successfully performed by humans.
== History and applications ==
Fuzzy logic was proposed by Lotfi A. Zadeh of the University of California at Berkeley in a 1965 paper. He elaborated on his ideas in a 1973 paper that introduced the concept of "linguistic variables", which in this article equates to a variable defined as a fuzzy set. Other research followed, with the first industrial application, a cement kiln built in Denmark, coming on line in 1976.
Fuzzy systems were initially implemented in Japan.
Interest in fuzzy systems was sparked by Seiji Yasunobu and Soji Miyamoto of Hitachi, who in 1985 provided simulations that demonstrated the feasibility of fuzzy control systems for the Sendai Subway. Their ideas were adopted, and fuzzy systems were used to control accelerating, braking, and stopping when the Namboku Line opened in 1987.
In 1987, Takeshi Yamakawa demonstrated the use of fuzzy control, through a set of simple dedicated fuzzy logic chips, in an "inverted pendulum" experiment. This is a classic control problem, in which a vehicle tries to keep a pole, mounted on its top by a hinge, upright by moving back and forth. Yamakawa subsequently made the demonstration more sophisticated by mounting a wine glass containing water and even a live mouse to the top of the pendulum: the system maintained stability in both cases. Yamakawa eventually went on to organize his own fuzzy-systems research lab to help exploit his patents in the field.
Japanese engineers subsequently developed a wide range of fuzzy systems for both industrial and consumer applications. In 1988 Japan established the Laboratory for International Fuzzy Engineering (LIFE), a cooperative arrangement between 48 companies to pursue fuzzy research. The automotive company Volkswagen was the only foreign corporate member of LIFE, dispatching a researcher for a duration of three years.
Japanese consumer goods often incorporate fuzzy systems. Matsushita vacuum cleaners use microcontrollers running fuzzy algorithms to interrogate dust sensors and adjust suction power accordingly. Hitachi washing machines use fuzzy controllers with load-weight, fabric-mix, and dirt sensors to automatically set the wash cycle for the best use of power, water, and detergent.
Canon developed an autofocusing camera that uses a charge-coupled device (CCD) to measure the clarity of the image in six regions of its field of view and uses the information provided to determine whether the image is in focus. It also tracks the rate of change of lens movement during focusing, and controls its speed to prevent overshoot. The camera's fuzzy control system uses 12 inputs: 6 to obtain the current clarity data provided by the CCD and 6 to measure the rate of change of lens movement. The output is the position of the lens. The fuzzy control system uses 13 rules and requires 1.1 kilobytes of memory.
An industrial air conditioner designed by Mitsubishi uses 25 heating rules and 25 cooling rules. A temperature sensor provides input, with control outputs fed to an inverter, a compressor valve, and a fan motor. Compared to the previous design, the fuzzy controller heats and cools five times faster, reduces power consumption by 24%, increases temperature stability by a factor of two, and uses fewer sensors.
Other applications investigated or implemented include: character and handwriting recognition; optical fuzzy systems; robots, including one for making Japanese flower arrangements; voice-controlled robot helicopters (hovering is a "balancing act" rather similar to the inverted pendulum problem); rehabilitation robotics to provide patient-specific solutions (e.g. to control heart rate and blood pressure); control of the flow of powders in film manufacture; elevator systems; and so on.
Work on fuzzy systems is also proceeding in North America and Europe, although on a less extensive scale than in Japan.
The US Environmental Protection Agency has investigated fuzzy control for energy-efficient motors, and NASA has studied fuzzy control for automated space docking: simulations show that a fuzzy control system can greatly reduce fuel consumption.
Firms such as Boeing, General Motors, Allen-Bradley, Chrysler, Eaton, and Whirlpool have worked on fuzzy logic for use in low-power refrigerators, improved automotive transmissions, and energy-efficient electric motors.
In 1995 Maytag introduced an "intelligent" dishwasher based on a fuzzy controller and a "one-stop sensing module" that combines a thermistor, for temperature measurement; a conductivity sensor, to measure detergent level from the ions present in the wash; a turbidity sensor that measures scattered and transmitted light to measure the soiling of the wash; and a magnetostrictive sensor to read spin rate. The system determines the optimum wash cycle for any load to obtain the best results with the least amount of energy, detergent, and water. It even adjusts for dried-on foods by tracking the last time the door was opened, and estimates the number of dishes by the number of times the door was opened.
Xiera Technologies Inc. has developed the first auto-tuner for the fuzzy logic controller's knowledge base known as edeX. This technology was tested by Mohawk College and was able to solve non-linear 2x2 and 3x3 multi-input multi-output problems.
Research and development is also continuing on fuzzy applications in software, as opposed to firmware, design, including fuzzy expert systems and integration of fuzzy logic with neural-network and so-called adaptive "genetic" software systems, with the ultimate goal of building "self-learning" fuzzy-control systems. These systems can be employed to control complex, nonlinear dynamic plants, for example, the human body.
== Fuzzy sets ==
The input variables in a fuzzy control system are in general mapped by sets of membership functions known as "fuzzy sets". The process of converting a crisp input value to a fuzzy value is called "fuzzification".
A control system may also have various types of switch, or "ON-OFF", inputs along with its analog inputs. Such switch inputs will always have a truth value equal to either 1 or 0, but the scheme can treat them as simplified fuzzy functions that happen to be either one value or the other.
Given "mappings" of input variables into membership functions and truth values, the microcontroller then makes decisions for what action to take, based on a set of "rules", each of the form:
IF brake temperature IS warm AND speed IS not very fast
THEN brake pressure IS slightly decreased.
In this example, the two input variables are "brake temperature" and "speed", which have values defined as fuzzy sets. The output variable, "brake pressure", is also defined by a fuzzy set that can have values like "static", "slightly increased", or "slightly decreased".
=== Fuzzy control in detail ===
Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing stage, and an output stage. The input stage maps sensor or other inputs, such as switches, thumbwheels, and so on, to the appropriate membership functions and truth values. The processing stage invokes each appropriate rule and generates a result for each, then combines the results of the rules. Finally, the output stage converts the combined result back into a specific control output value.
The most common shape of membership functions is triangular, although trapezoidal and bell curves are also used; the shape is generally less important than the number of curves and their placement. From three to seven curves are generally appropriate to cover the required range of an input value, or the "universe of discourse" in fuzzy jargon.
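Fuzzification with triangular membership functions can be sketched in a few lines (the set names and break points below are illustrative, not from the text):

```python
def tri(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], rising to 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify(x, sets):
    """Map a crisp value to {set_name: truth value} over the universe."""
    return {name: tri(x, *abc) for name, abc in sets.items()}
```

A crisp input typically falls inside two overlapping sets at once, yielding partial membership in both, which is exactly what the rule evaluation stage consumes.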
As discussed earlier, the processing stage is based on a collection of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent". Typical fuzzy control systems have dozens of rules.
Consider a rule for a thermostat:
IF (temperature is "cold") THEN turn (heater is "high")
This rule uses the truth value of the "temperature" input, which is some truth value of "cold", to generate a result in the fuzzy set for the "heater" output, which is some value of "high". This result is used with the results of other rules to finally generate the crisp composite output. Obviously, the greater the truth value of "cold", the higher the truth value of "high", though this does not necessarily mean that the output itself will be set to "high" since this is only one rule among many.
In some cases, the membership functions can be modified by "hedges" that are equivalent to adverbs. Common hedges include "about", "near", "close to", "approximately", "very", "slightly", "too", "extremely", and "somewhat". These operations may have precise definitions, though the definitions can vary considerably between different implementations. "Very", for one example, squares membership functions; since the membership values are always less than 1, this narrows the membership function. "Extremely" cubes the values to give greater narrowing, while "somewhat" broadens the function by taking the square root.
In practice, the fuzzy rule sets usually have several antecedents that are combined using fuzzy operators, such as AND, OR, and NOT, though again the definitions tend to vary: AND, in one popular definition, simply uses the minimum weight of all the antecedents, while OR uses the maximum value. There is also a NOT operator that subtracts a membership function from 1 to give the "complementary" function.
There are several ways to define the result of a rule, but one of the most common and simplest is the "max-min" inference method, in which the output membership function is given the truth value generated by the premise.
Rules can be solved in parallel in hardware, or sequentially in software. The results of all the rules that have fired are "defuzzified" to a crisp value by one of several methods. There are dozens, in theory, each with various advantages or drawbacks.
The "centroid" method is very popular, in which the "center of mass" of the result provides the crisp value. Another approach is the "height" method, which takes the value of the biggest contributor. The centroid method favors the rule with the output of greatest area, while the height method obviously favors the rule with the greatest output value.
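Max-min inference and centroid defuzzification can be sketched on a sampled universe of discourse (names are illustrative): each rule's output membership function is clipped at the rule's truth value, the clipped shapes are combined with max (OR), and the centroid of the combined shape gives the crisp output.

```python
def clip(mu_values, truth):
    """Max-min inference: clip an output membership at the rule's truth value."""
    return [min(truth, m) for m in mu_values]

def defuzzify_centroid(universe, clipped_outputs):
    """Centroid of the max-combined clipped output memberships."""
    combined = [max(mus) for mus in zip(*clipped_outputs)]   # OR = max, not sum
    area = sum(combined)
    if area == 0:
        return 0.0
    return sum(x * mu for x, mu in zip(universe, combined)) / area
```

Note how a rule fired at a lower truth value contributes a flattened shape, shifting the centroid less than a fully fired rule.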
The diagram below demonstrates max-min inferencing and centroid defuzzification for a system with input variables "x", "y", and "z" and an output variable "n". Note that "mu" is standard fuzzy-logic nomenclature for "truth value":
Notice how each rule provides a result as a truth value of a particular membership function for the output variable. In centroid defuzzification the values are OR'd, that is, the maximum value is used and values are not added, and the results are then combined using a centroid calculation.
Fuzzy control system design is based on empirical methods, basically a methodical approach to trial-and-error. The general process is as follows:
Document the system's operational specifications and inputs and outputs.
Document the fuzzy sets for the inputs.
Document the rule set.
Determine the defuzzification method.
Run through test suite to validate system, adjust details as required.
Complete document and release to production.
As a general example, consider the design of a fuzzy controller for a steam turbine. The block diagram of this control system appears as follows:
The input and output variables map into the following fuzzy set:
—where:
N3: Large negative.
N2: Medium negative.
N1: Small negative.
Z: Zero.
P1: Small positive.
P2: Medium positive.
P3: Large positive.
The rule set includes such rules as:
rule 1: IF temperature IS cool AND pressure IS weak,
THEN throttle is P3.
rule 2: IF temperature IS cool AND pressure IS low,
THEN throttle is P2.
rule 3: IF temperature IS cool AND pressure IS ok,
THEN throttle is Z.
rule 4: IF temperature IS cool AND pressure IS strong,
THEN throttle is N2.
In practice, the controller accepts the inputs and maps them into their membership functions and truth values. These mappings are then fed into the rules. If the rule specifies an AND relationship between the mappings of the two input variables, as the examples above do, the minimum of the two is used as the combined truth value; if an OR is specified, the maximum is used. The appropriate output state is selected and assigned a membership value at the truth level of the premise. The truth values are then defuzzified.
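The AND-rule evaluation described above can be sketched as follows, using the steam-turbine rules; the truth values chosen in the test are illustrative assumptions, not values from the text:

```python
def fire_rules(rules, truths):
    """Evaluate AND-rules: each is ((var1, set1), (var2, set2), output_set).

    The rule strength is the min of the antecedent truths; if several
    rules share an output set, the max strength is kept (OR)."""
    fired = {}
    for (v1, s1), (v2, s2), out in rules:
        strength = min(truths[v1].get(s1, 0.0), truths[v2].get(s2, 0.0))
        if strength > 0:
            fired[out] = max(strength, fired.get(out, 0.0))
    return fired
```

With the temperature "cool" and the pressure partially "low" and "ok", only rules 2 and 3 fire, matching the worked example below.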
For example, assume the temperature is in the "cool" state, and the pressure is in the "low" and "ok" states. The pressure values ensure that only rules 2 and 3 fire:
The two outputs are then defuzzified through centroid defuzzification:
__________________________________________________________________
| Z P2
1 -+ * *
| * * * *
| * * * *
| * * * *
| * 222222222
| * 22222222222
| 333333332222222222222
+---33333333222222222222222-->
^
+150
__________________________________________________________________
The output value will adjust the throttle and then the control cycle will begin again to generate the next value.
=== Building a fuzzy controller ===
Consider implementing a simple feedback controller with a microcontroller chip:
A fuzzy set is defined for the input error variable "e", and the derived change in error, "delta", as well as the "output", as follows:
LP: large positive
SP: small positive
ZE: zero
SN: small negative
LN: large negative
If the error ranges from -1 to +1, with the analog-to-digital converter used having a resolution of 0.25, then the input variable's fuzzy set (which, in this case, also applies to the output variable) can be described very simply as a table, with the error / delta / output values in the top row and the truth values for each membership function arranged in rows beneath:
_______________________________________________________________________
-1 -0.75 -0.5 -0.25 0 0.25 0.5 0.75 1
_______________________________________________________________________
mu(LP) 0 0 0 0 0 0 0.3 0.7 1
mu(SP) 0 0 0 0 0.3 0.7 1 0.7 0.3
mu(ZE) 0 0 0.3 0.7 1 0.7 0.3 0 0
mu(SN) 0.3 0.7 1 0.7 0.3 0 0 0 0
mu(LN) 1 0.7 0.3 0 0 0 0 0 0
_______________________________________________________________________
—or, in graphical form (where each "X" has a value of 0.1):
LN SN ZE SP LP
+------------------------------------------------------------------+
| |
-1.0 | XXXXXXXXXX XXX : : : |
-0.75 | XXXXXXX XXXXXXX : : : |
-0.5 | XXX XXXXXXXXXX XXX : : |
-0.25 | : XXXXXXX XXXXXXX : : |
0.0 | : XXX XXXXXXXXXX XXX : |
0.25 | : : XXXXXXX XXXXXXX : |
0.5 | : : XXX XXXXXXXXXX XXX |
0.75 | : : : XXXXXXX XXXXXXX |
1.0 | : : : XXX XXXXXXXXXX |
| |
+------------------------------------------------------------------+
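Membership tables like the one above are commonly generated from simple triangular shape functions. The sketch below assumes triangular sets; it approximates, but does not exactly reproduce, the tabulated truth values:

```python
def tri(x, left, center, right):
    """Triangular membership: 0 outside [left, right], peaking at 1 at center."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

# A ZE-like set centered at 0 with assumed half-width 0.75:
mu_ze_at_quarter = tri(0.25, -0.75, 0.0, 0.75)   # roughly 0.67
```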
Suppose this fuzzy system has the following rule base:
rule 1: IF e = ZE AND delta = ZE THEN output = ZE
rule 2: IF e = ZE AND delta = SP THEN output = SN
rule 3: IF e = SN AND delta = SN THEN output = LP
rule 4: IF e = LP OR delta = LP THEN output = LN
These rules are typical for control applications in that the antecedents consist of the logical combination of the error and error-delta signals, while the consequent is a control command output.
The rule outputs can be defuzzified using a discrete centroid computation:
SUM( I = 1 TO 4 OF ( mu(I) * output(I) ) ) / SUM( I = 1 TO 4 OF mu(I) )
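The discrete centroid formula translates directly into code. A minimal sketch:

```python
def defuzzify_centroid(mus, outputs):
    """Discrete centroid defuzzification: SUM(mu_i * output_i) / SUM(mu_i)."""
    total = sum(mus)
    if total == 0:
        raise ValueError("no rule fired")
    return sum(m * o for m, o in zip(mus, outputs)) / total
```

Applied to the four rule strengths computed later in this section (0.3, 0.7, 0, 0.3 at output locations 0, -0.5, 1, -1), this returns the -0.5 control output of the worked example.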
Now, suppose that at a given time:
e = 0.25
delta = 0.5
Then this gives:
________________________
e delta
________________________
mu(LP) 0 0.3
mu(SP) 0.7 1
mu(ZE) 0.7 0.3
mu(SN) 0 0
mu(LN) 0 0
________________________
Plugging this into rule 1 gives:
rule 1: IF e = ZE AND delta = ZE THEN output = ZE
mu(1) = MIN( 0.7, 0.3 ) = 0.3
output(1) = 0
—where:
mu(1): Truth value of the result membership function for rule 1. In terms of a centroid calculation, this is the "mass" of this result for this discrete case.
output(1): Value (for rule 1) where the result membership function (ZE) is maximum over the output variable fuzzy set range. That is, in terms of a centroid calculation, the location of the "center of mass" for this individual result. This value is independent of the value of "mu". It simply identifies the location of ZE along the output range.
The other rules give:
rule 2: IF e = ZE AND delta = SP THEN output = SN
mu(2) = MIN( 0.7, 1 ) = 0.7
output(2) = -0.5
rule 3: IF e = SN AND delta = SN THEN output = LP
mu(3) = MIN( 0.0, 0.0 ) = 0
output(3) = 1
rule 4: IF e = LP OR delta = LP THEN output = LN
mu(4) = MAX( 0.0, 0.3 ) = 0.3
output(4) = -1
The centroid computation yields:
{\displaystyle {\frac {mu(1)\cdot output(1)+mu(2)\cdot output(2)+mu(3)\cdot output(3)+mu(4)\cdot output(4)}{mu(1)+mu(2)+mu(3)+mu(4)}}}
{\displaystyle ={\frac {(0.3\cdot 0)+(0.7\cdot -0.5)+(0\cdot 1)+(0.3\cdot -1)}{0.3+0.7+0+0.3}}}
{\displaystyle =-0.5}
—for the final control output. Simple. Of course the hard part is figuring out what rules actually work correctly in practice.
If you have problems figuring out the centroid equation, remember that a centroid is defined by summing all the moments (location times mass) around the center of gravity and equating the sum to zero. So if {\displaystyle X_{0}} is the center of gravity, {\displaystyle X_{i}} is the location of each mass, and {\displaystyle M_{i}} is each mass, this gives:
{\displaystyle 0=(X_{1}-X_{0})\cdot M_{1}+(X_{2}-X_{0})\cdot M_{2}+\ldots +(X_{n}-X_{0})\cdot M_{n}}
{\displaystyle 0=(X_{1}\cdot M_{1}+X_{2}\cdot M_{2}+\ldots +X_{n}\cdot M_{n})-X_{0}\cdot (M_{1}+M_{2}+\ldots +M_{n})}
{\displaystyle X_{0}\cdot (M_{1}+M_{2}+\ldots +M_{n})=X_{1}\cdot M_{1}+X_{2}\cdot M_{2}+\ldots +X_{n}\cdot M_{n}}
{\displaystyle X_{0}={\frac {X_{1}\cdot M_{1}+X_{2}\cdot M_{2}+\ldots +X_{n}\cdot M_{n}}{M_{1}+M_{2}+\ldots +M_{n}}}}
In our example, the values of mu correspond to the masses, and the values of X to the locations of the masses.
(mu, however, only corresponds to the masses if the initial 'masses' of the output functions are all the same. If they are not the same, i.e. some are narrow triangles while others may be wide trapezoids or shouldered triangles, then the mass or area of each output function must be known or calculated. It is this mass that is then scaled by mu and multiplied by its location X_i.)
This system can be implemented on a standard microprocessor, but dedicated fuzzy chips are now available. For example, Adaptive Logic Inc. of San Jose, California, sells a "fuzzy chip", the AL220, that can accept four analog inputs and generate four analog outputs. A block diagram of the chip is shown below:
+---------+ +-------+
analog --4-->| analog | | mux / +--4--> analog
in | mux | | SH | out
+----+----+ +-------+
| ^
V |
+-------------+ +--+--+
| ADC / latch | | DAC |
+------+------+ +-----+
| ^
| |
8 +-----------------------------+
| | |
| V |
| +-----------+ +-------------+ |
+-->| fuzzifier | | defuzzifier +--+
+-----+-----+ +-------------+
| ^
| +-------------+ |
| | rule | |
+->| processor +--+
| (50 rules) |
+------+------+
|
+------+------+
| parameter |
| memory |
| 256 x 8 |
+-------------+
ADC: analog-to-digital converter
DAC: digital-to-analog converter
SH: sample/hold
== Antilock brakes ==
As an example, consider an anti-lock braking system, directed by a microcontroller chip. The microcontroller has to make decisions based on brake temperature, speed, and other variables in the system.
The variable "temperature" in this system can be subdivided into a range of "states": "cold", "cool", "moderate", "warm", "hot", "very hot". The transition from one state to the next is hard to define.
An arbitrary static threshold might be set to divide "warm" from "hot". For example, at exactly 90 degrees, warm ends and hot begins. But this would result in a discontinuous change when the input value passed over that threshold. The transition wouldn't be smooth, as would be required in braking situations.
The way around this is to make the states fuzzy. That is, allow them to change gradually from one state to the next. In order to do this, there must be a dynamic relationship established between different factors.
Start by defining the input temperature states using "membership functions":
With this scheme, the input variable's state no longer jumps abruptly from one state to the next. Instead, as the temperature changes, it loses value in one membership function while gaining value in the next. In other words, its ranking in the category of cold decreases as it becomes more highly ranked in the warmer category.
At any sampled timeframe, the "truth value" of the brake temperature will almost always be in some degree part of two membership functions: i.e.: '0.6 nominal and 0.4 warm', or '0.7 nominal and 0.3 cool', and so on.
The above example demonstrates a simple application, abstracting a single kind of data (in this case, temperature) into multiple fuzzy states.
Additional sophistication could be added to this braking system by including further factors such as traction, speed, and inertia, set up as dynamic functions according to the designed fuzzy system.
== Logical interpretation of fuzzy control ==
In spite of appearances, it is difficult to give a rigorous logical interpretation of the IF-THEN rules. As an example, interpret a rule such as IF (temperature is "cold") THEN (heater is "high") by the first-order formula Cold(x)→High(y), and assume that r is an input such that Cold(r) is false. Then the formula Cold(r)→High(t) is true for any t, and therefore any t gives a correct control given r. A rigorous logical justification of fuzzy control is given in Hájek's book (see Chapter 7), where fuzzy control is represented as a theory of Hájek's basic logic.
In Gerla 2005 another logical approach to fuzzy control is proposed, based on fuzzy logic programming: denote by f the fuzzy function arising from an IF-THEN system of rules. Then this system can be translated into a fuzzy program P containing a series of rules whose head is "Good(x,y)". The interpretation of this predicate in the least fuzzy Herbrand model of P coincides with f. This gives further useful tools to fuzzy control.
== Fuzzy qualitative simulation ==
Before an artificial-intelligence system is able to plan an action sequence, some kind of model is needed. For video games, the model is equal to the game rules. From the programming perspective, the game rules are implemented as a physics engine which accepts an action from a player and calculates whether the action is valid. After the action is executed, the game is in a follow-up state. If the aim is not only to play mathematical games but to determine actions for real-world applications, the most obvious bottleneck is that no game rules are available. The first step is to model the domain. System identification can be realized with precise mathematical equations or with fuzzy rules.
Using fuzzy logic and ANFIS (adaptive network-based fuzzy inference system) to create the forward model for a domain has many disadvantages. A qualitative simulation is not able to determine the correct follow-up state; the system will only guess what will happen if the action is taken. Fuzzy qualitative simulation cannot predict exact numerical values; instead it uses imprecise natural language to speculate about the future. It takes the current situation plus the actions from the past and generates the expected follow-up state of the game.
The output of the ANFIS system is not precise numerical information but a fuzzy set notation, for example [0, 0.2, 0.4, 0]. After converting the set notation back into numerical values, the accuracy gets worse. This makes fuzzy qualitative simulation a poor choice for practical applications.
== Applications ==
Fuzzy control systems are suitable when the process complexity is high, including uncertainty and nonlinear behavior, and no precise mathematical model is available. Successful applications of fuzzy control systems have been reported worldwide, mainly in Japan, with pioneering solutions since the 1980s.
Some applications reported in the literature are:
Air conditioners
Automatic focus systems in cameras
Domestic appliances (refrigerators, washing machines...)
Control and optimization of industrial processes and systems
Writing systems
Fuel efficiency in engines
Environment
Expert systems
Decision trees
Robotics
Autonomous vehicles
== See also ==
Dynamic logic
Bayesian inference
Function approximation
Fuzzy concept
Fuzzy markup language
Hysteresis
Neuro-fuzzy
Fuzzy control language
Type-2 fuzzy sets and systems
== References ==
== Further reading ==
Kevin M. Passino and Stephen Yurkovich, Fuzzy Control, Addison Wesley Longman, Menlo Park, CA, 1998 (522 pages) Archived 2008-12-15 at the Wayback Machine
Kazuo Tanaka; Hua O. Wang (2001). Fuzzy control systems design and analysis: a linear matrix inequality approach. John Wiley and Sons. ISBN 978-0-471-32324-2.
Cox, E. (Oct. 1992). Fuzzy fundamentals. IEEE Spectrum, 29:10. pp. 58–61.
Cox, E. (Feb. 1993) Adaptive fuzzy systems. IEEE Spectrum, 30:2. pp. 7–31.
Jan Jantzen, "Tuning Of Fuzzy PID Controllers", Technical University of Denmark, report 98-H 871, September 30, 1998. [1]
Jan Jantzen, Foundations of Fuzzy Control. Wiley, 2007 (209 pages) (Table of contents)
Computational Intelligence: A Methodological Introduction by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer, ISBN 9781447150121
== External links ==
Robert Babuska and Ebrahim Mamdani, ed. (2008). "Fuzzy control". Scholarpedia. Retrieved 31 December 2022.
Introduction to Fuzzy Control Archived 2010-08-05 at the Wayback Machine
Fuzzy Logic in Embedded Microcomputers and Control Systems
IEC 1131-7 CD1 Archived 2021-03-04 at the Wayback Machine IEC 1131-7 CD1 PDF
Online interactive demonstration of a system with 3 fuzzy rules
Data driven fuzzy systems
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common in mathematical models and scientific laws; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly.
Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers, and many numerical methods have been developed to determine solutions with a given degree of accuracy. The theory of dynamical systems analyzes the qualitative aspects of solutions, such as their average behavior over a long time interval.
== History ==
Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Newton listed three kinds of differential equations:
{\displaystyle {\begin{aligned}{\frac {dy}{dx}}&=f(x)\\[4pt]{\frac {dy}{dx}}&=f(x,y)\\[4pt]x_{1}{\frac {\partial y}{\partial x_{1}}}&+x_{2}{\frac {\partial y}{\partial x_{2}}}=y\end{aligned}}}
In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function.
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.
Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form
{\displaystyle y'+P(x)y=Q(x)y^{n}\,}
for which the following year Leibniz obtained solutions by simplifying it.
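The simplification can be sketched as follows: dividing the equation by yⁿ and substituting v = y^(1−n) (so that v′ = (1−n)y^(−n)y′) turns the Bernoulli equation into a linear first-order equation in v:

```latex
y' + P(x)\,y = Q(x)\,y^{n}
\quad\xrightarrow{\;v \,=\, y^{1-n}\;}\quad
\frac{v'}{1-n} + P(x)\,v = Q(x)
```

The linear equation in v can then be solved by standard methods and transformed back to y.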
Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.
In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum.
== Example ==
In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.
In some cases, this differential equation (called an equation of motion) may be solved explicitly.
An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
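The falling-ball model can be integrated numerically with a simple forward-Euler step. This is a sketch under assumed constants (g for gravity, k for the drag coefficient are illustrative values, not measurements):

```python
# Forward-Euler integration of dv/dt = g - k*v: acceleration is gravity
# minus a deceleration proportional to velocity. Constants are assumed.
g, k = 9.81, 1.5        # m/s^2 and 1/s (illustrative)
dt, v, t = 0.001, 0.0, 0.0
while t < 10.0:
    v += (g - k * v) * dt   # v_{n+1} = v_n + dt * (g - k*v_n)
    t += dt
terminal = g / k            # analytic limiting (terminal) velocity
```

After a few time constants the numerical velocity settles at the terminal velocity g/k, where gravity and drag balance.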
== Types ==
Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.
=== Ordinary differential equations ===
An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and in many cases one may express their solutions in terms of integrals.
Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function).
As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.
=== Partial differential equations ===
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness.
=== Non-linear differential equations ===
A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.
Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations.
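The "small amplitude" restriction can be checked numerically: the harmonic-oscillator approximation replaces sin(u) by u, and the relative error of that replacement grows quickly with amplitude. A quick sketch:

```python
import math

# Relative error of the small-angle approximation sin(u) ~ u, which is
# what linearizes the pendulum equation into the harmonic oscillator.
def rel_err(u):
    return abs(math.sin(u) - u) / math.sin(u)

small = rel_err(0.1)   # below 0.2% at 0.1 rad
large = rel_err(1.0)   # roughly 19% at 1 rad
```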
=== Equation order and degree ===
The order of the differential equation is the highest order of derivative of the unknown function that appears in the differential equation.
For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing the second-order derivative is a second-order differential equation, and so on.
When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function, or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation
{\displaystyle y'+y^{2}=0}
is of degree one for the first meaning but not for the second one.
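Incidentally, this equation is separable and can be solved explicitly (with C an arbitrary constant):

```latex
\frac{dy}{y^{2}} = -\,dx
\;\Longrightarrow\;
-\frac{1}{y} = -x + C_{1}
\;\Longrightarrow\;
y = \frac{1}{x + C}
```

Differentiating confirms the solution: y′ = −1/(x + C)² = −y².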
Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation.
=== Examples ===
In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones.
Heterogeneous first-order linear constant coefficient ordinary differential equation:
{\displaystyle {\frac {du}{dx}}=cu+x^{2}.}
Homogeneous second-order linear ordinary differential equation:
{\displaystyle {\frac {d^{2}u}{dx^{2}}}-x{\frac {du}{dx}}+u=0.}
Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator:
{\displaystyle {\frac {d^{2}u}{dx^{2}}}+\omega ^{2}u=0.}
Heterogeneous first-order nonlinear ordinary differential equation:
{\displaystyle {\frac {du}{dx}}=u^{2}+4.}
Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L:
{\displaystyle L{\frac {d^{2}u}{dx^{2}}}+g\sin u=0.}
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
Homogeneous first-order linear partial differential equation:
{\displaystyle {\frac {\partial u}{\partial t}}+t{\frac {\partial u}{\partial x}}=0.}
Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation:
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.}
Homogeneous third-order non-linear partial differential equation, the KdV equation:
{\displaystyle {\frac {\partial u}{\partial t}}=6u{\frac {\partial u}{\partial x}}-{\frac {\partial ^{3}u}{\partial x^{3}}}.}
== Existence of solutions ==
Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest.
For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point {\displaystyle (a,b)} in the xy-plane, define some rectangular region {\displaystyle Z}, such that {\displaystyle Z=[l,m]\times [n,p]} and {\displaystyle (a,b)} is in the interior of {\displaystyle Z}. If we are given a differential equation {\textstyle {\frac {dy}{dx}}=g(x,y)} and the condition that {\displaystyle y=b} when {\displaystyle x=a}, then there is locally a solution to this problem if {\displaystyle g(x,y)} and {\textstyle {\frac {\partial g}{\partial x}}} are both continuous on {\displaystyle Z}. This solution exists on some interval with its center at {\displaystyle a}. The solution may not be unique. (See Ordinary differential equation for other results.)
However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:
{\displaystyle f_{n}(x){\frac {d^{n}y}{dx^{n}}}+\cdots +f_{1}(x){\frac {dy}{dx}}+f_{0}(x)y=g(x)}
such that
{\displaystyle {\begin{aligned}y(x_{0})&=y_{0},&y'(x_{0})&=y'_{0},&y''(x_{0})&=y''_{0},&\ldots \end{aligned}}}
For any nonzero {\displaystyle f_{n}(x)}, if {\displaystyle \{f_{0},f_{1},\ldots \}} and {\displaystyle g} are continuous on some interval containing {\displaystyle x_{0}}, then {\displaystyle y} exists and is unique.
== Related concepts ==
A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.
Integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals.
An integro-differential equation (IDE) is an equation that combines aspects of a differential equation and an integral equation.
A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations.
A stochastic partial differential equation (SPDE) is an equation that generalizes SDEs to include space-time noise processes, with applications in quantum field theory and statistical mechanics.
An ultrametric pseudo-differential equation is an equation which contains p-adic numbers in an ultrametric space. Mathematical models that involve ultrametric pseudo-differential equations use pseudo-differential operators instead of differential operators.
A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form.
== Connection to difference equations ==
The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
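The approximation described above can be illustrated with the simplest such scheme, the forward difference (Euler) method. The sketch below approximates y′ = −y, y(0) = 1, whose exact solution is e^(−x), and shows the error shrinking as the step h shrinks:

```python
import math

# Replace the differential equation y' = -y, y(0) = 1, by the difference
# equation y_{n+1} = y_n + h * (-y_n), and evaluate the result at x = 1.
def euler_at_one(h):
    y, x = 1.0, 0.0
    while x < 1.0 - 1e-12:
        y += h * (-y)
        x += h
    return y

exact = math.exp(-1.0)
err_coarse = abs(euler_at_one(0.1) - exact)    # about 0.019
err_fine = abs(euler_at_one(0.01) - exact)     # about 0.0019
```

Halving or reducing the step size brings the solution of the difference equation closer to the solution of the differential equation, which is the sense in which one approximates the other.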
== Applications ==
The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods.
Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.
The number of named differential equations in various scientific areas is a testament to the importance of the topic. See List of named differential equations.
== Software ==
Some CAS software can solve differential equations. These are the commands used in the leading programs:
Maple: dsolve
Mathematica: DSolve[]
Maxima: ode2(equation, y, x)
SageMath: desolve()
SymPy: sympy.solvers.ode.dsolve(equation)
Xcas: desolve(y'=k*y,y)
== See also ==
== References ==
== Further reading ==
Abbott, P.; Neill, H. (2003). Teach Yourself Calculus. pp. 266–277.
Blanchard, P.; Devaney, R. L.; Hall, G. R. (2006). Differential Equations. Thompson.
Boyce, W.; DiPrima, R.; Meade, D. (2017). Elementary Differential Equations and Boundary Value Problems. Wiley.
Coddington, E. A.; Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill.
Ince, E. L. (1956). Ordinary Differential Equations. Dover.
Johnson, W. (1913). A Treatise on Ordinary and Partial Differential Equations. John Wiley and Sons. In University of Michigan Historical Math Collection
Polyanin, A. D.; Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2.
Porter, R. I. (1978). "XIX Differential Equations". Further Elementary Analysis.
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Daniel Zwillinger (12 May 2014). Handbook of Differential Equations. Elsevier Science. ISBN 978-1-4832-6396-0.
== External links ==
Media related to Differential equations at Wikimedia Commons
Lectures on Differential Equations MIT Open CourseWare Videos
Online Notes / Differential Equations Paul Dawkins, Lamar University
Differential Equations, S.O.S. Mathematics
Introduction to modeling via differential equations Introduction to modeling by means of differential equations, with critical remarks.
Mathematical Assistant on Web Symbolic ODE tool, using Maxima
Exact Solutions of Ordinary Differential Equations
Collection of ODE and DAE models of physical systems Archived 2008-12-19 at the Wayback Machine MATLAB models
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC
Khan Academy Video playlist on differential equations Topics covered in a first year course in differential equations.
MathDiscuss Video playlist on differential equations
In control engineering and system identification, a state-space representation is a mathematical model of a physical system that uses state variables to track how inputs shape system behavior over time through first-order differential equations or difference equations. These state variables change based on their current values and inputs, while outputs depend on the states and sometimes the inputs too. The state space (also called time-domain approach and equivalent to phase space in certain dynamical systems) is a geometric space where the axes are these state variables, and the system’s state is represented by a state vector.
For linear, time-invariant, and finite-dimensional systems, the equations can be written in matrix form, offering a compact alternative to the frequency domain’s Laplace transforms for multiple-input and multiple-output (MIMO) systems. Unlike the frequency domain approach, it works for systems beyond just linear ones with zero initial conditions. This approach turns systems theory into an algebraic framework, making it possible to use Kronecker structures for efficient analysis.
State-space models are applied in fields such as economics, statistics, computer science, electrical engineering, and neuroscience. In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index, identify turning points of the business cycle, and estimate GDP using latent and unobserved time series. Many applications rely on the Kalman Filter or a state observer to produce estimates of the current unknown state variables using their previous observations.
== State variables ==
The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, n, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable, when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit such as capacitors and inductors. The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables, or the system cannot be solved.
== Linear systems ==
The most general state-space representation of a linear system with p inputs, q outputs and n state variables is written in the following form:
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=\mathbf {C} (t)\mathbf {x} (t)+\mathbf {D} (t)\mathbf {u} (t)}
where:
In this general formulation, all matrices are allowed to be time-variant (i.e. their elements can depend on time); however, in the common LTI case, matrices will be time invariant. The time variable t can be continuous (e.g. t ∈ ℝ) or discrete (e.g. t ∈ ℤ). In the latter case, the time variable k is usually used instead of t. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:
=== Example: continuous-time LTI case ===
Stability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix A. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:
{\displaystyle \mathbf {G} (s)=k{\frac {(s-z_{1})(s-z_{2})(s-z_{3})}{(s-p_{1})(s-p_{2})(s-p_{3})(s-p_{4})}}.}
The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of sI − A,
{\displaystyle \lambda (s)=\left|s\mathbf {I} -\mathbf {A} \right|.}
The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.
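For a 2×2 state matrix the characteristic polynomial det(sI − A) = s² − tr(A)s + det(A) can be solved by hand, so the pole locations and the stability verdict can be checked with a few lines of arithmetic. The matrix below is a hypothetical example chosen for illustration.

```python
import cmath

# Hypothetical 2x2 state matrix (companion form of s^2 + 3s + 2).
A = [[0.0, 1.0],
     [-2.0, -3.0]]

tr = A[0][0] + A[1][1]                        # trace of A
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant of A
disc = cmath.sqrt(tr * tr - 4.0 * det)
poles = [(tr + disc) / 2.0, (tr - disc) / 2.0]  # roots of det(sI - A)

# Asymptotically stable iff every pole has a strictly negative real part.
stable = all(p.real < 0 for p in poles)
```

Here the poles come out at −1 and −2, so the hypothetical system is asymptotically stable.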
The zeros found in the numerator of G(s) can similarly be used to determine whether the system is minimum phase.
The system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).
=== Controllability ===
The state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if
{\displaystyle \operatorname {rank} {\begin{bmatrix}\mathbf {B} &\mathbf {A} \mathbf {B} &\mathbf {A} ^{2}\mathbf {B} &\cdots &\mathbf {A} ^{n-1}\mathbf {B} \end{bmatrix}}=n,}
where rank is the number of linearly independent rows in a matrix, and where n is the number of state variables.
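The rank test above can be sketched in plain Python for a hypothetical 2-state, single-input system: build [B AB] and count its linearly independent rows with Gaussian elimination (the system matrices here are illustrative, not from the article).

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def rank(rows, tol=1e-9):
    """Row rank via Gaussian elimination with partial pivoting."""
    rows = [list(r) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = max(range(r, len(rows)), key=lambda i: abs(rows[i][c]), default=None)
        if piv is None or abs(rows[piv][c]) < tol:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Hypothetical system: 2 states, 1 input.
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]                 # single input, so B is one column
AB = matvec(A, B)
# Each row below is one column of the controllability matrix [B AB];
# transposing does not change the rank.
controllable = rank([B, AB]) == 2
```

Because the rank equals the number of states, this sketch reports the system as controllable.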
=== Observability ===
Observability is a measure for how well internal states of a system can be inferred by knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system).
A continuous time-invariant linear state-space model is observable if and only if
{\displaystyle \operatorname {rank} {\begin{bmatrix}\mathbf {C} \\\mathbf {C} \mathbf {A} \\\vdots \\\mathbf {C} \mathbf {A} ^{n-1}\end{bmatrix}}=n.}
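For a hypothetical 2-state system with one output, the observability matrix [C; CA] is square, so the full-rank condition reduces to a nonzero determinant, which keeps the check very short.

```python
# Hypothetical system: 2 states, 1 output measuring the first state.
A = [[0.0, 1.0], [-2.0, -3.0]]
C = [1.0, 0.0]

# Row vector C*A: (CA)_j = sum_k C_k * A[k][j]
CA = [sum(C[k] * A[k][j] for k in range(2)) for j in range(2)]
O = [C, CA]                                   # observability matrix [C; CA]
detO = O[0][0] * O[1][1] - O[0][1] * O[1][0]
observable = abs(detO) > 1e-12
```

Measuring only the position state still makes this system observable, because the velocity state feeds into the measured one.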
=== Transfer function ===
The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way:
First, taking the Laplace transform of
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t)}
yields
{\displaystyle s\mathbf {X} (s)-\mathbf {x} (0)=\mathbf {A} \mathbf {X} (s)+\mathbf {B} \mathbf {U} (s).}
Next, we solve for X(s), giving
{\displaystyle (s\mathbf {I} -\mathbf {A} )\mathbf {X} (s)=\mathbf {x} (0)+\mathbf {B} \mathbf {U} (s)}
and thus
{\displaystyle \mathbf {X} (s)=(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {x} (0)+(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {B} \mathbf {U} (s).}
Substituting for X(s) in the output equation
{\displaystyle \mathbf {Y} (s)=\mathbf {C} \mathbf {X} (s)+\mathbf {D} \mathbf {U} (s),}
giving
{\displaystyle \mathbf {Y} (s)=\mathbf {C} ((s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {x} (0)+(s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {B} \mathbf {U} (s))+\mathbf {D} \mathbf {U} (s).}
Assuming zero initial conditions x(0) = 0 and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output and input G(s) = Y(s)/U(s). For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from
{\displaystyle \mathbf {Y} (s)=\mathbf {G} (s)\mathbf {U} (s)}
using the method of equating the coefficients, which yields
{\displaystyle \mathbf {G} (s)=\mathbf {C} (s\mathbf {I} -\mathbf {A} )^{-1}\mathbf {B} +\mathbf {D} .}
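The formula G(s) = C(sI − A)⁻¹B + D can be evaluated numerically at any point s; for a 2×2 state matrix the inverse can be written out explicitly. The SISO system below is a hypothetical example whose transfer function works out to 1/(s² + 3s + 2).

```python
# Hypothetical SISO system in controllable canonical form.
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

def G(s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D using an explicit 2x2 inverse."""
    m = [[s - A[0][0], -A[0][1]],
         [-A[1][0], s - A[1][1]]]                 # sI - A
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[ m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det,  m[0][0] / det]]
    x = [inv[0][0] * B[0] + inv[0][1] * B[1],
         inv[1][0] * B[0] + inv[1][1] * B[1]]     # (sI - A)^{-1} B
    return C[0] * x[0] + C[1] * x[1] + D

# For this A, B, C the transfer function is 1/(s^2 + 3s + 2), so G(0) = 0.5.
val = G(0.0)
```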
Consequently, G(s) is a matrix with the dimension q × p which contains transfer functions for each input-output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.
=== Canonical realizations ===
Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system):
Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
{\displaystyle \mathbf {G} (s)={\frac {n_{1}s^{3}+n_{2}s^{2}+n_{3}s+n_{4}}{s^{4}+d_{1}s^{3}+d_{2}s^{2}+d_{3}s+d_{4}}}.}
The coefficients can now be inserted directly into the state-space model by the following approach:
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}0&1&0&0\\0&0&1&0\\0&0&0&1\\-d_{4}&-d_{3}&-d_{2}&-d_{1}\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}0\\0\\0\\1\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)={\begin{bmatrix}n_{4}&n_{3}&n_{2}&n_{1}\end{bmatrix}}\mathbf {x} (t).}
This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).
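The insertion of coefficients described above is mechanical, so it can be sketched as a small helper that builds the controllable canonical form (A, B, C) from the denominator and numerator coefficients of a strictly proper, monic-denominator transfer function (the coefficient values in the example call are hypothetical).

```python
def controllable_canonical(d, n):
    """Build (A, B, C) in controllable canonical form.

    d = [d1, ..., dn]: denominator coefficients (leading 1 implied),
    n = [n1, ..., nn]: numerator coefficients, as in the article's G(s).
    """
    size = len(d)
    A = [[0.0] * size for _ in range(size)]
    for i in range(size - 1):
        A[i][i + 1] = 1.0                       # superdiagonal of ones
    A[size - 1] = [-c for c in reversed(d)]     # last row: -dn ... -d1
    B = [0.0] * (size - 1) + [1.0]              # control enters the last state
    C = list(reversed(n))                       # output row: nn ... n1
    return A, B, C

# Hypothetical example: G(s) = 1 / (s^2 + 3s + 2), i.e. d = [3, 2], n = [0, 1].
A, B, C = controllable_canonical([3.0, 2.0], [0.0, 1.0])
```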
The transfer function coefficients can also be used to construct another type of canonical form
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}0&0&0&-d_{4}\\1&0&0&-d_{3}\\0&1&0&-d_{2}\\0&0&1&-d_{1}\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}n_{4}\\n_{3}\\n_{2}\\n_{1}\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)={\begin{bmatrix}0&0&0&1\end{bmatrix}}\mathbf {x} (t).}
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
=== Proper transfer functions ===
Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant.
{\displaystyle \mathbf {G} (s)=\mathbf {G} _{\mathrm {SP} }(s)+\mathbf {G} (\infty ).}
The strictly proper transfer function can then be transformed into a canonical state-space realization using techniques shown above. The state-space realization of the constant is trivially
{\displaystyle \mathbf {y} (t)=\mathbf {G} (\infty )\mathbf {u} (t).}
Together we then get a state-space realization with matrices A, B and C determined by the strictly proper part, and matrix D determined by the constant.
Here is an example to clear things up a bit:
{\displaystyle \mathbf {G} (s)={\frac {s^{2}+3s+3}{s^{2}+2s+1}}={\frac {s+2}{s^{2}+2s+1}}+1}
which yields the following controllable realization
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}-2&-1\\1&0\\\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}1\\0\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)={\begin{bmatrix}1&2\end{bmatrix}}\mathbf {x} (t)+{\begin{bmatrix}1\end{bmatrix}}\mathbf {u} (t)}
Notice how the output also depends directly on the input. This is due to the G(∞) constant in the transfer function.
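The split used in the example above — peel off G(∞) as the ratio of leading coefficients, then keep the polynomial remainder as the strictly proper numerator — can be sketched for the equal-degree SISO case:

```python
def split_proper(num, den):
    """Split a proper SISO transfer function into (strictly proper numerator, G(inf)).

    num, den: coefficient lists, highest power first, with len(num) == len(den).
    """
    d_inf = num[0] / den[0]                      # G(infinity)
    rem = [a - d_inf * b for a, b in zip(num, den)]
    return rem[1:], d_inf                        # leading term cancels to zero

# The article's example: (s^2 + 3s + 3)/(s^2 + 2s + 1) = (s + 2)/(s^2 + 2s + 1) + 1
sp_num, d_term = split_proper([1.0, 3.0, 3.0], [1.0, 2.0, 1.0])
```

`d_term` becomes the D matrix of the realization, and `sp_num` is the numerator of the strictly proper part handled by the canonical forms above.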
=== Feedback ===
A common method for feedback is to multiply the output by a matrix K and set this as the input to the system:
{\displaystyle \mathbf {u} (t)=K\mathbf {y} (t)}.
Since the values of K are unrestricted the values can easily be negated for negative feedback.
The presence of a negative sign (the common notation) is merely notational, and its absence has no impact on the end results.
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)+B\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)+D\mathbf {u} (t)}
becomes
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)+BK\mathbf {y} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)+DK\mathbf {y} (t)}
solving the output equation for y(t) and substituting in the state equation results in
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A+BK\left(I-DK\right)^{-1}C\right)\mathbf {x} (t)}
{\displaystyle \mathbf {y} (t)=\left(I-DK\right)^{-1}C\mathbf {x} (t)}
The advantage of this is that the eigenvalues of A can be controlled by setting K appropriately through eigendecomposition of
{\displaystyle \left(A+BK\left(I-DK\right)^{-1}C\right)}.
This assumes that the closed-loop system is controllable or that the unstable eigenvalues of A can be made stable through appropriate choice of K.
==== Example ====
For a strictly proper system D equals zero. Another fairly common situation is when all states are outputs, i.e. y = x, which yields C = I, the identity matrix. This would then result in the simpler equations
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A+BK\right)\mathbf {x} (t)}
{\displaystyle \mathbf {y} (t)=\mathbf {x} (t)}
This reduces the necessary eigendecomposition to just A + BK.
=== Feedback with setpoint (reference) input ===
In addition to feedback, an input, r(t), can be added such that
{\displaystyle \mathbf {u} (t)=-K\mathbf {y} (t)+\mathbf {r} (t)}.
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)+B\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)+D\mathbf {u} (t)}
becomes
{\displaystyle {\dot {\mathbf {x} }}(t)=A\mathbf {x} (t)-BK\mathbf {y} (t)+B\mathbf {r} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)-DK\mathbf {y} (t)+D\mathbf {r} (t)}
solving the output equation for y(t) and substituting in the state equation results in
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A-BK\left(I+DK\right)^{-1}C\right)\mathbf {x} (t)+B\left(I-K\left(I+DK\right)^{-1}D\right)\mathbf {r} (t)}
{\displaystyle \mathbf {y} (t)=\left(I+DK\right)^{-1}C\mathbf {x} (t)+\left(I+DK\right)^{-1}D\mathbf {r} (t)}
One fairly common simplification to this system is removing D, which reduces the equations to
{\displaystyle {\dot {\mathbf {x} }}(t)=\left(A-BKC\right)\mathbf {x} (t)+B\mathbf {r} (t)}
{\displaystyle \mathbf {y} (t)=C\mathbf {x} (t)}
=== Moving object example ===
A classical linear system is that of one-dimensional movement of an object (e.g., a cart).
Newton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring:
{\displaystyle m{\ddot {y}}(t)=u(t)-b{\dot {y}}(t)-ky(t)}
where
y(t) is position, ẏ(t) is velocity, and ÿ(t) is acceleration;
u(t) is an applied force;
b is the viscous friction coefficient;
k is the spring constant;
m is the mass of the object.
The state equation would then become
{\displaystyle {\begin{bmatrix}{\dot {\mathbf {x} }}_{1}(t)\\{\dot {\mathbf {x} }}_{2}(t)\end{bmatrix}}={\begin{bmatrix}0&1\\-{\frac {k}{m}}&-{\frac {b}{m}}\end{bmatrix}}{\begin{bmatrix}\mathbf {x} _{1}(t)\\\mathbf {x} _{2}(t)\end{bmatrix}}+{\begin{bmatrix}0\\{\frac {1}{m}}\end{bmatrix}}\mathbf {u} (t)}
{\displaystyle \mathbf {y} (t)=\left[{\begin{matrix}1&0\end{matrix}}\right]\left[{\begin{matrix}\mathbf {x_{1}} (t)\\\mathbf {x_{2}} (t)\end{matrix}}\right]}
where
x1(t) represents the position of the object;
x2(t) = ẋ1(t) is the velocity of the object;
ẋ2(t) = ẍ1(t) is the acceleration of the object;
the output y(t) is the position of the object.
The controllability test is then
{\displaystyle {\begin{bmatrix}B&AB\end{bmatrix}}={\begin{bmatrix}{\begin{bmatrix}0\\{\frac {1}{m}}\end{bmatrix}}&{\begin{bmatrix}0&1\\-{\frac {k}{m}}&-{\frac {b}{m}}\end{bmatrix}}{\begin{bmatrix}0\\{\frac {1}{m}}\end{bmatrix}}\end{bmatrix}}={\begin{bmatrix}0&{\frac {1}{m}}\\{\frac {1}{m}}&-{\frac {b}{m^{2}}}\end{bmatrix}}}
which has full rank for all b and m. This means that if the initial state of the system is known (y(t), ẏ(t), ÿ(t)), and if b and m are constants, then there is a force u that can move the cart to any other position of the system.
The observability test is then
{\displaystyle {\begin{bmatrix}C\\CA\end{bmatrix}}={\begin{bmatrix}{\begin{bmatrix}1&0\end{bmatrix}}\\{\begin{bmatrix}1&0\end{bmatrix}}{\begin{bmatrix}0&1\\-{\frac {k}{m}}&-{\frac {b}{m}}\end{bmatrix}}\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}}
which also has full rank. Therefore, this system is both controllable and observable.
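The cart's behavior can also be checked by direct simulation of the state equations above, here with hypothetical parameter values and a crude forward-Euler integrator: starting from an initial displacement with no applied force, positive damping should drive the state toward the origin.

```python
# Hypothetical parameters for m*y'' = u - b*y' - k*y.
m, k, b = 1.0, 1.0, 1.0
dt, steps = 0.001, 30_000            # simulate 30 seconds

x1, x2 = 1.0, 0.0                    # initial position and velocity
for _ in range(steps):
    u = 0.0                          # unforced
    # Forward Euler step of x1' = x2, x2' = (u - b*x2 - k*x1)/m
    x1, x2 = x1 + dt * x2, x2 + dt * ((u - b * x2 - k * x1) / m)

# With positive damping both states decay toward zero.
```

A fixed-step Euler scheme is used only for brevity; practical work would use a proper ODE solver.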
== Nonlinear systems ==
The more general form of a state-space model can be written as two functions.
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {f} (t,x(t),u(t))}
{\displaystyle \mathbf {y} (t)=\mathbf {h} (t,x(t),u(t))}
The first is the state equation and the latter is the output equation.
If the function f(·,·,·) is a linear combination of states and inputs then the equations can be written in matrix notation like above. The u(t) argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).
=== Pendulum example ===
A classic nonlinear system is a simple unforced pendulum
{\displaystyle m\ell ^{2}{\ddot {\theta }}(t)=-m\ell g\sin \theta (t)-k\ell {\dot {\theta }}(t)}
where
θ(t) is the angle of the pendulum with respect to the direction of gravity;
m is the mass of the pendulum (the pendulum rod's mass is assumed to be zero);
g is the gravitational acceleration;
k is the coefficient of friction at the pivot point;
ℓ is the radius of the pendulum (to the center of gravity of the mass m).
The state equations are then
{\displaystyle {\dot {x}}_{1}(t)=x_{2}(t)}
{\displaystyle {\dot {x}}_{2}(t)=-{\frac {g}{\ell }}\sin {x_{1}}(t)-{\frac {k}{m\ell }}{x_{2}}(t)}
where
x1(t) = θ(t) is the angle of the pendulum;
x2(t) = ẋ1(t) is the rotational velocity of the pendulum;
ẋ2 = ẍ1 is the rotational acceleration of the pendulum.
Instead, the state equation can be written in the general form
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}{\dot {x}}_{1}(t)\\{\dot {x}}_{2}(t)\end{bmatrix}}=\mathbf {f} (t,x(t))={\begin{bmatrix}x_{2}(t)\\-{\frac {g}{\ell }}\sin {x_{1}}(t)-{\frac {k}{m\ell }}{x_{2}}(t)\end{bmatrix}}.}
The equilibrium/stationary points of a system are when
{\displaystyle {\dot {x}}=0}
and so the equilibrium points of a pendulum are those that satisfy
{\displaystyle {\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}n\pi \\0\end{bmatrix}}}
for integers n.
== See also ==
== References ==
== Further reading ==
== External links ==
Wolfram language functions for linear state-space models, affine state-space models, and nonlinear state-space models.
Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need a priori information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with control law changing itself.
== Parameter estimation ==
The foundation of adaptive control is parameter estimation, which is a branch of system identification. Common methods of estimation include recursive least squares and gradient descent. Both of these methods provide update laws that are used to modify estimates in real-time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in concurrent learning adaptive control). Projection and normalization are commonly used to improve the robustness of estimation algorithms.
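A gradient-descent update law of the kind mentioned above can be sketched in a few lines: an unknown plant gain in y = θ·u is estimated online from the prediction error, with a persistently exciting input so the estimate converges. All numbers here are hypothetical.

```python
import math

theta_true = 2.5                     # unknown plant gain (to be estimated)
theta_hat = 0.0                      # initial estimate
gamma = 0.1                          # adaptation gain

for i in range(2000):
    u = math.sin(0.1 * i) + 1.0      # persistently exciting input
    y = theta_true * u               # measured plant output
    e = y - theta_hat * u            # prediction error
    theta_hat += gamma * e * u       # gradient (LMS-style) update
```

Without persistent excitation (e.g. u ≡ 0) the error provides no information and the estimate would never converge, which is exactly the condition the text refers to.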
== Classification of adaptive control techniques ==
In general, one should distinguish between:
Feedforward adaptive control
Feedback adaptive control
as well as between
Direct methods
Indirect methods
Hybrid methods
Direct methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller parameters. Hybrid methods rely on both estimation of parameters and direct modification of the control law.
There are several broad categories of feedback adaptive control (classification can vary):
Dual adaptive controllers – based on dual control theory
Optimal dual controllers – difficult to design
Suboptimal dual controllers
Nondual adaptive controllers
Adaptive pole placement
Extremum-seeking controllers
Iterative learning control
Gain scheduling
Model reference adaptive controllers (MRACs) – incorporate a reference model defining desired closed loop performance
Gradient optimization MRACs – use local rule for adjusting params when performance differs from reference. Ex.: "MIT rule".
Stability optimized MRACs
Model identification adaptive controllers (MIACs) – perform system identification while the system is running
Cautious adaptive controllers – use current SI to modify control law, allowing for SI uncertainty
Certainty equivalent adaptive controllers – take current SI to be the true system, assume no uncertainty
Nonparametric adaptive controllers
Parametric adaptive controllers
Explicit parameter adaptive controllers
Implicit parameter adaptive controllers
Multiple models – Use large number of models, which are distributed in the region of uncertainty, and based on the responses of the plant and the models. One model is chosen at every instant, which is closest to the plant according to some metric.
Some special topics in adaptive control can be introduced as well:
Adaptive control based on discrete-time process identification
Adaptive control based on the model reference control technique
Adaptive control based on continuous-time process models
Adaptive control of multivariable processes
Adaptive control of nonlinear processes
Concurrent learning adaptive control, which relaxes the condition on persistent excitation for parameter convergence for a class of systems
In recent times, adaptive control has been merged with intelligent techniques such as fuzzy and neural networks to bring forth new concepts such as fuzzy adaptive control.
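As a concrete taste of the MRAC family above, here is a deliberately minimal sketch of the MIT rule: a feedforward gain θ is adapted so that a static plant y = k_p·θ·r tracks the reference model y_m = k_m·r, by descending the gradient of e²/2. Real MRAC designs use dynamic plants and models; all numbers here are hypothetical.

```python
import math

k_p, k_m = 2.0, 1.0                  # unknown plant gain, reference-model gain
gamma, dt = 0.5, 0.01                # adaptation gain, time step
theta = 0.0                          # adjustable feedforward gain

for i in range(20_000):
    r = math.sin(0.05 * i) + 0.5     # reference input
    y_m = k_m * r                    # reference-model output
    y = k_p * theta * r              # plant output under u = theta * r
    e = y - y_m                      # tracking error
    theta += dt * (-gamma * e * y_m) # MIT rule: d(theta)/dt = -gamma * e * y_m

# theta should approach k_m / k_p, the gain that makes the plant match the model.
```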
== Applications ==
When designing adaptive control systems, special consideration is necessary of convergence and robustness issues. Lyapunov stability is typically used to derive control adaptation laws and show convergence. Typical applications include:
Self-tuning of subsequently fixed linear controllers during the implementation phase for one operating point;
Self-tuning of subsequently fixed robust controllers during the implementation phase for whole range of operating points;
Self-tuning of fixed controllers on request if the process behaviour changes due to ageing, drift, wear, etc.;
Adaptive control of linear controllers for nonlinear or time-varying processes;
Adaptive control or self-tuning control of nonlinear controllers for nonlinear processes;
Adaptive control or self-tuning control of multivariable controllers for multivariable processes (MIMO systems);
Usually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady-states or to extremum value control, optimizing the steady state. Hence, there are several ways to apply adaptive control algorithms.
A particularly successful application of adaptive control has been adaptive flight control. This body of work has focused on guaranteeing stability of a model reference adaptive control scheme using Lyapunov arguments. Several successful flight-test demonstrations have been conducted, including fault tolerant adaptive control.
== See also ==
Nonlinear control
Intelligent control
Lyapunov optimization
== References ==
== Further reading ==
B. Egardt, Stability of Adaptive Controllers. New York: Springer-Verlag, 1979.
I. D. Landau, Adaptive Control: The Model Reference Approach. New York: Marcel Dekker, 1979.
P. A. Ioannou and J. Sun, Robust Adaptive Control. Upper Saddle River, NJ: Prentice-Hall, 1996.
K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Englewood Cliffs, NJ: Prentice Hall, 1989; Dover Publications, 2004.
S. Sastry and M. Bodson, Adaptive Control: Stability, Convergence and Robustness. Prentice Hall, 1989.
K. J. Astrom and B. Wittenmark, Adaptive Control. Reading, MA: Addison-Wesley, 1995.
I. D. Landau, R. Lozano, and M. M’Saad, Adaptive Control. New York, NY: Springer-Verlag, 1998.
G. Tao, Adaptive Control Design and Analysis. Hoboken, NJ: Wiley-Interscience, 2003.
P. A. Ioannou and B. Fidan, Adaptive Control Tutorial. SIAM, 2006.
G. C. Goodwin and K. S. Sin, Adaptive Filtering Prediction and Control. Englewood Cliffs, NJ: Prentice-Hall, 1984.
M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design. Wiley Interscience, 1995.
P. A. Ioannou and P. V. Kokotovic, Adaptive Systems with Reduced Models. Springer Verlag, 1983.
Annaswamy, Anuradha M.; Fradkov, Alexander L. (2021). "A historical perspective of adaptive control and learning". Annual Reviews in Control. 52: 18–41. arXiv:2108.11336. doi:10.1016/j.arcontrol.2021.10.014. S2CID 237290042.
== External links ==
Shankar Sastry and Marc Bodson, Adaptive Control: Stability, Convergence, and Robustness, Prentice-Hall, 1989-1994 (book)
K. Sevcik: Tutorial on Model Reference Adaptive Control (Drexel University)
Tutorial on Concurrent Learning Model Reference Adaptive Control G. Chowdhary (slides, relevant papers, and matlab code)
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.
To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, by Charles Sturm, and in 1895 by Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria; and from 1922 onwards, by the development of PID control theory by Nicolas Minorsky.
Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research.
== History ==
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.
A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.
Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.
== Open-loop and closed-loop (feedback) control ==
== Classical control theory ==
== Linear and nonlinear control theory ==
The field of control theory can be divided into two branches:
Linear control theory – This applies to systems made of devices which obey the superposition principle, which means roughly that the output is proportional to the input. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems are amenable to powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, zeros and poles, which give solutions for system response and design techniques for most systems of interest.
Nonlinear control theory – This covers a wider class of systems that do not obey the superposition principle, and applies to more real-world systems because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The few mathematical techniques which have been developed to handle them are more difficult and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theorem, and describing functions. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system using perturbation theory, and linear techniques can be used.
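The linearization step mentioned above can be illustrated with a small sketch. The pendulum model theta'' = -(g/L)·sin(theta) and its parameters are assumptions chosen for illustration, not taken from the article: near the stable equilibrium theta = 0, replacing sin(theta) with theta yields a linear system whose response closely tracks the nonlinear one for small initial angles.

```python
import math

# Sketch: linearizing an assumed pendulum model theta'' = -(g/L)*sin(theta)
# about theta = 0 (small-angle approximation sin(theta) ~ theta), then
# comparing the two simulations with semi-implicit Euler integration.

def simulate(theta0, rhs, dt=1e-3, steps=2000):
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega += rhs(theta) * dt   # update angular velocity first
        theta += omega * dt        # then the angle (semi-implicit Euler)
    return theta

g_over_L = 9.81  # illustrative g/L ratio

nonlinear = simulate(0.05, lambda th: -g_over_L * math.sin(th))
linear    = simulate(0.05, lambda th: -g_over_L * th)
print(abs(nonlinear - linear))  # small for a small initial angle
```

For larger initial angles the two trajectories drift apart, which is exactly why perturbation-based linearization is only valid near the operating point.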
== Analysis techniques – frequency domain and time domain ==
Mathematical techniques for analyzing and designing control systems fall into two different categories:
Frequency domain – In this type the values of the state variables, the mathematical variables representing the system's input, output and feedback are represented as functions of frequency. The input signal and the system's transfer function are converted from time functions to functions of frequency by a transform such as the Fourier transform, Laplace transform, or Z transform. The advantage of this technique is that it results in a simplification of the mathematics; the differential equations that represent the system are replaced by algebraic equations in the frequency domain which is much simpler to solve. However, frequency domain techniques can only be used with linear systems, as mentioned above.
Time-domain state space representation – In this type the values of the state variables are represented as functions of time. With this model, the system being analyzed is represented by one or more differential equations. Since frequency domain techniques are limited to linear systems, time domain is widely used to analyze real-world nonlinear systems. Although these are more difficult to solve, modern computer simulation techniques such as simulation languages have made their analysis routine.
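As a concrete sketch of the time-domain approach (the first-order system and its coefficient are assumed for illustration), a differential equation can be integrated numerically and checked against the closed-form step response that frequency-domain algebra would produce:

```python
import math

# Sketch: the assumed first-order lag dx/dt = -a*x + a*u, driven by a
# unit step u = 1, integrated with forward Euler in the time domain and
# compared with the closed-form step response x(t) = 1 - exp(-a*t).

a, dt = 2.0, 1e-4
x = 0.0
for _ in range(int(1.0 / dt)):       # simulate one second
    x += (-a * x + a * 1.0) * dt

analytic = 1.0 - math.exp(-a * 1.0)  # closed-form value at t = 1
print(x, analytic)                   # the two agree closely for small dt
```

The agreement improves as dt shrinks, which is why simulation languages make such time-domain analysis routine even when no closed form exists.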
In contrast to the frequency-domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.
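A minimal state-space sketch, with illustrative matrices (not from the article): two first-order equations x' = Ax + Bu and an output y = Cx, stepped forward in time.

```python
# Sketch: a two-state state-space model x' = A x + B u, y = C x,
# integrated with forward Euler. The matrices are illustrative; this A
# has eigenvalues -1 and -2, so the unforced response decays to zero.

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]

def step(x, u, dt):
    dx = [A[i][0]*x[0] + A[i][1]*x[1] + B[i]*u for i in range(2)]
    return [x[i] + dx[i]*dt for i in range(2)]

x = [1.0, 0.0]                  # initial condition
for _ in range(10000):          # 10 seconds with dt = 1e-3
    x = step(x, 0.0, 1e-3)      # zero input: unforced response
y = C[0]*x[0] + C[1]*x[1]
print(y)                        # close to zero after 10 s
```

Note that, unlike a transfer function, this formulation carries the nonzero initial condition explicitly, which is one of the advantages claimed for the state-space representation.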
== System interfacing ==
Control systems can be divided into different categories depending on the number of inputs and outputs.
Single-input single-output (SISO) – This is the simplest and most common type, in which one output is controlled by one control signal. Examples are the cruise control example above, or an audio system, in which the control input is the input audio signal and the output is the sound waves from the speaker.
Multiple-input multiple-output (MIMO) – These are found in more complicated systems. For example, modern large telescopes such as the Keck and MMT have mirrors composed of many separate segments each controlled by an actuator. The shape of the entire mirror is constantly adjusted by a MIMO active optics control system using input from multiple sensors at the focal plane, to compensate for changes in the mirror shape due to thermal expansion, contraction, stresses as it is rotated and distortion of the wavefront due to turbulence in the atmosphere. Complicated systems such as nuclear reactors and human cells are simulated by a computer as large MIMO control systems.
=== Classical SISO system design ===
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a lead or lag filter. The ultimate goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain and phase margin and bandwidth. These characteristics may be evaluated through simulation, including a dynamic model of the system under control coupled with the compensation model.
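A textbook discrete PID loop can be sketched in a few lines. The plant model and the gains below are illustrative assumptions (not tuned for any real process); the point is only the structure: proportional, integral, and derivative terms acting on the SP-PV error.

```python
# Sketch of a discrete PID controller driving an assumed first-order
# plant dx/dt = -x + u toward a set point. Gains are illustrative.

def pid_step(error, state, kp=2.0, ki=1.0, kd=0.1, dt=0.01):
    integral, prev_error = state
    integral += error * dt                     # integral term accumulates
    derivative = (error - prev_error) / dt     # finite-difference derivative
    u = kp*error + ki*integral + kd*derivative
    return u, (integral, error)

pv, sp = 0.0, 1.0        # process variable and set point
state = (0.0, 0.0)       # (integral, previous error)
for _ in range(5000):    # 50 seconds with dt = 0.01
    u, state = pid_step(sp - pv, state)
    pv += (-pv + u) * 0.01
print(pv)                # settles at the set point (integral action)
```

The integral term is what removes the steady-state error here; with proportional action alone the process variable would settle below the set point.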
=== Modern MIMO system design ===
Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.
== Topics in control theory ==
=== Stability ===
The stability of a general dynamical system with no input can be described with Lyapunov stability criteria.
A linear system is called bounded-input bounded-output (BIBO) stable if its output will stay bounded for any bounded input.
Stability for nonlinear systems that take an input is input-to-state stability (ISS), which combines Lyapunov stability and a notion similar to BIBO stability.
For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative real parts, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside
in the open left half of the complex plane for continuous time, when the Laplace transform is used to obtain the transfer function;
inside the unit circle for discrete time, when the Z-transform is used.
The difference between the two cases is simply due to the traditional method of plotting continuous-time versus discrete-time transfer functions: the continuous Laplace transform is in Cartesian coordinates, where the x axis is the real axis, while the discrete Z-transform is in circular coordinates, where the ρ axis is the real axis.
When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
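The two pole conditions above translate directly into code. A minimal sketch, with example pole sets chosen for illustration:

```python
# Sketch: the continuous-time and discrete-time stability tests stated
# above, applied to illustrative pole lists.

def stable_continuous(poles):
    return all(p.real < 0 for p in poles)   # open left half-plane

def stable_discrete(poles):
    return all(abs(p) < 1 for p in poles)   # strictly inside unit circle

print(stable_continuous([-1+2j, -1-2j, -0.5]))  # all in left half-plane
print(stable_discrete([0.5, 0.9j]))             # all inside unit circle
print(stable_discrete([1.5]))                   # |z| > 1: unstable
```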
If a system in question has an impulse response of {\displaystyle x[n]=0.5^{n}u[n]}, then the Z-transform (see this example) is given by {\displaystyle X(z)={\frac {1}{1-0.5z^{-1}}}}, which has a pole at {\displaystyle z=0.5} (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.
However, if the impulse response was {\displaystyle x[n]=1.5^{n}u[n]}, then the Z-transform is {\displaystyle X(z)={\frac {1}{1-1.5z^{-1}}}}, which has a pole at {\displaystyle z=1.5} and is not BIBO stable since the pole has a modulus strictly greater than one.
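The same conclusion can be reached without transforms: for an LTI system, BIBO stability corresponds to absolute summability of the impulse response. A quick numerical sketch of the two examples:

```python
# Sketch: absolute summability of the two impulse responses discussed
# above. The geometric series sum of 0.5**n converges (to 2), while
# 1.5**n grows without bound.

stable_sum = sum(0.5**n for n in range(200))  # partial sum, already ~2
print(stable_sum)

unstable_tail = 1.5**200                      # a single late term diverges
print(unstable_tail > 1e6)                    # the series cannot converge
```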
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
=== Controllability and observability ===
Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
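Controllability of a small linear system can be checked concretely. For a two-state system x' = Ax + Bu, the standard test is whether the controllability matrix [B, AB] has full rank; the matrices below are illustrative assumptions.

```python
# Sketch: the controllability matrix [B | A B] for an illustrative
# two-state system. Full rank (nonzero determinant here) means an
# appropriate input can drive the system to any state.

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]

# second column: A times B
AB = [A[0][0]*B[0] + A[0][1]*B[1],
      A[1][0]*B[0] + A[1][1]*B[1]]

det = B[0]*AB[1] - B[1]*AB[0]   # determinant of the 2x2 matrix [B | AB]
print(det != 0)                 # controllable iff the determinant is nonzero
```

For larger systems the same idea extends to [B, AB, A²B, ...] and a numerical rank test, typically via a linear-algebra library.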
=== Control specification ===
Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).
A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it is desirable to obtain particular dynamics in the closed loop: i.e. that the poles have {\displaystyle Re[\lambda ]<-{\overline {\lambda }}}, where {\displaystyle {\overline {\lambda }}} is a fixed value strictly greater than zero, instead of simply asking that {\displaystyle Re[\lambda ]<0}.
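Checking this stricter pole specification on a candidate design is a one-liner; the bound and the pole list below are illustrative assumptions:

```python
# Sketch: verifying that every closed-loop pole decays faster than an
# assumed bound lambda_bar, i.e. Re[lambda] < -lambda_bar.

lam_bar = 1.0                            # illustrative decay-rate bound
poles = [-2.0 + 1.0j, -2.0 - 1.0j, -1.5]  # illustrative closed-loop poles

meets_spec = all(p.real < -lam_bar for p in poles)
print(meets_spec)  # True: every mode decays at least as fast as exp(-t)
```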
Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
=== Model identification and robustness ===
A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.
System identification
The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations: for example, in the case of a mass-spring-damper system we know that {\displaystyle m{\ddot {x}}(t)=-Kx(t)-B{\dot {x}}(t)}. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.
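The mass-spring-damper equation above can be simulated directly once nominal parameters are chosen (the values of m, K and B below are illustrative assumptions):

```python
# Sketch: integrating the mass-spring-damper model m*x'' = -K*x - B*x'
# with illustrative nominal parameters; the damped mass returns to rest.

m, K, B = 1.0, 4.0, 1.0     # assumed nominal mass, stiffness, damping
x, v, dt = 1.0, 0.0, 1e-3   # released from rest at x = 1

for _ in range(20000):      # 20 seconds of simulated time
    a = (-K*x - B*v) / m    # acceleration from the model equation
    v += a * dt             # semi-implicit Euler: velocity first,
    x += v * dt             # then position with the new velocity
print(abs(x) < 0.05)        # the oscillation has largely died out
```

Re-running with perturbed parameters (say K off by 10%) gives a feel for how far the true response can drift from the nominal model, which is precisely the robustness concern described above.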
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
Analysis
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Key measures include the gain and phase margins. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must choose a control technique that includes them among its properties.
Constraints
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
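One common anti-windup scheme is conditional integration: freeze the integral term whenever the requested control exceeds the actuator limit. A minimal sketch, with an assumed first-order plant, saturation limit and PI gains:

```python
# Sketch of anti-windup by conditional integration: the integrator is
# frozen whenever the raw control request exceeds the actuator limit.
# Plant, limit and gains are illustrative assumptions.

def saturate(u, limit=1.0):
    return max(-limit, min(limit, u))

kp, ki, dt = 1.0, 5.0, 0.01
integral, pv = 0.0, 0.0
for _ in range(2000):                 # 20 seconds
    error = 1.0 - pv                  # set point is 1.0
    u_raw = kp*error + ki*integral
    u = saturate(u_raw)
    if u == u_raw:                    # integrate only when unsaturated
        integral += error * dt
    pv += (-pv + u) * dt              # assumed first-order plant
print(pv)                             # settles near the set point
```

Without the `if` guard, the integral would keep growing while the actuator is pinned at its limit, causing large overshoot once the error changes sign.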
== System classifications ==
=== Linear systems control ===
For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design.
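For a system already in controller canonical form, the feedback gains can even be read off by hand. A sketch for the double integrator x'' = u (an assumed example, not from the article): with u = -k1·x - k2·x', the closed-loop characteristic polynomial is s² + k2·s + k1, so desired poles fix the gains directly.

```python
# Sketch: pole placement for the double integrator x'' = u. With
# state feedback u = -k1*x - k2*x', the closed-loop polynomial is
# s^2 + k2*s + k1; poles at -2 and -3 therefore give k1 = 6, k2 = 5.

desired = [-2.0, -3.0]
k1 = desired[0] * desired[1]       # product of the poles
k2 = -(desired[0] + desired[1])    # minus their sum
print(k1, k2)

# quick check: the closed loop drives the state to zero
x, v, dt = 1.0, 0.0, 1e-3
for _ in range(10000):             # 10 seconds
    u = -k1*x - k2*v
    v += u * dt
    x += v * dt
print(abs(x) < 1e-3)
```

General state-space pole placement (e.g. Ackermann's formula) automates exactly this matching of characteristic polynomials for arbitrary controllable systems.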
=== Nonlinear systems control ===
Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.
=== Decentralized systems control ===
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
=== Deterministic and stochastic systems control ===
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.
== Main control strategies ==
Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen.
List of the main control techniques
Optimal control is a particular control technique in which the control signal optimizes a certain "cost index": for example, in the case of a satellite, the jet thrusts needed to bring it to the desired trajectory while consuming the least amount of fuel. Two optimal control design methods have been widely used in industrial applications, as it has been shown they can guarantee closed-loop stability. These are Model Predictive Control (MPC) and linear-quadratic-Gaussian control (LQG). The first can more explicitly take into account constraints on the signals in the system, which is an important feature in many industrial processes. However, the "optimal control" structure in MPC is only a means to achieve such a result, as it does not optimize a true performance index of the closed-loop control system. Together with PID controllers, MPC systems are the most widely used control technique in process control.
Robust control deals explicitly with uncertainty in its approach to controller design. Controllers designed using robust control methods tend to be able to cope with small differences between the true system and the nominal model used for design. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness. Examples of modern robust control techniques include H-infinity loop-shaping developed by Duncan McFarlane and Keith Glover, Sliding mode control (SMC) developed by Vadim Utkin, and safe protocols designed for control of large heterogeneous populations of electric loads in Smart Power Grid applications. Robust methods aim to achieve robust performance and/or stability in the presence of small modeling errors.
Stochastic control deals with control design with uncertainty in the model. In typical stochastic control problems, it is assumed that there exist random noise and disturbances in the model and the controller, and the control design must take into account these random deviations.
Adaptive control uses on-line identification of the process parameters, or modification of controller gains, thereby obtaining strong robustness properties. Adaptive controls were applied for the first time in the aerospace industry in the 1950s, and have found particular success in that field.
A hierarchical control system is a type of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
Intelligent control uses various AI computing approaches like artificial neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms or a combination of these methods, such as neuro-fuzzy algorithms, to control a dynamic system.
Self-organized criticality control may be defined as attempts to interfere in the processes by which the self-organized system dissipates energy.
== People in systems and control ==
Many active and historical figures made significant contributions to control theory, including:
Pierre-Simon Laplace invented the Z-transform in his work on probability theory, now used to solve discrete-time control theory problems. The Z-transform is a discrete-time equivalent of the Laplace transform which is named after him.
Irmgard Flugge-Lotz developed the theory of discontinuous automatic control and applied it to automatic aircraft control systems.
Alexander Lyapunov, whose work in the 1890s marks the beginning of stability theory.
Harold S. Black invented the concept of negative feedback amplifiers in 1927. He managed to develop stable negative feedback amplifiers in the 1930s.
Harry Nyquist developed the Nyquist stability criterion for feedback systems in the 1930s.
Richard Bellman developed dynamic programming in the 1940s.
Warren E. Dixon, control theorist and professor.
Kyriakos G. Vamvoudakis, developed synchronous reinforcement learning algorithms to solve optimal control and game theoretic problems
Andrey Kolmogorov co-developed the Wiener–Kolmogorov filter in 1941.
Norbert Wiener co-developed the Wiener–Kolmogorov filter and coined the term cybernetics in the 1940s.
John R. Ragazzini introduced digital control and the use of the Z-transform (invented by Laplace) in control theory in the 1950s.
Lev Pontryagin introduced the maximum principle and the bang-bang principle.
Pierre-Louis Lions developed viscosity solutions into stochastic control and optimal control methods.
Rudolf E. Kálmán pioneered the state-space approach to systems and control. Introduced the notions of controllability and observability. Developed the Kalman filter for linear estimation.
Ali H. Nayfeh who was one of the main contributors to nonlinear control theory and published many books on perturbation methods
Jan C. Willems introduced the concept of dissipativity, as a generalization of the Lyapunov function to input/state/output systems. The construction of the storage function, as the analogue of a Lyapunov function is called, led to the study of linear matrix inequalities (LMIs) in control theory. He pioneered the behavioral approach to mathematical systems theory.
== See also ==
Examples of control systems
Topics in control theory
Other related topics
== References ==
== Further reading ==
Levine, William S., ed. (1996). The Control Handbook. New York: CRC Press. ISBN 978-0-8493-8570-4.
Karl J. Åström; Richard M. Murray (2008). Feedback Systems: An Introduction for Scientists and Engineers (PDF). Princeton University Press. ISBN 978-0-691-13576-2.
Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3.
Vannevar Bush (1929). Operational Circuit Analysis. John Wiley and Sons, Inc.
Robert F. Stengel (1994). Optimal Control and Estimation. Dover Publications. ISBN 978-0-486-68200-6.
Franklin; et al. (2002). Feedback Control of Dynamic Systems (4 ed.). New Jersey: Prentice Hall. ISBN 978-0-13-032393-4.
Joseph L. Hellerstein; Dawn M. Tilbury; Sujay Parekh (2004). Feedback Control of Computing Systems. John Wiley and Sons. ISBN 978-0-471-26637-2.
Diederich Hinrichsen and Anthony J. Pritchard (2005). Mathematical Systems Theory I – Modelling, State Space Analysis, Stability and Robustness. Springer. ISBN 978-3-540-44125-0.
Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems. Second Edition (PDF). Springer. ISBN 978-0-387-98489-6.
Goodwin, Graham (2001). Control System Design. Prentice Hall. ISBN 978-0-13-958653-8.
Christophe Basso (2012). Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide. Artech House. ISBN 978-1608075577.
Boris J. Lurie; Paul J. Enright (2019). Classical Feedback Control with Nonlinear Multi-loop Systems (3 ed.). CRC Press. ISBN 978-1-1385-4114-6.
For Chemical Engineering
Luyben, William (1989). Process Modeling, Simulation, and Control for Chemical Engineers. McGraw Hill. ISBN 978-0-07-039159-8.
== External links ==
Control Tutorials for Matlab, a set of worked-through control examples solved by several different methods.
Control Tuning and Best Practices
Advanced control structures, free on-line simulators explaining the control theory | Wikipedia/Controller_(control_theory) |
In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex valued frequency-domain (the z-domain or z-plane) representation.
It can be considered a discrete-time equivalent of the Laplace transform (the s-domain or s-plane). This similarity is explored in the theory of time-scale calculus.
While the continuous-time Fourier transform is evaluated on the s-domain's vertical axis (the imaginary axis), the discrete-time Fourier transform is evaluated along the z-domain's unit circle. The s-domain's left half-plane maps to the area inside the z-domain's unit circle, while the s-domain's right half-plane maps to the area outside of the z-domain's unit circle.
In signal processing, one of the means of designing digital filters is to take analog designs, subject them to a bilinear transform which maps them from the s-domain to the z-domain, and then produce the digital filter by inspection, manipulation, or numerical approximation. Such methods tend not to be accurate except in the vicinity of the complex unity, i.e. at low frequencies.
== History ==
The foundational concept now recognized as the Z-transform, which is a cornerstone in the analysis and design of digital control systems, was not entirely novel when it emerged in the mid-20th century. Its embryonic principles can be traced back to the work of the French mathematician Pierre-Simon Laplace, who is better known for the Laplace transform, a closely related mathematical technique. However, the explicit formulation and application of what we now understand as the Z-transform were significantly advanced in 1947 by Witold Hurewicz and colleagues. Their work was motivated by the challenges presented by sampled-data control systems, which were becoming increasingly relevant in the context of radar technology during that period. The Z-transform provided a systematic and effective method for solving linear difference equations with constant coefficients, which are ubiquitous in the analysis of discrete-time signals and systems.
The method was further refined and gained its official nomenclature, "the Z-transform," in 1952, thanks to the efforts of John R. Ragazzini and Lotfi A. Zadeh, who were part of the sampled-data control group at Columbia University. Their work not only solidified the mathematical framework of the Z-transform but also expanded its application scope, particularly in the field of electrical engineering and control systems.
A notable extension, known as the modified or advanced Z-transform, was later introduced by Eliahu I. Jury. Jury's work extended the applicability and robustness of the Z-transform, especially in handling initial conditions and providing a more comprehensive framework for the analysis of digital control systems. This advanced formulation has played a pivotal role in the design and stability analysis of discrete-time control systems, contributing significantly to the field of digital signal processing.
Interestingly, the conceptual underpinnings of the Z-transform intersect with a broader mathematical concept known as the method of generating functions, a powerful tool in combinatorics and probability theory. This connection was hinted at as early as 1730 by Abraham de Moivre, a pioneering figure in the development of probability theory. De Moivre utilized generating functions to solve problems in probability, laying the groundwork for what would eventually evolve into the Z-transform. From a mathematical perspective, the Z-transform can be viewed as a specific instance of a Laurent series, where the sequence of numbers under investigation is interpreted as the coefficients in the (Laurent) expansion of an analytic function. This perspective not only highlights the deep mathematical roots of the Z-transform but also illustrates its versatility and broad applicability across different branches of mathematics and engineering.
== Definition ==
The Z-transform can be defined as either a one-sided or two-sided transform, analogous to the one-sided and two-sided Laplace transforms.
=== Bilateral Z-transform ===
The bilateral or two-sided Z-transform of a discrete-time signal {\displaystyle x[n]} is the formal power series {\displaystyle X(z)} defined as:
{\displaystyle X(z)=\sum _{n=-\infty }^{\infty }x[n]z^{-n}}
where {\displaystyle n} is an integer and {\displaystyle z} is, in general, a complex number. In polar form, {\displaystyle z} may be written as:
{\displaystyle z=Ae^{j\phi }=A\cdot (\cos {\phi }+j\sin {\phi })}
where {\displaystyle A} is the magnitude of {\displaystyle z}, {\displaystyle j} is the imaginary unit, and {\displaystyle \phi } is the complex argument (also referred to as angle or phase) in radians.
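As a concrete illustration (a sketch of our own, not part of the original article), the defining sum can be evaluated numerically for a truncated sequence; the function name `z_transform` is illustrative.

```python
# Sketch: numerically evaluate a truncated bilateral Z-transform
# X(z) = sum_n x[n] z^{-n}.  Names here (z_transform, n0) are our own.

def z_transform(x, n0, z):
    """Sum x[k] * z**-(n0 + k) over the finite list x, where x[0]
    is the sample at time index n0."""
    return sum(xk * z ** -(n0 + k) for k, xk in enumerate(x))

# Truncation of the causal signal x[n] = (0.5)**n, n >= 0.
x = [0.5 ** n for n in range(51)]
# Evaluate at z = 2, inside the ROC |z| > 0.5; the closed form
# 1 / (1 - 0.5 z^{-1}) equals 4/3 at z = 2.
print(round(z_transform(x, 0, 2.0), 6))   # 1.333333
```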
=== Unilateral Z-transform ===
Alternatively, in cases where {\displaystyle x[n]} is defined only for {\displaystyle n\geq 0}, the single-sided or unilateral Z-transform is defined as:
{\displaystyle X(z)=\sum _{n=0}^{\infty }x[n]z^{-n}}
In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system.
An important example of the unilateral Z-transform is the probability-generating function, where the component {\displaystyle x[n]} is the probability that a discrete random variable takes the value {\displaystyle n}. The properties of Z-transforms (listed in § Properties) have useful interpretations in the context of probability theory.
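To make the probability interpretation concrete, here is a small sketch of our own (not from the article): the probability-generating function of a binomial random variable. Note that the conventional PGF uses positive powers of its argument, so it corresponds to the unilateral Z-transform evaluated at z = 1/s.

```python
from math import comb

# Sketch: probability-generating function of a Binomial(4, 0.5) variable.
# p[n] = P(N = n); the PGF G(s) = sum_n p[n] s**n corresponds to the
# unilateral Z-transform X(z) evaluated at z = 1/s.
p = [comb(4, n) * 0.5 ** 4 for n in range(5)]

def pgf(p, s):
    return sum(pn * s ** n for n, pn in enumerate(p))

print(round(pgf(p, 1.0), 6))   # 1.0     (total probability)
print(round(pgf(p, 0.0), 6))   # 0.0625  (P(N = 0) = 1/16)
```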
== Inverse Z-transform ==
The inverse Z-transform is:
{\displaystyle x[n]={\frac {1}{2\pi j}}\oint _{C}X(z)z^{n-1}\,\mathrm {d} z}
where {\displaystyle C} is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). In the case where the ROC is causal (see Example 2), this means the path {\displaystyle C} must encircle all of the poles of {\displaystyle X(z)}.
A special case of this contour integral occurs when {\displaystyle C} is the unit circle. This contour can be used when the ROC includes the unit circle, which is always guaranteed when {\displaystyle X(z)} is stable, that is, when all the poles are inside the unit circle. With this contour, the inverse Z-transform simplifies to the inverse discrete-time Fourier transform, or Fourier series, of the periodic values of the Z-transform around the unit circle:
The Z-transform with a finite range of {\displaystyle n} and a finite number of uniformly spaced {\displaystyle z} values can be computed efficiently via Bluestein's FFT algorithm. The discrete-time Fourier transform (DTFT)—not to be confused with the discrete Fourier transform (DFT)—is a special case of such a Z-transform obtained by restricting {\displaystyle z} to lie on the unit circle.
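The restriction to uniformly spaced points on the unit circle can be checked directly. The following sketch (our own, using only the standard library) confirms that for a finite sequence these Z-transform samples coincide with the DFT bins.

```python
import cmath

# Sketch: sampling the Z-transform of a finite sequence at N uniformly
# spaced points z_k = e^{j 2 pi k / N} on the unit circle reproduces
# the N-point DFT of that sequence.
x = [1.0, 0.5, 0.25, 0.125]
N = len(x)

def z_sample(zk):
    return sum(xn * zk ** -n for n, xn in enumerate(x))

def dft(k):
    return sum(xn * cmath.exp(-2j * cmath.pi * k * n / N)
               for n, xn in enumerate(x))

for k in range(N):
    zk = cmath.exp(2j * cmath.pi * k / N)
    assert abs(z_sample(zk) - dft(k)) < 1e-12
print("Z-transform samples on the unit circle equal the DFT bins")
```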
The following three methods are often used for the evaluation of the inverse Z-transform:
=== Direct Evaluation by Contour Integration ===
This method involves applying the Cauchy Residue Theorem to evaluate the inverse Z-transform. By integrating around a closed contour in the complex plane, the residues at the poles of the Z-transform function inside the ROC are summed. This technique is particularly useful when working with functions expressed in terms of complex variables.
=== Expansion into a Series of Terms in the Variables z and z⁻¹ ===
In this method, the Z-transform is expanded into a power series. This approach is useful when the Z-transform function is rational, allowing for the approximation of the inverse by expanding into a series and determining the signal coefficients term by term.
=== Partial-Fraction Expansion and Table Lookup ===
This technique decomposes the Z-transform into a sum of simpler fractions, each corresponding to known Z-transform pairs. The inverse Z-transform is then determined by looking up each term in a standard table of Z-transform pairs. This method is widely used for its efficiency and simplicity, especially when the original function can be easily broken down into recognizable components.
==== Example: ====
A) Determine the inverse Z-transform of the following by series expansion method,
{\displaystyle X(z)={\frac {1}{1-1.5z^{-1}+0.5z^{-2}}}}
Solution:
Case 1:
ROC: {\displaystyle \left\vert Z\right\vert >1}
Since the ROC is the exterior of a circle, {\displaystyle x(n)} is causal (signal existing for n≥0).
{\displaystyle X(z)={1 \over 1-{3 \over 2}z^{-1}+{1 \over 2}z^{-2}}=1+{{3 \over 2}z^{-1}}+{{7 \over 4}z^{-2}}+{{15 \over 8}z^{-3}}+{{31 \over 16}z^{-4}}+\cdots }
thus,
{\displaystyle {\begin{aligned}x(n)&=\left\{1,{\frac {3}{2}},{\frac {7}{4}},{\frac {15}{8}},{\frac {31}{16}}\ldots \right\}\\&\qquad \!\uparrow \\\end{aligned}}}
(arrow indicates term at x(0)=1)
Note that in each step of the long division process we eliminate the lowest power term of {\displaystyle z^{-1}}.
Case 2:
ROC: {\displaystyle \left\vert Z\right\vert <0.5}
Since the ROC is the interior of a circle, {\displaystyle x(n)} is anticausal (signal existing for n<0).
By performing long division we get,
{\displaystyle X(z)={\frac {1}{1-{\frac {3}{2}}z^{-1}+{\frac {1}{2}}z^{-2}}}=2z^{2}+6z^{3}+14z^{4}+30z^{5}+\ldots }
{\displaystyle {\begin{aligned}x(n)&=\{30,14,6,2,0,0\}\\&\qquad \qquad \qquad \quad \ \ \,\uparrow \\\end{aligned}}}
(arrow indicates term at x(0)=0)
Note that in each step of the long division process we eliminate the lowest power term of {\displaystyle z}.
Note:
When the signal is causal, we get negative powers of {\displaystyle z}, and when the signal is anticausal, we get positive powers of {\displaystyle z} (as the two cases above show): {\displaystyle z^{k}} indicates a term at {\displaystyle x(-k)} and {\displaystyle z^{-k}} indicates a term at {\displaystyle x(k)}.
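The long-division coefficients of Case 1 can also be reproduced programmatically. The sketch below (our own; the helper name `series_coeffs` is illustrative) applies the standard synthetic-division recursion for a rational X(z) written in powers of z⁻¹.

```python
# Sketch: causal power-series coefficients of X(z) = B(z)/A(z), with
# B and A polynomials in z^{-1}.  The recursion
#     x[n] = (b[n] - sum_{k=1..N} a[k] * x[n-k]) / a[0]
# is exactly long division, carried out term by term.

def series_coeffs(b, a, count):
    x = []
    for n in range(count):
        acc = b[n] if n < len(b) else 0.0
        for k in range(1, min(n, len(a) - 1) + 1):
            acc -= a[k] * x[n - k]
        x.append(acc / a[0])
    return x

# X(z) = 1 / (1 - 1.5 z^{-1} + 0.5 z^{-2}) from Case 1 above.
print(series_coeffs([1.0], [1.0, -1.5, 0.5], 5))
# [1.0, 1.5, 1.75, 1.875, 1.9375]  i.e.  1, 3/2, 7/4, 15/8, 31/16
```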
B) Determine the inverse Z-transform of the following by the partial-fraction expansion method,
{\displaystyle X(z)={\frac {1}{1-1.5z^{-1}+0.5z^{-2}}}}
Eliminating the negative powers of {\displaystyle z} and dividing by {\displaystyle z},
{\displaystyle {\frac {X(z)}{z}}={\frac {z^{2}}{z(z^{2}-1.5z+0.5)}}={\frac {z}{z^{2}-1.5z+0.5}}}
By Partial Fraction Expansion,
{\displaystyle {\begin{aligned}{\frac {X(z)}{z}}&={\frac {z}{(z-1)(z-0.5)}}={\frac {A_{1}}{z-0.5}}+{\frac {A_{2}}{z-1}}\\[4pt]&A_{1}=\left.{\frac {(z-0.5)X(z)}{z}}\right\vert _{z=0.5}={\frac {0.5}{(0.5-1)}}=-1\\[4pt]&A_{2}=\left.{\frac {(z-1)X(z)}{z}}\right\vert _{z=1}={\frac {1}{1-0.5}}={2}\\[4pt]{\frac {X(z)}{z}}&={\frac {2}{z-1}}-{\frac {1}{z-0.5}}\end{aligned}}}
Case 1:
ROC: {\displaystyle \left\vert Z\right\vert >1}
Both the terms are causal, hence {\displaystyle x(n)} is causal.
{\displaystyle {\begin{aligned}x(n)&=2{(1)^{n}}u(n)-1{(0.5)^{n}}u(n)\\&=(2-0.5^{n})u(n)\\\end{aligned}}}
Case 2:
ROC: {\displaystyle \left\vert Z\right\vert <0.5}
Both the terms are anticausal, hence {\displaystyle x(n)} is anticausal.
{\displaystyle {\begin{aligned}x(n)&=-2{(1)^{n}}u(-n-1)-(-1{(0.5)^{n}}u(-n-1))\\&=(0.5^{n}-2)u(-n-1)\\\end{aligned}}}
Case 3:
ROC: {\displaystyle 0.5<\left\vert Z\right\vert <1}
One of the terms is causal (p=0.5 provides the causal part) and the other is anticausal (p=1 provides the anticausal part), hence {\displaystyle x(n)} is two-sided.
{\displaystyle {\begin{aligned}x(n)&=-2{(1)^{n}}u(-n-1)-1{(0.5)^{n}}u(n)\\&=-2u(-n-1)-0.5^{n}u(n)\\\end{aligned}}}
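The residues A₁ and A₂ computed above, and the Case 1 closed form, can be sanity-checked numerically; the following sketch is our own illustration, not part of the article.

```python
# Sketch: numerically confirm the residues of X(z)/z = z / ((z-1)(z-0.5))
# by evaluating (z - p) * X(z)/z just off each pole p, then check the
# Case 1 closed form x(n) = 2 - 0.5**n against the long-division series.

def Xz_over_z(z):
    return z / ((z - 1) * (z - 0.5))

eps = 1e-8
A1 = eps * Xz_over_z(0.5 + eps)    # residue at z = 0.5
A2 = eps * Xz_over_z(1.0 + eps)    # residue at z = 1
print(round(A1, 4), round(A2, 4))  # -1.0 2.0

x = [2 - 0.5 ** n for n in range(5)]
print(x)  # [1.0, 1.5, 1.75, 1.875, 1.9375], matching the series expansion
```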
== Region of convergence ==
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation converges (i.e. doesn't blow up in magnitude to infinity):
{\displaystyle \mathrm {ROC} =\left\{z:\left|\sum _{n=-\infty }^{\infty }x[n]z^{-n}\right|<\infty \right\}}
=== Example 1 (no ROC) ===
Let {\displaystyle x[n]=(.5)^{n}\ .} Expanding {\displaystyle x[n]} on the interval {\displaystyle (-\infty ,\infty )} it becomes
{\displaystyle x[n]=\left\{\dots ,(.5)^{-3},(.5)^{-2},(.5)^{-1},1,(.5),(.5)^{2},(.5)^{3},\dots \right\}=\left\{\dots ,2^{3},2^{2},2,1,(.5),(.5)^{2},(.5)^{3},\dots \right\}.}
Looking at the sum
{\displaystyle \sum _{n=-\infty }^{\infty }x[n]z^{-n}\to \infty .}
Therefore, there are no values of {\displaystyle z} that satisfy this condition.
=== Example 2 (causal ROC) ===
Let {\displaystyle x[n]=(.5)^{n}\,u[n]} (where {\displaystyle u} is the Heaviside step function). Expanding {\displaystyle x[n]} on the interval {\displaystyle (-\infty ,\infty )} it becomes
{\displaystyle x[n]=\left\{\dots ,0,0,0,1,(.5),(.5)^{2},(.5)^{3},\dots \right\}.}
Looking at the sum
{\displaystyle \sum _{n=-\infty }^{\infty }x[n]z^{-n}=\sum _{n=0}^{\infty }(.5)^{n}z^{-n}=\sum _{n=0}^{\infty }\left({\frac {.5}{z}}\right)^{n}={\frac {1}{1-(.5)z^{-1}}}.}
The last equality arises from the infinite geometric series and the equality only holds if {\displaystyle |(.5)z^{-1}|<1,} which can be rewritten in terms of {\displaystyle z} as {\displaystyle |z|>(.5).} Thus, the ROC is {\displaystyle |z|>(.5).}
In this case the ROC is the complex plane with a disc of radius 0.5 at the origin "punched out".
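A quick numerical sketch (our own) illustrates this ROC: partial sums of the series converge for |z| > 0.5 and blow up for |z| < 0.5.

```python
# Sketch: partial sums of sum_{n>=0} (0.5)^n z^{-n} for the causal
# signal of Example 2.  Inside the ROC (|z| > 0.5) the ratio 0.5/z has
# magnitude < 1 and the sum converges; outside the ROC it diverges.

def partial_sum(z, N):
    return sum((0.5 / z) ** n for n in range(N))

print(round(partial_sum(2.0, 200), 6))  # 1.333333, i.e. 1/(1 - 0.25)
print(partial_sum(0.25, 60) > 1e15)     # True: ratio 2, the sum blows up
```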
=== Example 3 (anti causal ROC) ===
Let {\displaystyle x[n]=-(.5)^{n}\,u[-n-1]} (where {\displaystyle u} is the Heaviside step function). Expanding {\displaystyle x[n]} on the interval {\displaystyle (-\infty ,\infty )} it becomes
{\displaystyle x[n]=\left\{\dots ,-(.5)^{-3},-(.5)^{-2},-(.5)^{-1},0,0,0,0,\dots \right\}.}
Looking at the sum
{\displaystyle {\begin{aligned}\sum _{n=-\infty }^{\infty }x[n]\,z^{-n}&=-\sum _{n=-\infty }^{-1}(.5)^{n}\,z^{-n}\\&=-\sum _{m=1}^{\infty }\left({\frac {z}{.5}}\right)^{m}\\&=-{\frac {(.5)^{-1}z}{1-(.5)^{-1}z}}\\&=-{\frac {1}{(.5)z^{-1}-1}}\\&={\frac {1}{1-(.5)z^{-1}}}\\\end{aligned}}}
and using the infinite geometric series again, the equality only holds if {\displaystyle |(.5)^{-1}z|<1} which can be rewritten in terms of {\displaystyle z} as {\displaystyle |z|<(.5).} Thus, the ROC is {\displaystyle |z|<(.5).}
In this case the ROC is a disc centered at the origin and of radius 0.5.
What differentiates this example from the previous example is only the ROC. This is intentional to demonstrate that the transform result alone is insufficient.
=== Examples conclusion ===
Examples 2 and 3 clearly show that the Z-transform {\displaystyle X(z)} of {\displaystyle x[n]} is unique only when the ROC is also specified. Creating the pole–zero plot for the causal and anticausal case shows that the ROC for either case does not include the pole that is at 0.5. This extends to cases with multiple poles: the ROC will never contain poles.
In example 2, the causal system yields a ROC that includes {\displaystyle |z|=\infty } while the anticausal system in example 3 yields an ROC that includes {\displaystyle |z|=0.} In systems with multiple poles it is possible to have a ROC that includes neither {\displaystyle |z|=\infty } nor {\displaystyle |z|=0.} The ROC creates a circular band. For example,
{\displaystyle x[n]=(.5)^{n}\,u[n]-(.75)^{n}\,u[-n-1]}
has poles at 0.5 and 0.75. The ROC will be 0.5 < |z| < 0.75, which includes neither the origin nor infinity. Such a system is called a mixed-causality system as it contains a causal term {\displaystyle (.5)^{n}\,u[n]} and an anticausal term {\displaystyle -(.75)^{n}\,u[-n-1].}
The stability of a system can also be determined by knowing the ROC alone. If the ROC contains the unit circle (i.e., |z| = 1) then the system is stable. In the above systems the causal system (Example 2) is stable because |z| > 0.5 contains the unit circle.
Let us assume we are provided a Z-transform of a system without a ROC (i.e., an ambiguous {\displaystyle x[n]}). We can determine a unique {\displaystyle x[n]} provided we desire the following:
Stability
Causality
For stability the ROC must contain the unit circle. If we need a causal system then the ROC must contain infinity and the system function will be a right-sided sequence. If we need an anticausal system then the ROC must contain the origin and the system function will be a left-sided sequence. If we need both stability and causality, all the poles of the system function must be inside the unit circle.
The unique {\displaystyle x[n]} can then be found.
== Properties ==
Parseval's theorem
{\displaystyle \sum _{n=-\infty }^{\infty }x_{1}[n]x_{2}^{*}[n]\quad =\quad {\frac {1}{j2\pi }}\oint _{C}X_{1}(v)X_{2}^{*}({\tfrac {1}{v^{*}}})v^{-1}\mathrm {d} v}
Initial value theorem: If {\displaystyle x[n]} is causal, then
{\displaystyle x[0]=\lim _{z\to \infty }X(z).}
Final value theorem: If the poles of {\displaystyle (z-1)X(z)} are inside the unit circle, then
{\displaystyle x[\infty ]=\lim _{z\to 1}(z-1)X(z).}
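Both theorems can be checked numerically on the transform from the worked example above, X(z) = 2z/(z−1) − z/(z−0.5), whose inverse is x[n] = (2 − 0.5ⁿ)u[n]; this sketch is our own illustration.

```python
# Sketch: initial- and final-value theorems for
# X(z) = 2z/(z-1) - z/(z-0.5), which has x[n] = (2 - 0.5**n) u[n].

def X(z):
    return 2 * z / (z - 1) - z / (z - 0.5)

# Initial value: x[0] = lim_{z -> inf} X(z)
print(round(X(1e9), 6))           # 1.0
# Final value: x[inf] = lim_{z -> 1} (z - 1) X(z)
z = 1 + 1e-9
print(round((z - 1) * X(z), 6))   # 2.0
```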
== Table of common Z-transform pairs ==
Here:
{\displaystyle u:n\mapsto u[n]={\begin{cases}1,&n\geq 0\\0,&n<0\end{cases}}}
is the unit (or Heaviside) step function and
{\displaystyle \delta :n\mapsto \delta [n]={\begin{cases}1,&n=0\\0,&n\neq 0\end{cases}}}
is the discrete-time unit impulse function (cf. the Dirac delta function, which is a continuous-time version). The two functions are chosen together so that the unit step function is the accumulation (running total) of the unit impulse function.
== Relationship to Fourier series and Fourier transform ==
For values of {\displaystyle z} in the region {\displaystyle |z|{=}1}, known as the unit circle, we can express the transform as a function of a single real variable {\displaystyle \omega } by defining {\displaystyle z{=}e^{j\omega }.}
And the bi-lateral transform reduces to a Fourier series:
which is also known as the discrete-time Fourier transform (DTFT) of the {\displaystyle x[n]} sequence. This {\displaystyle 2\pi }-periodic function is the periodic summation of a Fourier transform, which makes it a widely used analysis tool. To understand this, let {\displaystyle X(f)} be the Fourier transform of any function, {\displaystyle x(t)}, whose samples at some interval {\displaystyle T} equal the {\displaystyle x[n]} sequence. Then the DTFT of the {\displaystyle x[n]} sequence can be written as follows.
where {\displaystyle T} has units of seconds and {\displaystyle f} has units of hertz. Comparison of the two series reveals that {\displaystyle \omega {=}2\pi fT} is a normalized frequency with units of radians per sample. The value {\displaystyle \omega {=}2\pi } corresponds to {\textstyle f{=}{\frac {1}{T}}}. And now, with the substitution {\textstyle f{=}{\frac {\omega }{2\pi T}},} Eq.1 can be expressed in terms of {\displaystyle X({\tfrac {\omega -2\pi k}{2\pi T}})} (a Fourier transform):
As parameter T changes, the individual terms of Eq.2 move farther apart or closer together along the f-axis. In Eq.3 however, the centers remain 2π apart, while their widths expand or contract. When sequence {\displaystyle x(nT)} represents the impulse response of an LTI system, these functions are also known as its frequency response. When the {\displaystyle x(nT)} sequence is periodic, its DTFT is divergent at one or more harmonic frequencies, and zero at all other frequencies. This is often represented by the use of amplitude-variant Dirac delta functions at the harmonic frequencies. Due to periodicity, there are only a finite number of unique amplitudes, which are readily computed by the much simpler discrete Fourier transform (DFT). (See Discrete-time Fourier transform § Periodic data.)
== Relationship to Laplace transform ==
=== Bilinear transform ===
The bilinear transform can be used to convert continuous-time filters (represented in the Laplace domain) into discrete-time filters (represented in the Z-domain), and vice versa. The following substitution is used:
{\displaystyle s={\frac {2}{T}}{\frac {(z-1)}{(z+1)}}}
to convert some function {\displaystyle H(s)} in the Laplace domain to a function {\displaystyle H(z)} in the Z-domain (Tustin transformation), or
{\displaystyle z=e^{sT}\approx {\frac {1+sT/2}{1-sT/2}}}
from the Z-domain to the Laplace domain. Through the bilinear transformation, the complex s-plane (of the Laplace transform) is mapped to the complex z-plane (of the z-transform). While this mapping is (necessarily) nonlinear, it is useful in that it maps the entire {\displaystyle j\omega } axis of the s-plane onto the unit circle in the z-plane. As such, the Fourier transform (which is the Laplace transform evaluated on the {\displaystyle j\omega } axis) becomes the discrete-time Fourier transform. This assumes that the Fourier transform exists; i.e., that the {\displaystyle j\omega } axis is in the region of convergence of the Laplace transform.
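The unit-circle property of the Tustin substitution is easy to verify numerically. The sketch below (our own) checks that the rational map z = (1 + sT/2)/(1 − sT/2) sends points s = jω to points of magnitude exactly 1.

```python
# Sketch: under z = (1 + sT/2) / (1 - sT/2) (the first-order rational
# approximation of z = e^{sT}), every point on the j*omega axis of the
# s-plane lands on the unit circle of the z-plane.
T = 0.01

def s_to_z(s):
    return (1 + s * T / 2) / (1 - s * T / 2)

for omega in (-1000.0, -1.0, 0.0, 1.0, 1000.0):
    z = s_to_z(1j * omega)
    assert abs(abs(z) - 1.0) < 1e-12
print("the j*omega axis maps onto the unit circle")
```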
=== Starred transform ===
Given a one-sided Z-transform {\displaystyle X(z)} of a time-sampled function, the corresponding starred transform produces a Laplace transform and restores the dependence on {\displaystyle T} (the sampling parameter):
{\displaystyle {\bigg .}X^{*}(s)=X(z){\bigg |}_{\displaystyle z=e^{sT}}}
The inverse Laplace transform is a mathematical abstraction known as an impulse-sampled function.
== Linear constant-coefficient difference equation ==
The linear constant-coefficient difference (LCCD) equation is a representation for a linear system based on the autoregressive moving-average equation:
{\displaystyle \sum _{p=0}^{N}y[n-p]\alpha _{p}=\sum _{q=0}^{M}x[n-q]\beta _{q}.}
Both sides of the above equation can be divided by {\displaystyle \alpha _{0}} if it is not zero. By normalizing with {\displaystyle \alpha _{0}{=}1,} the LCCD equation can be written
{\displaystyle y[n]=\sum _{q=0}^{M}x[n-q]\beta _{q}-\sum _{p=1}^{N}y[n-p]\alpha _{p}.}
This form of the LCCD equation makes it explicit that the "current" output {\displaystyle y[n]} is a function of past outputs {\displaystyle y[n-p],} the current input {\displaystyle x[n],} and previous inputs {\displaystyle x[n-q].}
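The normalized recursion can be run directly, sample by sample. This sketch (our own; `lccd_filter` is an illustrative name) computes the impulse response of a hypothetical first-order system.

```python
# Sketch: run the LCCD recursion
#   y[n] = ( sum_q beta[q] x[n-q] - sum_{p>=1} alpha[p] y[n-p] ) / alpha[0]
# sample by sample (dividing by alpha[0], so the coefficient list need
# not be pre-normalized).

def lccd_filter(beta, alpha, x):
    y = []
    for n in range(len(x)):
        acc = sum(b * x[n - q] for q, b in enumerate(beta) if n - q >= 0)
        acc -= sum(a * y[n - p] for p, a in enumerate(alpha)
                   if p >= 1 and n - p >= 0)
        y.append(acc / alpha[0])
    return y

# Impulse response of the hypothetical system y[n] = x[n] + 0.5 y[n-1]:
# the sequence (0.5)**n.
print(lccd_filter([1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0]))
# [1.0, 0.5, 0.25, 0.125]
```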
=== Transfer function ===
Taking the Z-transform of the above equation (using linearity and time-shifting laws) yields:
{\displaystyle Y(z)\sum _{p=0}^{N}z^{-p}\alpha _{p}=X(z)\sum _{q=0}^{M}z^{-q}\beta _{q}}
where {\displaystyle X(z)} and {\displaystyle Y(z)} are the z-transforms of {\displaystyle x[n]} and {\displaystyle y[n],} respectively. (Notation conventions typically use capitalized letters to refer to the z-transform of a signal denoted by a corresponding lower-case letter, similar to the convention used for notating Laplace transforms.)
Rearranging results in the system's transfer function:
{\displaystyle H(z)={\frac {Y(z)}{X(z)}}={\frac {\sum _{q=0}^{M}z^{-q}\beta _{q}}{\sum _{p=0}^{N}z^{-p}\alpha _{p}}}={\frac {\beta _{0}+z^{-1}\beta _{1}+z^{-2}\beta _{2}+\cdots +z^{-M}\beta _{M}}{\alpha _{0}+z^{-1}\alpha _{1}+z^{-2}\alpha _{2}+\cdots +z^{-N}\alpha _{N}}}.}
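Evaluating H(z) at a point is a direct transcription of this formula. The sketch below is our own, using a hypothetical first-order system for illustration.

```python
# Sketch: evaluate the transfer function H(z) from its coefficient
# lists (beta = numerator/feedforward, alpha = denominator/feedback).

def H(z, beta, alpha):
    num = sum(b * z ** -q for q, b in enumerate(beta))
    den = sum(a * z ** -p for p, a in enumerate(alpha))
    return num / den

# Hypothetical system y[n] = x[n] + 0.5 y[n-1]:
# beta = [1], alpha = [1, -0.5], so H(z) = 1 / (1 - 0.5 z^{-1}).
print(round(H(2.0, [1.0], [1.0, -0.5]), 6))   # 1/(1 - 0.25) = 1.333333
```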
=== Zeros and poles ===
From the fundamental theorem of algebra the numerator has {\displaystyle M} roots (corresponding to zeros of {\displaystyle H}) and the denominator has {\displaystyle N} roots (corresponding to poles). Rewriting the transfer function in terms of zeros and poles
{\displaystyle H(z)={\frac {(1-q_{1}z^{-1})(1-q_{2}z^{-1})\cdots (1-q_{M}z^{-1})}{(1-p_{1}z^{-1})(1-p_{2}z^{-1})\cdots (1-p_{N}z^{-1})}},}
where {\displaystyle q_{k}} is the {\displaystyle k^{\text{th}}} zero and {\displaystyle p_{k}} is the {\displaystyle k^{\text{th}}} pole. The zeros and poles are commonly complex and when plotted on the complex plane (z-plane) it is called the pole–zero plot.
In addition, there may also exist zeros and poles at {\displaystyle z{=}0} and {\displaystyle z{=}\infty .} If we take these poles and zeros as well as multiple-order zeros and poles into consideration, the number of zeros and poles are always equal.
By factoring the denominator, partial fraction decomposition can be used, which can then be transformed back to the time domain. Doing so would result in the impulse response and the linear constant coefficient difference equation of the system.
=== Output response ===
If such a system {\displaystyle H(z)} is driven by a signal {\displaystyle X(z)} then the output is {\displaystyle Y(z)=H(z)X(z).}
By performing partial fraction decomposition on {\displaystyle Y(z)} and then taking the inverse Z-transform the output {\displaystyle y[n]} can be found. In practice, it is often useful to fractionally decompose {\textstyle {\frac {Y(z)}{z}}} before multiplying that quantity by {\displaystyle z} to generate a form of {\displaystyle Y(z)} which has terms with easily computable inverse Z-transforms.
== See also ==
Advanced Z-transform
Bilinear transform
Difference equation (recurrence relation)
Discrete convolution
Discrete-time Fourier transform
Finite impulse response
Formal power series
Generating function
Generating function transformation
Laplace transform
Laurent series
Least-squares spectral analysis
Probability-generating function
Star transform
Zak transform
Zeta function regularization
== References ==
== Further reading ==
Refaat El Attar, Lecture notes on Z-Transform, Lulu Press, Morrisville NC, 2005. ISBN 1-4116-1979-X.
Ogata, Katsuhiko, Discrete Time Control Systems 2nd Ed, Prentice-Hall Inc, 1995, 1987. ISBN 0-13-034281-5.
Alan V. Oppenheim and Ronald W. Schafer (1999). Discrete-Time Signal Processing, 2nd Edition, Prentice Hall Signal Processing Series. ISBN 0-13-754920-2.
== External links ==
"Z-transform". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Merrikh-Bayat, Farshad (2014). "Two Methods for Numerical Inversion of the Z-Transform". arXiv:1409.1727 [math.NA].
Z-Transform table of some common Laplace transforms
Mathworld's entry on the Z-transform
Z-Transform threads in Comp.DSP
A graphic of the relationship between Laplace transform s-plane to Z-plane of the Z transform
A video-based explanation of the Z-Transform for engineers
What is the z-Transform? | Wikipedia/Z-transform |
Charles François (5 September 1922 – 31 July 2019) was a Belgian administrator, editor and scientist in the fields of cybernetics, systems theory and systems science, internationally known for his main work the International Encyclopedia of Systems and Cybernetics.
== Biography ==
Charles François was born in Belgium in 1922, and studied consular and commercial sciences at Brussels Free University.
After the Second World War he emigrated to the Belgian Congo, where he stayed from 1945 to 1960, at first as an administrative officer in government and later on creating and developing his own commercial business, also exercising journalism and the socio-political chronicle. Later, he moved to Argentina in 1963, and managed the commercial Office of the Belgian Embassy in Buenos Aires from 1966 to his retirement in 1987.
François inspired and founded the Group for the Study of Integrated Systems (GESI), the Argentine national division of the International Society for the Systems Sciences, in 1976, and was its honorary president. He was an honorary member of the International Federation for Systems Research and of the European Systems Society, and founding editor of the International Encyclopedia of Systems and Cybernetics, published in 1997 and, in a two-volume second edition, in 2004.
He encouraged the encyclopedia's continuing development through the Bertalanffy Center of Systems and Cybernetics. He was a member of the International Academy of Systems and Cybernetics Sciences, honorary president of the Latin American Association of Systemics, honorary professor of ITBA, and a visiting professor at various universities and educational institutions in Argentina, Mexico, Peru, Colombia, Venezuela and Brazil, where he encouraged the creation of many study groups. He served on systems-related boards and on the editorial boards of various journals on systems and cybernetics.
In 2007 he received the Norbert Wiener gold medal from the American Society for Cybernetics as a tribute for his work on cybernetics. He inspired the "Charles François Prize" at the International Academy of Systems and Cybernetics Sciences, intended to promote contributions and participation of young systems researchers at its international meetings. François died on July 31, 2019. In his honor, the Group for the Study of Integrated Systems (GESI) and collaborators published in 2020 Charles François: pionero y mentor de instituciones sistémicas y cibernéticas en América Latina; un homenaje colectivo (Charles François: pioneer and mentor of systemic and cybernetic institutions in Latin America; a collective tribute).
== Work ==
In 1952 François came in contact with cybernetics through Norbert Wiener's Cybernetics. In 1958 he joined the Society for General Systems Research, now the International Society for the Systems Sciences. From 1970 onward, François participated in numerous meetings of various systems and cybernetics societies.
François gave many courses and seminars on systemics and cybernetics in Argentina, and also in Peru at the IAS. The last edition of these is his Curso de Teoría General de Sistemas y Cibernética con representaciones gráficas (Course on General Systems Theory and Cybernetics with graphic representations), a CD-ROM with more than 280 drawings published by the Group for the Study of Integrated Systems in 2007, also available in English.
== Publications ==
Contributions in books:
1976, Cybernétique et Prospective, Namur: International Association of Cybernetics, 1976.
1978, Introducción a la Prospectiva, Buenos Aires: Pleamar, 1978.
1985, El uso de Modelos Sistémicos-Cibernéticos como metodología científica, (Systemic-Cybernetic Models used as scientific methodology)
1986, Enfoques Sistémicos en el Estudio de las Sociedades (Systemic Approaches to the Study of Societies).
1992, Diccionario de Teoría General de Sistemas y Cibernética, Buenos Aires: GESI. The first work of its kind in Spanish (475 terms).
1997, International Encyclopedia of Systems and Cybernetics, edited by Charles François, München: K. G. Saur. The Academic board of this Encyclopedia includes members such as: John N. Warfield, Robert Trappl, Ranulph Glanville, Anthony Judge, Markus Schwaninger, Heiner Benking, Matjaz Mulej, and Gerhard Chroust.
2004, 2nd edition of the International Encyclopedia of Systems and Cybernetics, in two volumes.
Selected articles and papers:
1982, "A systemic study of socio-historical systems", paper 1982.
1999, "Systemics and Cybernetics in a Historical Perspective".
2000, Hidden long term systemic constraints in present-day socio-economical global mutation of mankind, paper.
2002, Foreign Debt mechanism: The "Cow in the corral" paper.
2004, The need for an integrated systemic-cybernetic language for concepts and models in complex and vague subject areas:... lecture at the presentation of the 2nd volume of the second edition of the International Encyclopedia of Systems and Cybernetics, Humboldt University, Saur Library, Berlin, 2004.
2012, Charles François (* 5 September 1922): 90 years of life in 9 worlds, IFSR Newsletter 2012, Vol. 29, No. 1, September.
== References ==
== External links ==
Web archive of Charles François home page at the International Federation for Systems Research with multiple links.
Charles Francois, A Tribute by Silvia Zweifel, short ISSS video at vimeo.com, 2019. Charles François: pionero y mentor de instituciones sistémicas y cibernéticas en América Latina; un homenaje colectivo (a collective tribute), GESI, 2020.
https://web.archive.org/web/20070315103413/http://wwwu.uni-klu.ac.at/gossimit/ifsr/francois/encyclopedia.htm | Wikipedia/Charles_François_(systems_scientist) |
A plant in control theory is the combination of process and actuator. A plant is often represented by a transfer function (commonly in the s-domain), which indicates the relation between an input signal and the output signal of a system without feedback, and is commonly determined by the physical properties of the system. An example would be an actuator, whose transfer function relates the actuator's input to its physical displacement. In a system with feedback, the plant still has the same transfer function, but a control unit and a feedback loop (with their respective transfer functions) are added to the system.
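As an illustrative sketch (the first-order form, time constant, and function name are assumed for this example, not taken from any particular source), a plant with transfer function G(s) = 1/(τs + 1) corresponds in the time domain to τy′ + y = u, which can be simulated directly:

```python
def step_response(tau, dt=0.001, t_end=5.0):
    """Forward-Euler simulation of a first-order plant G(s) = 1/(tau*s + 1),
    i.e. tau*y' + y = u, driven by a unit step input u = 1."""
    y, out = 0.0, []
    for _ in range(int(t_end / dt)):
        y += dt * (1.0 - y) / tau  # y' = (u - y) / tau
        out.append(y)
    return out

resp = step_response(tau=1.0)
```

The open-loop output approaches the steady-state value 1; a feedback controller would be wrapped around this same transfer behavior.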
== References == | Wikipedia/Plant_(control_theory) |
The mass-spring-damper model consists of discrete mass nodes distributed throughout an object and interconnected via a network of springs and dampers.
This form of model is also well-suited for modelling objects with complex material behavior such as those with nonlinearity or viscoelasticity.
As well as engineering simulation, these systems have applications in computer graphics and computer animation.
== Derivation (Single Mass) ==
Deriving the equations of motion for this model is usually done by summing the forces on the mass (including any applied external forces {\displaystyle F_{\text{external}}}):
{\displaystyle \Sigma F=-kx-c{\dot {x}}+F_{\text{external}}=m{\ddot {x}}}
By rearranging this equation, we can derive the standard form:
{\displaystyle {\ddot {x}}+2\zeta \omega _{n}{\dot {x}}+\omega _{n}^{2}x=u}
where
{\displaystyle \omega _{n}={\sqrt {\frac {k}{m}}};\quad \zeta ={\frac {c}{2m\omega _{n}}};\quad u={\frac {F_{\text{external}}}{m}}}
{\displaystyle \omega _{n}} is the undamped natural frequency and {\displaystyle \zeta } is the damping ratio. The homogeneous equation for the mass-spring system is:
{\displaystyle {\ddot {x}}+2\zeta \omega _{n}{\dot {x}}+\omega _{n}^{2}x=0}
This has the solution:
{\displaystyle x=Ae^{-\omega _{n}t\left(\zeta +{\sqrt {\zeta ^{2}-1}}\right)}+Be^{-\omega _{n}t\left(\zeta -{\sqrt {\zeta ^{2}-1}}\right)}}
If {\displaystyle \zeta <1}, then {\displaystyle \zeta ^{2}-1} is negative, meaning the square root is imaginary and the solution therefore has an oscillatory component.
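The derivation above can be checked numerically; in this sketch the parameter values (m, k, c) are chosen arbitrarily so that ζ < 1, and a semi-implicit Euler integrator stands in for the analytic solution:

```python
import math

def simulate(m, k, c, x0=1.0, dt=1e-4, t_end=10.0):
    """Semi-implicit Euler integration of m*x'' = -k*x - c*x' (no external force)."""
    x, v, xs = x0, 0.0, []
    for _ in range(int(t_end / dt)):
        v += dt * (-k * x - c * v) / m
        x += dt * v
        xs.append(x)
    return xs

m, k, c = 1.0, 4.0, 0.4
omega_n = math.sqrt(k / m)    # undamped natural frequency: 2 rad/s
zeta = c / (2 * m * omega_n)  # damping ratio: 0.1, so underdamped
xs = simulate(m, k, c)
```

With ζ = 0.1 the displacement oscillates through zero while its envelope decays, matching the underdamped case of the solution above.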
== See also ==
Numerical methods
Soft body dynamics#Spring/mass models
Finite element analysis
== References == | Wikipedia/Mass-spring-damper_model |
In mathematics, a linear differential equation is a differential equation that is linear in the unknown function and its derivatives, so it can be written in the form
{\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}=b(x)}
where a0(x), ..., an(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y′, ..., y(n) are the successive derivatives of an unknown function y of the variable x.
Such an equation is an ordinary differential equation (ODE). A linear differential equation may also be a linear partial differential equation (PDE), if the unknown function depends on several variables, and the derivatives that appear in the equation are partial derivatives.
== Types of solution ==
A linear differential equation or a system of linear equations such that the associated homogeneous equations have constant coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is also true for a linear equation of order one, with non-constant coefficients. An equation of order two or higher with non-constant coefficients cannot, in general, be solved by quadrature. For order two, Kovacic's algorithm allows deciding whether there are solutions in terms of integrals, and computing them if any.
The solutions of homogeneous linear differential equations with polynomial coefficients are called holonomic functions. This class of functions is stable under sums, products, differentiation, integration, and contains many usual functions and special functions such as exponential function, logarithm, sine, cosine, inverse trigonometric functions, error function, Bessel functions and hypergeometric functions. Their representation by the defining differential equation and initial conditions allows making algorithmic (on these functions) most operations of calculus, such as computation of antiderivatives, limits, asymptotic expansion, and numerical evaluation to any precision, with a certified error bound.
== Basic terminology ==
The highest order of derivation that appears in a (linear) differential equation is the order of the equation. The term b(x), which does not depend on the unknown function and its derivatives, is sometimes called the constant term of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be homogeneous, as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the associated homogeneous equation. A differential equation has constant coefficients if only constant functions appear as coefficients in the associated homogeneous equation.
A solution of a differential equation is a function that satisfies the equation.
The solutions of a homogeneous linear differential equation form a vector space. In the ordinary case, this vector space has a finite dimension, equal to the order of the equation. All solutions of a linear differential equation are found by adding to a particular solution any solution of the associated homogeneous equation.
== Linear differential operator ==
A basic differential operator of order i is a mapping that maps any differentiable function to its ith derivative, or, in the case of several variables, to one of its partial derivatives of order i. It is commonly denoted
{\displaystyle {\frac {d^{i}}{dx^{i}}}}
in the case of univariate functions, and
{\displaystyle {\frac {\partial ^{i_{1}+\cdots +i_{n}}}{\partial x_{1}^{i_{1}}\cdots \partial x_{n}^{i_{n}}}}}
in the case of functions of n variables. The basic differential operators include the derivative of order 0, which is the identity mapping.
A linear differential operator (abbreviated, in this article, as linear operator or, simply, operator) is a linear combination of basic differential operators, with differentiable functions as coefficients. In the univariate case, a linear operator has thus the form
{\displaystyle a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}},}
where a0(x), ..., an(x) are differentiable functions, and the nonnegative integer n is the order of the operator (if an(x) is not the zero function).
Let L be a linear differential operator. The application of L to a function f is usually denoted Lf or Lf(x), if one needs to specify the variable (this must not be confused with a multiplication). A linear differential operator is a linear operator, since it maps sums to sums and the product by a scalar to the product by the same scalar.
As the sum of two linear operators is a linear operator, and the product (on the left) of a linear operator by a differentiable function is again a linear operator, the linear differential operators form a vector space over the real numbers or the complex numbers (depending on the nature of the functions that are considered). They also form a free module over the ring of differentiable functions.
The language of operators allows a compact writing for differential equations: if
{\displaystyle L=a_{0}(x)+a_{1}(x){\frac {d}{dx}}+\cdots +a_{n}(x){\frac {d^{n}}{dx^{n}}},}
is a linear differential operator, then the equation
{\displaystyle a_{0}(x)y+a_{1}(x)y'+a_{2}(x)y''+\cdots +a_{n}(x)y^{(n)}=b(x)}
may be rewritten
{\displaystyle Ly=b(x).}
There may be several variants to this notation; in particular the variable of differentiation may appear explicitly or not in y and the right-hand side of the equation, such as Ly(x) = b(x) or Ly = b.
The kernel of a linear differential operator is its kernel as a linear mapping, that is the vector space of the solutions of the (homogeneous) differential equation Ly = 0.
In the case of an ordinary differential operator of order n, Carathéodory's existence theorem implies that, under very mild conditions, the kernel of L is a vector space of dimension n, and that the solutions of the equation Ly(x) = b(x) have the form
{\displaystyle S_{0}(x)+c_{1}S_{1}(x)+\cdots +c_{n}S_{n}(x),}
where c1, ..., cn are arbitrary numbers. Typically, the hypotheses of Carathéodory's theorem are satisfied in an interval I, if the functions b, a0, ..., an are continuous in I, and there is a positive real number k such that |an(x)| > k for every x in I.
== Homogeneous equation with constant coefficients ==
A homogeneous linear differential equation has constant coefficients if it has the form
{\displaystyle a_{0}y+a_{1}y'+a_{2}y''+\cdots +a_{n}y^{(n)}=0}
where a0, ..., an are (real or complex) numbers. In other words, it has constant coefficients if it is defined by a linear operator with constant coefficients.
The study of these differential equations with constant coefficients dates back to Leonhard Euler, who introduced the exponential function ex, which is the unique solution of the equation f′ = f such that f(0) = 1. It follows that the nth derivative of ecx is cnecx, and this allows solving homogeneous linear differential equations rather easily.
Let
{\displaystyle a_{0}y+a_{1}y'+a_{2}y''+\cdots +a_{n}y^{(n)}=0}
be a homogeneous linear differential equation with constant coefficients (that is a0, ..., an are real or complex numbers).
Searching for solutions of this equation that have the form eαx is equivalent to searching for the constants α such that
{\displaystyle a_{0}e^{\alpha x}+a_{1}\alpha e^{\alpha x}+a_{2}\alpha ^{2}e^{\alpha x}+\cdots +a_{n}\alpha ^{n}e^{\alpha x}=0.}
Factoring out eαx (which is never zero) shows that α must be a root of the characteristic polynomial
{\displaystyle a_{0}+a_{1}t+a_{2}t^{2}+\cdots +a_{n}t^{n}}
of the differential equation, which is the left-hand side of the characteristic equation
{\displaystyle a_{0}+a_{1}t+a_{2}t^{2}+\cdots +a_{n}t^{n}=0.}
When these roots are all distinct, one has n distinct solutions that are not necessarily real, even if the coefficients of the equation are real. These solutions can be shown to be linearly independent, by considering the Vandermonde determinant of the values of these solutions at x = 0, ..., n – 1. Together they form a basis of the vector space of solutions of the differential equation (that is, the kernel of the differential operator).
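As a numerical sketch of this recipe (the Durand-Kerner root-finder below is a generic algorithm supplied for the example, not part of the theory above), one can compute the roots of the characteristic polynomial and hence the exponents of the solution basis:

```python
def poly_roots(coeffs, iters=200):
    """All complex roots of a0 + a1*t + ... + an*t^n (coefficients listed from
    degree 0 upward), by Durand-Kerner fixed-point iteration."""
    n = len(coeffs) - 1
    p = lambda t: sum(c * t**k for k, c in enumerate(coeffs))
    roots = [complex(0.4, 0.9) ** k for k in range(n)]  # distinct starting guesses
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = coeffs[-1]
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new.append(r - p(r) / denom)
        roots = new
    return roots

# y'' + y = 0 has characteristic polynomial 1 + 0*t + 1*t^2, with roots +/- i,
# giving the solution basis e^{ix}, e^{-ix}.
rts = poly_roots([1, 0, 1])
```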
In the case where the characteristic polynomial has only simple roots, the preceding provides a complete basis of the solutions vector space. In the case of multiple roots, more linearly independent solutions are needed for having a basis. These have the form
{\displaystyle x^{k}e^{\alpha x},}
where k is a nonnegative integer, α is a root of the characteristic polynomial of multiplicity m, and k < m. For proving that these functions are solutions, one may remark that if α is a root of the characteristic polynomial of multiplicity m, the characteristic polynomial may be factored as P(t)(t − α)m. Thus, applying the differential operator of the equation is equivalent to applying first m times the operator
{\textstyle {\frac {d}{dx}}-\alpha }, and then the operator that has P as characteristic polynomial. By the exponential shift theorem,
{\displaystyle \left({\frac {d}{dx}}-\alpha \right)\left(x^{k}e^{\alpha x}\right)=kx^{k-1}e^{\alpha x},}
and thus one gets zero after k + 1 applications of {\textstyle {\frac {d}{dx}}-\alpha }.
As, by the fundamental theorem of algebra, the sum of the multiplicities of the roots of a polynomial equals the degree of the polynomial, the number of above solutions equals the order of the differential equation, and these solutions form a basis of the vector space of the solutions.
In the common case where the coefficients of the equation are real, it is generally more convenient to have a basis of the solutions consisting of real-valued functions. Such a basis may be obtained from the preceding basis by remarking that, if a + ib is a root of the characteristic polynomial, then a – ib is also a root, of the same multiplicity. Thus a real basis is obtained by using Euler's formula, and replacing
{\displaystyle x^{k}e^{(a+ib)x}} and {\displaystyle x^{k}e^{(a-ib)x}} by {\displaystyle x^{k}e^{ax}\cos(bx)} and {\displaystyle x^{k}e^{ax}\sin(bx)}.
=== Second-order case ===
A homogeneous linear differential equation of the second order may be written
{\displaystyle y''+ay'+by=0,}
and its characteristic polynomial is
{\displaystyle r^{2}+ar+b.}
If a and b are real, there are three cases for the solutions, depending on the discriminant D = a2 − 4b. In all three cases, the general solution depends on two arbitrary constants c1 and c2.
If D > 0, the characteristic polynomial has two distinct real roots α, and β. In this case, the general solution is
{\displaystyle c_{1}e^{\alpha x}+c_{2}e^{\beta x}.}
If D = 0, the characteristic polynomial has a double root −a/2, and the general solution is
{\displaystyle (c_{1}+c_{2}x)e^{-ax/2}.}
If D < 0, the characteristic polynomial has two complex conjugate roots α ± βi, and the general solution is
{\displaystyle c_{1}e^{(\alpha +\beta i)x}+c_{2}e^{(\alpha -\beta i)x},}
which may be rewritten in real terms, using Euler's formula as
{\displaystyle e^{\alpha x}(c_{1}\cos(\beta x)+c_{2}\sin(\beta x)).}
Finding the solution y(x) satisfying y(0) = d1 and y′(0) = d2, one equates the values of the above general solution at 0 and its derivative there to d1 and d2, respectively. This results in a system of two linear equations in the two unknowns c1 and c2. Solving this system gives the solution for a so-called Cauchy problem, in which the values at 0 for the solution of the differential equation and its derivative are specified.
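The three cases can be collected into a small solver; this is an illustrative sketch (the function name and structure are assumed), returning the solution of the Cauchy problem y″ + ay′ + by = 0, y(0) = d1, y′(0) = d2:

```python
import math

def solve_cauchy(a, b, d1, d2):
    """Solution y(x) of y'' + a*y' + b*y = 0 with y(0) = d1, y'(0) = d2 (a, b real)."""
    D = a * a - 4 * b
    if D > 0:  # two distinct real roots alpha, beta
        alpha = (-a + math.sqrt(D)) / 2
        beta = (-a - math.sqrt(D)) / 2
        c2 = (d2 - alpha * d1) / (beta - alpha)
        c1 = d1 - c2
        return lambda x: c1 * math.exp(alpha * x) + c2 * math.exp(beta * x)
    if D == 0:  # double root -a/2
        r = -a / 2
        return lambda x: (d1 + (d2 - r * d1) * x) * math.exp(r * x)
    alpha, beta = -a / 2, math.sqrt(-D) / 2  # complex roots alpha +/- beta*i
    return lambda x: math.exp(alpha * x) * (
        d1 * math.cos(beta * x) + (d2 - alpha * d1) / beta * math.sin(beta * x))

cosine = solve_cauchy(0.0, 1.0, 1.0, 0.0)  # y'' + y = 0 with these data gives cos(x)
```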
== Non-homogeneous equation with constant coefficients ==
A non-homogeneous equation of order n with constant coefficients may be written
{\displaystyle y^{(n)}(x)+a_{1}y^{(n-1)}(x)+\cdots +a_{n-1}y'(x)+a_{n}y(x)=f(x),}
where a1, ..., an are real or complex numbers, f is a given function of x, and y is the unknown function (for the sake of simplicity, "(x)" will be omitted in the following).
There are several methods for solving such an equation. The best method depends on the nature of the function f that makes the equation non-homogeneous. If f is a linear combination of exponential and sinusoidal functions, then the exponential response formula may be used. If, more generally, f is a linear combination of functions of the form xneax, xn cos(ax), and xn sin(ax), where n is a nonnegative integer, and a a constant (which need not be the same in each term), then the method of undetermined coefficients may be used. Still more generally, the annihilator method applies when f itself satisfies a homogeneous linear differential equation, typically when f is a holonomic function.
The most general method is the variation of constants, which is presented here.
The general solution of the associated homogeneous equation
{\displaystyle y^{(n)}+a_{1}y^{(n-1)}+\cdots +a_{n-1}y'+a_{n}y=0}
is
{\displaystyle y=u_{1}y_{1}+\cdots +u_{n}y_{n},}
where (y1, ..., yn) is a basis of the vector space of the solutions and u1, ..., un are arbitrary constants. The method of variation of constants takes its name from the following idea. Instead of considering u1, ..., un as constants, they can be considered as unknown functions that have to be determined for making y a solution of the non-homogeneous equation. For this purpose, one adds the constraints
{\displaystyle {\begin{aligned}0&=u'_{1}y_{1}+u'_{2}y_{2}+\cdots +u'_{n}y_{n}\\0&=u'_{1}y'_{1}+u'_{2}y'_{2}+\cdots +u'_{n}y'_{n}\\&\;\;\vdots \\0&=u'_{1}y_{1}^{(n-2)}+u'_{2}y_{2}^{(n-2)}+\cdots +u'_{n}y_{n}^{(n-2)},\end{aligned}}}
which imply (by product rule and induction)
{\displaystyle y^{(i)}=u_{1}y_{1}^{(i)}+\cdots +u_{n}y_{n}^{(i)}}
for i = 1, ..., n – 1, and
{\displaystyle y^{(n)}=u_{1}y_{1}^{(n)}+\cdots +u_{n}y_{n}^{(n)}+u'_{1}y_{1}^{(n-1)}+u'_{2}y_{2}^{(n-1)}+\cdots +u'_{n}y_{n}^{(n-1)}.}
Replacing in the original equation y and its derivatives by these expressions, and using the fact that y1, ..., yn are solutions of the original homogeneous equation, one gets
{\displaystyle f=u'_{1}y_{1}^{(n-1)}+\cdots +u'_{n}y_{n}^{(n-1)}.}
This equation and the above ones with 0 as left-hand side form a system of n linear equations in u′1, ..., u′n whose coefficients are known functions (f, the yi, and their derivatives). This system can be solved by any method of linear algebra. The computation of antiderivatives gives u1, ..., un, and then y = u1y1 + ⋯ + unyn.
As antiderivatives are defined up to the addition of a constant, one finds again that the general solution of the non-homogeneous equation is the sum of an arbitrary solution and the general solution of the associated homogeneous equation.
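As an illustrative numerical sketch of the method, specialized to order two (the test equation y″ + y = 1 and the midpoint-rule quadrature are chosen for simplicity), the constraint system reduces to two equations for u′1 and u′2, which are solved pointwise and then integrated:

```python
import math

def particular_solution(f, y1, y2, dy1, dy2, x, n=4000):
    """Variation of constants for y'' + y = f(x): solve the linear system
    u1'*y1 + u2'*y2 = 0,  u1'*y1' + u2'*y2' = f  for u1', u2', then
    integrate both from 0 to x by the midpoint rule."""
    h = x / n
    u1 = u2 = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        W = y1(t) * dy2(t) - dy1(t) * y2(t)  # Wronskian of the basis
        u1 += -y2(t) * f(t) / W * h
        u2 += y1(t) * f(t) / W * h
    return u1 * y1(x) + u2 * y2(x)

# y'' + y = 1, homogeneous basis y1 = cos, y2 = sin; taking the antiderivatives
# that vanish at 0 yields the particular solution y_p(x) = 1 - cos(x).
yp = particular_solution(lambda t: 1.0, math.cos, math.sin,
                         lambda t: -math.sin(t), math.cos, 2.0)
```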
== First-order equation with variable coefficients ==
The general form of a linear ordinary differential equation of order 1, after dividing out the coefficient of y′(x), is:
{\displaystyle y'(x)=f(x)y(x)+g(x).}
If the equation is homogeneous, i.e. g(x) = 0, one may rewrite and integrate:
{\displaystyle {\frac {y'}{y}}=f,\qquad \log y=k+F,}
where k is an arbitrary constant of integration and
{\displaystyle F=\textstyle \int f\,dx}
is any antiderivative of f. Thus, the general solution of the homogeneous equation is
{\displaystyle y=ce^{F},}
where c = ek is an arbitrary constant.
For the general non-homogeneous equation, it is useful to multiply both sides of the equation by the reciprocal e−F of a solution of the homogeneous equation. This gives
{\displaystyle y'e^{-F}-yfe^{-F}=ge^{-F}.}
As
{\displaystyle -fe^{-F}={\tfrac {d}{dx}}\left(e^{-F}\right),}
the product rule allows rewriting the equation as
{\displaystyle {\frac {d}{dx}}\left(ye^{-F}\right)=ge^{-F}.}
Thus, the general solution is
{\displaystyle y=ce^{F}+e^{F}\int ge^{-F}dx,}
where c is a constant of integration, and F is any antiderivative of f (changing the antiderivative amounts to changing the constant of integration).
=== Example ===
Solving the equation
{\displaystyle y'(x)+{\frac {y(x)}{x}}=3x.}
The associated homogeneous equation
{\displaystyle y'(x)+{\frac {y(x)}{x}}=0}
gives
{\displaystyle {\frac {y'}{y}}=-{\frac {1}{x}},}
that is
{\displaystyle y={\frac {c}{x}}.}
Dividing the original equation by one of these solutions gives
{\displaystyle xy'+y=3x^{2}.}
That is
{\displaystyle (xy)'=3x^{2},}
{\displaystyle xy=x^{3}+c,}
and
{\displaystyle y(x)=x^{2}+c/x.}
For the initial condition
{\displaystyle y(1)=\alpha ,}
one gets the particular solution
{\displaystyle y(x)=x^{2}+{\frac {\alpha -1}{x}}.}
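The claimed solution can be checked directly; the value of α and the sample points below are arbitrary:

```python
alpha = 2.5
y = lambda x: x**2 + (alpha - 1) / x
dy = lambda x: 2 * x - (alpha - 1) / x**2  # exact derivative of y

check_initial = y(1.0)  # should equal alpha
# residual of y' + y/x - 3x, which should vanish identically
residuals = [dy(x) + y(x) / x - 3 * x for x in (0.5, 1.0, 2.0, 7.0)]
```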
== System of linear differential equations ==
A system of linear differential equations consists of several linear differential equations that involve several unknown functions. In general one restricts the study to systems such that the number of unknown functions equals the number of equations.
An arbitrary linear ordinary differential equation and a system of such equations can be converted into a first order system of linear differential equations by adding variables for all but the highest order derivatives. That is, if
{\displaystyle y',y'',\ldots ,y^{(k)}} appear in an equation, one may replace them by new unknown functions {\displaystyle y_{1},\ldots ,y_{k}} that must satisfy the equations {\displaystyle y'=y_{1}} and {\displaystyle y_{i}'=y_{i+1},}
for i = 1, ..., k – 1.
A linear system of the first order, which has n unknown functions and n differential equations, may normally be solved for the derivatives of the unknown functions. If this is not the case, it is a differential-algebraic system, and this is a different theory. Therefore, the systems that are considered here have the form
{\displaystyle {\begin{aligned}y_{1}'(x)&=b_{1}(x)+a_{1,1}(x)y_{1}+\cdots +a_{1,n}(x)y_{n}\\[1ex]&\;\;\vdots \\[1ex]y_{n}'(x)&=b_{n}(x)+a_{n,1}(x)y_{1}+\cdots +a_{n,n}(x)y_{n},\end{aligned}}}
where {\displaystyle b_{1},\ldots ,b_{n}} and the {\displaystyle a_{i,j}} are functions of x. In matrix notation, this system may be written (omitting "(x)")
{\displaystyle \mathbf {y} '=A\mathbf {y} +\mathbf {b} .}
The solving method is similar to that of a single first-order linear differential equation, but with complications stemming from noncommutativity of matrix multiplication.
Let
{\displaystyle \mathbf {u} '=A\mathbf {u} }
be the homogeneous equation associated to the above matrix equation.
Its solutions form a vector space of dimension n, and are therefore the columns of a square matrix of functions
{\displaystyle U(x)}, whose determinant is not the zero function. If n = 1, or A is a matrix of constants, or, more generally, if A commutes with its antiderivative {\displaystyle \textstyle B=\int A\,dx}, then one may choose U equal to the exponential of B. In fact, in these cases, one has
{\displaystyle {\frac {d}{dx}}\exp(B)=A\exp(B).}
In the general case there is no closed-form solution for the homogeneous equation, and one has to use either a numerical method, or an approximation method such as Magnus expansion.
Knowing the matrix U, the general solution of the non-homogeneous equation is
{\displaystyle \mathbf {y} (x)=U(x)\mathbf {y_{0}} +U(x)\int U^{-1}(x)\mathbf {b} (x)\,dx,}
where the column matrix {\displaystyle \mathbf {y_{0}} } is an arbitrary constant of integration.
If initial conditions are given as
{\displaystyle \mathbf {y} (x_{0})=\mathbf {y} _{0},}
the solution that satisfies these initial conditions is
{\displaystyle \mathbf {y} (x)=U(x)U^{-1}(x_{0})\mathbf {y_{0}} +U(x)\int _{x_{0}}^{x}U^{-1}(t)\mathbf {b} (t)\,dt.}
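For constant A, the fundamental matrix U(x) = exp(Ax) can be sketched with a truncated power series (restricted to 2×2 matrices here for brevity; the rotation generator is a standard test case, not drawn from the text):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """exp(A) for a 2x2 matrix via the truncated power series sum of A^n / n!."""
    E = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at the identity
    P = [[1.0, 0.0], [0.0, 1.0]]  # running power A^n
    fact = 1.0
    for n in range(1, terms):
        P = mat_mul(P, A)
        fact *= n
        E = [[E[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return E

# y' = A y with constant A = [[0, 1], [-1, 0]] has fundamental matrix
# U(x) = exp(A x), the rotation matrix [[cos x, sin x], [-sin x, cos x]].
x = 1.2
U = mat_exp([[0.0, x], [-x, 0.0]])
```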
== Higher order with variable coefficients ==
A linear ordinary differential equation of order one with variable coefficients may be solved by quadrature, which means that the solutions may be expressed in terms of integrals. This is not the case for order at least two. This is the main result of Picard–Vessiot theory, which was initiated by Émile Picard and Ernest Vessiot, and whose recent developments are called differential Galois theory.
The impossibility of solving by quadrature can be compared with the Abel–Ruffini theorem, which states that an algebraic equation of degree at least five cannot, in general, be solved by radicals. This analogy extends to the proof methods and motivates the denomination of differential Galois theory.
Similarly to the algebraic case, the theory allows deciding which equations may be solved by quadrature, and if possible solving them. However, for both theories, the necessary computations are extremely difficult, even with the most powerful computers.
Nevertheless, the case of order two with rational coefficients has been completely solved by Kovacic's algorithm.
=== Cauchy–Euler equation ===
Cauchy–Euler equations are examples of equations of any order, with variable coefficients, that can be solved explicitly. These are the equations of the form
{\displaystyle x^{n}y^{(n)}(x)+a_{n-1}x^{n-1}y^{(n-1)}(x)+\cdots +a_{0}y(x)=0,}
where {\displaystyle a_{0},\ldots ,a_{n-1}} are constant coefficients.
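For the second-order case x²y″ + axy′ + by = 0, substituting the trial solution y = x^r gives the indicial equation r(r − 1) + ar + b = 0; a short sketch (the sample coefficients are chosen for illustration):

```python
import cmath

def cauchy_euler_exponents(a, b):
    """Roots r of the indicial equation r*(r - 1) + a*r + b = 0, so that
    y = x**r solves x^2 y'' + a x y' + b y = 0."""
    d = cmath.sqrt((a - 1) ** 2 - 4 * b)
    return ((1 - a) + d) / 2, ((1 - a) - d) / 2

# x^2 y'' - 2 y = 0: the indicial equation r*(r - 1) - 2 = 0 has roots 2 and -1,
# so the general solution is c1*x**2 + c2/x.
r1, r2 = cauchy_euler_exponents(0.0, -2.0)
```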
== Holonomic functions ==
A holonomic function, also called a D-finite function, is a function that is a solution of a homogeneous linear differential equation with polynomial coefficients.
Most functions that are commonly considered in mathematics are holonomic or quotients of holonomic functions. In fact, holonomic functions include polynomials, algebraic functions, logarithm, exponential function, sine, cosine, hyperbolic sine, hyperbolic cosine, inverse trigonometric and inverse hyperbolic functions, and many special functions such as Bessel functions and hypergeometric functions.
Holonomic functions have several closure properties; in particular, sums, products, derivatives and integrals of holonomic functions are holonomic. Moreover, these closure properties are effective, in the sense that there are algorithms for computing the differential equation of the result of any of these operations, knowing the differential equations of the inputs.
The usefulness of the concept of holonomic functions results from Zeilberger's theorem, which follows.
A holonomic sequence is a sequence of numbers that may be generated by a recurrence relation with polynomial coefficients. The coefficients of the Taylor series at a point of a holonomic function form a holonomic sequence. Conversely, if the sequence of the coefficients of a power series is holonomic, then the series defines a holonomic function (even if the radius of convergence is zero). There are efficient algorithms for both conversions, that is for computing the recurrence relation from the differential equation, and vice versa.
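A minimal sketch of this correspondence: the equation y′ = y with y(0) = 1 forces the Taylor coefficients through the recurrence (k + 1)c_{k+1} = c_k, and summing the resulting series recovers the exponential function:

```python
import math

def taylor_coeffs_exp(n):
    """Taylor coefficients of the solution of y' = y, y(0) = 1, generated from
    the recurrence (k + 1) * c_{k+1} = c_k implied by the differential equation."""
    c = [1.0]
    for k in range(n):
        c.append(c[-1] / (k + 1))
    return c

def eval_series(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

approx = eval_series(taylor_coeffs_exp(25), 1.0)  # partial Taylor sum of e^x at x = 1
```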
It follows that, if one represents (in a computer) holonomic functions by their defining differential equations and initial conditions, most calculus operations can be done automatically on these functions, such as derivatives, indefinite and definite integrals, fast computation of Taylor series (thanks to the recurrence relation on its coefficients), evaluation to a high precision with a certified bound on the approximation error, limits, localization of singularities, asymptotic behavior at infinity and near singularities, proofs of identities, etc.
== See also ==
Continuous-repayment mortgage
Fourier transform
Laplace transform
Linear difference equation
Variation of parameters
== References ==
Birkhoff, Garrett & Rota, Gian-Carlo (1978), Ordinary Differential Equations, New York: John Wiley and Sons, Inc., ISBN 0-471-07411-X
Gershenfeld, Neil (1999), The Nature of Mathematical Modeling, Cambridge, UK.: Cambridge University Press, ISBN 978-0-521-57095-4
Robinson, James C. (2004), An Introduction to Ordinary Differential Equations, Cambridge, UK.: Cambridge University Press, ISBN 0-521-82650-0
== External links ==
http://eqworld.ipmnet.ru/en/solutions/ode.htm
Dynamic Dictionary of Mathematical Function. Automatic and interactive study of many holonomic functions. | Wikipedia/Linear_differential_equation |
In the subject area of control theory, an internal model is a process that simulates the response of the system in order to estimate the outcome of a system disturbance. The internal model principle was first articulated in 1976 by B. A. Francis and W. M. Wonham as an explicit formulation of the Conant and Ashby good regulator theorem. It stands in contrast to classical control, in that the classical feedback loop fails to explicitly model the controlled system (although the classical controller may contain an implicit model).
The internal model theory of motor control argues that the motor system is controlled by the constant interactions of the “plant” and the “controller.” The plant is the body part being controlled, while the internal model itself is considered part of the controller. Information from the controller, such as information from the central nervous system (CNS), feedback information, and the efference copy, is sent to the plant which moves accordingly.
Internal models can be controlled through either feed-forward or feedback control. Feed-forward control computes its input into a system using only the current state and its model of the system. It does not use feedback, so it cannot correct for errors in its control. In feedback control, some of the output of the system can be fed back into the system's input, and the system is then able to make adjustments or compensate for errors from its desired output. Two primary types of internal models have been proposed: forward models and inverse models. In simulations, models can be combined to solve more complex movement tasks.
== Forward models ==
In their simplest form, forward models take the input of a motor command to the “plant” and output a predicted position of the body.
The motor command input to the forward model can be an efference copy, as seen in Figure 1. The output from that forward model, the predicted position of the body, is then compared with the actual position of the body. The actual and predicted position of the body may differ due to noise introduced into the system by either internal (e.g. body sensors are not perfect, sensory noise) or external (e.g. unpredictable forces from outside the body) sources. If the actual and predicted body positions differ, the difference can be fed back as an input into the entire system again so that an adjusted set of motor commands can be formed to create a more accurate movement.
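A toy numerical sketch of this loop (the dynamics, gains, and noise level here are invented for illustration, not drawn from any experimental model): a forward model predicts the next position from the efference copy of the motor command, the noisy "plant" produces the actual position, and the difference between the two is the sensory prediction error available for correction.

```python
import random

def plant(position, command, noise_std=0.05):
    """The real 'plant': moves by the command plus unpredictable noise."""
    return position + command + random.gauss(0.0, noise_std)

def forward_model(position, command):
    """Internal prediction from the efference copy; assumes a noise-free plant."""
    return position + command

random.seed(0)
actual, target = 0.0, 1.0
errors = []
for step in range(30):
    command = 0.2 * (target - actual)           # simple corrective motor command
    predicted = forward_model(actual, command)  # prediction from efference copy
    actual = plant(actual, command)             # actual movement (noisy)
    errors.append(actual - predicted)           # sensory prediction error
```

Even though each individual prediction is wrong by the noise term, feeding the error back into the next command drives the actual position toward the target.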
== Inverse models ==
Inverse models use the desired and actual position of the body as inputs to estimate the necessary motor commands which would transform the current position into the desired one. For example, in an arm reaching task, the desired position (or a trajectory of consecutive positions) of the arm is input into the postulated inverse model, and the inverse model generates the motor commands needed to control the arm and bring it into this desired configuration (Figure 2). Inverse internal models are also in close connection with the uncontrolled manifold hypothesis (UCM), see also here.
== Combined forward and inverse models ==
Theoretical work has shown that in models of motor control, when inverse models are used in combination with a forward model, the efference copy of the motor command output from the inverse model can be used as an input to a forward model for further predictions. For example, if, in addition to reaching with the arm, the hand must be controlled to grab an object, an efference copy of the arm motor command can be input into a forward model to estimate the arm's predicted trajectory. With this information, the controller can then generate the appropriate motor command telling the hand to grab the object. It has been proposed that if they exist, this combination of inverse and forward models would allow the CNS to take a desired action (reach with the arm), accurately control the reach and then accurately control the hand to grip an object.
== Adaptive control theory ==
With the assumption that new models can be acquired and pre-existing models can be updated, the efference copy is important for the adaptive control of a movement task. Throughout the duration of a motor task, an efference copy is fed into a forward model known as a dynamics predictor whose output allows prediction of the motor output. When applying adaptive control theory techniques to motor control, efference copy is used in indirect control schemes as the input to the reference model.
== Scientists ==
A wide range of scientists contribute to progress on the internal model hypothesis. Michael I. Jordan, Emanuel Todorov and
Daniel Wolpert contributed significantly to the mathematical formalization. Sandro Mussa-Ivaldi, Mitsuo Kawato, Claude Ghez, Reza Shadmehr, Randy Flanagan and Konrad Kording contributed with numerous behavioral experiments. The DIVA model of speech production developed by Frank H. Guenther and colleagues uses combined forward and inverse models to produce auditory trajectories with simulated speech articulators. Two inverse internal models for the control of speech production were developed by Iaroslav Blagouchine & Eric Moreau. Both models combine optimum principles and the equilibrium-point hypothesis (motor commands λ are taken as coordinates of the internal space). The input motor command λ is found by minimizing the length of the path traveled in the internal space, either under an acoustical constraint alone (the first model), or under both acoustical and mechanical constraints (the second model). The acoustical constraint is related to the quality of the produced speech (measured in terms of formants), while the mechanical one is related to the stiffness of the tongue's body. The first model, in which the stiffness remains uncontrolled, is in agreement with the standard UCM hypothesis. In contrast, the second optimum internal model, in which the stiffness is prescribed, displays good variability of speech (at least within a reasonable range of stiffness) and is in agreement with more recent versions of the uncontrolled manifold hypothesis (UCM). There is also a rich clinical literature on internal models, including work from John Krakauer, Pietro Mazzoni, Maurice A. Smith, Kurt Thoroughman, Joern Diedrichsen, and Amy Bastian.
== See also ==
Repetitive control
Efference copy
== References ==
This is an alphabetical list of people who have made significant contributions in the fields of system analysis and control theory.
== Eminent researchers ==
The eminent researchers (born after 1920) include the winners of at least one award of the IEEE Control Systems Award, the Giorgio Quazza Medal, the Hendrik W. Bode Lecture Prize, the Richard E. Bellman Control Heritage Award, the Rufus Oldenburger Medal, or higher awards such as the IEEE Medal of Honor and the National Medal of Science. The earlier pioneers such as Nicolas Minorsky (1885–1970), Harry Nyquist (1889–1976), Harold Locke Hazen (1901–1980), Charles Stark Draper (1901–1987), Hendrik Wade Bode (1905–1982), Gordon S. Brown (1907–1996), John F. Coales (1907–1999), Rufus Oldenburger (1908–1969), John R. Ragazzini (1912–1988), Nathaniel B. Nichols (1914–1997), John Zaborszky (1914–2008) and Harold Chestnut (1917–2001) are not included.
== Eminent researchers of USSR (including Russian SFSR, Ukrainian SSR, Byelorussian SSR, etc. from 1922 to 1991) ==
== Other active researchers ==
== Historical figures in systems and control ==
These people have made outstanding historical contributions to systems and control.
== See also ==
List of engineers
List of systems engineers
List of systems scientists
== References ==
== External links ==
People in control, in: IEEE Control Systems Magazine, Volume 24, Issue 5, Oct. 2004 pp 12–15.
ISA, the International Society for Measurement and Control, homepage.
Cruise control (also known as speed control, cruise command, autocruise, or tempomat) is a system that automatically controls the speed of an automobile. The system is a servomechanism that takes over the car's throttle to maintain a steady speed set by the driver.
== History ==
Speed control existed in early automobiles such as the Wilson-Pilcher in the early 1900s, which had a lever on the steering column that could be used to set the speed to be maintained by the engine. In 1908, Peerless added a flyball governor that kept the engine at a speed set by an extra throttle lever on the steering wheel, advertising the system as being able to "maintain speed whether uphill or down."
A governor was used by James Watt and Matthew Boulton in 1788 to control steam engines, but the use of governors dates at least back to the 17th century. On an engine, the governor uses centrifugal force to adjust the throttle position to adapt the engine's speed to different loads (e.g., when going up a hill).
Modern cruise control (also known as a speedostat or tempomat) was invented in 1948 by the blind inventor and mechanical engineer Ralph Teetor. He came up with the idea due to being frustrated by his driver's habit of speeding up and slowing down as he talked.
A more significant factor in developing cruise control was the 35 mph (56 km/h) speed limit imposed in the United States during World War II to reduce gasoline use and tire wear. A mechanism controlled by the driver provided resistance to further pressure on the accelerator pedal when the vehicle reached the desired speed. Teetor's idea of a dashboard speed selector with a mechanism connected to the driveshaft and a device able to push against the gas pedal was patented in 1950. He added a speed lock capability that maintained the car's speed until the driver tapped the brake pedal or turned off the system.
A 1955 U.S. patent for a "constant speed regulator" was filed in 1950 by M-Sgt Frank J. Riley. He conceived the device while driving on the Pennsylvania Turnpike and installed his invention in his car in 1948.
Another inventor named Harold Exline, working independently of Riley, also invented a type of cruise control that he first installed on his car and friends' cars. Exline filed a U.S. patent for a "vacuum powered throttle control with electrically controlled air valve" in 1951, which was granted in 1956. Despite these patents, Riley, Exline, and subsequent patent holders were not able to collect royalties for any cruise control inventions.
The first car with Teetor's "speedostat" system was the 1958 Chrysler Imperial (called "auto-pilot"), using a speed control dial on the dashboard. This system calculated ground speed from the rotating speedometer cable and used a bi-directional screw-drive electric motor to vary the throttle position as needed. Cadillac soon renamed and marketed the device as "cruise control."
In 1965, American Motors Corporation (AMC) introduced a low-priced automatic speed control for its large-sized cars with automatic transmissions. The AMC "cruise command" unit was actuated through a push-button on the dashboard once the car's desired speed was reached. The throttle position was automatically adjusted by a vacuum control that opened and closed the throttle based on input from the speedometer cable rather than through an adjustable control on the dashboard. The unit would shut off anytime the brakes were applied.
Daniel Aaron Wisner invented an "automotive electronic cruise control" in 1968 as an engineer for RCA's Industrial and Automation Systems Division in Plymouth, Michigan. His invention is described in two patents filed that year (US patents 3570622 and 3511329), with the second introducing digital memory, and was the first electronic device that controlled a car.
Due to the 1973 oil crisis and rising fuel prices, the device became more popular in the U.S. "Cruise control can save gas by avoiding surges that expel fuel" while driving at steady speeds. In 1974, AMC, GM, and Chrysler priced the option at $60 to $70, while Ford charged $103.
In the late 1980s, an integrated circuit for Wisner's design for electronic cruise control was finally commercially developed by Motorola as the MC14460 Automotive Speed Control Processor in CMOS. The advantage of electronic speed control over its mechanical predecessor was that it could be integrated with electronic accident avoidance and engine management systems.
== Operation ==
The driver must manually bring the vehicle up to speed and use a button to set the cruise control to the current speed, except in the case of adaptive cruise control.
The cruise control takes its speed signal from a rotating driveshaft, speedometer cable, wheel speed sensor, the engine's RPM, or internal speed pulses produced electronically by the vehicle. Most systems do not allow the use of the cruise control below a certain speed - typically around 25 or 30 mph (40 or 48 km/h). The vehicle will maintain the desired speed by pulling the throttle cable with a solenoid, a vacuum-driven servomechanism, or by using the electronic systems built into the vehicle (fully electronic) if it uses a 'drive-by-wire' system.
All cruise control systems must have the capability to be turned off explicitly and automatically when the driver depresses the brake pedal and often also the clutch. Cruise control systems frequently include a memory feature to resume the set speed after braking and a coast feature to reduce the set speed without braking. When the cruise control is engaged, the throttle can still accelerate the car, but once the pedal is released, it will slow down the vehicle until it reaches the previously set speed.
On the latest vehicles fitted with electronic throttle control, cruise control can be integrated into the vehicle's engine management system. Modern "adaptive" systems include the ability to automatically reduce speed when the distance to a car in front, or the speed limit, decreases.
The cruise control systems of some vehicles incorporate a "speed limiter" function, which will not allow the vehicle to accelerate beyond a preset maximum; this can usually be overridden by fully depressing the accelerator pedal. Most systems will prevent the vehicle from increasing engine speed to accelerate beyond the chosen speed. However, they will not apply the brakes in the event of over-speeding downhill, nor stop the car from going faster than the selected speed even with the engine just idling.
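The limiter behavior described above can be sketched as a small function (the kick-down threshold and the 0–1 pedal scale are illustrative assumptions, not any manufacturer's values):

```python
def limited_throttle(pedal, speed, limit, kickdown=0.98):
    """Speed-limiter sketch: pass the driver's pedal input (0.0-1.0)
    through, but cut throttle at or above the preset limit unless the
    pedal is pressed (almost) fully down, which overrides the limiter.
    Note it only cuts throttle to zero; it never brakes, so the car can
    still exceed the limit downhill, as described above."""
    if pedal >= kickdown:          # full-pedal override
        return pedal
    return 0.0 if speed >= limit else pedal

limited_throttle(0.5, 100, 120)  # below limit -> 0.5 (pedal passed through)
limited_throttle(0.5, 125, 120)  # above limit, no override -> 0.0
limited_throttle(1.0, 125, 120)  # pedal floored -> override, 1.0
```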
Cruise control is less flexible on vehicles with a manual transmission because depressing the clutch pedal and shifting gears usually disengages the cruise control. The "resume" feature has to be used each time after selecting the new gear and releasing the clutch. Therefore, cruise control is most beneficial at motorway/highway speeds when top gear is used virtually all the time. The speed limiter function, however, does not have this problem.
== Advantages and disadvantages ==
Some advantages of cruise control include:
It is helpful for long drives along highways and sparsely populated roads, reducing driver fatigue and improving comfort by allowing the driver to change position more safely.
Some drivers use it to avoid speeding, particularly those who may subconsciously increase speed during a long highway journey.
Increased fuel efficiency
However, when misused, cruise control can lead to accidents due to several factors, such as:
hazardous weather conditions. The U.S. state of Michigan warns against using cruise control if the road has ice or snow, while the Canadian province of British Columbia recommends not using cruise control on wet roads. In some older cars, if they skid with cruise control enabled, the vehicle will keep accelerating, increasing the chance of losing control. If the vehicle is sliding on ice, the driver should not brake or accelerate, but just let the vehicle slow down on its own.
speeding around curves that require slowing down
rough or loose terrain that could interfere with the operation of the cruise control
Encourages drivers to pay less attention to driving, increasing the risk of an accident
Risk of sudden unintended acceleration (SUA) and possible accidents. Drivers with feet at rest lose spatial perception and may hit the accelerator instead of the brake pedal in a sudden emergency.
== Adaptive cruise control ==
Some modern vehicles have adaptive cruise control (ACC) systems, a general term meaning improved cruise control. Dynamic set speed systems use the GPS position of speed limit signs from a database. Many systems also incorporate cameras, lasers, and millimeter-wave radar equipment to determine how close a vehicle is to others or other objects on the roadway.
The technologies can be set to maintain a distance from vehicles in front of the car; the system will automatically slow down based on the vehicles in front or continue to keep the set speed. Some systems cannot detect completely stationary cars or pedestrians, so the driver must always pay attention. Automatic braking systems use either a single or a combination of sensors (radar, lidar, and camera) to allow the vehicle to keep pace with the car it is following, slow when closing in on the vehicle in front, and accelerate to the preset speed when traffic allows. Some systems also feature forward collision warning systems, which warn the driver if a vehicle in front—given the speed of both vehicles—gets too close within the preset headway or braking distance.
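One common way to formalize this behavior is a constant time-gap policy; the sketch below is a hedged illustration (the gains, the time gap, and the free-driving switching rule are assumptions, not a production algorithm):

```python
def acc_command(own_speed, lead_speed, gap, set_speed,
                time_gap=1.5, k_gap=0.2, k_rel=0.5, k_speed=0.4):
    """Return a commanded acceleration (m/s^2).
    Free driving: no relevant lead vehicle, so regulate toward the
    driver's set speed. Following: regulate the gap toward
    time_gap * own_speed and match the lead vehicle's speed."""
    desired_gap = time_gap * own_speed
    if gap > 2.0 * desired_gap:                 # road ahead effectively clear
        return k_speed * (set_speed - own_speed)
    return k_gap * (gap - desired_gap) + k_rel * (lead_speed - own_speed)

acc_command(30.0, 25.0, 30.0, 30.0)   # too close and closing -> negative (slow down)
acc_command(20.0, 0.0, 200.0, 30.0)   # road clear -> positive (accelerate to set speed)
```

The two terms in the following mode reflect the behavior described above: keep pace with the car in front (relative-speed term) while holding the preset headway (gap term).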
Vehicles with adaptive cruise control are considered a Level 1 autonomous car, as defined by SAE International.
== See also ==
Proportional–integral–derivative controller (PID), a fundamental control concept used in car cruise control
Lane centering
== References ==
== External links ==
Ulsoy, A. Galip; Peng, Huei; Çakmakci, Melih (2012). Automotive Control Systems. Cambridge University Press. pp. 213–224. ISBN 9781107010116.
Cruise control block diagram
Overview of intelligent vehicle safety systems
Intelligent Transport Systems
Preventive safety applications and technologies
Cruise Control as Auto-Pilot at Snopes.com (was: "Cruise [Un]Control: Driver sets the cruise control on his vehicle, then slips into the backseat for a nap")
A control loop is the fundamental building block of control systems in general and industrial control systems in particular. It consists of the process sensor, the controller function, and the final control element (FCE) which controls the process necessary to automatically adjust the value of a measured process variable (PV) to equal the value of a desired set-point (SP).
There are two common classes of control loop: open loop and closed loop.
In an open-loop control system, the control action from the controller is independent of the process variable. An example of this is a central heating boiler controlled only by a timer. The control action is the switching on or off of the boiler. The process variable is the building temperature. This controller operates the heating system for a constant time regardless of the temperature of the building.
In a closed-loop control system, the control action from the controller is dependent on the desired and actual process variable. In the case of the boiler analogy, this would utilize a thermostat to monitor the building temperature, and feed back a signal to ensure the controller output maintains the building temperature close to that set on the thermostat. A closed-loop controller has a feedback loop which ensures the controller exerts a control action to control a process variable at the same value as the setpoint. For this reason, closed-loop controllers are also called feedback controllers.
== Open-loop and closed-loop ==
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).
In open-loop control, the control action from the controller is independent of the "process output" (or "controlled process variable"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler. The controlled variable should be the building temperature, but it is not, because this is open-loop control of the boiler, which does not give closed-loop control of the temperature.
In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy, this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the "reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers.
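The boiler contrast above can be made concrete with a toy thermal model (all coefficients are invented for illustration): the timer runs the boiler for a fixed hour no matter what, while the thermostat switches it on and off around a setpoint.

```python
def step_temp(temp, boiler_on, outside=5.0):
    """One-minute update of a toy building: heat loss proportional to the
    indoor-outdoor difference, plus a fixed heat input when the boiler is on."""
    heating = 3.0 if boiler_on else 0.0
    return temp + 0.1 * (outside - temp) + heating

# Open loop: a timer runs the boiler for the first 60 minutes, then stops,
# regardless of the building temperature.
t_open = 15.0
for minute in range(120):
    t_open = step_temp(t_open, boiler_on=(minute < 60))

# Closed loop: a thermostat switches the boiler around a 20 degree setpoint.
t_closed = 15.0
for minute in range(120):
    t_closed = step_temp(t_closed, boiler_on=(t_closed < 20.0))
```

After two hours the timer-controlled building has overheated and then cooled back toward the outside temperature, while the thermostat holds the temperature near the setpoint, which is the essence of the feedback deviation-reduction described below.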
The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
Likewise; "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control."
=== Other examples ===
An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver. The controller is the cruise control, the plant is the car, and the system is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers.
A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an open-loop controller because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position.) As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road.
In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the feedback loop, the controller affects the system output, which in turn is measured and fed back to the controller.
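A minimal numerical sketch of this feedback loop (the car dynamics and controller gains are invented for illustration): a proportional–integral law computes the throttle from the speed error, and restores the set speed after a hill introduces a disturbance.

```python
def cruise(setpoint=25.0, kp=0.8, ki=0.3, dt=0.1, steps=600):
    """Simulate a toy car under PI speed control; a constant uphill
    'grade' disturbance is applied during the second half of the run."""
    speed, integral = 20.0, 0.0
    for k in range(steps):
        grade = -1.0 if k > steps // 2 else 0.0   # uphill decelerating force
        error = setpoint - speed
        integral += error * dt
        throttle = kp * error + ki * integral     # PI control law
        accel = throttle - 0.1 * speed + grade    # throttle, drag, and grade
        speed += accel * dt
    return speed

final_speed = cruise()  # settles back at the setpoint despite the hill
```

The integral term is what removes the steady-state error: with proportional action alone, the hill would leave the car permanently below the set speed, which is exactly the failure mode of the open-loop "locked throttle" described above.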
== Application ==
The accompanying diagram shows a control loop with a single PV input, a control function, and the control output (CO) which modulates the action of the final control element (FCE) to alter the value of the manipulated variable (MV). In this example a flow control loop is shown, but the controlled variable can equally be level, temperature, or any one of many process parameters which need to be controlled. The control function shown is an "intermediate type" such as a PID controller, which means it can generate a full range of output signals anywhere between 0-100%, rather than just an on/off signal.
In this example, the value of the PV is always the same as the MV, as they are in series in the pipeline. However, if the feed from the valve was to a tank, and the controller function was to control the level using the fill valve, the PV would be the tank level, and the MV would be the flow to the tank.
The controller function can be a discrete controller or a function block in a computerised control system such as a distributed control system or a programmable logic controller. In all cases, a control loop diagram is a very convenient and useful way of representing the control function and its interaction with plant. In practice at a process control level, control loops are normally abbreviated using standard symbols in a Piping and instrumentation diagram, which shows all elements of the process measurement and control based on a process flow diagram.
At a detailed level, the control loop connection diagram is created to show the electrical and pneumatic connections. This greatly aids diagnostics and repair, as all the connections for a single control function are on one diagram.
== Loop and control equipment tagging ==
To aid unique identification of equipment, each loop and its elements are identified by a "tagging" system and each element has a unique tag identification.
Based on the standards ANSI/ISA S5.1 and ISO 14617-6, the identifications consist of up to 5 letters.
The first identification letter denotes the measured value, the second is a modifier, the third indicates a passive/readout function, the fourth an active/output function, and the fifth is a function modifier. This is followed by the loop number, which is unique to that loop.
For instance, FIC045 means it is the Flow Indicating Controller in control loop 045. This is also known as the "tag" identifier of the field device, which is normally given to the location and function of the instrument. The same loop may have FT045 - which is the flow transmitter in the same loop.
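A small parser makes the scheme concrete (the letter tables below are a tiny illustrative subset of ANSI/ISA S5.1, not the full standard):

```python
import re

# Illustrative subsets of the ISA S5.1 letter tables (assumption: only a
# few common letters are included here).
MEASURED = {"F": "Flow", "T": "Temperature", "L": "Level", "P": "Pressure"}
FUNCTIONS = {"I": "Indicating", "C": "Controller", "T": "Transmitter",
             "R": "Recorder"}

def parse_tag(tag):
    """Split an ISA-style instrument tag like 'FIC045' into the measured
    variable, the succeeding function letters, and the loop number."""
    m = re.fullmatch(r"([A-Z]{1,5})(\d+)", tag)
    if not m:
        raise ValueError(f"not a valid instrument tag: {tag}")
    letters, loop = m.groups()
    measured = MEASURED.get(letters[0], letters[0])
    funcs = [FUNCTIONS.get(ch, ch) for ch in letters[1:]]
    return measured, funcs, loop

parse_tag("FIC045")  # -> ("Flow", ["Indicating", "Controller"], "045")
parse_tag("FT045")   # -> ("Flow", ["Transmitter"], "045")
```

Both example tags share the loop number 045, matching the point above that a transmitter and a controller can belong to the same loop.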
For reference designation of any equipment in industrial systems, the standard IEC 61346 (Industrial systems, installations and equipment and industrial products — Structuring principles and reference designations) can be applied.
== References ==
A distributed control system (DCS) is a computerized control system for a process or plant usually with many control loops, in which autonomous controllers are distributed throughout the system, but there is no central operator supervisory control. This is in contrast to systems that use centralized controllers; either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localizing control functions near the process plant, with remote monitoring and supervision.
Distributed control systems first emerged in large, high value, safety critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of Supervisory control and data acquisition (SCADA) and DCS systems are very similar, but DCS tends to be used on large continuous process plants where high reliability and security is important, and the control room is not necessarily geographically remote. Many machine control systems exhibit similar properties as plant and process control systems do.
== Structure ==
The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays.
The accompanying diagram is a general model which shows functional manufacturing levels using computerised control.
Referring to the diagram;
Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves
Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors.
Level 2 contains the supervisory computers, which collect information from processor nodes on the system, and provide the operator control screens.
Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets.
Level 4 is the production scheduling level.
Levels 1 and 2 are the functional levels of a traditional DCS, in which all equipment are part of an integrated system from a single manufacturer.
Levels 3 and 4 are not strictly process control in the traditional sense, but where production control and scheduling takes place.
=== Technical points ===
The processor nodes and operator graphical displays are connected over proprietary or industry standard networks, and network reliability is increased by dual redundancy cabling over diverse routes. This distributed topology also reduces the amount of field cabling by siting the I/O modules and their associated processors close to the process plant.
The processors receive information from input modules, process the information and decide control actions to be signalled by the output modules. The field inputs and outputs can be analog signals e.g. 4–20 mA DC current loop or two-state signals that switch either "on" or "off", such as relay contacts or a semiconductor switch.
DCSs are connected to sensors and actuators and use setpoint control to control the flow of material through the plant. A typical application is a PID controller fed by a flow meter and using a control valve as the final control element. The DCS sends the setpoint required by the process to the controller which instructs a valve to operate so that the process reaches and stays at the desired setpoint. (see 4–20 mA schematic for example).
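The 4–20 mA convention maps an instrument's engineering range linearly onto the live-zero current span; a sketch of the scaling a DCS input module performs (the fault-band limits used here are typical values, not mandated by any single standard):

```python
def ma_to_engineering(current_ma, lo, hi):
    """Convert a 4-20 mA loop current to engineering units over [lo, hi].
    The 'live zero' (4 mA = bottom of range) is what lets a broken loop
    (0 mA) be distinguished from a genuine low reading."""
    if not 3.8 <= current_ma <= 20.5:   # typical out-of-range fault band
        raise ValueError("current outside fault band: possible loop failure")
    return lo + (hi - lo) * (current_ma - 4.0) / 16.0

# e.g. a flow meter ranged 0-300 litres/min:
ma_to_engineering(4.0, 0.0, 300.0)    # -> 0.0   (bottom of range)
ma_to_engineering(12.0, 0.0, 300.0)   # -> 150.0 (mid-range)
ma_to_engineering(20.0, 0.0, 300.0)   # -> 300.0 (top of range)
```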
Large oil refineries and chemical plants have several thousand I/O points and employ very large DCS. Processes are not limited to fluidic flow through pipes, however, and can also include things like paper machines and their associated quality controls, variable speed drives and motor control centers, cement kilns, mining operations, ore processing facilities, and many others.
DCSs in very high reliability applications can have dual redundant processors with "hot" switch over on fault, to enhance the reliability of the control system.
Although 4–20 mA has been the main field signalling standard, modern DCS systems can also support fieldbus digital protocols, such as Foundation Fieldbus, profibus, HART, modbus, PC Link, etc.
Modern DCSs also support neural networks and fuzzy logic applications. Recent research focuses on the synthesis of optimal distributed controllers, which optimize a certain H-infinity or H2 control criterion.
== Typical applications ==
Distributed control systems (DCS) are dedicated systems used in manufacturing processes that are continuous or batch-oriented.
Processes where a DCS might be used include:
Chemical plants
Petrochemical plants, refineries, Oil platforms, FPSOs and LNG plants
Pulp and paper mills (see also: quality control system QCS)
Boiler controls and power plant systems
Nuclear power plants
Environmental control systems
Water management systems
Water treatment plants
Sewage treatment plants
Food and food processing
Agrochemical and fertilizer
Metal and mines
Automobile manufacturing
Metallurgical process plants
Pharmaceutical manufacturing
Sugar refining plants
Agriculture applications
== History ==
=== Evolution of process control operations ===
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large amount of human oversight to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.
With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around plant, and communicate with the graphic display in the control room or rooms. The distributed control system was born.
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
=== Origins ===
Early minicomputers were used in the control of industrial processes since the beginning of the 1960s. The IBM 1800, for example, was an early computer that had input/output hardware to gather process signals in a plant for conversion from field contact levels (for digital points) and analog signals to the digital domain.
The first industrial control computer system was built in 1959 at the Texaco Port Arthur, Texas, refinery with an RW-300 of the Ramo-Wooldridge Company.
In 1975, both Yamatake-Honeywell and Japanese electrical engineering firm Yokogawa introduced their own independently produced DCSs - the TDC 2000 and CENTUM systems, respectively. US-based Bristol also introduced their UCS 3000 universal controller in 1975. In 1978 Valmet introduced their own DCS system called Damatic (latest web-based generation Valmet DNAe). In 1980, Bailey (now part of ABB) introduced the NETWORK 90 system, Fisher Controls (now part of Emerson Electric) introduced the PROVoX system, and Fischer & Porter Company (now also part of ABB) introduced the DCI-4000 (DCI stands for Distributed Control Instrumentation).
The DCS largely came about due to the increased availability of microcomputers and the proliferation of microprocessors in the world of process control. Computers had already been applied to process automation for some time in the form of both direct digital control (DDC) and setpoint control. In the early 1970s Taylor Instrument Company (now part of ABB) developed the 1010 system, Foxboro the FOX1 system, Fisher Controls the DC2 system and Bailey Controls the 1055 systems. All of these were DDC applications implemented within minicomputers (DEC PDP-11, Varian Data Machines, MODCOMP etc.) and connected to proprietary Input/Output hardware. Sophisticated (for the time) continuous as well as batch control was implemented in this way. A more conservative approach was setpoint control, where process computers supervised clusters of analog process controllers. A workstation provided visibility into the process using text and crude character graphics. A fully functional graphical user interface was still some way off.
=== Development ===
Central to the DCS model was the inclusion of control function blocks. Function blocks evolved from early, more primitive DDC concepts of "Table Driven" software. One of the first embodiments of object-oriented software, function blocks were self-contained "blocks" of code that emulated analog hardware control components and performed tasks that were essential to process control, such as execution of PID algorithms. Function blocks continue to endure as the predominant method of control for DCS suppliers, and are supported by key technologies such as Foundation Fieldbus today.
Midac Systems, of Sydney, Australia, developed an object-oriented distributed direct digital control system in 1982. The central system ran 11 microprocessors sharing tasks and common memory and connected to a serial communication network of distributed controllers each running two Z80s. The system was installed at the University of Melbourne.
Digital communication between distributed controllers, workstations and other computing elements (peer to peer access) was one of the primary advantages of the DCS. Attention was duly focused on the networks, which provided the all-important lines of communication that, for process applications, had to incorporate specific functions such as determinism and redundancy. As a result, many suppliers embraced the IEEE 802.4 networking standard. This decision set the stage for the wave of migrations necessary when information technology moved into process automation and IEEE 802.3 rather than IEEE 802.4 prevailed as the control LAN.
=== The network-centric era of the 1980s ===
In the 1980s, users began to look at DCSs as more than just basic process control. A very early example of a Direct Digital Control DCS was completed by the Australian business Midac in 1981–82 using R-Tec Australian designed hardware. The system installed at the University of Melbourne used a serial communications network, connecting campus buildings back to a control room "front end". Each remote unit ran two Z80 microprocessors, while the front end ran eleven Z80s in a parallel processing configuration with paged common memory to share tasks and that could run up to 20,000 concurrent control objects.
It was believed that if openness could be achieved and greater amounts of data could be shared throughout the enterprise, then even greater things could be achieved. The first attempts to increase the openness of DCSs resulted in the adoption of the predominant operating system of the day: UNIX. UNIX and its companion networking technology TCP/IP were developed by the US Department of Defense for openness, which was precisely the issue the process industries were looking to resolve.
As a result, suppliers also began to adopt Ethernet-based networks with their own proprietary protocol layers. The full TCP/IP standard was not implemented, but the use of Ethernet made it possible to implement the first instances of object management and global data access technology. The 1980s also witnessed the first PLCs integrated into the DCS infrastructure. Plant-wide historians also emerged to capitalize on the extended reach of automation systems. The first DCS supplier to adopt UNIX and Ethernet networking technologies was Foxboro, who introduced the I/A Series system in 1987.
=== The application-centric era of the 1990s ===
The drive toward openness in the 1980s gained momentum through the 1990s with the increased adoption of commercial off-the-shelf (COTS) components and IT standards. Probably the biggest transition undertaken during this time was the move from the UNIX operating system to the Windows environment. While the realm of the real time operating system (RTOS) for control applications remains dominated by real time commercial variants of UNIX or proprietary operating systems, everything above real-time control has made the transition to Windows.
The introduction of Microsoft at the desktop and server layers resulted in the development of technologies such as OLE for process control (OPC), which is now a de facto industry connectivity standard. Internet technology also began to make its mark in automation, with most DCS HMIs supporting Internet connectivity. The 1990s were also known for the "Fieldbus Wars", where rival organizations competed to define what would become the IEC fieldbus standard for digital communication with field instrumentation instead of 4–20 milliamp analog communications. The first fieldbus installations occurred in the 1990s. Towards the end of the decade, the technology began to develop significant momentum, with the market consolidated around Ethernet I/P, Foundation Fieldbus and Profibus PA for process automation applications. Some suppliers built new systems from the ground up to maximize functionality with fieldbus, such as Rockwell with the PlantPAx system, Honeywell with Experion and Plantscape SCADA systems, ABB with System 800xA, Emerson Process Management with the DeltaV control system, Siemens with the SPPA-T3000 or Simatic PCS 7, Forbes Marshall with the Microcon+ control system and Azbil Corporation with the Harmonas-DEO system. Fieldbus techniques have been used to integrate machine, drive, quality and condition monitoring applications into one DCS with the Valmet DNA system.
The impact of COTS, however, was most pronounced at the hardware layer. For years, the primary business of DCS suppliers had been the supply of large amounts of hardware, particularly I/O and controllers. The initial proliferation of DCSs required the installation of prodigious amounts of this hardware, most of it manufactured from the bottom up by DCS suppliers. Standard computer components from manufacturers such as Intel and Motorola, however, made it cost prohibitive for DCS suppliers to continue making their own components, workstations, and networking hardware.
As the suppliers made the transition to COTS components, they also discovered that the hardware market was shrinking fast. COTS not only resulted in lower manufacturing costs for the supplier, but also in steadily decreasing prices for the end users, who were becoming increasingly vocal over what they perceived to be unduly high hardware costs. Some suppliers that were previously stronger in the PLC business, such as Rockwell Automation and Siemens, were able to leverage their expertise in manufacturing control hardware to enter the DCS marketplace with cost-effective offerings, while the stability, scalability, reliability and functionality of these emerging systems are still improving. The traditional DCS suppliers introduced new-generation DCSs based on the latest communication and IEC standards, resulting in a trend of combining the traditional concepts and functionalities of PLC and DCS into a single "one-for-all" solution, named the "Process Automation System" (PAS). The gaps among the various systems remain in areas such as database integrity, pre-engineering functionality, system maturity, communication transparency and reliability. While the cost ratio is expected to stay relatively the same (the more powerful the systems are, the more expensive they will be), in practice the automation business often operates strategically, case by case. The current next evolution step is called Collaborative Process Automation Systems.
To compound the issue, suppliers were also realizing that the hardware market was becoming saturated. The life cycle of hardware components such as I/O and wiring is also typically in the range of 15 to over 20 years, making for a challenging replacement market. Many of the older systems that were installed in the 1970s and 1980s are still in use today, and there is a considerable installed base of systems in the market that are approaching the end of their useful life. Developed industrial economies in North America, Europe, and Japan already had many thousands of DCSs installed, and with few if any new plants being built, the market for new hardware was shifting rapidly to smaller, albeit faster growing regions such as China, Latin America, and Eastern Europe.
Because of the shrinking hardware business, suppliers began to make the challenging transition from a hardware-based business model to one based on software and value-added services. It is a transition that is still being made today. The applications portfolio offered by suppliers expanded considerably in the '90s to include areas such as production management, model-based control, real-time optimization, plant asset management (PAM), Real-time performance management (RPM) tools, alarm management, and many others. To obtain the true value from these applications, however, often requires a considerable service content, which the suppliers also provide.
=== Modern systems (2010 onwards) ===
The latest developments in DCS include the following new technologies:
Wireless systems and protocols
Remote transmission, logging and data historian
Mobile interfaces and controls
Embedded web-servers
Increasingly, and ironically, DCSs are becoming centralised at plant level, with the ability to log into the remote equipment. This enables operators to control both at the enterprise level (macro) and at the equipment level (micro), both within and outside the plant, because the importance of physical location diminishes with the interconnectivity provided primarily by wireless and remote access.
The more wireless protocols are developed and refined, the more they are included in DCSs. DCS controllers are now often equipped with embedded servers and provide on-the-go web access. Whether the DCS will lead the Industrial Internet of Things (IIoT) or borrow key elements from it remains to be seen.
Many vendors provide the option of a mobile HMI, ready for both Android and iOS. With these interfaces, the threat of security breaches and possible damage to plant and process are now very real.
== See also ==
Annunciator panel
Building automation
EPICS
Industrial control system
Plant process and emergency shutdown systems
Safety instrumented system (SIS)
TANGO
== References ==
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, and causal inference.
== Methodology ==
=== Optimization problems ===
In a genetic algorithm, a population of candidate solutions (called individuals, creatures, organisms, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.
The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
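The generational loop just described can be sketched in a few lines of Python. The objective here is the classic OneMax toy problem (maximize the number of 1-bits in the string); the operator choices and all parameter values below are illustrative assumptions, not part of any standard algorithm:

```python
import random

random.seed(0)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # OneMax: fitness is simply the number of 1-bits (toy objective).
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def select(population):
    # Tournament of size 2: the fitter of two random individuals wins.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Single-point crossover: splice the parents at a random cut.
    point = random.randrange(1, GENOME_LEN)
    return parent1[:point] + parent2[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Random initial population, then repeated select/recombine/mutate.
population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
```

Tournament selection, single-point crossover and bit-flip mutation form only one of many possible operator combinations; each can be swapped out independently.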
A typical genetic algorithm requires:
a genetic representation of the solution domain,
a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits (also called bit set or bit string). Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators.
==== Initialization ====
The population size depends on the nature of the problem, but typically contains hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found or the distribution of the sampling probability tuned to focus in those areas of greater interest.
==== Selection ====
During each successive generation, a portion of the existing population is selected to reproduce for a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.
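The two selection styles contrasted above (rating every solution versus rating only a random sample) can be sketched as follows; the helper names are hypothetical, chosen for illustration:

```python
import random

def roulette_select(population, fitness, rng=random):
    # Fitness-proportionate ("roulette wheel") selection: every
    # individual is evaluated, and the probability of being chosen
    # is proportional to its (assumed non-negative) fitness.
    weights = [fitness(ind) for ind in population]
    return rng.choices(population, weights=weights, k=1)[0]

def tournament_select(population, fitness, k=3, rng=random):
    # Tournament selection: only a random sample of size k is rated,
    # avoiding evaluation of the whole population each time.
    contenders = rng.sample(population, k)
    return max(contenders, key=fitness)
```

Tournament selection is often preferred when fitness evaluation is expensive, since it touches only `k` individuals per pick.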
The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem-dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise.
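The knapsack fitness just described translates almost literally into code; the item values, weights and capacity used below are made-up example data:

```python
def knapsack_fitness(bits, values, weights, capacity):
    # Each bit says whether the corresponding object is packed.
    total_weight = sum(w for b, w in zip(bits, weights) if b)
    if total_weight > capacity:
        return 0  # invalid representation: capacity exceeded
    return sum(v for b, v in zip(bits, values) if b)
```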
In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.
==== Genetic operators ====
The next step is to generate a second generation population of solutions from those selected, through a combination of genetic operators: crossover (also called recombination), and mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated.
Although reproduction methods that are based on the use of two parents are more "biology inspired", some research suggests that more than two "parents" generate higher quality chromosomes.
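One simple way to use more than two parents is to generalize uniform crossover so that each gene of the child is drawn from a randomly chosen parent. This is only one possible multi-parent recombination scheme, sketched for illustration:

```python
import random

def multi_parent_crossover(parents, rng=random):
    # Uniform crossover generalized to any number of parents:
    # gene i of the child is copied from a randomly chosen parent.
    length = len(parents[0])
    return [rng.choice(parents)[i] for i in range(length)]
```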
These processes ultimately result in the next generation population of chromosomes that is different from the initial generation. Generally, the average fitness will have increased by this procedure for the population, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed. An adequate population size ensures sufficient genetic diversity for the problem at hand, but can lead to a waste of computational resources if set to a value larger than required.
==== Heuristics ====
In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.
==== Termination ====
This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
A solution is found that satisfies minimum criteria
Fixed number of generations reached
Allocated budget (computation time/money) reached
The highest ranking solution's fitness is reaching or has reached a plateau such that successive iterations no longer produce better results
Manual inspection
Combinations of the above
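Several of the conditions above can be combined into a single termination test. The thresholds below (generation budget, target fitness, plateau window) are illustrative assumptions:

```python
def should_terminate(generation, best_fitness, history,
                     max_generations=200, target_fitness=None,
                     stall_generations=25):
    # history holds the best fitness recorded at each past generation.
    if generation >= max_generations:
        return True  # fixed number of generations reached
    if target_fitness is not None and best_fitness >= target_fitness:
        return True  # a solution satisfying minimum criteria was found
    if (len(history) >= stall_generations
            and max(history[-stall_generations:]) <= history[-stall_generations]):
        return True  # fitness has plateaued: no improvement in the window
    return False
```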
== The building block hypothesis ==
Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low order, low defining-length schemata with above average fitness.
A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.
Goldberg describes the heuristic as follows:
"Short, low order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings.
"Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."
Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GAs' efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.
== Limitations ==
The practical use of a genetic algorithm has limitations, especially as compared to alternative optimization algorithms:
Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such types of problem. In this case, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. It is apparent that amalgamation of approximate models may be one of the most promising approaches to convincingly use GA to solve complex real life problems.
Genetic algorithms do not scale well with complexity. That is, where the number of elements which are exposed to mutation is large there is often an exponential increase in search space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
The "better" solution is only in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.
In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that it does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the function to find the local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or by using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (niche radius) have a penalty added, which will reduce the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique would be to simply replace part of the population with randomly generated individuals, when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
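The "niche penalty" mentioned above is often realized as fitness sharing, where an individual's raw fitness is divided by a count of how crowded its niche is. The sketch below assumes a user-supplied distance function between individuals:

```python
def shared_fitness(population, fitness, distance, niche_radius):
    # Fitness sharing: divide each individual's raw fitness by a
    # niche count that grows with the number of similar neighbours
    # (closer neighbours count more), penalizing crowded niches.
    shared = []
    for ind in population:
        niche_count = sum(1 - distance(ind, other) / niche_radius
                          for other in population
                          if distance(ind, other) < niche_radius)
        # niche_count >= 1 because each individual is distance 0 from itself.
        shared.append(fitness(ind) / niche_count)
    return shared
```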
Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
GAs cannot effectively solve problems in which the only fitness measure is a binary pass/fail outcome (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g.: ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms is dependent on the amount of knowledge of the problem; well known problems often have better, more specialized approaches.
== Variants ==
=== Chromosome representation ===
The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The floating point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered, but it is arguably a misnomer because it does not reflect the building-block theory proposed by John Henry Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers. This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.
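The Gray code conversions themselves are short; the property that matters for a GA is that consecutive integers differ in exactly one bit, so a single bit-flip mutation can step the decoded value by one:

```python
def binary_to_gray(n):
    # Standard reflected binary Gray code of a non-negative integer.
    return n ^ (n >> 1)

def gray_to_binary(g):
    # Invert the encoding by folding the bits back down.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```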
Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes. This was explained as the set of real values in a finite population of chromosomes as forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating point representation.
An expansion of the Genetic Algorithm accessible problem domain can be obtained through more complex encoding of the solution pools by concatenating several types of heterogenously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.
Another important expansion of the Genetic Algorithm (GA) accessible solution space was driven by the need to make representations amenable to variable levels of knowledge about the solution states. Variable-length representations were inspired by the observation that, in nature, evolution tends to progress from simpler organisms to more complex ones—suggesting an underlying rationale for embracing flexible structures. A second, more pragmatic motivation was that most real-world engineering and knowledge-based problems do not naturally conform to rigid knowledge structures.
These early innovations in variable-length representations laid essential groundwork for the development of Genetic programming, which further extended the classical GA paradigm. Such representations required enhancements to the simplistic genetic operators used for fixed-length chromosomes, enabling the emergence of more sophisticated and adaptive GA models.
=== Elitism ===
A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next.
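A minimal sketch of elitist replacement follows (the mutation operator and all names are placeholders, assuming real-valued individuals):

```python
import random

def mutate(x, sigma=0.1):
    # Placeholder variation operator: small Gaussian perturbation.
    return x + random.gauss(0.0, sigma)

def next_generation(population, fitness, elite_count=2):
    # Elitist selection: the best individuals are copied unchanged,
    # so the best fitness in the population can never decrease.
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:elite_count]
    offspring = [mutate(random.choice(ranked))
                 for _ in range(len(population) - elite_count)]
    return elites + offspring
```

Because the elites pass through unaltered, the maximum fitness over generations is monotonically non-decreasing, which is exactly the guarantee described above.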
=== Parallel implementations ===
Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction.
Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.
=== Adaptive GAs ===
Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Researchers have analyzed GA convergence analytically.
Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust the pc and pm in order to maintain the population diversity as well as to sustain the convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. There are more examples of AGA variants: Successive zooming method is an early example of improving convergence. In CAGA (clustering-based adaptive genetic algorithm), through the use of clustering analysis to judge the optimization states of the population, the adjustment of pc and pm depends on these optimization states. Recent approaches use more abstract variables for deciding pc and pm. Examples are dominance & co-dominance principles and LIGA (levelized interpolative genetic algorithm), which combines a flexible GA with modified A* search to tackle search space anisotropicity.
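A minimal sketch of a fitness-dependent crossover probability is shown below (the functional form and constants are illustrative of the idea, not a specific published parameterization; an analogous rule can drive pm):

```python
def adaptive_pc(f_prime, f_avg, f_max, k1=1.0, k3=1.0):
    # Lower the crossover probability for above-average parents
    # (protecting good solutions from disruption) while keeping it
    # high for below-average ones (maintaining diversity).
    if f_prime >= f_avg and f_max > f_avg:
        return k1 * (f_max - f_prime) / (f_max - f_avg)
    return k3
```

With this rule the current best individual (f' = fmax) gets pc = 0, an average individual gets pc = k1, and any below-average individual gets the full k3.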
It can be quite effective to combine GA with other optimization methods. A GA tends to be quite good at finding generally good global solutions, but quite inefficient at finding the last few mutations to find the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing.
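The GA-plus-hill-climbing alternation described above can be sketched in a toy one-dimensional form (the operators and names are illustrative):

```python
import random

def hill_climb(x, fitness, step=0.05, iters=100):
    # Local refinement: accept only improving moves near x.
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if fitness(cand) > fitness(x):
            x = cand
    return x

def hybrid_step(population, fitness):
    # Hybrid idea from the text: let the GA supply a roughly good
    # solution, then polish the best individual with local search.
    best = max(population, key=fitness)
    return hill_climb(best, fitness)
```

In a full memetic loop the polished individual would be re-inserted into the population before the next GA generation.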
This means that the rules of genetic variation may have a different meaning in the natural case. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA adding a number of steps from paternal DNA and so on. This is like adding vectors that more probably may follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order or any other suitable order in favour of survival or efficiency.
A variation, where the population as a whole is evolved rather than its individual members, is known as gene pool recombination.
A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the Building Block Hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA.
== Problem domains ==
Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.
As a general rule of thumb genetic algorithms might be useful in problem domains that have a complex fitness landscape as mixing, i.e., mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).
Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, and optimal design of aerodynamic bodies in complex flowfields.
In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task:
[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate.
[...]
I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs.
== History ==
In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as in 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).
Although Barricelli, in work he reported in 1963, had simulated the evolution of ability to play a simple game, artificial evolution only became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics. Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.
=== Commercial products ===
In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes.
In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version. Since the 1990s, MATLAB has included three derivative-free heuristic optimization algorithms (simulated annealing, particle swarm optimization, genetic algorithm) and two direct search algorithms (simplex search, pattern search).
== Related techniques ==
=== Parent fields ===
Genetic algorithms are a sub-field of:
Evolutionary algorithms
Evolutionary computing
Metaheuristics
Stochastic optimization
Optimization
=== Related fields ===
==== Evolutionary algorithms ====
Evolutionary algorithms are a sub-field of evolutionary computing.
Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
Evolutionary programming (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. They use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
Estimation of Distribution Algorithm (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover.
Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms. There are many variants of Genetic Programming, including Cartesian genetic programming, Gene expression programming, grammatical evolution, Linear genetic programming, Multi expression programming etc.
Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, like in classical GAs, to groups or subsets of items. The idea behind this GA evolution, proposed by Emanuel Falkenauer, is that solving some complex problems, a.k.a. clustering or partitioning problems, where a set of items must be split into disjoint groups of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.
==== Swarm intelligence ====
Swarm intelligence is a sub-field of evolutionary computing.
Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas.
Although considered an estimation of distribution algorithm, particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and by the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems PSO is often more computationally efficient than GAs, especially in unconstrained problems with continuous variables.
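The particle update described above can be sketched as follows (a minimal, illustrative implementation that maximizes a fitness function; the parameter values are typical defaults, not prescriptive):

```python
import random

def pso(fitness, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    # Each particle is pulled toward its own best known position
    # (pbest) and the swarm's global best known position (gbest).
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest
```

The information sharing happens entirely through gbest: every particle's velocity is biased toward the single best position any swarm member has found so far.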
==== Other evolutionary computing algorithms ====
Evolutionary computation is a sub-field of the metaheuristic methods.
Memetic algorithm (MA), often called hybrid genetic algorithm among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which unlike genes, can adapt themselves. In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, there is not one individual that fits the whole environment. So, one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.
Cultural algorithm (CA) consists of the population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
Differential evolution (DE) is inspired by the migration of superorganisms.
Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore, it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.
==== Other metaheuristic methods ====
Metaheuristic methods broadly fall within stochastic optimisation methods.
Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
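The acceptance rule at the heart of SA can be written compactly (a generic Metropolis-style sketch; the sign convention assumes fitness is being maximized):

```python
import math
import random

def sa_accept(delta_fitness, temperature):
    # Improvements are always accepted; a fitness loss is accepted
    # with probability exp(delta/T), which shrinks as T cools, so
    # late in the schedule the search behaves like hill climbing.
    if delta_fitness >= 0:
        return True
    return random.random() < math.exp(delta_fitness / temperature)
```

Early in the run (high temperature) even large fitness losses are frequently accepted, which is what lets SA escape local optima.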
Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
Extremal optimization (EO), unlike GAs, which work with a population of candidate solutions, evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness"). The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.
==== Other stochastic optimisation methods ====
The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.
== See also ==
Genetic programming
List of genetic algorithm applications
Genetic algorithms in signal processing (a.k.a. particle filters)
Propagation of schema
Universal Darwinism
Metaheuristics
Learning classifier system
Rule-based machine learning
== References ==
== Bibliography ==
== External links ==
=== Resources ===
Provides a list of resources in the genetic algorithms field
An Overview of the History and Flavors of Evolutionary Algorithms
=== Tutorials ===
Genetic Algorithms - Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand. An excellent introduction to GA by John Holland, with an application to the Prisoner's Dilemma
An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works: Learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints.
A Genetic Algorithm Tutorial by Darrell Whitley, Computer Science Department, Colorado State University. An excellent tutorial with much theory
"Essentials of Metaheuristics", 2009 (225 p). Free open text by Sean Luke.
Global Optimization Algorithms – Theory and Application Archived 11 September 2008 at the Wayback Machine
Genetic Algorithms in Python Tutorial with the intuition behind GAs and Python implementation.
Genetic Algorithms evolve to solve the prisoner's dilemma. Written by Robert Axelrod.
Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. Motion control systems are extensively used in a variety of fields for automation purposes, including precision engineering, micromanufacturing, biotechnology, and nanotechnology. The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know if the desired motion was actually achieved. Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, it becomes a closed-loop system.
Typically the position or velocity of a machine is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries.
Motion control encompasses every technology related to the movement of objects. It covers every motion system, from micro-sized systems such as silicon-type micro induction actuators to macro-sized systems such as a space platform. But, these days, the focus of motion control is the special control technology of motion systems with electric actuators such as DC/AC servo motors. Control of robotic manipulators is also included in the field of motion control because most robotic manipulators are driven by electrical servo motors and the key objective is the control of motion.
== Overview ==
The basic architecture of a motion control system contains:
A motion controller, which calculates and controls the mechanical trajectories (motion profile) an actuator must follow (i.e., motion planning) and, in closed loop systems, employs feedback to make control corrections and thus implement closed-loop control.
A drive or amplifier to transform the control signal from the motion controller into energy that is presented to the actuator. Newer "intelligent" drives can close the position and velocity loops internally, resulting in much more accurate control.
A prime mover or actuator such as a hydraulic pump, pneumatic cylinder, linear actuator, or electric motor for output motion.
In closed loop systems, one or more feedback sensors such as absolute and incremental encoders, resolvers or Hall effect devices to return the position or velocity of the actuator to the motion controller in order to close the position or velocity control loops.
Mechanical components to transform the motion of the actuator into the desired motion, including: gears, shafting, ball screw, belts, linkages, and linear and rotational bearings.
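The closed-loop pieces listed above can be sketched as a bare proportional position loop (names and gains are illustrative; real controllers typically add integral and derivative terms and model the drive and load dynamics):

```python
def p_position_loop(target, position, kp=2.0, dt=0.01, steps=500):
    # Minimal closed-loop sketch: the controller measures position,
    # computes the error, and commands a velocity proportional to it.
    for _ in range(steps):
        error = target - position        # feedback comparison
        velocity_cmd = kp * error        # controller output to the drive
        position += velocity_cmd * dt    # actuator/plant integrates velocity
    return position
```

Each iteration plays the role of one controller cycle: measure, compare against the commanded trajectory, and correct.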
The interface between the motion controller and the drives it controls is critical when coordinated motion is required, as it must provide tight synchronization. Historically, the only open interface was an analog signal, until open interfaces were developed that satisfied the requirements of coordinated motion control, the first being SERCOS in 1991, which has since been enhanced to SERCOS III. Later interfaces capable of motion control include Ethernet/IP, Profinet IRT, Ethernet Powerlink, and EtherCAT.
Common control functions include:
Velocity control.
Position (point-to-point) control: There are several methods for computing a motion trajectory. These are often based on the velocity profiles of a move such as a triangular profile, trapezoidal profile, or an S-curve profile.
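As an illustration of the trapezoidal velocity profile, position as a function of time can be computed piecewise (a sketch with illustrative names; it assumes the move is long enough to actually reach the cruise velocity):

```python
def trapezoidal_position(t, v_max, accel, distance):
    # Accelerate at `accel` up to `v_max`, cruise, then decelerate
    # symmetrically to rest after covering `distance`.
    t_acc = v_max / accel                  # ramp time
    d_acc = 0.5 * accel * t_acc ** 2       # distance covered on a ramp
    t_cruise = (distance - 2 * d_acc) / v_max
    if t < t_acc:                          # accelerating
        return 0.5 * accel * t ** 2
    if t < t_acc + t_cruise:               # cruising
        return d_acc + v_max * (t - t_acc)
    t_dec = t - t_acc - t_cruise           # decelerating
    return d_acc + v_max * t_cruise + v_max * t_dec - 0.5 * accel * t_dec ** 2
```

An S-curve profile differs only in that acceleration itself is ramped, which limits jerk; the same piecewise approach applies with more segments.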
Pressure or force control.
Impedance control: This type of control is suitable for environment interaction and object manipulation, such as in robotics.
Electronic gearing (or cam profiling): The position of a slave axis is mathematically linked to the position of a master axis. A good example of this would be in a system where two rotating drums turn at a given ratio to each other. A more advanced case of electronic gearing is electronic camming. With electronic camming, a slave axis follows a profile that is a function of the master position. This profile need not be linear, but it must be a function.
== See also ==
Match moving, for motion tracking in computer-generated imagery
Mechatronics, the science of computer-controlled smart motion devices
Control system
PID controller, proportional-integral-derivative controller
Slewing
Pneumatics
Ethernet/IP
High performance positioning system for controlling high precision at high speed
== External links ==
What is a Motion Controller? Technical Summary for Motion Engineers
== Further reading ==
Tan K. K., T. H. Lee and S. Huang, Precision motion control: Design and implementation, 2nd ed., London, Springer, 2008.
Ellis, George, Control System Design Guide, Fourth Edition: Using Your Computer to Understand and Diagnose Feedback Controllers
== References ==
A bond graph is a graphical representation of a physical dynamic system. It allows the conversion of the system into a state-space representation. It is similar to a block diagram or signal-flow graph, with the major difference that the arcs in bond graphs represent bi-directional exchange of physical energy, while those in block diagrams and signal-flow graphs represent uni-directional flow of information. Bond graphs are multi-energy domain (e.g. mechanical, electrical, hydraulic, etc.) and domain neutral. This means a bond graph can incorporate multiple domains seamlessly.
The bond graph is composed of the "bonds" which link together "single-port", "double-port" and "multi-port" elements (see below for details). Each bond represents the instantaneous flow of energy (dE/dt) or power. The flow in each bond is denoted by a pair of variables called power variables, akin to conjugate variables, whose product is the instantaneous power of the bond. The power variables are broken into two parts: flow and effort. For example, for the bond of an electrical system, the flow is the current, while the effort is the voltage. By multiplying current and voltage in this example you can get the instantaneous power of the bond.
A bond has two other features described briefly here, and discussed in more detail below. One is the "half-arrow" sign convention. This defines the assumed direction of positive energy flow. As with electrical circuit diagrams and free-body diagrams, the choice of positive direction is arbitrary, with the caveat that the analyst must be consistent throughout with the chosen definition. The other feature is the "causality". This is a vertical bar placed on only one end of the bond. It is not arbitrary. As described below, there are rules for assigning the proper causality to a given port, and rules for the precedence among ports. Causality explains the mathematical relationship between effort and flow. The positions of the causalities show which of the power variables are dependent and which are independent.
If the dynamics of the physical system to be modeled operate on widely varying time scales, fast continuous-time behaviors can be modeled as instantaneous phenomena by using a hybrid bond graph. Bond graphs were invented by Henry Paynter.
== Systems for bond graph ==
Many systems can be expressed in terms used in bond graph. These terms are expressed in the table below.
Conventions for the table below:
{\displaystyle P} is the active power;
{\displaystyle {\hat {X}}} is a matrix object;
{\displaystyle {\vec {x}}} is a vector object;
{\displaystyle x^{\dagger }} is the Hermitian conjugate of x; it is the complex conjugate of the transpose of x. If x is a scalar, then the Hermitian conjugate is the same as the complex conjugate;
{\displaystyle D_{t}^{n}} is the Euler notation for differentiation, where:
{\displaystyle D_{t}^{n}f(t)={\begin{cases}\displaystyle \int _{-\infty }^{t}f(s)\,ds,&n=-1\\[2pt]f(t),&n=0\\[2pt]{\dfrac {\partial ^{n}f(t)}{\partial t^{n}}},&n>0\end{cases}}}
{\displaystyle {\begin{cases}\langle x\rangle ^{\alpha }:=|x|^{\alpha }\operatorname {sgn}(x)\\\langle {a}\rangle =k\langle b\rangle ^{\beta }\implies \langle b\rangle =\left({\frac {1}{k}}\langle a\rangle \right)^{1/\beta }\end{cases}}}
Vergent-factor:
{\displaystyle \phi _{L}={\begin{cases}{\textrm {Prismatic}}:\ {\dfrac {\textrm {length}}{{\textrm {cross-sectional}}\ {\textrm {area}}}}\\{\textrm {Cylinder}}:\ {\dfrac {\ln \left({\frac {\mathrm {radius_{out}} }{\mathrm {radius_{in}} }}\right)}{2\pi \cdot {\textrm {length}}}}\\{\textrm {Sphere}}:\ {\dfrac {1}{4\pi \left(\mathrm {radius_{in}} \parallel \mathrm {-radius_{out}} \right)}}\end{cases}}}
Other systems:
Thermodynamic power system (flow is entropy-rate and effort is temperature)
Electrochemical power system (flow is chemical activity and effort is chemical potential)
Thermochemical power system (flow is mass-rate and effort is mass specific enthalpy)
Macroeconomics currency-rate system (displacement is commodity and effort is price per commodity)
Microeconomics currency-rate system (displacement is population and effort is GDP per capita)
== Tetrahedron of state ==
The tetrahedron of state is a tetrahedron that graphically shows the conversion between effort and flow. The adjacent image shows the tetrahedron in its generalized form. The tetrahedron can be modified depending on the energy domain.
Using the tetrahedron of state, one can find a mathematical relationship between any variables on the tetrahedron. This is done by following the arrows around the diagram and multiplying any constants along the way. For example, if you wanted to find the relationship between generalized flow and generalized displacement, you would start at the f(t) and then integrate it to get q(t). More examples of equations can be seen below.
Relationship between generalized displacement and generalized flow.
{\displaystyle q(t)=\int f(t)\,dt}
Relationship between generalized flow and generalized effort.
{\displaystyle f(t)={\frac {1}{R}}\cdot e(t)}
Relationship between generalized flow and generalized momentum.
{\displaystyle f(t)={\frac {1}{I}}\cdot p(t)}
Relationship between generalized momentum and generalized effort.
{\displaystyle p(t)=\int e(t)\,dt}
Relationship between generalized flow and generalized effort, involving the constant C.
{\displaystyle e(t)={\frac {1}{C}}\int f(t)\,dt}
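These generalized relationships can be checked numerically; the sketch below approximates q(t) = ∫ f(t) dt with a rectangle rule and applies the resistive relation e = R·f (function names and the integration scheme are our own illustration):

```python
def displacement_from_flow(flow_samples, dt):
    # q(t) = integral of f(t): accumulate flow samples over time steps.
    q, out = 0.0, []
    for f in flow_samples:
        q += f * dt
        out.append(q)
    return out

def effort_from_flow(f, R):
    # Resistive bond: effort is proportional to flow, e = R * f
    # (the inverse of f = (1/R) * e given in the text).
    return R * f
```

In the electrical domain the same two lines read "charge is the integral of current" and "voltage equals resistance times current", illustrating the domain-neutrality of the tetrahedron.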
All of the mathematical relationships remain the same when switching energy domains, only the symbols change. This can be seen with the following examples.
Relationship between displacement and velocity.
{\displaystyle x(t)=\int v(t)\,dt}
Relationship between current and voltage; this is also known as Ohm's law.
{\displaystyle i(t)={\frac {1}{R}}V(t)}
Relationship between force and displacement, also known as Hooke's law. The negative sign is dropped here because it is absorbed into the direction of the half-arrow in the bond graph.
{\displaystyle F(t)=kx(t)}
For power systems, the formula for the resonant frequency is as follows:
{\displaystyle \omega ={\sqrt {\frac {1}{LC}}}}
For power density systems, the formula for the velocity of the resonance wave is as follows:
{\displaystyle c={\sqrt {\frac {1}{LC}}}}
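The tetrahedron-of-state relations can be checked numerically. The sketch below is illustrative only: the R, I, C values and the rectangle-rule integration are arbitrary assumptions, not part of the bond-graph formalism.

```python
import math

# Illustrative element constants (arbitrary values, not from the article).
R, I, C = 2.0, 0.5, 0.25

def displacement_from_flow(flow, dt):
    """q(t) = integral of f(t) dt, approximated by the rectangle rule."""
    q, out = 0.0, []
    for f in flow:
        q += f * dt
        out.append(q)
    return out

def flow_from_effort(e):
    """f(t) = e(t) / R (generalized Ohm's law)."""
    return e / R

def flow_from_momentum(p):
    """f(t) = p(t) / I."""
    return p / I

# Resonant frequency of a power system with inertance I and compliance C.
omega = math.sqrt(1.0 / (I * C))

dt = 1e-3
flow = [1.0] * 1000                 # one second of constant unit flow
q = displacement_from_flow(flow, dt)
print(round(q[-1], 6))              # displacement after 1 s -> 1.0
print(round(omega, 6))              # sqrt(1/(0.5*0.25)) -> 2.828427
```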
== Components ==
If an engine is connected to a wheel through a shaft, the power is being transmitted in the rotational mechanical domain, meaning the effort and the flow are torque (τ) and angular velocity (ω) respectively. A word bond graph is a first step towards a bond graph, in which words define the components. As a word bond graph, this system would look like:
{\displaystyle {\text{engine}}\;{\overset {\textstyle \tau }{\underset {\textstyle \omega }{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-}}}\;{\text{wheel}}}
A half-arrow is used to provide a sign convention, so if the engine is doing work when τ and ω are positive, then the diagram would be drawn:
{\displaystyle {\text{engine}}\;{\overset {\textstyle \tau }{\underset {\textstyle \omega }{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}\;{\text{wheel}}}
This system can also be represented more generally by replacing the words with symbols based on the generalized form explained above. As the engine applies a torque to the wheel, it is represented as a source of effort for the system, and the wheel is represented by an impedance on the system. The torque and angular velocity symbols are dropped and replaced with the generalized symbols for effort and flow. While not necessary in this example, it is common to number the bonds in order to keep track of them in equations. The simplified diagram can be seen below.
{\displaystyle {S_{e}}\;{\overset {\textstyle e_{1}}{\underset {\textstyle f_{1}}{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}\;{\text{I}}}
Given that effort is always above the flow on the bond, it is also possible to drop the effort and flow symbols altogether, without losing any relevant information. However, the bond number should not be dropped. The example can be seen below.
{\displaystyle {S_{e}}\;{\overset {\textstyle _{1}}{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}\;{\text{I}}}
The bond number will be important later when converting from the bond graph to state-space equations.
=== Association of elements ===
==== Series association ====
Suppose that an element has the following behavior:
{\displaystyle e(t)=\alpha g(q(t))}
where {\displaystyle g(x)} is a generic function (it may even differentiate or integrate its input) and {\displaystyle \alpha } is the element's constant. Now suppose a 1-junction connects many elements of this type. The total effort across the junction is then:
{\displaystyle e(t)=\left(\sum _{i}\alpha _{i}\right)g(q(t))\implies \alpha _{\text{eq}}=\sum _{i=1}^{N}\alpha _{i}}
==== Parallel association ====
Suppose that an element has the following behavior:
{\displaystyle e(t)=g(\alpha q(t))}
where {\displaystyle g(x)} is a generic function (it may even differentiate or integrate its input) and {\displaystyle \alpha } is the element's constant. Now suppose a 0-junction connects many elements of this type. Then:
{\displaystyle g^{-1}\left(e(t)\right)=\alpha _{i}q_{i}(t)\implies {\frac {1}{\alpha _{i}}}g^{-1}(e(t))=q_{i}(t)\implies \left(\sum _{i}{\frac {1}{\alpha _{i}}}\right)g^{-1}(e(t))=q(t)\implies g(g^{-1}(e(t)))=g\left({\frac {1}{\sum _{i}{\frac {1}{\alpha _{i}}}}}q(t)\right)\implies \alpha _{\text{eq}}=\parallel _{i=1}^{N}\alpha _{i}}
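Both association rules reduce to simple arithmetic on the element constants: series (1-junction) constants add, parallel (0-junction) constants combine via the reciprocal-sum "parallel" operator. A minimal numeric sketch, assuming plain scalar constants (the generic function g drops out of the equivalent-constant formulas):

```python
def series_equivalent(alphas):
    # 1-junction association: constants simply add.
    return sum(alphas)

def parallel_equivalent(alphas):
    # 0-junction association: a ∥ b = 1/(1/a + 1/b), extended over a list.
    return 1.0 / sum(1.0 / a for a in alphas)

alphas = [2.0, 3.0, 6.0]
print(series_equivalent(alphas))             # 11.0
print(round(parallel_equivalent(alphas), 9)) # 1.0  (since 1/2 + 1/3 + 1/6 = 1)
```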
=== Single-port elements ===
Single-port elements are elements in a bond graph that can have only one port.
==== Sources and sinks ====
Sources are elements that represent the input for a system. They will either input effort or flow into a system. They are denoted by a capital "S" with either a lower case "e" or "f" for effort or flow respectively. Sources will always have the arrow pointing away from the element. Examples of sources include: motors (source of effort, torque), voltage sources (source of effort), and current sources (source of flow).
{\displaystyle S_{e}\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ J\qquad {\text{and}}\qquad S_{f}\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ J}
where J indicates a junction.
Sinks are elements that represent the output for a system. They are represented the same way as sources, but have the arrow pointing into the element instead of away from it.
{\displaystyle J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ S_{e}\qquad {\text{and}}\qquad J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ S_{f}}
==== Inertia ====
Inertia elements are denoted by a capital "I", and always have power flowing into them. Inertia elements are elements that store energy. Most commonly these are a mass for mechanical systems, and inductors for electrical systems.
{\displaystyle J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ I}
==== Resistance ====
Resistance elements are denoted by a capital "R", and always have power flowing into them. Resistance elements are elements that dissipate energy. Most commonly these are a damper, for mechanical systems, and resistors for electrical systems.
{\displaystyle J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ R}
==== Compliance ====
Compliance elements are denoted by a capital "C", and always have power flowing into them. Compliance elements are elements that store potential energy. Most commonly these are springs for mechanical systems, and capacitors for electrical systems.
{\displaystyle J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}}}\;\ C}
=== Two-port elements ===
These elements have two ports and are used to convert power between or within a system; no power is lost in the conversion. Each element carries a constant, called a transformer constant or gyrator constant depending on which element is used. These constants are commonly displayed as a ratio below the element.
==== Transformer ====
A transformer relates flow in to flow out and effort in to effort out. Examples include an ideal electrical transformer or a lever.
Denoted
{\displaystyle {\begin{matrix}{\overset {\textstyle _{1}}{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}\ \ \ TR\ \ {\overset {\textstyle _{2}}{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}\ \\^{r:1}\end{matrix}}}
where the r denotes the modulus of the transformer. This means
{\displaystyle f_{1}r=f_{2}}
and
{\displaystyle e_{2}r=e_{1}}
==== Gyrator ====
A gyrator relates flow in to effort out and effort in to flow out. An example of a gyrator is a DC motor, which converts voltage (electrical effort) into angular velocity (angular mechanical flow).
{\displaystyle {\begin{matrix}{\overset {\textstyle _{1}}{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}\ \ \ GY\ \ {\overset {\textstyle _{2}}{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}\ \\^{g:1}\end{matrix}}}
meaning that
{\displaystyle e_{2}=gf_{1}}
and
{\displaystyle e_{1}=gf_{2}.}
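Both two-port elements conserve power, which follows directly from the modulus relations above. A small sketch with hypothetical helper names and arbitrary numeric values:

```python
def transformer(e1, f1, r):
    """TR with modulus r: e2 = e1/r (from e2*r = e1), f2 = r*f1."""
    return e1 / r, r * f1

def gyrator(e1, f1, g):
    """GY with modulus g: e2 = g*f1, f2 = e1/g (from e1 = g*f2)."""
    return g * f1, e1 / g

e1, f1 = 12.0, 3.0
for name, (e2, f2) in [("TR", transformer(e1, f1, 4.0)),
                       ("GY", gyrator(e1, f1, 4.0))]:
    # Power in equals power out for an ideal two-port element.
    print(name, e2 * f2 == e1 * f1)   # TR True / GY True
```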
=== Multi-port elements ===
Junctions, unlike the other elements, can have any number of ports, either in or out. Junctions split power across their ports. There are two distinct junctions, the 0-junction and the 1-junction, which differ only in how effort and flow are carried across. Identical junctions in series can be combined, but different junctions in series cannot.
==== 0-junctions ====
0-junctions behave such that all effort values (and their time integrals/derivatives) are equal across the bonds, while the sum of the flow values in equals the sum of the flow values out, or equivalently, all flows sum to zero. In an electrical circuit, the 0-junction is a node and represents a voltage shared by all components at that node. In a mechanical circuit, the 0-junction is a joint among components and represents a force shared by all components connected to it.
{\displaystyle {\text{all }}e{\text{'s are equal}}}
{\displaystyle \sum f_{\text{in}}=\sum f_{\text{out}}}
An example is shown below.
{\displaystyle {\overset {\textstyle _{1}}{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}{\stackrel {\textstyle {\stackrel {\textstyle _{2}}{\upharpoonright }}}{0}}{\overset {\textstyle _{3}}{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}}
Resulting equations:
{\displaystyle e_{1}=e_{2}=e_{3}}
{\displaystyle f_{1}=f_{2}+f_{3}}
==== 1-junctions ====
1-junctions behave opposite to 0-junctions. 1-junctions behave such that all flow values (and their time integrals/derivatives) are equal across the bonds, while the sum of the effort values in equals the sum of the effort values out, or equivalently, all efforts sum to zero. In an electrical circuit, the 1-junction represents a series connection among components. In a mechanical circuit, the 1-junction represents a velocity shared by all components connected to it.
{\displaystyle {\text{all }}f{\text{'s are equal}}}
{\displaystyle \sum e_{\text{in}}=\sum e_{\text{out}}}
An example is shown below.
{\displaystyle {\overset {\textstyle _{1}}{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}{\stackrel {\textstyle {\stackrel {\textstyle _{2}}{\upharpoonright }}}{1}}{\overset {\textstyle _{3}}{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoondown }}}}
Resulting equations:
{\displaystyle f_{1}=f_{2}=f_{3}}
{\displaystyle e_{1}=e_{2}+e_{3}}
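The two junction types can be sketched as functions that, given the known bond variables, determine the rest. The helper names and numeric values below are illustrative assumptions; the check at the end confirms that an ideal junction also conserves power.

```python
# 0-junction: common effort, flows balance (bond 1 in; bonds 2 and 3 out).
def zero_junction(e1, f1, f2):
    e2, e3 = e1, e1          # all efforts are equal
    f3 = f1 - f2             # f1 = f2 + f3
    return (e2, f2), (e3, f3)

# 1-junction: common flow, efforts balance (bond 1 in; bonds 2 and 3 out).
def one_junction(e1, f1, e2):
    f2, f3 = f1, f1          # all flows are equal
    e3 = e1 - e2             # e1 = e2 + e3
    return (e2, f2), (e3, f3)

b2, b3 = zero_junction(e1=10.0, f1=4.0, f2=1.5)
print(b2, b3)                                        # (10.0, 1.5) (10.0, 2.5)
# Power balance: power in equals total power out at an ideal junction.
print(10.0 * 4.0 == b2[0] * b2[1] + b3[0] * b3[1])   # True
```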
== Causality ==
Bond graphs have a notion of causality, indicating which side of a bond determines the instantaneous effort and which determines the instantaneous flow. In formulating the dynamic equations that describe the system, causality defines, for each modeling element, which variable is dependent and which is independent. By propagating the causation graphically from one modeling element to the next, the analysis of large-scale models becomes easier. Completing causal assignment in a bond graph model also allows the detection of modeling situations where an algebraic loop exists; that is, a situation in which a variable is defined recursively as a function of itself.
As an example of causality, consider a capacitor in series with a battery. It is not physically possible to charge a capacitor instantly, so anything connected in parallel with a capacitor will necessarily have the same voltage (effort variable) as that across the capacitor. Similarly, an inductor cannot change flux instantly and so any component in series with an inductor will necessarily have the same flow as the inductor. Because capacitors and inductors are passive devices, they cannot maintain their respective voltage and flow indefinitely—the components to which they are attached will affect their respective voltage and flow, but only indirectly by affecting their current and voltage respectively.
Note: Causality is a symmetric relationship. When one side "causes" effort, the other side "causes" flow.
In bond graph notation, a causal stroke may be added to one end of the power bond to indicate that this side defines the flow. Consequently, the side opposite the causal stroke controls the effort.
Sources of flow ({\displaystyle S_{f}}) define flow, so they host the causal stroke:
{\displaystyle S_{f}\;|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!}
Sources of effort ({\displaystyle S_{e}}) define effort, so the other end hosts the causal stroke:
{\displaystyle S_{e}\;-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!|}
Consider a constant-torque motor driving a wheel, i.e. a source of effort ({\displaystyle S_{e}}). That would be drawn as follows:
{\displaystyle {\begin{array}{r}{\text{motor}}\\S_{e}\end{array}}\;{\overset {\textstyle \tau }{\underset {\textstyle \omega }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!|}}}\;{\text{wheel}}}
Symmetrically, the side with the causal stroke (in this case the wheel) defines the flow for the bond.
Causality results in compatibility constraints. Only one end of a power bond can define the effort, so only one end of a bond can carry a causal stroke. In addition, the two passive components with time-dependent behavior, {\displaystyle I} and {\displaystyle C}, each have a preferred causation: an {\displaystyle I} component determines flow; a {\displaystyle C} component defines effort. So from a junction, {\displaystyle J}, the preferred causal orientation is as follows:
{\displaystyle J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!|}}}\;I\qquad {\text{and}}\qquad J\;{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}\;C}
The reason this orientation is preferred for these elements can be seen from the equations it yields, given by the tetrahedron of state.
{\displaystyle f(t)={\frac {1}{I}}\int e(t)\,dt\qquad {\text{and}}\qquad e(t)={\frac {1}{C}}\int f(t)\,dt}
The resulting equations involve the integral of the independent power variable. This is preferred over the opposite causality, which yields a derivative. The equations can be seen below.
{\displaystyle e(t)=I{\dot {f}}(t)\qquad {\text{and}}\qquad f(t)=C{\dot {e}}(t)}
It is possible for a bond graph to have a causal bar on one of these elements in the non-preferred manner. In such a case a "causal conflict" is said to have occurred at that bond. The results of a causal conflict are only seen when writing the state-space equations for the graph, as explained in more detail in that section.
A resistor has no time-dependent behavior: apply a voltage and get a flow instantly, or apply a flow and get a voltage instantly, thus a resistor can be at either end of a causal bond:
{\displaystyle J\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!\rightharpoonup \!\!\!|}}}\;R\qquad {\text{and}}\qquad J\;{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}\;R}
Transformers are passive, neither dissipating nor storing energy, so causality passes through them:
{\displaystyle \;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!|}}}\;TF\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!|}}}\;\qquad {\text{or}}\qquad \;{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-}}}\;TF\;{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-}}}\;}
A gyrator transforms flow to effort and effort to flow, so if flow is caused on one side, effort is caused on the other side and vice versa:
{\displaystyle \;{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-}}}\;GY\;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!|}}}\;\qquad {\text{or}}\qquad \;{\overset {\textstyle }{\underset {\textstyle }{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!|}}}\;GY\;{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-}}}\;}
=== Junctions ===
In a 0-junction, efforts are equal; in a 1-junction, flows are equal. Thus, with causal bonds, only one bond can cause the effort in a 0-junction and only one can cause the flow in a 1-junction. If the causality of one bond of a junction is known, the causality of the others follows. That one bond is called the strong bond.
{\displaystyle {\text{strong bond}}\rightarrow \;\dashv \!{\overset {\textstyle \top }{\underset {\textstyle \bot }{0}}}\!\dashv \qquad {\text{and}}\qquad {\text{strong bond}}\rightarrow \;\vdash \!{\overset {\textstyle \bot }{\underset {\textstyle \top }{1}}}\!\vdash }
In a nutshell, a 0-junction must have exactly one causal bar at the junction, while a 1-junction must have causal bars on all but one of its bonds.
=== Determining causality ===
In order to determine the causality of a bond graph, certain steps must be followed:
Draw causal bars for the sources
Draw the preferred causality for C and I bonds
Draw causal bars for 0- and 1-junctions, transformers, and gyrators
Draw causal bars for R bonds
If a causal conflict occurs, change a C or I bond to derivative causality
A walk-through of the steps is shown below.
{\displaystyle {\begin{matrix}S_{f}&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&TR&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&C_{5}\\&&\downharpoonleft &&^{r:1}&&\downharpoonleft &&\\&&C_{2}&&&&R_{6}&&\end{matrix}}}
The first step is to draw causality for the sources, of which there is only one. This results in the graph below.
{\displaystyle {\begin{matrix}S_{f}&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&TR&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&C_{5}\\&&\downharpoonleft &&^{r:1}&&\downharpoonleft &&\\&&C_{2}&&&&R_{6}&&\end{matrix}}}
The next step is to draw the preferred causality for the C bonds.
{\displaystyle {\begin{matrix}S_{f}&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&TR&{\overset {\textstyle }{\underset {\textstyle }{\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&C_{5}\\&&{\bar {\downharpoonleft }}&&^{r:1}&&\downharpoonleft &&\\&&C_{2}&&&&R_{6}&&\end{matrix}}}
Next apply the causality for the 0 and 1 junctions, transformers, and gyrators.
{\displaystyle {\begin{matrix}S_{f}&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&TR&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&C_{5}\\&&{\bar {\downharpoonleft }}&&^{r:1}&&{\underline {\downharpoonleft }}&&\\&&C_{2}&&&&R_{6}&&\end{matrix}}}
However, there is an issue with the 0-junction on the left. It has two causal bars at the junction, but a 0-junction requires exactly one. This was caused by putting {\textstyle C_{2}} in its preferred causality. The only way to fix this is to flip that causal bar, which produces a causal conflict. The corrected version of the graph is below, with {\textstyle \star } marking the causal conflict.
{\displaystyle {\begin{matrix}S_{f}&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&TR&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&0&{\overset {\textstyle }{\underset {\textstyle }{|\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightharpoonup }}}&C_{5}\\&&{\underline {\downharpoonleft }}\star &&^{r:1}&&{\underline {\downharpoonleft }}&&\\&&C_{2}&&&&R_{6}&&\end{matrix}}}
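The counting rule behind this walk-through — a 0-junction needs exactly one causal stroke at the junction, while a 1-junction needs strokes on all but one of its bonds — can be expressed as a tiny check. The function below is a hypothetical illustration of that rule, not a general causality-assignment algorithm:

```python
def junction_conflict(kind, strokes_at_junction, n_bonds):
    """True if the stroke count at this junction violates the rule."""
    if kind == "0":
        return strokes_at_junction != 1
    if kind == "1":
        return strokes_at_junction != n_bonds - 1
    raise ValueError(kind)

# The left 0-junction of the walk-through has three bonds. The TR side and
# C2's preferred causality each put a stroke at the junction -> two strokes,
# hence a conflict:
print(junction_conflict("0", strokes_at_junction=2, n_bonds=3))  # True
# Flipping C2's causal bar leaves one stroke and resolves the junction:
print(junction_conflict("0", strokes_at_junction=1, n_bonds=3))  # False
```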
== Converting from other systems ==
One of the main advantages of bond graphs is that once a bond graph has been obtained, the original energy domain no longer matters. Below are the steps to apply when converting from a given energy domain to a bond graph.
=== Electromagnetic ===
The steps for modeling an electromagnetic problem as a bond graph are as follows:
Place a 0-junction at each node
Insert sources, R, I, C, TR, and GY bonds at their own 1-junctions
Ground (both sides if a transformer or gyrator is present)
Assign power flow direction
Simplify
These steps are shown more clearly in the examples below.
=== Linear mechanical ===
The steps for modeling a linear mechanical problem as a bond graph are as follows:
Place 1-junctions for each distinct velocity (usually at a mass)
Insert R and C bonds at their own 0-junctions between the 1-junctions where they act
Insert sources and I bonds on the 1-junctions where they act
Assign power flow direction
Simplify
These steps are shown more clearly in the examples below.
=== Simplifying ===
The simplifying step is the same whether the system was electromagnetic or linear mechanical. The steps are:
Remove bonds of zero power (due to ground or zero velocity)
Remove 0- and 1-junctions with fewer than three bonds
Simplify parallel power
Combine 0-junctions in series
Combine 1-junctions in series
These steps are shown more clearly in the examples below.
=== Parallel power ===
Parallel power occurs when power flows in parallel paths through a bond graph. An example of parallel power is shown below.
Parallel power can be simplified by recalling the relationships between effort and flow for 0- and 1-junctions. To resolve parallel power, first write down all of the equations for the junctions. For the example provided, the equations can be seen below. (Note the numbered bond that each effort/flow variable represents.)
{\displaystyle {\begin{matrix}f_{1}=f_{2}=f_{3}&&e_{2}=e_{4}=e_{7}\\e_{1}=e_{2}+e_{3}&&f_{2}=f_{4}+f_{7}\\&&\\e_{3}=e_{5}=e_{6}&&f_{7}=f_{6}=f_{8}\\f_{3}=f_{5}+f_{6}&&e_{7}+e_{6}=e_{8}\end{matrix}}}
By manipulating these equations, you can find an equivalent set of 0- and 1-junctions that describes the parallel power.
For example, because {\textstyle e_{3}=e_{6}} and {\textstyle e_{2}=e_{7}}, you can substitute into the equation {\textstyle e_{1}=e_{2}+e_{3}} to get {\textstyle e_{1}=e_{6}+e_{7}}; and since {\textstyle e_{6}+e_{7}=e_{8}}, it follows that {\displaystyle e_{1}=e_{8}}. This equality of two effort variables is described by a 0-junction. Manipulating the other equations yields {\displaystyle f_{4}=f_{5}}, which describes the relationship of a 1-junction. Once the required relationships have been determined, the parallel power section can be redrawn with the new junctions. The result for the example is shown below.
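The reduction can also be verified numerically: pick values for the free variables, derive the rest from the junction equations for this example, and confirm the two claimed relations. A minimal sketch, with arbitrary chosen values:

```python
# Free choices (arbitrary), then propagate the junction equations.
e4 = e7 = e2 = 3.0           # e2 = e4 = e7   (0-junction)
e5 = e6 = e3 = 2.0           # e3 = e5 = e6   (0-junction)
f4, f7 = 1.0, 4.0

f2 = f4 + f7                 # f2 = f4 + f7
f1 = f3 = f2                 # f1 = f2 = f3   (1-junction)
f6 = f8 = f7                 # f7 = f6 = f8   (1-junction)
f5 = f3 - f6                 # f3 = f5 + f6
e1 = e2 + e3                 # e1 = e2 + e3
e8 = e7 + e6                 # e7 + e6 = e8

print(e1 == e8)              # True -> the outer efforts match a 0-junction
print(f4 == f5)              # True -> bonds 4 and 5 match a 1-junction
```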
=== Examples ===
==== Simple electrical system ====
A simple electrical circuit consisting of a voltage source, resistor, and capacitor in series.
The first step is to draw 0-junctions at all of the nodes:
{\displaystyle {\begin{matrix}&0&&0&\\&&&&\\&&&&\\&0&&0&\end{matrix}}}
The next step is to add all of the elements acting at their own 1-junction:
{\displaystyle {\begin{matrix}&&&&R&&&&\\&&&&|&&&&\\&&0&-&1&-&0&&\\&&|&&&&|&&\\S_{e}&-&1&&&&1&-&C\\&&|&&&&|&&\\&&{\underline {0}}&-&-&-&0&&\end{matrix}}}
The next step is to pick a ground. The ground is simply a 0-junction that is assumed to have no voltage. For this case, the ground will be the lower-left 0-junction, underlined above. The next step is to draw all of the arrows for the bond graph. The arrows on junctions should point towards ground (following a path similar to the current). For resistance, inertance, and compliance elements, the arrows always point towards the elements. The result of drawing the arrows can be seen below, with the ground 0-junction marked with a star.
Now that we have the Bond graph, we can start the process of simplifying it. The first step is to remove all the ground nodes. Both of the bottom 0-junctions can be removed, because they are both grounded. The result is shown below.
Next, the junctions with fewer than three bonds can be removed, because flow and effort pass through them unmodified; removing them simplifies the graph. The result can be seen below.
The final step is to apply causality to the bond graph. Applying causality was explained above. The final bond graph is shown below.
==== Advanced electrical system ====
A more advanced electrical system with a current source, resistors, capacitors, and a transformer
Following the steps with this circuit will result in the bond graph below, before it is simplified. The nodes marked with the star denote the ground.
Simplifying the bond graph will result in the image below.
Lastly, applying causality will result in the bond graph below. The bond with star denotes a causal conflict.
==== Simple linear mechanical ====
A simple linear mechanical system, consisting of a mass on a spring that is attached to a wall. The mass has some force being applied to it. An image of the system is shown below.
For a mechanical system, the first step is to place a 1-junction at each distinct velocity, in this case there are two distinct velocities, the mass and the wall. It is usually helpful to label the 1-junctions for reference. The result is below.
{\displaystyle {\begin{matrix}&&\\&&\\1_{\text{mass}}&&\\&&\\&&\\&&\\1_{\text{wall}}&&\end{matrix}}}
The next step is to draw the R and C bonds at their own 0-junctions between the 1-junctions where they act. For this example there is only one of these bonds, the C bond for the spring. It acts between the 1-junction representing the mass and the 1-junction representing the wall. The result is below.
{\displaystyle {\begin{matrix}&&\\&&\\1_{\text{mass}}&&\\|&&\\0&-&C:{\frac {1}{k}}\\|&&\\1_{\text{wall}}&&\end{matrix}}}
Next, add the sources and I bonds on the 1-junctions where they act. There is one source, the source of effort (the applied force), and one I bond, the inertia of the mass, both of which act on the mass's 1-junction. The result is shown below.
{\displaystyle {\begin{matrix}S_{e}:F(t)&&\\|&&\\1_{\text{mass}}&-&I:m\\|&&\\0&-&C:{\frac {1}{k}}\\|&&\\1_{\text{wall}}&&\end{matrix}}}
Next, power flow is assigned. As in the electrical examples, power should flow towards ground, in this case the wall's 1-junction. Exceptions are the R, C, and I bonds, which always point towards their elements. The resulting bond graph is below.
Now that the bond graph has been generated, it can be simplified. Because the wall is grounded (has zero velocity), its junction can be removed. As a result, the 0-junction carrying the C bond can also be removed, since it is then left with fewer than three bonds. The simplified bond graph can be seen below.
The last step is to apply causality, the final bond graph can be seen below.
==== Advanced linear mechanical ====
A more advanced linear mechanical system can be seen below.
Just like the above example, the first step is to place 1-junctions at each of the distinct velocities. In this example there are three distinct velocities: Mass 1, Mass 2, and the wall. Then connect all of the bonds and assign power flow. The bond graph can be seen below.
Next you start the process of simplifying the bond graph, by removing the 1-junction of the wall, and removing junctions with less than three bonds. The bond graph can be seen below.
There is parallel power in the bond graph. Solving parallel power was explained above. The result of solving it can be seen below.
Lastly, apply causality, the final bond graph can be seen below.
== State equations ==
Once a bond graph is complete, it can be used to generate the state-space representation of the system. State-space representation is especially powerful, as it allows a complex higher-order differential system to be solved as a system of first-order equations instead. The general form of the state equation is
{\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t)}
where {\textstyle \mathbf {x} (t)} is a column matrix of the state variables (the unknowns of the system), {\textstyle {\dot {\mathbf {x} }}(t)} is the time derivative of the state variables, {\textstyle \mathbf {u} (t)} is a column matrix of the inputs of the system, and {\textstyle \mathbf {A} } and {\textstyle \mathbf {B} } are matrices of constants based on the system. The state variables of a system are the {\textstyle q(t)} and {\textstyle p(t)} values for each C and I bond without a causal conflict. Each I bond gets a {\textstyle p(t)} while each C bond gets a {\textstyle q(t)}.
For example, if you have the following bond graph, you would have the following {\textstyle {\dot {\mathbf {x} }}(t)}, {\textstyle \mathbf {x} (t)}, and {\textstyle \mathbf {u} (t)} matrices:
{\displaystyle {\dot {\mathbf {x} }}(t)={\begin{bmatrix}{\dot {p}}_{3}(t)\\{\dot {q}}_{6}(t)\end{bmatrix}}\qquad {\text{and}}\qquad \mathbf {x} (t)={\begin{bmatrix}p_{3}(t)\\q_{6}(t)\end{bmatrix}}\qquad {\text{and}}\qquad \mathbf {u} (t)={\begin{bmatrix}e_{1}(t)\end{bmatrix}}}
The matrices {\textstyle \mathbf {A} } and {\textstyle \mathbf {B} } are solved by determining the relationship between the state variables and their respective elements, as described in the tetrahedron of state. The first step in solving the state equations is to list all of the governing equations for the bond graph. The table below shows the relationship between bonds and their governing equations.
"♦" denotes preferred causality.
For the example provided, the governing equations are the following:
1. {\textstyle e_{1}={\text{input}}}
2. {\textstyle e_{3}=e_{1}-e_{2}-e_{4}}
3. {\textstyle f_{1}=f_{2}=f_{4}=f_{3}}
4. {\textstyle e_{2}=R_{2}f_{2}}
5. {\textstyle f_{3}={\frac {1}{I_{3}}}\int e_{3}\,dt={\frac {1}{I_{3}}}p_{3}}
6. {\textstyle f_{5}=f_{4}\cdot r}
7. {\textstyle e_{4}=e_{5}\cdot r}
8. {\textstyle e_{5}=e_{7}=e_{6}}
9. {\textstyle f_{6}=f_{5}-f_{7}}
10. {\textstyle e_{6}={\frac {1}{C_{6}}}\int f_{6}\,dt={\frac {1}{C_{6}}}q_{6}}
11. {\textstyle f_{7}={\frac {1}{R_{7}}}e_{7}}
These equations can be manipulated to yield the state equations. For this example, you are trying to find equations that relate {\textstyle {\dot {p}}_{3}(t)} and {\textstyle {\dot {q}}_{6}(t)} in terms of {\textstyle p_{3}(t)}, {\textstyle q_{6}(t)}, and {\textstyle e_{1}(t)}.
To start, recall from the tetrahedron of state that {\textstyle {\dot {p}}_{3}(t)=e_{3}(t)}. Starting from equation 2, {\displaystyle e_{3}=e_{1}-e_{2}-e_{4}}.
{\displaystyle e_{2}} can be substituted using equation 4, while in equation 4, {\displaystyle f_{2}} can be replaced by {\displaystyle f_{3}} due to equation 3, which in turn can be replaced using equation 5. {\displaystyle e_{4}} can likewise be replaced using equation 7, in which {\displaystyle e_{5}} can be replaced with {\displaystyle e_{6}} and then with equation 10. Making these substitutions yields the first state equation, which is shown below.
{\displaystyle {\dot {p}}_{3}(t)=e_{3}(t)=e_{1}(t)-{\frac {R_{2}}{I_{3}}}p_{3}(t)-{\frac {r}{C_{6}}}q_{6}(t)}
The second state equation can likewise be solved, by recalling that {\textstyle {\dot {q}}_{6}(t)=f_{6}(t)}. The second state equation is shown below.
{\displaystyle {\dot {q}}_{6}(t)=f_{6}(t)={\frac {r}{I_{3}}}p_{3}(t)-{\frac {1}{R_{7}\cdot C_{6}}}q_{6}(t)}
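As a sanity check, the chain of substitutions can be verified numerically: evaluate the governing equations step by step and compare the result with the closed-form state equations. The following Python sketch does this; the parameter and state values are arbitrary illustrative numbers, not taken from the example.

```python
# Numeric check of the derived state equations. All numeric values below
# are illustrative assumptions, not values from the article's example.
R2, I3, r, C6, R7 = 2.0, 0.5, 3.0, 0.1, 4.0   # element parameters
p3, q6, e1 = 1.2, 0.7, 5.0                    # states and input

# Evaluate the governing equations step by step (numbering as in the list above).
f3 = p3 / I3            # eq. 5
f2 = f4 = f3            # eq. 3
e2 = R2 * f2            # eq. 4
e6 = q6 / C6            # eq. 10
e5 = e7 = e6            # eq. 8
e4 = r * e5             # eq. 7
f5 = r * f4             # eq. 6
f7 = e7 / R7            # eq. 11

p3_dot = e1 - e2 - e4   # eq. 2: ṗ3 = e3
q6_dot = f5 - f7        # eq. 9: q̇6 = f6

# Compare with the closed-form state equations derived in the text.
assert abs(p3_dot - (e1 - (R2 / I3) * p3 - (r / C6) * q6)) < 1e-12
assert abs(q6_dot - ((r / I3) * p3 - q6 / (R7 * C6))) < 1e-12
```

Any consistent set of parameter values should pass both checks, since the substitutions are purely algebraic.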
Both equations can be further rearranged into matrix form; the result is shown below.
{\displaystyle {\begin{bmatrix}{\dot {p}}_{3}(t)\\{\dot {q}}_{6}(t)\end{bmatrix}}={\begin{bmatrix}-{\frac {R_{2}}{I_{3}}}&-{\frac {r}{C_{6}}}\\{\frac {r}{I_{3}}}&-{\frac {1}{R_{7}\cdot C_{6}}}\end{bmatrix}}{\begin{bmatrix}p_{3}(t)\\q_{6}(t)\end{bmatrix}}+{\begin{bmatrix}1\\0\end{bmatrix}}{\begin{bmatrix}e_{1}(t)\end{bmatrix}}}
At this point the equations can be treated as any other state-space representation problem.
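Once in this form, the system can be integrated numerically like any other state-space model. The sketch below applies a forward-Euler step to the A and B matrices above for a unit step input; the element values (R2, I3, r, C6, R7), time step, and simulation length are illustrative assumptions.

```python
import numpy as np

# Forward-Euler simulation of the example state-space system.
# Parameter values are illustrative, not taken from the article.
R2, I3, r, C6, R7 = 1.0, 0.5, 2.0, 0.25, 3.0

A = np.array([[-R2 / I3, -r / C6],
              [r / I3, -1.0 / (R7 * C6)]])
B = np.array([[1.0],
              [0.0]])

x = np.zeros((2, 1))          # state [p3; q6], initially at rest
dt, steps = 1e-3, 5000        # 5 seconds of simulated time
for _ in range(steps):
    u = np.array([[1.0]])     # step input e1(t) = 1
    x = x + dt * (A @ x + B @ u)

print(x.ravel())  # approaches the steady state -A^{-1} B u
```

For a stable A, the simulated state settles at the steady-state solution of A x + B u = 0.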
== International conferences on bond graph modeling (ECMS and ICBGM) ==
A bibliography on bond graph modeling may be extracted from the following conferences:
ECMS-2013 27th European Conference on Modelling and Simulation, May 27–30, 2013, Ålesund, Norway
ECMS-2008 22nd European Conference on Modelling and Simulation, June 3–6, 2008 Nicosia, Cyprus
ICBGM-2007: 8th International Conference on Bond Graph Modeling And Simulation, January 15–17, 2007, San Diego, California, U.S.A.
ECMS-2006 20th European Conference on Modelling and Simulation, May 28–31, 2006, Bonn, Germany
IMAACA-2005 International Mediterranean Modeling Multiconference
ICBGM-2005 International Conference on Bond Graph Modeling and Simulation, January 23–27, 2005, New Orleans, Louisiana, U.S.A. – Papers
ICBGM-2003 International Conference on Bond Graph Modeling and Simulation (ICBGM'2003) January 19–23, 2003, Orlando, Florida, USA – Papers
14th European Simulation Symposium, October 23–26, 2002, Dresden, Germany
ESS'2001 13th European Simulation symposium, Marseilles, France October 18–20, 2001
ICBGM-2001 International Conference on Bond Graph Modeling and Simulation (ICBGM 2001), Phoenix, Arizona U.S.A.
European Simulation Multi-conference, May 23–26, 2000, Gent, Belgium
11th European Simulation symposium, October 26–28, 1999 Castle, Friedrich-Alexander University, Erlangen-Nuremberg, Germany
ICBGM-1999 International Conference on Bond Graph Modeling and Simulation January 17–20, 1999 San Francisco, California
ESS-97 9th European Simulation Symposium and Exhibition Simulation in Industry, Passau, Germany, October 19–22, 1997
ICBGM-1997 3rd International Conference on Bond Graph Modeling And Simulation, January 12–15, 1997, Sheraton-Crescent Hotel, Phoenix, Arizona
11th European Simulation Multiconference Istanbul, Turkey, June 1–4, 1997
ESM-1996 10th annual European Simulation Multiconference Budapest, Hungary, June 2–6, 1996
ICBGM-1995 Int. Conf. on Bond Graph Modeling and Simulation (ICBGM’95), January 15–18, 1995, Las Vegas, Nevada.
== See also ==
20-sim simulation software based on the bond graph theory
AMESim simulation software based on the bond graph theory
Hybrid bond graph
Coenergy
== References ==
== Further reading ==
Kypuros, Javier (2013). System dynamics and control with bond graph modeling. Boca Raton: Taylor & Francis. doi:10.1201/b14676. ISBN 978-1-4665-6075-8.
Paynter, Henry M. (1960). Analysis and design of engineering systems. M.I.T. Press. ISBN 0-262-16004-8.
Karnopp, Dean C.; Margolis, Donald L.; Rosenberg, Ronald C. (1990). System dynamics: a unified approach. New York: John Wiley & Sons. ISBN 0-471-62171-4.
Thoma, Jean Ulrich (1975). Bond graphs: introduction and applications. Oxford: Pergamon Press. ISBN 0-08-018882-6.
Gawthrop, Peter J.; Smith, Lorcan P. S. (1996). Metamodelling: bond graphs and dynamic systems. London: Prentice Hall. ISBN 0-13-489824-9.
Brown, Forbes T. (2007). Engineering system dynamics – a unified graph-centered approach. Boca Raton: Taylor & Francis. ISBN 978-0-8493-9648-9.
Mukherjee, Amalendu; Karmakar, Ranjit (2000). Modelling and simulation of engineering systems through bondgraphs. Boca Raton: CRC Press. ISBN 978-0-8493-0982-3.
Gawthrop, P.J.; Ballance, D.J. (1999). "Chapter 2: Symbolic computation for manipulation of hierarchical bond graphs". In Munro, N. (ed.). Symbolic Methods in Control System Analysis and Design. London: Institution of Electrical Engineers. pp. 23-52. ISBN 0-85296-943-0.
Borutzky, Wolfgang (2010). Bond Graph Methodology. London: Springer. doi:10.1007/978-1-84882-882-7. ISBN 978-1-84882-881-0.
http://www.site.uottawa.ca/~rhabash/ESSModelFluid.pdf Explains modeling the bond graph in the fluid domain
http://www.dartmouth.edu/~sullivan/22files/Fluid_sys_anal_w_chart.pdf Explains modeling the bond graph in the fluid domain
== External links ==
Simscape Official MATLAB/Simulink add-on library for graphical bond graph programming
BG V.2.1 Freeware MATLAB/Simulink add-on library for graphical bond graph programming
A proportional–integral–derivative controller (PID controller or three-term controller) is a feedback-based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value (setpoint or SP) with the actual value of the system (process variable or PV). The difference between these two values is called the error value, denoted as
{\displaystyle e(t)}.
It then applies corrective actions automatically to bring the PV to the same value as the SP using three methods: The proportional (P) component responds to the current error value by producing an output that is directly proportional to the magnitude of the error. This provides immediate correction based on how far the system is from the desired setpoint. The integral (I) component, in turn, considers the cumulative sum of past errors to address any residual steady-state errors that persist over time, eliminating lingering discrepancies. Lastly, the derivative (D) component predicts future error by assessing the rate of change of the error, which helps to mitigate overshoot and enhance system stability, particularly when the system undergoes rapid changes. The PID output signal can directly control actuators through voltage, current, or other modulation methods, depending on the application. The PID controller reduces the likelihood of human error and improves automation.
A common example is a vehicle’s cruise control system. For instance, when a vehicle encounters a hill, its speed will decrease if the engine power output is kept constant. The PID controller adjusts the engine's power output to restore the vehicle to its desired speed, doing so efficiently with minimal delay and overshoot.
The theoretical foundation of PID controllers dates back to the early 1920s with the development of automatic steering systems for ships. This concept was later adopted for automatic process control in manufacturing, first appearing in pneumatic actuators and evolving into electronic controllers. PID controllers are widely used in numerous applications requiring accurate, stable, and optimized automatic control, such as temperature regulation, motor speed control, and industrial process management.
== Fundamental operation ==
The distinguishing feature of the PID controller is the ability to use the three control terms of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an error value
{\displaystyle e(t)} as the difference between a desired setpoint {\displaystyle {\text{SP}}=r(t)} and a measured process variable {\displaystyle {\text{PV}}=y(t)}: {\displaystyle e(t)=r(t)-y(t)}. It applies a correction based on proportional, integral, and derivative terms, and attempts to minimize the error over time by adjusting a control variable {\displaystyle u(t)}, such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.
The PID controller directly generates a continuous control signal based on error, without discrete modulation.
In this model:
Term P is proportional to the current value of the SP − PV error
{\displaystyle e(t)}. For example, if the error is large, the control output will be proportionately large by using the gain factor "Kp". Using proportional control alone will result in an error between the setpoint and the process value, because the controller requires an error to generate the proportional output response. In steady-state process conditions an equilibrium is reached, with a steady SP − PV "offset".
Term I accounts for past values of the SP − PV error and integrates them over time to produce the I term. For example, if there is a residual SP − PV error after the application of proportional control, the integral term seeks to eliminate the residual error by adding a control effect due to the historic cumulative value of the error. When the error is eliminated, the integral term will cease to grow. This will result in the proportional effect diminishing as the error decreases, but this is compensated for by the growing integral effect.
Term D is a best estimate of the future trend of the SP − PV error, based on its current rate of change. It is sometimes called "anticipatory control", as it is effectively seeking to reduce the effect of the SP − PV error by exerting a control influence generated by the rate of error change. The more rapid the change, the greater the controlling or damping effect.
Tuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response.
Control action – The mathematical model and practical loop above both use a direct control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual − desired) but is in fact the correction needed (desired − actual). The system is called reverse acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop were 100–0% valve opening for 0–100% control output, the controller action would have to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.
=== Control function ===
The overall control function is
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}
where {\displaystyle K_{\text{p}}}, {\displaystyle K_{\text{i}}}, and {\displaystyle K_{\text{d}}}, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D).
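This control function has a direct discrete-time approximation: a running sum for the integral and a backward difference for the derivative. The sketch below is a minimal illustration rather than a production implementation (it omits practical refinements such as integral anti-windup and derivative filtering, discussed later in the article); the gains, time step, and the first-order plant used to exercise it are illustrative assumptions.

```python
# Minimal discrete-time PID in the ideal (parallel) form:
# u = Kp*e + Ki*∫e dτ + Kd*de/dt, with rectangular-rule integration
# and a backward-difference derivative. Values are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement                    # e(t) = SP - PV
        self.integral += error * self.dt                  # accumulate ∫e dτ
        derivative = (error - self.prev_error) / self.dt  # approximate de/dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive an assumed first-order plant (dPV/dt = u - PV) toward SP = 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
pv = 0.0
for _ in range(2000):                 # 20 seconds of simulated time
    u = pid.update(1.0, pv)
    pv += 0.01 * (u - pv)
print(round(pv, 3))
```

With integral action present, the loop settles with the process variable at the setpoint, illustrating the elimination of steady-state error described below.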
=== Standard form ===
In the standard form of the equation (see later in article),
{\displaystyle K_{\text{i}}} and {\displaystyle K_{\text{d}}} are respectively replaced by {\displaystyle K_{\text{p}}/T_{\text{i}}} and {\displaystyle K_{\text{p}}T_{\text{d}}}; the advantage of this is that {\displaystyle T_{\text{i}}} and {\displaystyle T_{\text{d}}} have an understandable physical meaning, representing an integration time and a derivative time respectively. {\displaystyle K_{\text{p}}T_{\text{d}}} is the time constant with which the controller will attempt to approach the setpoint, while {\displaystyle K_{\text{p}}/T_{\text{i}}} determines how long the controller will tolerate the output being consistently above or below the setpoint.
{\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right)}
where {\displaystyle T_{\text{i}}={K_{\text{p}} \over K_{\text{i}}}} is the integration time constant, and {\displaystyle T_{\text{d}}={K_{\text{d}} \over K_{\text{p}}}} is the derivative time constant.
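The relations Ki = Kp/Ti and Kd = Kp·Td make conversion between the two parameterizations a pair of one-line formulas. A small sketch (the numeric values are illustrative):

```python
# Convert between the parallel gains (Kp, Ki, Kd) and the standard-form
# parameters (Kp, Ti, Td), using Ki = Kp/Ti and Kd = Kp*Td.
def standard_to_parallel(kp, ti, td):
    return kp, kp / ti, kp * td      # (Kp, Ki, Kd)

def parallel_to_standard(kp, ki, kd):
    return kp, kp / ki, kd / kp      # (Kp, Ti, Td)

# Round-trip check with illustrative values.
kp, ki, kd = standard_to_parallel(kp=4.0, ti=2.0, td=0.5)
assert (kp, ki, kd) == (4.0, 2.0, 2.0)
assert parallel_to_standard(kp, ki, kd) == (4.0, 2.0, 0.5)
```

Note that the conversion assumes Kp and Ki are nonzero; a pure P or PD controller has no finite integration time Ti.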
=== Selective use of control terms ===
Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value.
=== Applicability ===
The use of the PID algorithm does not guarantee optimal control of the system or its control stability (see § Limitations, below). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation is required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.
== History ==
=== Origins ===
The centrifugal governor was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed.
With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt's self-designed "conical pendulum" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept.
Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper On Governors. He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. The problem was examined further in 1874 by Edward Routh, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria.
In subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor.
About this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868.
Another early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically-based.
It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.
His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Trials were carried out on the USS New Mexico, with the controllers controlling the angular velocity (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve.
The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others in the 1930s.
=== Industrial control ===
The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback. This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows. The integral term was called Reset. Later the derivative term was added by a further bellows and adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–1.0 bar) was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0-100%.
In the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10-50 mA and 4–20 mA current loop signals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.
Most modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers.
=== Electronic analog controllers ===
Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers.
== Control loop example ==
Consider a robotic arm that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, external forces on the arm such as a load to lift or work to be done on an external object.
The sensed position is the process variable (PV).
The desired position is called the setpoint (SP).
The difference between the PV and SP is the error (e), which quantifies whether the arm is too low or too high and by how much.
The input to the process (the electric current in the motor) is the output from the PID controller. It is called either the manipulated variable (MV) or the control variable (CV).
The PID controller continuously adjusts the input current to achieve smooth motion.
By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).
=== Proportional ===
The obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but a smaller force if the error is low on the upside. That's where the integral and derivative terms play their part.
=== Integral ===
An integral term increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both weakly reacting at the start (because the action would be small at the beginning, depending on time to become significant) and more aggressive at the end (the action increases as long as the error is positive, even if the error is near zero).
Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If it decreases, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable.
=== Derivative ===
A derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but rather the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).
=== Control damping ===
In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.
=== Response to disturbances ===
If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.
=== Applications ===
In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, force, feed rate, flow rate, chemical composition (component concentrations), weight, position, speed, and practically every other variable for which a measurement exists.
== Controller theory ==
This section describes the parallel or non-interacting form of the PID controller. For other forms please see § Alternative nomenclature and forms.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining
{\displaystyle u(t)}
as the controller output, the final form of the PID algorithm is
{\displaystyle u(t)=\mathrm {MV} (t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\frac {de(t)}{dt}},}
where
{\displaystyle K_{\text{p}}} is the proportional gain, a tuning parameter,
{\displaystyle K_{\text{i}}} is the integral gain, a tuning parameter,
{\displaystyle K_{\text{d}}} is the derivative gain, a tuning parameter,
{\displaystyle e(t)=\mathrm {SP} -\mathrm {PV} (t)} is the error (SP is the setpoint, and PV(t) is the process variable),
{\displaystyle t} is the time or instantaneous time (the present), and
{\displaystyle \tau } is the variable of integration (taking values from time 0 to the present {\displaystyle t}).
Equivalently, the transfer function in the Laplace domain of the PID controller is
{\displaystyle L(s)=K_{\text{p}}+K_{\text{i}}/s+K_{\text{d}}s={K_{\text{d}}s^{2}+K_{\text{p}}s+K_{\text{i}} \over s}}
where {\displaystyle s} is the complex angular frequency.
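The sum form and the single-fraction form of L(s) are algebraically identical, which can be checked numerically at any nonzero complex frequency. The gains and test frequency below are illustrative:

```python
# Evaluate the PID transfer function L(s) = Kp + Ki/s + Kd*s at a complex
# frequency and confirm it matches the single-fraction form
# (Kd*s^2 + Kp*s + Ki)/s. Gains and frequency are illustrative.
Kp, Ki, Kd = 3.0, 1.5, 0.2
s = 2j                      # s = jω with ω = 2 rad/s

L_sum = Kp + Ki / s + Kd * s
L_frac = (Kd * s**2 + Kp * s + Ki) / s
assert abs(L_sum - L_frac) < 1e-12
```

The Kd·s term also makes visible why an ideal derivative is not causal: the transfer function's gain grows without bound at high frequency, which is why practical implementations filter the derivative term (see below).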
=== Proportional term ===
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain constant.
The proportional term is given by
{\displaystyle P_{\text{out}}=K_{\text{p}}e(t).}
A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.
==== Steady-state error ====
The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to the proportional gain. SSE may be mitigated by adding a compensating bias term to the setpoint and output, or corrected dynamically by adding an integral term.
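As a concrete illustration, consider proportional-only control of a plant with static gain K in a unity-feedback loop: the loop settles where PV = K·Kp·e with e = SP − PV, leaving a residual offset SSE = SP/(1 + Kp·K) for a step setpoint. The sketch below assumes this simple configuration; the function name and values are illustrative.

```python
# Residual offset of a proportional-only controller in a unity-feedback loop
# around a static plant of gain K: SSE = SP / (1 + Kp*K).
# Assumed simple configuration; names and values are illustrative.
def sse_proportional(setpoint, kp, process_gain=1.0):
    return setpoint / (1.0 + kp * process_gain)

# Higher proportional gain shrinks, but never eliminates, the offset.
assert abs(sse_proportional(1.0, kp=9.0) - 0.1) < 1e-12
assert sse_proportional(1.0, kp=99.0) < sse_proportional(1.0, kp=9.0)
```

This is why raising Kp alone cannot remove the offset: doing so only divides it by a larger factor, while the integral term described next drives it to zero.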
=== Integral term ===
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output.
The integral term is given by
{\displaystyle I_{\text{out}}=K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau .}
The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning).
=== Derivative term ===
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain Kd, which sets the magnitude of the derivative term's contribution to the overall control action.
The derivative term is given by
{\displaystyle D_{\text{out}}=K_{\text{d}}{\frac {de(t)}{dt}}.}
Derivative action predicts system behavior and thus improves settling time and stability of the system. An ideal derivative is not causal, so implementations of PID controllers include additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers – because of its variable impact on system stability in real-world applications.
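Taken together, the three terms above form the parallel PID control law. A minimal sketch in Python follows; the class name, gains, and timestep are illustrative assumptions, not from the article:

```python
class PID:
    """Parallel-form PID: u = Kp*e + Ki*integral(e) + Kd*de/dt (illustrative sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # accumulated error, state of the integral term
        self.prev_error = 0.0   # previous error, used by the derivative term

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # I: sum of error over time
        derivative = (error - self.prev_error) / self.dt  # D: slope of the error
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# With Ki = Kd = 0 this reduces to a pure proportional controller.
pid = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
u = pid.update(setpoint=1.0, measurement=0.25)  # Kp * e = 2.0 * 0.75 = 1.5
```

Note the derivative here acts on the raw error; as discussed below, practical implementations low-pass filter it or base it on the process variable instead.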
== Loop tuning ==
Tuning a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.
Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control. Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.
Some processes have a degree of nonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions).
=== Stability ===
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by excess gain, particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired.
Mathematically, the origins of instability can be seen in the Laplace domain.
The closed-loop transfer function is
{\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}},}
where {\displaystyle K(s)} is the PID transfer function and {\displaystyle G(s)} is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some {\displaystyle s}. This happens in situations where {\displaystyle K(s)G(s)=-1}, i.e. where {\displaystyle |K(s)G(s)|=1} with a 180° phase shift. Stability is guaranteed when {\displaystyle |K(s)G(s)|<1} for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.
=== Optimal behavior ===
The optimal behavior on a process change or setpoint change varies depending on the application.
Two basic requirements are regulation (disturbance rejection – staying at a given setpoint) and command tracking (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.
=== Overview of tuning methods ===
There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.
The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.
=== Manual tuning ===
If the system must remain online, one tuning method is to first set the {\displaystyle K_{i}} and {\displaystyle K_{d}} values to zero. Increase {\displaystyle K_{p}} until the output of the loop oscillates; then set {\displaystyle K_{p}} to approximately half that value for a "quarter amplitude decay"-type response. Then increase {\displaystyle K_{i}} until any offset is corrected in sufficient time for the process, but not so far that instability occurs. Finally, increase {\displaystyle K_{d}}, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much {\displaystyle K_{p}} causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a {\displaystyle K_{p}} setting significantly less than half of the {\displaystyle K_{p}} setting that was causing oscillation.
=== Ziegler–Nichols method ===
Another heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the {\displaystyle K_{i}} and {\displaystyle K_{d}} gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain {\displaystyle K_{u}}, at which the output of the loop starts to oscillate constantly. {\displaystyle K_{u}} and the oscillation period {\displaystyle T_{u}} are then used to set the gains.
The oscillation frequency is often measured instead, and taking the reciprocal in each multiplication by the period yields the same result.
These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative gains {\displaystyle K_{i}} and {\displaystyle K_{d}} are dependent on the oscillation period {\displaystyle T_{u}}.
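The table of Ziegler–Nichols multipliers is not reproduced above. As an illustration, the classic published rules for a full PID (Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8) can be converted to parallel-form gains as follows; the function name is an assumption:

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols rules: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8,
    converted to parallel-form gains Ki = Kp/Ti, Kd = Kp*Td."""
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0
    return kp, kp / ti, kp * td   # (Kp, Ki, Kd)

kp, ki, kd = ziegler_nichols_pid(ku=10.0, tu=2.0)  # -> (6.0, 6.0, 1.5)
```

The Ziegler–Nichols rules deliberately tune aggressively (quarter-amplitude decay), so the resulting gains are usually a starting point rather than a final setting.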
=== Cohen–Coon parameters ===
This method was developed in 1953 and is based on a first-order + time delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters was developed to yield a closed-loop response with a decay ratio of {\displaystyle {\tfrac {1}{4}}}. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
=== Relay (Åström–Hägglund) method ===
Published in 1984 by Karl Johan Åström and Tore Hägglund, the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. The output is switched (as if by a relay, hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.
Specifically, the ultimate period {\displaystyle T_{u}} is assumed to be equal to the observed period, and the ultimate gain is computed as {\displaystyle K_{u}=4b/\pi a,} where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it.
There are numerous variants on the relay method.
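A sketch of the relay-test computation described above, using Ku = 4b/(πa); the function and variable names are illustrative:

```python
import math

def relay_estimates(a, b, observed_period):
    """Relay (Astrom-Hagglund) test results -> ultimate gain and period.
    a: amplitude of the process-variable oscillation,
    b: amplitude of the control-output step that caused it."""
    ku = 4.0 * b / (math.pi * a)   # Ku = 4b / (pi * a)
    tu = observed_period           # ultimate period taken as the observed period
    return ku, tu

ku, tu = relay_estimates(a=0.5, b=1.0, observed_period=4.0)
```

The pair (Ku, Tu) is then fed into the Ziegler–Nichols rules exactly as if it had been found by raising the proportional gain to the stability limit.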
=== First-order model with dead time ===
The transfer function for a first-order process with dead time is
{\displaystyle y(s)={\frac {k_{\text{p}}e^{-\theta s}}{\tau _{\text{p}}s+1}}u(s),}
where kp is the process gain, τp is the time constant, θ is the dead time, and u(s) is a step change input. Converting this transfer function to the time domain results in
{\displaystyle y(t)=k_{\text{p}}\Delta u\left(1-e^{-(t-\theta )/\tau _{\text{p}}}\right),}
using the same parameters found above.
It is important when using this method to apply a large enough step-change input that the output can be measured; however, too large a step change can affect the process stability. A larger step change also helps ensure that the measured output change is due to the step rather than to a disturbance (for best results, try to minimize disturbances when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain (kp) is equal to the change in output divided by the change in input. The dead time θ is the amount of time between when the step change occurred and when the output first changed. The time constant (τp) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants.
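The 63.2% procedure can be sketched on sampled step-response data as follows. The names are illustrative assumptions, as is the 1% movement threshold used to detect the dead time:

```python
import math

def fit_foptd(t, y, u_step):
    """Estimate (gain, dead time, time constant) with the 63.2% method.
    Assumes a monotone rising step response starting at y[0]."""
    y0, y_final = y[0], y[-1]
    span = y_final - y0
    kp = span / u_step                                                 # process gain
    theta = next(ti for ti, yi in zip(t, y) if yi - y0 > 0.01 * span)  # dead time
    t63 = next(ti for ti, yi in zip(t, y) if yi - y0 >= 0.632 * span)  # 63.2% crossing
    tau = t63 - theta                                                  # time constant
    return kp, theta, tau

# Synthetic FOPDT response: kp = 2, theta = 1, tau = 3, unit step input
ts = [i * 0.01 for i in range(3001)]
ys = [0.0 if ti < 1.0 else 2.0 * (1.0 - math.exp(-(ti - 1.0) / 3.0)) for ti in ts]
kp_est, theta_est, tau_est = fit_foptd(ts, ys, u_step=1.0)
```

On the synthetic data the estimates land close to the true (2, 1, 3), with small biases from the sampling grid and the movement threshold.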
=== Tuning software ===
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients.
Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.
Advances in automated PID loop tuning software also deliver algorithms for tuning PID loops in a dynamic or non-steady-state (NSS) scenario. The software models the dynamics of a process through a disturbance and calculates PID control parameters in response.
== Limitations ==
While PID controllers are applicable to many control problems and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide optimal control. The fundamental difficulty with PID control is that it is a feedback control system with constant parameters and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for an observer that has no model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer.
PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.
The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.
=== Linearity and symmetry ===
PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded.
A nonlinear valve in a flow control application, for instance, will result in variable loop sensitivity that requires damping to prevent instability. One solution is to include a model of the valve's nonlinearity in the control algorithm to compensate for this.
An asymmetric application is, for example, temperature control in HVAC systems that use only active heating (via a heating element), with only passive cooling available. Overshoot of rising temperature can only be corrected slowly; active cooling is not available to force the temperature downward as a function of the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the setpoint. The inherent degradation of control quality in this application could be solved by applying active cooling.
=== Noise in derivative term ===
A problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as a PI controller.
== Modifications to the algorithm ==
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.
=== Integral windup ===
One common problem resulting from ideal PID implementations is integral windup. Following a large change in setpoint, the integral term can accumulate an error larger than the maximal value for the regulation variable (windup); the system then overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by:
Disabling the integration until the PV has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds
Back-calculating the integral term to constrain the regulator output within feasible bounds.
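The second mitigation, clamping the accumulated integral to pre-determined bounds, can be sketched as follows; all gains and limits here are illustrative assumptions:

```python
def pi_step(error, integral, kp=1.0, ki=0.5, dt=0.1, out_min=0.0, out_max=1.0):
    """One PI update with the integral clamped to pre-determined bounds."""
    integral += error * dt
    # Clamp so that the integral contribution alone stays within the output range.
    integral = max(out_min / ki, min(out_max / ki, integral))
    u = kp * error + ki * integral
    u = max(out_min, min(out_max, u))   # saturate the regulator output
    return u, integral

# A huge error no longer winds the integral up past its useful range.
u, i_state = pi_step(error=100.0, integral=0.0)
```

With the clamp, the controller comes off saturation as soon as the error reverses, instead of first having to "unwind" a large stored integral.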
=== Overshooting from known disturbances ===
For example, suppose a PID loop is used to control the temperature of an electric resistance furnace and the system has stabilized. When the door is opened and something cold is put into the furnace, the temperature drops below the setpoint. The integral function of the controller tends to compensate for the error by introducing another error in the positive direction. This overshoot can be avoided by freezing the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace.
=== PI controller ===
A PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.
The controller output is given by
{\displaystyle K_{P}\Delta +K_{I}\int \Delta \,dt}
where {\displaystyle \Delta } is the error or deviation of the actual measured value (PV) from the setpoint (SP):
{\displaystyle \Delta =SP-PV.}
A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators:
{\displaystyle C={\frac {G(1+\tau s)}{\tau s}}}
where {\displaystyle G=K_{P}} is the proportional gain and {\displaystyle {\frac {G}{\tau }}=K_{I}} is the integral gain.
Setting a value for {\displaystyle G} is often a trade-off between decreasing overshoot and increasing settling time.
The lack of derivative action may make the system more steady in the steady state in the case of noisy data. This is because derivative action is more sensitive to higher-frequency terms in the inputs.
Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.
=== Deadband ===
Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.
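The deadband logic can be sketched as a small wrapper around the computed output; the 2% band is an illustrative value:

```python
def apply_deadband(u_new, u_held, band=0.02):
    """Hold the previous output unless the calculated change leaves the deadband."""
    return u_new if abs(u_new - u_held) > band else u_held

held = apply_deadband(0.51, 0.50)   # 1% change: suppressed, valve stays at 0.50
moved = apply_deadband(0.60, 0.50)  # 10% change: passed through, valve moves
```

Small corrections are absorbed by the band, so the valve is actuated far less often at the cost of a small residual control error.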
=== Setpoint step change ===
The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate some of the following modifications:
Setpoint ramping
In this modification, the setpoint is gradually moved from its old value to a newly specified value using a linear or first-order differential ramp function. This avoids the discontinuity present in a simple step change.
Derivative of the process variable
In this case the PID controller measures the derivative of the measured PV, rather than the derivative of the error. This quantity is always continuous (i.e., never has a step change as a result of changed setpoint). This modification is a simple case of setpoint weighting.
Setpoint weighting
Setpoint weighting adds adjustable factors (usually between 0 and 1) to the setpoint in the error in the proportional and derivative element of the controller. The error in the integral term must be the true control error to avoid steady-state control errors. These two extra parameters do not affect the response to load disturbances and measurement noise and can be tuned to improve the controller's setpoint response.
=== Feed-forward ===
The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.
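The velocity-loop example can be sketched as one control update; the PI gains, mass, and function name are illustrative assumptions:

```python
def velocity_command(v_set, v_meas, a_set, integral, kp=2.0, ki=1.0, dt=0.01, mass=3.0):
    """One update: PI feedback on velocity error plus acceleration feed-forward."""
    error = v_set - v_meas
    integral += error * dt
    u_feedback = kp * error + ki * integral   # closed-loop correction of residual error
    u_feedforward = mass * a_set              # open-loop force for the commanded acceleration
    return u_feedback + u_feedforward, integral

# With zero velocity error, the entire force command comes from the feed-forward term.
u, i_state = velocity_command(v_set=1.0, v_meas=1.0, a_set=2.0, integral=0.0)
```

Because the feed-forward term depends only on the commanded acceleration, not on feedback, it cannot destabilize the loop; the PI part only has to trim what the model misses.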
=== Bumpless operation ===
PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.
=== Other improvements ===
In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic, or computational verb logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another method for improving the PID controller is to increase its degrees of freedom by using fractional-order integrators and differentiators, whose orders add flexibility to the controller.
== Cascade control ==
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the setpoint of the other. A PID controller acts as the outer-loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner-loop controller, which reads the output of the outer-loop controller as its setpoint, usually controlling a more rapidly changing parameter such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.
For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.
The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system it controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response.
== Alternative nomenclature and forms ==
=== Standard versus parallel (ideal) form ===
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the {\displaystyle K_{p}} gain is applied to the {\displaystyle I_{\mathrm {out} }} and {\displaystyle D_{\mathrm {out} }} terms, yielding:
{\displaystyle u(t)=K_{p}\left(e(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau +T_{d}{\frac {d}{dt}}e(t)\right)}
where {\displaystyle T_{i}} is the integral time and {\displaystyle T_{d}} is the derivative time.
In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The proportional term uses the current error. The derivative component attempts to predict the error value at {\displaystyle T_{d}} seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in {\displaystyle T_{i}} seconds (or samples). The resulting compensated single error value is then scaled by the single gain {\displaystyle K_{p}} to compute the control variable.
In the parallel form, shown in the controller theory section,
{\displaystyle u(t)=K_{p}e(t)+K_{i}\int _{0}^{t}e(\tau )\,d\tau +K_{d}{\frac {d}{dt}}e(t)}
the gain parameters are related to the parameters of the standard form through {\displaystyle K_{i}=K_{p}/T_{i}} and {\displaystyle K_{d}=K_{p}T_{d}}. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors and is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.
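The standard-to-parallel relations Ki = Kp/Ti and Kd = Kp·Td translate directly into code; the function names are illustrative:

```python
def standard_to_parallel(kp, ti, td):
    """Standard (Kp, Ti, Td) -> parallel (Kp, Ki, Kd): Ki = Kp/Ti, Kd = Kp*Td."""
    return kp, kp / ti, kp * td

def parallel_to_standard(kp, ki, kd):
    """Parallel (Kp, Ki, Kd) -> standard (Kp, Ti, Td): Ti = Kp/Ki, Td = Kd/Kp."""
    return kp, kp / ki, kd / kp

gains = standard_to_parallel(2.0, 4.0, 0.5)  # -> (2.0, 0.5, 1.0)
```

Such a conversion is needed, for example, when applying Ziegler–Nichols parallel-form gains to a commercial controller that accepts Ti and Td in time units.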
=== Reciprocal gain, a.k.a. proportional band ===
In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain {\displaystyle K_{p}} not as "output per degree", but rather in the reciprocal form of a proportional band {\displaystyle 100/K_{p}}, which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.
=== Basing derivative action on PV ===
In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.
=== Basing proportional action on PV ===
Most commercial control systems offer the option of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.
Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.
{\displaystyle \mathrm {MV(t)} =K_{p}\left({-PV(t)}+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau -T_{d}{\frac {d}{dt}}PV(t)\right)}
King describes an effective chart-based method.
=== Laplace form ===
Sometimes it is useful to write the PID regulator in Laplace transform form:
{\displaystyle G(s)=K_{p}+{\frac {K_{i}}{s}}+K_{d}{s}={\frac {K_{d}{s^{2}}+K_{p}{s}+K_{i}}{s}}}
Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.
=== Series/interacting form ===
Another representation of the PID controller is the series, or interacting form
{\displaystyle G(s)=K_{c}\left({\frac {1}{\tau _{i}s}}+1\right)(\tau _{d}s+1)}
where the parameters are related to the parameters of the standard form through {\displaystyle K_{p}=K_{c}\cdot \alpha }, {\displaystyle T_{i}=\tau _{i}\cdot \alpha }, and {\displaystyle T_{d}={\frac {\tau _{d}}{\alpha }}} with {\displaystyle \alpha =1+{\frac {\tau _{d}}{\tau _{i}}}}.
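The series-to-standard conversion via α can be sketched as follows; the function name is illustrative:

```python
def series_to_standard(kc, tau_i, tau_d):
    """Convert series (interacting) PID parameters to standard form.
    alpha = 1 + tau_d/tau_i; Kp = Kc*alpha, Ti = tau_i*alpha, Td = tau_d/alpha."""
    alpha = 1.0 + tau_d / tau_i
    return kc * alpha, tau_i * alpha, tau_d / alpha   # (Kp, Ti, Td)

kp, ti, td = series_to_standard(kc=1.0, tau_i=4.0, tau_d=1.0)  # alpha = 1.25
```

Note the conversion only works in this direction in general; a standard-form controller with Ti < 4 Td has complex zeros and no series equivalent with real parameters.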
This form essentially consists of a PD and a PI controller in series. As the integral is required to calculate the controller's bias, this form provides the ability to track an external bias value, which is required for proper implementation of multi-controller advanced control schemes.
=== Discrete implementation ===
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretized. Approximations for first-order derivatives are made by backward finite differences.
{\displaystyle u(t)} and {\displaystyle e(t)} are discretized with a sampling period {\displaystyle \Delta t}; k is the sample index.
Differentiating both sides of PID equation using Newton's notation gives:
{\displaystyle {\dot {u}}(t)=K_{p}{\dot {e}}(t)+K_{i}e(t)+K_{d}{\ddot {e}}(t)}
Derivative terms are approximated as,
{\displaystyle {\dot {f}}(t_{k})={\dfrac {df(t_{k})}{dt}}={\dfrac {f(t_{k})-f(t_{k-1})}{\Delta t}}}
So,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\dot {e}}(t_{k})-{\dot {e}}(t_{k-1})}{\Delta t}}}
Applying backward difference again gives,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}-{\frac {e(t_{k-1})-e(t_{k-2})}{\Delta t}}}{\Delta t}}}
By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in a MCU is finally obtained:
{\displaystyle u(t_{k})=u(t_{k-1})+\left(K_{p}+K_{i}\Delta t+{\dfrac {K_{d}}{\Delta t}}\right)e(t_{k})+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {K_{d}}{\Delta t}}e(t_{k-2})}
or:
{\displaystyle u(t_{k})=u(t_{k-1})+K_{p}\left[\left(1+{\dfrac {\Delta t}{T_{i}}}+{\dfrac {T_{d}}{\Delta t}}\right)e(t_{k})+\left(-1-{\dfrac {2T_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {T_{d}}{\Delta t}}e(t_{k-2})\right]}
where {\displaystyle T_{i}=K_{p}/K_{i}} and {\displaystyle T_{d}=K_{d}/K_{p}}.
Note: this method in fact solves
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}+u_{0}}
where {\displaystyle u_{0}} is a constant independent of t. This constant is useful for implementing start and stop control of the regulation loop. For instance, setting Kp, Ki and Kd to 0 keeps u(t) constant. Likewise, when starting regulation on a system where the error is already close to 0 and u(t) is non-null, it prevents the output from being sent to 0.
== Pseudocode ==
Here is a simple, explicit pseudocode implementation of the control loop:
Kp - proportional gain
Ki - integral gain
Kd - derivative gain
dt - loop interval time (assumes reasonable scale)
previous_error := 0
integral := 0
loop:
error := setpoint − measured_value
proportional := error
integral := integral + error × dt
derivative := (error - previous_error) / dt
output := Kp × proportional + Ki × integral + Kd × derivative
previous_error := error
wait(dt)
goto loop
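The loop above can be sketched as runnable Python. The gains, the setpoint, and the first-order plant model are hypothetical, chosen only to exercise the loop; a real controller would read a sensor and drive an actuator instead, and would account for loop timing.

```python
def run_pid(Kp, Ki, Kd, dt, setpoint, measure, actuate, steps):
    """Positional-form PID loop; returns the last computed output."""
    previous_error = 0.0
    integral = 0.0
    output = 0.0
    for _ in range(steps):
        error = setpoint - measure()
        proportional = error
        integral = integral + error * dt
        derivative = (error - previous_error) / dt
        output = Kp * proportional + Ki * integral + Kd * derivative
        actuate(output)
        previous_error = error
        # wait(dt) would go here on real hardware
    return output

class Plant:
    """Hypothetical first-order process: dPV/dt = (u - PV) / tau."""
    def __init__(self, tau=1.0, dt=0.01):
        self.pv, self.tau, self.dt = 0.0, tau, dt
    def measure(self):
        return self.pv
    def actuate(self, u):
        self.pv += (u - self.pv) * self.dt / self.tau

plant = Plant()
last = run_pid(Kp=2.0, Ki=1.0, Kd=0.05, dt=0.01, setpoint=1.0,
               measure=plant.measure, actuate=plant.actuate, steps=5000)
print(round(plant.pv, 3))  # the PV settles at the setpoint, 1.0
```

With an integral term present, the loop drives the steady-state error to zero, so the simulated PV converges to the setpoint.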
The pseudocode below illustrates how to implement a PID controller treated as an IIR filter:
The Z-transform of a PID controller can be written as ({\displaystyle \Delta _{t}} is the sampling time):
{\displaystyle C(z)=K_{p}+K_{i}\Delta _{t}{\frac {z}{z-1}}+{\frac {K_{d}}{\Delta _{t}}}{\frac {z-1}{z}}}
and expressed in an IIR form (in agreement with the discrete implementation shown above):
{\displaystyle C(z)={\frac {\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)z^{-1}+{\dfrac {K_{d}}{\Delta _{t}}}z^{-2}}{1-z^{-1}}}}
We can then deduce the recursive iteration often found in FPGA implementations:
{\displaystyle u[n]=u[n-1]+\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)\epsilon [n]+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)\epsilon [n-1]+{\dfrac {K_{d}}{\Delta _{t}}}\epsilon [n-2]}
A0 := Kp + Ki*dt + Kd/dt
A1 := -Kp - 2*Kd/dt
A2 := Kd/dt
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
wait(dt)
goto loop
Here, Kp is a dimensionless number, Ki is expressed in {\displaystyle s^{-1}} and Kd is expressed in s. When the actuator and the measured value are not in the same unit (e.g. temperature regulation using a motor controlling a valve), Kp, Ki and Kd may be corrected by a unit conversion factor. It may also be useful to use Ki in its reciprocal form (integration time). The above implementation also permits an I-only controller, which can be useful in some cases.
In the real world, this is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for reuse in the next differentiation; the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.
Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm.
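As a quick sanity check on the coefficients A0, A1 and A2, the following Python fragment (arbitrary gains and error sequence) confirms that the incremental recursion above reproduces the positional PID with rectangular integration and a backward-difference derivative:

```python
# Equivalence of the velocity (incremental) form
#   u[n] = u[n-1] + A0*e[n] + A1*e[n-1] + A2*e[n-2]
# with the positional form. Gains and the error sequence are arbitrary.
Kp, Ki, Kd, dt = 1.2, 0.8, 0.3, 0.1
A0 = Kp + Ki * dt + Kd / dt
A1 = -Kp - 2 * Kd / dt
A2 = Kd / dt

errors = [0.5, 1.0, 0.7, -0.2, 0.1, 0.0, 0.4]

# Velocity (incremental) form
u, e1, e2 = 0.0, 0.0, 0.0
velocity = []
for e in errors:
    u = u + A0 * e + A1 * e1 + A2 * e2
    velocity.append(u)
    e2, e1 = e1, e

# Positional form: rectangular integration, backward-difference derivative
integral, prev = 0.0, 0.0
positional = []
for e in errors:
    integral += e * dt
    positional.append(Kp * e + Ki * integral + Kd * (e - prev) / dt)
    prev = e

print(all(abs(a - b) < 1e-9 for a, b in zip(velocity, positional)))  # True
```

The two forms are algebraically identical when the initial output and past errors are zero; the velocity form is preferred on MCUs and FPGAs because it avoids storing the running integral.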
A common issue when using {\displaystyle K_{d}} is the response to the derivative of a rising or falling edge of the setpoint, as shown below:
A typical workaround is to filter the derivative action using a low-pass filter of time constant {\displaystyle \tau _{d}/N}, where {\displaystyle 3\leq N\leq 10}:
A variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative:
A0 := Kp + Ki*dt
A1 := -Kp
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
A0d := Kd/dt
A1d := - 2.0*Kd/dt
A2d := Kd/dt
N := 5
tau := Kd / (Kp*N) // IIR filter time constant
alpha := dt / (2*tau)
d0 := 0
d1 := 0
fd0 := 0
fd1 := 0
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
// PI
output := output + A0 * error[0] + A1 * error[1]
// Filtered D
d1 := d0
d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]
fd1 := fd0
fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
output := output + fd0
wait(dt)
goto loop
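The filtered-D update in the listing is a first-order low-pass (a Tustin discretization) applied to the raw derivative term. A small Python check with arbitrary values confirms that its DC gain is unity, i.e. a sustained derivative passes through unchanged while fast switching noise is attenuated:

```python
# First-order IIR low-pass used for the derivative term above
# (Tustin discretization of 1/(tau*s + 1)). All numbers are arbitrary.
Kp, Kd, N, dt = 2.0, 0.5, 5, 0.01
tau = Kd / (Kp * N)        # filter time constant, as in the listing
alpha = dt / (2 * tau)

d0 = d1 = fd0 = fd1 = 0.0
for _ in range(1000):
    d1 = d0
    d0 = 3.0               # constant raw derivative input
    fd1 = fd0
    fd0 = (alpha / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
print(round(fd0, 6))  # converges to the input value, 3.0
```

Setting the input constant and iterating to steady state shows the filter output equals the input, so the filter only smooths transients without scaling the derivative action.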
== See also ==
Control theory
Active disturbance rejection control
== Notes ==
== References ==
== Further reading ==
== External links ==
PID tuning using Mathematica
PID tuning using Python
Principles of PID Control and Tuning
Introduction to the key terms associated with PID Temperature Control
=== PID tutorials ===
PID Control in MATLAB/Simulink and Python with TCLab
What's All This P-I-D Stuff, Anyhow? Article in Electronic Design
Shows how to build a PID controller with basic electronic components (pg. 22)
PID Without a PhD
PID Control with MATLAB and Simulink
PID with single Operational Amplifier
Proven Methods and Best Practices for PID Control
Principles of PID Control and Tuning
PID Tuning Guide: A Best-Practices Approach to Understanding and Tuning PID Controllers
Michael Barr (2002-07-30), Introduction to Closed-Loop Control, Embedded Systems Programming, archived from the original on 2010-02-09
Jinghua Zhong, Mechanical Engineering, Purdue University (Spring 2006). "PID Controller Tuning: A Short Tutorial" (PDF). Archived from the original (PDF) on 2015-04-21. Retrieved 2013-12-04.{{cite web}}: CS1 maint: multiple names: authors list (link)
Introduction to P,PI,PD & PID Controller with MATLAB
Improving The Beginners PID
Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, defined in some appropriate sense, despite the presence of this noise. The context may be either discrete time or continuous time.
== Certainty equivalence ==
An extremely well-studied formulation in stochastic control is that of linear quadratic Gaussian control. Here the model is linear, the objective function is the expected value of a quadratic form, and the disturbances are purely additive. A basic result for discrete-time centralized systems with only additive uncertainty is the certainty equivalence property: that the optimal control solution in this case is the same as would be obtained in the absence of the additive disturbances. This property is applicable to all centralized systems with linear equations of evolution, quadratic cost function, and noise entering the model only additively; the quadratic assumption allows for the optimal control laws, which follow the certainty-equivalence property, to be linear functions of the observations of the controllers.
Any deviation from the above assumptions—a nonlinear state equation, a non-quadratic objective function, noise in the multiplicative parameters of the model, or decentralization of control—causes the certainty equivalence property not to hold. For example, its failure to hold for decentralized control was demonstrated in Witsenhausen's counterexample.
== Discrete time ==
In a discrete-time context, the decision-maker observes the state variable, possibly with observational noise, in each time period. The objective may be to optimize the sum of expected values of a nonlinear (possibly quadratic) objective function over all the time periods from the present to the final period of concern, or to optimize the value of the objective function as of the final period only. At each time period new observations are made, and the control variables are to be adjusted optimally. Finding the optimal solution for the present time may involve iterating a matrix Riccati equation backwards in time from the last period to the present period.
In the discrete-time case with uncertainty about the parameter values in the transition matrix (giving the effect of current values of the state variables on their own evolution) and/or the control response matrix of the state equation, but still with a linear state equation and quadratic objective function, a Riccati equation can still be obtained for iterating backward to each period's solution even though certainty equivalence does not apply. The discrete-time case of a non-quadratic loss function but only additive disturbances can also be handled, albeit with more complications.
=== Example ===
A typical specification of the discrete-time stochastic linear quadratic control problem is to minimize
{\displaystyle \mathrm {E} _{1}\sum _{t=1}^{S}\left[y_{t}^{\mathsf {T}}Qy_{t}+u_{t}^{\mathsf {T}}Ru_{t}\right]}
where E1 is the expected value operator conditional on y0, superscript T indicates a matrix transpose, and S is the time horizon, subject to the state equation
{\displaystyle y_{t}=A_{t}y_{t-1}+B_{t}u_{t},}
where y is an n × 1 vector of observable state variables, u is a k × 1 vector of control variables, At is the time t realization of the stochastic n × n state transition matrix, Bt is the time t realization of the stochastic n × k matrix of control multipliers, and Q (n × n) and R (k × k) are known symmetric positive definite cost matrices. We assume that each element of A and B is jointly independently and identically distributed through time, so the expected value operations need not be time-conditional.
Induction backwards in time can be used to obtain the optimal control solution at each time:
{\displaystyle u_{t}^{*}=-\left[\mathrm {E} \left(B^{\mathsf {T}}X_{t}B+R\right)\right]^{-1}\mathrm {E} \left(B^{\mathsf {T}}X_{t}A\right)y_{t-1},}
with the symmetric positive definite cost-to-go matrix X evolving backwards in time from
{\displaystyle X_{S}=Q} according to
{\displaystyle X_{t-1}=Q+\mathrm {E} \left[A^{\mathsf {T}}X_{t}A\right]-\mathrm {E} \left[A^{\mathsf {T}}X_{t}B\right]\left[\mathrm {E} (B^{\mathsf {T}}X_{t}B+R)\right]^{-1}\mathrm {E} \left(B^{\mathsf {T}}X_{t}A\right),}
which is known as the discrete-time dynamic Riccati equation of this problem. The only information needed regarding the unknown parameters in the A and B matrices is the expected value and variance of each element of each matrix and the covariances among elements of the same matrix and among elements across matrices.
The optimal control solution is unaffected if zero-mean, i.i.d. additive shocks also appear in the state equation, so long as they are uncorrelated with the parameters in the A and B matrices. But if they are so correlated, then the optimal control solution for each period contains an additional additive constant vector. If an additive constant vector appears in the state equation, then again the optimal control solution for each period contains an additional additive constant vector.
The steady-state characterization of X (if it exists), relevant for the infinite-horizon problem in which S goes to infinity, can be found by iterating the dynamic equation for X repeatedly until it converges; then X is characterized by removing the time subscripts from its dynamic equation.
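A minimal numerical sketch of this iteration, assuming scalar states and deterministic parameters (so the expectation operators drop out): starting from X_S = Q and applying the Riccati map until it converges yields the steady-state cost-to-go and the corresponding feedback gain. All numbers here are hypothetical.

```python
# Scalar, deterministic instance of the backward Riccati recursion:
#   X_{t-1} = q + a*X*a - (a*X*b)^2 / (b*X*b + r)
a, b, q, r = 1.1, 0.5, 1.0, 1.0  # hypothetical system and cost scalars

def riccati_step(x):
    return q + a * x * a - (a * x * b) ** 2 / (b * x * b + r)

x = q                      # terminal condition X_S = Q
for _ in range(200):       # iterate backwards until convergence
    x = riccati_step(x)

# steady-state feedback gain in u_t = -K * y_{t-1}
K = (b * x * a) / (b * x * b + r)
print(round(x, 4), round(K, 4))
```

For these numbers the open-loop system (a = 1.1) is unstable, but the converged gain stabilizes it: the closed-loop coefficient a − bK has magnitude below one, which is also why the iteration converges.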
== Continuous time ==
If the model is in continuous time, the controller knows the state of the system at each instant of time. The objective is to maximize either an integral of, for example, a concave function of a state variable over a horizon from time zero (the present) to a terminal time T, or a concave function of a state variable at some future date T. As time evolves, new observations are continuously made and the control variables are continuously adjusted in optimal fashion.
== Stochastic model predictive control ==
In the literature, there are two types of MPCs for stochastic systems; Robust model predictive control and Stochastic Model Predictive Control (SMPC). Robust model predictive control is a more conservative method which considers the worst scenario in the optimization procedure. However, this method, similar to other robust controls, deteriorates the overall controller's performance and also is applicable only for systems with bounded uncertainties. The alternative method, SMPC, considers soft constraints which limit the risk of violation by a probabilistic inequality.
=== In finance ===
In a continuous time approach in a finance context, the state variable in the stochastic differential equation is usually wealth or net worth, and the controls are the shares placed at each time in the various assets. Given the asset allocation chosen at any time, the determinants of the change in wealth are usually the stochastic returns to assets and the interest rate on the risk-free asset. The field of stochastic control has developed greatly since the 1970s, particularly in its applications to finance. Robert Merton used stochastic control to study optimal portfolios of safe and risky assets. His work and that of Black–Scholes changed the nature of the finance literature. Influential mathematical textbook treatments were by Fleming and Rishel, and by Fleming and Soner. These techniques were applied by Stein to the 2008 financial crisis.
The maximization, say of the expected logarithm of net worth at a terminal date T, is subject to stochastic processes on the components of wealth. In this case, in continuous time Itô's equation is the main tool of analysis. In the case where the maximization is an integral of a concave function of utility over an horizon (0,T), dynamic programming is used. There is no certainty equivalence as in the older literature, because the coefficients of the control variables—that is, the returns received by the chosen shares of assets—are stochastic.
== See also ==
Backward stochastic differential equation
Stochastic process
Control theory
Multiplier uncertainty
Stochastic scheduling
Separation principle in stochastic control
== References ==
== Further reading ==
Dixit, Avinash (1991). "A Simplified Treatment of the Theory of Optimal Regulation of Brownian Motion". Journal of Economic Dynamics and Control. 15 (4): 657–673. doi:10.1016/0165-1889(91)90037-2.
Yong, Jiongmin; Zhou, Xun Yu (1999). Stochastic Controls: Hamiltonian Systems and HJB Equations. New York: Springer. ISBN 0-387-98723-1.
Vector control, also called field-oriented control (FOC), is a variable-frequency drive (VFD) control method in which the stator currents of a three-phase AC motor are identified as two orthogonal components that can be visualized with a vector. One component defines the magnetic flux of the motor, the other the torque. The control system of the drive calculates the corresponding current component references from the flux and torque references given by the drive's speed control. Typically proportional-integral (PI) controllers are used to keep the measured current components at their reference values. The pulse-width modulation of the variable-frequency drive defines the transistor switching according to the stator voltage references that are the output of the PI current controllers.
FOC is used to control AC synchronous and induction motors. It was originally developed for high-performance motor applications that are required to operate smoothly over the full speed range, generate full torque at zero speed, and have high dynamic performance including fast acceleration and deceleration. However, it is becoming increasingly attractive for lower-performance applications as well, because FOC reduces motor size, cost and power consumption. It is expected that, with the increasing computational power of microprocessors, it will eventually nearly universally displace single-variable scalar control (volts-per-hertz, V/f control).
== Development history ==
Technische Universität Darmstadt's K. Hasse and Siemens' F. Blaschke pioneered vector control of AC motors starting in 1968 and in the early 1970s. Hasse in terms of proposing indirect vector control, Blaschke in terms of proposing direct vector control. Technical University Braunschweig's Werner Leonhard further developed FOC techniques and was instrumental in opening up opportunities for AC drives to be a competitive alternative to DC drives.
Yet it was not until after the commercialization of microprocessors, that is in the early 1980s, that general-purpose AC drives became available. Barriers to using FOC for AC drive applications included higher cost and complexity and lower maintainability compared to DC drives, FOC having until then required many electronic components in terms of sensors, amplifiers and so on.
The Park transformation has long been widely used in the analysis and study of synchronous and induction machines. The transformation is by far the single most important concept needed for an understanding of how FOC works, having first been presented in a 1929 paper by Robert H. Park. Park's paper was ranked the second most important, in terms of impact, of all power engineering papers published in the twentieth century. The novelty of Park's work lies in his ability to transform any related machine's set of linear differential equations from one with time-varying coefficients to one with time-invariant coefficients, resulting in a linear time-invariant (LTI) system.
== Technical overview ==
Overview of key competing VFD control platforms:
While the analysis of AC drive controls can be technically quite involved ("See also" section), such analysis invariably starts with modeling of the drive-motor circuit involved along the lines of accompanying signal flow graph and equations.
In vector control, an AC induction or synchronous motor is controlled under all operating conditions like a separately excited DC motor. That is, the AC motor behaves like a DC motor in which the field flux linkage and armature flux linkage created by the respective field and armature (or torque component) currents are orthogonally aligned such that, when torque is controlled, the field flux linkage is not affected, hence enabling dynamic torque response.
Vector control accordingly generates a three-phase PWM motor voltage output derived from a complex voltage vector to control a complex current vector derived from motor's three-phase stator current input through projections or rotations back and forth between the three-phase speed and time dependent system and these vectors' rotating reference-frame two-coordinate time invariant system.
Such complex stator current space vector can be defined in a (d,q) coordinate system with orthogonal components along d (direct) and q (quadrature) axes such that field flux linkage component of current is aligned along the d axis and torque component of current is aligned along the q axis. The induction motor's (d,q) coordinate system can be superimposed to the motor's instantaneous (a,b,c) three-phase sinusoidal system as shown in accompanying image (phases b & c not shown for clarity). Components of the (d,q) system current vector allow conventional control such as proportional and integral, or PI, control, as with a DC motor.
Projections associated with the (d,q) coordinate system typically involve:
Forward projection from instantaneous currents to (a,b,c) complex stator current space vector representation of the three-phase sinusoidal system.
Forward three-to-two phase, (a,b,c)-to-(α,β) projection using the Clarke transformation. Vector control implementations usually assume an ungrounded motor with balanced three-phase currents, so that only two motor current phases need to be sensed. Also, the backward two-to-three phase, (α,β)-to-(a,b,c) projection uses a space-vector PWM modulator or the inverse Clarke transformation together with one of the other PWM modulators.
Forward and backward two-to-two phase, (α,β)-to-(d,q) and (d,q)-to-(α,β) projections using the Park and inverse Park transformations, respectively.
The idea behind the Park transform is to convert the system of three-phase currents and voltages into a two-coordinate linear time-invariant system. Making the system LTI is what enables the use of simple, easy-to-implement PI controllers, and it also simplifies the control of the flux- and torque-producing currents.
However, it is not uncommon for sources to use a combined three-to-two, (a,b,c)-to-(d,q) transform and its inverse projection.
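The forward projections described above can be sketched in Python. The amplitude-invariant Clarke scaling used here is one common convention (others differ by a constant factor), so treat the exact constants as assumptions rather than the only correct choice:

```python
import math

def clarke(ia, ib, ic):
    """(a,b,c) -> (alpha, beta), amplitude-invariant convention."""
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (1.0 / math.sqrt(3.0)) * (ib - ic)
    return alpha, beta

def park(alpha, beta, theta):
    """(alpha, beta) -> (d, q): rotation by the flux angle theta."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# Balanced three-phase currents of amplitude 1: after Clarke + Park the
# (d,q) components are constant (d = 1, q = 0) regardless of the angle,
# which is what allows plain PI control of the flux and torque components.
results = []
for theta in (0.0, 0.7, 2.1, 4.5):
    ia = math.cos(theta)
    ib = math.cos(theta - 2.0 * math.pi / 3.0)
    ic = math.cos(theta + 2.0 * math.pi / 3.0)
    results.append(park(*clarke(ia, ib, ic), theta))
print(all(abs(d - 1.0) < 1e-9 and abs(q) < 1e-9 for d, q in results))  # True
```

The time-varying sinusoidal inputs map to constant (d,q) values, illustrating the time-invariance property the surrounding text describes.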
While (d,q) coordinate system rotation can arbitrarily be set to any speed, there are three preferred speeds or reference frames:
Stationary reference frame where (d,q) coordinate system does not rotate;
Synchronously rotating reference frame where (d,q) coordinate system rotates at synchronous speed;
Rotor reference frame where (d,q) coordinate system rotates at rotor speed.
Decoupled torque and field currents can thus be derived from raw stator current inputs for control algorithm development.
Whereas magnetic field and torque components in DC motors can be operated relatively simply by separately controlling the respective field and armature currents, economical control of AC motors in variable speed application has required development of microprocessor-based controls with all AC drives now using powerful DSP (digital signal processing) technology.
Inverters can be implemented as either open-loop sensorless or closed-loop FOC, the key limitation of open-loop operation being minimum speed possible at 100% torque, namely, about 0.8 Hz compared to standstill for closed-loop operation.
There are two vector control methods, direct or feedback vector control (DFOC) and indirect or feedforward vector control (IFOC), IFOC being more commonly used because in closed-loop mode such drives more easily operate throughout the speed range from zero speed to high-speed field-weakening. In DFOC, flux magnitude and angle feedback signals are directly calculated using so-called voltage or current models. In IFOC, flux space angle feedforward and flux magnitude signals first measure stator currents and rotor speed for then deriving flux space angle proper by summing the rotor angle corresponding to the rotor speed and the calculated reference value of slip angle corresponding to the slip frequency.
Sensorless control (see Sensorless FOC Block Diagram) of AC drives is attractive for cost and reliability considerations. Sensorless control requires derivation of rotor speed information from measured stator voltage and currents in combination with open-loop estimators or closed-loop observers.
== Application ==
Stator phase currents are measured, converted to complex space vector in (a,b,c) coordinate system.
Current is converted to the (α,β) coordinate system, then transformed to a coordinate system rotating in the rotor reference frame; the rotor position is derived by integrating the speed obtained from a speed measurement sensor.
Rotor flux linkage vector is estimated by multiplying the stator current vector with magnetizing inductance Lm and low-pass filtering the result with the rotor no-load time constant Lr/Rr, namely, the rotor inductance to rotor resistance ratio.
Current vector is converted to (d,q) coordinate system.
d-axis component of the stator current vector is used to control the rotor flux linkage and the imaginary q-axis component is used to control the motor torque. While PI controllers can be used to control these currents, bang-bang type current control provides better dynamic performance.
PI controllers provide the (d,q) coordinate voltage components. A decoupling term is sometimes added to the controller output to improve performance in the presence of cross-coupling or large and rapid changes in speed, current and flux linkage. PI controllers also sometimes need low-pass filtering at the input or output to prevent the current ripple caused by transistor switching from being amplified excessively and destabilizing the control. However, such filtering also limits the dynamic performance of the control system. A high switching frequency (typically more than 10 kHz) is typically required to minimize filtering requirements for high-performance drives such as servo drives.
Voltage components are transformed from the (d,q) coordinate system to the (α,β) coordinate system.
Voltage components are transformed from the (α,β) coordinate system to the (a,b,c) coordinate system, or fed into a pulse-width modulation (PWM) modulator, or both, for signaling to the power inverter section.
Significant aspects of vector control application:
Speed or position measurement or some sort of estimation is needed.
Torque and flux can be changed reasonably fast, in less than 5-10 milliseconds, by changing the references.
The step response has some overshoot if PI control is used.
The switching frequency of the transistors is usually constant and set by the modulator.
The accuracy of the torque depends on the accuracy of the motor parameters used in the control. Thus, large errors, due for example to rotor temperature changes, are often encountered.
Reasonable processor performance is required; typically the control algorithm is calculated every PWM cycle.
Although the vector control algorithm is more complicated than direct torque control (DTC), it need not be calculated as frequently as the DTC algorithm, and the current sensors need not be the best on the market. Thus the cost of the processor and other control hardware is lower, making vector control suitable for applications where the ultimate performance of DTC is not required.
== See also ==
== References ==
A fire-control system (FCS) is a number of components working together, usually a gun data computer, a director and radar, which is designed to assist a ranged weapon system to target, track, and hit a target. It performs the same task as a human gunner firing a weapon, but attempts to do so faster and more accurately.
== Naval fire control ==
=== Origins ===
The original fire-control systems were developed for ships.
The early history of naval fire control was dominated by the engagement of targets within visual range (also referred to as direct fire). In fact, most naval engagements before 1800 were conducted at ranges of 20 to 50 yards (20 to 50 m).
Even during the American Civil War, the famous engagement between USS Monitor and CSS Virginia was often conducted at less than 100 yards (90 m) range.
Rapid technical improvements in the late 19th century greatly increased the range at which gunfire was possible. Rifled guns of much larger size firing explosive shells of lighter relative weight (compared to all-metal balls) so greatly increased the range of the guns that the main problem became aiming them while the ship was moving on the waves. This problem was solved with the introduction of the gyroscope, which corrected this motion and provided sub-degree accuracies. Guns were now free to grow to any size, and quickly surpassed 10 inches (250 mm) calibre by the 1890s. These guns were capable of such great range that the primary limitation was seeing the target, leading to the use of high masts on ships.
Another technical improvement was the introduction of the steam turbine which greatly increased the performance of the ships. Earlier reciprocating engine powered capital ships were capable of perhaps 16 knots, but the first large turbine ships were capable of over 20 knots. Combined with the long range of the guns, this meant that the target ship could move a considerable distance, several ship lengths, between the time the shells were fired and landed. One could no longer eyeball the aim with any hope of accuracy. Moreover, in naval engagements it is also necessary to control the firing of several guns at once.
Naval gun fire control potentially involves three levels of complexity. Local control originated with primitive gun installations aimed by the individual gun crews. Director control aims all guns on the ship at a single target. Coordinated gunfire from a formation of ships at a single target was a focus of battleship fleet operations. Corrections are made for surface wind velocity, firing ship roll and pitch, powder magazine temperature, drift of rifled projectiles, individual gun bore diameter adjusted for shot-to-shot enlargement, and rate of change of range with additional modifications to the firing solution based upon the observation of preceding shots.
The resulting directions, known as a firing solution, would then be fed back out to the turrets for laying. If the rounds missed, an observer could work out how far they missed by and in which direction, and this information could be fed back into the computer along with any changes in the rest of the information and another shot attempted.
At first, the guns were aimed using the technique of artillery spotting. It involved firing a gun at the target, observing the projectile's point of impact (fall of shot), and correcting the aim based on where the shell was observed to land, which became more and more difficult as the range of the gun increased.
Between the American Civil War and 1905, numerous small improvements, such as telescopic sights and optical rangefinders, were made in fire control. There were also procedural improvements, like the use of plotting boards to manually predict the position of a ship during an engagement.
=== World War I ===
Increasingly sophisticated mechanical calculators were employed for proper gun laying, typically with various spotters' observations and distance measurements being sent to a central plotting station deep within the ship. There the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for the Coriolis effect, weather effects on the air, and other factors. Around 1905, mechanical fire control aids began to become available, such as the Dreyer Table, the Dumaresq (which was also part of the Dreyer Table), and the Argo Clock, but these devices took a number of years to become widely deployed. They were early forms of rangekeepers.
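The core computation such devices performed mechanically, resolving the motion of the two ships into a rate of change of range and a speed across the line of sight, can be sketched in software. The interface and sign conventions here are illustrative, not those of any particular instrument:

```python
import math

def resolve_motion(own_course_deg, own_speed_kn,
                   tgt_course_deg, tgt_speed_kn, bearing_deg):
    """Resolve own-ship and target motion into range rate (positive = range
    opening) and speed across the line of sight, both in knots."""
    def velocity(course_deg, speed_kn):
        rad = math.radians(course_deg)  # courses clockwise from north
        return speed_kn * math.cos(rad), speed_kn * math.sin(rad)

    own_n, own_e = velocity(own_course_deg, own_speed_kn)
    tgt_n, tgt_e = velocity(tgt_course_deg, tgt_speed_kn)
    rel_n, rel_e = tgt_n - own_n, tgt_e - own_e   # target motion relative to us

    b = math.radians(bearing_deg)
    los_n, los_e = math.cos(b), math.sin(b)       # unit vector along line of sight
    range_rate   = rel_n * los_n + rel_e * los_e  # component along the LOS
    speed_across = rel_e * los_n - rel_n * los_e  # component across the LOS
    return range_rate, speed_across
```

The range rate drives the predicted range at the moment of impact; the speed across the line of sight drives the deflection applied to the guns.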
Arthur Pollen and Frederic Charles Dreyer independently developed the first such systems. Pollen began working on the problem after noting the poor accuracy of naval artillery at a gunnery practice near Malta in 1900. Lord Kelvin, widely regarded as Britain's leading scientist, first proposed using an analogue computer to solve the equations arising from the relative motion of the ships engaged in the battle and the time delay in the flight of the shell, in order to calculate the required trajectory and therefore the direction and elevation of the guns.
Pollen aimed to produce a combined mechanical computer and automatic plot of ranges and rates for use in centralised fire control. To obtain accurate data of the target's position and relative motion, Pollen developed a plotting unit (or plotter) to capture this data. To this he added a gyroscope to allow for the yaw of the firing ship. Like the plotter, the primitive gyroscope of the time required substantial development to provide continuous and reliable guidance. Although the trials in 1905 and 1906 were unsuccessful, they showed promise. Pollen was encouraged in his efforts by the rapidly rising figure of Admiral Jackie Fisher, Admiral Arthur Knyvet Wilson and the Director of Naval Ordnance and Torpedoes (DNO), John Jellicoe. Pollen continued his work, with occasional tests carried out on Royal Navy warships.
Meanwhile, a group led by Dreyer designed a similar system. Although both systems were ordered for new and existing ships of the Royal Navy, the Dreyer system eventually found most favour with the Navy in its definitive Mark IV* form. The addition of director control facilitated a full, practicable fire control system for World War I ships, and most RN capital ships were so fitted by mid 1916. The director was high up over the ship where operators had a superior view over any gunlayer in the turrets. It was also able to co-ordinate the fire of the turrets so that their combined fire worked together. This improved aiming and larger optical rangefinders improved the estimate of the enemy's position at the time of firing. The system was eventually replaced by the improved "Admiralty Fire Control Table" for ships built after 1927.
=== World War II ===
During their long service life, rangekeepers were updated often as technology advanced, and by World War II they were a critical part of an integrated fire-control system. The incorporation of radar into the fire-control system early in World War II provided ships the ability to conduct effective gunfire operations at long range in poor weather and at night. For U.S. Navy gun fire control systems, see ship gun fire-control systems.
The use of director-controlled firing, together with the fire control computer, removed the control of the gun laying from the individual turrets to a central position; although individual gun mounts and multi-gun turrets would retain a local control option for use when battle damage limited director information transfer (these would be simpler versions called "turret tables" in the Royal Navy). Guns could then be fired in planned salvos, with each gun giving a slightly different trajectory. Dispersion of shot caused by differences in individual guns, individual projectiles, powder ignition sequences, and transient distortion of ship structure was undesirably large at typical naval engagement ranges. Directors high on the superstructure had a better view of the enemy than a turret mounted sight, and the crew operating them were distant from the sound and shock of the guns. Gun directors were topmost, and the ends of their optical rangefinders protruded from their sides, giving them a distinctive appearance.
Unmeasured and uncontrollable ballistic factors, like high-altitude temperature, humidity, barometric pressure, wind direction and velocity, required final adjustment through observation of the fall of shot. Visual range measurement (of both target and shell splashes) was difficult prior to the availability of radar. The British favoured coincidence rangefinders while the Germans favoured the stereoscopic type. The former were less able to range on an indistinct target but easier on the operator over a long period of use, the latter the reverse.
Submarines were also equipped with fire control computers for the same reasons, but their problem was even more pronounced; in a typical "shot", the torpedo would take one to two minutes to reach its target. Calculating the proper "lead" given the relative motion of the two vessels was very difficult, and torpedo data computers were added to dramatically improve the speed of these calculations.
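The geometry behind the lead calculation can be sketched as the basic intercept triangle; this illustrates the principle, not the actual logic of any particular torpedo data computer:

```python
import math

def torpedo_lead_deg(target_speed_kn, torpedo_speed_kn, track_angle_deg):
    """Deflection ('lead') angle from the intercept triangle:
    sin(lead) = (target speed / torpedo speed) * sin(track angle),
    where the track angle lies between the target's course and the
    bearing from the target back to the firing ship."""
    s = (target_speed_kn / torpedo_speed_kn) * math.sin(math.radians(track_angle_deg))
    if abs(s) > 1.0:
        raise ValueError("no intercept: target outruns the torpedo at this geometry")
    return math.degrees(math.asin(s))

# A 15-knot target crossing at 90 degrees, engaged with a 45-knot torpedo:
lead = torpedo_lead_deg(15, 45, 90)   # roughly 19.5 degrees of lead
```

Because the torpedo's run can take a minute or more, small errors in the estimated target speed or track angle translate into large misses, which is why mechanizing this calculation mattered so much.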
In a typical World War II British ship the fire control system connected the individual gun turrets to the director tower (where the sighting instruments were located) and the analogue computer in the heart of the ship. In the director tower, operators trained their telescopes on the target; one telescope measured elevation and the other bearing. Rangefinder telescopes on a separate mounting measured the distance to the target. These measurements were converted by the Fire Control Table into the bearings and elevations for the guns to fire upon. In the turrets, the gunlayers adjusted the elevation of their guns to match an indicator for the elevation transmitted from the Fire Control table—a turret layer did the same for bearing. When the guns were on target they were centrally fired.
Even with this much mechanization of the process, a large human element remained; the Transmitting Station (the room that housed the Dreyer table) for HMS Hood's main guns housed 27 crew.
Directors were largely unprotected from enemy fire. It was difficult to put much weight of armour so high up on the ship, and even if the armour did stop a shot, the impact alone would likely knock the instruments out of alignment. Sufficient armour to protect from smaller shells and fragments from hits to other parts of the ship was the limit.
The performance of the analog computer was impressive. The battleship USS North Carolina during a 1945 test was able to maintain an accurate firing solution on a target during a series of high-speed turns.
It is a major advantage for a warship to be able to maneuver while engaging a target.
Night naval engagements at long range became feasible when radar data could be input to the rangekeeper. The effectiveness of this combination was demonstrated in November 1942 at the Third Battle of Savo Island when the USS Washington engaged the Japanese battleship Kirishima at a range of 8,400 yards (7.7 km) at night. Kirishima was set aflame, suffered a number of explosions, and was scuttled by her crew. She had been hit by at least nine 16-inch (410 mm) rounds out of 75 fired (12% hit rate).
The wreck of Kirishima was discovered in 1992 and showed that the entire bow section of the ship was missing.
The Japanese during World War II did not develop radar or automated fire control to the level of the US Navy and were at a significant disadvantage.
=== Post-1945 ===
By the 1950s gun turrets were increasingly unmanned, with gun laying controlled remotely from the ship's control centre using inputs from radar and other sources.
The last combat action for the analog rangekeepers, at least for the US Navy, was in the 1991 Persian Gulf War when the rangekeepers on the Iowa-class battleships directed their last rounds in combat.
== Aircraft based fire control ==
=== World War II bomb sights ===
An early use of fire-control systems was in bomber aircraft, with the use of computing bombsights that accepted altitude and airspeed information to predict and display the impact point of a bomb released at that time. The best known United States device was the Norden bombsight.
=== World War II aerial gunnery sights ===
Simple systems, known as lead computing sights also made their appearance inside aircraft late in the war as gyro gunsights. These devices used a gyroscope to measure turn rates, and moved the gunsight's aim-point to take this into account, with the aim point presented through a reflector sight. The only manual "input" to the sight was the target distance, which was typically handled by dialing in the size of the target's wing span at some known range. Small radar units were added in the post-war period to automate even this input, but it was some time before they were fast enough to make the pilots completely happy with them. The first implementation of a centralized fire control system in a production aircraft was on the B-29.
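The two calculations a gyro gunsight combined, stadiametric ranging from the dialed-in wingspan and angular lead from the measured turn rate, can be sketched as follows. The bullet speed is an assumed round figure, not data for any particular gun:

```python
def range_from_span_m(wingspan_m, apparent_angle_mrad):
    """Stadiametric ranging: the pilot dials in the target's wingspan and
    matches the reticle to it; range follows from the apparent angle."""
    return wingspan_m / (apparent_angle_mrad / 1000.0)

def lead_angle_deg(turn_rate_deg_s, range_m, bullet_speed_ms=880.0):
    """Angular lead: the target's angular travel during the bullet's flight.
    Uses a crude constant-bullet-speed estimate of time of flight."""
    time_of_flight_s = range_m / bullet_speed_ms
    return turn_rate_deg_s * time_of_flight_s
```

The gyroscope supplies the turn rate directly, so once range is set the sight can displace its aim point continuously with no further pilot input.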
=== Post-World War II systems ===
By the start of the Vietnam War, a new computerized bombing predictor, called the Low Altitude Bombing System (LABS), began to be integrated into the systems of aircraft equipped to carry nuclear armaments. This new bomb computer was revolutionary in that the release command for the bomb was given by the computer, not the pilot; the pilot designated the target using the radar or other targeting system, then "consented" to release the weapon, and the computer then did so at a calculated "release point" some seconds later. This is very different from previous systems, which, though they had also become computerized, still calculated an "impact point" showing where the bomb would fall if the bomb were released at that moment. The key advantage is that the weapon can be released accurately even when the plane is maneuvering. Most bombsights until this time required that the plane maintain a constant attitude (usually level), though dive-bombing sights were also common.
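The impact-point computation that both generations of system rest on can be illustrated with vacuum ballistics; drag is ignored here, so this is a sketch of the principle rather than a realistic bombing table:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_distance_m(speed_ms, pitch_deg, altitude_m):
    """Ground distance ahead of the aircraft at which a bomb released *now*
    would land, ignoring drag (vacuum ballistics)."""
    vx = speed_ms * math.cos(math.radians(pitch_deg))
    vz = speed_ms * math.sin(math.radians(pitch_deg))   # positive = climbing
    # Solve altitude + vz*t - 0.5*G*t^2 = 0 for the positive fall time t.
    t = (vz + math.sqrt(vz * vz + 2.0 * G * altitude_m)) / G
    return vx * t
```

An impact-point sight displays this distance continuously and leaves the release decision to the pilot; a LABS-style computer instead solves for the instant at which the predicted impact coincides with the designated target, and releases the weapon itself at that moment.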
The LABS system was originally designed to facilitate a tactic called toss bombing, to allow the aircraft to remain out of range of a weapon's blast radius. The principle of calculating the release point, however, was eventually integrated into the fire control computers of later bombers and strike aircraft, allowing level, dive and toss bombing. In addition, as the fire control computer became integrated with ordnance systems, the computer can take the flight characteristics of the weapon to be launched into account.
== Land based fire control ==
=== Anti-aircraft based fire control ===
By the start of World War II, aircraft altitude performance had increased so much that anti-aircraft guns had similar predictive problems, and were increasingly equipped with fire-control computers. The main difference between these systems and the ones on ships was size and speed. The early versions of the High Angle Control System, or HACS, of Britain's Royal Navy were examples of a system that predicted based upon the assumption that target speed, direction, and altitude would remain constant during the prediction cycle, which consisted of the time to fuze the shell and the time of flight of the shell to the target. The USN Mk 37 system made similar assumptions except that it could predict assuming a constant rate of altitude change. The Kerrison Predictor is an example of a system that was built to solve laying in "real time", simply by pointing the director at the target and then aiming the gun at a pointer it directed. It was also deliberately designed to be small and light, in order to allow it to be easily moved along with the guns it served.
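The constant-velocity prediction described above contains a circularity: the time of flight depends on the predicted position, which depends on the time of flight. A fixed-point iteration resolves it; the speeds below are illustrative, not the parameters of any historical system:

```python
import math

def predict_aim_point(target_pos_m, target_vel_ms, shell_speed_ms,
                      dead_time_s=0.0, iterations=6):
    """Constant-velocity prediction: aim where the target will be when the
    shell arrives. dead_time_s models the fixed delay to fuze and fire.
    A few fixed-point passes converge for practical speeds."""
    t = 0.0
    for _ in range(iterations):
        future = tuple(p + v * (dead_time_s + t)
                       for p, v in zip(target_pos_m, target_vel_ms))
        t = math.sqrt(sum(c * c for c in future)) / shell_speed_ms
    return future, t

# Target 3 km out, receding at 100 m/s; an 850 m/s shell (illustrative):
aim, tof = predict_aim_point((3000.0, 0.0, 0.0), (100.0, 0.0, 0.0), 850.0)
```

If the target manoeuvres during the prediction cycle, the constant-velocity assumption fails, which is exactly the weakness of the HACS-type approach noted above.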
The radar-based M-9/SCR-584 anti-aircraft system was used to direct air defense artillery from 1943. The MIT Radiation Lab's SCR-584 was the first radar system with automatic following; Bell Laboratories' M-9 was an electronic analog fire-control computer that replaced complicated and difficult-to-manufacture mechanical computers (such as the Sperry M-7 or the British Kerrison predictor). In combination with the VT proximity fuze, this system accomplished the astonishing feat of shooting down V-1 cruise missiles with fewer than 100 shells per plane (thousands were typical in earlier AA systems). This system was instrumental in the defense of London and Antwerp against the V-1.
Although described here under land-based fire control, anti-aircraft fire-control systems can also be found on naval vessels and aircraft.
=== Coast artillery fire control ===
In the United States Army Coast Artillery Corps, Coast Artillery fire control systems began to be developed at the end of the 19th century and progressed on through World War II.
Early systems made use of multiple observation or base end stations (see Figure 1) to find and track targets attacking American harbors. Data from these stations were then passed to plotting rooms, where analog mechanical devices, such as the plotting board, were used to estimate targets' positions and derive firing data for batteries of coastal guns assigned to interdict them.
U.S. Coast Artillery forts bristled with a variety of armament, ranging from 12-inch coast defense mortars, through 3-inch and 6-inch mid-range artillery, to the larger guns, which included 10-inch and 12-inch barbette and disappearing carriage guns, 14-inch railroad artillery, and 16-inch cannon installed just prior to and up through World War II.
Fire control in the Coast Artillery became more and more sophisticated in terms of correcting firing data for such factors as weather conditions, the condition of powder used, or the Earth's rotation. Provisions were also made for adjusting firing data for the observed fall of shells. As shown in Figure 2, all of these data were fed back to the plotting rooms on a finely tuned schedule controlled by a system of time interval bells that rang throughout each harbor defense system.
It was only later in World War II that electro-mechanical gun data computers, connected to coast defense radars, began to replace optical observation and manual plotting methods in controlling coast artillery. Even then, the manual methods were retained as a back-up through the end of the war.
=== Direct and indirect fire control systems ===
Land based fire control systems can be used to aid in both Direct fire and Indirect fire weapon engagement. These systems can be found on weapons ranging from small handguns to large artillery weapons.
== Modern fire control systems ==
Modern fire-control computers, like all high-performance computers, are digital. The added performance allows essentially any input to be incorporated, from air density and wind to wear on the barrels and distortion due to heating. These effects are noticeable for any sort of gun, and fire-control computers have started appearing on smaller and smaller platforms. Tanks were one early application of automated gun laying, using a laser rangefinder and a barrel-distortion meter. Fire-control computers are useful not just for aiming large cannons, but also for aiming machine guns, small cannons, guided missiles, rifles, grenades, and rockets—any kind of weapon that can have its launch or firing parameters varied. They are typically installed on ships, submarines, aircraft, tanks and even on some small arms—for example, the grenade launcher developed for use on the Fabrique Nationale F2000 bullpup assault rifle. Fire-control computers have gone through all the stages of technology that computers have, with some designs based upon analogue technology, then vacuum tubes, which were later replaced with transistors.
Fire-control systems are often interfaced with sensors (such as sonar, radar, infra-red search and track, laser range-finders, anemometers, wind vanes, thermometers, barometers, etc.) in order to cut down or eliminate the amount of information that must be manually entered in order to calculate an effective solution. Sonar, radar, IRST and range-finders can give the system the direction to and/or distance of the target. Alternatively, an optical sight can be provided that an operator can simply point at the target, which is easier than having someone input the range using other methods and gives the target less warning that it is being tracked. Typically, weapons fired over long ranges need environmental information—the farther a munition travels, the more the wind, temperature, air density, etc. will affect its trajectory, so having accurate information is essential for a good solution. Sometimes, for very long-range rockets, environmental data has to be obtained at high altitudes or in between the launching point and the target. Often, satellites or balloons are used to gather this information.
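As a sketch of one such environmental correction, a classical rule of thumb (the "lag rule") estimates crosswind drift from the difference between real and drag-free flight times. This is an approximation for illustration, not the method of any particular fire-control system:

```python
def crosswind_drift_m(crosswind_ms, time_of_flight_s, range_m, muzzle_speed_ms):
    """Classical 'lag rule': a projectile drifts downwind by the crosswind
    speed times the lag of its real flight time behind the vacuum flight
    time at muzzle velocity. An approximation, not a firing-table value."""
    vacuum_time_s = range_m / muzzle_speed_ms
    return crosswind_ms * (time_of_flight_s - vacuum_time_s)

# 10 m/s crosswind, 2.0 s real flight over 1000 m at 800 m/s muzzle speed:
drift = crosswind_drift_m(10.0, 2.0, 1000.0, 800.0)   # 7.5 m of drift
```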
Once the firing solution is calculated, many modern fire-control systems are also able to aim and fire the weapon(s). Once again, this is in the interest of speed and accuracy, and in the case of a vehicle like an aircraft or tank, in order to allow the pilot/gunner/etc. to perform other actions simultaneously, such as tracking the target or flying the aircraft. Even if the system is unable to aim the weapon itself, for example the fixed cannon on an aircraft, it is able to give the operator cues on how to aim. Typically, the cannon points straight ahead and the pilot must maneuver the aircraft so that it is oriented correctly before firing. In most aircraft the aiming cue takes the form of a "pipper" which is projected on the heads-up display (HUD). The pipper shows the pilot where the target must be relative to the aircraft in order to hit it. Once the pilot maneuvers the aircraft so that the target and pipper are superimposed, he or she fires the weapon, or on some aircraft the weapon will fire automatically at this point, in order to overcome the delay of the pilot. In the case of a missile launch, the fire-control computer may give the pilot feedback about whether the target is in range of the missile and how likely the missile is to hit if launched at any particular moment. The pilot will then wait until the probability reading is satisfactorily high before launching the weapon.
== See also ==
Target acquisition
Counter-battery radar
Director (military)
Fire-control radar
Gun stabilizer
List of U.S. Army fire control and sighting material by supply catalog designation
Predicted impact point
Ship gun fire-control systems
Tartar Guided Missile Fire Control System
== References ==
== Further reading ==
Baxter, James Phinney (1946). Scientists Against Time. Little, Brown and Company. ISBN 0-26252-012-5.
Campbell, John (1985). Naval Weapons of World War Two. Naval Institute Press. ISBN 0-87021-459-4.
Fairfield, A.P. (1921). Naval Ordnance. The Lord Baltimore Press.
Frieden, David R. (1985). Principles of Naval Weapons Systems. Naval Institute Press. ISBN 0-87021-537-X.
Friedman, Norman (2008). Naval Firepower: Battleship Guns and Gunnery in the Dreadnought Era. Seaforth. ISBN 978-1-84415-701-3.
Hans, Mort; Taranovich, Steve (10 December 2012). "Design hindsight from the tail-gunner position of a WWII bomber, Part one". EDN. Retrieved 18 August 2020.
Pollen, Antony (1980). The Great Gunnery Scandal — The Mystery of Jutland. Collins. ISBN 0-00-216298-9.
Roch, Axel. "Fire-Control and Human-Computer Interaction: Towards a History of the Computer Mouse (1940-1965)". Stanford University. Archived from the original on 15 February 2020. Retrieved 18 August 2020.
Schleihauf, William (2001). "The Dumaresq and the Dreyer". Warship International. XXXVIII (1). International Naval Research Organization: 6–29. ISSN 0043-0374.
Schleihauf, William (2001). "The Dumaresq and the Dreyer, Part II". Warship International. XXXVIII (2). International Naval Research Organization: 164–201. ISSN 0043-0374.
Schleihauf, William (2001). "The Dumaresq and the Dreyer, Part III". Warship International. XXXVIII (3). International Naval Research Organization: 221–233. ISSN 0043-0374.
Wright, Christopher C. (2004). "Questions on the Effectiveness of U.S. Navy Battleship Gunnery: Notes on the Origin of U.S. Navy Gun Fire Control System Range Keepers". Warship International. XLI (1): 55–78. ISSN 0043-0374.
== External links ==
Between Human and Machine: Feedback, Control, and Computing Before Cybernetics – Google Books
BASIC programs for battleship and antiaircraft gun fire control Archived 2012-10-03 at the Wayback Machine
National Fire Control Symposium
An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves.
Larger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.
== Discrete controllers ==
The simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel mounted which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints. Originally these would be pneumatic controllers, a few of which are still in use, but nearly all are now electronic.
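What each such controller runs is a single feedback loop, typically a PID algorithm. A minimal discrete-time sketch, with illustrative rather than tuned gains:

```python
class SingleLoopController:
    """One control loop: compare the process variable (PV) with the
    setpoint (SP) and produce a control output, as a panel-mounted
    discrete controller does on every cycle."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self._integral = 0.0
        self._prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative
```

Manual intervention by the operator corresponds to changing `setpoint`, or to bypassing `update` entirely and driving the output directly in manual mode.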
Quite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increase for a system design there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective.
== Distributed control systems ==
A distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, DCS becomes more cost effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralized control rooms and local on-plant monitoring and control.
A DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders and allows the control equipment to be networked and thereby located locally to the equipment being controlled to reduce cabling.
A DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.
The processors receive information from input modules, process the information and decide control actions to be performed by the output modules. The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.
The field inputs and outputs can be either continuously varying analog signals, e.g. a current loop, or two-state signals that switch either on or off, such as relay contacts or a semiconductor switch.
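An input module's first job with an analog signal is scaling it to engineering units. Assuming the common 4–20 mA current-loop convention, a minimal sketch:

```python
def scale_to_engineering(ma, lo_eng, hi_eng, lo_ma=4.0, hi_ma=20.0):
    """Convert a 4-20 mA analog input to engineering units (e.g. degC, bar).
    The live zero (4 mA rather than 0 mA) lets a reading below range signal
    a broken loop instead of a genuine low measurement."""
    return lo_eng + (ma - lo_ma) * (hi_eng - lo_eng) / (hi_ma - lo_ma)

# A mid-scale 12 mA reading on a 0-100 degC transmitter:
temperature_c = scale_to_engineering(12.0, 0.0, 100.0)   # 50.0
```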
Distributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals.
== SCADA systems ==
Supervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.
The SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances. This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks.
The SCADA software operates on a supervisory level as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded.
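The division of labour in the cooling-water example can be sketched as follows; the class names, gain and units are invented for illustration:

```python
class FlowLoopPLC:
    """The PLC closes the flow loop autonomously on every scan cycle."""

    def __init__(self, setpoint_lpm):
        self.setpoint = setpoint_lpm

    def scan(self, measured_lpm):
        # Crude proportional correction toward the setpoint each scan.
        return measured_lpm + 0.5 * (self.setpoint - measured_lpm)


class ScadaSupervisor:
    """SCADA changes setpoints and records alarms; it never closes the loop."""

    def __init__(self, plc, low_alarm_lpm):
        self.plc = plc
        self.low_alarm_lpm = low_alarm_lpm
        self.alarm_log = []

    def change_setpoint(self, new_sp_lpm):
        # Supervisory intervention: the operator adjusts the PLC's target.
        self.plc.setpoint = new_sp_lpm

    def monitor(self, measured_lpm):
        # Alarm display and recording, not control.
        if measured_lpm < self.low_alarm_lpm:
            self.alarm_log.append(("LOW FLOW", measured_lpm))
```

The point of the split is robustness: if the SCADA link fails, the PLC's `scan` keeps regulating flow at the last setpoint it was given.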
== Programmable logic controllers ==
PLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
== History ==
Process control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and consolidated overview of the process.
However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of distributed control was realised.
The introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. For large control systems, the general commercial name distributed control system (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks.
While the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. It was soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.
SCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. Most RTU systems always had some capacity to handle local control while the master station is not available. However, over the years RTU systems have grown more and more capable of handling local control.
The boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances. With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed.
In 1993, with the release of IEC-1131, later to become IEC-61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs, and fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions.
== Security ==
SCADA and PLCs are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, including power, water and wastewater, and safety controls, which affect the physical environment. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems.
== See also ==
Automation
Plant process and emergency shutdown systems
MTConnect
OPC Foundation
Safety instrumented system (SIS)
Control system security
Operational Technology
== References ==
== Further reading ==
Guide to Industrial Control Systems (ICS) Security, SP800-82 Rev2, National Institute of Standards and Technology, May 2015.
Walker, Mark John (2012-09-08). The Programmable Logic Controller: its prehistory, emergence and application (PDF) (PhD thesis). Department of Communication and Systems Faculty of Mathematics, Computing and Technology: The Open University. Archived (PDF) from the original on 2018-06-20. Retrieved 2018-06-20.
== External links ==
"New Age of Industrial Controllers". Archived from the original on 2016-03-03.
Proview, an open source process control system
"10 Reasons to choose PC Based Control". Manufacturing Automation. February 2015.
In linear algebra, an eigenvector (EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector v of a linear transformation T is scaled by a constant factor λ when the linear transformation is applied to it: Tv = λv. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ (possibly negative).
Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.
The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system.
== Matrices ==
For an n × n matrix A and a nonzero vector v of length n, if multiplying A by v (denoted Av) simply scales v by a factor λ, where λ is a scalar, then v is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as:
Av = λv.
Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices.
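Since eigenvalues and eigenvectors can be phrased entirely in matrix language, they can also be computed numerically. The sketch below uses NumPy (an assumed dependency, not part of the article) on a small illustrative matrix and checks the defining relation Av = λv for every eigenpair returned.

```python
import numpy as np

# A small example matrix (chosen for illustration; not from the article).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify the defining relation A v = lambda v for every eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(sorted(eigenvalues.round(6)))  # eigenvalues of this matrix are 2 and 5
```

Note that `eig` pairs each eigenvalue with the eigenvector in the corresponding column, which is why the loop iterates over the transpose.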
== Overview ==
Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation
T(v) = λv,
referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.
The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.
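The shear mapping described above is easy to check numerically. A minimal sketch (the shear factor k and the test vectors are illustrative choices, not from the article): horizontal vectors are left unchanged, so they are eigenvectors with eigenvalue one, while any vector with a vertical component is tilted.

```python
import numpy as np

# A horizontal shear: x' = x + k*y, y' = y (k is an arbitrary shear factor).
k = 0.5
S = np.array([[1.0, k],
              [0.0, 1.0]])

# A vector along the horizontal axis is unchanged: an eigenvector with eigenvalue 1.
h = np.array([1.0, 0.0])
assert np.allclose(S @ h, h)

# A vector with a vertical component is tilted, so it is not an eigenvector.
w = np.array([1.0, 1.0])
print(S @ w)  # [1.5 1. ] -- tilted, not a scalar multiple of w
```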
Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as
{\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.}
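As a quick numerical sanity check of this eigenfunction relation (a sketch, not part of the article; the value of λ and the grid are arbitrary choices), one can differentiate e^{λx} with finite differences and confirm the result is λ e^{λx}:

```python
import numpy as np

lam = 2.0
x = np.linspace(0.0, 1.0, 2001)
f = np.exp(lam * x)

# Numerical derivative of f; np.gradient uses central differences in the interior.
df = np.gradient(f, x)

# d/dx e^{lam x} = lam * e^{lam x}: the derivative is the function scaled by lam.
interior = slice(1, -1)  # edges use one-sided differences and are less accurate
assert np.allclose(df[interior], lam * f[interior], rtol=1e-4)
```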
Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication
Av = λv,
where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it.
Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:
The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue.
If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.
== History ==
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.
In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.
In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.
Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.
In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.
At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961.
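Von Mises's power method survives as a few lines of code. The sketch below (a minimal illustration assuming NumPy; the example matrix is an arbitrary choice) repeatedly applies the matrix and renormalizes, converging to the dominant eigenpair.

```python
import numpy as np

def power_method(A, num_iters=500, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)  # renormalize to avoid overflow/underflow
    # The Rayleigh quotient of the converged vector estimates the eigenvalue.
    return v @ A @ v, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_method(A)
print(round(lam, 6))  # dominant eigenvalue of [[2,1],[1,2]] is 3
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is why production methods such as the QR algorithm are preferred when all eigenvalues are needed.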
== Eigenvalues and eigenvectors of matrices ==
Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.
Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications.
Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors
{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.}
These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that
x = λy.
In this case, λ = −1/20.
Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,
Av = w,
or
{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}}
where, for each row,
{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.}
If it occurs that v and w are scalar multiples, that is if
Av = λv, (1)
then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.
Equation (1) can be stated equivalently as
(A − λI)v = 0, (2)
where I is the n by n identity matrix and 0 is the zero vector.
=== Eigenvalues and the characteristic polynomial ===
Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are the values of λ that satisfy the equation
det(A − λI) = 0. (3)
Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)nλn. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.
The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,
det(A − λI) = (λ1 − λ)(λ2 − λ)⋯(λn − λ), (4)
where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.
As a brief example, which is described in more detail in the examples section later, consider the matrix
{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
Taking the determinant of (A − λI), the characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.}
Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation
(A − λI)v = 0. In this example, the eigenvectors are any nonzero scalar multiples of
{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.}
If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.
The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.
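A plane rotation gives a concrete instance of a real matrix whose eigenvalues form such a conjugate pair. The sketch below (an illustrative example, assuming NumPy) checks that a 90-degree rotation, which changes the direction of every real vector, has eigenvalues ±i.

```python
import numpy as np

# A 90-degree rotation matrix: it changes the direction of every real vector,
# so it has no real eigenvectors and no real eigenvalues.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigs = np.linalg.eigvals(R)

# The characteristic polynomial is lambda^2 + 1, so the eigenvalues
# are the complex conjugate pair -i, +i.
assert np.allclose(sorted(eigs, key=lambda z: z.imag), [-1j, 1j])
```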
=== Spectrum of a matrix ===
The spectrum of a matrix is the list of its eigenvalues, repeated according to multiplicity; in an alternative notation, it is the set of eigenvalues with their multiplicities.
An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix.
=== Algebraic multiplicity ===
Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k divides that polynomial evenly.
Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,
{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.}
If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as
{\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}}
If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.
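Algebraic multiplicities can be read off numerically by counting repeated roots of the characteristic polynomial. A sketch (the triangular test matrix is an illustrative choice, assuming NumPy):

```python
import numpy as np
from collections import Counter

# A triangular matrix, so its eigenvalues are the diagonal entries: 2, 2, 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# np.poly gives the characteristic polynomial's coefficients; its roots are
# the eigenvalues, repeated according to algebraic multiplicity.
coeffs = np.poly(A)
roots = np.roots(coeffs)

multiplicity = Counter(int(round(r)) for r in roots.real)
print(sorted(multiplicity.items()))  # [(2, 2), (5, 1)]: mu_A(2) = 2, mu_A(5) = 1
```

Rounding is needed because a repeated root may split into nearby numerical roots; in production code one would cluster roots within a tolerance instead.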
=== Eigenspaces, geometric multiplicity, and the eigenbasis for matrices ===
Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2),
E = { v : (A − λI)v = 0 }.
On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of
ℂⁿ.
Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.
The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity
γA(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as
γA(λ) = n − rank(A − λI).
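This rank formula can be checked directly on a defective matrix, where the geometric multiplicity falls short of the algebraic one. A minimal sketch (the shear-like test matrix is an illustrative choice, assuming NumPy):

```python
import numpy as np

# A defective (shear-like) matrix: eigenvalue 2 has algebraic multiplicity 2
# but only a one-dimensional eigenspace.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]

# gamma_A(lambda) = n - rank(A - lambda*I)
geometric_multiplicity = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geometric_multiplicity)  # 1: only scalar multiples of [1, 0] are eigenvectors
```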
Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n.
1 ≤ γA(λ) ≤ μA(λ) ≤ n.
To prove the inequality γA(λ) ≤ μA(λ), consider how the definition of geometric multiplicity implies the existence of γA(λ) orthonormal eigenvectors v1, …, vγA(λ), such that Avk = λvk. We can therefore find a (unitary) matrix V whose first γA(λ) columns are these eigenvectors, and whose remaining columns can be any orthonormal set of n − γA(λ) vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating D := VᵀAV, we get a matrix whose top left block is the diagonal matrix λIγA(λ). This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding −ξV on both sides, we get (A − ξI)V = V(D − ξI), since I commutes with V. In other words, A − ξI is similar to D − ξI, and det(A − ξI) = det(D − ξI). But from the definition of D, we know that det(D − ξI) contains a factor (ξ − λ)^γA(λ), which means that the algebraic multiplicity of λ must satisfy μA(λ) ≥ γA(λ).
Suppose A has d ≤ n distinct eigenvalues λ1, …, λd, where the geometric multiplicity of λi is γA(λi). The total geometric multiplicity of A,
γA = γA(λ1) + ⋯ + γA(λd), with d ≤ γA ≤ n,
is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If γA = n, then
The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space ℂⁿ.
A basis of ℂⁿ can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis.
Any vector in ℂⁿ can be written as a linear combination of eigenvectors of A.
=== Additional properties ===
Let A be an arbitrary n × n matrix of complex numbers with eigenvalues λ1, …, λn. Each eigenvalue appears μA(λi) times in this list, where μA(λi) is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues:
The trace of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues: tr(A) = λ1 + λ2 + ⋯ + λn.
The determinant of A is the product of all its eigenvalues: det(A) = λ1λ2⋯λn.
The eigenvalues of the kth power of A, i.e. the eigenvalues of A^k, for any positive integer k, are λ1^k, …, λn^k.
The matrix A is invertible if and only if every eigenvalue is nonzero.
If A is invertible, then the eigenvalues of A^−1 are 1/λ1, …, 1/λn, and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
If A is equal to its conjugate transpose A*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
If A is unitary, every eigenvalue has absolute value |λi| = 1.
If A is an n × n matrix and {λ1, …, λk} are its eigenvalues, then the eigenvalues of the matrix I + A (where I is the identity matrix) are {λ1 + 1, …, λk + 1}. Moreover, if α ∈ ℂ, the eigenvalues of αI + A are {λ1 + α, …, λk + α}. More generally, for a polynomial P the eigenvalues of the matrix P(A) are {P(λ1), …, P(λk)}.
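Several of these properties are easy to verify numerically. The sketch below (illustrative matrix and constants, assuming NumPy) checks the trace, determinant, power, shift, and inverse rules on a 2 × 2 matrix with eigenvalues 1 and 3.

```python
import numpy as np

# Illustrative symmetric matrix; its eigenvalues are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigs = np.linalg.eigvals(A)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigs.sum())
assert np.isclose(np.linalg.det(A), eigs.prod())

# Eigenvalues of A^k are the k-th powers of the eigenvalues.
k = 3
assert np.allclose(sorted(np.linalg.eigvals(np.linalg.matrix_power(A, k))),
                   sorted(eigs**k))

# Eigenvalues of alpha*I + A are shifted by alpha; of the inverse, reciprocated.
alpha = 5.0
assert np.allclose(sorted(np.linalg.eigvals(alpha * np.eye(2) + A)),
                   sorted(eigs + alpha))
assert np.allclose(sorted(np.linalg.eigvals(np.linalg.inv(A))),
                   sorted(1.0 / eigs))
```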
=== Left and right eigenvectors ===
Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n matrix A in the defining equation (1), Av = λv.
The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A. In this formulation, the defining equation is
uA = κu,
where κ is a scalar and u is a 1 × n matrix. Any row vector u satisfying this equation is called a left eigenvector of A and κ is its associated eigenvalue. Taking the transpose of this equation,
Aᵀuᵀ = κuᵀ.
Comparing this equation to equation (1), it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of Aᵀ, with the same eigenvalue. Furthermore, since the characteristic polynomial of Aᵀ is the same as the characteristic polynomial of A, the left and right eigenvectors of A are associated with the same eigenvalues.
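The transpose relationship gives a direct way to compute left eigenvectors with a routine that only produces right ones. A sketch (the asymmetric test matrix is an illustrative choice, assuming NumPy):

```python
import numpy as np

# An asymmetric illustrative matrix; its eigenvalues are 2 and 3.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Right eigenvectors: columns of V.
lams, V = np.linalg.eig(A)

# A left eigenvector of A is the transpose of a right eigenvector of A.T,
# and A.T has the same characteristic polynomial, hence the same eigenvalues.
lams_T, W = np.linalg.eig(A.T)
assert np.allclose(sorted(lams), sorted(lams_T))

# Check the left-eigenvector equation u A = kappa u for each column of W.
for kappa, u in zip(lams_T, W.T):
    assert np.allclose(u @ A, kappa * u)
```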
=== Diagonalization and the eigendecomposition ===
Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,
Q = [v1 v2 ⋯ vn].
Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,
AQ = [λ1v1 λ2v2 ⋯ λnvn].
With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then
AQ = QΛ.
Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1,
A = QΛQ⁻¹,
or by instead left multiplying both sides by Q−1,
Q⁻¹AQ = Λ.
A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.
Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
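The eigendecomposition is straightforward to verify numerically. A minimal sketch (illustrative matrix, assuming NumPy) builds Q and Λ from the computed eigenpairs and checks all three forms of the identity.

```python
import numpy as np

# Eigendecomposition of a diagonalizable matrix: A = Q Lambda Q^{-1}.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lams, Q = np.linalg.eig(A)  # columns of Q are linearly independent eigenvectors
Lam = np.diag(lams)         # Lambda: eigenvalues along the diagonal

assert np.allclose(A @ Q, Q @ Lam)                 # AQ = Q Lambda
assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))  # A = Q Lambda Q^{-1}
assert np.allclose(np.linalg.inv(Q) @ A @ Q, Lam)  # Q^{-1} A Q = Lambda
```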
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.
=== Variational characterization ===
In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of {\displaystyle H} is the maximum value of the quadratic form {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} }. A value of {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector.
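As a sketch of this characterization (an illustration added here, assuming NumPy and a real symmetric matrix), the quotient over random directions never exceeds the largest eigenvalue, and the corresponding eigenvector attains it:

```python
import numpy as np

# For symmetric H, max over nonzero x of x^T H x / x^T x is the largest eigenvalue.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
H = (B + B.T) / 2                      # symmetrize to get a Hermitian (real symmetric) H

eigenvalues = np.linalg.eigvalsh(H)    # sorted ascending
largest = eigenvalues[-1]

# Sample many random directions; the quotient is bounded by the largest eigenvalue.
xs = rng.standard_normal((1000, 4))
quotients = np.einsum('ij,jk,ik->i', xs, H, xs) / np.einsum('ij,ij->i', xs, xs)
assert quotients.max() <= largest + 1e-12

# The eigenvector for the largest eigenvalue achieves the maximum exactly.
w, V = np.linalg.eigh(H)
v = V[:, -1]
assert np.isclose(v @ H @ v / (v @ v), largest)
```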
=== Matrix examples ===
==== Two-dimensional matrix example ====
Consider the matrix
{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.
Taking the determinant to find the characteristic polynomial of A,
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}}
Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A.
For λ=1, equation (2) becomes,
{\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}}
{\displaystyle 1v_{1}+1v_{2}=0}
Any nonzero vector with v1 = −v2 solves this equation. Therefore,
{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}}
is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.
For λ=3, equation (2) becomes
{\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}}
Any nonzero vector with v1 = v2 solves this equation. Therefore,
{\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}}
is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.
Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively.
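This worked example can be confirmed directly (an added sketch, assuming NumPy):

```python
import numpy as np

# Verify the 2x2 example: A has eigenvalues 1 and 3 with eigenvectors (1, -1) and (1, 1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v1 = np.array([1.0, -1.0])   # eigenvector for λ = 1
v3 = np.array([1.0, 1.0])    # eigenvector for λ = 3

assert np.allclose(A @ v1, 1 * v1)
assert np.allclose(A @ v3, 3 * v3)
assert np.allclose(np.sort(np.linalg.eigvals(A)), [1.0, 3.0])
```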
==== Three-dimensional matrix example ====
Consider the matrix
{\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}}
The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors
{\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}}, {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}}, and {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}}, or any nonzero multiple thereof.
==== Three-dimensional matrix example with complex eigenvalues ====
Consider the cyclic permutation matrix
{\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.}
This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are
{\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}}
where {\displaystyle i} is the imaginary unit with {\displaystyle i^{2}=-1}.
For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example,
{\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.}
For the complex conjugate pair of eigenvalues, note that
{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.}
Then
{\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},}
and
{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.}
Therefore, the other two eigenvectors of A are complex and are
{\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}} and {\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}}
with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,
{\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.}
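The complex eigenpairs of the cyclic permutation matrix can be checked numerically (an added sketch, assuming NumPy; the eigenvalues are the three cube roots of unity):

```python
import numpy as np

# Verify the cyclic permutation example: v = (1, λ2, λ3) is an eigenvector for λ2.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)

lam2 = -0.5 + 1j * np.sqrt(3) / 2
lam3 = lam2.conjugate()

v2 = np.array([1, lam2, lam3])
assert np.allclose(A @ v2, lam2 * v2)

# All three eigenvalues are cube roots of unity: λ^3 = 1 for each.
w = np.linalg.eigvals(A)
assert np.allclose(w**3, 1)
```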
==== Diagonal matrix example ====
Matrices with nonzero entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix
{\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,
{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
==== Triangular matrix example ====
A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.
Consider the lower triangular matrix,
{\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
These eigenvalues correspond to the eigenvectors,
{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
==== Matrix with repeated eigenvalues example ====
As in the previous example, the lower triangular matrix
{\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},}
has a characteristic polynomial that is the product of its diagonal elements,
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.}
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the degree of the characteristic polynomial and the order of A.
On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector
{\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}}
and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector
{\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}}. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.
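The geometric multiplicity is the dimension of the null space of A − λI, which can be computed as n minus the rank of that matrix. A short sketch for this example (added here, assuming NumPy):

```python
import numpy as np

# Geometric multiplicity of λ = dim null(A - λI) = n - rank(A - λI).
A = np.array([[2, 0, 0, 0],
              [1, 2, 0, 0],
              [0, 1, 3, 0],
              [0, 0, 1, 3]], dtype=float)

def geometric_multiplicity(A, lam):
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Both double eigenvalues have geometric multiplicity 1, not 2.
assert geometric_multiplicity(A, 2.0) == 1
assert geometric_multiplicity(A, 3.0) == 1
```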
=== Eigenvector-eigenvalue identity ===
For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,
{\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},}
where {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.
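The identity is easy to check numerically. A sketch (added here, assuming NumPy; the indices i, j below are an arbitrary choice for illustration):

```python
import numpy as np

# Check |v_{i,j}|^2 * Π_{k≠i}(λ_i − λ_k) = Π_k (λ_i − λ_k(M_j))
# for a random real symmetric (Hermitian) matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
H = (B + B.T) / 2

lam, V = np.linalg.eigh(H)       # eigenvalues λ_k and normalized eigenvectors

i, j = 2, 1                      # arbitrary eigenvalue index i and component j
M_j = np.delete(np.delete(H, j, axis=0), j, axis=1)   # delete jth row and column
mu = np.linalg.eigvalsh(M_j)     # eigenvalues λ_k(M_j) of the minor

lhs = abs(V[j, i])**2 * np.prod([lam[i] - lam[k] for k in range(4) if k != i])
rhs = np.prod(lam[i] - mu)
assert np.isclose(lhs, rhs)
```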
== Eigenvalues and eigenfunctions of differential operators ==
The definitions of eigenvalue and eigenvector of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces is the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation
{\displaystyle Df(t)=\lambda f(t)}
The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.
=== Derivative operator example ===
Consider the derivative operator
{\displaystyle {\tfrac {d}{dt}}}
with eigenvalue equation
{\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).}
This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function
{\displaystyle f(t)=f(0)e^{\lambda t},}
is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.
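A quick numerical check of this eigenfunction (added here, assuming NumPy; the derivative is approximated by a centered finite difference):

```python
import numpy as np

# f(t) = f(0) e^{λt} satisfies df/dt = λ f for any λ; here λ = 0.7, f(0) = 2.
lam = 0.7
f = lambda t: 2.0 * np.exp(lam * t)

t = np.linspace(0.0, 1.0, 5)
h = 1e-6
df = (f(t + h) - f(t - h)) / (2 * h)   # centered-difference derivative

assert np.allclose(df, lam * f(t), rtol=1e-6)
```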
The main eigenfunction article gives other examples.
== General definition ==
The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,
{\displaystyle T:V\to V.}
We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that
{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .}
This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.
=== Eigenspaces, geometric multiplicity, and the eigenbasis ===
Given an eigenvalue λ, consider the set
{\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},}
which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.
By definition of a linear transformation,
{\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}}
for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then
{\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}}
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.
If that subspace has dimension 1, it is sometimes called an eigenline.
The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.
The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.
Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.
=== Spectral theory ===
If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
=== Associative algebras and representation theory ===
One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.
The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.
A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence.
== Dynamic equations ==
The simplest difference equations have the form
{\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.}
The solution of this equation for x in terms of t is found by using its characteristic equation
{\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,}
which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations
{\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},}
giving a k-dimensional system of the first order in the stacked variable vector
{\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}}
in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots
{\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},}
for use in the solution equation
{\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.}
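The procedure above can be sketched concretely for the Fibonacci recurrence x_t = x_{t−1} + x_{t−2} (an illustration added here, assuming NumPy): the characteristic roots are the eigenvalues of the companion matrix, and the constants c1, c2 are fit to the initial conditions.

```python
import numpy as np

# Companion matrix of λ² − λ − 1 = 0, the characteristic equation of
# the recurrence x_t = x_{t-1} + x_{t-2}.
C = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lam = np.linalg.eigvals(C)            # the two characteristic roots (1 ± √5)/2

# Fit c1, c2 to the initial conditions x_0 = 0, x_1 = 1:
#   c1 + c2 = 0,  c1*λ1 + c2*λ2 = 1.
c = np.linalg.solve(np.array([[1.0, 1.0], lam]), np.array([0.0, 1.0]))

x = lambda t: c[0] * lam[0]**t + c[1] * lam[1]**t
fib = [round(x(t)) for t in range(10)]
assert fib == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```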
A similar procedure is used for solving a differential equation of the form
{\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.}
== Calculation ==
The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.
=== Classical method ===
The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point.
==== Eigenvalues ====
The eigenvalues of a matrix {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix.
In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an {\displaystyle n\times n} matrix is a sum of {\displaystyle n!} different products.
Explicit algebraic formulas for the roots of a polynomial exist only if the degree {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree {\displaystyle n} is the characteristic polynomial of some companion matrix of order {\displaystyle n}.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.
==== Eigenvectors ====
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, which becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
{\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}}
we can find its eigenvectors by solving the equation {\displaystyle Av=6v}, that is,
{\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}}
This matrix equation is equivalent to two linear equations
{\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.}
that is
{\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.}
Both equations reduce to the single linear equation {\displaystyle y=2x}. Therefore, any vector of the form {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle a}, is an eigenvector of {\displaystyle A} with eigenvalue {\displaystyle \lambda =6}.
The matrix {\displaystyle A} above has another eigenvalue {\displaystyle \lambda =1}. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of {\displaystyle 3x+y=0}, that is, any vector of the form {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle b}.
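In code, a nonzero solution of (A − 6I)v = 0 can be found from the null space of A − 6I, for example via the SVD. A sketch (added here, assuming NumPy; using the SVD for the null space is one choice among several):

```python
import numpy as np

# Classical method, step two: given eigenvalue 6 of A, find an eigenvector
# as a null-space vector of (A - 6I) via the SVD.
A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

_, s, Vt = np.linalg.svd(A - 6 * np.eye(2))
v = Vt[-1]                       # right-singular vector for the smallest singular value
assert np.isclose(s[-1], 0.0, atol=1e-12)   # A - 6I is singular
assert np.allclose(A @ v, 6 * v)

# The solution has the predicted form (a, 2a)^T.
assert np.isclose(v[1], 2 * v[0])
```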
=== Simple iterative methods ===
The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by {\displaystyle (A-\mu I)^{-1}}; this causes it to converge to an eigenvector of the eigenvalue closest to {\displaystyle \mu \in \mathbb {C} }.
If {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of {\displaystyle A}, then the corresponding eigenvalue can be computed as
{\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}}
where {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of {\displaystyle \mathbf {v} }.
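The iteration described above can be sketched in a few lines (an illustration added here, assuming NumPy; the starting vector and iteration count are arbitrary choices):

```python
import numpy as np

# Power iteration: repeated multiplication by A converges to an eigenvector
# of the dominant eigenvalue; the Rayleigh quotient then recovers the eigenvalue.
def power_iteration(A, iterations=200):
    v = np.array([1.0] + [0.0] * (A.shape[0] - 1))  # arbitrary starting vector
    for _ in range(iterations):
        v = A @ v
        v = v / np.linalg.norm(v)    # normalize to keep entries of reasonable size
    lam = v @ A @ v / (v @ v)        # Rayleigh quotient v*Av / v*v
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
assert np.isclose(lam, 3.0)          # dominant eigenvalue of this A
assert np.allclose(A @ v, lam * v)
```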
=== Modern methods ===
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.
Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.
== Applications ==
=== Geometric transformations ===
Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes.
The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
The characteristic equation for a rotation is a quadratic equation with discriminant {\displaystyle D=-4(\sin \theta )^{2}}, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, {\displaystyle \cos \theta \pm i\sin \theta }; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
=== Principal component analysis ===
The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.
Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
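PCA via the covariance eigendecomposition can be sketched directly (an illustration added here, assuming NumPy; the synthetic two-variable data set is invented for the example):

```python
import numpy as np

# PCA sketch: eigenvectors of the sample covariance matrix are the principal
# components; each eigenvalue is the variance explained along that component.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
# Second variable is mostly a multiple of the first, plus small noise.
data = np.column_stack([x, 2 * x + 0.1 * rng.standard_normal(500)])

cov = np.cov(data, rowvar=False)             # 2x2 sample covariance (PSD)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# The dominant principal component should point roughly along (1, 2).
pc1 = eigenvectors[:, -1]
direction = pc1 / pc1[0]                     # fix the sign/scale for comparison
assert np.allclose(direction, [1.0, 2.0], atol=0.1)

# The eigenvalues sum to the total variance (trace of the covariance).
assert np.isclose(eigenvalues.sum(), np.trace(cov))
```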
=== Graphs ===
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix {\displaystyle A}, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where {\displaystyle D} is a diagonal matrix with {\displaystyle D_{ii}} equal to the degree of vertex {\displaystyle v_{i}}, and in {\displaystyle D^{-1/2}}, the {\displaystyle i}th diagonal entry is {\textstyle 1/{\sqrt {\deg(v_{i})}}}. The {\displaystyle k}th principal eigenvector of a graph is defined as either the eigenvector corresponding to the {\displaystyle k}th largest or {\displaystyle k}th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
=== Markov chains ===
A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
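The steady state is the left eigenvector of the transition matrix for the eigenvalue 1. A minimal sketch (added here, assuming NumPy; the two-state transition matrix is an invented example):

```python
import numpy as np

# Stationary distribution π of a Markov chain: the left eigenvector of P for
# eigenvalue 1 (equivalently, an eigenvector of P^T), normalized to sum to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])          # rows are probabilities and sum to one

w, V = np.linalg.eig(P.T)
k = np.argmin(np.abs(w - 1.0))      # pick the eigenvalue (closest to) 1
pi = np.real(V[:, k])
pi = pi / pi.sum()

assert np.allclose(pi @ P, pi)      # stationarity: πP = π
assert np.allclose(pi, [5/6, 1/6])  # detailed balance here: 0.1·π0 = 0.5·π1
```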
=== Vibration analysis ===
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by
{\displaystyle m{\ddot {x}}+kx=0}
or
{\displaystyle m{\ddot {x}}=-kx}
That is, acceleration is proportional to position (i.e., we expect {\displaystyle x} to be sinusoidal in time).
In {\displaystyle n} dimensions, {\displaystyle m} becomes a mass matrix and {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem
{\displaystyle kx=\omega ^{2}mx}
where {\displaystyle \omega ^{2}} is the eigenvalue and {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of {\displaystyle k} alone. Furthermore, damped vibration, governed by
{\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0}
leads to a so-called quadratic eigenvalue problem,
{\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.}
This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
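For invertible m, the undamped generalized problem k x = ω² m x reduces to the ordinary eigenvalue problem (m⁻¹k) x = ω² x. A sketch for a two-mass system (added here, assuming NumPy; the mass and stiffness values are invented for the example):

```python
import numpy as np

# Natural frequencies of an undamped 2-DOF system from k x = ω² m x.
m = np.diag([2.0, 1.0])                  # mass matrix (example values)
k = np.array([[ 6.0, -2.0],
              [-2.0,  4.0]])             # stiffness matrix (example values)

# Reduce to an ordinary eigenvalue problem since m is invertible.
omega_sq, modes = np.linalg.eig(np.linalg.inv(m) @ k)
omega = np.sqrt(omega_sq)                # natural angular frequencies

# Each mode satisfies the generalized eigenvalue equation k x = ω² m x.
for w2, x in zip(omega_sq, modes.T):
    assert np.allclose(k @ x, w2 * (m @ x))
```

In practice one would use a solver that handles the generalized problem directly (e.g. one accepting both k and m), which avoids forming m⁻¹ explicitly.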
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.
=== Tensor of moment of inertia ===
In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
=== Stress tensor ===
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
=== Schrödinger equation ===
An example of an eigenvalue equation where the transformation {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
{\displaystyle H\psi _{E}=E\psi _{E}}
where {\displaystyle H}, the Hamiltonian, is a second-order differential operator and {\displaystyle \psi _{E}}, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue {\displaystyle E}, interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which {\displaystyle \psi _{E}} and {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.
The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by {\displaystyle |\Psi _{E}\rangle }. In this notation, the Schrödinger equation is:
{\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle }
where |Ψ_E⟩ is an eigenstate of H and E represents the eigenvalue. H is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H|Ψ_E⟩ is understood to be the vector obtained by application of the transformation H to |Ψ_E⟩.
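The matrix form can be made concrete with a standard textbook discretization, sketched here for a particle in a box. Assuming units with ħ = 2m = 1, the Hamiltonian H = −d²/dx² on [0, 1] is approximated by a finite-difference matrix whose lowest eigenvalues approach the exact energies (kπ)²; the grid size below is an arbitrary choice.

```python
import numpy as np

# Sketch: discretize H = -d^2/dx^2 (units hbar = 2m = 1) for a particle in
# a box on [0, 1], then solve H psi = E psi as a matrix eigenvalue problem.
n = 200                                  # interior grid points (assumed resolution)
h = 1.0 / (n + 1)
main = np.full(n, 2.0) / h**2
off = np.full(n - 1, -1.0) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)               # eigenvalues in ascending order
# Exact bound-state energies are (k*pi)^2 for k = 1, 2, ...
print(E[:3])
```

The columns of `psi` are the discretized eigenfunctions; finer grids reduce the discretization error in both energies and wavefunctions.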
=== Wave transport ===
Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t. The eigenvectors of the transmission operator t†t form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, τ, of t†t correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with τ_max = 1 and τ_min = 0. Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.
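The eigenchannel decomposition itself is easy to demonstrate on a stand-in matrix. The sketch below uses a Gaussian random matrix as a hypothetical transmission matrix t (a real diffusive medium would have correlations that produce the bimodal distribution, which this toy model does not reproduce); the eigenvectors of t†t are the input wavefronts and its eigenvalues the transmittances τ.

```python
import numpy as np

# Toy stand-in for a field transmission matrix t (NOT a physical medium):
# a complex Gaussian random matrix, normalized so transmittances are O(1).
rng = np.random.default_rng(0)
N = 64
t = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)

# Eigen-decomposition of the (Hermitian) transmission operator t^dagger t:
# columns of v are input wavefronts, tau are the channel transmittances.
tau, v = np.linalg.eigh(t.conj().T @ t)
print(tau.min(), tau.max())              # transmittances are non-negative
```

Injecting the wavefront `v[:, k]` into the system yields transmitted intensity `tau[k]`, which is what makes these the "eigenchannels" of the medium.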
=== Molecular orbitals ===
In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.
=== Geology and glaciology ===
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects three-dimensional space onto a two-dimensional plane. A type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms.
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v₁, v₂, v₃ by their eigenvalues E₁ ≥ E₂ ≥ E₃; v₁ then is the primary orientation/dip of clast, v₂ is the secondary and v₃ is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E₁, E₂, and E₃ are dictated by the nature of the sediment's fabric. If E₁ = E₂ = E₃, the fabric is said to be isotropic. If E₁ = E₂ > E₃, the fabric is said to be planar. If E₁ > E₂ > E₃, the fabric is said to be linear.
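The orientation tensor and its fabric classification can be sketched on synthetic data. The clast axes below are randomly generated around a made-up preferred direction, so the resulting fabric should come out strongly linear (E₁ ≫ E₂, E₃).

```python
import numpy as np

# Sketch: orientation tensor from hypothetical clast long-axis unit vectors.
rng = np.random.default_rng(1)
# Synthetic "linear" fabric: axes clustered around a preferred direction.
axes = rng.normal([1.0, 0.2, 0.1], 0.15, size=(500, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)

T = axes.T @ axes / len(axes)            # symmetric 3x3 orientation tensor
E, v = np.linalg.eigh(T)                 # eigenvalues in ascending order
E1, E2, E3 = E[2], E[1], E[0]            # reorder so E1 >= E2 >= E3
v1 = v[:, 2]                             # primary orientation

print(E1, E2, E3)                        # E1 >> E2, E3 indicates a linear fabric
```

Because the axes are unit vectors, E₁ + E₂ + E₃ = 1, so the eigenvalues directly give the fraction of the fabric aligned with each principal direction.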
=== Basic reproduction number ===
The basic reproduction number (R₀) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R₀ is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t_G, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t_G has passed. The value R₀ is then the largest eigenvalue of the next generation matrix.
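This computation is a one-liner once the next generation matrix is known. The 2-group matrix below uses made-up entries, where K[i, j] is the expected number of new infections in group i caused by one infected individual in group j.

```python
import numpy as np

# Hypothetical 2-group next generation matrix (illustrative numbers).
K = np.array([[1.0, 0.5],
              [0.6, 0.8]])

# R0 is the largest eigenvalue (spectral radius) of the next generation matrix.
R0 = np.abs(np.linalg.eigvals(K)).max()
print(R0)   # R0 > 1 indicates the outbreak grows
```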
=== Eigenfaces ===
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research on eigen vision systems determining hand gestures has also been conducted.
Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
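The common mechanism behind eigenfaces and eigenvoices is principal component analysis of a covariance matrix. The sketch below uses random vectors in place of real face images (a real pipeline would load and normalize photographs); the leading eigenvectors play the role of eigenfaces, and each image is compressed to a few coefficients.

```python
import numpy as np

# Stand-in data: each "image" is a vector of pixel brightnesses.
rng = np.random.default_rng(42)
n_images, n_pixels = 100, 64
X = rng.standard_normal((n_images, n_pixels))
X -= X.mean(axis=0)                       # center the data

C = X.T @ X / (n_images - 1)              # pixel covariance matrix
evals, evecs = np.linalg.eigh(C)
eigenfaces = evecs[:, ::-1].T             # rows, sorted by decreasing variance

# Compression: any centered image ~ linear combination of top-k eigenfaces.
k = 10
codes = X @ eigenfaces[:k].T              # k coefficients per image
approx = codes @ eigenfaces[:k]           # low-rank reconstruction
print(approx.shape)
```

Identification then compares the k-dimensional `codes` instead of full pixel vectors, which is the data-compression benefit described above.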
== See also ==
Antieigenvalue theory
Eigenoperator
Eigenplane
Eigenmoments
Eigenvalue algorithm
Quantum states
Jordan normal form
List of numerical-analysis software
Nonlinear eigenproblem
Normal eigenvalue
Quadratic eigenvalue problem
Singular value
Spectrum of a matrix
== Notes ==
=== Citations ===
== Sources ==
== Further reading ==
== External links ==
What are Eigen Values? – non-technical introduction from PhysLink.com's "Ask the Experts"
Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu.
Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy
Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10 – A visual explanation with 3Blue1Brown
Matrix Eigenvectors Calculator from Symbolab (Click on the bottom right button of the 2×12 grid to select a matrix size. Select an n×n size (for a square matrix), then fill out the entries numerically and click on the Go button. It can accept complex numbers as well.)
Wikiversity uses introductory physics to introduce Eigenvalues and eigenvectors
=== Theory ===
Computation of Eigenvalues
Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
Control engineering, also known as control systems engineering and, in some European countries, automation engineering, is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering, chemical engineering and mechanical engineering at many institutions around the world.
The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems.
== Overview ==
Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem.
Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering.
Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.
In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a proportional–integral–derivative controller (PID controller) system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved.
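The cruise-control feedback loop described above can be sketched as a short simulation. The plant model (a first-order speed response with drag) and all coefficients are illustrative, not taken from a real vehicle; the PID controller computes torque from the speed error.

```python
# Minimal discrete PID loop for a cruise-control-style example.
# Plant and gains are made-up, illustrative values.
def simulate_pid(kp, ki, kd, setpoint=25.0, dt=0.1, steps=600):
    speed, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - speed
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # torque command
        prev_err = err
        # toy plant: acceleration = torque minus speed-proportional drag
        speed += (u - 0.5 * speed) * dt
    return speed

final = simulate_pid(kp=1.0, ki=0.2, kd=0.05)
print(final)   # settles near the 25 m/s setpoint
```

The integral term is what removes the steady-state error that a purely proportional controller would leave against the drag term.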
Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.
== History ==
Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the water clock of Ktesibios in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device, as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop", automatic control devices include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788.
In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis.
Control theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes.
Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, previous to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today.
=== Mathematical modelling ===
David Quinn Mayne (1930–2024) was among the early developers of a rigorous mathematical method for analysing model predictive control algorithms (MPC). It is currently used in tens of thousands of applications and is a core part of the advanced control technology of hundreds of process control producers. MPC's major strength is its capacity to deal with nonlinearities and hard constraints in a simple and intuitive fashion. His work underpins a class of algorithms that are provably correct, heuristically explainable, and yield control system designs which meet practically important objectives.
== Control systems ==
== Control theory ==
== Education ==
At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses can be instructed in mechatronics engineering, and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles in control engineering. Other engineering disciplines also overlap with control engineering as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist, for example, in Italy there are several master in Automation & Robotics that are fully specialised in Control engineering or the Department of Automatic Control and Systems Engineering at the University of Sheffield or the Department of Robotics and Control Engineering at the United States Naval Academy and the Department of Control and Automation Engineering at the Istanbul Technical University.
Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domains, which requires a thorough background in elementary mathematics and the Laplace transform, called classical control theory. In linear control, the student does frequency- and time-domain analysis. Digital control and nonlinear control courses require the Z-transform and algebra respectively, and could be said to complete a basic control education.
== Careers ==
A control engineer's career typically starts with a bachelor's degree and can continue through graduate study. Control engineering degrees are typically paired with an electrical or mechanical engineering degree, but can also be paired with a degree in chemical engineering. According to a Control Engineering survey, most respondents were control engineers in various forms of their own career.
Few careers are classified outright as "control engineer"; most are specific roles that bear only a partial resemblance to the overarching career of control engineering. A majority of the control engineers who took the survey in 2019 are system or product designers, or control or instrument engineers. Most of the jobs involve process engineering, production, or maintenance, and all are some variation of control engineering.
Because of this, there are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, chemical companies, petroleum companies, and government agencies. Some places that hire control engineers include companies such as Rockwell Automation, NASA, Ford, Phillips 66, Eastman, and Goodrich. Control engineers can earn around $66k annually at Lockheed Martin Corp., and up to $96k annually at General Motors Corporation. Process control engineers, typically found in refineries and specialty chemical plants, can earn upwards of $90k annually.
In India, control System Engineering is provided at different levels with a diploma, graduation and postgraduation. These programs require the candidate to have chosen physics, chemistry and mathematics for their secondary schooling or relevant bachelor's degree for postgraduate studies.
== Recent advancement ==
Originally, control engineering was all about continuous systems. Development of computer control tools posed a requirement for discrete control system engineering because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent of the Laplace transform in the discrete domain is the Z-transform. Today, many control systems are computer controlled and consist of both digital and analog components.
Therefore, at the design stage either:
Digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or
Analog components are mapped into discrete domain and design is carried out there.
The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous systems components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers.
Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design and now to computer-automated design or CAD which has been made possible by evolutionary computation. CAD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme.
Resilient control systems extend the traditional focus of addressing only planned disturbances to frameworks and attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming behaviors of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.
== See also ==
== References ==
== Further reading ==
D. Q. Mayne (1965). P. H. Hammond (ed.). A Gradient Method for Determining Optimal Control of Nonlinear Stochastic Systems in Proceedings of IFAC Symposium, Theory of Self-Adaptive Control Systems. Plenum Press. pp. 19–27.
Bennett, Stuart (June 1986). A history of control engineering, 1800-1930. IET. ISBN 978-0-86341-047-5.
Bennett, Stuart (1993). A history of control engineering, 1930-1955. IET. ISBN 978-0-86341-299-8.
Christopher Kilian (2005). Modern Control Technology. Thompson Delmar Learning. ISBN 978-1-4018-5806-3.
Arnold Zankl (2006). Milestones in Automation: From the Transistor to the Digital Factory. Wiley-VCH. ISBN 978-3-89578-259-6.
Franklin, Gene F.; Powell, J. David; Emami-Naeini, Abbas (2014). Feedback control of dynamic systems (7th ed.). Stanford Cali. U.S.: Pearson. p. 880. ISBN 9780133496598.
== External links ==
Control Labs Worldwide
The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook
Control System Integrators Association
List of control systems integrators
Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG)
Systems Science & Control Engineering: An Open Access Journal
Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but implementing only the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.
== Overview ==
The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.
MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.
While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.
An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide significant reduction in online computations while maintaining comparative performance to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on exchange of information among controllers.
=== Theory behind MPC ===
MPC is based on iterative, finite-horizon optimization of a plant model. At time t the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: [t, t + T]. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time t + T. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.
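The receding-horizon loop can be illustrated with a deliberately tiny example. The plant below is a made-up unstable scalar system x⁺ = a·x + b·u, and in place of a numerical optimizer the finite-horizon cost is minimized by brute force over a coarse grid of input sequences; only the first input of the winning sequence is applied each step, exactly as described above.

```python
import itertools

# Toy receding-horizon controller (illustrative, not an industrial solver).
a, b = 1.1, 1.0                              # unstable open-loop plant (made up)
horizon = 3
candidates = [u / 4 for u in range(-8, 9)]   # input grid: -2.0 ... 2.0

def plan(x):
    def cost(seq):
        total, xi = 0.0, x
        for u in seq:
            xi = a * xi + b * u              # predict with the plant model
            total += xi**2 + 0.1 * u**2      # quadratic stage cost
        return total
    best = min(itertools.product(candidates, repeat=horizon), key=cost)
    return best[0]                           # implement only the first move

x = 5.0
for _ in range(20):                          # the horizon recedes each step
    x = a * x + b * plan(x)
print(round(x, 3))   # driven toward the origin despite the unstable plant
```

Real MPC replaces the brute-force search with a structured (often convex) optimization, but the shift-and-reoptimize pattern is the same.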
=== Principles of MPC ===
Model predictive control is a multivariable control algorithm that uses:
an internal dynamic model of the process
a cost function J over the receding horizon
an optimization algorithm minimizing the cost function J using the control input u
An example of a quadratic cost function for optimization is given by:
J = Σ_{i=1}^{N} w_{x_i} (r_i − x_i)² + Σ_{i=1}^{M} w_{u_i} (Δu_i)²
without violating constraints (low/high limits), with
x_i: i-th controlled variable (e.g. measured temperature)
r_i: i-th reference variable (e.g. required temperature)
u_i: i-th manipulated variable (e.g. control valve)
w_{x_i}: weighting coefficient reflecting the relative importance of x_i
w_{u_i}: weighting coefficient penalizing relatively big changes in u_i
etc.
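A quick numeric check of a quadratic cost of this form, here evaluated over a short hypothetical prediction horizon for a single controlled variable (all numbers are made up): the first sum penalizes tracking error, the second penalizes large control moves.

```python
# Illustrative evaluation of a quadratic MPC cost (made-up numbers).
r = [22.0, 22.0, 22.0]          # reference temperature over the horizon
x = [20.0, 21.0, 21.5]          # predicted controlled variable
du = [1.0, 0.5, 0.25]           # predicted control moves
w_x, w_u = 1.0, 0.1             # tracking and move-suppression weights

J = sum(w_x * (ri - xi) ** 2 for ri, xi in zip(r, x)) \
    + sum(w_u * dui ** 2 for dui in du)
print(J)   # 4.0 + 1.0 + 0.25 + 0.1*(1 + 0.25 + 0.0625) = 5.38125
```

Raising `w_u` relative to `w_x` trades tracking accuracy for smoother control action.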
== Nonlinear MPC ==
Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.
The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting methods, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is exploited even further by path-following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead only take a few iterations towards the solution of the most current NMPC problem before proceeding to the next one, which is suitably initialized. Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method. Optimum solutions are found by generating random samples that satisfy the constraints in the solution space and finding the optimum one based on the cost function.
While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time.
== Explicit MPC ==
Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to the online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem, formulated as an optimization problem, is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA), hence the eMPC controller stores the coefficients of the PWA for each subset (control region) of the state space where the PWA is constant, as well as coefficients of some parametric representations of all the regions. Every region turns out to be, geometrically, a convex polytope for linear MPC, commonly parameterized by coefficients for its faces, requiring quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second a mere evaluation of the PWA using the coefficients stored for that region. If the total number of regions is small, implementing the eMPC does not require significant computational resources (compared to online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is the exponential growth of the total number of control regions with respect to some key parameters of the controlled system, e.g., the number of states, which dramatically increases controller memory requirements and makes the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.
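The online part of explicit MPC reduces to a region lookup plus one affine evaluation, which the sketch below shows for a one-dimensional state. The region boundaries and coefficients are invented for illustration (they resemble a saturated linear feedback law, not the output of an actual parametric solver).

```python
import numpy as np

# Sketch of explicit MPC evaluation: the offline solution is stored as
# regions (intervals, for a 1-D state) with an affine law u = F*x + g each.
# All boundaries and coefficients below are made up for illustration.
regions = [
    {"lo": -np.inf, "hi": -1.0,   "F": 0.0,  "g": 1.0},   # input saturated high
    {"lo": -1.0,    "hi": 1.0,    "F": -1.0, "g": 0.0},   # interior affine law
    {"lo": 1.0,     "hi": np.inf, "F": 0.0,  "g": -1.0},  # input saturated low
]

def empc(x):
    for reg in regions:            # point location (a search tree in practice)
        if reg["lo"] <= x < reg["hi"]:
            return reg["F"] * x + reg["g"]
    return regions[-1]["F"] * x + regions[-1]["g"]

print(empc(0.5), empc(2.0), empc(-3.0))   # -0.5 -1.0 1.0
```

In higher dimensions the regions are convex polytopes and the point-location step dominates the online cost, which is the drawback noted above.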
== Robust MPC ==
Robust variants of model predictive control are able to account for set bounded disturbance while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below.
Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance. This is the optimal solution to linear robust control problems; however, it carries a high computational cost. The basic idea behind the min-max MPC approach is to modify the on-line "min" optimization to a "min-max" problem, minimizing the worst case of the objective function, maximized over all possible plants from the uncertainty set.
Constraint tightening MPC. Here the state constraints are tightened by a given margin so that a trajectory can be guaranteed to be found under any evolution of the disturbance.
Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state. The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by disturbance with the feedback controller.
Multi-stage MPC. This uses a scenario-tree formulation by approximating the uncertainty space with a set of samples and the approach is non-conservative because it takes into account that the measurement information is available at every time stage in the prediction and the decisions at every stage can be different and can act as recourse to counteract the effects of uncertainties. The drawback of the approach however is that the size of the problem grows exponentially with the number of uncertainties and the prediction horizon.
Tube-enhanced multi-stage MPC. This approach synergizes multi-stage MPC and tube-based MPC. It provides high degrees of freedom to choose the desired trade-off between optimality and simplicity by the classification of uncertainties and the choice of control laws in the predictions.
== MPC software ==
Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.
A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.
Freely available open-source software packages for (nonlinear) model predictive control include among others:
Rockit (Rapid Optimal Control kit) — a software framework to quickly prototype optimal control problems.
acados — a software framework providing fast and embedded solvers for nonlinear optimal control.
GRAMPC — a nonlinear MPC framework that is suitable for dynamical systems with sampling times in the (sub)millisecond range and that allows for an efficient implementation on embedded hardware.
CControl — a control engineering linear algebra library with MPC and Kalman filtering for embedded and low-cost microcontrollers.
== MPC vs. LQR ==
Model predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes of setting up optimisation costs.
While a model predictive controller often looks at fixed-length, often gradually weighted sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.
Due to these fundamental differences, LQR has better global stability properties, but MPC often has more locally optimal and complex performance.
The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR.
This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if consideration of the convexity and complexity of the problem space has been neglected.
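The receding-horizon loop described above can be sketched as follows. The scalar plant, cost weights, and the brute-force grid search (standing in for a proper QP solver) are all illustrative; the point is that the input bound is a hard constraint an unclipped LQR gain cannot enforce, and that the finite-horizon problem is re-solved at every sample with only the first move applied.

```python
# Toy receding-horizon loop for a scalar plant x+ = a*x + b*u with an input
# bound |u| <= 1. At every step a short-horizon problem is re-solved and only
# the first input is applied. The grid search stands in for a QP solver; all
# numbers are illustrative.
import itertools

a, b, N = 1.2, 1.0, 3                    # unstable plant, horizon of 3
u_grid = [-1.0, -0.5, 0.0, 0.5, 1.0]     # admissible inputs (|u| <= 1)

def horizon_cost(x, u_seq):
    cost = 0.0
    for u in u_seq:
        cost += x * x + 0.1 * u * u      # stage cost x'Qx + u'Ru
        x = a * x + b * u
    return cost + x * x                  # terminal cost

def mpc_step(x):
    best = min(itertools.product(u_grid, repeat=N),
               key=lambda seq: horizon_cost(x, seq))
    return best[0]                       # apply only the first input

x = 2.0
for _ in range(10):                      # closed loop: re-solve every sample
    x = a * x + b * mpc_step(x)
```

Despite the open-loop instability (a > 1) and the saturated input, the repeated re-optimization keeps the closed-loop state bounded near the origin.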
== See also ==
Control engineering
Control theory
Feed-forward
System identification
== References ==
== Further reading ==
Kwon, Wook Hyun; Bruckstein, Alfred M.; Kailath, Thomas (1983). "Stabilizing state feedback design via the moving horizon method". International Journal of Control. 37 (3): 631–643. doi:10.1080/00207178308932998.
Garcia, Carlos E.; Prett, David M.; Morari, Manfred (1989). "Model predictive control: theory and practice". Automatica. 25 (3): 335–348. doi:10.1016/0005-1098(89)90002-2.
Findeisen, Rolf; Allgöwer, Frank (2001). "An introduction to nonlinear model predictive control". Summerschool on "The Impact of Optimization in Control", Dutch Institute of Systems and Control, C. W. Scherer and J. M. Schumacher, Editors: 3.1 – 3.45.
Mayne, David Q.; Michalska, Hannah (1990). "Receding horizon control of nonlinear systems". IEEE Transactions on Automatic Control. 35 (7): 814–824. doi:10.1109/9.57020.
Mayne, David Q.; Rawlings, James B.; Rao, Christopher V.; Scokaert, Pierre O. M. (2000). "Constrained model predictive control: stability and optimality". Automatica. 36 (6): 789–814. doi:10.1016/S0005-1098(99)00214-9.
Allgöwer, Frank; Zheng, Alex, eds. (2000). Nonlinear model predictive control. Progress in Systems Theory. Vol. 26. Birkhauser.
Camacho; Bordons (2004). Model predictive control. Springer Verlag.
Findeisen, Rolf; Allgöwer, Frank; Biegler, Lorenz T. (2006). Assessment and Future Directions of Nonlinear Model Predictive Control. Lecture Notes in Control and Information Sciences. Vol. 26. Springer.
Diehl, Moritz M.; Bock, H. Georg; Schlöder, Johannes P.; Findeisen, Rolf; Nagy, Zoltan; Allgöwer, Frank (2002). "Real-time optimization and Nonlinear Model Predictive Control of Processes governed by differential-algebraic equations". Journal of Process Control. 12 (4): 577–585. doi:10.1016/S0959-1524(01)00023-3.
Rawlings, James B.; Mayne, David Q.; and Diehl, Moritz M.; Model Predictive Control: Theory, Computation, and Design (2nd Ed.), Nob Hill Publishing, LLC, ISBN 978-0975937730 (Oct. 2017)
Geyer, Tobias; Model predictive control of high power converters and industrial drives, Wiley, London, ISBN 978-1-119-01090-6, Nov. 2016
== External links ==
Case Study. Lancaster Waste Water Treatment Works, optimisation by means of Model Predictive Control from Perceptive Engineering
acados - Open-source framework for (nonlinear) model predictive control providing fast and embedded solvers for nonlinear optimization. (C, MATLAB and Python interface available)
μAO-MPC - Open Source Software package that generates tailored code for model predictive controllers on embedded systems in highly portable C code.
GRAMPC - Open source software framework for embedded nonlinear model predictive control using a gradient-based augmented Lagrangian method. (Plain C code, no code generation, MATLAB interface)
jMPC Toolbox - Open Source MATLAB Toolbox for Linear MPC.
Study on application of NMPC to superfluid cryogenics (PhD Project).
Nonlinear Model Predictive Control Toolbox for MATLAB and Python
Model Predictive Control Toolbox from MathWorks for design and simulation of model predictive controllers in MATLAB and Simulink
Pulse step model predictive controller - virtual simulator
Tutorial on MPC with Excel and MATLAB Examples
GEKKO: Model Predictive Control in Python
In mathematics, a transformation, transform, or self-map is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X.
Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations.
== Partial transformations ==
While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X.
== Algebraic structures ==
The set of all transformations on a given base set, together with function composition, forms a regular semigroup.
== Combinatorics ==
For a finite set of cardinality n, there are n^n transformations and (n+1)^n partial transformations.
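These counts can be checked by brute-force enumeration for small n: a total transformation assigns each of the n elements one of n images, and a partial transformation additionally allows "undefined" as an image.

```python
# Brute-force check of the transformation counts for a small set: a total
# transformation of an n-element set picks one of n images for each element
# (n**n maps); a partial transformation picks one of n images or "undefined"
# for each element ((n+1)**n maps).
from itertools import product

def count_transformations(n):
    return sum(1 for _ in product(range(n), repeat=n))

def count_partial_transformations(n):
    # the value n stands for "undefined"
    return sum(1 for _ in product(range(n + 1), repeat=n))
```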
== See also ==
Coordinate transformation
Data transformation (statistics)
Geometric transformation
Infinitesimal transformation
Linear transformation
List of transforms
Rigid transformation
Transformation geometry
Transformation semigroup
Transformation group
Transformation matrix
== References ==
== External links ==
Media related to Transformation (function) at Wikimedia Commons
Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.
== Overview ==
Intelligent control can be divided into the following major sub-domains:
Neural network control
Machine learning control
Reinforcement learning
Bayesian control
Fuzzy control
Neuro-fuzzy control
Expert Systems
Genetic control
New control techniques are created continuously as new models of intelligent behavior are created and computational methods developed to support them.
=== Neural network controller ===
Neural networks have been used to solve problems in almost all spheres of science and technology. Neural network control basically involves two steps:
System identification
Control
It has been shown that a feedforward network with nonlinear, continuous and differentiable activation functions has universal approximation capability. Recurrent networks have also been used for system identification. Given a set of input-output data pairs, system identification aims to form a mapping among these data pairs. Such a network is supposed to capture the dynamics of a system. For the control part, deep reinforcement learning has shown its ability to control complex systems.
=== Bayesian controllers ===
Bayesian probability has produced a number of algorithms that are in common use in many advanced control systems, serving as state space estimators of some variables that are used in the controller.
The Kalman filter and the particle filter are two examples of popular Bayesian control components. The Bayesian approach to controller design often requires an important effort in deriving the so-called system model and measurement model, which are the mathematical relationships linking the state variables to the sensor measurements available in the controlled system. In this respect, it is very closely linked to the system-theoretic approach to control design.
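As a minimal sketch of such a Bayesian estimator, a scalar Kalman filter with an assumed system model x+ = a*x + w and measurement model z = x + v can be written as follows; all noise variances and the data are illustrative.

```python
# Minimal scalar Kalman filter of the kind used as the state estimator inside
# a Bayesian controller. System model: x+ = a*x + w (process noise variance q);
# measurement model: z = x + v (measurement noise variance r). The gain k
# blends the model prediction with each new measurement.
def kalman_step(x_est, p, z, a=1.0, q=0.01, r=0.1):
    # predict using the system model
    x_pred = a * x_est
    p_pred = a * p * a + q
    # update using the measurement model
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# filter a constant true state observed through repeated measurements
x_est, p = 0.0, 1.0
for z in [1.0, 1.0, 1.0, 1.0, 1.0]:
    x_est, p = kalman_step(x_est, p, z)
```

After a few updates the estimate converges toward the measured value while the error variance p shrinks, which is exactly the behaviour a controller downstream relies on.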
== See also ==
Action selection
AI effect
Applications of artificial intelligence
Artificial intelligence systems integration
Function approximation
Hybrid intelligent system
Lists
List of emerging technologies
Outline of artificial intelligence
== References ==
Antsaklis, P.J. (1993). Passino, K.M. (ed.). An Introduction to Intelligent and Autonomous Control. Kluwer Academic Publishers. ISBN 0-7923-9267-1. Archived from the original on 10 April 2009.
Liu, J.; Wang, W.; Golnaraghi, F.; Kubica, E. (2010). "A Novel Fuzzy Framework for Nonlinear System Control". Fuzzy Sets and Systems. 161 (21): 2746–2759. doi:10.1016/j.fss.2010.04.009.
== Further reading ==
Jeffrey T. Spooner, Manfredi Maggiore, Raul Ordóñez, and Kevin M. Passino, Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques, John Wiley & Sons, NY.
Farrell, J.A.; Polycarpou, M.M. (2006). Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches. Wiley. ISBN 978-0-471-72788-0.
Schramm, G. (1998). Intelligent Flight Control - A Fuzzy Logic Approach. TU Delft Press. ISBN 90-901192-4-8.
Digital control is a branch of control theory that uses digital computers to act as system controllers.
Depending on the requirements, a digital control system can take the form of a microcontroller to an ASIC to a standard desktop computer.
Since a digital computer is a discrete system, the Laplace transform is replaced with the Z-transform. Since a digital computer has finite precision (See quantization), extra care is needed to ensure the error in coefficients, analog-to-digital conversion, digital-to-analog conversion, etc. are not producing undesired or unplanned effects.
Since the creation of the first digital computer in the early 1940s the price of digital computers has dropped considerably, which has made them key pieces to control systems because they are easy to configure and reconfigure through software, can scale to the limits of the memory or storage space without extra cost, parameters of the program can change with time (See adaptive control) and digital computers are much less prone to environmental conditions than capacitors, inductors, etc.
== Digital controller implementation ==
A digital controller is usually cascaded with the plant in a feedback system. The rest of the system can either be digital or analog.
Typically, a digital controller requires:
Analog-to-digital conversion to convert analog inputs to machine-readable (digital) format
Digital-to-analog conversion to convert digital outputs to a form that can be input to a plant (analog)
A program that relates the outputs to the inputs
=== Output program ===
Outputs from the digital controller are functions of current and past input samples, as well as past output samples - this can be implemented by storing relevant values of input and output in registers. The output can then be formed by a weighted sum of these stored values.
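The "weighted sum of stored values" above is a linear difference equation; for illustration, a first-order controller with registers for one past input and one past output (and invented low-pass coefficients) might be implemented as:

```python
# A digital controller output as a weighted sum of stored samples:
#   y[n] = b0*u[n] + b1*u[n-1] - a1*y[n-1]
# implemented with registers holding the previous input and output samples.
# The coefficients below realise a simple first-order low-pass with unity DC
# gain and are purely illustrative.
class DigitalController:
    def __init__(self, b0, b1, a1):
        self.b0, self.b1, self.a1 = b0, b1, a1
        self.u_prev = 0.0   # register: past input sample
        self.y_prev = 0.0   # register: past output sample

    def step(self, u):
        y = self.b0 * u + self.b1 * self.u_prev - self.a1 * self.y_prev
        self.u_prev, self.y_prev = u, y
        return y

ctrl = DigitalController(b0=0.25, b1=0.25, a1=-0.5)
outputs = [ctrl.step(1.0) for _ in range(20)]   # discrete step response
```

Here the DC gain is (b0 + b1)/(1 + a1) = 0.5/0.5 = 1, so the step response settles at 1.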
The programs can take numerous forms and perform many functions:
A digital filter for low-pass filtering
A state space model of a system to act as a state observer
A telemetry system
=== Stability ===
Although a controller may be stable when implemented as an analog controller, it could be unstable when implemented as a digital controller due to a large sampling interval. During sampling the aliasing modifies the cutoff parameters. Thus the sample rate characterizes the transient response and stability of the compensated system, and must update the values at the controller input often enough so as to not cause instability.
When substituting the frequency into the z operator, regular stability criteria still apply to discrete control systems. Nyquist criteria apply to z-domain transfer functions as well as being general for complex valued functions. Bode stability criteria apply similarly.
Jury criterion determines the discrete system stability about its characteristic polynomial.
=== Design of digital controller in s-domain ===
The digital controller can also be designed in the s-domain (continuous). The Tustin transformation can transform the continuous compensator to the respective digital compensator. The digital compensator will achieve an output that approaches the output of its respective analog controller as the sampling interval is decreased.
{\displaystyle s={\frac {2(z-1)}{T(z+1)}}}
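As a worked sketch, applying this substitution by hand to the first-order lag H(s) = 1/(s+1) gives a digital compensator H(z) = (b0 + b1 z^-1)/(1 + a1 z^-1) with the coefficients below; the DC gain at z = 1 (i.e. s = 0) must match the analog gain of 1. The example plant is illustrative.

```python
# Tustin discretization of the first-order lag H(s) = 1/(s+1).
# Substituting s = 2(z-1)/(T(z+1)) and clearing fractions gives
#   H(z) = T(z+1) / ((2+T) z + (T-2)),
# and dividing through by (2+T) yields the normalized coefficients below.
def tustin_first_order_lag(T):
    """Discretize H(s) = 1/(s+1) with sample time T via the Tustin transform."""
    b0 = T / (2.0 + T)
    b1 = T / (2.0 + T)
    a1 = (T - 2.0) / (2.0 + T)
    return b0, b1, a1

b0, b1, a1 = tustin_first_order_lag(T=0.1)
dc_gain = (b0 + b1) / (1.0 + a1)   # evaluate H(z) at z = 1
```

As the sampling interval T is decreased, the discrete step response of this compensator approaches that of the analog lag, as stated above.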
==== Tustin transformation deduction ====
Tustin is the Padé(1,1) approximation of the exponential function
{\displaystyle {\begin{aligned}z&=e^{sT}\\&={\frac {e^{sT/2}}{e^{-sT/2}}}\\&\approx {\frac {1+sT/2}{1-sT/2}}\end{aligned}}}
And its inverse
{\displaystyle {\begin{aligned}s&={\frac {1}{T}}\ln(z)\\&={\frac {2}{T}}\left[{\frac {z-1}{z+1}}+{\frac {1}{3}}\left({\frac {z-1}{z+1}}\right)^{3}+{\frac {1}{5}}\left({\frac {z-1}{z+1}}\right)^{5}+{\frac {1}{7}}\left({\frac {z-1}{z+1}}\right)^{7}+\cdots \right]\\&\approx {\frac {2}{T}}{\frac {z-1}{z+1}}\\&={\frac {2}{T}}{\frac {1-z^{-1}}{1+z^{-1}}}\end{aligned}}}
Digital control theory is the technique to design strategies in discrete time, (and/or) quantized amplitude (and/or) in (binary) coded form to be implemented in computer systems (microcontrollers, microprocessors) that will control the analog (continuous in time and amplitude) dynamics of analog systems. From this consideration many errors from classical digital control were identified and solved and new methods were proposed:
Marcelo Tredinnick and Marcelo Souza and their new type of analog-digital mapping
Yutaka Yamamoto and his "lifting function space model"
Alexander Sesekin and his studies about impulsive systems.
M.U. Akhmetov and his studies about impulsive and pulse control
=== Design of digital controller in z-domain ===
The digital controller can also be designed in the z-domain (discrete). The pulse transfer function (PTF) {\displaystyle G(z)} represents the digital viewpoint of the continuous process {\displaystyle G(s)} when interfaced with appropriate ADC and DAC, and for a specified sample time {\displaystyle T} is obtained as:
{\displaystyle G(z)={\frac {B(z)}{A(z)}}={\frac {(z-1)}{z}}Z{\biggl (}{\frac {G(s)}{s}}{\Biggr )}}
where {\displaystyle Z()} denotes the z-transform for the chosen sample time {\displaystyle T}. There are many ways to directly design a digital controller {\displaystyle D(z)} to achieve a given specification. For a type-0 system under unity negative feedback control, Michael Short and colleagues have shown that a relatively simple but effective method to synthesize a controller for a given (monic) closed-loop denominator polynomial {\displaystyle P(z)} and preserve the (scaled) zeros of the PTF numerator {\displaystyle B(z)} is to use the design equation:
{\displaystyle D(z)={\frac {k_{p}A(z)}{P(z)-k_{p}B(z)}}}
where the scalar term {\displaystyle k_{p}=P(1)/B(1)} ensures that the controller {\displaystyle D(z)} exhibits integral action and that a steady-state gain of unity is achieved in the closed loop. The resulting closed-loop discrete transfer function from the z-transform of the reference input {\displaystyle R(z)} to the z-transform of the process output {\displaystyle Y(z)} is then given by:
{\displaystyle {\frac {Y(z)}{R(z)}}={\frac {k_{p}B(z)}{P(z)}}}
Since process time delay manifests as leading coefficient(s) of zero in the process PTF numerator {\displaystyle B(z)}, the synthesis method above inherently yields a predictive controller if any such delay is present in the continuous plant.
== See also ==
Sampled data systems
Adaptive control
Analog control
Control theory
Digital
Feedback, Negative feedback, Positive feedback
Laplace transform
Real-time control
Z-transform
== References ==
FRANKLIN, G.F.; POWELL, J.D., Emami-Naeini, A., Digital Control of Dynamical Systems, 3rd Ed (1998). Ellis-Kagle Press, Half Moon Bay, CA ISBN 978-0-9791226-1-3
KATZ, P. Digital control using microprocessors. Englewood Cliffs: Prentice-Hall, 293p. 1981.
OGATA, K. Discrete-time control systems. Englewood Cliffs: Prentice-Hall,984p. 1987.
PHILLIPS, C.L.; NAGLE, H. T. Digital control system analysis and design. Englewood Cliffs, New Jersey: Prentice Hall International. 1995.
M. Sami Fadali, Antonio Visioli, (2009) "Digital Control Engineering", Academic Press, ISBN 978-0-12-374498-2.
JURY, E.I. Sampled-data control systems. New-York: John Wiley. 1958.
In mathematics a radial basis function (RBF) is a real-valued function {\textstyle \varphi } whose value depends only on the distance between the input and some fixed point: either the origin, so that {\textstyle \varphi (\mathbf {x} )={\hat {\varphi }}(\left\|\mathbf {x} \right\|)}, or some other fixed point {\textstyle \mathbf {c} }, called a center, so that {\textstyle \varphi (\mathbf {x} )={\hat {\varphi }}(\left\|\mathbf {x} -\mathbf {c} \right\|)}. Any function {\textstyle \varphi } that satisfies the property {\textstyle \varphi (\mathbf {x} )={\hat {\varphi }}(\left\|\mathbf {x} \right\|)} is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. Radial basis functions are often used as a collection {\displaystyle \{\varphi _{k}\}_{k}} which forms a basis for some function space of interest, hence the name.
Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977.
RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.
== Definition ==
A radial function is a function {\textstyle \varphi :[0,\infty )\to \mathbb {R} }. When paired with a norm {\textstyle \|\cdot \|:V\to [0,\infty )} on a vector space, a function of the form {\textstyle \varphi _{\mathbf {c} }=\varphi (\|\mathbf {x} -\mathbf {c} \|)} is said to be a radial kernel centered at {\textstyle \mathbf {c} \in V}. A radial function and the associated radial kernels are said to be radial basis functions if, for any finite set of nodes {\displaystyle \{\mathbf {x} _{k}\}_{k=1}^{n}\subseteq V}, all of the following conditions are true:
=== Examples ===
Commonly used types of radial basis functions include (writing {\textstyle r=\left\|\mathbf {x} -\mathbf {x} _{i}\right\|} and using {\textstyle \varepsilon } to indicate a shape parameter that can be used to scale the input of the radial kernel):
== Approximation ==
Radial basis functions are typically used to build up function approximations of the form
where the approximating function {\textstyle y(\mathbf {x} )} is represented as a sum of {\displaystyle N} radial basis functions, each associated with a different center {\textstyle \mathbf {x} _{i}} and weighted by an appropriate coefficient {\textstyle w_{i}}. The weights {\textstyle w_{i}} can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights {\textstyle w_{i}}.
Approximation schemes of this kind have been particularly used in time series prediction and control of nonlinear systems exhibiting sufficiently simple chaotic behaviour and 3D reconstruction in computer graphics (for example, hierarchical RBF and Pose Space Deformation).
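A minimal sketch of such a least-squares fit, using Gaussian kernels centred on the sample points themselves; the target function and shape parameter are illustrative.

```python
# Minimal RBF approximation: Gaussian kernels phi(r) = exp(-(eps*r)^2) centred
# on the sample points, with weights w solved by linear least squares. With
# centres equal to the data points the fit interpolates the training data.
import numpy as np

def rbf_fit_predict(x_train, y_train, x_query, eps=20.0):
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    # interpolation matrix Phi[i, j] = phi(|x_i - c_j|), centres c_j = x_j
    Phi = phi(np.abs(x_train[:, None] - x_train[None, :]))
    w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)
    # evaluate the weighted sum of kernels at the query points
    return phi(np.abs(x_query[:, None] - x_train[None, :])) @ w

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x)
y_hat = rbf_fit_predict(x, y, x)   # should reproduce the training data
```

As the section notes, such a fit is reliable inside the sampled range but tends to degrade when extrapolating beyond it unless a polynomial term is added.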
== RBF Network ==
The sum
can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number {\textstyle N} of radial basis functions is used.
The approximant {\textstyle y(\mathbf {x} )} is differentiable with respect to the weights {\textstyle w_{i}}. The weights could thus be learned using any of the standard iterative methods for neural networks.
Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.
== RBFs for PDEs ==
Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve partial differential equations (PDEs). This was first done in 1990 by E. J. Kansa, who developed the first RBF-based numerical method. It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection-diffusion equation. The function values at points {\displaystyle \mathbf {x} } in the domain are approximated by the linear combination of RBFs:
The derivatives are approximated as such:
where {\displaystyle N} is the number of points in the discretized domain, {\displaystyle d} the dimension of the domain, and {\displaystyle \lambda } the scalar coefficients that are unchanged by the differential operator.
Different numerical methods based on Radial Basis Functions were developed thereafter. Some methods are the RBF-FD method, the RBF-QR method and the RBF-PUM method.
== See also ==
Matérn covariance function
Radial basis function interpolation
Kansa method
== References ==
== Further reading ==
Hardy, R.L. (1971). "Multiquadric equations of topography and other irregular surfaces". Journal of Geophysical Research. 76 (8): 1905–1915. Bibcode:1971JGR....76.1905H. doi:10.1029/jb076i008p01905.
Hardy, R.L. (1990). "Theory and applications of the multiquadric-biharmonic method, 20 years of Discovery, 1968 1988". Comp. Math Applic. 19 (8/9): 163–208. doi:10.1016/0898-1221(90)90272-l.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 3.7.1. Radial Basis Function Interpolation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
Sirayanone, S., 1988, Comparative studies of kriging, multiquadric-biharmonic, and other methods for solving mineral resource problems, PhD. Dissertation, Dept. of Earth Sciences, Iowa State University, Ames, Iowa.
Sirayanone, S.; Hardy, R.L. (1995). "The Multiquadric-biharmonic Method as Used for Mineral Resources, Meteorological, and Other Applications". Journal of Applied Sciences and Computations. 1: 437–475.
In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex valued frequency-domain (the z-domain or z-plane) representation.
It can be considered a discrete-time equivalent of the Laplace transform (the s-domain or s-plane). This similarity is explored in the theory of time-scale calculus.
While the continuous-time Fourier transform is evaluated on the s-domain's vertical axis (the imaginary axis), the discrete-time Fourier transform is evaluated along the z-domain's unit circle. The s-domain's left half-plane maps to the area inside the z-domain's unit circle, while the s-domain's right half-plane maps to the area outside of the z-domain's unit circle.
In signal processing, one of the means of designing digital filters is to take analog designs, subject them to a bilinear transform which maps them from the s-domain to the z-domain, and then produce the digital filter by inspection, manipulation, or numerical approximation. Such methods tend not to be accurate except in the vicinity of the complex unity, i.e. at low frequencies.
== History ==
The foundational concept now recognized as the Z-transform, which is a cornerstone in the analysis and design of digital control systems, was not entirely novel when it emerged in the mid-20th century. Its embryonic principles can be traced back to the work of the French mathematician Pierre-Simon Laplace, who is better known for the Laplace transform, a closely related mathematical technique. However, the explicit formulation and application of what we now understand as the Z-transform were significantly advanced in 1947 by Witold Hurewicz and colleagues. Their work was motivated by the challenges presented by sampled-data control systems, which were becoming increasingly relevant in the context of radar technology during that period. The Z-transform provided a systematic and effective method for solving linear difference equations with constant coefficients, which are ubiquitous in the analysis of discrete-time signals and systems.
The method was further refined and gained its official nomenclature, "the Z-transform," in 1952, thanks to the efforts of John R. Ragazzini and Lotfi A. Zadeh, who were part of the sampled-data control group at Columbia University. Their work not only solidified the mathematical framework of the Z-transform but also expanded its application scope, particularly in the field of electrical engineering and control systems.
A notable extension, known as the modified or advanced Z-transform, was later introduced by Eliahu I. Jury. Jury's work extended the applicability and robustness of the Z-transform, especially in handling initial conditions and providing a more comprehensive framework for the analysis of digital control systems. This advanced formulation has played a pivotal role in the design and stability analysis of discrete-time control systems, contributing significantly to the field of digital signal processing.
Interestingly, the conceptual underpinnings of the Z-transform intersect with a broader mathematical concept known as the method of generating functions, a powerful tool in combinatorics and probability theory. This connection was hinted at as early as 1730 by Abraham de Moivre, a pioneering figure in the development of probability theory. De Moivre utilized generating functions to solve problems in probability, laying the groundwork for what would eventually evolve into the Z-transform. From a mathematical perspective, the Z-transform can be viewed as a specific instance of a Laurent series, where the sequence of numbers under investigation is interpreted as the coefficients in the (Laurent) expansion of an analytic function. This perspective not only highlights the deep mathematical roots of the Z-transform but also illustrates its versatility and broad applicability across different branches of mathematics and engineering.
== Definition ==
The Z-transform can be defined as either a one-sided or two-sided transform. (Just like we have the one-sided Laplace transform and the two-sided Laplace transform.)
=== Bilateral Z-transform ===
The bilateral or two-sided Z-transform of a discrete-time signal {\displaystyle x[n]} is the formal power series {\displaystyle X(z)} defined as:
where {\displaystyle n} is an integer and {\displaystyle z} is, in general, a complex number. In polar form, {\displaystyle z} may be written as:

{\displaystyle z=Ae^{j\phi }=A\cdot (\cos {\phi }+j\sin {\phi })}

where {\displaystyle A} is the magnitude of {\displaystyle z}, {\displaystyle j} is the imaginary unit, and {\displaystyle \phi } is the complex argument (also referred to as angle or phase) in radians.
=== Unilateral Z-transform ===
Alternatively, in cases where {\displaystyle x[n]} is defined only for {\displaystyle n\geq 0}, the single-sided or unilateral Z-transform is defined as:
{\displaystyle X(z)=\sum _{n=0}^{\infty }x[n]z^{-n}}
In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system.
An important example of the unilateral Z-transform is the probability-generating function, where the component {\displaystyle x[n]} is the probability that a discrete random variable takes the value {\displaystyle n}. The properties of Z-transforms (listed in § Properties) have useful interpretations in the context of probability theory.
== Inverse Z-transform ==
The inverse Z-transform is:
{\displaystyle x[n]={\frac {1}{2\pi j}}\oint _{C}X(z)z^{n-1}\,\mathrm {d} z}
where {\displaystyle C} is a counterclockwise closed path encircling the origin and lying entirely in the region of convergence (ROC). In the case where the ROC is causal (see Example 2), this means the path {\displaystyle C} must encircle all of the poles of {\displaystyle X(z)}.
A special case of this contour integral occurs when {\displaystyle C} is the unit circle. This contour can be used when the ROC includes the unit circle, which is always guaranteed when {\displaystyle X(z)} is stable, that is, when all the poles are inside the unit circle. With this contour, the inverse Z-transform simplifies to the inverse discrete-time Fourier transform, or Fourier series, of the periodic values of the Z-transform around the unit circle:
{\displaystyle x[n]={\frac {1}{2\pi }}\int _{-\pi }^{+\pi }X(e^{j\omega })e^{j\omega n}\,\mathrm {d} \omega .}
The Z-transform with a finite range of {\displaystyle n} and a finite number of uniformly spaced {\displaystyle z} values can be computed efficiently via Bluestein's FFT algorithm. The discrete-time Fourier transform (DTFT)—not to be confused with the discrete Fourier transform (DFT)—is a special case of such a Z-transform obtained by restricting {\displaystyle z} to lie on the unit circle.
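This DFT–Z-transform relationship is easy to verify numerically. The sketch below (plain NumPy; the helper name is illustrative) evaluates a finite Z-transform at N uniformly spaced points on the unit circle and compares the result with the FFT:

```python
import numpy as np

def z_transform_at(x, z):
    """Evaluate the finite Z-transform X(z) = sum_n x[n] z^{-n} at one point z."""
    n = np.arange(len(x))
    return np.sum(x * z ** (-n))

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

# N uniformly spaced points on the unit circle: z_k = e^{j 2 pi k / N}
zs = np.exp(2j * np.pi * np.arange(N) / N)
X_on_circle = np.array([z_transform_at(x, z) for z in zs])

# Restricting z to these points recovers the DFT of the sequence
assert np.allclose(X_on_circle, np.fft.fft(x))
```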
The following three methods are often used to evaluate the inverse Z-transform:
=== Direct Evaluation by Contour Integration ===
This method involves applying the Cauchy Residue Theorem to evaluate the inverse Z-transform. By integrating around a closed contour in the complex plane, the residues at the poles of the Z-transform function inside the ROC are summed. This technique is particularly useful when working with functions expressed in terms of complex variables.
=== Expansion into a Series of Terms in the Variables z and z-1 ===
In this method, the Z-transform is expanded into a power series. This approach is useful when the Z-transform function is rational, allowing for the approximation of the inverse by expanding into a series and determining the signal coefficients term by term.
=== Partial-Fraction Expansion and Table Lookup ===
This technique decomposes the Z-transform into a sum of simpler fractions, each corresponding to known Z-transform pairs. The inverse Z-transform is then determined by looking up each term in a standard table of Z-transform pairs. This method is widely used for its efficiency and simplicity, especially when the original function can be easily broken down into recognizable components.
==== Example: ====
A) Determine the inverse Z-transform of the following by the series expansion method:
{\displaystyle X(z)={\frac {1}{1-1.5z^{-1}+0.5z^{-2}}}}
Solution:
Case 1:
ROC: {\displaystyle \left\vert z\right\vert >1}
Since the ROC is the exterior of a circle, {\displaystyle x(n)} is causal (a signal existing for n ≥ 0).
By performing long division we get,
{\displaystyle X(z)={1 \over 1-{3 \over 2}z^{-1}+{1 \over 2}z^{-2}}=1+{{3 \over 2}z^{-1}}+{{7 \over 4}z^{-2}}+{{15 \over 8}z^{-3}}+{{31 \over 16}z^{-4}}+\ldots }
thus,
{\displaystyle {\begin{aligned}x(n)&=\left\{1,{\frac {3}{2}},{\frac {7}{4}},{\frac {15}{8}},{\frac {31}{16}},\ldots \right\}\\&\qquad \!\uparrow \\\end{aligned}}}
(arrow indicates the term at x(0) = 1)
Note that in each step of the long division process we eliminate the lowest-power term of {\displaystyle z^{-1}}.
Case 2:
ROC: {\displaystyle \left\vert z\right\vert <0.5}
Since the ROC is the interior of a circle, {\displaystyle x(n)} is anticausal (a signal existing for n < 0).
By performing long division we get,
{\displaystyle X(z)={\frac {1}{1-{\frac {3}{2}}z^{-1}+{\frac {1}{2}}z^{-2}}}=2z^{2}+6z^{3}+14z^{4}+30z^{5}+\ldots }
{\displaystyle {\begin{aligned}x(n)&=\{30,14,6,2,0,0\}\\&\qquad \qquad \qquad \quad \ \ \,\uparrow \\\end{aligned}}}
(arrow indicates the term at x(0) = 0)
Note that in each step of the long division process we eliminate the lowest-power term of {\displaystyle z}.
Note:
When the signal is causal we get negative powers of {\displaystyle z}, and when the signal is anticausal we get positive powers of {\displaystyle z}. A term {\displaystyle z^{k}} corresponds to the value at {\displaystyle x(-k)}, and a term {\displaystyle z^{-k}} corresponds to the value at {\displaystyle x(k)}.
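The long-division expansion of Case 1 can be checked numerically. The following sketch (plain NumPy; the helper name is illustrative) runs the division recursion for a causal rational X(z) and reproduces the coefficients found above:

```python
import numpy as np

def power_series_coeffs(b, a, n_terms):
    """First n_terms causal power-series coefficients of B(z^-1)/A(z^-1).

    Performs the long division implicitly via the recursion
    c[n] = (b[n] - sum_{k=1..N} a[k] * c[n-k]) / a[0].
    """
    c = np.zeros(n_terms)
    for n in range(n_terms):
        acc = b[n] if n < len(b) else 0.0
        for k in range(1, len(a)):
            if n - k >= 0:
                acc -= a[k] * c[n - k]
        c[n] = acc / a[0]
    return c

# X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2), causal case (ROC |z| > 1)
coeffs = power_series_coeffs([1.0], [1.0, -1.5, 0.5], 5)
assert np.allclose(coeffs, [1.0, 1.5, 1.75, 1.875, 1.9375])  # 1, 3/2, 7/4, 15/8, 31/16
```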
B) Determine the inverse Z-transform of the same {\displaystyle X(z)={\frac {1}{1-1.5z^{-1}+0.5z^{-2}}}} by the partial-fraction expansion method.
Eliminating negative powers of {\displaystyle z} and dividing by {\displaystyle z},
{\displaystyle {\frac {X(z)}{z}}={\frac {z^{2}}{z(z^{2}-1.5z+0.5)}}={\frac {z}{z^{2}-1.5z+0.5}}}
By Partial Fraction Expansion,
{\displaystyle {\begin{aligned}{\frac {X(z)}{z}}&={\frac {z}{(z-1)(z-0.5)}}={\frac {A_{1}}{z-0.5}}+{\frac {A_{2}}{z-1}}\\[4pt]&A_{1}=\left.{\frac {(z-0.5)X(z)}{z}}\right\vert _{z=0.5}={\frac {0.5}{(0.5-1)}}=-1\\[4pt]&A_{2}=\left.{\frac {(z-1)X(z)}{z}}\right\vert _{z=1}={\frac {1}{1-0.5}}={2}\\[4pt]{\frac {X(z)}{z}}&={\frac {2}{z-1}}-{\frac {1}{z-0.5}}\end{aligned}}}
Case 1:
ROC: {\displaystyle \left\vert z\right\vert >1}
Both terms are causal, hence {\displaystyle x(n)} is causal.
{\displaystyle {\begin{aligned}x(n)&=2{(1)^{n}}u(n)-1{(0.5)^{n}}u(n)\\&=(2-0.5^{n})u(n)\\\end{aligned}}}
Case 2:
ROC: {\displaystyle \left\vert z\right\vert <0.5}
Both terms are anticausal, hence {\displaystyle x(n)} is anticausal.
{\displaystyle {\begin{aligned}x(n)&=-2{(1)^{n}}u(-n-1)-(-1{(0.5)^{n}}u(-n-1))\\&=(0.5^{n}-2)u(-n-1)\\\end{aligned}}}
Case 3:
ROC: {\displaystyle 0.5<\left\vert z\right\vert <1}
One of the terms is causal (the pole at 0.5 provides the causal part) and the other is anticausal (the pole at 1 provides the anticausal part), hence {\displaystyle x(n)} is two-sided.
{\displaystyle {\begin{aligned}x(n)&=-2{(1)^{n}}u(-n-1)-1{(0.5)^{n}}u(n)\\&=-2u(-n-1)-0.5^{n}u(n)\\\end{aligned}}}
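The partial-fraction step of this example can be verified with SciPy, assuming it is available: `scipy.signal.residuez` expands a rational function of z⁻¹ into terms of the form r/(1 − p z⁻¹), matching the decomposition X(z) = 2/(1 − z⁻¹) − 1/(1 − 0.5z⁻¹) derived above:

```python
import numpy as np
from scipy.signal import residuez

# X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2)
b = [1.0]
a = [1.0, -1.5, 0.5]

# residuez expands B(z)/A(z) as sum_i r[i] / (1 - p[i] z^-1) (+ direct terms k)
r, p, k = residuez(b, a)

order = np.argsort(p.real)      # sort by pole location: 0.5 first, then 1.0
r, p = r[order], p[order]
assert np.allclose(p, [0.5, 1.0])
assert np.allclose(r, [-1.0, 2.0])  # X(z) = 2/(1 - z^-1) - 1/(1 - 0.5 z^-1)
```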
== Region of convergence ==
The region of convergence (ROC) is the set of points in the complex plane for which the Z-transform summation converges (i.e. doesn't blow up in magnitude to infinity):
{\displaystyle \mathrm {ROC} =\left\{z:\left|\sum _{n=-\infty }^{\infty }x[n]z^{-n}\right|<\infty \right\}}
=== Example 1 (no ROC) ===
Let {\displaystyle x[n]=(.5)^{n}.}
Expanding {\displaystyle x[n]} on the interval {\displaystyle (-\infty ,\infty )} it becomes
{\displaystyle x[n]=\left\{\dots ,(.5)^{-3},(.5)^{-2},(.5)^{-1},1,(.5),(.5)^{2},(.5)^{3},\dots \right\}=\left\{\dots ,2^{3},2^{2},2,1,(.5),(.5)^{2},(.5)^{3},\dots \right\}.}
Looking at the sum
{\displaystyle \sum _{n=-\infty }^{\infty }x[n]z^{-n}\to \infty .}
Therefore, there are no values of {\displaystyle z} that satisfy this condition.
=== Example 2 (causal ROC) ===
Let {\displaystyle x[n]=(.5)^{n}\,u[n]} (where {\displaystyle u} is the Heaviside step function). Expanding {\displaystyle x[n]} on the interval {\displaystyle (-\infty ,\infty )} it becomes
{\displaystyle x[n]=\left\{\dots ,0,0,0,1,(.5),(.5)^{2},(.5)^{3},\dots \right\}.}
Looking at the sum
{\displaystyle \sum _{n=-\infty }^{\infty }x[n]z^{-n}=\sum _{n=0}^{\infty }(.5)^{n}z^{-n}=\sum _{n=0}^{\infty }\left({\frac {.5}{z}}\right)^{n}={\frac {1}{1-(.5)z^{-1}}}.}
The last equality arises from the infinite geometric series, and the equality only holds if {\displaystyle |(.5)z^{-1}|<1,} which can be rewritten in terms of {\displaystyle z} as {\displaystyle |z|>(.5).} Thus, the ROC is {\displaystyle |z|>(.5).}
In this case the ROC is the complex plane with a disc of radius 0.5 at the origin "punched out".
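A quick numerical check of this ROC: partial sums of the series converge to the closed form for |z| > 0.5 and grow without bound otherwise (a sketch in plain NumPy; the helper name is illustrative):

```python
import numpy as np

def partial_sum(z, terms):
    """Partial sum of  sum_{n>=0} (0.5)^n z^{-n}  for x[n] = (.5)^n u[n]."""
    n = np.arange(terms)
    return np.sum((0.5 ** n) * z ** (-n))

z = 2.0                              # inside the ROC, since |z| = 2 > 0.5
closed_form = 1.0 / (1.0 - 0.5 / z)  # 1 / (1 - 0.5 z^-1)
assert abs(partial_sum(z, 60) - closed_form) < 1e-12

# Outside the ROC (|z| < 0.5) the partial sums grow without bound
assert abs(partial_sum(0.25, 200)) > 1e10
```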
=== Example 3 (anticausal ROC) ===
Let {\displaystyle x[n]=-(.5)^{n}\,u[-n-1]} (where {\displaystyle u} is the Heaviside step function). Expanding {\displaystyle x[n]} on the interval {\displaystyle (-\infty ,\infty )} it becomes
{\displaystyle x[n]=\left\{\dots ,-(.5)^{-3},-(.5)^{-2},-(.5)^{-1},0,0,0,0,\dots \right\}.}
Looking at the sum
{\displaystyle {\begin{aligned}\sum _{n=-\infty }^{\infty }x[n]\,z^{-n}&=-\sum _{n=-\infty }^{-1}(.5)^{n}\,z^{-n}\\&=-\sum _{m=1}^{\infty }\left({\frac {z}{.5}}\right)^{m}\\&=-{\frac {(.5)^{-1}z}{1-(.5)^{-1}z}}\\&=-{\frac {1}{(.5)z^{-1}-1}}\\&={\frac {1}{1-(.5)z^{-1}}}\\\end{aligned}}}
and using the infinite geometric series again, the equality only holds if {\displaystyle |(.5)^{-1}z|<1,} which can be rewritten in terms of {\displaystyle z} as {\displaystyle |z|<(.5).} Thus, the ROC is {\displaystyle |z|<(.5).}
In this case the ROC is a disc centered at the origin and of radius 0.5.
Only the ROC differentiates this example from the previous one; this is intentional, to demonstrate that the transform expression alone does not uniquely determine the signal.
=== Examples conclusion ===
Examples 2 and 3 clearly show that the Z-transform {\displaystyle X(z)} of {\displaystyle x[n]} is unique only when the ROC is specified. Creating the pole–zero plot for the causal and anticausal cases shows that the ROC for either case does not include the pole at 0.5. This extends to cases with multiple poles: the ROC never contains poles.
In Example 2, the causal system yields an ROC that includes {\displaystyle |z|=\infty }, while the anticausal system in Example 3 yields an ROC that includes {\displaystyle |z|=0.} In systems with multiple poles it is possible to have an ROC that includes neither {\displaystyle |z|=\infty } nor {\displaystyle |z|=0} ; the ROC then forms a circular band. For example,
{\displaystyle x[n]=(.5)^{n}\,u[n]-(.75)^{n}\,u[-n-1]}
has poles at 0.5 and 0.75. The ROC will be 0.5 < |z| < 0.75, which includes neither the origin nor infinity. Such a system is called a mixed-causality system, as it contains a causal term {\displaystyle (.5)^{n}\,u[n]} and an anticausal term {\displaystyle -(.75)^{n}\,u[-n-1].}
The stability of a system can also be determined by knowing the ROC alone. If the ROC contains the unit circle (i.e., |z| = 1) then the system is stable. In the above systems the causal system (Example 2) is stable because |z| > 0.5 contains the unit circle.
Suppose we are given the Z-transform of a system without an ROC (i.e., an ambiguous {\displaystyle x[n]}). We can determine a unique {\displaystyle x[n]} provided we desire the following:
Stability
Causality
For stability the ROC must contain the unit circle. If we need a causal system then the ROC must contain infinity and the system function will be a right-sided sequence. If we need an anticausal system then the ROC must contain the origin and the system function will be a left-sided sequence. If we need both stability and causality, all the poles of the system function must be inside the unit circle.
The unique {\displaystyle x[n]} can then be found.
== Properties ==
Parseval's theorem:
{\displaystyle \sum _{n=-\infty }^{\infty }x_{1}[n]x_{2}^{*}[n]\quad =\quad {\frac {1}{j2\pi }}\oint _{C}X_{1}(v)X_{2}^{*}({\tfrac {1}{v^{*}}})v^{-1}\mathrm {d} v}
Initial value theorem: If {\displaystyle x[n]} is causal, then
{\displaystyle x[0]=\lim _{z\to \infty }X(z).}
Final value theorem: If the poles of {\displaystyle (z-1)X(z)} are inside the unit circle, then
{\displaystyle x[\infty ]=\lim _{z\to 1}(z-1)X(z).}
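Both theorems can be sanity-checked numerically. Using X(z) = z²/((z − 1)(z − 0.5)), the transform of x[n] = (2 − 0.5ⁿ)u[n] from the worked example above, the limits evaluate to x[0] = 1 and x[∞] = 2 (a sketch; the large and small offsets approximate the limits):

```python
def X(z):
    """Z-transform of x[n] = (2 - 0.5**n) u[n], written in positive powers of z."""
    return z**2 / ((z - 1.0) * (z - 0.5))

# Initial value theorem: x[0] = lim_{z->inf} X(z)
assert abs(X(1e9) - 1.0) < 1e-6            # x[0] = 2 - 0.5**0 = 1

# Final value theorem: x[inf] = lim_{z->1} (z - 1) X(z)
z = 1.0 + 1e-9
assert abs((z - 1.0) * X(z) - 2.0) < 1e-6  # x[inf] = 2
```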
== Table of common Z-transform pairs ==
Here:
{\displaystyle u:n\mapsto u[n]={\begin{cases}1,&n\geq 0\\0,&n<0\end{cases}}}
is the unit (or Heaviside) step function and
{\displaystyle \delta :n\mapsto \delta [n]={\begin{cases}1,&n=0\\0,&n\neq 0\end{cases}}}
is the discrete-time unit impulse function (cf. the Dirac delta function, which is a continuous-time version). The two functions are chosen together so that the unit step function is the accumulation (running total) of the unit impulse function.
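That accumulation relationship is easy to confirm numerically (a minimal NumPy sketch):

```python
import numpy as np

n = np.arange(-5, 6)
delta = (n == 0).astype(int)   # unit impulse delta[n]
u = (n >= 0).astype(int)       # unit step u[n]

# The unit step is the running total (accumulation) of the unit impulse
assert np.array_equal(np.cumsum(delta), u)
```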
== Relationship to Fourier series and Fourier transform ==
For values of {\displaystyle z} in the region {\displaystyle |z|{=}1}, known as the unit circle, we can express the transform as a function of a single real variable {\displaystyle \omega } by defining {\displaystyle z{=}e^{j\omega }.}
And the bilateral transform reduces to a Fourier series:
{\displaystyle X(e^{j\omega })=\sum _{n=-\infty }^{\infty }x[n]e^{-j\omega n}\quad \mathrm {(Eq.1)} }
which is also known as the discrete-time Fourier transform (DTFT) of the {\displaystyle x[n]} sequence. This {\displaystyle 2\pi }-periodic function is the periodic summation of a Fourier transform, which makes it a widely used analysis tool. To understand this, let {\displaystyle X(f)} be the Fourier transform of any function, {\displaystyle x(t)}, whose samples at some interval {\displaystyle T} equal the {\displaystyle x[n]} sequence. Then the DTFT of the {\displaystyle x[n]} sequence can be written as follows:
{\displaystyle \sum _{n=-\infty }^{\infty }x(nT)\,e^{-j2\pi fnT}={\frac {1}{T}}\sum _{k=-\infty }^{\infty }X\!\left(f-{\frac {k}{T}}\right)\quad \mathrm {(Eq.2)} }
where {\displaystyle T} has units of seconds and {\displaystyle f} has units of hertz. Comparison of the two series reveals that {\displaystyle \omega {=}2\pi fT} is a normalized frequency with units of radians per sample. The value {\displaystyle \omega {=}2\pi } corresponds to {\textstyle f{=}{\frac {1}{T}}}. And now, with the substitution {\textstyle f{=}{\frac {\omega }{2\pi T}},} Eq.1 can be expressed in terms of {\displaystyle X({\tfrac {\omega -2\pi k}{2\pi T}})} (a Fourier transform):
{\displaystyle \sum _{n=-\infty }^{\infty }x(nT)\,e^{-j\omega n}={\frac {1}{T}}\sum _{k=-\infty }^{\infty }X\!\left({\frac {\omega -2\pi k}{2\pi T}}\right)\quad \mathrm {(Eq.3)} }
As parameter T changes, the individual terms of Eq.2 move farther apart or closer together along the f-axis. In Eq.3, however, the centers remain 2π apart while their widths expand or contract. When the sequence {\displaystyle x(nT)} represents the impulse response of an LTI system, these functions are also known as its frequency response. When the {\displaystyle x(nT)} sequence is periodic, its DTFT is divergent at one or more harmonic frequencies, and zero at all other frequencies. This is often represented by the use of amplitude-variant Dirac delta functions at the harmonic frequencies. Due to periodicity, there are only a finite number of unique amplitudes, which are readily computed by the much simpler discrete Fourier transform (DFT). (See Discrete-time Fourier transform § Periodic data.)
== Relationship to Laplace transform ==
=== Bilinear transform ===
The bilinear transform can be used to convert continuous-time filters (represented in the Laplace domain) into discrete-time filters (represented in the Z-domain), and vice versa. The following substitution is used:
{\displaystyle s={\frac {2}{T}}{\frac {(z-1)}{(z+1)}}}
to convert some function {\displaystyle H(s)} in the Laplace domain to a function {\displaystyle H(z)} in the Z-domain (Tustin transformation), or
{\displaystyle z=e^{sT}\approx {\frac {1+sT/2}{1-sT/2}}}
from the Z-domain to the Laplace domain. Through the bilinear transformation, the complex s-plane (of the Laplace transform) is mapped to the complex z-plane (of the Z-transform). While this mapping is (necessarily) nonlinear, it is useful in that it maps the entire {\displaystyle j\omega } axis of the s-plane onto the unit circle in the z-plane. As such, the Fourier transform (which is the Laplace transform evaluated on the {\displaystyle j\omega } axis) becomes the discrete-time Fourier transform. This assumes that the Fourier transform exists; i.e., that the {\displaystyle j\omega } axis is in the region of convergence of the Laplace transform.
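Assuming SciPy is available, `scipy.signal.bilinear` performs the Tustin substitution directly. The sketch below converts a hypothetical first-order analog low-pass filter and checks that the DC gain is preserved, since s = 0 maps to z = 1:

```python
import numpy as np
from scipy.signal import bilinear

# Hypothetical first-order analog low-pass H(s) = 1 / (1 + s/wc), wc = 2*pi*100 rad/s
wc = 2 * np.pi * 100.0
b_analog, a_analog = [1.0], [1.0 / wc, 1.0]

fs = 1000.0  # sampling rate in Hz (T = 1/fs)
b_digital, a_digital = bilinear(b_analog, a_analog, fs=fs)

# s = 0 maps to z = 1, so the DC gain H(s=0) = 1 must be preserved at H(z=1)
dc_gain = np.sum(b_digital) / np.sum(a_digital)
assert abs(dc_gain - 1.0) < 1e-9
```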
=== Starred transform ===
Given a one-sided Z-transform {\displaystyle X(z)} of a time-sampled function, the corresponding starred transform produces a Laplace transform and restores the dependence on the sampling parameter {\displaystyle T}:
{\displaystyle {\bigg .}X^{*}(s)=X(z){\bigg |}_{\displaystyle z=e^{sT}}}
The inverse Laplace transform is a mathematical abstraction known as an impulse-sampled function.
== Linear constant-coefficient difference equation ==
The linear constant-coefficient difference (LCCD) equation is a representation for a linear system based on the autoregressive moving-average equation:
{\displaystyle \sum _{p=0}^{N}y[n-p]\alpha _{p}=\sum _{q=0}^{M}x[n-q]\beta _{q}.}
Both sides of the above equation can be divided by {\displaystyle \alpha _{0}}, if it is not zero. By normalizing with {\displaystyle \alpha _{0}{=}1,} the LCCD equation can be written
{\displaystyle y[n]=\sum _{q=0}^{M}x[n-q]\beta _{q}-\sum _{p=1}^{N}y[n-p]\alpha _{p}.}
This form of the LCCD equation makes it explicit that the "current" output {\displaystyle y[n]} is a function of past outputs {\displaystyle y[n-p],} the current input {\displaystyle x[n],} and previous inputs {\displaystyle x[n-q].}
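The normalized recursion can be implemented directly (a minimal sketch in plain Python/NumPy; the function name and the first-order example are illustrative):

```python
import numpy as np

def lccd_output(beta, alpha, x):
    """Run the normalized LCCD recursion
    y[n] = sum_q beta[q] x[n-q] - sum_{p>=1} alpha[p] y[n-p],  with alpha[0] = 1.
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(beta[q] * x[n - q] for q in range(len(beta)) if n - q >= 0)
        acc -= sum(alpha[p] * y[n - p] for p in range(1, len(alpha)) if n - p >= 0)
        y[n] = acc
    return y

# First-order example: y[n] = x[n] + 0.5 y[n-1], driven by a unit impulse
x = np.array([1.0, 0.0, 0.0, 0.0])
y = lccd_output(beta=[1.0], alpha=[1.0, -0.5], x=x)
assert np.allclose(y, [1.0, 0.5, 0.25, 0.125])  # impulse response of the recursion
```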
=== Transfer function ===
Taking the Z-transform of the above equation (using linearity and time-shifting laws) yields:
{\displaystyle Y(z)\sum _{p=0}^{N}z^{-p}\alpha _{p}=X(z)\sum _{q=0}^{M}z^{-q}\beta _{q}}
where {\displaystyle X(z)} and {\displaystyle Y(z)} are the Z-transforms of {\displaystyle x[n]} and {\displaystyle y[n],} respectively. (Notation conventions typically use capitalized letters to refer to the Z-transform of a signal denoted by a corresponding lower-case letter, similar to the convention used for notating Laplace transforms.)
Rearranging results in the system's transfer function:
{\displaystyle H(z)={\frac {Y(z)}{X(z)}}={\frac {\sum _{q=0}^{M}z^{-q}\beta _{q}}{\sum _{p=0}^{N}z^{-p}\alpha _{p}}}={\frac {\beta _{0}+z^{-1}\beta _{1}+z^{-2}\beta _{2}+\cdots +z^{-M}\beta _{M}}{\alpha _{0}+z^{-1}\alpha _{1}+z^{-2}\alpha _{2}+\cdots +z^{-N}\alpha _{N}}}.}
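Assuming SciPy is available, `scipy.signal.freqz` evaluates such a transfer function on the unit circle z = e^{jω}. A sketch with a hypothetical one-pole system:

```python
import numpy as np
from scipy.signal import freqz

# Hypothetical one-pole system H(z) = 1 / (1 - 0.5 z^-1)
b = [1.0]        # beta (numerator) coefficients
a = [1.0, -0.5]  # alpha (denominator) coefficients

# Evaluate H on the unit circle z = e^{j*omega} at chosen frequencies (rad/sample)
w, h = freqz(b, a, worN=np.array([0.0, np.pi]))

assert abs(h[0] - 2.0) < 1e-9        # H(z=1)  = 1/(1-0.5) = 2    (DC)
assert abs(h[1] - 2.0 / 3.0) < 1e-9  # H(z=-1) = 1/(1+0.5) = 2/3  (Nyquist)
```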
=== Zeros and poles ===
From the fundamental theorem of algebra the numerator has {\displaystyle M} roots (corresponding to zeros of {\displaystyle H}) and the denominator has {\displaystyle N} roots (corresponding to poles). Rewriting the transfer function in terms of zeros and poles
{\displaystyle H(z)={\frac {(1-q_{1}z^{-1})(1-q_{2}z^{-1})\cdots (1-q_{M}z^{-1})}{(1-p_{1}z^{-1})(1-p_{2}z^{-1})\cdots (1-p_{N}z^{-1})}},}
where {\displaystyle q_{k}} is the {\displaystyle k^{\text{th}}} zero and {\displaystyle p_{k}} is the {\displaystyle k^{\text{th}}} pole. The zeros and poles are commonly complex, and when plotted on the complex plane (z-plane) the result is called the pole–zero plot.
In addition, there may also exist zeros and poles at {\displaystyle z{=}0} and {\displaystyle z{=}\infty .} If we take these poles and zeros, as well as multiple-order zeros and poles, into consideration, the number of zeros and the number of poles are always equal.
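Poles can be located numerically by treating the denominator as a polynomial in z. A sketch using the example transfer function from earlier in this article (plain NumPy):

```python
import numpy as np

# H(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2) -- the example transfer function above
alpha = [1.0, -1.5, 0.5]  # denominator coefficients (in powers of z^-1)

# Poles are the roots of the denominator polynomial in z:
# 1 - 1.5 z^-1 + 0.5 z^-2 = 0  <=>  z^2 - 1.5 z + 0.5 = 0
poles = np.roots(alpha)
assert np.allclose(sorted(poles.real), [0.5, 1.0])

# With M = 0 finite zeros and N = 2 poles, H(z) has a double zero at z = 0,
# so the counts of zeros and poles match once z = 0 and z = infinity are included.
```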
By factoring the denominator, partial fraction decomposition can be used, which can then be transformed back to the time domain. Doing so would result in the impulse response and the linear constant coefficient difference equation of the system.
=== Output response ===
If such a system {\displaystyle H(z)} is driven by a signal {\displaystyle X(z)} then the output is {\displaystyle Y(z)=H(z)X(z).}
By performing partial fraction decomposition on {\displaystyle Y(z)} and then taking the inverse Z-transform, the output {\displaystyle y[n]} can be found. In practice, it is often useful to fractionally decompose {\displaystyle \textstyle {\frac {Y(z)}{z}}} before multiplying that quantity by {\displaystyle z} to generate a form of {\displaystyle Y(z)} which has terms with easily computable inverse Z-transforms.
== See also ==
Advanced Z-transform
Bilinear transform
Difference equation (recurrence relation)
Discrete convolution
Discrete-time Fourier transform
Finite impulse response
Formal power series
Generating function
Generating function transformation
Laplace transform
Laurent series
Least-squares spectral analysis
Probability-generating function
Star transform
Zak transform
Zeta function regularization
== References ==
== Further reading ==
Refaat El Attar, Lecture notes on Z-Transform, Lulu Press, Morrisville NC, 2005. ISBN 1-4116-1979-X.
Ogata, Katsuhiko, Discrete Time Control Systems 2nd Ed, Prentice-Hall Inc, 1995, 1987. ISBN 0-13-034281-5.
Alan V. Oppenheim and Ronald W. Schafer (1999). Discrete-Time Signal Processing, 2nd Edition, Prentice Hall Signal Processing Series. ISBN 0-13-754920-2.
== External links ==
"Z-transform". Encyclopedia of Mathematics. EMS Press. 2001 [1994].
Merrikh-Bayat, Farshad (2014). "Two Methods for Numerical Inversion of the Z-Transform". arXiv:1409.1727 [math.NA].
Z-Transform table of some common Laplace transforms
Mathworld's entry on the Z-transform
Z-Transform threads in Comp.DSP
A graphic of the relationship between Laplace transform s-plane to Z-plane of the Z transform
A video-based explanation of the Z-Transform for engineers
What is the z-Transform? | Wikipedia/Z_transform |
A proportional–integral–derivative controller (PID controller or three-term controller) is a feedback-based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value (setpoint or SP) with the actual value of the system (process variable or PV). The difference between these two values is called the error value, denoted as
{\displaystyle e(t)}.
It then applies corrective actions automatically to bring the PV to the same value as the SP using three methods: The proportional (P) component responds to the current error value by producing an output that is directly proportional to the magnitude of the error. This provides immediate correction based on how far the system is from the desired setpoint. The integral (I) component, in turn, considers the cumulative sum of past errors to address any residual steady-state errors that persist over time, eliminating lingering discrepancies. Lastly, the derivative (D) component predicts future error by assessing the rate of change of the error, which helps to mitigate overshoot and enhance system stability, particularly when the system undergoes rapid changes. The PID output signal can directly control actuators through voltage, current, or other modulation methods, depending on the application. The PID controller reduces the likelihood of human error and improves automation.
A common example is a vehicle’s cruise control system. For instance, when a vehicle encounters a hill, its speed will decrease if the engine power output is kept constant. The PID controller adjusts the engine's power output to restore the vehicle to its desired speed, doing so efficiently with minimal delay and overshoot.
The theoretical foundation of PID controllers dates back to the early 1920s with the development of automatic steering systems for ships. This concept was later adopted for automatic process control in manufacturing, first appearing in pneumatic actuators and evolving into electronic controllers. PID controllers are widely used in numerous applications requiring accurate, stable, and optimized automatic control, such as temperature regulation, motor speed control, and industrial process management.
== Fundamental operation ==
The distinguishing feature of the PID controller is the ability to use the three control terms of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an error value {\displaystyle e(t)} as the difference between a desired setpoint {\displaystyle {\text{SP}}=r(t)} and a measured process variable {\displaystyle {\text{PV}}=y(t)}: {\displaystyle e(t)=r(t)-y(t)}. It applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a control variable {\displaystyle u(t)}, such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.
The PID controller directly generates a continuous control signal based on error, without discrete modulation.
In this model:
Term P is proportional to the current value of the SP − PV error {\displaystyle e(t)}. For example, if the error is large, the control output will be proportionately large by using the gain factor "Kp". Using proportional control alone will result in an error between the set point and the process value because the controller requires an error to generate the proportional output response. In steady state process conditions an equilibrium is reached, with a steady SP-PV "offset".
Term I accounts for past values of the SP − PV error and integrates them over time to produce the I term. For example, if there is a residual SP − PV error after the application of proportional control, the integral term seeks to eliminate the residual error by adding a control effect due to the historic cumulative value of the error. When the error is eliminated, the integral term will cease to grow. This will result in the proportional effect diminishing as the error decreases, but this is compensated for by the growing integral effect.
Term D is a best estimate of the future trend of the SP − PV error, based on its current rate of change. It is sometimes called "anticipatory control", as it is effectively seeking to reduce the effect of the SP − PV error by exerting a control influence generated by the rate of error change. The more rapid the change, the greater the controlling or damping effect.
Tuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response.
Control action – The mathematical model and practical loop above both use a direct control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual-desired) but is in fact the correction needed (desired-actual). The system is called reverse acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop was 100–0% valve opening for 0–100% control output – meaning that the controller action has to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.
=== Control function ===
The overall control function is
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}
where {\displaystyle K_{\text{p}}}, {\displaystyle K_{\text{i}}}, and {\displaystyle K_{\text{d}}}, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D).
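A minimal discrete-time sketch of this control law (the class, gains, and timestep are illustrative, not from the article; the integral and derivative are approximated by a running sum and a backward difference):

```python
class PID:
    """Illustrative ideal-form PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement    # e(t) = r(t) - y(t)
        self.integral += error * self.dt  # running-sum approximation of the integral
        derivative = 0.0                  # backward-difference approximation of de/dt
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# With a constant error of 1.0 the P term stays at Kp and the I term grows by Ki*dt per step
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
u1 = pid.update(setpoint=1.0, measurement=0.0)
u2 = pid.update(setpoint=1.0, measurement=0.0)
assert abs(u1 - 2.05) < 1e-9  # 2.0*1 + 0.5*0.1
assert abs(u2 - 2.10) < 1e-9  # 2.0*1 + 0.5*0.2
```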
=== Standard form ===
In the standard form of the equation (see later in article), {\displaystyle K_{\text{i}}} and {\displaystyle K_{\text{d}}} are respectively replaced by {\displaystyle K_{\text{p}}/T_{\text{i}}} and {\displaystyle K_{\text{p}}T_{\text{d}}}; the advantage of this being that {\displaystyle T_{\text{i}}} and {\displaystyle T_{\text{d}}} have some understandable physical meaning, as they represent an integration time and a derivative time respectively. {\displaystyle K_{\text{p}}T_{\text{d}}} is the time constant with which the controller will attempt to approach the set point. {\displaystyle K_{\text{p}}/T_{\text{i}}} determines how long the controller will tolerate the output being consistently above or below the set point.
{\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right)}
where {\displaystyle T_{\text{i}}={K_{\text{p}} \over K_{\text{i}}}} is the integration time constant, and {\displaystyle T_{\text{d}}={K_{\text{d}} \over K_{\text{p}}}} is the derivative time constant.
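The two parameterizations carry the same information, so converting between them is a pair of one-line formulas. A small sketch (function names are illustrative):

```python
def standard_to_parallel(kp, ti, td):
    """Convert standard-form (Kp, Ti, Td) to parallel gains (Kp, Ki, Kd):
    Ki = Kp/Ti, Kd = Kp*Td."""
    return kp, kp / ti, kp * td

def parallel_to_standard(kp, ki, kd):
    """Inverse mapping: Ti = Kp/Ki, Td = Kd/Kp."""
    return kp, kp / ki, kd / kp
```

Note that the standard form is undefined for Ki = 0 (Ti would be infinite), which is one reason the parallel form is common in software implementations.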
=== Selective use of control terms ===
Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value.
=== Applicability ===
The use of the PID algorithm does not guarantee optimal control of the system or its control stability (see § Limitations, below). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation is required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.
== History ==
=== Origins ===
The centrifugal governor was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed.
With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt's self-designed "conical pendulum" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept.
Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper On Governors. He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. The problem was examined further in 1874 by Edward Routh, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria.
In subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor.
About this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868.
Another early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically-based.
It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.
His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Trials were carried out on the USS New Mexico, with the controllers controlling the angular velocity (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve.
The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others in the 1930s.
=== Industrial control ===
The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback. This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows. The integral term was called Reset. Later the derivative term was added by a further bellows and adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–1.0 bar) was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0–100%.
In the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10–50 mA and 4–20 mA current loop signals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.
Most modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers.
=== Electronic analog controllers ===
Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers.
== Control loop example ==
Consider a robotic arm that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, and external forces on the arm such as a load to lift or work to be done on an external object.
The sensed position is the process variable (PV).
The desired position is called the setpoint (SP).
The difference between the PV and SP is the error (e), which quantifies whether the arm is too low or too high and by how much.
The input to the process (the electric current in the motor) is the output from the PID controller. It is called either the manipulated variable (MV) or the control variable (CV).
The PID controller continuously adjusts the input current to achieve smooth motion.
By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).
=== Proportional ===
The obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but a smaller force if the error is low on the upside. That's where the integral and derivative terms play their part.
=== Integral ===
An integral term increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both weakly reacting at the start (because the action would be small at the beginning, depending on time to become significant) and more aggressive at the end (the action increases as long as the error is positive, even if the error is near zero).
Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If it decreases, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable.
=== Derivative ===
A derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but rather the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).
=== Control damping ===
In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.
=== Response to disturbances ===
If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.
=== Applications ===
In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, force, feed rate, flow rate, chemical composition (component concentrations), weight, position, speed, and practically every other variable for which a measurement exists.
== Controller theory ==
This section describes the parallel or non-interacting form of the PID controller. For other forms please see § Alternative nomenclature and forms.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining
{\displaystyle u(t)} as the controller output, the final form of the PID algorithm is
{\displaystyle u(t)=\mathrm {MV} (t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\frac {de(t)}{dt}},}
where
{\displaystyle K_{\text{p}}} is the proportional gain, a tuning parameter,
{\displaystyle K_{\text{i}}} is the integral gain, a tuning parameter,
{\displaystyle K_{\text{d}}} is the derivative gain, a tuning parameter,
{\displaystyle e(t)=\mathrm {SP} -\mathrm {PV} (t)} is the error (SP is the setpoint, and PV(t) is the process variable),
{\displaystyle t} is the time or instantaneous time (the present),
{\displaystyle \tau } is the variable of integration (takes on values from time 0 to the present {\displaystyle t}).
Equivalently, the transfer function in the Laplace domain of the PID controller is
{\displaystyle L(s)=K_{\text{p}}+K_{\text{i}}/s+K_{\text{d}}s={K_{\text{d}}s^{2}+K_{\text{p}}s+K_{\text{i}} \over s},}
where {\displaystyle s} is the complex angular frequency.
=== Proportional term ===
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant Kp, called the proportional gain constant.
The proportional term is given by
{\displaystyle P_{\text{out}}=K_{\text{p}}e(t).}
A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.
==== Steady-state error ====
The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to the proportional gain. SSE may be mitigated by adding a compensating bias term to the setpoint and output, or corrected dynamically by adding an integral term.
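The residual error of a P-only loop can be derived directly: for a plant with steady-state gain G0 and a constant setpoint r, the loop settles where e = r − G0·Kp·e, giving e_ss = r / (1 + Kp·G0). A short worked sketch (the plant-gain value is an illustrative assumption):

```python
def steady_state_error(setpoint, kp, plant_gain):
    """SSE of a proportional-only loop: solve e = r - plant_gain*kp*e for e."""
    return setpoint / (1.0 + kp * plant_gain)

# Raising Kp shrinks the residual error but never removes it entirely:
e1 = steady_state_error(1.0, 4.0, 1.0)  # 1/(1+4) = 0.2
e2 = steady_state_error(1.0, 8.0, 1.0)  # 1/(1+8), smaller but still non-zero
```

This is the quantitative form of the statement above: SSE falls as the proportional gain rises, and only an integral term can drive it to zero.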
=== Integral term ===
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (Ki) and added to the controller output.
The integral term is given by
{\displaystyle I_{\text{out}}=K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau .}
The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning).
=== Derivative term ===
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain Kd, which sets the magnitude of the derivative term's contribution to the overall control action.
The derivative term is given by
{\displaystyle D_{\text{out}}=K_{\text{d}}{\frac {de(t)}{dt}}.}
Derivative action predicts system behavior and thus improves the settling time and stability of the system. An ideal derivative is not causal, so implementations of PID controllers include additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers – because of its variable impact on system stability in real-world applications.
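The low-pass filtering mentioned above is often a first-order filter applied to the raw finite-difference derivative. A minimal discrete sketch, assuming a filter time constant tau_f (the function name and parameterization are illustrative):

```python
def filtered_derivative(errors, dt, kd, tau_f):
    """D-term with a first-order low-pass (time constant tau_f) on de/dt.

    Each step: d_filt += alpha * (raw - d_filt), with alpha = dt/(tau_f + dt),
    the standard discrete RC-filter update.
    """
    d_filt = 0.0
    out = []
    prev = errors[0]
    for e in errors[1:]:
        raw = (e - prev) / dt        # raw finite-difference derivative
        alpha = dt / (tau_f + dt)    # filter coefficient in (0, 1]
        d_filt += alpha * (raw - d_filt)
        out.append(kd * d_filt)
        prev = e
    return out
```

With tau_f = 0 the filter is transparent (alpha = 1) and the output is the raw derivative scaled by Kd; larger tau_f attenuates high-frequency measurement noise at the cost of delaying the derivative response.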
== Loop tuning ==
Tuning a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.
Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control. Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.
Some processes have a degree of nonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions).
=== Stability ===
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by excess gain, particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired.
Mathematically, the origins of instability can be seen in the Laplace domain.
The closed-loop transfer function is
{\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}},}
where {\displaystyle K(s)} is the PID transfer function, and {\displaystyle G(s)} is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some {\displaystyle s}. This happens in situations where {\displaystyle K(s)G(s)=-1}, i.e. where {\displaystyle |K(s)G(s)|=1} with a 180° phase shift. Stability is guaranteed when {\displaystyle |K(s)G(s)|<1} for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.
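The condition K(s)G(s) = −1 can be checked numerically by locating the frequency at which the open-loop response crosses the negative real axis. A sketch for an illustrative three-pole plant G(s) = 1/(s+1)³ under pure proportional control (the plant is an assumption for the example, not from the article); analytically the crossing is at ω = √3, where |G| = 1/8, so gains above 8 destabilize the loop:

```python
def plant(s):
    """Illustrative plant G(s) = 1/(s+1)^3 (an assumed example)."""
    return 1.0 / (s + 1.0) ** 3

# With a pure gain K, instability onset is where K*G(jw) = -1, i.e. where
# G(jw) crosses the negative real axis (Im = 0, Re < 0). Bisect for that
# frequency: Im(G) is negative below the crossing and positive above it.
lo, hi = 0.1, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if plant(1j * mid).imag < 0.0:
        lo = mid
    else:
        hi = mid
w180 = 0.5 * (lo + hi)            # analytically sqrt(3) for this plant
ku = 1.0 / abs(plant(1j * w180))  # gain putting |K*G| = 1 at the crossing
```

The resulting ku is the ultimate gain: a proportional gain just below it leaves the loop marginally stable, which is exactly the quantity the Ziegler–Nichols procedure below measures experimentally (with Tu = 2π/w180).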
=== Optimal behavior ===
The optimal behavior on a process change or setpoint change varies depending on the application.
Two basic requirements are regulation (disturbance rejection – staying at a given setpoint) and command tracking (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.
=== Overview of tuning methods ===
There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.
The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.
=== Manual tuning ===
If the system must remain online, one tuning method is to first set {\displaystyle K_{i}} and {\displaystyle K_{d}} values to zero. Increase the {\displaystyle K_{p}} until the output of the loop oscillates; then set {\displaystyle K_{p}} to approximately half that value for a "quarter amplitude decay"-type response. Then increase {\displaystyle K_{i}} until any offset is corrected in sufficient time for the process, but not until too great a value causes instability. Finally, increase {\displaystyle K_{d}}, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much {\displaystyle K_{p}} causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a {\displaystyle K_{p}} setting significantly less than half that of the {\displaystyle K_{p}} setting that was causing oscillation.
=== Ziegler–Nichols method ===
Another heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the {\displaystyle K_{i}} and {\displaystyle K_{d}} gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain {\displaystyle K_{u}} at which the output of the loop starts to oscillate constantly. {\displaystyle K_{u}} and the oscillation period {\displaystyle T_{u}} are then used to set the P, I, and D gains. The oscillation frequency is often measured instead; using its reciprocal in the formulas yields the same result.
These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative time parameters {\displaystyle T_{i}} and {\displaystyle T_{d}} are dependent on the oscillation period {\displaystyle T_{u}}.
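Using the classic Ziegler–Nichols PID row (Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8, which in parallel form gives Ki = 1.2 Ku/Tu and Kd = 0.075 Ku Tu – values from the standard published rule, stated here rather than taken from this section's text), the mapping can be sketched as:

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols PID tunings (parallel form) from the
    measured ultimate gain Ku and ultimate period Tu."""
    kp = 0.6 * ku
    ki = 1.2 * ku / tu    # equivalently Kp / Ti with Ti = Tu / 2
    kd = 0.075 * ku * tu  # equivalently Kp * Td with Td = Tu / 8
    return kp, ki, kd
```

The rule deliberately backs off to 60% of the ultimate gain, trading some responsiveness for a stability margin and the characteristic quarter-amplitude decay.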
=== Cohen–Coon parameters ===
This method was developed in 1953 and is based on a first-order + time delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters were developed to yield a closed-loop response with a decay ratio of {\displaystyle {\tfrac {1}{4}}}. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
=== Relay (Åström–Hägglund) method ===
Published in 1984 by Karl Johan Åström and Tore Hägglund, the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. The output is switched (as if by a relay, hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.
Specifically, the ultimate period {\displaystyle T_{u}} is assumed to be equal to the observed period, and the ultimate gain is computed as
{\displaystyle K_{u}=4b/\pi a,}
where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it.
There are numerous variants on the relay method.
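The describing-function formula above is a one-liner in code; a small sketch (the function name is illustrative):

```python
import math

def relay_ultimate_gain(a, b):
    """Ultimate gain from a relay (Astrom-Hagglund) test:
    a = amplitude of the process-variable oscillation,
    b = amplitude of the relay's control-output step.
    Ku = 4b / (pi * a)."""
    return 4.0 * b / (math.pi * a)
```

The observed oscillation period is taken directly as Tu, and the pair (Ku, Tu) is then fed into the Ziegler–Nichols formulas as described above.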
=== First-order model with dead time ===
The transfer function for a first-order process with dead time is
{\displaystyle y(s)={\frac {k_{\text{p}}e^{-\theta s}}{\tau _{\text{p}}s+1}}u(s),}
where kp is the process gain, τp is the time constant, θ is the dead time, and u(s) is a step change input. Converting this transfer function to the time domain results in
{\displaystyle y(t)=k_{\text{p}}\Delta u\left(1-e^{-(t-\theta )/\tau _{\text{p}}}\right),}
using the same parameters found above.
It is important when using this method to apply a large enough step-change input that the output can be measured; however, too large of a step change can affect the process stability. A larger step change also helps ensure that the measured output change is due to the step rather than to a disturbance (for best results, try to minimize disturbances when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain (kp) is equal to the change in output divided by the change in input. The dead time θ is the amount of time between when the step change occurred and when the output first changed. The time constant (τp) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants.
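The 63.2% method can be sketched as a small parameter-extraction routine run on recorded step-response samples. This is an illustrative implementation, assuming the record starts before the step and ends near steady state (function name, thresholds, and the synthetic data are assumptions):

```python
import math

def fit_foptd(t, y, du, y0):
    """Estimate (kp, theta, tau_p) of a first-order-plus-dead-time process
    from step-response samples using the 63.2% method.
    t, y: time/output samples; du: input step size; y0: pre-step output."""
    y_ss = y[-1]                  # assume the record ends at steady state
    kp = (y_ss - y0) / du         # process gain = delta-output / delta-input
    # Dead time: first sample where the output has visibly moved.
    theta = next(ti for ti, yi in zip(t, y)
                 if abs(yi - y0) > 0.001 * abs(y_ss - y0))
    # Time constant: time after the dead time to reach 63.2% of the change.
    target = y0 + 0.632 * (y_ss - y0)
    t63 = next(ti for ti, yi in zip(t, y) if yi >= target)
    return kp, theta, t63 - theta

# Synthetic response with known kp=2, theta=1, tau_p=5 and a unit step:
ts = [i * 0.001 for i in range(40000)]
ys = [0.0 if ti < 1.0 else 2.0 * (1 - math.exp(-(ti - 1.0) / 5.0)) for ti in ts]
```

Running the fit on the synthetic record recovers the generating parameters to within the sampling resolution, which is the basic sanity check one would perform before trusting the method on plant data.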
=== Tuning software ===
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients.
Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.
Advances in automated PID loop tuning software also deliver algorithms for tuning PID loops in a dynamic or non-steady state (NSS) scenario. The software models the dynamics of a process through a disturbance and calculates PID control parameters in response.
== Limitations ==
While PID controllers are applicable to many control problems and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide optimal control. The fundamental difficulty with PID control is that it is a feedback control system with constant parameters and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for an observer that has no model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer.
PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade-off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.
The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.
=== Linearity and symmetry ===
PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded.
A nonlinear valve in a flow control application, for instance, will result in variable loop sensitivity that requires damping to prevent instability. One solution is to include a model of the valve's nonlinearity in the control algorithm to compensate for this.
An asymmetric application, for example, is temperature control in HVAC systems that use only active heating (via a heating element) whereas only passive cooling is available. Overshoot of rising temperature can only be corrected slowly; active cooling is not available to force temperature downward as a function of the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the set point. The inherent degradation of control quality in this application could be solved by application of active cooling.
=== Noise in derivative term ===
A problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as a PI controller.
== Modifications to the algorithm ==
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.
=== Integral windup ===
One common problem resulting from ideal PID implementations is integral windup. Following a large change in setpoint, the integral term can accumulate an error larger than the maximal value for the regulation variable (windup); the system then overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by:
Disabling the integration until the PV has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds
Back-calculating the integral term to constrain the regulator output within feasible bounds.
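A minimal sketch of the second strategy, clamping the integral term to pre-determined bounds, follows; the gains, limits, and sampling period are illustrative assumptions:

```python
def pid_step(error: float, integral: float,
             kp: float = 2.0, ki: float = 1.0, dt: float = 0.1,
             out_min: float = 0.0, out_max: float = 100.0):
    """One PI update with the integral clamped so its contribution stays feasible."""
    integral += error * dt
    # Keep the integral term's contribution within the output bounds
    integral = max(out_min / ki, min(out_max / ki, integral))
    output = kp * error + ki * integral
    # Saturate the actuator command itself
    output = max(out_min, min(out_max, output))
    return output, integral
```

Even during a sustained large error, the stored integral never exceeds the value that corresponds to full actuator output, so recovery after the error clears is not delayed by unwinding.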
=== Overshooting from known disturbances ===
For example, suppose a PID loop is used to control the temperature of an electric resistance furnace where the system has stabilized. Now when the door is opened and something cold is put into the furnace, the temperature drops below the setpoint. The integral function of the controller tends to compensate for the error by introducing another error in the positive direction. This overshoot can be avoided by freezing the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace.
=== PI controller ===
A PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.
The controller output is given by
{\displaystyle K_{P}\Delta +K_{I}\int \Delta \,dt}
where {\displaystyle \Delta } is the error or deviation of the actual measured value (PV) from the setpoint (SP):
{\displaystyle \Delta =SP-PV.}
A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators:
{\displaystyle C={\frac {G(1+\tau s)}{\tau s}}}
where {\displaystyle G=K_{P}} is the proportional gain and {\displaystyle {\frac {G}{\tau }}=K_{I}} is the integral gain.
Setting a value for {\displaystyle G} is often a trade-off between decreasing overshoot and increasing settling time.
The lack of derivative action may make the system steadier in the steady state in the case of noisy data, because derivative action is more sensitive to higher-frequency terms in the inputs.
Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.
=== Deadband ===
Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.
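A sketch of such a deadband on the controller output follows; the threshold value is an illustrative assumption:

```python
def apply_deadband(new_output: float, last_output: float,
                   deadband: float = 0.5) -> float:
    """Hold the previous output unless the requested change exceeds the deadband."""
    if abs(new_output - last_output) > deadband:
        return new_output
    return last_output

valve = 40.0
valve = apply_deadband(40.3, valve)  # small change: valve command held at 40.0
valve = apply_deadband(41.0, valve)  # exceeds deadband: valve moves to 41.0
```

Small corrections are suppressed, trading a little control accuracy for far fewer actuator movements and correspondingly less mechanical wear.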
=== Setpoint step change ===
The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate some of the following modifications:
Setpoint ramping
In this modification, the setpoint is gradually moved from its old value to a newly specified value using a linear or first-order differential ramp function. This avoids the discontinuity present in a simple step change.
Derivative of the process variable
In this case the PID controller measures the derivative of the measured PV, rather than the derivative of the error. This quantity is always continuous (i.e., never has a step change as a result of changed setpoint). This modification is a simple case of setpoint weighting.
Setpoint weighting
Setpoint weighting adds adjustable factors (usually between 0 and 1) to the setpoint in the error in the proportional and derivative element of the controller. The error in the integral term must be the true control error to avoid steady-state control errors. These two extra parameters do not affect the response to load disturbances and measurement noise and can be tuned to improve the controller's setpoint response.
=== Feed-forward ===
The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.
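The velocity-loop example can be sketched as one combined update; the gains, scaling factor, and time step are illustrative assumptions, not values from any particular system:

```python
def velocity_loop_step(setpoint_vel: float, desired_accel: float,
                       measured_vel: float, integral: float,
                       kp: float = 1.5, ki: float = 0.4,
                       kff: float = 0.8, dt: float = 0.01):
    """One update combining feed-forward on desired acceleration with a PI trim."""
    error = setpoint_vel - measured_vel
    integral += error * dt
    feedback = kp * error + ki * integral  # closed-loop (here PI) correction
    feedforward = kff * desired_accel      # open-loop part: bulk of the command
    return feedforward + feedback, integral
```

Because the feed-forward term depends only on the commanded acceleration and not on feedback, it contributes the major portion of the output without affecting loop stability.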
=== Bumpless operation ===
PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.
=== Other improvements ===
In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic, or computational verb logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another method for improving the PID controller is to increase its degrees of freedom by using fractional-order integrators and differentiators, whose non-integer orders add flexibility to the controller.
== Cascade control ==
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance; this is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the setpoint of the other. A PID controller acts as the outer-loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner-loop controller, which reads the output of the outer-loop controller as its setpoint, usually controlling a more rapidly changing parameter, such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.
For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.
The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system it controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response.
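A sketch of one update of such a cascade follows, with the outer bath-temperature loop setting the inner heater loop's setpoint; both controllers are reduced to PI form and all gains are illustrative assumptions:

```python
def cascade_step(bath_sp: float, bath_pv: float, heater_pv: float,
                 outer_i: float, inner_i: float,
                 kp_o: float = 0.5, ki_o: float = 0.05,
                 kp_i: float = 4.0, ki_i: float = 1.0, dt: float = 0.1):
    """One update of a two-loop cascade: outer loop output = inner loop setpoint."""
    # Outer (slow) loop: bath temperature error -> heater temperature setpoint
    e_out = bath_sp - bath_pv
    outer_i += e_out * dt
    heater_sp = kp_o * e_out + ki_o * outer_i
    # Inner (fast) loop: heater temperature error -> heater power command
    e_in = heater_sp - heater_pv
    inner_i += e_in * dt
    power = kp_i * e_in + ki_i * inner_i
    return power, heater_sp, outer_i, inner_i
```

The two loops can be tuned independently: the outer loop for the slow bulk-water dynamics, the inner loop for the fast heater dynamics.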
== Alternative nomenclature and forms ==
=== Standard versus parallel (ideal) form ===
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the {\displaystyle K_{p}} gain is applied to the {\displaystyle I_{\mathrm {out} }} and {\displaystyle D_{\mathrm {out} }} terms, yielding:
{\displaystyle u(t)=K_{p}\left(e(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau +T_{d}{\frac {d}{dt}}e(t)\right)}
where {\displaystyle T_{i}} is the integral time and {\displaystyle T_{d}} is the derivative time.
In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which compensates for future and past errors. The proportional term is the current error. The derivative term attempts to predict the error value {\displaystyle T_{d}} seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral term adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in {\displaystyle T_{i}} seconds (or samples). The resulting compensated single error value is then scaled by the single gain {\displaystyle K_{p}} to compute the control variable.
In the parallel form, shown in the controller theory section,
{\displaystyle u(t)=K_{p}e(t)+K_{i}\int _{0}^{t}e(\tau )\,d\tau +K_{d}{\frac {d}{dt}}e(t)}
the gain parameters are related to the parameters of the standard form through {\displaystyle K_{i}=K_{p}/T_{i}} and {\displaystyle K_{d}=K_{p}T_{d}}. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors, and it is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.
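The conversion between the two parameterizations follows directly from the relations K_i = K_p/T_i and K_d = K_p·T_d; a small helper can be sketched as:

```python
def standard_to_parallel(kp: float, ti: float, td: float):
    """Convert standard-form (K_p, T_i, T_d) to parallel-form (K_p, K_i, K_d)."""
    return kp, kp / ti, kp * td

def parallel_to_standard(kp: float, ki: float, kd: float):
    """Convert parallel-form (K_p, K_i, K_d) back to standard form."""
    return kp, kp / ki, kd / kp

assert standard_to_parallel(2.0, 4.0, 0.5) == (2.0, 0.5, 1.0)
assert parallel_to_standard(2.0, 0.5, 1.0) == (2.0, 4.0, 0.5)
```

Note that both conversions assume nonzero gains; an I-free or P-free controller needs special-casing.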
=== Reciprocal gain, a.k.a. proportional band ===
In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain {\displaystyle K_{p}} not as "output per degree", but rather in the reciprocal form of a proportional band {\displaystyle 100/K_{p}}, which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.
=== Basing derivative action on PV ===
In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.
=== Basing proportional action on PV ===
Most commercial control systems offer the option of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.
Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.
{\displaystyle \mathrm {MV(t)} =K_{p}\left(\,{-PV(t)}+{\frac {1}{T_{i}}}\int _{0}^{t}{e(\tau )}\,{d\tau }-T_{d}{\frac {d}{dt}}PV(t)\right)}
King describes an effective chart-based method.
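A minimal sketch of one update of this form, with the integral acting on the true error while the proportional and derivative terms act on the PV, follows; the gains and time step are illustrative assumptions:

```python
def pid_pv_step(sp: float, pv: float, prev_pv: float, integral_e: float,
                kp: float = 1.0, ti: float = 2.0, td: float = 0.1,
                dt: float = 0.01):
    """MV = Kp(-PV + (1/Ti) * integral(e) - Td * dPV/dt), discretized."""
    integral_e += (sp - pv) * dt      # integral still uses the true error
    dpv = (pv - prev_pv) / dt         # derivative of PV, not of the error
    out = kp * (-pv + integral_e / ti - td * dpv)
    return out, integral_e
```

A setpoint step changes only the integral term's input, so the output moves gradually instead of jumping.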
=== Laplace form ===
Sometimes it is useful to write the PID regulator in Laplace transform form:
{\displaystyle G(s)=K_{p}+{\frac {K_{i}}{s}}+K_{d}{s}={\frac {K_{d}{s^{2}}+K_{p}{s}+K_{i}}{s}}}
Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.
=== Series/interacting form ===
Another representation of the PID controller is the series, or interacting form
{\displaystyle G(s)=K_{c}({\frac {1}{\tau _{i}{s}}}+1)(\tau _{d}{s}+1)}
where the parameters are related to the parameters of the standard form through {\displaystyle K_{p}=K_{c}\cdot \alpha }, {\displaystyle T_{i}=\tau _{i}\cdot \alpha }, and {\displaystyle T_{d}={\frac {\tau _{d}}{\alpha }}} with {\displaystyle \alpha =1+{\frac {\tau _{d}}{\tau _{i}}}}.
This form essentially consists of a PD and PI controller in series. As the integral is required to calculate the controller's bias this form provides the ability to track an external bias value which is required to be used for proper implementation of multi-controller advanced control schemes.
=== Discrete implementation ===
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretized. Approximations for first-order derivatives are made by backward finite differences.
{\displaystyle u(t)} and {\displaystyle e(t)} are discretized with a sampling period {\displaystyle \Delta t}; k is the sample index.
Differentiating both sides of PID equation using Newton's notation gives:
{\displaystyle {\dot {u}}(t)=K_{p}{\dot {e}}(t)+K_{i}e(t)+K_{d}{\ddot {e}}(t)}
Derivative terms are approximated as,
{\displaystyle {\dot {f}}(t_{k})={\dfrac {df(t_{k})}{dt}}={\dfrac {f(t_{k})-f(t_{k-1})}{\Delta t}}}
So,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\dot {e}}(t_{k})-{\dot {e}}(t_{k-1})}{\Delta t}}}
Applying backward difference again gives,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}-{\frac {e(t_{k-1})-e(t_{k-2})}{\Delta t}}}{\Delta t}}}
By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in a MCU is finally obtained:
{\displaystyle u(t_{k})=u(t_{k-1})+\left(K_{p}+K_{i}\Delta t+{\dfrac {K_{d}}{\Delta t}}\right)e(t_{k})+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {K_{d}}{\Delta t}}e(t_{k-2})}
or:
{\displaystyle u(t_{k})=u(t_{k-1})+K_{p}\left[\left(1+{\dfrac {\Delta t}{T_{i}}}+{\dfrac {T_{d}}{\Delta t}}\right)e(t_{k})+\left(-1-{\dfrac {2T_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {T_{d}}{\Delta t}}e(t_{k-2})\right]}
with {\displaystyle T_{i}=K_{p}/K_{i},T_{d}=K_{d}/K_{p}}
Note: this method in fact solves
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}+u_{0}}
where {\displaystyle u_{0}} is a constant independent of t. This constant is useful for starting and stopping control of the regulation loop. For instance, setting Kp, Ki, and Kd to 0 will keep u(t) constant. Likewise, when starting regulation on a system where the error is already close to 0 and u(t) is non-zero, it prevents the output from being sent to 0.
== Pseudocode ==
Here is a very simple and explicit pseudocode implementation:
Kp - proportional gain
Ki - integral gain
Kd - derivative gain
dt - loop interval time (assumes reasonable scale)
previous_error := 0
integral := 0
loop:
error := setpoint − measured_value
proportional := error
integral := integral + error × dt
derivative := (error - previous_error) / dt
output := Kp × proportional + Ki × integral + Kd × derivative
previous_error := error
wait(dt)
goto loop
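Assuming a simple simulated first-order plant in place of real hardware, the same loop can be written as runnable Python; the gains and the plant model are illustrative assumptions:

```python
def simulate(setpoint: float = 1.0, kp: float = 2.0, ki: float = 1.0,
             kd: float = 0.05, dt: float = 0.01, steps: int = 2000) -> float:
    """Run the basic PID loop against a simulated first-order plant."""
    pv, integral, prev_error = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        prev_error = error
        pv += (output - pv) * dt  # first-order plant: d(pv)/dt = output - pv
    return pv
```

With these assumed gains, the process variable settles close to the setpoint within the simulated 20 seconds; on real hardware the `wait(dt)` step would replace the plant-update line.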
The pseudocode below illustrates how to implement a PID controller treated as an IIR filter:
The Z-transform of a PID can be written as ({\displaystyle \Delta _{t}} is the sampling time):
{\displaystyle C(z)=K_{p}+K_{i}\Delta _{t}{\frac {z}{z-1}}+{\frac {K_{d}}{\Delta _{t}}}{\frac {z-1}{z}}}
and expressed in an IIR form (in agreement with the discrete implementation shown above):
{\displaystyle C(z)={\frac {\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)z^{-1}+{\dfrac {K_{d}}{\Delta _{t}}}z^{-2}}{1-z^{-1}}}}
We can then deduce the recursive iteration often found in FPGA implementations:
{\displaystyle u[n]=u[n-1]+\left(K_{p}+K_{i}\Delta _{t}+{\dfrac {K_{d}}{\Delta _{t}}}\right)\epsilon [n]+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta _{t}}}\right)\epsilon [n-1]+{\dfrac {K_{d}}{\Delta _{t}}}\epsilon [n-2]}
A0 := Kp + Ki*dt + Kd/dt
A1 := -Kp - 2*Kd/dt
A2 := Kd/dt
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
wait(dt)
goto loop
Here, Kp is a dimensionless number, Ki is expressed in {\displaystyle s^{-1}}, and Kd is expressed in s. When performing a regulation where the actuator and the measured value are not in the same unit (e.g., temperature regulation using a motor controlling a valve), Kp, Ki, and Kd may be corrected by a unit conversion factor. It may also be useful to use Ki in its reciprocal form (integration time). The above implementation also allows an I-only controller, which may be useful in some cases.
In the real world, this is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation, the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.
Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm.
A common issue when using {\displaystyle K_{d}} is the response to the derivative of a rising or falling edge of the setpoint, as shown below:
A typical workaround is to filter the derivative action using a low-pass filter of time constant {\displaystyle \tau _{d}/N}, where {\displaystyle 3\leq N\leq 10}:
A variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative:
A0 := Kp + Ki*dt
A1 := -Kp
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
A0d := Kd/dt
A1d := - 2.0*Kd/dt
A2d := Kd/dt
N := 5
tau := Kd / (Kp*N) // IIR filter time constant
alpha := dt / (2*tau)
d0 := 0
d1 := 0
fd0 := 0
fd1 := 0
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
// PI
output := output + A0 * error[0] + A1 * error[1]
// Filtered D
d1 := d0
d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]
fd1 := fd0
fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
output := output + fd0
wait(dt)
goto loop
== See also ==
Control theory
Active disturbance rejection control
== Notes ==
== References ==
== Further reading ==
== External links ==
PID tuning using Mathematica
PID tuning using Python
Principles of PID Control and Tuning
Introduction to the key terms associated with PID Temperature Control
=== PID tutorials ===
PID Control in MATLAB/Simulink and Python with TCLab
What's All This P-I-D Stuff, Anyhow? Article in Electronic Design
Shows how to build a PID controller with basic electronic components (pg. 22)
PID Without a PhD
PID Control with MATLAB and Simulink
PID with single Operational Amplifier
Proven Methods and Best Practices for PID Control
Principles of PID Control and Tuning
PID Tuning Guide: A Best-Practices Approach to Understanding and Tuning PID Controllers
Michael Barr (2002-07-30), Introduction to Closed-Loop Control, Embedded Systems Programming, archived from the original on 2010-02-09
Jinghua Zhong, Mechanical Engineering, Purdue University (Spring 2006). "PID Controller Tuning: A Short Tutorial" (PDF). Archived from the original (PDF) on 2015-04-21. Retrieved 2013-12-04.{{cite web}}: CS1 maint: multiple names: authors list (link)
Introduction to P,PI,PD & PID Controller with MATLAB
Improving The Beginners PID
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver can alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimal way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
improved rectification of random fluctuations
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
== Open-loop and closed-loop ==
== Closed-loop transfer function ==
The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer function C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:
{\displaystyle Y(s)=P(s)U(s)}
{\displaystyle U(s)=C(s)E(s)}
{\displaystyle E(s)=R(s)-F(s)Y(s).}
Solving for Y(s) in terms of R(s) gives
{\displaystyle Y(s)=\left({\frac {P(s)C(s)}{1+P(s)C(s)F(s)}}\right)R(s)=H(s)R(s).}
The expression {\displaystyle H(s)={\frac {P(s)C(s)}{1+F(s)P(s)C(s)}}} is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If {\displaystyle |P(s)C(s)|\gg 1}, i.e., it has a large norm for each value of s, and if {\displaystyle |F(s)|\approx 1}, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
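This limiting behavior can be illustrated numerically; the plant value and the controller gains below are arbitrary assumptions:

```python
def closed_loop(P: complex, C: complex, F: complex) -> complex:
    """H = PC / (1 + PCF), the closed-loop transfer function above."""
    return P * C / (1 + P * C * F)

P = 2.0 / (1 + 0.5j)  # plant evaluated at some fixed s = j*omega (assumed)
F = 1.0               # near-ideal sensor, |F| = 1

# As the loop gain |PC| grows, H approaches 1/F = 1
errors = [abs(closed_loop(P, gain, F) - 1) for gain in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]
```

Each tenfold increase in controller gain shrinks the tracking error by roughly a factor of ten, consistent with |H - 1| = 1/|1 + PC| when F = 1.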
== PID feedback control ==
A proportional–integral–derivative controller (PID controller) is a control-loop feedback mechanism widely used in control systems.
A PID controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal.
The theoretical understanding and application date from the 1920s, and PID controllers are implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics, and later in industrial process computers.
The PID controller is probably the most-used feedback control design.
If u(t) is the control signal sent to the system, y(t) is the measured output and r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form
{\displaystyle u(t)=K_{P}e(t)+K_{I}\int ^{t}e(\tau ){\text{d}}\tau +K_{D}{\frac {{\text{d}}e(t)}{{\text{d}}t}}.}
The desired closed-loop dynamics is obtained by adjusting the three parameters KP, KI and KD, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a key specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
Applying Laplace transformation results in the transformed PID controller equation
{\displaystyle u(s)=K_{P}\,e(s)+K_{I}\,{\frac {1}{s}}\,e(s)+K_{D}\,s\,e(s)}
{\displaystyle u(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right)e(s)}
with the PID controller transfer function
{\displaystyle C(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right).}
As an example of tuning a PID controller in the closed-loop system H(s), consider a first-order plant given by
{\displaystyle P(s)={\frac {A}{1+sT_{P}}}}
where A and TP are some constants. The plant output is fed back through
{\displaystyle F(s)={\frac {1}{1+sT_{F}}}}
where TF is also a constant. Now if we set {\displaystyle K_{P}=K\left(1+{\frac {T_{D}}{T_{I}}}\right)}, KD = KTD, and {\displaystyle K_{I}={\frac {K}{T_{I}}}}, we can express the PID controller transfer function in series form as
{\displaystyle C(s)=K\left(1+{\frac {1}{sT_{I}}}\right)(1+sT_{D})}
Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting
{\displaystyle K={\frac {1}{A}},T_{I}=T_{F},T_{D}=T_{P}}
we obtain H(s) = 1. With this tuning in this example, the system output follows the reference input exactly.
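This tuning can be checked numerically for arbitrary constants; the values of A, TP, and TF below are illustrative assumptions:

```python
A, T_P, T_F = 2.0, 0.5, 0.3      # arbitrary plant and sensor constants
K, T_I, T_D = 1.0 / A, T_F, T_P  # the tuning from the text

def H(s: complex) -> complex:
    """Closed-loop transfer function H = PC / (1 + PCF)."""
    P = A / (1 + s * T_P)                        # first-order plant
    C = K * (1 + 1 / (s * T_I)) * (1 + s * T_D)  # PID in series form
    F = 1 / (1 + s * T_F)                        # feedback filter
    return P * C / (1 + P * C * F)

# H(s) equals 1 at every test frequency, up to rounding error
for s in (0.1 + 0.2j, 1 + 1j, 10j):
    assert abs(H(s) - 1) < 1e-9
```

Symbolically, PC reduces to (sTF + 1)/(sTF) and PCF to 1/(sTF), so the numerator and denominator of H cancel exactly.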
However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off are used instead.
== References == | Wikipedia/Closed-loop_controller |
Automation and Remote Control (Russian: Автоматика и Телемеханика, romanized: Avtomatika i Telemekhanika) is a Russian scientific journal published by MAIK Nauka/Interperiodica Press and distributed in English by Springer Science+Business Media.
The journal was established in April 1936 by the USSR Academy of Sciences Department of Control Processes Problems. Cofounders were the Trapeznikov Institute of Control Sciences and the Institute of Information Transmission Problems. The journal covers research on control theory problems and applications. The editor-in-chief is Andrey A. Galyaev. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7.
== History ==
The journal was established in April 1936 and published bimonthly. Since 1956 the journal has been a monthly publication and was translated into English and published in the United States under the title Automation and Remote Control by Plenum Publishing Corporation. During its existence, the scope of the journal substantially evolved and expanded to reflect virtually all subjects concerned in one way or another with the current science of automation and control systems. The journal publishes surveys, original papers, and short communications.
== References ==
== External links ==
Official website
Official website (in Russian)
Institute of Control Sciences
Journal page at MAIK Nauka/Interperiodica Press | Wikipedia/Automation_and_remote_control |
In discrete-time control theory, the dead-beat control problem consists of finding what input signal must be applied to a system in order to bring the output to the steady state in the smallest number of time steps.
For an Nth-order linear system it can be shown that this minimum number of steps will be at most N (depending on the initial condition), provided that the system is null controllable (i.e., that it can be brought to the zero state by some input). The solution is to apply feedback such that all poles of the closed-loop transfer function are at the origin of the z-plane. This approach is straightforward for linear systems. However, when it comes to nonlinear systems, dead-beat control remains an open research problem.
== Usage ==
The sole design parameter in deadbeat control is the sampling period. Since the error goes to zero within N sampling periods, the settling time remains within Nh, where h is the sampling period.
Also, the magnitude of the control signal increases significantly as the sampling period decreases. Thus, careful selection of the sampling period is crucial when employing this control method.
Finally, since the controller is based upon cancelling plant poles and zeros, these must be known precisely, otherwise the controller will not be deadbeat.
== Transfer function of dead-beat controller ==
Consider that a plant has the transfer function
{\displaystyle \mathbf {G} (z)={\frac {B(z)}{A(z)}}}
where
{\displaystyle A(z)=a_{0}+a_{1}z^{-1}+a_{2}z^{-2}+\cdots +a_{m}z^{-m},}
{\displaystyle B(z)=b_{0}+b_{1}z^{-1}+b_{2}z^{-2}+\cdots +b_{n}z^{-n}.}
The transfer function of the corresponding dead-beat controller is
{\displaystyle \mathbf {C} (z)={\frac {A(z)/B(1)}{z^{d}-B(z)/B(1)}},}
where d is the minimum system delay needed for the controller to be realizable. For example, a system with two poles must have at least a two-step delay from controller to output, so d = 2.
The closed-loop transfer function is
{\displaystyle \mathbf {L} (z)={\frac {B(z)/B(1)}{z^{d}}},}
and has all poles at the origin.
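The finite settling implied by L(z) can be demonstrated with a short simulation. The numerator coefficients b and the delay d below are hypothetical choices, picked only to show the step response settling exactly after d + n samples:

```python
# Simulation sketch of the dead-beat closed loop L(z) = B(z)/B(1) * z^{-d}.
# The coefficients b and delay d are illustrative, not from the text.

b = [1.0, 0.5]           # hypothetical B(z) = 1 + 0.5 z^{-1}
d = 2                    # assumed minimum realizable delay
B1 = sum(b)              # B(1), the DC gain of B(z)

def step_response(n_steps):
    """Step response of the FIR closed loop y[k] = sum_i b[i] r[k-d-i] / B(1)."""
    y = []
    for k in range(n_steps):
        # unit-step input: r[j] = 1 for j >= 0, else 0
        yk = sum(bi for i, bi in enumerate(b) if k - d - i >= 0) / B1
        y.append(yk)
    return y

y = step_response(8)
print(y)   # zero for d samples, then exactly 1 from sample d + 1 onward
```

Because the closed loop is FIR with all poles at the origin, the response reaches its final value exactly, rather than asymptotically as in a pole placed elsewhere.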
== Notes ==
== References ==
Kailath, Thomas: Linear Systems, Prentice Hall, 1980, ISBN 9780135369616
Warwick, Kevin: Adaptive dead beat control of stochastic systems, International Journal of Control, 44(3), 651-663, 1986.
Dorf, Richard C.; Bishop, Robert H. (2005). Modern Control Systems. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 617–619. | Wikipedia/Deadbeat_controller
Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output.
Control theory is divided into two branches. Linear control theory applies to systems made of devices which obey the superposition principle. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems can be solved by powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion.
Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques which have been developed to handle them are more rigorous and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system obtained by expanding the nonlinear solution in a series, and then linear techniques can be used. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. Even if the plant is linear, a nonlinear controller can often have attractive features such as simpler implementation, faster speed, more accuracy, or reduced control energy, which justify the more difficult design procedure.
An example of a nonlinear control system is a thermostat-controlled heating system. A building heating system such as a furnace has a nonlinear response to changes in temperature; it is either "on" or "off", it does not have the fine control in response to temperature differences that a proportional (linear) device would have. Therefore, the furnace is off until the temperature falls below the "turn on" setpoint of the thermostat, when it turns on. Due to the heat added by the furnace, the temperature increases until it reaches the "turn off" setpoint of the thermostat, which turns the furnace off, and the cycle repeats. This cycling of the temperature about the desired temperature is called a limit cycle, and is characteristic of nonlinear control systems.
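The thermostat limit cycle described above can be sketched numerically. All constants below (setpoints, heat-loss rate, heating rate) are illustrative choices, not values from the text:

```python
# Toy thermostat: Newton-cooling room plus an on/off furnace.
# The discontinuous control law produces a sustained oscillation
# (limit cycle) between the two thermostat setpoints.

def simulate(t_end=200.0, dt=0.1):
    T, furnace = 15.0, False         # room temperature (C), furnace state
    T_on, T_off = 19.0, 21.0         # turn-on / turn-off setpoints
    T_out, k, heat = 5.0, 0.05, 1.5  # outside temp, loss rate, heating rate
    history = []
    t = 0.0
    while t < t_end:
        if T < T_on:  furnace = True    # nonlinear, discontinuous switching
        if T > T_off: furnace = False
        T += dt * (k * (T_out - T) + (heat if furnace else 0.0))
        history.append(T)
        t += dt
    return history

temps = simulate()
late = temps[len(temps) // 2:]   # discard the initial transient
print(round(min(late), 1), round(max(late), 1))
```

After the initial warm-up the temperature cycles between roughly the two setpoints forever; no matter the initial condition, trajectories are attracted onto this cycle, which is what makes it a limit cycle rather than a damped oscillation.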
== Properties of nonlinear systems ==
Some properties of nonlinear dynamic systems are
They do not follow the principle of superposition (linearity and homogeneity).
They may have multiple isolated equilibrium points.
They may exhibit properties such as limit cycle, bifurcation, chaos.
Finite escape time: Solutions of nonlinear systems may not exist for all times.
== Analysis and control of nonlinear systems ==
There are several well-developed techniques for analyzing nonlinear feedback systems:
Describing function method
Phase plane method
Lyapunov stability analysis
Singular perturbation method
The Popov criterion and the circle criterion for absolute stability
Center manifold theorem
Small-gain theorem
Passivity analysis
Control design techniques for nonlinear systems also exist. These can be subdivided into techniques which attempt to treat the system as a linear system in a limited range of operation and use (well-known) linear design techniques for each region:
Gain scheduling
Those that attempt to introduce auxiliary nonlinear feedback in such a way that the system can be treated as linear for purposes of control design:
Feedback linearization
And Lyapunov based methods:
Lyapunov redesign
Control-Lyapunov function
Nonlinear damping
Backstepping
Sliding mode control
== Nonlinear feedback analysis – The Lur'e problem ==
An early nonlinear feedback system analysis problem was formulated by A. I. Lur'e.
Control systems described by the Lur'e problem have a forward path that is linear and time-invariant, and a feedback path that contains a memory-less, possibly time-varying, static nonlinearity.
The linear part can be characterized by four matrices (A,B,C,D), while the nonlinear part is Φ(y) with
{\displaystyle {\frac {\Phi (y)}{y}}\in [a,b],\quad a<b\quad \forall y}
(a sector nonlinearity).
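As a concrete illustration (the choice of Φ is an example, not from the article), the following checks numerically that Φ(y) = tanh(y) is a memoryless sector nonlinearity with sector bounds [a, b] = [0, 1]:

```python
import math

# Check that the ratio Phi(y)/y for Phi(y) = tanh(y) stays inside the
# sector [0, 1] for every nonzero sample point.

a, b = 0.0, 1.0
for i in range(-1000, 1001):
    y = i * 0.01
    if y == 0.0:
        continue                       # sector condition is stated for y != 0
    ratio = math.tanh(y) / y
    assert a <= ratio <= b, (y, ratio)
print("tanh(y)/y stays in the sector [0, 1]")
```

Graphically, the sector condition says the curve Φ(y) lies between the two lines ay and by through the origin, which tanh visibly does.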
=== Absolute stability problem ===
Consider:
(A,B) is controllable and (C,A) is observable
two real numbers a, b with a < b, defining a sector for function Φ
The Lur'e problem (also known as the absolute stability problem) is to derive conditions involving only the transfer matrix H(s) and {a,b} such that x = 0 is a globally uniformly asymptotically stable equilibrium of the system.
There are two well-known wrong conjectures on the absolute stability problem:
The Aizerman's conjecture
The Kalman's conjecture.
Graphically, these conjectures can be interpreted as restrictions on the graph of Φ(y) versus y, or on the graph of dΦ/dy versus Φ/y. There are counterexamples to Aizerman's and Kalman's conjectures in which the nonlinearity belongs to the sector of linear stability and a unique stable equilibrium coexists with a stable periodic solution (a hidden oscillation).
There are two main theorems concerning the Lur'e problem which give sufficient conditions for absolute stability:
The circle criterion (an extension of the Nyquist stability criterion for linear systems)
The Popov criterion.
== Theoretical results in nonlinear control ==
=== Frobenius theorem ===
The Frobenius theorem is a deep result in differential geometry. When applied to nonlinear control, it says the following: Given a system of the form
{\displaystyle {\dot {x}}=\sum _{i=1}^{k}f_{i}(x)u_{i}(t)\,}
where {\displaystyle x\in R^{n}}, {\displaystyle f_{1},\dots ,f_{k}} are vector fields belonging to a distribution {\displaystyle \Delta } and {\displaystyle u_{i}(t)} are control functions, the integral curves of {\displaystyle x} are restricted to a manifold of dimension {\displaystyle m} if {\displaystyle \operatorname {span} (\Delta )=m} and {\displaystyle \Delta } is an involutive distribution.
== See also ==
Feedback passivation
Phase-locked loop
Small control property
== References ==
== Further reading ==
== External links ==
Wolfram language functions for nonlinear control systems | Wikipedia/Nonlinear_control |
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous. The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field.
Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory;
although mutual inspiration exists and sometimes there is no strict limit between fields, with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed.
Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.
== History ==
The term 'computational neuroscience' was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the book Computational Neuroscience. The first of the annual open international meetings focused on Computational Neuroscience was organized by James M. Bower and John Miller in San Francisco, California in 1989. The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at the California Institute of Technology in 1985.
The early historical roots of the field can be traced to the work of people including Louis Lapicque, Hodgkin & Huxley, Hubel and Wiesel, and David Marr. Lapicque introduced the integrate and fire model of the neuron in a seminal article published in 1907, a model still popular for artificial neural networks studies because of its simplicity (see a recent review).
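Lapicque's integrate-and-fire idea can be sketched in a few lines. The leaky variant and all parameter values below are illustrative assumptions, not Lapicque's original figures:

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential
# integrates its input current and emits a spike on reaching threshold.

def lif(I=1.5, tau=10.0, v_th=1.0, v_reset=0.0, dt=0.1, t_end=100.0):
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt / tau * (-v + I)   # leaky integration of the input drive
        if v >= v_th:              # threshold crossing emits a spike...
            spikes.append(t)
            v = v_reset            # ...followed by an instantaneous reset
        t += dt
    return spikes

spikes = lif()
print(len(spikes), "spikes, roughly evenly spaced")
```

The model's appeal, then as now, is that a constant suprathreshold input produces perfectly regular spiking whose rate is an analytic function of the input, while the whole simulation is a one-line update.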
About 40 years later, Hodgkin and Huxley developed the voltage clamp and created the first biophysical model of the action potential. Hubel and Wiesel discovered that neurons in the primary visual cortex, the first cortical area to process information coming from the retina, have oriented receptive fields and are organized in columns. David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within the hippocampus and neocortex interact, store, process, and transmit information. Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory.
== Major topics ==
Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.
=== Single-neuron modeling ===
Even a single neuron has complex biophysical characteristics and can perform computations. Hodgkin and Huxley's original model only employed two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse under certain conditions through the axolemma), the fast-acting sodium and the inward-rectifying potassium. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there are a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivity of these currents are an important topic of computational neuroscience.
The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.
There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.
Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics. However, detailed neuron descriptions are computationally expensive and this computing cost can limit the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers that study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail. Hence there is a drive to produce simplified neuron models that can retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster running, simplified surrogate neuron models from computationally expensive, detailed neuron models.
=== Modeling Neuron-glia interactions ===
Glial cells participate significantly in the regulation of neuronal activity at both the cellular and the network level. Modeling this interaction helps clarify the potassium cycle, which is important for maintaining homeostasis and preventing epileptic seizures. Modeling also reveals the role of glial protrusions, which in some cases can penetrate the synaptic cleft and interfere with synaptic transmission, thereby controlling synaptic communication.
=== Development, axonal patterning, and guidance ===
Computational neuroscience aims to address a wide array of questions, including: How do axons and dendrites form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral systems? How do synapses form? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones that modulate and influence the growth and development of functional connections between neurons.
Theoretical investigations into the formation and patterning of synaptic connection and morphology are still nascent. One hypothesis that has recently garnered some attention is the minimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.
=== Sensory processing ===
Early models on sensory processing understood within a theoretical framework are credited to Horace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of efficient coding, where the neurons encoded information which minimized the number of spikes. Experimental and computational work have since supported this hypothesis in one form or another. For the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.
Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck. A subsequent theory, V1 Saliency Hypothesis (V1SH), has been developed on exogenous attentional selection of a fraction of visual input for further processing, guided by a bottom-up saliency map in the primary visual cortex.
Current research in sensory processing is divided among a biophysical modeling of different subsystems and a more theoretical modeling of perception. Current models of perception have suggested that the brain performs some form of Bayesian inference and integration of different sensory information in generating our perception of the physical world.
=== Motor control ===
Many models of the way the brain controls movement have been developed. This includes models of processing in the brain such as the cerebellum's role for error correction, skill learning in motor cortex and the basal ganglia, or the control of the vestibulo ocular reflex. This also includes many normative models, such as those of the Bayesian or optimal control flavor which are built on the idea that the brain efficiently solves its problems.
=== Memory and synaptic plasticity ===
Earlier models of memory are primarily based on the postulates of Hebbian learning. Biologically relevant models such as Hopfield net have been developed to address the properties of associative (also known as "content-addressable") style of memory that occur in biological systems. These attempts are primarily focusing on the formation of medium- and long-term memory, localizing in the hippocampus.
One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstable synapses are easy to train but also prone to stochastic disruption. Stable synapses forget less easily, but they are also harder to consolidate. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in relation to external stimulus in the coming decades.
=== Behaviors of networks ===
Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike most artificial neural networks, sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as the visual cortex, are understood in some detail. It is also unknown what the computational functions of these specific connectivity patterns are, if any.
The interactions of neurons in a small network can be often reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well-characterized theoretically. Some recent evidence suggests that dynamics of arbitrary neuronal networks can be reduced to pairwise interactions. It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of two-photon microscopy and calcium imaging, we now have powerful experimental methods with which to test the new theories regarding neuronal networks.
In some cases the complex interactions between inhibitory and excitatory neurons can be simplified using mean-field theory, which gives rise to the population model of neural networks. While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural-functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.
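As a sketch of such a mean-field population model, the following implements a Wilson-Cowan-type pair of excitatory/inhibitory rate equations. The sigmoidal gain, coupling weights, and external drives are all arbitrary illustrative choices:

```python
import math

# Two-population mean-field rate model: each population is described
# only by its mean firing rate, with sigmoidal input-output gain.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(steps=5000, dt=0.01):
    E, I = 0.1, 0.1                                # mean population rates
    wEE, wEI, wIE, wII = 12.0, 10.0, 9.0, 3.0      # hypothetical weights
    hE, hI = -2.0, -3.5                            # external drives
    traj = []
    for _ in range(steps):
        dE = -E + sigmoid(wEE * E - wEI * I + hE)  # excitatory population
        dI = -I + sigmoid(wIE * E - wII * I + hI)  # inhibitory population
        E += dt * dE
        I += dt * dI
        traj.append((E, I))
    return traj

traj = simulate()
E, I = traj[-1]
print(round(E, 3), round(I, 3))
```

Because the gain function is bounded and the dynamics are leaky, the rates stay in [0, 1] regardless of the weights; depending on the parameters, the pair can settle to a fixed point or oscillate, which is why such reduced models are popular for studying network-level dynamics.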
=== Visual attention, identification, and categorization ===
Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. In order to have a more concrete specification of the mechanism underlying visual attention and the binding of features, a number of computational models have been proposed aiming to explain psychophysical findings. In general, all models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it.
An example theory that is being extensively tested behaviorally and physiologically is the V1 Saliency Hypothesis that a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously. Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and allows complete simulation and prediction of neuropsychological syndromes.
=== Cognition, discrimination, and learning ===
Computational modeling of higher cognitive functions has only recently begun. Experimental data comes primarily from single-unit recording in primates. The frontal lobe and parietal lobe function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.
The brain seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.
The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.
The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems in decision making and the manipulation of visual representations in decision making.
=== Consciousness ===
One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick, Giulio Tononi and Christof Koch made some attempts to formulate consistent frameworks for future work in neural correlates of consciousness (NCC), though much of the work in this field remains speculative.
=== Computational clinical neuroscience ===
Computational clinical neuroscience is a field that brings together experts in neuroscience, neurology, psychiatry, decision sciences and computational modeling to quantitatively define and investigate problems in neurological and psychiatric diseases, and to train scientists and clinicians that wish to apply these models to diagnosis and treatment.
=== Predictive computational neuroscience ===
Predictive computational neuroscience is a recent field that combines signal processing, neuroscience, clinical data and machine learning to predict brain states during coma or anesthesia. For example, it is possible to anticipate deep brain states using the EEG signal. These states can be used to anticipate the hypnotic concentration to administer to the patient.
=== Computational Psychiatry ===
Computational psychiatry is an emerging field that brings together experts in machine learning, neuroscience, neurology, psychiatry, and psychology to provide an understanding of psychiatric disorders.
== Technology ==
=== Neuromorphic computing ===
A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations (see: neuromorphic computing, physical neural network). One of the advantages of using a physical model computer such as this is that it takes computational load off the processor, in the sense that the structural and some of the functional elements don't have to be programmed, since they are implemented in hardware. In recent times, neuromorphic technology has been used to build supercomputers which are used in international neuroscience collaborations. Examples include the Human Brain Project SpiNNaker supercomputer and the BrainScaleS computer.
== See also ==
== References ==
== Bibliography ==
Chklovskii DB (2004). "Synaptic connectivity and neuronal morphology: two sides of the same coin". Neuron. 43 (5): 609–17. doi:10.1016/j.neuron.2004.08.012. PMID 15339643. S2CID 16217065.
Sejnowski, Terrence J.; Churchland, Patricia Smith (1992). The computational brain. Cambridge, Mass: MIT Press. ISBN 978-0-262-03188-2.
Gerstner, W.; Kistler, W.; Naud, R.; Paninski, L. (2014). Neuronal Dynamics. Cambridge, UK: Cambridge University Press. ISBN 9781107447615.
Dayan P.; Abbott, L. F. (2001). Theoretical neuroscience: computational and mathematical modeling of neural systems. Cambridge, Mass: MIT Press. ISBN 978-0-262-04199-7.
Eliasmith, Chris; Anderson, Charles H. (2003). Neural engineering: Representation, computation, and dynamics in neurobiological systems. Cambridge, Mass: MIT Press. ISBN 978-0-262-05071-5.
Hodgkin AL, Huxley AF (28 August 1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". J. Physiol. 117 (4): 500–44. doi:10.1113/jphysiol.1952.sp004764. PMC 1392413. PMID 12991237.
William Bialek; Rieke, Fred; David Warland; Rob de Ruyter van Steveninck (1999). Spikes: exploring the neural code. Cambridge, Mass: MIT. ISBN 978-0-262-68108-7.
Schutter, Erik de (2001). Computational neuroscience: realistic modeling for experimentalists. Boca Raton: CRC. ISBN 978-0-8493-2068-2.
Sejnowski, Terrence J.; Hemmen, J. L. van (2006). 23 problems in systems neuroscience. Oxford [Oxfordshire]: Oxford University Press. ISBN 978-0-19-514822-0.
Michael A. Arbib; Shun-ichi Amari; Prudence H. Arbib (2002). The Handbook of Brain Theory and Neural Networks. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-01197-6.
Zhaoping, Li (2014). Understanding vision: theory, models, and data. Oxford, UK: Oxford University Press. ISBN 978-0199564668.
== See also ==
=== Software ===
BRIAN, a Python based simulator
Budapest Reference Connectome, web based 3D visualization tool to browse connections in the human brain
Emergent, neural simulation software.
GENESIS, a general neural simulation system.
NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons.
== External links ==
=== Journals ===
Journal of Mathematical Neuroscience
Journal of Computational Neuroscience
Neural Computation
Cognitive Neurodynamics
Frontiers in Computational Neuroscience
PLoS Computational Biology
Frontiers in Neuroinformatics
=== Conferences ===
Computational and Systems Neuroscience (COSYNE) – a computational neuroscience meeting with a systems neuroscience focus.
Annual Computational Neuroscience Meeting (CNS) – a yearly computational neuroscience meeting.
Neural Information Processing Systems (NIPS)– a leading annual conference covering mostly machine learning.
Cognitive Computational Neuroscience (CCN) – a computational neuroscience meeting focusing on computational models capable of cognitive tasks.
International Conference on Cognitive Neurodynamics (ICCN) – a yearly conference.
UK Mathematical Neurosciences Meeting– a yearly conference, focused on mathematical aspects.
Bernstein Conference on Computational Neuroscience (BCCN) – a yearly computational neuroscience conference.
AREADNE Conferences– a biennial meeting that includes theoretical and experimental results.
=== Websites ===
Encyclopedia of Computational Neuroscience, part of Scholarpedia, an online expert curated encyclopedia on computational neuroscience and dynamical systems | Wikipedia/Computational_neuroscience |
A networked control system (NCS) is a control system wherein the control loops are closed through a communication network. The defining feature of an NCS is that control and feedback signals are exchanged among the system's components in the form of information packages through a network.
== Overview ==
The functionality of a typical NCS is established by the use of four basic elements:
Sensors, to acquire information,
Controllers, to provide decision and commands,
Actuators, to perform the control commands and
Communication network, to enable exchange of information.
The most important feature of an NCS is that it connects cyberspace to physical space, enabling tasks to be executed over long distances. In addition, NCSs eliminate unnecessary wiring, reducing the complexity and overall cost of designing and implementing control systems. They can also be easily modified or upgraded by adding sensors, actuators and controllers with relatively low cost and no major change in their structure. Furthermore, by efficiently sharing data among their controllers, NCSs can fuse global information to make intelligent decisions over large physical spaces.
Their potential applications are numerous and cover a wide range of industries, such as space and terrestrial exploration, access in hazardous environments, factory automation, remote diagnostics and troubleshooting, experimental facilities, domestic robots, aircraft, automobiles, manufacturing plant monitoring, nursing homes and tele-operations. While the potential applications of NCSs are numerous, the proven applications are few, and the real opportunity in the area of NCSs is in developing real-world applications that realize the area's potential.
=== Types of communication networks ===
Fieldbuses, e.g. CAN, LON etc.
IP/Ethernet
Wireless networks, e.g. Bluetooth, Zigbee, and Z-Wave. The term wireless networked control system (WNCS) is often used in this connection.
=== Problems and solutions ===
The advent and development of the Internet, combined with the advantages provided by NCSs, attracted the interest of researchers around the globe. Along with the advantages, several challenges also emerged, giving rise to many important research topics. New control strategies, kinematics of the actuators in the systems, reliability and security of communications, bandwidth allocation, development of data communication protocols, corresponding fault detection and fault-tolerant control strategies, real-time information collection and efficient processing of sensor data are some of the related topics studied in depth.
The insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex, since it imposes additional time delays in the control loops and introduces the possibility of packet loss. Depending on the application, these time delays can severely degrade the system's performance.
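The effect of a network-induced delay can be sketched on a toy example. In the simulation below (all numbers are hypothetical choices for illustration, not taken from the works cited here), a scalar plant is stabilized by proportional feedback, and the same feedback law is then applied through a d-step network delay; the delayed loop accumulates a much larger tracking error:

```python
import numpy as np

def run_loop(delay, a=1.2, k_gain=0.5, steps=40):
    """Simulate x[k+1] = a*x[k] + u[k-delay] with u[k] = -k_gain*x[k].

    Control packets computed at step k reach the actuator `delay` steps
    later; packets "sent" before k = 0 are taken to be zero.
    """
    x = 1.0
    traj = [x]
    u_hist = []                      # queue of computed-but-undelivered inputs
    for k in range(steps):
        u_hist.append(-k_gain * x)   # controller side: sample and send
        u_applied = u_hist[k - delay] if k - delay >= 0 else 0.0
        x = a * x + u_applied        # plant side: delayed packet arrives
        traj.append(x)
    return np.array(traj)

no_delay = run_loop(delay=0)
delayed = run_loop(delay=3)
print("final |x| without delay:", abs(no_delay[-1]))
print("cumulative error, no delay vs 3-step delay:",
      np.abs(no_delay).sum(), np.abs(delayed).sum())
```

With `delay=0` the closed loop contracts by the factor a − k_gain = 0.7 per step; inserting the delay lets the open-loop growth act unchecked for the first few steps and degrades the transient thereafter.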
To alleviate the time-delay effect, Y. Tipsuwan and M-Y. Chow, in ADAC Lab at North Carolina State University, proposed the gain scheduler middleware (GSM) methodology and applied it in iSpace. S. Munir and W.J. Book (Georgia Institute of Technology) used a Smith predictor, a Kalman filter and an energy regulator to perform teleoperation through the Internet.
K.C. Lee, S. Lee and H.H. Lee used a genetic algorithm to design a controller used in a NCS. Many other researchers provided solutions using concepts from several control areas such as robust control, optimal stochastic control, model predictive control, fuzzy logic etc.
A critical issue in the design of distributed NCSs of ever-increasing complexity is meeting the requirements on system reliability and dependability while guaranteeing high performance over a wide operating range. This has brought growing attention to network-based fault detection and diagnosis techniques, which are essential for monitoring system performance.
== References ==
== Further reading ==
D. Hristu-Varsakelis and W. S. Levine (Ed.): Handbook of Networked and Embedded Control Systems, 2005. ISBN 0-8176-3239-5.
Hespanha, J. P.; Naghshtabrizi, P.; Xu, Y. (2007). "A Survey of Recent Results in Networked Control Systems". Proceedings of the IEEE. 95 (1): 138–162. CiteSeerX 10.1.1.112.3798. doi:10.1109/JPROC.2006.887288. S2CID 5660618.
Quevedo, D. E.; Nesic, D. (2012). "Robust stability of packetized predictive control of nonlinear systems with disturbances and Markovian packet losses" (PDF). Automatica. 48 (8): 1803–1811. doi:10.1016/j.automatica.2012.05.046. hdl:1959.13/933538.
Pin, G.; Parisini, T. (2011). "Networked Predictive Control of Uncertain Constrained Nonlinear Systems: Recursive Feasibility and Input-to-State Stability Analysis". IEEE Transactions on Automatic Control. 56 (1): 72–87. doi:10.1109/TAC.2010.2051091. hdl:10044/1/15547. S2CID 14365396.
S. Tatikonda, Control under communication constraints, MIT Ph.D. dissertation, 2000. http://dspace.mit.edu/bitstream/1721.1/16755/1/48245028.pdf
O. Imer, Optimal estimation and control under communication network constraints, UIUC Ph.D. dissertation, 2005. http://decision.csl.uiuc.edu/~imer/phdsmallfont.pdf
Y. Q. Wang, H. Ye and G. Z. Wang. Fault detection of NCS based on eigendecomposition, adaptive evaluation and adaptive threshold. International Journal of Control, vol. 80, no. 12, pp. 1903–1911, 2007.
M. Mesbahi and M. Egerstedt. Graph Theoretic Methods in Multiagent Networks, Princeton University Press, 2010. ISBN 978-1-4008-3535-5. https://sites.google.com/site/mesbahiegerstedt/home
Martins, N. C.; Dahleh, M. A.; Elia, N. (2006). "Feedback stabilization of uncertain systems in the presence of a direct link". IEEE Transactions on Automatic Control. 51 (3): 438–447. doi:10.1109/tac.2006.871940. S2CID 620399.
Mahajan, A.; Martins, N. C.; Rotkowitz, M. C.; Yuksel, S. "Information structures in optimal decentralized control". Proceedings of the IEEE Conference on Decision and Control. 2012: 1291–1306.
Dong, J.; Kim, J. (2012). "Markov-chain-based Output Feedback Method for Stabilization of Networked Control Systems with Random Time Delays and Packet Losses". International Journal of Control, Automation and Systems. 10 (5): 1013–1022. doi:10.1007/s12555-012-0519-x. S2CID 16994214.
== External links ==
Advanced Diagnosis Automation and Control Lab (NCSU)
Co-design Framework to Integrate Communication, Control, Computation and Energy Management in Networked Control Systems (FeedNetback Project)
In mathematics, the viscosity solution concept was introduced in the early 1980s by Pierre-Louis Lions and Michael G. Crandall as a generalization of the classical concept of what is meant by a 'solution' to a partial differential equation (PDE). It has been found that the viscosity solution is the natural solution concept to use in many applications of PDEs, including, for example, first-order equations arising in dynamic programming (the Hamilton–Jacobi–Bellman equation), differential games (the Hamilton–Jacobi–Isaacs equation) or front evolution problems, as well as second-order equations such as the ones arising in stochastic optimal control or stochastic differential games.
The classical concept was that a PDE $F(x,u,Du,D^{2}u)=0$ over a domain $x\in \Omega$ has a solution if we can find a function $u(x)$ continuous and differentiable over the entire domain such that $x$, $u$, $Du$, $D^{2}u$ satisfy the above equation at every point.
If a scalar equation is degenerate elliptic (defined below), one can define a type of weak solution called viscosity solution.
Under the viscosity solution concept, $u$ need not be everywhere differentiable. There may be points where either $Du$ or $D^{2}u$ does not exist and yet $u$ satisfies the equation in an appropriate generalized sense. The definition allows only for certain kinds of singularities, so that existence, uniqueness, and stability under uniform limits hold for a large class of equations.
== Definition ==
There are several equivalent ways to phrase the definition of viscosity solutions. See for example section II.4 of Fleming and Soner's book or the definition using semi-jets in the User's Guide.
Degenerate elliptic
An equation $F(x,u,Du,D^{2}u)=0$ in a domain $\Omega$ is defined to be degenerate elliptic if for any two symmetric matrices $X$ and $Y$ such that $Y-X$ is positive definite, and any values of $x\in \Omega$, $u\in \mathbb{R}$ and $p\in \mathbb{R}^{n}$, we have the inequality $F(x,u,p,X)\geq F(x,u,p,Y)$. For example, $-\Delta u=0$ (where $\Delta$ denotes the Laplacian) is degenerate elliptic since in this case $F(x,u,p,X)=-\operatorname{trace}(X)$, and the trace of $X$ is the sum of its eigenvalues. Any real first-order equation is degenerate elliptic.
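The degenerate-ellipticity inequality for this example can be checked numerically; the sketch below draws random symmetric matrices and verifies $F(X)\geq F(Y)$ whenever $Y-X$ is positive definite (the dimension and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def F(X):
    # F(x, u, p, X) = -trace(X) for the equation -Δu = 0
    return -np.trace(X)

for _ in range(1000):
    A = rng.standard_normal((4, 4))
    X = (A + A.T) / 2                  # random symmetric matrix
    G = rng.standard_normal((4, 4))
    P = G @ G.T + np.eye(4) * 1e-6     # positive definite perturbation
    Y = X + P                          # so Y - X is positive definite
    # degenerate ellipticity: F decreases when the Hessian argument increases
    assert F(X) >= F(Y)
```

The check succeeds because trace(Y) − trace(X) = trace(P) > 0 for every positive definite P.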
Viscosity subsolution
An upper semicontinuous function $u$ in $\Omega$ is defined to be a subsolution of the above degenerate elliptic equation in the viscosity sense if for any point $x_{0}\in \Omega$ and any $C^{2}$ function $\phi$ such that $\phi(x_{0})=u(x_{0})$ and $\phi \geq u$ in a neighborhood of $x_{0}$, we have $F(x_{0},\phi(x_{0}),D\phi(x_{0}),D^{2}\phi(x_{0}))\leq 0$.
Viscosity supersolution
A lower semicontinuous function $u$ in $\Omega$ is defined to be a supersolution of the above degenerate elliptic equation in the viscosity sense if for any point $x_{0}\in \Omega$ and any $C^{2}$ function $\phi$ such that $\phi(x_{0})=u(x_{0})$ and $\phi \leq u$ in a neighborhood of $x_{0}$, we have $F(x_{0},\phi(x_{0}),D\phi(x_{0}),D^{2}\phi(x_{0}))\geq 0$.
Viscosity solution
A continuous function $u$ is a viscosity solution of the PDE $F(x,u,Du,D^{2}u)=0$ in $\Omega$ if it is both a supersolution and a subsolution. Note that the boundary condition in the viscosity sense has not been discussed here.
== Example ==
Consider the boundary value problem $|u'(x)|=1$, or $F(u')=|u'|-1=0$, on $(-1,1)$ with boundary conditions $u(-1)=u(1)=0$. Then the function $u(x)=1-|x|$ is a viscosity solution.
Indeed, note that the boundary conditions are satisfied classically, and $|u'(x)|=1$ is well-defined in the interior except at $x=0$. Thus, it remains to show that the conditions for viscosity subsolution and viscosity supersolution hold at $x=0$. Suppose that $\phi(x)$ is any function differentiable at $x=0$ with $\phi(0)=u(0)=1$ and $\phi(x)\geq u(x)$ near $x=0$. From these assumptions, it follows that $\phi(x)-\phi(0)\geq -|x|$. For positive $x$, this inequality implies $\lim_{x\to 0^{+}}{\frac{\phi(x)-\phi(0)}{x}}\geq -1$, using that $|x|/x=\operatorname{sgn}(x)=1$ for $x>0$. On the other hand, for $x<0$, we have that $\lim_{x\to 0^{-}}{\frac{\phi(x)-\phi(0)}{x}}\leq 1$. Because $\phi$ is differentiable, the left and right limits agree and are equal to $\phi'(0)$, and we therefore conclude that $|\phi'(0)|\leq 1$, i.e., $F(\phi'(0))\leq 0$. Thus, $u$ is a viscosity subsolution. Moreover, the fact that $u$ is a supersolution holds vacuously, since there is no function $\phi(x)$ differentiable at $x=0$ with $\phi(0)=u(0)=1$ and $\phi(x)\leq u(x)$ near $x=0$. This implies that $u$ is a viscosity solution.
In fact, one may prove that $u$ is the unique viscosity solution for such a problem. The uniqueness part involves a more refined argument.
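The subsolution condition at the kink can also be probed numerically. The sketch below uses a hypothetical one-parameter family of affine test functions $\phi(x)=1+cx$: exactly those with $|c|\leq 1$ touch $u$ from above at $x=0$, and each of them satisfies $F(\phi'(0))=|c|-1\leq 0$:

```python
import numpy as np

x = np.linspace(-0.5, 0.5, 2001)
u = 1.0 - np.abs(x)                         # the candidate viscosity solution

touching = []
for c in np.linspace(-2.0, 2.0, 41):
    phi = 1.0 + c * x                       # phi(0) = u(0) = 1
    if np.all(phi >= u - 1e-12):            # phi touches u from above at 0
        touching.append(c)
        assert abs(c) - 1.0 <= 1e-9         # F(phi'(0)) = |c| - 1 <= 0

print(sorted(touching))                     # exactly the slopes with |c| <= 1
```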
=== Discussion ===
The previous boundary value problem is an eikonal equation in a single spatial dimension with $f=1$, where the solution is known to be the signed distance function to the boundary of the domain. Note also in the previous example the importance of the sign of $F$. In particular, the viscosity solution to the PDE $-F=0$ with the same boundary conditions is $u(x)=|x|-1$. This can be explained by observing that the solution $u(x)=1-|x|$ is the limiting solution of the vanishing viscosity problem $F(u')=[u']^{2}-1=\epsilon u''$ as $\epsilon$ goes to zero, while $u(x)=|x|-1$ is the limit solution of the vanishing viscosity problem $-F(u')=1-[u']^{2}=\epsilon u''$. One can readily confirm that $u_{\epsilon}(x)=\epsilon[\ln(\cosh(1/\epsilon))-\ln(\cosh(x/\epsilon))]$ solves the PDE $F(u')=[u']^{2}-1=\epsilon u''$ for each $\epsilon >0$. Further, the family of solutions $u_{\epsilon}$ converges toward the solution $u=1-|x|$ as $\epsilon$ vanishes (see Figure).
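This convergence can be checked numerically. The sketch below evaluates the closed-form $u_{\epsilon}$ (using a numerically stable $\ln\cosh$) and confirms both that it satisfies the viscous ODE and that it approaches $1-|x|$ as $\epsilon$ shrinks; the sample values of $\epsilon$ are arbitrary:

```python
import numpy as np

def u_eps(x, eps):
    # u_eps(x) = eps * [ln cosh(1/eps) - ln cosh(x/eps)];
    # logaddexp gives a stable ln cosh z = ln(e^z + e^-z) - ln 2.
    def logcosh(z):
        return np.logaddexp(z, -z) - np.log(2.0)
    return eps * (logcosh(1.0 / eps) - logcosh(x / eps))

x = np.linspace(-1.0, 1.0, 401)
for eps in [0.5, 0.1, 0.02]:
    err = np.max(np.abs(u_eps(x, eps) - (1.0 - np.abs(x))))
    print(f"eps={eps}: max deviation from 1-|x| is {err:.4f}")

# Verify the viscous ODE [u']^2 - 1 = eps*u'' at an interior point, using the
# closed-form derivatives u' = -tanh(x/eps) and u'' = -sech(x/eps)^2 / eps.
eps, x0 = 0.1, 0.3
lhs = np.tanh(x0 / eps) ** 2 - 1.0
rhs = eps * (-(1.0 / np.cosh(x0 / eps)) ** 2 / eps)
assert abs(lhs - rhs) < 1e-12
```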
== Basic properties ==
The three basic properties of viscosity solutions are existence, uniqueness and stability.
The uniqueness of solutions requires some extra structural assumptions on the equation. Yet it can be shown for a very large class of degenerate elliptic equations. It is a direct consequence of the comparison principle. Some simple examples where the comparison principle holds are
$u+H(x,\nabla u)=0$ with H uniformly continuous in both variables;
(Uniformly elliptic case) $F(D^{2}u,Du,u)=0$ such that $F$ is Lipschitz with respect to all variables and, for every $r\leq s$ and $X\geq Y$, $F(Y,p,s)\geq F(X,p,r)+\lambda \|X-Y\|$ for some $\lambda >0$.
The existence of solutions holds in all cases where the comparison principle holds and the boundary conditions can be enforced in some way (through barrier functions in the case of a Dirichlet boundary condition). For first-order equations, it can be obtained using the vanishing viscosity method, or for most equations using Perron's method. There is a generalized notion of boundary condition, in the viscosity sense. A boundary problem with generalized boundary conditions is solvable whenever the comparison principle holds.
The stability of solutions in $L^{\infty}$ holds as follows: a locally uniform limit of a sequence of solutions (or subsolutions, or supersolutions) is a solution (or subsolution, or supersolution). More generally, the notions of viscosity sub- and supersolution are also conserved by half-relaxed limits.
== History ==
The term viscosity solutions first appears in the work of Michael G. Crandall and Pierre-Louis Lions in 1983 regarding the Hamilton–Jacobi equation. The name is justified by the fact that the existence of solutions was obtained by the vanishing viscosity method. The definition of solution had actually been given earlier by Lawrence C. Evans in 1980. Subsequently, the definition and properties of viscosity solutions for the Hamilton–Jacobi equation were refined in a joint work by Crandall, Evans and Lions in 1984.
For a few years the work on viscosity solutions concentrated on first order equations because it was not known whether second order elliptic equations would have a unique viscosity solution except in very particular cases. The breakthrough result came with the method introduced by Robert Jensen in 1988 to prove the comparison principle using a regularized approximation of the solution which has a second derivative almost everywhere (in modern versions of the proof this is achieved with sup-convolutions and Alexandrov theorem).
In subsequent years the concept of viscosity solution has become increasingly prevalent in the analysis of degenerate elliptic PDE. Based on their stability properties, Barles and Souganidis obtained a very simple and general proof of convergence of finite difference schemes. Further regularity properties of viscosity solutions were obtained, especially in the uniformly elliptic case, with the work of Luis Caffarelli. Viscosity solutions have become a central concept in the study of elliptic PDE. In particular, viscosity solutions are essential in the study of the infinity Laplacian.
In the modern approach, the existence of solutions is obtained most often through the Perron method. The vanishing viscosity method is not practical for second order equations in general since the addition of artificial viscosity does not guarantee the existence of a classical solution. Moreover, the definition of viscosity solutions does not generally involve physical viscosity. Nevertheless, while the theory of viscosity solutions is sometimes considered unrelated to viscous fluids, irrotational fluids can indeed be described by a Hamilton-Jacobi equation. In this case, viscosity corresponds to the bulk viscosity of an irrotational, incompressible fluid.
Other names that were suggested were Crandall–Lions solutions, in honor of their pioneers, $L^{\infty}$-weak solutions, referring to their stability properties, or comparison solutions, referring to their most characteristic property.
== References ==
In mathematics, a function $f$ defined on some set $X$ with real or complex values is called bounded if the set of its values (its image) is bounded. In other words, there exists a real number $M$ such that $|f(x)|\leq M$ for all $x$ in $X$. A function that is not bounded is said to be unbounded.
If $f$ is real-valued and $f(x)\leq A$ for all $x$ in $X$, then the function is said to be bounded (from) above by $A$. If $f(x)\geq B$ for all $x$ in $X$, then the function is said to be bounded (from) below by $B$. A real-valued function is bounded if and only if it is bounded from above and below.
An important special case is a bounded sequence, where $X$ is taken to be the set $\mathbb{N}$ of natural numbers. Thus a sequence $f=(a_{0},a_{1},a_{2},\ldots)$ is bounded if there exists a real number $M$ such that $|a_{n}|\leq M$ for every natural number $n$. The set of all bounded sequences forms the sequence space $l^{\infty}$.
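As an illustrative check (the sequence is a hypothetical example, not from the text), the sequence $a_{n}=n/(n+1)$ is bounded with $M=1$:

```python
# a_n = n/(n+1) is bounded: |a_n| <= 1 for every n, though the bound
# is approached and never attained.
a = [k / (k + 1) for k in range(10000)]
M = 1.0
assert all(abs(v) <= M for v in a)
print(max(a))   # close to, but strictly below, 1
```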
The definition of boundedness can be generalized to functions $f:X\rightarrow Y$ taking values in a more general space $Y$ by requiring that the image $f(X)$ is a bounded set in $Y$.
== Related notions ==
Weaker than boundedness is local boundedness. A family of bounded functions may be uniformly bounded.
A bounded operator $T:X\rightarrow Y$ is not a bounded function in the sense of this page's definition (unless $T=0$), but has the weaker property of preserving boundedness: bounded sets $M\subseteq X$ are mapped to bounded sets $T(M)\subseteq Y$. This definition can be extended to any function $f:X\rightarrow Y$ if $X$ and $Y$ allow for the concept of a bounded set. Boundedness can also be determined by looking at a graph.
== Examples ==
The sine function $\sin :\mathbb{R}\rightarrow \mathbb{R}$ is bounded since $|\sin(x)|\leq 1$ for all $x\in \mathbb{R}$.
The function $f(x)=(x^{2}-1)^{-1}$, defined for all real $x$ except for −1 and 1, is unbounded. As $x$ approaches −1 or 1, the values of this function get larger in magnitude. This function can be made bounded if one restricts its domain to be, for example, $[2,\infty)$ or $(-\infty,-2]$.
The function $f(x)=(x^{2}+1)^{-1}$, defined for all real $x$, is bounded, since $|f(x)|\leq 1$ for all $x$.
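These claims can be illustrated by sampling (a sketch; the grid and sample points are arbitrary, and sampling can only suggest, not prove, boundedness):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 100001)

print(np.abs(np.sin(x)).max())        # stays within the bound M = 1

g = 1.0 / (x**2 + 1.0)                # bounded by 1, attained at x = 0
print(np.abs(g).max())

# (x^2 - 1)^(-1) is unbounded: sampling ever closer to x = 1
# produces arbitrarily large values.
for d in [1e-2, 1e-4, 1e-6]:
    print(1.0 / ((1.0 + d)**2 - 1.0))
```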
The inverse trigonometric function arctangent, defined as $y=\arctan(x)$ or $x=\tan(y)$, is increasing for all real numbers $x$ and bounded, with $-{\frac{\pi}{2}}<y<{\frac{\pi}{2}}$ radians.
By the boundedness theorem, every continuous function on a closed interval, such as $f:[0,1]\rightarrow \mathbb{R}$, is bounded. More generally, any continuous function from a compact space into a metric space is bounded.
All complex-valued functions $f:\mathbb{C}\rightarrow \mathbb{C}$ which are entire are either unbounded or constant as a consequence of Liouville's theorem. In particular, the complex $\sin :\mathbb{C}\rightarrow \mathbb{C}$ must be unbounded since it is entire and not constant.
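The unboundedness of the complex sine can be seen concretely: along the imaginary axis $|\sin(iy)|=\sinh(y)$, which grows without bound. A small numerical illustration:

```python
import cmath
import math

for y in [1.0, 5.0, 10.0]:
    val = abs(cmath.sin(1j * y))               # |sin(iy)|
    assert abs(val - math.sinh(y)) < 1e-9 * math.sinh(y)
    print(y, val)
# sinh(y) -> infinity as y -> infinity, so the complex sine is unbounded
```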
The function $f$ which takes the value 0 for $x$ rational and 1 for $x$ irrational (cf. Dirichlet function) is bounded. Thus, a function does not need to be "nice" in order to be bounded. The set of all bounded functions defined on $[0,1]$ is much larger than the set of continuous functions on that interval. Moreover, continuous functions need not be bounded; for example, the functions $g:\mathbb{R}^{2}\to \mathbb{R}$ and $h:(0,1)^{2}\to \mathbb{R}$ defined by $g(x,y):=x+y$ and $h(x,y):={\frac{1}{x+y}}$ are both continuous, but neither is bounded. (However, a continuous function must be bounded if its domain is both closed and bounded.)
== See also ==
Bounded set
Compact support
Local boundedness
Uniform boundedness
== References ==
In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.
== Linear time-invariant distributed-parameter systems ==
=== Abstract evolution equations ===
==== Discrete-time ====
With U, X and Y Hilbert spaces and $A\in L(X)$, $B\in L(U,X)$, $C\in L(X,Y)$ and $D\in L(U,Y)$, the following difference equations determine a discrete-time linear time-invariant system:
$x(k+1)=Ax(k)+Bu(k)$
$y(k)=Cx(k)+Du(k)$
with $x$ (the state) a sequence with values in X, $u$ (the input or control) a sequence with values in U and $y$ (the output) a sequence with values in Y.
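In finite dimensions the same difference equations reduce to the familiar state-space recursion. A minimal sketch with arbitrary matrices (X = R², U = Y = R; all values hypothetical):

```python
import numpy as np

# Arbitrary finite-dimensional instances of A, B, C, D
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))              # zero initial state
outputs = []
for k in range(5):
    u = np.array([[1.0]])         # constant input sequence u(k) = 1
    y = C @ x + D @ u             # y(k) = C x(k) + D u(k)
    x = A @ x + B @ u             # x(k+1) = A x(k) + B u(k)
    outputs.append(y.item())
print(outputs)
```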
==== Continuous-time ====
The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:
${\dot{x}}(t)=Ax(t)+Bu(t)$,
$y(t)=Cx(t)+Du(t)$.
An added complication now however is that to include interesting physical examples such as partial differential equations and delay differential equations into this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators then already allows for the inclusion of many interesting physical examples, but the inclusion of many other interesting physical examples forces unboundedness of B and C as well.
=== Example: a partial differential equation ===
The partial differential equation with $t>0$ and $\xi \in [0,1]$ given by
${\frac{\partial}{\partial t}}w(t,\xi)=-{\frac{\partial}{\partial \xi}}w(t,\xi)+u(t),$
$w(0,\xi)=w_{0}(\xi),$
$w(t,0)=0,$
$y(t)=\int_{0}^{1}w(t,\xi)\,d\xi,$
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L2(0, 1). The operator A is defined as
$Ax=-x'$,
$D(A)=\left\{x\in X:x{\text{ absolutely continuous}},\,x'\in L^{2}(0,1),\,x(0)=0\right\}.$
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
$Bu=u,\qquad Cx=\int_{0}^{1}x(\xi)\,d\xi,\qquad D=0.$
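The example can be simulated after discretizing in space; the upwind scheme below is one standard choice, used here purely for illustration (the grid sizes and the constant input are arbitrary). With $u\equiv 1$ the solution relaxes to the steady state $w(\xi)=\xi$, so the output tends to $\int_{0}^{1}\xi\,d\xi=1/2$:

```python
import numpy as np

# Upwind semi-discretization of  w_t = -w_xi + u(t),  w(t,0) = 0,
# with output y(t) = integral of w over [0,1] (trapezoidal quadrature).
N = 200
dxi = 1.0 / N
dt = 0.5 * dxi                 # CFL-stable time step for unit transport speed
w = np.zeros(N + 1)            # w[0] holds the boundary value w(t,0) = 0

steps = int(2.0 / dt)          # run to t = 2 so the transient has died out
for _ in range(steps):
    w[1:] = w[1:] - (dt / dxi) * (w[1:] - w[:-1]) + dt * 1.0   # u(t) = 1
    w[0] = 0.0

y = dxi * (w[0] / 2 + w[1:-1].sum() + w[-1] / 2)
print(y)                       # approximately 1/2, the steady-state output
```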
=== Example: a delay differential equation ===
The delay differential equation
${\dot{w}}(t)=w(t)+w(t-\tau)+u(t),$
$y(t)=w(t),$
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be the product of the complex numbers with L2(−τ, 0). The operator A is defined as
$A{\begin{pmatrix}r\\f\end{pmatrix}}={\begin{pmatrix}r+f(-\tau)\\f'\end{pmatrix}}$,
$D(A)=\left\{{\begin{pmatrix}r\\f\end{pmatrix}}\in X:f{\text{ absolutely continuous}},\,f'\in L^{2}([-\tau,0]),\,r=f(0)\right\}.$
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
$Bu={\begin{pmatrix}u\\0\end{pmatrix}},\qquad C{\begin{pmatrix}r\\f\end{pmatrix}}=r,\qquad D=0.$
=== Transfer functions ===
As in the finite-dimensional case the transfer function is defined through the Laplace transform (continuous-time) or Z-transform (discrete-time). Whereas in the finite-dimensional case the transfer function is a proper rational function, the infinite-dimensionality of the state space leads to irrational functions (which are however still holomorphic).
==== Discrete-time ====
In discrete time the transfer function is given in terms of the state-space parameters by $D+\sum_{k=0}^{\infty}CA^{k}Bz^{k+1}$ and it is holomorphic in a disc centered at the origin. In case $1/z$ belongs to the resolvent set of A (which is the case on a possibly smaller disc centered at the origin) the transfer function equals $D+Cz(I-zA)^{-1}B$. An interesting fact is that any function that is holomorphic in zero is the transfer function of some discrete-time system.
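In finite dimensions the agreement between the power series and the resolvent formula is easy to check numerically (the matrices and the evaluation point are arbitrary):

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.0, 0.3]])       # spectral radius < 1, arbitrary example
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.2]])
z = 0.4                          # small enough that the series converges

# partial sums of D + sum_k C A^k B z^(k+1)
series = D.copy()
Ak = np.eye(2)
for k in range(200):
    series = series + (C @ Ak @ B) * z ** (k + 1)
    Ak = Ak @ A

resolvent = D + z * (C @ np.linalg.inv(np.eye(2) - z * A) @ B)
assert np.allclose(series, resolvent)
print(series.item(), resolvent.item())
```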
==== Continuous-time ====
If A generates a strongly continuous semigroup and B, C and D are bounded operators, then the transfer function is given in terms of the state space parameters by $D+C(sI-A)^{-1}B$ for s with real part larger than the exponential growth bound of the semigroup generated by A. In more general situations this formula as it stands may not even make sense, but an appropriate generalization of this formula still holds.
To obtain an easy expression for the transfer function it is often better to take the Laplace transform in the given differential equation than to use the state space formulas as illustrated below on the examples given above.
==== Transfer function for the partial differential equation example ====
Setting the initial condition $w_{0}$ equal to zero and denoting Laplace transforms with respect to t by capital letters, we obtain from the partial differential equation given above
$sW(s,\xi)=-{\frac{d}{d\xi}}W(s,\xi)+U(s),$
$W(s,0)=0,$
$Y(s)=\int_{0}^{1}W(s,\xi)\,d\xi.$
This is an inhomogeneous linear differential equation with $\xi$ as the variable, s as a parameter and initial condition zero. The solution is $W(s,\xi)=U(s)(1-e^{-s\xi})/s$. Substituting this in the equation for Y and integrating gives $Y(s)=U(s)(e^{-s}+s-1)/s^{2}$, so that the transfer function is $(e^{-s}+s-1)/s^{2}$.
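The computation can be cross-checked numerically by comparing the quadrature of $W(s,\xi)/U(s)$ over $\xi \in [0,1]$ with the closed-form transfer function (sample points for $s$ chosen arbitrarily):

```python
import numpy as np

# Compare the integral of (1 - e^{-s xi})/s over [0,1] with (e^{-s}+s-1)/s^2
# at a few arbitrary sample points s (trapezoidal quadrature).
for s in [0.5, 1.0, 2.0 + 1.0j]:
    xi = np.linspace(0.0, 1.0, 100001)
    f = (1.0 - np.exp(-s * xi)) / s          # = W(s, xi)/U(s)
    dxi = xi[1] - xi[0]
    quad = dxi * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)
    closed_form = (np.exp(-s) + s - 1.0) / s ** 2
    assert abs(quad - closed_form) < 1e-8
    print(s, closed_form)
```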
==== Transfer function for the delay differential equation example ====
Proceeding similarly as for the partial differential equation example, the transfer function for the delay equation example is $1/(s-1-e^{-s})$.
=== Controllability ===
In the infinite-dimensional case there are several non-equivalent definitions of controllability, which in the finite-dimensional case all collapse to the single usual notion of controllability. The three most important controllability concepts are:
Exact controllability,
Approximate controllability,
Null controllability.
==== Controllability in discrete-time ====
An important role is played by the maps
Φ
n
{\displaystyle \Phi _{n}}
which map the set of all U valued sequences into X and are given by
Φ
n
u
=
∑
k
=
0
n
A
k
B
u
k
{\displaystyle \Phi _{n}u=\sum _{k=0}^{n}A^{k}Bu_{k}}
. The interpretation is that
Φ
n
u
{\displaystyle \Phi _{n}u}
is the state that is reached by applying the input sequence u when the initial condition is zero. The system is called
exactly controllable in time n if the range of {\displaystyle \Phi _{n}} equals X,
approximately controllable in time n if the range of {\displaystyle \Phi _{n}} is dense in X,
null controllable in time n if the range of {\displaystyle \Phi _{n}} includes the range of An.
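For a finite-dimensional system the map Φn can be computed directly, and all three notions above collapse to the usual rank condition on the vectors A^k B. A minimal Python sketch with an invented 2×2 example (ours, not from the source):

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def phi(A, B, u):
    """Phi_n u = sum_{k=0}^{n} A^k B u_k: the state reached from x = 0
    by applying the input sequence u = (u_0, ..., u_n)."""
    x, AkB = [0.0] * len(B), B[:]
    for uk in u:
        x = [xi + uk * bi for xi, bi in zip(x, AkB)]
        AkB = matvec(A, AkB)
    return x

# Invented example: nilpotent A, single scalar input.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]
# B and AB span R^2, so every state is reachable: the system is exactly
# controllable in time n = 1 (checked here via a 2x2 determinant).
v0, v1 = B, matvec(A, B)
det = v0[0] * v1[1] - v0[1] * v1[0]
print(phi(A, B, [2.0, 3.0]))  # [3.0, 2.0]
```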
==== Controllability in continuous-time ====
In controllability of continuous-time systems the map {\displaystyle \Phi _{t}} given by
{\displaystyle \int _{0}^{t}{\rm {e}}^{As}Bu(s)\,ds}
plays the role that {\displaystyle \Phi _{n}} plays in discrete-time. However, the space of control functions on which this operator acts now influences the definition. The usual choice is L2(0, ∞; U), the space of (equivalence classes of) U-valued square-integrable functions on the interval (0, ∞), but other choices such as L1(0, ∞; U) are possible. The different controllability notions can be defined once the domain of {\displaystyle \Phi _{t}} is chosen. The system is called
exactly controllable in time t if the range of {\displaystyle \Phi _{t}} equals X,
approximately controllable in time t if the range of {\displaystyle \Phi _{t}} is dense in X,
null controllable in time t if the range of {\displaystyle \Phi _{t}} includes the range of {\displaystyle {\rm {e}}^{At}}.
=== Observability ===
As in the finite-dimensional case, observability is the dual notion of controllability. In the infinite-dimensional case there are several different notions of observability which in the finite-dimensional case coincide. The three most important ones are:
Exact observability (also known as continuous observability),
Approximate observability,
Final state observability.
==== Observability in discrete-time ====
An important role is played by the maps {\displaystyle \Psi _{n}} which map X into the space of all Y-valued sequences and are given by
{\displaystyle (\Psi _{n}x)_{k}=CA^{k}x}
if k ≤ n and zero if k > n. The interpretation is that {\displaystyle \Psi _{n}x} is the truncated output with initial condition x and control zero. The system is called
exactly observable in time n if there exists a kn > 0 such that {\displaystyle \|\Psi _{n}x\|\geq k_{n}\|x\|} for all x ∈ X,
approximately observable in time n if {\displaystyle \Psi _{n}} is injective,
final state observable in time n if there exists a kn > 0 such that {\displaystyle \|\Psi _{n}x\|\geq k_{n}\|A^{n}x\|} for all x ∈ X.
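Dually to the controllability sketch, Ψn can be computed directly for a finite-dimensional example, where injectivity reduces to a rank condition on the stacked rows CA^k. A hedged Python sketch with an invented 2×2 pair:

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def psi(A, C, x, n):
    """(Psi_n x)_k = C A^k x for k = 0..n: the output sequence produced
    by initial state x with zero control (scalar output, C a row vector)."""
    out, xk = [], x[:]
    for _ in range(n + 1):
        out.append(sum(c * xi for c, xi in zip(C, xk)))
        xk = matvec(A, xk)
    return out

A = [[0.0, 1.0], [0.0, 0.0]]
C = [1.0, 0.0]
# The stacked rows C and CA form the identity matrix here, so Psi_1 is
# injective: the pair is observable in time n = 1, and the initial state
# can be read off from the output sequence.
print(psi(A, C, [3.0, 2.0], 1))  # [3.0, 2.0]
```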
==== Observability in continuous-time ====
In observability of continuous-time systems the map {\displaystyle \Psi _{t}} given by {\displaystyle (\Psi _{t}x)(s)=C{\rm {e}}^{As}x} for s ∈ [0, t] and zero for s > t plays the role that {\displaystyle \Psi _{n}} plays in discrete-time. However, the space of functions to which this operator maps now influences the definition. The usual choice is L2(0, ∞; Y), the space of (equivalence classes of) Y-valued square-integrable functions on the interval (0, ∞), but other choices such as L1(0, ∞; Y) are possible. The different observability notions can be defined once the co-domain of {\displaystyle \Psi _{t}} is chosen. The system is called
exactly observable in time t if there exists a kt > 0 such that {\displaystyle \|\Psi _{t}x\|\geq k_{t}\|x\|} for all x ∈ X,
approximately observable in time t if {\displaystyle \Psi _{t}} is injective,
final state observable in time t if there exists a kt > 0 such that {\displaystyle \|\Psi _{t}x\|\geq k_{t}\|{\rm {e}}^{At}x\|} for all x ∈ X.
=== Duality between controllability and observability ===
As in the finite-dimensional case, controllability and observability are dual concepts (at least when for the domain of {\displaystyle \Phi } and the co-domain of {\displaystyle \Psi } the usual L2 choice is made). The correspondence under duality of the different concepts is:
Exact controllability ↔ Exact observability,
Approximate controllability ↔ Approximate observability,
Null controllability ↔ Final state observability.
== See also ==
Control theory
State space (controls)
== Notes ==
== References ==
Curtain, Ruth; Zwart, Hans (1995), An Introduction to Infinite-Dimensional Linear Systems theory, Springer
Tucsnak, Marius; Weiss, George (2009), Observation and Control for Operator Semigroups, Birkhauser
Staffans, Olof (2005), Well-posed linear systems, Cambridge University Press
Luo, Zheng-Hua; Guo, Bao-Zhu; Morgul, Omer (1999), Stability and Stabilization of Infinite Dimensional Systems with Applications, Springer
Lasiecka, Irena; Triggiani, Roberto (2000), Control Theory for Partial Differential Equations, Cambridge University Press
Bensoussan, Alain; Da Prato, Giuseppe; Delfour, Michel; Mitter, Sanjoy (2007), Representation and Control of Infinite Dimensional Systems (second ed.), Birkhauser | Wikipedia/Distributed_parameter_systems |
In control theory, an open-loop controller, also called a non-feedback controller, is a control loop part of a control system in which the control action ("input" to the system) is independent of the "process output", which is the process variable that is being controlled. It does not use feedback to determine if its output has achieved the desired goal of the input command or process setpoint.
There are many open-loop controls, such as on/off switching of valves, machinery, lights, motors or heaters, where the control result is known to be approximately sufficient under normal conditions without the need for feedback. The advantage of using open-loop control in these cases is the reduction in component count and complexity. However, an open-loop system cannot correct any errors that it makes or compensate for outside disturbances, unlike a closed-loop control system.
== Open-loop and closed-loop ==
== Applications ==
An open-loop controller is often used in simple processes because of its simplicity and low cost, especially in systems where feedback is not critical. A typical example would be an older model domestic clothes dryer, for which the length of time is entirely dependent on the judgement of the human operator, with no automatic feedback of the dryness of the clothes.
An irrigation sprinkler system programmed to turn on at set times is another example of an open-loop system if it does not measure soil moisture as a form of feedback. Even if rain is pouring down on the lawn, the sprinkler system will activate on schedule, wasting water.
Another example is a stepper motor used for control of position. Sending it a stream of electrical pulses causes it to rotate by exactly that many steps, hence the name. If the motor was always assumed to perform each movement correctly, without positional feedback, it would be open-loop control. However, if there is a position encoder, or sensors to indicate the start or finish positions, then that is closed-loop control, such as in many inkjet printers. The drawback of open-loop control of steppers is that if the machine load is too high, or the motor attempts to move too quickly, then steps may be skipped. The controller has no means of detecting this and so the machine continues to run slightly out of adjustment until reset. For this reason, more complex robots and machine tools instead use servomotors rather than stepper motors, which incorporate encoders and closed-loop controllers.
However, open-loop control is very useful and economic for well-defined systems where the relationship between input and the resultant state can be reliably modeled by a mathematical formula. For example, determining the voltage to be fed to an electric motor that drives a constant load, in order to achieve a desired speed would be a good application. But if the load were not predictable and became excessive, the motor's speed might vary as a function of the load not just the voltage, and an open-loop controller would be insufficient to ensure repeatable control of the velocity.
An example of this is a conveyor system that is required to travel at a constant speed. For a constant voltage, the conveyor will move at a different speed depending on the load on the motor (represented here by the weight of objects on the conveyor). In order for the conveyor to run at a constant speed, the voltage of the motor must be adjusted depending on the load. In this case, a closed-loop control system would be necessary.
Thus there are many open-loop controls, such as switching valves, lights, motors or heaters on and off, where the result is known to be approximately sufficient without the need for feedback.
== Combination with feedback control ==
A feedback control system, such as a PID controller, can be improved by combining the feedback (closed-loop) control of a PID controller with feed-forward (open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed-forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed-forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system in some situations.
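The motion-control scheme described above can be sketched in a few lines. The following is a hedged illustration, not a production controller: a PI velocity loop on a unit mass, with the known desired acceleration fed forward. All gains, the trapezoidal setpoint, and the plant model are invented for the example.

```python
# Velocity PI control of a unit mass with acceleration feed-forward.
dt, m = 0.001, 1.0             # time step (s), mass (kg) -- assumed
kp, ki, kff = 20.0, 5.0, 1.0   # hypothetical PI and feed-forward gains

v, integ = 0.0, 0.0
for step in range(2000):
    t = step * dt
    v_sp = min(t, 1.0)               # ramp to 1 m/s, then hold
    a_sp = 1.0 if t < 1.0 else 0.0   # known desired acceleration
    err = v_sp - v
    integ += err * dt
    # Feed-forward supplies the bulk of the force; PI trims the residual.
    force = kff * m * a_sp + kp * err + ki * integ
    v += (force / m) * dt            # unit-mass plant, Euler step
print(abs(v - 1.0) < 1e-6)  # True: the load tracks the setpoint
```

Because the feed-forward term matches the plant exactly in this idealized example, the PI terms have almost nothing left to correct; with an imperfect model they would absorb the residual error.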
== See also ==
Cataract, the open-loop speed controller of early beam engines
Control theory
Feed-forward
PID controller
Process control
Open-loop transfer function
== References ==
== Further reading ==
Kuo, Benjamin C. (1991). Automatic Control Systems (6th ed.). New Jersey: Prentice Hall. ISBN 0-13-051046-7.
Ziny Flikop (2004). "Bounded-Input Bounded-Predefined-Control Bounded-Output" (http://arXiv.org/pdf/cs/0411015)
Basso, Christophe (2012). "Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide". Artech House, ISBN 978-1608075577 | Wikipedia/Open-loop_controller |
Integral windup, also known as integrator windup or reset windup, refers to the situation in a PID controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction).
== Solutions ==
This problem can be addressed by
Initializing the controller integral to a desired value, for instance to its value before the problem occurred
Increasing the setpoint in a suitable ramp
Conditional integration: disabling the integral function until the to-be-controlled process variable (PV) has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds
Back-calculating the integral term to constrain the process output within feasible bounds.
Clegg integrator: Zeroing the integral value every time the error is equal to, or crosses, zero. This avoids having the controller attempt to drive the system to have the same error integral in the opposite direction as was caused by a perturbation, but induces oscillation if a non-zero control value is required to maintain the process at the setpoint.
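One of the strategies above, bounding the integral term, can be sketched in a few lines. This is a hedged illustration with invented gains, limits, and plant, not a reference implementation:

```python
def pi_step(sp, pv, integ, kp=2.0, ki=1.0, dt=0.1,
            i_min=-1.0, i_max=1.0, out_min=0.0, out_max=1.0):
    """One step of a PI controller with integral clamping (anti-windup).
    All gains and limits are hypothetical."""
    err = sp - pv
    # Clamp the integral so a saturated actuator cannot wind it up:
    integ = max(i_min, min(i_max, integ + ki * err * dt))
    # Saturate the output to the actuator's physical range [0, 1]:
    out = max(out_min, min(out_max, kp * err + integ))
    return out, integ

# Large setpoint step: the output saturates, but the integral stays bounded.
integ, pv = 0.0, 0.0
for _ in range(100):
    out, integ = pi_step(10.0, pv, integ)
    pv += 0.05 * out        # crude first-order plant response
print(integ <= 1.0)  # True: no windup beyond the clamp
```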
== Occurrence ==
Integral windup particularly occurs as a limitation of physical systems, compared with ideal systems, due to the ideal output being physically impossible (process saturation: the output of the process being limited at the top or bottom of its scale, making the error constant). For example, the position of a valve cannot be any more open than fully open and also cannot be closed any more than fully closed. In this case, anti-windup can actually involve the integrator being turned off for periods of time until the response falls back into an acceptable range.
This usually occurs when the controller's output can no longer affect the controlled variable, or if the controller is part of a selection scheme and its output is not the one currently selected.
Integral windup was more of a problem in analog controllers. Within modern distributed control systems and programmable logic controllers, it is much easier to prevent integral windup by either limiting the controller output, limiting the integral to produce feasible output, or by using external reset feedback, which is a means of feeding back the selected output to the integral circuit of all controllers in the selection scheme so that a closed loop is maintained.
== References == | Wikipedia/Anti-wind_up_system_(control) |
In engineering, a transfer function (also known as system function or network function) of a system, sub-system, or component is a mathematical function that models the system's output for each possible input. It is widely used in electronic engineering tools like circuit simulators and control systems. In simple cases, this function can be represented as a two-dimensional graph of an independent scalar input versus the dependent scalar output (known as a transfer curve or characteristic curve). Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory.
Dimensions and units of the transfer function model the output response of the device for a range of possible inputs. The transfer function of a two-port electronic circuit, such as an amplifier, might be a two-dimensional graph of the scalar voltage at the output as a function of the scalar voltage applied to the input; the transfer function of an electromechanical actuator might be the mechanical displacement of the movable arm as a function of electric current applied to the device; the transfer function of a photodetector might be the output voltage as a function of the luminous intensity of incident light of a given wavelength.
The term "transfer function" is also used in the frequency domain analysis of systems using transform methods, such as the Laplace transform; it is the amplitude of the output as a function of the frequency of the input signal. The transfer function of an electronic filter is the amplitude at the output as a function of the frequency of a constant amplitude sine wave applied to the input. For optical imaging devices, the optical transfer function is the Fourier transform of the point spread function (a function of spatial frequency).
== Linear time-invariant systems ==
Transfer functions are commonly used in the analysis of systems such as single-input single-output filters in signal processing, communication theory, and control theory. The term is often used exclusively to refer to linear time-invariant (LTI) systems. Most real systems have non-linear input-output characteristics, but many systems operated within nominal parameters (not over-driven) have behavior close enough to linear that LTI system theory is an acceptable representation of their input-output behavior.
=== Continuous-time ===
Descriptions are given in terms of a complex variable,
s
=
σ
+
j
⋅
ω
{\displaystyle s=\sigma +j\cdot \omega }
. In many applications it is sufficient to set
σ
=
0
{\displaystyle \sigma =0}
(thus
s
=
j
⋅
ω
{\displaystyle s=j\cdot \omega }
), which reduces the Laplace transforms with complex arguments to Fourier transforms with the real argument ω. This is common in applications primarily interested in the LTI system's steady-state response (often the case in signal processing and communication theory), not the fleeting turn-on and turn-off transient response or stability issues.
For continuous-time input signal {\displaystyle x(t)} and output {\displaystyle y(t)}, dividing the Laplace transform of the output, {\displaystyle Y(s)={\mathcal {L}}\left\{y(t)\right\}}, by the Laplace transform of the input, {\displaystyle X(s)={\mathcal {L}}\left\{x(t)\right\}}, yields the system's transfer function {\displaystyle H(s)}:
{\displaystyle H(s)={\frac {Y(s)}{X(s)}}={\frac {{\mathcal {L}}\left\{y(t)\right\}}{{\mathcal {L}}\left\{x(t)\right\}}}}
which can be rearranged as:
{\displaystyle Y(s)=H(s)\;X(s)\,.}
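As a concrete illustration (ours, with an invented circuit), the transfer function of a first-order RC low-pass filter, H(s) = 1/(1 + sRC), can be evaluated on the imaginary axis s = jω with the standard `cmath` module; the gain at the cutoff frequency ω = 1/(RC) is 1/√2, the familiar −3 dB point.

```python
import cmath

R, C = 1_000.0, 1e-6          # assumed values: 1 kOhm, 1 uF

def H(s: complex) -> complex:
    """First-order low-pass transfer function H(s) = 1 / (1 + s*R*C)."""
    return 1.0 / (1.0 + s * R * C)

w_c = 1.0 / (R * C)           # cutoff: 1000 rad/s for these values
gain_at_cutoff = abs(H(1j * w_c))
print(abs(gain_at_cutoff - 1 / 2**0.5) < 1e-12)  # True: -3 dB at cutoff
```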
=== Discrete-time ===
Discrete-time signals may be notated as arrays indexed by an integer {\displaystyle n} (e.g. {\displaystyle x[n]} for input and {\displaystyle y[n]} for output). Instead of using the Laplace transform (which is better suited to continuous-time signals), discrete-time signals are dealt with using the z-transform (notated with a corresponding capital letter, like {\displaystyle X(z)} and {\displaystyle Y(z)}), so a discrete-time system's transfer function can be written as:
{\displaystyle H(z)={\frac {Y(z)}{X(z)}}={\frac {{\mathcal {Z}}\{y[n]\}}{{\mathcal {Z}}\{x[n]\}}}.}
=== Direct derivation from differential equations ===
A linear differential equation with constant coefficients
{\displaystyle L[u]={\frac {d^{n}u}{dt^{n}}}+a_{1}{\frac {d^{n-1}u}{dt^{n-1}}}+\dotsb +a_{n-1}{\frac {du}{dt}}+a_{n}u=r(t)}
where u and r are suitably smooth functions of t, has L as the operator defined on the relevant function space that transforms u into r. That kind of equation can be used to constrain the output function u in terms of the forcing function r. The transfer function can be used to define an operator {\displaystyle F[r]=u} that serves as a right inverse of L, meaning that {\displaystyle L[F[r]]=r}.
Solutions of the homogeneous constant-coefficient differential equation {\displaystyle L[u]=0} can be found by trying {\displaystyle u=e^{\lambda t}}. That substitution yields the characteristic polynomial
{\displaystyle p_{L}(\lambda )=\lambda ^{n}+a_{1}\lambda ^{n-1}+\dotsb +a_{n-1}\lambda +a_{n}\,}
The inhomogeneous case can be easily solved if the input function r is also of the form {\displaystyle r(t)=e^{st}}. Substituting {\displaystyle u=H(s)e^{st}} gives {\displaystyle L[H(s)e^{st}]=e^{st}} if we define
{\displaystyle H(s)={\frac {1}{p_{L}(s)}}\qquad {\text{wherever }}\quad p_{L}(s)\neq 0.}
Other definitions of the transfer function are used, for example {\displaystyle 1/p_{L}(ik).}
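The identity L[H(s)e^{st}] = e^{st} can be checked numerically. The sketch below uses an invented second-order operator L[u] = u'' + 3u' + 2u (so p_L(s) = s² + 3s + 2) and approximates the derivatives by central finite differences; the operator and the chosen s are illustrative assumptions, not from the source.

```python
import cmath

p_L = lambda s: s**2 + 3 * s + 2   # characteristic polynomial of L

def L_of(u, t, h=1e-4):
    """Apply L[u] = u'' + 3u' + 2u at time t via central differences."""
    d1 = (u(t + h) - u(t - h)) / (2 * h)
    d2 = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
    return d2 + 3 * d1 + 2 * u(t)

s = 0.5 + 1.0j
u = lambda t: cmath.exp(s * t) / p_L(s)   # candidate u = H(s) e^{st}
r = lambda t: cmath.exp(s * t)            # forcing r(t) = e^{st}
# L[u] should reproduce the forcing function up to discretization error:
print(abs(L_of(u, 0.3) - r(0.3)) < 1e-4)  # True
```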
=== Gain, transient behavior and stability ===
A general sinusoidal input to a system of frequency {\displaystyle \omega _{0}/(2\pi )} may be written {\displaystyle \exp(j\omega _{0}t)}. The response of a system to a sinusoidal input beginning at time {\displaystyle t=0} will consist of the sum of the steady-state response and a transient response. The steady-state response is the output of the system in the limit of infinite time, and the transient response is the difference between the response and the steady-state response; it corresponds to the homogeneous solution of the differential equation. The transfer function for an LTI system may be written as the product:
{\displaystyle H(s)=\prod _{i=1}^{N}{\frac {1}{s-s_{P_{i}}}}}
where sPi are the N roots of the characteristic polynomial and will be the poles of the transfer function. In a transfer function with a single pole
{\displaystyle H(s)={\frac {1}{s-s_{P}}}}
where {\displaystyle s_{P}=\sigma _{P}+j\omega _{P}}, the Laplace transform of a general sinusoid of unit amplitude will be {\displaystyle {\frac {1}{s-j\omega _{0}}}}. The Laplace transform of the output will be {\displaystyle {\frac {H(s)}{s-j\omega _{0}}}}, and the temporal output will be the inverse Laplace transform of that function:
{\displaystyle g(t)={\frac {e^{j\,\omega _{0}\,t}-e^{(\sigma _{P}+j\,\omega _{P})t}}{-\sigma _{P}+j(\omega _{0}-\omega _{P})}}}
The second term in the numerator is the transient response, and in the limit of infinite time it will diverge to infinity if σP is positive. For a system to be stable, its transfer function must have no poles whose real parts are positive. If the transfer function is strictly stable, the real parts of all poles will be negative and the transient behavior will tend to zero in the limit of infinite time. The steady-state output will be:
{\displaystyle g(\infty )={\frac {e^{j\,\omega _{0}\,t}}{-\sigma _{P}+j(\omega _{0}-\omega _{P})}}}
The frequency response (or "gain") G of the system is defined as the absolute value of the ratio of the output amplitude to the steady-state input amplitude:
{\displaystyle G(\omega _{0})=\left|{\frac {1}{-\sigma _{P}+j(\omega _{0}-\omega _{P})}}\right|={\frac {1}{\sqrt {\sigma _{P}^{2}+(\omega _{P}-\omega _{0})^{2}}}},}
which is the absolute value of the transfer function {\displaystyle H(s)} evaluated at {\displaystyle j\omega _{0}}. This result is valid for any number of transfer-function poles.
== Signal processing ==
If {\displaystyle x(t)} is the input to a general linear time-invariant system, and {\displaystyle y(t)} is the output, and the bilateral Laplace transform of {\displaystyle x(t)} and {\displaystyle y(t)} is
{\displaystyle {\begin{aligned}X(s)&={\mathcal {L}}\left\{x(t)\right\}\ {\stackrel {\mathrm {def} }{=}}\ \int _{-\infty }^{\infty }x(t)e^{-st}\,dt,\\Y(s)&={\mathcal {L}}\left\{y(t)\right\}\ {\stackrel {\mathrm {def} }{=}}\ \int _{-\infty }^{\infty }y(t)e^{-st}\,dt.\end{aligned}}}
The output is related to the input by the transfer function {\displaystyle H(s)} as
{\displaystyle Y(s)=H(s)X(s)}
and the transfer function itself is
{\displaystyle H(s)={\frac {Y(s)}{X(s)}}.}
If a complex harmonic signal with a sinusoidal component with amplitude {\displaystyle |X|}, angular frequency {\displaystyle \omega } and phase {\displaystyle \arg(X)}, where arg is the argument,
{\displaystyle x(t)=Xe^{j\omega t}=|X|e^{j(\omega t+\arg(X))}}
where {\displaystyle X=|X|e^{j\arg(X)}} is input to a linear time-invariant system, the corresponding component in the output is:
{\displaystyle {\begin{aligned}y(t)&=Ye^{j\omega t}=|Y|e^{j(\omega t+\arg(Y))},\\Y&=|Y|e^{j\arg(Y)}.\end{aligned}}}
In a linear time-invariant system, the input frequency {\displaystyle \omega } has not changed; only the amplitude and phase angle of the sinusoid have been changed by the system. The frequency response {\displaystyle H(j\omega )} describes this change for every frequency {\displaystyle \omega } in terms of gain
{\displaystyle G(\omega )={\frac {|Y|}{|X|}}=|H(j\omega )|}
and phase shift
{\displaystyle \phi (\omega )=\arg(Y)-\arg(X)=\arg(H(j\omega )).}
The phase delay (the frequency-dependent amount of delay introduced to the sinusoid by the transfer function) is
{\displaystyle \tau _{\phi }(\omega )=-{\frac {\phi (\omega )}{\omega }}.}
The group delay (the frequency-dependent amount of delay introduced to the envelope of the sinusoid by the transfer function) is found by computing the derivative of the phase shift with respect to angular frequency {\displaystyle \omega },
{\displaystyle \tau _{g}(\omega )=-{\frac {d\phi (\omega )}{d\omega }}.}
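The gain, phase shift, and phase delay defined above are straightforward to evaluate numerically. A hedged sketch for the invented first-order frequency response H(jω) = 1/(1 + jω), using only the standard library:

```python
import cmath
import math

def response(w: float):
    """Gain, phase shift, and phase delay of H(jw) = 1/(1 + jw)
    (an illustrative first-order example) at angular frequency w."""
    Hjw = 1.0 / (1.0 + 1j * w)
    gain = abs(Hjw)                  # G(w) = |H(jw)|
    phase = cmath.phase(Hjw)         # phi(w) = arg H(jw)
    phase_delay = -phase / w         # tau_phi(w) = -phi(w)/w
    return gain, phase, phase_delay

g, phi, tau = response(1.0)
print(abs(g - 1 / 2**0.5) < 1e-12)       # True: gain 1/sqrt(2) at w = 1
print(abs(phi + math.pi / 4) < 1e-12)    # True: phase shift is -45 degrees
```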
The transfer function can also be shown using the Fourier transform, a special case of the bilateral Laplace transform where {\displaystyle s=j\omega }.
=== Common transfer-function families ===
Although any LTI system can be described by some transfer function, "families" of special transfer functions are commonly used:
Butterworth filter – maximally flat in passband and stopband for the given order
Chebyshev filter (Type I) – maximally flat in stopband, sharper cutoff than a Butterworth filter of the same order
Chebyshev filter (Type II) – maximally flat in passband, sharper cutoff than a Butterworth filter of the same order
Bessel filter – maximally constant group delay for a given order
Elliptic filter – sharpest cutoff (narrowest transition between passband and stopband) for the given order
Optimum "L" filter
Gaussian filter – minimum group delay; gives no overshoot to a step function
Raised-cosine filter
== Control engineering ==
In control engineering and control theory, the transfer function is derived with the Laplace transform. The transfer function was the primary tool used in classical control engineering. A transfer matrix can be obtained for any linear system to analyze its dynamics and other properties; each element of a transfer matrix is a transfer function relating a particular input variable to an output variable. A representation bridging state space and transfer function methods was proposed by Howard H. Rosenbrock, and is known as the Rosenbrock system matrix.
== Imaging ==
In imaging, transfer functions are used to describe the relationship between the scene light, the image signal and the displayed light.
== Non-linear systems ==
Transfer functions do not exist for many non-linear systems, such as relaxation oscillators; however, describing functions can sometimes be used to approximate such nonlinear time-invariant systems.
== See also ==
== References ==
== External links ==
ECE 209: Review of Circuits as LTI Systems — Short primer on the mathematical analysis of (electrical) LTI systems. | Wikipedia/Transfer_function |
Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output.
Control theory is divided into two branches. Linear control theory applies to systems made of devices which obey the superposition principle. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called linear time invariant (LTI) systems. These systems can be solved by powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion.
Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques which have been developed to handle them are more rigorous and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system obtained by expanding the nonlinear solution in a series, and then linear techniques can be used. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. Even if the plant is linear, a nonlinear controller can often have attractive features such as simpler implementation, faster speed, more accuracy, or reduced control energy, which justify the more difficult design procedure.
An example of a nonlinear control system is a thermostat-controlled heating system. A building heating system such as a furnace has a nonlinear response to changes in temperature; it is either "on" or "off", it does not have the fine control in response to temperature differences that a proportional (linear) device would have. Therefore, the furnace is off until the temperature falls below the "turn on" setpoint of the thermostat, when it turns on. Due to the heat added by the furnace, the temperature increases until it reaches the "turn off" setpoint of the thermostat, which turns the furnace off, and the cycle repeats. This cycling of the temperature about the desired temperature is called a limit cycle, and is characteristic of nonlinear control systems.
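The thermostat behavior described above can be reproduced in a short simulation. This is an illustrative sketch, not the article's model: the plant is an invented first-order thermal balance, and all constants (hysteresis band, heating rate, loss coefficient) are assumed for the example.

```python
# Bang-bang (on/off) thermostat control of a first-order thermal plant.
dt = 1.0                      # time step, seconds (assumed)
T, furnace_on = 15.0, False   # start cold
T_on, T_off = 19.5, 20.5      # thermostat hysteresis band, deg C (assumed)
history = []
for _ in range(4000):
    if T < T_on:
        furnace_on = True     # below band: turn furnace on
    elif T > T_off:
        furnace_on = False    # above band: turn furnace off
    heat = 0.2 if furnace_on else 0.0
    T += dt * (heat - 0.01 * (T - 10.0))   # heating minus losses to 10 C
    history.append(T)
# After the initial transient, T settles into a limit cycle that hugs
# the hysteresis band rather than converging to a single temperature:
tail = history[2000:]
print(19.0 < min(tail) and max(tail) < 21.1)  # True
```

The persistent oscillation in `tail` is the limit cycle: the on/off nonlinearity prevents the temperature from settling at a fixed point, exactly as the text describes.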
== Properties of nonlinear systems ==
Some properties of nonlinear dynamic systems are
They do not follow the principle of superposition (linearity and homogeneity).
They may have multiple isolated equilibrium points.
They may exhibit properties such as limit cycle, bifurcation, chaos.
Finite escape time: Solutions of nonlinear systems may not exist for all times.
== Analysis and control of nonlinear systems ==
There are several well-developed techniques for analyzing nonlinear feedback systems:
Describing function method
Phase plane method
Lyapunov stability analysis
Singular perturbation method
The Popov criterion and the circle criterion for absolute stability
Center manifold theorem
Small-gain theorem
Passivity analysis
Control design techniques for nonlinear systems also exist. These can be subdivided into techniques which attempt to treat the system as a linear system in a limited range of operation and use (well-known) linear design techniques for each region:
Gain scheduling
Those that attempt to introduce auxiliary nonlinear feedback in such a way that the system can be treated as linear for purposes of control design:
Feedback linearization
And Lyapunov based methods:
Lyapunov redesign
Control-Lyapunov function
Nonlinear damping
Backstepping
Sliding mode control
== Nonlinear feedback analysis – The Lur'e problem ==
An early nonlinear feedback system analysis problem was formulated by A. I. Lur'e.
Control systems described by the Lur'e problem have a forward path that is linear and time-invariant, and a feedback path that contains a memory-less, possibly time-varying, static nonlinearity.
The linear part can be characterized by four matrices (A, B, C, D), while the nonlinear part is Φ(y) with
{\displaystyle {\frac {\Phi (y)}{y}}\in [a,b],\quad a<b\quad \forall y}
(a sector nonlinearity).
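A hypothetical numerical check of the sector condition can be written by sampling Φ(y)/y over a range of y. The saturation example and the sampling range below are assumptions for illustration:

```python
# Hypothetical helper (not from the article): numerically check the
# sector condition  a <= phi(y)/y <= b  over a grid of nonzero y.
import math

def in_sector(phi, a, b, ys):
    return all(a <= phi(y) / y <= b for y in ys if y != 0)

ys = [i / 100.0 for i in range(-500, 501)]

# A saturating nonlinearity lies in the sector [0, 1]:
sat = lambda y: max(-1.0, min(1.0, y))
print(in_sector(sat, 0.0, 1.0, ys))       # True

# sin does not: sin(y)/y goes negative for |y| > pi, leaving [0, 1]:
print(in_sector(math.sin, 0.0, 1.0, ys))  # False
```

Sampling only gives evidence, not a proof; a real verification would bound Φ(y)/y analytically.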
=== Absolute stability problem ===
Consider:
(A,B) is controllable and (C,A) is observable
two real numbers a, b with a < b, defining a sector for function Φ
The Lur'e problem (also known as the absolute stability problem) is to derive conditions involving only the transfer matrix H(s) and {a,b} such that x = 0 is a globally uniformly asymptotically stable equilibrium of the system.
There are two well-known wrong conjectures on the absolute stability problem:
Aizerman's conjecture
Kalman's conjecture.
Graphically, these conjectures can be interpreted as restrictions on the graph of Φ(y) versus y, or on the graph of dΦ/dy versus Φ/y. There are counterexamples to both Aizerman's and Kalman's conjectures in which the nonlinearity belongs to the sector of linear stability and a unique stable equilibrium coexists with a stable periodic solution, a hidden oscillation.
There are two main theorems concerning the Lur'e problem which give sufficient conditions for absolute stability:
The circle criterion (an extension of the Nyquist stability criterion for linear systems)
The Popov criterion.
== Theoretical results in nonlinear control ==
=== Frobenius theorem ===
The Frobenius theorem is a deep result in differential geometry. When applied to nonlinear control, it says the following: Given a system of the form
{\displaystyle {\dot {x}}=\sum _{i=1}^{k}f_{i}(x)u_{i}(t)}
where {\displaystyle x\in R^{n}}, {\displaystyle f_{1},\dots ,f_{k}} are vector fields belonging to a distribution {\displaystyle \Delta } and {\displaystyle u_{i}(t)} are control functions, the integral curves of {\displaystyle x} are restricted to a manifold of dimension {\displaystyle m} if {\displaystyle \operatorname {span} (\Delta )=m} and {\displaystyle \Delta } is an involutive distribution.
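Involutivity can be probed numerically with Lie brackets. The sketch below uses finite-difference Jacobians and the classical unicycle vector fields as an assumed example (they are not from the text); the bracket escapes the span of the distribution, so it is not involutive:

```python
# Illustrative sketch: test involutivity via the Lie bracket
#   [f, g](x) = Dg(x) f(x) - Df(x) g(x),
# with Jacobians estimated by central finite differences.
import math

def jacobian(f, x, h=1e-6):
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def lie_bracket(f, g, x):
    Df, Dg = jacobian(f, x), jacobian(g, x)
    fx, gx = f(x), g(x)
    n = len(x)
    return [sum(Dg[i][j] * fx[j] - Df[i][j] * gx[j] for j in range(n))
            for i in range(n)]

f1 = lambda q: [math.cos(q[2]), math.sin(q[2]), 0.0]   # "drive" field
f2 = lambda q: [0.0, 0.0, 1.0]                         # "steer" field

x0 = [0.0, 0.0, 0.5]
b = lie_bracket(f1, f2, x0)   # analytically (sin th, -cos th, 0)

# Determinant of the matrix with rows f1, f2, [f1, f2]; a nonzero value
# means the bracket leaves span{f1, f2}, so the distribution spanned by
# f1, f2 is not involutive.
M = [f1(x0), f2(x0), b]
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
     - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
     + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
print(abs(det) > 1e-3)   # True: not involutive
```

Here the failure of involutivity is the interesting case: the trajectories are not confined to a 2-dimensional manifold, which is why a unicycle can reach any pose in the plane.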
== See also ==
Feedback passivation
Phase-locked loop
Small control property
== References ==
== Further reading ==
== External links ==
Wolfram language functions for nonlinear control systems | Wikipedia/Nonlinear_control_theory |
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space. A special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information and as the criterion of good design. The performance of the closed-loop system is monitored by the coefficient diagram.
The most considerable advantages of CDM can be listed as follows:
The design procedure is easily understandable, systematic and useful. Therefore, the coefficients of the CDM controller polynomials can be determined more easily than those of PID or other types of controller, which makes it possible even for a new designer to control any kind of system.
There are explicit relations between the performance parameters specified before the design and the coefficients of the controller polynomials. For this reason, the designer can easily realize many control systems having different performance properties for a given control problem, within a wide range of freedom.
The development of different tuning methods is required for time delay processes of different properties in PID control. But it is sufficient to use the single design procedure in the CDM technique. This is an outstanding advantage.
It is particularly hard to design robust controllers realizing the desired performance properties for unstable, integrating and oscillatory processes having poles near the imaginary axis. It has been reported that successful designs can be achieved even in these cases by using CDM.
It is theoretically proven that CDM design is equivalent to LQ design with proper state augmentation. Thus, CDM can be considered an "improved LQG", because the order of the controller is smaller and weight selection rules are also given.
It is usually required that the controller for a given plant should be designed under some practical limitations.
The controller is desired to be of minimum degree and minimum phase (if possible), and to be stable. It must have sufficient bandwidth and satisfy power rating limitations. If the controller is designed without considering these limitations, the robustness property will be very poor, even though the stability and time response requirements are met. A CDM controller designed with all these limitations in mind is of the lowest degree, has a convenient bandwidth, and yields a unit step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and low cost.
Although the main principles of CDM have been known since the 1950s, the first systematic method was proposed by Shunji Manabe. He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expression. The advantages of the classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of the previous experience and knowledge of the controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems.
Many control systems have been designed successfully using CDM. It is very easy to design a controller under the conditions of stability, time domain performance and robustness. The close relations between these conditions and coefficients of the characteristic polynomial can be simply determined. This means that CDM is effective not only for control system design but also for controller parameters tuning.
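A small sketch of the central CDM quantities may help. The stability indices γ_i = a_i²/(a_{i+1}·a_{i−1}) and Manabe's standard-form recommendation (γ₁ = 2.5, γ_i = 2 otherwise) are standard in the CDM literature; the third-order coefficients below are constructed for illustration:

```python
# Sketch (assumed example, not from the article): "stability indices" of a
# characteristic polynomial  P(s) = a_n s^n + ... + a_1 s + a_0  in CDM:
#   gamma_i = a_i^2 / (a_{i+1} * a_{i-1}),  i = 1 .. n-1.
# Manabe's standard form recommends gamma_1 = 2.5 and gamma_i = 2 otherwise.

def stability_indices(a):
    """a = [a0, a1, ..., an], lowest-order coefficient first."""
    return [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, len(a) - 1)]

# Third-order coefficients built to satisfy the standard form (tau = 1 assumed):
coeffs = [1.0, 1.0, 0.4, 0.08]
print(stability_indices(coeffs))  # approximately [2.5, 2.0]
```

Monitoring these indices against the standard values is one concrete form of the "coefficient diagram" check the article describes.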
== See also ==
Polynomials
== References ==
== External links ==
Coefficient Diagram Method | Wikipedia/Coefficient_diagram_method |
Real-time computing (RTC) is the computer science term for hardware and software systems subject to a "real-time constraint", for example operational deadlines from event to system response. Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".
The term "real-time" is also used in simulation to mean that the simulation's clock runs at the same speed as a real clock.
Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds. A system not specified as operating in real time cannot usually guarantee a response within any timeframe, although typical or expected response times may be given. Real-time processing fails if not completed within a specified deadline relative to an event; deadlines must always be met, regardless of system load.
A real-time system has been described as one which "controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the environment at that time". The term "real-time" is used in process control and enterprise systems to mean "without significant delay".
Real-time software may use one or more of the following: synchronous programming languages, real-time operating systems (RTOSes), and real-time networks. Each of these provide essential frameworks on which to build a real-time software application.
Systems used for many safety-critical applications must be real-time, such as for control of fly-by-wire aircraft, or anti-lock brakes, both of which demand immediate and accurate mechanical response.
== History ==
The term real-time derives from its use in early simulation, where a real-world process is simulated at a rate which matched that of the real process (now called real-time simulation to avoid ambiguity). Analog computers, most often, were capable of simulating at a much faster pace than real-time, a situation that could be just as dangerous as a slow simulation if it were not also recognized and accounted for.
Minicomputers, particularly in the 1970s onwards, when built into dedicated embedded systems such as DOG (Digital on-screen graphic) scanners, increased the need for low-latency priority-driven responses to important interactions with incoming data. Operating systems such as Data General's RDOS (Real-Time Disk Operating System) and RTOS with background and foreground scheduling as well as Digital Equipment Corporation's RT-11 date from this era. Background-foreground scheduling allowed low priority tasks CPU time when no foreground task needed to execute, and gave absolute priority within the foreground to threads/tasks with the highest priority. Real-time operating systems would also be used for time-sharing multiuser duties. For example, Data General Business Basic could run in the foreground or background of RDOS and would introduce additional elements to the scheduling algorithm to make it more appropriate for people interacting via dumb terminals.
Early personal computers were sometimes used for real-time computing. The possibility of deactivating other interrupts allowed for hard-coded loops with defined timing, and the low interrupt latency allowed the implementation of a real-time operating system, giving the user interface and the disk drives lower priority than the real-time thread. Compared to these the programmable interrupt controller of the Intel CPUs (8086..80586) generates a very large latency and the Windows operating system is neither a real-time operating system nor does it allow a program to take over the CPU completely and use its own scheduler, without using native machine language and thus bypassing all interrupting Windows code. However, several coding libraries exist which offer real time capabilities in a high level language on a variety of operating systems, for example Java Real Time. Later microprocessors such as the Motorola 68000 and subsequent family members (68010, 68020, ColdFire etc.) also became popular with manufacturers of industrial control systems. This application area is one where real-time control offers genuine advantages in terms of process performance and safety.
== Criteria for real-time computing ==
A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. Real-time systems, as well as their deadlines, are classified by the consequence of missing a deadline:
Hard – missing a deadline is a total system failure.
Firm – infrequent deadline misses are tolerable, but may degrade the system's quality of service. The usefulness of a result is zero after its deadline.
Soft – the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.
Thus, the goal of a hard real-time system is to ensure that all deadlines are met, but for soft real-time systems the goal becomes meeting a certain subset of deadlines in order to optimize some application-specific criteria. The particular criteria optimized depend on the application, but some typical examples include maximizing the number of deadlines met, minimizing the lateness of tasks and maximizing the number of high priority tasks meeting their deadlines.
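One way to make the three classifications concrete is as time-value ("usefulness") functions of completion time. The following is a toy illustration with assumed shapes, not definitions from the text:

```python
# Toy illustration (assumed shapes): the usefulness of a result as a
# function of completion time under the three deadline classifications.

def usefulness(kind, completion, deadline, decay=0.5):
    if completion <= deadline:
        return 1.0                  # on time: full value in every class
    if kind == "hard":
        # missing the deadline is a total system failure
        raise RuntimeError("missed hard deadline: total system failure")
    if kind == "firm":
        return 0.0                  # a late result is simply worthless
    # soft: value degrades with lateness instead of vanishing
    return max(0.0, 1.0 - decay * (completion - deadline))

print(usefulness("firm", 11.0, 10.0))   # 0.0
print(usefulness("soft", 11.0, 10.0))   # 0.5
```

The shapes mirror the definitions above: hard systems treat a miss as failure, firm systems discard late results, and soft systems merely lose quality of service.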
Hard real-time systems are used when it is imperative that an event be reacted to within a strict deadline. Such strong guarantees are required of systems for which not reacting in a certain interval of time would cause great loss in some manner, especially damaging the surroundings physically or threatening human lives (although the strict definition is simply that missing the deadline constitutes failure of the system). Some examples of hard real-time systems:
A car engine control system is a hard real-time system because a delayed signal may cause engine failure or damage.
Medical systems such as heart pacemakers. Even though a pacemaker's task is simple, because of the potential risk to human life, medical systems like these are typically required to undergo thorough testing and certification, which in turn requires hard real-time computing in order to offer provable guarantees that a failure is unlikely or impossible.
Industrial process controllers, such as a machine on an assembly line. If the machine is delayed, the item on the assembly line could pass beyond the reach of the machine (leaving the product untouched), or the machine or the product could be damaged by activating the robot at the wrong time. If the failure is detected, both cases would lead to the assembly line stopping, which slows production. If the failure is not detected, a product with a defect could make it through production, or could cause damage in later steps of production.
Hard real-time systems are typically found interacting at a low level with physical hardware, in embedded systems. Early video game systems such as the Atari 2600 and Cinematronics vector graphics had hard real-time requirements because of the nature of the graphics and timing hardware.
Softmodems replace a hardware modem with software running on a computer's CPU. The software must run every few milliseconds to generate the next audio data to be output. If that data is late, the receiving modem will lose synchronization, causing a long interruption as synchronization is reestablished or causing the connection to be lost entirely.
Many types of printers have hard real-time requirements, such as inkjets (the ink must be deposited at the correct time as the printhead crosses the page), laser printers (the laser must be activated at the right time as the beam scans across the rotating drum), and dot matrix and various types of line printers (the impact mechanism must be activated at the right time as the print mechanism comes into alignment with the desired output). A failure in any of these would cause either missing output or misaligned output.
In the context of multitasking systems the scheduling policy is normally priority driven (pre-emptive schedulers). In some situations, these can guarantee hard real-time performance (for instance if the set of tasks and their priorities is known in advance). There are other hard real-time schedulers such as rate-monotonic which is not common in general-purpose systems, as it requires additional information in order to schedule a task: namely a bound or worst-case estimate for how long the task must execute. Specific algorithms for scheduling such hard real-time tasks exist, like earliest deadline first, which, ignoring the overhead of context switching, is sufficient for system loads of less than 100%. New overlay scheduling systems, such as an adaptive partition scheduler assist in managing large systems with a mixture of hard real-time and non real-time applications.
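The utilization-based tests associated with these schedulers can be sketched briefly. The Liu–Layland bound for rate-monotonic scheduling and the U ≤ 1 condition for earliest deadline first are classical results; the task set below is an illustrative assumption:

```python
# Sketch of the classical utilization-based schedulability tests for the
# schedulers mentioned above. Tasks are (worst-case execution time, period);
# the example task set is an assumption for illustration.

def utilization(tasks):
    return sum(c / t for c, t in tasks)

def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic test: U <= n(2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

def edf_schedulable(tasks):
    """Earliest-deadline-first test (deadlines equal to periods): U <= 1."""
    return utilization(tasks) <= 1.0

tasks = [(1, 4), (2, 6), (1, 10)]    # U = 0.25 + 0.333... + 0.1
print(round(utilization(tasks), 3))  # 0.683
print(rm_schedulable(tasks), edf_schedulable(tasks))  # True True
```

Both tests need exactly the extra information the text mentions: a worst-case execution-time bound per task.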
Firm real-time systems are more nebulously defined, and some classifications do not include them, distinguishing only hard and soft real-time systems. Some examples of firm real-time systems:
The assembly line machine described earlier as hard real-time could instead be considered firm real-time. A missed deadline still causes an error which needs to be dealt with: there might be machinery to mark a part as bad or eject it from the assembly line, or the assembly line could be stopped so an operator can correct the problem. However, as long as these errors are infrequent, they may be tolerated.
Soft real-time systems are typically used to solve issues of concurrent access and the need to keep a number of connected systems up-to-date through changing situations. Some examples of soft real-time systems:
Software that maintains and updates the flight plans for commercial airliners. The flight plans must be kept reasonably current, but they can operate with the latency of a few seconds.
Live audio-video systems are also usually soft real-time. A frame of audio which is played late may cause a brief audio glitch (and may cause all subsequent audio to be delayed correspondingly, causing a perception that the audio is being played slower than normal), but this may be better than the alternatives of continuing to play silence, static, a previous audio frame, or estimated data. A frame of video that is delayed typically causes even less disruption for viewers. The system can continue to operate and also recover in the future using workload prediction and reconfiguration methodologies.
Similarly, video games are often soft real-time, particularly as they try to meet a target frame rate. As the next image cannot be computed in advance, since it depends on inputs from the player, only a short time is available to perform all the computing needed to generate a frame of video before that frame must be displayed. If the deadline is missed, the game can continue at a lower frame rate; depending on the game, this may only affect its graphics (while the gameplay continues at normal speed), or the gameplay itself may be slowed down (which was common on older third- and fourth-generation consoles).
=== Real-time in digital signal processing ===
In a real-time digital signal processing (DSP) process, the analyzed (input) and generated (output) samples can be processed (or generated) continuously in the time it takes to input and output the same set of samples, independent of the processing delay. This means that the processing delay must be bounded even if the processing continues for an unlimited time. The mean processing time per sample, including overhead, is no greater than the sampling period, which is the reciprocal of the sampling rate. This criterion applies whether the samples are grouped together in large segments and processed as blocks or are processed individually, and whether there are long, short, or non-existent input and output buffers.
Consider an audio DSP example; if a process requires 2.01 seconds to analyze, synthesize, or process 2.00 seconds of sound, it is not real-time. However, if it takes 1.99 seconds, it is or can be made into a real-time DSP process.
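The criterion reduces to a single comparison. Restated with the article's 2.00-second audio example (the 48 kHz sampling rate is an assumption):

```python
# Numeric restatement of the criterion above: a DSP process is real-time
# when its mean processing time does not exceed the duration of the audio
# it processes; equivalently, mean time per sample <= sampling period.

def is_realtime_dsp(processing_seconds, audio_seconds):
    return processing_seconds <= audio_seconds

sample_period = 1 / 48_000           # assumed 48 kHz sampling rate

print(is_realtime_dsp(2.01, 2.00))   # False: falls behind the input
print(is_realtime_dsp(1.99, 2.00))   # True: can be made real-time
# The equivalent per-sample form of the same check:
print((1.99 / (2.00 * 48_000)) <= sample_period)  # True
```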
A common life analogy is standing in a line or queue waiting for the checkout in a grocery store. If the line asymptotically grows longer and longer without bound, the checkout process is not real-time. If the length of the line is bounded, and customers are being "processed" and output as rapidly, on average, as they are being inputted, then that process is real-time. The grocer might go out of business, or at least lose business, if they cannot make their checkout process real-time; thus, it is fundamentally important that this process is real-time.
A signal processing algorithm that cannot keep up with the flow of input data, with output falling further and further behind the input, is not real-time. If the delay of the output (relative to the input) is bounded regarding a process which operates over an unlimited time, then that signal processing algorithm is real-time, even if the throughput delay may be very long.
==== Live vs. real-time ====
Real-time signal processing is necessary, but not sufficient in and of itself, for live signal processing such as what is required in live event support. Live audio digital signal processing requires both real-time operation and a sufficient limit to throughput delay so as to be tolerable to performers using stage monitors or in-ear monitors and not noticeable as lip sync error by the audience also directly watching the performers. Tolerable limits to latency for live, real-time processing are a subject of investigation and debate, but are estimated to be between 6 and 20 milliseconds.
Real-time bidirectional telecommunications delays of less than 300 ms ("round trip" or twice the unidirectional delay) are considered "acceptable" to avoid undesired "talk-over" in conversation.
== Real-time and high-performance ==
Real-time computing is sometimes misunderstood to be high-performance computing, but this is not an accurate classification. For example, a massive supercomputer executing a scientific simulation may offer impressive performance, yet it is not executing a real-time computation. Conversely, once the hardware and software for an anti-lock braking system have been designed to meet its required deadlines, no further performance gains are obligatory or even useful. Furthermore, if a network server is highly loaded with network traffic, its response time may be slower, but will (in most cases) still succeed before it times out (hits its deadline). Hence, such a network server would not be considered a real-time system: temporal failures (delays, time-outs, etc.) are typically small and compartmentalized (limited in effect), but are not catastrophic failures. In a real-time system, such as the FTSE 100 Index, a slow-down beyond limits would often be considered catastrophic in its application context. The most important requirement of a real-time system is consistent output, not high throughput.
Some kinds of software, such as many chess-playing programs, can fall into either category. For instance, a chess program designed to play in a tournament with a clock will need to decide on a move before a certain deadline or lose the game, and is therefore a real-time computation, but a chess program that is allowed to run indefinitely before moving is not. In both of these cases, however, high performance is desirable: the more work a tournament chess program can do in the allotted time, the better its moves will be, and the faster an unconstrained chess program runs, the sooner it will be able to move. This example also illustrates the essential difference between real-time computations and other computations: if the tournament chess program does not make a decision about its next move in its allotted time it loses the game—i.e., it fails as a real-time computation—while in the other scenario, meeting the deadline is assumed not to be necessary. High-performance is indicative of the amount of processing that is performed in a given amount of time, whereas real-time is the ability to get done with the processing to yield a useful output in the available time.
== Near real-time ==
The term "near real-time" or "nearly real-time" (NRT), in telecommunications and computing, refers to the time delay introduced, by automated data processing or network transmission, between the occurrence of an event and the use of the processed data, such as for display or feedback and control purposes. For example, a near-real-time display depicts an event or situation as it existed at the current time minus the processing time, as nearly the time of the live event.
The distinction between the terms "near real time" and "real time" is somewhat nebulous and must be defined for the situation at hand. The term implies that there are no significant delays. In many cases, processing described as "real-time" would be more accurately described as "near real-time".
Near real-time also refers to delayed real-time transmission of voice and video. It allows playing video images, in approximately real-time, without having to wait for an entire large video file to download. Incompatible databases can export/import to common flat files that the other database can import/export on a scheduled basis so they can sync/share common data in "near real-time" with each other.
== Design methods ==
Several methods exist to aid the design of real-time systems, an example of which is MASCOT, an old but very successful method that represents the concurrent structure of the system. Other examples are HOOD, Real-Time UML, AADL, the Ravenscar profile, and Real-Time Java.
== See also ==
== References ==
== Further reading ==
Burns, Alan; Wellings, Andy (2009), Real-Time Systems and Programming Languages (4th ed.), Addison-Wesley, ISBN 978-0-321-41745-9
Buttazzo, Giorgio (2011), Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications, New York, New York: Springer, ISBN 9781461406761 – via Google Books.
Liu, Jane W. S. (2000), Real-time systems, Upper Saddle River, New Jersey: Prentice Hall.
The International Journal of Time-Critical Computing Systems
== External links ==
IEEE Technical Committee on Real-Time Systems
Euromicro Technical Committee on Real-time Systems
The What, Where and Why of Real-Time Simulation
Johnstone, R.L. "RTOS—Extending OS/360 for real time spaceflight control" (PDF). Bitsavers. Retrieved February 24, 2023.
Coyle, R. J.; Stewart, J. K. (September 1963). "Design of a Real-time Programming System". Computers and Automation. XII (9). Silver Spring, Maryland: Datatrol Corporation: 26–34. [...] set of notes which will hopefully point up problem areas which should be considered in real time design. | Wikipedia/Real-time_control |
Control theory in sociology is the idea that two control systems—inner controls and outer controls—work against our tendencies to deviate. Control theory can either be classified as centralized or decentralized. Decentralized control is considered market control. Centralized control is considered bureaucratic control. Some types of control such as clan control are considered to be a mixture of both decentralized and centralized control.
Decentralized control or market control is typically maintained through factors such as price, competition, or market share. Centralized control such as bureaucratic control is typically maintained through administrative or hierarchical techniques such as creating standards or policies. An example of mixed control is clan control which has characteristics of both centralized and decentralized control. Mixed control or clan control is typically maintained by keeping a set of values and beliefs or norms and traditions.
Containment theory, as developed by Walter Reckless in 1973, states that behavior is caused not by outside stimuli, but by what a person wants most at any given time. According to the control theory, weaker containing social systems result in more deviant behavior.
Control theory stresses how weak bonds between the individuals and society free people to deviate or go against the norms, or the people who have weak ties would engage in crimes so they could benefit, or gain something that is to their own interest. This is where strong bonds make deviance more costly. Deviant acts appear attractive to individuals but social bonds stop most people from committing the acts. Deviance is a result of extensive exposure to certain social situations where individuals develop behaviors that attract them to avoid conforming to social norms. Social bonds are used in control theory to help individuals from pursuing these attractive deviations.
According to Travis Hirschi, humans are selfish beings who make decisions based on which choice will give the greatest benefit. A good example of control theory is that people go to work: most people do not want to go to work, but they do because they get paid, which allows them to obtain food, water, shelter, and clothing.
Hirschi (1969) identifies four elements of social bonds: attachment, commitment, involvement, and belief.
== See also ==
Social control theory
== Notes ==
== References ==
Giddens, Anthony; Duneier, Mitchell; Appelbaum, Richard; Carr, Deborah. Introduction to Sociology. 7th ed. New York: W. W. Norton & Company, 2009. p. 182. Print.
Hamlin, John. "A Non-Causal Explanation: Containment Theory Walter C. Reckless." 2001. University of Minnesota, Web. 5 Mar 2010. <https://web.archive.org/web/20110604223724/http://www.d.umn.edu/cla/faculty/jhamlin/2311/Reckless.html>.
O'Grady, William. Crime in a Canadian Context. 2011. Toronto: Oxford University press. Print.
Henslin, James M. Sociology: A Down-to-Earth Approach. 9th ed. Boston: Allyn and Bacon, 2008. Print.
== External links ==
Control Theory | Wikipedia/Control_theory_(sociology) |
In control theory, a bang–bang controller (hysteresis, 2 step or on–off controller), is a feedback controller that switches abruptly between two states. These controllers may be realized in terms of any element that provides hysteresis. They are often used to control a plant that accepts a binary input, for example a furnace that is either completely on or completely off. Most common residential thermostats are bang–bang controllers. The Heaviside step function in its discrete form is an example of a bang–bang control signal. Due to the discontinuous control signal, systems that include bang–bang controllers are variable structure systems, and bang–bang controllers are thus variable structure controllers.
== Bang–bang solutions in optimal control ==
In optimal control problems, it is sometimes the case that a control is restricted to be between a lower and an upper bound. If the optimal control switches from one extreme to the other (i.e., is never strictly in between the bounds), then that control is referred to as a bang–bang solution.
Bang–bang controls frequently arise in minimum-time problems. For example, if it is desired for a car starting at rest to arrive at a certain position ahead of the car in the shortest possible time, the solution is to apply maximum acceleration until the unique switching point, and then apply maximum braking to come to rest exactly at the desired position.
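The car example can be sketched as a double integrator with bounded control |u| ≤ a, where the optimal bang-bang control switches from maximum acceleration to maximum braking at t₁ = √(d/a); the distance and acceleration values below are assumptions:

```python
# Sketch (assumed values): the minimum-time "car" problem as a double
# integrator with |u| <= a. The optimal control is bang-bang: full
# acceleration to the midpoint, then full braking; switch at t1 = sqrt(d/a).
import math

def bang_bang_trajectory(d, a, dt=1e-4):
    t1 = math.sqrt(d / a)            # switching point (half the distance)
    x = v = t = 0.0
    while t < 2 * t1:
        u = a if t < t1 else -a      # maximum accel, then maximum braking
        v += u * dt                  # semi-implicit Euler integration
        x += v * dt
        t += dt
    return x, v

x, v = bang_bang_trajectory(d=100.0, a=2.0)
print(x, v)   # x is approximately 100 (the target) and v approximately 0
```

Any control strictly inside the bounds would take longer, which is the sense in which the bang-bang solution is time-optimal here.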
A familiar everyday example is bringing water to a boil in the shortest time, which is achieved by applying full heat, then turning it off when the water reaches a boil. A closed-loop household example is most thermostats, wherein the heating element or air conditioning compressor is either running or not, depending upon whether the measured temperature is above or below the setpoint.
Bang–bang solutions also arise when the Hamiltonian is linear in the control variable; application of Pontryagin's minimum or maximum principle will then lead to pushing the control to its upper or lower bound depending on the sign of the coefficient of u in the Hamiltonian.
In summary, bang–bang controls are actually optimal controls in some cases, although they are also often implemented because of simplicity or convenience.
== Practical implications of bang–bang control ==
Mathematically or within a computing context there may be no problems, but the physical realization of bang–bang control systems gives rise to several complications.
First, depending on the width of the hysteresis gap and inertia in the process, there will be an oscillating error signal around the desired set point value (e.g., temperature), often saw-tooth shaped. Room temperature may become uncomfortable just before the next switch 'ON' event. Alternatively, a narrow hysteresis gap will lead to frequent on/off switching, which is often undesirable (e.g., for an electrically ignited gas heater).
Second, the onset of the step function may entail, for example, a high electrical current and/or sudden heating and expansion of metal vessels, ultimately leading to metal fatigue or other wear-and-tear effects. Where possible, continuous control, such as in PID control, will avoid problems caused by the brisk state transitions that are the consequence of bang–bang control.
== See also ==
== References ==
== Further reading ==
Artstein, Zvi (1980). "Discrete and continuous bang-bang and facial spaces, or: Look for the extreme points". SIAM Review. 22 (2): 172–185. doi:10.1137/1022026. JSTOR 2029960. MR 0564562.
Flugge-Lotz, Irmgard (1953). Discontinuous Automatic Control. Princeton University Press. ISBN 9780691653259. {{cite book}}: ISBN / Date incompatibility (help)
Hermes, Henry; LaSalle, Joseph P. (1969). Functional analysis and time optimal control. Mathematics in Science and Engineering. Vol. 56. New York—London: Academic Press. pp. viii+136. MR 0420366.
Kluvánek, Igor; Knowles, Greg (1976). Vector measures and control systems. North-Holland Mathematics Studies. Vol. 20. New York: North-Holland Publishing Co. pp. ix+180. MR 0499068.
Rolewicz, Stefan (1987). Functional analysis and control theory: Linear systems. Mathematics and its Applications (East European Series). Vol. 29 (Translated from the Polish by Ewa Bednarczuk ed.). Dordrecht; Warsaw: D. Reidel Publishing Co.; PWN—Polish Scientific Publishers. pp. xvi+524. ISBN 90-277-2186-6. MR 0920371. OCLC 13064804.
Sonneborn, L.; Van Vleck, F. (1965). "The Bang-Bang Principle for Linear Control Systems". SIAM J. Control. 2: 151–159.
In control theory, a closed-loop transfer function is a mathematical function describing the net result of the effects of a feedback control loop on the input signal to the plant under control.
== Overview ==
The closed-loop transfer function is measured at the output. The output signal can be calculated from the closed-loop transfer function and the input signal. Signals may be waveforms, images, or other types of data streams.
An example of a closed-loop block diagram, from which a transfer function may be computed, is shown below:
The summing node and the G(s) and H(s) blocks can all be combined into one block, which would have the following transfer function:
{\displaystyle {\dfrac {Y(s)}{X(s)}}={\dfrac {G(s)}{1+G(s)H(s)}}}
Here {\displaystyle G(s)} is called the feed-forward transfer function, {\displaystyle H(s)} is called the feedback transfer function, and their product {\displaystyle G(s)H(s)} is called the open-loop transfer function.
== Derivation ==
We define an intermediate signal Z (also known as the error signal) as follows:
Using this figure we write:
{\displaystyle Y(s)=G(s)Z(s)}
{\displaystyle Z(s)=X(s)-H(s)Y(s)}
Now, plug the second equation into the first to eliminate Z(s):
{\displaystyle Y(s)=G(s)[X(s)-H(s)Y(s)]}
Move all the terms with Y(s) to the left hand side, and keep the term with X(s) on the right hand side:
{\displaystyle Y(s)+G(s)H(s)Y(s)=G(s)X(s)}
Therefore,
{\displaystyle Y(s)(1+G(s)H(s))=G(s)X(s)}
{\displaystyle \Rightarrow {\dfrac {Y(s)}{X(s)}}={\dfrac {G(s)}{1+G(s)H(s)}}}
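The algebra above can be checked numerically. This sketch assumes example blocks G(s) = 1/(s+1) with unity feedback (chosen purely for illustration): it iterates the two loop equations until Y converges and compares the result with the closed-form G/(1+GH).

```python
# Sketch: check Y/X = G/(1+GH) against direct iteration of the loop
# equations Y = G(s)Z, Z = X - H(s)Y. Example blocks are assumptions.
def closed_loop(G, H, s):
    return G(s) / (1 + G(s) * H(s))

G = lambda s: 1 / (s + 1)   # example feed-forward block
H = lambda s: 1.0           # unity feedback

s, X = 2j, 1.0
Y = 0.0
# Fixed-point iteration of the loop; it contracts since |G(s)H(s)| < 1 here.
for _ in range(200):
    Y = G(s) * (X - H(s) * Y)

assert abs(Y / X - closed_loop(G, H, s)) < 1e-9
```

The iteration only converges when |G(s)H(s)| < 1 at the chosen frequency; the closed-form expression, of course, has no such restriction.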
== See also ==
Federal Standard 1037C
Open-loop controller
Control theory § Open-loop and closed-loop (feedback) control
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals.
== Definition ==
In mathematics, a linear map (or linear function) {\displaystyle f(x)} is one which satisfies both of the following properties:
Additivity or superposition principle: {\displaystyle \textstyle f(x+y)=f(x)+f(y);}
Homogeneity: {\displaystyle \textstyle f(\alpha x)=\alpha f(x).}
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
{\displaystyle f(\alpha x+\beta y)=\alpha f(x)+\beta f(y)}
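The superposition principle can be probed numerically. In this hypothetical sketch, the sample maps (pure scaling versus squaring) are invented for illustration:

```python
# Hypothetical probe of superposition: f(a*x + b*y) == a*f(x) + b*f(y).
def satisfies_superposition(f, x, y, a, b, tol=1e-9):
    return abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < tol

linear = lambda x: 3 * x        # linear: pure scaling
nonlinear = lambda x: x ** 2    # nonlinear: squaring

assert satisfies_superposition(linear, 1.0, 2.0, 0.5, -1.5)
assert not satisfies_superposition(nonlinear, 1.0, 1.0, 1.0, 1.0)
```

A single counterexample (here, (1+1)² = 4 ≠ 1² + 1² = 2) is enough to show a map is nonlinear, whereas passing the check at a few points does not prove linearity.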
An equation written as {\displaystyle f(x)=C} is called linear if {\displaystyle f(x)} is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if {\displaystyle C=0} and {\displaystyle f(x)} is a homogeneous function.
The definition {\displaystyle f(x)=C} is very general in that {\displaystyle x} can be any sensible mathematical object (number, vector, function, etc.), and the function {\displaystyle f(x)} can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If {\displaystyle f(x)} contains differentiation with respect to {\displaystyle x}, the result will be a differential equation.
== Nonlinear systems of equations ==
A nonlinear system of equations consists of a set of equations in several variables such that at least one of them is not a linear equation.
For a single equation of the form {\displaystyle f(x)=0,}
many methods have been designed; see Root-finding algorithm. In the case where f is a polynomial, one has a polynomial equation such as
{\displaystyle x^{2}+x-1=0.}
The general root-finding algorithms apply to polynomial roots, but they generally do not find all the roots; when they fail to find a root, this does not imply that there are no roots. Specific methods for polynomials allow finding all roots or the real roots; see real-root isolation.
Solving systems of polynomial equations, that is finding the common zeros of a set of several polynomials in several variables is a difficult problem for which elaborate algorithms have been designed, such as Gröbner base algorithms.
For the general case of a system of equations formed by equating several differentiable functions to zero, the main method is Newton's method and its variants. Generally these methods may provide a solution, but they do not provide any information on the number of solutions.
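As a sketch of how Newton's method extends to systems, the following solves a made-up 2×2 example (the unit circle intersected with the line y = x) with a hand-coded Jacobian and Cramer's rule. As the text notes, it finds one solution from a given start and says nothing about how many solutions exist:

```python
# Sketch: Newton's method for a 2x2 nonlinear system with an analytic
# Jacobian; the example system and starting point are invented.
def newton2(f, jac, x, y, iters=50, tol=1e-12):
    for _ in range(iters):
        fx, fy = f(x, y)
        if abs(fx) + abs(fy) < tol:
            break
        a, b, c, d = jac(x, y)  # J = [[a, b], [c, d]]
        det = a * d - b * c
        # Solve J * [dx, dy] = -[fx, fy] by Cramer's rule.
        dx = (-fx * d + fy * b) / det
        dy = (-fy * a + fx * c) / det
        x, y = x + dx, y + dy
    return x, y

# Example: x^2 + y^2 = 1 and y = x  ->  x = y = 1/sqrt(2).
f = lambda x, y: (x * x + y * y - 1, y - x)
jac = lambda x, y: (2 * x, 2 * y, -1.0, 1.0)
x, y = newton2(f, jac, 1.0, 0.5)
```

From the start (1.0, 0.5) this converges quadratically to the positive intersection; starting in the third quadrant it would instead find (-1/√2, -1/√2), the other solution.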
== Nonlinear recurrence relations ==
A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences. Nonlinear discrete models that represent a wide class of nonlinear recurrence relationships include the NARMAX (Nonlinear Autoregressive Moving Average with eXogenous inputs) model and the related nonlinear system identification and analysis procedures. These approaches can be used to study a wide class of complex nonlinear behaviors in the time, frequency, and spatio-temporal domains.
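The logistic map mentioned above is a one-line nonlinear recurrence. A minimal sketch (the parameter values are chosen for illustration: r = 2.5 settles to the fixed point 1 − 1/r, while r = 3.9 lies in the chaotic regime):

```python
# Sketch: the logistic map x_{n+1} = r * x_n * (1 - x_n).
def logistic_orbit(r, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

stable = logistic_orbit(2.5, 0.2, 200)   # converges to 1 - 1/2.5 = 0.6
chaotic = logistic_orbit(3.9, 0.2, 200)  # bounded but aperiodic
```

Despite the simplicity of the rule, the r = 3.9 orbit never settles into a periodic pattern, which is the kind of behavior these nonlinear recurrence relations are studied for.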
== Nonlinear differential equations ==
A system of differential equations is said to be nonlinear if it is not a system of linear equations. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics and the Lotka–Volterra equations in biology.
One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.
=== Ordinary differential equations ===
First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation
{\displaystyle {\frac {du}{dx}}=-u^{2}}
has {\displaystyle u={\frac {1}{x+C}}}
as a general solution (and also the special solution {\displaystyle u=0,} corresponding to the limit of the general solution when C tends to infinity). The equation is nonlinear because it may be written as
{\displaystyle {\frac {du}{dx}}+u^{2}=0}
and the left-hand side of the equation is not a linear function of {\displaystyle u} and its derivatives. Note that if the {\displaystyle u^{2}} term were replaced with {\displaystyle u}, the problem would be linear (the exponential decay problem).
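The claimed general solution can be spot-checked with a finite difference. This is an illustrative sketch, not part of the original derivation; the sample points and the constant C = 1 are arbitrary:

```python
# Sketch: verify u = 1/(x + C) against du/dx = -u^2 with a central
# finite difference at a few arbitrary sample points.
def u(x, C=1.0):
    return 1.0 / (x + C)

h = 1e-6
for x in (0.0, 0.5, 2.0):
    dudx = (u(x + h) - u(x - h)) / (2 * h)
    # du/dx + u^2 should vanish (up to finite-difference error).
    assert abs(dudx + u(x) ** 2) < 1e-6
```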
Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed-form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.
Common methods for the qualitative analysis of nonlinear ordinary differential equations include:
Examination of any conserved quantities, especially in Hamiltonian systems
Examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities
Linearization via Taylor expansion
Change of variables into something easier to study
Bifurcation theory
Perturbation methods (can be applied to algebraic equations too)
Existence of solutions of Finite-Duration, which can happen under specific conditions for some non-linear ordinary differential equations.
=== Partial differential equations ===
The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.
Another common (though less mathematical) tactic, often exploited in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier-Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.
=== Pendula ===
A classic, extensively studied nonlinear problem is the dynamics of a frictionless pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown that the motion of a pendulum can be described by the dimensionless nonlinear equation
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\sin(\theta )=0}
where gravity points "downwards" and {\displaystyle \theta } is the angle the pendulum forms with its rest position, as shown in the figure at right. One approach to "solving" this equation is to use {\displaystyle d\theta /dt} as an integrating factor, which would eventually yield
{\displaystyle \int {\frac {d\theta }{\sqrt {C_{0}+2\cos(\theta )}}}=t+C_{1}}
which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary unless {\displaystyle C_{0}=2}).
Another way to approach the problem is to linearize any nonlinearity (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at {\displaystyle \theta =0}, called the small angle approximation, is
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\theta =0}
since {\displaystyle \sin(\theta )\approx \theta } for {\displaystyle \theta \approx 0}. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at {\displaystyle \theta =\pi }, corresponding to the pendulum being straight up:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+\pi -\theta =0}
since {\displaystyle \sin(\theta )\approx \pi -\theta } for {\displaystyle \theta \approx \pi }. The solution to this problem involves hyperbolic sinusoids, and note that unlike the small angle approximation, this approximation is unstable, meaning that {\displaystyle |\theta |} will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
One more interesting linearization is possible around {\displaystyle \theta =\pi /2}, around which {\displaystyle \sin(\theta )\approx 1}:
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+1=0.}
This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations, as seen in the figure at right. Other techniques may be used to find (exact) phase portraits and approximate periods.
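The relationship between the full nonlinear pendulum and its small-angle linearization can also be illustrated numerically. The sketch below uses a hand-rolled RK4 integrator; the step size and release angles are arbitrary choices made for illustration:

```python
import math

# Sketch: integrate theta'' + sin(theta) = 0 with RK4 (from rest at theta0)
# and compare against the small-angle solution theta(t) = theta0 * cos(t).
def pendulum(theta0, t_end, dt=1e-3):
    theta, omega, t = theta0, 0.0, 0.0

    def deriv(th, om):
        return om, -math.sin(th)

    while t < t_end:
        k1 = deriv(theta, omega)
        k2 = deriv(theta + dt / 2 * k1[0], omega + dt / 2 * k1[1])
        k3 = deriv(theta + dt / 2 * k2[0], omega + dt / 2 * k2[1])
        k4 = deriv(theta + dt * k3[0], omega + dt * k3[1])
        theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        omega += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return theta

# Small release angle: nonlinear and small-angle solutions agree closely.
assert abs(pendulum(0.05, 1.0) - 0.05 * math.cos(1.0)) < 1e-3
# Large release angle (near the top): the small-angle formula fails badly.
assert abs(pendulum(3.0, 1.0) - 3.0 * math.cos(1.0)) > 0.5
```

Near the top, the true pendulum lingers (the restoring torque sin θ is small there), while the linearized formula predicts a rapid swing; this is the instability discussed above.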
== Types of nonlinear dynamic behaviors ==
Amplitude death – any oscillations present in the system cease due to some kind of interaction with other system or feedback by the same system
Chaos – values of a system cannot be predicted indefinitely far into the future, and fluctuations are aperiodic
Multistability – the presence of two or more stable states
Solitons – self-reinforcing solitary waves
Limit cycles – asymptotic periodic orbits to which destabilized fixed points are attracted.
Self-oscillations – feedback oscillations taking place in open dissipative physical systems.
== Examples of nonlinear equations ==
== See also ==
== References ==
== Further reading ==
== External links ==
Command and Control Research Program (CCRP)
New England Complex Systems Institute: Concepts in Complex Systems
Nonlinear Dynamics I: Chaos at MIT's OpenCourseWare
Nonlinear Model Library – (in MATLAB) a Database of Physical Systems
The Center for Nonlinear Studies at Los Alamos National Laboratory
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat to control a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed via the control engineering process.
For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint.
For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.
== Open-loop and closed-loop control ==
== Feedback control systems ==
== Logic control ==
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs.
Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine.
PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) or statement lists.
== On–off control ==
On–off control uses a feedback controller that switches abruptly between two states. A simple bi-metallic domestic thermostat can be described as an on-off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor. When the pressure (PV) drops below the setpoint (SP) the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective.
== Linear control ==
== Fuzzy logic ==
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true.
The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are fuzzified, the control logic is calculated arithmetically (as opposed to with Boolean logic), and the outputs are de-fuzzified to control equipment.
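A minimal sketch of this fuzzify-compute-defuzzify pipeline for the furnace example above; the membership functions, temperature thresholds, and rule weights are all invented for illustration:

```python
# Sketch of fuzzy control; membership functions are made up.
def too_low(t):   # membership in "temperature is too low", in [0, 1]
    return max(0.0, min(1.0, (180.0 - t) / 40.0))

def too_high(t):  # membership in "temperature is too high", in [0, 1]
    return max(0.0, min(1.0, (t - 220.0) / 40.0))

def fuel_change(t):
    # Rule 1: if too low, increase fuel; Rule 2: if too high, reduce it.
    # De-fuzzify as a weighted sum of the rule outputs (+1 / -1).
    return 1.0 * too_low(t) - 1.0 * too_high(t)

assert fuel_change(150) > 0    # cold furnace: add fuel
assert fuel_change(250) < 0    # hot furnace: cut fuel
assert fuel_change(200) == 0   # comfortable band: no change
```

Note that a measurement such as 170 degrees is "partly" too low (membership 0.25), so the fuel adjustment is graded rather than all-or-nothing — the key difference from two-valued logic.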
When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution and it might appear that the fuzzy design was unnecessary. However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive.
Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics.
== Physical implementation ==
The range of control system implementation is from compact controllers often with dedicated software for a particular machine or device, to distributed control systems for industrial process control for a large physical plant.
Logic systems and feedback controllers are usually implemented with programmable logic controllers. The Broadly Reconfigurable and Expandable Automation Device (BREAD) is a recent framework that provides many open-source hardware devices which can be connected to create more complex data acquisition and control systems.
== See also ==
== References ==
== External links ==
SystemControl Create, simulate or HWIL control loops with Python. Includes Kalman filter, LQG control among others.
Semiautonomous Flight Direction - Reference unmannedaircraft.org
Control System Toolbox for design and analysis of control systems.
Control Systems Manufacturer Design and Manufacture of control systems.
Mathematica functions for the analysis, design, and simulation of control systems
Python Control System (PyConSys) Create and simulate control loops with Python. AI for setting PID parameters.
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.
== Overview ==
A human-built system with complex behavior is often organized as a hierarchy. For example, a command hierarchy has among its notable features the organizational chart of superiors, subordinates, and lines of organizational communication. Hierarchical control systems are organized similarly to divide the decision making responsibility.
Each element of the hierarchy is a linked node in the tree. Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes, whereas sensations and command results flow up the tree from subordinate to superior nodes. Nodes may also exchange messages with their siblings. The two distinguishing features of a hierarchical control system are related to its layers.
Each higher layer of the tree operates with a longer interval of planning and execution time than its immediately lower layer.
The lower layers have local tasks, goals, and sensations, and their activities are planned and coordinated by higher layers which do not generally override their decisions. The layers form a hybrid intelligent system in which the lowest, reactive layers are sub-symbolic. The higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performing planning. A hierarchical task network is a good fit for planning in a hierarchical control system.
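The command-down/sensation-up pattern described above can be sketched as a toy tree of control nodes. The class, method, and node names here are invented for illustration; real systems such as RCS are far richer:

```python
# Toy hierarchical control node: commands flow down, sensations flow up.
class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def command(self, task):
        # A leaf executes its local task directly and reports a sensation.
        if not self.children:
            return [f"{self.name}:done({task})"]
        # A superior decomposes the task and delegates to subordinates...
        results = []
        for i, child in enumerate(self.children):
            results += child.command(f"{task}.{i}")
        # ...then abstracts their reports before passing them up a layer.
        return [f"{self.name}:summary({len(results)} reports)"]

root = Node("plant", [Node("cell", [Node("robot"), Node("conveyor")]),
                      Node("packer")])
report = root.command("make-batch")
```

Each layer sees only an abstraction of the layer below it: the plant node receives one summary from the cell, not the individual robot and conveyor reports.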
Besides artificial systems, an animal's control systems are proposed to be organized as a hierarchy. In perceptual control theory, which postulates that an organism's behavior is a means of controlling its perceptions, the organism's control systems are suggested to be organized in a hierarchical pattern as their perceptions are constructed so.
== Control system structure ==
The accompanying diagram is a general hierarchical model which shows functional manufacturing levels using computerised control of an industrial control system.
Referring to the diagram;
Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves
Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors.
Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens.
Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets
Level 4 is the production scheduling level.
== Applications ==
=== Manufacturing, robotics and vehicles ===
Among the robotic paradigms is the hierarchical paradigm in which a robot operates in a top-down fashion, heavy on planning, especially motion planning. Computer-aided production engineering has been a research focus at NIST since the 1980s. Its Automated Manufacturing Research Facility was used to develop a five layer production control model. In the early 1990s DARPA sponsored research to develop distributed (i.e. networked) intelligent control systems for applications such as military command and control systems. NIST built on earlier research to develop its Real-Time Control System (RCS) and Real-time Control System Software which is a generic hierarchical control system that has been used to operate a manufacturing cell, a robot crane, and an automated vehicle.
In November 2007, DARPA held the Urban Challenge. The winning entry, Tartan Racing employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modelling, and mechatronics.
=== Artificial intelligence ===
Subsumption architecture is a methodology for developing artificial intelligence that is heavily associated with behavior based robotics. This architecture is a way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers. Each layer implements a particular goal of the software agent (i.e. system as a whole), and higher layers are increasingly more abstract. Each layer's goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer. Behavior need not be planned by a superior layer, rather behaviors may be triggered by sensory inputs and so are only active under circumstances where they might be appropriate.
Reinforcement learning has been used to acquire behavior in a hierarchical control system in which each node can learn to improve its behavior with experience.
James Albus, while at NIST, developed a theory for intelligent system design named the Reference Model Architecture (RMA), which is a hierarchical control system inspired by RCS. Albus defines each node to contain these components.
Behavior generation is responsible for executing tasks received from the superior, parent node. It also plans for, and issues tasks to, the subordinate nodes.
Sensory perception is responsible for receiving sensations from the subordinate nodes, then grouping, filtering, and otherwise processing them into higher level abstractions that update the local state and which form sensations that are sent to the superior node.
Value judgment is responsible for evaluating the updated situation and evaluating alternative plans.
World Model is the local state that provides a model for the controlled system, controlled process, or environment at the abstraction level of the subordinate nodes.
At its lowest levels, the RMA can be implemented as a subsumption architecture, in which the world model is mapped directly to the controlled process or real world, avoiding the need for a mathematical abstraction, and in which time-constrained reactive planning can be implemented as a finite-state machine. Higher levels of the RMA however, may have sophisticated mathematical world models and behavior implemented by automated planning and scheduling. Planning is required when certain behaviors cannot be triggered by current sensations, but rather by predicted or anticipated sensations, especially those that come about as result of the node's actions.
== See also ==
Command hierarchy, a hierarchical power structure
Hierarchical organization, a hierarchical organizational structure
== References ==
== Further reading ==
Albus, J.S. (1996). "The Engineering of Mind". From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior. MIT Press.
Albus, J.S. (2000). "4-D/RCS reference model architecture for unmanned ground vehicles". Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on. Vol. 4. doi:10.1109/ROBOT.2000.845165.
Findeisen, W.; Others (1980). Control and coordination in hierarchical systems. Chichester [Eng.]; New York: J. Wiley.
Hayes-roth, F.; Erman, L.; Terry, A. (1992). "Distributed intelligent control and management(DICAM) applications and support for semi-automated development". NASA. Ames Research Center, Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design P 66-70 (SEE N 93-17499 05-61). Retrieved 2008-05-11.
Jones, A.T.; McLean, C.R. (1986). "A Proposed Hierarchical Control Model for Automated Manufacturing Systems". Journal of Manufacturing Systems. 5 (1): 15–25. CiteSeerX 10.1.1.79.6980. doi:10.1016/0278-6125(86)90064-6. Archived from the original on December 12, 2012. Retrieved 2008-05-11.
== External links ==
The RCS (Realtime Control System) Library
Texai An open source project to create artificial intelligence using an Albus hierarchical control system
In linguistics, control is a construction in which the understood subject of a given predicate is determined by some expression in context. Stereotypical instances of control involve verbs. A superordinate verb "controls" the arguments of a subordinate, nonfinite verb. Control was intensively studied in the government and binding framework in the 1980s, and much of the terminology from that era is still used today. In the days of Transformational Grammar, control phenomena were discussed in terms of Equi-NP deletion. Control is often analyzed in terms of a null pronoun called PRO. Control is also related to raising, although there are important differences between control and raising.
== Examples ==
Standard instances of (obligatory) control are present in the following sentences:
Susan promised to help us. - Subject control with the obligatory control predicate promise
Fred stopped laughing. - Subject control with the obligatory control predicate stop
We tried to leave. - Subject control with the obligatory control predicate try
Sue asked Bill to stop. - Object control with the obligatory control predicate ask
They told you to support the effort. - Object control with the obligatory control predicate tell
Someone forced him to do it. - Object control with the obligatory control predicate force
Each of these sentences contains two verbal predicates. Each time the control verb is on the left, and the verb whose arguments are controlled is on the right. The control verb determines which expression is interpreted as the subject of the verb on the right. The first three sentences are examples of subject control, since the subject of the control verb is also the understood subject of the subordinate verb. The second three examples are instances of object control, because the object of the control verb is understood as the subject of the subordinate verb. The argument of the matrix predicate that functions as the subject of the embedded predicate is the controller. The controllers are in bold in the examples.
== Control verbs vs. auxiliary verbs ==
Control verbs have semantic content; they semantically select their arguments, that is, their appearance strongly influences the nature of the arguments they take. In this regard, they are very different from auxiliary verbs, which lack semantic content and do not semantically select arguments. Compare the following pairs of sentences:
a. Sam will go. - will is an auxiliary verb.
b. Sam yearns to go. - yearns is a subject control verb.
a. Jim has to do it. - has to is a modal auxiliary verb.
b. Jim refuses to do it. - refuses is a subject control verb.
a. Jill would lie and cheat. - would is a modal auxiliary.
b. Jill attempted to lie and cheat. - attempted is a subject control verb.
The a-sentences contain auxiliary verbs that do not select the subject argument. What this means is that the embedded verbs go, do, and lie and cheat are responsible for semantically selecting the subject argument. The point is that while control verbs may have the same outward appearance as auxiliary verbs, the two verb types are quite different.
== Non-obligatory or optional control ==
Control verbs (such as promise, stop, try, ask, tell, force, yearn, refuse, attempt) obligatorily induce a control construction. That is, when control verbs appear, they inherently determine which of their arguments controls the embedded predicate. Control is hence obligatorily present with these verbs. In contrast, the arguments of many verbs can be controlled even when a superordinate control verb is absent, e.g.
He left, singing all the way. - Non-obligatory control of the present participle singing
Understanding nothing, the class protested. - Non-obligatory control of the present participle understanding
Holding his breath too long, Fred passed out. - Non-obligatory control of the present participle holding
In one sense, control is obligatory in these sentences because the arguments of the present participles singing, understanding, and holding are clearly controlled by the matrix subjects. In another sense, however, control is non-obligatory (or optional) because there is no control predicate present that necessitates that control occur. General contextual factors determine which expression is understood as the controller. The controller is the subject in these sentences because the subject establishes point of view.
Some researchers have begun to use the term "obligatory control" to mean simply that there is a grammatical dependency between the controlled subject and its controller, even if that dependency is not strictly required. "Non-obligatory control", on the other hand, may be used to mean that no grammatical dependency is involved. Both "obligatory control" and "non-obligatory control" can be present in a single sentence. The following example can mean either that the pool had been in the hot sun all day (so it was nice and warm), in which case there is a syntactic dependency between the pool and being, or that the speaker was in the hot sun all day (so the pool is nice and cool), in which case there is no grammatical dependency between being and the understood controller (the speaker). In such non-obligatory control sentences, the understood controller appears to need to be either a perspective holder in the discourse or an established topic.
The pool was the perfect temperature after being in the hot sun all day.
== Arbitrary control ==
Arbitrary control occurs when the controller is understood to be anybody in general, e.g.
Reading the Dead Sea Scrolls is fun. - Arbitrary control of the gerund reading.
Seeing is believing. - Arbitrary control of the gerunds seeing and believing
Having to do something repeatedly is boring. - Arbitrary control of the gerund having
The understood subject of the gerunds in these sentences is indiscriminate; any generic person will do. In such cases, control is said to be "arbitrary". Any time the understood subject of a given predicate is not present in the linguistic or situational context, a generic subject (e.g. 'one') is understood.
== Representing control ==
Theoretical linguistics posits the existence of the null pronoun PRO as the theoretical basis for the analysis of control structures. The null pronoun PRO is an element that impacts a sentence in a similar manner to how a normal pronoun impacts a sentence, but the null pronoun is inaudible. The null PRO is added to the predicate, where it occupies the position that one would typically associate with an overt subject (if one were present). The following trees illustrate PRO in both constituency-based structures of phrase structure grammars and dependency-based structures of dependency grammars:
The constituency-based trees are the a-trees on the left, and the dependency-based trees the b-trees on the right. Certainly aspects of these trees - especially of the constituency trees - can be disputed. In the current context, the trees are intended merely to suggest by way of illustration how control and PRO are conceived of. The indices are a common means of identifying PRO with its antecedent in the control predicate, and the orange arrows further indicate the control relation. In a sense, the controller assigns its index to PRO, which identifies the argument that is understood as the subject of the subordinate predicate.
A (constituency-based) X-bar theoretic tree that is consistent with the standard GB-type analysis is given next:
The details of this tree are, again, not so important. What is important is that by positing the existence of the null subject PRO, the theoretical analysis of control constructions gains a useful tool that can help uncover important traits of control constructions.
== Control vs. raising ==
Control must be distinguished from raising, though the two can be outwardly similar. Control predicates semantically select their arguments, as stated above. Raising predicates, in contrast, do not semantically select (at least) one of their dependents. The contrast is evident with the so-called raising-to-object verbs (=ECM-verbs) such as believe, expect, want, and prove. Compare the following a- and b-sentences:
a. Fred asked you to read it. - asked is an object control verb.
b. Fred expects you to read it. - expects is a raising-to-object verb.
a. Jim forced her to say it. - forced is an object control verb.
b. Jim believed her to have said it. - believed is a raising-to-object verb.
The control predicates ask and force semantically select their object arguments, whereas the raising-to-object verbs do not. Instead, the object of the raising verb appears to have "risen" from the subject position of the embedded predicate, in this case from the embedded predicates to read and to have said. In other words, the embedded predicate semantically selects the argument of the matrix predicate. What this means is that while a raising-to-object verb takes an object dependent, that dependent is not a semantic argument of that raising verb. The distinction becomes apparent when one considers that a control predicate like ask requires its object to be an animate entity, whereas a raising-to-object predicate like expect places no semantic limitations on its object dependent.
=== Diagnostic tests ===
==== Expletives ====
The different predicate types can be identified using expletive there. Expletive there can appear as the "object" of a raising-to-object predicate, but not of a control verb, e.g.
a. *Fred asked there to be a party. - Expletive there cannot appear as the object of a control predicate.
b. Fred expects there to be a party. - Expletive there can appear as the object of a raising-to-object predicate.
a. *Jim forced there to be a party. - Expletive there cannot appear as the object of a control predicate.
b. Jim believes there to have been a party. - Expletive there can appear as the object of a raising-to-object predicate.
The control predicates cannot take expletive there because there does not fulfill the semantic requirements of the control predicates. Since the raising-to-object predicates do not select their objects, they can easily take expletive there.
==== Idioms ====
Control and raising also differ in how they behave with idiomatic expressions. Idiomatic expressions retain their meaning in a raising construction, but they lose it when they are arguments of a control verb. See the examples below featuring the idiom "The cat is out of the bag", which means that previously hidden facts have been revealed.
a. The cat wants to be out of the bag. - There is no possible idiomatic interpretation in the control construction.
b. The cat seems to be out of the bag. - The idiomatic interpretation is retained in the raising construction.
The explanation for this fact is that raising predicates do not semantically select their arguments, so those arguments are not interpreted compositionally as the subject or object of the raising predicate. Arguments of a control predicate, on the other hand, must fulfill its semantic requirements and are interpreted compositionally as arguments of that predicate.
This test works for object control and ECM too.
a. I asked the cat to be out of the bag. - There is no possible idiomatic interpretation in the control construction.
b. I believe the cat to be out of the bag. - The idiomatic interpretation is retained in the raising construction.
== Notes ==
== See also ==
Dependency grammar
Phrase structure grammar
Predicate
Argument
Raising
Catenative verb
== References ==
Bach, E. 1974. Syntactic theory. New York: Holt, Rinehart and Winston, Inc.
Borsley, R. 1996. Modern phrase structure grammar. Cambridge, MA: Blackwell Publishers.
Carnie, A. 2007. Syntax: A generative introduction, 2nd edition. Malden, MA: Blackwell Publishing.
Cowper, E. 2009. A concise introduction to syntactic theory: The government-binding approach. Chicago: The University of Chicago Press.
Culicover, P. 1982. Syntax, 2nd edition. New York: Academic Press.
Culicover, P. 1997. Principles and Parameters: An introduction to syntactic theory. Oxford University Press.
Davies, William D., and Stanley Dubinsky. 2008. The grammar of raising and control: A course in syntactic argumentation. John Wiley & Sons.
Emonds, J. 1976. A transformational approach to English syntax: Root, structure-preserving, and local transformations. New York: Academic Press.
Grinder, J. and S. Elgin. 1973. Guide to transformational grammar: History, theory, and practice. New York: Holt, Rinehart, and Winston, Inc.
Haegeman, L. 1994. Introduction to government and binding theory, 2nd edition. Oxford, UK: Blackwell.
Lasnik, H. and M. Saito. 1999. On the subject of infinitives. In H. Lasnik, Minimalist analysis, 7-24. Malden, MA: Blackwell.
McCawley, T. 1988. The syntactic phenomena of English, Vol. 1. Chicago: The University of Chicago Press.
Osborne, T. and T. Groß 2012. Constructions are catenae: Construction Grammar meets Dependency Grammar. Cognitive Linguistics 23, 1, 163-214.
van Riemsdijk, H. and E. Williams. 1986. Introduction to the theory of grammar. Cambridge, MA: The MIT Press.
Rosenbaum, Peter. 1967. The grammar of English predicate complement constructions. Cambridge, MA: MIT Press.
== External links ==
List of English control verbs at Wiktionary
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks.
A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
== Training ==
Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
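The training procedure described above can be sketched for a one-neuron linear model; the dataset, learning rate, and iteration count below are illustrative choices, not prescribed values:

```python
import numpy as np

# Toy dataset: learn y = 2x + 1 from noisy samples (all values here are illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X + 1.0 + 0.01 * rng.normal(size=(100, 1))

# Parameters of a one-neuron linear model.
w, b = 0.0, 0.0
lr = 0.1  # learning rate (a hyperparameter)

for epoch in range(500):
    pred = w * X + b              # forward pass
    err = pred - y
    loss = np.mean(err ** 2)      # empirical risk (mean squared error)
    grad_w = 2 * np.mean(err * X) # gradient of the loss w.r.t. w
    grad_b = 2 * np.mean(err)     # gradient of the loss w.r.t. b
    w -= lr * grad_w              # gradient descent update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```

Minimizing the loss on the training set is exactly the empirical risk minimization described above; generalization is then assessed on data held out from training.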
== History ==
=== Early work ===
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared error between these calculated outputs and the given target values is minimized by adjusting the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.
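The least-squares fit underlying a linear network can be computed in closed form; the sample points below are illustrative:

```python
import numpy as np

# Fit a line to points by least squares, the closed-form ancestor of linear networks.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # exactly linear data, for clarity

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # minimizes ||A @ coeffs - y||^2
slope, intercept = coeffs
print(slope, intercept)  # 2.0, 1.0 (up to floating-point error)
```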
Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research.
R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
=== Deep learning breakthroughs in the 1960s and 1970s ===
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976, transfer learning was introduced in neural network learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
=== Backpropagation ===
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
=== Convolutional neural networks ===
Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.
=== Recurrent neural networks ===
One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.
In 1982, a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), the Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequent situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist carried out a debate in the early 1980s on the relation between cognition and emotion. Zajonc (1980) stated that emotion is computed first and is independent of cognition, while Lazarus (1982) stated that cognition is computed first and is inseparable from emotion. In 1982, the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation. It was an example of a debate in which an AI system, a recurrent neural network, contributed to an issue that cognitive psychology was addressing at the same time.
Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which required the forget gate, introduced in 1999. It became the default choice for RNN architecture.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
=== Deep learning ===
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber, who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need.
The Transformer requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
== Models ==
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links, like a biological axon-synapse-dendrite connection. Nodes receive data over their links and use it to perform specific operations and tasks. Each link has a weight, which determines the strength of one node's influence on another and thereby modulates the signal passed between neurons.
=== Artificial neurons ===
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of a neuron, we take the weighted sum of all its inputs, weighted by the weights of the connections from the inputs to the neuron, and add a bias term to this sum. This weighted sum is sometimes called the activation. It is then passed through a (usually nonlinear) activation function to produce the output.
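The neuron computation just described can be sketched as follows; the input values, weights, bias, and the choice of ReLU as the activation function are all illustrative:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through an activation function."""
    activation = np.dot(weights, inputs) + bias  # the weighted sum ("activation")
    return max(0.0, activation)                  # ReLU activation function

out = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.4, 0.3, 0.2]), bias=0.1)
print(out)  # ≈ 0.4, i.e. ReLU(0.2 - 0.3 + 0.4 + 0.1)
```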
=== Organization ===
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
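A minimal fully connected feedforward pass through one hidden layer might look like this; the layer sizes and the tanh activation are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes: 3 inputs -> 4 hidden units -> 2 outputs (sizes chosen arbitrarily).
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    h = np.tanh(W1 @ x + b1)  # hidden layer with tanh activation
    return W2 @ h + b2        # linear output layer

y = forward(np.array([1.0, 0.5, -0.5]))
print(y.shape)  # (2,)
```

Because every neuron in one layer feeds every neuron in the next and no connection points backward, this is a feedforward network; adding connections back to earlier layers would make it recurrent.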
=== Hyperparameter ===
A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
=== Learning ===
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
==== Learning rate ====
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
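The momentum update described above can be sketched on a one-dimensional quadratic cost; the learning rate and momentum values are illustrative:

```python
# Gradient descent with momentum: each update blends the current gradient
# with the previous change (hyperparameter values are illustrative).
def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    velocity = momentum * velocity - lr * grad  # previous change weighted by momentum
    return w + velocity, velocity

w, v = 1.0, 0.0
for _ in range(200):
    grad = 2 * w             # gradient of the cost w^2
    w, v = momentum_step(w, grad, v)
print(w)  # very close to the minimum at 0
```

With momentum near 0 the update is dominated by the current gradient; with momentum near 1 it is dominated by the accumulated previous changes, which damps the oscillation mentioned above.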
==== Cost function ====
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).
==== Backpropagation ====
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
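For a single sigmoid neuron with squared-error cost, the gradient computation that backpropagation performs reduces to the chain rule. The sketch below is illustrative (not the article's code) and checks the analytic gradient against a numerical derivative:

```python
import math

# Backpropagation sketch for one sigmoid neuron with cost C = (y - t)**2.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, b, x):
    return sigmoid(w * x + b)

def backprop_grads(w, b, x, t):
    """Chain rule: dC/dw = dC/dy * dy/dz * dz/dw (and similarly for b)."""
    y = forward(w, b, x)
    dC_dy = 2.0 * (y - t)
    dy_dz = y * (1.0 - y)            # derivative of the sigmoid
    return dC_dy * dy_dz * x, dC_dy * dy_dz

# Verify against a central-difference numerical derivative.
w, b, x, t = 0.5, -0.2, 1.5, 1.0
gw, gb = backprop_grads(w, b, x, t)
eps = 1e-6
num_gw = ((forward(w + eps, b, x) - t) ** 2
          - (forward(w - eps, b, x) - t) ** 2) / (2 * eps)
assert abs(gw - num_gw) < 1e-6
```

Real networks chain this computation layer by layer, propagating the error term backward from the output.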
=== Learning paradigms ===
Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning, and reinforcement learning. Each corresponds to a particular learning task.
==== Supervised learning ====
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
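The mean-squared-error cost mentioned above can be written directly; the prediction and target values here are illustrative:

```python
# Mean squared error between network outputs and desired outputs.
def mse(outputs, targets):
    return sum((o - t) ** 2 for o, t in zip(outputs, targets)) / len(targets)

predictions = [0.9, 0.2, 0.8]
targets = [1.0, 0.0, 1.0]
cost = mse(predictions, targets)   # average of the squared differences
```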
==== Unsupervised learning ====
In unsupervised learning, input data is given along with the cost function, some function of the data {\displaystyle \textstyle x} and the network's output. The cost function depends on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model {\displaystyle \textstyle f(x)=a} where {\displaystyle \textstyle a} is a constant and the cost {\displaystyle \textstyle C=E[(x-f(x))^{2}]}. Minimizing this cost produces a value of {\displaystyle \textstyle a} that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between {\displaystyle \textstyle x} and {\displaystyle \textstyle f(x)}, whereas in statistical modeling it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
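The trivial example above can be verified numerically: gradient descent on the cost drives the constant toward the mean of the data. The data values and step size below are illustrative:

```python
# With model f(x) = a and cost C = E[(x - a)**2], dC/da = -2 * E[x - a];
# descending this gradient converges to the mean of the data.
data = [2.0, 4.0, 9.0]
a = 0.0
for _ in range(1000):
    grad = -2.0 * sum(x - a for x in data) / len(data)
    a -= 0.1 * grad
# a is now (numerically) the mean of the data, 5.0.
```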
==== Reinforcement learning ====
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally, the environment is modeled as a Markov decision process (MDP) with states {\displaystyle \textstyle s_{1},...,s_{n}\in S} and actions {\displaystyle \textstyle a_{1},...,a_{m}\in A}. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution {\displaystyle \textstyle P(c_{t}|s_{t})}, the observation distribution {\displaystyle \textstyle P(x_{t}|s_{t})} and the transition distribution {\displaystyle \textstyle P(s_{t+1}|s_{t},a_{t})}, while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
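When the transition and cost structure are known, the lowest-cost behavior can be found by dynamic programming. The sketch below uses value iteration on a deterministic two-state example; all states, actions and costs are illustrative, and the article's setting has unknown distributions that must instead be estimated from interaction:

```python
# Value iteration: repeatedly back up the minimal expected discounted cost.
states, actions = [0, 1], ["stay", "move"]
cost = {(0, "stay"): 1.0, (0, "move"): 2.0, (1, "stay"): 0.0, (1, "move"): 2.0}
next_state = {(0, "stay"): 0, (0, "move"): 1, (1, "stay"): 1, (1, "move"): 0}
gamma = 0.9   # discount factor for long-term cost

V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: min(cost[s, a] + gamma * V[next_state[s, a]] for a in actions)
         for s in states}

# Greedy policy with respect to the converged values.
policy = {s: min(actions, key=lambda a: cost[s, a] + gamma * V[next_state[s, a]])
          for s in states}
# From state 0 it is cheaper to pay 2 once ("move") and reach the cost-free
# state 1 than to keep paying 1 forever ("stay").
```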
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as vehicle routing, video games, natural resource management and medicine, because of ANNs' ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision-making tasks.
==== Self-learning ====
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix W = ||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment, where it behaves, and the other is the genetic environment, from which it initially (and only once) receives emotions about situations to be encountered in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA learns goal-seeking behavior in a behavioral environment that contains both desirable and undesirable situations.
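The four-step crossbar loop can be sketched in a few lines. The environment transition and the genome-derived emotion values v(s) below are hypothetical stand-ins, not from the original CAA work:

```python
# Crossbar adaptive array (CAA) sketch: one situation input, one action output.
n_actions, n_states = 2, 3
W = [[0.0] * n_states for _ in range(n_actions)]   # memory matrix w(a, s)

v = {0: 0.0, 1: -1.0, 2: 1.0}   # genome vector: initial emotions per situation

def transition(s, a):            # hypothetical behavioral environment
    return (s + a + 1) % n_states

s = 0
for _ in range(50):
    a = max(range(n_actions), key=lambda act: W[act][s])  # act in situation s
    s_next = transition(s, a)                             # consequence s'
    W[a][s] += v[s_next]        # crossbar update: w'(a,s) = w(a,s) + v(s')
    s = s_next
# The agent learns to choose, in situation 0, the action leading to the
# desirable situation 2 rather than the undesirable situation 1.
```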
==== Neuroevolution ====
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
=== Stochastic neural network ===
Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
=== Topological deep learning ===
Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.
=== Other ===
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
==== Modes ====
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
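The two modes, and the mini-batch compromise, can be sketched on a toy least-squares model. The data, model and step size are illustrative:

```python
import random

# Stochastic vs. mini-batch learning on the toy model y = w * x.
# The data are noiseless with true weight 3, so both modes recover it.
data = [(float(x), 3.0 * float(x)) for x in range(1, 11)]

def grad(w, batch):
    # Gradient of the batch-averaged squared error (w*x - y)**2.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(w, batch_size, lr=0.002, epochs=200):
    for _ in range(epochs):
        random.shuffle(data)                       # stochastic sample order
        for i in range(0, len(data), batch_size):
            w -= lr * grad(w, data[i:i + batch_size])
    return w

w_stochastic = train(0.0, batch_size=1)   # one "noisy" update per example
w_minibatch = train(0.0, batch_size=5)    # the common compromise
```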
== Types ==
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data; long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that mix low- and high-frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads;
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
== Network design ==
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.). Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides functions that help with building a deep network from scratch; a deep network can then be implemented with TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as the number of neurons in each layer, the learning rate, step, stride, depth, receptive field and padding (for CNNs). A training function typically takes the training dataset, the number of hidden-layer units, the learning rate, and the number of iterations as parameters.
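A sketch of what such a training function might look like, for a one-input network with a single hidden layer trained by gradient descent on squared error. All names and details here are illustrative, not taken from the article or any library:

```python
import math
import random

def train(dataset, n_hidden, learning_rate, n_iterations):
    """Train a one-input, one-hidden-layer network on (x, target) pairs."""
    random.seed(0)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    w1 = [random.uniform(-1, 1) for _ in range(n_hidden)]  # input -> hidden
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]  # hidden -> output
    b2 = 0.0
    for _ in range(n_iterations):
        for x, t in dataset:
            h = [sig(w1[i] * x + b1[i]) for i in range(n_hidden)]
            y = sum(w2[i] * h[i] for i in range(n_hidden)) + b2
            d = 2.0 * (y - t)                   # dC/dy for C = (y - t)**2
            for i in range(n_hidden):
                dh = d * w2[i] * h[i] * (1.0 - h[i])   # backprop to hidden
                w2[i] -= learning_rate * d * h[i]
                w1[i] -= learning_rate * dh * x
                b1[i] -= learning_rate * dh
            b2 -= learning_rate * d
    return w1, b1, w2, b2

def predict(params, x):
    w1, b1, w2, b2 = params
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    h = [sig(w1[i] * x + b1[i]) for i in range(len(w1))]
    return sum(w2[i] * h[i] for i in range(len(w1))) + b2

params = train([(0.0, 0.0), (1.0, 1.0)], n_hidden=4,
               learning_rate=0.05, n_iterations=3000)
```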
== Applications ==
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling)
Data processing (including filtering, clustering, blind source separation, and compression)
Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management)
Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction, object recognition, and sequential decision making)
Sequence recognition (including gesture, speech, and handwritten and printed text recognition)
Sensor data analysis (including image analysis)
Robotics (including directing manipulators and prostheses)
Data mining (including knowledge discovery in databases)
Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets)
Quantum chemistry
General game playing
Generative AI
Data visualization
Machine translation
Social network filtering
E-mail spam filtering
Medical diagnosis
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs can also help mitigate flooding when used to model rainfall-runoff. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate activities and malicious ones. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, the dynamics of neural circuitry arising from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.
== Theoretical properties ==
=== Computational power ===
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
=== Capacity ===
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are known to the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory and finds the maximum capacity under the best possible circumstances, that is, given input data in a specific form. The VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.
=== Convergence ===
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not guarantee to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may cross a saddle point, which can lead convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. As the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
=== Generalization and statistics ===
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters.
Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly corresponds to the error over the training set and the predicted error in unseen data due to overfitting.
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
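The variance-estimate idea above can be sketched with illustrative validation data, using 1.96 as the usual 95% quantile of the normal distribution:

```python
import math

# Validation MSE as a variance estimate for a normal-error confidence interval.
val_outputs = [1.1, 1.9, 3.2, 3.9]   # illustrative network outputs
val_targets = [1.0, 2.0, 3.0, 4.0]   # corresponding desired outputs

mse = sum((o - t) ** 2 for o, t in zip(val_outputs, val_targets)) / len(val_targets)
sigma = math.sqrt(mse)               # estimated output standard deviation

prediction = 2.5
interval = (prediction - 1.96 * sigma, prediction + 1.96 * sigma)
# A 95% confidence interval around the network's output, assuming the output
# errors are normally distributed with the same variance as on validation.
```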
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is:
{\displaystyle y_{i}={\frac {e^{x_{i}}}{\sum _{j=1}^{c}e^{x_{j}}}}}
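The formula translates directly into code (the max-shift below is the standard numerical-stability trick and does not change the result):

```python
import math

def softmax(x):
    m = max(x)                               # shift for numerical stability
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]         # y_i = e^{x_i} / sum_j e^{x_j}

probs = softmax([2.0, 1.0, 0.1])
# The outputs sum to 1 and can be read as posterior class probabilities.
```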
== Criticism ==
=== Training ===
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.
Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take overly large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.
Dean Pomerleau uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research is devoted to extrapolating multiple training scenarios from a single training experience, and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns—it should not learn to always turn right).
=== Theory ===
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a
something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything. One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
=== Hardware ===
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormous CPU power and time.
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
=== Practical counterexamples ===
Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.
=== Hybrid approaches ===
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.
=== Dataset bias ===
Neural networks are dependent on the quality of the data they are trained on, thus low quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
== Gallery ==
== Recent advancements and future directions ==
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.
=== Image processing ===
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.
=== Speech recognition ===
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.
=== Natural language processing ===
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
=== Control systems ===
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
=== Finance ===
ANNs are used for stock market prediction and credit scoring:
In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.
=== Medicine ===
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
=== Content creation ===
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where non-player characters (NPCs) can make decisions based on all the characters currently in the game.
== See also ==
== References ==
== Bibliography ==
== External links ==
A Brief Introduction to Neural Networks (D. Kriesel) – Illustrated, bilingual manuscript about artificial neural networks; Topics so far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks, Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science Archived 7 June 2015 at the Wayback Machine
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)
Another introduction to ANN
Next Generation of Neural Networks Archived 24 January 2011 at the Wayback Machine – Google Tech Talks
Performance of Neural Networks
Neural Networks and Information Archived 9 July 2009 at the Wayback Machine
Sanderson G (5 October 2017). "But what is a Neural Network?". 3Blue1Brown. Archived from the original on 7 November 2021 – via YouTube.
Automation and Remote Control (Russian: Автоматика и Телемеханика, romanized: Avtomatika i Telemekhanika) is a Russian scientific journal published by MAIK Nauka/Interperiodica Press and distributed in English by Springer Science+Business Media.
The journal was established in April 1936 by the USSR Academy of Sciences Department of Control Processes Problems. Cofounders were the Trapeznikov Institute of Control Sciences and the Institute of Information Transmission Problems. The journal covers research on control theory problems and applications. The editor-in-chief is Andrey A. Galyaev. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7.
== History ==
The journal was established in April 1936 and published bimonthly. Since 1956 the journal has been a monthly publication, translated into English and published in the United States under the title Automation and Remote Control by Plenum Publishing Corporation. During its existence, the scope of the journal substantially evolved and expanded to cover virtually all areas of the modern science of automation and control systems. The journal publishes surveys, original papers, and short communications.
== References ==
== External links ==
Official website
Official website (in Russian)
Institute of Control Sciences
Journal page at MAIK Nauka/Interperiodica Press
Control reconfiguration is an active approach in control theory to achieve fault-tolerant control for dynamic systems. It is used when severe faults, such as actuator or sensor outages, cause a break-up of the control loop, which must be restructured to prevent failure at the system level. In addition to loop restructuring, the controller parameters must be adjusted to accommodate changed plant dynamics. Control reconfiguration is a building block toward increasing the dependability of systems under feedback control.
== Reconfiguration problem ==
=== Fault modelling ===
The figure to the right shows a plant controlled by a controller in a standard control loop.
The nominal linear model of the plant is
{\displaystyle {\begin{cases}{\dot {\mathbf {x} }}&=\mathbf {A} \mathbf {x} +\mathbf {B} \mathbf {u} \\\mathbf {y} &=\mathbf {C} \mathbf {x} \end{cases}}}
The plant subject to a fault (indicated by a red arrow in the figure) is modelled in general by
{\displaystyle {\begin{cases}{\dot {\mathbf {x} }}_{f}&=\mathbf {A} _{f}\mathbf {x} _{f}+\mathbf {B} _{f}\mathbf {u} \\\mathbf {y} _{f}&=\mathbf {C} _{f}\mathbf {x} _{f}\end{cases}}}
where the subscript {\displaystyle f} indicates that the system is faulty. This approach models multiplicative faults by modified system matrices. Specifically, actuator faults are represented by the new input matrix {\displaystyle \mathbf {B} _{f}}, sensor faults are represented by the output map {\displaystyle \mathbf {C} _{f}}, and internal plant faults are represented by the system matrix {\displaystyle \mathbf {A} _{f}}.
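A minimal numeric sketch of the multiplicative actuator-fault model: the outage of one actuator is represented by zeroing its column of the input matrix. The plant matrices, constant input, and forward-Euler integration below are illustrative assumptions, not taken from any particular system.

```python
import numpy as np

# Illustrative 2-state plant with two actuators (assumed values).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# Multiplicative actuator fault: actuator 1 fails, so its column of B
# is zeroed, giving the faulty input matrix B_f.
B_f = B.copy()
B_f[:, 0] = 0.0

def simulate(A, B, x0, u, dt=0.01, steps=500):
    """Forward-Euler simulation of x' = A x + B u with a constant input u."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (A @ x + B @ u)
    return x

u = np.array([1.0, 0.5])
x_nominal = simulate(A, B, np.zeros(2), u)
x_faulty = simulate(A, B_f, np.zeros(2), u)
# The faulty plant settles at a different steady state, which is what
# the reconfiguration step must compensate for.
print(x_nominal.round(2), x_faulty.round(2))
```

Comparing the two final states shows the effect a supervisory reconfiguration has to counteract.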
The upper part of the figure shows a supervisory loop consisting of fault detection and isolation (FDI) and reconfiguration, which changes the loop by
choosing new input and output signals from {\displaystyle \{\mathbf {u} ,\mathbf {y} \}} to reach the control goal,
changing the controller internals (including dynamic structure and parameters),
adjusting the reference input {\displaystyle \mathbf {w} }.
To this end, the vectors of inputs and outputs contain all available signals, not just those used by the controller in fault-free operation.
Alternative scenarios can model faults as an additive external signal {\displaystyle \mathbf {f} } influencing the state derivatives and outputs as follows:
{\displaystyle {\begin{cases}{\dot {\mathbf {x} }}_{f}&=\mathbf {A} \mathbf {x} _{f}+\mathbf {B} \mathbf {u} +\mathbf {E} \mathbf {f} \\\mathbf {y} _{f}&=\mathbf {C} _{f}\mathbf {x} _{f}+\mathbf {F} \mathbf {f} \end{cases}}}
=== Reconfiguration goals ===
The goal of reconfiguration is to keep the reconfigured control-loop performance sufficient for preventing plant shutdown. The following goals are distinguished:
Stabilization
Equilibrium recovery
Output trajectory recovery
State trajectory recovery
Transient time response recovery
Internal stability of the reconfigured closed loop is usually the minimum requirement. The equilibrium recovery goal (also referred to as weak goal) refers to the steady-state output equilibrium which the reconfigured loop reaches after a given constant input. This equilibrium must equal the nominal equilibrium under the same input (as time tends to infinity). This goal ensures steady-state reference tracking after reconfiguration. The output trajectory recovery goal (also referred to as strong goal) is even stricter. It requires that the dynamic response to an input must equal the nominal response at all times. Further restrictions are imposed by the state trajectory recovery goal, which requires that the state trajectory be restored to the nominal case by the reconfiguration under any input.
Usually a combination of goals is pursued in practice, such as the equilibrium-recovery goal with stability.
The question whether or not these or similar goals can be reached for specific faults is addressed by reconfigurability analysis.
== Reconfiguration approaches ==
=== Fault hiding ===
This paradigm aims at keeping the nominal controller in the loop. To this end, a reconfiguration block can be placed between the faulty plant and the nominal controller. Together with the faulty plant, it forms the reconfigured plant. The reconfiguration block has to fulfill the requirement that the behaviour of the reconfigured plant matches the behaviour of the nominal (fault-free) plant.
=== Linear model following ===
In linear model following, a formal feature of the nominal closed loop is recovered. In the classical pseudo-inverse method, the closed-loop system matrix {\displaystyle {\bar {\mathbf {A} }}=\mathbf {A} -\mathbf {B} \mathbf {K} } of a state-feedback control structure is used. The new controller {\displaystyle \mathbf {K} _{f}} is chosen so that the faulty closed-loop matrix approximates {\displaystyle {\bar {\mathbf {A} }}} in the sense of an induced matrix norm.
In perfect model following, a dynamic compensator is introduced to allow for the exact recovery of the complete loop behaviour under certain conditions.
In eigenstructure assignment, the nominal closed-loop eigenvalues and eigenvectors (the eigenstructure) are recovered after a fault.
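The pseudo-inverse method admits a compact numeric sketch: choose the reconfigured gain as the least-squares solution of B_f K_f ≈ A − Ā, i.e. K_f = pinv(B_f)(A − Ā). All matrices below are assumed for illustration; in this particular case the fault (a 50% loss of actuator effectiveness) lies in the range of B_f, so recovery happens to be exact.

```python
import numpy as np

# Nominal plant (A, B) with state feedback K; closed loop A_bar = A - B K.
# All values are illustrative assumptions.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 1.0]])
A_bar = A - B @ K

# Fault: the actuator keeps only 50% of its effectiveness.
B_f = 0.5 * B

# Pseudo-inverse method: K_f = pinv(B_f) @ (A - A_bar) minimizes
# ||(A - B_f K_f) - A_bar|| in the least-squares sense.
K_f = np.linalg.pinv(B_f) @ (A - A_bar)
A_bar_f = A - B_f @ K_f

# B_f spans the same column space as B, so here the nominal closed-loop
# matrix is recovered exactly (the residual norm is zero).
print(K_f, np.linalg.norm(A_bar_f - A_bar))
```

When the fault changes the range of the input matrix, the match is only approximate, which is the situation the stability analyses of the PIM literature address.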
=== Optimisation-based control schemes ===
Optimisation-based control schemes include linear-quadratic regulator (LQR) design, model predictive control (MPC), and eigenstructure assignment methods.
=== Probabilistic approaches ===
Some probabilistic approaches have been developed.
=== Learning control ===
Learning-based approaches include learning automata, neural networks, and related techniques.
== Mathematical tools and frameworks ==
The methods by which reconfiguration is achieved differ considerably. The following list gives an overview of mathematical approaches that are commonly used.
Adaptive control (AC)
Disturbance decoupling (DD)
Eigenstructure assignment (EA)
Gain scheduling (GS)/linear parameter varying (LPV)
Generalised internal model control (GIMC)
Intelligent control (IC)
Linear matrix inequality (LMI)
Linear-quadratic regulator (LQR)
Model following (MF)
Model predictive control (MPC)
Pseudo-inverse method (PIM)
Robust control techniques
== See also ==
Prior to control reconfiguration, it must at least be determined whether a fault has occurred (fault detection) and, if so, which components are affected (fault isolation). Preferably, a model of the faulty plant should be provided (fault identification). These questions are addressed by fault diagnosis methods.
Fault accommodation is another common approach to achieve fault tolerance. In contrast to control reconfiguration, accommodation is limited to internal controller changes. The sets of signals manipulated and measured by the controller are fixed, which means that the loop cannot be restructured.
== References ==
== Further reading ==
Blanke, M.; Kinnaert, M.; Lunze, J.; Staroswiecki, M. (2006), Diagnosis and Fault-Tolerant Control (2nd ed.), Springer
Steffen, T. (2005), Control Reconfiguration of Dynamical Systems, Springer
Staroswiecki, M. (2005), "Fault Tolerant Control: The Pseudo-Inverse Method Revisited", Proceedings of the 16th IFAC World Congress, Prague, Czech Republic: IFAC
Lunze, J.; Rowe-Serrano, D.; Steffen, T. (2003), "Control Reconfiguration Demonstrated at a Two-Degrees-of-Freedom Helicopter Model", Proceedings of European Control Conference (ECC), Cambridge, UK.{{citation}}: CS1 maint: location missing publisher (link)
Maciejowski, J.; Jones, C. (2003), "MPC Fault-Tolerant Flight Control Case Study: Flight 1862", Proceedings of the SAFEPROCESS 2003: 5th Symposium on Detection and Safety for Technical Processes, Washington D.C., USA: IFAC, pp. 265–276
Mahmoud, M.; Jiang, J.; Zhang, Y. (2003), Active Fault Tolerant Control Systems - Stochastic Analysis and Synthesis, Springer
Zhang, Y.; Jiang, J. (2003), "Bibliographical review on reconfigurable fault-tolerant control systems", Proceedings of the SAFEPROCESS 2003: 5th Symposium on Detection and Safety for Technical Processes, Washington D.C., USA: IFAC, pp. 265–276
Patton, R. J. (1997), "Fault-tolerant control: the 1997 situation", Preprints of IFAC Symposium on Fault Detection Supervision and Safety for Technical Processes, Kingston upon Hull, UK, pp. 1033–1055{{citation}}: CS1 maint: location missing publisher (link)
Rauch, H. E. (1995), "Autonomous control reconfiguration", IEEE Control Systems Magazine, 15 (6): 37–48, doi:10.1109/37.476385
Rauch, H. E. (1994), "Intelligent fault diagnosis and control reconfiguration", IEEE Control Systems Magazine, 14 (3): 6–12, doi:10.1109/37.291462, S2CID 39931526
Gao, Z.; Antsaklis, P.J. (1991), "Stability of the pseudo-inverse method for reconfigurable control systems", International Journal of Control, 53 (3): 717–729, doi:10.1080/00207179108953643
Looze, D.; Weiss, J.L.; Eterno, J.S.; Barrett, N.M. (1985), "An Automatic Redesign Approach for Restructurable Control Systems", IEEE Control Systems Magazine, 5 (2): 16–22, doi:10.1109/mcs.1985.1104940, S2CID 12684489.
Esna Ashari, A.; Khaki Sedigh, A.; Yazdanpanah, M. J. (2005), "Reconfigurable control system design using eigenstructure assignment: static, dynamic and robust approaches", International Journal of Control, 78 (13): 1005–1016, doi:10.1080/00207170500241817, S2CID 121350006.
Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point {\displaystyle x_{e}} stay near {\displaystyle x_{e}} forever, then {\displaystyle x_{e}} is Lyapunov stable. More strongly, if {\displaystyle x_{e}} is Lyapunov stable and all solutions that start out near {\displaystyle x_{e}} converge to {\displaystyle x_{e}}, then {\displaystyle x_{e}} is said to be asymptotically stable (see asymptotic analysis). The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.
== History ==
Lyapunov stability is named after Aleksandr Mikhailovich Lyapunov, a Russian mathematician who defended the thesis The General Problem of Stability of Motion at Kharkov University in 1892. Lyapunov was a pioneer of a global approach to the stability analysis of nonlinear dynamical systems, in contrast to the then widespread local method of linearizing them about points of equilibrium. His work, initially published in Russian and then translated into French, received little attention for many years. The mathematical theory of stability of motion that he founded considerably anticipated its implementation in science and technology. Lyapunov himself did not make applications in this field; his own interest was in the stability of rotating fluid masses with astronomical application. He had no doctoral students who followed the research in the field of stability, and his own fate was tragic: he died by suicide in 1918. For several decades the theory of stability sank into complete oblivion. The Russian-Soviet mathematician and mechanician Nikolay Gur'yevich Chetaev, working at the Kazan Aviation Institute in the 1930s, was the first to realize the magnitude of the discovery made by Lyapunov. Chetaev's contribution to the theory was so significant that many mathematicians, physicists and engineers consider him Lyapunov's direct successor and the next-in-line scientific descendant in the creation and development of the mathematical theory of stability.
The interest in it suddenly skyrocketed during the Cold War period when the so-called "Second Method of Lyapunov" (see below) was found to be applicable to the stability of aerospace guidance systems which typically contain strong nonlinearities not treatable by other methods. A large number of publications appeared then and since in the control and systems literature.
More recently the concept of the Lyapunov exponent (related to Lyapunov's First Method of discussing stability) has received wide interest in connection with chaos theory. Lyapunov stability methods have also been applied to finding equilibrium solutions in traffic assignment problems.
== Definition for continuous-time systems ==
Consider an autonomous nonlinear dynamical system
{\displaystyle {\dot {x}}=f(x(t)),\;\;\;\;x(0)=x_{0},}
where {\displaystyle x(t)\in {\mathcal {D}}\subseteq \mathbb {R} ^{n}} denotes the system state vector, {\displaystyle {\mathcal {D}}} an open set containing the origin, and {\displaystyle f:{\mathcal {D}}\rightarrow \mathbb {R} ^{n}} is a continuous vector field on {\displaystyle {\mathcal {D}}}. Suppose {\displaystyle f} has an equilibrium at {\displaystyle x_{e}}, so that {\displaystyle f(x_{e})=0}. Then:
This equilibrium is said to be Lyapunov stable if for every {\displaystyle \epsilon >0} there exists a {\displaystyle \delta >0} such that if {\displaystyle \|x(0)-x_{e}\|<\delta } then for every {\displaystyle t\geq 0} we have {\displaystyle \|x(t)-x_{e}\|<\epsilon }.
The equilibrium of the above system is said to be asymptotically stable if it is Lyapunov stable and there exists {\displaystyle \delta >0} such that if {\displaystyle \|x(0)-x_{e}\|<\delta } then {\displaystyle \lim _{t\rightarrow \infty }\|x(t)-x_{e}\|=0}.
The equilibrium of the above system is said to be exponentially stable if it is asymptotically stable and there exist {\displaystyle \alpha >0,~\beta >0,~\delta >0} such that if {\displaystyle \|x(0)-x_{e}\|<\delta } then {\displaystyle \|x(t)-x_{e}\|\leq \alpha \|x(0)-x_{e}\|e^{-\beta t}} for all {\displaystyle t\geq 0}.
Conceptually, the meanings of the above terms are the following:
Lyapunov stability of an equilibrium means that solutions starting "close enough" to the equilibrium (within a distance {\displaystyle \delta } from it) remain "close enough" forever (within a distance {\displaystyle \epsilon } from it). Note that this must be true for any {\displaystyle \epsilon } that one may want to choose.
Asymptotic stability means that solutions that start close enough not only remain close enough but also eventually converge to the equilibrium.
Exponential stability means that solutions not only converge, but in fact converge at least as fast as a particular known rate {\displaystyle \alpha \|x(0)-x_{e}\|e^{-\beta t}}.
The trajectory {\displaystyle x(t)=\phi (t)} is (locally) attractive if {\displaystyle \|x(t)-\phi (t)\|\rightarrow 0} as {\displaystyle t\rightarrow \infty } for all trajectories {\displaystyle x(t)} that start close enough to {\displaystyle \phi (t)}, and globally attractive if this property holds for all trajectories.
In other words, a point is asymptotically stable exactly when it is both attractive and stable; equivalently, x is asymptotically stable if it belongs to the interior of its stable set. (There are examples showing that attractivity does not imply asymptotic stability. Such examples are easy to create using homoclinic connections.)
If the Jacobian of the dynamical system at an equilibrium happens to be a stability matrix (i.e., if the real part of each eigenvalue is strictly negative), then the equilibrium is asymptotically stable.
=== System of deviations ===
Instead of considering stability only near an equilibrium point (a constant solution {\displaystyle x(t)=x_{e}}), one can formulate similar definitions of stability near an arbitrary solution {\displaystyle x(t)=\phi (t)}. However, one can reduce the more general case to that of an equilibrium by a change of variables called a "system of deviations". Define {\displaystyle y=x-\phi (t)}, obeying the differential equation
{\displaystyle {\dot {y}}=f(t,y+\phi (t))-{\dot {\phi }}(t)=g(t,y).}
This is no longer an autonomous system, but it has a guaranteed equilibrium point at {\displaystyle y=0} whose stability is equivalent to the stability of the original solution {\displaystyle x(t)=\phi (t)}.
=== Lyapunov's second method for stability ===
Lyapunov, in his original 1892 work, proposed two methods for demonstrating stability. The first method developed the solution in a series which was then proved convergent within limits. The second method, which is now referred to as the Lyapunov stability criterion or the Direct Method, makes use of a Lyapunov function V(x) which has an analogy to the potential function of classical dynamics. It is introduced as follows for a system {\displaystyle {\dot {x}}=f(x)} having a point of equilibrium at {\displaystyle x=0}. Consider a function {\displaystyle V:\mathbb {R} ^{n}\rightarrow \mathbb {R} } such that
{\displaystyle V(x)=0} if and only if {\displaystyle x=0},
{\displaystyle V(x)>0} if and only if {\displaystyle x\neq 0},
{\displaystyle {\dot {V}}(x)={\frac {d}{dt}}V(x)=\sum _{i=1}^{n}{\frac {\partial V}{\partial x_{i}}}f_{i}(x)=\nabla V\cdot f(x)\leq 0} for all values of {\displaystyle x\neq 0}. (Note: for asymptotic stability, {\displaystyle {\dot {V}}(x)<0} for {\displaystyle x\neq 0} is required.)
Then V(x) is called a Lyapunov function and the system is stable in the sense of Lyapunov. (Note that {\displaystyle V(0)=0} is required; otherwise for example {\displaystyle V(x)=1/(1+|x|)} would "prove" that {\displaystyle {\dot {x}}(t)=x} is locally stable.) An additional condition called "properness" or "radial unboundedness" is required in order to conclude global stability. Global asymptotic stability (GAS) follows similarly.
It is easier to visualize this method of analysis by thinking of a physical system (e.g. vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.
Lyapunov's realization was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found to satisfy the above constraints.
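As a numeric sketch of the Direct Method, consider the damped oscillator ẋ₁ = x₂, ẋ₂ = −x₁ − x₂ with the energy-like candidate V(x) = (x₁² + x₂²)/2; both the system and the candidate are illustrative assumptions, not from the text. Along trajectories V̇ = ∇V · f(x) = −x₂² ≤ 0, which the code checks on random sample points:

```python
import numpy as np

def f(x):
    # Damped oscillator: x1' = x2, x2' = -x1 - x2 (assumed example system).
    return np.array([x[1], -x[0] - x[1]])

def V(x):
    # Candidate Lyapunov function: a positive definite "energy".
    return 0.5 * (x[0] ** 2 + x[1] ** 2)

def V_dot(x):
    # grad V = (x1, x2), so V' = x1*x2 + x2*(-x1 - x2) = -x2**2 <= 0.
    return x @ f(x)

rng = np.random.default_rng(0)
samples = rng.uniform(-2.0, 2.0, size=(1000, 2))
print(all(V(x) > 0 for x in samples))           # V positive away from 0
print(all(V_dot(x) <= 1e-12 for x in samples))  # V' never positive
```

Such a sampled check is of course no proof; the symbolic inequality V̇ = −x₂² ≤ 0 is what the criterion actually uses.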
== Definition for discrete-time systems ==
The definition for discrete-time systems is almost identical to that for continuous-time systems. The definition below provides this, using an alternate language commonly used in more mathematical texts.
Let (X, d) be a metric space and f : X → X a continuous function. A point x in X is said to be Lyapunov stable, if,
{\displaystyle \forall \epsilon >0\ \exists \delta >0\ \forall y\in X\ \left[d(x,y)<\delta \Rightarrow \forall n\in \mathbf {N} \ d\left(f^{n}(x),f^{n}(y)\right)<\epsilon \right].}
We say that x is asymptotically stable if it belongs to the interior of its stable set, i.e. if,
{\displaystyle \exists \delta >0\left[d(x,y)<\delta \Rightarrow \lim _{n\to \infty }d\left(f^{n}(x),f^{n}(y)\right)=0\right].}
== Stability for linear state space models ==
A linear state space model
{\displaystyle {\dot {\textbf {x}}}=A{\textbf {x}},}
where {\displaystyle A} is a finite matrix, is asymptotically stable (in fact, exponentially stable) if all real parts of the eigenvalues of {\displaystyle A} are negative. This condition is equivalent to the following one: {\displaystyle A^{\textsf {T}}M+MA} is negative definite for some positive definite matrix {\displaystyle M=M^{\textsf {T}}}. (The relevant Lyapunov function is {\displaystyle V(x)=x^{\textsf {T}}Mx}.)
Correspondingly, a time-discrete linear state space model
{\displaystyle {\textbf {x}}_{t+1}=A{\textbf {x}}_{t}}
is asymptotically stable (in fact, exponentially stable) if all the eigenvalues of {\displaystyle A} have a modulus smaller than one.
This latter condition has been generalized to switched systems: a linear switched discrete-time system (ruled by a set of matrices {\displaystyle \{A_{1},\dots ,A_{m}\}})
{\displaystyle {{\textbf {x}}_{t+1}}=A_{i_{t}}{\textbf {x}}_{t},\quad A_{i_{t}}\in \{A_{1},\dots ,A_{m}\}}
is asymptotically stable (in fact, exponentially stable) if the joint spectral radius of the set {\displaystyle \{A_{1},\dots ,A_{m}\}} is smaller than one.
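Both linear stability tests can be checked numerically. The matrices below are assumed examples; the Lyapunov-equation solver is SciPy's `solve_continuous_lyapunov`, which solves a X + X aᵀ = q:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Continuous time: A is (exponentially) stable iff every eigenvalue has
# a negative real part. The matrix is an assumed example.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.all(np.linalg.eigvals(A).real < 0))  # True: stable

# Equivalent Lyapunov test: solve A^T M + M A = -Q with Q = I and check
# that the solution M is symmetric positive definite.
Q = np.eye(2)
M = solve_continuous_lyapunov(A.T, -Q)  # passing a = A^T gives A^T M + M A = -Q
print(np.allclose(A.T @ M + M @ A, -Q))   # residual check
print(np.all(np.linalg.eigvalsh(M) > 0))  # M positive definite

# Discrete time: x_{t+1} = Ad x_t is stable iff the largest eigenvalue
# modulus (spectral radius) of Ad is smaller than one.
Ad = np.array([[0.9, 0.2], [0.0, 0.5]])
print(np.max(np.abs(np.linalg.eigvals(Ad))) < 1)  # True: stable
```

The positive definite M recovered here is exactly the matrix of the quadratic Lyapunov function V(x) = xᵀMx mentioned above.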
== Stability for systems with inputs ==
A system with inputs (or controls) has the form
{\displaystyle {\dot {\textbf {x}}}={\textbf {f}}({\textbf {x}},{\textbf {u}})}
where the (generally time-dependent) input u(t) may be viewed as a control, external input, stimulus, disturbance, or forcing function. It has been shown that near to a point of equilibrium which is Lyapunov stable the system remains stable under small disturbances. For larger input disturbances the study of such systems is the subject of control theory and applied in control engineering. For systems with inputs, one must quantify the effect of inputs on the stability of the system. The two main approaches to this analysis are BIBO stability (for linear systems) and input-to-state stability (ISS) (for nonlinear systems).
== Example ==
This example shows a system where a Lyapunov function can be used to prove Lyapunov stability but cannot show asymptotic stability.
Consider the following equation, based on the Van der Pol oscillator equation with the friction term changed:
{\displaystyle {\ddot {y}}+y-\varepsilon \left({\frac {{\dot {y}}^{3}}{3}}-{\dot {y}}\right)=0.}
Let {\displaystyle x_{1}=y,x_{2}={\dot {y}}} so that the corresponding system is
{\displaystyle {\begin{aligned}&{\dot {x}}_{1}=x_{2},\\&{\dot {x}}_{2}=-x_{1}+\varepsilon \left({\frac {x_{2}^{3}}{3}}-{x_{2}}\right).\end{aligned}}}
The origin {\displaystyle x_{1}=0,\ x_{2}=0} is the only equilibrium point.
Let us choose as a Lyapunov function
{\displaystyle V={\frac {1}{2}}\left(x_{1}^{2}+x_{2}^{2}\right)}
which is clearly positive definite. Its derivative is
{\displaystyle {\dot {V}}=x_{1}{\dot {x}}_{1}+x_{2}{\dot {x}}_{2}=x_{1}x_{2}-x_{1}x_{2}+\varepsilon {\frac {x_{2}^{4}}{3}}-\varepsilon {x_{2}^{2}}=\varepsilon {\frac {x_{2}^{4}}{3}}-\varepsilon {x_{2}^{2}}.}
It seems that if the parameter {\displaystyle \varepsilon } is positive, stability is asymptotic for {\displaystyle x_{2}^{2}<3.} But this is wrong, since {\displaystyle {\dot {V}}} does not depend on {\displaystyle x_{1}}, and will be 0 everywhere on the {\displaystyle x_{1}} axis. The equilibrium is Lyapunov stable but not asymptotically stable.
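The failure of asymptotic stability can be seen numerically: with an assumed value ε = 0.5, the derivative V̇ = ε(x₂⁴/3 − x₂²) vanishes identically on the x₁-axis while being negative for 0 < x₂² < 3.

```python
import numpy as np

eps = 0.5  # assumed positive parameter

def V_dot(x1, x2):
    # V' depends only on x2: eps * (x2**4 / 3 - x2**2).
    return eps * (x2 ** 4 / 3.0 - x2 ** 2)

on_axis = [V_dot(x1, 0.0) for x1 in np.linspace(-10.0, 10.0, 21)]
off_axis = V_dot(0.0, 1.0)  # a point with 0 < x2**2 < 3

print(on_axis[:3])  # [0.0, 0.0, 0.0] -- V' vanishes on the whole x1-axis
print(off_axis)     # negative away from the axis
```

Because V̇ is only negative semi-definite, the Direct Method alone yields Lyapunov stability but cannot conclude asymptotic stability here.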
== Barbalat's lemma and stability of time-varying systems ==
It may be difficult to find a Lyapunov function with a negative definite derivative as required by the Lyapunov stability criterion; however, a function {\displaystyle V} with {\displaystyle {\dot {V}}} that is only negative semi-definite may be available. In autonomous systems, the invariant set theorem can be applied to prove asymptotic stability, but this theorem is not applicable when the dynamics are a function of time.
Instead, Barbalat's lemma allows for Lyapunov-like analysis of these non-autonomous systems. The lemma is motivated by the following observations. Assuming f is a function of time only:
Having {\displaystyle {\dot {f}}(t)\to 0} does not imply that {\displaystyle f(t)} has a limit as {\displaystyle t\to \infty }. For example, {\displaystyle f(t)=\sin(\ln(t)),\;t>0}.
Having {\displaystyle f(t)} approaching a limit as {\displaystyle t\to \infty } does not imply that {\displaystyle {\dot {f}}(t)\to 0}. For example, {\displaystyle f(t)=\sin \left(t^{2}\right)/t,\;t>0}.
Having {\displaystyle f(t)} lower bounded and decreasing ({\displaystyle {\dot {f}}\leq 0}) implies it converges to a limit. But it does not say whether or not {\displaystyle {\dot {f}}\to 0} as {\displaystyle t\to \infty }.
Barbalat's Lemma says:
If {\displaystyle f(t)} has a finite limit as {\displaystyle t\to \infty } and if {\displaystyle {\dot {f}}} is uniformly continuous (a sufficient condition for uniform continuity is that {\displaystyle {\ddot {f}}} is bounded), then {\displaystyle {\dot {f}}(t)\to 0} as {\displaystyle t\to \infty }.
An alternative version is as follows:
Let {\displaystyle p\in [1,\infty )} and {\displaystyle q\in (1,\infty ]}. If {\displaystyle f\in L^{p}(0,\infty )} and {\displaystyle {\dot {f}}\in L^{q}(0,\infty )}, then {\displaystyle f(t)\to 0} as {\displaystyle t\to \infty .}
In the following form the Lemma is true also in the vector valued case:
Let {\displaystyle f(t)} be a uniformly continuous function with values in a Banach space {\displaystyle E} and assume that {\displaystyle \textstyle \int _{0}^{t}f(\tau )\mathrm {d} \tau } has a finite limit as {\displaystyle t\to \infty }. Then {\displaystyle f(t)\to 0} as {\displaystyle t\to \infty }.
The following example is taken from page 125 of Slotine and Li's book Applied Nonlinear Control.
Consider a non-autonomous system
{\displaystyle {\dot {e}}=-e+g\cdot w(t)}
{\displaystyle {\dot {g}}=-e\cdot w(t).}
This is non-autonomous because the input {\displaystyle w} is a function of time. Assume that the input {\displaystyle w(t)} is bounded.
Taking {\displaystyle V=e^{2}+g^{2}} gives
{\displaystyle {\dot {V}}=-2e^{2}\leq 0.}
This says that {\displaystyle V(t)\leq V(0)} by the first two conditions, and hence {\displaystyle e} and {\displaystyle g} are bounded. But it does not say anything about the convergence of {\displaystyle e} to zero, as {\displaystyle {\dot {V}}} is only negative semi-definite (note {\displaystyle g} can be non-zero when {\displaystyle {\dot {V}}=0}) and the dynamics are non-autonomous.
Using Barbalat's lemma:
{\displaystyle {\ddot {V}}=-4e(-e+g\cdot w).}
This is bounded because {\displaystyle e}, {\displaystyle g} and {\displaystyle w} are bounded. This implies {\displaystyle {\dot {V}}\to 0} as {\displaystyle t\to \infty } and hence {\displaystyle e\to 0}. This proves that the error converges.
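A forward-Euler simulation illustrates this conclusion. The particular bounded input w(t) = sin t, the initial conditions, and the step size are assumptions made only for illustration.

```python
import math

dt = 0.001
e, g = 1.0, 1.0          # assumed initial conditions
v0 = e * e + g * g       # V(0)
t = 0.0
for _ in range(50_000):  # simulate 50 seconds
    w = math.sin(t)      # a bounded input, as the example requires
    # e' = -e + g*w,  g' = -e*w  (both right-hand sides use the old values)
    e, g = e + dt * (-e + g * w), g + dt * (-e * w)
    t += dt
v_final = e * e + g * g
print(round(v_final, 4), round(abs(e), 4))  # V has decreased and e is near 0
```

The simulation shows V never increasing (V̇ = −2e² ≤ 0) and the error e decaying toward zero, as Barbalat's lemma predicts.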
== See also ==
Lyapunov function
LaSalle's invariance principle
Lyapunov–Malkin theorem
Markus–Yamabe conjecture
Libration point orbit
Hartman–Grobman theorem
Perturbation theory
Stability theory
== References ==
== Further reading ==
Bhatia, Nam Parshad; Szegő, Giorgio P. (2002). Stability theory of dynamical systems. Springer. ISBN 978-3-540-42748-3.
Chervin, Robert (1971). Lyapunov Stability and Feedback Control of Two-Stream Plasma Systems (PhD). Columbia University.
Gandolfo, Giancarlo (1996). Economic Dynamics (Third ed.). Berlin: Springer. pp. 407–428. ISBN 978-3-540-60988-9.
Parks, P. C. (1992). "A. M. Lyapunov's stability theory—100 years on". IMA Journal of Mathematical Control & Information. 9 (4): 275–303. doi:10.1093/imamci/9.4.275.
Slotine, Jean-Jacques E.; Weiping Li (1991). Applied Nonlinear Control. NJ: Prentice Hall.
Teschl, G. (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
Wiggins, S. (2003). Introduction to Applied Nonlinear Dynamical Systems and Chaos (2nd ed.). New York: Springer Verlag. ISBN 978-0-387-00177-7.
This article incorporates material from asymptotically stable on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.
Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
disturbance rejection (such as hills in the cruise control example above)
guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
unstable processes can be stabilized
reduced sensitivity to parameter variations
improved reference tracking performance
improved rectification of random fluctuations
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
== Open-loop and closed-loop ==
== Closed-loop transfer function ==
The output of the system y(t) is fed back through a sensor measurement F to a comparison with the reference value r(t). The controller C then takes the error e (difference) between the reference and the output to change the inputs u to the system under control P. This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output (SISO) control system; MIMO (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller C, the plant P, and the sensor F are linear and time-invariant (i.e., elements of their transfer function C(s), P(s), and F(s) do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:
{\displaystyle Y(s)=P(s)U(s)}
{\displaystyle U(s)=C(s)E(s)}
{\displaystyle E(s)=R(s)-F(s)Y(s).}
Solving for Y(s) in terms of R(s) gives
{\displaystyle Y(s)=\left({\frac {P(s)C(s)}{1+P(s)C(s)F(s)}}\right)R(s)=H(s)R(s).}
The expression
{\displaystyle H(s)={\frac {P(s)C(s)}{1+F(s)P(s)C(s)}}}
is referred to as the closed-loop transfer function of the system. The numerator is the forward (open-loop) gain from r to y, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If {\displaystyle |P(s)C(s)|\gg 1}, i.e., the loop gain has a large norm for each value of s, and if {\displaystyle |F(s)|\approx 1}, then Y(s) is approximately equal to R(s) and the output closely tracks the reference input.
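The tracking claim above can be checked numerically. The sketch below assumes a hypothetical first-order plant P(s) = 1/(s + 1), unity sensor feedback F(s) = 1, and a proportional controller C(s) = K; none of these specific choices come from the text above.

```python
def P(s):
    """Hypothetical plant: first-order lag, P(s) = 1/(s + 1)."""
    return 1.0 / (s + 1.0)

def F(s):
    """Unity sensor feedback, F(s) = 1."""
    return 1.0

def make_H(K):
    """Closed-loop transfer function H(s) for a proportional controller C(s) = K."""
    def H(s):
        L = P(s) * K                    # forward gain P(s)C(s)
        return L / (1.0 + L * F(s))     # H = PC / (1 + PCF)
    return H

# As |P(s)C(s)| grows, H(s) -> 1/F(s) = 1 and the output tracks the reference:
print(abs(make_H(1.0)(0j)))     # DC gain with K = 1:    0.5
print(abs(make_H(1000.0)(0j)))  # DC gain with K = 1000: ~0.999
```

With K = 1 the DC gain is K/(1 + K) = 0.5; with K = 1000 it is 1000/1001, illustrating the high-loop-gain approximation Y(s) ≈ R(s).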
== PID feedback control ==
A proportional–integral–derivative controller (PID controller) is a control-loop feedback mechanism widely used in control systems.
A PID controller continuously calculates an error value e(t) as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. PID is an initialism for Proportional-Integral-Derivative, referring to the three terms operating on the error signal to produce a control signal.
The theoretical understanding and application of PID control date from the 1920s, and PID controllers are implemented in nearly all analogue control systems: originally in mechanical controllers, then using discrete electronics, and later in industrial process computers.
The PID controller is probably the most-used feedback control design.
If u(t) is the control signal sent to the system, y(t) is the measured output and r(t) is the desired output, and e(t) = r(t) − y(t) is the tracking error, a PID controller has the general form
{\displaystyle u(t)=K_{P}e(t)+K_{I}\int ^{t}e(\tau ){\text{d}}\tau +K_{D}{\frac {{\text{d}}e(t)}{{\text{d}}t}}.}
The desired closed loop dynamics is obtained by adjusting the three parameters KP, KI and KD, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems: however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
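The three terms can be sketched in a few lines of code. The discrete-time loop below, with Euler approximations of the integral and derivative, a hypothetical first-order plant dy/dt = -y + u, and illustrative gains, is a minimal sketch rather than a production PID implementation:

```python
def simulate_pid(Kp, Ki, Kd, r=1.0, dt=0.01, steps=2000):
    """Drive the hypothetical plant dy/dt = -y + u toward setpoint r."""
    y, integral, prev_e = 0.0, 0.0, r
    for _ in range(steps):
        e = r - y                       # tracking error e(t) = r(t) - y(t)
        integral += e * dt              # running integral of the error
        deriv = (e - prev_e) / dt       # finite-difference derivative
        u = Kp * e + Ki * integral + Kd * deriv
        y += dt * (-y + u)              # Euler step of the plant
        prev_e = e
    return y

print(simulate_pid(Kp=5.0, Ki=2.0, Kd=0.1))  # settles near the setpoint 1.0
```

The integral term is what drives the steady-state error to zero here; with Ki = 0 the output would settle below the setpoint.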
Applying Laplace transformation results in the transformed PID controller equation
{\displaystyle u(s)=K_{P}\,e(s)+K_{I}\,{\frac {1}{s}}\,e(s)+K_{D}\,s\,e(s)}
{\displaystyle u(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right)e(s)}
with the PID controller transfer function
{\displaystyle C(s)=\left(K_{P}+K_{I}\,{\frac {1}{s}}+K_{D}\,s\right).}
As an example of tuning a PID controller in the closed-loop system H(s), consider a 1st order plant given by
{\displaystyle P(s)={\frac {A}{1+sT_{P}}}}
where A and TP are some constants. The plant output is fed back through
{\displaystyle F(s)={\frac {1}{1+sT_{F}}}}
where TF is also a constant. Now if we set {\displaystyle K_{P}=K\left(1+{\frac {T_{D}}{T_{I}}}\right)}, KD = KTD, and {\displaystyle K_{I}={\frac {K}{T_{I}}}}, we can express the PID controller transfer function in series form as
{\displaystyle C(s)=K\left(1+{\frac {1}{sT_{I}}}\right)(1+sT_{D})}
Plugging P(s), F(s), and C(s) into the closed-loop transfer function H(s), we find that by setting
{\displaystyle K={\frac {1}{A}},\quad T_{I}=T_{F},\quad T_{D}=T_{P}}
H(s) = 1. With this tuning, the system output in this example follows the reference input exactly.
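This tuning can be verified numerically. The constants A, TP, and TF below are arbitrary illustrative values; the algebra guarantees H(s) = 1 for any choice:

```python
A, TP, TF = 2.0, 0.5, 0.3          # arbitrary plant/sensor constants
K, TI, TD = 1.0 / A, TF, TP        # the tuning from the text

def H(s):
    P = A / (1 + s * TP)                         # plant
    C = K * (1 + 1 / (s * TI)) * (1 + s * TD)    # series-form PID
    F = 1 / (1 + s * TF)                         # sensor
    return P * C / (1 + P * C * F)               # closed loop

for s in (0.1 + 0.2j, 1.0 + 0j, 5.0 - 3.0j):
    print(abs(H(s) - 1.0) < 1e-9)   # True at each test frequency
```

The cancellations are visible term by term: TD = TP cancels the plant pole, and TI = TF with K = 1/A makes the loop gain exactly 1/(sTF).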
However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off are used instead.
== References == | Wikipedia/Closed-loop_control |
In control theory, an open-loop controller, also called a non-feedback controller, is a control loop part of a control system in which the control action ("input" to the system) is independent of the "process output", which is the process variable that is being controlled. It does not use feedback to determine if its output has achieved the desired goal of the input command or process setpoint.
There are many open-loop controls, such as on/off switching of valves, machinery, lights, motors or heaters, where the control result is known to be approximately sufficient under normal conditions without the need for feedback. The advantage of using open-loop control in these cases is the reduction in component count and complexity. However, unlike a closed-loop control system, an open-loop system cannot correct any errors that it makes or compensate for outside disturbances.
== Open-loop and closed-loop ==
== Applications ==
An open-loop controller is often used in simple processes because of its simplicity and low cost, especially in systems where feedback is not critical. A typical example would be an older model domestic clothes dryer, for which the length of time is entirely dependent on the judgement of the human operator, with no automatic feedback of the dryness of the clothes.
An irrigation sprinkler system programmed to turn on at set times is another example of an open-loop system if it does not measure soil moisture as a form of feedback. Even if rain is pouring down on the lawn, the sprinkler system will activate on schedule, wasting water.
Another example is a stepper motor used for control of position. Sending it a stream of electrical pulses causes it to rotate by exactly that many steps, hence the name. If the motor was always assumed to perform each movement correctly, without positional feedback, it would be open-loop control. However, if there is a position encoder, or sensors to indicate the start or finish positions, then that is closed-loop control, such as in many inkjet printers. The drawback of open-loop control of steppers is that if the machine load is too high, or the motor attempts to move too quickly, then steps may be skipped. The controller has no means of detecting this and so the machine continues to run slightly out of adjustment until reset. For this reason, more complex robots and machine tools instead use servomotors rather than stepper motors, which incorporate encoders and closed-loop controllers.
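The skipped-step failure mode can be sketched as follows; the pulse count and the skipped-step events are purely illustrative:

```python
def open_loop_position(pulses, skipped_events):
    """Controller's belief vs. actual shaft position, in steps."""
    believed = pulses                         # every pulse assumed to move the shaft
    actual = pulses - len(skipped_events)     # each overload event loses one step
    return believed, actual

believed, actual = open_loop_position(1000, skipped_events=[120, 455, 789])
print(believed - actual)  # 3 steps of drift the controller cannot detect
```

With an encoder (closed loop), the controller would compare `actual` against `believed` and re-issue the missing steps; open loop, the drift persists until a reset.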
However, open-loop control is very useful and economic for well-defined systems where the relationship between input and the resultant state can be reliably modeled by a mathematical formula. For example, determining the voltage to be fed to an electric motor that drives a constant load, in order to achieve a desired speed would be a good application. But if the load were not predictable and became excessive, the motor's speed might vary as a function of the load not just the voltage, and an open-loop controller would be insufficient to ensure repeatable control of the velocity.
An example of this is a conveyor system that is required to travel at a constant speed. For a constant voltage, the conveyor will move at a different speed depending on the load on the motor (represented here by the weight of objects on the conveyor). In order for the conveyor to run at a constant speed, the voltage of the motor must be adjusted depending on the load. In this case, a closed-loop control system would be necessary.
Thus there are many open-loop controls, such as switching valves, lights, motors or heaters on and off, where the result is known to be approximately sufficient without the need for feedback.
== Combination with feedback control ==
A feedback control system, such as a PID controller, can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system in some situations.
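The motion-control example can be sketched as follows, assuming a simple mass m under force u, a ramp velocity reference, and illustrative PI gains; the feed-forward term is the commanded acceleration scaled by the inertia:

```python
def track(use_feedforward, m=2.0, Kp=4.0, Ki=1.0, dt=0.001, T=2.0):
    """Worst velocity-tracking error for a ramp reference."""
    v, integral, worst = 0.0, 0.0, 0.0
    for k in range(int(T / dt)):
        v_ref, a_ref = k * dt, 1.0          # ramp reference: accel = 1
        e = v_ref - v
        worst = max(worst, abs(e))
        integral += e * dt
        u = Kp * e + Ki * integral          # PI feedback on the residual error
        if use_feedforward:
            u += m * a_ref                  # feed-forward: desired accel * inertia
        v += dt * (u / m)                   # plant: m dv/dt = u
    return worst

print(track(False) > track(True))  # feed-forward shrinks the worst error: True
```

With the feed-forward term the commanded force already matches the reference acceleration, so the PI loop only has to correct numerical residue; without it, the feedback must build up the entire force through a transient error.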
== See also ==
Cataract, the open-loop speed controller of early beam engines
Control theory
Feed-forward
PID controller
Process control
Open-loop transfer function
== References ==
== Further reading ==
Kuo, Benjamin C. (1991). Automatic Control Systems (6th ed.). New Jersey: Prentice Hall. ISBN 0-13-051046-7.
Ziny Flikop (2004). "Bounded-Input Bounded-Predefined-Control Bounded-Output" (http://arXiv.org/pdf/cs/0411015)
Basso, Christophe (2012). "Designing Control Loops for Linear and Switching Power Supplies: A Tutorial Guide". Artech House, ISBN 978-1608075577 | Wikipedia/Open-loop_control |
Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.
== Overview ==
The models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.
MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.
While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.
An algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide significant reduction in online computations while maintaining comparative performance to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on exchange of information among controllers.
=== Theory behind MPC ===
MPC is based on iterative, finite-horizon optimization of a plant model. At time {\displaystyle t} the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: {\displaystyle [t,t+T]}. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time {\displaystyle t+T}. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.
=== Principles of MPC ===
Model predictive control is a multivariable control algorithm that uses:
an internal dynamic model of the process
a cost function J over the receding horizon
an optimization algorithm minimizing the cost function J using the control input u
An example of a quadratic cost function for optimization is given by:
{\displaystyle J=\sum _{i=1}^{N}w_{x_{i}}(r_{i}-x_{i})^{2}+\sum _{i=1}^{M}w_{u_{i}}{\Delta u_{i}}^{2}}
without violating constraints (low/high limits), with
{\displaystyle x_{i}}: the ith controlled variable (e.g. measured temperature)
{\displaystyle r_{i}}: the ith reference variable (e.g. required temperature)
{\displaystyle u_{i}}: the ith manipulated variable (e.g. control valve)
{\displaystyle w_{x_{i}}}: weighting coefficient reflecting the relative importance of {\displaystyle x_{i}}
{\displaystyle w_{u_{i}}}: weighting coefficient penalizing relatively big changes in {\displaystyle u_{i}}
etc.
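Evaluating the quadratic cost J for given sequences is a direct translation of the formula; the numbers below are toy values:

```python
def mpc_cost(r, x, du, wx, wu):
    """J = sum_i wx_i (r_i - x_i)^2 + sum_i wu_i (du_i)^2."""
    tracking = sum(w * (ri - xi) ** 2 for w, ri, xi in zip(wx, r, x))
    moves = sum(w * d ** 2 for w, d in zip(wu, du))
    return tracking + moves

J = mpc_cost(r=[1.0, 1.0], x=[0.8, 0.9], du=[0.5], wx=[1.0, 1.0], wu=[0.1])
print(round(J, 4))  # 0.04 + 0.01 + 0.025 = 0.075
```

The optimizer inside an MPC would minimize this J over the input moves `du` subject to the low/high limits; here only the evaluation is shown.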
== Nonlinear MPC ==
Nonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.
The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting methods, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is exploited even further by path-following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead only take a few iterations towards the solution of the most current NMPC problem, before proceeding to the next one, which is suitably initialized. Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method. Optimum solutions are found by generating random samples that satisfy the constraints in the solution space and finding the optimum one based on the cost function.
While NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (Distributed parameter systems). As an application in aerospace, recently, NMPC has been used to track optimal terrain-following/avoidance trajectories in real-time.
== Explicit MPC ==
Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to the online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem formulated as an optimization problem is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA); hence the eMPC controller stores the coefficients of the PWA for each subset (control region) of the state space where the PWA is constant, as well as coefficients of some parametric representations of all the regions. For linear MPC, every region turns out geometrically to be a convex polytope, commonly parameterized by coefficients for its faces, which requires quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second merely evaluating the PWA using the coefficients stored for that region. If the total number of regions is small, the implementation of the eMPC does not require significant computational resources (compared to the online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is the exponential growth of the total number of control regions with respect to some key parameters of the controlled system, e.g., the number of states, which dramatically increases controller memory requirements and makes the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.
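The online lookup-then-evaluate step can be sketched in one state dimension; the region boundaries and affine coefficients below are made up for illustration and are not the output of any real parametric program:

```python
REGIONS = [  # (lower bound, upper bound, f, g) for each control region
    (-10.0, -1.0, 0.0, 2.0),   # saturated low:  u = 2
    (-1.0, 1.0, -2.0, 0.0),    # interior:       u = -2x
    (1.0, 10.0, 0.0, -2.0),    # saturated high: u = -2
]

def empc_control(x):
    """Locate the region containing x, then evaluate its affine law u = f*x + g."""
    for lo, hi, f, g in REGIONS:
        if lo <= x <= hi:
            return f * x + g
    raise ValueError("state outside the pre-computed partition")

print(empc_control(0.25))  # interior region: -2 * 0.25 = -0.5
print(empc_control(4.0))   # saturated region: -2.0
```

In higher dimensions the linear scan over regions is replaced by point-location structures (e.g. binary search trees over the polytope faces), which is exactly the step the text identifies as expensive when the region count grows.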
== Robust MPC ==
Robust variants of model predictive control are able to account for set bounded disturbance while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below.
Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance. This is the optimal solution to linear robust control problems, however it carries a high computational cost. The basic idea behind the min/max MPC approach is to modify the on-line "min" optimization to a "min-max" problem, minimizing the worst case of the objective function, maximized over all possible plants from the uncertainty set.
Constraint Tightening MPC. Here the state constraints are enlarged by a given margin so that a trajectory can be guaranteed to be found under any evolution of disturbance.
Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state. The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by disturbance with the feedback controller.
Multi-stage MPC. This uses a scenario-tree formulation by approximating the uncertainty space with a set of samples and the approach is non-conservative because it takes into account that the measurement information is available at every time stage in the prediction and the decisions at every stage can be different and can act as recourse to counteract the effects of uncertainties. The drawback of the approach however is that the size of the problem grows exponentially with the number of uncertainties and the prediction horizon.
Tube-enhanced multi-stage MPC. This approach synergizes multi-stage MPC and tube-based MPC. It provides high degrees of freedom to choose the desired trade-off between optimality and simplicity by the classification of uncertainties and the choice of control laws in the predictions.
== MPC software ==
Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.
A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.
Freely available open-source software packages for (nonlinear) model predictive control include among others:
Rockit (Rapid Optimal Control kit) — a software framework to quickly prototype optimal control problems.
acados — a software framework providing fast and embedded solvers for nonlinear optimal control.
GRAMPC — a nonlinear MPC framework that is suitable for dynamical systems with sampling times in the (sub)millisecond range and that allows for an efficient implementation on embedded hardware.
CControl - a control engineering linear algebra library with MPC and Kalman filtering for embedded and low-cost microcontrollers
== MPC vs. LQR ==
Model predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes of setting up optimisation costs.
While a model predictive controller often looks at fixed-length, often gradually weighted sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.
Due to these fundamental differences, LQR has better global stability properties, while MPC often attains more locally optimal, though more complex, performance.
The main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR.
This means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if the convexity and complexity of the problem space have been neglected.
== See also ==
Control engineering
Control theory
Feed-forward
System identification
== References ==
== Further reading ==
Kwon, Wook Hyun; Bruckstein, Alfred M.; Kailath, Thomas (1983). "Stabilizing state feedback design via the moving horizon method". International Journal of Control. 37 (3): 631–643. doi:10.1080/00207178308932998.
Garcia, Carlos E.; Prett, David M.; Morari, Manfred (1989). "Model predictive control: theory and practice". Automatica. 25 (3): 335–348. doi:10.1016/0005-1098(89)90002-2.
Findeisen, Rolf; Allgöwer, Frank (2001). "An introduction to nonlinear model predictive control". Summerschool on "The Impact of Optimization in Control", Dutch Institute of Systems and Control, C. W. Scherer and J. M. Schumacher, Editors: 3.1 – 3.45.
Mayne, David Q.; Michalska, Hannah (1990). "Receding horizon control of nonlinear systems". IEEE Transactions on Automatic Control. 35 (7): 814–824. doi:10.1109/9.57020.
Mayne, David Q.; Rawlings, James B.; Rao, Christopher V.; Scokaert, Pierre O. M. (2000). "Constrained model predictive control: stability and optimality". Automatica. 36 (6): 789–814. doi:10.1016/S0005-1098(99)00214-9.
Allgöwer, Frank; Zheng, Alex, eds. (2000). Nonlinear model predictive control. Progress in Systems Theory. Vol. 26. Birkhauser.
Camacho; Bordons (2004). Model predictive control. Springer Verlag.
Findeisen, Rolf; Allgöwer, Frank; Biegler, Lorenz T. (2006). Assessment and Future Directions of Nonlinear Model Predictive Control. Lecture Notes in Control and Information Sciences. Vol. 26. Springer.
Diehl, Moritz M.; Bock, H. Georg; Schlöder, Johannes P.; Findeisen, Rolf; Nagy, Zoltan; Allgöwer, Frank (2002). "Real-time optimization and Nonlinear Model Predictive Control of Processes governed by differential-algebraic equations". Journal of Process Control. 12 (4): 577–585. doi:10.1016/S0959-1524(01)00023-3.
Rawlings, James B.; Mayne, David Q.; and Diehl, Moritz M.; Model Predictive Control: Theory, Computation, and Design (2nd Ed.), Nob Hill Publishing, LLC, ISBN 978-0975937730 (Oct. 2017)
Geyer, Tobias; Model predictive control of high power converters and industrial drives, Wiley, London, ISBN 978-1-119-01090-6, Nov. 2016
== External links ==
Case Study. Lancaster Waste Water Treatment Works, optimisation by means of Model Predictive Control from Perceptive Engineering
acados - Open-source framework for (nonlinear) model predictive control providing fast and embedded solvers for nonlinear optimization. (C, MATLAB and Python interface available)
μAO-MPC - Open Source Software package that generates tailored code for model predictive controllers on embedded systems in highly portable C code.
GRAMPC - Open source software framework for embedded nonlinear model predictive control using a gradient-based augmented Lagrangian method. (Plain C code, no code generation, MATLAB interface)
jMPC Toolbox - Open Source MATLAB Toolbox for Linear MPC.
Study on application of NMPC to superfluid cryogenics (PhD Project).
Nonlinear Model Predictive Control Toolbox for MATLAB and Python
Model Predictive Control Toolbox from MathWorks for design and simulation of model predictive controllers in MATLAB and Simulink
Pulse step model predictive controller - virtual simulator
Tutorial on MPC with Excel and MATLAB Examples
GEKKO: Model Predictive Control in Python
In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.
The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today.
In contrast with an adaptive control policy, a robust control policy is static: rather than adapting to measurements of variations, the controller is designed to work on the assumption that certain variables will be unknown but bounded.
== Criteria for robustness ==
Informally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control.
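The sensitivity reduction from high open-loop gain can be shown numerically for a static unity-feedback loop. The sketch below is purely illustrative and not from any cited source; the plant and controller gain values are assumptions chosen for the demonstration.

```python
# Sketch: how high loop gain suppresses the effect of plant-parameter
# uncertainty in a unity-feedback loop.
# Closed-loop gain T = k*G / (1 + k*G); for large k*G, T ≈ 1 regardless of G.

def closed_loop_gain(plant_gain: float, controller_gain: float) -> float:
    """Steady-state closed-loop gain of a unity-feedback loop."""
    loop = controller_gain * plant_gain
    return loop / (1.0 + loop)

for k in (1.0, 10.0, 1000.0):
    # Vary the (uncertain) plant gain by a factor of two.
    t_nominal = closed_loop_gain(plant_gain=1.0, controller_gain=k)
    t_perturbed = closed_loop_gain(plant_gain=2.0, controller_gain=k)
    print(k, abs(t_perturbed - t_nominal))
# As k grows, the closed-loop gain becomes insensitive to the plant variation.
```

With k = 1 the factor-of-two plant variation shifts the closed-loop gain noticeably; with k = 1000 the shift is negligible, which is the robustness property described above.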
The major obstacle to achieving high loop gains is the need to maintain system closed-loop stability. Loop shaping which allows stable closed-loop operation can be a technical challenge.
Robust control systems often incorporate advanced topologies which include multiple feedback loops and feed-forward paths. The control laws may be represented by high order transfer functions required to simultaneously accomplish desired disturbance rejection performance with the robust closed-loop operation.
High-gain feedback is the principle that allows simplified models of operational amplifiers and emitter-degenerated bipolar transistors to be used in a variety of different settings. This idea was already well understood by Bode and Black in 1927.
== The modern theory of robust control ==
The theory of robust control began in the late 1970s and early 1980s and soon developed a number of techniques for dealing with bounded system uncertainty.
Probably the most important example of a robust control technique is H-infinity loop-shaping, which was developed by Duncan McFarlane and Keith Glover of Cambridge University; this method minimizes the sensitivity of a system over its frequency spectrum, and this guarantees that the system will not greatly deviate from expected trajectories when disturbances enter the system.
An emerging area of robust control from application point of view is sliding mode control (SMC), which is a variation of variable structure control (VSC). The robustness properties of SMC with respect to matched uncertainty as well as the simplicity in design attracted a variety of applications.
While robust control has traditionally been treated with deterministic approaches, in the last two decades this approach has been criticized as too rigid to describe real uncertainty and as often leading to overly conservative solutions. Probabilistic robust control has been introduced as an alternative, for example in approaches that interpret robust control within the so-called scenario optimization theory.
Another example is loop transfer recovery (LQG/LTR), which was developed to overcome the robustness problems of linear-quadratic-Gaussian control (LQG) control.
Other robust techniques include quantitative feedback theory (QFT), passivity-based control, Lyapunov-based control, etc.
When system behavior varies considerably in normal operation, multiple control laws may have to be devised. Each distinct control law addresses a specific system behavior mode. An example is a computer hard disk drive. Separate robust control system modes are designed in order to address the rapid magnetic head traversal operation, known as the seek, a transitional settle operation as the magnetic head approaches its destination, and a track following mode during which the disk drive performs its data access operation.
One of the challenges is to design a control system that addresses these diverse system operating modes and enables smooth transition from one mode to the next as quickly as possible.
Such a state-machine-driven composite control system is an extension of the gain scheduling idea, where the entire control strategy changes based upon changes in system behavior.
== See also ==
Control theory
Control engineering
Fractional-order control
H-infinity control
H-infinity loop-shaping
Sliding mode control
Intelligent control
Process control
Robust decision making
Root locus
Servomechanism
Stable polynomial
State space (controls)
System identification
Stability radius
Iso-damping
Active disturbance rejection control
Quantitative feedback theory
== References ==
== Further reading ==
In applied physics, the concept of controlling self-organized criticality refers to the control of processes by which a self-organized system dissipates energy. The objective of the control is to reduce the probability of occurrence and the size of energy dissipation bursts, often called avalanches, in self-organized systems. Dissipation of energy from a self-organized critical system into a lower energy state can be costly for society: the dissipation typically proceeds through avalanches of all sizes following a kind of power-law distribution, and large avalanches can be damaging and disruptive.
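The power-law character of avalanche sizes can be illustrated with a small simulation. The sketch below is an assumption-laden toy (the exponent, cutoff and threshold are all invented for illustration), not a model of any specific critical system:

```python
# Illustrative sketch: avalanche sizes drawn from a power law P(s) ∝ s^-alpha,
# as is typical of self-organized critical systems. All parameters assumed.
import random

def sample_avalanche_sizes(n, alpha=1.5, s_max=10_000, seed=0):
    """Inverse-transform sampling of a continuous power law on [1, s_max]."""
    rng = random.Random(seed)
    sizes = []
    a = 1.0 - alpha  # exponent used by the CDF inversion (alpha != 1)
    for _ in range(n):
        u = rng.random()
        # Invert F(s) = (s^a - 1) / (s_max^a - 1) for p(s) ∝ s^-alpha
        s = ((s_max**a - 1.0) * u + 1.0) ** (1.0 / a)
        sizes.append(s)
    return sizes

sizes = sample_avalanche_sizes(100_000)
# Small avalanches dominate the count, but rare large ones still occur.
big = [s for s in sizes if s > 100]
print(len(big) / len(sizes), max(sizes))
```

The typical (median) avalanche is tiny, yet a non-negligible fraction of events exceed the threshold by orders of magnitude, which is why large avalanches remain a practical concern.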
== Schemes ==
Several strategies have been proposed to deal with the issue of controlling self-organized criticality:
The design of controlled avalanches. Daniel O. Cajueiro and Roberto F. S. Andrade show that if well-formulated small and medium avalanches are exogenously triggered in the system, the energy of the system is released in a way that large avalanches are rarer.
The modification of the degree of interdependence of the network where the avalanche spreads. Charles D. Brummitt, Raissa M. D'Souza and E. A. Leicht show that the dynamics of self-organized critical systems on complex networks depend on the connectivity of the network. They find that while some connectivity is beneficial (since it suppresses the largest cascades in the system), too much connectivity gives space for the development of very large cascades and increases the overall cascade capacity of the system.
The modification of the deposition process of the self-organized system. Pierre-Andre Noel, Charles D. Brummitt and Raissa M. D'Souza show that it is possible to control the self-organized system by modifying the natural deposition process of the self-organized system adjusting the place where the avalanche starts.
Dynamically modifying the local thresholds of cascading failures. In a model of an electric transmission network, Heiko Hoffmann and David W. Payton demonstrated that either randomly upgrading lines (sort of like preventive maintenance) or upgrading broken lines to a random breakage threshold suppresses self-organized criticality. Apparently, these strategies undermine the self-organization of large critical clusters. Here, a critical cluster is a collection of transmission lines that are near the failure threshold and that collapse entirely if triggered.
== Applications ==
Several kinds of events that arise in nature or society may be avoided or mitigated with these ideas of control:
Flood caused by systems of dams and reservoirs or interconnected valleys.
Snow avalanches that take place in snow hills.
Forest fires in areas susceptible to a lightning bolt or a match lighting.
Cascades of load shedding that take place in power grids (a type of power outage). The OPA model is used to study different techniques for criticality control.
Cascading failure in the internet switching fabric.
Ischemic cascades, a series of biochemical reactions releasing toxins during moments of inadequate blood supply.
Systemic risk in financial systems.
Excursions in nuclear energy systems.
Earthquakes and induced seismicity.
The failure cascades in electrical transmission and financial sectors occur because economic forces that push for efficiency cause these systems to operate near a critical point, where avalanches of indeterminate size become possible. Financial investments that are vulnerable to this kind of failure may exhibit a Taleb distribution.
== See also ==
Abelian sandpile model
Complex networks
Self-organized criticality
== References ==
Too often systems fail, sometimes leading to significant loss of life, fortunes and confidence in the provider of a product or service. A simple and useful tool was needed to help analyze the interactions of groups and systems and identify possible unexpected consequences. The tool did not need to provide every possible outcome of the interactions, but it needed to give analysts and product/service development stakeholders a means to evaluate the potential risks of implementing new functionality in a system, and to serve as a brainstorming tool for judging whether a concept was viable from a business perspective. The control–feedback–abort loop and its analysis diagram are one such tool, and they have helped organizations analyze their system workflows and workflow exceptions.
The concept of the Control–Feedback–Abort (CFA) loop is based upon another concept, the control–feedback loop. The control–feedback loop has been around for many years and was the key concept in the development of many electronic designs, such as phase-locked loops. The core of the CFA loop concept was the need for corporate executives and staff to be able to anticipate the operation of the systems, processes, products and services they use and create before those systems are developed.
== History of CFA loop concept ==
The concept of the CFA loop was developed by T. James LeDoux, ‘Jim’, a Senior Consultant and software QA / test expert and owner of Alpha Group 3 LLC, a test management consulting company. In 1986, Mr. LeDoux, with assistance from Mr. Warren Yates, a former engineer from General Dynamics, Inc., found that using a Control and Feedback concept for analyzing group and system dynamics was not providing them with the full picture when systems were going out of control. In 1996, Jim LeDoux and Dr. Larry W. Smith, Ph.D., president of Remote Testing Services, Inc., discussed the issue at length and came to the conclusion that some other form of control must be present when a system goes out of control, even if the control is unintended.
In 1997, Mr. LeDoux used the change of behavior a person exhibits when driving a car at the time a police car pulls in behind them to describe how a change of control occurs. He demonstrated this phenomenon at a 2003 Product Development and Management Association (PDMA) meeting in Denver by showing the action of the first control (traffic, signs and speed) being aborted by the driver and a second control (police car, signs and speed) becoming the primary control. In 2004, Mr. LeDoux worked with Dr. Susan Wheeler, Ed. D., a former Instructional Design Consultant with Nims, Inc. and the present Director of Technology Services at Illinois Central College, to identify the range of uses for the CFA Loop. The CFA Loop is now being used to analyze system activities in several Fortune 100 companies. A discussion of its use is also included in the management book “Takeoff!: The Introduction to Project Management Book that Will Make Your Projects Take Off and Fly!” by Dr. Dan Price, D.M. ISBN 978-0-9707461-1-5
It was found that strong similarities existed between the concept of ‘Control Charts' and the CFA Loop. The difference between the two concepts is that control charting is used as a dynamic measurement of present conditions, whereas the CFA Loop is used to analyze how a closed-loop system is supposed to work and what the expectations are when alternate controls take over, by either intent or accident. A comparison of the CFA Loop and its relationship to Control Charts is presented in a later section of this discussion.
=== The control–feedback concept ===
The control–feedback concept consisted of a ‘Control’ that gave information on the way the component was to perform and then adjustments to the control's present operation based upon the feedback. It used a concept called ‘Sampling’ to determine how often the ‘Control’ used the ‘Feedback’ information so that the ‘Control’ could modify instructions to the component.
== What is the CFA loop ==
Figure 1 shows a model of the CFA loop. The CFA loop consists of three main elements – The Control element, the Feedback element and the Abort element. Within any system, the lack of any one of these three elements will result in the system failing at some point in time. The term ‘system’ used in this document can represent any environment, task, process, procedure or system in a physical, organizational or natural structure where an entity will respond to influences. It has been found, through experience, that even trees appear to follow the CFA model. The diagram in Figure 1 can be used as an analysis diagram by inserting functions of the controls, feedbacks and aborts in each of the related circles defining the system being analyzed. (Example: Control – Workflow requests, Feedback – Results of requests, Aborts – Requests that failed, workflow exception path)
The CFA model can be used effectively with 3-sigma control charts. CFA loops and Control Charts share the same functionality, which will be discussed later in this document.
== A description of the control–feedback–abort (CFA) loop ==
As mentioned, the CFA Loop consists of three elements – Control, Feedback and Abort. First, we will discuss the Control element of the loop.
=== The control element ===
The Control element of the CFA loop, as highlighted in Figure 2, controls the activity of the system in question. A basic characteristic of the Control element is that it is always in a static state until it receives new information from the feedback. This static state is, in reality, the Control element holding the system in a status quo condition. Using an automobile as an example, if the previous instruction provided by the Control to the auto was to accelerate, it would continue to accelerate until a feedback reading would indicate to the Control that the Control should issue an instruction to stop accelerating.
Remember, the idea of the static condition is not saying that nothing is happening but rather to say that nothing is changing in the instructions given to the system since the last instruction from the Control. If the last instruction by the Control is to accelerate, the system will continue to accelerate until told otherwise.
The Control element is the ‘primary control’ for the system. While everything is operating within a ‘normal’ operational mode, the Control element remains the primary control.
Figure 2 – CFA Loop – Control Element
=== The feedback element ===
The Feedback element feeds back information on the present state of the system. Because the Feedback element is always reading the present state of the system, it has the basic characteristic of always being in a ‘dynamic' state. This means that the feedback is reading constantly changing conditions. No system is ever in a non-changing condition unless it is off, no longer functioning or dead. Consider a computer in a wait state: it is still performing administrative activities even while it waits for some activity to happen. Change is the constant state of the Feedback element.
For this reason, the Feedback element needs to provide information to the Control element at intervals necessary to provide the Control element time to adequately respond to the changing environment. This interval period is called ‘sampling’, which will be discussed later in this document.
Figure 3 – CFA Loop – Feedback Element
The communication between the Control element and the Feedback element is performed by way of the ‘Primary Path’ (see Figure 4). The Primary Path is a bidirectional path that allows for the Control Element to request a sample of the information and the Feedback element to respond.
=== The abort element ===
The abort element (see Figure 5) is so named because it responds to conditions that resulted in the primary path being ‘aborted’. The Abort element then takes over the act of control until conditions can be brought back into acceptable parameters.
The ‘Alternate Path’ (see Figure 6) is used for communication between the Alternate Control (Abort) and the Feedback. The Feedback at this point may be a different set of feedbacks than was defined for the primary path.
To demonstrate that the Feedback may be another set of feedback elements, we look at the following example.
Let’s use the act of driving the auto once more for our example (see Figure 7). When a driver is driving the car, the primary path is the Control element (gas pedal) and the Feedback element (speedometer and street signs). Once a stop sign is detected ahead, the driver will take the foot off the gas pedal (primary control) and press the brake pedal (alternate control). Note that the driver is no longer looking at the speedometer or street signs once the auto gets to the stop sign. The driver is looking for other cars that may cross his path. In other words, the driver is looking for a different set of feedback sources. Once he feels it is safe to go, he will go back to the primary control and feedback and the primary path.
==== Sampling and the feedback element ====
In order for the Control element to be able to give proper instructions on what the system needs to do next, the information provided by the Feedback needs to be a true representation of the present conditions. If the feedback information is sampled by the Control element too often, it can put unnecessary demands on the system. If the information is not read often enough, considerable error can occur resulting in system failure. The solution to this dilemma is to sample when needed, at a rate that allows us to have confidence that we can still maintain control over the system.
Going back to our auto. The rate we sample the street signs for information is going to be different from when we look at the speedometer. We may also change our sampling rate when certain outside influences introduce themselves into the feedback mix. If we have a police car behind us, odds are that we will be sampling the speedometer much more often than if a police car was not there.
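The consequence of sampling too slowly can be made concrete with a tiny simulation. The first-order plant, gain and sample periods below are all assumptions chosen to illustrate the point, not part of the CFA material:

```python
# Sketch (assumed plant and gains): the same proportional controller is
# stable or unstable depending only on how often the Control element
# samples the Feedback.

def final_error(sample_period, gain=9.0, setpoint=1.0, steps=200):
    """Simulate x' = -x + u with u = gain*(setpoint - x), sampled-and-held."""
    x = 0.0
    for _ in range(steps):
        u = gain * (setpoint - x)          # control updated once per sample
        x = x + sample_period * (-x + u)   # plant evolves over one period
    return abs(setpoint - x)

print(final_error(0.05))  # fast sampling: settles to a small steady offset
print(final_error(0.50))  # slow sampling: the loop goes unstable
```

With the fast sample period the loop converges; with the slow one, each stale control command overshoots further than the last, exactly the "considerable error resulting in system failure" described above.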
== Creating the control loop diagram using the CFA loop ==
The Control Loop Diagram is a chart that provides a list of each of the conditions we discover during the analysis of the interaction of the specific item in question. A basic Control Loop Diagram is shown in Table 1.
Table 1 – Control loop diagram template
The Control Loop Diagram provides a vehicle for the CFA Loop to be used effectively. The following is a sequence that allows for us to create the CFA Loop analysis information and convert it into a Control Loop Diagram. The process is:
A. Identify the perspective of the CFA Loop.
It is important to know what the perspective is. We may be looking at the environment from a specific perspective (i.e. from the viewpoint of a Test Manager looking at defects or a Development Manager looking at versions.) The perspective will determine what is to be the Control and what is providing the Feedback for the analysis.
B. Identify what is controlling the environment.
C. Identify the Feedback components.
By identifying the controlling environment and the feedback elements, we can identify the parameters of the primary path.
D. Identify the conditions that would lead to an abort of the primary path.
The abort conditions can give us an insight into the limitations and boundaries the primary path must operate within.
E. Identify the processes the Control will use to manage the environment.
The interaction between the control and feedback elements can now be analyzed and the resulting information can be mapped into the Control Loop Diagram.
F. Identify the processes used when the Abort is given control.
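The six analysis steps above could be captured in a simple data model before being mapped into a Control Loop Diagram. The following Python sketch is purely illustrative; its field names and example entries are assumptions, not part of the published method:

```python
# Hypothetical data model for a CFA Loop analysis session (steps A-F).
from dataclasses import dataclass, field

@dataclass
class CFALoop:
    perspective: str                                       # A. viewpoint of the analysis
    control: str                                           # B. what controls the environment
    feedbacks: list = field(default_factory=list)          # C. feedback components
    abort_conditions: list = field(default_factory=list)   # D. primary-path aborts
    control_processes: list = field(default_factory=list)  # E. normal-mode processes
    abort_processes: list = field(default_factory=list)    # F. alternate-control processes

loop = CFALoop(
    perspective="Test Manager looking at defects",
    control="Version control / workflow requests",
    feedbacks=["Results of requests", "Defect reports"],
    abort_conditions=["Request failed", "Workflow exception path"],
)
print(loop.control, len(loop.feedbacks))
```

Each populated field then corresponds to one region of the Control Loop Diagram during a brainstorming session.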
=== An example of the CFA loop – control loop diagram relationship ===
The following CFA Loop and Control Loop Diagram exhibit the relationship between a Version Control/Defect Reporting CFA Loop (Figure 8) and its associated Control Loop Diagram (Table 2).
A Control Loop Diagram for the CFA Loop with a focus on Version Control as the Control element (see Table 2) should look similar to the following table (mapped during an analysis brainstorming session):
Table 2 – Control Loop Diagram
== Control charts ==
Control Charts have a very close relationship to the CFA Loop. Control Charts are used to provide a means of tracking the trend and condition of a specific measured item. The Control Chart (see Figure 9) uses the standard deviation of sampled items to determine whether the item is in bounds (within acceptable conditions) or out of bounds (outside of acceptable conditions). The +3σ limit is also identified as the Upper Defined-Control Limit, or UDL; the −3σ limit is also known as the Lower Defined-Control Limit, or LDL.
Those items that are in bounds are considered to be in control (see Figure 10). They can be the Control element of the CFA Loop.
Those items that are out of bounds are said to be out of control (see Figure 11). The out of bounds areas can also be identified as the Abort element of the CFA Loop.
Remember that, as mentioned earlier in this document, the CFA Loop and the Control Chart share similar functions; the difference is in their use and objectives. We have already seen the Control and Abort similarities.
Let’s look at a Control Chart (see Figure 12) and compare the information in the Control Chart with the CFA Loop elements.
The ‘in bounds’ area is our Control element. As long as our data points, sometimes called items, are within the ‘in bounds’ area, we are said to be in control. The data points are the Feedback element. The ‘out of bounds’ areas are the Abort elements. Notice that data point 4 is in the ‘out of bounds’ area, which should lead to control being passed to the Abort element in order to take action to bring the future data points back into control. During the analysis of the system operation using the CFA Loop, the abort mechanism should have been clearly identified so that when the system goes out of bounds during operation, the alternate control should have been activated and the alternate action should be no surprise to the system designers.
The benefit of using control charts lies in their ability to report the dynamic conditions of a system in operation. By data point 2, we should be able to see that if the data follows the trend set by the previous data points, the data will go out of control at some point. This ability to see the trend allows the chart user to take early action to ensure that the system stays in control, or to monitor automated abort processes used to bring the system back into control.
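The 3-sigma in-bounds test described above can be sketched in a few lines. The baseline data and the drifting point are invented for illustration:

```python
# Sketch of the 3-sigma control-chart check (assumed sample data).
from statistics import mean, stdev

def control_limits(samples):
    """Return (LDL, UDL) = mean -/+ 3 sample standard deviations."""
    m, s = mean(samples), stdev(samples)
    return m - 3 * s, m + 3 * s

def out_of_bounds(samples, ldl, udl):
    """Indices of data points that should hand control to the Abort element."""
    return [i for i, x in enumerate(samples) if not (ldl <= x <= udl)]

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
ldl, udl = control_limits(baseline)
new_points = [10.0, 10.2, 10.5, 12.0]  # the fourth point drifts out of bounds
print(out_of_bounds(new_points, ldl, udl))  # → [3] (the fourth point)
```

In CFA terms, the in-bounds band is the Control element's territory, the data points are the Feedback, and a flagged index is the trigger that passes control to the Abort element.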
== Notes and references ==
== External links ==
A feed forward (sometimes written feedforward) is an element or pathway within a control system that passes a controlling signal from a source in its external environment to a load elsewhere in its external environment. This is often a command signal from an external operator.
In control engineering, a feedforward control system is a control system that uses sensors to detect disturbances affecting the system and then applies an additional input to minimize the effect of the disturbance. This requires a mathematical model of the system so that the effect of disturbances can be properly predicted.
A control system which has only feed-forward behavior responds to its control signal in a pre-defined way without responding to the way the system reacts; it is in contrast with a system that also has feedback, which adjusts the input to take account of how it affects the system, and how the system itself may vary unpredictably.
In a feed-forward system, the control variable adjustment is not error-based. Instead it is based on knowledge about the process in the form of a mathematical model of the process and knowledge about, or measurements of, the process disturbances.
Some prerequisites are needed for a control scheme to be reliable by pure feed-forward without feedback: the external command or controlling signal must be available, and the effect of the output of the system on the load should be known (which usually means that the load must be predictably unchanging with time). Sometimes pure feed-forward control without feedback is called 'ballistic', because once a control signal has been sent, it cannot be further adjusted; any corrective adjustment must be made by way of a new control signal. In contrast, 'cruise control' adjusts the output in response to the load that it encounters, by a feedback mechanism.
These systems could relate to control theory, physiology, or computing.
== Overview ==
With feed-forward or feedforward control, the disturbances are measured and accounted for before they have time to affect the system. In the house example, a feed-forward system may measure the fact that the door is opened and automatically turn on the heater before the house can get too cold. The difficulty with feed-forward control is that the effects of the disturbances on the system must be accurately predicted, and there must not be any unmeasured disturbances. For instance, if a window was opened that was not being measured, the feed-forward-controlled thermostat might let the house cool down.
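The house-heating example above can be sketched as a control law that combines a feedback term with a model-based feedforward term. All the numbers (gain, door heat-loss estimate) are assumptions for illustration:

```python
# Sketch: a measured disturbance (open door) feeds forward into the heater
# command before the room temperature has time to drop.

def heater_power(setpoint, temp, door_open, *,
                 k_feedback=50.0, door_loss_watts=400.0):
    """Feedback term plus a model-based feedforward term for the door."""
    feedback = k_feedback * (setpoint - temp)  # reacts only after an error appears
    feedforward = door_loss_watts if door_open else 0.0  # acts immediately
    return max(0.0, feedback + feedforward)

# With no temperature error, only the feedforward term changes the command
# the instant the door opens: no error is needed for it to act.
print(heater_power(21.0, 21.0, door_open=False))  # → 0.0
print(heater_power(21.0, 21.0, door_open=True))   # → 400.0
```

This also shows the article's caveat: an unmeasured disturbance (the open window) has no feedforward term, so only the feedback path would eventually correct it.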
The term has specific meaning within the field of CPU-based automatic control. The discipline of feedforward control as it relates to modern, CPU based automatic controls is widely discussed, but is seldom practiced due to the difficulty and expense of developing or providing for the mathematical model required to facilitate this type of control. Open-loop control and feedback control, often based on canned PID control algorithms, are much more widely used.
There are three types of control systems: open loop, feed-forward, and feedback. An example of a pure open loop control system is manual non-power-assisted steering of a motor car; the steering system does not have access to an auxiliary power source and does not respond to varying resistance to turning of the direction wheels; the driver must make that response without help from the steering system. In comparison, power steering has access to a controlled auxiliary power source, which depends on the engine speed. When the steering wheel is turned, a valve is opened which allows fluid under pressure to turn the driving wheels. A sensor monitors that pressure so that the valve only opens enough to cause the correct pressure to reach the wheel turning mechanism. This is feed-forward control where the output of the system, the change in direction of travel of the vehicle, plays no part in the system. See Model predictive control.
If the driver is included in the system, then they do provide a feedback path by observing the direction of travel and compensating for errors by turning the steering wheel. In that case you have a feedback system, and the block labeled System in Figure(c) is a feed-forward system.
In other words, systems of different types can be nested, and the overall system regarded as a black-box.
Feedforward control is distinctly different from open loop control and teleoperator systems. Feedforward control requires a mathematical model of the plant (process and/or machine being controlled) and the plant's relationship to any inputs or feedback the system might receive. Neither open loop control nor teleoperator systems require the sophistication of a mathematical model of the physical system or plant being controlled. Control based on operator input without integral processing and interpretation through a mathematical model of the system is a teleoperator system and is not considered feedforward control.
== History ==
Historically, the use of the term feedforward is found in works by Harold S. Black in US patent 1686792 (invented 17 March 1923) and D. M. MacKay as early as 1956. While MacKay's work is in the field of biological control theory, he speaks only of feedforward systems. MacKay does not mention feedforward control or allude to the discipline of feedforward controls. MacKay and other early writers who use the term feedforward are generally writing about theories of how human or animal brains work. Black also has US patent 2102671 invented 2 August 1927 on the technique of feedback applied to electronic systems.
The discipline of feedforward controls was largely developed by professors and graduate students at Georgia Tech, MIT, Stanford and Carnegie Mellon. Feedforward is not typically hyphenated in scholarly publications. Meckl and Seering of MIT and Book and Dickerson of Georgia Tech began the development of the concepts of Feedforward Control in the mid-1970s. The discipline of Feedforward Controls was well defined in many scholarly papers, articles and books by the late 1980s.
== Benefits ==
The benefits of feedforward control are significant and can often justify the extra cost, time and effort required to implement the technology. Control accuracy can often be improved by as much as an order of magnitude if the mathematical model is of sufficient quality and implementation of the feedforward control law is well thought out. Energy consumption by the feedforward control system and its driver is typically substantially lower than with other controls. Stability is enhanced such that the controlled device can be built of lower cost, lighter weight, springier materials while still being highly accurate and able to operate at high speeds. Other benefits of feedforward control include reduced wear and tear on equipment, lower maintenance costs, higher reliability and a substantial reduction in hysteresis. Feedforward control is often combined with feedback control to optimize performance.
== Model ==
The mathematical model of the plant (machine, process or organism) used by the feedforward control system may be created and input by a control engineer or it may be learned by the control system. Control systems capable of learning and/or adapting their mathematical model have become more practical as microprocessor speeds have increased. The discipline of modern feedforward control was itself made possible by the invention of microprocessors.
Feedforward control requires integration of the mathematical model into the control algorithm such that it is used to determine the control actions based on what is known about the state of the system being controlled. In the case of control for a lightweight, flexible robotic arm, this could be as simple as compensating between when the robot arm is carrying a payload and when it is not. The target joint angles are adjusted to place the payload in the desired position based on knowing the deflections in the arm from the mathematical model's interpretation of the disturbance caused by the payload. Systems that plan actions and then pass the plan to a different system for execution do not satisfy the above definition of feedforward control. Unless the system includes a means to detect a disturbance or receive an input and process that input through the mathematical model to determine the required modification to the control action, it is not true feedforward control.
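The flexible-arm payload compensation described above might look like the following. The deflection model (a rigid link on an elastic joint) and every numerical value are assumptions; a real arm model would be more elaborate:

```python
# Sketch: the commanded joint angle is pre-compensated using a model of how
# the payload bends the arm, rather than waiting for a position error.
import math

def commanded_angle(target_angle_rad, payload_kg, *,
                    arm_length_m=0.8, stiffness_nm_per_rad=900.0, g=9.81):
    """Offset the target by the model-predicted gravity-induced deflection."""
    # Gravity torque of the payload about the joint, at the target posture.
    torque = payload_kg * g * arm_length_m * math.cos(target_angle_rad)
    # Static deflection of an elastic joint of given torsional stiffness.
    deflection = torque / stiffness_nm_per_rad
    return target_angle_rad + deflection  # command high so the tip lands on target

print(commanded_angle(0.0, 0.0))  # no payload: command equals target
print(commanded_angle(0.0, 2.0))  # with payload: small positive pre-compensation
```

The key feedforward property is visible here: the payload measurement is processed through the model to modify the control action directly, with no error signal involved.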
=== Open system ===
In systems theory, an open system is a feed forward system that does not have any feedback loop to control its output. In contrast, a closed system uses a feedback loop to control the operation of the system. In an open system, the output of the system is not fed back into the input of the system for control or operation.
== Applications ==
=== Physiological feed-forward system ===
In physiology, feed-forward control is exemplified by the normal anticipatory regulation of heartbeat in advance of actual physical exertion by the central autonomic network. Feed-forward control can be likened to learned anticipatory responses to known cues (predictive coding). Feedback regulation of the heartbeat provides further adaptiveness to the running eventualities of physical exertion. Feedforward systems are also found in the biological control of other variables by many regions of animals' brains.
Even in the case of biological feedforward systems, such as in the human brain, knowledge or a mental model of the plant (body) can be considered to be mathematical as the model is characterized by limits, rhythms, mechanics and patterns.
A pure feed-forward system is different from a homeostatic control system, which has the function of keeping the body's internal environment 'steady' or in a 'prolonged steady state of readiness.' A homeostatic control system relies mainly on feedback (especially negative), in addition to the feedforward elements of the system.
=== Gene regulation and feed-forward ===
Feed-forward loops (FFLs), three-node graphs of the form "A affects B and C, and B affects C", are frequently observed in transcription networks in several organisms including E. coli and S. cerevisiae, suggesting that they perform functions important for these organisms. In the E. coli and S. cerevisiae transcription networks, which have been extensively studied, FFLs occur approximately three times more frequently than expected based on random (Erdős–Rényi) networks.
Edges in transcription networks are directed and signed, as they represent activation (+) or repression (-). The sign of a path in a transcription network can be obtained by multiplying the signs of the edges in the path, so a path with an odd number of negative signs is negative. There are eight possible three-node FFLs as each of the three arrows can be either repression or activation, which can be classified into coherent or incoherent FFLs. Coherent FFLs have the same sign for both the paths from A to C, and incoherent FFLs have different signs for the two paths.
The temporal dynamics of FFLs show that coherent FFLs can act as sign-sensitive delays that filter the input into the circuit. Consider the differential equations for a type-1 coherent FFL, in which all the arrows are positive:
{\displaystyle {\frac {\delta B}{\delta t}}=\beta _{B}(A)-\gamma _{B}B}
{\displaystyle {\frac {\delta C}{\delta t}}=\beta _{C}(A,B)-\gamma _{C}C}
where {\displaystyle \beta _{B}} and {\displaystyle \beta _{C}} are increasing functions of {\displaystyle A} and {\displaystyle B} representing production, and {\displaystyle \gamma _{B}} and {\displaystyle \gamma _{C}} are rate constants representing degradation or dilution of {\displaystyle B} and {\displaystyle C} respectively. {\displaystyle \beta _{C}(A,B)} can represent an AND gate, where {\displaystyle \beta _{C}(A,B)=0} if either {\displaystyle A=0} or {\displaystyle B=0}, for instance if {\displaystyle \beta _{C}(A,B)=\beta _{C}\theta _{A}(A>k_{AC})\theta _{B}(B>k_{BC})} where {\displaystyle \theta _{A}} and {\displaystyle \theta _{B}} are step functions. In this case the FFL creates a time delay between a sustained ON signal, i.e. an increase in {\displaystyle A}, and the output increase in {\displaystyle C}. This is because production of {\displaystyle A} must first induce production of {\displaystyle B}, which is then needed to induce production of {\displaystyle C}. However, there is no time delay for an OFF signal, because a reduction of {\displaystyle A} immediately results in a decrease in the production term {\displaystyle \beta _{C}(A,B)}. This system therefore filters out fluctuations in the ON signal and detects persistent signals. This is particularly relevant in settings with stochastically fluctuating signals. In bacteria these circuits create time delays ranging from a few minutes to a few hours.
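The sign-sensitive delay can be reproduced with a small Euler-integration sketch of this circuit; the parameter values and the threshold are illustrative assumptions:

```python
def simulate_ffl(a_signal, beta=1.0, gamma=1.0, k_b=0.5, dt=0.01):
    """Type-1 coherent FFL with an AND gate: producing C needs both A on and B > k_b."""
    b = c = 0.0
    c_hist = []
    for a in a_signal:
        prod_c = beta if (a > 0.0 and b > k_b) else 0.0  # AND gate
        prod_b = beta if a > 0.0 else 0.0
        b += (prod_b - gamma * b) * dt                   # Euler step for B
        c += (prod_c - gamma * c) * dt                   # Euler step for C
        c_hist.append(c)
    return c_hist

# Sustained ON step for 5 time units, then OFF.
c_hist = simulate_ffl([1.0] * 500 + [0.0] * 500)
```

In this run C stays at zero until B has accumulated past the threshold (the ON delay), but begins to decay immediately once A is removed (no OFF delay).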
Similarly, an inclusive-OR gate in which {\displaystyle C} is activated by either {\displaystyle A} or {\displaystyle B} is a sign-sensitive delay with no delay after the ON step but with a delay after the OFF step. This is because an ON pulse immediately activates B and C, but an OFF step does not immediately result in deactivation of C because B can still be active. This can protect the system from fluctuations that result in the transient loss of the ON signal and can also provide a form of memory. Kalir, Mangan, and Alon (2005) show that the regulatory system for flagella in E. coli is regulated with a type-1 coherent feedforward loop.
For instance, the regulation of the shift from one carbon source to another in diauxic growth in E. coli can be controlled via a type-1 coherent FFL. In diauxic growth, cells grow using two carbon sources by first rapidly consuming the preferred carbon source, then slowing growth in a lag phase before consuming the second, less preferred carbon source. In E. coli, glucose is preferred over both arabinose and lactose. The absence of glucose is signaled via the small molecule cAMP. Diauxic growth in glucose and lactose is regulated by a simple regulatory system involving cAMP and the lac operon. However, growth in arabinose is regulated by a feedforward loop with an AND gate, which confers an approximately 20-minute time delay between the ON step, in which cAMP concentration increases as glucose is consumed, and the expression of arabinose transporters. There is no time delay for the OFF signal, which occurs when glucose is present. This prevents the cell from shifting to growth on arabinose on the basis of short-term fluctuations in glucose availability.
Additionally, feedforward loops can facilitate cellular memory. Doncic and Skotheim (2003) show this effect in the regulation of mating in yeast, where extracellular mating pheromone induces mating behavior, including preventing cells from entering the cell cycle. The mating pheromone activates the MAPK pathway, which then activates the cell-cycle inhibitor Far1 and the transcription factor Ste12, which in turn increases the synthesis of inactive Far1. In this system, the concentration of active Far1 depends on the time integral of a function of the external mating pheromone concentration. This dependence on past levels of mating pheromone is a form of cellular memory. This system simultaneously allows for both stability and reversibility.
Incoherent feedforward loops, in which the two paths from the input to the output node have different signs, result in short pulses in response to an ON signal. In this system, input A simultaneously increases synthesis of output node C directly and decreases it indirectly. If the indirect path to C (via B) is slower than the direct path, a pulse of output is produced in the time period before levels of B are high enough to inhibit synthesis of C. The response to epidermal growth factor (EGF) in dividing mammalian cells is an example of a type-1 incoherent FFL.
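The pulse generation can be seen in a similar illustrative sketch, where A is held ON while B accumulates and eventually represses C (all parameter values are assumptions for demonstration):

```python
def simulate_incoherent_ffl(steps=2000, beta=1.0, gamma=1.0, k_b=0.5, dt=0.01):
    """Type-1 incoherent FFL: A activates C directly and represses it via B."""
    b = c = 0.0
    c_hist = []
    for _ in range(steps):                   # A is ON throughout
        prod_c = beta if b < k_b else 0.0    # B above threshold represses C
        b += (beta - gamma * b) * dt
        c += (prod_c - gamma * c) * dt
        c_hist.append(c)
    return c_hist

c_hist = simulate_incoherent_ffl()
peak = max(c_hist)
```

C rises while B is still low, then decays once B crosses its repression threshold, producing a transient pulse despite the sustained input.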
The frequent observation of feed-forward loops in various biological contexts across multiple scales suggests that they have structural properties that are highly adaptive in many contexts. Several theoretical and experimental studies including those discussed here show that FFLs create a mechanism for biological systems to process and store information, which is important for predictive behavior and survival in complex dynamically changing environments.
=== Feed-forward systems in computing ===
In computing, feed-forward normally refers to a perceptron network in which the outputs from all neurons go to following but not preceding layers, so there are no feedback loops. The connections are set up during a training phase, during which the system in effect operates as a feedback system.
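A forward pass through such a network can be sketched as follows; the weights are fixed illustrative values standing in for the result of training:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a logistic activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def forward(x):
    # Activations flow strictly input -> hidden -> output: no feedback loops.
    hidden = layer(x, weights=[[1.0, -1.0], [0.5, 0.5]], biases=[0.0, 0.0])
    output = layer(hidden, weights=[[1.0, 1.0]], biases=[-1.0])
    return output[0]

y = forward([1.0, 0.0])
```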
=== Long distance telephony ===
In the early 1970s, intercity coaxial transmission systems, including L-carrier, used feed-forward amplifiers to diminish linear distortion. This more complex method allowed wider bandwidth than earlier feedback systems. Optical fiber, however, made such systems obsolete before many were built.
=== Automation and machine control ===
Feedforward control is a discipline within the field of automatic controls used in automation.
=== Parallel feed-forward compensation with derivative (PFCD) ===
This relatively new technique changes the phase of the open-loop transfer function of a non-minimum phase system into minimum phase.
== See also ==
Black box
Smith predictor
== References ==
== Further reading == | Wikipedia/Feed_forward_(control) |
In numerical methods for stochastic differential equations, the Markov chain approximation method (MCAM) is one of several numerical schemes used in stochastic control theory. Simple adaptations of deterministic schemes such as the Runge–Kutta method generally fail for stochastic models.
It is a powerful and widely applicable set of ideas for numerical and other approximation problems in stochastic processes, representing counterparts of methods from deterministic control theory such as optimal control theory.
The basic idea of the MCAM is to approximate the original controlled process by a chosen controlled Markov chain on a finite state space. If needed, one must also approximate the cost function by one consistent with the Markov chain chosen to approximate the original stochastic process.
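For a one-dimensional diffusion dX = b(x) dt + σ dW, the standard upwind finite-difference construction of a locally consistent chain on a grid of spacing h can be sketched as follows (function names and parameter values are illustrative):

```python
def transition_probs(b, sigma, h):
    """Up/down transition probabilities and interpolation interval dt for the
    Markov chain approximating dX = b dt + sigma dW on a grid of spacing h."""
    q = sigma ** 2 + h * abs(b)                        # normalizer
    p_up = (sigma ** 2 / 2.0 + h * max(b, 0.0)) / q
    p_down = (sigma ** 2 / 2.0 + h * max(-b, 0.0)) / q
    dt = h ** 2 / q
    return p_up, p_down, dt

p_up, p_down, dt = transition_probs(b=0.3, sigma=1.0, h=0.1)
# Local consistency: one chain step has mean h*(p_up - p_down) = b*dt and
# second moment approximately sigma^2 * dt, matching the diffusion.
mean_step = 0.1 * (p_up - p_down)
```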
== See also ==
Control theory
Optimal control
Stochastic differential equation
Differential equation
Numerical analysis
Stochastic process
== References == | Wikipedia/Markov_chain_approximation_method |
In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal (or more rigorously, a set-valued control signal) that forces the system to "slide" along a cross-section of the system's normal behavior. The state-feedback control law is not a continuous function of time. Instead, it can switch from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward an adjacent region with a different control structure, and so the ultimate trajectory will not exist entirely within one control structure. Instead, it will slide along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a sliding mode and the geometrical locus consisting of the boundaries is called the sliding (hyper)surface. In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system, as the system both flows through a continuous state space and moves through different discrete control modes.
== Introduction ==
Figure 1 shows an example trajectory of a system under sliding mode control. The sliding surface is described by {\displaystyle s=0}, and the sliding mode along the surface commences after the finite time when system trajectories have reached the surface. In the theoretical description of sliding modes, the system stays confined to the sliding surface and need only be viewed as sliding along the surface. However, real implementations of sliding mode control approximate this theoretical behavior with a high-frequency and generally non-deterministic switching control signal that causes the system to "chatter" in a tight neighborhood of the sliding surface. Chattering can be reduced through the use of deadbands or boundary layers around the sliding surface, or other compensatory methods. Although the system is nonlinear in general, the idealized (i.e., non-chattering) behavior of the system in Figure 1 when confined to the {\displaystyle s=0} surface is an LTI system with an exponentially stable origin.
One of the compensatory methods is the adaptive sliding mode control method, which uses an estimated uncertainty to construct a continuous control law. In this method chattering is eliminated while preserving accuracy (for more details see references [2] and [3]). The three distinguishing features of the adaptive sliding mode controller are as follows: (i) The structured (or parametric) uncertainties and unstructured uncertainties (un-modeled dynamics, unknown external disturbances) are synthesized into a single uncertainty term called the lumped uncertainty. Therefore, a linearly parameterized dynamic model of the system is not required, and the simple structure and computational efficiency of this approach make it suitable for real-time control applications. (ii) The adaptive sliding mode control scheme relies on the online estimated uncertainty vector rather than on the worst-case scenario (i.e., on bounds of the uncertainties). Therefore, a priori knowledge of the bounds of the uncertainties is not required, and at each time instant the control input compensates for the uncertainty that exists. (iii) The continuous control law, developed using fundamentals of sliding mode control theory, eliminates the chattering phenomenon without the trade-off between performance and robustness that is prevalent in the boundary-layer approach.
Intuitively, sliding mode control uses practically infinite gain to force the trajectories of a dynamic system to slide along the restricted sliding mode subspace. Trajectories from this reduced-order sliding mode have desirable properties (e.g., the system naturally slides along it until it comes to rest at a desired equilibrium). The main strength of sliding mode control is its robustness. Because the control can be as simple as a switching between two states (e.g., "on"/"off" or "forward"/"reverse"), it need not be precise and will not be sensitive to parameter variations that enter into the control channel. Additionally, because the control law is not a continuous function, the sliding mode can be reached in finite time (i.e., better than asymptotic behavior). Under certain common conditions, optimality requires the use of bang–bang control; hence, sliding mode control describes the optimal controller for a broad set of dynamic systems.
One application of sliding mode controllers is the control of electric drives operated by switching power converters. Because of the discontinuous operating mode of those converters, a discontinuous sliding mode controller is a natural implementation choice over continuous controllers that may need to be applied by means of pulse-width modulation or a similar technique of applying a continuous signal to an output that can only take discrete states. Sliding mode control has many applications in robotics. In particular, this control algorithm has been used for tracking control of unmanned surface vessels in simulated rough seas with a high degree of success.
Sliding mode control must be applied with more care than other forms of nonlinear control that have more moderate control action. In particular, because actuators have delays and other imperfections, the hard sliding-mode-control action can lead to chatter, energy loss, plant damage, and excitation of unmodeled dynamics. Continuous control design methods are not as susceptible to these problems and can be made to mimic sliding-mode controllers.
== Control scheme ==
Consider a nonlinear dynamical system described by
{\displaystyle {\dot {\mathbf {x} }}(t)=f(\mathbf {x} (t),t)+B(\mathbf {x} (t),t)\,\mathbf {u} (t)\qquad {\text{(1)}}}
where
{\displaystyle \mathbf {x} (t)\triangleq {\begin{bmatrix}x_{1}(t)\\x_{2}(t)\\\vdots \\x_{n-1}(t)\\x_{n}(t)\end{bmatrix}}\in \mathbb {R} ^{n}}
is an n-dimensional state vector and
{\displaystyle \mathbf {u} (t)\triangleq {\begin{bmatrix}u_{1}(t)\\u_{2}(t)\\\vdots \\u_{m-1}(t)\\u_{m}(t)\end{bmatrix}}\in \mathbb {R} ^{m}}
is an m-dimensional input vector that will be used for state feedback. The functions {\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} \to \mathbb {R} ^{n}} and {\displaystyle B:\mathbb {R} ^{n}\times \mathbb {R} \to \mathbb {R} ^{n\times m}} are assumed to be continuous and sufficiently smooth that the Picard–Lindelöf theorem guarantees that the solution {\displaystyle \mathbf {x} (t)} of Equation (1) exists and is unique.
A common task is to design a state-feedback control law {\displaystyle \mathbf {u} (\mathbf {x} (t))} (i.e., a mapping from the current state {\displaystyle \mathbf {x} (t)} at time t to the input {\displaystyle \mathbf {u} }) to stabilize the dynamical system in Equation (1) around the origin {\displaystyle \mathbf {x} =[0,0,\ldots ,0]^{\intercal }}. That is, under the control law, whenever the system is started away from the origin, it will return to it. For example, the component {\displaystyle x_{1}} of the state vector {\displaystyle \mathbf {x} } may represent the difference between some output and a known signal (e.g., a desirable sinusoidal signal); if the control {\displaystyle \mathbf {u} } can ensure that {\displaystyle x_{1}} quickly returns to {\displaystyle x_{1}=0}, then the output will track the desired sinusoid. In sliding-mode control, the designer knows that the system behaves desirably (e.g., it has a stable equilibrium) provided that it is constrained to a subspace of its configuration space. Sliding mode control forces the system trajectories into this subspace and then holds them there so that they slide along it. This reduced-order subspace is referred to as a sliding (hyper)surface, and when closed-loop feedback forces trajectories to slide along it, it is referred to as a sliding mode of the closed-loop system. Trajectories along this subspace can be likened to trajectories along eigenvectors (i.e., modes) of LTI systems; however, the sliding mode is enforced by creasing the vector field with high-gain feedback. Like a marble rolling along a crack, trajectories are confined to the sliding mode.
The sliding-mode control scheme involves
Selection of a hypersurface or a manifold (i.e., the sliding surface) such that the system trajectory exhibits desirable behavior when confined to this manifold.
Finding feedback gains so that the system trajectory intersects and stays on the manifold.
Because sliding mode control laws are not continuous, they can drive trajectories to the sliding mode in finite time (i.e., stability of the sliding surface is better than asymptotic). However, once the trajectories reach the sliding surface, the system takes on the character of the sliding mode (e.g., the origin {\displaystyle \mathbf {x} =\mathbf {0} } may only have asymptotic stability on this surface).
The sliding-mode designer picks a switching function {\displaystyle \sigma :\mathbb {R} ^{n}\to \mathbb {R} ^{m}} that represents a kind of "distance" that the states {\displaystyle \mathbf {x} } are away from a sliding surface.
A state {\displaystyle \mathbf {x} } that is outside of this sliding surface has {\displaystyle \sigma (\mathbf {x} )\neq 0}.
A state that is on this sliding surface has {\displaystyle \sigma (\mathbf {x} )=0}.
The sliding-mode-control law switches from one state to another based on the sign of this distance. So the sliding-mode control acts like a stiff pressure always pushing in the direction of the sliding mode where {\displaystyle \sigma (\mathbf {x} )=0}. Desirable {\displaystyle \mathbf {x} (t)} trajectories will approach the sliding surface, and because the control law is not continuous (i.e., it switches from one state to another as trajectories move across this surface), the surface is reached in finite time. Once a trajectory reaches the surface, it will slide along it and may, for example, move toward the {\displaystyle \mathbf {x} =\mathbf {0} } origin. So the switching function is like a topographic map with a contour of constant height along which trajectories are forced to move.
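A minimal simulation illustrates this behavior for a double integrator {\displaystyle {\ddot {x}}=u} with switching function σ = cx + ẋ; the gains and the Euler step size are assumptions for demonstration:

```python
def sgn(s):
    return (s > 0) - (s < 0)

def simulate(x=1.0, v=0.0, c=1.0, k=2.0, dt=0.001, steps=10000):
    """Double integrator xdd = u under the discontinuous law u = -k*sgn(sigma),
    with switching function sigma = c*x + v."""
    sigma_hist = []
    for _ in range(steps):
        sigma = c * x + v
        u = -k * sgn(sigma)   # switch based on which side of the surface we are on
        v += u * dt           # semi-implicit Euler step
        x += v * dt
        sigma_hist.append(sigma)
    return x, sigma_hist

x_end, sigma_hist = simulate()
```

The trajectory reaches a small neighborhood of σ = 0 in finite time and then slides along it, with x decaying toward the origin; the residual oscillation of σ around zero is the chattering mentioned in the introduction.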
The sliding (hyper)surface/manifold is typically of dimension {\displaystyle n-m}, where n is the number of states in {\displaystyle \mathbf {x} } and m is the number of input signals (i.e., control signals) in {\displaystyle \mathbf {u} }. For each control index {\displaystyle 1\leq k\leq m}, there is an {\displaystyle (n-1)}-dimensional sliding surface given by
{\displaystyle \left\{\mathbf {x} \in \mathbb {R} ^{n}:\sigma _{k}(\mathbf {x} )=0\right\}\qquad {\text{(2)}}}
The vital part of SMC design is to choose a control law so that the sliding mode (i.e., the surface given by {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} }) exists and is reachable along system trajectories. The principle of sliding mode control is to forcibly constrain the system, by a suitable control strategy, to stay on the sliding surface, on which the system will exhibit desirable features. When the system is constrained by the sliding control to stay on the sliding surface, the system dynamics are governed by a reduced-order system obtained from Equation (2).
To force the system states {\displaystyle \mathbf {x} } to satisfy {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} }, one must:
Ensure that the system is capable of reaching {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} } from any initial condition
Having reached {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} }, ensure that the control action is capable of maintaining the system at {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} }
=== Existence of closed-loop solutions ===
Note that because the control law is not continuous, it is certainly not locally Lipschitz continuous, and so existence and uniqueness of solutions to the closed-loop system is not guaranteed by the Picard–Lindelöf theorem. Thus the solutions are to be understood in the Filippov sense. Roughly speaking, the resulting closed-loop system moving along {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} } is approximated by the smooth dynamics {\displaystyle {\dot {\sigma }}(\mathbf {x} )=\mathbf {0} ;} however, this smooth behavior may not be truly realizable. Similarly, high-speed pulse-width modulation or delta-sigma modulation produces outputs that only assume two states, but the effective output swings through a continuous range of motion. These complications can be avoided by using a different nonlinear control design method that produces a continuous controller. In some cases, sliding-mode control designs can be approximated by other continuous control designs.
== Theoretical foundation ==
The following theorems form the foundation of variable structure control.
=== Theorem 1: Existence of sliding mode ===
Consider a Lyapunov function candidate
{\displaystyle V(\sigma )={\tfrac {1}{2}}\sigma ^{\intercal }\sigma }
where {\displaystyle \|{\mathord {\cdot }}\|} is the Euclidean norm (i.e., {\displaystyle \|\sigma (\mathbf {x} )\|_{2}} is the distance away from the sliding manifold where {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} }). For the system given by Equation (1) and the sliding surface given by Equation (2), a sufficient condition for the existence of a sliding mode is that
{\displaystyle \underbrace {\overbrace {\sigma ^{\intercal }} ^{\tfrac {\partial V}{\partial \sigma }}\overbrace {\dot {\sigma }} ^{\tfrac {\operatorname {d} \sigma }{\operatorname {d} t}}} _{\tfrac {\operatorname {d} V}{\operatorname {d} t}}<0\qquad {\text{(i.e., }}{\tfrac {\operatorname {d} V}{\operatorname {d} t}}<0{\text{)}}}
in a neighborhood of the surface given by {\displaystyle \sigma (\mathbf {x} )=0}.
Roughly speaking (i.e., for the scalar control case when {\displaystyle m=1}), to achieve {\displaystyle \sigma ^{\intercal }{\dot {\sigma }}<0}, the feedback control law {\displaystyle u(\mathbf {x} )} is picked so that {\displaystyle \sigma } and {\displaystyle {\dot {\sigma }}} have opposite signs. That is,
{\displaystyle u(\mathbf {x} )} makes {\displaystyle {\dot {\sigma }}(\mathbf {x} )} negative when {\displaystyle \sigma (\mathbf {x} )} is positive.
{\displaystyle u(\mathbf {x} )} makes {\displaystyle {\dot {\sigma }}(\mathbf {x} )} positive when {\displaystyle \sigma (\mathbf {x} )} is negative.
Note that
{\displaystyle {\dot {\sigma }}={\frac {\partial \sigma }{\partial \mathbf {x} }}\overbrace {\dot {\mathbf {x} }} ^{\tfrac {\operatorname {d} \mathbf {x} }{\operatorname {d} t}}={\frac {\partial \sigma }{\partial \mathbf {x} }}\overbrace {\left(f(\mathbf {x} ,t)+B(\mathbf {x} ,t)\mathbf {u} \right)} ^{\dot {\mathbf {x} }}}
and so the feedback control law {\displaystyle \mathbf {u} (\mathbf {x} )} has a direct impact on {\displaystyle {\dot {\sigma }}}.
==== Reachability: Attaining sliding manifold in finite time ====
To ensure that the sliding mode {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} } is attained in finite time, {\displaystyle \operatorname {d} V/{\operatorname {d} t}} must be more strongly bounded away from zero. That is, if it vanishes too quickly, the attraction to the sliding mode will only be asymptotic. To ensure that the sliding mode is entered in finite time,
{\displaystyle {\frac {\operatorname {d} V}{\operatorname {d} t}}\leq -\mu ({\sqrt {V}})^{\alpha }}
where {\displaystyle \mu >0} and {\displaystyle 0<\alpha \leq 1} are constants.
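The bound can be checked numerically for the scalar worst case σ̇ = −μ sgn(σ), for which the sliding mode is reached exactly at t = |σ(0)|/μ (the values below are illustrative):

```python
def reach_time(sigma0=2.0, mu=0.5, dt=1e-4, cap=100.0):
    """Integrate sigmadot = -mu*sgn(sigma) until sigma reaches (nearly) zero."""
    sigma, t = sigma0, 0.0
    while abs(sigma) > 1e-6 and t < cap:
        sigma -= mu * (1.0 if sigma > 0 else -1.0) * dt
        t += dt
    return t

t_reach = reach_time()
```

With σ(0) = 2 and μ = 0.5 the theory predicts reaching at t = 4; by contrast, an exponentially decaying σ (the case excluded by the condition above) would only approach zero asymptotically.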
===== Explanation by comparison lemma =====
This condition ensures that for the neighborhood of the sliding mode {\displaystyle V\in [0,1]},
{\displaystyle {\frac {\operatorname {d} V}{\operatorname {d} t}}\leq -\mu ({\sqrt {V}})^{\alpha }\leq -\mu {\sqrt {V}}.}
So, for {\displaystyle V\in (0,1]},
{\displaystyle {\frac {1}{\sqrt {V}}}{\frac {\operatorname {d} V}{\operatorname {d} t}}\leq -\mu ,}
which, by the chain rule (i.e., {\displaystyle \operatorname {d} W/{\operatorname {d} t}} with {\displaystyle W\triangleq 2{\sqrt {V}}}), means
{\displaystyle {\mathord {\underbrace {D^{+}{\Bigl (}{\mathord {\underbrace {2{\mathord {\overbrace {\sqrt {V}} ^{{}\propto \|\sigma \|_{2}}}}} _{W}}}{\Bigr )}} _{D^{+}W\,\triangleq \,{\mathord {{\text{Upper right-hand }}{\dot {W}}}}}}}={\frac {1}{\sqrt {V}}}{\frac {\operatorname {d} V}{\operatorname {d} t}}\leq -\mu }
where {\displaystyle D^{+}} is the upper right-hand derivative of {\displaystyle 2{\sqrt {V}}} and the symbol {\displaystyle \propto } denotes proportionality. So, by comparison to the curve {\displaystyle z(t)=z_{0}-\mu t}, which is represented by the differential equation {\displaystyle {\dot {z}}=-\mu } with initial condition {\displaystyle z(0)=z_{0}}, it must be the case that {\displaystyle 2{\sqrt {V(t)}}\leq 2{\sqrt {V_{0}}}-\mu t} for all t. Moreover, because {\displaystyle {\sqrt {V}}\geq 0}, {\displaystyle {\sqrt {V}}} must reach {\displaystyle {\sqrt {V}}=0} in finite time, which means that V must reach {\displaystyle V=0} (i.e., the system enters the sliding mode) in finite time. Because {\displaystyle {\sqrt {V}}} is proportional to the Euclidean norm {\displaystyle \|{\mathord {\cdot }}\|_{2}} of the switching function {\displaystyle \sigma }, this result implies that the rate of approach to the sliding mode must be firmly bounded away from zero.
===== Consequences for sliding mode control =====
In the context of sliding mode control, this condition means that
{\displaystyle \underbrace {\overbrace {\sigma ^{\intercal }} ^{\tfrac {\partial V}{\partial \sigma }}\overbrace {\dot {\sigma }} ^{\tfrac {\operatorname {d} \sigma }{\operatorname {d} t}}} _{\tfrac {\operatorname {d} V}{\operatorname {d} t}}\leq -\mu ({\mathord {\overbrace {\|\sigma \|_{2}} ^{\sqrt {V}}}})^{\alpha }}
where {\displaystyle \|{\mathord {\cdot }}\|} is the Euclidean norm. For the case when the switching function {\displaystyle \sigma } is scalar valued, the sufficient condition becomes
{\displaystyle \sigma {\dot {\sigma }}\leq -\mu |\sigma |^{\alpha }}.
Taking {\displaystyle \alpha =1}, the scalar sufficient condition becomes
{\displaystyle \operatorname {sgn} (\sigma ){\dot {\sigma }}\leq -\mu }
which is equivalent to the condition that
{\displaystyle \operatorname {sgn} (\sigma )\neq \operatorname {sgn} ({\dot {\sigma }})\qquad {\text{and}}\qquad |{\dot {\sigma }}|\geq \mu >0}.
That is, the system should always be moving toward the switching surface {\displaystyle \sigma =0}, and its speed {\displaystyle |{\dot {\sigma }}|} toward the switching surface should have a non-zero lower bound. So, even though {\displaystyle \sigma } may become vanishingly small as {\displaystyle \mathbf {x} } approaches the {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} } surface, {\displaystyle {\dot {\sigma }}} must always be bounded firmly away from zero. To ensure this condition, sliding mode controllers are discontinuous across the {\displaystyle \sigma =0} manifold; they switch from one non-zero value to another as trajectories cross the manifold.
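The practical difference between the discontinuous law and a boundary-layer (saturation) approximation can be illustrated on a double integrator; all gains, the layer width and the step size are assumptions for demonstration:

```python
def sat(s, width):
    """Saturation: linear inside the boundary layer, +/-1 outside."""
    return max(-1.0, min(1.0, s / width))

def count_switches(use_boundary_layer, x=1.0, v=0.0, c=1.0, k=2.0,
                   dt=0.001, steps=8000):
    """Count sign reversals of the control input u for xdd = u,
    sigma = c*x + v, under sgn or saturated feedback."""
    switches, last_u = 0, 0.0
    for _ in range(steps):
        sigma = c * x + v
        if use_boundary_layer:
            u = -k * sat(sigma, 0.05)
        else:
            u = -k * (1.0 if sigma > 0 else -1.0)
        if last_u * u < 0:
            switches += 1
        last_u = u
        v += u * dt
        x += v * dt
    return switches

chatter_sgn = count_switches(False)
chatter_bl = count_switches(True)
```

The pure sgn law reverses the control on nearly every step once the surface is reached, while the saturated law settles to a smoothly varying control, at the cost of only steering σ into a small band around zero rather than exactly onto it.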
=== Theorem 2: Region of attraction ===
For the system given by Equation (1) and sliding surface given by Equation (2), the subspace for which the surface {\displaystyle \{\mathbf {x} \in \mathbb {R} ^{n}:\sigma (\mathbf {x} )=\mathbf {0} \}} is reachable is given by
{\displaystyle \{\mathbf {x} \in \mathbb {R} ^{n}:\sigma ^{\intercal }(\mathbf {x} ){\dot {\sigma }}(\mathbf {x} )<0\}}
That is, when initial conditions come entirely from this space, the Lyapunov function candidate {\displaystyle V(\sigma )} is a Lyapunov function and {\displaystyle \mathbf {x} } trajectories are sure to move toward the sliding mode surface where {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} }. Moreover, if the reachability conditions from Theorem 1 are satisfied, the sliding mode will enter the region where {\displaystyle {\dot {V}}} is more strongly bounded away from zero in finite time. Hence, the sliding mode {\displaystyle \sigma =0} will be attained in finite time.
=== Theorem 3: Sliding motion ===
Let
{\displaystyle {\frac {\partial \sigma }{\partial {\mathbf {x} }}}B(\mathbf {x} ,t)}
be nonsingular. That is, the system has a kind of controllability that ensures that there is always a control that can move a trajectory closer to the sliding mode. Then, once the sliding mode where {\displaystyle \sigma (\mathbf {x} )=\mathbf {0} } is achieved, the system will stay on that sliding mode. Along sliding mode trajectories, {\displaystyle \sigma (\mathbf {x} )} is constant, and so sliding mode trajectories are described by the differential equation {\displaystyle {\dot {\sigma }}=\mathbf {0} }.
If an {\displaystyle \mathbf {x} }-equilibrium is stable with respect to this differential equation, then the system will slide along the sliding mode surface toward the equilibrium.
The equivalent control law on the sliding mode can be found by solving $\dot{\sigma}(\mathbf{x}) = 0$ for the equivalent control law $\mathbf{u}(\mathbf{x})$. That is,
$$\frac{\partial \sigma}{\partial \mathbf{x}} \overbrace{\left( f(\mathbf{x}, t) + B(\mathbf{x}, t)\mathbf{u} \right)}^{\dot{\mathbf{x}}} = \mathbf{0}$$
and so the equivalent control is
$$\mathbf{u} = -\left( \frac{\partial \sigma}{\partial \mathbf{x}} B(\mathbf{x}, t) \right)^{-1} \frac{\partial \sigma}{\partial \mathbf{x}} f(\mathbf{x}, t)$$
That is, even though the actual control $\mathbf{u}$ is not continuous, the rapid switching across the sliding mode where $\sigma(\mathbf{x}) = \mathbf{0}$ forces the system to act as if it were driven by this continuous control.
Likewise, the system trajectories on the sliding mode behave as if
$$\dot{\mathbf{x}} = \overbrace{f(\mathbf{x},t) - B(\mathbf{x},t)\left(\frac{\partial \sigma}{\partial \mathbf{x}}B(\mathbf{x},t)\right)^{-1}\frac{\partial \sigma}{\partial \mathbf{x}}f(\mathbf{x},t)}^{f(\mathbf{x},t)+B(\mathbf{x},t)\mathbf{u}} = \left(\mathbf{I} - B(\mathbf{x},t)\left(\frac{\partial \sigma}{\partial \mathbf{x}}B(\mathbf{x},t)\right)^{-1}\frac{\partial \sigma}{\partial \mathbf{x}}\right)f(\mathbf{x},t)$$
The resulting system matches the sliding mode differential equation $\dot{\sigma}(\mathbf{x}) = \mathbf{0}$ and the sliding mode surface $\sigma(\mathbf{x}) = \mathbf{0}$, and the trajectory conditions from the reaching phase now reduce to the simpler condition derived above. Hence, the system can be assumed to follow the simpler $\dot{\sigma} = 0$ condition after some initial transient during the period while the system finds the sliding mode. The same motion is maintained approximately when the equality $\sigma(\mathbf{x}) = \mathbf{0}$ holds only approximately.
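For concreteness, the equivalent-control formula can be evaluated numerically. The sketch below uses an illustrative linear plant $\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}$ and the linear surface $\sigma(\mathbf{x}) = x_1 + x_2$, so $\partial\sigma/\partial\mathbf{x}$ is a constant row $S$; none of these matrices come from the article.

```python
import numpy as np

# Illustrative linear plant f(x) = A x with one input; S is d(sigma)/dx for
# the linear sliding surface sigma(x) = x1 + x2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
S = np.array([[1.0, 1.0]])

def u_eq(x):
    """Equivalent control u = -(S B)^(-1) S f(x), which enforces sigma' = 0."""
    return -np.linalg.inv(S @ B) @ (S @ (A @ x))

x = np.array([[2.0], [-2.0]])      # a point on the surface: sigma(x) = 0
sigma_dot = S @ (A @ x + B @ u_eq(x))
print(sigma_dot.item())            # ~0: the surface is invariant under u_eq
```

By construction $S(A\mathbf{x} + B\mathbf{u}_{eq}) = SA\mathbf{x} - SB(SB)^{-1}SA\mathbf{x} = 0$, so the printed value is zero up to floating-point error.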
It follows from these theorems that the sliding motion is invariant (i.e., insensitive) to sufficiently small disturbances entering the system through the control channel. That is, as long as the control is large enough to ensure that $\sigma^{\intercal}\dot{\sigma} < 0$ and $\dot{\sigma}$ is uniformly bounded away from zero, the sliding mode will be maintained as if there were no disturbance. This invariance with respect to certain disturbances and model uncertainties is the most attractive feature of sliding mode control; it is strongly robust.
As discussed in an example below, a sliding mode control law can keep the constraint $\dot{x} + x = 0$ in order to asymptotically stabilize any system of the form $\ddot{x} = a(t, x, \dot{x}) + u$ when $a(\cdot)$ has a finite upper bound. In this case, the sliding mode is where $\dot{x} = -x$ (i.e., where $\dot{x} + x = 0$). That is, when the system is constrained this way, it behaves like a simple stable linear system, and so it has a globally exponentially stable equilibrium at the origin $(x, \dot{x}) = (0, 0)$.
== Control design examples ==
Consider a plant described by Equation (1) with single input $u$ (i.e., $m = 1$). The switching function is picked to be the linear combination
$$\sigma(\mathbf{x}) = s_1 x_1 + s_2 x_2 + \cdots + s_{n-1} x_{n-1} + s_n x_n$$
where the weight $s_i > 0$ for all $1 \leq i \leq n$. The sliding surface is the simplex where $\sigma(\mathbf{x}) = 0$. When trajectories are forced to slide along this surface, $\dot{\sigma}(\mathbf{x}) = 0$ and so
$$s_1 \dot{x}_1 + s_2 \dot{x}_2 + \cdots + s_{n-1} \dot{x}_{n-1} + s_n \dot{x}_n = 0$$
which is a reduced-order system (i.e., the new system is of order $n-1$ because the system is constrained to this $(n-1)$-dimensional sliding mode simplex). This surface may have favorable properties (e.g., when the plant dynamics are forced to slide along this surface, they move toward the origin $\mathbf{x} = \mathbf{0}$). Taking the derivative of the Lyapunov function in Equation (3), we have
$$\dot{V}(\sigma(\mathbf{x})) = \overbrace{\sigma(\mathbf{x})^{\text{T}}}^{\tfrac{\partial V}{\partial \sigma}}\ \overbrace{\dot{\sigma}(\mathbf{x})}^{\tfrac{\mathrm{d}\sigma}{\mathrm{d}t}}$$
To ensure $\dot{V} < 0$, the feedback control law $u(\mathbf{x})$ must be chosen so that
$$\begin{cases}\dot{\sigma} < 0 & \text{if } \sigma > 0\\ \dot{\sigma} > 0 & \text{if } \sigma < 0\end{cases}$$
Hence, the product $\sigma\dot{\sigma} < 0$ because it is the product of a negative and a positive number.
The control law $u(\mathbf{x})$ is chosen so that
$$u(\mathbf{x}) = \begin{cases}u^{+}(\mathbf{x}) & \text{if } \sigma(\mathbf{x}) > 0\\ u^{-}(\mathbf{x}) & \text{if } \sigma(\mathbf{x}) < 0\end{cases}$$
where $u^{+}(\mathbf{x})$ is some control (e.g., possibly extreme, like "on" or "forward") that ensures $\dot{\sigma}$ from Equation (5) is negative at $\mathbf{x}$, and $u^{-}(\mathbf{x})$ is some control (e.g., possibly extreme, like "off" or "reverse") that ensures $\dot{\sigma}$ from Equation (5) is positive at $\mathbf{x}$.
The resulting trajectory should move toward the sliding surface where $\sigma(\mathbf{x}) = 0$. Because real systems have delay, sliding mode trajectories often chatter back and forth along this sliding surface; that is, the true trajectory may not smoothly follow $\sigma(\mathbf{x}) = 0$, but it will always return to the sliding mode after leaving it.
Consider the dynamic system
$$\ddot{x} = a(t, x, \dot{x}) + u$$
which can be expressed in a 2-dimensional state space (with $x_1 = x$ and $x_2 = \dot{x}$) as
$$\begin{cases}\dot{x}_1 = x_2\\ \dot{x}_2 = a(t, x_1, x_2) + u\end{cases}$$
Also assume that $\sup\{|a(\cdot)|\} \leq k$ (i.e., $|a|$ has a finite upper bound $k$ that is known). For this system, choose the switching function
$$\sigma(x_1, x_2) = x_1 + x_2 = x + \dot{x}$$
By the previous example, we must choose the feedback control law $u(x, \dot{x})$ so that $\sigma\dot{\sigma} < 0$. Here,
$$\dot{\sigma} = \dot{x}_1 + \dot{x}_2 = \dot{x} + \ddot{x} = \dot{x} + \overbrace{a(t, x, \dot{x}) + u}^{\ddot{x}}$$
When $x + \dot{x} < 0$ (i.e., when $\sigma < 0$), to make $\dot{\sigma} > 0$, the control law should be picked so that
$$u > |\dot{x} + a(t, x, \dot{x})|$$
When $x + \dot{x} > 0$ (i.e., when $\sigma > 0$), to make $\dot{\sigma} < 0$, the control law should be picked so that
$$u < -|\dot{x} + a(t, x, \dot{x})|$$
However, by the triangle inequality,
$$|\dot{x}| + |a(t, x, \dot{x})| \geq |\dot{x} + a(t, x, \dot{x})|$$
and by the assumption about $|a|$,
$$|\dot{x}| + k + 1 > |\dot{x}| + |a(t, x, \dot{x})|$$
So the system can be feedback stabilized (to return to the sliding mode) by means of the control law
$$u(x, \dot{x}) = \begin{cases}|\dot{x}| + k + 1 & \text{if } \underbrace{x + \dot{x}}_{\sigma} < 0,\\ -\left(|\dot{x}| + k + 1\right) & \text{if } \overbrace{x + \dot{x}}^{\sigma} > 0\end{cases}$$
which can be expressed in closed form as
$$u(x, \dot{x}) = -(|\dot{x}| + k + 1)\underbrace{\operatorname{sgn}(\overbrace{\dot{x} + x}^{\sigma})}_{\text{(i.e., tests } \sigma > 0\text{)}}$$
Assuming that the system trajectories are forced to move so that $\sigma(\mathbf{x}) = 0$, then
$$\dot{x} = -x \qquad \text{(i.e., } \sigma(x, \dot{x}) = x + \dot{x} = 0\text{)}$$
So once the system reaches the sliding mode, the system's 2-dimensional dynamics behave like this 1-dimensional system, which has a globally exponentially stable equilibrium at $(x, \dot{x}) = (0, 0)$.
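The example above is easy to check in simulation. The sketch below integrates $\ddot{x} = a(t,x,\dot{x}) + u$ with forward Euler, using an illustrative bounded disturbance ($|a| \le k = 2$) that the controller never sees except through the bound $k$; after the reaching phase the state chatters in a narrow band around $\sigma = 0$ and decays toward the origin.

```python
import numpy as np

k = 2.0                                         # known bound on |a|
a = lambda t, x, xdot: 2.0 * np.sin(5.0 * t)    # unknown disturbance, |a| <= k

def u(x, xdot):
    # Closed-form sliding mode law u = -(|x'| + k + 1) * sgn(x + x')
    return -(abs(xdot) + k + 1.0) * np.sign(x + xdot)

dt, T = 1e-4, 10.0
x, xdot = 3.0, 0.0                              # start well off the surface
for i in range(int(T / dt)):
    xddot = a(i * dt, x, xdot) + u(x, xdot)
    x, xdot = x + dt * xdot, xdot + dt * xddot

sigma = x + xdot
print(sigma, x)    # both end up small: sliding surface reached, x decayed
```

With the discrete time step, $\sigma$ never stays exactly at zero; it chatters inside a band of width on the order of $dt \cdot |\dot{\sigma}|$, which is the discrete-time picture of the chattering discussed above.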
=== Automated design solutions ===
Although various theories exist for sliding mode control system design, a highly effective design methodology is still lacking because of practical difficulties encountered in analytical and numerical methods. A reusable computing paradigm such as a genetic algorithm can, however, be utilized to transform an 'unsolvable problem' of optimal design into a practically solvable 'non-deterministic polynomial problem'. This results in computer-automated designs for sliding mode control.
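As a hedged sketch of the idea (not any specific published method), the toy evolutionary search below tunes the slope $s$ of the sliding surface $\sigma = s x + \dot{x}$ for a double integrator under a relay law, scoring each candidate by simulation; the cost function, population size, and mutation width are all illustrative choices.

```python
import random

# Toy "evolutionary" tuning of the sliding-surface slope s in sigma = s*x + x'
# for a double integrator x'' = u under the relay law u = -(|x'| + 1)*sgn(sigma).
def cost(s, dt=2e-3, T=6.0):
    """Integral of |x| along a simulated trajectory: smaller is better."""
    x, xdot, J = 2.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        sigma = s * x + xdot
        u = -(abs(xdot) + 1.0) * (1 if sigma > 0 else -1 if sigma < 0 else 0)
        x, xdot = x + dt * xdot, xdot + dt * u
        J += abs(x) * dt
    return J

random.seed(0)
population = [random.uniform(0.1, 5.0) for _ in range(8)]
for _ in range(10):  # mutate-and-select generations
    population += [max(0.1, s + random.gauss(0.0, 0.3)) for s in population]
    population = sorted(population, key=cost)[:8]

best = population[0]
print(best, cost(best))
```

A real genetic algorithm would add crossover and a richer parameterization (e.g., all surface weights and the reaching gain), but the simulate-score-select loop is the same.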
== Sliding mode observer ==
Sliding mode control can be used in the design of state observers. These non-linear high-gain observers have the ability to bring coordinates of the estimator error dynamics to zero in finite time. Additionally, switched-mode observers have attractive measurement-noise resilience that is similar to a Kalman filter's. For simplicity, the example here uses a traditional sliding mode modification of a Luenberger observer for an LTI system. In these sliding mode observers, the order of the observer dynamics is reduced by one when the system enters the sliding mode. In this particular example, the estimator error for a single estimated state is brought to zero in finite time, and after that time the other estimator errors decay exponentially to zero. However, as first described by Drakunov, a sliding mode observer for non-linear systems can be built that brings the estimation error for all estimated states to zero in a finite (and arbitrarily small) time.
Here, consider the LTI system
$$\begin{cases}\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}\\ y = \begin{bmatrix}1 & 0 & 0 & \cdots\end{bmatrix}\mathbf{x} = x_1\end{cases}$$
where the state vector is $\mathbf{x} \triangleq (x_1, x_2, \dots, x_n) \in \mathbb{R}^{n}$, $\mathbf{u} \triangleq (u_1, u_2, \dots, u_r) \in \mathbb{R}^{r}$ is a vector of inputs, and output $y$ is a scalar equal to the first state of the $\mathbf{x}$ state vector. Let
$$A \triangleq \begin{bmatrix}a_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix}$$
where $a_{11}$ is a scalar representing the influence of the first state $x_1$ on itself, $A_{21} \in \mathbb{R}^{(n-1)}$ is a column vector corresponding to the influence of the first state on the other states, $A_{22} \in \mathbb{R}^{(n-1)\times(n-1)}$ is a matrix representing the influence of the other states on themselves, and $A_{12} \in \mathbb{R}^{1\times(n-1)}$ is a row vector representing the influence of the other states on the first state.
The goal is to design a high-gain state observer that estimates the state vector $\mathbf{x}$ using only information from the measurement $y = x_1$. Hence, let the vector $\hat{\mathbf{x}} = (\hat{x}_1, \hat{x}_2, \dots, \hat{x}_n) \in \mathbb{R}^{n}$ be the estimates of the $n$ states. The observer takes the form
$$\dot{\hat{\mathbf{x}}} = A\hat{\mathbf{x}} + B\mathbf{u} + Lv(\hat{x}_1 - x_1)$$
where $v : \mathbb{R} \to \mathbb{R}$ is a nonlinear function of the error between estimated state $\hat{x}_1$ and the output $y = x_1$, and $L \in \mathbb{R}^{n}$ is an observer gain vector that serves a purpose similar to that in the typical linear Luenberger observer. Likewise, let
$$L = \begin{bmatrix}-1\\ L_2\end{bmatrix}$$
where $L_2 \in \mathbb{R}^{(n-1)}$ is a column vector. Additionally, let $\mathbf{e} = (e_1, e_2, \dots, e_n) \in \mathbb{R}^{n}$ be the state estimator error; that is, $\mathbf{e} = \hat{\mathbf{x}} - \mathbf{x}$. The error dynamics are then
$$\begin{aligned}\dot{\mathbf{e}} &= \dot{\hat{\mathbf{x}}} - \dot{\mathbf{x}}\\ &= A\hat{\mathbf{x}} + B\mathbf{u} + Lv(\hat{x}_1 - x_1) - A\mathbf{x} - B\mathbf{u}\\ &= A(\hat{\mathbf{x}} - \mathbf{x}) + Lv(\hat{x}_1 - x_1)\\ &= A\mathbf{e} + Lv(e_1)\end{aligned}$$
where $e_1 = \hat{x}_1 - x_1$ is the estimator error for the first state estimate. The nonlinear control law $v$ can be designed to enforce the sliding manifold
$$0 = \hat{x}_1 - x_1$$
so that the estimate $\hat{x}_1$ tracks the real state $x_1$ after some finite time (i.e., $\hat{x}_1 = x_1$). Hence, the sliding mode control switching function is
$$\sigma(\hat{x}_1, \hat{x}) \triangleq e_1 = \hat{x}_1 - x_1.$$
To attain the sliding manifold, $\dot{\sigma}$ and $\sigma$ must always have opposite signs (i.e., $\sigma\dot{\sigma} < 0$ for essentially all $\mathbf{x}$). However,
$$\dot{\sigma} = \dot{e}_1 = a_{11}e_1 + A_{12}\mathbf{e}_2 - v(e_1) = a_{11}e_1 + A_{12}\mathbf{e}_2 - v(\sigma)$$
where $\mathbf{e}_2 \triangleq (e_2, e_3, \ldots, e_n) \in \mathbb{R}^{(n-1)}$ is the collection of the estimator errors for all of the unmeasured states. To ensure that $\sigma\dot{\sigma} < 0$, let
$$v(\sigma) = M\operatorname{sgn}(\sigma)$$
where
$$M > \max\{|a_{11}e_1 + A_{12}\mathbf{e}_2|\}.$$
That is, the positive constant $M$ must be greater than a scaled version of the maximum possible estimator errors for the system (i.e., the initial errors, which are assumed to be bounded so that $M$ can be picked large enough). If $M$ is sufficiently large, it can be assumed that the system achieves $e_1 = 0$ (i.e., $\hat{x}_1 = x_1$). Because $e_1$ is constant (i.e., 0) along this manifold, $\dot{e}_1 = 0$ as well. Hence, the discontinuous control $v(\sigma)$ may be replaced with the equivalent continuous control $v_{\text{eq}}$, where
$$0 = \dot{\sigma} = a_{11}\overbrace{e_1}^{=0} + A_{12}\mathbf{e}_2 - \overbrace{v_{\text{eq}}}^{v(\sigma)} = A_{12}\mathbf{e}_2 - v_{\text{eq}}.$$
So
$$\underbrace{v_{\text{eq}}}_{\text{scalar}} = \underbrace{A_{12}}_{1\times(n-1)\text{ vector}}\ \underbrace{\mathbf{e}_2}_{(n-1)\times 1\text{ vector}}.$$
This equivalent control $v_{\text{eq}}$ represents the contribution from the other $(n-1)$ states to the trajectory of the output state $x_1$. In particular, the row $A_{12}$ acts like an output vector for the error subsystem
$$\overbrace{\begin{bmatrix}\dot{e}_2\\ \dot{e}_3\\ \vdots\\ \dot{e}_n\end{bmatrix}}^{\dot{\mathbf{e}}_2} = A_{22}\overbrace{\begin{bmatrix}e_2\\ e_3\\ \vdots\\ e_n\end{bmatrix}}^{\mathbf{e}_2} + L_2 v(e_1) = A_{22}\mathbf{e}_2 + L_2 v_{\text{eq}} = A_{22}\mathbf{e}_2 + L_2 A_{12}\mathbf{e}_2 = (A_{22} + L_2 A_{12})\mathbf{e}_2.$$
So, to ensure the estimator error $\mathbf{e}_2$ for the unmeasured states converges to zero, the $(n-1)\times 1$ vector $L_2$ must be chosen so that the $(n-1)\times(n-1)$ matrix $(A_{22} + L_2 A_{12})$ is Hurwitz (i.e., the real part of each of its eigenvalues must be negative). Hence, provided that it is observable, this $\mathbf{e}_2$ system can be stabilized in exactly the same way as a typical linear state observer when $A_{12}$ is viewed as the output matrix (i.e., "C"). That is, the $v_{\text{eq}}$ equivalent control provides measurement information about the unmeasured states that can continually move their estimates asymptotically closer to them. Meanwhile, the discontinuous control $v = M\operatorname{sgn}(\hat{x}_1 - x_1)$ forces the estimate of the measured state to have zero error in finite time. Additionally, white zero-mean symmetric measurement noise (e.g., Gaussian noise) only affects the switching frequency of the control $v$, and hence the noise will have little effect on the equivalent sliding mode control $v_{\text{eq}}$. Hence, the sliding mode observer has Kalman filter–like features.
The final version of the observer is thus
$$\begin{aligned}\dot{\hat{\mathbf{x}}} &= A\hat{\mathbf{x}} + B\mathbf{u} + LM\operatorname{sgn}(\hat{x}_1 - x_1)\\ &= A\hat{\mathbf{x}} + B\mathbf{u} + \begin{bmatrix}-1\\ L_2\end{bmatrix}M\operatorname{sgn}(\hat{x}_1 - x_1)\\ &= A\hat{\mathbf{x}} + B\mathbf{u} + \begin{bmatrix}-M\\ L_2 M\end{bmatrix}\operatorname{sgn}(\hat{x}_1 - x_1)\\ &= A\hat{\mathbf{x}} + \begin{bmatrix}B & \begin{bmatrix}-M\\ L_2 M\end{bmatrix}\end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \operatorname{sgn}(\hat{x}_1 - x_1)\end{bmatrix}\\ &= A_{\text{obs}}\hat{\mathbf{x}} + B_{\text{obs}}\mathbf{u}_{\text{obs}}\end{aligned}$$
where
$$A_{\text{obs}} \triangleq A, \qquad B_{\text{obs}} \triangleq \begin{bmatrix}B & \begin{bmatrix}-M\\ L_2 M\end{bmatrix}\end{bmatrix}, \qquad \mathbf{u}_{\text{obs}} \triangleq \begin{bmatrix}\mathbf{u}\\ \operatorname{sgn}(\hat{x}_1 - x_1)\end{bmatrix}.$$
That is, by augmenting the control vector $\mathbf{u}$ with the switching function $\operatorname{sgn}(\hat{x}_1 - x_1)$, the sliding mode observer can be implemented as an LTI system; the discontinuous signal $\operatorname{sgn}(\hat{x}_1 - x_1)$ is viewed as a control input to the 2-input LTI system.
For simplicity, this example assumes that the sliding mode observer has access to a measurement of a single state (i.e., output $y = x_1$). However, a similar procedure can be used to design a sliding mode observer for a vector of weighted combinations of states (i.e., when output $\mathbf{y} = C\mathbf{x}$ uses a generic matrix $C$). In each case, the sliding mode will be the manifold where the estimated output $\hat{\mathbf{y}}$ follows the measured output $\mathbf{y}$ with zero error (i.e., the manifold where $\sigma(\mathbf{x}) \triangleq \hat{\mathbf{y}} - \mathbf{y} = \mathbf{0}$).
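As a concrete sketch, the observer above can be simulated for a hypothetical two-state plant in which only $x_1$ is measured. All matrices and gains below are illustrative, not from the article: $L_2$ is chosen so that $A_{22} + L_2 A_{12}$ is Hurwitz, and $M$ dominates the bound on $|a_{11}e_1 + A_{12}\mathbf{e}_2|$.

```python
import numpy as np

# Illustrative 2-state plant x' = A x (u = 0 for simplicity); only x1 measured.
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
M, L2 = 10.0, -2.0                 # A22 + L2*A12 = -1 + (-2)(1) = -3 < 0
L = np.array([-1.0, L2])

dt, T = 1e-4, 5.0
x = np.array([1.0, -1.0])          # true state
xhat = np.array([0.0, 2.0])        # poor initial estimate
for _ in range(int(T / dt)):
    v = M * np.sign(xhat[0] - x[0])          # discontinuous injection
    x = x + dt * (A @ x)
    xhat = xhat + dt * (A @ xhat + L * v)

err = xhat - x
print(err)   # e1 reaches ~0 in finite time; e2 then decays like exp(-3t)
```

The first error component chatters in a band of width about $dt \cdot M$; the second converges through the equivalent control $v_{\text{eq}} = A_{12}\mathbf{e}_2$, exactly as in the derivation above.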
== See also ==
Variable structure control
Variable structure system
Hybrid system
Nonlinear control
Robust control
Optimal control
Bang–bang control – Sliding mode control is often implemented as a bang–bang control. In some cases, such control is necessary for optimality.
H-bridge – A topology that combines four switches forming the four legs of an "H". Can be used to drive a motor (or other electrical device) forward or backward when only a single supply is available. Often used as an actuator in sliding-mode controlled systems.
Switching amplifier – Uses switching-mode control to drive continuous outputs
Delta-sigma modulation – Another (feedback) method of encoding a continuous range of values in a signal that rapidly switches between two states (i.e., a kind of specialized sliding-mode control)
Pulse-density modulation – A generalized form of delta-sigma modulation.
Pulse-width modulation – Another modulation scheme that produces continuous motion through discontinuous switching.
== Notes ==
== References ==
Zinober, Alan S.I., ed. (1994). Variable Structure and Lyapunov Control. Lecture Notes in Control and Information Sciences. Vol. 193. London: Springer-Verlag. doi:10.1007/BFb0033675. ISBN 978-3-540-19869-7.
Edwards, C.; Spurgeon, S. (1998). Sliding Mode Control. Theory and Applications. London: Taylor and Francis. ISBN 978-0-7484-0601-2.
Sabanovic, Asif; Fridman, Leonid; Spurgeon, Sarah (2004). Variable Structure Systems: From Principles to Implementation. London: The Institution of Electrical Engineers. ISBN 0-86341-350-1.
Edwards, C.; Fossas Colet, E.; Fridman, L. (2006). Advances in Variable Structure and Sliding Mode Control. Berlin: Springer Verlag. ISBN 978-3-540-32800-1.
Bartolini, G.; Fridman, L.; Pisano, A.; Usai, E. (2008). Modern Sliding Mode Control Theory - New perspectives and applications. Berlin: Springer Verlag. ISBN 978-3-540-79016-7.
Fridman, L.; Barbot, J.-P.; Plestan, F. (2016). Recent Trends in Sliding Mode Control. London: The Institution of Engineering and Technology. ISBN 978-1-78561-076-9.
Shtessel, Y.; Edwards, C.; Fridman, L.; Levant, A. (2014). Sliding Mode Control and Observation. Basel: Birkhauser. ISBN 978-0-81764-8923.
Sira-Ramirez, Hebertt (2015). Sliding Mode Control: The Delta-Sigma Modulation Approach. Basel: Birkhauser. ISBN 978-3-319-17256-9.
Fridman, L.; Moreno, J.; Bandyopadhyay, B.; Kamal, S.; Chalanga, A. (2015). "Continuous Nested Algorithms: The Fifth Generation of Sliding Mode Controllers". In: Recent Advances in Sliding Modes: From Control to Intelligent Mechatronics, X. Yu, O. Efe (Eds.), pp. 5–35. Studies in Systems, Decision and Control. Vol. 24. London: Springer-Verlag. doi:10.1007/978-3-319-18290-2_2. ISBN 978-3-319-18289-6.
Li, S.; Yu, X.; Fridman, L.; Man, Z.; Wang, X. (2017). Advances in Variable Structure Systems and Sliding Mode Control—Theory and Applications. Studies in Systems, Decision and Control. Vol. 24. London: Springer-Verlag. ISBN 978-3-319-62895-0.
Ferrara, A.; Incremona, G.P.; Cucuzzella, M. (2019). Advanced and Optimization Based Sliding Mode Control. SIAM. doi:10.1137/1.9781611975840. ISBN 978-1611975833. S2CID 198313071.
Steinberger, M.; Horn, M.; Fridman, L., eds. (2020). Variable-Structure Systems and Sliding-Mode Control. Studies in Systems, Decision and Control. Vol. 271. London: Springer-Verlag. doi:10.1007/BFb0033675. ISBN 978-3-030-36620-9.
== Further reading ==
Drakunov, S.V.; Utkin, V.I. (1992). "Sliding mode control in dynamic systems". International Journal of Control. 55 (4): 1029–1037. doi:10.1080/00207179208934270. hdl:10338.dmlcz/135339.
In control theory, a bang–bang controller (hysteresis, 2 step or on–off controller), is a feedback controller that switches abruptly between two states. These controllers may be realized in terms of any element that provides hysteresis. They are often used to control a plant that accepts a binary input, for example a furnace that is either completely on or completely off. Most common residential thermostats are bang–bang controllers. The Heaviside step function in its discrete form is an example of a bang–bang control signal. Due to the discontinuous control signal, systems that include bang–bang controllers are variable structure systems, and bang–bang controllers are thus variable structure controllers.
== Bang–bang solutions in optimal control ==
In optimal control problems, it is sometimes the case that a control is restricted to be between a lower and an upper bound. If the optimal control switches from one extreme to the other (i.e., is strictly never in between the bounds), then that control is referred to as a bang–bang solution.
Bang–bang controls frequently arise in minimum-time problems. For example, if it is desired for a car starting at rest to arrive at a certain position ahead of the car in the shortest possible time, the solution is to apply maximum acceleration until the unique switching point, and then apply maximum braking to come to rest exactly at the desired position.
A familiar everyday example is bringing water to a boil in the shortest time, which is achieved by applying full heat, then turning it off when the water reaches a boil. A closed-loop household example is most thermostats, wherein the heating element or air conditioning compressor is either running or not, depending upon whether the measured temperature is above or below the setpoint.
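The thermostat logic amounts to a few lines: switch only at the edges of a hysteresis band, never in between. The sketch below (all constants illustrative) counts the switching events of such a controller driving a toy first-order thermal model.

```python
# Minimal bang-bang (hysteresis) thermostat: the heater is either fully on or
# fully off, switching only when the temperature leaves the hysteresis band.
def simulate(setpoint=20.0, band=0.5, t_outside=5.0, dt=1.0, steps=3600):
    temp, heating = 15.0, False
    switch_events = 0
    for _ in range(steps):
        if temp < setpoint - band and not heating:
            heating, switch_events = True, switch_events + 1
        elif temp > setpoint + band and heating:
            heating, switch_events = False, switch_events + 1
        # Newton cooling toward the outside temperature, plus heater input
        rate = 0.002 * (t_outside - temp) + (0.05 if heating else 0.0)
        temp += rate * dt
    return temp, switch_events

final_temp, switches = simulate()
print(final_temp, switches)   # temperature oscillates inside the band
```

Widening `band` reduces the switching count at the cost of a larger saw-tooth temperature error, which is exactly the trade-off discussed under practical implications below.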
Bang–bang solutions also arise when the Hamiltonian is linear in the control variable; application of Pontryagin's minimum or maximum principle will then lead to pushing the control to its upper or lower bound depending on the sign of the coefficient of u in the Hamiltonian.
In summary, bang–bang controls are actually optimal controls in some cases, although they are also often implemented because of simplicity or convenience.
== Practical implications of bang–bang control ==
Mathematically or within a computing context there may be no problems, but the physical realization of bang–bang control systems gives rise to several complications.
First, depending on the width of the hysteresis gap and inertia in the process, there will be an oscillating error signal around the desired set point value (e.g., temperature), often saw-tooth shaped. Room temperature may become uncomfortable just before the next switch-on event. Alternatively, a narrow hysteresis gap will lead to frequent on/off switching, which is often undesirable (e.g., for an electrically ignited gas heater).
Second, the onset of the step function may entail, for example, a high electrical current and/or sudden heating and expansion of metal vessels, ultimately leading to metal fatigue or other wear-and-tear effects. Where possible, continuous control, such as in PID control, will avoid problems caused by the brisk state transitions that are the consequence of bang–bang control.
== See also ==
== References ==
== Further reading ==
Artstein, Zvi (1980). "Discrete and continuous bang-bang and facial spaces, or: Look for the extreme points". SIAM Review. 22 (2): 172–185. doi:10.1137/1022026. JSTOR 2029960. MR 0564562.
Flugge-Lotz, Irmgard (1953). Discontinuous Automatic Control. Princeton University Press. ISBN 9780691653259.
Hermes, Henry; LaSalle, Joseph P. (1969). Functional analysis and time optimal control. Mathematics in Science and Engineering. Vol. 56. New York—London: Academic Press. pp. viii+136. MR 0420366.
Kluvánek, Igor; Knowles, Greg (1976). Vector measures and control systems. North-Holland Mathematics Studies. Vol. 20. New York: North-Holland Publishing Co. pp. ix+180. MR 0499068.
Rolewicz, Stefan (1987). Functional analysis and control theory: Linear systems. Mathematics and its Applications (East European Series). Vol. 29 (Translated from the Polish by Ewa Bednarczuk ed.). Dordrecht; Warsaw: D. Reidel Publishing Co.; PWN—Polish Scientific Publishers. pp. xvi+524. ISBN 90-277-2186-6. MR 0920371. OCLC 13064804.
Sonneborn, L.; Van Vleck, F. (1965). "The Bang-Bang Principle for Linear Control Systems". SIAM J. Control. 2: 151–159.
In control systems theory, the describing function (DF) method, developed by Nikolay Mitrofanovich Krylov and Nikolay Bogoliubov in the 1930s, and extended by Ralph Kochenburger is an approximate procedure for analyzing certain nonlinear control problems. It is based on quasi-linearization, which is the approximation of the non-linear system under investigation by a linear time-invariant (LTI) transfer function that depends on the amplitude of the input waveform. By definition, a transfer function of a true LTI system cannot depend on the amplitude of the input function because an LTI system is linear. Thus, this dependence on amplitude generates a family of linear systems that are combined in an attempt to capture salient features of the non-linear system behavior. The describing function is one of the few widely applicable methods for designing nonlinear systems, and is very widely used as a standard mathematical tool for analyzing limit cycles in closed-loop controllers, such as industrial process controls, servomechanisms, and electronic oscillators.
== The method ==
Consider feedback around a discontinuous (but piecewise continuous) nonlinearity (e.g., an amplifier with saturation, or an element with deadband effects) cascaded with a slow stable linear system. The continuous region in which the feedback is presented to the nonlinearity depends on the amplitude of the output of the linear system. As the linear system's output amplitude decays, the nonlinearity may move into a different continuous region. This switching from one continuous region to another can generate periodic oscillations. The describing function method attempts to predict characteristics of those oscillations (e.g., their fundamental frequency) by assuming that the slow system acts like a low-pass or bandpass filter that concentrates all energy around a single frequency. Even if the output waveform has several modes, the method can still provide intuition about properties like frequency and possibly amplitude; in this case, the describing function method can be thought of as describing the sliding mode of the feedback system.
Using this low-pass assumption, the system response can be described by one of a family of sinusoidal waveforms; in this case the system would be characterized by a sine input describing function (SIDF) H(A, jω) giving the system response to an input consisting of a sine wave of amplitude A and frequency ω. This SIDF is a modification of the transfer function H(jω) used to characterize linear systems. In a quasi-linear system, when the input is a sine wave, the output will be a sine wave of the same frequency but with a scaled amplitude and shifted phase as given by H(A, jω). Many systems are approximately quasi-linear in the sense that although the response to a sine wave is not a pure sine wave, most of the energy in the output is indeed at the same frequency ω as the input. This is because such systems may possess intrinsic low-pass or bandpass characteristics such that harmonics are naturally attenuated, or because external filters are added for this purpose. An important application of the SIDF technique is to estimate the oscillation amplitude in sinusoidal electronic oscillators.
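Concretely, the describing function of a memoryless (static) nonlinearity can be estimated numerically as the ratio of the first harmonic of its output to the input sinusoid. The sketch below is an illustration written for this article (the function name and sampling choices are arbitrary), checked against the classical result N(A) = 4M/(πA) for an ideal relay with output levels ±M:

```python
import numpy as np

def describing_function(nonlinearity, amplitude, n_samples=4096):
    """Estimate the describing function N(A) of a static nonlinearity:
    the complex ratio of the first harmonic of the output to the
    amplitude of the sinusoidal input A*sin(wt)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    y = nonlinearity(amplitude * np.sin(t))
    # First-harmonic Fourier coefficients of the periodic output.
    b1 = (2.0 / n_samples) * np.sum(y * np.sin(t))  # in-phase component
    a1 = (2.0 / n_samples) * np.sum(y * np.cos(t))  # quadrature component
    return (b1 + 1j * a1) / amplitude

# Ideal relay with output levels +/-M: the classical result is N(A) = 4M/(pi*A).
M, A = 1.0, 2.0
relay = lambda x: M * np.sign(x)
print(describing_function(relay, A).real)  # ~0.6366, i.e. 4M/(pi*A)
print(4.0 * M / (np.pi * A))
```

For a static nonlinearity the describing function is real and frequency-independent, which is why only the amplitude A enters here.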
Other types of describing functions that have been used are DFs for level inputs and for Gaussian noise inputs. Although not a complete description of the system, the DFs often suffice to answer specific questions about control and stability. DF methods are best for analyzing systems with relatively weak nonlinearities. In addition, the higher-order sinusoidal input describing functions (HOSIDF) describe the response of a class of nonlinear systems at harmonics of the input frequency of a sinusoidal input. The HOSIDFs are an extension of the SIDF for systems where the nonlinearities are significant in the response.
== Caveats ==
Although the describing function method can produce reasonably accurate results for a wide class of systems, it can fail badly for others. For example, the method can fail if the system emphasizes higher harmonics of the nonlinearity. Such examples have been presented by Tzypkin for bang–bang systems. A fairly similar example is a closed-loop oscillator consisting of a non-inverting Schmitt trigger followed by an inverting integrator that feeds back its output to the Schmitt trigger's input. The output of the Schmitt trigger is going to be a square waveform, while that of the integrator (following it) is going to have a triangle waveform with peaks coinciding with the transitions in the square wave. Each of these two oscillator stages lags the signal exactly by 90 degrees (relative to its input). If one were to perform DF analysis on this circuit, the triangle wave at the Schmitt trigger's input would be replaced by its fundamental (sine wave), which passing through the trigger would cause a phase shift of less than 90 degrees (because the sine wave would trigger it sooner than the triangle wave does) so the system would appear not to oscillate in the same (simple) way.
Also, in the case where the conditions for Aizerman's or Kalman conjectures are fulfilled, there are no periodic solutions by describing function method, but counterexamples with hidden periodic attractors are known. Counterexamples to the describing function method can be constructed for discontinuous dynamical systems when a rest segment destroys predicted limit cycles. Therefore, the application of the describing function method requires additional justification.
== References ==
== Further reading ==
== External links ==
Electrical Engineering Encyclopedia: Describing Functions
In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. Lyapunov functions (also called Lyapunov’s second method for stability) are important to stability theory of dynamical systems and control theory. A similar concept appears in the theory of general state-space Markov chains usually under the name Foster–Lyapunov functions.
For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. Whereas there is no general technique for constructing Lyapunov functions for ODEs, in many specific cases the construction of Lyapunov functions is known. For instance, quadratic functions suffice for systems with one state, the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems, and conservation laws can often be used to construct Lyapunov functions for physical systems.
== Definition ==
A Lyapunov function for an autonomous dynamical system
ẏ = g(y),  g : ℝⁿ → ℝⁿ,
with an equilibrium point at y = 0 is a scalar function V : ℝⁿ → ℝ that is continuous, has continuous first derivatives, is strictly positive for y ≠ 0, and for which the time derivative V̇ = ∇V ⋅ g is non-positive (these conditions are required on some region containing the origin). The (stronger) condition that −∇V ⋅ g is strictly positive for y ≠ 0 is sometimes stated as "−∇V ⋅ g is locally positive definite", or "∇V ⋅ g is locally negative definite".
=== Further discussion of the terms arising in the definition ===
Lyapunov functions arise in the study of equilibrium points of dynamical systems. In ℝⁿ, an arbitrary autonomous dynamical system can be written as ẏ = g(y) for some smooth g : ℝⁿ → ℝⁿ. An equilibrium point is a point y* such that g(y*) = 0. Given an equilibrium point y*, there always exists a coordinate transformation x = y − y* such that:
ẋ = ẏ = g(y) = g(x + y*) = f(x),  f(0) = 0.
Thus, in studying equilibrium points, it is sufficient to assume the equilibrium point occurs at 0.
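This coordinate shift is easy to check numerically; in the sketch below the particular g is a made-up example chosen only to illustrate the construction:

```python
# Shifting an equilibrium to the origin: given g with g(y*) = 0, define
# f(x) = g(x + y*); then f(0) = 0 and the dynamics are unchanged.
g = lambda y: (2.0 - y) * (1.0 + (y - 2.0) ** 2)  # made-up g, equilibrium y* = 2
y_star = 2.0
f = lambda x: g(x + y_star)

print(g(y_star), f(0.0))  # both zero: the shifted system has its equilibrium at 0
```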
By the chain rule, for any function H : ℝⁿ → ℝ, the time derivative of the function evaluated along a solution of the dynamical system is
Ḣ = (d/dt) H(x(t)) = (∂H/∂x) ⋅ (dx/dt) = ∇H ⋅ ẋ = ∇H ⋅ f(x).
A function H is defined to be a locally positive-definite function (in the sense of dynamical systems) if both H(0) = 0 and there is a neighborhood of the origin, B, such that:
H(x) > 0 for all x ∈ B ∖ {0}.
== Basic Lyapunov theorems for autonomous systems ==
Let x* = 0 be an equilibrium point of the autonomous system
ẋ = f(x),
and use the notation V̇(x) to denote the time derivative of the Lyapunov-candidate-function V:
V̇(x) = (d/dt) V(x(t)) = (∂V/∂x) ⋅ (dx/dt) = ∇V ⋅ ẋ = ∇V ⋅ f(x).
=== Locally asymptotically stable equilibrium ===
If the equilibrium point is isolated, the Lyapunov-candidate-function V is locally positive definite, and the time derivative of the Lyapunov-candidate-function is locally negative definite,
V̇(x) < 0 for all x ∈ B(0) ∖ {0}
for some neighborhood B(0) of the origin, then the equilibrium is proven to be locally asymptotically stable.
=== Stable equilibrium ===
If V is a Lyapunov function, then the equilibrium is Lyapunov stable.
=== Globally asymptotically stable equilibrium ===
If the Lyapunov-candidate-function V is globally positive definite, radially unbounded, the equilibrium isolated, and the time derivative of the Lyapunov-candidate-function is globally negative definite,
V̇(x) < 0 for all x ∈ ℝⁿ ∖ {0},
then the equilibrium is proven to be globally asymptotically stable.
The Lyapunov-candidate function V(x) is radially unbounded if
‖x‖ → ∞ ⇒ V(x) → ∞.
(This is also referred to as norm-coercivity.)
The converse is also true, and was proved by José Luis Massera (see also Massera's lemma).
== Example ==
Consider the following differential equation on ℝ:
ẋ = −x.
Considering that x² is always positive around the origin, it is a natural candidate to be a Lyapunov function to help us study x. So let V(x) = x² on ℝ. Then,
V̇(x) = V′(x)ẋ = 2x ⋅ (−x) = −2x² < 0.
This correctly shows that the above differential equation is asymptotically stable about the origin. Note that using the same Lyapunov candidate one can show that the equilibrium is also globally asymptotically stable.
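The analysis can also be checked numerically. The sketch below (an illustration written for this article, using a simple explicit-Euler simulation) verifies that V̇ is negative away from the origin and that V decreases monotonically along a trajectory:

```python
# Numerical check of the example: x' = -x with V(x) = x^2.
f = lambda x: -x
V = lambda x: x ** 2
Vdot = lambda x: 2.0 * x * f(x)  # chain rule: V'(x) * xdot = -2x^2

# Vdot is strictly negative away from the origin...
assert all(Vdot(0.1 * k) < 0 for k in range(-50, 51) if k != 0)

# ...and V decreases monotonically along a simulated trajectory
# (explicit Euler with a small step, purely for illustration).
x, dt, values = 3.0, 0.01, []
for _ in range(1000):
    values.append(V(x))
    x += dt * f(x)
assert all(b < a for a, b in zip(values, values[1:]))
print(values[0], values[-1])  # V shrinks from 9.0 toward 0
```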
== See also ==
Lyapunov stability
Ordinary differential equations
Control-Lyapunov function
Chetaev function
Foster's theorem
Lyapunov optimization
== References ==
Weisstein, Eric W. "Lyapunov Function". MathWorld.
Khalil, H.K. (1996). Nonlinear systems. Prentice Hall Upper Saddle River, NJ.
La Salle, Joseph; Lefschetz, Solomon (1961). Stability by Liapunov's Direct Method: With Applications. New York: Academic Press.
This article incorporates material from Lyapunov function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== External links ==
Example of determining the stability of the equilibrium solution of a system of ODEs with a Lyapunov function
Computer numerical control (CNC) or CNC machining is the automated control of machine tools by a computer. It is an evolution of numerical control (NC), where machine tools are directly managed by data storage media such as punched cards or punched tape. Because CNC allows for easier programming, modification, and real-time adjustments, it has gradually replaced NC as computing costs declined.
A CNC machine is a motorized maneuverable tool and often a motorized maneuverable platform, which are both controlled by a computer, according to specific input instructions. Instructions are delivered to a CNC machine in the form of a sequential program of machine control instructions such as G-code and M-code, and then executed. The program can be written by a person or, far more often, generated by graphical computer-aided design (CAD) or computer-aided manufacturing (CAM) software. In the case of 3D printers, the part to be printed is "sliced" before the instructions (or the program) are generated. 3D printers also use G-Code.
CNC offers greatly increased productivity over non-computerized machining for repetitive production, where the machine must be manually controlled (e.g. using devices such as hand wheels or levers) or mechanically controlled by pre-fabricated pattern guides (see pantograph mill). However, these advantages come at significant cost in terms of both capital expenditure and job setup time. For some prototyping and small batch jobs, a good machine operator can have parts finished to a high standard whilst a CNC workflow is still in setup.
In modern CNC systems, the design of a mechanical part and its manufacturing program are highly automated. The part's mechanical dimensions are defined using CAD software and then translated into manufacturing directives by CAM software. The resulting directives are transformed (by "post processor" software) into the specific commands necessary for a particular machine to produce the component and then are loaded into the CNC machine.
Since any particular component might require the use of several different tools – drills, saws, touch probes etc. – modern machines often combine multiple tools into a single "cell". In other installations, several different machines are used with an external controller and human or robotic operators that move the component from machine to machine. In either case, the series of steps needed to produce any part is highly automated and produces a part that meets every specification in the original CAD drawing, where each specification includes a tolerance.
== Description ==
Motion control involves multiple axes, normally at least two (X and Y), plus a tool spindle that moves in the Z axis (depth). The position of the tool is driven by direct-drive stepper motors or servo motors to provide highly accurate movements, or, in older designs, by motors through a series of step-down gears. Open-loop control works as long as the forces are kept small enough and speeds are not too great. On commercial metalworking machines, closed-loop controls are standard and are required to provide the accuracy, speed, and repeatability demanded.
=== Parts description ===
As the controller hardware evolved, the mills themselves also evolved. One change has been to enclose the entire mechanism in a large box as a safety measure (with safety glass in the doors to permit the operator to monitor the machine's function), often with additional safety interlocks to ensure the operator is far enough from the working piece for safe operation. Most new CNC systems built today are 100% electronically controlled.
CNC-like systems are used for any process that can be described as movements and operations. These include laser cutting, welding, friction stir welding, ultrasonic welding, flame and plasma cutting, bending, spinning, hole-punching, pinning, gluing, fabric cutting, sewing, tape and fiber placement, routing, picking and placing, and sawing.
== History ==
The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the tool or part to follow points fed into the system on punched tape. These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern CNC machine tools that have revolutionized machining processes.
== Today ==
CNC is now used very widely across manufacturing: beyond traditional milling and turning, many other machines and pieces of equipment are fitted with corresponding CNC controls, greatly improving manufacturing quality and efficiency. The latest trend in CNC is to combine traditional subtractive manufacturing with additive manufacturing (3D printing) to create a new manufacturing method, hybrid additive-subtractive manufacturing (HASM). Another trend is the integration of AI with large numbers of sensors, with the goal of achieving flexible manufacturing.
== Examples of CNC machines ==
== Other CNC tools ==
Many other tools have CNC variants, including:
== Tool/machine crashing ==
In CNC, a "crash" occurs when the machine moves in a way that is harmful to the machine, tools, or parts being machined, sometimes resulting in bending or breakage of cutting tools, accessory clamps, vises, and fixtures, or causing damage to the machine itself by bending guide rails, breaking drive screws, or causing structural components to crack or deform under strain. A mild crash may not damage the machine or tools but may damage the part being machined so that it must be scrapped. Many CNC tools have no inherent sense of the absolute position of the table or tools when turned on; they must be manually "homed" or "zeroed" to have any reference to work from. These reference points serve only to locate the part to be worked on and impose no hard motion limit on the mechanism, so it is often possible to drive the machine outside the physical bounds of its drive mechanism, resulting in a collision with itself or damage to the drive mechanism. Many machines implement control parameters limiting axis motion past a certain limit, in addition to physical limit switches. However, these parameters can often be changed by the operator.
Many CNC tools also do not know anything about their working environment. Machines may have load sensing systems on spindle and axis drives, but some do not. They blindly follow the machining code provided and it is up to an operator to detect if a crash is either occurring or about to occur, and for the operator to manually abort the active process. Machines equipped with load sensors can stop axis or spindle movement in response to an overload condition, but this does not prevent a crash from occurring. It may only limit the damage resulting from the crash. Some crashes may not ever overload any axis or spindle drives.
If the drive system is weaker than the machine's structural integrity, then the drive system simply pushes against the obstruction, and the drive motors "slip in place". The machine tool may not detect the collision or the slipping, so for example the tool should now be at 210mm on the X-axis, but is, in fact, at 32mm where it hit the obstruction and kept slipping. All of the next tool motions will be off by −178mm on the X-axis, and all future motions are now invalid, which may result in further collisions with clamps, vises, or the machine itself. This is common in open-loop stepper systems but is not possible in closed-loop systems unless mechanical slippage between the motor and drive mechanism has occurred. Instead, in a closed-loop system, the machine will continue to attempt to move against the load until either the drive motor goes into an overload condition or a servo motor fails to get to the desired position.
Collision detection and avoidance are possible, through the use of absolute position sensors (optical encoder strips or disks) to verify that motion occurred, or torque sensors or power-draw sensors on the drive system to detect abnormal strain when the machine should just be moving and not cutting, but these are not a common component of most hobby CNC tools. Instead, most hobby CNC tools simply rely on the assumed accuracy of stepper motors that rotate a specific number of degrees in response to magnetic field changes. It is often assumed the stepper is perfectly accurate and never missteps, so tool position monitoring simply involves counting the number of pulses sent to the stepper over time. An alternate means of stepper position monitoring is usually not available, so crash or slip detection is not possible.
Commercial CNC metalworking machines use closed-loop feedback controls for axis movement. In a closed-loop system, the controller monitors the actual position of each axis with an absolute or incremental encoder. Proper control programming will reduce the possibility of a crash, but it is still up to the operator and programmer to ensure that the machine is operated safely. However, during the 2000s and 2010s, the software for machining simulation has been maturing rapidly, and it is no longer uncommon for the entire machine tool envelope (including all axes, spindles, chucks, turrets, tool holders, tailstocks, fixtures, clamps, and stock) to be modeled accurately with 3D solid models, which allows the simulation software to predict fairly accurately whether a cycle will involve a crash. Although such simulation is not new, its accuracy and market penetration are changing considerably because of computing advancements.
== Numerical precision and equipment backlash ==
Within the numerical systems of CNC programming, the code generator can assume that the controlled mechanism is always perfectly accurate, or that precision tolerances are identical for all cutting or movement directions. While the common use of ball screws on most modern NC machines eliminates the vast majority of backlash, it still must be taken into account. CNC tools with a large amount of mechanical backlash can still be highly precise if the drive or cutting mechanism is only driven to apply cutting force from one direction, and all driving systems are pressed tightly together in that one cutting direction. However, a CNC device with high backlash and a dull cutting tool can lead to cutter chatter and possible workpiece gouging. The backlash also affects the precision of some operations involving axis movement reversals during cutting, such as the milling of a circle, where axis motion is sinusoidal. However, this can be compensated for if the amount of backlash is precisely known by linear encoders or manual measurement.
The high backlash mechanism itself is not necessarily relied on to be repeatedly precise for the cutting process, but some other reference object or precision surface may be used to zero the mechanism, by tightly applying pressure against the reference and setting that as the zero references for all following CNC-encoded motions. This is similar to the manual machine tool method of clamping a micrometer onto a reference beam and adjusting the Vernier dial to zero using that object as the reference.
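The software side of backlash compensation, adding the known backlash to each commanded move that reverses direction, can be sketched as follows (illustrative logic invented for this article, not taken from any particular controller):

```python
def compensate_backlash(moves, backlash):
    """Add the measured backlash to every commanded relative move that
    reverses direction, so the slack in the mechanism is taken up before
    the tool reaches the programmed position."""
    compensated, prev_dir = [], 0
    for move in moves:
        direction = (move > 0) - (move < 0)
        if direction != 0 and prev_dir not in (0, direction):
            move += direction * backlash  # reversal: take up the slack
        if direction != 0:
            prev_dir = direction
        compensated.append(move)
    return compensated

# Relative moves along one axis; the two reversals receive the extra 0.02 mm.
print(compensate_backlash([10.0, 5.0, -8.0, -2.0, 4.0], 0.02))
# [10.0, 5.0, -8.02, -2.0, 4.02]
```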
== Positioning control system ==
In numerical control systems, the position of the tool is defined by a set of instructions called the part program. Positioning control is handled using either an open-loop or a closed-loop system. In an open-loop system, communication takes place in one direction only: from the controller to the motor. In a closed-loop system, feedback is provided to the controller so that it can correct for errors in position, velocity, and acceleration, which can arise due to variations in load or temperature. Open-loop systems are generally cheaper but less accurate. Stepper motors can be used in both types of systems, while servo motors can only be used in closed systems.
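The practical difference between the two schemes can be shown with a toy simulation (the loss model and all numbers here are invented for this sketch): an open-loop command that slips leaves a permanent error, while a closed-loop controller that measures position corrects it away.

```python
# Toy comparison of open- vs closed-loop positioning (illustrative only).
# The axis "loses" a fraction of every commanded motion; the closed-loop
# controller measures the actual position and corrects, the open-loop
# controller cannot tell anything went wrong.

def move_axis(position, command, loss=0.0):
    """Apply a commanded increment; 'loss' models slippage or missed steps."""
    return position + command * (1.0 - loss)

target = 100.0

# Open loop: issue the full move blindly; any slip goes undetected.
open_pos = move_axis(0.0, target, loss=0.05)

# Closed loop: measure, then command the remaining error each cycle.
closed_pos = 0.0
for _ in range(50):
    error = target - closed_pos          # feedback from the encoder
    closed_pos = move_axis(closed_pos, error, loss=0.05)

print(round(open_pos, 2))    # 95.0 -- a permanent position error
print(round(closed_pos, 4))  # converges to ~100.0
```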
=== Cartesian coordinates ===
The G & M code positions are all based on a three-dimensional Cartesian coordinate system, the same kind of plane often seen when graphing in mathematics. This system is required to map out the machine tool paths and any other actions that need to happen at a specific coordinate. Absolute coordinates are the ones most commonly used for machines; the origin, the (0,0,0) point, is set on the stock material to give a starting point or "home position" before starting the actual machining.
== Coding ==
=== G-codes ===
G-codes are used to command specific movements of the machine, such as machine moves or drilling functions. The majority of G-code programs start with a percent (%) symbol on the first line, followed by an "O" with a numerical name for the program (e.g. "O0001") on the second line, and end with another percent (%) symbol on the last line of the program. The format for a G-code is the letter G followed by two to three digits; for example G01. G-codes differ slightly between a mill and lathe application, for example:
[G00 Rapid Motion Positioning]
[G01 Linear Interpolation Motion]
[G02 Circular Interpolation Motion-Clockwise]
[G03 Circular Interpolation Motion-Counter Clockwise]
[G04 Dwell (Group 00) Mill]
[G10 Set offsets (Group 00) Mill]
[G12 Circular Pocketing-Clockwise]
[G13 Circular Pocketing-Counter Clockwise]
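For illustration, a minimal milling program in the format described above might look like the following. This is a hypothetical example; the exact codes supported and their syntax vary between controllers.

```gcode
%
O0001 (SAMPLE PROGRAM)
G00 X0 Y0 Z5.0 (rapid move to a safe position above the stock)
M03 S1000 (start spindle clockwise)
G01 Z-1.0 F100 (feed straight down into the stock)
G01 X25.0 F200 (linear cut along the X axis)
G02 X15.0 Y10.0 I0 J10.0 (clockwise circular arc about center offset I,J)
G00 Z5.0 (rapid retract)
M05 (stop spindle)
M30 (program reset and rewind)
%
```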
=== M-codes ===
M-codes are miscellaneous machine commands that do not command axis motion. The format for an M-code is the letter M followed by two to three digits; for example:
[M01 Operational stop]
[M02 End of Program]
[M03 Start Spindle - Clockwise]
[M04 Start Spindle - Counter Clockwise]
[M05 Stop Spindle]
[M06 Tool Change]
[M07 Coolant on mist coolant]
[M08 Flood coolant on]
[M09 Coolant off]
[M10 Chuck open]
[M11 Chuck close]
[M12 Spindle up]
[M13 BOTH M03&M08 Spindle clockwise rotation & flood coolant]
[M14 BOTH M04&M08 Spindle counter clockwise rotation & flood coolant]
[M15 BOTH M05&M09 Spindle stop and Flood coolant off]
[M16 Special tool call]
[M19 Spindle orientate]
[M29 DNC mode]
[M30 Program reset & rewind]
[M38 Door open]
[M39 Door close]
[M40 Spindle gear at middle]
[M41 Low gear select]
[M42 High gear select]
[M53 Retract Spindle] (raises tool spindle above current position to allow operator to do whatever they would need to do)
[M68 Hydraulic chuck close]
[M69 Hydraulic chuck open]
[M78 Tailstock advancing]
[M79 Tailstock reversing]
=== Example ===
Having the correct speeds and feeds in the program provides for a more efficient and smoother product run. Incorrect speeds and feeds will cause damage to the tool, machine spindle, and even the product. The quickest and simplest way to find these numbers would be to use a calculator that can be found online. A formula can also be used to calculate the proper speeds and feeds for a material. These values can be found online or in Machinery's Handbook.
== See also ==
Automatic tool changer
Binary cutter location
CNC plunge milling
Computer-aided technologies
Computer-aided engineering (CAE)
Coordinate-measuring machine (CMM)
Design for manufacturability
Direct numerical control (DNC)
EIA RS-274
EIA RS-494
Gerber format
Home automation
Maslow CNC
Multiaxis machining
Optical tracer
Part program
Robotics
Touch probe
List of computer-aided manufacturing software
== References ==
== Further reading ==
Brittain, James (1992), Alexanderson: Pioneer in American Electrical Engineering, Johns Hopkins University Press, ISBN 0-8018-4228-X.
Holland, Max (1989), When the Machine Stopped: A Cautionary Tale from Industrial America, Boston: Harvard Business School Press, ISBN 978-0-87584-208-0, OCLC 246343673.
Noble, David F. (1984), Forces of Production: A Social History of Industrial Automation, New York, New York, US: Knopf, ISBN 978-0-394-51262-4, LCCN 83048867.
Reintjes, J. Francis (1991), Numerical Control: Making a New Technology, Oxford University Press, ISBN 978-0-19-506772-9.
Weisberg, David, The Engineering Design Revolution (PDF), archived from the original (PDF) on 7 July 2010.
Wildes, Karl L.; Lindgren, Nilo A. (1985), A Century of Electrical Engineering and Computer Science at MIT, MIT Press, ISBN 0-262-23119-0.
Herrin, Golden E. "Industry Honors The Inventor Of NC", Modern Machine Shop, 12 January 1998.
Siegel, Arnold. "Automatic Programming of Numerically Controlled Machine Tools", Control Engineering, Volume 3 Issue 10 (October 1956), pp. 65–70.
Smid, Peter (2008), CNC Programming Handbook (3rd ed.), New York: Industrial Press, ISBN 9780831133474, LCCN 2007045901.
Pagarigan, Christopher Jun (Vini). CNC Infomatic, Automotive Design & Production. Edmonton, Alberta, Canada.
Fitzpatrick, Michael (2019), "Machining and CNC Technology".
== External links ==
Media related to Computer numerical control at Wikimedia Commons
In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it – for example, both a latitude and longitude are required to locate a point on the surface of a sphere. The plane is an example of a two-dimensional Euclidean space. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces.
In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space.
The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space.
== In mathematics ==
In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. Similarly, a surface is of dimension two, even if embedded in three-dimensional space.
The dimension of Euclidean n-space En is n. When trying to generalize to other types of spaces, one is faced with the question "what makes En n-dimensional?" One answer is that to cover a fixed ball in En by small balls of radius ε, one needs on the order of ε−n such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For example, the boundary of a ball in En looks locally like En-1 and this leads to the notion of the inductive dimension. While these notions agree on En, they turn out to be different when one looks at more general spaces.
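The ε-covering argument can be turned into a numerical estimate, the box-counting dimension: count the grid cells of side ε needed to cover the set, and fit the growth rate of that count as ε shrinks. The sketch below is an illustration written for this article; it recovers a dimension close to 1 for a densely sampled line segment.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the Minkowski (box-counting) dimension of a point set:
    count the grid cells of side eps that the set touches, then fit the
    slope of log N(eps) versus log(1/eps)."""
    counts = []
    for eps in epsilons:
        cells = np.unique(np.floor(points / eps).astype(int), axis=0)
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# A densely sampled straight segment in the plane should come out near 1.
t = np.linspace(0.0, 1.0, 20000)
segment = np.column_stack([t, 2.0 * t])
print(box_counting_dimension(segment, [0.1, 0.05, 0.02, 0.01]))  # close to 1.0
```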
A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract has four dimensions", mathematicians usually express this as: "The tesseract has dimension 4", or: "The dimension of the tesseract is 4".
Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry.
The rest of this section examines some of the more important mathematical definitions of dimension.
=== Vector spaces ===
The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension.
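As a concrete illustration (a sketch using NumPy; the vectors are invented for the example), the dimension of the subspace spanned by a finite set of vectors is the rank of the matrix having them as columns, i.e. the cardinality of any basis extracted from them:

```python
import numpy as np

# Vectors invented for the example: v3 depends linearly on v1 and v2, so the
# three vectors span a subspace of dimension 2, not 3.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2

# The dimension of the span equals the rank of the matrix of column vectors.
dim = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))
print(dim)  # 2
```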
For the non-free case, this generalizes to the notion of the length of a module.
=== Manifolds ===
Every connected topological manifold has a well-defined dimension: such a manifold is locally homeomorphic to Euclidean n-space, and the number n is the manifold's dimension.
For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point.
In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases n > 4 are simplified by having extra space in which to "work", and the cases n = 3 and 4 are in some senses the most difficult. This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied.
==== Complex dimension ====
The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number (x + iy) has a real part x and an imaginary part y, in which x and y are both real numbers; hence, the complex dimension is half the real dimension.
Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension.
=== Varieties ===
The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of the variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety.
An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains {\displaystyle V_{0}\subsetneq V_{1}\subsetneq \cdots \subsetneq V_{d}} of sub-varieties of the given algebraic set (the length of such a chain is the number of "{\displaystyle \subsetneq }").
Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient stack [V/G] has dimension m − n.
=== Krull dimension ===
The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n being a sequence
{\displaystyle {\mathcal {P}}_{0}\subsetneq {\mathcal {P}}_{1}\subsetneq \cdots \subsetneq {\mathcal {P}}_{n}}
of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety.
For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0.
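A standard worked example (not from the article above): the polynomial ring k[x, y] in two variables over a field k has Krull dimension 2, witnessed by the maximal chain of prime ideals

```latex
(0) \;\subsetneq\; (x) \;\subsetneq\; (x,\, y),
```

which has length 2; this matches the dimension of the affine plane, of which k[x, y] is the coordinate ring.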
=== Topological spaces ===
For any normal topological space X, the Lebesgue covering dimension of X is defined to be the smallest integer n for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than n + 1 elements. In this case dim X = n. For X a manifold, this coincides with the dimension mentioned above. If no such integer n exists, then the dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1 if and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open".
An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general, one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension −1.
Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.
=== Hausdorff dimension ===
The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values.
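The box-counting variant can be sketched numerically (a standard example, not from the article; points are encoded as integers so that box indices are exact): the middle-thirds Cantor set meets 2^k boxes of side 3^(-k), giving dimension log 2 / log 3 ≈ 0.6309.

```python
import math
from itertools import product

# Approximate the middle-thirds Cantor set by the 2**depth points whose first
# `depth` base-3 digits lie in {0, 2}; encode each point x = n / 3**depth as
# the integer n so box membership is computed exactly.
depth = 8
codes = [sum(d * 3 ** (depth - 1 - i) for i, d in enumerate(digits))
         for digits in product((0, 2), repeat=depth)]

# N(3**-k): number of boxes of side 3**-k meeting the set; here N = 2**k.
counts = [len({n // 3 ** (depth - k) for n in codes}) for k in range(1, depth + 1)]
dim_est = math.log(counts[-1]) / math.log(3 ** depth)
print(round(dim_est, 4))  # 0.6309, a non-integer dimension
```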
=== Hilbert spaces ===
Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide.
== In physics ==
=== Spatial dimensions ===
Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies; i.e., moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.)
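The decomposition of movements into the three basic directions can be sketched numerically (illustrative NumPy code; the direction names are simply the standard basis of R^3):

```python
import numpy as np

# Illustrative basis directions: right, forward, up (the standard basis).
right = np.array([1.0, 0.0, 0.0])
forward = np.array([0.0, 1.0, 0.0])
up = np.array([0.0, 0.0, 1.0])

# "Diagonally upward and forward" is a linear combination of up and forward,
# and moving down is moving up a negative distance.
diagonal = up + forward
down = -up

# Any displacement decomposes uniquely onto the three basis directions.
move = 2 * right - 3 * forward + 0.5 * up
coeffs = np.linalg.solve(np.column_stack([right, forward, up]), move)
print(coeffs)
```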
=== Time ===
A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction.
The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy).
The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold known as spacetime, and, in the special flat case, as Minkowski space. Time is different from the spatial dimensions in that it applies to all of them: change in the first, second, and third spatial dimensions, as well as in theoretical ones such as a fourth spatial dimension, takes place in time. Time is not, however, present at a single geometric point of absolute singularity, since an infinitely small point can undergo no change and therefore admits no time. Just as an object moves through positions in space, it also moves through positions in time. In this sense, time is what moves any object to change.
=== Additional dimensions ===
In physics, three dimensions of space and one of time are the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory, which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" (compactified) at such tiny scales as to be effectively invisible to current experiments.
In 1921, Kaluza–Klein theory presented a 5D spacetime including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular, when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However, at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza–Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building.
In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a (3 + 1)-dimensional subspace. Thus, the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume.
Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea, it would be because three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration.
Extra dimensions are said to be universal if all fields are equally free to propagate within them.
== In computer graphics and spatial data ==
Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, Computer-aided design, and Geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions:
Point (0-dimensional), a single coordinate in a Cartesian coordinate system.
Line or Polyline (1-dimensional) usually represented as an ordered list of points sampled from a continuous line, whereupon the software is expected to interpolate the intervening shape of the line as straight- or curved-line segments.
Polygon (2-dimensional) usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior.
Surface (3-dimensional) represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior.
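A minimal sketch of such a primitive hierarchy (the class names and fields are illustrative, not those of any particular GIS or CAD schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Point:            # 0-dimensional: a single coordinate
    x: float
    y: float

@dataclass
class Polyline:         # 1-dimensional: ordered sample points; software
    vertices: List[Point]  # interpolates the shape between them

@dataclass
class Polygon:          # 2-dimensional: a closed boundary line partitioning
    boundary: Polyline  # the plane (first and last vertex coincide)

@dataclass
class Polyhedron:       # 3-dimensional: connected polygon faces bounding
    faces: List[Polygon]  # a volume

ring = Polyline([Point(0, 0), Point(1, 0), Point(1, 1), Point(0, 0)])
triangle = Polygon(ring)
print(len(triangle.boundary.vertices))  # 4
```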
Frequently in these systems, especially GIS and Cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road implies a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines).
== Further reading ==
Murty, Katta G. (2014). "1. Systems of Simultaneous Linear Equations" (PDF). Computational and Algorithmic Linear Algebra and n-Dimensional Geometry. World Scientific Publishing. doi:10.1142/8261. ISBN 978-981-4366-62-5.
Abbott, Edwin A. (1884). Flatland: A Romance of Many Dimensions. London: Seely & Co.
—. Flatland: ... Project Gutenberg.
—; Stewart, Ian (2008). The Annotated Flatland: A Romance of Many Dimensions. Basic Books. ISBN 978-0-7867-2183-2.
Banchoff, Thomas F. (1996). Beyond the Third Dimension: Geometry, Computer Graphics, and Higher Dimensions. Scientific American Library. ISBN 978-0-7167-6015-3.
Pickover, Clifford A. (2001). Surfing through Hyperspace: Understanding Higher Universes in Six Easy Lessons. Oxford University Press. ISBN 978-0-19-992381-6.
Rucker, Rudy (2014) [1984]. The Fourth Dimension: Toward a Geometry of Higher Reality. Courier Corporation. ISBN 978-0-486-77978-2.
Kaku, Michio (1994). Hyperspace, a Scientific Odyssey Through the 10th Dimension. Oxford University Press. ISBN 978-0-19-286189-4.
Krauss, Lawrence M. (2005). Hiding in the Mirror. Viking Press. ISBN 978-0-670-03395-9.
== External links ==
Copeland, Ed (2009). "Extra Dimensions". Sixty Symbols. Brady Haran for the University of Nottingham.
In mathematics, geometric measure theory (GMT) is the study of geometric properties of sets (typically in Euclidean space) through measure theory. It allows mathematicians to extend tools from differential geometry to a much larger class of surfaces that are not necessarily smooth.
== History ==
Geometric measure theory was born out of the desire to solve Plateau's problem (named after Joseph Plateau), which asks if for every smooth closed curve in {\displaystyle \mathbb {R} ^{3}} there exists a surface of least area among all surfaces whose boundary equals the given curve. Such surfaces mimic soap films.
The problem had remained open since it was posed in 1760 by Lagrange. It was solved independently in the 1930s by Jesse Douglas and Tibor Radó under certain topological restrictions. In 1960 Herbert Federer and Wendell Fleming used the theory of currents, with which they were able to solve the orientable Plateau's problem analytically without topological restrictions, thus sparking geometric measure theory. Later Jean Taylor, building on work of Fred Almgren, proved Plateau's laws for the kinds of singularities that can occur in these more general soap films and soap bubble clusters.
== Important notions ==
The following objects are central in geometric measure theory:
Hausdorff measure and Hausdorff dimension
Rectifiable sets (or Radon measures), which are sets with the least possible regularity required to admit approximate tangent spaces.
Characterization of rectifiability through existence of approximate tangents, densities, projections, etc.
Orthogonal projections, Kakeya sets, Besicovitch sets
Uniform rectifiability
Rectifiability and uniform rectifiability of (subsets of) metric spaces, e.g. SubRiemannian manifolds, Carnot groups, Heisenberg groups, etc.
Connections to singular integrals, Fourier transform, Frostman measures, harmonic measures, etc.
Currents, a generalization of the concept of oriented manifolds, possibly with boundary.
Flat chains, an alternative generalization of the concept of manifolds, possibly with boundary.
Caccioppoli sets (also known as sets of locally finite perimeter), a generalization of the concept of manifolds on which the divergence theorem applies.
Plateau type minimization problems from calculus of variations
The following theorems and concepts are also central:
The area formula, which generalizes the concept of change of variables in integration.
The coarea formula, which generalizes and adapts Fubini's theorem to geometric measure theory.
The isoperimetric inequality, which states that the smallest possible circumference for a given area is that of a round circle.
Flat convergence, which generalizes the concept of manifold convergence.
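In the plane, the isoperimetric inequality can be written L² ≥ 4πA for a closed curve of length L enclosing area A, with equality only for the circle. A quick numeric check (a sketch; the deficit function is named for this example only):

```python
import math

# Isoperimetric deficit L^2 - 4*pi*A: nonnegative for every closed plane
# curve, and zero exactly for the round circle.
def isoperimetric_deficit(perimeter: float, area: float) -> float:
    return perimeter ** 2 - 4 * math.pi * area

r = 1.0
circle_deficit = isoperimetric_deficit(2 * math.pi * r, math.pi * r ** 2)
s = 1.0
square_deficit = isoperimetric_deficit(4 * s, s ** 2)  # 16 - 4*pi > 0
print(circle_deficit, square_deficit)
```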
== Examples ==
The Brunn–Minkowski inequality for the n-dimensional volumes of convex bodies K and L,
{\displaystyle \mathrm {vol} {\big (}(1-\lambda )K+\lambda L{\big )}^{1/n}\geq (1-\lambda )\mathrm {vol} (K)^{1/n}+\lambda \,\mathrm {vol} (L)^{1/n},}
can be proved on a single page and quickly yields the classical isoperimetric inequality. The Brunn–Minkowski inequality also leads to Anderson's theorem in statistics. The proof of the Brunn–Minkowski inequality predates modern measure theory; the development of measure theory and Lebesgue integration allowed connections to be made between geometry and analysis, to the extent that in an integral form of the Brunn–Minkowski inequality known as the Prékopa–Leindler inequality the geometry seems almost entirely absent.
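For axis-aligned boxes the Minkowski combination (1 − λ)K + λL is again a box whose side lengths are the corresponding blends, so the inequality can be checked directly, where it reduces to the AM–GM inequality on the side lengths (a sketch; the boxes are invented for the example):

```python
import math

# If K and L are axis-aligned boxes with side lengths a_i and b_i, then
# (1-lam)K + lam*L is the box with sides (1-lam)*a_i + lam*b_i.
def bm_sides(a, b, lam):
    return [(1 - lam) * ai + lam * bi for ai, bi in zip(a, b)]

def vol_root(sides):
    # n-th root of the volume, n = number of sides
    return math.prod(sides) ** (1.0 / len(sides))

a, b, lam = [1.0, 4.0, 2.0], [3.0, 1.0, 5.0], 0.3
lhs = vol_root(bm_sides(a, b, lam))
rhs = (1 - lam) * vol_root(a) + lam * vol_root(b)
print(lhs >= rhs)  # True
```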
== See also ==
Caccioppoli set
Coarea formula
Currents
Herbert Federer
Osgood curve
== References ==
Federer, Herbert; Fleming, Wendell H. (1960), "Normal and integral currents", Annals of Mathematics, II, 72 (4): 458–520, doi:10.2307/1970227, JSTOR 1970227, MR 0123260, Zbl 0187.31301. The first paper of Federer and Fleming illustrating their approach to the theory of perimeters based on the theory of currents.
Federer, Herbert (1969), Geometric measure theory, series Die Grundlehren der mathematischen Wissenschaften, vol. Band 153, New York: Springer-Verlag New York Inc., pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325
Federer, H. (1978), "Colloquium lectures on geometric measure theory", Bull. Amer. Math. Soc., 84 (3): 291–338, doi:10.1090/S0002-9904-1978-14462-0
Fomenko, Anatoly T. (1990), Variational Principles in Topology (Multidimensional Minimal Surface Theory), Mathematics and its Applications (Book 42), Springer, Kluwer Academic Publishers, ISBN 978-0792302308
Gardner, Richard J. (2002), "The Brunn-Minkowski inequality", Bull. Amer. Math. Soc. (N.S.), 39 (3): 355–405 (electronic), doi:10.1090/S0273-0979-02-00941-2, ISSN 0273-0979, MR 1898210
Mattila, Pertti (1999), Geometry of Sets and Measures in Euclidean Spaces, London: Cambridge University Press, p. 356, ISBN 978-0-521-65595-8
Morgan, Frank (2009), Geometric measure theory: A beginner's guide (Fourth ed.), San Diego, California: Academic Press Inc., pp. viii+249, ISBN 978-0-12-374444-9, MR 2455580
Taylor, Jean E. (1976), "The structure of singularities in soap-bubble-like and soap-film-like minimal surfaces", Annals of Mathematics, Second Series, 103 (3): 489–539, doi:10.2307/1970949, JSTOR 1970949, MR 0428181.
O'Neil, T.C. (2001) [1994], "Geometric measure theory", Encyclopedia of Mathematics, EMS Press
== External links ==
Peter Mörters' GMT page
Toby O'Neil's GMT page with references
In algebraic geometry, the problem of resolution of singularities asks whether every algebraic variety V has a resolution, which is a non-singular variety W with a proper birational map W→V. For varieties over fields of characteristic 0, this was proved by Heisuke Hironaka in 1964; while for varieties of dimension at least 4 over fields of characteristic p, it is an open problem.
== Definitions ==
Originally the problem of resolution of singularities was to find a nonsingular model for the function field of a variety X, in other words a complete non-singular variety X′ with the same function field. In practice it is more convenient to ask for a different condition as follows: a variety X has a resolution of singularities if we can find a non-singular variety X′ and a proper birational map from X′ to X. The condition that the map is proper is needed to exclude trivial solutions, such as taking X′ to be the subvariety of non-singular points of X.
More generally, it is often useful to resolve the singularities of a variety X embedded into a larger variety W. Suppose we have a closed embedding of X into a regular variety W. A strong desingularization of X is given by a proper birational morphism from a regular variety W′ to W subject to some of the following conditions (the exact choice of conditions depends on the author):
The strict transform X′ of X is regular, and transverse to the exceptional locus of the resolution morphism (so in particular it resolves the singularities of X).
The map from the strict transform X′ to X is an isomorphism away from the singular points of X.
W′ is constructed by repeatedly blowing up regular closed subvarieties of W or, more strongly, regular subvarieties of X, transverse to the exceptional locus of the previous blowings up.
The construction of W′ is functorial for smooth morphisms to W and embeddings of W into a larger variety. (It cannot be made functorial for all (not necessarily smooth) morphisms in any reasonable way.)
The morphism from X′ to X does not depend on the embedding of X in W. Or in general, the sequence of blowings up is functorial with respect to smooth morphisms.
Hironaka showed that there is a strong desingularization satisfying the first three conditions above whenever X is defined over a field of characteristic 0, and his construction was improved by several authors (see below) so that it satisfies all conditions above.
== Resolution of singularities of curves ==
Every algebraic curve has a unique nonsingular projective model, which means that all resolution methods are essentially the same because they all construct this model. In higher dimensions this is no longer true: varieties can have many different nonsingular projective models.
Kollár (2007) lists about 20 ways of proving resolution of singularities of curves.
=== Newton's method ===
Resolution of singularities of curves was essentially first proved by Newton (1676), who showed the existence of Puiseux series for a curve from which resolution follows easily.
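For instance (a standard example, not from the article), for the cuspidal cubic the Puiseux series terminates and immediately exhibits a smooth parametrization:

```latex
y^{2} = x^{3}
\quad\Longrightarrow\quad
y = \pm x^{3/2},
\qquad
(x,\, y) = (t^{2},\, t^{3}).
```

The parameter t identifies the normalization of the curve with the affine line, which is nonsingular.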
=== Riemann's method ===
Riemann constructed a smooth Riemann surface from the function field of a complex algebraic curve, which gives a resolution of its singularities. This can be done over more general fields by using the set of discrete valuation rings of the field as a substitute for the Riemann surface.
=== Albanese's method ===
Albanese's method consists of taking a curve that spans a projective space of sufficiently large dimension (more than twice the degree of the curve) and repeatedly projecting down from singular points to projective spaces of smaller dimension. This method extends to higher-dimensional varieties, and shows that any n-dimensional variety has a projective model with singularities of multiplicity at most n!. For a curve, n = 1, so the multiplicity is at most 1 and there are no singular points.
=== Normalization ===
Muhly & Zariski (1939) gave a one-step method of resolving singularities of a curve by taking the normalization of the curve. Normalization removes all singularities in codimension 1, so it works for curves but not in higher dimensions.
=== Valuation rings ===
Another one-step method of resolving singularities of a curve is to take a space of valuation rings of the function field of the curve. This space can be made into a nonsingular projective curve birational to the original curve.
=== Blowing up ===
Repeatedly blowing up the singular points of a curve will eventually resolve the singularities. The main task with this method is to find a way to measure the complexity of a singularity and to show that blowing up improves this measure. There are many ways to do this. For example, one can use the arithmetic genus of the curve.
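As a standard example (not from the article), a single blow-up of the origin resolves the nodal cubic y² = x²(x + 1): in the chart y = tx of the blow-up,

```latex
t^{2}x^{2} = x^{2}(x + 1)
\;\Longrightarrow\;
t^{2} = x + 1,
```

and the strict transform t² = x + 1 is smooth, with the two branches of the node separated to the points (x, t) = (0, ±1). The arithmetic genus drops from 1 to 0, illustrating the kind of complexity measure that blowing up improves.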
=== Noether's method ===
Noether's method takes a plane curve and repeatedly applies quadratic transformations (determined by a singular point and two points in general position). Eventually this produces a plane curve whose only singularities are ordinary multiple points (all tangent lines have multiplicity two).
=== Bertini's method ===
Bertini's method is similar to Noether's method. It starts with a plane curve, and repeatedly applies birational transformations to the plane to improve the curve. The birational transformations are more complicated than the quadratic transformations used in Noether's method, but produce the better result that the only singularities are ordinary double points.
== Resolution of singularities of surfaces ==
Surfaces have many different nonsingular projective models (unlike the case of curves where the nonsingular projective model is unique). However a surface still has a unique minimal resolution, that all others factor through (all others are resolutions of it). In higher dimensions there need not be a minimal resolution.
There were several attempts to prove resolution for surfaces over the complex numbers by Del Pezzo (1892), Levi (1899), Severi (1914), Chisini (1921), and Albanese (1924), but Zariski (1935, chapter I section 6) points out that none of these early attempts are complete, and all are vague (or even wrong) at some critical point of the argument. The first rigorous proof was given by Walker (1935), and an algebraic proof for all fields of characteristic 0 was given by Zariski (1939). Abhyankar (1956) gave a proof for surfaces of non-zero characteristic. Resolution of singularities has also been shown for all excellent 2-dimensional schemes (including all arithmetic surfaces) by Lipman (1978).
=== Zariski's method ===
Zariski's method of resolution of singularities for surfaces is to repeatedly alternate normalizing the surface (which kills codimension 1 singularities) with blowing up points (which makes codimension 2 singularities better, but may introduce new codimension 1 singularities). Although this will resolve the singularities of surfaces by itself, Zariski used a more roundabout method: he first proved a local uniformization theorem showing that every valuation of a surface could be resolved, then used the compactness of the Zariski–Riemann surface to show that it is possible to find a finite set of surfaces such that the center of each valuation is simple on at least one of these surfaces, and finally by studying birational maps between surfaces showed that this finite set of surfaces could be replaced by a single non-singular surface.
=== Jung's method ===
By applying strong embedded resolution for curves, Jung (1908) reduces to a surface with only rather special singularities (abelian quotient singularities) which are then dealt with explicitly. The higher-dimensional version of this method is de Jong's method.
=== Albanese method ===
In general the analogue of Albanese's method for curves shows that for any variety one can reduce to singularities of order at most n!, where n is the dimension. For surfaces this reduces to the case of singularities of order 2, which are easy enough to do explicitly.
=== Abhyankar's method ===
Abhyankar (1956) proved resolution of singularities for surfaces over a field of any characteristic by proving a local uniformization theorem for valuation rings. The hardest case is valuation rings of rank 1 whose valuation group is a nondiscrete subgroup of the rational numbers. The rest of the proof follows Zariski's method.
=== Hironaka's method ===
Hironaka's method for arbitrary characteristic varieties gives a resolution method for surfaces, which involves repeatedly blowing up points or smooth curves in the singular set.
=== Lipman's method ===
Lipman (1978) showed that a surface Y (a 2-dimensional reduced Noetherian scheme) has a desingularization if and only if its normalization is finite over Y and analytically normal (the completions of its singular points are normal) and has only finitely many singular points. In particular if Y is excellent then it has a desingularization.
His method was to consider normal surfaces Z with a birational proper map to Y and show that there is a minimal one with minimal possible arithmetic genus. He then shows that all singularities of this minimal Z are pseudo rational, and shows that pseudo rational singularities can be resolved by repeatedly blowing up points.
== Resolution of singularities in higher dimensions ==
The problem of resolution of singularities in higher dimensions is notorious for many incorrect published proofs and announcements of proofs that never appeared.
=== Zariski's method ===
For 3-folds the resolution of singularities was proved in characteristic 0 by Zariski (1944). He first proved a theorem about local uniformization of valuation rings, valid for varieties of any dimension over any field of characteristic 0. He then showed that the Zariski–Riemann space of valuations is quasi-compact (for any variety of any dimension over any field), implying that there is a finite family of models of any projective variety such that any valuation has a smooth center over at least one of these models. The final and hardest part of the proof, which uses the fact that the variety is of dimension 3 but which works for all characteristics, is to show that given 2 models one can find a third that resolves the singularities that each of the two given models resolve.
=== Abhyankar's method ===
Abhyankar (1966) proved resolution of singularities for 3-folds in characteristic greater than 6. The restriction on the characteristic arises because Abhyankar shows that it is possible to resolve any singularity of a 3-fold of multiplicity less than the characteristic, and then uses Albanese's method to show that singularities can be reduced to those of multiplicity at most (dimension)! = 3! = 6. Cutkosky (2009) gave a simplified version of Abhyankar's proof.
Cossart and Piltant (2008, 2009) proved resolution of singularities of 3-folds in all characteristics, by proving local uniformization in dimension at most 3, and then checking that Zariski's proof that this implies resolution for 3-folds still works in the positive characteristic case.
=== Hironaka's method ===
Resolution of singularities in characteristic 0 in all dimensions was first proved by Hironaka (1964). He proved that it was possible to resolve singularities of varieties over fields of characteristic 0 by repeatedly blowing up along non-singular subvarieties, using a very complicated argument by induction on the dimension. Simplified versions of his formidable proof were given by several people, including Bierstone & Milman (1991), Bierstone & Milman (1997), Villamayor (1992), Encinas & Villamayor (1998), Encinas & Hauser (2002), Wlodarczyk (2005), Kollár (2007). Some of the recent proofs are about a tenth of the length of Hironaka's original proof, and are easy enough to give in an introductory graduate course. For an expository account of the theorem, see (Hauser 2003) and for a historical discussion see (Hauser 2000).
=== De Jong's method ===
de Jong (1996) found a different approach to resolution of singularities, generalizing Jung's method for surfaces, which was used by
Bogomolov & Pantev (1996) and by Abramovich & de Jong (1997) to prove resolution of singularities in characteristic 0. De Jong's method gave a weaker result for varieties of all dimensions in characteristic p, which is strong enough to act as a substitute for resolution for many purposes.
De Jong proved that for any variety X over a field there is a dominant proper morphism which preserves the dimension from a regular variety onto X. This need not be a birational map, so is not a resolution of singularities, as it may be generically finite to one and so involves a finite extension of the function field of X. De Jong's idea was to try to represent X as a fibration over a smaller space Y with fibers that are curves (this may involve modifying X), then eliminate the singularities of Y by induction on the dimension, then eliminate the singularities in the fibers.
== Resolution for schemes and status of the problem ==
It is easy to extend the definition of resolution to all schemes. Not all schemes have resolutions of their singularities: Grothendieck & Dieudonné (1965, section 7.9) showed that if a locally Noetherian scheme X has the property that one can resolve the singularities of any finite integral scheme over X, then X must be quasi-excellent. Grothendieck also suggested that the converse might hold: in other words, if a locally Noetherian scheme X is reduced and quasi-excellent, then it is possible to resolve its singularities. When X is defined over a field of characteristic 0 and is Noetherian, this follows from Hironaka's theorem, and when X has dimension at most 2 it was proved by Lipman.
Hauser (2010) gave a survey of work on the unsolved characteristic p resolution problem.
== Method of proof in characteristic zero ==
There are many constructions of strong desingularization but all of them give essentially the same result. In every case the global object (the variety to be desingularized) is replaced by local data (the ideal sheaf of the variety, those of the exceptional divisors, and some orders that record how far the ideal should be resolved in that step). With this local data the centers of blowing-up are defined. The centers are defined locally, and it is therefore a problem to guarantee that they will match up into a global center. This can be done by defining which blowings-up are allowed to resolve each ideal. Done appropriately, this will make the centers match automatically. Another way is to define a local invariant depending on the variety and the history of the resolution (the previous local centers) so that the centers consist of the maximum locus of the invariant. The definition is made so that this choice is meaningful, giving smooth centers transversal to the exceptional divisors.
In either case the problem is reduced to resolving the singularities of the tuple formed by the ideal sheaf and the extra data (the exceptional divisors and the order, d, to which the resolution should go for that ideal). This tuple is called a marked ideal, and the set of points at which the order of the ideal is larger than d is called its co-support. The proof that there is a resolution for marked ideals proceeds by induction on dimension. The induction breaks into two steps:
Functorial desingularization of marked ideals of dimension n − 1 implies functorial desingularization of marked ideals of maximal order of dimension n.
Functorial desingularization of marked ideals of maximal order of dimension n implies functorial desingularization of (a general) marked ideal of dimension n.
Here we say that a marked ideal is of maximal order if at some point of its co-support the order of the ideal is equal to d.
A key ingredient in the strong resolution is the use of the Hilbert–Samuel function of the local rings of the points in the variety. This is one of the components of the resolution invariant.
== Examples ==
=== Multiplicity need not decrease under blowup ===
The most obvious invariant of a singularity is its multiplicity. However this need not decrease under blowup, so it is necessary to use more subtle invariants to measure the improvement.
For example, the rhamphoid cusp y² = x⁵ has a singularity of order 2 at the origin. After blowing up at its singular point it becomes the ordinary cusp y² = x³, which still has multiplicity 2.
It is clear that the singularity has improved, since the degree of the defining polynomial has decreased. This does not happen in general.
An example where it does not is given by the isolated singularity of x² + y³z + z³ = 0 at the origin. Blowing it up gives the singularity x² + y²z + yz³ = 0. It is not immediately obvious that this new singularity is better, as both singularities have multiplicity 2 and are given by the sum of monomials of degrees 2, 3, and 4.
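Both blowup computations above can be checked directly with a computer algebra system. The following sympy sketch substitutes the chart coordinates and divides out the exceptional divisor; the particular charts used here are one convenient choice, not the only one:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Rhamphoid cusp y^2 = x^5: blow up the origin in the chart y -> x*y.
f = y**2 - x**5
strict = sp.cancel(sp.expand(f.subs(y, x*y)) / x**2)   # divide out the exceptional x^2
assert sp.expand(strict - (y**2 - x**3)) == 0          # ordinary cusp, multiplicity still 2

# Surface x^2 + y^3*z + z^3 = 0: blow up the origin, chart where y is exceptional.
g = x**2 + y**3*z + z**3
strict2 = sp.cancel(sp.expand(g.subs({x: y*x, z: y*z})) / y**2)
assert sp.expand(strict2 - (x**2 + y**2*z + y*z**3)) == 0
```

Both strict transforms agree with the equations quoted above.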
=== Blowing up the most singular points does not work ===
A natural idea for improving singularities is to blow up the locus of the "worst" singular points. The Whitney umbrella x² = y²z has singular set the z-axis, most of whose points are ordinary double points, but there is a more complicated pinch point singularity at the origin, so blowing up the worst singular points suggests that one should start by blowing up the origin. However blowing up the origin reproduces the same singularity on one of the coordinate charts. So blowing up the (apparently) "worst" singular points does not improve the singularity. Instead the singularity can be resolved by blowing up along the z-axis.
There are algorithms that work by blowing up the "worst" singular points in some sense, such as (Bierstone & Milman 1997), but this example shows that the definition of the "worst" points needs to be quite subtle.
For more complicated singularities, such as x² = yᵐzⁿ, which is singular along x = yz = 0, blowing up the worst singularity at the origin produces the singularities x² = yᵐ⁺ⁿ⁻²zⁿ and x² = yᵐzᵐ⁺ⁿ⁻², which are worse than the original singularity if m and n are both at least 3.
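The same substitution trick verifies both claims: blowing up the origin of the Whitney umbrella reproduces it in one chart, and blowing up the origin of x² = yᵐzⁿ worsens the exponents. A sympy sketch, with the chart choices again one convenient option:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Whitney umbrella x^2 = y^2*z: blow up the origin, chart where z is exceptional.
f = x**2 - y**2*z
strict = sp.cancel(sp.expand(f.subs({x: z*x, y: z*y})) / z**2)
assert sp.expand(strict - (x**2 - y**2*z)) == 0   # the same singularity reappears

# x^2 = y^m*z^n with m = n = 3: the chart where y is exceptional gets worse.
m = n = 3
g = x**2 - y**m*z**n
strict2 = sp.cancel(sp.expand(g.subs({x: y*x, z: y*z})) / y**2)
assert sp.expand(strict2 - (x**2 - y**(m + n - 2)*z**n)) == 0
```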
=== Incremental resolution procedures need memory ===
A natural way to resolve singularities is to repeatedly blow up some canonically chosen smooth subvariety. This runs into the following problem. The singular set of x² = y²z² is the pair of lines given by the y- and z-axes. The only reasonable varieties to blow up are the origin, one of these two axes, or the whole singular set (both axes). However the whole singular set cannot be used since it is not smooth, and choosing one of the two axes breaks the symmetry between them so is not canonical. This means we have to start by blowing up the origin, but this reproduces the original singularity, so we seem to be going round in circles.
The solution to this problem is that although blowing up the origin does not change the type of the singularity, it does give a subtle improvement: it breaks the symmetry between the two singular axes because one of them is an exceptional divisor for a previous blowup, so it is now permissible to blow up just one of these. However, in order to exploit this the resolution procedure needs to treat these 2 singularities differently, even though they are locally the same. This is sometimes done by giving the resolution procedure some memory, so the center of the blowup at each step depends not only on the singularity, but on the previous blowups used to produce it.
=== Resolutions are not functorial ===
Some resolution methods (in characteristic 0) are functorial for all smooth morphisms.
However it is not possible to find a strong resolution functorial for all (possibly non-smooth) morphisms. An example is given by the map from the affine plane A² to the conical singularity x² + y² = z² taking (X, Y) to (2XY, X² − Y², X² + Y²). The XY-plane is already nonsingular so should not be changed by resolution, and any resolution of the conical singularity factorizes through the minimal resolution given by blowing up the singular point. However the rational map from the XY-plane to this blowup does not extend to a regular map.
=== Minimal resolutions need not exist ===
Minimal resolutions (resolutions such that every resolution factors through them) exist in dimensions 1 and 2, but not always in higher dimensions. The Atiyah flop gives an example in 3 dimensions of a singularity with no minimal resolution.
Let Y be the zeros of xy = zw in A⁴, and let V be the blowup of Y at the origin.
The exceptional locus of this blowup is isomorphic to P¹ × P¹, and can be blown down to P¹ in two different ways, giving two small resolutions X₁ and X₂ of Y, neither of which can be blown down any further.
=== Resolutions should not commute with products ===
Kollár (2007, example 3.4.4, page 121) gives the following example showing that one cannot expect a sufficiently good resolution procedure to commute with products. If f:A→B is the blowup of the origin of a quadric cone B in affine 3-space, then f×f:A×A→B×B cannot be produced by an étale local resolution procedure, essentially because the exceptional locus has 2 components that intersect.
=== Singularities of toric varieties ===
Singularities of toric varieties give examples of high-dimensional singularities that are easy to resolve explicitly. A toric variety is defined by a fan, a collection of cones in a lattice. The singularities can be resolved by subdividing each cone into a union of cones each of which is generated by a basis for the lattice, and taking the corresponding toric variety.
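For 2-dimensional cones, "generated by a basis for the lattice" just means that the two primitive generators have determinant ±1, so smoothness of a subdivision can be checked mechanically. A minimal sketch, using as an assumed example the cone generated by (1, 0) and (1, 2), a standard presentation of the A₁ quadric-cone singularity:

```python
# A 2D cone is smooth iff its primitive generators form a lattice basis (|det| = 1).
def det(u, v):
    return u[0] * v[1] - u[1] * v[0]

def is_smooth(u, v):
    return abs(det(u, v)) == 1

u, v = (1, 0), (1, 2)   # cone of the A_1 (quadric cone) singularity
assert not is_smooth(u, v)                     # |det| = 2: singular
w = (1, 1)                                     # insert this ray to subdivide
assert is_smooth(u, w) and is_smooth(w, v)     # both subcones are now smooth
```

Subdividing at the ray through (1, 1) is exactly the toric description of blowing up the singular point of the quadric cone.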
=== Choosing centers that are regular subvarieties of X ===
Construction of a desingularization of a variety X may not produce centers of blowings up that are smooth subvarieties of X. Many constructions of a desingularization of an abstract variety X proceed by locally embedding X in a smooth variety W, considering its ideal in W and computing a canonical desingularization of this ideal. The desingularization of ideals uses the order of the ideal as a measure of how singular the ideal is. The desingularization of the ideal can be made such that one can justify that the local centers patch together to give global centers. This method leads to a proof that is relatively simpler to present, compared to Hironaka's original proof, which uses the Hilbert–Samuel function as the measure of how bad singularities are. For example, the proofs in Villamayor (1992), Encinas & Villamayor (1998), Encinas & Hauser (2002), and Kollár (2007) use this idea. However, this method only ensures centers of blowings up that are regular in W.
The following example shows that this method can produce centers that have non-smooth intersections with the (strict transform of) X. Therefore, the resulting desingularization, when restricted to the abstract variety X, is not obtained by blowing up regular subvarieties of X.
Let X be the subvariety of four-dimensional affine space, with coordinates x, y, z, w, defined by y² − x³ and x⁴ + xz² − w³. The canonical desingularization of the ideal with these generators would blow up the center C₀ given by x = y = z = w = 0. The transform of the ideal in the x-chart is generated by x − y² and y²(y² + z² − w³). The next center of blowing up C₁ is given by x = y = 0. However, the strict transform of X is X₁, which is generated by x − y² and y² + z² − w³. This means that the intersection of C₁ and X₁ is given by x = y = 0 and z² − w³ = 0, which is not regular.
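The chart computation in this example can be verified with sympy: substituting the x-chart coordinates of the blowup of C₀ and factoring out powers of x recovers the stated transforms. This is a sketch of the check only, not of the full desingularization algorithm:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')

f1, f2 = y**2 - x**3, x**4 + x*z**2 - w**3
# x-chart of the blowup of the origin: (x, y, z, w) -> (x, x*y, x*z, x*w)
sub = {y: x*y, z: x*z, w: x*w}
t1, t2 = f1.subs(sub), f2.subs(sub)
# The total transforms factor through powers of the exceptional divisor x = 0:
assert sp.expand(t1 - x**2 * (y**2 - x)) == 0
assert sp.expand(t2 - x**3 * (x + z**2 - w**3)) == 0
# Eliminating x = y^2 from the second factor gives y^2 + z^2 - w^3, whose
# vanishing together with x = y = 0 contains the non-regular locus z^2 = w^3.
```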
To produce centers of blowings up that are regular subvarieties of X, stronger proofs use the Hilbert–Samuel function of the local rings of X rather than the order of its ideal in the local embedding in W.
== Other variants of resolutions of singularities ==
After the resolution, the total transform, the union of the strict transform X and the exceptional divisor, is a variety that can be made, at best, to have simple normal crossing singularities. Then it is natural to consider the possibility of resolving singularities without resolving this type of singularities. The problem is to find a resolution that is an isomorphism over the set of smooth and simple normal crossing points. When X is a divisor, i.e. it can be embedded as a codimension-one subvariety in a smooth variety, the existence of such a strong resolution avoiding simple normal crossing points is known. The general case, and generalizations avoiding different types of singularities, are still not known.
Avoiding certain singularities is impossible. For example, one can't resolve singularities avoiding blowing-up the normal crossings singularities. In fact, to resolve the pinch point singularity the whole singular locus needs to be blown up, including points where normal crossing singularities are present.
== References ==
=== Bibliography ===
Abhyankar, Shreeram (1956), "Local uniformization on algebraic surfaces over ground fields of characteristic p≠0", Annals of Mathematics, Second Series, 63 (3): 491–526, doi:10.2307/1970014, JSTOR 1970014, MR 0078017
Abhyankar, Shreeram S. (1966), Resolution of singularities of embedded algebraic surfaces, Springer Monographs in Mathematics, Acad. Press, doi:10.1007/978-3-662-03580-1, ISBN 3-540-63719-2 (1998 2nd edition)
Abramovich, Dan (2011), "Review of Resolution of singularities and Lectures on resolution of singularities", Bulletin of the American Mathematical Society, 48: 115–122, doi:10.1090/S0273-0979-10-01301-7
Abramovich, D; de Jong, A. J. (1997), "Smoothness, semistability, and toroidal geometry", Journal of Algebraic Geometry, 6 (4): 789–801, arXiv:alg-geom/9603018, Bibcode:1996alg.geom..3018A, MR 1487237
Albanese, G. (1924), "Trasformazione birazionale di una superficie algebrica in un'altra priva di punti multipli", Rend. Circ. Mat. Palermo, 48 (3): 321–332, doi:10.1007/BF03014708, S2CID 122056627
Bierstone, Edward; Milman, Pierre D. (1991), "A simple constructive proof of Canonical Resolution of Singularities", in Mora, T.; Traverso, C. (eds.), Effective Methods in Algebraic Geometry, Progress in Mathematics, vol. 94, Boston: Birkhäuser, pp. 11–30, doi:10.1007/978-1-4612-0441-1_2, ISBN 978-1-4612-6761-4
Bierstone, Edward; Milman, Pierre D. (1997), "Canonical desingularization in characteristic zero by blowing up the maximum strata of a local invariant", Invent. Math., 128 (2): 207–302, arXiv:alg-geom/9508005, Bibcode:1997InMat.128..207B, doi:10.1007/s002220050141, MR 1440306, S2CID 119128818
Bierstone, Edward; Milman, Pierre D. (2007), "Functoriality in resolution of singularities", Publications of the Research Institute for Mathematical Sciences, 44 (2), arXiv:math/0702375, Bibcode:2007math......2375B
Bierstone, Edward; Milman, Pierre D. (2012), "Resolution except for minimal singularities I", Advances in Mathematics, 231 (5): 3022–3053, arXiv:1107.5595, doi:10.1016/j.aim.2012.08.002, S2CID 119702658
Bogomolov, Fedor A.; Pantev, Tony G. (1996), "Weak Hironaka theorem", Mathematical Research Letters, 3 (3): 299–307, arXiv:alg-geom/9603019, doi:10.4310/mrl.1996.v3.n3.a1, S2CID 14010069
Chisini, O. (1921), "La risoluzione delle singolarità di una superficie", Mem. Acad. Bologna, 8
Cossart, Vincent; Piltant, Olivier (2008), "Resolution of singularities of threefolds in positive characteristic. I. Reduction to local uniformization on Artin-Schreier and purely inseparable coverings" (PDF), Journal of Algebra, 320 (3): 1051–1082, doi:10.1016/j.jalgebra.2008.03.032, MR 2427629
Cossart, Vincent; Piltant, Olivier (2009), "Resolution of singularities of threefolds in positive characteristic. II" (PDF), Journal of Algebra, 321 (7): 1836–1976, doi:10.1016/j.jalgebra.2008.11.030, MR 2494751
Cutkosky, Steven Dale (2004), Resolution of Singularities, Providence, RI: American Math. Soc., ISBN 0-8218-3555-6
Cutkosky, Steven Dale (2009), "Resolution of singularities for 3-folds in positive characteristic", Amer. J. Math., 131 (1): 59–127, arXiv:math/0606530, doi:10.1353/ajm.0.0036, JSTOR 40068184, MR 2488485, S2CID 2139305
Danilov, V.I. (2001) [1994], "Resolution of singularities", Encyclopedia of Mathematics, EMS Press
de Jong, A. J. (1996), "Smoothness, semi-stability and alterations", Inst. Hautes Études Sci. Publ. Math., 83: 51–93, doi:10.1007/BF02698644, S2CID 53581802
Del Pezzo, Pasquale (1892). "Intorno ai punti singolari delle superficie algebriche". Rendiconti del Circolo Matematico di Palermo.
Ellwood, David; Hauser, Herwig; Mori, Shigefumi; Schicho, Josef (12 December 2014). The Resolution of Singular Algebraic Varieties (PDF). American Mathematical Soc. ISBN 9780821889824.
Encinas, S.; Hauser, Herwig (2002), "Strong resolution of singularities in characteristic zero", Comment. Math. Helv., 77 (4): 821–845, arXiv:math/0211423, doi:10.1007/PL00012443, S2CID 9511067
Encinas, S.; Villamayor, O. (1998), "Good points and constructive resolution of singularities", Acta Math., 181 (1): 109–158, doi:10.1007/BF02392749, MR 1654779
Grothendieck, A.; Dieudonné, J. (1965), "Eléments de géométrie algébrique", Publ. Math. IHÉS, 24
Hauser, Herwig (1998), "Seventeen obstacles for resolution of singularities", Singularities (Oberwolfach, 1996), Progr. Math., vol. 162, Basel, Boston, Berlin: Birkhäuser, pp. 289–313, MR 1652479
Hauser, Herwig (2000), "Resolution of singularities 1860-1999.", in Hauser, Herwig; Lipman, Joseph; Oort, Frans; Quirós, Adolfo (eds.), Resolution of singularities (Obergurgl, 1997), Progr. Math., vol. 181, Birkhäuser, pp. 5–36, arXiv:math/0508332, doi:10.1007/978-3-0348-8399-3, ISBN 0-8176-6178-6
Hauser, Herwig (2003), "The Hironaka theorem on resolution of singularities (or: A proof we always wanted to understand)", Bull. Amer. Math. Soc. (N.S.), 40 (3): 323–403, doi:10.1090/S0273-0979-03-00982-0
Hauser, Herwig (2010), "On the problem of resolution of singularities in positive characteristic (Or: a proof we are still waiting for)", Bulletin of the American Mathematical Society, New Series, 47 (1): 1–30, doi:10.1090/S0273-0979-09-01274-9, MR 2566444
Kollár, János (2000), Hauser, Herwig; Lipman, J.; Oort, F.; Quirós, A. (eds.), Resolution of singularities, Progress in Mathematics, vol. 181, Birkhäuser Verlag, arXiv:math/0508332, doi:10.1007/978-3-0348-8399-3, ISBN 978-3-7643-6178-5, MR 1748614
Hironaka, Heisuke (1964), "Resolution of singularities of an algebraic variety over a field of characteristic zero. I", Ann. of Math., 2, 79 (1): 109–203, doi:10.2307/1970486, JSTOR 1970486, MR 0199184 and part II, pp. 205–326, JSTOR 1970547
Kollár, János (2007), Lectures on Resolution of Singularities, Princeton: Princeton University Press, ISBN 978-0-691-12923-5 (similar to his Resolution of Singularities – Seattle Lecture)
Jung, H. W. E. (1908), "Darstellung der Funktionen eines algebraischen Körpers zweier unabhängigen Veränderlichen x,y in der Umgebung x=a, y= b", Journal für die Reine und Angewandte Mathematik, 133: 289–314, doi:10.1515/crll.1908.133.289, S2CID 116911985
Levi, B. (1899), "Risoluzione delle singolarita puntualli delle superficie algebriche", Atti. Acad. Torino, 34
Lipman, Joseph (1975), "Introduction to resolution of singularities", Algebraic geometry (Humboldt State Univ., Arcata, Calif., 1974), Proc. Sympos. Pure Math., vol. 29, Providence, R.I.: Amer. Math. Soc., pp. 187–230, MR 0389901
Lipman, Joseph (1978), "Desingularization of two-dimensional schemes", Ann. Math., 2, 107 (1): 151–207, doi:10.2307/1971141, JSTOR 1971141, MR 0491722
Muhly, H. T.; Zariski, O. (1939), "The Resolution of Singularities of an Algebraic Curve", Amer. J. Math., 61 (1): 107–114, doi:10.2307/2371389, JSTOR 2371389, MR 1507363
Newton, Isaac (1676), Letter to Oldenburg dated 1676 Oct 24, reprinted in Newton, Isaac (1960), The correspondence of Isaac Newton, vol. II, Cambridge University press, pp. 126–127
Severi, Francesco (20 December 1914), "Transformazione birazionale di una superficie algebrica qualunque in una priva di punti multipli", Proceedings of the Royal Academy of Lincei: Reports (PDF) (in Italian), vol. 23, Rome: Reale Accademia dei Lincei, retrieved 16 December 2023
Villamayor, Orlando U. (1995), "On good points and a new canonical algorithm of resolution of singularities", in Fabrizio Broglia; Margherita Galbiati; Alberto Tognoli (eds.), Real Analytic and Algebraic Geometry: Proceedings of the International Conference, Trento (Italy), September 21-25th, 1992, Berlin, New York: De Gruyter, pp. 277–292, doi:10.1515/9783110881271.277, ISBN 978-3-11-088127-1
Walker, Robert J. (1935), "Reduction of the Singularities of an Algebraic Surface", Annals of Mathematics, Second Series, 36 (2): 336–365, doi:10.2307/1968575, JSTOR 1968575
Wlodarczyk, Jaroslaw (2005), "Simple Hironaka resolution in characteristic zero", J. Amer. Math. Soc., 18 (4): 779–822, doi:10.1090/S0894-0347-05-00493-5
Zariski, Oscar (1935), Abhyankar, Shreeram S.; Lipman, Joseph; Mumford, David (eds.), Algebraic surfaces, Classics in mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-58658-6, MR 0469915
Zariski, Oscar (1939), "The reduction of the singularities of an algebraic surface", Ann. of Math., 2, 40 (3): 639–689, Bibcode:1939AnMat..40..639Z, doi:10.2307/1968949, JSTOR 1968949
Zariski, Oscar (1944), "Reduction of the singularities of algebraic three dimensional varieties", Ann. of Math., 2, 45 (3): 472–542, doi:10.2307/1969189, JSTOR 1969189, MR 0011006
== External links ==
Resolution of singularities I, a video of a talk by Hironaka.
Resolution of singularities in algebraic geometry, a video of a talk by Hironaka.
Some pictures of singularities and their resolutions
SINGULAR: a computer algebra system with packages for resolving singularities.
Notes and lectures for the Working Week on Resolution of Singularities Tirol 1997, September 7–14, 1997, Obergurgl, Tirol, Austria
Lecture notes from the Summer School on Resolution of Singularities, June 2006, Trieste, Italy.
desing - A computer program for resolution of singularities
Hauser's home page with several expository papers on resolution of singularities | Wikipedia/Resolution_of_singularities |
Compositio Mathematica is a monthly peer-reviewed mathematics journal established by L.E.J. Brouwer in 1935. It is owned by the Foundation Compositio Mathematica, and since 2004 it has been published on behalf of the Foundation by the London Mathematical Society in partnership with Cambridge University Press. According to the Journal Citation Reports, the journal has a 2020 2-year impact factor of 1.456 and a 2020 5-year impact factor of 1.696.
The editors-in-chief are Fabrizio Andreatta, David Holmes, Bruno Klingler, and Éric Vasserot.
== Early history ==
The journal was established by L. E. J. Brouwer in response to his dismissal from Mathematische Annalen in 1928. An announcement of the new journal was made in a 1934 issue of the American Mathematical Monthly. In 1940, the publication of the journal was suspended due to the German occupation of the Netherlands.
== References ==
== External links ==
Official website
Online archive (1935-1996) | Wikipedia/Algebraic_Geometry_(journal) |
In mathematics, noncommutative topology is a term used for the relationship between topological and C*-algebraic concepts. The term has its origins in the Gelfand–Naimark theorem, which implies the duality of the category of locally compact Hausdorff spaces and the category of commutative C*-algebras. Noncommutative topology is related to analytic noncommutative geometry.
== Examples ==
The premise behind noncommutative topology is that a noncommutative C*-algebra can be treated like the algebra of complex-valued continuous functions on a 'noncommutative space' which does not exist classically. Several topological properties can be formulated as properties for the C*-algebras without making reference to commutativity or the underlying space, and so have an immediate generalization.
Among these are:
compactness (unital)
σ-compactness (σ-unital)
dimension (real or stable rank)
connectedness (projectionless)
extremally disconnected spaces (AW*-algebras)
Individual elements of a commutative C*-algebra correspond with continuous functions. And so certain types of functions can correspond to certain properties of a C*-algebra. For example, self-adjoint elements of a commutative C*-algebra correspond to real-valued continuous functions. Also, projections (i.e. self-adjoint idempotents) correspond to indicator functions of clopen sets.
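A minimal numerical illustration of these correspondences, using diagonal matrices as a stand-in for complex-valued functions on a three-point space (an assumed toy model, not part of the general theory):

```python
import numpy as np

# Functions on a 3-point space, modeled as diagonal 3x3 matrices: a commutative C*-algebra.
# A projection p = p* = p^2 forces each diagonal entry to be 0 or 1,
# i.e. p is the indicator function of a (clopen) subset of the space.
p = np.diag([1.0, 0.0, 1.0])        # indicator of the subset {0, 2}
assert np.allclose(p, p.conj().T)   # self-adjoint
assert np.allclose(p, p @ p)        # idempotent

# A self-adjoint element corresponds to a real-valued function:
a = np.diag([2.0, -1.0, 0.5])
assert np.allclose(a, a.conj().T)
```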
Categorical constructions lead to some examples. For example, the coproduct of spaces is the disjoint union and thus corresponds to the direct sum of algebras, which is the product of C*-algebras. Similarly, product topology corresponds to the coproduct of C*-algebras, the tensor product of algebras. In a more specialized setting, compactifications of topologies correspond to unitizations of algebras. So the one-point compactification corresponds to the minimal unitization of C*-algebras, the Stone–Čech compactification corresponds to the multiplier algebra, and corona sets correspond with corona algebras.
There are certain examples of properties where multiple generalizations are possible and it is not clear which is preferable. For example, probability measures can correspond either to states or tracial states. Since all states are vacuously tracial states in the commutative case, it is not clear whether the tracial condition is necessary to be a useful generalization.
== K-theory ==
One of the major examples of this idea is the generalization of topological K-theory to noncommutative C*-algebras in the form of operator K-theory.
A further development in this is a bivariant version of K-theory called KK-theory, which has a composition product
KK(A, B) × KK(B, C) → KK(A, C)
of which the ring structure in ordinary K-theory is a special case. The product gives the structure of a category to KK. It has been related to correspondences of algebraic varieties.
== References == | Wikipedia/Noncommutative_topology |
In mathematics, an algebraic stack is a vast generalization of algebraic spaces, or schemes, which are foundational for studying moduli theory. Many moduli spaces are constructed using techniques specific to algebraic stacks, such as Artin's representability theorem, which is used to construct the moduli space of pointed algebraic curves
ℳ_{g,n}
and the moduli stack of elliptic curves. Originally, they were introduced by Alexander Grothendieck to keep track of automorphisms on moduli spaces, a technique which allows for treating these moduli spaces as if their underlying schemes or algebraic spaces are smooth. After Grothendieck developed the general theory of descent, and Giraud the general theory of stacks, the notion of algebraic stacks was defined by Michael Artin.
== Definition ==
=== Motivation ===
One of the motivating examples of an algebraic stack is to consider a groupoid scheme (R, U, s, t, m) over a fixed scheme S. For example, if R = μ_n ×_S A^n_S (where μ_n is the group scheme of roots of unity), U = A^n_S, s = pr_U is the projection map, t is the group action ζ_n · (x_1, …, x_n) = (ζ_n x_1, …, ζ_n x_n), and m is the multiplication map

m : (μ_n ×_S A^n_S) ×_{μ_n ×_S A^n_S} (μ_n ×_S A^n_S) → μ_n ×_S A^n_S

on μ_n. Then, given an S-scheme π : X → S, the groupoid scheme (R(X), U(X), s, t, m) forms a groupoid (where R, U are their associated functors). Moreover, this construction is functorial on (Sch/S), forming a contravariant 2-functor

(R(−), U(−), s, t, m) : (Sch/S)^op → Cat,

where Cat is the 2-category of small categories. Another way to view this is as a fibred category [U/R] → (Sch/S) through the Grothendieck construction. Getting the correct technical conditions, such as the Grothendieck topology on (Sch/S), gives the definition of an algebraic stack. For instance, in the associated groupoid of k-points for a field k, over the origin object 0 ∈ A^n_S(k) there is the groupoid of automorphisms μ_n(k). However, in order to get an algebraic stack from [U/R], and not just a stack, there are additional technical hypotheses required for [U/R].
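The groupoid structure on points can be modeled concretely. Here is a toy numerical sketch for n = 1 and μ₃ acting on complex points of the affine line, with arrows (ζ, x) from x to ζ·x; this finite floating-point model is for illustration only, not the scheme-theoretic construction:

```python
import cmath

n = 3
mu_n = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]  # cube roots of unity

# An arrow of the action groupoid is a pair (zeta, x): source x, target zeta*x.
def source(arrow): return arrow[1]
def target(arrow): return arrow[0] * arrow[1]

def compose(b, a):  # b after a, defined only when source(b) == target(a)
    assert abs(source(b) - target(a)) < 1e-9
    return (b[0] * a[0], a[1])

a = (mu_n[1], 2.0)            # 2 -> zeta*2
b = (mu_n[2], target(a))      # zeta*2 -> zeta^3*2 = 2
c = compose(b, a)
assert abs(c[0] - 1) < 1e-9   # zeta^2 * zeta = 1: a loop at the point 2
assert abs(target(c) - 2.0) < 1e-9
```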
=== Algebraic stacks ===
It turns out that using the fppf-topology (faithfully flat and locally of finite presentation) on (Sch/S), denoted (Sch/S)_fppf, forms the basis for defining algebraic stacks. Then, an algebraic stack is a fibered category

p : 𝒳 → (Sch/S)_fppf

such that:

𝒳 is a category fibered in groupoids, meaning the overcategory for some π : X → S is a groupoid;
the diagonal map Δ : 𝒳 → 𝒳 ×_S 𝒳 of fibered categories is representable as algebraic spaces;
there exists an fppf scheme U → S and an associated 1-morphism of fibered categories 𝒰 → 𝒳 which is surjective and smooth, called an atlas.
==== Explanation of technical conditions ====
===== Using the fppf topology =====
First of all, the fppf topology is used because it behaves well with respect to descent. For example, if there are schemes $X,Y\in\operatorname{Ob}(\mathrm{Sch}/S)$ and $X\to Y$ can be refined to an fppf cover of $Y$, then if $X$ is flat, locally of finite type, or locally of finite presentation, $Y$ has this property as well. This kind of idea can be extended further by considering properties local either on the target or on the source of a morphism $f:X\to Y$. For a cover $\{X_i\to X\}_{i\in I}$ we say a property $\mathcal{P}$ is local on the source if $f:X\to Y$ has $\mathcal{P}$ if and only if each $X_i\to Y$ has $\mathcal{P}$. There is an analogous notion on the target, called local on the target: given a cover $\{Y_i\to Y\}_{i\in I}$, $f:X\to Y$ has $\mathcal{P}$ if and only if each $X\times_Y Y_i\to Y_i$ has $\mathcal{P}$. For the fppf topology, being an immersion is local on the target. In addition to the previous properties local on the source for the fppf topology, $f$ being universally open is also local on the source. Also, being locally Noetherian and being Jacobson are local on the source and target for the fppf topology. This does not hold in the fpqc topology, making it not as "nice" in terms of technical properties. Even so, algebraic stacks over the fpqc topology still have their uses, such as in chromatic homotopy theory, because the moduli stack of formal group laws $\mathcal{M}_{fg}$ is an fpqc-algebraic stack.
===== Representable diagonal =====
By definition, a 1-morphism $f:\mathcal{X}\to\mathcal{Y}$ of categories fibered in groupoids is representable by algebraic spaces if for any fppf morphism $U\to S$ of schemes and any 1-morphism $y:(\mathrm{Sch}/U)_{fppf}\to\mathcal{Y}$, the associated category fibered in groupoids $(\mathrm{Sch}/U)_{fppf}\times_{\mathcal{Y}}\mathcal{X}$ is representable as an algebraic space, meaning there exists an algebraic space $F:(\mathrm{Sch}/S)_{fppf}^{op}\to\mathrm{Sets}$ such that the associated fibered category $\mathcal{S}_F\to(\mathrm{Sch}/S)_{fppf}$ is equivalent to $(\mathrm{Sch}/U)_{fppf}\times_{\mathcal{Y}}\mathcal{X}$. There are a number of equivalent conditions for representability of the diagonal which help give intuition for this technical condition, but one of the main motivations is the following: for a scheme $U$ and objects $x,y\in\operatorname{Ob}(\mathcal{X}_U)$, the sheaf $\operatorname{Isom}(x,y)$ is representable as an algebraic space. In particular, the stabilizer group of any point $x:\operatorname{Spec}(k)\to\mathcal{X}_{\operatorname{Spec}(k)}$ on the stack is representable as an algebraic space.

Another important equivalence of having a representable diagonal is the technical condition that the intersection of any two algebraic spaces in an algebraic stack is an algebraic space. Reformulated using fiber products

$\begin{matrix}Y\times_{\mathcal{X}}Z&\to&Y\\\downarrow&&\downarrow\\Z&\to&\mathcal{X}\end{matrix}$

the representability of the diagonal is equivalent to $Y\to\mathcal{X}$ being representable for every algebraic space $Y$. This is because given morphisms $Y\to\mathcal{X},Z\to\mathcal{X}$ from algebraic spaces, they extend to a map to $\mathcal{X}\times\mathcal{X}$ through the diagonal map. There is an analogous statement for algebraic spaces which gives representability of a sheaf on $(F/S)_{fppf}$ as an algebraic space.

Note that an analogous condition of representability of the diagonal holds for some formulations of higher stacks, where the fiber product is an $(n-1)$-stack for an $n$-stack $\mathcal{X}$.
==== Surjective and smooth atlas ====
===== 2-Yoneda lemma =====
The existence of an fppf scheme $U\to S$ and a 1-morphism of fibered categories $\mathcal{U}\to\mathcal{X}$ which is surjective and smooth depends on defining smooth and surjective morphisms of fibered categories. Here $\mathcal{U}$ is the algebraic stack from the representable functor $h_U:(\mathrm{Sch}/S)_{fppf}^{op}\to\mathrm{Sets}$, upgraded to a category fibered in groupoids in which the categories have only trivial morphisms. This means the set $h_U(T)=\operatorname{Hom}_{(\mathrm{Sch}/S)_{fppf}}(T,U)$ is considered as a category, denoted $h_{\mathcal{U}}(T)$, whose objects are the morphisms $f:T\to U$ in $h_U(T)$ and whose only morphisms are the identities. Hence $h_{\mathcal{U}}:(\mathrm{Sch}/S)_{fppf}^{op}\to\mathrm{Groupoids}$ is a 2-functor of groupoids. Showing this 2-functor is a sheaf is the content of the 2-Yoneda lemma. Using the Grothendieck construction, there is an associated category fibered in groupoids, denoted $\mathcal{U}\to\mathcal{X}$.
===== Representable morphisms of categories fibered in groupoids =====
To say this morphism $\mathcal{U}\to\mathcal{X}$ is smooth or surjective, we have to introduce representable morphisms. A morphism $p:\mathcal{X}\to\mathcal{Y}$ of categories fibered in groupoids over $(\mathrm{Sch}/S)_{fppf}$ is said to be representable if, given an object $T\to S$ in $(\mathrm{Sch}/S)_{fppf}$ and an object $t\in\operatorname{Ob}(\mathcal{Y}_T)$, the 2-fibered product $(\mathrm{Sch}/T)_{fppf}\times_{t,\mathcal{Y}}\mathcal{X}_T$ is representable by a scheme. Then, the morphism of categories fibered in groupoids $p$ is smooth and surjective if the associated morphism $(\mathrm{Sch}/T)_{fppf}\times_{t,\mathcal{Y}}\mathcal{X}_T\to(\mathrm{Sch}/T)_{fppf}$ of schemes is smooth and surjective.
=== Deligne–Mumford stacks ===
Algebraic stacks, also known as Artin stacks, are by definition equipped with a smooth surjective atlas $\mathcal{U}\to\mathcal{X}$, where $\mathcal{U}$ is the stack associated to some scheme $U\to S$. If the atlas $\mathcal{U}\to\mathcal{X}$ is moreover étale, then $\mathcal{X}$ is said to be a Deligne–Mumford stack. The subclass of Deligne–Mumford stacks is useful because it provides the correct setting for many natural stacks, such as the moduli stack of algebraic curves. In addition, they are strict enough that objects represented by points in Deligne–Mumford stacks have no infinitesimal automorphisms. This is very important because infinitesimal automorphisms make studying the deformation theory of Artin stacks very difficult. For example, the deformation theory of the Artin stack $BGL_n=[*/GL_n]$, the moduli stack of rank-$n$ vector bundles, has infinitesimal automorphisms controlled partially by the Lie algebra $\mathfrak{gl}_n$. This leads to an infinite sequence of deformations and obstructions in general, which is one of the motivations for studying moduli of stable bundles. Only in the special case of the deformation theory of line bundles $[*/GL_1]=[*/\mathbb{G}_m]$ is the deformation theory tractable, since the associated Lie algebra is abelian.
Note that many stacks cannot be naturally represented as Deligne–Mumford stacks because the étale topology only allows for finite covers; that is, it yields algebraic stacks with finite covers. Because every étale cover is flat and locally of finite presentation, algebraic stacks defined with the fppf topology subsume this theory; but the étale theory is still useful, since many stacks found in nature are of this form, such as the moduli of curves $\mathcal{M}_g$. Also, the differential-geometric analogues of such stacks are called orbifolds. The étale condition implies the 2-functor $B\mu_n:(\mathrm{Sch}/S)^{\text{op}}\to\text{Cat}$ sending a scheme to its groupoid of $\mu_n$-torsors is representable as a stack over the étale topology, but the Picard stack $B\mathbb{G}_m$ of $\mathbb{G}_m$-torsors (equivalently, the category of line bundles) is not. Stacks of this form are representable as stacks over the fppf topology.

Another reason for considering the fppf topology rather than the étale topology is that over characteristic $p$ the Kummer sequence

$0\to\mu_p\to\mathbb{G}_m\to\mathbb{G}_m\to 0$

is exact only as a sequence of fppf sheaves, not as a sequence of étale sheaves.
=== Defining algebraic stacks over other topologies ===
Using other Grothendieck topologies on $(F/S)$ gives alternative theories of algebraic stacks which are either not general enough or don't behave well with respect to exchanging properties from the base of a cover to the total space of a cover. It is useful to recall the following hierarchy of generality

$\text{fpqc}\supset\text{fppf}\supset\text{smooth}\supset\text{étale}\supset\text{Zariski}$

of big topologies on $(F/S)$.
== Structure sheaf ==
The structure sheaf of an algebraic stack is an object pulled back from a universal structure sheaf $\mathcal{O}$ on the site $(\mathrm{Sch}/S)_{fppf}$. This universal structure sheaf is defined as

$\mathcal{O}:(\mathrm{Sch}/S)_{fppf}^{op}\to\mathrm{Rings},\quad U/X\mapsto\Gamma(U,\mathcal{O}_U)$

and the associated structure sheaf on a category fibered in groupoids $p:\mathcal{X}\to(\mathrm{Sch}/S)_{fppf}$ is defined as

$\mathcal{O}_{\mathcal{X}}:=p^{-1}\mathcal{O}$

where $p^{-1}$ comes from the map of Grothendieck topologies. In particular, this means that if $x\in\operatorname{Ob}(\mathcal{X})$ lies over $U$, so $p(x)=U$, then $\mathcal{O}_{\mathcal{X}}(x)=\Gamma(U,\mathcal{O}_U)$. As a sanity check, it's worth comparing this to a category fibered in groupoids coming from an $S$-scheme $X$ for various topologies. For example, if $(\mathcal{X}_{Zar},\mathcal{O}_{\mathcal{X}})=((\mathrm{Sch}/X)_{Zar},\mathcal{O}_X)$ is a category fibered in groupoids over $(\mathrm{Sch}/S)_{fppf}$, then the structure sheaf for an open subscheme $U\to X$ gives

$\mathcal{O}_{\mathcal{X}}(U)=\mathcal{O}_X(U)=\Gamma(U,\mathcal{O}_X)$

so this definition recovers the classical structure sheaf on a scheme. Moreover, for a quotient stack $\mathcal{X}=[X/G]$, the structure sheaf just gives the $G$-invariant sections

$\mathcal{O}_{\mathcal{X}}(U)=\Gamma(U,u^*\mathcal{O}_X)^G$

for $u:U\to X$ in $(\mathrm{Sch}/S)_{fppf}$.
== Examples ==
=== Classifying stacks ===
Many classifying stacks for algebraic groups are algebraic stacks. In fact, for an algebraic group space $G$ over a scheme $S$ which is flat and of finite presentation, the stack $BG$ is algebraic.
== See also ==
Gerbe
Chow group of a stack
Cohomology of a stack
Quotient stack
Sheaf on an algebraic stack
Toric stack
Artin's criterion
Pursuing Stacks
Derived algebraic geometry
== References ==
== External links ==
=== Artin's Axioms ===
https://stacks.math.columbia.edu/tag/07SZ - Look at "Axioms" and "Algebraic stacks"
Artin Algebraization and Quotient Stacks - Jarod Alper
=== Papers ===
Alper, Jarod (2009). "A Guide to the Literature on Algebraic Stacks" (PDF). S2CID 51803452. Archived from the original (PDF) on 2020-02-13.
Hall, Jack; Rydh, David (2014). "The Hilbert stack". Advances in Mathematics. 253: 194–233. arXiv:1011.5484. doi:10.1016/j.aim.2013.12.002. S2CID 55936583.
Behrend, Kai A. (2003). "Derived ℓ-Adic Categories for Algebraic Stacks" (PDF). Memoirs of the American Mathematical Society. 163 (774): 1–93. doi:10.1090/memo/0774. ISBN 978-1-4704-0372-0.
=== Applications ===
Lafforgue, Vincent (2014). "Introduction to chtoucas for reductive groups and to the global Langlands parameterization". arXiv:1404.6416 [math.AG].
Deligne, P.; Rapoport, M. (1973). "Les Schémas de Modules de Courbes Elliptiques". Modular Functions of One Variable II. Lecture Notes in Mathematics. Vol. 349. pp. 143–316. doi:10.1007/978-3-540-37855-6_4. ISBN 978-3-540-06558-6.
Knudsen, Finn F. (1983). "The projectivity of the moduli space of stable curves, II: The stacks
M
g
,
n
{\displaystyle {\mathcal {M}}_{g,n}}
". Mathematica Scandinavica. 52: 161. doi:10.7146/math.scand.a-12001.
Jiang, Yunfeng (2019). "On the construction of moduli stack of projective Higgs bundles over surfaces". arXiv:1911.00250 [math.AG].
=== Other ===
Examples of Stacks
Notes on Grothendieck topologies, fibered categories and descent theory
Notes on algebraic stacks
In mathematics, a sheaf (pl.: sheaves) is a tool for systematically tracking data (such as sets, abelian groups, rings) attached to the open sets of a topological space and defined locally with regard to them. For example, for each open set, the data could be the ring of continuous functions defined on that open set. Such data are well-behaved in that they can be restricted to smaller open sets, and also the data assigned to an open set are equivalent to all collections of compatible data assigned to collections of smaller open sets covering the original open set (intuitively, every datum is the sum of its constituent data).
The field of mathematics that studies sheaves is called sheaf theory.
Sheaves are understood conceptually as general and abstract objects. Their precise definition is rather technical. They are specifically defined as sheaves of sets or as sheaves of rings, for example, depending on the type of data assigned to the open sets.
There are also maps (or morphisms) from one sheaf to another; sheaves (of a specific type, such as sheaves of abelian groups) with their morphisms on a fixed topological space form a category. On the other hand, to each continuous map there is associated both a direct image functor, taking sheaves and their morphisms on the domain to sheaves and morphisms on the codomain, and an inverse image functor operating in the opposite direction. These functors, and certain variants of them, are essential parts of sheaf theory.
Due to their general nature and versatility, sheaves have several applications in topology and especially in algebraic and differential geometry. First, geometric structures such as that of a differentiable manifold or a scheme can be expressed in terms of a sheaf of rings on the space. In such contexts, several geometric constructions such as vector bundles or divisors are naturally specified in terms of sheaves. Second, sheaves provide the framework for a very general cohomology theory, which encompasses also the "usual" topological cohomology theories such as singular cohomology. Especially in algebraic geometry and the theory of complex manifolds, sheaf cohomology provides a powerful link between topological and geometric properties of spaces. Sheaves also provide the basis for the theory of D-modules, which provide applications to the theory of differential equations. In addition, generalisations of sheaves to more general settings than topological spaces, such as the notion of a sheaf on a category with respect to some Grothendieck topology, have provided applications to mathematical logic and to number theory.
== Definitions and examples ==
In many mathematical branches, several structures defined on a topological space $X$ (e.g., a differentiable manifold) can be naturally localised or restricted to open subsets $U\subseteq X$: typical examples include continuous real-valued or complex-valued functions, $n$-times differentiable (real-valued or complex-valued) functions, bounded real-valued functions, vector fields, and sections of any vector bundle on the space. The ability to restrict data to smaller open subsets gives rise to the concept of presheaves. Roughly speaking, sheaves are then those presheaves where local data can be glued to global data.
=== Presheaves ===
Let $X$ be a topological space. A presheaf $\mathcal{F}$ of sets on $X$ consists of the following data:

For each open set $U\subseteq X$, a set $\mathcal{F}(U)$, also denoted $\Gamma(U,\mathcal{F})$. The elements of this set are called the sections of $\mathcal{F}$ over $U$; the sections of $\mathcal{F}$ over $X$ are called the global sections of $\mathcal{F}$.

For each inclusion of open sets $V\subseteq U$, a function $\operatorname{res}_V^U:\mathcal{F}(U)\to\mathcal{F}(V)$. In view of many of the examples below, the morphisms $\operatorname{res}_V^U$ are called restriction morphisms. If $s\in\mathcal{F}(U)$, then its restriction $\operatorname{res}_V^U(s)$ is often denoted $s|_V$ by analogy with restriction of functions.
The restriction morphisms are required to satisfy two additional (functorial) properties:

For every open set $U$ of $X$, the restriction morphism $\operatorname{res}_U^U:\mathcal{F}(U)\to\mathcal{F}(U)$ is the identity morphism on $\mathcal{F}(U)$.

If we have three open sets $W\subseteq V\subseteq U$, then the composite $\operatorname{res}_W^V\circ\operatorname{res}_V^U=\operatorname{res}_W^U$.

Informally, the second axiom says it does not matter whether we restrict to $W$ in one step or restrict first to $V$, then to $W$. A concise functorial reformulation of this definition is given further below.
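The two functorial axioms can be made concrete by modeling a presheaf of functions on a finite space. Below is a hedged, minimal Python sketch (an illustration, not a general library): sections over an open set are dicts from points to values, and restriction literally forgets the values outside the smaller open set; both axioms are then checked directly.

```python
# A minimal model of a presheaf on a finite space: sections over U are
# functions U -> values, stored as dicts, and restriction res^U_V is the
# literal restriction of the dict to the points of V.
X = frozenset({1, 2})
opens = [frozenset(), frozenset({1}), X]   # a topology on X

def res(s, V):
    """Restriction morphism: forget the values outside V."""
    return {x: v for x, v in s.items() if x in V}

s = {1: 3.0, 2: 7.0}          # a section over U = X

# Axiom 1: restricting U to U itself is the identity.
assert res(s, X) == s
# Axiom 2: res^V_W . res^U_V == res^U_W for W <= V <= U.
V, W = frozenset({1}), frozenset()
assert res(res(s, V), W) == res(s, W)
print(res(s, V))   # -> {1: 3.0}
```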
Many examples of presheaves come from different classes of functions: to any $U$, one can assign the set $C^0(U)$ of continuous real-valued functions on $U$. The restriction maps are then just given by restricting a continuous function on $U$ to a smaller open subset $V\subseteq U$, which again is a continuous function. The two presheaf axioms are immediately checked, thereby giving an example of a presheaf. This can be extended to a presheaf of holomorphic functions $\mathcal{H}(-)$ and a presheaf of smooth functions $C^\infty(-)$.

Another common class of examples is assigning to $U$ the set of constant real-valued functions on $U$. This presheaf is called the constant presheaf associated to $\mathbb{R}$ and is denoted $\underline{\mathbb{R}}^{\text{psh}}$.
=== Sheaves ===
Given a presheaf, a natural question to ask is to what extent its sections over an open set $U$ are specified by their restrictions to open subsets of $U$. A sheaf is a presheaf whose sections are, in a technical sense, uniquely determined by their restrictions.
Axiomatically, a sheaf is a presheaf that satisfies both of the following axioms:

(Locality) Suppose $U$ is an open set, $\{U_i\}_{i\in I}$ is an open cover of $U$ with $U_i\subseteq U$ for all $i\in I$, and $s,t\in\mathcal{F}(U)$ are sections. If $s|_{U_i}=t|_{U_i}$ for all $i\in I$, then $s=t$.

(Gluing) Suppose $U$ is an open set, $\{U_i\}_{i\in I}$ is an open cover of $U$ with $U_i\subseteq U$ for all $i\in I$, and $\{s_i\in\mathcal{F}(U_i)\}_{i\in I}$ is a family of sections. If all pairs of sections agree on the overlap of their domains, that is, if $s_i|_{U_i\cap U_j}=s_j|_{U_i\cap U_j}$ for all $i,j\in I$, then there exists a section $s\in\mathcal{F}(U)$ such that $s|_{U_i}=s_i$ for all $i\in I$.

In both of these axioms, the hypothesis on the open cover is equivalent to the assumption that $\bigcup_{i\in I}U_i=U$.
The section $s$ whose existence is guaranteed by axiom 2 is called the gluing, concatenation, or collation of the sections $s_i$. By axiom 1 it is unique. Sections $s_i$ and $s_j$ satisfying the agreement precondition of axiom 2 are often called compatible; thus axioms 1 and 2 together state that any collection of pairwise compatible sections can be uniquely glued together. A separated presheaf, or monopresheaf, is a presheaf satisfying axiom 1.
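Gluing pairwise compatible sections can also be demonstrated on the finite model of a presheaf of functions, where sections are dicts. The hedged Python sketch below (an illustration under that modeling assumption) checks compatibility on overlaps, merges the sections, and confirms the glued section restricts back to each piece.

```python
# Gluing for the "sheaf of functions" on a finite space, with sections
# modeled as dicts from points to values.

def res(s, V):
    """Restriction: keep only the values at points of V."""
    return {x: v for x, v in s.items() if x in V}

def glue(sections):
    """Glue {U_i: s_i} into one section; requires pairwise compatibility."""
    for U_i, s_i in sections.items():
        for U_j, s_j in sections.items():
            overlap = U_i & U_j
            assert res(s_i, overlap) == res(s_j, overlap), "not compatible"
    out = {}
    for s_i in sections.values():
        out.update(s_i)
    return out

U1, U2 = frozenset({1, 2}), frozenset({2, 3})
s1, s2 = {1: "a", 2: "b"}, {2: "b", 3: "c"}   # agree on the overlap {2}
s = glue({U1: s1, U2: s2})
assert res(s, U1) == s1 and res(s, U2) == s2  # the gluing restricts back
print(s)   # -> {1: 'a', 2: 'b', 3: 'c'}
```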
The presheaf of continuous functions mentioned above is a sheaf. This assertion reduces to checking that, given continuous functions $f_i:U_i\to\mathbb{R}$ which agree on the intersections $U_i\cap U_j$, there is a unique continuous function $f:U\to\mathbb{R}$ whose restrictions equal the $f_i$. By contrast, the constant presheaf is usually not a sheaf, as it fails to satisfy the locality axiom on the empty set (this is explained in more detail at constant sheaf).

Presheaves and sheaves are typically denoted by capital letters, $F$ being particularly common, presumably for the French word for sheaf, faisceau. Use of calligraphic letters such as $\mathcal{F}$ is also common.
It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. This observation is used to construct another example which is crucial in algebraic geometry, namely quasi-coherent sheaves. Here the topological space in question is the spectrum of a commutative ring $R$, whose points are the prime ideals $\mathfrak{p}$ in $R$. The open sets $D_f:=\{\mathfrak{p}\subseteq R, f\notin\mathfrak{p}\}$ form a basis for the Zariski topology on this space. Given an $R$-module $M$, there is a sheaf, denoted by $\tilde{M}$, on $\operatorname{Spec} R$ that satisfies

$\tilde{M}(D_f):=M[1/f],$

the localization of $M$ at $f$.
There is another characterization of sheaves that is equivalent to the one previously discussed. A presheaf $\mathcal{F}$ is a sheaf if and only if, for any open $U$ and any open cover $\{U_a\}$ of $U$, $\mathcal{F}(U)$ is the fibre product $\mathcal{F}(U)\cong\mathcal{F}(U_a)\times_{\mathcal{F}(U_a\cap U_b)}\mathcal{F}(U_b)$. This characterization is useful in the construction of sheaves. For example, if $\mathcal{F},\mathcal{G}$ are abelian sheaves, then the kernel of a morphism of sheaves $\mathcal{F}\to\mathcal{G}$ is a sheaf, since projective limits commute with projective limits. On the other hand, the cokernel is not always a sheaf, because inductive limits do not necessarily commute with projective limits. One way to fix this is to consider Noetherian topological spaces; there all open sets are compact, so the cokernel is a sheaf, since finite projective limits commute with inductive limits.
=== Further examples ===
==== Sheaf of sections of a continuous map ====
Any continuous map $f:Y\to X$ of topological spaces determines a sheaf $\Gamma(Y/X)$ on $X$ by setting

$\Gamma(Y/X)(U)=\{s:U\to Y, f\circ s=\operatorname{id}_U\}.$

Any such $s$ is commonly called a section of $f$, and this example is the reason why the elements of $\mathcal{F}(U)$ are generally called sections. This construction is especially important when $f$ is the projection of a fiber bundle onto its base space. For example, the sheaves of smooth functions are the sheaves of sections of the trivial bundle.
Another example: the sheaf of sections of $\mathbb{C}\xrightarrow{\exp}\mathbb{C}\setminus\{0\}$ is the sheaf which assigns to any $U\subseteq\mathbb{C}\setminus\{0\}$ the set of branches of the complex logarithm on $U$.
Given a point $x$ and an abelian group $S$, the skyscraper sheaf $S_x$ is defined as follows: if $U$ is an open set containing $x$, then $S_x(U)=S$. If $U$ does not contain $x$, then $S_x(U)=0$, the trivial group. The restriction maps are either the identity on $S$, if both open sets contain $x$, or the zero map otherwise.
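The case analysis defining the skyscraper sheaf is simple enough to spell out executably. The following hedged Python sketch (a toy model on a two-point space, with the stalk group and maps represented by labels rather than actual group elements) encodes the rule for sections and restriction maps and checks each case.

```python
# Toy model of the skyscraper sheaf S_x on the discrete two-point space
# {1, 2}, supported at x = 1, with stalk group S (here labeled "Z/5").
x = 1
opens = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]

def sections(U):
    """S_x(U) = S if x in U, else the trivial group 0."""
    return "Z/5" if x in U else "0"

def res(U, V):
    """Restriction S_x(U) -> S_x(V) for V a subset of U: identity if both
    opens contain x, the zero map otherwise."""
    return "identity" if (x in U and x in V) else "zero map"

assert sections(frozenset({1, 2})) == "Z/5"
assert sections(frozenset({2})) == "0"
assert res(frozenset({1, 2}), frozenset({1})) == "identity"
assert res(frozenset({1, 2}), frozenset({2})) == "zero map"
assert all(sections(U) in ("Z/5", "0") for U in opens)
```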
==== Sheaves on manifolds ====
On an $n$-dimensional $C^k$-manifold $M$, there are a number of important sheaves, such as the sheaf of $j$-times continuously differentiable functions $\mathcal{O}_M^j$ (with $j\leq k$). Its sections on some open $U$ are the $C^j$-functions $U\to\mathbb{R}$. For $j=k$, this sheaf is called the structure sheaf and is denoted $\mathcal{O}_M$. The nonzero $C^k$ functions also form a sheaf, denoted $\mathcal{O}_X^\times$. Differential forms (of degree $p$) also form a sheaf $\Omega_M^p$. In all these examples, the restriction morphisms are given by restricting functions or forms.
The assignment sending $U$ to the compactly supported functions on $U$ is not a sheaf, since there is, in general, no way to preserve this property by passing to a smaller open subset. Instead, this forms a cosheaf, a dual concept where the restriction maps go in the opposite direction than with sheaves. However, taking the dual of these vector spaces does give a sheaf, the sheaf of distributions.
==== Presheaves that are not sheaves ====
In addition to the constant presheaf mentioned above, which is usually not a sheaf, there are further examples of presheaves that are not sheaves:
Let $X$ be the two-point topological space $\{x,y\}$ with the discrete topology. Define a presheaf $F$ as follows:

$F(\varnothing)=\{\varnothing\},\ F(\{x\})=\mathbb{R},\ F(\{y\})=\mathbb{R},\ F(\{x,y\})=\mathbb{R}\times\mathbb{R}\times\mathbb{R}$

The restriction map $F(\{x,y\})\to F(\{x\})$ is the projection of $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ onto its first coordinate, and the restriction map $F(\{x,y\})\to F(\{y\})$ is the projection onto its second coordinate. $F$ is a presheaf that is not separated: a global section is determined by three numbers, but the values of that section over $\{x\}$ and $\{y\}$ determine only two of those numbers. So while we can glue any two sections over $\{x\}$ and $\{y\}$, we cannot glue them uniquely.
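The failure of locality in this example can be checked in a few lines. The hedged Python sketch below models $F(\{x,y\})=\mathbb{R}^3$ as triples with the two coordinate projections as restriction maps, and exhibits two distinct global sections with identical restrictions to the cover.

```python
# The two-point example: F({x,y}) = R^3, restrictions are the first and
# second coordinate projections. Locality fails: distinct global sections
# can have the same restrictions to {x} and {y}.
res_x = lambda s: s[0]    # restriction F({x,y}) -> F({x})
res_y = lambda s: s[1]    # restriction F({x,y}) -> F({y})

s, t = (1.0, 2.0, 3.0), (1.0, 2.0, 99.0)   # differ only in the third slot
assert s != t
assert res_x(s) == res_x(t) and res_y(s) == res_y(t)
# So the cover {{x}, {y}} cannot distinguish s from t: gluing is not unique.
```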
Let $X=\mathbb{R}$ be the real line, and let $F(U)$ be the set of bounded continuous functions on $U$. This is not a sheaf, because it is not always possible to glue. For example, let $U_i$ be the set of all $x$ such that $|x|<i$. The identity function $f(x)=x$ is bounded on each $U_i$, so we get a section $s_i$ on $U_i$. However, these sections do not glue, because the function $f$ is not bounded on the real line. Consequently $F$ is a presheaf, but not a sheaf. In fact, $F$ is separated because it is a sub-presheaf of the sheaf of continuous functions.
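A small numerical sketch (assumed code, not from the article) illustrates this: the identity function is bounded on each $U_i=(-i,i)$, so it defines a section of the presheaf on each $U_i$, but the bounds grow without limit, so the glued candidate section is not bounded on all of $\mathbb{R}$.

```python
def sup_on_interval(f, a, b, samples=10001):
    # Approximate sup |f| on [a, b] by sampling a uniform grid.
    step = (b - a) / (samples - 1)
    return max(abs(f(a + k * step)) for k in range(samples))

identity = lambda x: x

# Bounded on each U_i: sup |x| on (-i, i) is about i, a finite number.
bounds = [sup_on_interval(identity, -i, i) for i in (1, 10, 100)]
assert all(b < float("inf") for b in bounds)

# But the bounds grow without limit, so the glued section -- the
# identity on all of R -- leaves the presheaf of *bounded* functions.
assert bounds == sorted(bounds) and bounds[-1] >= 100
```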
=== Motivating sheaves from complex analytic spaces and algebraic geometry ===
One of the historical motivations for sheaves has come from studying complex manifolds, complex analytic geometry, and scheme theory in algebraic geometry. In all of these cases, we consider a topological space $X$ together with a structure sheaf $\mathcal{O}$ giving it the structure of a complex manifold, complex analytic space, or scheme. This perspective of equipping a topological space with a sheaf is essential to the theory of locally ringed spaces (see below).
==== Technical challenges with complex manifolds ====
One of the main historical motivations for introducing sheaves was constructing a device that keeps track of holomorphic functions on complex manifolds. For example, on a compact complex manifold $X$ (such as complex projective space or the vanishing locus in projective space of a homogeneous polynomial), the only holomorphic functions $f:X\to\mathbb{C}$ are the constant functions. This means there exist two compact complex manifolds $X,X'$ which are not isomorphic, but nevertheless their rings of global holomorphic functions, denoted $\mathcal{H}(X),\mathcal{H}(X')$, are isomorphic. Contrast this with smooth manifolds, where every manifold $M$ can be embedded inside some $\mathbb{R}^n$, hence its ring of smooth functions $C^\infty(M)$ comes from restricting the smooth functions from $C^\infty(\mathbb{R}^n)$, of which there exist plenty.
Another complexity when considering the ring of holomorphic functions on a complex manifold $X$ is that, given a small enough open set $U\subseteq X$, the holomorphic functions will be isomorphic to $\mathcal{H}(U)\cong\mathcal{H}(\mathbb{C}^n)$. Sheaves are a direct tool for dealing with this complexity, since they make it possible to keep track of the holomorphic structure on the underlying topological space of $X$ on arbitrary open subsets $U\subseteq X$. This means that as $U$ becomes more complex topologically, the ring $\mathcal{H}(U)$ can be expressed by gluing the $\mathcal{H}(U_i)$. Note that this sheaf is sometimes denoted $\mathcal{O}(-)$ or just $\mathcal{O}$, or even $\mathcal{O}_X$ when we want to emphasize the space to which the structure sheaf is associated.
==== Tracking submanifolds with sheaves ====
Another common example of sheaves can be constructed by considering a complex submanifold $Y\hookrightarrow X$. There is an associated sheaf $\mathcal{O}_Y$ which takes an open subset $U\subseteq X$ and gives the ring of holomorphic functions on $U\cap Y$. This kind of formalism was found to be extremely powerful, and it motivates a lot of homological algebra, such as sheaf cohomology, since an intersection theory can be built using these kinds of sheaves from the Serre intersection formula.
== Operations with sheaves ==
=== Morphisms ===
Morphisms of sheaves are, roughly speaking, analogous to functions between them. In contrast to a function between sets, which is simply an assignment of outputs to inputs, morphisms of sheaves are also required to be compatible with the local–global structures of the underlying sheaves. This idea is made precise in the following definition.
Let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves of sets (respectively abelian groups, rings, etc.) on $X$. A morphism $\varphi:\mathcal{F}\to\mathcal{G}$ consists of a morphism $\varphi_U:\mathcal{F}(U)\to\mathcal{G}(U)$ of sets (respectively abelian groups, rings, etc.) for each open set $U$ of $X$, subject to the condition that this morphism is compatible with restrictions. In other words, for every open subset $V$ of an open set $U$, the following diagram is commutative.
$$\begin{array}{rcl}\mathcal{F}(U)&\xrightarrow{\quad\varphi_U\quad}&\mathcal{G}(U)\\[2pt] r_V^U{\Big\downarrow}&&{\Big\downarrow}\,{r'}_V^U\\[2pt] \mathcal{F}(V)&\xrightarrow[{\quad\varphi_V\quad}]{}&\mathcal{G}(V)\end{array}$$
For example, taking the derivative gives a morphism of sheaves on $\mathbb{R}$:
$$\frac{\mathrm{d}}{\mathrm{d}x}\colon \mathcal{O}_{\mathbb{R}}^{n}\to\mathcal{O}_{\mathbb{R}}^{n-1}.$$
Indeed, given an ($n$-times continuously differentiable) function $f:U\to\mathbb{R}$ (with $U$ open in $\mathbb{R}$), the restriction (to a smaller open subset $V$) of its derivative equals the derivative of $f|_V$.
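The compatibility square can be checked mechanically in a toy model. The sketch below (an assumed representation, not from the article) encodes sections of the sheaf of polynomial functions as a pair (interval, coefficient list); restriction shrinks the interval, and $\mathrm{d}/\mathrm{d}x$ acts on coefficients, so $\varphi_V\circ r^U_V = r'^U_V\circ\varphi_U$ holds on the nose.

```python
# Sections modeled as (open interval, polynomial coefficients),
# e.g. ((-2, 2), [1, 3, 5]) is 1 + 3x + 5x^2 on (-2, 2).

def restrict(section, subinterval):
    interval, coeffs = section
    assert subinterval[0] >= interval[0] and subinterval[1] <= interval[1]
    return (subinterval, coeffs)  # same formula, smaller domain

def derivative(section):
    interval, coeffs = section
    # d/dx sum(c_k x^k) = sum(k c_k x^(k-1))
    return (interval, [k * c for k, c in enumerate(coeffs)][1:])

s = ((-2.0, 2.0), [1.0, 3.0, 5.0])
V = (0.0, 1.0)

# Restricting then differentiating equals differentiating then restricting:
assert derivative(restrict(s, V)) == restrict(derivative(s), V)
```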
With this notion of morphism, sheaves of sets (respectively abelian groups, rings, etc.) on a fixed topological space $X$ form a category. The general categorical notions of mono-, epi- and isomorphisms can therefore be applied to sheaves.
A morphism $\varphi\colon\mathcal{F}\to\mathcal{G}$ of sheaves on $X$ is an isomorphism (respectively monomorphism) if and only if there exists an open cover $\{U_\alpha\}$ of $X$ such that the maps $\varphi|_{U_\alpha}\colon\mathcal{F}(U_\alpha)\to\mathcal{G}(U_\alpha)$ are isomorphisms (respectively injective morphisms) of sets (respectively abelian groups, rings, etc.) for all $\alpha$. These statements give examples of how to work with sheaves using local information, but it is important to note that we cannot check whether a morphism of sheaves is an epimorphism in the same manner. Indeed, the statement that the maps on the level of open sets $\varphi_U\colon\mathcal{F}(U)\to\mathcal{G}(U)$ are not always surjective for epimorphisms of sheaves is equivalent to the non-exactness of the global sections functor, or equivalently, to the non-triviality of sheaf cohomology.
=== Stalks of a sheaf ===
The stalk $\mathcal{F}_x$ of a sheaf $\mathcal{F}$ captures the properties of a sheaf "around" a point $x\in X$, generalizing the germs of functions. Here, "around" means that, conceptually speaking, one looks at smaller and smaller neighborhoods of the point. Of course, no single neighborhood will be small enough, which requires considering a limit of some sort. More precisely, the stalk is defined by
$$\mathcal{F}_x=\varinjlim_{U\ni x}\mathcal{F}(U),$$
the direct limit being over all open subsets of $X$ containing the given point $x$. In other words, an element of the stalk is given by a section over some open neighborhood of $x$, and two such sections are considered equivalent if their restrictions agree on a smaller neighborhood.
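On a *finite* topological space the direct limit collapses: every point has a minimal open neighborhood (the intersection of all opens containing it, which is again open), and the stalk is just the sections over that minimal open set. The sketch below (assumed helper code, not from the article) computes this minimal neighborhood on a small example space.

```python
from functools import reduce

# A small space on points {1, 2} with opens {}, {1}, {1, 2}
# (a Sierpinski-style topology).
opens = [frozenset(), frozenset({1}), frozenset({1, 2})]

def minimal_open_containing(x):
    # Intersection of all opens containing x; open again since the
    # space is finite. The stalk at x is F of this set.
    nbhds = [U for U in opens if x in U]
    return reduce(lambda a, b: a & b, nbhds)

# Stalk at 1 is F({1}); stalk at 2 is F({1, 2}).
assert minimal_open_containing(1) == frozenset({1})
assert minimal_open_containing(2) == frozenset({1, 2})
```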
The natural morphism $\mathcal{F}(U)\to\mathcal{F}_x$ takes a section $s$ in $\mathcal{F}(U)$ to its germ $s_x$ at $x$. This generalises the usual definition of a germ.
In many situations, knowing the stalks of a sheaf is enough to control the sheaf itself. For example, whether or not a morphism of sheaves is a monomorphism, epimorphism, or isomorphism can be tested on the stalks. In this sense, a sheaf is determined by its stalks, which are local data. By contrast, the global information present in a sheaf, i.e., the global sections $\mathcal{F}(X)$ on the whole space $X$, typically carries less information. For example, for a compact complex manifold $X$, the global sections of the sheaf of holomorphic functions are just $\mathbb{C}$, since any holomorphic function $X\to\mathbb{C}$ is constant by Liouville's theorem.
=== Turning a presheaf into a sheaf ===
It is frequently useful to take the data contained in a presheaf and express it as a sheaf. It turns out that there is a best possible way to do this. It takes a presheaf $\mathcal{F}$ and produces a new sheaf $a\mathcal{F}$ called the sheafification or sheaf associated to the presheaf $\mathcal{F}$. For example, the sheafification of the constant presheaf (see above) is called the constant sheaf. Despite its name, its sections are locally constant functions.
The sheaf $a\mathcal{F}$ can be constructed using the étalé space $E$ of $\mathcal{F}$, namely as the sheaf of sections of the map $E\to X$. Another construction of the sheaf $a\mathcal{F}$ proceeds by means of a functor $L$ from presheaves to presheaves that gradually improves the properties of a presheaf: for any presheaf $\mathcal{F}$, $L\mathcal{F}$ is a separated presheaf, and for any separated presheaf $\mathcal{F}$, $L\mathcal{F}$ is a sheaf. The associated sheaf $a\mathcal{F}$ is given by $LL\mathcal{F}$.
The idea that the sheaf $a\mathcal{F}$ is the best possible approximation to $\mathcal{F}$ by a sheaf is made precise using the following universal property: there is a natural morphism of presheaves $i\colon\mathcal{F}\to a\mathcal{F}$ such that for any sheaf $\mathcal{G}$ and any morphism of presheaves $f\colon\mathcal{F}\to\mathcal{G}$, there is a unique morphism of sheaves $\tilde{f}\colon a\mathcal{F}\to\mathcal{G}$ such that $f=\tilde{f}i$. In fact, $a$ is the left adjoint functor to the inclusion functor (or forgetful functor) from the category of sheaves to the category of presheaves, and $i$ is the unit of the adjunction. In this way, the category of sheaves turns into a Giraud subcategory of presheaves. This categorical situation is the reason why the sheafification functor appears in constructing cokernels of sheaf morphisms or tensor products of sheaves, but not for kernels, say.
=== Subsheaves, quotient sheaves ===
If $K$ is a subsheaf of a sheaf $F$ of abelian groups, then the quotient sheaf $Q$ is the sheaf associated to the presheaf $U\mapsto F(U)/K(U)$; in other words, the quotient sheaf fits into an exact sequence of sheaves of abelian groups
$$0\to K\to F\to Q\to 0.$$
(This is also called a sheaf extension.)
Let $F,G$ be sheaves of abelian groups. The set $\operatorname{Hom}(F,G)$ of morphisms of sheaves from $F$ to $G$ forms an abelian group (by the abelian group structure of $G$). The sheaf hom of $F$ and $G$, denoted by $\mathcal{Hom}(F,G)$, is the sheaf of abelian groups $U\mapsto\operatorname{Hom}(F|_U,G|_U)$, where $F|_U$ is the sheaf on $U$ given by $(F|_U)(V)=F(V)$ (note that sheafification is not needed here). The direct sum of $F$ and $G$ is the sheaf given by $U\mapsto F(U)\oplus G(U)$, and the tensor product of $F$ and $G$ is the sheaf associated to the presheaf $U\mapsto F(U)\otimes G(U)$.
All of these operations extend to sheaves of modules over a sheaf of rings $A$; the above is the special case when $A$ is the constant sheaf $\underline{\mathbf{Z}}$.
=== Basic functoriality ===
Since the data of a (pre-)sheaf depends on the open subsets of the base space, sheaves on different topological spaces are unrelated to each other in the sense that there are no morphisms between them. However, given a continuous map $f:X\to Y$ between two topological spaces, pushforward and pullback relate sheaves on $X$ to those on $Y$ and vice versa.
==== Direct image ====
The pushforward (also known as direct image) of a sheaf $\mathcal{F}$ on $X$ is the sheaf defined by
$$(f_*\mathcal{F})(V)=\mathcal{F}(f^{-1}(V)).$$
Here $V$ is an open subset of $Y$, so that its preimage is open in $X$ by the continuity of $f$.
This construction recovers the skyscraper sheaf $S_x$ mentioned above:
$$S_x=i_*(S),$$
where $i:\{x\}\to X$ is the inclusion, and $S$ is regarded as a sheaf on the singleton by $S(\{*\})=S,\ S(\emptyset)=\emptyset$.
For a map between locally compact spaces, the direct image with compact support is a subsheaf of the direct image. By definition, $(f_!\mathcal{F})(V)$ consists of those $s\in\mathcal{F}(f^{-1}(V))$ whose support is mapped properly. If $f$ is proper itself, then $f_!\mathcal{F}=f_*\mathcal{F}$, but in general they disagree.
==== Inverse image ====
The pullback or inverse image goes the other way: it produces a sheaf on $X$, denoted $f^{-1}\mathcal{G}$, out of a sheaf $\mathcal{G}$ on $Y$. If $f$ is the inclusion of an open subset, then the inverse image is just a restriction, i.e., it is given by $(f^{-1}\mathcal{G})(U)=\mathcal{G}(U)$ for an open $U$ in $X$. A sheaf $\mathcal{F}$ (on some space $X$) is called locally constant if $X=\bigcup_{i\in I}U_i$ for some open subsets $U_i$ such that the restriction of $\mathcal{F}$ to each of these open subsets is constant. On a wide range of topological spaces $X$, such sheaves are equivalent to representations of the fundamental group $\pi_1(X)$.
For general maps $f$, the definition of $f^{-1}\mathcal{G}$ is more involved; it is detailed at inverse image functor. The stalk is an essential special case of the pullback, in view of the natural identification, where $i$ is as above:
$$\mathcal{G}_x=i^{-1}\mathcal{G}(\{x\}).$$
More generally, stalks satisfy $(f^{-1}\mathcal{G})_x=\mathcal{G}_{f(x)}$.
==== Extension by zero ====
For the inclusion $j:U\to X$ of an open subset, the extension by zero $j_!\mathcal{F}$ (pronounced "j lower shriek of F") of a sheaf $\mathcal{F}$ of abelian groups on $U$ is the sheafification of the presheaf defined by $V\mapsto\mathcal{F}(V)$ if $V\subseteq U$, and $V\mapsto 0$ otherwise.
For a sheaf $\mathcal{G}$ on $X$, this construction is in a sense complementary to $i_*$, where $i:X\setminus U\to X$ is the inclusion of the complement of $U$: $(j_!j^*\mathcal{G})_x=\mathcal{G}_x$ for $x$ in $U$, and the stalk is zero otherwise, while $(i_*i^*\mathcal{G})_x=0$ for $x$ in $U$, and equals $\mathcal{G}_x$ otherwise.
More generally, if $A\subset X$ is a locally closed subset, then there exists an open subset $U$ of $X$ containing $A$ such that $A$ is closed in $U$. Let $f:A\to U$ and $j:U\to X$ be the natural inclusions. Then the extension by zero of a sheaf $\mathcal{F}$ on $A$ is defined by $j_!f_*\mathcal{F}$.
Due to its nice behavior on stalks, the extension by zero functor is useful for reducing sheaf-theoretic questions on $X$ to questions on the strata of a stratification, i.e., a decomposition of $X$ into smaller, locally closed subsets.
== Complements ==
=== Sheaves in more general categories ===
In addition to (pre-)sheaves as introduced above, where $\mathcal{F}(U)$ is merely a set, it is in many cases important to keep track of additional structure on these sections. For example, the sections of the sheaf of continuous functions naturally form a real vector space, and restriction is a linear map between these vector spaces.
Presheaves with values in an arbitrary category $C$ are defined by first considering the category of open sets on $X$ to be the posetal category $O(X)$ whose objects are the open sets of $X$ and whose morphisms are inclusions. Then a $C$-valued presheaf on $X$ is the same as a contravariant functor from $O(X)$ to $C$. Morphisms in this category of functors, also known as natural transformations, are the same as the morphisms defined above, as can be seen by unraveling the definitions.
If the target category $C$ admits all limits, a $C$-valued presheaf is a sheaf if the following diagram is an equalizer for every open cover $\mathcal{U}=\{U_i\}_{i\in I}$ of any open set $U$:
$$F(U)\rightarrow\prod_i F(U_i)\rightrightarrows\prod_{i,j}F(U_i\cap U_j).$$
Here the first map is the product of the restriction maps $\operatorname{res}_{U_i,U}\colon F(U)\to F(U_i)$, and the pair of arrows are the products of the two sets of restrictions $\operatorname{res}_{U_i\cap U_j,U_i}\colon F(U_i)\to F(U_i\cap U_j)$ and $\operatorname{res}_{U_i\cap U_j,U_j}\colon F(U_j)\to F(U_i\cap U_j)$.
If $C$ is an abelian category, this condition can also be rephrased by requiring that there is an exact sequence
$$0\to F(U)\to\prod_i F(U_i)\xrightarrow{\operatorname{res}_{U_i\cap U_j,U_i}-\operatorname{res}_{U_i\cap U_j,U_j}}\prod_{i,j}F(U_i\cap U_j).$$
A particular case of this sheaf condition occurs for $U$ being the empty set and the index set $I$ also being empty. In this case, the sheaf condition requires $\mathcal{F}(\emptyset)$ to be the terminal object in $C$.
=== Ringed spaces and sheaves of modules ===
In several geometrical disciplines, including algebraic geometry and differential geometry, the spaces come along with a natural sheaf of rings, often called the structure sheaf and denoted by $\mathcal{O}_X$. Such a pair $(X,\mathcal{O}_X)$ is called a ringed space. Many types of spaces can be defined as certain types of ringed spaces. Commonly, all the stalks $\mathcal{O}_{X,x}$ of the structure sheaf are local rings, in which case the pair is called a locally ringed space.
For example, an $n$-dimensional $C^k$ manifold $M$ is a locally ringed space whose structure sheaf consists of the $C^k$-functions on the open subsets of $M$. The property of being a locally ringed space translates into the fact that such a function that is nonzero at a point $x$ is also nonzero on a sufficiently small open neighborhood of $x$. Some authors actually define real (or complex) manifolds to be locally ringed spaces that are locally isomorphic to the pair consisting of an open subset of $\mathbb{R}^n$ (respectively $\mathbb{C}^n$) together with the sheaf of $C^k$ (respectively holomorphic) functions. Similarly, schemes, the foundational notion of spaces in algebraic geometry, are locally ringed spaces that are locally isomorphic to the spectrum of a ring.
Given a ringed space, a sheaf of modules is a sheaf $\mathcal{M}$ such that on every open set $U$ of $X$, $\mathcal{M}(U)$ is an $\mathcal{O}_X(U)$-module, and for every inclusion of open sets $V\subseteq U$, the restriction map $\mathcal{M}(U)\to\mathcal{M}(V)$ is compatible with the restriction map $\mathcal{O}(U)\to\mathcal{O}(V)$: the restriction of $fs$ is the restriction of $f$ times the restriction of $s$ for any $f$ in $\mathcal{O}(U)$ and $s$ in $\mathcal{M}(U)$.
Most important geometric objects are sheaves of modules. For example, there is a one-to-one correspondence between vector bundles and locally free sheaves of $\mathcal{O}_X$-modules. This paradigm applies to real vector bundles, complex vector bundles, or vector bundles in algebraic geometry (where $\mathcal{O}$ consists of smooth functions, holomorphic functions, or regular functions, respectively). Sheaves of solutions to differential equations are $D$-modules, that is, modules over the sheaf of differential operators. On any topological space, modules over the constant sheaf $\underline{\mathbf{Z}}$ are the same as sheaves of abelian groups in the sense above.
There is a different inverse image functor for sheaves of modules over sheaves of rings. This functor is usually denoted $f^*$, and it is distinct from $f^{-1}$. See inverse image functor.
==== Finiteness conditions for sheaves of modules ====
Finiteness conditions for modules over commutative rings give rise to similar finiteness conditions for sheaves of modules: $\mathcal{M}$ is called finitely generated (respectively finitely presented) if, for every point $x$ of $X$, there exist an open neighborhood $U$ of $x$, a natural number $n$ (possibly depending on $U$), and a surjective morphism of sheaves $\mathcal{O}_X^n|_U\to\mathcal{M}|_U$ (respectively, in addition a natural number $m$, and an exact sequence $\mathcal{O}_X^m|_U\to\mathcal{O}_X^n|_U\to\mathcal{M}|_U\to 0$). Paralleling the notion of a coherent module, $\mathcal{M}$ is called a coherent sheaf if it is of finite type and if, for every open set $U$ and every morphism of sheaves $\phi:\mathcal{O}_X^n\to\mathcal{M}$ (not necessarily surjective), the kernel of $\phi$ is of finite type. $\mathcal{O}_X$ is coherent if it is coherent as a module over itself. As for modules, coherence is in general a strictly stronger condition than finite presentation. The Oka coherence theorem states that the sheaf of holomorphic functions on a complex manifold is coherent.
=== The étalé space of a sheaf ===
In the examples above it was noted that some sheaves occur naturally as sheaves of sections. In fact, all sheaves of sets can be represented as sheaves of sections of a topological space called the étalé space, from the French word étalé [etale], meaning roughly "spread out". If $F\in\text{Sh}(X)$ is a sheaf over $X$, then the étalé space (sometimes called the étale space) of $F$ is a topological space $E$ together with a local homeomorphism $\pi:E\to X$ such that the sheaf of sections $\Gamma(\pi,-)$ of $\pi$ is $F$. The space $E$ is usually very strange, and even if the sheaf $F$ arises from a natural topological situation, $E$ may not have any clear topological interpretation. For example, if $F$ is the sheaf of sections of a continuous function $f:Y\to X$, then $E=Y$ if and only if $f$ is a local homeomorphism.
The étalé space $E$ is constructed from the stalks of $F$ over $X$. As a set, it is their disjoint union, and $\pi$ is the obvious map that takes the value $x$ on the stalk of $F$ over $x\in X$. The topology of $E$ is defined as follows. For each element $s\in F(U)$ and each $x\in U$, we get a germ of $s$ at $x$, denoted $[s]_x$ or $s_x$. These germs determine points of $E$. For any $U$ and $s\in F(U)$, the union of these points (for all $x\in U$) is declared to be open in $E$. Notice that each stalk has the discrete topology as subspace topology. A morphism between two sheaves determines a continuous map of the corresponding étalé spaces that is compatible with the projection maps (in the sense that every germ is mapped to a germ over the same point). This makes the construction into a functor.
The construction above determines an equivalence of categories between the category of sheaves of sets on $X$ and the category of étalé spaces over $X$. The construction of an étalé space can also be applied to a presheaf, in which case the sheaf of sections of the étalé space recovers the sheaf associated to the given presheaf.
This construction makes all sheaves into representable functors on certain categories of topological spaces. As above, let $F$ be a sheaf on $X$, let $E$ be its étalé space, and let $\pi:E\to X$ be the natural projection. Consider the overcategory $\text{Top}/X$ of topological spaces over $X$, that is, the category of topological spaces together with fixed continuous maps to $X$. Every object of this category is a continuous map $f:Y\to X$, and a morphism from $Y\to X$ to $Z\to X$ is a continuous map $Y\to Z$ that commutes with the two maps to $X$. There is a functor $\Gamma:\text{Top}/X\to\text{Sets}$ sending an object $f:Y\to X$ to $f^{-1}F(Y)$. For example, if $i:U\hookrightarrow X$ is the inclusion of an open subset, then $\Gamma(i)=i^{-1}F(U)=F(U)=\Gamma(F,U)$, and for the inclusion of a point $i:\{x\}\hookrightarrow X$, $\Gamma(i)=i^{-1}F(\{x\})=F|_x$ is the stalk of $F$ at $x$. There is a natural isomorphism
$$(f^{-1}F)(Y)\cong\operatorname{Hom}_{\mathbf{Top}/X}(f,\pi),$$
which shows that $\pi:E\to X$ (for the étalé space) represents the functor $\Gamma$.
$E$ is constructed so that the projection map $\pi$ is a covering map. In algebraic geometry, the natural analog of a covering map is called an étale morphism. Despite its similarity to "étalé", the word étale [etal] has a different meaning in French. It is possible to turn $E$ into a scheme and $\pi$ into a morphism of schemes in such a way that $\pi$ retains the same universal property, but $\pi$ is not in general an étale morphism because it is not quasi-finite. It is, however, formally étale.
The definition of sheaves by étalé spaces is older than the definition given earlier in the article. It is still common in some areas of mathematics such as mathematical analysis.
== Sheaf cohomology ==
In contexts where the open set
U
{\displaystyle U}
is fixed, and the sheaf is regarded as a variable, the set
F
(
U
)
{\displaystyle F(U)}
is also often denoted
Γ
(
U
,
F
)
.
{\displaystyle \Gamma (U,F).}
As was noted above, this functor does not preserve epimorphisms. Instead, an epimorphism of sheaves ${\mathcal {F}}\to {\mathcal {G}}$ is a map with the following property: for any section $g\in {\mathcal {G}}(U)$ there is a covering ${\mathcal {U}}=\{U_{i}\}_{i\in I}$, where $U=\bigcup _{i\in I}U_{i}$, of open subsets, such that the restrictions $g|_{U_{i}}$ are in the image of ${\mathcal {F}}(U_{i})$. However, $g$ itself need not be in the image of ${\mathcal {F}}(U)$. A concrete example of this phenomenon is the exponential map ${\mathcal {O}}{\stackrel {\exp }{\to }}{\mathcal {O}}^{\times }$ between the sheaf of holomorphic functions and the sheaf of non-zero holomorphic functions. This map is an epimorphism, which amounts to saying that any non-zero holomorphic function $g$ (on some open subset in $\mathbb {C}$, say) admits a complex logarithm locally, i.e., after restricting $g$ to appropriate open subsets. However, $g$ need not have a logarithm globally.
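The local-but-not-global behaviour of the logarithm can be checked numerically. A global logarithm of a nonvanishing holomorphic function g would force (1/2πi)∮ g′/g dz to vanish around any loop, but for g(z) = z this winding number around the unit circle is 1. A minimal sketch in plain Python (midpoint quadrature; the function names are illustrative):

```python
import cmath

def winding_number(g, dg, n=10000):
    """Approximate (1 / 2*pi*i) * integral of g'(z)/g(z) dz over the unit circle.

    If g had a global logarithm, g'/g would have a single-valued
    antiderivative and this integral would be 0.
    """
    total = 0.0 + 0.0j
    for k in range(n):
        t0 = 2 * cmath.pi * k / n
        t1 = 2 * cmath.pi * (k + 1) / n
        z0, z1 = cmath.exp(1j * t0), cmath.exp(1j * t1)
        zm = cmath.exp(1j * (t0 + t1) / 2)   # midpoint of the arc
        total += dg(zm) / g(zm) * (z1 - z0)
    return total / (2j * cmath.pi)

# g(z) = z is nonvanishing near the unit circle and has local logarithms,
# yet its winding number is 1, obstructing a global one.
w = winding_number(lambda z: z, lambda z: 1.0)
```

Replacing g by a function that does have a global logarithm, e.g. g(z) = exp(z), drives the integral to 0.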
Sheaf cohomology captures this phenomenon. More precisely, for an exact sequence of sheaves of abelian groups

$0\to {\mathcal {F}}_{1}\to {\mathcal {F}}_{2}\to {\mathcal {F}}_{3}\to 0,$

(i.e., an epimorphism ${\mathcal {F}}_{2}\to {\mathcal {F}}_{3}$ whose kernel is ${\mathcal {F}}_{1}$), there is a long exact sequence

$0\to \Gamma (U,{\mathcal {F}}_{1})\to \Gamma (U,{\mathcal {F}}_{2})\to \Gamma (U,{\mathcal {F}}_{3})\to H^{1}(U,{\mathcal {F}}_{1})\to H^{1}(U,{\mathcal {F}}_{2})\to H^{1}(U,{\mathcal {F}}_{3})\to H^{2}(U,{\mathcal {F}}_{1})\to \dots$

By means of this sequence, the first cohomology group $H^{1}(U,{\mathcal {F}}_{1})$ is a measure of the non-surjectivity of the map between sections of ${\mathcal {F}}_{2}$ and ${\mathcal {F}}_{3}$.
There are several different ways of constructing sheaf cohomology. Grothendieck (1957) introduced them by defining sheaf cohomology as the derived functor of $\Gamma$. This method is theoretically satisfactory, but, being based on injective resolutions, it is of little use in concrete computations. Godement resolutions are another general, but practically inaccessible, approach.
=== Computing sheaf cohomology ===
Especially in the context of sheaves on manifolds, sheaf cohomology can often be computed using resolutions by soft sheaves, fine sheaves, and flabby sheaves (also known as flasque sheaves, from the French flasque meaning flabby). For example, a partition of unity argument shows that the sheaf of smooth functions on a manifold is soft. The higher cohomology groups $H^{i}(U,{\mathcal {F}})$ for $i>0$ vanish for soft sheaves, which gives a way of computing the cohomology of other sheaves. For example, the de Rham complex is a resolution of the constant sheaf ${\underline {\mathbf {R} }}$ on any smooth manifold, so the sheaf cohomology of ${\underline {\mathbf {R} }}$ is equal to its de Rham cohomology.
A different approach is by Čech cohomology. Čech cohomology was the first cohomology theory developed for sheaves, and it is well suited to concrete calculations, such as computing the coherent sheaf cohomology of complex projective space $\mathbb {P} ^{n}$. It relates sections on open subsets of the space to cohomology classes on the space. In most cases, Čech cohomology computes the same cohomology groups as the derived functor cohomology. However, for some pathological spaces, Čech cohomology will give the correct $H^{1}$ but incorrect higher cohomology groups. To get around this, Jean-Louis Verdier developed hypercoverings. Hypercoverings not only give the correct higher cohomology groups but also allow the open subsets mentioned above to be replaced by certain morphisms from another space. This flexibility is necessary in some applications, such as the construction of Pierre Deligne's mixed Hodge structures.
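The Čech recipe can be carried out by hand in the simplest interesting case: the circle with the constant sheaf R, covered by two open arcs whose intersection has two components. The computation reduces to the rank of a single matrix; a sketch with numpy (the choice of cover and the labels are made for illustration):

```python
import numpy as np

# Cover S^1 by two arcs U, V; the intersection U ∩ V has two components A, B.
# For the constant sheaf R:  C^0 = R(U) + R(V) = R^2,
# C^1 = R(A) + R(B) = R^2, and the Cech differential sends
# a pair of sections (a, b) to the differences (b - a, b - a).
d0 = np.array([[-1.0, 1.0],
               [-1.0, 1.0]])

rank = np.linalg.matrix_rank(d0)
h0 = 2 - rank   # dim ker d0
h1 = 2 - rank   # dim coker d0
# h0 == h1 == 1, matching H^0(S^1, R) = H^1(S^1, R) = R.
```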
Many other coherent sheaf cohomology groups are found using an embedding $i:X\hookrightarrow Y$ of a space $X$ into a space with known cohomology, such as $\mathbb {P} ^{n}$ or some weighted projective space. In this way, the known sheaf cohomology groups on these ambient spaces can be related to the sheaves $i_{*}{\mathcal {F}}$, giving $H^{i}(Y,i_{*}{\mathcal {F}})\cong H^{i}(X,{\mathcal {F}})$. For example, the coherent sheaf cohomology of projective plane curves is easily computed this way. One big theorem in this area is the Hodge decomposition, found using a spectral sequence associated to sheaf cohomology groups and proved by Deligne. Essentially, the spectral sequence with $E_{1}$-page terms $E_{1}^{p,q}=H^{p}(X,\Omega _{X}^{q})$, the sheaf cohomology of a smooth projective variety $X$, degenerates, meaning $E_{1}=E_{\infty }$. This gives the canonical Hodge structure on the cohomology groups $H^{k}(X,\mathbb {C} )$. It was later found that these cohomology groups can be explicitly computed using Griffiths residues; see Jacobian ideal. These kinds of theorems lead to one of the deepest theorems about the cohomology of algebraic varieties, the decomposition theorem, paving the path for mixed Hodge modules.
Another clean approach to the computation of some cohomology groups is the Borel–Bott–Weil theorem, which identifies the cohomology groups of some line bundles on flag manifolds with irreducible representations of Lie groups. This theorem can be used, for example, to easily compute the cohomology groups of all line bundles on projective space and Grassmann manifolds.
In many cases there is a duality theory for sheaves that generalizes Poincaré duality. See Grothendieck duality and Verdier duality.
=== Derived categories of sheaves ===
The derived category of the category of sheaves of, say, abelian groups on some space X, denoted here as $D(X)$, is the conceptual haven for sheaf cohomology, by virtue of the following relation:

$H^{n}(X,{\mathcal {F}})=\operatorname {Hom} _{D(X)}(\mathbf {Z} ,{\mathcal {F}}[n]).$
The adjunction between $f^{-1}$, which is the left adjoint of $f_{*}$ (already on the level of sheaves of abelian groups), gives rise to an adjunction

$f^{-1}:D(Y)\rightleftarrows D(X):Rf_{*}$ (for $f:X\to Y$),

where $Rf_{*}$ is the derived functor. This latter functor encompasses the notion of sheaf cohomology, since $H^{n}(X,{\mathcal {F}})=R^{n}f_{*}{\mathcal {F}}$ for $f:X\to \{*\}$.
Like $f_{*}$, the direct image with compact support $f_{!}$ can also be derived. The resulting functor $Rf_{!}{\mathcal {F}}$ parametrizes the cohomology with compact support of the fibers of $f$, by virtue of the following isomorphism:

$(R^{i}f_{!}{\mathcal {F}})_{y}=H_{c}^{i}(f^{-1}(y),{\mathcal {F}}).$
This isomorphism is an example of a base change theorem. There is another adjunction

$Rf_{!}:D(X)\rightleftarrows D(Y):f^{!}.$

Unlike all the functors considered above, the twisted (or exceptional) inverse image functor $f^{!}$ is in general only defined on the level of derived categories, i.e., the functor is not obtained as the derived functor of some functor between abelian categories. If $f:X\to \{*\}$ and X is a smooth orientable manifold of dimension n, then

$f^{!}{\underline {\mathbf {R} }}\cong {\underline {\mathbf {R} }}[n].$
This computation, and the compatibility of the functors with duality (see Verdier duality) can be used to obtain a high-brow explanation of Poincaré duality. In the context of quasi-coherent sheaves on schemes, there is a similar duality known as coherent duality.
Perverse sheaves are certain objects in $D(X)$, i.e., complexes of sheaves (but not in general sheaves proper). They are an important tool to study the geometry of singularities.
==== Derived categories of coherent sheaves and the Grothendieck group ====
Another important application of derived categories of sheaves is the derived category of coherent sheaves on a scheme $X$, denoted $D_{Coh}(X)$. This was used by Grothendieck in his development of intersection theory using derived categories and K-theory: the intersection product of subschemes $Y_{1},Y_{2}$ is represented in K-theory as

$[Y_{1}]\cdot [Y_{2}]=[{\mathcal {O}}_{Y_{1}}\otimes _{{\mathcal {O}}_{X}}^{\mathbf {L} }{\mathcal {O}}_{Y_{2}}]\in K(\operatorname {Coh} (X)),$

where ${\mathcal {O}}_{Y_{i}}$ are the coherent sheaves defined by the ${\mathcal {O}}_{X}$-modules given by their structure sheaves.
== Sites and topoi ==
André Weil's Weil conjectures stated that there was a cohomology theory for algebraic varieties over finite fields that would give an analogue of the Riemann hypothesis. The cohomology of a complex manifold can be defined as the sheaf cohomology of the locally constant sheaf
${\underline {\mathbf {C} }}$ in the Euclidean topology, which suggests defining a Weil cohomology theory in positive characteristic as the sheaf cohomology of a constant sheaf. But the only classical topology on such a variety is the Zariski topology, and the Zariski topology has very few open sets, so few that the cohomology of any Zariski-constant sheaf on an irreducible variety vanishes (except in degree zero). Alexander Grothendieck solved this problem by introducing Grothendieck topologies, which axiomatize the notion of covering. Grothendieck's insight was that the definition of a sheaf depends only on the open sets of a topological space, not on the individual points. Once he had axiomatized the notion of covering, open sets could be replaced by other objects. A presheaf takes each one of these objects to data, just as before, and a sheaf is a presheaf that satisfies the gluing axiom with respect to our new notion of covering. This allowed Grothendieck to define étale cohomology and ℓ-adic cohomology, which eventually were used to prove the Weil conjectures.
A category with a Grothendieck topology is called a site. A category of sheaves on a site is called a topos or a Grothendieck topos. The notion of a topos was later abstracted by William Lawvere and Miles Tierney to define an elementary topos, which has connections to mathematical logic.
== History ==
The first origins of sheaf theory are hard to pin down – they may be co-extensive with the idea of analytic continuation. It took about 15 years for a recognisable, free-standing theory of sheaves to emerge from the foundational work on cohomology.
1936 Eduard Čech introduces the nerve construction, for associating a simplicial complex to an open covering.
1938 Hassler Whitney gives a 'modern' definition of cohomology, summarizing the work since J. W. Alexander and Kolmogorov first defined cochains.
1943 Norman Steenrod publishes on homology with local coefficients.
1945 Jean Leray publishes work carried out as a prisoner of war, motivated by proving fixed-point theorems for application to PDE theory; it is the start of sheaf theory and spectral sequences.
1947 Henri Cartan reproves the de Rham theorem by sheaf methods, in correspondence with André Weil (see De Rham–Weil theorem). Leray gives a sheaf definition in his courses via closed sets (the later carapaces).
1948 The Cartan seminar writes up sheaf theory for the first time.
1950 The "second edition" sheaf theory from the Cartan seminar: the sheaf space (espace étalé) definition is used, with stalkwise structure. Supports are introduced, and cohomology with supports. Continuous mappings give rise to spectral sequences. At the same time Kiyoshi Oka introduces an idea (adjacent to that) of a sheaf of ideals, in several complex variables.
1951 The Cartan seminar proves theorems A and B, based on Oka's work.
1953 The finiteness theorem for coherent sheaves in the analytic theory is proved by Cartan and Jean-Pierre Serre, as is Serre duality.
1954 Serre's paper Faisceaux algébriques cohérents (published in 1955) introduces sheaves into algebraic geometry. These ideas are immediately exploited by Friedrich Hirzebruch, who writes a major 1956 book on topological methods.
1955 Alexander Grothendieck in lectures in Kansas defines abelian category and presheaf, and by using injective resolutions allows direct use of sheaf cohomology on all topological spaces, as derived functors.
1956 Oscar Zariski publishes his report Algebraic sheaf theory.
1957 Grothendieck's Tohoku paper rewrites homological algebra; he proves Grothendieck duality (i.e., Serre duality for possibly singular algebraic varieties).
1957 onwards: Grothendieck extends sheaf theory in line with the needs of algebraic geometry, introducing: schemes and general sheaves on them, local cohomology, derived categories (with Verdier), and Grothendieck topologies. There emerges also his influential schematic idea of 'six operations' in homological algebra.
1958 Roger Godement's book on sheaf theory is published. At around this time Mikio Sato proposes his hyperfunctions, which will turn out to have sheaf-theoretic nature.
At this point sheaves had become a mainstream part of mathematics, with use by no means restricted to algebraic topology. It was later discovered that the logic in categories of sheaves is intuitionistic logic (this observation is now often referred to as Kripke–Joyal semantics, but probably should be attributed to a number of authors).
== See also ==
Coherent sheaf
Gerbe
Stack (mathematics)
Sheaf of spectra
Perverse sheaf
Presheaf of spaces
Constructible sheaf
De Rham's theorem
== Notes ==
== References ==
Bredon, Glen E. (1997), Sheaf theory, Graduate Texts in Mathematics, vol. 170 (2nd ed.), Springer-Verlag, ISBN 978-0-387-94905-5, MR 1481706 (oriented towards conventional topological applications)
de Cataldo, Andrea Mark; Migliorini, Luca (2010). "What is a perverse sheaf?" (PDF). Notices of the American Mathematical Society. 57 (5): 632–4. arXiv:1004.2983. Bibcode:2010arXiv1004.2983D. MR 2664042.
Godement, Roger (2006) [1973], Topologie algébrique et théorie des faisceaux, Paris: Hermann, ISBN 2705612521, MR 0345092
Grothendieck, Alexander (1957), "Sur quelques points d'algèbre homologique", The Tohoku Mathematical Journal, Second Series, 9 (2): 119–221, doi:10.2748/tmj/1178244839, ISSN 0040-8735, MR 0102537
Hirzebruch, Friedrich (1995), Topological methods in algebraic geometry, Classics in Mathematics, Springer-Verlag, ISBN 978-3-540-58663-0, MR 1335917 (updated edition of a classic using enough sheaf theory to show its power)
Iversen, Birger (1986), Cohomology of sheaves, Universitext, Springer, doi:10.1007/978-3-642-82783-9, ISBN 3-540-16389-1, MR 0842190
Kashiwara, Masaki; Schapira, Pierre (1994), Sheaves on manifolds, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 292, Springer-Verlag, ISBN 978-3-540-51861-7, MR 1299726 (advanced techniques such as the derived category and vanishing cycles on the most reasonable spaces)
Mac Lane, Saunders; Moerdijk, Ieke (1994), Sheaves in Geometry and Logic: A First Introduction to Topos Theory, Universitext, Springer-Verlag, ISBN 978-0-387-97710-2, MR 1300636 (category theory and toposes emphasised)
Martin, William T.; Chern, Shiing-Shen; Zariski, Oscar (1956), "Scientific report on the Second Summer Institute, several complex variables", Bulletin of the American Mathematical Society, 62 (2): 79–141, doi:10.1090/S0002-9904-1956-10013-X, ISSN 0002-9904, MR 0077995
Ramanan, S. (2005), Global calculus, Graduate Studies in Mathematics, vol. 65, American Mathematical Society, doi:10.1090/gsm/065, ISBN 0-8218-3702-8, MR 2104612
Seebach, J. Arthur; Seebach, Linda A.; Steen, Lynn A. (1970), "What is a Sheaf", American Mathematical Monthly, 77 (7): 681–703, doi:10.1080/00029890.1970.11992563, MR 0263073, S2CID 203043621
Serre, Jean-Pierre (1955), "Faisceaux algébriques cohérents" (PDF), Annals of Mathematics, Second Series, 61 (2): 197–278, doi:10.2307/1969915, ISSN 0003-486X, JSTOR 1969915, MR 0068874
Swan, Richard G. (1964), The Theory of Sheaves, Chicago lectures in mathematics (3 ed.), University of Chicago Press, ISBN 9780226783291 (concise lecture notes)
Tennison, Barry R. (1975), Sheaf theory, London Mathematical Society Lecture Note Series, vol. 20, Cambridge University Press, ISBN 978-0-521-20784-3, MR 0404390 (pedagogic treatment)
Rosiak, Daniel (2022). Sheaf theory through examples. Cambridge, Massachusetts: MIT Press. doi:10.7551/mitpress/12581.001.0001. ISBN 978-0-262-37042-4. OCLC 1333708310. S2CID 253133215. (introductory book with open access)
In mathematics, a basic semialgebraic set is a set defined by polynomial equalities and polynomial inequalities, and a semialgebraic set is a finite union of basic semialgebraic sets. A semialgebraic function is a function with a semialgebraic graph. Such sets and functions are mainly studied in real algebraic geometry which is the appropriate framework for algebraic geometry over the real numbers.
== Definition ==
Let $\mathbb {F}$ be a real closed field (for example, $\mathbb {F}$ could be the field of real numbers $\mathbb {R}$).
A subset $S$ of $\mathbb {F} ^{n}$ is a semialgebraic set if it is a finite union of sets defined by polynomial equalities of the form

$\{(x_{1},\dots ,x_{n})\in \mathbb {F} ^{n}\mid P(x_{1},\dots ,x_{n})=0\}$

and of sets defined by polynomial inequalities of the form

$\{(x_{1},\dots ,x_{n})\in \mathbb {F} ^{n}\mid P(x_{1},\dots ,x_{n})>0\}.$
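To make the definition concrete, here is a small Python sketch representing a semialgebraic set as a finite union of basic pieces, each cut out by polynomial equalities and strict inequalities. The closed unit disk, for instance, is the union of the open disk {1 - x² - y² > 0} and the circle {1 - x² - y² = 0}. The helper names and the floating-point tolerance for equalities are illustrative choices:

```python
def basic_set(polys_eq, polys_pos):
    """Basic piece: all listed polynomials vanish / are strictly positive."""
    def member(x):
        return (all(abs(p(x)) < 1e-12 for p in polys_eq)
                and all(p(x) > 0 for p in polys_pos))
    return member

def union(*pieces):
    """A semialgebraic set is a finite union of basic pieces."""
    return lambda x: any(piece(x) for piece in pieces)

# The closed unit disk {x^2 + y^2 <= 1}: the open disk (strict inequality)
# together with the circle (equality).
p = lambda v: 1 - v[0] ** 2 - v[1] ** 2
disk = union(basic_set([], [p]), basic_set([p], []))

inside = disk((0.5, 0.5))     # interior point
boundary = disk((1.0, 0.0))   # point on the circle
outside = disk((2.0, 0.0))    # point outside the disk
```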
== Properties ==
Similarly to algebraic subvarieties, finite unions and intersections of semialgebraic sets are still semialgebraic sets. Furthermore, unlike subvarieties, the complement of a semialgebraic set is again semialgebraic. Finally, and most importantly, the Tarski–Seidenberg theorem says that they are also closed under the projection operation: in other words a semialgebraic set projected onto a linear subspace yields another semialgebraic set (as is the case for quantifier elimination). These properties together mean that semialgebraic sets form an o-minimal structure on R.
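The Tarski–Seidenberg theorem can be illustrated on the smallest possible example: projecting the circle x² + y² = 1 onto the x-axis. The quantified condition "there exists y with x² + y² = 1" is equivalent to the quantifier-free polynomial condition 1 - x² ≥ 0. The following sketch spot-checks that equivalence on a sample of points (plain Python; a numerical check, not a proof):

```python
import math

def in_projection(x):
    """Does some y put (x, y) on the circle x^2 + y^2 = 1?"""
    if 1 - x * x < 0:
        return False
    y = math.sqrt(1 - x * x)          # an explicit witness
    return abs(x * x + y * y - 1) < 1e-9

def quantifier_free(x):
    """Tarski-Seidenberg guarantees a polynomial description of the
    projection; here it is simply 1 - x^2 >= 0."""
    return 1 - x * x >= 0

xs = [k / 10 for k in range(-20, 21)]
agree = all(in_projection(x) == quantifier_free(x) for x in xs)
```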
A semialgebraic set (or function) is said to be defined over a subring A of R if there is some description, as in the definition, where the polynomials can be chosen to have coefficients in A.
On a dense open subset of the semialgebraic set S, it is (locally) a submanifold. One can define the dimension of S to be the largest dimension at points at which it is a submanifold. It is not hard to see that a semialgebraic set lies inside an algebraic subvariety of the same dimension.
== See also ==
Łojasiewicz inequality
Existential theory of the reals
Subanalytic set
Piecewise algebraic space
== References ==
Bochnak, J.; Coste, M.; Roy, M.-F. (1998), Real algebraic geometry, Berlin: Springer-Verlag, ISBN 9783662037188.
Bierstone, Edward; Milman, Pierre D. (1988), "Semianalytic and subanalytic sets", Inst. Hautes Études Sci. Publ. Math., 67: 5–42, doi:10.1007/BF02699126, MR 0972342, S2CID 56006439.
van den Dries, L. (1998), Tame topology and o-minimal structures, Cambridge University Press, ISBN 9780521598385.
== External links ==
PlanetMath page
Discrete differential geometry is the study of discrete counterparts of notions in differential geometry. Instead of smooth curves and surfaces, there are polygons, meshes, and simplicial complexes. It is used in the study of computer graphics, geometry processing and topological combinatorics.
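As a small illustration of the discrete viewpoint, the combinatorial Laplacian L = D - A of the n-cycle (a discrete circle) has eigenvalues 2 - 2cos(2πk/n), a discrete counterpart of the spectrum of -d²/dθ² on the smooth circle. A sketch with numpy, under the convention that the vertices are joined cyclically:

```python
import numpy as np

def cycle_laplacian(n):
    """Combinatorial Laplace operator L = D - A of the n-cycle graph,
    a discrete counterpart of -d^2/dtheta^2 on the circle."""
    L = 2 * np.eye(n)                 # every vertex has degree 2
    for i in range(n):
        L[i, (i + 1) % n] -= 1        # adjacency: next neighbour
        L[i, (i - 1) % n] -= 1        # adjacency: previous neighbour
    return L

n = 8
eig = np.sort(np.linalg.eigvalsh(cycle_laplacian(n)))
expected = np.sort([2 - 2 * np.cos(2 * np.pi * k / n) for k in range(n)])
close = np.allclose(eig, expected)
```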
== See also ==
Discrete Laplace operator
Discrete exterior calculus
Discrete Morse theory
Topological combinatorics
Spectral shape analysis
Analysis on fractals
Discrete calculus
== References ==
Discrete differential geometry Forum
Keenan Crane; Max Wardetzky (November 2017). "A Glimpse into Discrete Differential Geometry". Notices of the American Mathematical Society. 64 (10): 1153–1159. doi:10.1090/noti1578.
Alexander I. Bobenko; Peter Schröder; John M. Sullivan; Günter M. Ziegler (2008). Discrete differential geometry. Birkhäuser Verlag AG. ISBN 978-3-7643-8620-7.
Alexander I. Bobenko; Yuri B. Suris (2008). Discrete Differential Geometry: Integrable Structure. American Mathematical Society. ISBN 978-0-8218-4700-8.
In algebraic geometry, a morphism between algebraic varieties is a function between the varieties that is given locally by polynomials. It is also called a regular map. A morphism from an algebraic variety to the affine line is also called a regular function.
A regular map whose inverse is also regular is called biregular, and the biregular maps are the isomorphisms of algebraic varieties. Because regular and biregular are very restrictive conditions – there are no non-constant regular functions on projective varieties – the concepts of rational and birational maps are widely used as well; they are partial functions that are defined locally by rational fractions instead of polynomials.
An algebraic variety has naturally the structure of a locally ringed space; a morphism between algebraic varieties is precisely a morphism of the underlying locally ringed spaces.
== Definition ==
If X and Y are closed subvarieties of $\mathbb {A} ^{n}$ and $\mathbb {A} ^{m}$ (so they are affine varieties), then a regular map $f\colon X\to Y$ is the restriction of a polynomial map $\mathbb {A} ^{n}\to \mathbb {A} ^{m}$. Explicitly, it has the form:
$f=(f_{1},\dots ,f_{m})$

where the $f_{i}$ are in the coordinate ring of X:

$k[X]=k[x_{1},\dots ,x_{n}]/I,$
where I is the ideal defining X (note: two polynomials f and g define the same function on X if and only if f − g is in I). The image f(X) lies in Y, and hence satisfies the defining equations of Y. That is, a regular map
$f:X\to Y$ is the same as the restriction of a polynomial map whose components satisfy the defining equations of $Y$.
More generally, a map f : X→Y between two varieties is regular at a point x if there is a neighbourhood U of x and a neighbourhood V of f(x) such that f(U) ⊂ V and the restricted function f : U→V is regular as a function on some affine charts of U and V. Then f is called regular, if it is regular at all points of X.
Note: It is not immediately obvious that the two definitions coincide: if X and Y are affine varieties, then a map f : X→Y is regular in the first sense if and only if it is so in the second sense. Also, it is not immediately clear whether regularity depends on a choice of affine charts (it does not). This kind of consistency issue, however, disappears if one adopts the formal definition. Formally, an (abstract) algebraic variety is defined to be a particular kind of locally ringed space. When this definition is used, a morphism of varieties is just a morphism of locally ringed spaces.
The composition of regular maps is again regular; thus, algebraic varieties form the category of algebraic varieties where the morphisms are the regular maps.
Regular maps between affine varieties correspond contravariantly in one-to-one fashion to algebra homomorphisms between the coordinate rings: if f : X→Y is a morphism of affine varieties, then it defines the algebra homomorphism

$f^{\#}:k[Y]\to k[X],\,g\mapsto g\circ f$

where $k[X],k[Y]$ are the coordinate rings of X and Y; it is well-defined since $g\circ f=g(f_{1},\dots ,f_{m})$ is a polynomial in elements of $k[X]$. Conversely, if $\phi :k[Y]\to k[X]$ is an algebra homomorphism, then it induces the morphism $\phi ^{a}:X\to Y$ given by: writing $k[Y]=k[y_{1},\dots ,y_{m}]/J,$

$\phi ^{a}=(\phi ({\overline {y_{1}}}),\dots ,\phi ({\overline {y_{m}}}))$

where ${\overline {y}}_{i}$ are the images of the $y_{i}$. Note ${\phi ^{a}}^{\#}=\phi$ as well as ${f^{\#}}^{a}=f.$ In particular, f is an isomorphism of affine varieties if and only if f# is an isomorphism of the coordinate rings.
For example, if X is a closed subvariety of an affine variety Y and f is the inclusion, then f# is the restriction of regular functions on Y to X. See #Examples below for more examples.
== Regular functions ==
In the particular case that $Y$ equals $\mathbb {A} ^{1}$, the regular maps $f:X\to \mathbb {A} ^{1}$ are called regular functions, and are algebraic analogs of the smooth functions studied in differential geometry. The ring of regular functions (that is, the coordinate ring, or more abstractly the ring of global sections of the structure sheaf) is a fundamental object in affine algebraic geometry. The only regular function on a projective variety is constant (this can be viewed as an algebraic analogue of Liouville's theorem in complex analysis).
A scalar function $f:X\to \mathbb {A} ^{1}$ is regular at a point $x$ if, in some open affine neighborhood of $x$, it is a rational function that is regular at $x$; i.e., there are regular functions $g,h$ near $x$ such that $f=g/h$ and $h$ does not vanish at $x$. Caution: the condition is for some pair (g, h), not for all pairs (g, h); see Examples.
If X is a quasi-projective variety, i.e., an open subvariety of a projective variety, then the function field k(X) is the same as that of the closure ${\overline {X}}$ of X, and thus a rational function on X is of the form g/h for some homogeneous elements g, h of the same degree in the homogeneous coordinate ring $k[{\overline {X}}]$ of ${\overline {X}}$ (cf. Projective variety#Variety structure). Then a rational function f on X is regular at a point x if and only if there are some homogeneous elements g, h of the same degree in $k[{\overline {X}}]$ such that f = g/h and h does not vanish at x. This characterization is sometimes taken as the definition of a regular function.
== Comparison with a morphism of schemes ==
If $X=\operatorname {Spec} A$ and $Y=\operatorname {Spec} B$ are affine schemes, then each ring homomorphism $\phi :B\to A$ determines a morphism

$\phi ^{a}:X\to Y,\,{\mathfrak {p}}\mapsto \phi ^{-1}({\mathfrak {p}})$

by taking the pre-images of prime ideals. All morphisms between affine schemes are of this type, and gluing such morphisms gives a morphism of schemes in general.
Now, if X, Y are affine varieties, i.e., A, B are integral domains that are finitely generated algebras over an algebraically closed field k, then, working with only the closed points, the above coincides with the definition given at #Definition. (Proof: If f : X → Y is a morphism, then writing $\phi =f^{\#}$, we need to show $ {\mathfrak {m}}_{f(x)}=\phi ^{-1}({\mathfrak {m}}_{x})$ where ${\mathfrak {m}}_{x},{\mathfrak {m}}_{f(x)}$ are the maximal ideals corresponding to the points x and f(x); i.e., ${\mathfrak {m}}_{x}=\{g\in k[X]\mid g(x)=0\}$. This is immediate.)
This fact means that the category of affine varieties can be identified with a full subcategory of affine schemes over k. Since morphisms of varieties are obtained by gluing morphisms of affine varieties in the same way morphisms of schemes are obtained by gluing morphisms of affine schemes, it follows that the category of varieties is a full subcategory of the category of schemes over k.
== Examples ==
The regular functions on $\mathbb {A} ^{n}$ are exactly the polynomials in $n$ variables, and the regular functions on $\mathbb {P} ^{n}$ are exactly the constants.
Let $X$ be the affine curve $y=x^{2}$. Then $f:X\to \mathbf {A} ^{1},\,(x,y)\mapsto x$ is a morphism; it is bijective with the inverse $g(x)=(x,x^{2})$. Since $g$ is also a morphism, $f$ is an isomorphism of varieties.
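The two maps can be checked to be mutually inverse on points of the curve with a few lines of Python (the assertion inside f merely documents that its argument is expected to lie on y = x²):

```python
# f projects the parabola y = x^2 to the line; g(x) = (x, x^2) is its inverse.
def f(p):
    x, y = p
    assert y == x * x, "point must lie on the curve y = x^2"
    return x

def g(x):
    return (x, x * x)

pts = [g(t) for t in (-2, 0, 1, 3)]                  # points on the curve
round_trip = all(g(f(p)) == p for p in pts)          # g ∘ f = id on X
also_identity = all(f(g(t)) == t for t in (-2, 0, 1, 3))  # f ∘ g = id on A^1
```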
Let $X$ be the affine curve $y^{2}=x^{3}+x^{2}$. Then $f:\mathbf {A} ^{1}\to X,\,t\mapsto (t^{2}-1,t^{3}-t)$ is a morphism. It corresponds to the ring homomorphism

$f^{\#}:k[X]\to k[t],\,g\mapsto g(t^{2}-1,t^{3}-t),$

which is seen to be injective (since f is surjective).
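That this map really lands in X can be verified by pulling back the defining equation: the pullback of y² - x³ - x² along t ↦ (t² - 1, t³ - t) is a polynomial of degree 6 in t, so checking that it vanishes at more than six points proves it is identically zero. A sketch in exact integer arithmetic:

```python
# Pull back the defining equation y^2 - x^3 - x^2 of the nodal cubic
# along the parametrization t -> (t^2 - 1, t^3 - t).
def pullback(t):
    x, y = t * t - 1, t ** 3 - t
    return y * y - (x ** 3 + x ** 2)

# A degree-6 polynomial vanishing at 11 distinct points is the zero
# polynomial, so these exact evaluations prove the identity.
vanishes = all(pullback(t) == 0 for t in range(-5, 6))
```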
Continuing the preceding example, let U = A1 − {1}. Since U is the complement of the hyperplane t = 1, U is affine. The restriction $f:U\to X$ is bijective. But the corresponding ring homomorphism is the inclusion $k[X]=k[t^{2}-1,t^{3}-t]\hookrightarrow k[t,(t-1)^{-1}]$, which is not an isomorphism, and so the restriction f |U is not an isomorphism.
Let X be the affine curve $x^{2}+y^{2}=1$ and let

$f(x,y)={1-y \over x}.$

Then f is a rational function on X. It is regular at (0, 1) despite the expression, since, as a rational function on X, f can also be written as $f(x,y)={x \over 1+y}$.
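The identity behind this regularity can be checked mechanically: cross-multiplying, the difference of the two expressions has numerator (1 - y)(1 + y) - x², which equals -(x² + y² - 1) identically and hence vanishes on the curve. A sketch verifying this degree-2 polynomial identity on a small integer grid, which is more than enough points to establish it:

```python
# Numerator of the cross-multiplied difference (1 - y)/x - x/(1 + y).
def numerator(x, y):
    return (1 - y) * (1 + y) - x * x

# Verify numerator(x, y) == -(x^2 + y^2 - 1) on a 5x5 grid; a degree-2
# polynomial identity holding there holds everywhere.
identity = all(numerator(x, y) == -(x * x + y * y - 1)
               for x in range(-2, 3) for y in range(-2, 3))
```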
Let X = A2 − {(0, 0)}. Then X is an algebraic variety since it is an open subset of a variety. If f is a regular function on X, then f is regular on $D_{\mathbf {A} ^{2}}(x)=\mathbf {A} ^{2}-\{x=0\}$ and so is in $k[D_{\mathbf {A} ^{2}}(x)]=k[\mathbf {A} ^{2}][x^{-1}]=k[x,x^{-1},y]$. Similarly, it is in $k[x,y,y^{-1}]$. Thus, we can write:

$f={g \over x^{n}}={h \over y^{m}}$

where g, h are polynomials in k[x, y]. But this implies g is divisible by $x^{n}$ and so f is in fact a polynomial. Hence, the ring of regular functions on X is just k[x, y]. (This also shows that X cannot be affine, since if it were, X would be determined by its coordinate ring, and thus X = A2.)
Suppose {\displaystyle \mathbf {P} ^{1}=\mathbf {A} ^{1}\cup \{\infty \}} by identifying the points (x : 1) with the points x on A1 and ∞ = (1 : 0). There is an automorphism σ of P1 given by σ(x : y) = (y : x); in particular, σ exchanges 0 and ∞. If f is a rational function on P1, then {\displaystyle \sigma ^{\#}(f)=f(1/z)} and f is regular at ∞ if and only if f(1/z) is regular at zero.
Taking the function field k(V) of an irreducible algebraic curve V, the functions F in the function field may all be realised as morphisms from V to the projective line over k. (cf. #Properties) The image will either be a single point, or the whole projective line (this is a consequence of the completeness of projective varieties). That is, unless F is actually constant, we have to attribute to F the value ∞ at some points of V.
For any algebraic varieties X, Y, the projection {\displaystyle p:X\times Y\to X,\,(x,y)\mapsto x} is a morphism of varieties. If X and Y are affine, then the corresponding ring homomorphism is {\displaystyle p^{\#}:k[X]\to k[X\times Y]=k[X]\otimes _{k}k[Y],\,f\mapsto f\otimes 1} where {\displaystyle (f\otimes 1)(x,y)=f(p(x,y))=f(x)}.
== Properties ==
A morphism between varieties is continuous with respect to Zariski topologies on the source and the target.
The image of a morphism of varieties need not be either open or closed (for example, the image of {\displaystyle \mathbf {A} ^{2}\to \mathbf {A} ^{2},\,(x,y)\mapsto (x,xy)} is neither open nor closed). However, one can still say: if f is a morphism between varieties, then the image of f contains an open dense subset of its closure (cf. constructible set).
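Concretely, the image of (x, y) ↦ (x, xy) is the set {x ≠ 0} together with the origin: the open set {x ≠ 0} plus one extra point of its closure. A small enumeration over an integer grid (an illustration of mine, not a proof) is consistent with this description:

```python
n = 8
grid = range(-n, n + 1)
# Image of the map (x, y) -> (x, x*y) on an integer grid:
image = {(x, x * y) for x in grid for y in grid}

# Every image point (a, b) either has a != 0 or is the origin:
assert all(a != 0 or b == 0 for (a, b) in image)
# The origin is hit (take x = 0), but no other point of the line a = 0 is:
assert (0, 0) in image
assert (0, 1) not in image
```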
A morphism f : X → Y of algebraic varieties is said to be dominant if it has dense image. For such an f, if V is a nonempty open affine subset of Y, then there is a nonempty open affine subset U of X such that f(U) ⊂ V, and then {\displaystyle f^{\#}:k[V]\to k[U]} is injective. Thus, the dominant map f induces an injection on the level of function fields: {\displaystyle k(Y)=\varinjlim k[V]\hookrightarrow k(X),\,g\mapsto g\circ f} where the direct limit runs over all nonempty open affine subsets of Y. (More abstractly, this is the induced map from the residue field of the generic point of Y to that of X.) Conversely, every inclusion of fields {\displaystyle k(Y)\hookrightarrow k(X)} is induced by a dominant rational map from X to Y. Hence, the above construction determines a contravariant equivalence between the category of algebraic varieties over a field k with dominant rational maps as morphisms and the category of finitely generated field extensions of k.
If X is a smooth complete curve (for example, P1) and if f is a rational map from X to a projective space Pm, then f is a regular map X → Pm. In particular, when X is a smooth complete curve, any rational function on X may be viewed as a morphism X → P1 and, conversely, such a morphism as a rational function on X.
On a normal variety (in particular, a smooth variety), a rational function is regular if and only if it has no poles of codimension one. This is an algebraic analog of Hartogs' extension theorem. There is also a relative version of this fact; see [2].
A morphism between algebraic varieties that is a homeomorphism between the underlying topological spaces need not be an isomorphism (a counterexample is given by a Frobenius morphism {\displaystyle t\mapsto t^{p}}). On the other hand, if f is bijective and birational and the target space of f is a normal variety, then f is biregular (cf. Zariski's main theorem).
A regular map between complex algebraic varieties is a holomorphic map. (There is actually a slight technical difference: a regular map is a meromorphic map whose singular points are removable, but the distinction is usually ignored in practice.) In particular, a regular map into the complex numbers is just a usual holomorphic function (complex-analytic function).
== Morphisms to a projective space ==
Let {\displaystyle f:X\to \mathbf {P} ^{m}} be a morphism from a projective variety to a projective space. Let x be a point of X. Then some i-th homogeneous coordinate of f(x) is nonzero; say, i = 0 for simplicity. Then, by continuity, there is an open affine neighborhood U of x such that {\displaystyle f:U\to \mathbf {P} ^{m}-\{y_{0}=0\}} is a morphism, where yi are the homogeneous coordinates. Note the target space is the affine space Am through the identification {\displaystyle (a_{0}:\dots :a_{m})=(1:a_{1}/a_{0}:\dots :a_{m}/a_{0})\sim (a_{1}/a_{0},\dots ,a_{m}/a_{0})}. Thus, by definition, the restriction f |U is given by {\displaystyle f|_{U}(x)=(g_{1}(x),\dots ,g_{m}(x))} where the gi are regular functions on U. Since X is projective, each gi is a fraction of homogeneous elements of the same degree in the homogeneous coordinate ring k[X] of X. We can arrange the fractions so that they all have the same homogeneous denominator, say f0. Then we can write gi = fi/f0 for some homogeneous elements fi in k[X]. Hence, going back to the homogeneous coordinates, {\displaystyle f(x)=(f_{0}(x):f_{1}(x):\dots :f_{m}(x))} for all x in U and, by continuity, for all x in X as long as the fi do not vanish at x simultaneously. If they vanish simultaneously at a point x of X, then, by the above procedure, one can pick a different set of fi that do not vanish at x simultaneously (see Note at the end of the section).
In fact, the above description is valid for any quasi-projective variety X, an open subvariety of a projective variety {\displaystyle {\overline {X}}}; the difference being that the fi are in the homogeneous coordinate ring of {\displaystyle {\overline {X}}}.
Note: The above does not say a morphism from a projective variety to a projective space is given by a single set of polynomials (unlike the affine case). For example, let X be the conic {\displaystyle y^{2}=xz} in P2. Then the two maps {\displaystyle (x:y:z)\mapsto (x:y)} and {\displaystyle (x:y:z)\mapsto (y:z)} agree on the open subset {\displaystyle \{(x:y:z)\in X\mid x\neq 0,z\neq 0\}} of X (since {\displaystyle (x:y)=(xy:y^{2})=(xy:xz)=(y:z)}) and so they define a morphism {\displaystyle f:X\to \mathbf {P} ^{1}}.
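The gluing identity can be checked mechanically (an illustration of mine): the conic y² = xz is parametrized by (x : y : z) = (s² : st : t²), two points (a : b) and (c : d) of P1 coincide exactly when the determinant ad − bc vanishes, and on the conic x·z − y·y = 0 is precisely that determinant for the pair (x : y), (y : z).

```python
# Points of the conic y^2 = xz via the parametrization (x:y:z) = (s^2 : s*t : t^2).
for s in range(1, 6):
    for t in range(1, 6):
        x, y, z = s * s, s * t, t * t
        assert y * y == x * z                  # the point lies on the conic
        # (x : y) equals (y : z) in P^1 iff the 2x2 determinant x*z - y*y vanishes:
        assert x * z - y * y == 0
        # and both charts give the parameter point (s : t):
        assert x * t == y * s and y * t == z * s
```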
== Fibers of a morphism ==
The important fact is the theorem on the dimension of fibers: if f : X → Y is a dominant morphism of irreducible varieties, then (1) every irreducible component of a nonempty fiber f−1(y) has dimension at least dim X − dim Y, and (2) there is a nonempty open subset U of Y over which every fiber has dimension exactly dim X − dim Y.
In Mumford's red book, the theorem is proved by means of Noether's normalization lemma. For an algebraic approach where the generic freeness plays a main role and the notion of "universally catenary ring" is a key in the proof, see Eisenbud, Ch. 14 of "Commutative algebra with a view toward algebraic geometry." In fact, the proof there shows that if f is flat, then the dimension equality in 2. of the theorem holds in general (not just generically).
== Degree of a finite morphism ==
Let f: X → Y be a finite surjective morphism between algebraic varieties over a field k. Then, by definition, the degree of f is the degree of the finite field extension of the function field k(X) over f*k(Y). By generic freeness, there is some nonempty open subset U in Y such that the restriction of the direct image sheaf f∗OX to U is free as an OY|U-module. The degree of f is then also the rank of this free module.
If f is étale and if X, Y are complete, then for any coherent sheaf F on Y, writing χ for the Euler characteristic, {\displaystyle \chi (f^{*}F)=\deg(f)\chi (F).} (The Riemann–Hurwitz formula for a ramified covering shows the "étale" here cannot be omitted.)
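A small arithmetic illustration (my own example, with made-up numbers): for an étale cover of smooth projective curves of degree n there is no ramification, so Riemann–Hurwitz reads 2gX − 2 = n(2gY − 2), and then χ(OX) = 1 − gX = n(1 − gY) = n·χ(OY), as the formula above predicts for F = OY.

```python
def euler_char_structure_sheaf(g):
    """chi(O) = 1 - g for a smooth projective curve of genus g."""
    return 1 - g

n, g_Y = 3, 2                        # a degree-3 etale cover of a genus-2 curve
g_X = n * (g_Y - 1) + 1              # Riemann-Hurwitz with no ramification
assert 2 * g_X - 2 == n * (2 * g_Y - 2)
assert euler_char_structure_sheaf(g_X) == n * euler_char_structure_sheaf(g_Y)
```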
In general, if f is a finite surjective morphism, if X, Y are complete and F is a coherent sheaf on Y, then from the Leray spectral sequence {\displaystyle \operatorname {H} ^{p}(Y,R^{q}f_{*}f^{*}F)\Rightarrow \operatorname {H} ^{p+q}(X,f^{*}F)}, one gets: {\displaystyle \chi (f^{*}F)=\sum _{q=0}^{\infty }(-1)^{q}\chi (R^{q}f_{*}f^{*}F).}
In particular, if F is a tensor power {\displaystyle L^{\otimes n}} of a line bundle, then {\displaystyle R^{q}f_{*}(f^{*}F)=R^{q}f_{*}{\mathcal {O}}_{X}\otimes L^{\otimes n}} and since the support of {\displaystyle R^{q}f_{*}{\mathcal {O}}_{X}} has positive codimension if q is positive, comparing the leading terms, one has: {\displaystyle \operatorname {deg} (f^{*}L)=\operatorname {deg} (f)\operatorname {deg} (L)} (since the generic rank of {\displaystyle f_{*}{\mathcal {O}}_{X}} is the degree of f).
If f is étale and k is algebraically closed, then each geometric fiber f−1(y) consists exactly of deg(f) points.
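For example (an illustration of mine), the squaring map t ↦ t² on the affine line over C is finite of degree 2 and étale away from t = 0; every fiber over y ≠ 0 has exactly deg(f) = 2 points, while the fiber over the branch point 0 collapses to one:

```python
import cmath

def fiber(y):
    """Points of the fiber of t -> t^2 over y (two points when y != 0)."""
    r = cmath.sqrt(y)
    return {r, -r}

for y in [1, 4, -1, 2 + 3j]:
    pts = fiber(y)
    assert len(pts) == 2                          # deg(f) = 2 points in each fiber
    assert all(abs(t * t - y) < 1e-12 for t in pts)
assert len(fiber(0)) == 1                         # the branch point: f is not etale at 0
```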
== See also ==
Algebraic function
Smooth morphism
Étale morphisms – The algebraic analogue of local diffeomorphisms.
Resolution of singularities
Contraction morphism
== Notes ==
== Citations ==
== References ==
In mathematics – particularly in homological algebra, algebraic topology, and algebraic geometry – a differential graded algebra (or DGA, or DG algebra) is an algebraic structure often used to capture information about a topological or geometric space. Explicitly, a differential graded algebra is a graded associative algebra with a chain complex structure that is compatible with the algebra structure.
In geometry, the de Rham algebra of differential forms on a manifold has the structure of a differential graded algebra, and it encodes the de Rham cohomology of the manifold. In algebraic topology, the singular cochains of a topological space form a DGA encoding the singular cohomology. Moreover, American mathematician Dennis Sullivan developed a DGA to encode the rational homotopy type of topological spaces.
== Definitions ==
Let {\displaystyle A_{\bullet }=\bigoplus \nolimits _{i\in \mathbb {Z} }A_{i}} be a {\displaystyle \mathbb {Z} }-graded algebra, with product {\displaystyle \cdot }, equipped with a map {\displaystyle d\colon A_{\bullet }\to A_{\bullet }} of degree {\displaystyle -1} (homologically graded) or degree {\displaystyle +1} (cohomologically graded). We say that {\displaystyle (A_{\bullet },d,\cdot )} is a differential graded algebra if {\displaystyle d} is a differential, giving {\displaystyle A_{\bullet }} the structure of a chain complex or cochain complex (depending on the degree), and satisfies a graded Leibniz rule. In what follows, we will denote the "degree" of a homogeneous element {\displaystyle a\in A_{i}} by {\displaystyle |a|=i}. Explicitly, the map {\displaystyle d} satisfies the conditions {\displaystyle d\circ d=0} and {\displaystyle d(a\cdot b)=(da)\cdot b+(-1)^{|a|}a\cdot (db)} for homogeneous elements a, b.
Often one omits the differential and multiplication and simply writes {\displaystyle A_{\bullet }} or {\displaystyle A} to refer to the DGA {\displaystyle (A_{\bullet },d,\cdot )}.
A linear map {\displaystyle f:A_{\bullet }\to B_{\bullet }} between graded vector spaces is said to be of degree n if {\displaystyle f(A_{i})\subseteq B_{i+n}} for all {\displaystyle i}. When considering (co)chain complexes, we restrict our attention to chain maps, that is, maps of degree 0 that commute with the differentials: {\displaystyle f\circ d_{A}=d_{B}\circ f}. The morphisms in the category of DGAs are chain maps that are also algebra homomorphisms.
=== Categorical Definition ===
One can also define DGAs more abstractly using category theory. There is a category of chain complexes over a ring {\displaystyle R}, often denoted {\displaystyle \operatorname {Ch} _{R}}, whose objects are chain complexes and whose morphisms are chain maps. We define the tensor product of chain complexes {\displaystyle (V,d_{V})} and {\displaystyle (W,d_{W})} by {\displaystyle (V\otimes W)_{n}=\bigoplus _{i+j=n}V_{i}\otimes _{R}W_{j}} with differential {\displaystyle d(v\otimes w)=(d_{V}v)\otimes w-(-1)^{|v|}v\otimes (d_{W}w)}. This operation makes {\displaystyle \operatorname {Ch} _{R}} into a symmetric monoidal category. Then, we can equivalently define a differential graded algebra as a monoid object in {\displaystyle \operatorname {Ch} _{R}}. Heuristically, it is an object in {\displaystyle \operatorname {Ch} _{R}} with an associative and unital multiplication.
=== Homology and Cohomology ===
Associated to any chain complex {\displaystyle (A_{\bullet },d)} is its homology. Since {\displaystyle d\circ d=0}, it follows that {\displaystyle \operatorname {im} (d:A_{i+1}\to A_{i})} is a subobject of {\displaystyle \operatorname {ker} (d:A_{i}\to A_{i-1})}. Thus, we can form the quotient {\displaystyle H_{i}(A_{\bullet })=\operatorname {ker} (d:A_{i}\to A_{i-1})/\operatorname {im} (d:A_{i+1}\to A_{i})}. This is called the {\displaystyle i}th homology group, and all together they form a graded vector space {\displaystyle H_{\bullet }(A)}. In fact, the homology groups form a DGA with zero differential. Analogously, one can define the cohomology groups of a cochain complex, which also form a graded algebra with zero differential.
Every chain map {\displaystyle f:(A_{\bullet },d_{A})\to (B_{\bullet },d_{B})} of complexes induces a map on (co)homology, often denoted {\displaystyle f_{*}:H_{\bullet }(A)\to H_{\bullet }(B)} (respectively {\displaystyle f^{*}:H^{\bullet }(B)\to H^{\bullet }(A)}). If this induced map is an isomorphism on all (co)homology groups, the map {\displaystyle f} is called a quasi-isomorphism. In many contexts, this is the natural notion of equivalence one uses for (co)chain complexes. We say a morphism of DGAs is a quasi-isomorphism if the chain map on the underlying (co)chain complexes is.
== Properties of DGAs ==
=== Commutative Differential Graded Algebras ===
A commutative differential graded algebra (or CDGA) is a differential graded algebra {\displaystyle (A_{\bullet },d,\cdot )} which satisfies a graded version of commutativity. Namely, {\displaystyle a\cdot b=(-1)^{|a||b|}b\cdot a} for homogeneous elements {\displaystyle a\in A_{i},b\in A_{j}}. Many of the DGAs commonly encountered in mathematics happen to be CDGAs, like the de Rham algebra of differential forms.
=== Differential graded Lie algebras ===
A differential graded Lie algebra (or DGLA) is a differential graded analogue of a Lie algebra. That is, it is a differential graded vector space {\displaystyle (L_{\bullet },d)} together with an operation {\displaystyle [,]:L_{i}\otimes L_{j}\to L_{i+j}} satisfying the graded analogues of the Lie algebra axioms: graded antisymmetry, {\displaystyle [a,b]=-(-1)^{|a||b|}[b,a]}, the graded Jacobi identity, and the requirement that {\displaystyle d} be a derivation of the bracket.
An example of a DGLA is the de Rham algebra {\displaystyle \Omega ^{\bullet }(M)} tensored with a Lie algebra {\displaystyle {\mathfrak {g}}}, with the bracket given by the exterior product of the differential forms and the Lie bracket; elements of this DGLA are known as Lie algebra–valued differential forms. DGLAs also arise frequently in the study of deformations of algebraic structures where, over a field of characteristic 0, "nice" deformation problems are described by the space of Maurer–Cartan elements of some suitable DGLA.
=== Formal DGAs ===
A (co)chain complex {\displaystyle C_{\bullet }} is called formal if there is a chain map to its (co)homology {\displaystyle H_{\bullet }(C_{\bullet })} (respectively {\displaystyle H^{\bullet }(C_{\bullet })}), thought of as a complex with 0 differential, that is a quasi-isomorphism. We say that a DGA {\displaystyle A} is formal if there exists a morphism of DGAs {\displaystyle A\to H_{\bullet }(A)} (respectively {\displaystyle A\to H^{\bullet }(A)}) that is a quasi-isomorphism. This notion is important, for instance, when one wants to consider quasi-isomorphic chain complexes or DGAs as being equivalent, as in the derived category.
== Examples ==
=== Trivial DGAs ===
Notice that any graded algebra {\displaystyle A=\bigoplus \nolimits _{i}A_{i}} has the structure of a DGA with trivial differential, i.e., {\displaystyle d=0}. In particular, as noted above, the (co)homology of any DGA forms a trivial DGA, since it is a graded algebra.
=== The de Rham algebra ===
Let {\displaystyle M} be a manifold. Then the differential forms on {\displaystyle M}, denoted by {\displaystyle \Omega ^{\bullet }(M)}, naturally have the structure of a (cohomologically graded) DGA. The graded vector space is {\displaystyle \Omega ^{\bullet }(M)}, where the grading is given by form degree. This vector space has a product, given by the exterior product, which makes it into a graded algebra. Finally, the exterior derivative {\displaystyle d:\Omega ^{i}(M)\to \Omega ^{i+1}(M)} satisfies {\displaystyle d^{2}=0} and the graded Leibniz rule. In fact, the exterior product is graded-commutative, which makes the de Rham algebra an example of a CDGA.
=== Singular Cochains ===
Let {\displaystyle X} be a topological space. Recall that we can associate to {\displaystyle X} its complex of singular cochains with coefficients in a ring {\displaystyle R}, denoted {\displaystyle (C^{\bullet }(X;R),d)}, whose cohomology is the singular cohomology of {\displaystyle X}. On {\displaystyle C^{\bullet }(X;R)}, one can define the cup product of cochains, which gives this cochain complex the structure of a DGA. In the case where {\displaystyle X} is a smooth manifold and {\displaystyle R=\mathbb {R} }, the de Rham theorem states that the singular cohomology is isomorphic to the de Rham cohomology and, moreover, the cup product and the exterior product of differential forms induce the same operation on cohomology.
Note, however, that while the cup product induces a graded-commutative operation on cohomology, it is not graded-commutative directly on cochains. This is an important distinction, and the failure of a DGA to be commutative is referred to as the "commutative cochain problem". This problem is important because if, for any topological space {\displaystyle X}, one can associate a commutative DGA whose cohomology is the singular cohomology of {\displaystyle X} over {\displaystyle R}, then this CDGA determines the {\displaystyle R}-homotopy type of {\displaystyle X}.
=== The Free DGA ===
Let {\displaystyle V} be a (non-graded) vector space over a field {\displaystyle k}. The tensor algebra {\displaystyle T(V)} is defined to be the graded algebra {\displaystyle T(V)=\bigoplus _{i\geq 0}T^{i}(V)=\bigoplus _{i\geq 0}V^{\otimes i}} where, by convention, we take {\displaystyle T^{0}(V)=k}. This vector space can be made into a graded algebra with the multiplication {\displaystyle T^{i}(V)\otimes T^{j}(V)\to T^{i+j}(V)} given by the tensor product {\displaystyle \otimes }. This is the free algebra on {\displaystyle V}, and can be thought of as the algebra of all non-commuting polynomials in the elements of {\displaystyle V}.
One can give the tensor algebra the structure of a DGA as follows. Let {\displaystyle f:V\to k} be any linear map. Then, this extends uniquely to a derivation of {\displaystyle T(V)} of degree {\displaystyle -1} (homologically graded) by the formula {\displaystyle d_{f}(v_{1}\otimes \cdots \otimes v_{n})=\sum _{i=1}^{n}(-1)^{i-1}v_{1}\otimes \cdots \otimes f(v_{i})\otimes \cdots \otimes v_{n}}. One can think of the minus signs on the right-hand side as coming from "jumping" the map {\displaystyle f} over the elements {\displaystyle v_{1},\ldots ,v_{i-1}}, which are all of degree 1 in {\displaystyle T(V)}. This is commonly referred to as the Koszul sign rule.
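The sign rule can be checked mechanically (a sketch of mine: pure tensors as tuples of symbols, formal sums as dicts of coefficients): applying the derivation twice, the signs cancel in pairs and d_f ∘ d_f = 0, which is exactly what makes d_f a differential.

```python
from collections import defaultdict

def d_f(word, f):
    """Apply the derivation d_f to a pure tensor, given as a tuple of symbols."""
    out = defaultdict(int)
    for i, v in enumerate(word):
        sign = (-1) ** i                       # (-1)^(i-1) for 1-based i
        out[word[:i] + word[i + 1:]] += sign * f[v]
    return out

def apply_to_sum(chain, f):
    """Extend d_f linearly to a formal sum {word: coefficient}."""
    out = defaultdict(int)
    for word, c in chain.items():
        for w, c2 in d_f(word, f).items():
            out[w] += c * c2
    return out

f = {'a': 2, 'b': -3, 'c': 5}                  # an arbitrary linear map V -> k
first = d_f(('a', 'b', 'c'), f)
second = apply_to_sum(first, f)
assert all(c == 0 for c in second.values())    # d_f applied twice vanishes
```

For instance, d_f(a⊗b) = f(a)·b − f(b)·a, and applying d_f again gives f(a)f(b) − f(b)f(a) = 0.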
One can extend this construction to differential graded vector spaces. Let {\displaystyle (V_{\bullet },d_{V})} be a differential graded vector space, i.e., {\displaystyle d_{V}:V_{i}\to V_{i-1}} and {\displaystyle d^{2}=0}. Here we work with a homologically graded DG vector space, but this construction works equally well for a cohomologically graded one. Then, we can endow the tensor algebra {\displaystyle T(V)} with a DGA structure which extends the DG structure on V. The differential is given by {\displaystyle d(v_{1}\otimes \cdots \otimes v_{n})=\sum _{i=1}^{n}(-1)^{|v_{1}|+\ldots +|v_{i-1}|}v_{1}\otimes \cdots \otimes d_{V}(v_{i})\otimes \cdots \otimes v_{n}}. This is similar to the previous case, except that now the elements of {\displaystyle V} can have different degrees, and {\displaystyle T(V)} is no longer graded by the number of tensor factors but instead by the sum of the degrees of the elements of {\displaystyle V}, i.e., {\displaystyle |v_{1}\otimes \cdots \otimes v_{n}|=|v_{1}|+\ldots +|v_{n}|}.
=== The Free CDGA ===
Similar to the previous case, one can also construct the free CDGA. Given a graded vector space {\displaystyle V_{\bullet }}, we define the free graded-commutative algebra on it by {\displaystyle S(V)=\operatorname {Sym} \left(\bigoplus _{i=2k}V_{i}\right)\otimes \bigwedge \left(\bigoplus _{i=2k+1}V_{i}\right)} where {\displaystyle \operatorname {Sym} } denotes the symmetric algebra and {\displaystyle \bigwedge } denotes the exterior algebra. If we begin with a DG vector space {\displaystyle (V_{\bullet },d)} (either homologically or cohomologically graded), then we can extend {\displaystyle d} to {\displaystyle S(V)} such that {\displaystyle (S(V),d)} is a CDGA in a unique way.
== Models for DGAs ==
As mentioned previously, oftentimes one is most interested in the (co)homology of a DGA. As such, the specific (co)chain complex we use is less important, as long as it has the right (co)homology. Given a DGA {\displaystyle A}, we say that another DGA {\displaystyle M} is a model for {\displaystyle A} if it comes with a surjective DGA morphism {\displaystyle p:M\to A} that is a quasi-isomorphism.
=== Minimal Models ===
Since one could form arbitrarily large (co)chain complexes with the same cohomology, it is useful to consider the "smallest" possible model of a DGA. We say that a DGA {\displaystyle (A,d,\cdot )} is minimal if it is free as a graded(-commutative) algebra, that is, of the form {\displaystyle S(V)} for some graded vector space {\displaystyle V}, and its differential is decomposable, meaning that {\displaystyle d} maps {\displaystyle V} into the subspace of {\displaystyle S(V)} spanned by products of at least two generators.
Note that some conventions, often used in algebraic topology, additionally require that {\displaystyle A} be simply connected, which means that {\displaystyle A^{0}=k} and {\displaystyle A^{1}=0}. This condition on the 0th and 1st degree components of {\displaystyle A} mirrors the (co)homology groups of a simply connected space.
Finally, we say that {\displaystyle M} is a minimal model for {\displaystyle A} if it is both minimal and a model for {\displaystyle A}. The fundamental theorem of minimal models states that if {\displaystyle A} is simply connected then it admits a minimal model, and that if a minimal model exists it is unique up to (non-unique) isomorphism.
=== The Sullivan minimal model ===
Minimal models were used with great success by Dennis Sullivan in his work on rational homotopy theory. Given a simplicial complex {\displaystyle X}, one can define a rational analogue of the (real) de Rham algebra: the DGA {\displaystyle A_{PL}(X)} of "piecewise polynomial" differential forms with {\displaystyle \mathbb {Q} }-coefficients. Then {\displaystyle A_{PL}(X)} has the structure of a CDGA over the field {\displaystyle \mathbb {Q} }, and in fact its cohomology is isomorphic to the singular cohomology of {\displaystyle X}. In particular, if {\displaystyle X} is a simply connected topological space then {\displaystyle A_{PL}(X)} is simply connected as a DGA, and thus a minimal model exists.
Moreover, since {\displaystyle A_{PL}(X)} is a CDGA whose cohomology is the singular cohomology of {\displaystyle X} with {\displaystyle \mathbb {Q} }-coefficients, it is a solution to the commutative cochain problem. Thus, if {\displaystyle X} is a simply connected CW complex with finite-dimensional rational homology groups, the minimal model of the CDGA {\displaystyle A_{PL}(X)} captures entirely the rational homotopy type of {\displaystyle X}.
== See also ==
Differential graded Lie algebra
Rational homotopy theory
Homotopy associative algebra
== Notes ==
== References ==