In fluid dynamics, two types of stream function (or streamfunction) are defined:
The two-dimensional (or Lagrange) stream function, introduced by Joseph Louis Lagrange in 1781, is defined for incompressible (divergence-free), two-dimensional flows.
The Stokes stream function, named after George Gabriel Stokes, is defined for incompressible, three-dimensional flows with axisymmetry.
The properties of stream functions make them useful for analyzing and graphically illustrating flows.
The remainder of this article describes the two-dimensional stream function.
== Two-dimensional stream function ==
=== Assumptions ===
The two-dimensional stream function is based on the following assumptions:
The flow field can be described as two-dimensional plane flow, with velocity vector
$$\mathbf{u} = \begin{bmatrix} u(x,y,t) \\ v(x,y,t) \\ 0 \end{bmatrix}.$$
The velocity satisfies the continuity equation for incompressible flow:
$$\nabla \cdot \mathbf{u} = 0.$$
The domain has no holes, or only has holes that have no net flux inwards or outwards.
Although in principle the stream function doesn't require the use of a particular coordinate system, for convenience the description presented here uses a right-handed Cartesian coordinate system with coordinates (x, y, z).
=== Derivation ===
==== The test surface ====
Consider two points A and P in the xy plane, and a continuous curve AP, also in the xy plane, that connects them. Then every point on the curve AP has z coordinate z = 0. Let the total length of the curve AP be L.
Suppose a ribbon-shaped surface is created by extending the curve AP upward to the horizontal plane z = b (b > 0), where b is the thickness of the flow. Then the surface has length L, width b, and area bL. Call this the test surface.
==== Flux through the test surface ====
The total volumetric flux through the test surface is
$$Q(x,y,t) = \int_0^b \int_0^L \mathbf{u} \cdot \hat{\mathbf{n}}\,\mathrm{d}s\,\mathrm{d}z$$
where s is an arc-length parameter defined on the curve AP, with s = 0 at the point A and s = L at the point P.
Here n̂ is the unit vector perpendicular to the test surface, i.e.,
$$\hat{\mathbf{n}}\,\mathrm{d}s = -R\,\mathrm{d}\mathbf{r} = \begin{bmatrix} \mathrm{d}y \\ -\mathrm{d}x \\ 0 \end{bmatrix}$$
where R is the 3 × 3 rotation matrix corresponding to a 90° anticlockwise rotation about the positive z axis:
$$R = R_z(90^\circ) = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
The integrand in the expression for Q is independent of z, so the outer integral can be evaluated to yield
$$Q(x,y,t) = b \int_A^P \left( u\,\mathrm{d}y - v\,\mathrm{d}x \right).$$
==== Classical definition ====
Lamb and Batchelor define the stream function ψ as follows:
$$\psi(x,y,t) = \int_A^P \left( u\,\mathrm{d}y - v\,\mathrm{d}x \right).$$
Using the expression derived above for the total volumetric flux, Q, this can be written as
$$\psi(x,y,t) = \frac{Q(x,y,t)}{b}.$$
In words, the stream function ψ is the volumetric flux through the test surface per unit thickness, where thickness is measured perpendicular to the plane of flow.
The point A is a reference point that defines where the stream function is identically zero. Its position is chosen more or less arbitrarily and, once chosen, typically remains fixed.
An infinitesimal shift dP = (dx, dy) in the position of point P results in the following change of the stream function:
$$\mathrm{d}\psi = u\,\mathrm{d}y - v\,\mathrm{d}x.$$
From the exact differential
$$\mathrm{d}\psi = \frac{\partial \psi}{\partial x}\,\mathrm{d}x + \frac{\partial \psi}{\partial y}\,\mathrm{d}y,$$
it follows that the flow velocity components in relation to the stream function ψ must be
$$u = \frac{\partial \psi}{\partial y}, \qquad v = -\frac{\partial \psi}{\partial x}.$$
Notice that the stream function is linear in the velocity. Consequently if two incompressible flow fields are superimposed, then the stream function of the resultant flow field is the algebraic sum of the stream functions of the two original fields.
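As an illustration of these relations (not part of the original article), the following sketch uses SymPy to recover the velocity components from an assumed stream function, a uniform stream plus a point vortex, and to confirm that the resulting field automatically satisfies the continuity equation.

```python
import sympy as sp

x, y, U, Gamma = sp.symbols('x y U Gamma', real=True, positive=True)

# Assumed illustrative stream function: uniform flow of speed U in the x
# direction plus a point vortex of circulation Gamma at the origin.
psi = U*y + (Gamma / (2*sp.pi)) * sp.log(sp.sqrt(x**2 + y**2))

# Velocity components from the stream function.
u = sp.diff(psi, y)        # u =  dpsi/dy
v = -sp.diff(psi, x)       # v = -dpsi/dx

# The continuity equation for incompressible flow is satisfied identically.
print(sp.simplify(sp.diff(u, x) + sp.diff(v, y)))   # 0
```

Because the stream function is linear in the velocity, the superposition of the two elementary flows above is handled simply by adding their stream functions.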
=== Effect of shift in position of reference point ===
Consider a shift in the position of the reference point, say from A to A′. Let ψ′ denote the stream function relative to the shifted reference point A′:
$$\psi'(x,y,t) = \int_{A'}^{P} \left( u\,\mathrm{d}y - v\,\mathrm{d}x \right).$$
Then the stream function is shifted by
$$\Delta\psi(t) = \psi'(x,y,t) - \psi(x,y,t) = \int_{A'}^{A} \left( u\,\mathrm{d}y - v\,\mathrm{d}x \right),$$
which implies the following:
A shift in the position of the reference point effectively adds a constant (for steady flow) or a function solely of time (for nonsteady flow) to the stream function ψ at every point P.
The shift in the stream function, Δψ, is equal to the total volumetric flux, per unit thickness, through the continuous surface that extends from point A′ to point A. Consequently Δψ = 0 if and only if A and A′ lie on the same streamline.
=== In terms of vector rotation ===
The velocity u can be expressed in terms of the stream function ψ as
$$\mathbf{u} = -R\,\nabla\psi$$
where R is the 3 × 3 rotation matrix corresponding to a 90° anticlockwise rotation about the positive z axis. Solving the above equation for ∇ψ produces the equivalent form
$$\nabla\psi = R\,\mathbf{u}.$$
From these forms it is immediately evident that the vectors u and ∇ψ are
perpendicular: u ⋅ ∇ψ = 0
of the same length: |u| = |∇ψ|.
Additionally, the compactness of the rotation form facilitates manipulations (e.g., see Condition of existence).
=== In terms of vector potential and stream surfaces ===
In general, a divergence-free field like u, also known as a solenoidal vector field, can always be represented as the curl of some vector potential A:
$$\mathbf{u} = \nabla \times \mathbf{A}.$$
The stream function ψ can be understood as providing the strength of a vector potential that is directed perpendicular to the plane:
$$\mathbf{A}(x,y,t) = \begin{bmatrix} 0 \\ 0 \\ \psi(x,y,t) \end{bmatrix},$$
in other words A = ψ ẑ, where ẑ is the unit vector pointing in the positive z direction.
This can also be written as the vector cross product
$$\mathbf{u} = \nabla\psi \times \hat{\mathbf{z}}$$
where we've used the vector calculus identity
$$\nabla \times \left( \psi\,\hat{\mathbf{z}} \right) = \psi\,\nabla \times \hat{\mathbf{z}} + \nabla\psi \times \hat{\mathbf{z}}.$$
Noting that ẑ = ∇z, and defining φ = z, one can express the velocity field as
$$\mathbf{u} = \nabla\psi \times \nabla\phi.$$
This form shows that the level surfaces of ψ and the level surfaces of z (i.e., horizontal planes) form a system of orthogonal stream surfaces.
=== Alternative (opposite sign) definition ===
An alternative definition, sometimes used in meteorology and oceanography, is
$$\psi' = -\psi.$$
=== Relation to vorticity ===
In two-dimensional plane flow, the vorticity vector, defined as ω = ∇ × u, reduces to ω ẑ, where
$$\omega = -\nabla^2\psi \quad \text{or} \quad \omega = +\nabla^2\psi'.$$
These are forms of Poisson's equation.
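A short SymPy check of this relation, using an arbitrarily chosen (non-harmonic) polynomial stream function rather than anything from the text:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

psi = x**2 * y**2 + x**2          # arbitrary illustrative stream function

u = sp.diff(psi, y)               # u =  dpsi/dy
v = -sp.diff(psi, x)              # v = -dpsi/dx

omega = sp.diff(v, x) - sp.diff(u, y)                   # vorticity (z-component)
laplacian = sp.diff(psi, x, 2) + sp.diff(psi, y, 2)     # del^2 psi

print(sp.simplify(omega + laplacian))   # 0, i.e. omega = -del^2 psi
```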
=== Relation to streamlines ===
Consider two-dimensional plane flow with two infinitesimally close points P = (x, y, z) and P′ = (x + dx, y + dy, z) lying in the same horizontal plane. From calculus, the corresponding infinitesimal difference between the values of the stream function at the two points is
$$\mathrm{d}\psi(x,y,t) = \psi(x+\mathrm{d}x,\, y+\mathrm{d}y,\, t) - \psi(x,y,t) = \frac{\partial \psi}{\partial x}\,\mathrm{d}x + \frac{\partial \psi}{\partial y}\,\mathrm{d}y = \nabla\psi \cdot \mathrm{d}\mathbf{r}$$
Suppose ψ takes the same value, say C, at the two points P and P′. Then this gives
$$0 = \nabla\psi \cdot \mathrm{d}\mathbf{r},$$
implying that the vector ∇ψ is normal to the surface ψ = C. Because u ⋅ ∇ψ = 0 everywhere (e.g., see In terms of vector rotation), each streamline corresponds to the intersection of a particular stream surface and a particular horizontal plane. Consequently, in three dimensions, unambiguous identification of any particular streamline requires that one specify corresponding values of both the stream function and the elevation (z coordinate).
The development here assumes the space domain is three-dimensional. The concept of stream function can also be developed in the context of a two-dimensional space domain. In that case level sets of the stream function are curves rather than surfaces, and streamlines are level curves of the stream function. Consequently, in two dimensions, unambiguous identification of any particular streamline requires that one specify the corresponding value of the stream function only.
=== Condition of existence ===
It's straightforward to show that for two-dimensional plane flow u satisfies the curl-divergence equation
$$(\nabla \cdot \mathbf{u})\,\hat{\mathbf{z}} = -\nabla \times (R\,\mathbf{u})$$
where R is the 3 × 3 rotation matrix corresponding to a 90° anticlockwise rotation about the positive z axis. This equation holds regardless of whether or not the flow is incompressible.
If the flow is incompressible (i.e., ∇ ⋅ u = 0), then the curl-divergence equation gives
$$\mathbf{0} = \nabla \times (R\,\mathbf{u}).$$
Then by Stokes' theorem the line integral of R u over every closed loop vanishes:
$$\oint_{\partial\Sigma} (R\,\mathbf{u}) \cdot \mathrm{d}\mathbf{\Gamma} = 0.$$
Hence, the line integral of R u is path-independent. Finally, by the converse of the gradient theorem, a scalar function ψ(x, y, t) exists such that
$$R\,\mathbf{u} = \nabla\psi.$$
Here ψ represents the stream function.
Conversely, if the stream function exists, then R u = ∇ψ. Substituting this result into the curl-divergence equation yields ∇ ⋅ u = 0 (i.e., the flow is incompressible).
In summary, the stream function for two-dimensional plane flow exists if and only if the flow is incompressible.
=== Potential flow ===
For two-dimensional potential flow, streamlines are perpendicular to equipotential lines. Taken together with the velocity potential, the stream function may be used to derive a complex potential. In other words, the stream function accounts for the solenoidal part of a two-dimensional Helmholtz decomposition, while the velocity potential accounts for the irrotational part.
=== Summary of properties ===
The basic properties of two-dimensional stream functions can be summarized as follows:
The x- and y-components of the flow velocity at a given point are given by the partial derivatives of the stream function at that point.
The value of the stream function is constant along every streamline (streamlines represent the trajectories of particles in steady flow). That is, in two dimensions each streamline is a level curve of the stream function.
The difference between the stream function values at any two points gives the volumetric flux through the vertical surface that connects the two points.
== Two-dimensional stream function for flows with time-invariant density ==
If the fluid density is time-invariant at all points within the flow, i.e.,
$$\frac{\partial \rho}{\partial t} = 0,$$
then the continuity equation (e.g., see Continuity equation#Fluid dynamics) for two-dimensional plane flow becomes
$$\nabla \cdot (\rho\,\mathbf{u}) = 0.$$
In this case the stream function ψ is defined such that
$$\rho\,u = \frac{\partial \psi}{\partial y}, \quad \rho\,v = -\frac{\partial \psi}{\partial x}$$
and represents the mass flux (rather than volumetric flux) per unit thickness through the test surface.
== See also ==
Elementary flow
== References ==
=== Citations ===
=== Sources ===
== External links ==
Joukowsky Transform Interactive WebApp
In mathematics, there are several functions known as Kummer's function. One is known as the confluent hypergeometric function of Kummer. Another one, defined below, is related to the polylogarithm. Both are named for Ernst Kummer.
Kummer's function is defined by
$$\Lambda_n(z) = \int_0^z \frac{\log^{n-1}|t|}{1+t}\,dt.$$
The duplication formula is
$$\Lambda_n(z) + \Lambda_n(-z) = 2^{1-n}\,\Lambda_n(-z^2).$$
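As a rough numerical sanity check of the duplication formula (assuming SciPy is available; the values n = 2 and z = 0.4 are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

def kummer_lambda(n, z):
    """Evaluate Lambda_n(z) = integral from 0 to z of log^(n-1)|t| / (1+t) dt."""
    integrand = lambda t: np.log(abs(t))**(n - 1) / (1.0 + t)
    value, _ = quad(integrand, 0.0, z)
    return value

n, z = 2, 0.4
lhs = kummer_lambda(n, z) + kummer_lambda(n, -z)
rhs = 2.0**(1 - n) * kummer_lambda(n, -z**2)
print(lhs, rhs)   # the two sides agree to quadrature accuracy
```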
Compare this to the duplication formula for the polylogarithm:
$$\operatorname{Li}_n(z) + \operatorname{Li}_n(-z) = 2^{1-n}\operatorname{Li}_n(z^2).$$
An explicit link to the polylogarithm is given by
$$\operatorname{Li}_n(z) = \operatorname{Li}_n(1) + \sum_{k=1}^{n-1} (-1)^{k-1}\,\frac{\log^k|z|}{k!}\,\operatorname{Li}_{n-k}(z) + \frac{(-1)^{n-1}}{(n-1)!}\left[\Lambda_n(-1) - \Lambda_n(-z)\right].$$
== References ==
In mathematics, in particular linear algebra, the matrix determinant lemma computes the determinant of the sum of an invertible matrix A and the dyadic product, u vT, of a column vector u and a row vector vT.
== Statement ==
Suppose A is an invertible square matrix and u, v are column vectors. Then the matrix determinant lemma states that
$$\det(\mathbf{A} + \mathbf{u}\mathbf{v}^{\mathsf{T}}) = (1 + \mathbf{v}^{\mathsf{T}}\mathbf{A}^{-1}\mathbf{u})\,\det(\mathbf{A}).$$
Here, uvT is the outer product of two vectors u and v.
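A quick numerical check of the statement, assuming NumPy is available (the random test matrix and vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

lhs = np.linalg.det(A + u @ v.T)
rhs = (1.0 + (v.T @ np.linalg.solve(A, u)).item()) * np.linalg.det(A)
print(np.isclose(lhs, rhs))   # True
```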
The theorem can also be stated in terms of the adjugate matrix of A:
$$\det(\mathbf{A} + \mathbf{u}\mathbf{v}^{\mathsf{T}}) = \det(\mathbf{A}) + \mathbf{v}^{\mathsf{T}}\,\mathrm{adj}(\mathbf{A})\,\mathbf{u},$$
in which case it applies whether or not the matrix A is invertible.
== Proof ==
First the proof of the special case A = I follows from the equality:
$$\begin{pmatrix} \mathbf{I} & 0 \\ \mathbf{v}^{\mathsf{T}} & 1 \end{pmatrix}\begin{pmatrix} \mathbf{I} + \mathbf{u}\mathbf{v}^{\mathsf{T}} & \mathbf{u} \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \mathbf{I} & 0 \\ -\mathbf{v}^{\mathsf{T}} & 1 \end{pmatrix} = \begin{pmatrix} \mathbf{I} & \mathbf{u} \\ 0 & 1 + \mathbf{v}^{\mathsf{T}}\mathbf{u} \end{pmatrix}.$$
The determinant of the left hand side is the product of the determinants of the three matrices. Since the first and third matrix are triangular matrices with unit diagonal, their determinants are just 1. The determinant of the middle matrix is our desired value. The determinant of the right hand side is simply (1 + vTu). So we have the result:
$$\det(\mathbf{I} + \mathbf{u}\mathbf{v}^{\mathsf{T}}) = 1 + \mathbf{v}^{\mathsf{T}}\mathbf{u}.$$
Then the general case can be found as:
$$\det(\mathbf{A} + \mathbf{u}\mathbf{v}^{\mathsf{T}}) = \det(\mathbf{A})\,\det(\mathbf{I} + (\mathbf{A}^{-1}\mathbf{u})\mathbf{v}^{\mathsf{T}}) = \det(\mathbf{A})\left(1 + \mathbf{v}^{\mathsf{T}}(\mathbf{A}^{-1}\mathbf{u})\right).$$
== Application ==
If the determinant and inverse of A are already known, the formula provides a numerically cheap way to compute the determinant of A corrected by the matrix uvT. The computation is relatively cheap because the determinant of A + uvT does not have to be computed from scratch (which in general is expensive). Using unit vectors for u and/or v, individual columns, rows or elements of A may be manipulated and a correspondingly updated determinant computed relatively cheaply in this way.
When the matrix determinant lemma is used in conjunction with the Sherman–Morrison formula, both the inverse and determinant may be conveniently updated together.
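A sketch of such a joint update, assuming NumPy (the helper function and test data below are illustrative, not from the article): the same scalar 1 + vᵀA⁻¹u drives both the determinant update and the Sherman–Morrison inverse update.

```python
import numpy as np

def rank_one_update(det_A, A_inv, u, v):
    """Determinant and inverse of (A + u v^T), given det(A) and A^(-1).

    det(A + u v^T)   = (1 + v^T A^(-1) u) det(A)                       (determinant lemma)
    (A + u v^T)^(-1) = A^(-1) - A^(-1) u v^T A^(-1) / (1 + v^T A^(-1) u)  (Sherman-Morrison)
    """
    Au = A_inv @ u
    denom = 1.0 + (v.T @ Au).item()
    new_det = denom * det_A
    new_inv = A_inv - np.outer(Au, v.T @ A_inv) / denom
    return new_det, new_inv

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))

d, M = rank_one_update(np.linalg.det(A), np.linalg.inv(A), u, v)
print(np.isclose(d, np.linalg.det(A + u @ v.T)))    # True
print(np.allclose(M, np.linalg.inv(A + u @ v.T)))   # True
```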
== Generalization ==
Suppose A is an invertible n-by-n matrix and U, V are n-by-m matrices. Then
$$\det(\mathbf{A} + \mathbf{U}\mathbf{V}^{\mathsf{T}}) = \det(\mathbf{I}_m + \mathbf{V}^{\mathsf{T}}\mathbf{A}^{-1}\mathbf{U})\,\det(\mathbf{A}).$$
In the special case A = In (the n-by-n identity matrix) this is the Weinstein–Aronszajn identity.
Given additionally an invertible m-by-m matrix W, the relationship can also be expressed as
$$\det(\mathbf{A} + \mathbf{U}\mathbf{W}\mathbf{V}^{\mathsf{T}}) = \det(\mathbf{W}^{-1} + \mathbf{V}^{\mathsf{T}}\mathbf{A}^{-1}\mathbf{U})\,\det(\mathbf{W})\,\det(\mathbf{A}).$$
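A numerical spot check of this generalization, assuming NumPy (the dimensions and random data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
U = rng.standard_normal((n, m))
V = rng.standard_normal((n, m))
W = rng.standard_normal((m, m)) + m * np.eye(m)

lhs = np.linalg.det(A + U @ W @ V.T)
rhs = (np.linalg.det(np.linalg.inv(W) + V.T @ np.linalg.solve(A, U))
       * np.linalg.det(W) * np.linalg.det(A))
print(np.isclose(lhs, rhs))   # True
```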
== See also ==
The Sherman–Morrison formula, which shows how to update the inverse, A−1, to obtain (A + uvT)−1.
The Woodbury formula, which shows how to update the inverse, A−1, to obtain (A + UCVT)−1.
The binomial inverse theorem for (A + UCVT)−1.
== References ==
A generator in electrical circuit theory is one of two ideal elements: an ideal voltage source, or an ideal current source. These are two of the fundamental elements in circuit theory. Real electrical generators are most commonly modelled as a non-ideal source consisting of a combination of an ideal source and a resistor. Voltage generators are modelled as an ideal voltage source in series with a resistor. Current generators are modelled as an ideal current source in parallel with a resistor. The resistor is referred to as the internal resistance of the source. Real world equipment may not perfectly follow these models, especially at extremes of loading (both high and low), but for most purposes, they suffice.
The two models of non-ideal generators are interchangeable; either can be used for any given generator. Thévenin's theorem allows a non-ideal current source model to be converted to a non-ideal voltage source model and Norton's theorem allows a non-ideal voltage source model to be converted to a non-ideal current source model. Both models are equally valid, but the voltage source model is more applicable when the internal resistance is low (that is, much lower than the load impedance), and the current source model is more applicable when the internal resistance is high (compared to the load).
== Symbols ==
Symbols commonly used for ideal sources are shown in the figure. Symbols do vary from region to region and time period to time period. Another common symbol for a current source is two interlocking circles.
== Dependent sources ==
A dependent source is one in which the voltage or current of the source output is dependent on another voltage or current elsewhere in the circuit. There are thus four possible types: current-dependent voltage source, voltage-dependent voltage source, current-dependent current source, and voltage-dependent current source. Non-ideal dependent sources can be modelled with the addition of an impedance in the same way as non-dependent sources. These elements are widely used to model the function of two-port networks; one generator is needed for each port, and it is dependent on either voltage or current at the other port. The models are an example of black box modelling; that is, they are quite unrelated to what is physically inside the device but correctly model the device's function. There are a number of these two-port models, differing only in the type of generator required to represent them. This kind of model is particularly useful for modelling the behaviour of transistors.
The model used to represent h-parameters is shown in the figure. h-parameters are frequently used in transistor data sheets to specify the device. The h-parameters are defined as the matrix
$$\begin{bmatrix} V_1 \\ I_2 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}\begin{bmatrix} I_1 \\ V_2 \end{bmatrix}$$
where the voltage and current variables are as shown in the figure. The circuit model using dependent generators is just an alternative way of representing this matrix.
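As a small worked illustration (the numerical h-parameters below are assumed, typical-textbook values for a bipolar transistor in common-emitter connection, not values taken from this article), the defining relation can be evaluated directly:

```python
import numpy as np

# Assumed illustrative h-parameters: h11 in ohms, h12 and h21 dimensionless,
# h22 in siemens.
h = np.array([[1.1e3, 2.5e-4],
              [120.0, 2.0e-5]])

I1 = 10e-6   # input (port 1) current in amperes
V2 = 5.0     # output (port 2) voltage in volts

V1, I2 = h @ np.array([I1, V2])
print(V1, I2)   # input voltage and output current implied by the model
```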
== References ==
In electrical circuit theory, a port is a pair of terminals connecting an electrical network or circuit to an external circuit, as a point of entry or exit for electrical energy. A port consists of two nodes (terminals) connected to an outside circuit which meets the port condition – the currents flowing into the two nodes must be equal and opposite.
The use of ports helps to reduce the complexity of circuit analysis. Many common electronic devices and circuit blocks, such as transistors, transformers, electronic filters, and amplifiers, are analyzed in terms of ports. In multiport network analysis, the circuit is regarded as a "black box" connected to the outside world through its ports. The ports are points where input signals are applied or output signals taken. Its behavior is completely specified by a matrix of parameters relating the voltage and current at its ports, so the internal makeup or design of the circuit need not be considered, or even known, in determining the circuit's response to applied signals.
The concept of ports can be extended to waveguides, but the definition in terms of current is not appropriate and the possible existence of multiple waveguide modes must be accounted for.
== Port condition ==
Any node of a circuit that is available for connection to an external circuit is called a pole (or terminal if it is a physical object). The port condition is that a pair of poles of a circuit is considered a port if and only if the current flowing into one pole from outside the circuit is equal to the current flowing out of the other pole into the external circuit. Equivalently, the algebraic sum of the currents flowing into the two poles from the external circuit must be zero.
It cannot be determined if a pair of nodes meets the port condition by analysing the internal properties of the circuit itself. The port condition is dependent entirely on the external connections of the circuit. What are ports under one set of external circumstances may well not be ports under another. Consider the circuit of four resistors in the figure for example. If generators are connected to the pole pairs (1, 2) and (3, 4) then those two pairs are ports and the circuit is a box attenuator. On the other hand, if generators are connected to pole pairs (1, 4) and (2, 3) then those pairs are ports, the pairs (1, 2) and (3, 4) are no longer ports, and the circuit is a bridge circuit.
It is even possible to arrange the inputs so that no pair of poles meets the port condition. However, it is possible to deal with such a circuit by splitting one or more poles into a number of separate poles joined to the same node. If only one external generator terminal is connected to each pole (whether a split pole or otherwise) then the circuit can again be analysed in terms of ports. The most common arrangement of this type is to designate one pole of an n-pole circuit as the common and split it into n−1 poles. This latter form is especially useful for unbalanced circuit topologies and the resulting circuit has n−1 ports.
In the most general case, it is possible to have a generator connected to every pair of poles, that is, nC2 generators, then every pole must be split into n−1 poles. For instance, in the figure example (c), if the poles 2 and 4 are each split into two poles each then the circuit can be described as a 3-port. However, it is also possible to connect generators to pole pairs (1, 3), (1, 4), and (3, 2) making 4C2 = 6 generators in all and the circuit has to be treated as a 6-port.
== One-ports ==
Any two-pole circuit is guaranteed to meet the port condition by virtue of Kirchhoff's current law and they are therefore one-ports unconditionally. All of the basic electrical elements (inductors, resistors, capacitors, voltage sources, current sources) are one-port devices.
Study of one-ports is an important part of the foundation of network synthesis, most especially in filter design. Two-element one-ports (that is RC, RL and LC circuits) are easier to synthesise than the general case. For a two-element one-port Foster's canonical form or Cauer's canonical form can be used. In particular, LC circuits are studied since these are lossless and are commonly used in filter design.
== Two-ports ==
Linear two port networks have been widely studied and a large number of ways of representing them have been developed. One of these representations is the z-parameters, which can be described in matrix form by:
$$\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}$$
where Vn and In are the voltages and currents respectively at port n. Most of the other descriptions of two-ports can likewise be described with a similar matrix but with a different arrangement of the voltage and current column vectors.
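As a concrete sketch (the T-network and element values are assumptions for illustration, not taken from the article), a resistive T-network with series arms Za and Zb and a shunt arm Zc has z11 = Za + Zc, z22 = Zb + Zc and z12 = z21 = Zc, and the defining relation can then be applied directly:

```python
import numpy as np

# Assumed element values for an illustrative resistive T-network (ohms).
Za, Zb, Zc = 50.0, 50.0, 200.0

# z-parameters follow from the mesh equations:
#   V1 = Za*I1 + Zc*(I1 + I2),  V2 = Zb*I2 + Zc*(I1 + I2)
Z = np.array([[Za + Zc, Zc],
              [Zc, Zb + Zc]])

I = np.array([0.01, -0.002])   # arbitrary port currents in amperes
V = Z @ I                      # port voltages from the defining relation
print(V)                       # [V1, V2] in volts
```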
Common circuit blocks which are two-ports include amplifiers, attenuators and filters.
== Multiports ==
In general, a circuit can consist of any number of ports—a multiport. Some, but not all, of the two-port parameter representations can be extended to arbitrary multiports. Of the voltage and current based matrices, the ones that can be extended are z-parameters and y-parameters. Neither of these are suitable for use at microwave frequencies because voltages and currents are not convenient to measure in formats using conductors and are not relevant at all in waveguide formats. Instead, s-parameters are used at these frequencies and these too can be extended to an arbitrary number of ports.
Circuit blocks which have more than two ports include directional couplers, power splitters, circulators, diplexers, duplexers, multiplexers, hybrids and directional filters.
== RF and microwave ==
RF and microwave circuit topologies are commonly unbalanced circuit topologies such as coaxial or microstrip. In these formats, one pole of each port in a circuit is connected to a common node such as a ground plane. It is assumed in the circuit analysis that all these commoned poles are at the same potential and that current is sourced to or sunk into the ground plane that is equal and opposite to that going into the other pole of any port. In this topology a port is treated as being just a single pole. The corresponding balancing pole is imagined to be incorporated into the ground plane.
The one-pole representation of a port will start to fail if there are significant ground plane loop currents. The assumption in the model is that the ground plane is perfectly conducting and that there is no potential difference between two locations on the ground plane. In reality, the ground plane is not perfectly conducting and loop currents in it will cause potential differences. If there is a potential difference between the commoned poles of two ports then the port condition is broken and the model is invalid.
== Waveguide ==
The idea of ports can be (and is) extended to waveguide devices, but a port can no longer be defined in terms of circuit poles because in waveguides the electromagnetic waves are not guided by electrical conductors. They are, instead guided by the walls of the waveguide. Thus, the concept of a circuit conductor pole does not exist in this format. Ports in waveguides consist of an aperture or break in the waveguide through which the electromagnetic waves can pass. The bounded plane through which the wave passes is the definition of the port.
Waveguides have an additional complication in port analysis in that it is possible (and sometimes desirable) for more than one waveguide mode to exist at the same time. In such cases, for each physical port, a separate port must be added to the analysis model for each of the modes present at that physical port.
== Other energy domains ==
The concept of ports can be extended into other energy domains. The generalised definition of a port is a place where energy can flow from one element or subsystem to another element or subsystem. This generalised view of the port concept helps to explain why the port condition is so defined in electrical analysis. If the algebraic sum of the currents is not zero, such as in example diagram (c), then the energy delivered from an external generator is not equal to the energy entering the pair of circuit poles. The energy transfer at that place is thus more complex than a simple flow from one subsystem to another and does not meet the generalised definition of a port.
The port concept is particularly useful where multiple energy domains are involved in the same system and a unified, coherent analysis is required such as with mechanical–electrical analogies or bond graph analysis. Connection between energy domains is by means of transducers. A transducer may be a one-port as viewed by the electrical domain, but with the more generalised definition of port it is a two-port. For instance, a mechanical actuator has one port in the electrical domain and one port in the mechanical domain. Transducers can be analysed as two-port networks in the same way as electrical two-ports. That is, by means of a pair of linear algebraic equations or a 2×2 transfer function matrix. However, the variables at the two ports will be different and the two-port parameters will be a mixture of two energy domains. For instance, in the actuator example, the z-parameters will include one electrical impedance, one mechanical impedance, and two transimpedances that are ratios of one electrical and one mechanical variable.
== References ==
An equivalent impedance is an equivalent circuit of an electrical network of impedance elements which presents the same impedance between all pairs of terminals as did the given network. This article describes mathematical transformations between some passive, linear impedance networks commonly found in electronic circuits.
There are a number of very well known and often used equivalent circuits in linear network analysis. These include resistors in series, resistors in parallel and the extension to series and parallel circuits for capacitors, inductors and general impedances. Also well known are the Norton and Thévenin equivalent current generator and voltage generator circuits respectively, as is the Y-Δ transform. None of these are discussed in detail here; the individual linked articles should be consulted.
The number of equivalent circuits that a linear network can be transformed into is unbounded. Even in the most trivial cases this can be seen to be true, for instance, by asking how many different combinations of resistors in parallel are equivalent to a given combined resistor. The number of series and parallel combinations that can be formed grows exponentially with the number of resistors, n. For large n the size of the set has been found by numerical techniques to be approximately 2.53n and analytically strict bounds are given by a Farey sequence of Fibonacci numbers. This article could never hope to be comprehensive, but there are some generalisations possible. Wilhelm Cauer found a transformation that could generate all possible equivalents of a given rational, passive, linear one-port, or in other words, any given two-terminal impedance. Transformations of 4-terminal, especially 2-port, networks are also commonly found and transformations of yet more complex networks are possible.
The vast scale of the topic of equivalent circuits is underscored in a story told by Sidney Darlington. According to Darlington, a large number of equivalent circuits were found by Ronald M. Foster, following his and George Campbell's 1920 paper on non-dissipative four-ports. In the course of this work they looked at the ways four ports could be interconnected with ideal transformers and maximum power transfer. They found a number of combinations which might have practical applications and asked the AT&T patent department to have them patented. The patent department replied that it was pointless just patenting some of the circuits if a competitor could use an equivalent circuit to get around the patent; they should patent all of them or not bother. Foster therefore set to work calculating every last one of them. He arrived at an enormous total of 83,539 equivalents (577,722 if different output ratios are included). This was too many to patent, so instead the information was released into the public domain in order to prevent any of AT&T's competitors from patenting them in the future.
== 2-terminal, 2-element-kind networks ==
A single impedance has two terminals to connect to the outside world, hence can be described as a 2-terminal, or a one-port, network. Despite the simple description, there is no limit to the number of meshes, and hence complexity and number of elements, that the impedance network may have. 2-element-kind networks are common in circuit design; filters, for instance, are often LC-kind networks and printed circuit designers favour RC-kind networks because inductors are less easy to manufacture. Transformations are simpler and easier to find than for 3-element-kind networks. One-element-kind networks can be thought of as a special case of two-element-kind. It is possible to use the transformations in this section on a certain few 3-element-kind networks by substituting a network of elements for element Zn. However, this is limited to a maximum of two impedances being substituted; the remainder will not be a free choice. All the transformation equations given in this section are due to Otto Zobel.
=== 3-element networks ===
One-element networks are trivial and two-element, two-terminal networks are either two elements in series or two elements in parallel, also trivial. The smallest number of elements that is non-trivial is three, and there are two 2-element-kind non-trivial transformations possible, one being both the reverse transformation and the topological dual, of the other.
=== 4-element networks ===
There are four non-trivial 4-element transformations for 2-element-kind networks. Two of these are the reverse transformations of the other two and two are the dual of a different two. Further transformations are possible in the special case of Z2 being made the same element kind as Z1, that is, when the network is reduced to one-element-kind. The number of possible networks continues to grow as the number of elements is increased. For all entries in the following table it is defined:
== 2-terminal, n-element, 3-element-kind networks ==
Simple networks with just a few elements can be dealt with by formulating the network equations "by hand" with the application of simple network theorems such as Kirchhoff's laws. Equivalence is proved between two networks by directly comparing the two sets of equations and equating coefficients. For large networks more powerful techniques are required. A common approach is to start by expressing the network of impedances as a matrix. This approach is only good for rational networks. Any network that includes distributed elements, such as a transmission line, cannot be represented by a finite matrix. Generally, an n-mesh network requires an nxn matrix to represent it. For instance the matrix for a 3-mesh network might look like
$$[\mathbf{Z}] = \begin{bmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & Z_{23} \\ Z_{31} & Z_{32} & Z_{33} \end{bmatrix}$$
The entries of the matrix are chosen so that the matrix forms a system of linear equations in the mesh voltages and currents (as defined for mesh analysis):
$$[\mathbf{V}] = [\mathbf{Z}][\mathbf{I}]$$
The example diagram in Figure 1, for instance, can be represented as an impedance matrix by
$$[\mathbf{Z}] = \begin{bmatrix} R_1 + R_2 & -R_2 \\ -R_2 & R_2 + R_3 \end{bmatrix}$$
and the associated system of linear equations is
$$\begin{bmatrix} V_1 \\ 0 \end{bmatrix} = \begin{bmatrix} R_1 + R_2 & -R_2 \\ -R_2 & R_2 + R_3 \end{bmatrix}\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}$$
In the most general case, each branch Zp of the network may be made up of three elements so that
$$Z_\mathrm{p} = sL_\mathrm{p} + R_\mathrm{p} + \frac{1}{sC_\mathrm{p}}$$
where L, R and C represent inductance, resistance, and capacitance respectively and s is the complex frequency operator s = σ + iω.
This is the conventional way of representing a general impedance but for the purposes of this article it is mathematically more convenient to deal with elastance, D, the inverse of capacitance, C. In those terms the general branch impedance can be represented by
$$sZ_\mathrm{p} = s^2 L_\mathrm{p} + sR_\mathrm{p} + D_\mathrm{p}$$
Likewise, each entry of the impedance matrix can consist of the sum of three elements. Consequently, the matrix can be decomposed into three n×n matrices, one for each of the three element kinds:
$$s[\mathbf{Z}] = s^2[\mathbf{L}] + s[\mathbf{R}] + [\mathbf{D}]$$
It is desired that the matrix [Z] represent an impedance, Z(s). For this purpose, the loop of one of the meshes is cut and Z(s) is the impedance measured between the points so cut. It is conventional to assume the external connection port is in mesh 1, and is therefore connected across matrix entry Z11, although it would be perfectly possible to formulate this with connections to any desired nodes. In the following discussion Z(s) taken across Z11 is assumed. Z(s) may be calculated from [Z] by
$$Z(s) = \frac{|\mathbf{Z}|}{z_{11}}$$
where z11 is the complement of Z11 and |Z| is the determinant of [Z].
For the example network above,
$$Z(s) = \frac{(R_1 + R_2)(R_2 + R_3) - R_2^2}{R_2 + R_3} = R_1 + \frac{R_2 R_3}{R_2 + R_3}.$$
This result is easily verified to be correct by the more direct method of resistors in series and parallel. However, such methods rapidly become tedious and cumbersome with the growth of the size and complexity of the network under analysis.
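A symbolic check of that equivalence, assuming SymPy (the symbols mirror the example network above):

```python
import sympy as sp

R1, R2, R3 = sp.symbols('R1 R2 R3', positive=True)

# Mesh impedance matrix of the example network.
Z = sp.Matrix([[R1 + R2, -R2],
               [-R2, R2 + R3]])

# Z(s) = det[Z] divided by the complement (cofactor) of Z11, here R2 + R3.
Z_in = sp.simplify(Z.det() / Z[1, 1])

# Direct series/parallel result: R1 in series with (R2 parallel R3).
direct = R1 + R2*R3 / (R2 + R3)

print(sp.simplify(Z_in - direct))   # 0
```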
The entries of [R], [L] and [D] cannot be set arbitrarily. For [Z] to be able to realise the impedance Z(s) then [R],[L] and [D] must all be positive-definite matrices. Even then, the realisation of Z(s) will, in general, contain ideal transformers within the network. Finding only those transforms that do not require mutual inductances or ideal transformers is a more difficult task. Similarly, if starting from the "other end" and specifying an expression for Z(s), this again cannot be done arbitrarily. To be realisable as a rational impedance, Z(s) must be positive-real. The positive-real (PR) condition is both necessary and sufficient but there may be practical reasons for rejecting some topologies.
A general impedance transform for finding equivalent rational one-ports from a given instance of [Z] is due to Wilhelm Cauer. The group of real affine transformations
$$[\mathbf{Z}'] = [\mathbf{T}]^{\mathsf{T}}\,[\mathbf{Z}]\,[\mathbf{T}]$$
where
$$[\mathbf{T}] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ T_{21} & T_{22} & \cdots & T_{2n} \\ \vdots & & \ddots & \vdots \\ T_{n1} & T_{n2} & \cdots & T_{nn} \end{bmatrix}$$
is invariant in Z(s). That is, all the transformed networks are equivalents according to the definition given here. If the Z(s) for the initial given matrix is realisable, that is, it meets the PR condition, then all the transformed networks produced by this transformation will also meet the PR condition.
== 3 and 4-terminal networks ==
When discussing 4-terminal networks, network analysis often proceeds in terms of 2-port networks, which covers a vast array of practically useful circuits. "2-port", in essence, refers to the way the network has been connected to the outside world: that the terminals have been connected in pairs to a source or load. It is possible to take exactly the same network and connect it to external circuitry in such a way that it is no longer behaving as a 2-port. This idea is demonstrated in Figure 2.
A 3-terminal network can also be used as a 2-port. To achieve this, one of the terminals is connected in common to one terminal of both ports. In other words, one terminal has been split into two terminals and the network has effectively been converted to a 4-terminal network. This topology is known as unbalanced topology and is opposed to balanced topology. Balanced topology requires, referring to Figure 3, that the impedance measured between terminals 1 and 3 is equal to the impedance measured between 2 and 4. This is the pairs of terminals not forming ports: the case where the pairs of terminals forming ports have equal impedance is referred to as symmetrical. Strictly speaking, any network that does not meet the balance condition is unbalanced, but the term is most often referring to the 3-terminal topology described above and in Figure 3. Transforming an unbalanced 2-port network into a balanced network is usually quite straightforward: all series-connected elements are divided in half with one half being relocated in what was the common branch. Transforming from balanced to unbalanced topology will often be possible with the reverse transformation but there are certain cases of certain topologies which cannot be transformed in this way. For example, see the discussion of lattice transforms below.
An example of a 3-terminal network transform that is not restricted to 2-ports is the Y-Δ transform. This is a particularly important transform for finding equivalent impedances. Its importance arises from the fact that the total impedance between two terminals cannot be determined solely by calculating series and parallel combinations except for a certain restricted class of network. In the general case additional transformations are required. The Y-Δ transform, its inverse the Δ-Y transform, and the n-terminal analogues of these two transforms (star-polygon transforms) represent the minimal additional transforms required to solve the general case. Series and parallel are, in fact, the 2-terminal versions of star and polygon topology. A common simple topology that cannot be solved by series and parallel combinations is the input impedance to a bridge network (except in the special case when the bridge is in balance). The rest of the transforms in this section are all restricted to use with 2-ports only.
=== Lattice transforms ===
Symmetric 2-port networks can be transformed into lattice networks using Bartlett's bisection theorem. The method is limited to symmetric networks but this includes many topologies commonly found in filters, attenuators and equalisers. The lattice topology is intrinsically balanced, there is no unbalanced counterpart to the lattice and it will usually require more components than the transformed network.
Reverse transformations from a lattice to an unbalanced topology are not always possible in terms of passive components. For instance, this transform:
cannot be realised with passive components because of the negative values arising in the transformed circuit. It can however be realised if mutual inductances and ideal transformers are permitted, for instance, in this circuit. Another possibility is to permit the use of active components which would enable negative impedances to be directly realised as circuit components.
It can sometimes be useful to make such a transformation, not for the purposes of actually building the transformed circuit, but rather, for the purposes of aiding understanding of how the original circuit is working. The following circuit in bridged-T topology is a modification of a mid-series m-derived filter T-section. The circuit is due to Hendrik Bode who claims that the addition of the bridging resistor of a suitable value will cancel the parasitic resistance of the shunt inductor. The action of this circuit is clear if it is transformed into T topology – in this form there is a negative resistance in the shunt branch which can be made to be exactly equal to the positive parasitic resistance of the inductor.
Any symmetrical network can be transformed into any other symmetrical network by the same method, that is, by first transforming into the intermediate lattice form (omitted for clarity from the above example transform) and from the lattice form into the required target form. As with the example, this will generally result in negative elements except in special cases.
=== Eliminating resistors ===
A theorem due to Sidney Darlington states that any PR function Z(s) can be realised as a lossless two-port terminated in a positive resistor R. That is, regardless of how many resistors feature in the matrix [Z] representing the impedance network, a transform can be found that will realise the network entirely as an LC-kind network with just one resistor across the output port (which would normally represent the load). No resistors within the network are necessary in order to realise the specified response. Consequently, it is always possible to reduce 3-element-kind 2-port networks to 2-element-kind (LC) 2-port networks provided the output port is terminated in a resistance of the required value.
=== Eliminating ideal transformers ===
An elementary transformation that can be done with ideal transformers and some other impedance element is to shift the impedance to the other side of the transformer. In all the following transforms, r is the turns ratio of the transformer.
These transforms do not just apply to single elements; entire networks can be passed through the transformer. In this manner, the transformer can be shifted around the network to a more convenient location. Darlington gives an equivalent transform that can eliminate an ideal transformer altogether. This technique requires that the transformer is next to (or capable of being moved next to) an "L" network of same-kind impedances. The transform in all variants results in the "L" network facing the opposite way, that is, topologically mirrored.
Example 3 shows the result is a Π-network rather than an L-network. The reason for this is that the shunt element has more capacitance than is required by the transform so some is still left over after applying the transform. If the excess were instead, in the element nearest the transformer, this could be dealt with by first shifting the excess to the other side of the transformer before carrying out the transform.
== Terminology ==
== References ==
== Bibliography ==
Bartlett, A. C., "An extension of a property of artificial lines", Phil. Mag., vol 4, p. 902, November 1927.
Belevitch, V., "Summary of the history of circuit theory", Proceedings of the IRE, vol 50, Iss 5, pp. 848–855, May 1962.
E. Cauer, W. Mathis, and R. Pauli, "Life and Work of Wilhelm Cauer (1900 – 1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems, Perpignan, June, 2000.
Foster, Ronald M.; Campbell, George A., "Maximum output networks for telephone substation and repeater circuits", Transactions of the American Institute of Electrical Engineers, vol.39, iss.1, pp. 230–290, January 1920.
Darlington, S., "A history of network synthesis and filter theory for circuits composed of resistors, inductors, and capacitors", IEEE Trans. Circuits and Systems, vol 31, pp. 3–13, 1984.
Farago, P. S., An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961.
Khan, Sameen Ahmed, "Farey sequences and resistor networks", Proceedings of the Indian Academy of Sciences (Mathematical Sciences), vol.122, iss.2, pp. 153–162, May 2012.
Zobel, O. J., Theory and Design of Uniform and Composite Electric Wave Filters, Bell System Technical Journal, Vol. 2 (1923), pp. 1–46.
In materials science, effective medium approximations (EMA) or effective medium theory (EMT) pertain to analytical or theoretical modeling that describes the macroscopic properties of composite materials. EMAs or EMTs are developed from averaging the multiple values of the constituents that directly make up the composite material. At the constituent level, the values of the materials vary and are inhomogeneous. Precise calculation of the many constituent values is nearly impossible. However, theories have been developed that can produce acceptable approximations which in turn describe useful parameters including the effective permittivity and permeability of the materials as a whole. In this sense, effective medium approximations are descriptions of a medium (composite material) based on the properties and the relative fractions of its components and are derived from calculations, and effective medium theory. There are two widely used formulae.
Effective permittivity and permeability are averaged dielectric and magnetic characteristics of a microinhomogeneous medium. They both were derived in quasi-static approximation when the electric field inside a mixture particle may be considered as homogeneous. So, these formulae can not describe the particle size effect. Many attempts were undertaken to improve these formulae.
== Applications ==
There are many different effective medium approximations, each of them being more or less accurate in distinct conditions. Nevertheless, they all assume that the macroscopic system is homogeneous and, typical of all mean field theories, they fail to predict the properties of a multiphase medium close to the percolation threshold due to the absence of long-range correlations or critical fluctuations in the theory.
The properties under consideration are usually the conductivity σ or the dielectric constant ε of the medium. These parameters are interchangeable in the formulas in a whole range of models due to the wide applicability of the Laplace equation. The problems that fall outside of this class are mainly in the field of elasticity and hydrodynamics, due to the higher order tensorial character of the effective medium constants.
EMAs can be discrete models, such as applied to resistor networks, or continuum theories as applied to elasticity or viscosity. However, most of the current theories have difficulty in describing percolating systems. Indeed, among the numerous effective medium approximations, only Bruggeman's symmetrical theory is able to predict a threshold. This characteristic feature of the latter theory puts it in the same category as other mean field theories of critical phenomena.
== Bruggeman's model ==
For a mixture of two materials with permittivities εm and εd with corresponding volume fractions cm and cd, D.A.G. Bruggeman proposed a formula of the following form:
$$\varepsilon_{\mathrm{eff}} = \frac{1}{4}\left[(3c_d - 1)\varepsilon_d + (3c_m - 1)\varepsilon_m \pm \sqrt{\left((3c_d - 1)\varepsilon_d + (3c_m - 1)\varepsilon_m\right)^2 + 8\,\varepsilon_m\varepsilon_d}\,\right] \qquad (3)$$
Here the positive sign before the square root must be altered to a negative sign in some cases in order to get the correct imaginary part of effective complex permittivity which is related with electromagnetic wave attenuation. The formula is symmetric with respect to swapping the 'd' and 'm' roles. This formula is based on the equality
where {\displaystyle \Delta \Phi } is the jump of electric displacement flux all over the integration surface, {\displaystyle E_{n}(\mathbf {r} )} is the component of microscopic electric field normal to the integration surface, {\displaystyle \varepsilon _{r}(\mathbf {r} )} is the local relative complex permittivity which takes the value {\displaystyle \varepsilon _{m}} inside the picked metal particle, the value {\displaystyle \varepsilon _{d}} inside the picked dielectric particle and the value {\displaystyle \varepsilon _{\mathrm {eff} }} outside the picked particle, and {\displaystyle E_{0}} is the normal component of the macroscopic electric field. Formula (4) comes out of Maxwell's equality
{\displaystyle \operatorname {div} (\varepsilon _{r}\mathbf {E} )=0}. Thus only one picked particle is considered in Bruggeman's approach. The interaction with all the other particles is taken into account only in a mean field approximation described by {\displaystyle \varepsilon _{\mathrm {eff} }}. Formula (3) gives a reasonable resonant curve for plasmon excitations in metal nanoparticles if their size is 10 nm or smaller. However, it is unable to describe the size dependence for the resonant frequency of plasmon excitations that are observed in experiments.
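The two-phase result can be evaluated numerically. The following is a minimal Python sketch, assuming the symmetric two-component Bruggeman condition {\displaystyle c_{m}{\frac {\varepsilon _{m}-\varepsilon _{\mathrm {eff} }}{\varepsilon _{m}+2\varepsilon _{\mathrm {eff} }}}+c_{d}{\frac {\varepsilon _{d}-\varepsilon _{\mathrm {eff} }}{\varepsilon _{d}+2\varepsilon _{\mathrm {eff} }}}=0} for spherical particles; the branch of the square root is chosen so that the imaginary part of the result is non-negative, consistent with the sign discussion above. Function names and example values are illustrative.

```python
import cmath

def bruggeman_two_phase(eps_m, eps_d, c_m):
    """Effective permittivity of a two-phase mixture of spherical grains from
    the symmetric Bruggeman condition, which reduces to the quadratic
    2*e**2 - H*e - eps_m*eps_d = 0 with
    H = (3*c_m - 1)*eps_m + (3*c_d - 1)*eps_d."""
    c_d = 1.0 - c_m
    H = (3.0 * c_m - 1.0) * eps_m + (3.0 * c_d - 1.0) * eps_d
    s = cmath.sqrt(H * H + 8.0 * eps_m * eps_d)
    roots = [(H + s) / 4.0, (H - s) / 4.0]
    # choose the branch with the larger (non-negative) imaginary part;
    # for lossless inputs this falls back to the positive real root
    return max(roots, key=lambda e: (e.imag, e.real))

# dilute example: 1% metal-like inclusions (eps = -10 + 1j) in a dielectric host
print(bruggeman_two_phase(eps_m=-10 + 1j, eps_d=2.25, c_m=0.01))
```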
=== Formulas ===
Without any loss of generality, we shall consider the study of the effective conductivity (which can be either dc or ac) for a system made up of spherical multicomponent inclusions with different arbitrary conductivities. Then the Bruggeman formula takes the form:
==== Circular and spherical inclusions ====
In a system of Euclidean spatial dimension {\displaystyle n} that has an arbitrary number of components, the sum is made over all the constituents. {\displaystyle \delta _{i}} and {\displaystyle \sigma _{i}} are respectively the fraction and the conductivity of each component, and {\displaystyle \sigma _{e}} is the effective conductivity of the medium. (The sum over the {\displaystyle \delta _{i}}'s is unity.)
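For real, positive conductivities the effective value can be found numerically; the sketch below assumes the standard Bruggeman condition for spherical inclusions, {\displaystyle \sum _{i}\delta _{i}\,{\frac {\sigma _{i}-\sigma _{e}}{\sigma _{i}+(n-1)\sigma _{e}}}=0}, whose left-hand side is strictly decreasing in {\displaystyle \sigma _{e}} and changes sign between the smallest and largest constituent conductivities, so a bracketed root search suffices (names and values are illustrative).

```python
from scipy.optimize import brentq

def bruggeman_multicomponent(sigmas, fractions, n=3):
    """Effective conductivity sigma_e solving
    sum_i delta_i * (sigma_i - sigma_e) / (sigma_i + (n-1)*sigma_e) = 0
    for real, positive constituent conductivities (n = spatial dimension)."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"

    def residual(sigma_e):
        return sum(d * (s - sigma_e) / (s + (n - 1) * sigma_e)
                   for s, d in zip(sigmas, fractions))

    lo, hi = min(sigmas), max(sigmas)
    if lo == hi:                      # trivial homogeneous case
        return lo
    return brentq(residual, lo, hi)   # residual is monotonic on [lo, hi]

# two-phase example in 3D: 30% good conductor in a poor conductor
print(bruggeman_multicomponent([1.0, 1e-3], [0.3, 0.7]))
```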
==== Elliptical and ellipsoidal inclusions ====
This is a generalization of Eq. (1) to a biphasic system with ellipsoidal inclusions of conductivity {\displaystyle \sigma } into a matrix of conductivity {\displaystyle \sigma _{m}}. The fraction of inclusions is {\displaystyle \delta } and the system is {\displaystyle n} dimensional. For randomly oriented inclusions,
where the {\displaystyle L_{j}}'s denote the appropriate doublet/triplet of depolarization factors, which is governed by the ratios between the axes of the ellipse/ellipsoid. For example: in the case of a circle ({\displaystyle L_{1}=1/2}, {\displaystyle L_{2}=1/2}) and in the case of a sphere ({\displaystyle L_{1}=1/3}, {\displaystyle L_{2}=1/3}, {\displaystyle L_{3}=1/3}). (The sum over the {\displaystyle L_{j}}'s is unity.)
The most general case to which the Bruggeman approach has been applied involves bianisotropic ellipsoidal inclusions.
=== Derivation ===
The figure illustrates a two-component medium. Consider the cross-hatched volume of conductivity {\displaystyle \sigma _{1}}, take it as a sphere of volume {\displaystyle V} and assume it is embedded in a uniform medium with an effective conductivity {\displaystyle \sigma _{e}}. If the electric field far from the inclusion is {\displaystyle {\overline {E_{0}}}} then elementary considerations lead to a dipole moment associated with the volume
This polarization produces a deviation from {\displaystyle {\overline {E_{0}}}}. If the average deviation is to vanish, the total polarization summed over the two types of inclusion must vanish. Thus
where {\displaystyle \delta _{1}} and {\displaystyle \delta _{2}} are respectively the volume fraction of material 1 and 2. This can be easily extended to a system of dimension {\displaystyle n} that has an arbitrary number of components. All cases can be combined to yield Eq. (1).
Eq. (1) can also be obtained by requiring the deviation in current to vanish.
It has been derived here from the assumption that the inclusions are spherical and it can be modified for shapes with other depolarization factors; leading to Eq. (2).
A more general derivation applicable to bianisotropic materials is also available.
=== Modeling of percolating systems ===
The main approximation is that all the domains are located in an equivalent mean field.
Unfortunately, it is not the case close to the percolation threshold where the system is governed by the largest cluster of conductors, which is a fractal, and long-range correlations that are totally absent from Bruggeman's simple formula.
The threshold values are in general not correctly predicted. It is 33% in the EMA, in three dimensions, far from the 16% expected from percolation theory and observed in experiments. However, in two dimensions, the EMA gives a threshold of 50% and has been proven to model percolation relatively well.
== Maxwell Garnett equation ==
In the Maxwell Garnett approximation, the effective medium consists of a matrix medium with {\displaystyle \varepsilon _{m}} and inclusions with {\displaystyle \varepsilon _{i}}. Maxwell Garnett was the son of physicist William Garnett, and was named after Garnett's friend, James Clerk Maxwell. He proposed his formula to explain colored pictures that are observed in glasses doped with metal nanoparticles. His formula has the form
where {\displaystyle \varepsilon _{\text{eff}}} is the effective relative complex permittivity of the mixture, {\displaystyle \varepsilon _{d}} is the relative complex permittivity of the background medium containing small spherical inclusions of relative permittivity {\displaystyle \varepsilon _{m}} with volume fraction {\displaystyle c_{m}\ll 1}. This formula is based on the equality
where {\displaystyle \varepsilon _{0}} is the absolute permittivity of free space and {\displaystyle p_{m}} is the electric dipole moment of a single inclusion induced by the external electric field E. However, this equality is good only for a homogeneous medium and {\displaystyle \varepsilon _{d}=1}. Moreover, formula (1) ignores the interaction between single inclusions. Because of these circumstances, formula (1) gives too narrow and too high a resonant curve for plasmon excitations in the metal nanoparticles of the mixture.
=== Formula ===
The Maxwell Garnett equation reads:
where {\displaystyle \varepsilon _{\mathrm {eff} }} is the effective dielectric constant of the medium, {\displaystyle \varepsilon _{i}} of the inclusions, and {\displaystyle \varepsilon _{m}} of the matrix; {\displaystyle \delta _{i}} is the volume fraction of the inclusions.
The Maxwell Garnett equation is solved by:
so long as the denominator does not vanish. A simple numerical calculator using this formula is sketched below.
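This is a minimal Python sketch, assuming the solved form {\displaystyle \varepsilon _{\mathrm {eff} }=\varepsilon _{m}\,{\frac {2\delta _{i}(\varepsilon _{i}-\varepsilon _{m})+\varepsilon _{i}+2\varepsilon _{m}}{2\varepsilon _{m}+\varepsilon _{i}-\delta _{i}(\varepsilon _{i}-\varepsilon _{m})}}} obtained by solving the Maxwell Garnett relation for the effective permittivity; names and example values are illustrative.

```python
def maxwell_garnett(eps_i, eps_m, delta_i):
    """Effective permittivity of dilute spherical inclusions (eps_i, volume
    fraction delta_i) in a matrix (eps_m), from the Maxwell Garnett relation
    (eps_eff - eps_m)/(eps_eff + 2*eps_m) = delta_i*(eps_i - eps_m)/(eps_i + 2*eps_m)."""
    num = 2.0 * delta_i * (eps_i - eps_m) + eps_i + 2.0 * eps_m
    den = 2.0 * eps_m + eps_i - delta_i * (eps_i - eps_m)
    if den == 0:
        raise ZeroDivisionError("Maxwell Garnett denominator vanishes")
    return eps_m * num / den

# example: 5% silver-like inclusions (eps_i = -15 + 1j) in glass (eps_m = 2.25)
print(maxwell_garnett(-15 + 1j, 2.25, 0.05))
```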
=== Derivation ===
For the derivation of the Maxwell Garnett equation we start with an array of polarizable particles. By using the Lorentz local field concept, we obtain the Clausius-Mossotti relation:
{\displaystyle {\frac {\varepsilon -1}{\varepsilon +2}}={\frac {4\pi }{3}}\sum _{j}N_{j}\alpha _{j}}
where {\displaystyle N_{j}} is the number of particles per unit volume. By using elementary electrostatics, we get for a spherical inclusion with dielectric constant {\displaystyle \varepsilon _{i}} and a radius {\displaystyle a} a polarisability {\displaystyle \alpha }:
{\displaystyle \alpha =\left({\frac {\varepsilon _{i}-1}{\varepsilon _{i}+2}}\right)a^{3}}
If we combine {\displaystyle \alpha } with the Clausius-Mossotti equation, we get:
{\displaystyle \left({\frac {\varepsilon _{\mathrm {eff} }-1}{\varepsilon _{\mathrm {eff} }+2}}\right)=\delta _{i}\left({\frac {\varepsilon _{i}-1}{\varepsilon _{i}+2}}\right)}
where {\displaystyle \varepsilon _{\mathrm {eff} }} is the effective dielectric constant of the medium, {\displaystyle \varepsilon _{i}} of the inclusions; {\displaystyle \delta _{i}} is the volume fraction of the inclusions.
As the model of Maxwell Garnett is a composition of a matrix medium with inclusions we enhance the equation:
=== Validity ===
In general terms, the Maxwell Garnett EMA is expected to be valid at low volume fractions {\displaystyle \delta _{i}}, since it is assumed that the domains are spatially separated and electrostatic interaction between the chosen inclusions and all other neighbouring inclusions is neglected. The Maxwell Garnett formula, in contrast to the Bruggeman formula, ceases to be correct when the inclusions become resonant. In the case of plasmon resonance, the Maxwell Garnett formula is correct only at volume fractions of the inclusions {\displaystyle \delta _{i}<10^{-5}}. The applicability of the effective medium approximation for dielectric multilayers and metal-dielectric multilayers has been studied, showing that there are certain cases where the effective medium approximation does not hold and one needs to be cautious in application of the theory.
== Generalization of the Maxwell Garnett Equation to describe the nanoparticle size distribution ==
The Maxwell Garnett equation describes the optical properties of nanocomposites which consist of a collection of perfectly spherical nanoparticles. All these nanoparticles must have the same size. However, due to the confinement effect, the optical properties can be influenced by the nanoparticle size distribution. As shown by Battie et al., the Maxwell Garnett equation can be generalized to take this distribution into account.
{\displaystyle {\frac {(\varepsilon _{\text{eff}}-\varepsilon _{m})}{\varepsilon _{\text{eff}}-2\varepsilon _{m}}}={\frac {3i\lambda ^{3}}{16\pi ^{2}\varepsilon _{m}^{1.5}}}{\frac {f}{R_{m}^{3}}}\int P(R)a_{1}(R)dR}
{\displaystyle R} and {\displaystyle P(R)} are the nanoparticle radius and size distribution, respectively. {\displaystyle R_{m}} and {\displaystyle f} are the mean radius and the volume fraction of the nanoparticles, respectively. {\displaystyle a_{1}} is the first electric Mie coefficient.
This equation reveals that the classical Maxwell Garnett equation gives a false estimation of the volume fraction of nanoparticles when the size distribution cannot be neglected.
=== Generalization to include shape distribution of nanoparticles ===
The Maxwell Garnett equation only describes the optical properties of a collection of perfectly spherical nanoparticles. However, the optical properties of nanocomposites are sensitive to the nanoparticle shape distribution. To overcome this limit, Y. Battie et al. have developed the shape-distributed effective medium theory (SDEMT). This effective medium theory enables the calculation of the effective dielectric function of a nanocomposite consisting of a collection of ellipsoidal nanoparticles distributed in shape.
{\displaystyle \varepsilon _{\text{eff}}={\frac {(1-f)\varepsilon _{m}+f\beta \varepsilon _{i}}{1-f+f\beta }}}
with
{\displaystyle \beta ={\frac {1}{3}}\iint P(L_{1},L_{2})\sum _{i\mathop {=} 1}^{3}{\frac {\varepsilon _{m}}{\varepsilon _{m}+L_{i}(\varepsilon _{i}-\varepsilon _{m})}}dL_{1}dL_{2}}
The depolarization factors ({\displaystyle L_{1},L_{2},L_{3}}) only depend on the shape of nanoparticles. {\displaystyle P(L_{1},L_{2})} is the distribution of depolarization factors. {\displaystyle f} is the volume fraction of the nanoparticles.
The SDEMT theory was used to extract the shape distribution of nanoparticles from absorption or ellipsometric spectra.
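The SDEMT average can be evaluated numerically when the continuous distribution {\displaystyle P(L_{1},L_{2})} is approximated by a discrete set of weighted shapes; the following Python sketch makes that simplification (the shape list, weights and permittivities are illustrative assumptions, not values from any particular experiment).

```python
def sdemt_eps_eff(eps_i, eps_m, f, shapes, weights):
    """Shape-distributed effective medium theory (SDEMT), with the continuous
    distribution P(L1, L2) replaced by discrete shapes.
    shapes  : list of (L1, L2, L3) depolarization triplets, each summing to 1
    weights : corresponding probabilities, summing to 1"""
    beta = 0.0
    for (L1, L2, L3), w in zip(shapes, weights):
        beta += w * sum(eps_m / (eps_m + L * (eps_i - eps_m))
                        for L in (L1, L2, L3)) / 3.0
    return ((1.0 - f) * eps_m + f * beta * eps_i) / (1.0 - f + f * beta)

# example: 2% metal-like particles, half spheres and half elongated shapes
shapes = [(1 / 3, 1 / 3, 1 / 3), (0.2, 0.4, 0.4)]
weights = [0.5, 0.5]
print(sdemt_eps_eff(eps_i=-10 + 1j, eps_m=2.25, f=0.02,
                    shapes=shapes, weights=weights))
```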
== Formula describing size effect ==
A new formula describing the size effect was proposed. This formula has the form
{\displaystyle \varepsilon _{\text{eff}}={\frac {1}{4}}\left(H_{\varepsilon }+i{\sqrt {-H_{\varepsilon }^{2}-8\varepsilon _{m}\varepsilon _{d}J(k_{m}a)}}\right),}
{\displaystyle J(x)=2{\frac {1-x\cot(x)}{x^{2}+x\cot(x)-1}},}
where a is the nanoparticle radius and {\displaystyle k_{m}={\sqrt {\varepsilon _{m}\mu _{m}}}\omega /c} is the wave number. It is supposed here that the time dependence of the electromagnetic field is given by the factor {\displaystyle \mathrm {exp} (-i\omega t).}
In this paper Bruggeman's approach was used, but the electromagnetic field for the electric-dipole oscillation mode inside the picked particle was computed without applying the quasi-static approximation. Thus the function {\displaystyle J(k_{m}a)} is due to the field nonuniformity inside the picked particle. In the quasi-static region ({\displaystyle k_{m}a\ll 1}, i.e. {\displaystyle a\leq \mathrm {10\,nm} } for Ag) this function becomes constant, {\displaystyle J(k_{m}a)=1}, and formula (5) becomes identical with Bruggeman's formula.
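The behaviour of {\displaystyle J(k_{m}a)} can be checked directly. The short Python sketch below evaluates {\displaystyle J(x)} for complex arguments and confirms that it approaches 1 in the quasi-static limit, the regime in which formula (5) reduces to Bruggeman's result (the sample arguments are illustrative).

```python
import cmath

def J(x):
    """J(x) = 2*(1 - x*cot(x)) / (x**2 + x*cot(x) - 1), for complex x != 0."""
    cot = cmath.cos(x) / cmath.sin(x)
    return 2.0 * (1.0 - x * cot) / (x * x + x * cot - 1.0)

for x in (1e-3, 0.1, 0.5 + 0.05j, 1.0):
    print(x, J(x))   # J -> 1 as |x| -> 0 (quasi-static region)
```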
== Effective permeability formula ==
The formula for the effective permeability of mixtures has the form
{\displaystyle H_{\mu }=(2-3c_{m})\mu _{d}-(1-3c_{m})\mu _{m}J(k_{m}a).}
Here {\displaystyle \mu _{\text{eff}}} is the effective relative complex permeability of the mixture, {\displaystyle \mu _{d}} is the relative complex permeability of the background medium containing small spherical inclusions of relative permeability {\displaystyle \mu _{m}} with volume fraction {\displaystyle c_{m}\ll 1}. This formula was derived in the dipole approximation. The magnetic octupole mode and all other magnetic oscillation modes of odd orders were neglected here. When {\displaystyle \mu _{m}=\mu _{d}} and {\displaystyle k_{m}a\ll 1} this formula has a simple form
== Effective medium theory for resistor networks ==
For a network consisting of a high density of random resistors, an exact solution for each individual element may be impractical or impossible. In such case, a random resistor network can be considered as a two-dimensional graph and the effective resistance can be modelled in terms of graph measures and geometrical properties of networks.
Assuming the edge length is much less than the electrode spacing and the edges are uniformly distributed, the potential can be considered to drop uniformly from one electrode to another.
Sheet resistance of such a random network ({\displaystyle R_{sn}}) can be written in terms of edge (wire) density ({\displaystyle N_{E}}), resistivity ({\displaystyle \rho }), width ({\displaystyle w}) and thickness ({\displaystyle t}) of edges (wires) as:
== See also ==
Constitutive equation
Percolation threshold
== References ==
== Further reading ==
Lakhtakia, A., ed. (1996). Selected Papers on Linear Optical Composite Materials [Milestone Vol. 120]. Bellingham, WA, USA: SPIE Press. ISBN 978-0-8194-2152-4.
Tuck, Choy (1999). Effective Medium Theory (1st ed.). Oxford: Oxford University Press. ISBN 978-0-19-851892-1.
Lakhtakia (Ed.), A. (2000). Electromagnetic Fields in Unconventional Materials and Structures. New York: Wiley-Interscience. ISBN 978-0-471-36356-9.
Weiglhofer (Ed.); Lakhtakia (Ed.), A. (2003). Introduction to Complex Mediums for Optics and Electromagnetics. Bellingham, WA, USA: SPIE Press. ISBN 978-0-8194-4947-4.
Mackay, T. G.; Lakhtakia, A. (2010). Electromagnetic Anisotropy and Bianisotropy: A Field Guide (1st ed.). Singapore: World Scientific. ISBN 978-981-4289-61-0. | Wikipedia/Effective_medium_approximations |
In electrical engineering, the distributed-element model or transmission-line model of electrical circuits assumes that the attributes of the circuit (resistance, capacitance, and inductance) are distributed continuously throughout the material of the circuit. This is in contrast to the more common lumped-element model, which assumes that these values are lumped into electrical components that are joined by perfectly conducting wires. In the distributed-element model, each circuit element is infinitesimally small, and the wires connecting elements are not assumed to be perfect conductors; that is, they have impedance. Unlike the lumped-element model, it assumes nonuniform current along each branch and nonuniform voltage along each wire.
The distributed model is used where the wavelength becomes comparable to the physical dimensions of the circuit, making the lumped model inaccurate. This occurs at high frequencies, where the wavelength is very short, or on low-frequency, but very long, transmission lines such as overhead power lines.
== Applications ==
The distributed-element model is more accurate but more complex than the lumped-element model. The use of infinitesimals will often require the application of calculus, whereas circuits analysed by the lumped-element model can be solved with linear algebra. The distributed model is consequently usually only applied when accuracy calls for its use. The location of this point is dependent on the accuracy required in a specific application, but essentially, it needs to be used in circuits where the wavelengths of the signals have become comparable to the physical dimensions of the components. An often-quoted engineering rule of thumb (not to be taken too literally because there are many exceptions) is that parts larger than one-tenth of a wavelength will usually need to be analysed as distributed elements.
=== Transmission lines ===
Transmission lines are a common example of the use of the distributed model. Its use is dictated because the length of the line will usually be many wavelengths of the circuit's operating frequency. Even for the low frequencies used on power transmission lines, one-tenth of a wavelength is still only about 500 kilometres at 60 Hz. Transmission lines are usually represented in terms of the primary line constants as shown in figure 1. From this model, the behaviour of the circuit is described by the secondary line constants, which can be calculated from the primary ones.
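For a uniform line, the secondary constants follow from the primary constants R, L, G, C (per unit length) through the standard telegrapher's-equation results {\displaystyle Z_{0}={\sqrt {(R+j\omega L)/(G+j\omega C)}}} and {\displaystyle \gamma ={\sqrt {(R+j\omega L)(G+j\omega C)}}}. A minimal Python sketch is given below; the numeric values are illustrative only.

```python
import cmath
import math

def secondary_constants(R, L, G, C, f):
    """Characteristic impedance Z0 and propagation constant gamma = alpha + j*beta
    of a uniform line from its primary constants (per unit length) at frequency f."""
    w = 2 * math.pi * f
    zs = R + 1j * w * L          # series impedance per unit length
    yp = G + 1j * w * C          # shunt admittance per unit length
    Z0 = cmath.sqrt(zs / yp)
    gamma = cmath.sqrt(zs * yp)
    return Z0, gamma

# illustrative cable-like values per kilometre, evaluated at 1 kHz
Z0, gamma = secondary_constants(R=172.0, L=0.6e-3, G=0.1e-6, C=52e-9, f=1e3)
print(Z0, gamma)
```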
The primary line constants are normally taken to be constant with position along the line, leading to a particularly simple analysis and model. However, this is not always the case: variations in physical dimensions along the line will cause variations in the primary constants, that is, they now have to be described as functions of distance. Most often, such a situation represents an unwanted deviation from the ideal, such as a manufacturing error; however, there are a number of components where such longitudinal variations are deliberately introduced as part of the function of the component. A well-known example of this is the horn antenna.
Where reflections are present on the line, quite short lengths of line can exhibit effects that are simply not predicted by the lumped-element model. A quarter wavelength line, for instance, will transform the terminating impedance into its dual. This can be a wildly different impedance.
=== High-frequency transistors ===
Another example of the use of distributed elements is in the modelling of the base region of a bipolar junction transistor at high frequencies. The analysis of charge carriers crossing the base region is inaccurate when the base region is simply treated as a lumped element. A more successful model is a simplified transmission line model, which includes the base material's distributed bulk resistance and the substrate's distributed capacitance. This model is represented in figure 2.
=== Resistivity measurements ===
In many situations, it is desired to measure resistivity of bulk material by applying an electrode array at the surface. Amongst the fields that use this technique are geophysics (because it avoids having to dig into the substrate) and the semiconductor industry (for the similar reason that it is non-intrusive) for testing bulk silicon wafers. The basic arrangement is shown in figure 3, although normally, more electrodes would be used. To form a relationship between the voltage and current measured on the one hand, and the material's resistivity on the other, it is necessary to apply the distributed-element model by considering the material to be an array of infinitesimal resistor elements. Unlike the transmission line example, the need to apply the distributed-element model arises from the geometry of the setup, and not from any wave propagation considerations.
The model used here needs to be truly 3-dimensional (transmission line models are usually described by elements of a one-dimensional line). It is also possible that the resistances of the elements will be functions of the coordinates, indeed, in the geophysical application, it may well be that regions of changed resistivity are the very things that it is desired to detect.
=== Inductor windings ===
Another example where a simple one-dimensional model will not suffice is the windings of an inductor. Coils of wire have capacitance between adjacent turns (and more remote turns as well, but the effect progressively diminishes). For a single-layer solenoid, the distributed capacitance will mostly lie between adjacent turns, as shown in figure 4, between turns T1 and T2, but for multiple-layer windings and more accurate models distributed capacitance to other turns must also be considered. This model is fairly difficult to deal with in simple calculations and, for the most part, is avoided. The most common approach is to roll up all the distributed capacitance into one lumped element in parallel with the inductance and resistance of the coil. This lumped model works successfully at low frequencies but falls apart at high frequencies where the usual practice is to simply measure (or specify) an overall Q for the inductor without associating a specific equivalent circuit.
== See also ==
Telegrapher's equations
Distributed-element circuit
Distributed-element filter
Warren P. Mason
== References ==
== Bibliography ==
Kenneth L. Kaiser, Electromagnetic compatibility handbook, CRC Press, 2004 ISBN 0-8493-2087-9.
Karl Lark-Horovitz, Vivian Annabelle Johnson, Methods of experimental physics: Solid state physics, Academic Press, 1959 ISBN 0-12-475946-7.
Robert B. Northrop, Introduction to instrumentation and measurements, CRC Press, 1997 ISBN 0-8493-7898-2.
P. Vallabh Sharma, Environmental and engineering geophysics, Cambridge University Press, 1997 ISBN 0-521-57632-6. | Wikipedia/Distributed-element_model |
Small-signal modeling is a common analysis technique in electronics engineering used to approximate the behavior of electronic circuits containing nonlinear devices, such as diodes, transistors, vacuum tubes, and integrated circuits, with linear equations. It is applicable to electronic circuits in which the AC signals (i.e., the time-varying currents and voltages in the circuit) are small relative to the DC bias currents and voltages. A small-signal model is an AC equivalent circuit in which the nonlinear circuit elements are replaced by linear elements whose values are given by the first-order (linear) approximation of their characteristic curve near the bias point.
== Overview ==
Many of the electrical components used in simple electric circuits, such as resistors, inductors, and capacitors are linear. Circuits made with these components, called linear circuits, are governed by linear differential equations, and can be solved easily with powerful mathematical frequency domain methods such as the Laplace transform.
In contrast, many of the components that make up electronic circuits, such as diodes, transistors, integrated circuits, and vacuum tubes are nonlinear; that is the current through them is not proportional to the voltage, and the output of two-port devices like transistors is not proportional to their input. The relationship between current and voltage in them is given by a curved line on a graph, their characteristic curve (I-V curve). In general these circuits don't have simple mathematical solutions. To calculate the current and voltage in them generally requires either graphical methods or simulation on computers using electronic circuit simulation programs like SPICE.
However in some electronic circuits such as radio receivers, telecommunications, sensors, instrumentation and signal processing circuits, the AC signals are "small" compared to the DC voltages and currents in the circuit. In these, perturbation theory can be used to derive an approximate AC equivalent circuit which is linear, allowing the AC behavior of the circuit to be calculated easily. In these circuits a steady DC current or voltage from the power supply, called a bias, is applied to each nonlinear component such as a transistor and vacuum tube to set its operating point, and the time-varying AC current or voltage which represents the signal to be processed is added to it. The point on the graph of the characteristic curve representing the bias current and voltage is called the quiescent point (Q point). In the above circuits the AC signal is small compared to the bias, representing a small perturbation of the DC voltage or current in the circuit about the Q point. If the characteristic curve of the device is sufficiently flat over the region occupied by the signal, using a Taylor series expansion the nonlinear function can be approximated near the bias point by its first order partial derivative (this is equivalent to approximating the characteristic curve by a straight line tangent to it at the bias point). These partial derivatives represent the incremental capacitance, resistance, inductance and gain seen by the signal, and can be used to create a linear equivalent circuit giving the response of the real circuit to a small AC signal. This is called the "small-signal model".
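The linearisation step can be illustrated numerically: the characteristic curve is replaced by its tangent at the Q point, and the small-signal response is read off the slope. The Python sketch below uses a generic square-law characteristic purely for illustration; it does not model any particular device.

```python
def linearize(i_of_v, V_Q, dv=1e-6):
    """Return the bias current I_Q and small-signal conductance g (the slope
    of the characteristic at the Q point), using a central difference."""
    I_Q = i_of_v(V_Q)
    g = (i_of_v(V_Q + dv) - i_of_v(V_Q - dv)) / (2 * dv)
    return I_Q, g

# generic square-law characteristic i = k*v**2 (illustrative only)
i_of_v = lambda v: 2e-3 * v ** 2

I_Q, g = linearize(i_of_v, V_Q=1.5)
v_small = 0.01                                 # small AC excursion about the Q point
i_total_approx = I_Q + g * v_small             # tangent-line (small-signal) estimate
i_total_exact = i_of_v(1.5 + v_small)
print(I_Q, g, i_total_approx, i_total_exact)   # the two totals agree closely
```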
The small signal model is dependent on the DC bias currents and voltages in the circuit (the Q point). Changing the bias moves the operating point up or down on the curves, thus changing the equivalent small-signal AC resistance, gain, etc. seen by the signal.
Any nonlinear component whose characteristics are given by a continuous, single-valued, smooth (differentiable) curve can be approximated by a linear small-signal model. Small-signal models exist for electron tubes, diodes, field-effect transistors (FET) and bipolar transistors, notably the hybrid-pi model and various two-port networks. Manufacturers often list the small-signal characteristics of such components at "typical" bias values on their data sheets.
== Variable notation ==
DC quantities (also known as bias), constant values with respect to time, are denoted by uppercase letters with uppercase subscripts. For example, the DC input bias voltage of a transistor would be denoted {\displaystyle V_{\mathrm {IN} }}; one might say, for instance, that {\displaystyle V_{\mathrm {IN} }=5}.
Small-signal quantities, which have zero average value, are denoted using lowercase letters with lowercase subscripts. Small signals typically used for modeling are sinusoidal, or "AC", signals. For example, the input signal of a transistor would be denoted as {\displaystyle v_{\mathrm {in} }}; one might say, for instance, that {\displaystyle v_{\mathrm {in} }(t)=0.2\cos(2\pi t)}.
Total quantities, combining both small-signal and large-signal quantities, are denoted using lowercase letters and uppercase subscripts. For example, the total input voltage to the aforementioned transistor would be denoted as {\displaystyle v_{\mathrm {IN} }(t)}. The small-signal model of the total signal is then the sum of the DC component and the small-signal component of the total signal, or in algebraic notation, {\displaystyle v_{\mathrm {IN} }(t)=V_{\mathrm {IN} }+v_{\mathrm {in} }(t)}. For example, {\displaystyle v_{\mathrm {IN} }(t)=5+0.2\cos(2\pi t)}.
== PN junction diodes ==
The (large-signal) Shockley equation for a diode can be linearized about the bias point or quiescent point (sometimes called Q-point) to find the small-signal conductance, capacitance and resistance of the diode. This procedure is described in more detail under diode modelling#Small-signal_modelling, which provides an example of the linearization procedure followed in small-signal models of semiconductor devices.
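As a concrete illustration of that linearization, the sketch below differentiates the Shockley equation {\displaystyle I=I_{S}\left(e^{V/(nV_{T})}-1\right)} at the Q point, giving the small-signal conductance {\displaystyle g_{d}=\mathrm {d} I/\mathrm {d} V=(I_{Q}+I_{S})/(nV_{T})} and dynamic resistance {\displaystyle r_{d}=1/g_{d}}; the parameter values are illustrative.

```python
import math

def diode_small_signal(I_S, n, V_T, V_Q):
    """Small-signal conductance and resistance of a PN diode biased at V_Q,
    obtained by differentiating the Shockley equation at the Q point."""
    I_Q = I_S * (math.exp(V_Q / (n * V_T)) - 1.0)
    g_d = (I_Q + I_S) / (n * V_T)     # dI/dV evaluated at the bias point
    return I_Q, g_d, 1.0 / g_d

# illustrative silicon-like diode at room temperature (V_T about 25.85 mV)
I_Q, g_d, r_d = diode_small_signal(I_S=1e-12, n=1.0, V_T=0.02585, V_Q=0.65)
print(I_Q, g_d, r_d)
```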
== Differences between small signal and large signal ==
A large signal is any signal having enough magnitude to reveal a circuit's nonlinear behavior. The signal may be a DC signal or an AC signal or indeed, any signal. How large a signal needs to be (in magnitude) before it is considered a large signal depends on the circuit and context in which the signal is being used. In some highly nonlinear circuits practically all signals need to be considered as large signals.
A small signal is part of a model of a large signal. To avoid confusion, note that there is such a thing as a small signal (a part of a model) and a small-signal model (a model of a large signal).
A small signal model consists of a small signal (having zero average value, for example a sinusoid, but any AC signal could be used) superimposed on a bias signal (or superimposed on a DC constant signal) such that the sum of the small signal plus the bias signal gives the total signal which is exactly equal to the original (large) signal to be modeled. This resolution of a signal into two components allows the technique of superposition to be used to simplify further analysis. (If superposition applies in the context.)
In analyzing the small signal's contribution to the circuit, the nonlinear behavior is handled through the DC (bias) components, which are analyzed separately with the nonlinearity taken into account.
== See also ==
Diode modelling
Hybrid-pi model
Early effect
SPICE – Simulation Program with Integrated Circuit Emphasis, a general purpose analog electronic circuit simulator capable of solving small signal models.
== References == | Wikipedia/Small-signal_model |
In circuit design, the Y-Δ transform, also written wye-delta and also known by many other names, is a mathematical technique to simplify the analysis of an electrical network. The name derives from the shapes of the circuit diagrams, which look respectively like the letter Y and the Greek capital letter Δ. This circuit transformation theory was published by Arthur Edwin Kennelly in 1899. It is widely used in analysis of three-phase electric power circuits.
The Y-Δ transform can be considered a special case of the star-mesh transform for three resistors. In mathematics, the Y-Δ transform plays an important role in theory of circular planar graphs.
== Names ==
The Y-Δ transform is known by a variety of other names, mostly based upon the two shapes involved, listed in either order. The Y, spelled out as wye, can also be called T or star; the Δ, spelled out as delta, can also be called triangle, Π (spelled out as pi), or mesh. Thus, common names for the transformation include wye-delta or delta-wye, star-delta, star-mesh, or T-Π.
== Basic Y-Δ transformation ==
The transformation is used to establish equivalence for networks with three terminals. Where three elements terminate at a common node and none are sources, the node is eliminated by transforming the impedances. For equivalence, the impedance between any pair of terminals must be the same for both networks. The equations given here are valid for complex as well as real impedances. Complex impedance is a quantity measured in ohms which represents resistance as positive real numbers in the usual manner, and also represents reactance as positive and negative imaginary values.
=== Equations for the transformation from Δ to Y ===
The general idea is to compute the impedance {\displaystyle R_{\text{Y}}} at a terminal node of the Y circuit with impedances {\displaystyle R'}, {\displaystyle R''} to adjacent nodes in the Δ circuit by
{\displaystyle R_{\text{Y}}={\frac {R'R''}{\sum R_{\Delta }}}}
where {\displaystyle R_{\Delta }} are all impedances in the Δ circuit. This yields the specific formula
{\displaystyle {\begin{aligned}R_{1}&={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}}\\[3pt]R_{2}&={\frac {R_{\text{a}}R_{\text{c}}}{R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}}\\[3pt]R_{3}&={\frac {R_{\text{a}}R_{\text{b}}}{R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}}\end{aligned}}}
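These relations translate directly into code. The following Python sketch implements the Δ→Y conversion; it works for real resistances or complex impedances, and the names and example values are illustrative.

```python
def delta_to_wye(Ra, Rb, Rc):
    """Convert delta impedances (Ra, Rb, Rc) to the equivalent wye impedances
    (R1, R2, R3), where Ra, Rb, Rc are the delta branches opposite wye nodes
    1, 2, 3 respectively."""
    s = Ra + Rb + Rc
    R1 = Rb * Rc / s
    R2 = Ra * Rc / s
    R3 = Ra * Rb / s
    return R1, R2, R3

print(delta_to_wye(30.0, 30.0, 30.0))   # a balanced 30 ohm delta gives a 10 ohm wye
```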
=== Equations for the transformation from Y to Δ ===
The general idea is to compute an impedance {\displaystyle R_{\Delta }} in the Δ circuit by
{\displaystyle R_{\Delta }={\frac {R_{P}}{R_{\text{opposite}}}}}
where {\displaystyle R_{P}=R_{1}R_{2}+R_{2}R_{3}+R_{3}R_{1}} is the sum of the products of all pairs of impedances in the Y circuit and {\displaystyle R_{\text{opposite}}} is the impedance of the node in the Y circuit which is opposite the edge with {\displaystyle R_{\Delta }}. The formulae for the individual edges are thus
{\displaystyle {\begin{aligned}R_{\text{a}}&={\frac {R_{1}R_{2}+R_{2}R_{3}+R_{3}R_{1}}{R_{1}}}=R_{2}+R_{3}+{\frac {R_{2}R_{3}}{R_{1}}}\\[3pt]R_{\text{b}}&={\frac {R_{1}R_{2}+R_{2}R_{3}+R_{3}R_{1}}{R_{2}}}=R_{1}+R_{3}+{\frac {R_{1}R_{3}}{R_{2}}}\\[3pt]R_{\text{c}}&={\frac {R_{1}R_{2}+R_{2}R_{3}+R_{3}R_{1}}{R_{3}}}=R_{1}+R_{2}+{\frac {R_{1}R_{2}}{R_{3}}}\end{aligned}}}
Or, if using admittance instead of resistance:
{\displaystyle {\begin{aligned}Y_{\text{a}}&={\frac {Y_{3}Y_{2}}{\sum Y_{\text{Y}}}}\\[3pt]Y_{\text{b}}&={\frac {Y_{3}Y_{1}}{\sum Y_{\text{Y}}}}\\[3pt]Y_{\text{c}}&={\frac {Y_{1}Y_{2}}{\sum Y_{\text{Y}}}}\end{aligned}}}
Note that the general formula in Y to Δ using admittance is similar to Δ to Y using resistance.
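The reverse conversion and a round-trip check can be sketched the same way; the Python below is self-contained and consistent with the delta_to_wye sketch above, with illustrative values.

```python
def wye_to_delta(R1, R2, R3):
    """Convert wye impedances (R1, R2, R3) to the equivalent delta impedances."""
    Rp = R1 * R2 + R2 * R3 + R3 * R1
    return Rp / R1, Rp / R2, Rp / R3    # Ra, Rb, Rc

def delta_to_wye(Ra, Rb, Rc):
    s = Ra + Rb + Rc
    return Rb * Rc / s, Ra * Rc / s, Ra * Rb / s

# round trip: Y -> delta -> Y recovers the original values
R_y = (10.0, 20.0, 30.0)
print(delta_to_wye(*wye_to_delta(*R_y)))   # approximately (10.0, 20.0, 30.0)
```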
== A proof of the existence and uniqueness of the transformation ==
The feasibility of the transformation can be shown as a consequence of the superposition theorem for electric circuits. A short proof, rather than one derived as a corollary of the more general star-mesh transform, can be given as follows. The equivalence lies in the statement that for any external voltages ({\displaystyle V_{1},V_{2}} and {\displaystyle V_{3}}) applied at the three nodes ({\displaystyle N_{1},N_{2}} and {\displaystyle N_{3}}), the corresponding currents ({\displaystyle I_{1},I_{2}} and {\displaystyle I_{3}}) are exactly the same for both the Y and Δ circuit, and vice versa. In this proof, we start with given external currents at the nodes. According to the superposition theorem, the voltages can be obtained by studying the superposition of the resulting voltages at the nodes of the following three problems applied at the three nodes with currents:
{\displaystyle {\frac {1}{3}}\left(I_{1}-I_{2}\right),-{\frac {1}{3}}\left(I_{1}-I_{2}\right),0}
{\displaystyle 0,{\frac {1}{3}}\left(I_{2}-I_{3}\right),-{\frac {1}{3}}\left(I_{2}-I_{3}\right)}
and
{\displaystyle -{\frac {1}{3}}\left(I_{3}-I_{1}\right),0,{\frac {1}{3}}\left(I_{3}-I_{1}\right)}
The equivalence can be readily shown by using Kirchhoff's circuit laws, which give {\displaystyle I_{1}+I_{2}+I_{3}=0}. Now each problem is relatively simple, since it involves only one single ideal current source. To obtain exactly the same outcome voltages at the nodes for each problem, the equivalent resistances in the two circuits must be the same; this can be easily found by using the basic rules of series and parallel circuits:
{\displaystyle R_{3}+R_{1}={\frac {\left(R_{\text{c}}+R_{\text{a}}\right)R_{\text{b}}}{R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}},\quad {\frac {R_{3}}{R_{1}}}={\frac {R_{\text{a}}}{R_{\text{c}}}}.}
Though usually six equations are more than enough to express three variables ({\displaystyle R_{1},R_{2},R_{3}}) in terms of the other three variables ({\displaystyle R_{\text{a}},R_{\text{b}},R_{\text{c}}}), here it is straightforward to show that these equations indeed lead to the designed expressions above.
In fact, the superposition theorem establishes the relation between the values of the resistances, while the uniqueness theorem guarantees the uniqueness of the solution.
== Simplification of networks ==
Resistive networks between two terminals can theoretically be simplified to a single equivalent resistor (more generally, the same is true of impedance). Series and parallel transforms are basic tools for doing so, but for complex networks such as the bridge illustrated here, they do not suffice.
The Y-Δ transform can be used to eliminate one node at a time and produce a network that can be further simplified, as shown.
The reverse transformation, Δ-Y, which adds a node, is often handy to pave the way for further simplification as well.
Every two-terminal network represented by a planar graph can be reduced to a single equivalent resistor by a sequence of series, parallel, Y-Δ, and Δ-Y transformations. However, there are non-planar networks that cannot be simplified using these transformations, such as a regular square grid wrapped around a torus, or any member of the Petersen family.
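As a worked illustration of this reduction, the Python sketch below collapses a bridge of five resistors between terminals A and B by applying one Δ→Y step to the triangle at terminal A and then combining series and parallel branches. The resistor values are illustrative; a fully balanced bridge of 1 Ω resistors should come out at exactly 1 Ω.

```python
def series(*rs):
    return sum(rs)

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def bridge_resistance(R_ac, R_ad, R_cd, R_cb, R_db):
    """Two-terminal resistance of a bridge A-(C,D)-B with bridge resistor R_cd,
    reduced via a delta-to-wye transform on the triangle (A, C, D)."""
    s = R_ac + R_ad + R_cd
    # star resistances at the former triangle nodes: product of the two
    # delta branches meeting at that node, divided by the delta total
    R_A = R_ac * R_ad / s
    R_C = R_ac * R_cd / s
    R_D = R_ad * R_cd / s
    return series(R_A, parallel(series(R_C, R_cb), series(R_D, R_db)))

print(bridge_resistance(1.0, 1.0, 1.0, 1.0, 1.0))   # balanced bridge -> 1.0
print(bridge_resistance(100.0, 300.0, 50.0, 200.0, 400.0))
```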
== Graph theory ==
In graph theory, the Y-Δ transform means replacing a Y subgraph of a graph with the equivalent Δ subgraph. The transform preserves the number of edges in a graph, but not the number of vertices or the number of cycles. Two graphs are said to be Y-Δ equivalent if one can be obtained from the other by a series of Y-Δ transforms in either direction. For example, the Petersen family is a Y-Δ equivalence class.
== Demonstration ==
=== Δ-load to Y-load transformation equations ===
To relate {\displaystyle \left\{R_{\text{a}},R_{\text{b}},R_{\text{c}}\right\}} from Δ to {\displaystyle \left\{R_{1},R_{2},R_{3}\right\}} from Y, the impedance between two corresponding nodes is compared. The impedance in either configuration is determined as if one of the nodes is disconnected from the circuit.
The impedance between N1 and N2 with N3 disconnected in Δ:
{\displaystyle {\begin{aligned}R_{\Delta }\left(N_{1},N_{2}\right)&=R_{\text{c}}\parallel (R_{\text{a}}+R_{\text{b}})\\[3pt]&={\frac {1}{{\frac {1}{R_{\text{c}}}}+{\frac {1}{R_{\text{a}}+R_{\text{b}}}}}}\\[3pt]&={\frac {R_{\text{c}}\left(R_{\text{a}}+R_{\text{b}}\right)}{R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}}\end{aligned}}}
To simplify, let {\displaystyle R_{\text{T}}} be the sum of {\displaystyle \left\{R_{\text{a}},R_{\text{b}},R_{\text{c}}\right\}}:
{\displaystyle R_{\text{T}}=R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}
Thus,
{\displaystyle R_{\Delta }\left(N_{1},N_{2}\right)={\frac {R_{\text{c}}(R_{\text{a}}+R_{\text{b}})}{R_{\text{T}}}}}
The corresponding impedance between N1 and N2 in Y is simple:
{\displaystyle R_{\text{Y}}\left(N_{1},N_{2}\right)=R_{1}+R_{2}}
hence:
{\displaystyle R_{1}+R_{2}={\frac {R_{\text{c}}(R_{\text{a}}+R_{\text{b}})}{R_{\text{T}}}}} (1)
Repeating for {\displaystyle R(N_{2},N_{3})}:
{\displaystyle R_{2}+R_{3}={\frac {R_{\text{a}}(R_{\text{b}}+R_{\text{c}})}{R_{\text{T}}}}} (2)
and for {\displaystyle R\left(N_{1},N_{3}\right)}:
{\displaystyle R_{1}+R_{3}={\frac {R_{\text{b}}\left(R_{\text{a}}+R_{\text{c}}\right)}{R_{\text{T}}}}.} (3)
From here, the values of {\displaystyle \left\{R_{1},R_{2},R_{3}\right\}} can be determined by linear combination (addition and/or subtraction).
For example, adding (1) and (3), then subtracting (2) yields
{\displaystyle {\begin{aligned}R_{1}+R_{2}+R_{1}+R_{3}-R_{2}-R_{3}&={\frac {R_{\text{c}}(R_{\text{a}}+R_{\text{b}})}{R_{\text{T}}}}+{\frac {R_{\text{b}}(R_{\text{a}}+R_{\text{c}})}{R_{\text{T}}}}-{\frac {R_{\text{a}}(R_{\text{b}}+R_{\text{c}})}{R_{\text{T}}}}\\[3pt]{}\Rightarrow 2R_{1}&={\frac {2R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}\\[3pt]{}\Rightarrow R_{1}&={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}.\end{aligned}}}
For completeness:
{\displaystyle R_{1}={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}} (4)
{\displaystyle R_{2}={\frac {R_{\text{a}}R_{\text{c}}}{R_{\text{T}}}}} (5)
{\displaystyle R_{3}={\frac {R_{\text{a}}R_{\text{b}}}{R_{\text{T}}}}} (6)
=== Y-load to Δ-load transformation equations ===
Let {\displaystyle R_{\text{T}}=R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}.
We can write the Δ to Y equations as
{\displaystyle R_{1}={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}} (1)
{\displaystyle R_{2}={\frac {R_{\text{a}}R_{\text{c}}}{R_{\text{T}}}}} (2)
{\displaystyle R_{3}={\frac {R_{\text{a}}R_{\text{b}}}{R_{\text{T}}}}.} (3)
Multiplying the pairs of equations yields
{\displaystyle R_{1}R_{2}={\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}^{2}}{R_{\text{T}}^{2}}}} (4)
{\displaystyle R_{1}R_{3}={\frac {R_{\text{a}}R_{\text{b}}^{2}R_{\text{c}}}{R_{\text{T}}^{2}}}} (5)
{\displaystyle R_{2}R_{3}={\frac {R_{\text{a}}^{2}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}^{2}}}} (6)
and the sum of these equations is
{\displaystyle R_{1}R_{2}+R_{1}R_{3}+R_{2}R_{3}={\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}^{2}+R_{\text{a}}R_{\text{b}}^{2}R_{\text{c}}+R_{\text{a}}^{2}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}^{2}}}} (7)
Factor {\displaystyle R_{\text{a}}R_{\text{b}}R_{\text{c}}} from the right side, leaving {\displaystyle R_{\text{T}}} in the numerator, canceling with an {\displaystyle R_{\text{T}}} in the denominator.
{\displaystyle {\begin{aligned}R_{1}R_{2}+R_{1}R_{3}+R_{2}R_{3}&={}{\frac {\left(R_{\text{a}}R_{\text{b}}R_{\text{c}}\right)\left(R_{\text{a}}+R_{\text{b}}+R_{\text{c}}\right)}{R_{\text{T}}^{2}}}\\&={}{\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}\end{aligned}}} (8)
Note the similarity between (8) and {(1), (2), (3)}.
Divide (8) by (1):
{\displaystyle {\begin{aligned}{\frac {R_{1}R_{2}+R_{1}R_{3}+R_{2}R_{3}}{R_{1}}}&={}{\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}{\frac {R_{\text{T}}}{R_{\text{b}}R_{\text{c}}}}\\&={}R_{\text{a}},\end{aligned}}}
which is the equation for {\displaystyle R_{\text{a}}}. Dividing (8) by (2) or (3) (expressions for {\displaystyle R_{2}} or {\displaystyle R_{3}}) gives the remaining equations.
== Δ to Y transformation of a practical generator ==
During the analysis of balanced three-phase power systems, usually an equivalent per-phase (or single-phase) circuit is analyzed instead due to its simplicity. For that, equivalent wye connections are used for generators, transformers, loads and motors. The stator windings of a practical delta-connected three-phase generator, shown in the following figure, can be converted to an equivalent wye-connected generator, using the six following formulas:
{\displaystyle {\begin{aligned}&Z_{\text{s1Y}}={\dfrac {Z_{\text{s1}}\,Z_{\text{s3}}}{Z_{\text{s1}}+Z_{\text{s2}}+Z_{\text{s3}}}}\\[2ex]&Z_{\text{s2Y}}={\dfrac {Z_{\text{s1}}\,Z_{\text{s2}}}{Z_{\text{s1}}+Z_{\text{s2}}+Z_{\text{s3}}}}\\[2ex]&Z_{\text{s3Y}}={\dfrac {Z_{\text{s2}}\,Z_{\text{s3}}}{Z_{\text{s1}}+Z_{\text{s2}}+Z_{\text{s3}}}}\\[2ex]&V_{\text{s1Y}}=\left({\dfrac {V_{\text{s1}}}{Z_{\text{s1}}}}-{\dfrac {V_{\text{s3}}}{Z_{\text{s3}}}}\right)Z_{\text{s1Y}}\\[2ex]&V_{\text{s2Y}}=\left({\dfrac {V_{\text{s2}}}{Z_{\text{s2}}}}-{\dfrac {V_{\text{s1}}}{Z_{\text{s1}}}}\right)Z_{\text{s2Y}}\\[2ex]&V_{\text{s3Y}}=\left({\dfrac {V_{\text{s3}}}{Z_{\text{s3}}}}-{\dfrac {V_{\text{s2}}}{Z_{\text{s2}}}}\right)Z_{\text{s3Y}}\end{aligned}}}
The resulting network is the following. The neutral node of the equivalent network is fictitious, and so are the line-to-neutral phasor voltages. During the transformation, the line phasor currents and the line (or line-to-line or phase-to-phase) phasor voltages are not altered.
If the actual delta generator is balanced, meaning that the internal phasor voltages have the same magnitude and are phase-shifted by 120° between each other and the three complex impedances are the same, then the previous formulas reduce to the four following:
{\displaystyle {\begin{aligned}&Z_{\text{sY}}={\dfrac {Z_{\text{s}}}{3}}\\&V_{\text{s1Y}}={\dfrac {V_{\text{s1}}}{{\sqrt {3}}\,\angle \pm 30^{\circ }}}\\[2ex]&V_{\text{s2Y}}={\dfrac {V_{\text{s2}}}{{\sqrt {3}}\,\angle \pm 30^{\circ }}}\\[2ex]&V_{\text{s3Y}}={\dfrac {V_{\text{s3}}}{{\sqrt {3}}\,\angle \pm 30^{\circ }}}\end{aligned}}}
where for the last three equations, the first sign (+) is used if the phase sequence is positive/abc or the second sign (−) is used if the phase sequence is negative/acb.
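For the balanced case these last four formulas are easy to apply numerically. The Python sketch below converts an illustrative balanced Δ source to its per-phase Y equivalent, taking the positive (abc) phase sequence, i.e. dividing each source voltage by {\displaystyle {\sqrt {3}}\,\angle 30^{\circ }}; the source values are illustrative.

```python
import cmath
import math

def polar(mag, deg):
    return cmath.rect(mag, math.radians(deg))

def balanced_delta_to_wye(V_s, Z_s, positive_sequence=True):
    """Per-phase wye equivalent of a balanced delta-connected source:
    Z_sY = Z_s / 3 and V_sY = V_s / (sqrt(3) at +/-30 degrees)."""
    shift = 30.0 if positive_sequence else -30.0
    return V_s / polar(math.sqrt(3), shift), Z_s / 3.0

# illustrative 480 V source with 0.3 + j1.2 ohm per delta winding
V_1Y, Z_Y = balanced_delta_to_wye(polar(480.0, 0.0), 0.3 + 1.2j)
print(abs(V_1Y), math.degrees(cmath.phase(V_1Y)), Z_Y)   # about 277 V at -30 deg, Z/3
```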
== See also ==
Star-mesh transform
Network analysis (electrical circuits)
Electrical network, three-phase power, polyphase systems for examples of Y and Δ connections
AC motor for a discussion of the Y-Δ starting technique
== References ==
== Notes ==
== Bibliography ==
William Stevenson, Elements of Power System Analysis 3rd ed., McGraw Hill, New York, 1975, ISBN 0-07-061285-4
== External links ==
Star-Triangle Conversion: Knowledge on resistive networks and resistors
Calculator of Star-Triangle transform | Wikipedia/Y-Δ_transform |
The star-mesh transform, or star-polygon transform, is a mathematical circuit analysis technique to transform a resistive network into an equivalent network with one less node. The equivalence follows from the Schur complement identity applied to the Kirchhoff matrix of the network.
The equivalent impedance between nodes A and B is given by:
{\displaystyle z_{\text{AB}}=z_{\text{A}}z_{\text{B}}\sum {\frac {1}{z}},}
where {\displaystyle z_{\text{A}}} is the impedance between node A and the central node being removed.
The transform replaces N resistors with {\textstyle {\frac {1}{2}}N(N-1)} resistors. For {\textstyle N>3}, the result is an increase in the number of resistors, so the transform has no general inverse without additional constraints.
It is possible, though not necessarily efficient, to transform an arbitrarily complex two-terminal resistive network into a single equivalent resistor by repeatedly applying the star-mesh transform to eliminate each non-terminal node.
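The transform is straightforward to code. The following Python sketch builds the mesh impedances for an N-armed star; for N = 3 it reproduces the Y-Δ result (three equal arms of R give mesh branches of 3R). Names and values are illustrative.

```python
from itertools import combinations

def star_to_mesh(arms):
    """Eliminate the central node of a star.
    arms : dict mapping terminal name -> impedance of the arm to the centre
    Returns a dict mapping each terminal pair -> mesh impedance
    z_AB = z_A * z_B * sum(1/z) over all arms."""
    s = sum(1.0 / z for z in arms.values())
    return {(a, b): arms[a] * arms[b] * s
            for a, b in combinations(sorted(arms), 2)}

print(star_to_mesh({"A": 10.0, "B": 10.0, "C": 10.0}))   # each pair -> 30 ohm
print(star_to_mesh({"A": 5.0, "B": 10.0, "C": 20.0, "D": 40.0}))
```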
== Special cases ==
When N = 1, the star is a single dangling resistor and the transform simply eliminates it.
When N = 2, the "star" is simply the two resistors in series, and the transform yields a single equivalent resistor.
The special case N = 3 is better known as the Y-Δ transform. Since the result also has three resistors, this transform has an inverse Δ-Y transform.
== See also ==
Topology of electrical circuits
Network analysis (electrical circuits)
== References ==
van Lier, M.; Otten, R. (March 1973). "Planarization by transformation". IEEE Transactions on Circuit Theory. 20 (2): 169–171. doi:10.1109/TCT.1973.1083633.
Bedrosian, S. (December 1961). "Converse of the Star-Mesh Transformation". IRE Transactions on Circuit Theory. 8 (4): 491–493. doi:10.1109/TCT.1961.1086832.
Curtis, E.B.; Ingerman, D.; Morrow, J.A. (November 1998). "Circular planar graphs and resistor networks". Linear Algebra and its Applications. 283 (1–3): 115–150. doi:10.1016/S0024-3795(98)10087-3.
In electronics, a two-port network (a kind of four-terminal network or quadripole) is an electrical network (i.e. a circuit) or device with two pairs of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the current entering one terminal must equal the current emerging from the other terminal on the same port. The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. In a two-port network, often port 1 is considered the input port and port 2 is considered the output port.
It is commonly used in mathematical circuit analysis.
== Application ==
The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a "black box" with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their h-parameters (see below) which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions.
Examples of circuits analyzed as two-ports are filters, matching networks, transmission lines, transformers, and small-signal models for transistors (such as the hybrid-pi model). The analysis of passive two-port networks is an outgrowth of reciprocity theorems first derived by Lorentz.
In two-port mathematical models, the network is described by a 2 by 2 square matrix of complex numbers. The common models that are used are referred to as z-parameters, y-parameters, h-parameters, g-parameters, and ABCD-parameters, each described individually below. These are all limited to linear networks since an underlying assumption of their derivation is that any given circuit condition is a linear superposition of various short-circuit and open circuit conditions. They are usually expressed in matrix notation, and they establish relations between the variables
V1, voltage across port 1
I1, current into port 1
V2, voltage across port 2
I2, current into port 2
which are shown in figure 1. The difference between the various models lies in which of these variables are regarded as the independent variables. These current and voltage variables are most useful at low-to-moderate frequencies. At high frequencies (e.g., microwave frequencies), the use of power and energy variables is more appropriate, and the two-port current–voltage approach is replaced by an approach based upon scattering parameters.
== General properties ==
There are certain properties of two-ports that frequently occur in practical networks and can be used to greatly simplify the analysis. These include:
Reciprocal networks
A network is said to be reciprocal if the voltage appearing at port 2 due to a current applied at port 1 is the same as the voltage appearing at port 1 when the same current is applied to port 2. Exchanging voltage and current results in an equivalent definition of reciprocity. A network that consists entirely of linear passive components (that is, resistors, capacitors and inductors) is usually reciprocal, a notable exception being passive circulators and isolators that contain magnetized materials. In general, it will not be reciprocal if it contains active components such as generators or transistors.
Symmetrical networks
A network is symmetrical if its input impedance is equal to its output impedance. Most often, but not necessarily, symmetrical networks are also physically symmetrical. Sometimes also antimetrical networks are of interest. These are networks where the input and output impedances are the duals of each other.
Lossless network
A lossless network is one which contains no resistors or other dissipative elements.
== Impedance parameters (z-parameters) ==
{\displaystyle {\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}={\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}}
where
{\displaystyle {\begin{aligned}z_{11}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{1}}{I_{1}}}\right|_{I_{2}=0}&z_{12}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{1}}{I_{2}}}\right|_{I_{1}=0}\\z_{21}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{2}}{I_{1}}}\right|_{I_{2}=0}&z_{22}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{2}}{I_{2}}}\right|_{I_{1}=0}\end{aligned}}}
All the z-parameters have dimensions of ohms.
For reciprocal networks z12 = z21. For symmetrical networks z11 = z22. For reciprocal lossless networks all the zmn are purely imaginary.
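As a simple concrete case, a T (wye) network with series arms Za and Zb and a shunt arm Zc to the common rail has z-parameters z11 = Za + Zc, z22 = Zb + Zc and z12 = z21 = Zc, which follow directly from the open-circuit definitions above. A short Python sketch with illustrative values:

```python
import numpy as np

def t_network_z(Za, Zb, Zc):
    """Open-circuit impedance matrix of a T network:
    port 1 -- Za --+-- Zb -- port 2, with Zc from the middle node to common."""
    return np.array([[Za + Zc, Zc],
                     [Zc, Zb + Zc]], dtype=complex)

Z = t_network_z(Za=10.0, Zb=20.0, Zc=50.0)
print(Z)    # equal off-diagonal terms: the network is reciprocal
```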
=== Example: bipolar current mirror with emitter degeneration ===
Figure 3 shows a bipolar current mirror with emitter resistors to increase its output resistance. Transistor Q1 is diode connected, which is to say its collector-base voltage is zero. Figure 4 shows the small-signal circuit equivalent to Figure 3. Transistor Q1 is represented by its emitter resistance rE:
{\displaystyle r_{\mathrm {E} }\approx {\frac {{\text{thermal voltage, }}V_{\mathrm {T} }}{{\text{emitter current, }}I_{E}}},}
a simplification made possible because the dependent current source in the hybrid-pi model for Q1 draws the same current as a resistor 1 / gm connected across rπ. The second transistor Q2 is represented by its hybrid-pi model. Table 1 below shows the z-parameter expressions that make the z-equivalent circuit of Figure 2 electrically equivalent to the small-signal circuit of Figure 4.
The negative feedback introduced by resistors RE can be seen in these parameters. For example, when used as an active load in a differential amplifier, I1 ≈ −I2, making the output impedance of the mirror approximately
{\displaystyle R_{22}-R_{21}\approx {\frac {2\beta r_{\mathrm {O} }R_{\mathrm {E} }}{r_{\pi }+2R_{\mathrm {E} }}}}
compared to only rO without feedback (that is with RE = 0 Ω). At the same time, the impedance on the reference side of the mirror is approximately
{\displaystyle R_{11}-R_{12}\approx {\frac {r_{\pi }}{r_{\pi }+2R_{\mathrm {E} }}}(r_{\mathrm {E} }+R_{\mathrm {E} }),}
only a moderate value, but still larger than rE with no feedback. In the differential amplifier application, a large output resistance increases the difference-mode gain, a good thing, and a small mirror input resistance is desirable to avoid Miller effect.
== Admittance parameters (y-parameters) ==
{\displaystyle {\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}={\begin{bmatrix}y_{11}&y_{12}\\y_{21}&y_{22}\end{bmatrix}}{\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}}
where
{\displaystyle {\begin{aligned}y_{11}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{1}}{V_{1}}}\right|_{V_{2}=0}&y_{12}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{1}}{V_{2}}}\right|_{V_{1}=0}\\y_{21}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{2}}{V_{1}}}\right|_{V_{2}=0}&y_{22}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{2}}{V_{2}}}\right|_{V_{1}=0}\end{aligned}}}
All the Y-parameters have dimensions of siemens.
For reciprocal networks y12 = y21. For symmetrical networks y11 = y22. For reciprocal lossless networks all the ymn are purely imaginary.
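Since the y-parameters relate the same port variables with the roles of voltage and current exchanged, the admittance matrix is simply the inverse of the impedance matrix whenever that inverse exists. A minimal Python sketch, continuing the illustrative T-network values used above:

```python
import numpy as np

Z = np.array([[60.0, 50.0],
              [50.0, 70.0]], dtype=complex)   # z-matrix of the T network above

Y = np.linalg.inv(Z)                          # y-parameters: I = Y V
print(Y)
print(np.allclose(Y @ Z, np.eye(2)))          # sanity check: Y Z = identity
```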
== Hybrid parameters (h-parameters) ==
{\displaystyle {\begin{bmatrix}V_{1}\\I_{2}\end{bmatrix}}={\begin{bmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{bmatrix}}{\begin{bmatrix}I_{1}\\V_{2}\end{bmatrix}}}
where
{\displaystyle {\begin{aligned}h_{11}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{1}}{I_{1}}}\right|_{V_{2}=0}&h_{12}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{1}}{V_{2}}}\right|_{I_{1}=0}\\h_{21}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{2}}{I_{1}}}\right|_{V_{2}=0}&h_{22}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{2}}{V_{2}}}\right|_{I_{1}=0}\end{aligned}}}
This circuit is often selected when a current amplifier is desired at the output. The resistors shown in the diagram can be general impedances instead.
Off-diagonal h-parameters are dimensionless, while the diagonal members have dimensions that are the reciprocals of one another.
For reciprocal networks h12 = –h21. For symmetrical networks h11h22 – h12h21 = 1. For reciprocal lossless networks h12 and h21 are real, while h11 and h22 are purely imaginary.
=== Example: common-base amplifier ===
Note: Tabulated formulas in Table 2 make the h-equivalent circuit of the transistor from Figure 6 agree with its small-signal low-frequency hybrid-pi model in Figure 7. Notation: rπ is base resistance of transistor, rO is output resistance, and gm is mutual transconductance. The negative sign for h21 reflects the convention that I1, I2 are positive when directed into the two-port. A non-zero value for h12 means the output voltage affects the input voltage, that is, this amplifier is bilateral. If h12 = 0, the amplifier is unilateral.
=== History ===
The h-parameters were initially called series-parallel parameters. The term hybrid to describe these parameters was coined by D. A. Alsberg in 1953 in "Transistor metrology". In 1954 a joint committee of the IRE and the AIEE adopted the term h-parameters and recommended that these become the standard method of testing and characterising transistors because they were "peculiarly adaptable to the physical characteristics of transistors". In 1956, the recommendation became an issued standard; 56 IRE 28.S2. Following the merge of these two organisations as the IEEE, the standard became Std 218-1956 and was reaffirmed in 1980, but has now been withdrawn.
== Inverse hybrid parameters (g-parameters) ==
{\displaystyle {\begin{bmatrix}I_{1}\\V_{2}\end{bmatrix}}={\begin{bmatrix}g_{11}&g_{12}\\g_{21}&g_{22}\end{bmatrix}}{\begin{bmatrix}V_{1}\\I_{2}\end{bmatrix}}}
where
{\displaystyle {\begin{aligned}g_{11}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{1}}{V_{1}}}\right|_{I_{2}=0}&g_{12}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{1}}{I_{2}}}\right|_{V_{1}=0}\\g_{21}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{2}}{V_{1}}}\right|_{I_{2}=0}&g_{22}&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{2}}{I_{2}}}\right|_{V_{1}=0}\end{aligned}}}
Often this circuit is selected when a voltage amplifier is wanted at the output. Off-diagonal g-parameters are dimensionless, while the diagonal members have dimensions that are the reciprocals of one another. The resistors shown in the diagram can be general impedances instead.
=== Example: common-base amplifier ===
Note: Tabulated formulas in Table 3 make the g-equivalent circuit of the transistor from Figure 8 agree with its small-signal low-frequency hybrid-pi model in Figure 9. Notation: rπ is base resistance of transistor, rO is output resistance, and gm is mutual transconductance. The negative sign for g12 reflects the convention that I1, I2 are positive when directed into the two-port. A non-zero value for g12 means the output current affects the input current, that is, this amplifier is bilateral. If g12 = 0, the amplifier is unilateral.
== ABCD-parameters ==
The ABCD-parameters are known variously as chain, cascade, or transmission parameters. There are a number of definitions given for ABCD parameters; the most common is,
{\displaystyle {\begin{bmatrix}V_{1}\\I_{1}\end{bmatrix}}={\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}V_{2}\\-I_{2}\end{bmatrix}}}
Note: Some authors choose to reverse the indicated direction of I2 and suppress the negative sign on I2.
where
{\displaystyle {\begin{aligned}A&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{1}}{V_{2}}}\right|_{I_{2}=0}&B&\mathrel {\stackrel {\text{def}}{=}} \left.-{\frac {V_{1}}{I_{2}}}\right|_{V_{2}=0}\\C&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {I_{1}}{V_{2}}}\right|_{I_{2}=0}&D&\mathrel {\stackrel {\text{def}}{=}} \left.-{\frac {I_{1}}{I_{2}}}\right|_{V_{2}=0}\end{aligned}}}
For reciprocal networks AD – BC = 1. For symmetrical networks A = D. For networks which are reciprocal and lossless, A and D are purely real while B and C are purely imaginary.
This representation is preferred because when the parameters are used to represent a cascade of two-ports, the matrices are written in the same order that a network diagram would be drawn, that is, left to right. However, a variant definition is also in use,
{\displaystyle {\begin{bmatrix}V_{2}\\-I_{2}\end{bmatrix}}={\begin{bmatrix}A'&B'\\C'&D'\end{bmatrix}}{\begin{bmatrix}V_{1}\\I_{1}\end{bmatrix}}}
where
{\displaystyle {\begin{aligned}A'&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{2}}{V_{1}}}\right|_{I_{1}=0}&B'&\mathrel {\stackrel {\text{def}}{=}} \left.{\frac {V_{2}}{I_{1}}}\right|_{V_{1}=0}\\C'&\mathrel {\stackrel {\text{def}}{=}} \left.-{\frac {I_{2}}{V_{1}}}\right|_{I_{1}=0}&D'&\mathrel {\stackrel {\text{def}}{=}} \left.-{\frac {I_{2}}{I_{1}}}\right|_{V_{1}=0}\end{aligned}}}
The negative sign of –I2 arises to make the output current of one cascaded stage (as it appears in the matrix) equal to the input current of the next. Without the minus sign the two currents would have opposite senses because the positive direction of current, by convention, is taken as the current entering the port. Consequently, the input voltage/current matrix vector can be directly replaced with the matrix equation of the preceding cascaded stage to form a combined A'B'C'D' matrix.
The terminology of representing the ABCD parameters as a matrix of elements designated a11 etc. as adopted by some authors and the inverse A'B'C'D' parameters as a matrix of elements designated b11 etc. is used here for both brevity and to avoid confusion with circuit elements.
{\displaystyle {\begin{aligned}\left[\mathbf {a} \right]&={\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}}={\begin{bmatrix}A&B\\C&D\end{bmatrix}}\\\left[\mathbf {b} \right]&={\begin{bmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{bmatrix}}={\begin{bmatrix}A'&B'\\C'&D'\end{bmatrix}}\end{aligned}}}
=== Table of transmission parameters ===
The table below lists ABCD and inverse ABCD parameters for some simple network elements.
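The table itself is not reproduced here, but its two most elementary entries, a single series impedance and a single shunt admittance, follow directly from the definitions. The sketch below states these standard results and checks the reciprocity condition AD − BC = 1 for each; the element values are illustrative only.

```python
import numpy as np

def series_impedance(Z):
    """ABCD matrix of a single series impedance Z (standard result)."""
    return np.array([[1, Z],
                     [0, 1]], dtype=complex)

def shunt_admittance(Y):
    """ABCD matrix of a single shunt admittance Y (standard result)."""
    return np.array([[1, 0],
                     [Y, 1]], dtype=complex)

# Both elements are reciprocal, so AD - BC should equal 1 in each case:
for M in (series_impedance(50), shunt_admittance(0.02j)):
    A, B, C, D = M.ravel()
    print(A * D - B * C)   # prints 1 (as a complex number) for each element
```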
== Scattering parameters (S-parameters) ==
The previous parameters are all defined in terms of voltages and currents at ports. S-parameters are different, and are defined in terms of incident and reflected waves at ports. S-parameters are used primarily at UHF and microwave frequencies where it becomes difficult to measure voltages and currents directly. On the other hand, incident and reflected power are easy to measure using directional couplers. The definition is,
{\displaystyle {\begin{bmatrix}b_{1}\\b_{2}\end{bmatrix}}={\begin{bmatrix}S_{11}&S_{12}\\S_{21}&S_{22}\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}}}
where the ak are the incident waves and the bk are the reflected waves at port k. It is conventional to define the ak and bk in terms of the square root of power. Consequently, there is a relationship with the wave voltages (see main article for details).
For reciprocal networks S12 = S21. For symmetrical networks S11 = S22. For antimetrical networks S11 = –S22. For lossless reciprocal networks
{\displaystyle |S_{11}|=|S_{22}|}
and
{\displaystyle |S_{11}|^{2}+|S_{12}|^{2}=1.}
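One way to see these properties concretely is to convert a purely reactive (hence lossless) reciprocal z-parameter matrix to S-parameters. The sketch below uses the standard conversion S = (Z − Z0·1)(Z + Z0·1)⁻¹ for an identical real reference impedance Z0 at both ports; the network values are hypothetical.

```python
import numpy as np

def z_to_s(Z, Z0=50.0):
    """Convert a 2x2 z-parameter matrix to S-parameters, assuming the same
    real reference impedance Z0 at both ports (a standard conversion)."""
    I = np.eye(2)
    return (Z - Z0 * I) @ np.linalg.inv(Z + Z0 * I)

# Hypothetical lossless, reciprocal T-network: series arms j40 and j40 ohms,
# shunt arm -j25 ohms, so every z-parameter is purely imaginary.
Xa, Xb, Xc = 40j, 40j, -25j
Z = np.array([[Xa + Xc, Xc],
              [Xc, Xb + Xc]])

S = z_to_s(Z)
print(abs(S[0, 0]), abs(S[1, 1]))           # equal magnitudes: |S11| = |S22|
print(abs(S[0, 0])**2 + abs(S[0, 1])**2)    # ~1.0, as expected for a lossless network
```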
== Scattering transfer parameters (T-parameters) ==
Scattering transfer parameters, like scattering parameters, are defined in terms of incident and reflected waves. The difference is that T-parameters relate the waves at port 1 to the waves at port 2 whereas S-parameters relate the reflected waves to the incident waves. In this respect T-parameters fill the same role as ABCD parameters and allow the T-parameters of cascaded networks to be calculated by matrix multiplication of the component networks. T-parameters, like ABCD parameters, can also be called transmission parameters. The definition is,
{\displaystyle {\begin{bmatrix}a_{1}\\b_{1}\end{bmatrix}}={\begin{bmatrix}T_{11}&T_{12}\\T_{21}&T_{22}\end{bmatrix}}{\begin{bmatrix}b_{2}\\a_{2}\end{bmatrix}}}
T-parameters are not as easy to measure directly as S-parameters. However, S-parameters are easily converted to T-parameters, see main article for details.
== Combinations of two-port networks ==
When two or more two-port networks are connected, the two-port parameters of the combined network can be found by performing matrix algebra on the matrices of parameters for the component two-ports. The matrix operation can be made particularly simple with an appropriate choice of two-port parameters to match the form of connection of the two-ports. For instance, the z-parameters are best for series connected ports.
The combination rules need to be applied with care. Some connections (when dissimilar potentials are joined) result in the port condition being invalidated and the combination rule will no longer apply. A Brune test can be used to check the permissibility of the combination. This difficulty can be overcome by placing 1:1 ideal transformers on the outputs of the problem two-ports. This does not change the parameters of the two-ports, but does ensure that they will continue to meet the port condition when interconnected. An example of this problem is shown for series-series connections in figures 11 and 12 below.
=== Series-series connection ===
When two-ports are connected in a series-series configuration as shown in figure 10, the best choice of two-port parameter is the z-parameters. The z-parameters of the combined network are found by matrix addition of the two individual z-parameter matrices.
{\displaystyle [\mathbf {z} ]=[\mathbf {z} ]_{1}+[\mathbf {z} ]_{2}}
As mentioned above, there are some networks which will not yield directly to this analysis. A simple example is a two-port consisting of an L-network of resistors R1 and R2. The z-parameters for this network are:
{\displaystyle [\mathbf {z} ]_{1}={\begin{bmatrix}R_{1}+R_{2}&R_{2}\\R_{2}&R_{2}\end{bmatrix}}}
Figure 11 shows two identical such networks connected in series-series. The total z-parameters predicted by matrix addition are;
{\displaystyle [\mathbf {z} ]=[\mathbf {z} ]_{1}+[\mathbf {z} ]_{2}=2[\mathbf {z} ]_{1}={\begin{bmatrix}2R_{1}+2R_{2}&2R_{2}\\2R_{2}&2R_{2}\end{bmatrix}}}
However, direct analysis of the combined circuit shows that,
{\displaystyle [\mathbf {z} ]={\begin{bmatrix}R_{1}+2R_{2}&2R_{2}\\2R_{2}&2R_{2}\end{bmatrix}}}
The discrepancy is explained by observing that R1 of the lower two-port has been by-passed by the short-circuit between two terminals of the output ports. This results in no current flowing through one terminal in each of the input ports of the two individual networks. Consequently, the port condition is broken for both the input ports of the original networks since current is still able to flow into the other terminal. This problem can be resolved by inserting an ideal transformer in the output port of at least one of the two-port networks. While this is a common text-book approach to presenting the theory of two-ports, the practicality of using transformers is a matter to be decided for each individual design.
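A quick numerical check of this discrepancy (with assumed resistor values, not ones from a figure) can be made by comparing matrix addition against the directly analysed result quoted above:

```python
import numpy as np

# Illustrative check (assumed values): z-parameters of the L-network of
# resistors R1 and R2 described above, for R1 = 10 ohm, R2 = 20 ohm.
R1, R2 = 10.0, 20.0
z1 = np.array([[R1 + R2, R2],
               [R2,      R2]])

# Naive series-series combination of two identical networks by matrix addition:
z_added = z1 + z1
# Result obtained by direct analysis of the combined circuit (from the text):
z_direct = np.array([[R1 + 2 * R2, 2 * R2],
                     [2 * R2,      2 * R2]])

print(z_added)    # [[60, 40], [40, 40]]
print(z_direct)   # [[50, 40], [40, 40]] -- the (1,1) terms differ because the
                  # port condition is violated without an isolating transformer
```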
=== Parallel-parallel connection ===
When two-ports are connected in a parallel-parallel configuration as shown in figure 13, the best choice of two-port parameter is the y-parameters. The y-parameters of the combined network are found by matrix addition of the two individual y-parameter matrices.
{\displaystyle [\mathbf {y} ]=[\mathbf {y} ]_{1}+[\mathbf {y} ]_{2}}
=== Series-parallel connection ===
When two-ports are connected in a series-parallel configuration as shown in figure 14, the best choice of two-port parameter is the h-parameters. The h-parameters of the combined network are found by matrix addition of the two individual h-parameter matrices.
{\displaystyle [\mathbf {h} ]=[\mathbf {h} ]_{1}+[\mathbf {h} ]_{2}}
=== Parallel-series connection ===
When two-ports are connected in a parallel-series configuration as shown in figure 15, the best choice of two-port parameter is the g-parameters. The g-parameters of the combined network are found by matrix addition of the two individual g-parameter matrices.
{\displaystyle [\mathbf {g} ]=[\mathbf {g} ]_{1}+[\mathbf {g} ]_{2}}
=== Cascade connection ===
When two-ports are connected with the output port of the first connected to the input port of the second (a cascade connection) as shown in figure 16, the best choice of two-port parameter is the ABCD-parameters. The a-parameters of the combined network are found by matrix multiplication of the two individual a-parameter matrices.
{\displaystyle [\mathbf {a} ]=[\mathbf {a} ]_{1}\cdot [\mathbf {a} ]_{2}}
A chain of n two-ports may be combined by matrix multiplication of the n matrices. To combine a cascade of b-parameter matrices, they are again multiplied, but the multiplication must be carried out in reverse order, so that;
{\displaystyle [\mathbf {b} ]=[\mathbf {b} ]_{2}\cdot [\mathbf {b} ]_{1}}
==== Example ====
Suppose we have a two-port network consisting of a series resistor R followed by a shunt capacitor C. We can model the entire network as a cascade of two simpler networks:
{\displaystyle {\begin{aligned}[][\mathbf {b} ]_{1}&={\begin{bmatrix}1&-R\\0&1\end{bmatrix}}\\\lbrack \mathbf {b} \rbrack _{2}&={\begin{bmatrix}1&0\\-sC&1\end{bmatrix}}\end{aligned}}}
The transmission matrix for the entire network [b] is simply the matrix multiplication of the transmission matrices for the two network elements:
{\displaystyle {\begin{aligned}[]\lbrack \mathbf {b} \rbrack &=\lbrack \mathbf {b} \rbrack _{2}\cdot \lbrack \mathbf {b} \rbrack _{1}\\&={\begin{bmatrix}1&0\\-sC&1\end{bmatrix}}{\begin{bmatrix}1&-R\\0&1\end{bmatrix}}\\&={\begin{bmatrix}1&-R\\-sC&1+sCR\end{bmatrix}}\end{aligned}}}
Thus:
{\displaystyle {\begin{bmatrix}V_{2}\\-I_{2}\end{bmatrix}}={\begin{bmatrix}1&-R\\-sC&1+sCR\end{bmatrix}}{\begin{bmatrix}V_{1}\\I_{1}\end{bmatrix}}}
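The same multiplication can be reproduced symbolically. The sketch below is an independent check of the example rather than part of it, and uses SymPy for the algebra:

```python
import sympy as sp

# Symbolic check of the cascade example above: series R followed by shunt C,
# combined by multiplying the b-matrices in reverse order.
R, C, s = sp.symbols('R C s')

b1 = sp.Matrix([[1, -R],
                [0,  1]])        # series resistor R
b2 = sp.Matrix([[1,    0],
                [-s*C, 1]])      # shunt capacitor C (admittance sC)

b = b2 * b1
print(b)   # Matrix([[1, -R], [-C*s, C*R*s + 1]]), matching the result above
```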
== Interrelation of parameters ==
Where Δ[x] is the determinant of [x].
Certain pairs of matrices have a particularly simple relationship. The admittance parameters are the matrix inverse of the impedance parameters, the inverse hybrid parameters are the matrix inverse of the hybrid parameters, and the [b] form of the ABCD-parameters is the matrix inverse of the [a] form. That is,
{\displaystyle {\begin{aligned}\left[\mathbf {y} \right]&=[\mathbf {z} ]^{-1}\\\left[\mathbf {g} \right]&=[\mathbf {h} ]^{-1}\\\left[\mathbf {b} \right]&=[\mathbf {a} ]^{-1}\end{aligned}}}
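A short numerical sketch (with hypothetical values) of the first of these relations, inverting a z-matrix to obtain the corresponding y-matrix:

```python
import numpy as np

# Numeric illustration (assumed values): the y-matrix is the inverse of the
# z-matrix, as stated above.
z = np.array([[40.0, 30.0],
              [30.0, 50.0]])     # hypothetical reciprocal z-parameters, ohms
y = np.linalg.inv(z)             # admittance parameters, siemens

print(y)
print(np.allclose(y @ z, np.eye(2)))   # True: [y][z] is the identity matrix
```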
== Networks with more than two ports ==
While two port networks are very common (e.g., amplifiers and filters), other electrical networks such as directional couplers and circulators have more than 2 ports. The following representations are also applicable to networks with an arbitrary number of ports:
Admittance (y) parameters
Impedance (z) parameters
Scattering (S) parameters
For example, three-port impedance parameters result in the following relationship:
{\displaystyle {\begin{bmatrix}V_{1}\\V_{2}\\V_{3}\end{bmatrix}}={\begin{bmatrix}Z_{11}&Z_{12}&Z_{13}\\Z_{21}&Z_{22}&Z_{23}\\Z_{31}&Z_{32}&Z_{33}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{2}\\I_{3}\end{bmatrix}}}
However the following representations are necessarily limited to two-port devices:
Hybrid (h) parameters
Inverse hybrid (g) parameters
Transmission (ABCD) parameters
Scattering transfer (T) parameters
== Collapsing a two-port to a one port ==
A two-port network has four variables with two of them being independent. If one of the ports is terminated by a load with no independent sources, then the load enforces a relationship between the voltage and current of that port. A degree of freedom is lost. The circuit now has only one independent parameter. The two-port becomes a one-port impedance to the remaining independent variable.
For example, consider impedance parameters
{\displaystyle {\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}={\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\end{bmatrix}}{\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}}
Connecting a load, ZL onto port 2 effectively adds the constraint
{\displaystyle V_{2}=-Z_{\mathrm {L} }I_{2}\,}
The negative sign is because the positive direction for I2 is directed into the two-port instead of into the load. The augmented equations become
{\displaystyle {\begin{aligned}V_{1}&=Z_{11}I_{1}+Z_{12}I_{2}\\-Z_{\mathrm {L} }I_{2}&=Z_{21}I_{1}+Z_{22}I_{2}\end{aligned}}}
The second equation can easily be solved for I2 as a function of I1, and that expression can replace I2 in the first equation, leaving V1 (and V2 and I2) as functions of I1:
{\displaystyle {\begin{aligned}I_{2}&=-{\frac {Z_{21}}{Z_{\mathrm {L} }+Z_{22}}}I_{1}\\[3pt]V_{1}&=Z_{11}I_{1}-{\frac {Z_{12}Z_{21}}{Z_{\mathrm {L} }+Z_{22}}}I_{1}\\[2pt]&=\left(Z_{11}-{\frac {Z_{12}Z_{21}}{Z_{\mathrm {L} }+Z_{22}}}\right)I_{1}=Z_{\text{in}}I_{1}\end{aligned}}}
So, in effect, I1 sees an input impedance Zin and the two-port's effect on the input circuit has been effectively collapsed down to a one-port; i.e., a simple two terminal impedance.
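The collapsed input impedance can be computed directly from this expression. The following sketch uses hypothetical z-parameter and load values for illustration:

```python
import numpy as np

# Input impedance seen at port 1 when port 2 is terminated in Z_L,
# using the expression derived above. Values are illustrative only.
Z = np.array([[40.0, 30.0],
              [30.0, 50.0]])     # hypothetical z-parameters, ohms
Z_L = 100.0                      # hypothetical load on port 2, ohms

Z_in = Z[0, 0] - Z[0, 1] * Z[1, 0] / (Z_L + Z[1, 1])
print(Z_in)   # 34.0 ohms for these values
```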
== See also ==
Admittance parameters
Impedance parameters
Scattering parameters
Transfer-matrix method (optics) for reflection/transmission calculation of light waves in transparent layers
Ray transfer matrix for calculation of paraxial propagation of a light ray
== Notes ==
== References ==
== Bibliography ==
Carlin, HJ, Civalleri, PP, Wideband circuit design, CRC Press, 1998. ISBN 0-8493-7897-4.
William F. Egan, Practical RF system design, Wiley-IEEE, 2003 ISBN 0-471-20023-9.
Farago, PS, An Introduction to Linear Network Analysis, The English Universities Press Ltd, 1961.
Gray, P.R.; Hurst, P.J.; Lewis, S.H.; Meyer, R.G. (2001). Analysis and Design of Analog Integrated Circuits (4th ed.). New York: Wiley. ISBN 0-471-32168-0.
Ghosh, Smarajit, Network Theory: Analysis and Synthesis, Prentice Hall of India ISBN 81-203-2638-5.
Jaeger, R.C.; Blalock, T.N. (2006). Microelectronic Circuit Design (3rd ed.). Boston: McGraw–Hill. ISBN 978-0-07-319163-8.
Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill, 1964.
Mahmood Nahvi, Joseph Edminister, Schaum's outline of theory and problems of electric circuits, McGraw-Hill Professional, 2002 ISBN 0-07-139307-2.
Dragica Vasileska, Stephen Marshall Goodnick, Computational electronics, Morgan & Claypool Publishers, 2006 ISBN 1-59829-056-8.
Clayton R. Paul, Analysis of Multiconductor Transmission Lines, John Wiley & Sons, 2008 ISBN 0470131543, 9780470131541.
=== h-parameters history ===
D. A. Alsberg, "Transistor metrology", IRE Convention Record, part 9, pp. 39–44, 1953.
also published as "Transistor metrology", Transactions of the IRE Professional Group on Electron Devices, vol. ED-1, iss. 3, pp. 12–17, August 1954.
AIEE-IRE joint committee, "Proposed methods of testing transistors", Transactions of the American Institute of Electrical Engineers: Communications and Electronics, pp. 725–740, January 1955.
"IRE Standards on solid-state devices: methods of testing transistors, 1956", Proceedings of the IRE, vol. 44, iss. 11, pp. 1542–1561, November, 1956.
IEEE Standard Methods of Testing Transistors, IEEE Std 218-1956. | Wikipedia/Two-port_network |
Reciprocity in electrical networks is a property of a circuit that relates voltages and currents at two points. The reciprocity theorem states that the current at one point in a circuit due to a voltage at a second point is the same as the current at the second point due to the same voltage at the first. The reciprocity theorem is valid for almost all passive networks. The reciprocity theorem is a feature of a more general principle of reciprocity in electromagnetism.
== Description ==
If a current, IA, injected into port A produces a voltage, VB, at port B, and IA injected into port B produces VB at port A, then the network is said to be reciprocal. Equivalently, reciprocity can be defined by the dual situation: applying voltage, VA, at port A producing current IB at port B, and VA at port B producing current IB at port A. In general, passive networks are reciprocal. Any network that consists entirely of ideal capacitances, inductances (including mutual inductances), and resistances, that is, elements that are linear and bilateral, will be reciprocal. However, passive components that are non-reciprocal do exist. Any component containing ferromagnetic material is likely to be non-reciprocal. Examples of passive components deliberately designed to be non-reciprocal include circulators and isolators.
The transfer function of a reciprocal network has the property that it is symmetrical about the main diagonal if expressed in terms of a z-parameter, y-parameter, or s-parameter matrix. A non-symmetrical matrix implies a non-reciprocal network. A symmetric matrix does not imply a symmetric network.
In some parametrisations of networks, the representative matrix is not symmetrical for reciprocal networks. Common examples are h-parameters and ABCD-parameters, but they all have some other condition for reciprocity that can be calculated from the parameters. For h-parameters the condition is
h12 = –h21, and for the ABCD parameters it is AD – BC = 1. These representations mix voltages and currents in the same column vector and therefore do not even have matching units in transposed elements.
=== Example ===
An example of reciprocity can be demonstrated using an asymmetrical resistive attenuator. An asymmetrical network is chosen as the example because a symmetrical network is self-evidently reciprocal.
Injecting 6 amperes into port 1 of this network produces 24 volts at port 2.
Injecting 6 amperes into port 2 produces 24 volts at port 1.
Hence, the network is reciprocal. In this example, the port that is not injecting current is left open circuit. This is because a current generator applying zero current is an open circuit. If, on the other hand, one wished to apply voltages and measure the resulting current, then the port to which the voltage is not applied would be made short circuit. This is because a voltage generator applying zero volts is a short circuit.
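The same style of check can be carried out on any passive network whose z-parameters are known. The sketch below uses a hypothetical asymmetrical resistive T-network (not necessarily the attenuator in the figure) and confirms that the open-circuit voltage transfers are equal in both directions:

```python
# Reciprocity check on a hypothetical asymmetrical resistive T-network
# (series arms Ra and Rb, shunt arm Rc). This illustrates the procedure only;
# it is not necessarily the attenuator shown in the figure.
Ra, Rb, Rc = 2.0, 6.0, 4.0   # assumed values, ohms

# z-parameters of the T-network:
z11, z22, z12 = Ra + Rc, Rb + Rc, Rc   # z21 = z12 for this passive network

I = 6.0                                # amperes injected

V2_from_port1 = z12 * I                # port 2 open-circuit voltage
V1_from_port2 = z12 * I                # port 1 open-circuit voltage

print(V2_from_port1, V1_from_port2)    # 24.0 24.0 -- equal, so the network is reciprocal
```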
== Proof ==
Reciprocity of electrical networks is a special case of Lorentz reciprocity, but it can also be proven more directly from network theorems. This proof shows reciprocity for a two-node network in terms of its admittance matrix, and then shows reciprocity for a network with an arbitrary number of nodes by an induction argument. A linear network can be represented as a set of linear equations through nodal analysis. For a network consisting of n+1 nodes (one being a reference node) where, in general, an admittance is connected between each pair of nodes and where a current is injected in each node (provided by an ideal current source connected between the node and the reference node), these equations can be expressed in the form of an admittance matrix,
{\displaystyle {\begin{bmatrix}I_{1}\\I_{2}\\\vdots \\I_{n}\end{bmatrix}}={\begin{bmatrix}Y_{11}&Y_{12}&\cdots &Y_{1n}\\Y_{21}&Y_{22}&\cdots &Y_{2n}\\\vdots &\vdots &\ddots &\vdots \\Y_{n1}&Y_{n2}&\cdots &Y_{nn}\end{bmatrix}}{\begin{bmatrix}V_{1}\\V_{2}\\\vdots \\V_{n}\end{bmatrix}}}
where
Ik is the current injected into node k by a generator (which amounts to zero if no current source is connected to node k)
Vk is the voltage at node k with respect to the reference node (one could also say, it is the electric potential at node k)
Yjk (j ≠ k) is the negative of the admittance directly connecting nodes j and k (if any)
Ykk is the sum of the admittances connected to node k (regardless of the other node the admittance is connected to).
This representation corresponds to the one obtained by nodal analysis. If we further require that the network is made up of passive, bilateral elements, then
{\displaystyle Y_{jk}=Y_{kj}}
since the admittance connected between nodes j and k is the same element as the admittance connected between nodes k and j. The matrix is therefore symmetrical. For the case where n = 2 the matrix reduces to,
{\displaystyle {\begin{bmatrix}I_{1}\\I_{2}\end{bmatrix}}={\begin{bmatrix}Y_{11}&Y_{12}\\Y_{21}&Y_{22}\end{bmatrix}}{\begin{bmatrix}V_{1}\\V_{2}\end{bmatrix}}.}
From which it can be seen that,
{\displaystyle Y_{12}=\left.{\frac {I_{1}}{V_{2}}}\right|_{V_{1}=0}}
and
{\displaystyle Y_{21}=\left.{\frac {I_{2}}{V_{1}}}\right|_{V_{2}=0}\ .}
But since {\displaystyle Y_{12}=Y_{21}} then,
{\displaystyle \left.{\frac {I_{1}}{V_{2}}}\right|_{V_{1}=0}=\left.{\frac {I_{2}}{V_{1}}}\right|_{V_{2}=0}}
which is synonymous with the condition for reciprocity. In words, the ratio of the current at one port to the voltage at another is the same ratio if the ports being driven and measured are interchanged. Thus reciprocity is proven for the case of n = 2.
For the case of a matrix of arbitrary size, the order of the matrix can be reduced through node elimination. After eliminating the sth node, the new admittance matrix will have the form,
{\displaystyle {\begin{bmatrix}(Y_{11}Y_{ss}-Y_{s1}Y_{1s})&(Y_{12}Y_{ss}-Y_{s2}Y_{1s})&(Y_{13}Y_{ss}-Y_{s3}Y_{1s})&\cdots \\(Y_{21}Y_{ss}-Y_{s1}Y_{2s})&(Y_{22}Y_{ss}-Y_{s2}Y_{2s})&(Y_{23}Y_{ss}-Y_{s3}Y_{2s})&\cdots \\(Y_{31}Y_{ss}-Y_{s1}Y_{3s})&(Y_{32}Y_{ss}-Y_{s2}Y_{3s})&(Y_{33}Y_{ss}-Y_{s3}Y_{3s})&\cdots \\\cdots &\cdots &\cdots &\cdots \end{bmatrix}}}
It can be seen that this new matrix is also symmetrical. Nodes can continue to be eliminated in this way until only a 2×2 symmetrical matrix remains involving the two nodes of interest. Since this matrix is symmetrical it is proved that reciprocity applies to a matrix of arbitrary size when one node is driven by a voltage and current measured at another. A similar process using the impedance matrix from mesh analysis demonstrates reciprocity where one node is driven by a current and voltage is measured at another.
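The elimination step can be illustrated numerically. The sketch below applies the conventional Kron-reduction form of node elimination (which differs from the matrix shown above only by an overall factor of Yss) to an arbitrary symmetric admittance matrix and confirms that symmetry is preserved:

```python
import numpy as np

# Illustration of the node-elimination step used in the proof: eliminating
# one node from a symmetric admittance matrix (Kron reduction) leaves a
# smaller matrix that is still symmetric. Values are arbitrary.
Y = np.array([[ 3.0, -1.0, -2.0],
              [-1.0,  4.0, -1.5],
              [-2.0, -1.5,  5.0]])      # symmetric 3x3 admittance matrix, siemens

s = 2                                    # index of the node to eliminate
keep = [0, 1]
Y_red = Y[np.ix_(keep, keep)] - np.outer(Y[keep, s], Y[s, keep]) / Y[s, s]

print(Y_red)
print(np.allclose(Y_red, Y_red.T))       # True: symmetry is preserved
```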
== References ==
== Bibliography ==
Bakshi, U.A.; Bakshi, A.V., Electrical Networks, Technical Publications, 2008 ISBN 8184314647.
Guillemin, Ernst A., Introductory Circuit Theory, New York: John Wiley & Sons, 1953 OCLC 535111
Kumar, K. S. Suresh, Electric Circuits and Networks, Pearson Education India, 2008 ISBN 8131713903.
Harris, Vincent G., "Microwave ferrites and applications", ch. 14 in, Mailadil T. Sebastian, Rick Ubic, Heli Jantunen, Microwave Materials and Applications, John Wiley & Sons, 2017 ISBN 1119208521.
Zhang, Kequian; Li, Dejie, Electromagnetic Theory for Microwaves and Optoelectronics, Springer Science & Business Media, 2013 ISBN 3662035537. | Wikipedia/Reciprocity_(electrical_networks) |
A variable-gain amplifier (VGA) or voltage-controlled amplifier (VCA) is an electronic amplifier that varies its gain depending on a control voltage (often abbreviated CV).
VCAs have many applications, including audio level compression, synthesizers and amplitude modulation.
A crude example is a typical inverting op-amp configuration with a light-dependent resistor (LDR) in the feedback loop. The gain of the amplifier then depends on the light falling on the LDR, which can be provided by an LED (an optocoupler). The gain of the amplifier is then controllable by the current through the LED. This is similar to the circuits used in optical audio compressors.
A voltage-controlled amplifier can be realised by first creating a voltage-controlled resistor (VCR), which is used to set the amplifier gain. The VCR is one of the numerous interesting circuit elements that can be produced by using a JFET (junction field-effect transistor) with simple biasing. VCRs manufactured in this way can be obtained as discrete devices, e.g. VCR2N.
Another type of circuit uses operational transconductance amplifiers.
In audio applications logarithmic gain control is used to emulate how the ear hears loudness. David E. Blackmer's dbx 202 VCA, based on the Blackmer gain cell, was among the first successful implementations of a logarithmic VCA.
Analog multipliers are a type of VCA designed to have accurate linear characteristics, the two inputs are identical and often work in all four voltage quadrants, unlike most other VCAs.
== In sound mixing consoles ==
Some mixing consoles come equipped with VCAs in each channel for console automation. The fader, which traditionally controls the audio signal directly, becomes a DC control voltage for the VCA. The maximum voltage available to a fader can be controlled by one or more master faders called VCA groups. The VCA master fader then controls the overall level of all of the channels assigned to it. Typically VCA groups are used to control various parts of the mix; vocals, guitars, drums or percussion. The VCA master fader allows a portion of a mix to be raised or lowered without affecting the blend of the instruments in that part of the mix.
A benefit of a VCA sub-group is that, since it directly affects the gain level of each channel, changes to the VCA sub-group level affect not only the channel level but also all of the levels sent to any post-fader mixes. With traditional audio sub-groups, the sub-group master fader only affects the level going into the main mix and does not affect the level going into the post-fader mixes. Consider the case of an instrument feeding a sub-group and a post-fader mix. If the sub-group master fader is completely lowered, the instrument itself is no longer heard, but it is still heard as part of the post-fader mix, perhaps a reverb or chorus effect.
VCA mixers are known to last longer than non-VCA mixers. Because the VCA controls the audio level instead of the physical fader, decay of the fader mechanism over time does not cause a degradation in audio quality.
VCAs were invented by David E. Blackmer, the founder of dbx, who used them to make dynamic range compressors. The first console using VCAs was the Allison Research computer-automated recording system designed by Paul C. Buff in 1973. Another early VCA capability on a sound mixer was the series of MCI JH500 studio recording desks introduced in 1975. The first VCA mixer for live sound was the PM3000 introduced by Yamaha in 1985.
== Digital variable-gain amplifier ==
A digitally controlled amplifier (DCA) is a variable-gain amplifier that is digitally controlled.
The digitally controlled amplifier uses a stepped approach giving the circuit graduated increments of gain selection. This can be done in several fashions, but certain elements remain in any design.
At its most basic form, a toggle switch strapped across the feedback resistor can provide two discrete gain settings. While this is not a computer-controlled function, it describes the core function. With eight switches and eight resistors in the feedback loop, each switch can enable a particular resistor to control the amplifier's feedback. If each switch was converted to a relay, a microcontroller could be used to activate the relays to attain the desired amount of gain.
Relays can be replaced with Field Effect Transistors of an appropriate type to reduce the mechanical nature of the design. Other devices such as the CD4053 bi-directional CMOS analog multiplexer integrated circuit and digital potentiometers (combined resistor string and MUXes) can serve well as the switching function.
To minimize the number of switches and resistors, combinations of resistance values can be utilized by activating multiple switches.
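As an illustration of this switched-feedback idea (with hypothetical resistor values and an assumed inverting op-amp stage, not a specific commercial design), the gain for each combination of switch states can be computed as −Rf/Rin, where Rf is the parallel combination of the enabled feedback resistors:

```python
# Sketch (hypothetical values) of how a digitally controlled amplifier can set
# gain by switching feedback resistors on an inverting op-amp stage:
# gain = -Rf / Rin, where Rf is the parallel combination of the enabled resistors.
R_IN = 10e3                                     # input resistor, ohms
FEEDBACK_RESISTORS = [20e3, 40e3, 80e3, 160e3]  # one per switch, ohms

def gain(switch_bits):
    """Return the (negative) voltage gain for a tuple of switch states."""
    enabled = [r for r, on in zip(FEEDBACK_RESISTORS, switch_bits) if on]
    if not enabled:
        return 0.0                              # no feedback path selected
    rf = 1.0 / sum(1.0 / r for r in enabled)    # parallel combination
    return -rf / R_IN

print(gain((1, 0, 0, 0)))   # -2.0
print(gain((1, 1, 0, 0)))   # about -1.33 (two feedback resistors in parallel)
```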
== See also ==
Automixer
Mix automation
== References ==
== External links ==
Examples of non-optical VCAs
Some schematics for VCAs
"Vacuum tube VCAs". Archived from the original on 2008-05-13.
University of Toronto undergraduate lecture explaining how to implement a Voltage Controlled Amplifier using an operational amplifier and a photocell at archive.today (archived 2013-02-21)
Allen & Heath's Guide to VCA Sound Desk Mixing at the Wayback Machine (archived 2008-12-03) | Wikipedia/Voltage_controlled_amplifier |
A delta-wye transformer is a type of three-phase electric power transformer design that employs delta-connected windings on its primary and wye/star connected windings on its secondary. A neutral wire can be provided on wye output side. It can be a single three-phase transformer, or built from three independent single-phase units. An equivalent term is delta-star transformer.
== Transformers ==
Delta-wye transformers are common in commercial, industrial, and high-density residential locations, to supply three-phase distribution systems.
An example would be a distribution transformer with a delta primary, running on three 11 kV phases with no neutral or earth required, and a star (or wye) secondary providing a 3-phase supply at 415 V, with the domestic voltage of 240 V available between each phase and the earthed (grounded) neutral point.
The delta winding allows third-harmonic currents to circulate within the transformer, and prevents third-harmonic currents from flowing in the supply line.
Delta-wye transformers introduce a 30, 150, 210, or 330 degree phase shift. Thus they cannot be paralleled with wye-wye (or delta-delta) transformers. However, they can be paralleled with identical configurations and some different configurations of other delta-wye (or wye-delta with some attention) transformers.
== See also ==
Electric power distribution
High-leg delta
Mains power systems
Motor soft starter
== References ==
== External links ==
Three-phase transformer circuits
Three-phase voltage transformations | Wikipedia/Delta-wye_transformer |
Home energy storage refers to residential energy storage devices that store electrical energy locally for later consumption. Usually, electricity is stored in lithium-ion rechargeable batteries, controlled by intelligent software to handle charging and discharging cycles. Companies are also developing smaller flow battery technology for home use. As local energy storage technologies for home use, they are smaller relatives of battery-based grid energy storage and support the concept of distributed generation. When paired with on-site generation, they can virtually eliminate blackouts in an off-the-grid lifestyle.
The stored energy commonly originates from on-site solar photovoltaic system such as rooftop solar panels, which generate direct current electricity during daylight hours. The solar electricity can be backfed to the grid (often rewarded with a feed-in tariff) via a solar inverter, or it can be stored in a home energy storage system as a stand-alone power system for later consumption after sundown. This allows the household to take advantage of the peak solar generation during the day hours (when homes are typically unoccupied with low electricity usage due to the occupants being away at work or at school) and use it later to offset after-hour consumption from the grid, thus avoid the higher power costs during the domestic peak demand hours (usually from mid-afternoon to mid-evening). The home energy storage can also serve as a backup battery in the events of power outage to keep essential lighting, heating, computing and home medical equipment running without disruption.
Small wind turbines are less common but still available for home use as a complement or alternative to solar panels.
== Market trends ==
=== Automotive companies ===
There has been a trend of automotive companies cooperating with other leaders in the energy industry in order to develop home energy storage solutions. This is likely due to a lot of the research and development that goes into powerful batteries having the potential to benefit both automotive and residential industries. Manufacturers such as BMW, in its partnership with Solarwatt, and Nissan, in conjunction with Eaton, are strong examples of this trend. Additionally, BYD and Tesla market own-brand home energy storage devices to their customers.
Despite initial high costs bringing a lot of scrutiny, the home energy storage market is seeing an increase in revenue following a trend of falling prices.
=== Tariffs ===
The units can also be programmed to exploit a differential tariff that provides lower-priced energy during hours of low demand (seven hours from 12:30 am in the case of Britain's Economy 7 tariff) for consumption when prices are higher.
Smart tariffs, stemming from the increasing prevalence of smart meters, will increasingly be paired with home energy storage devices to exploit low off-peak prices, and avoid higher-priced energy at times of peak demand.
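A minimal sketch of such programming, assuming a fixed off-peak window similar to the seven-hour tariff mentioned above (the window boundaries here are illustrative, and a real controller would also account for state of charge and forecast demand):

```python
# Minimal sketch (assumed tariff windows) of charge/discharge scheduling for a
# home battery that exploits a differential tariff such as a seven-hour
# off-peak window starting at 00:30.
OFF_PEAK_START, OFF_PEAK_HOURS = 0.5, 7      # 00:30 for 7 hours (illustrative)

def battery_action(hour_of_day):
    """Return 'charge' during the off-peak window, otherwise 'discharge'."""
    in_window = OFF_PEAK_START <= hour_of_day < OFF_PEAK_START + OFF_PEAK_HOURS
    return "charge" if in_window else "discharge"

print(battery_action(2.0))    # charge    (cheap overnight energy)
print(battery_action(18.0))   # discharge (peak-priced evening demand)
```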
== Advantages ==
=== Overcoming grid losses ===
Transmission of electrical power from power stations to population centres is inherently inefficient, due to transmission losses in electrical grids, particularly within power-hungry dense conurbations where power stations are harder to site. By allowing a greater proportion of on-site generated electricity to be consumed on-site, rather than exported to the energy grid, home energy storage devices can reduce the inefficiencies of grid transport.
=== Energy grid support ===
Home energy storage devices, when connected to a server via the internet, can theoretically be ordered to provide very short-term services to the energy grid:-
Reduced peak-hour demand stress - provision of short-term demand response during periods of peak demand, reducing the need to inefficiently stand up short-term generation assets like diesel generators.
Frequency correction - the provision of ultra short-term corrections, to keep mains frequency within the tolerances required by regulators (e.g., 50 Hz or 60 Hz +/- n%).
=== Reduced reliance on fossil fuels ===
Due to the above efficiencies, and their ability to boost the amount of solar energy consumed on-site, the devices reduce the amount of power generated using fossil fuels, namely natural gas, coal, oil and diesel.
== Disadvantages ==
=== Environmental impact of batteries ===
Lithium-ion batteries, a popular choice due to their relatively high charge cycle and lack of memory effect, are difficult to recycle.
Lead-acid batteries are relatively easier to recycle and, due to the high resale value of the lead, 99% of those sold in the US get recycled. They have much shorter useful lives than a lithium-ion battery of a similar capacity, due to having a lower charge cycle, narrowing the environmental-impact gap. In addition, lead is a toxic heavy metal and the sulfuric acid in the electrolyte has a high environmental impact.
==== Second life for electric vehicle batteries ====
To offset the environmental impact of batteries, some manufacturers extend the useful life of used batteries taken from electric vehicles at the point where the cells will not sufficiently hold charge. Though considered end of life for electric vehicles, the batteries will function satisfactorily in home energy storage devices. Manufacturers supporting this include Nissan, BMW and Powervault.
==== Salt water batteries ====
Home Energy Storage devices can be paired with salt water batteries, which have a lower environmental impact due to their lack of toxic heavy metal and ease of recyclability.
Saltwater batteries are no longer being produced on a commercial level after the bankruptcy of Aquion Energy in March 2017.
=== Grid defection ===
With an increasing number of consumers choosing to implement solar panels that feed energy solely to their home and home batteries, grid defection has continued to grow. As the number of people off grid increases, the cost of the grid will be spread across fewer consumers, making "the incentive to go off-grid only grow". This is seen as an increasingly large disadvantage to home energy storage, as it could lead to the abandoning of a large infrastructure network created to maintain grids, price inflation for those on grid, and a hindrance to the energy transition.
== Other forms of storage ==
Storing energy in batteries is far from the only option. Multiple forms of storing energy exist such as flywheels, hydroelectric, and thermal energy.
=== Pico hydro (hydroelectric) ===
Using a pumped-storage system of cisterns for energy storage and small generators, pico hydro generation may also be effective for "closed loop" home energy generation systems.
=== Thermal energy storage ===
A storage heater or heat bank (Australia) is an electrical heater which stores thermal energy during the evening, or at night when electricity is available at lower cost, and releases the heat during the day as required.
Accumulators, like a hot water storage tank, are another type of storage heater but specifically store hot water for later use.
Some systems may be portable or partially portable for easier transportation to another location, or use during transportation or travel.
== See also ==
Energy storage
Rechargeable battery
UltraBattery
Flow battery
Criticism of vehicle-to-grid
Distributed generation
Backfeeding
Other forms of grid-energy storage
Smart grid
Energy storage as a service
Uninterruptible power supply
Emergency power system
== References == | Wikipedia/Home_energy_storage |
Electrical insulation papers are specific types of paper that are used as electrical insulation. They are used in many applications due to the outstanding electrical properties of pure cellulose. Cellulose is a good insulator and is also polar, having a relative permittivity significantly greater than 1. Electrical paper products are classified by their thickness, with tissue considered papers less than 1.5 mils (0.0381 mm) thickness, and board considered more than 20 mils (0.508 mm) thickness.
== History ==
The use of paper board as electrical insulation paper started in the early-mid 20th century. Since the advent of high-voltage electrical transformers, there has been a need for an insulating material that could withstand the high electrical and physical stresses experienced around a core and windings. Pressboard, a board made by compressing layers of paper together and drying them, was used for insulation purposes in many of the first electrical machines. However, as electrical technology advanced, the need grew for a higher-density material that was capable of insulating larger and higher-voltage transformers. In the late 1920s, Hans Tschudi-Faude became the director of H. Weidmann Limited and began developing a type of pressboard that would meet the higher standards needed for the newer, more powerful transformers. Unlike older methods of pressboard production, Transformerboard was not based on used paper or cotton waste but was made with high-grade sulfate cellulose. The new product was made purely out of cellulose without a resin or binder, improving electrical insulation capabilities, and could be completely dried, degassed, and oil impregnated. The new product became famous under the name Transformerboard. Throughout the 1930s, new methods of production and advances in understanding replaced almost all insulating parts of transformers with parts made from transformer board.
== Production ==
The more demanding application the cleaner the paper needs to be. Paper machines are run with deionised or even distilled process water when producing higher grades of electrical insulation paper. Electrical insulation papers are made from well delignified unbleached kraft pulp.
== Applications ==
=== Cable paper ===
Electrical cables are categorized by the voltage and current used. Telephone cables operate at the moderate voltage and current associated with carrying moderate electric current or transmitting electrical signals. Telephone cables have a large number of conductors that are individually insulated. The paper needs to be thin (30-40 g/m2). A normal power cable needs more insulation, and therefore paper with higher grammage is used, normally 60-190 g/m2. The paper needs to be strong, elastic, uniform and free of holes or debris. These applications are being replaced by plastic insulation.
=== High voltage power cable paper ===
Submarine power cables at very high voltages (> 400 kV) are a very demanding application. The paper is normally 65-155 g/m2 and mostly produced on two ply paper machines. An advantage of using paper in sea cables is that in case of leakage, the paper will swell and prevent water from flowing along the cable.
=== Capacitor tissue ===
This paper is used in capacitors and is an extremely clean and thin tissue paper (normally 6-12 g/m2) that is super calendered. The pulp is clean unbleached kraft pulp that is extremely refined. The paper is made on small paper machines with slow speeds because the stock has to be drained very slowly.
=== Transformer board ===
Transformer board is used mainly in oil-filled transformers where a solid insulating structure is needed. This is a pressboard up to 8 mm in thickness. The board is built up wet on forming cylinders and cut off when at the desired thickness. This makes a sheet with the size of the width and circumference of the drum. The wet sheets are hot- or cold-press dried and finished on separate machines.
== See also ==
Fish paper
Vulcanized fibre
Insulation system
== References == | Wikipedia/Transformerboard |
Toroidal inductors and transformers are inductors and transformers which use magnetic cores with a toroidal (ring or donut) shape. They are passive electronic components, consisting of a circular ring or donut shaped magnetic core of ferromagnetic material such as laminated iron, iron powder, or ferrite, around which wire is wound.
Although closed-core inductors and transformers often use cores with a rectangular shape, the use of toroidal-shaped cores sometimes provides superior electrical performance. The advantage of the toroidal shape is that, due to its symmetry, the amount of magnetic flux that escapes outside the core (leakage flux) can be made low, potentially making it more efficient and making it emit less electromagnetic interference (EMI).
Toroidal inductors and transformers are used in a wide range of electronic circuits: power supplies, inverters, and amplifiers, which in turn are used in the vast majority of electrical equipment: TVs, radios, computers, and audio systems.
== Advantages ==
In general, a toroidal inductor/transformer is more compact than one built on another core shape, because it is made of fewer materials and is mounted with just a centering washer, nuts, and bolts, resulting in a design up to 50% lighter. This is especially the case for power devices.
Because the toroid is a closed-loop core, it will have a higher magnetic field and thus higher inductance and Q factor than an inductor of the same mass with a straight core (solenoid coils). This is because most of the magnetic field is contained within the core. By comparison, with an inductor with a straight core, the magnetic field emerging from one end of the core has a long path through air to enter the other end.
In addition, because the windings are relatively short and wound in a closed magnetic field, a toroidal transformer will have a lower secondary impedance which will increase efficiency, electrical performance and reduce effects such as distortion and fringing.
Due to the symmetry of a toroid, little magnetic flux escapes from the core (leakage flux). Thus, a toroidal inductor/transformer, radiates less electromagnetic interference (EMI) to adjacent circuits and is an ideal choice for highly concentrated environments. Manufacturers have adopted toroidal coils in recent years to comply with increasingly strict international standards limiting the amount of electromagnetic field consumer electronics can produce.
== Total B field confinement ==
In some circumstances, the current in the winding of a toroidal inductor contributes only to the B field inside the windings. It does not contribute to the magnetic B field outside the windings. This is a consequence of symmetry and Ampère's circuital law.
=== Sufficient conditions ===
The absence of circumferential current (the path of circumferential current is indicated by the red arrow in figure 3 of this section) and the axially symmetric layout of the conductors and magnetic materials are sufficient conditions for total internal confinement of the B field. (Some authors prefer to use the H field). Because of the symmetry, the lines of B flux must form circles of constant intensity centered on the axis of symmetry. The only lines of B flux that encircle any current are those that are inside the toroidal winding. Therefore, from Ampere's circuital law, the intensity of the B field must be zero outside the windings.
Figure 3 of this section shows the most common toroidal winding. It fails both requirements for total B field confinement. Looking out from the axis, sometimes the winding is on the inside of the core and sometimes on the outside of the core. It is not axially symmetric in the near region. However, at points a distance of several times the winding spacing, the toroid does look symmetric. There is still the problem of the circumferential current. No matter how many times the winding encircles the core and no matter how thin the wire, this toroidal inductor will still include a one coil loop in the plane of the toroid. This winding will also produce and be susceptible to an E field in the plane of the inductor.
Figures 4-6 show different ways to neutralize the circumferential current. Figure 4 is the simplest and has the advantage that the return wire can be added after the inductor is bought or built.
=== External electric field ===
There will be a distribution of potential along the winding. This can lead to an E-Field in the plane of the toroid and also a susceptibility to an E field in the plane of the toroid, as shown in figure 7. This can be mitigated by using a return winding, as shown in Figure 8. With this winding, each place the winding crosses itself; the two parts will be at equal and opposite polarity, which substantially reduces the E field generated in the plane.
=== Magnetic vector potential ===
See Feynman chapter 14 and 15 for a general discussion of magnetic vector potential. See Feynman page 15-11 for a diagram of the magnetic vector potential around a long thin solenoid which also exhibits total internal confinement of the B field, at least in the infinite limit.
The A field is accurate when using the assumption {\displaystyle \nabla \cdot \mathbf {A} =0}. This would be true under the following assumptions:
1. the Coulomb gauge is used
2. the Lorenz gauge is used and there is no distribution of charge, {\displaystyle \rho =0\,}
3. the Lorenz gauge is used and zero frequency is assumed
4. the Lorenz gauge is used and a non-zero frequency that is low enough to neglect {\displaystyle {\frac {1}{c^{2}}}{\frac {\partial \phi }{\partial t}}} is assumed.
Number 4 will be presumed for the rest of this section and may be referred to the "quasi-static condition".
Although the axially symmetric toroidal inductor with no circumferential current totally confines the B field within the windings, the A field (magnetic vector potential) is not confined. Arrow #1 in the picture depicts the vector potential on the axis of symmetry. Radial current sections a and b are equal distances from the axis but pointed in opposite directions, so they will cancel. Likewise, segments c and d cancel. All the radial current segments cancel. The situation for axial currents is different. The axial current on the outside of the toroid is pointed down and the axial current on the inside of the toroid is pointed up. Each axial current segment on the outside of the toroid can be matched with an equal but oppositely directed segment on the inside of the toroid. The segments on the inside are closer than the segments on the outside to the axis, therefore there is a net upward component of the A field along the axis of symmetry.
Since the equations {\displaystyle \nabla \times \mathbf {A} =\mathbf {B} } and {\displaystyle \nabla \times \mathbf {B} =\mu _{0}\mathbf {j} } (assuming quasi-static conditions, i.e. {\displaystyle {\frac {\partial E}{\partial t}}\rightarrow 0}) have the same form, the lines and contours of A relate to B like the lines and contours of B relate to j. Thus, a depiction of the A field around a loop of B flux (as would be produced in a toroidal inductor) is qualitatively the same as the B field around a loop of current. The figure to the left is an artist's depiction of the A field around a toroidal inductor. The thicker lines indicate paths of higher average intensity (shorter paths have higher intensity so that the path integral is the same). The lines are just drawn to look good and impart the general look of the A field.
=== Transformer action ===
The E and B fields can be computed from the A and {\displaystyle \phi \,} (scalar electric potential) fields:
{\displaystyle \mathbf {B} =\nabla \times \mathbf {A} }
and
{\displaystyle \mathbf {E} =-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}}
and so even if the region outside the windings is devoid of B field, it is filled with non-zero E field.
The quantity {\displaystyle {\frac {\partial \mathbf {A} }{\partial t}}} is responsible for the desirable magnetic field coupling between primary and secondary, while the quantity {\displaystyle \nabla \phi \,} is responsible for the undesirable electric field coupling between primary and secondary. Transformer designers attempt to minimize the electric field coupling. For the rest of this section, {\displaystyle \nabla \phi \,} will be assumed to be zero unless otherwise specified.
Stokes' theorem applies, so that the path integral of A is equal to the enclosed B flux, just as the path integral of B is equal to a constant times the enclosed current.
The path integral of E along the secondary winding gives the secondary's induced EMF (Electro-Motive Force).
{\displaystyle \mathbf {EMF} =\oint _{path}\mathbf {E} \cdot {\rm {d}}l=-\oint _{path}{\frac {\partial \mathbf {A} }{\partial t}}\cdot {\rm {d}}l=-{\frac {\partial }{\partial t}}\oint _{path}\mathbf {A} \cdot {\rm {d}}l=-{\frac {\partial }{\partial t}}\int _{surface}\mathbf {B} \cdot {\rm {d}}s}
which says the EMF is equal to the negative of the time rate of change of the B flux enclosed by the winding, which is the usual result.
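As a rough numerical illustration of this relation (a sketch, not from the source: the flux amplitude, frequency and single-turn winding are invented example values), the induced EMF can be evaluated as the negative time derivative of the enclosed flux:

```python
import numpy as np

# Invented example values (not from the article)
f = 50.0          # supply frequency, Hz
phi_peak = 2e-3   # peak B flux enclosed by the winding, Wb

t = np.linspace(0.0, 2.0 / f, 2000)           # two cycles
phi = phi_peak * np.sin(2.0 * np.pi * f * t)  # enclosed B flux over time

emf = -np.gradient(phi, t)                    # EMF = -d(flux)/dt, per turn

print(f"numerical peak EMF: {abs(emf).max():.3f} V")
print(f"analytic  peak EMF: {2 * np.pi * f * phi_peak:.3f} V")
```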
=== Poynting vector coupling ===
This figure shows the half section of a toroidal transformer. Quasi-static conditions are assumed, so the phase of each field is the same everywhere. The transformer, its windings and all other parts are distributed symmetrically about the axis of symmetry. The windings are such that there is no circumferential current. The requirements are met for full internal confinement of the B field due to the primary current. The core and primary winding are represented by the gray-brown torus. The primary winding is not shown, but the current in the winding at the cross-section surface is shown as gold (or orange) ellipses. The B field caused by the primary current is confined to the region enclosed by the primary winding (i.e. the core). Blue dots on the left-hand cross-section indicate that lines of B flux in the core come out of the left-hand cross-section. On the other cross-section, blue plus signs indicate that the B flux enters there. The E field sourced from the primary currents is shown as green ellipses. The secondary winding is shown as a brown line coming directly down the axis of symmetry. In standard practice, the two ends of the secondary are connected with a long wire that stays well away from the torus, but to maintain the absolute axial symmetry, the entire apparatus is envisioned as being inside a perfectly conductive sphere with the secondary wire "grounded" to the inside of the sphere at each end. The secondary is made of resistance wire, so there is no separate load. The E field along the secondary causes current in the secondary (yellow arrows), which causes a B field around the secondary (shown as blue ellipses). This B field fills space, including the inside of the transformer core, so in the end there is a continuous non-zero B field from the primary to the secondary, if the secondary is not open-circuited. The cross product of the E field (sourced from primary currents) and the B field (sourced from the secondary currents) forms the Poynting vector, which points from the primary toward the secondary.
== Notes ==
== References ==
Feynman, Richard P; Leighton, Robert B; Sands, Matthew (1964), The Feynman Lectures on Physics Volume 2, Addison-Wesley, ISBN 0-201-02117-X
Griffiths, David (1989), Introduction to Electrodynamics, Prentice-Hall, ISBN 0-13-481367-7
Halliday; Resnick (1962), Physics, part two, John Wiley & Sons
Hayt, William (1989), Engineering Electromagnetics (5th ed.), McGraw-Hill, ISBN 0-07-027406-1
Purcell, Edward M. (1965), Electricity and Magnetism, Berkeley Physics Course, vol. II, McGraw-Hill, ISBN 978-0-07-004859-1
Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (1993), Foundations of Electromagnetic Theory, Addison-Wesley, ISBN 0-201-52624-7
== External links ==
Inductor and Transformer Design Guides - Magnetics
Approximate inductance of a toroid includes formula, but assumes circular windings
Design Considerations of Toroid Transformers
Industrial study material: Ferrite Toroid Transformers Design
Voltage transformers (VT), also called potential transformers (PT), are a parallel-connected type of instrument transformer. They are designed to present a negligible load to the supply being measured and have an accurate voltage ratio and phase relationship to enable accurate secondary connected metering.
== Ratio ==
The PT is typically described by its voltage ratio from primary to secondary. A 600:120 PT will provide an output voltage of 120 volts when 600 volts are impressed across its primary winding. Standard secondary voltage ratings are 120 volts and 70 volts, compatible with standard measuring instruments.
== Burden and accuracy ==
Burden and accuracy are usually stated as a combined parameter because they depend on each other.
Metering-style PTs are designed with smaller cores and VA capacities than power transformers. This causes metering PTs to saturate at lower secondary voltage outputs, protecting sensitive connected metering devices from the damaging large voltage spikes found in grid disturbances. A small PT (see nameplate in photo) with a rating of 0.3W, 0.6X indicates that, with up to a W load (12.5 volt-amperes) of secondary burden, the secondary output will be within a 0.3 percent error parallelogram on an accuracy diagram incorporating both phase angle and ratio errors. The same technique applies for the X load (25 volt-amperes) rating, except inside a 0.6% accuracy parallelogram.
== Markings ==
Transformer primary winding (usually high-voltage) connecting wires come in many types. They may be labeled H1, H2 (sometimes H0 if the winding is internally designed to be grounded), with secondary terminals X1, X2, and sometimes an X3 tap may be present. A second isolated winding (Y1, Y2, Y3) and a third (Z1, Z2, Z3) may also be available on the same voltage transformer. The primary may be connected phase to ground or phase to phase. The secondary is usually grounded on one terminal to avoid capacitive induction from damaging low-voltage equipment and for human safety.
== Types of voltage transformers ==
There are three primary types of potential transformers (PT): electromagnetic, capacitor, and optical.
An electromagnetic potential transformer is a wire-wound transformer.
An optical voltage transformer exploits the Faraday effect, rotating polarized light, in optical materials.
A capacitor voltage transformer, or capacitive voltage transformer uses a capacitive voltage divider to reduce the line voltage before applying it to an ordinary electromagnetic transformer.
=== Capacitor voltage transformer ===
A capacitor voltage transformer (CVT), is a transformer used in power systems to step down extra high voltage signals and provide a low voltage signal to the actual VT (voltage transformer) used for operating metering/protective relays due to a lower cost than an electromagnetic PT.
In its most basic form, the device consists of three parts: a voltage divider of two capacitors connected across the transmission line, an inductive element to tune the device to the line frequency, and a voltage transformer to isolate and further step down the voltage for the metering devices or protective relay.
The tuning of the divider to the line frequency makes the overall division ratio less sensitive to changes in the burden of the connected metering or protection devices. The device has at least four terminals: a terminal for connection to the high voltage signal, a ground terminal, and two secondary terminals which connect to the instrumentation or protective relay.
Capacitor C1 is often constructed as a stack of smaller capacitors connected in series. This provides a large voltage drop across C1 and a relatively small voltage drop across C2. As the majority of the voltage drop is on C1, this reduces the required insulation level of the voltage transformer. This makes CVTs more economical than wound voltage transformers at high voltage (over 100 kV), as the latter require more winding and materials.
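As a sketch of the divider principle only (the capacitances and system voltage below are invented, and the tuning reactor and burden are ignored), the unloaded intermediate voltage across C2 is the line voltage scaled by C1/(C1 + C2):

```python
# Hypothetical values; a real CVT design also accounts for the tuning
# reactor, the connected burden and component tolerances.
V_line = 400e3 / 3 ** 0.5   # phase-to-ground voltage of a 400 kV system, V
C1 = 5e-9                   # high-voltage series capacitor stack, F (small capacitance)
C2 = 80e-9                  # lower capacitor, F (larger capacitance)

# Unloaded capacitive divider: the intermediate voltage appears across C2
V_intermediate = V_line * C1 / (C1 + C2)

print(f"line voltage (phase-ground): {V_line / 1e3:.1f} kV")
print(f"intermediate voltage to VT : {V_intermediate / 1e3:.2f} kV")
```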
In communication systems, CVTs in combination with wave traps are used for filtering high-frequency communication signals from power frequency. This forms a carrier communication network throughout the transmission network, to communicate between substations.
== References ==
Energy subsidies are measures that keep prices for customers below market levels, or for suppliers above market levels, or reduce costs for customers and suppliers. Energy subsidies may be direct cash transfers to suppliers, customers, or related bodies, as well as indirect support mechanisms, such as tax exemptions and rebates, price controls, trade restrictions, and limits on market access.
During FY 2016–22, most US federal subsidies were for renewable energy producers (primarily biofuels, wind, and solar), low-income households, and energy-efficiency improvements. During FY 2016–22, nearly half (46%) of federal energy subsidies were associated with renewable energy, and 35% were associated with energy end uses. Federal support for renewable energy of all types more than doubled, from $7.4 billion in FY 2016 to $15.6 billion in FY 2022.
The International Renewable Energy Agency tracked some $634 billion in energy-sector subsidies in 2020, and found that around 70% were fossil fuel subsidies. About 20% went to renewable power generation, 6% to biofuels and just over 3% to nuclear.
== Overview of all sources of energy ==
If governments choose to subsidize one particular source of energy more than another, that choice can impact the environment. That distinction informs the discussion below of energy subsidies for all sources of energy in general.
Main arguments for energy subsidies are:
Security of supply – subsidies are used to ensure adequate domestic supply by supporting indigenous fuel production in order to reduce import dependency, or supporting overseas activities of national energy companies, or to secure the electricity grid.
Environmental and health improvement – subsidies are used to improve health by reducing air pollution, and to fulfill international climate pledges. For example the IEA says the purchase price of heat pumps should be subsidized.
Economic benefits – subsidies in the form of reduced prices are used to stimulate particular economic sectors or segments of the population, e.g. alleviating poverty and increasing access to energy in developing countries. With regards to fossil fuel prices in particular, Ian Parry, the lead author of a 2021 IMF report said, "Some countries are reluctant to raise energy prices because they think it will harm the poor. But holding down fossil fuel prices is a highly inefficient way to help the poor, because most of the benefits accrue to wealthier households. It would be better to target resources towards helping poor and vulnerable people directly."
Employment and social benefits – subsidies are used to maintain employment, especially in periods of economic transition. In 2021, with regards to fossil fuel prices in particular, Ipek Gençsü, at the Overseas Development Institute, said: "[Subsidy reform] requires support for vulnerable consumers who will be impacted by rising costs, as well for workers in industries which simply have to shut down. It also requires information campaigns, showing how the savings will be redistributed to society in the form of healthcare, education and other social services. Many people oppose subsidy reform because they see it solely as governments taking something away, and not giving back."
Main arguments against energy subsidies are:
Some energy subsidies, such as fossil fuel subsidies (oil, coal, and gas subsidies), counter the goal of sustainable development: they may lead to higher consumption and waste, exacerbate the harmful effects of energy use on the environment, create a heavy burden on government finances, weaken the potential for economies to grow, and undermine private and public investment in the energy sector. Also, most benefits from fossil fuel subsidies in developing countries go to the richest 20% of households.
Impede the expansion of distribution networks and the development of more environmentally benign energy technologies, and do not always help the people that need them most.
The study conducted by the World Bank finds that subsidies to the large commercial businesses that dominate the energy sector are not justified. However, under some circumstances it is reasonable to use subsidies to promote access to energy for the poorest households in developing countries. Energy subsidies should encourage access to the modern energy sources, not to cover operating costs of companies. The study conducted by the World Resources Institute finds that energy subsidies often go to capital intensive projects at the expense of smaller or distributed alternatives.
Types of energy subsidies are below. ("Fossil-fuel subsidies generally take two forms. Production subsidies...[and]...consumption subsidies."):
Direct financial transfers – grants to suppliers; grants to customers; low-interest or preferential loans to suppliers.
Preferential tax treatments – rebates or exemption on royalties, duties, supplier levies and tariffs; tax credit; accelerated depreciation allowances on energy supply equipment.
Trade restrictions – quota, technical restrictions and trade embargoes.
Energy-related services provided by government at less than full cost – direct investment in energy infrastructure; public research and development.
Regulation of the energy sector – demand guarantees and mandated deployment rates; price controls; market-access restrictions; preferential planning consent and controls over access to resources.
Failure to impose external costs – environmental externality costs; energy security risks and price volatility costs.
Depletion Allowance – allows a deduction from gross income of up to ~27% for the depletion of exhaustible resources (oil, gas, minerals).
Overall, energy subsidies require coordination and integrated implementation, especially in light of globalization and increased interconnectedness of energy policies, thus their regulation at the World Trade Organization is often seen as necessary.
== Support for new technology ==
Early support of solar power by the United States and Germany greatly helped renewable energy commercialization to reduce greenhouse gas emissions worldwide, but may not have helped local manufacturing. Support for nuclear fusion continues, although it is not expected to be commercially viable in time to contribute to countries' net zero targets. Energy storage research is also supported.
== Fossil fuel subsidies ==
== See also ==
Fossil fuel subsidies
Corporate welfare
Building-integrated photovoltaics
Government subsidies
Feed-in tariff
Gasoline subsidies
Renewable Energy Certificates
Renewable energy commercialization
Renewable energy payments
Stranded assets
Financial incentives for photovoltaics
== References ==
== Bibliography ==
Difiglio, Prof. Carmine; Güray, Bora Şekip; Merdan, Ersin (November 2020). Turkey Energy Outlook. iicec.sabanciuniv.edu (Report). Sabanci University Istanbul International Center for Energy and Climate (IICEC). ISBN 978-605-70031-9-5.
== External links ==
Fossil Fuel Subsidy Tracker- a collaboration between the Organisation for Economic Co-operation and Development (OECD) and the International Institute for Sustainable Development (IISD)
Global Subsidies Initiative - a project of the International Institute for Sustainable Development
OECD-IEA analysis of fossil fuels and other support - OECD
European countries spend billions a year on fossil fuel subsidies, survey shows (2017)
A rotary variable differential transformer (RVDT) is a type of electrical transformer used for measuring angular displacement. The transformer has a rotor which can be turned by an external force. The transformer acts as an electromechanical transducer that outputs an alternating current (AC) voltage proportional to the angular displacement of its rotor shaft.
In operation, an alternating current (AC) voltage is applied to the transformer primary to energize the RVDT. When energized with a constant AC voltage, the transfer function (output voltage vs. shaft angular displacement) of any particular RVDT is linear (to within a specified tolerance) over a specified range of angular displacement.
RVDTs employ contactless, electromagnetic coupling, which provides long life and reliable, repeatable position sensing with high resolution, even under extreme operating conditions. Most RVDTs consist of a wound, laminated stator and a salient two-pole rotor. The stator, which has four slots, carries both the primary winding and the two secondary windings, which may be connected together in some cases. RVDTs offer advantages such as sturdiness, relatively low cost, small size, and low sensitivity to temperature, primary voltage and frequency variations.
== Operation ==
The two induced voltages of the secondary windings, {\displaystyle V_{1}} and {\displaystyle V_{2}}, vary linearly with the mechanical angle of the rotor, θ:
{\displaystyle \theta =G\cdot \left({\frac {V_{1}-V_{2}}{V_{1}+V_{2}}}\right)}
where {\displaystyle G} is the gain or sensitivity. The second voltage can be determined by:
{\displaystyle V_{2}=V_{1}\pm G\cdot \theta }
The difference {\displaystyle V_{1}-V_{2}} gives a proportional voltage:
{\displaystyle \Delta V=2\cdot G\cdot \theta }
and the sum of the voltages is a constant:
{\displaystyle C=\sum V=2\cdot V_{0}}
Consequently, the angular information output by an RVDT is independent of the input voltage, frequency and temperature.
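A minimal numerical check of the ratio relation (a sketch: the gain and the way the two secondary voltages are modelled below are invented, chosen so that the ratio formula holds exactly) shows that the recovered angle does not depend on the excitation level:

```python
def rvdt_angle(v1, v2, gain):
    """Recover the shaft angle from the two secondary voltages,
    theta = G * (V1 - V2) / (V1 + V2)."""
    return gain * (v1 - v2) / (v1 + v2)

G = 40.0           # sensitivity, degrees (hypothetical)
theta_true = 12.5  # mechanical shaft angle, degrees

for v0 in (3.0, 7.0):                 # two different excitation levels
    v1 = v0 * (1 + theta_true / G)    # modelled secondary voltages
    v2 = v0 * (1 - theta_true / G)
    print(v0, rvdt_angle(v1, v2, G))  # prints 12.5 for both excitations
```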
Putting the above mathematical equations into a physical picture, the working of the RVDT can be explained as follows.
Basic RVDT construction and operation is provided by rotating an iron-core bearing supported within a housed stator assembly. The housing is passivated stainless steel. The stator consists of a primary excitation coil and a pair of secondary output coils.
A fixed alternating current excitation is applied to the primary stator coil that is electromagnetically coupled to the secondary coils. This coupling is proportional to the angle of the input shaft. The output pair is structured so that one coil is in-phase with the excitation coil, and the second is 180° out-of-phase with the excitation coil.
When the rotor is in a position that directs the available flux equally in both the in-phase and out-of-phase coils, the output voltages cancel and result in a zero value signal. This is referred to as the electrical zero position or E.Z. When the rotor shaft is displaced from E.Z., the resulting output signals have a magnitude and phase relationship proportional to the direction of rotation.
Because RVDTs perform essentially like a transformer, excitation voltages changes will cause directly proportional changes to the output (transformation ratio). However, the voltage out to excitation voltage ratio will remain constant. Since most RVDT signal conditioning systems measure signal as a function of the transformation ratio (TR), excitation voltage drift beyond 7.5% typically has no effect on sensor accuracy and strict voltage regulation is not typically necessary. Excitation frequency should be controlled within ±1% to maintain accuracy.
Although the RVDT can theoretically operate between ±45°, accuracy decreases quickly beyond ±35°. Thus, its operational limits lie mostly within ±30°, although some types operate up to ±40°, and certain types can operate up to ±60°.
== Varieties ==
An RVDT can also be designed with two laminations, one containing the primary and the other, the secondaries. These types can operate on larger rotations.
A similar transformer is called the Rotary Variable Transformer and contains only one secondary winding giving only one voltage:
{\displaystyle V=G\cdot \theta }
== See also ==
LVDT, Linear-movement counterpart to the RVDT
Rotary encoder
Resolver, an angle sensor which operates over a full 360°
Synchro, a three-phase variant of the resolver
== References ==
A zigzag transformer winding is a special-purpose transformer winding with a zigzag or "interconnected star" connection, such that each output is the vector sum of two (2) phases offset by 120°. It is used as a grounding transformer, creating a missing neutral connection from an ungrounded 3-phase system to permit the grounding of that neutral to an earth reference point; to perform harmonic mitigation, as they can suppress triplet (3rd, 9th, 15th, 21st, etc.) harmonic currents; to supply 3-phase power as an autotransformer (serving as the primary and secondary with no isolated circuits); and to supply non-standard, phase-shifted, 3-phase power.
Nine-winding, three-phase transformers typically have three primaries and six identical secondary windings, which can be used in zigzag winding connection as pictured.
A conventional six-winding grounding transformer or zigzag bank, with the same winding and core quantity as a conventional three-phase transformer, can also be used in zigzag winding connection.
In all cases the first coil on each zigzag winding core is connected contrariwise to the second coil on the next core. The second coils are then all tied together to form the neutral, and the phases are connected to the primary coils. Each phase, therefore, couples with each other phase, and the voltages cancel out. As such, there would be negligible current through the neutral point, as the Zig-Zag has a high positive and negative sequence impedance, with a low zero-sequence impedance which can be tied to ground.
Each of the three "limbs" are split into two sections. The two halves of each limb have an equal number of turns and are wound in opposite directions. With the neutral grounded, during a phase-to-ground short fault, a third of the current returns to the fault current, and the remainder must go through two of the three phases when used to derive a grounding point from a delta source.
If one or more phases fault to earth, the voltage applied to each phase of the transformer is no longer in balance; fluxes in the windings no longer oppose. (Using symmetrical components, this is Ia0 = Ib0 = Ic0.) Zero-sequence (earth fault) current flows between the transformer's neutral and the faulting phase. The purpose of a zigzag transformer in this application is to provide a return path for earth faults on delta-connected systems. With negligible current in the neutral under normal conditions, an undersized transformer (unable to carry a continuous fault load) may be used, as only a short-time rating is required, provided the defective load will be automatically disconnected in a fault condition. The transformer's impedance should not be too low for the desired maximum fault current. Impedance can be added after the secondaries are summed to limit maximum fault currents (the 3Io path).
A combination of Y (wye or star), delta, and zigzag windings may be used to achieve a vector phase shift. For example, an electrical network may have a transmission network of 110 kV/33 kV star/star transformers, with 33 kV/11 kV delta/star for the high voltage distribution network. If a transformation is required directly between the 110 kV/11 kV network an option is to use a 110 kV/11 kV star/delta transformer. The problem is that the 11 kV delta no longer has an earth reference point. Installing a zigzag transformer near the secondary side of the 110 kV/11 kV transformer provides the required earth reference point.
== Applications ==
Zigzag transformers are often required by utilities when connecting three-phase inverters (usually for renewable energy) to the grid to provide a stable neutral voltage and prevent excessive phase-to-ground voltages. This also protects the switching devices inside the inverters, which are usually insulated-gate bipolar transistors (IGBTs).
== References ==
Transformer oil, a type of insulating and cooling oil used in transformers and other electrical equipment, needs to be tested periodically to ensure that it is still fit for purpose. This is because it tends to deteriorate over time. Testing sequences and procedures are defined by various international standards, many of them set by ASTM. Transformer oil testing consists of measuring breakdown voltage and other physical and chemical properties of samples of the oil, either in a laboratory or using portable test equipment on-site.
== Motivation for testing ==
The transformer oil (insulation oil) of voltage transformers and current transformers fulfills the purpose of insulating as well as cooling. Thus, the dielectric quality of transformer oil is essential to secure operation of a transformer.
As transformer oil deteriorates through aging and moisture ingress, transformer oil should, depending on economics, transformer duty and other factors, be tested periodically. Electric utility companies have a vested interest in periodic oil testing because transformers represent a large proportion of their total assets. Through such testing, transformers' life can be substantially increased, thus delaying new investment of replacement transformer assets.
== On-site testing ==
Recently, time-consuming testing procedures in test labs have been replaced by on-site oil testing procedures. There are various manufacturers of portable oil testers. With lightweight devices in the range of 20 to 40 kg, tests up to 100 kV rms can be performed and reported on-site automatically. Some of them are even battery-powered and come with accessories.
== Breakdown voltage testing procedure ==
To assess the insulating property of dielectric transformer oil, a sample of the transformer oil is taken and its breakdown voltage is measured. The lower the resulting breakdown voltage, the poorer the quality of the transformer oil.
The transformer oil is filled in the vessel of the testing device. Two standard-compliant test electrodes with a typical clearance of 2.5 mm are surrounded by the dielectric oil.
A test voltage is applied to the electrodes and is continuously increased up to the breakdown voltage with a constant, standard-compliant slew rate of e.g. 2 kV/s.
At a certain voltage level breakdown occurs in an electric arc, leading to a collapse of the test voltage.
An instant after ignition of the arc, the test voltage is switched off automatically by the testing device. Ultra fast switch off is highly desirable, as the carbonisation due to the electric arc must be limited to keep the additional pollution as low as possible.
The transformer oil testing device measures and reports the root mean square value of the breakdown voltage.
After the transformer oil test is completed, the insulation oil is stirred automatically and the test sequence is performed repeatedly: typically 5 repetitions, depending on the standard.
As a result, the breakdown voltage is calculated as the mean value of the individual measurements.
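A trivial sketch of that reporting step (the individual readings below are invented):

```python
# Hypothetical breakdown voltages from repeated tests on one oil sample, kV rms
breakdown_kv = [62.1, 58.4, 60.7, 57.9, 63.3, 59.6]

mean_kv = sum(breakdown_kv) / len(breakdown_kv)
print(f"reported breakdown voltage: {mean_kv:.1f} kV "
      f"(mean of {len(breakdown_kv)} measurements)")
```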
== Types of test ==
Color; e.g., ASTM D1500.
Dielectric breakdown voltage; e.g., D 877, ASTM D1816
Dissolved gas analysis; e.g., ASTM D3612
Dissolved metals; e.g., ASTM D7151
Flash point, fire point; e.g., ASTM D92
Interfacial tension; e.g. D 971
Furanic compounds; e.g., ASTM D5837
Karl Fischer moisture; e.g., ASTM D1533
Liquid power factor; e.g., ASTM D924
Neutralization number; e.g., ASTM D974
Oxidation inhibitor content; e.g., ASTM D2668
Polychlorinated biphenyls content; e.g., ASTM D4059
Relative density (specific gravity); e.g., D 1298, ASTM D1524
Resistivity; e.g., ASTM D1169
Visual examination; e.g., D1524
=== International transformer oil testing standards ===
VDE370-5/96
OVE EN60156
IEC 60156/97
ASTM1816-04-1
ASTM1816-04-2
ASTM877-02
ASTM877-02B
AS1767.2.1
BS EN60156
NEN 10 156
NF EN60156
PA SEV EN60156
SABS EN60156
UNE EN60156
IS:6792
IS 335
== See also ==
Electrical measurements
Severity factor
== References ==
A Scott-T transformer or Scott connection is a type of circuit used to produce two-phase electric power (2 φ, 90 degree phase rotation) from a three-phase (3 φ, 120 degree phase rotation) source, or vice versa. The Scott connection evenly distributes a balanced load between the phases of the source. The Scott three-phase transformer was invented by Westinghouse engineer Charles F. Scott in the late 1890s to bypass Thomas Edison's more expensive rotary converter and thereby permit two-phase generator plants to drive three-phase motors.
== Interconnection ==
At the time of the invention, two-phase motor loads also existed and the Scott connection allowed connecting them to newer three-phase supplies with the currents equal on the three phases. This was valuable for getting equal voltage drop and thus feasible regulation of the voltage from the electric generator (the phases cannot be varied separately in a three-phase machine). Nikola Tesla's original polyphase power system was based on simple-to-build two-phase four-wire components. However, as transmission distances increased, the more transmission-line efficient three-phase system became more common. (Three phase power can be transmitted with only three wires, where the two-phase power systems required four wires, two per phase.) Both 2 φ and 3 φ components coexisted for a number of years and the Scott-T transformer connection allowed them to be interconnected.
== Technical details ==
Assuming the desired voltage is the same on the two and three phase sides, the Scott-T transformer connection (shown right) consists of a centre-tapped 1:1 ratio main transformer, T1, and a √3/2(≈86.6%) ratio teaser transformer, T2. The centre-tapped side of T1 is connected between two of the phases on the three-phase side. Its centre tap then connects to one end of the lower turn count side of T2, the other end connects to the remaining phase. The other side of the transformers then connect directly to the two pairs of a two-phase four-wire system.
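A small phasor check of this connection (a sketch, not from the article; the per-unit phasors are invented) confirms that the main and teaser outputs come out equal in magnitude and 90° apart:

```python
import cmath
import math

# Balanced three-phase line-to-neutral phasors, 1.0 per unit (hypothetical)
Va = cmath.rect(1.0, 0.0)
Vb = cmath.rect(1.0, -2.0 * math.pi / 3.0)
Vc = cmath.rect(1.0, 2.0 * math.pi / 3.0)

# Main transformer: connected across phases A and B, 1:1 ratio
main_out = Va - Vb

# Teaser transformer: from phase C to the centre tap of A-B,
# with a sqrt(3)/2 (about 86.6 %) turns ratio
teaser_in = Vc - (Va + Vb) / 2.0
teaser_out = teaser_in / (math.sqrt(3) / 2.0)

print(abs(main_out), abs(teaser_out))                    # equal magnitudes
print(math.degrees(cmath.phase(teaser_out / main_out)))  # 90 degrees apart
```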
=== Unbalanced loads ===
Two-phase motors draw constant power, just as three-phase motors do, so a balanced two-phase load is converted to a balanced three-phase load. However if a two-phase load is not balanced (more power drawn from one phase than the other), no arrangement of transformers (including the Scott-T transformers) can restore balance: Unbalanced current on the two-phase side causes unbalanced current on the three-phase side. Since the typical two-phase load was a motor, the current in the two phases was presumed inherently equal during the Scott-T development.
In modern times people have tried to revive the Scott connection as a way to power single-phase electric railways from three-phase utility supplies. This will not result in balanced three-phase currents unless the loads on the two single-phase sections happen to be equal. The instantaneous difference in loading on the two sections will be seen as an imbalance in the three-phase supply; there is no way to smooth it out with transformers.
=== Back to back arrangement ===
The Scott-T transformer connection may also be used in a back-to-back T-to-T arrangement for a three-phase to three-phase connection. This saves cost in lower-power transformers because a two-coil T-connected primary feeds a two-coil T-connected secondary, instead of the traditional three-coil primary and three-coil secondary. In this arrangement the X0 neutral tap is part way up on the secondary teaser transformer (see right). The voltage stability of this T-to-T arrangement as compared to the traditional three-coil primary to three-coil secondary transformer is questioned, as the "per unit" impedance of the two windings (primary and secondary, respectively) is not the same in a T-to-T configuration, whereas it is the same for the three windings (primary and secondary, respectively) in a three-transformer configuration, if the three transformers are identical.
Three-phase to three-phase (also called "T-connected") distribution transformers are seeing increasing applications. The primary must be delta-connected (Δ), but the secondary may be either delta or "wye"-connected (Y), at the customer's option, with X0 providing the neutral for the "wye" case. Taps for either case are usually provided. The customary maximum capacity of such a distribution transformer is 333 kVA (a third of a megawatt at unity power factor).
== See also ==
Alternating current
Polyphase coil
Symmetrical components
High-leg delta
== References ==
A trigger transformer is a small, usually ferrite cored transformer or autotransformer used in applications requiring a high voltage pulse, typically to start ionization of a gas to allow a current to pass.
== Uses ==
Trigger transformer cores may be utilized in a unipolar (electromagnetic flux strictly positive or negative) or bipolar (swinging between positive and negative) manner. Applications also differ in whether or not they saturate the core (termed a saturating trigger transformer).
Strobe lights, for instance, operate the core in a unipolar, unsaturated mode. Capacitors are charged to approximately 300 volts, at which point a second capacitor pulses voltage through the transformer, achieving the approximately 2000-6000 volts (depending upon the characteristics of the specific flash tube) necessary to overcome the resistance of the inert gas (such as xenon) between the electrodes, ionizing it.
Trigger transformers operate by means of a secondary coil with hundreds, even thousands, of turns of very fine copper wire, trading current for voltage.
Much like in lightning, this plasma has much lower resistance, and the capacitor can discharge rapidly across it. The capacitors begin charging again, starting the cycle anew and giving rise to their characteristic periodic flashing. Capacitors alone would result in a continuous arc. Having achieved the voltage necessary to cause dielectric breakdown, they would maintain it indefinitely.
So-called saturating trigger transformers find use in circuits that intentionally utilize core saturation and/or operate in a bipolar fashion, such as DC-to-AC power inverters.
Inductors are also commonly used in place of a trigger transformer; however, they are not considered transformers themselves, although similar in operation.
== References ==
A padmount or pad-mounted transformer is a ground-mounted electric power distribution transformer in a locked steel cabinet mounted on a concrete pad. Since all energized connection points are securely enclosed in a grounded metal housing, a padmount transformer can be installed in places that do not have room for a fenced enclosure. Padmount transformers are used with underground electric power distribution lines at service drops, to step down the primary voltage on the line to the lower secondary voltage supplied to utility customers. A single transformer may serve one large building or many homes.
Pad-mounted transformers are made in power ratings from around 15 to around 5000 kVA and often include built-in fuses and switches. Primary power cables may be connected with elbow connectors, which can be operated when energized using a hot stick and allows for flexibility in repair and maintenance.
== Design ==
Pad-mount transformers are available in various electrical and mechanical configurations. Pad-mount transformers operate on medium-voltage distribution systems, up to about 35 kV. The low-voltage winding matches the customer requirement and may be single-phase or three-phase.
Pad-mount transformers are (nearly always) oil-filled units and so must be mounted outdoors only. The core and coils are enclosed in a steel oil-filled tank, with terminals for the transformer accessible in an adjacent lockable wiring cabinet. The wiring cabinet has high and low-voltage wiring compartments. High and low-voltage underground cables from below enter the terminal compartments directly. The top of the tank has a cover secured with carriage bolt-nut assemblies. The wiring cabinet has sidewalls on two ends with doors that open sideways to expose the high and low voltage wiring compartments.
Pad-mount transformers have self-protecting fuses consisting of a bayonet-mount fuse placed in the high-voltage compartment, in series with a backup high-energy current-limiting fuse. The bayonet-mount fuse protects against secondary faults and transformer overload and is a field-replaceable device. The backup current-limiting fuse operates only during transformer failure; therefore, it is not field replaceable. These transformers also serve the conventional low voltage fusing requirements.
The use of a polymeric cable and load break elbows enable switching and isolation to be carried out in the HV chamber in what is known as a "dead front" environment, i.e., all terminations are fully screened and watertight.
Single- and three-phase pad-mounted transformers are used in underground industrial and residential power distribution systems, where there is a need for safe, reliable, and aesthetically appealing transformer design. Their enclosed construction allows the installation of pad-mount transformers in public areas without protective fencing. Pad-mount transformers are usually located on the street easements and supply multiple households in residential areas.
While most traditional pad-mount transformers are fixed on a concrete 'pad,' today, small single-phase designs are also available with the transformer already mounted on a 'polypad' base to be mounted on hard ground, connected, and switched on.
== Standards ==
American National Standards Institute /Institute of Electrical and Electronics Engineers (ANSI/IEEE)
ANSI C57.12.00 – Standard General Requirements for Liquid-Immersed Distribution, Power, and Regulating Transformers
ANSI C57.12.22 – Standard for Transformers - Pad-Mounted, Compartmental-Type, Self-Cooled, Three-Phase Distribution Transformers with High-Voltage Bushings, 2500 kVA and Smaller: High Voltage, 34,500Grd/19,920 Volts, and Below; Low-Voltage, 480 Volts, and Below - Requirements
ANSI C57.12.26 (Withdrawn) – Standard for Transformers-Pad-Mounted, Compartmental-Type, Self-Cooled, Three-Phase Distribution Transformers for Use with Separable Insulated High-Voltage Connectors, 34,500 Grd/19,920 Volts and Below; 2500 kVA and Smaller
ANSI C57.12.28 – Standard for Switchgear and Transformers, Pad-Mounted Equipment – Enclosure Integrity
National Electrical Manufacturers Association (NEMA) Standards
NEMA TR 1-1993 (R2000) – Transformers, Regulators and Reactors, Table 0-2 Audible Sound Levels for Liquid-Immersed Power Transformers.
NEMA 260-1996 (2004) – Safety Labels for Pad-Mounted Switchgear and Transformers Sited in Public Areas
== References == | Wikipedia/Pad-mounted_transformer |
Droop speed control is a control mode used for AC electrical power generators, whereby the power output of a generator reduces as the line frequency increases. It is commonly used as the speed control mode of the governor of a prime mover driving a synchronous generator connected to an electrical grid. It works by controlling the rate of power produced by the prime mover according to the grid frequency. With droop speed control, when the grid is operating at its maximum operating frequency, the prime mover's power is reduced to zero; when the grid is at its minimum operating frequency, the power is set to 100%; and intermediate power values apply at other operating frequencies.
This mode allows synchronous generators to run in parallel, so that loads are shared among generators with the same droop curve in proportion to their power rating.
In practice, the droop curves that are used by generators on large electrical grids are not necessarily linear or the same, and may be adjusted by operators. This permits the ratio of power used to vary depending on load, so for example, base load generators will generate a larger proportion at low demand. Stability requires that over the operating frequency range the power output is a monotonically decreasing function of frequency.
Droop speed control can also be used by grid storage systems. With droop speed control those systems will remove energy from the grid at higher than average frequencies, and supply it at lower frequencies.
== Linear ==
The frequency of a synchronous generator is given by
{\displaystyle F={\frac {PN}{120}}}
where
F, frequency (in Hz),
P, number of poles,
N, speed of generator (in RPM)
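As a quick illustration (simple arithmetic with example pole counts, not from the article), on a 60 Hz grid the formula gives the familiar synchronous speeds:

```python
def synchronous_speed_rpm(frequency_hz, poles):
    # N = 120 * F / P, rearranged from F = P * N / 120
    return 120.0 * frequency_hz / poles

for poles in (2, 4, 6):
    print(poles, synchronous_speed_rpm(60.0, poles))  # 3600.0, 1800.0, 1200.0 rpm
```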
The frequency (F) of a synchronous generator is directly proportional to its speed (N). When multiple synchronous generators are connected in parallel to the electrical grid, the frequency is fixed by the grid, since individual power output of each generator will be small compared to the load on a large grid. Synchronous generators connected to the grid all run at the same frequency but they can run at various speeds because they can differ in the number of poles (P).
A speed reference as percentage of actual speed is set in this mode. As the generator is loaded from no load to full load, the actual speed of the prime mover tends to decrease. In order to increase the power output in this mode, the prime mover speed reference is increased. Because the actual prime mover speed is fixed by the grid, this difference in speed reference and actual speed of the prime mover is used to increase the flow of working fluid (fuel, steam, etc.) to the prime mover, and hence power output is increased. The reverse will be true for decreasing power output. The prime mover speed reference is always greater than actual speed of the prime mover. The actual speed of the prime mover is allowed to "droop" or decrease with respect to the reference, and so the name.
For example, if the turbine is rated at 3000 rpm, and the machine speed reduces from 3000 rpm to 2880 rpm when it is loaded from no load to base load, then the droop % is given by
{\displaystyle \mathrm {Droop\%} ={\frac {\mathrm {No\ load\ speed-Full\ load\ speed} }{\mathrm {No\ load\ speed} }}}
= (3000 – 2880) / 3000
= 4%
In this case, speed reference will be 104% and actual speed will be 100%. For every 1% change in the turbine speed reference, the power output of the turbine will change by 25% of rated for a unit with a 4% droop setting. Droop is therefore expressed as the percentage change in (design) speed required for 100% governor action.
As frequency is fixed on the grid, and so actual turbine speed is also fixed, the increase in turbine speed reference will increase the error between reference and actual speed. As the difference increases, fuel flow is increased to increase power output, and vice versa. This type of control is referred to as "straight proportional" control. If the entire grid tends to be overloaded, the grid frequency and hence actual speed of generator will decrease. All units will see an increase in the speed error, and so increase fuel flow to their prime movers and power output. In this way droop speed control mode also helps to hold a stable grid frequency. The amount of power produced is strictly proportional to the error between the actual turbine speed and speed reference.
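A minimal sketch of this straight-proportional behaviour (the numbers, the per-unit formulation and the clamping to 0-100 % are illustrative assumptions, not from the source):

```python
def droop_power_command(f_actual_hz, f_reference_hz, droop, f_nominal_hz=60.0):
    """Per-unit power command of a droop-controlled unit.
    droop is the per-unit frequency change for a 100 % power change (e.g. 0.04)."""
    error_pu = (f_reference_hz - f_actual_hz) / f_nominal_hz
    return min(max(error_pu / droop, 0.0), 1.0)  # clamp to 0..100 %

# With 4 % droop and the reference raised to 104 %, the unit carries full load
# at nominal grid frequency; if grid frequency rises, its output backs off.
print(droop_power_command(60.0, 62.4, 0.04))  # 1.0  -> full load
print(droop_power_command(60.6, 62.4, 0.04))  # 0.75 -> output reduced
```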
It can be mathematically shown that if all machines synchronized to a system have the same droop speed control, they will share load proportionate to the machine ratings.
For example, how fuel flow is increased or decreased in a GE-design heavy duty gas turbine can be given by the formula,
FSRN = (FSKRN2 * (TNR-TNH)) + FSKRN1
Where,
FSRN = Fuel Stroke Reference (Fuel supplied to Gas Turbine) for droop mode
TNR = Turbine Speed Reference
TNH = Actual Turbine Speed
FSKRN2 = Constant
FSKRN1 = Constant
The above formula is simply the equation of a straight line (y = mx + b).
Multiple synchronous generators having equal % droop setting connected to a grid will share the change in grid load in proportion of their base load.
For stable operation of the electrical grid of North America, power plants typically operate with a four or five percent speed droop. By definition, with 5% droop the full-load speed is 100% and the no-load speed is 105%.
Normally the changes in speed are minor due to inertia of the total rotating mass of all generators and motors running on the grid. Adjustments in power output for a particular prime mover and generator combination are made by slowly raising the droop curve by increasing the spring pressure on a centrifugal governor or by an engine control unit adjustment, or the analogous operation for an electronic speed governor. All units to be connected to a grid should have the same droop setting, so that all plants respond in the same way to the instantaneous changes in frequency without depending on outside communication.
Next to the inertia given by the parallel operation of synchronous generators, the frequency speed droop is the primary instantaneous parameter in control of an individual power plant's power output (kW).
== See also ==
Electric power transmission
Wide area synchronous grid
Dynamic demand (electric power)
== References ==
== Further reading ==
Alfred Engler: Applicability of droops in low voltage grids. International Journal of Distributed Energy Resources, Vol 1, No 1, 2005.
The parametric transformer (or paraformer) is a particular type of transformer. It transfers power from the primary to the secondary windings not by mutual inductance coupling but by a variation of a parameter in its magnetic circuit. It was first described by Wanlass et al. in 1968.
Assuming Faraday's law of induction,
{\displaystyle v(t)=L{\frac {di(t)}{dt}}}
it is possible to obtain a voltage at the secondary winding terminals also through a variation of the inductance, so that
{\displaystyle v(t)=I{\frac {dL(t)}{dt}}}
This can be accomplished, for example, by modulating the saturation of the core by means of an applied variable magnetic field. It works even if the magnetic coupling between the primary and secondary windings is zero (when the fluxes are mutually orthogonal) (Burian 1972, p. 278).
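As a rough numerical illustration of the second relation (all values below are invented), a sinusoidally modulated inductance carrying a fixed current induces a voltage proportional to dL/dt:

```python
import numpy as np

# Invented parameter-modulation example
I = 2.0               # current held in the winding, A
L0, dL = 10e-3, 2e-3  # mean inductance and modulation depth, H
f_mod = 120.0         # modulation frequency of the core parameter, Hz

t = np.linspace(0.0, 2.0 / f_mod, 2000)
L = L0 + dL * np.sin(2.0 * np.pi * f_mod * t)  # time-varying inductance

v = I * np.gradient(L, t)                      # v(t) = I * dL/dt

print(f"peak induced voltage: {abs(v).max():.3f} V")
print(f"analytic prediction : {I * 2 * np.pi * f_mod * dL:.3f} V")
```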
== Further reading ==
S. D. Wanlass, C. L. Wanlass, L. K. Wanlass, "The Paraformer; A new passive power conversion device", IEEE Wescon Tech. Papers, 1968.
Burian, Kurt, "Theory and analysis of a parametrically excited passive power converter", IEEE Transactions on Industry Applications, vol. IA-8, iss. 3, pp. 278–282, May 1972.
Power, Henry Al, "Analysis of a passive power converter as a nonlinear feedback system", IEEE Transactions on Industry Applications, vol. IA-11, iss. 5, pp. 556–559, September 1975.
Bessho, K.; Matsumura, F.; Aoki, Y.; Suzuki, M., "Theory and analysis of a new power converter with center-tap reactor circuit", IEEE Transactions on Magnetics, vol. 11, iss. 5, pp. 1558–1560, September 1975.
Tez, E. Salih; Smith, I. R., The Parametric transformer: a power conversion device demonstrating the principles of parametric excitation", IEEE Transactions on Education, vol. 27, iss. 2, pp. 56–57, May 1984 (includes history of the principle).
Newark Electronics, "Newark-Denver Electronic Supply Co., Wanlass Parax Voltage Regulators and Paraformer" Wanlass "Parax Voltage Regulators and Paraformer," Newark-Denver Electronic Supply Company 1970 catalog pg. 252. Archived at Pro Audio design Forum.
In electrical engineering, an autotransformer is an electrical transformer with only one winding. The "auto" (Greek for "self") prefix refers to the single coil acting alone. In an autotransformer, portions of the same winding act as both the primary winding and secondary winding sides of the transformer. In contrast, an ordinary transformer has separate primary and secondary windings that are not connected by an electrically conductive path between them.
The autotransformer winding has at least three electrical connections to the winding. Since part of the winding does "double duty", autotransformers have the advantages of often being smaller, lighter, and cheaper than typical dual-winding transformers, but the disadvantage of not providing electrical isolation between primary and secondary circuits. Other advantages of autotransformers include lower leakage reactance, lower losses, lower excitation current, and increased VA rating for a given size and mass.
An example of an application of an autotransformer is one style of traveler's voltage converter, that allows 230-volt devices to be used on 120-volt supply circuits, or the reverse. An autotransformer with multiple taps may be applied to adjust the voltage at the end of a long distribution circuit to correct for excess voltage drop; when automatically controlled, this is one example of a voltage regulator.
== Operation ==
An autotransformer has a single winding with two end terminals and one or more terminals at intermediate tap points. It is a transformer in which the primary and secondary coils have part of their turns in common. The portion of the winding shared by both the primary and secondary is the common section. The portion of the winding not shared by both the primary and secondary is the series section. The primary voltage is applied across two of the terminals. The secondary voltage is taken from two terminals, one terminal of which is usually in common with a primary voltage terminal.
Since the volts-per-turn is the same in both windings, each develops a voltage in proportion to its number of turns. In an autotransformer, part of the output current flows directly from the input to the output (through the series section), and only part is transferred inductively (through the common section), allowing a smaller, lighter, cheaper core to be used as well as requiring only a single winding. However the voltage and current ratio of autotransformers can be formulated the same as other two-winding transformers:
{\displaystyle {\frac {V_{1}}{V_{2}}}={\frac {N_{1}}{N_{2}}}=a}
where {\displaystyle 0<V_{2}<V_{1}}.
The ampere-turns provided by the series section of the winding:
{\displaystyle F_{S}=(N_{1}-N_{2})I_{1}=\left(1-{\frac {1}{a}}\right)N_{1}I_{1}}
The ampere-turns provided by the common section of the winding:
{\displaystyle F_{C}=N_{2}(I_{2}-I_{1})={\frac {N_{1}}{a}}(I_{2}-I_{1})}
For ampere-turn balance, {\displaystyle F_{S}=F_{C}}:
{\displaystyle \left(1-{\frac {1}{a}}\right)N_{1}I_{1}={\frac {N_{1}}{a}}(I_{2}-I_{1})}
Therefore:
{\displaystyle {\frac {I_{1}}{I_{2}}}={\frac {1}{a}}={\frac {N_{2}}{N_{1}}}}
One end of the winding is usually connected in common to both the voltage source and the electrical load. The other end of the source and load are connected to taps along the winding. Different taps on the winding correspond to different voltages, measured from the common end. In a step-down transformer the source is usually connected across the entire winding while the load is connected by a tap across only a portion of the winding. In a step-up transformer, conversely, the load is attached across the full winding while the source is connected to a tap across a portion of the winding. For a step-up transformer, the subscripts in the above equations are reversed where, in this situation,
{\displaystyle N_{2}} and {\displaystyle V_{2}} are greater than {\displaystyle N_{1}} and {\displaystyle V_{1}}, respectively.
As in a two-winding transformer, the ratio of secondary to primary voltages is equal to the ratio of the number of turns of the winding they connect to. For example, connecting the load between the middle of the winding and the common terminal end of the winding of the autotransformer will result in the output load voltage being 50% of the primary voltage. Depending on the application, that portion of the winding used solely in the higher-voltage (lower current) portion may be wound with wire of a smaller gauge, though the entire winding is directly connected.
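A small worked example of these relations (the turns, voltage and load current are invented): for a step-down autotransformer the series and common sections carry different currents, but their ampere-turns balance:

```python
# Hypothetical ideal step-down autotransformer
N1, N2 = 500, 400  # total turns and common-section (tapped) turns
V1 = 230.0         # primary voltage, V
I2 = 10.0          # secondary (load) current, A

a = N1 / N2        # turns ratio
V2 = V1 / a        # secondary voltage: 184 V
I1 = I2 / a        # primary current: 8 A

F_series = (N1 - N2) * I1  # ampere-turns of the series section
F_common = N2 * (I2 - I1)  # ampere-turns of the common section

print(f"V2 = {V2:.1f} V, I1 = {I1:.1f} A")
print(f"series ampere-turns = {F_series:.0f}, common ampere-turns = {F_common:.0f}")
```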
If one of the center-taps is used for the ground, then the autotransformer can be used as a balun to convert a balanced line (connected to the two end taps) to an unbalanced line (the side with the ground).
An autotransformer does not provide electrical isolation between its windings as an ordinary transformer does; if the neutral side of the input is not at ground voltage, the neutral side of the output will not be either. A failure of the isolation of the windings of an autotransformer can result in full input voltage applied to the output. Also, a break in the part of the winding that is used as both primary and secondary will result in the transformer acting as an inductor in series with the load (which under light load conditions may result in nearly full input voltage being applied to the output). These are important safety considerations when deciding to use an autotransformer in a given application.
Because it requires both fewer windings and a smaller core, an autotransformer for power applications is typically lighter and less costly than a two-winding transformer, up to a voltage ratio of about 3:1; beyond that range, a two-winding transformer is usually more economical.
In three phase power transmission applications, autotransformers have the limitations of not suppressing harmonic currents and as acting as another source of ground fault currents. A large three-phase autotransformer may have a "buried" delta winding, not connected to the outside of the tank, to absorb some harmonic currents.
In practice, losses mean that both standard transformers and autotransformers are not perfectly reversible; one designed for stepping down a voltage will deliver slightly less voltage than required if it is used to step up. The difference is usually slight enough to allow reversal where the actual voltage level is not critical.
Like multiple-winding transformers, autotransformers use time-varying magnetic fields to transfer power. They require alternating currents to operate properly and will not function on direct current. Because the primary and secondary windings are electrically connected, an autotransformer will allow current to flow between windings and therefore does not provide AC or DC isolation.
== Applications ==
=== Power transmission and distribution ===
Autotransformers are frequently used in power applications to interconnect systems operating at different voltage classes, for example 132 kV to 66 kV for transmission. Another application in industry is to adapt machinery built (for example) for 480 V supplies to operate on a 600 V supply. They are also often used for providing conversions between the two common domestic mains voltage bands in the world (100–130 V and 200–250 V). The links between the UK 400 kV and 275 kV "Super Grid" networks are normally three phase autotransformers with taps at the common neutral end. On long rural power distribution lines, special autotransformers with automatic tap-changing equipment are inserted as voltage regulators, so that customers at the far end of the line receive the same average voltage as those closer to the source. The variable ratio of the autotransformer compensates for the voltage drop along the line.
A special form of autotransformer called a zigzag transformer is used to provide grounding on three-phase systems that otherwise have no connection to ground. A zigzag transformer provides a path for current that is common to all three phases (so-called zero-sequence current).
=== Audio system ===
In audio applications, tapped autotransformers are used to adapt speakers to constant-voltage audio distribution systems, and for impedance matching such as between a low-impedance microphone and a high-impedance amplifier input.
=== Railways ===
In railway applications, it is common to power the trains at 25 kV AC. To increase the distance between electricity Grid feeder points, they can be arranged to supply a split-phase 25–0–25 kV feed with the third wire (opposite phase) out of reach of the train's overhead collector pantograph. The 0 V point of the supply is connected to the rail while one 25 kV point is connected to the overhead contact wire. At frequent (about 10 km) intervals, an autotransformer links the contact wire to rail and to the second (antiphase) supply conductor. This system increases usable transmission distance, reduces induced interference into external equipment and reduces cost. A variant is occasionally seen where the supply conductor is at a different voltage to the contact wire with the autotransformer ratio modified to suit.
=== Autotransformer starter ===
Autotransformers can be used as a method of soft starting induction motors. One of the designs of such a starter is the Korndörfer autotransformer starter.
== History ==
The autotransformer starter was invented in 1908 by Max Korndorfer of Berlin. He filed the application with the U.S. Patent Office in May 1908 and was granted patent US 1,096,922 in May 1914. Max Korndorfer assigned his patent to the General Electric Company.
An induction motor draws very high starting current during its acceleration to full rated speed, typically 6 to 10 times the full load current. Reduced starting current is desirable where the electrical grid is not of sufficient capacity, or where the driven load cannot withstand high starting torque. One basic method to reduce the starting current is with a reduced voltage autotransformer with taps at 50%, 65% and 80% of the applied line voltage; once the motor is started the autotransformer is switched out of circuit.
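The current reduction from an autotransformer starter can be illustrated with a short calculation: the motor sees the tapped voltage, and because the autotransformer also steps the motor current down on the line side, the supply current falls roughly with the square of the tap fraction. The sketch below is illustrative only; the 6× locked-rotor multiple and the 65% tap are example values taken from the text, not data for any particular motor.

```python
# Rough estimate of line current drawn through an autotransformer starter.
# Assumptions (illustrative): locked-rotor current is 6x full-load current
# at full voltage, and the starter is on the 65% tap mentioned above.

def starter_line_current(full_load_a, lr_multiple=6.0, tap=0.65):
    """Approximate supply-side current (A) during an autotransformer start.

    Motor current scales roughly with applied voltage (~tap), and the
    autotransformer reflects that motor current back to the line by the same
    ratio, so line current scales with tap**2 relative to a direct start.
    """
    direct_on_line = lr_multiple * full_load_a   # current for a full-voltage start
    motor_current = direct_on_line * tap         # motor side, at reduced voltage
    line_current = motor_current * tap           # reflected to the supply side
    return direct_on_line, line_current

dol, reduced = starter_line_current(full_load_a=100.0)
print(f"Direct-on-line start: {dol:.0f} A, 65% tap start: {reduced:.0f} A")
# Direct-on-line start: 600 A, 65% tap start: 254 A
```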
== Variable autotransformers ==
By exposing part of the winding coils and making the secondary connection through a sliding brush, a continuously variable turns ratio can be obtained, allowing for very smooth control of output voltage. The output voltage is not limited to the discrete voltages represented by actual number of turns. The voltage can be smoothly varied between turns as the brush has a relatively high resistance (compared with a metal contact) and the actual output voltage is a function of the relative area of brush in contact with adjacent windings. The relatively high resistance of the brush also prevents it from acting as a short circuited turn when it contacts two adjacent turns. Typically the primary connection connects to only a part of the winding allowing the output voltage to be varied smoothly from zero to above the input voltage and thus allowing the device to be used for testing electrical equipment at the limits of its specified voltage range.
The output voltage adjustment can be manual or automatic. The manual type is applicable only for relatively low voltage and is known as a variable AC transformer (often referred to by the trademark name Variac). These are often used in repair shops for testing devices under different voltages or to simulate abnormal line voltages.
The type with automatic voltage adjustment can be used as automatic voltage regulator, to maintain a steady voltage at the customers' service during a wide range of line and load conditions. Another application is a lighting dimmer that doesn't produce the EMI typical of most thyristor dimmers.
=== Variac trademark ===
From 1934 to 2002, Variac was a U.S. trademark of General Radio for a variable autotransformer intended to conveniently vary the output voltage for a steady AC input voltage. In 2004, Instrument Service Equipment applied for and obtained the Variac trademark for the same type of product. The term variac has become a genericised trademark, being used to refer to a variable autotransformer.
== See also ==
Voltage divider
Balun
Electromagnetism
Faraday's law of induction
Ignition coil
Inductor
Magnetic field
== References ==
== Further reading ==
Croft, Terrell; Summers, Wilford, eds. (1987). American Electricians' Handbook (Eleventh ed.). New York: McGraw Hill. ISBN 0-07-013932-6.
Electric energy consumption is energy consumption in the form of electrical energy. About a fifth of global energy is consumed as electricity: for residential, industrial, commercial, transportation and other purposes.
The global electricity consumption in 2022 was 24,398 terawatt-hour (TWh), almost exactly three times the amount of consumption in 1981 (8,132 TWh). China, the United States, and India accounted for more than half of the global share of electricity consumption. Japan and Russia followed with nearly twice the consumption of the remaining industrialized countries.
== Overview ==
Electric energy is most often measured either in joules (J), or in watt hours (W·h).
1 W·s = 1 J
1 W·h = 3,600 W·s = 3,600 J
1 kWh = 3,600 kWs = 1,000 Wh = 3.6 million W·s = 3.6 million J
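The conversion factors listed above can be expressed directly in code; this is a minimal sketch of the same identities, nothing more.

```python
# Energy unit conversions from the identities above: 1 W·h = 3,600 J.
J_PER_WH = 3_600.0
J_PER_KWH = 1_000.0 * J_PER_WH   # 3.6 million J

def kwh_to_joules(kwh):
    return kwh * J_PER_KWH

def joules_to_kwh(joules):
    return joules / J_PER_KWH

print(kwh_to_joules(1))      # 3600000.0
print(joules_to_kwh(7.2e6))  # 2.0
```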
Electric and electronic devices consume electric energy to generate desired output (light, heat, motion, etc.). During operation, some part of the energy is lost depending on the electrical efficiency.
Electricity has been generated in power stations since 1882. The invention of the steam turbine in 1884 to drive the electric generator led to an increase in worldwide electricity consumption.
In 2022, the total worldwide electricity production was nearly 29,000 TWh. Total primary energy is converted into numerous forms, including, but not limited to, electricity, heat and motion. Some primary energy is lost during the conversion to electricity, as seen in the United States, where a little more than 60% was lost in 2022.
Electricity accounted for more than 20% of worldwide final energy consumption in 2022, with oil at less than 40%, coal less than 9%, natural gas less than 15%, biofuels and waste less than 10%, and other sources (such as heat, solar electricity, wind electricity and geothermal) more than 5%. The total final electricity consumption in 2022 was split unevenly between the following sectors: industry (42.2%), residential (26.8%), commercial and public services (21.1%), transport (1.8%), and other (8.1%; i.e., agriculture and fishing). Compared with 1981, the industrial sector's share of final electricity consumption has decreased, while the shares of the residential and the commercial and public services sectors have increased.
A sensitivity analysis on an adaptive neuro-fuzzy network model for electric demand estimation shows that employment is the most critical factor influencing electrical consumption. The study used six parameters as input data, employment, GDP, dwelling, population, heating degree day and cooling degree day, with electricity demand as output variable.
== World electricity consumption ==
The table lists 45 electricity-consuming countries, which used about 22,000 TWh. These countries comprise about 90% of the final consumption of 190+ countries. The final consumption to generate this electricity is provided for every country. The data is from 2022.
In 2022, OECD's final electricity consumption was over 10,000 TWh. In that year, the industrial sector consumed about 42.2% of the electricity, with the residential sector consuming nearly 26.8%, the commercial and public services sectors consuming about 21.1%, the transport sector consuming nearly 1.8%, and the other sectors (such as agriculture and fishing) consuming nearly 8.1%. In recent decades, the consumption in the residential and commercial and public services sectors has grown, while the industry consumption has declined. More recently, the transport sector has witnessed an increase in consumption with the growth in the electric vehicle market.
=== Consumption per capita ===
The final consumption divided by the number of inhabitants provides a country's consumption per capita. In Western Europe, this is between 4 and 8 MWh/year (1 MWh = 1,000 kWh). In Scandinavia, the United States, Canada, Taiwan, South Korea, Australia, Japan and the United Kingdom, the per capita consumption is higher; in developing countries, it is much lower. The world's average was about 3 MWh/year in 2022. Very low consumption levels, such as those in the Philippines (not included in the table), indicate that many inhabitants are not connected to the electricity grid, which is why some of the world's most populous countries, including Nigeria and Bangladesh, do not appear in the table.
== Electricity generation and GDP ==
The table lists 30 countries, which represent about 76% of the world population, 84% of the world GDP, and 85% of the world electricity generation. Productivity per unit of electricity generated (a concept similar to energy intensity) can be measured by dividing GDP by the electricity generated. The data is from 2019.
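Both ratios discussed above, consumption per capita and GDP per unit of electricity generated, are simple quotients. The sketch below illustrates the arithmetic; the input figures are placeholders for illustration and are not taken from the article's tables.

```python
# Per-capita consumption and GDP per unit of electricity generated,
# as described in the text. The numbers below are placeholders only.

def consumption_per_capita_mwh(final_consumption_twh, population):
    return final_consumption_twh * 1e6 / population   # TWh -> MWh per person

def gdp_per_kwh(gdp_usd, generation_twh):
    return gdp_usd / (generation_twh * 1e9)           # TWh -> kWh

print(consumption_per_capita_mwh(500.0, 80e6))  # 6.25 MWh per person per year
print(gdp_per_kwh(3.0e12, 550.0))               # ~5.45 USD of GDP per kWh generated
```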
== Electricity consumption by sector ==
The table below lists the 15 countries with the highest final electricity consumption, which comprised more than 70% of the global consumption in 2022.
== Electricity outlook ==
Looking forward, increasing energy efficiency will result in less electricity needed for a given demand in power, but demand will increase strongly on account of:
Economic growth in developing countries, and
Electrification of transport and heating. Combustion engines are being replaced by electric drives, and heating uses less gas and oil but more electricity, where possible with heat pumps.
The International Energy Agency expects revisions of subsidies for fossil fuels, which amounted to $550 billion in 2013, more than four times the subsidies for renewable energy. In this scenario, almost half of the increase in electricity consumption by 2040 is covered by more than 80% growth of renewable energy. Many new nuclear plants will be constructed, mainly to replace old ones; the nuclear share of electricity generation increases from 11% to 12%. The renewable share rises much more, from 21% to 33%. The IEA warns that in order to restrict global warming to 2 °C, carbon dioxide emissions must not exceed 1,000 gigatonnes (Gt) from 2014 onwards. In this scenario the limit is reached in 2040, and emissions never drop to zero.
The World Energy Council sees world electricity consumption increasing to more than 40,000 TWh/a in 2040. The fossil part of generation depends on energy policy. It can stay around 70% in the so-called "Jazz" scenario where countries rather independently "improvise" but it can also decrease to around 40% in the "Symphony" scenario if countries work "orchestrated" for more climate friendly policy. Carbon dioxide emissions, 32 Gt/a in 2012, will increase to 46 Gt/a in Jazz but decrease to 26 Gt/a in Symphony. Accordingly, until 2040 the renewable part of generation will stay at about 20% in Jazz but increase to about 45% in Symphony.
An EU survey conducted on climate and energy consumption in 2022 found that 63% of people in the European Union want energy costs to be dependent on use, with the greatest consumers paying more. This is compared to 83% in China, 63% in the UK and 57% in the US. 24% of Americans surveyed believe that people and businesses should do more to cut their own usage (compared to 20% in the UK, 19% in the EU, and 17% in China).
Nearly half of those polled in the European Union (47%) and the United Kingdom (45%) want their government to focus on the development of renewable energies. This is compared to 37% in both the United States and China when asked to list their priorities on energy.
The United States is on track to break electricity consumption records in 2025 and 2026, according to the U.S. Energy Information Administration’s (EIA) Short-Term Energy Outlook, released in February 2025.
With demand from data centers powering artificial intelligence and cryptocurrency operations, alongside rising electricity use in homes and businesses for heating and transportation, the EIA projects total power consumption will hit 4,179 billion kilowatt-hours (kWh) in 2025 and 4,239 billion kWh in 2026—both surpassing the current record of 4,082 billion kWh set in 2024.
The forecasted increase can be broken down as follows: residential electricity sales will climb to 1,524 billion kWh in 2025, commercial demand to 1,458 billion kWh, and industrial usage to 1,054 billion kWh. This would mark new highs for the commercial sector, which set its current record of 1,421 billion kWh in 2024, and for residential consumers, whose last peak was 1,509 billion kWh in 2022. Meanwhile, the industrial sector—historically the largest consumer of electricity—remains just below its all-time high of 1,064 billion kWh set in 2000.
As AI, cryptocurrency mining, and electrification continue to drive demand, the U.S. power grid faces mounting pressure to keep pace with this record surge in electricity consumption.
== See also ==
== References ==
== External links ==
World Electricity production 2012
World Map and Chart of Energy Consumption by country by Lebanese-economy-forum, World Bank data
Electricity Information 2019 - IEA
Grid energy storage, also known as large-scale energy storage, comprises technologies connected to the electrical power grid that store energy for later use. These systems help balance supply and demand by storing excess electricity from variable renewables such as solar and inflexible sources like nuclear power, releasing it when needed. They further provide essential grid services, such as helping to restart the grid after a power outage.
As of 2023, the largest form of grid storage is pumped-storage hydroelectricity, with utility-scale batteries and behind-the-meter batteries coming second and third. Lithium-ion batteries are highly suited for shorter duration storage up to 8 hours. Flow batteries and compressed air energy storage may provide storage for medium duration. Two forms of storage are suited for long-duration storage: green hydrogen, produced via electrolysis, and thermal energy storage.
Energy storage is one option for making grids more flexible. Another solution is the use of more dispatchable power plants that can change their output rapidly, for instance peaking power plants to fill in supply gaps. Demand response can shift load to other times, and interconnections between regions can balance out fluctuations in renewables production.
The price of storage technologies typically goes down with experience. For instance, lithium-ion batteries have been getting some 20% cheaper for each doubling of worldwide capacity. Systems with under 40% variable renewables need only short-term storage. At 80%, medium-duration storage becomes essential and beyond 90%, long-duration storage does too. The economics of long-duration storage is challenging, and alternative flexibility options like demand response may be more economic.
== Roles in the power grid ==
Any electrical power grid must match electricity production to consumption, both of which vary significantly over time. Energy derived from solar and wind sources varies with the weather on time scales ranging from less than a second to weeks or longer. Nuclear power is less flexible than fossil fuels, meaning it cannot easily match the variations in demand. Thus, low-carbon electricity without storage presents special challenges to electric utilities.
Electricity storage is one of the three key ways to replace flexibility from fossil fuels in the grid. Other options are demand-side response, in which consumers change when they use electricity or how much they use. For instance, households may have cheaper night tariffs to encourage them to use electricity at night. Industry and commercial consumers can also change their demand to meet supply. Improved network interconnection smooths the variations of renewables production and demand. When there is little wind in one location, another might have a surplus of production. Expansion of transmission lines usually takes a long time.
Energy storage has a large set of roles in the electricity grid and can therefore provide many different services. For instance, it can be used for arbitrage, storing electricity until the price rises; it can help make the grid more stable; and it can help reduce investment in transmission infrastructure. The type of service provided by storage depends on who manages the technology, and on whether the technology is based alongside generation of electricity, within the network, or at the side of consumption.
Providing short-term flexibility is a key role for energy storage. On the generation side, it can help with the integration of variable renewable energy, storing it when there is an oversupply of wind and solar and electricity prices are low. More generally, it can exploit the changes in prices of electricity over time in the wholesale market, charging when electricity is cheap and selling when it is expensive. It can further help with grid congestion (where there is insufficient capacity on transmission lines). Consumers can use storage to use more of their self-produced electricity (for instance from rooftop solar power).
Storage can also be used to provide essential grid services. On the generation side, storage can smooth out the variations in production, for instance for solar and wind. It can assist in a black start after a power outage. On the network side, these include frequency regulation (continuously) and frequency response (after unexpected changes in supply or demand). On the consumption side, storage can help to improve the quality of the delivered electricity in less stable grids.
Investment in storage may make some investments in the transmission and distribution network unnecessary, or may allow them to be scaled down. Additionally, storage can ensure there is sufficient capacity to meet peak demand within the electricity grid. Finally, in off-grid home systems or mini-grids, electricity storage can help provide energy access in areas that were previously not connected to the electricity grid.
== Forms ==
Electricity can be stored directly for a short time in capacitors, somewhat longer electrochemically in batteries, and much longer chemically (e.g. hydrogen), mechanically (e.g. pumped hydropower) or as heat. The first pumped hydroelectricity was constructed at the end of the 19th century around the Alps in Italy, Austria, and Switzerland. The technique rapidly expanded during the 1960s to 1980s nuclear boom, due to nuclear power's inability to quickly adapt to changes in electricity demand. In the 21st century, interest in storage surged due to the rise of sustainable energy sources, which are often weather-dependent. Commercial batteries have been available for over a century, but their widespread use in the power grid is more recent, with only 1 GW available in 2013.
=== Batteries ===
==== Lithium-ion batteries ====
Lithium-ion batteries are the most commonly used batteries for grid applications, as of 2024, following the application of batteries in electric vehicles (EVs). In comparison with EVs, grid batteries require less energy density, meaning that more emphasis can be put on costs, the ability to charge and discharge often and lifespan. This has led to a shift towards lithium iron phosphate batteries (LFP batteries), which are cheaper and last longer than traditional lithium-ion batteries.
Costs of batteries are declining rapidly; from 2010 to 2023 costs fell by 90%. As of 2024, utility-scale systems account for two thirds of added capacity, and home applications (behind-the-meter) for one third. Lithium-ion batteries are highly suited to short-duration storage (<8h) due to cost and degradation associated with high states of charge.
===== Electric vehicles =====
The electric vehicle fleet has a large overall battery capacity, which can potentially be used for grid energy storage. This could be in the form of vehicle-to-grid (V2G), where cars store energy when they are not in use, or by repurposing batteries from cars at the end of the vehicle's life. Car batteries typically range between 33 and 100 kWh; for comparison, a typical upper-middle-class household in Spain might use some 18 kWh in a day. By 2030, batteries in electric vehicles may be able to meet all short-term storage demand globally.
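The comparison above can be made concrete with a short calculation of how long a single car battery could supply such a household. This is a minimal sketch; the 60 kWh pack size (within the 33–100 kWh range quoted) and the fraction of the pack made available for vehicle-to-grid use are illustrative assumptions, while the 18 kWh/day figure comes from the text.

```python
# How many days could one EV battery supply a household, using the figures above?
# Pack size and usable fraction are illustrative assumptions.

def days_of_household_supply(pack_kwh=60.0, usable_fraction=0.5,
                             household_kwh_per_day=18.0):
    return (pack_kwh * usable_fraction) / household_kwh_per_day

print(f"{days_of_household_supply():.1f} days")  # ~1.7 days
```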
As of 2024, there have been more than 100 V2G pilot projects globally. The effect of V2G charging on battery life can be positive or negative. Increased cycling of batteries can lead to faster degradation, but due to better management of the state of charge and gentler charging and discharging, V2G might instead increase the lifetime of batteries. Second-hand batteries may be usable for stationary grid storage for roughly 6 years, as their capacity drops from roughly 80% to 60% of the initial capacity. LFP batteries are particularly suitable for reusing, as they degrade less than other lithium-ion batteries and recycling is less attractive as their materials are not as valuable.
==== Other battery types ====
In redox flow batteries, energy is stored in liquids, which are placed in two separate tanks. When charging or discharging, the liquids are pumped into a cell with the electrodes. The amount of energy stored (as set by the size of the tanks) can be adjusted separately from the power output (as set by the speed of the pumps). Flow batteries have the advantages of low capital cost for charge-discharge durations over 4 h, and of long durability (many years). Flow batteries are inferior to lithium-ion batteries in terms of energy efficiency, averaging efficiencies between 60% and 75%. Vanadium redox batteries are the most commercially advanced type of flow battery, with roughly 40 companies making them as of 2022.
Sodium-ion batteries are a possible alternative to lithium-ion batteries, as they are less flammable, and use cheaper and less critical materials. They have a lower energy density, and possibly a shorter lifespan. If produced at the same scale as lithium-ion batteries, they may become 20% to 30% cheaper. Iron-air batteries may be suitable for even longer duration storage than flow batteries (weeks), but the technology is not yet mature.
=== Electrical ===
Storage in supercapacitors works well for applications where a lot of power is needed for a short amount of time. In the power grid, they are therefore mostly used for short-term frequency regulation.
=== Hydrogen and chemical storage ===
Various power-to-gas technologies exist that can convert excess electricity into a chemical that is easier to store. The lowest-cost and most efficient option is hydrogen. However, synthetic methane is easier to use with existing infrastructure and appliances, as it is very similar to natural gas.
As of 2024, there have been a number of demonstration plants where hydrogen is burned in gas turbines, either co-firing with natural gas, or on its own. Similarly, a number of coal plants have demonstrated it is possible to co-fire ammonia when burning coal. In 2022, there was also a small pilot to burn pure ammonia in a gas turbine. A portion of existing gas turbines are capable of co-firing hydrogen, which means there is, as a lower estimate, 80 GW of capacity ready to burn hydrogen.
==== Hydrogen ====
Hydrogen can be used as a long-term storage medium. Green hydrogen is produced from the electrolysis of water and converted back into electricity in an internal combustion engine, or a fuel cell, with a round-trip efficiency of roughly 41%. Together with thermal storage, it is expected to be best suited to seasonal energy storage.
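The quoted round-trip figure is roughly the product of the efficiencies of the individual steps: electrolysis, storage, and reconversion to electricity. The sketch below uses illustrative component efficiencies, chosen only so that their product lands near the ~41% mentioned above; they are assumptions, not measured values.

```python
# Round-trip efficiency of power-to-hydrogen-to-power as a product of steps.
# Component efficiencies below are illustrative assumptions.

def hydrogen_round_trip(electrolysis_eff=0.70, storage_eff=0.98, reconversion_eff=0.60):
    return electrolysis_eff * storage_eff * reconversion_eff

print(f"{hydrogen_round_trip():.0%}")  # ~41%
```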
Hydrogen can be stored aboveground in tanks or underground in larger quantities. Underground storage is easiest in salt caverns, but only a certain number of places have suitable geology. Storage in porous rocks, for instance in empty gas fields and some aquifers, can store hydrogen at a larger scale, but this type of storage may have some drawbacks. For instance, some of the hydrogen may leak, or react into H2S or methane.
==== Ammonia ====
Hydrogen can be converted into ammonia in a reaction with nitrogen in the Haber-Bosch process. Ammonia, a gas at room temperature, is more expensive to produce than hydrogen. However, it can be stored more cheaply than hydrogen. Tank storage is usually done at between one and ten times atmospheric pressure and at a temperature of −30 °C (−22 °F), in liquid form. Ammonia has multiple uses besides being an energy carrier: it is the basis for the production of many chemicals; the most common use is for fertilizer. It can be used for power generation directly, or converted back to hydrogen first. Alternatively, it has potential applications as a fuel in shipping.
==== Methane ====
It is possible to further convert hydrogen into methane via the Sabatier reaction, a chemical reaction which combines CO2 and H2. While the reaction that converts CO from gasified coal into CH4 is mature, the process to form methane out of CO2 is less so. Efficiencies of around 80% one-way can be achieved, that is, some 20% of the energy in hydrogen is lost in the reaction.
=== Mechanical ===
==== Flywheel ====
Flywheels store energy in the form of mechanical energy. They are suited to supplying high levels of electricity over minutes and can also be charged rapidly. They have a long lifetime and can be used in settings with widely varying temperatures. The technology is mature, but more expensive than batteries and supercapacitors and not used frequently.
==== Pumped hydro ====
As of 2023, pumped-storage hydroelectricity (PSH) was the largest form of grid energy storage globally, with an installed capacity of 181 GW, surpassing the combined capacity of utility-scale and behind-the-meter battery storage, which totaled approximately 88 GW.
PSH is particularly effective for managing daily fluctuations in energy demand. During periods of low demand, water is pumped to a higher-elevation reservoir, and during peak demand, the stored water is released to generate electricity through turbines. The system has an efficiency rate of 75% to 85% and can quickly respond to changes in demand, typically within seconds to minutes.
While traditional PSH systems require specific geographical conditions, alternative designs have been proposed. These include using deep salt caverns or constructing hollow structures on the seabed, where the ocean serves as the upper reservoir. However, PSH construction is often expensive, time-consuming, and can have significant environmental and social impacts on nearby communities. Installing floating solar panels on reservoirs, can increase the efficiency of PSH systems. These panels reduce water evaporation and benefit from cooling by the water surface, which also improves their energy generation efficiency.
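The energy recoverable from a pumped-storage reservoir follows from the mass of water and the height difference, E = ρVgh, reduced by the round-trip efficiency on discharge. The sketch below is illustrative; the reservoir volume, head and efficiency are assumed values, not data for any particular plant.

```python
# Energy recoverable from a pumped-storage reservoir: E = rho * V * g * h * eta.
# Volume, head and efficiency below are illustrative assumptions.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def psh_energy_mwh(volume_m3, head_m, round_trip_eff=0.80):
    joules = RHO_WATER * volume_m3 * G * head_m * round_trip_eff
    return joules / 3.6e9   # J -> MWh

# A 5 million m^3 upper reservoir with 300 m of head:
print(f"{psh_energy_mwh(5e6, 300.0):.0f} MWh")  # ~3270 MWh
```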
==== Hydroelectric dams ====
Hydroelectric dams with large reservoirs can also be operated to provide peak generation at times of peak demand. Water is stored in the reservoir during periods of low demand and released through the plant when demand is higher. While technically no electricity is stored, the net effect is similar to that of pumped storage. The amount of storage available in hydroelectric dams is much larger than in pumped storage. Upgrades may be needed so that these dams can respond to variable demand. For instance, additional investment may be needed in transmission lines, or additional turbines may need to be installed to increase the peak output from the dam.
Dams usually have multiple purposes. As well as energy generation, they often play a role in flood defense and protection of ecosystems, recreation, and they supply water for irrigation. This means it is not always possible to change their operation much, but even with low flexibility, they may still play an important role in responding to changes in wind and solar production.
==== Gravity ====
Alternative methods that use gravity include storing energy by moving large solid masses upward against gravity. This can be achieved inside old mine shafts or in specially constructed towers where heavy weights are winched up to store energy and allowed a controlled descent to release it.
==== Compressed air ====
Compressed air energy storage (CAES) stores electricity by compressing air. The compressed air is typically stored in large underground caverns. The expanding air can be used to drive turbines, converting the energy back into electricity. As air cools when expanding, some heat needs to be added in this stage to prevent freezing. This can be provided by a low-carbon source, or in the case of advanced CAES, by reusing the heat that is released when air is compressed. As of 2023, there are three advanced CAES projects in operation in China. Typical efficiencies of advanced CAES are between 60% and 80%.
==== Liquid air or CO2 ====
Another electricity storage method is to compress and cool air, turning it into liquid air, which can be stored and expanded when needed, turning a turbine to generate electricity. This is called liquid air energy storage (LAES). The air would be cooled to temperatures of −196 °C (−320.8 °F) to become liquid. Like with compressed air, heat is needed for the expansion step. In the case of LAES, low-grade industrial heat can be used for this. Energy efficiency for LAES lies between 50% and 70%. As of 2023, LAES is moving from pre-commercial to commercial. An alternative is the compression of CO2 to store electricity.
=== Thermal ===
Electricity can be directly stored thermally with a Carnot battery. A Carnot battery is a type of energy storage system that stores electricity in heat storage and converts the stored heat back to electricity via thermodynamic cycles (for instance, a turbine). While less efficient than pumped hydro or battery storage, this type of system is expected to be cheap and can provide long-duration storage. A pumped-heat electricity storage system is a Carnot battery that uses a reversible heat pump to convert the electricity into heat. It usually stores the energy in both a hot and cold reservoir. To achieve decent efficiencies (>50%), the temperature ratio between the two must reach a factor of 5.
Thermal energy storage is also used in combination with concentrated solar power (CSP). In CSP, solar energy is first converted into heat, and then either directly converted into electricity or first stored. The energy is released when there is little or no sunshine. This means that CSP can be used as a dispatchable (flexible) form of generation. The energy in a CSP system can for instance be stored in molten salts or in a solid medium such as sand.
Finally, heating and cooling systems in buildings can be controlled to store thermal energy in either the building's mass or dedicated thermal storage tanks. This thermal storage can provide load-shifting or even more complex ancillary services by increasing power consumption (charging the storage) during off-peak times and lowering power consumption (discharging the storage) during higher-priced peak times.
== Economics ==
=== Costs ===
The levelized cost of storing electricity (LCOS) is a measure of the lifetime costs of storing electricity per MWh of electricity discharged. It includes investment costs, but also operational costs and charging costs. It depends strongly on storage type and purpose, whether subsecond-scale frequency regulation, minute- to hour-scale peaking plants, or day- to week-scale seasonal storage.
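A minimal sketch of the LCOS definition above: discounted lifetime costs (investment, operation and charging) divided by the discounted electricity discharged. All cost figures and the discount rate in the example are illustrative assumptions.

```python
# Levelized cost of storage (LCOS): discounted lifetime costs per MWh discharged.
# All input figures below are illustrative assumptions.

def lcos(capex, annual_opex, annual_charging_cost, annual_mwh_out, years, rate=0.06):
    disc_costs = capex
    disc_energy = 0.0
    for t in range(1, years + 1):
        df = 1.0 / (1.0 + rate) ** t
        disc_costs += (annual_opex + annual_charging_cost) * df
        disc_energy += annual_mwh_out * df
    return disc_costs / disc_energy   # cost per MWh discharged

# A 10 MWh battery cycled daily for 15 years, bought for $3 million,
# with $50k/year O&M and $150k/year charging cost (all assumed values):
print(f"${lcos(3e6, 5e4, 1.5e5, 10 * 365, 15):.0f}/MWh")
```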
For power applications (for instance around ancillary services or black starts), a similar metric is the annuitized capacity cost (ACC), which measures the lifetime costs per kW. ACC is lowest when there are few cycles (<300) and when the discharge is less than one hour. This is because the technology is reimbursed only when it provides spare capacity, not when it is discharged.
The cost of storage is coming down following technology-dependent experience curves, which describe the price drop for each doubling in cumulative capacity (or experience). Lithium-ion battery prices fall fast: the price utilities pay for them falls about 19% with each doubling of capacity. Hydrogen production via electrolysis has a similar learning rate, but it is much more uncertain. Vanadium-flow batteries typically get 14% cheaper for each doubling of capacity. Pumped hydropower has not seen prices fall much with increased experience.
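Experience curves of this kind can be written as price proportional to (cumulative capacity) raised to an exponent derived from the learning rate. The sketch below uses the 19% figure quoted above; the starting price and capacity values are placeholders, not market data.

```python
import math

# Experience curve: price falls by the learning rate with each doubling of
# cumulative capacity, i.e. price = p0 * (capacity / capacity0) ** b,
# where b = log2(1 - learning_rate). Prices and capacities are placeholders.

def projected_price(p0, capacity0, capacity, learning_rate=0.19):
    b = math.log2(1.0 - learning_rate)
    return p0 * (capacity / capacity0) ** b

# Price after cumulative capacity grows 8-fold (three doublings):
print(projected_price(p0=300.0, capacity0=1.0, capacity=8.0))
# 300 * 0.81**3 ~= 159 (same units as p0, illustrative)
```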
=== Market and system value ===
There are four categories of services which provide economic value for storage: those related to power quality (such as frequency regulation), reliability (ensuring peak demand can be met), better use of assets in the system (e.g. avoiding transmission investments) and arbitrage (exploiting price differences over time). Before 2020, most value for storage was in providing power quality services. Arbitrage is the service with the largest economic potential for storage applications.
In systems with under 40% of variable renewables, only short-term storage (of less than 4 hours) is needed for integration. When the share of variable renewables climbs to 80%, medium-duration storage (between 4 and 16 hours, for instance compressed air) is needed. Above 90%, large-scale long-duration storage is required. The economics of long-duration storage is challenging even then, as the costs are high. Alternative flexibility options, such as demand response, network expansions or flexible generation (geothermal or fossil gas with carbon capture and storage) may be lower-cost.
Like with renewables, storage will "cannibalise" its own income, but even more strongly. That is, with more storage on the market, there is less of an opportunity to do arbitrage or deliver other services to the grid. How markets are designed impacts revenue potential too. The income from arbitrage is quite variable between years, whereas markets that have capacity payments likely show less volatility.
Electricity storage is not 100% efficient, so more electricity needs to be bought than can be sold. This implies that if there is only a small variation in price, it may not be economical to charge and discharge. For instance, if the storage application is 75% efficient, the price at which the electricity is sold needs to be at least 1.33 times the price for which it was bought. Typically, electricity prices vary most between day and night, which means that storage up to 8 hours has relatively high potential for profit.
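The breakeven sell/buy price ratio implied above is simply the reciprocal of the round-trip efficiency, ignoring all other costs. A minimal sketch reproducing the 1.33 factor for a 75% efficient store:

```python
# Minimum sell/buy price ratio for storage arbitrage to break even,
# ignoring degradation, operating costs and fees: 1 / round-trip efficiency.

def breakeven_price_ratio(round_trip_eff):
    return 1.0 / round_trip_eff

print(f"{breakeven_price_ratio(0.75):.2f}")  # 1.33
```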
== See also ==
== References ==
=== Cited sources ===
Armstrong, Robert; Chiang, Yet-Ming (2022). The Future of Energy Storage: An Interdisciplinary MIT Study (PDF). Massachusetts Institute of Technology. ISBN 978-0-578-29263-2.
Cozzi, Laura; Petropoulos, Apostolos; Wanner, Brent (April 2024). Batteries and Secure Energy Transitions (PDF). International Energy Agency.
Remme, Uwe; Bermudez Menendez, Jose Miguel; et al. (October 2024). Global Hydrogen Review 2024 (PDF). International Energy Agency.
Clarke, L.; Wei, Y.-M.; De La Vega Navarro, A.; Garg, A.; et al. "Chapter 6: Energy Systems" (PDF). Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (Report). doi:10.1017/9781009157926.008.
IRENA (2020). Innovation landscape brief: Innovative operation of pumped hydropower storage. Abu Dhabi: International Renewable Energy Agency. ISBN 978-92-9260-180-5.
Letcher, Trevor M., ed. (2022). Storing energy: with special reference to renewable energy sources (Second ed.). Amsterdam Oxford Cambridge, MA: Elsevier. ISBN 978-0-12-824510-1.
Schmidt, Oliver; Staffell, Iain (2023). Monetizing energy storage: a toolkit to assess future cost and value. Oxford, United Kingdom: Oxford University Press. ISBN 978-0-19-288817-4.
Smith, Chris Llewellyn (2023). Large-scale electricity storage (PDF). Royal Society. ISBN 978-1-78252-666-7.
Vandersickel, Annelies; Gutierrez, Andrea (2023). Task 36 Carnot Batteries Final Report (Report). Technology Collaboration Programme Energy Storage, International Energy Agency. Retrieved 29 October 2024.
== External links ==
UK Government report on the Benefits of long-duration electricity storage (Aug 2022)
A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages.
Electronic voltage regulators are found in devices such as computer power supplies where they stabilize the DC voltages used by the processor and other elements. In automobile alternators and central power station generator plants, voltage regulators control the output of the plant. In an electric power distribution system, voltage regulators may be installed at a substation or along distribution lines so that all customers receive steady voltage independent of how much power is drawn from the line.
== Electronic voltage regulators ==
A simple voltage/current regulator can be made from a resistor in series with a diode (or series of diodes). Due to the logarithmic shape of diode V-I curves, the voltage across the diode changes only slightly due to changes in current drawn or changes in the input. When precise voltage control and efficiency are not important, this design may be fine. Since the forward voltage of a diode is small, this kind of voltage regulator is only suitable for low-voltage regulated output. When higher voltage output is needed, a zener diode or series of zener diodes may be employed. Zener diode regulators make use of the zener diode's fixed reverse voltage, which can be quite large.
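For the resistor-plus-Zener shunt arrangement described here, the series resistor is typically chosen so that the Zener stays in reverse breakdown at the lowest input voltage and the highest load current. The sketch below illustrates that sizing rule; the component values and the minimum Zener current are illustrative assumptions, not values from a specific design.

```python
# Sizing the series resistor for a simple Zener shunt regulator.
# R must keep the Zener conducting (>= i_z_min) at the lowest input voltage
# and the highest load current. All values below are illustrative assumptions.

def zener_series_resistor(v_in_min, v_z, i_load_max, i_z_min=0.005):
    return (v_in_min - v_z) / (i_load_max + i_z_min)

# 12 V minimum input, 5.1 V Zener, 20 mA maximum load, 5 mA minimum Zener current:
r = zener_series_resistor(v_in_min=12.0, v_z=5.1, i_load_max=0.020)
print(f"R <= {r:.0f} ohms")  # ~276 ohms (pick the next lower standard value)
```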
Feedback voltage regulators operate by comparing the actual output voltage to some fixed reference voltage. Any difference is amplified and used to control the regulation element in such a way as to reduce the voltage error. This forms a negative feedback control loop; increasing the open-loop gain tends to increase regulation accuracy but reduce stability. (Stability is the avoidance of oscillation, or ringing, during step changes.) There will also be a trade-off between stability and the speed of the response to changes. If the output voltage is too low (perhaps due to the input voltage reducing or the load current increasing), the regulation element is commanded, up to a point, to produce a higher output voltage, either by dropping less of the input voltage (for linear series regulators and buck switching regulators) or by drawing input current for longer periods (boost-type switching regulators); if the output voltage is too high, the regulation element will normally be commanded to produce a lower voltage. However, many regulators have over-current protection, so that they will entirely stop sourcing current (or limit the current in some way) if the output current is too high, and some regulators may also shut down if the input voltage is outside a given range (see also: crowbar circuits).
== Electromechanical regulators ==
In electromechanical regulators, voltage regulation is easily accomplished by coiling the sensing wire to make an electromagnet. The magnetic field produced by the current attracts a moving ferrous core held back under spring tension or gravitational pull. As voltage increases, so does the current, strengthening the magnetic field produced by the coil and pulling the core towards the field. The magnet is physically connected to a mechanical power switch, which opens as the magnet moves into the field. As voltage decreases, so does the current, releasing spring tension or the weight of the core and causing it to retract. This closes the switch and allows the power to flow once more.
If the mechanical regulator design is sensitive to small voltage fluctuations, the motion of the solenoid core can be used to move a selector switch across a range of resistances or transformer windings to gradually step the output voltage up or down, or to rotate the position of a moving-coil AC regulator.
Early automobile generators and alternators had a mechanical voltage regulator using one, two, or three relays and various resistors to stabilize the generator's output at slightly more than 6.7 or 13.4 V to maintain the battery as independently of the engine's rpm or the varying load on the vehicle's electrical system as possible. The relay(s) modulated the width of a current pulse to regulate the voltage output of the generator by controlling the average field current in the rotating machine which determines strength of the magnetic field produced which determines the unloaded output voltage per rpm. Capacitors are not used to smooth the pulsed voltage as described earlier. The large inductance of the field coil stores the energy delivered to the magnetic field in an iron core so the pulsed field current does not result in as strongly pulsed a field. Both types of rotating machine produce a rotating magnetic field that induces an alternating current in the coils in the stator. A generator uses a mechanical commutator, graphite brushes running on copper segments, to convert the AC produced into DC by switching the external connections at the shaft angle when the voltage would reverse. An alternator accomplishes the same goal using rectifiers that do not wear down and require replacement.
Modern designs now use solid state technology (transistors) to perform the same function that the relays perform in electromechanical regulators.
Electromechanical regulators are used for mains voltage stabilisation—see AC voltage stabilizers below.
== Automatic voltage regulator ==
Generators, as used in power stations, ship electrical power production, or standby power systems, will have automatic voltage regulators (AVR) to stabilize their voltages as the load on the generators changes. The first AVRs for generators were electromechanical systems, but a modern AVR uses solid-state devices. An AVR is a feedback control system that measures the output voltage of the generator, compares that output to a set point, and generates an error signal that is used to adjust the excitation of the generator. As the excitation current in the field winding of the generator increases, its terminal voltage will increase. The AVR will control current by using power electronic devices; generally a small part of the generator's output is used to provide current for the field winding. Where a generator is connected in parallel with other sources such as an electrical transmission grid, changing the excitation has more of an effect on the reactive power produced by the generator than on its terminal voltage, which is mostly set by the connected power system. Where multiple generators are connected in parallel, the AVR system will have circuits to ensure all generators operate at the same power factor. AVRs on grid-connected power station generators may have additional control features to help stabilize the electrical grid against upsets due to sudden load loss or faults.
== AC voltage stabilizers ==
=== Coil-rotation AC voltage regulator ===
This is an older type of regulator used in the 1920s that uses the principle of a fixed-position field coil and a second field coil that can be rotated on an axis in parallel with the fixed coil, similar to a variocoupler.
When the movable coil is positioned perpendicular to the fixed coil, the magnetic forces acting on the movable coil balance each other out and voltage output is unchanged. Rotating the coil in one direction or the other away from the center position will increase or decrease voltage in the secondary movable coil.
This type of regulator can be automated via a servo control mechanism to advance the movable coil position in order to provide voltage increase or decrease. A braking mechanism or high-ratio gearing is used to hold the rotating coil in place against the powerful magnetic forces acting on the moving coil.
=== Electromechanical ===
Electromechanical regulators called voltage stabilizers or tap-changers have also been used to regulate the voltage on AC power distribution lines. These regulators operate by using a servomechanism to select the appropriate tap on an autotransformer with multiple taps, or by moving the wiper on a continuously variable autotransformer. If the output voltage is not in the acceptable range, the servomechanism switches the tap, changing the turns ratio of the transformer, to move the secondary voltage into the acceptable region. The controls provide a dead band wherein the controller will not act, preventing the controller from constantly adjusting the voltage ("hunting") as it varies by an acceptably small amount.
=== Constant-voltage transformer ===
The ferroresonant transformer, ferroresonant regulator or constant-voltage transformer is a type of saturating transformer used as a voltage regulator. These transformers use a tank circuit composed of a high-voltage resonant winding and a capacitor to produce a nearly constant average output voltage with a varying input current or varying load. The circuit has a primary on one side of a magnetic shunt and the tuned circuit coil and secondary on the other side. The regulation is due to magnetic saturation in the section around the secondary.
The ferroresonant approach is attractive due to its lack of active components, relying on the square loop saturation characteristics of the tank circuit to absorb variations in average input voltage. Saturating transformers provide a simple rugged method to stabilize an AC power supply.
Older designs of ferroresonant transformers had an output with high harmonic content, leading to a distorted output waveform. Modern devices are used to construct a perfect sine wave. The ferroresonant action is a flux limiter rather than a voltage regulator, but with a fixed supply frequency it can maintain an almost constant average output voltage even as the input voltage varies widely.
The ferroresonant transformers, which are also known as constant-voltage transformers (CVTs) or "ferros", are also good surge suppressors, as they provide high isolation and inherent short-circuit protection.
A ferroresonant transformer can operate with an input voltage range ±40% or more of the nominal voltage.
Output power factor remains in the range of 0.96 or higher from half to full load.
Because it regenerates an output voltage waveform, output distortion, which is typically less than 4%, is independent of any input voltage distortion, including notching.
Efficiency at full load is typically in the range of 89% to 93%. However, at low loads, efficiency can drop below 60%. The current-limiting capability also becomes a handicap when a CVT is used in an application with moderate to high inrush current, like motors, transformers or magnets. In this case, the CVT has to be sized to accommodate the peak current, thus forcing it to run at low loads and poor efficiency.
Minimum maintenance is required, as transformers and capacitors can be very reliable. Some units have included redundant capacitors to allow several capacitors to fail between inspections without any noticeable effect on the device's performance.
Output voltage varies about 1.2% for every 1% change in supply frequency. For example, a 2 Hz change in generator frequency, which is very large, results in an output voltage change of only 4%, which has little effect for most loads.
It accepts 100% single-phase switch-mode power-supply loading without any requirement for derating, including all neutral components.
Input current distortion remains less than 8% THD even when supplying nonlinear loads with more than 100% current THD.
Drawbacks of CVTs are their larger size, audible humming sound, and the high heat generation caused by saturation.
=== Power distribution ===
Voltage regulators or stabilizers are used to compensate for voltage fluctuations in mains power. Large regulators may be permanently installed on distribution lines. Small portable regulators may be plugged in between sensitive equipment and a wall outlet. Automatic voltage regulators are used on generator sets to maintain a constant voltage as the load changes; the voltage regulator compensates for the change in load. Power distribution voltage regulators normally operate on a range of voltages, for example 150–240 V or 90–280 V.
== DC voltage stabilizers ==
Many simple DC power supplies regulate the voltage using either series or shunt regulators, but most apply a voltage reference using a shunt regulator such as a Zener diode, avalanche breakdown diode, or voltage regulator tube. Each of these devices begins conducting at a specified voltage and will conduct as much current as required to hold its terminal voltage to that specified voltage by diverting excess current from a non-ideal power source to ground, often through a relatively low-value resistor to dissipate the excess energy. The power supply is designed to only supply a maximum amount of current that is within the safe operating capability of the shunt regulating device.
If the stabilizer must provide more power, the shunt regulator is used only to provide a standard voltage reference for an electronic device known as the voltage stabilizer, which is able to deliver much larger currents on demand.
== Active regulators ==
Active regulators employ at least one active (amplifying) component such as a transistor or operational amplifier. Shunt regulators are often (but not always) passive and simple, but always inefficient because they (essentially) dump the excess current which is not available to the load. When more power must be supplied, more sophisticated circuits are used. In general, these active regulators can be divided into several classes:
Linear series regulators
Switching regulators
SCR regulators
=== Linear regulators ===
Linear regulators are based on devices that operate in their linear region (in contrast, a switching regulator is based on a device forced to act as an on/off switch). Linear regulators are also classified in two types:
series regulators
shunt regulators
In the past, one or more vacuum tubes were commonly used as the variable resistance. Modern designs use one or more transistors instead, perhaps within an integrated circuit. Linear designs have the advantage of very "clean" output with little noise introduced into their DC output, but are most often much less efficient and unable to step up or invert the input voltage like switched supplies. All linear regulators require an input voltage higher than the output. If the input voltage approaches the desired output voltage, the regulator will "drop out". The input to output voltage differential at which this occurs is known as the regulator's drop-out voltage. Low-dropout regulators (LDOs) allow the input voltage to be much closer to the output voltage (i.e., they waste less energy than conventional linear regulators).
Entire linear regulators are available as integrated circuits. These chips come in either fixed or adjustable voltage types. Examples of such integrated circuits are the 723 general-purpose regulator and the 78xx/79xx series.
=== Switching regulators ===
Switching regulators rapidly switch a series device on and off. The duty cycle of the switch sets how much charge is transferred to the load. This is controlled by a similar feedback mechanism as in a linear regulator. Because the series element is either fully conducting, or switched off, it dissipates almost no power; this is what gives the switching design its efficiency. Switching regulators are also able to generate output voltages which are higher than the input, or of opposite polarity—something not possible with a linear design. In switched regulators, the pass transistor is used as a "controlled switch" and is operated at either cutoff or saturated state. Hence the power transmitted across the pass device is in discrete pulses rather than a steady current flow. Greater efficiency is achieved since the pass device is operated as a low-impedance switch. When the pass device is at cutoff, there is no current and it dissipates no power. Again when the pass device is in saturation, a negligible voltage drop appears across it and thus dissipates only a small amount of average power, providing maximum current to the load. In either case, the power wasted in the pass device is very little and almost all the power is transmitted to the load. Thus the efficiency of a switched-mode power supply is remarkably high – in the range of 70–90%.
Switched mode regulators rely on pulse-width modulation to control the average value of the output voltage. The average value of a repetitive-pulse waveform depends on the area under the waveform. When the duty cycle is varied, the average voltage changes proportionally.
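For an ideal (lossless) buck, or step-down, converter the averaging argument above reduces to an output voltage equal to the duty cycle times the input voltage; for an ideal boost converter the output is the input divided by (1 − D). The sketch below shows only these idealized relationships and ignores switching and conduction losses.

```python
# Ideal (lossless) switching-regulator transfer functions controlled by
# duty cycle D, following the pulse-width-modulation averaging in the text.

def buck_vout(v_in, duty):    # step-down: Vout = D * Vin
    return duty * v_in

def boost_vout(v_in, duty):   # step-up: Vout = Vin / (1 - D)
    return v_in / (1.0 - duty)

print(buck_vout(12.0, 0.42))   # ~5.0 V
print(boost_vout(5.0, 0.58))   # ~11.9 V
```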
Like linear regulators, nearly complete switching regulators are also available as integrated circuits. Unlike linear regulators, these usually require an inductor that acts as the energy storage element. The IC regulators combine the reference voltage source, error op-amp, and pass transistor with short-circuit current limiting and thermal-overload protection.
Switching regulators are more prone to output noise and instability than linear regulators. However, they provide much better power efficiency than linear regulators.
=== SCR regulators ===
Regulators powered from AC power circuits can use silicon controlled rectifiers (SCRs) as the series device. Whenever the output voltage is below the desired value, the SCR is triggered, allowing electricity to flow into the load until the AC mains voltage passes through zero (ending the half cycle). SCR regulators have the advantages of being both very efficient and very simple, but because they cannot terminate an ongoing half cycle of conduction, they are not capable of very accurate voltage regulation in response to rapidly changing loads. An alternative is the SCR shunt regulator which uses the regulator output as a trigger. Both series and shunt designs are noisy, but powerful, as the device has a low on-resistance.
=== Combination or hybrid regulators ===
Many power supplies use more than one regulating method in series. For example, the output from a switching regulator can be further regulated by a linear regulator. The switching regulator accepts a wide range of input voltages and efficiently generates a (somewhat noisy) voltage slightly above the ultimately desired output. That is followed by a linear regulator that generates exactly the desired voltage and eliminates nearly all the noise generated by the switching regulator. Other designs may use an SCR regulator as the "pre-regulator", followed by another type of regulator. An efficient way of creating a variable-voltage, accurate output power supply is to combine a multi-tapped transformer with an adjustable linear post-regulator.
== Example of linear regulators ==
=== Transistor regulator ===
In the simplest case a common collector amplifier (emitter follower) is used, with the base of the regulating transistor connected directly to the voltage reference:
A simple transistor regulator will provide a relatively constant output voltage Uout for changes in the voltage Uin of the power source and for changes in load RL, provided that Uin exceeds Uout by a sufficient margin and that the power handling capacity of the transistor is not exceeded.
The output voltage of the stabilizer is equal to the Zener diode voltage minus the base–emitter voltage of the transistor, UZ − UBE, where UBE is usually about 0.7 V for a silicon transistor, depending on the load current. If the output voltage drops for any external reason, such as an increase in the current drawn by the load (causing an increase in the collector–emitter voltage to observe KVL), the transistor's base–emitter voltage (UBE) increases, turning the transistor on further and delivering more current to increase the load voltage again.
Rv provides a bias current for both the Zener diode and the transistor. The current in the diode is minimal when the load current is maximal. The circuit designer must choose a minimum voltage that can be tolerated across Rv, bearing in mind that the higher this voltage requirement is, the higher the required input voltage Uin, and hence the lower the efficiency of the regulator. On the other hand, lower values of Rv lead to higher power dissipation in the diode and to inferior regulator characteristics.
Rv is given by
R_v = min V_R / (min I_D + max I_L / (h_FE + 1)),
{\displaystyle R_{\text{v}}={\frac {\min V_{R}}{\min I_{\text{D}}+\max I_{\text{L}}/(h_{\text{FE}}+1)}},}
where
min VR is the minimum voltage to be maintained across Rv,
min ID is the minimum current to be maintained through the Zener diode,
max IL is the maximum design load current,
hFE is the forward current gain of the transistor (IC/IB).
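A minimal numeric sketch of the Rv expression above. The voltages, currents and current gain used are illustrative assumptions, not values from a particular design.

```python
# Series bias resistor Rv for the Zener-plus-transistor series regulator:
#   Rv = min(V_R) / (min(I_D) + max(I_L) / (h_FE + 1))
# All numbers below are illustrative assumptions.

def bias_resistor(v_r_min, i_d_min, i_l_max, h_fe):
    return v_r_min / (i_d_min + i_l_max / (h_fe + 1.0))

# 3 V minimum across Rv, 5 mA minimum Zener current, 1 A maximum load, h_FE = 100:
print(f"{bias_resistor(3.0, 0.005, 1.0, 100.0):.0f} ohms")  # ~201 ohms
```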
=== Regulator with a differential amplifier ===
The stability of the output voltage can be significantly increased by using a differential amplifier, possibly implemented as an operational amplifier:
In this case, the operational amplifier drives the transistor with more current if the voltage at its inverting input drops below the output of the voltage reference at the non-inverting input. Using the voltage divider (R1, R2 and R3) allows an arbitrary output voltage between Uz and Uin to be chosen.
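With a plain two-resistor divider feeding the inverting input, a simplification of the three-resistor adjustable divider described above, the loop settles when the divided-down output equals the reference, so the output is the reference multiplied by the divider ratio. The sketch below holds only under that simplifying assumption; the resistor names and values are hypothetical.

```python
# Output voltage of a feedback regulator whose error amplifier compares a
# divided-down output against the reference Uz. Assumes a simple two-resistor
# divider (Ra from the output to the inverting input, Rb from there to ground),
# a simplification of the adjustable R1/R2/R3 divider in the text.

def regulated_vout(u_z, r_a, r_b):
    return u_z * (1.0 + r_a / r_b)

print(regulated_vout(u_z=2.5, r_a=10_000.0, r_b=5_000.0))  # 7.5 V
```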
== Regulator specification ==
The output voltage can only be held constant within specified limits. The regulation is specified by two measurements:
Load regulation is the change in output voltage for a given change in load current (for example, "typically 15 mV, maximum 100 mV for load currents between 5 mA and 1.4 A, at some specified temperature and input voltage").
Line regulation or input regulation is the degree to which output voltage changes with input (supply) voltage changes—as a ratio of output to input change (for example, "typically 13 mV/V"), or the output voltage change over the entire specified input voltage range (for example, "plus or minus 2% for input voltages between 90 V and 260 V, 50–60 Hz").
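Both figures are ratios of measured changes between two operating points. The sketch below shows the arithmetic only; the example measurements are illustrative assumptions chosen to reproduce the magnitudes quoted in the definitions above.

```python
# Load and line regulation computed from two measured operating points.
# All measurements below are illustrative assumptions.

def load_regulation_mv(v_out_light_load, v_out_heavy_load):
    return (v_out_light_load - v_out_heavy_load) * 1000.0   # mV over the load step

def line_regulation_mv_per_v(delta_v_out, delta_v_in):
    return (delta_v_out / delta_v_in) * 1000.0              # mV of output per V of input

print(load_regulation_mv(5.010, 4.995))       # 15.0 mV for the stated load step
print(line_regulation_mv_per_v(0.065, 5.0))   # 13.0 mV/V
```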
Other important parameters are:
Temperature coefficient of the output voltage is the change with temperature (perhaps averaged over a given temperature range).
Initial accuracy of a voltage regulator (or simply "the voltage accuracy") reflects the error in output voltage for a fixed regulator without taking into account temperature or aging effects on output accuracy.
Dropout voltage is the minimum difference between input voltage and output voltage at which the regulator can still supply the specified current; below this input–output differential the regulator no longer maintains regulation, and further reduction in input voltage results in reduced output voltage. This value depends on load current and junction temperature.
Inrush current or input surge current or switch-on surge is the maximum instantaneous input current drawn by an electrical device when first turned on. Inrush current usually lasts from a few milliseconds up to about half a second, but it is often very high, which makes it dangerous because it can degrade and burn components gradually (over months or years), especially if there is no inrush current protection. Alternating current transformers or electric motors in automatic voltage regulators may draw and output several times their normal full-load current for a few cycles of the input waveform when first energized or switched on. Power converters also often have inrush currents much higher than their steady-state currents, due to the charging current of the input capacitance.
Absolute maximum ratings are defined for regulator components, specifying the continuous and peak output currents that may be used (sometimes internally limited), the maximum input voltage, maximum power dissipation at a given temperature, etc.
Output noise (thermal white noise) and output dynamic impedance may be specified as graphs versus frequency, while output ripple noise (mains "hum" or switch-mode "hash" noise) may be given as peak-to-peak or RMS voltages, or in terms of their spectra.
Quiescent current in a regulator circuit is the current drawn internally, not available to the load, normally measured as the input current while no load is connected and hence a source of inefficiency (some linear regulators are, surprisingly, more efficient at very low current loads than switch-mode designs because of this).
Transient response is the reaction of a regulator when a (sudden) change of the load current (called the load transient) or input voltage (called the line transient) occurs. Some regulators tend to oscillate or have a slow response time, which in some cases might lead to undesired results. This value differs from the regulation parameters, which describe the stable (steady-state) situation; the transient response shows the behaviour of the regulator during a change. This data is usually provided in the technical documentation of a regulator and is also dependent on output capacitance.
Mirror-image insertion protection means that a regulator is designed for use when a voltage, usually not higher than the maximum input voltage of the regulator, is applied to its output pin while its input terminal is at a low voltage, volt-free or grounded. Some regulators can continuously withstand this situation. Others might only manage it for a limited time such as 60 seconds (usually specified in the data sheet). For instance, this situation can occur when a three terminal regulator is incorrectly mounted on a PCB, with the output terminal connected to the unregulated DC input and the input connected to the load. Mirror-image insertion protection is also important when a regulator circuit is used in battery charging circuits, when external power fails or is not turned on and the output terminal remains at battery voltage.
== See also ==
Charge controller
Constant current regulator
DC-to-DC converter
List of LM-series integrated circuits
Third-brush dynamo
Voltage comparator
Voltage regulator module
== References ==
== Further reading ==
Linear & Switching Voltage Regulator Handbook; ON Semiconductor; 118 pages; 2002; HB206/D. (Free PDF download)
Energy demand management, also known as demand-side management (DSM) or demand-side response (DSR), is the modification of consumer demand for energy through various methods such as financial incentives and behavioral change through education.
Usually, the goal of demand-side management is to encourage the consumer to use less energy during peak hours, or to move the time of energy use to off-peak times such as nighttime and weekends. Peak demand management does not necessarily decrease total energy consumption, but could be expected to reduce the need for investments in networks and/or power plants for meeting peak demands. An example is the use of energy storage units to store energy during off-peak hours and discharge them during peak hours.
A newer application for DSM is to aid grid operators in balancing variable generation from wind and solar units, particularly when the timing and magnitude of energy demand does not coincide with the renewable generation. Generators brought on line during peak demand periods are often fossil fuel units. Minimizing their use reduces emissions of carbon dioxide and other pollutants.
The term DSM was coined in the wake of the 1973 and 1979 energy crises. Governments of many countries mandated performance of various programs for demand management. An early example is the National Energy Conservation Policy Act of 1978 in the U.S., preceded by similar actions in California and Wisconsin. Demand-side management was introduced publicly by the Electric Power Research Institute (EPRI) in the 1980s. DSM technologies have become increasingly feasible owing to the integration of information and communications technology with the power system, reflected in new terms such as integrated demand-side management (IDSM) and smart grid.
== Operation ==
The American electric power industry originally relied heavily on foreign energy imports, whether in the form of consumable electricity or fossil fuels that were then used to produce electricity. During the time of the energy crises in the 1970s, the federal government passed the Public Utility Regulatory Policies Act (PURPA), hoping to reduce dependence on foreign oil and to promote energy efficiency and alternative energy sources. This act forced utilities to obtain the cheapest possible power from independent power producers, which in turn promoted renewables and encouraged the utility to reduce the amount of power they need, hence pushing forward agendas for energy efficiency and demand management.
Electricity use can vary dramatically on short and medium time frames, depending on current weather patterns. Generally the wholesale electricity system adjusts to changing demand by dispatching additional or less generation. However, during peak periods, the additional generation is usually supplied by less efficient ("peaking") sources. Unfortunately, the instantaneous financial and environmental cost of using these "peaking" sources is not necessarily reflected in the retail pricing system. In addition, the ability or willingness of electricity consumers to adjust to price signals by altering demand (elasticity of demand) may be low, particularly over short time frames. In many markets, consumers (particularly retail customers) do not face real-time pricing at all, but pay rates based on average annual costs or other constructed prices.
Energy demand management activities attempt to bring the electricity demand and supply closer to a perceived optimum, and help give electricity end users benefits for reducing their demand. In the modern system, the integrated approach to demand-side management is becoming increasingly common. IDSM automatically sends signals to end-use systems to shed load depending on system conditions. This allows very precise tuning of demand to ensure that it matches supply at all times and reduces capital expenditures for the utility. Critical system conditions could be peak times or, in areas with high levels of variable renewable energy, times when demand must be adjusted upward to avoid over-generation or downward to help with ramping needs.
In general, adjustments to demand can occur in various ways: through responses to price signals, such as permanent differential rates for evening and day times or occasional highly priced usage days, behavioral changes achieved through home area networks, automated controls such as with remotely controlled air-conditioners, or with permanent load adjustments with energy efficient appliances.
== Logical foundations ==
Demand for any commodity can be modified by actions of market players and government (regulation and taxation). Energy demand management implies actions that influence demand for energy. DSM was originally adopted in electricity, but today it is applied widely to utilities including water and gas as well.
Reducing energy demand is contrary to what both energy suppliers and governments have been doing during most of the modern industrial history. Whereas real prices of various energy forms have been decreasing during most of the industrial era, due to economies of scale and technology, the expectation for the future is the opposite. Previously, it was not unreasonable to promote energy use as more copious and cheaper energy sources could be anticipated in the future or the supplier had installed excess capacity that would be made more profitable by increased consumption.
In centrally planned economies subsidizing energy was one of the main economic development tools. Subsidies to the energy supply industry are still common in some countries.
Contrary to the historical situation, energy prices and availability are expected to deteriorate. Governments and other public actors, if not the energy suppliers themselves, are tending to employ energy demand measures that will increase the efficiency of energy consumption.
== Types ==
Energy efficiency: Using less power to perform the same tasks. This involves a permanent reduction of demand by using more efficient load-intensive appliances such as water heaters, refrigerators, or washing machines.
Demand response: Any reactive or preventative method to reduce, flatten or shift demand. Historically, demand response programs have focused on peak reduction to defer the high cost of constructing generation capacity. However, demand response programs are now also expected to help change the net load shape (load minus solar and wind generation) to support the integration of variable renewable energy. Demand response includes all intentional modifications to the consumption patterns of electricity of end-user customers that are intended to alter the timing, the level of instantaneous demand, or the total electricity consumption. Demand response refers to a wide range of actions which can be taken at the customer side of the electricity meter in response to particular conditions within the electricity system (such as peak-period network congestion or high prices), including the aforementioned IDSM.
Dynamic demand: Advance or delay appliance operating cycles by a few seconds to increase the diversity factor of the set of loads. The concept is that by monitoring the frequency of the power grid, as well as their own control parameters, individual, intermittent loads would switch on or off at optimal moments to balance the overall system load with generation, reducing critical power mismatches. As this switching would only advance or delay the appliance operating cycle by a few seconds, it would be unnoticeable to the end user. In the United States, in 1982, a (now-lapsed) patent for this idea was issued to power systems engineer Fred Schweppe. This type of dynamic demand control is frequently used for air-conditioners. One example of this is through the SmartAC program in California.
Distributed energy resources: Distributed generation, also distributed energy, on-site generation (OSG) or district/decentralized energy is electrical generation and storage performed by a variety of small, grid-connected devices referred to as distributed energy resources (DER). Conventional power stations, such as coal-fired, gas and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular and more flexible technologies, that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance they are referred to as hybrid power systems. DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role for the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system, and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables collection of energy from many sources and may lower environmental impacts and improve security of supply.
== Scale ==
Broadly, demand side management can be classified into four categories: national scale, utility scale, community scale, and individual household scale.
=== National scale ===
Energy efficiency improvement is one of the most important demand side management strategies. Efficiency improvements can be implemented nationally through legislation and standards in housing, building, appliances, transport, machines, etc.
=== Utility scale ===
During peak demand times, utilities are able to control storage water heaters, pool pumps and air conditioners over large areas to reduce peak demand, e.g. in Australia and Switzerland. One of the common technologies is ripple control: a higher-frequency signal (e.g. 1000 Hz) is superimposed on the normal mains waveform (50 or 60 Hz) to switch devices on or off.
In more service-based economies, such as Australia, electricity network peak demand often occurs in the late afternoon to early evening (4pm to 8pm). Residential and commercial demand is the most significant part of these types of peak demand. Therefore, it makes great sense for utilities (electricity network distributors) to manage residential storage water heaters, pool pumps, and air conditioners.
=== Community scale ===
Other names include neighborhood, precinct, or district scale. Community central heating systems have existed for many decades in regions with cold winters. Similarly, peak demand in summer-peaking regions needs to be managed, e.g. Texas and Florida in the U.S., Queensland and New South Wales in Australia. Demand side management can be implemented at the community scale to reduce peak demand for heating or cooling. Another aspect is to achieve a net zero-energy building or community.
Managing energy, peak demand and bills at the community level may be more feasible and viable because of the collective purchasing power, the bargaining power, more options in energy efficiency or storage, and more flexibility and diversity in generating and consuming energy at different times, e.g. using PV to offset daytime consumption or for energy storage.
=== Household scale ===
In areas of Australia, more than 30% (2016) of households have rooftop photovoltaic systems. It is useful for them to use free energy from the sun to reduce energy imports from the grid. Further, demand side management can be helpful when a systematic approach is taken, coordinating the operation of photovoltaics, air conditioners, battery energy storage systems and storage water heaters with building performance and energy efficiency measures.
== Examples ==
=== Queensland, Australia ===
The utility companies in the state of Queensland, Australia have devices fitted onto certain household appliances, such as air conditioners, or into household meters to control water heaters, pool pumps, etc. These devices allow energy companies to remotely cycle the use of these items during peak hours. Their plan also includes improving the efficiency of energy-using items and giving financial incentives to consumers who use electricity during off-peak hours, when it is less expensive for energy companies to produce.
Another example is that with demand side management, Southeast Queensland households can use electricity from rooftop photovoltaic systems to heat water.
=== Toronto, Canada ===
In 2008, Toronto Hydro, the monopoly electricity distributor for the city of Toronto, had over 40,000 people signed up to have remote devices attached to air conditioners which energy companies use to offset spikes in demand. Spokeswoman Tanya Bruckmueller says that this program can reduce demand by 40 megawatts during emergency situations.
=== Indiana, US ===
The Alcoa Warrick Operation is participating in MISO as a qualified demand response resource, which means it is providing demand response in terms of energy, spinning reserve, and regulation service.
=== Brazil ===
Demand-side management can apply to electricity systems based on thermal power plants, or to systems where renewable energy, such as hydroelectricity, is predominant but complemented by thermal generation, as in Brazil.
In Brazil's case, although hydroelectric power corresponds to more than 80% of total generation, to achieve a practical balance in the generation system the energy generated by hydroelectric plants supplies the consumption below the peak demand. Peak generation is supplied by fossil-fuel power plants. In 2008, Brazilian consumers paid more than US$1 billion for complementary thermoelectric generation that had not previously been programmed.
In Brazil, the consumer pays for all the investment to provide energy, even if a plant sits idle. For most fossil-fuel thermal plants, consumers pay for the fuel and other operating costs only when these plants generate energy. The energy, per unit generated, is more expensive from thermal plants than from hydroelectric plants. Only a few of Brazil's thermoelectric plants use natural gas, so they pollute significantly more than hydroelectric plants. The power generated to meet peak demand has higher investment and operating costs, and the associated pollution carries a significant environmental cost and, potentially, financial and social liability. Thus, the expansion and operation of the current system is not as efficient as it could be using demand side management. The consequence of this inefficiency is an increase in energy tariffs that is passed on to consumers.
Moreover, because electric energy is generated and consumed almost instantaneously, all the facilities, such as transmission lines and distribution networks, are built for peak consumption. During non-peak periods their full capacity is not utilized.
The reduction of peak consumption can benefit the efficiency of electric systems, like the Brazilian system, in various ways: deferring new investments in distribution and transmission networks, and reducing the need for complementary thermal power operation during peak periods, which can diminish both the payment for investment in new power plants that supply energy only during the peak period and the environmental impact associated with greenhouse gas emissions.
== Issues ==
Some people argue that demand-side management has been ineffective because it has often resulted in higher utility costs for consumers and less profit for utilities.
One of the main goals of demand side management is to be able to charge the consumer based on the true price of the utilities at that time. If consumers could be charged less for using electricity during off-peak hours, and more during peak hours, then supply and demand would theoretically encourage the consumer to use less electricity during peak hours, thus achieving the main goal of demand side management.
== See also ==
== Notes ==
== References ==
Loughran, David S; Kulick, Jonathan (2004). "Demand-Side Management and Energy Efficiency in the United States". The Energy Journal. 25 (1): 19–43. Bibcode:2004EnerJ..25...19L. doi:10.5547/issn0195-6574-ej-vol25-no1-2.
Dunn, Rodney (23 June 2002). "Electric Utility Demand-Side Management 1999". US Energy Information Administration. Retrieved 9 November 2010.
"Demand-Side Management". Pacificorp: A Midamerican Energy Holdings Company. 2010. Archived from the original on 13 October 2010. Retrieved 9 November 2010.
Sarkar, Ashok & Singh, Jas (October 2009). "Financing Energy Efficiency in Developing Countries – Lessons Learned and Remaining Challenges" (PDF). United States Energy Association. The World Bank. Archived from the original (PDF) on 13 August 2010. Retrieved 9 November 2010.
Simmons, Daniel (20 May 2010). "Demand-Side Management: Government Planning, Not Market Conservation (Testimony of Dan Simmons Before the Georgia Public Service Commission)". MasterResource. Retrieved 9 November 2010.
=== Works cited ===
Assessment of Long Term, System Wide Potential for Demand-Side and Other Supplemental Resources (PDF). PacificCorp (Report). Vol. 1 (Final Report ed.). Portland: Quantec. 2006. Archived from the original (PDF) on 28 September 2011. Retrieved 7 November 2011.
Brennan, Timothy J (2010). "Optimal energy efficiency policies and regulatory demand-side management tests: How well do they match?" (PDF). Energy Policy. 38 (8): 3874–85. Bibcode:2010EnPol..38.3874B. doi:10.1016/j.enpol.2010.03.007.
Moura, Pedro S; De Almeida, Aníbal T (2010). "The role of demand-side management in the grid integration of wind power". Applied Energy. 87 (8): 2581–8. doi:10.1016/j.apenergy.2010.03.019.
Primer on Demand-Side Management (PDF) (Report) (Rep. no. D06090 ed.). Oakland: Charles River Associates. 2005.
== External links ==
Demand-Side Management Programme IEA
Energy subsidies in the European Union: A brief overview
Managing Energy Demand seminar, Bern, 4 November 2009
Torriti, Jacopo (2012). "Demand Side Management for the European Supergrid: Occupancy variances of European single-person households". Energy Policy. 44: 199–206. Bibcode:2012EnPol..44..199T. doi:10.1016/j.enpol.2012.01.039.
UK Demand Side Response
High-voltage transformer fire barriers, also known as transformer firewalls, transformer ballistic firewalls, or transformer blast walls, are outdoor countermeasures against a fire or explosion involving a single transformer from damaging adjacent transformers. These barriers compartmentalize transformer fires and explosions involving combustible transformer oil.
High-voltage transformer fire barriers are typically located in electrical substations, but may also be attached to buildings, such as valve halls or manufacturing plants with large electrical distribution systems, such as pulp and paper mills. Outdoor transformer fire barriers that are attached at least on one side to a building are referred to as wing walls.
== Voluntary recommendations by NFPA 850 ==
The primary North American document that deals with outdoor high-voltage transformer fire barriers is NFPA 850. NFPA 850 outlines that outdoor oil-insulated transformers should be separated from adjacent structures and from each other by firewalls, spatial separation, or other approved means for the purpose of limiting the damage and potential spread of fire from a transformer failure.
=== Automatic fire suppression systems ===
Instead of a passive barrier, fire protection water spray systems are sometimes used to cool a transformer to prevent damage if exposed to radiation heat transfer from a fire involving oil released from another transformer that has failed.
=== Transformer Fast Depressurization Systems (FDS) ===
Mechanical systems designed to quickly depressurize the transformer oil tank after the occurrence of an electrical fault can minimize the chance that a transformer tank will rupture given a minor fault, but are not effective on major internal faults.
=== Alternatives to mineral-based transformer oil ===
Transformer oil is available with sufficiently low combustibility that a fire will not continue after an internal electrical fault. These fluids include those approved by FM Global. FM Data Sheet 5-4 indicates different levels of protection depending on the type of fluid used. Alternatives include, but are not limited to, esters and silicone oil.
== See also ==
Arc fault
Cascading failure
Electrical power distribution
Fire test
Firewall (construction)
North American Electric Reliability Corporation
Passive fire protection
== References ==
A battery energy storage system (BESS), battery storage power station, battery energy grid storage (BEGS) or battery grid storage is a type of energy storage technology that uses a group of batteries in the grid to store electrical energy. Battery storage is the fastest responding dispatchable source of power on electric grids, and it is used to stabilise those grids, as battery storage can transition from standby to full power in under a second to deal with grid contingencies.
Battery energy storage systems are generally designed to deliver their full rated power for durations ranging from 1 to 4 hours, with emerging technologies extending this to longer durations to meet evolving grid demands. Battery storage can be used for short-term peak power and ancillary services, such as providing operating reserve and frequency control to minimize the chance of power outages. They are often installed at, or close to, other active or disused power stations and may share the same grid connection to reduce costs. Since battery storage plants require no deliveries of fuel, are compact compared to generating stations and have no chimneys or large cooling systems, they can be rapidly installed and placed if necessary within urban areas, close to customer load, or even inside customer premises.
As of 2021, the power and capacity of the largest individual battery storage system is an order of magnitude less than that of the largest pumped-storage power plants, the most common form of grid energy storage. For example, the Bath County Pumped Storage Station, the second largest in the world, can store 24 GWh of electricity and dispatch 3 GW while the first phase of Vistra Energy's Moss Landing Energy Storage Facility can store 1.2 GWh and dispatch 300 MW. However, grid batteries do not have to be large — a high number of smaller ones (often as hybrid power) can be widely deployed across a grid for greater redundancy and large overall capacity.
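The relationship between energy capacity and power rating in these comparisons is simple division; the sketch below reproduces the arithmetic for the two facilities mentioned above, giving their nominal discharge durations at full rated power.

```python
# Nominal figures quoted above: energy capacity (MWh) and power (MW).
facilities = {
    "Bath County Pumped Storage": (24_000, 3_000),   # 24 GWh, 3 GW
    "Moss Landing Phase 1":       (1_200,  300),     # 1.2 GWh, 300 MW
}

for name, (energy_mwh, power_mw) in facilities.items():
    hours = energy_mwh / power_mw   # duration at full rated power
    print(f"{name}: {hours:.0f} hours at full power")
# Bath County: 8 hours; Moss Landing: 4 hours
```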
As of 2019, battery power storage is typically cheaper than open cycle gas turbine power for use up to two hours, and there was around 365 GWh of battery storage deployed worldwide, growing rapidly.
Levelized cost of storage (LCOS) has fallen rapidly. From 2014 to 2024, cost halving time was 4.1 years. The price was US$150 per MWh in 2020, and further reduced to US$117 by 2023.
== Construction ==
Battery storage power plants and uninterruptible power supplies (UPS) are comparable in technology and function. However, battery storage power plants are larger.
For safety and security, the actual batteries are housed in their own structures, like warehouses or containers. As with a UPS, one concern is that electrochemical energy is stored or emitted in the form of direct current (DC), while electric power networks are usually operated with alternating current (AC). For this reason, additional inverters are needed to connect the battery storage power plants to the high voltage network. This kind of power electronics includes gate turn-off thyristors, commonly used in high-voltage direct current (HVDC) transmission.
Various accumulator systems may be used depending on the power-to-energy ratio, the expected lifetime and the costs. In the 1980s, lead-acid batteries were used for the first battery-storage power plants. During the next few decades, nickel–cadmium and sodium–sulfur batteries were increasingly used. Since 2010, more and more utility-scale battery storage plants rely on lithium-ion batteries, as a result of the fast decrease in the cost of this technology, caused by the electric automotive industry. Lithium-ion batteries are mainly used. A 4-hour flow vanadium redox battery at 175 MW / 700 MWh opened in 2024. Lead-acid batteries are still used in small budget applications.
== Safety ==
Most BESS systems are composed of securely sealed battery packs, which are electronically monitored and replaced once their performance falls below a given threshold. Batteries suffer from cycle ageing, or deterioration caused by charge–discharge cycles. This deterioration is generally higher at high charging rates and higher depths of discharge. This ageing causes a loss of performance (capacity or voltage decrease) and overheating, and may eventually lead to critical failure (electrolyte leaks, fire, explosion). Sometimes battery storage power stations are built with flywheel storage power systems in order to conserve battery power. Flywheels may handle rapid fluctuations better than older battery plants.
BESS warranties typically include lifetime limits on energy throughput, expressed as number of charge–discharge cycles.
=== Lead-acid based batteries ===
Lead-acid batteries, as a first-generation technology, are generally used in older BESS systems. One example is a 1.6 MW peak, 1.0 MW continuous battery commissioned in 1997. Compared to modern rechargeable batteries, lead-acid batteries have relatively low energy density. Despite this, they are able to supply high surge currents. However, non-sealed lead-acid batteries produce hydrogen and oxygen from the aqueous electrolyte when overcharged. The water has to be refilled regularly to avoid damage to the battery, and the inflammable gases have to be vented out to avoid explosion risks. This maintenance has a cost, and recent batteries such as Li-ion batteries do not have such an issue.
=== Lithium based batteries ===
Lithium-ion batteries are designed to have a long lifespan without maintenance. They generally have high energy density and low self-discharge. Due to these properties, most modern BESS are lithium-ion-based batteries.
A drawback of some types of lithium-ion batteries is fire safety, mostly in chemistries containing cobalt. The number of BESS incidents has remained around 10–20 per year (mostly within the first 2–3 years of age) despite the large increase in the number and size of BESS, so the failure rate has decreased. Failures occurred mostly in controls and balance of system, while 11% occurred in cells.
Examples of BESS fire accidents include individual modules in 23 battery farms in South Korea in 2017 to 2019, a Tesla Megapack in Geelong, the fire and subsequent explosion of a battery module in Arizona, and the cooling liquid short circuiting incidents and fire at the Moss Landing LG battery.
This resulted in more research in recent years for mitigation measures for fire safety.
By 2024, the lithium iron phosphate (LFP) battery has become another significant type for large storages due to the high availability of its components, longer lifetime and higher safety compared to nickel-based Li-ion chemistries. An LFP-based energy storage system that was installed in Paiyun Lodge on Mt. Jade (Yushan) (the highest alpine lodge in Taiwan) and operated since 2016, has, as of 2024, operated without a safety incident.
=== Sodium-based batteries ===
Alternatively, sodium-based batteries are increasingly being considered for BESS applications. Compared to lithium-ion batteries, sodium-ion batteries have somewhat lower cost, better safety characteristics, and similar power delivery characteristics, but a lower energy density. Their working principle and cell construction are similar to those of lithium-ion battery (LIB) types, but sodium replaces lithium as the intercalating ion. Some sodium-based batteries can also operate safely at high temperatures (sodium–sulfur battery). Some notable sodium battery producers with high safety claims include (non-exclusively) Altris AB, SgNaPlus and Tiamat. Sodium-based batteries are not fully commercialised yet. The largest BESS utilizing sodium-ion technology started operating in 2024 in Hubei province and boasts a capacity of 50 MW / 100 MWh.
== Operating characteristics ==
Since they do not have any mechanical parts, battery storage power plants offer extremely short control times and start times, as little as 10 ms. They can therefore help dampen the fast oscillations that occur when electrical power networks are operated close to their maximum capacity or when grids suffer anomalies. These instabilities – fluctuations with periods of as much as 30 seconds – can produce peak swings of such amplitude that they can cause regional blackouts. Some of the parameters are voltage, frequency and phase. A properly sized battery storage power plant can efficiently counteract these oscillations; therefore, applications are found primarily in those regions where electrical power systems are operated at full capacity, leading to a risk of instability. However, some batteries have insufficient control systems, failing during moderate disruptions they should have tolerated. Batteries are also commonly used for peak shaving for periods of up to a few hours. A more recent use is strengthening transmission, as long power lines can be operated closer to their capacity when batteries handle the local difference between supply and demand.
Storage plants can also be used in combination with an intermittent renewable energy source in stand-alone power systems.
== Largest grid batteries ==
=== Under construction ===
=== Planned ===
== Market development and deployment ==
While the capacity of grid batteries is small compared to the other major form of grid storage, pumped hydroelectricity, the battery market is growing very fast as prices drop. Relative to 2010, batteries and photovoltaics have followed roughly the same downward price curve due to the learning rate. Cells are the major cost component, accounting for 30–40% of the cost of a full system.
At the end of 2024, China had 62 GW / 141 GWh of battery power stations. In 2020, China added 1,557 MW to its battery storage capacity, with storage facilities for photovoltaic projects accounting for 27% of the additions, bringing the total electrochemical energy storage capacity to 3,269 MW.
USA installed 12.3 GW and 37.1 GWh of batteries in 2024. In 2022, US capacity doubled to 9 GW / 25 GWh. At the end of 2021, the capacity grew to 4,588 MW. The 2021 price of a 60 MW / 240 MWh (4-hour) battery installation in the United States was US$379/usable kWh, or US$292/nameplate kWh, a 13% drop from 2020. In 2010, the United States had 59 MW of battery storage capacity from 7 battery power plants. This increased to 49 plants comprising 351 MW of capacity in 2015. In 2018, the capacity was 869 MW from 125 plants, capable of storing a maximum of 1,236 MWh of generated electricity. By the end of 2020, the battery storage capacity reached 1,756 MW. The US market for storage power plants in 2015 increased by 243% compared to 2014.
In June 2024 the capacity was 4.6 GW of power and 5.9 GWh of energy in the United Kingdom. In 2022, UK capacity grew by 800 MWh, ending at 2.4 GW / 2.6 GWh. As of May 2021, 1.3 GW of battery storage was operating, with 16 GW of projects in the pipeline potentially deployable over the next few years.
At the end of 2024, Europe had 61 GWh of battery energy capacity, after adding 21 GWh that year - 6 GWh each in Germany and Italy, and price averaged €300-400 per kilowatt-hour installed. Europe added 1.9 GW in 2022.
Some developers are building storage systems from old batteries of electric cars, where costs may be halved compared to the original price. However, due to the cost drop of new batteries, second-hand buyers may only be willing to pay 10% of the original price. A 53 MWh battery made from 900 electric cars started in 2024.
Following the blackout on 28 April 2025 that disconnected the Iberian grid from Europe in just five seconds and caused losses of up to €4.5 billion, the importance of resilience in the power system has become increasingly evident in Spain with Battery Energy Storage Systems emerging as a cornerstone of Spain’s energy transition. Leading firms such as Iberdrola and Solaria are now actively developing hybrid solar and battery projects to counteract the effects of solar overproduction and the resulting decline in market prices, with Solaria alone initiating eight new BESS installations in Castilla y León and Castilla-La Mancha.
== See also ==
List of energy storage power plants
== References ==
A transformer is a device that transfers electrical energy from one circuit to another.
Transformer may also refer to:
== Art and entertainment ==
Characters in the Transformers franchise
Transformer (film), a 2017 Canadian documentary
Transformer, a 1986 Sega arcade game
Transformer: The Deep Chemistry of Life and Death, a 2022 book by Nick Lane
=== Music ===
Transformer (David Stoughton album), 1968
Transformer (Lou Reed album), 1972
Transformer (Bruce Kulick album), 2003
"Transformer", a song by Gnarls Barkley from St. Elsewhere
"Transformer", a song by Marnie Stern from This Is It and I Am It and You Are It and So Is That and He Is It and She Is It and It Is It and That Is That
== Science and technology ==
Transformer (deep learning architecture), a machine learning architecture
Transformer (flying car), a DARPA military project
"Electronic transformer", a term commonly used in extra-low-voltage lighting applications for a switched-mode power supply
Asus Transformer, a series of hybrid tablet computers
TrikeBuggy Transformer, a U.S. powered hang glider
Transformer (gene), a family of genes that regulate sex determination in some insects
== Other uses ==
Transformer (spirit-being), an indigenous tradition of the Pacific Northwest of North America
Prada Transformer, a building in Seoul, South Korea
== See also ==
Transformers (disambiguation)
A current transformer (CT) is a type of transformer that reduces or multiplies alternating current (AC), producing a current in its secondary which is proportional to the current in its primary.
Current transformers, along with voltage or potential transformers, are instrument transformers, which scale the large values of voltage or current to small, standardized values that are easy to handle for measuring instruments and protective relays. Instrument transformers isolate measurement or protection circuits from the high voltage of the primary system. A current transformer presents a negligible load to the primary circuit.
Current transformers are the current-sensing units of the power system and are used at generating stations, electrical substations, and in industrial and commercial electric power distribution.
== Function ==
A current transformer has a primary winding, a core, and a secondary winding, although some transformers use an air core. While the physical principles are the same, the details of a "current" transformer compared with a "voltage" transformer will differ owing to different requirements of the application. A current transformer is designed to maintain an accurate ratio between the currents in its primary and secondary circuits over a defined range.
The alternating current in the primary produces an alternating magnetic field in the core, which then induces an alternating current in the secondary. The primary circuit is largely unaffected by the insertion of the CT. Accurate current transformers need close coupling between the primary and secondary to ensure that the secondary current is proportional to the primary current over a wide current range. The current in the secondary is the current in the primary (assuming a single turn primary) divided by the number of turns of the secondary. In the illustration on the right, 'I' is the current in the primary, 'B' is the magnetic field, 'N' is the number of turns on the secondary, and 'A' is an AC ammeter.
Current transformers typically consist of a silicon steel ring core wound with many turns of copper wire, as shown in the illustration to the right. The conductor carrying the primary current is passed through the ring. The CT's primary, therefore, consists of a single 'turn'. The primary 'winding' may be a permanent part of the current transformer, i.e., a heavy copper bar to carry current through the core. Window-type current transformers are also common, which can have circuit cables run through the middle of an opening in the core to provide a single-turn primary winding. To assist accuracy, the primary conductor should be centered in the aperture.
CTs are specified by their current ratio from primary to secondary. The rated secondary current is normally standardized at 1 or 5 amperes. For example, a 4000:5 CT secondary winding will supply an output current of 5 amperes when the primary winding current is 4000 amperes. This ratio can also be used to find the impedance or voltage on one side of the transformer, given the appropriate value at the other side. For the 4000:5 CT, the secondary impedance can be found as ZS = NZP = 800ZP, and the secondary voltage can be found as VS = NVP = 800VP. In some cases, the secondary impedance is referred to the primary side, and is found as ZS′ = N2ZP. Referring the impedance is done simply by multiplying initial secondary impedance value by the current ratio. The secondary winding of a CT can have taps to provide a range of ratios, five taps being common.
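The ratio arithmetic described above can be illustrated with a short sketch. The 4000:5 rating mirrors the example in the text, while the measured primary current and the burden resistance are assumed figures chosen only to show the calculation.

```python
# 4000:5 current transformer, as in the example in the text.
primary_rating = 4000.0     # rated primary current, A
secondary_rating = 5.0      # rated secondary current, A
N = primary_rating / secondary_rating    # effective turns ratio = 800

i_primary = 3200.0           # primary current being measured, A (assumed)
i_secondary = i_primary / N  # current delivered to the burden = 4.0 A

burden_ohms = 0.2            # resistive burden on the secondary, ohms (assumed)
v_burden = i_secondary * burden_ohms     # voltage the CT must develop, V

print(f"Secondary current: {i_secondary:.1f} A")
print(f"Voltage across the burden: {v_burden:.2f} V")
```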
Current transformer shapes and sizes vary depending on the end-user or switch gear manufacturer. Low-voltage single ratio metering current transformers are either a ring type or plastic molded case.
Split-core current transformers either have a two-part core or a core with a removable section. This allows the transformer to be placed around a conductor without disconnecting it first. Split-core current transformers are typically used in low current measuring instruments, often portable, battery-operated, and hand-held (see illustration lower right).
== Use ==
Current transformers are used extensively for measuring current and monitoring the operation of the power grid. Along with voltage leads, revenue-grade CTs drive the electrical utility's watt-hour meter on many larger commercial and industrial supplies.
High-voltage current transformers are mounted on porcelain or polymer insulators to isolate them from ground. Some CT configurations slip around the bushing of a high-voltage transformer or circuit breaker, which automatically centers the conductor inside the CT window.
Current transformers can be mounted on the low voltage or high voltage leads of a power transformer. Sometimes a section of a bus bar can be removed to replace a current transformer.
Often, multiple CTs are installed as a "stack" for various uses. For example, protection devices and revenue metering may use separate CTs to provide isolation between metering and protection circuits and allows current transformers with different characteristics (accuracy, overload performance) to be used for the devices.
In the United States, the National Electrical Code (NEC) requires residual current devices in commercial and residential electrical systems to protect outlets installed in "wet" locations such as kitchens and bathrooms, as well as weatherproof outlets installed outdoors. Such devices, most commonly ground fault circuit interrupters (GFCIs), typically run both the 120-volt energized conductor and the neutral return conductor through a current transformer, with the secondary coil connected to a trip device.
Under normal conditions, the current in the two circuit wires will be equal and flow in opposite directions, resulting in zero net current through the CT and no current in the secondary coil. If the supply current is redirected downstream into the third (ground) circuit conductor (e.g., if the grounded metallic case of a power tool contacts a 120-volt conductor), or into earth ground (e.g., if a person contacts a 120-volt conductor), the neutral return current will be less than the supply current, resulting in a positive net current flow through the CT. This net current flow will induce current in the secondary coil, which will cause the trip device to operate and de-energize the circuit - typically within 0.2 seconds.
The burden (load) impedance should not exceed the specified maximum value to avoid the secondary voltage exceeding the limits for the current transformer. The primary current rating of a current transformer should not be exceeded, or the core may enter its non-linear region and ultimately saturate. This would occur near the end of the first half of each half (positive and negative) of the AC sine wave in the primary and compromise accuracy.
== Safety ==
Current transformers are often used to monitor high currents or currents at high voltages. Technical standards and design practices are used to ensure the safety of installations using current transformers.
The secondary of a current transformer should not be disconnected from its burden while current is in the primary, as the secondary will attempt to continue driving current into an effectively infinite impedance, potentially generating high voltages and thus compromising operator safety. For certain current transformers, this voltage may reach several kilovolts and may cause arcing. Exceeding the secondary voltage may also degrade the accuracy of the transformer or destroy it. The output voltage in open-circuit operation is limited by core saturation, since the primary flux is no longer canceled by secondary flux, and smaller current transformers may not actually produce dangerous voltages when operating nominally. Faster current transients from loads being switched on, etc., can however still induce dangerous voltage levels due to the high current slope.
== Accuracy ==
The accuracy of a CT is affected by a number of factors including:
Burden
Burden class/saturation class
Rating factor
Load
External electromagnetic fields
Temperature
Physical configuration
The selected tap, for multi-ratio CTs
Phase change
Capacitive coupling between primary and secondary
Resistance of primary and secondary
Core magnetizing current
Accuracy classes for various types of measurement and at standard loads in the secondary circuit (burdens) are defined in IEC 61869-1 as classes 0.1, 0.2s, 0.2, 0.5, 0.5s, 1 and 3. The class designation is an approximate measure of the CT's accuracy. The ratio (primary to secondary current) error of a Class 1 CT is 1% at rated current; the ratio error of a Class 0.5 CT is 0.5% or less. Errors in phase are also important, especially in power measuring circuits. Each class has an allowable maximum phase error for a specified load impedance.
Current transformers used for protective relaying also have accuracy requirements at overload currents in excess of the normal rating to ensure accurate performance of relays during system faults. A CT with a rating of 2.5L400 specifies that, with an output from its secondary winding of twenty times its rated secondary current (usually 5 A × 20 = 100 A) and a 400 V (IZ) drop, its output accuracy will be within 2.5 percent.
=== Burden ===
The secondary load of a current transformer is termed the "burden" to distinguish it from the primary load.
The burden in a CT metering electrical network is largely resistive impedance presented to its secondary winding. Typical burden ratings for IEC CTs are 1.5 VA, 3 VA, 5 VA, 10 VA, 15 VA, 20 VA, 30 VA, 45 VA and 60 VA. ANSI/IEEE burden ratings are B-0.1, B-0.2, B-0.5, B-1.0, B-2.0 and B-4.0. This means a CT with a burden rating of B-0.2 will maintain its stated accuracy with up to 0.2 Ω on the secondary circuit. These specification diagrams show accuracy parallelograms on a grid incorporating magnitude and phase angle error scales at the CT's rated burden. Items that contribute to the burden of a current measurement circuit are switch-blocks, meters and intermediate conductors. The most common cause of excess burden impedance is the conductor between the meter and the CT. When substation meters are located far from the meter cabinets, the excessive length of cable creates a large resistance. This problem can be reduced by using thicker cables and CTs with lower secondary currents (1 A), both of which will produce less voltage drop between the CT and its metering devices.
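To make the cable-length effect concrete, the sketch below estimates the extra burden contributed by the conductors between a CT and a distant meter, using assumed (approximate) copper resistances per metre; it also shows why a 1 A secondary eases the problem. All figures are illustrative assumptions rather than values from any standard.

```python
# Assumed approximate copper conductor resistance (ohms per metre) for two cross-sections.
r_per_m = {"2.5 mm^2": 0.0074, "6 mm^2": 0.0031}

run_length_m = 50.0                 # one-way distance from CT to meter (assumed)
loop_length_m = 2 * run_length_m    # current flows out and back

for gauge, r in r_per_m.items():
    cable_ohms = r * loop_length_m
    for i_sec in (5.0, 1.0):                 # 5 A vs 1 A rated secondary
        va = (i_sec ** 2) * cable_ohms       # VA burden added by the cable alone
        print(f"{gauge}, {i_sec:.0f} A secondary: "
              f"{cable_ohms:.2f} ohm cable, {va:.1f} VA")
```

For the assumed 50 m run, the cable alone adds roughly 18.5 VA at a 5 A secondary but well under 1 VA at a 1 A secondary, which is the point made in the paragraph above.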
=== Knee-point core-saturation voltage ===
The knee-point voltage of a current transformer is the magnitude of the secondary voltage above which the output current ceases to linearly follow the input current within declared accuracy. In testing, if a voltage is applied across the secondary terminals the magnetizing current will increase in proportion to the applied voltage, until the knee point is reached. The knee point is defined as the voltage at which a 10% increase in applied voltage increases the magnetizing current by 50%. For voltages greater than the knee point, the magnetizing current increases considerably even for small increments in voltage across the secondary terminals. The knee-point voltage is less applicable for metering current transformers as their accuracy is generally much higher but constrained within a very small range of the current transformer rating, typically 1.2 to 1.5 times rated current. However, the concept of knee point voltage is very pertinent to protection current transformers, since they are necessarily exposed to fault currents of 20 to 30 times rated current.
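The 10%/50% definition above lends itself to a simple numerical test. The sketch below scans a hypothetical magnetizing curve (applied secondary voltage versus magnetizing current) and reports the first point at which a 10% voltage increase produces at least a 50% current increase; the curve data are invented for illustration.

```python
import numpy as np

# Hypothetical excitation test data: applied secondary voltage (V) and
# magnetizing current (A), shaped like a saturating core (invented numbers).
volts = np.array([50, 100, 150, 200, 220, 242, 266, 293], dtype=float)
i_mag = np.array([0.010, 0.021, 0.034, 0.050, 0.060, 0.080, 0.130, 0.260])

def knee_point(volts, i_mag):
    """First voltage at which a 10% voltage rise gives >= 50% more magnetizing current."""
    for v0, i0 in zip(volts, i_mag):
        v1 = 1.10 * v0
        if v1 > volts[-1]:
            break                            # cannot evaluate beyond the test data
        i1 = np.interp(v1, volts, i_mag)     # linear interpolation on the curve
        if i1 >= 1.5 * i0:
            return v0
    return None

print(f"Estimated knee-point voltage: {knee_point(volts, i_mag)} V")  # 242.0 V here
```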
=== Phase shift ===
Ideally, the primary and secondary currents of a current transformer should be in phase. In practice, this is impossible, but, at normal power frequencies, phase shifts of a few tenths of a degree are achievable, while simpler CTs may have larger phase shifts. For current measurement, phase shift is immaterial as ammeters only display the magnitude of the current. However, in wattmeters, energy meters, and power factor, phase shift produces errors. For power and energy measurement, the errors are considered to be negligible at unity power factor but become more significant as the power factor approaches zero. The introduction of electronic power and energy meters has allowed current phase error to be calibrated out.
== Construction ==
Bar-type current transformers have terminals for source and load connections of the primary circuit, and the body of the current transformer provides insulation between the primary circuit and ground. By use of oil insulation and porcelain bushings, such transformers can be applied at the highest transmission voltages.
Ring-type current transformers are installed over a bus bar or an insulated cable and have only a low level of insulation on the secondary coil. To obtain non-standard ratios or for other special purposes, more than one turn of the primary cable may be passed through the ring. Where a metal shield is present in the cable jacket, it must be terminated so no net sheath current passes through the ring, to ensure accuracy. Current transformers used to sense ground fault (zero sequence) currents, such as in a three-phase installation, may have three primary conductors passed through the ring. Only the net unbalanced current produces a secondary current - this can be used to detect a fault from an energized conductor to ground. Ring-type transformers usually use dry insulation systems, with a hard rubber or plastic case over the secondary windings.
For temporary connections, a split ring-type current transformer can be slipped over a cable without disconnecting it. This type has a laminated iron core, with a hinged section that allows it to be installed over the cable; the core links the magnetic flux produced by the single turn primary winding to a wound secondary with many turns. Because the gaps in the hinged segment introduce inaccuracy, such devices are not normally used for revenue metering.
Current transformers, especially those intended for high voltage substation service, may have multiple taps on their secondary windings, providing several ratios in the same device. This can be done to allow for reduced inventory of spare units, or to allow for load growth in an installation. A high-voltage current transformer may have several secondary windings with the same primary, to allow for separate metering and protection circuits, or for connection to different types of protective devices. For example, one secondary may be used for branch overcurrent protection, while a second winding may be used in a bus differential protective scheme, and a third winding used for power and current measurement.
== Special types ==
Specially constructed wideband current transformers are also used (usually with an oscilloscope) to measure waveforms of high frequency or pulsed currents within pulsed power systems. Unlike CTs used for power circuitry, wideband CTs are rated in output volts per ampere of primary current.
If the burden resistance is much less than inductive impedance of the secondary winding at the measurement frequency then the current in the secondary tracks the primary current and the transformer provides a current output that is proportional to the measured current. On the other hand, if that condition is not true, then the transformer is inductive and gives a differential output. The Rogowski coil uses this effect and requires an external integrator in order to provide a voltage output that is proportional to the measured current.
== Standards ==
Ultimately, depending on client requirements, there are two main standards to which current transformers are designed: IEC 61869-1 (formerly IEC 60044-1) and IEEE C57.13 (ANSI), although Canadian and Australian standards are also recognised.
== High voltage types ==
Current transformers are used for protection, measurement and control in high-voltage electrical substations and the electrical grid. Current transformers may be installed inside switchgear or in apparatus bushings, but very often free-standing outdoor current transformers are used. In a switchyard, live tank current transformers have a substantial part of their enclosure energized at the line voltage and must be mounted on insulators. Dead tank current transformers isolate the measured circuit from the enclosure. Live tank CTs are useful because the primary conductor is short, which gives better stability and a higher short-circuit current rating. The primary of the winding can be evenly distributed around the magnetic core, which gives better performance for overloads and transients. Since the major insulation of a live-tank current transformer is not exposed to the heat of the primary conductors, insulation life and thermal stability is improved.
A high-voltage current transformer may contain several cores, each with a secondary winding, for different purposes (such as metering circuits, control, or protection). A neutral current transformer is used as earth fault protection to measure any fault current flowing through the neutral line from the wye neutral point of a transformer.
== See also ==
Instrumentation
Transformer types
Current sensing techniques
== References ==
Guile, A.; Paterson, W. (1977). Electrical Power Systems, Volume One. Pergamon. p. 331. ISBN 0-08-021729-X.
In energy economics and ecological energetics, energy return on investment (EROI), also sometimes called energy returned on energy invested (ERoEI), is the ratio of the amount of usable energy (the exergy) delivered from a particular energy resource to the amount of exergy used to obtain that energy resource.
Arithmetically the EROI can be defined as:
{\displaystyle EROI={\frac {\hbox{Energy Delivered}}{\hbox{Energy Required to Deliver that Energy}}}}.
When the EROI of a source of energy is less than or equal to one, that energy source becomes a net "energy sink", and can no longer be used as a source of energy. A related measure, called energy stored on energy invested (ESOEI), is used to analyse storage systems.
To be considered viable as a prominent fuel or energy source, a fuel or energy source must have an EROI ratio of at least 3:1.
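As a worked illustration of the ratio, the sketch below computes EROI and the corresponding net energy for two hypothetical resources, one above and one below the 3:1 viability threshold mentioned above; the input figures are invented for the example.

```python
# Hypothetical lifetime energy flows, in petajoules (invented for illustration).
resources = {
    "Resource A": {"delivered": 90.0, "invested": 6.0},
    "Resource B": {"delivered": 12.0, "invested": 6.0},
}

for name, r in resources.items():
    eroi = r["delivered"] / r["invested"]    # energy delivered / energy invested
    net = r["delivered"] - r["invested"]     # net energy made available to society
    verdict = "viable (EROI >= 3)" if eroi >= 3 else "marginal (EROI < 3)"
    print(f"{name}: EROI = {eroi:.1f}, net energy = {net:.0f} PJ, {verdict}")
```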
== History ==
The energy analysis field of study is credited with being popularized by Charles A. S. Hall, a Systems ecology and biophysical economics professor at the State University of New York. Hall applied the biological methodology, developed at an Ecosystems Marine Biological Laboratory, and then adapted that method to research human industrial civilization. The concept would have its greatest exposure in 1984, with a paper by Hall that appeared on the cover of the journal Science.
== Application to various technologies ==
=== Photovoltaic ===
The issue is still the subject of numerous studies and prompts academic argument. That is mainly because the "energy invested" critically depends on technology, methodology, and system boundary assumptions, resulting in a range from a maximum of 2000 kWh/m2 of module area down to a minimum of 300 kWh/m2, with a median value of 585 kWh/m2, according to a meta-study from 2013.
Regarding output, it depends on the local insolation, not just the system itself, so assumptions have to be made.
Some studies (see below) include in their analysis the fact that photovoltaics produce electricity, while the invested energy may be lower-grade primary energy.
A 2015 review in Renewable and Sustainable Energy Reviews assessed the energy payback time and EROI of a variety of PV module technologies. In this study, which uses an insolation of 1700 kWh/m2/yr and a system lifetime of 30 years, mean harmonized EROIs between 8.7 and 34.2 were found. Mean harmonized energy payback time varied from 1.0 to 4.1 years.
In 2021, the Fraunhofer Institute for Solar Energy Systems calculated an energy payback time of around 1 year for European PV installations (0.9 years for Catania in Southern Italy, 1.1 years for Brussels) with wafer-based silicon PERC cells.
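As a rough back-of-envelope illustration of how such payback figures arise, the sketch below combines the median embedded energy and insolation values quoted above with assumed module parameters; the 20% efficiency, 0.8 performance ratio and 30-year lifetime are illustrative assumptions rather than values from the cited studies, and the primary-energy versus electricity distinction discussed above is ignored:

```python
def pv_payback_and_eroi(embedded_kwh_per_m2, insolation_kwh_per_m2_yr,
                        efficiency, performance_ratio, lifetime_yr):
    """Rough energy payback time (years) and simple EROI for a PV module."""
    annual_output = insolation_kwh_per_m2_yr * efficiency * performance_ratio  # kWh per m2 per year
    epbt = embedded_kwh_per_m2 / annual_output   # years to repay the embedded energy
    eroi = lifetime_yr / epbt                    # simple lifetime-over-payback ratio
    return epbt, eroi

# 585 kWh/m2 embedded energy and 1700 kWh/m2/yr insolation are quoted in the text;
# 20 % efficiency, 0.8 performance ratio and a 30-year life are assumptions.
print(pv_payback_and_eroi(585, 1700, 0.20, 0.80, 30))    # -> (~2.2 years, ~14)
```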
=== Wind turbines ===
In the scientific literature, the EROI of wind turbines is around 16 unbuffered and 4 buffered. Data collected in 2018 found that the EROI of operational wind turbines averaged 19.8, with high variability depending on wind conditions and wind turbine size.
EROIs tend to be higher for recent wind turbines compared to older technology wind turbines.
Vestas reports an EROI of 31 for its V150 model wind turbine.
=== Hydropower plants ===
The EROI for hydropower plants averages about 110 when a plant is run for about 100 years.
=== Oil sands ===
Because much of the energy required for producing oil from oil sands (bitumen) comes from low value fractions separated out by the upgrading process, there are two ways to calculate EROI: the higher value is given by considering only the external energy inputs and the lower by considering all energy inputs, including self-generated. One study found that in 1970 the net energy return of oil sands was about 1.0, but by 2010 it had increased to about 5.23.
=== Conventional oil ===
Conventional sources of oil have a rather large variation depending on various geologic factors. The EROI for refined fuel from conventional oil sources varies from around 18 to 43.
=== Oil shale ===
Due to the process heat input requirements for oil shale harvesting, the EROI is low. Typically natural gas is used, either directly combusted for process heat or used to power an electricity generating turbine, which then uses electrical heating elements to heat the underground layers of shale to produce oil from the kerogen. The resulting EROI is typically around 1.4–1.5. Economically, oil shale might be viable because the natural gas on site used for heating the kerogen is effectively free, but opponents have argued that the natural gas could be extracted directly and used as a relatively inexpensive transportation fuel rather than being used to heat shale for a lower EROI and higher carbon emissions.
=== Oil liquids ===
The weighted average standard EROI of all oil liquids (including coal-to-liquids, gas-to-liquids, biofuels, etc.) is expected to decrease from 44.4 in 1950 to a plateau of 6.7 in 2050.
=== Natural gas ===
The standard EROI for natural gas is estimated to decrease from 141.5 in 1950 to an apparent plateau of 16.8 in 2050.
=== Nuclear plants ===
The EROI for nuclear plants ranges from 20 to 81.
== Non-manmade energy inputs ==
The natural or primary energy sources are not included in the calculation of energy invested, only the human-applied sources.
For example, in the case of biofuels the solar insolation driving photosynthesis is not included, and the energy used in the stellar synthesis of fissile elements is not included for nuclear fission. The energy returned includes only human usable energy and not wastes such as waste heat.
Nevertheless, heat of any form can be counted where it is actually used for heating. However, the use of waste heat in district heating and water desalination in cogeneration plants is rare, and in practice it is often excluded in EROI analysis of energy sources.
== Competing methodology ==
A 2010 paper by Murphy and Hall detailed the extended ("Ext") boundary protocol they advised for all future research on EROI, intended to produce what they consider a more realistic assessment and to generate greater consistency in comparisons, in contrast to what Hall and others view as the "weak points" of a competing methodology. In more recent years, however, a source of continued controversy has been a different methodology endorsed by certain members of the IEA, which, most notably in the case of photovoltaic solar panels, controversially generates more favorable values.
In the case of photovoltaic solar panels, the IEA method tends to focus on the energy used in the factory process alone. In 2016, Hall observed that much of the published work in this field is produced by advocates or persons with a connection to business interests among the competing technologies, and that government agencies had not yet provided adequate funding for rigorous analysis by more neutral observers.
== Relationship to net energy gain ==
EROI and Net energy (gain) measure the same quality of an energy source or sink in numerically different ways. Net energy describes the amounts, while EROI measures the ratio or efficiency of the process. They are related simply by
{\displaystyle {\hbox{GrossEnergyYield}}\div {\hbox{EnergyExpended}}=EROI}
or
{\displaystyle ({\hbox{NetEnergy}}\div {\hbox{EnergyExpended}})+1=EROI}
For example, given a process with an EROI of 5, expending 1 unit of energy yields a net energy gain of 4 units. The break-even point happens with an EROI of 1 or a net energy gain of 0. The time to reach this break-even point is called energy payback period (EPP) or energy payback time (EPBT).
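A minimal sketch of these two relations, using the worked numbers from this paragraph:

```python
def eroi_from_gross(gross_energy_yield, energy_expended):
    """GrossEnergyYield / EnergyExpended = EROI."""
    return gross_energy_yield / energy_expended

def eroi_from_net(net_energy, energy_expended):
    """(NetEnergy / EnergyExpended) + 1 = EROI."""
    return net_energy / energy_expended + 1

# Expending 1 unit at an EROI of 5 yields 5 gross units, i.e. a net gain of 4 units.
print(eroi_from_gross(5.0, 1.0))   # 5.0
print(eroi_from_net(4.0, 1.0))     # 5.0; the break-even point is EROI = 1 (net gain 0)
```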
== Economic influence ==
Although many qualities of an energy source matter (for example oil is energy-dense and transportable, while wind is variable), when the EROI of the main sources of energy for an economy falls, that energy becomes more difficult to obtain and its relative price may increase.
In regard to fossil fuels, when oil was originally discovered, it took on average one barrel of oil to find, extract, and process about 100 barrels of oil. The ratio for discovery of fossil fuels in the United States has declined steadily over the last century, from about 1000:1 in 1919 to only 5:1 in the 2010s.
Since the invention of agriculture, humans have increasingly used exogenous sources of energy to multiply human muscle-power.
Some historians have attributed this largely to more easily exploited (i.e. higher-EROI) energy sources, which is related to the concept of energy slaves. Thomas Homer-Dixon argues that a falling EROI in the Later Roman Empire was one of the reasons for the collapse of the Western Empire in the fifth century CE. In "The Upside of Down" he suggests that EROI analysis provides a basis for the analysis of the rise and fall of civilisations. Looking at the maximum extent of the Roman Empire (a population of 60 million) and its technological base, the agrarian base of Rome yielded about 1:12 per hectare for wheat and 1:27 for alfalfa (giving a 1:2.7 return for oxen). One can then use this to calculate the population of the Roman Empire required at its height, on the basis of about 2,500–3,000 calories per day per person; it comes out roughly equal to the area of food production at its height. But ecological damage (deforestation and soil fertility loss, particularly in southern Spain, southern Italy, Sicily and especially north Africa) saw a collapse in the system beginning in the 2nd century, as EROI began to fall. It bottomed in 1084 when Rome's population, which had peaked under Trajan at 1.5 million, was only 15,000.
The evidence also fits the cycles of Mayan and Cambodian collapse. Joseph Tainter suggests that diminishing returns of the EROI is a chief cause of the collapse of complex societies, which has been suggested as caused by peak wood in early societies. Falling EROI due to depletion of high-quality fossil fuel resources also poses a difficult challenge for industrial economies, and could potentially lead to declining economic output and challenge the concept (which is very recent when considered from a historical perspective) of perpetual economic growth.
== Criticism of EROI ==
EROI is calculated by dividing the energy output by the energy input. Measuring total energy output is often easy, especially in the case for an electrical output where some appropriate electricity meter can be used. However, researchers disagree on how to determine energy input accurately and therefore arrive at different numbers for the same source of energy.
How deep should the probing in the supply chain of the tools being used to generate energy go? For example, if steel is being used to drill for oil or construct a nuclear power plant, should the energy input of the steel be taken into account? Should the energy input into building the factory being used to construct the steel be taken into account and amortized? Should the energy input of the roads which are used to ferry the goods be taken into account? What about the energy used to cook the steelworkers' breakfasts? These are complex questions evading simple answers. A full accounting would require considerations of opportunity costs and comparing total energy expenditures in the presence and absence of this economic activity.
However, when comparing two energy sources, a standard practice for the supply chain energy input can be adopted. For example, consider the steel, but don't consider the energy invested in factories deeper than the first level in the supply chain. It is in part for these boundary-setting reasons that, in the conclusions of Murphy and Hall's 2010 paper, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability, while a value of 12–13 by Hall's methodology is considered the minimum necessary for technological progress and a society supporting high art.
Richards and Watt propose an Energy Yield Ratio for photovoltaic systems as an alternative to EROI (which they refer to as Energy Return Factor). The difference is that it uses the design lifetime of the system, which is known in advance, rather than the actual lifetime. This also means that it can be adapted to multi-component systems where the components have different lifetimes.
Another issue with EROI that many studies attempt to tackle is that the energy returned can be in different forms, and these forms can have different utility. For example, electricity can be converted more efficiently than thermal energy into motion, due to electricity's lower entropy. In addition, the form of energy of the input can be completely different from the output. For example, energy in the form of coal could be used in the production of ethanol. This might have an EROI of less than one, but could still be desirable due to the benefits of liquid fuels (assuming the latter are not used in the processes of extraction and transformation).
== Additional EROI calculations ==
There are three prominent expanded EROI calculations: point of use, extended, and societal. Point-of-use EROI expands the calculation to include the cost of refining and transporting the fuel during the refining process. Since this expands the bounds of the calculation to include more of the production process, the EROI will decrease. Extended EROI includes the point-of-use expansions as well as the cost of creating the infrastructure needed for transporting the energy or fuel once refined. Societal EROI is the sum of all the EROIs of all the fuels used in a society or nation. A societal EROI has never been calculated and researchers believe it may currently be impossible to know all the variables necessary to complete the calculation, but attempted estimates have been made for some nations. Calculations are done by summing all of the EROIs for domestically produced and imported fuels and comparing the result to the Human Development Index (HDI), a tool often used to understand well-being in a society. According to this calculation, the amount of energy a society has available increases the quality of life for the people living in that country, and countries with less energy available have a harder time satisfying citizens' basic needs. This is to say that societal EROI and overall quality of life are very closely linked.
== EROI and payback periods of some types of power plants ==
The following table is a compilation of sources of energy. The minimum requirement is a breakdown of the cumulative energy expenses according to material data. Frequently in the literature, harvest factors are reported for which the origin of the values is not completely transparent; these are not included in this table.
The bold numbers are those given in the respective literature source, the normal printed ones are derived (see Mathematical Description).
(a) The cost of fuel transportation is taken into account
(b) The values refer to the total energy output. The expense for storage power plants, seasonal reserves or conventional load balancing power plants is not taken into account.
(c) The data for the E-82 come from the manufacturer, but are confirmed by TÜV Rheinland.
== ESOEI ==
ESOEI (or ESOIe) is used when EROI is below 1. "ESOIe is the ratio of electrical energy stored over the lifetime of a storage device to the amount of embodied electrical energy required to build the device."
One of the notable outcomes of the Stanford University team's assessment on ESOI was that if pumped storage were not available, the combination of wind energy with the commonly suggested pairing of battery technology, as it presently exists, would not be sufficiently worth the investment, suggesting curtailment instead.
== EROI under rapid growth ==
A related recent concern is energy cannibalism where energy technologies can have a limited growth rate if climate neutrality is demanded. Many energy technologies are capable of replacing significant volumes of fossil fuels and concomitant green house gas emissions. Unfortunately, neither the enormous scale of the current fossil fuel energy system nor the necessary growth rate of these technologies is well understood within the limits imposed by the net energy produced for a growing industry. This technical limitation is known as energy cannibalism and refers to an effect where rapid growth of an entire energy producing or energy efficiency industry creates a need for energy that uses (or cannibalizes) the energy of existing power plants or production plants.
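One way to make this limit concrete is the simple accounting sketched below: if each new plant takes roughly its energy payback time to repay its embodied energy, a fleet growing faster than the reciprocal of that payback time temporarily consumes more energy than it delivers. The fleet size, payback time and growth rates used here are illustrative assumptions.

```python
def net_annual_energy(installed_capacity_gw, annual_output_per_gw,
                      payback_time_yr, growth_rate):
    """Net annual energy from a growing fleet (same units as annual_output_per_gw).

    New capacity built this year is assumed to consume payback_time_yr worth of
    its future annual output as embodied energy; net output is positive only
    while growth_rate < 1 / payback_time_yr.
    """
    produced = installed_capacity_gw * annual_output_per_gw
    invested = growth_rate * installed_capacity_gw * payback_time_yr * annual_output_per_gw
    return produced - invested

# Illustrative: 100 GW fleet, 2-year energy payback time
print(net_annual_energy(100, 1.0, 2.0, 0.40))   # +20 -> still a net source at 40 %/yr growth
print(net_annual_energy(100, 1.0, 2.0, 0.60))   # -20 -> a temporary net sink at 60 %/yr growth
```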
The solar breeder overcomes some of these problems. A solar breeder is a photovoltaic panel manufacturing plant which can be made energy-independent by using energy derived from its own roof using its own panels. Such a plant becomes not only energy self-sufficient but a major supplier of new energy, hence the name solar breeder. Research on the concept was conducted by Centre for Photovoltaic Engineering, University of New South Wales, Australia. The reported investigation establishes certain mathematical relationships for the solar breeder which clearly indicate that a vast amount of net energy is available from such a plant for the indefinite future. The solar module processing plant at Frederick, Maryland was originally planned as such a solar breeder. In 2009 the Sahara Solar Breeder Project was proposed by the Science Council of Japan as a cooperation between Japan and Algeria with the highly ambitious goal of creating hundreds of GW of capacity within 30 years.
== See also ==
Cost of electricity by source – levelized cost of energy
Embodied energy
Emergy
Energy balance
Energy cannibalism
Exergy – useful energy
Jevons paradox – 1880s observation of the efficiency effect multiplier
Khazzoom-Brookes Postulate – 1980s updating of Jevons paradox
Net energy gain
Social metabolism
Thermoeconomics
== References ==
== External links ==
World-Nuclear.org Archived 2013-02-15 at the Wayback Machine, World Nuclear Association study on EROI with assumptions listed.
Web.archive.org, Wayback Archive of OilAnalytics.org, "EROI as a Measure of Energy Availability"
EOearth.org, Energy return on investment (EROI)
EOearth.org, Net energy analysis
H2-pv.us, Essay on H2-PV Breeder Synergies | Wikipedia/Energy_return_on_investment |
Transformer oil or insulating oil is an oil that is stable at high temperatures and has excellent electrical insulating properties. It is used in oil-filled wet transformers, some types of high-voltage capacitors, fluorescent lamp ballasts, and some types of high-voltage switches and circuit breakers. It functions to insulate, suppress corona discharge and arcing, and serves as a coolant.
Most often, transformer oil is based on mineral oil, but alternative formulations with different engineering or environmental properties are growing in popularity.
== Function and properties ==
Transformer oil's primary functions are to insulate and cool a transformer. It must therefore have high dielectric strength, thermal conductivity, and chemical stability, and must keep these properties when held at high temperatures for extended periods. Typical specifications are a flash point greater than 140 °C (284 °F), a pour point less than −40 °C (−40 °F), and a dielectric breakdown voltage greater than 28 kV (RMS). To improve cooling of large power transformers, the oil-filled tank may have external radiators through which the oil circulates by natural convection. Power transformers with capacities of thousands of kilovolt-amperes may also have cooling fans, oil pumps, and even oil-to-water heat exchangers.
Power transformers undergo prolonged drying processes, using electrical self-heating, the application of a vacuum, or both to ensure that the transformer is completely free of water vapor before the insulating oil is introduced. This helps prevent corona formation and subsequent electrical breakdown under load.
Oil filled transformers with a conservator oil reservoir may have a gas detector relay like a Buchholz relay. These safety devices detect the buildup of gas inside the transformer due to corona discharge, overheating, or an internal electric arc. On a slow accumulation of gas, or rapid pressure rise, these devices can trip a protective circuit breaker to remove power from the transformer. Transformers without conservators are usually equipped with sudden pressure relays, which perform a similar function as the Buchholz relay.
== Mineral oil alternatives ==
Mineral oil is generally effective as a transformer oil, but it has some disadvantages, one of which is its relatively low flash point compared with some alternatives. If a transformer leaks mineral oil, it can potentially start a fire. Fire codes often require that transformers inside buildings use a less flammable liquid, or the use of dry-type transformers with no liquid at all. Mineral oil is also an environmental contaminant, and its insulating properties are rapidly degraded by even small amounts of water; for this reason, transformers are designed to keep water out of the oil.
Pentaerythritol tetra fatty acid synthetic and natural esters have emerged as an increasingly common mineral oil alternative, especially in high-fire-risk applications such as indoor installations, owing to their high fire points, which are over 300 °C (572 °F). They are biodegradable, but are more expensive than mineral oil. Natural esters have lower oxidation stability, lasting approximately 48 hours in the 120 °C oxygen-saturated test compared to 500 hours for mineral oils, and are therefore used in closed transformers.
Hermetic seals are important for larger transformers due to thermal expansion and contraction. Mid-size and large power transformers will typically have a conservator and employ a rubber bag with the use of natural ester to reduce oxygen ingress and prevent the natural ester from experiencing a faster oxidation than utilities are accustomed to with mineral oils. Silicone or fluorocarbon-based oils, which are even less flammable, are also used, but they are more expensive than esters.
There are over 3 million transformers in service with vegetable-based formulations, using soy or rapeseed based formulations in up to 500 kV transformers so far. However, coconut oil-based formulations are unsuitable for use in cold climates or for voltages over 230 kV. Researchers are also investigating nanofluids for transformer use; these would be used as additives to improve the stability and thermal and electrical properties of the oil.
== Polychlorinated biphenyls (PCBs) ==
Polychlorinated biphenyls (PCB) are synthetic dielectrics first made over a century ago and found to have desirable properties that led to their widespread use. Polychlorinated biphenyls were formerly used as transformer oil, since they have high dielectric strength and are not flammable. Unfortunately, they are also toxic, bioaccumulative, not at all biodegradable, and difficult to dispose of safely. When burned, they form even more toxic products, such as chlorinated dioxins and chlorinated dibenzofurans.
Beginning in the 1970s, production and new uses of PCBs were banned in many countries, due to concerns about the accumulation of PCBs and toxicity of their byproducts. For instance, in the USA, production of PCBs was banned in 1979 under the Toxic Substances Control Act. In many countries significant programs are in place to reclaim and safely destroy PCB contaminated equipment. One method that can be used to reclaim PCB contaminated transformer oil is the application of a PCB removal system, also called a PCB dechlorination system.
PCB removal systems use an alkali dispersion to strip the chlorine atoms from the other molecules in a chemical reaction. This forms PCB-free transformer oil and a PCB-free sludge. The two can then be separated via a centrifuge. The sludge can be disposed as regular non-PCB industrial waste. The treated transformer oil is fully restored, meeting the required standards, without any detectable PCB content. It can, thus, be used as the insulating fluid in transformers again.
PCBs and mineral oil are miscible in all proportions, and sometimes the same equipment (drums, pumps, hoses, and so on) was used for either type of liquid, so PCB contamination of transformer oil continues to be a concern. For instance, under present regulations, concentrations of PCBs exceeding 5 parts per million can cause an oil to be classified as hazardous waste in California.
== Testing and oil quality ==
Transformer oils are subject to electrical and mechanical stresses while a transformer is in operation. In addition there is contamination caused by chemical interactions with windings and other solid insulation, catalyzed by high operating temperature. The original chemical properties of transformer oil change gradually, rendering it ineffective for its intended purpose after many years. Oil in large transformers and electrical apparatus is periodically tested for its electrical and chemical properties, to make sure it is suitable for further use. Sometimes oil condition can be improved by filtration and treatment. Tests can be divided into:
Dissolved gas analysis
Furan analysis
PCB analysis
General electrical & physical tests:
Color & Appearance
Breakdown Voltage
Water Content
Acidity (Neutralization Value)
Dielectric Dissipation Factor
Resistivity
Sediments & Sludge
Flash Point
Pour Point
Density
Kinematic Viscosity
The details of conducting these tests are available in standards released by the International Electrotechnical Commission, ASTM International, British Standards, and other national and international standards bodies, and testing can be done by any of these methods. The Furan and DGA tests are specifically not for determining the quality of transformer oil, but for detecting abnormalities in the internal windings or the paper insulation of the transformer, which cannot otherwise be detected without a complete overhaul of the transformer. Suggested intervals for these tests are:
General and physical tests - bi-yearly
Dissolved gas analysis - yearly
Furan testing - once every 2 years, subject to the transformer being in operation for a minimum of 5 years.
== On-site testing ==
Some transformer oil tests can be carried out in the field, using portable test apparatus. Other tests, such as dissolved gas, normally require a sample to be sent to a laboratory. Electronic on-line dissolved gas detectors can be connected to important or distressed transformers to continually monitor gas generation trends.
To determine the insulating property of the dielectric oil, an oil sample is taken from the device under test, and its breakdown voltage is measured on-site according to the following test sequence:
In the vessel, two standard-compliant test electrodes with a typical clearance of 2.5 mm are surrounded by the insulating oil.
During the test, a test voltage is applied to the electrodes. The test voltage is continuously increased up to the breakdown voltage with a constant slew rate of e.g. 2 kV/s.
Breakdown occurs in an electric arc, leading to a collapse of the test voltage.
Immediately after ignition of the arc, the test voltage is switched off automatically.
Ultra-fast switch-off is crucial because the energy delivered into the oil during the breakdown burns it, and this energy must be limited to keep the additional contamination by carbonisation as low as possible.
The root mean square value of the test voltage is measured at the very instant of the breakdown and is reported as the breakdown voltage.
After the test is completed, the insulating oil is stirred automatically and the test sequence is performed repeatedly.
The resulting breakdown voltage is calculated as the mean value of the individual measurements (see the short sketch below).
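A minimal sketch of that final averaging step; the six sample values are hypothetical, and the governing standards (for example IEC 60156) define the exact number of repetitions and stirring intervals:

```python
from statistics import mean, stdev

def report_breakdown_voltage(measurements_kv):
    """Mean breakdown voltage and spread from repeated on-site measurements (kV RMS)."""
    return mean(measurements_kv), stdev(measurements_kv)

# Six hypothetical breakdown values from repeating the sequence above
samples_kv = [62.1, 58.4, 65.0, 60.3, 63.7, 59.8]
avg, spread = report_breakdown_voltage(samples_kv)
print(f"breakdown voltage ~ {avg:.1f} kV RMS (standard deviation {spread:.1f} kV)")
```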
== See also ==
Heat-transfer oil
== References ==
== External links == | Wikipedia/Transformer_oil |
Condition monitoring of transformers in electrical engineering is the process of acquiring and processing data related to various parameters of transformers to determine their state of quality and predict their failure. This is done by observing the deviation of the transformer parameters from their expected values. Transformers are the most critical assets of electrical transmission and distribution systems, and their failures could cause power outages, personal and environmental hazards, and expensive rerouting or purchase of power from other suppliers. Identifying a transformer which is near failure can allow it to be replaced under controlled conditions at a non-critical time and avoid a system failure.
Transformer failures can occur due to various causes. Transformer in-service interruptions and failures usually result from dielectric breakdown, winding distortion caused by short circuits, hot spots caused by localized deviations in winding and electromagnetic fields, deterioration of insulation, effects of lightning and other electrical disturbances, inadequate maintenance, loose connections, overloading, or failure of accessory components (e.g., OLTCs or bushings). Accounting for these causes through monitoring can allow for the determination of the overall condition of the transformer.
== Aspects ==
The important aspects of condition monitoring of transformers are:
Thermal modelling – The useful life of a transformer is partially determined by the ability of the transformer to dissipate its internally generated heat to its surroundings. The comparison of actual and predicted operating temperatures can provide a sensitive diagnosis of the transformer condition and might indicate abnormal operation. Consequences of temperature rise include gradual deterioration of insulation, a very costly form of damage. To predict this, thermal modelling is used to determine the rise of the top transformer oil temperature and of the hot spot temperature (the maximum temperature occurring in the winding insulation system); a simplified numerical sketch of such a model follows this list.
Dissolved gas analysis – The degradation of transformer oil and solid insulating materials produces gases, which are generated at a more rapid rate when an electrical fault occurs. By evaluating the concentration and proportion of hydrocarbon gases, hydrogen, and carbon oxides present in the transformer, it is possible to predict early-stage faults in three categories: corona or partial discharge, thermal heating, and arcing.
Frequency response analysis – When a transformer is subjected to high currents through fault currents (abnormal currents), the mechanical structure and windings are subjected to severe mechanical stresses causing winding movement and deformations. It may also result in insulation damage and turn-to-turn faults. Frequency response analysis (FRA) is a non-intrusive and sensitive technique for detecting winding movement faults and assessing the deformation caused by loss of clamping pressure or by short-circuit forces. FRA technique involves measuring the impedance of the windings of the transformer with a low-voltage sine input varying in a wide frequency range.
Partial discharge (PD) analysis – Partial discharge occurs when a local electric field exceeds a threshold value, partially breaking down the surrounding medium. Its cumulative effect leads to the degradation of insulation. PDs are initiated by defects during manufacture or by higher stress dictated by design considerations. Measurements can be collected to detect these PDs and monitor the soundness of insulation. PDs manifest as sharp current pulses at transformer terminals, whose nature depends on the types of insulation, defects, measuring circuits, and detectors used.
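For the thermal-modelling aspect above, the following is a minimal sketch loosely patterned on the exponential top-oil temperature model found in transformer loading guides (such as IEEE C57.91); the rated rise, loss ratio, exponent and time constant below are illustrative assumptions, not values for any particular unit:

```python
import math

def top_oil_temperature(t_hours, ambient_c, load_pu,
                        rated_rise_c=55.0, loss_ratio=5.0, exponent=0.8,
                        time_constant_h=3.0, initial_rise_c=20.0):
    """Simplified top-oil temperature after t_hours at a constant per-unit load.

    Steady-state rise follows rated_rise_c * ((K**2 * R + 1) / (R + 1)) ** n and
    the transition is modelled as a single exponential; all parameters are illustrative.
    """
    k, r, n = load_pu, loss_ratio, exponent
    final_rise = rated_rise_c * ((k ** 2 * r + 1) / (r + 1)) ** n
    rise = final_rise + (initial_rise_c - final_rise) * math.exp(-t_hours / time_constant_h)
    return ambient_c + rise

# Illustrative: 30 degC ambient, 1.2 pu overload, temperature after 1, 3 and 12 hours
for t in (1, 3, 12):
    print(t, round(top_oil_temperature(t, 30.0, 1.2), 1))
```

Comparing such a prediction with the measured top-oil temperature is one way the actual-versus-predicted comparison described above can be automated.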
== References ==
Giesecke, J.L. Transformer Condition Assessment using HFCT method. see article in transformers-magazine.com July 2016 | Wikipedia/Condition_monitoring_of_transformers |
Voltage control and reactive power management are two facets of an ancillary service that enables reliability of the transmission networks and facilitates the electricity market on these networks. Both aspects of this activity are intertwined (voltage change in an alternating current (AC) network is effected through production or absorption of reactive power), so within this article the term voltage control will be primarily used to designate this essentially single activity, as suggested by Kirby & Hirst (1997). Voltage control does not include reactive power injections to dampen the grid oscillations; these are a part of a separate ancillary service, so-called system stability service. The transmission of reactive power is limited by its nature, so the voltage control is provided through pieces of equipment distributed throughout the power grid, unlike the frequency control that is based on maintaining the overall active power balance in the system.
== Need for voltage control ==
Kirby & Hirst indicate three reasons behind the need for voltage control:
the power network equipment is designed for a narrow voltage range, as is the power-consuming equipment on the customer side. Operation outside of this range will cause the equipment to fail;
reactive power causes heating in the generators and the transmission lines, so thermal limits will require restricting the production and the flow of real (active) power;
injection of reactive power into transmission lines causes losses that waste power, forcing an increase in power supplied by the prime mover.
Use of specialized voltage control devices in the grid also improves the power system stability by reducing the fluctuations of the rotor angle of a synchronous generator (that are caused by generators sourcing or sinking the reactive power).
Power buses and systems that exhibit large changes in voltage when the reactive power conditions change are called weak systems, while the ones that have relatively smaller changes are strong (numerically, the strength is expressed as a short circuit ratio that is higher for the stronger systems).
== Absorption and production of reactive power ==
Devices absorb reactive energy if they have a lagging power factor (are inductor-like) and produce reactive energy if they have a leading power factor (are capacitor-like).
Electric grid equipment units typically either supply or consume the reactive power:
Synchronous generators will provide reactive power if overexcited and absorb it if underexcited, subject to the limits of the generator capability curve.
Transformers will always absorb the reactive power.
Power lines will either absorb or provide reactive power: overhead power lines provide reactive power at low load, but as the load increases past the surge impedance loading of the line, the lines start consuming an increasing amount of reactive power. Underground power lines are capacitive and are normally loaded below their surge impedance loading, so they provide reactive power.
Electrical loads usually absorb the reactive power, with the power factor for typical appliances ranging from 0.65 (household equipment with electrical motors, like a washing machine) to 1.0 (purely resistive loads like incandescent lamps).
In a typical electrical grid, the basics of the voltage control are provided by the synchronous generators. These generators are equipped with automatic voltage regulators that adjust the excitation field keeping the voltage at the generator's terminals within the target range.
The task of additional reactive power compensation (also known as voltage compensation) is assigned to compensating devices:
passive (either permanently connected or switched) sinks of reactive power (e.g., shunt reactors that are similar to transformers in construction, with a single winding and iron core). A shunt reactor is typically connected to an end of a long transmission line or a weak system to prevent overvoltage under light load (Ferranti effect);
passive sources of reactive power (e. g., shunt or series capacitors).
shunt capacitors have been used in power systems since the 1910s and are popular due to low cost and relative ease of deployment. The amount of reactive power supplied by a shunt capacitor is proportional to the square of the line voltage, so the capacitor contributes less under low-voltage conditions (frequently caused by the lack of reactive power). This is a serious drawback, as the supply of reactive power by a capacitor drops when it is most needed (a small numerical sketch of this voltage dependence follows this list);
series capacitors are used to compensate for the inductive reactance of the loaded overhead power lines. These devices, connected in series with the power conductors, are typically used to lower the reactive power losses and to increase the amount of active power that can be transmitted through the line, with the supply of reactive power with self-regulation (the supply fortuitously increases with higher load) being a secondary consideration. The voltage across a series capacitor is typically low (within the regulation range of the network, a few percent of the rated voltage), so its construction is relatively low-cost. However, in the case of a short circuit on the load side, the capacitor will be briefly exposed to the full line voltage, so protection circuits are provisioned, usually involving spark gaps, ZnO varistors, and switches;
active compensators (e.g., synchronous condensers, static var compensators, static synchronous compensators) that can be either sources or sinks of reactive power;
regulating transformers (e.g., tap-changing transformers).
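A small numerical sketch of the voltage dependence mentioned for shunt capacitors above; the rated size is an arbitrary illustrative value:

```python
def shunt_capacitor_output(q_rated_mvar, v_pu):
    """Reactive power delivered by a shunt capacitor at a given per-unit bus voltage.

    Output scales with the square of the voltage, so the support falls off
    exactly when the bus voltage is depressed.
    """
    return q_rated_mvar * v_pu ** 2

for v in (1.00, 0.95, 0.90):
    print(f"{v:.2f} pu -> {shunt_capacitor_output(50.0, v):.1f} Mvar")
# 1.00 pu -> 50.0 Mvar, 0.95 pu -> 45.1 Mvar, 0.90 pu -> 40.5 Mvar
```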
The passive compensation devices can be permanently attached, or are switched (connected and disconnected) either manually, using a timer, or automatically based on sensor data. The active devices are by nature self-adjusting. The tap-changing transformers with under-load tap-changing (ULTC) feature can be used to control the voltage directly. The operation of all tap-changing transformers in the system needs to be synchronized between the transformers and with the application of shunt capacitors.
Due to the localized nature of reactive power balance, the standard approach is to manage the reactive power locally (decentralized method). The proliferation of microgrids might make the flexible centralized approach more economical.
== Reactive power reserves ==
The system should be capable of providing additional amounts of reactive power very quickly (dynamic requirement) since a single failure of a generator or a transmission line (that has to be planned for) has the potential to immediately increase the load on some of the remaining transmission lines. The nature of overhead power lines is that as the load increases, the lines start consuming an increasing amount of reactive power that needs to be replaced. Thus a large transmission system requires reactive power reserves just like it needs reserves for the real power. Since the reactive power does not travel over the wires as well as the real power, there is an incentive to concentrate its production close to the load. Restructuring of electric power systems takes this area of the power grid out of hands of the integrated power utility, so the trend is to push the problem onto the customer and require the load to operate with a near-unity power factor.
== See also ==
Active Network Management
== References ==
== Sources ==
Kirby, Brendan J.; Hirst, Eric (1997). Ancillary service details: Voltage control (ORNL/CON-453) (PDF). Oak Ridge, Tennessee: Oak Ridge National Laboratory.
Ibrahimzadeh, Esmaeil; Blaabjerg, Frede (5 April 2017). "Reactive Power Role and Its Controllability in AC Power Systems". In Naser Mahdavi Tabatabaei; Ali Jafari Aghbolaghi; Nicu Bizon; Frede Blaabjerg (eds.). Reactive Power Control in AC Power Systems: Fundamentals and Current Issues. Springer. pp. 117–136. ISBN 978-3-319-51118-4. OCLC 1005810845.
Kundur, Prabha (22 January 1994). "Reactive Power and Voltage Control" (PDF). Power System Stability and Control. McGraw-Hill Education. pp. 627–687. ISBN 978-0-07-035958-1. OCLC 1054007373.
Khan, Baseem (2022). "Reactive power management in active distribution network". Active Electrical Distribution Network. Elsevier. pp. 287–301. doi:10.1016/B978-0-323-85169-5.00005-8. | Wikipedia/Voltage_control_and_reactive_power_management |
The linear variable differential transformer (LVDT) – also called linear variable displacement transformer, linear variable displacement transducer, or simply differential transformer – is a type of electrical transformer used for measuring linear displacement (position along a given direction). It is the base of LVDT-type displacement sensors. A counterpart to this device that is used for measuring rotary displacement is called a rotary variable differential transformer (RVDT).
== Introduction ==
LVDTs are robust, absolute linear position/displacement transducers; inherently frictionless, they have a virtually infinite cycle life when properly used. As AC operated LVDTs do not contain any electronics, they can be designed to operate at cryogenic temperatures or up to 1200 °F (650 °C), in harsh environments, and under high vibration and shock levels. LVDTs have been widely used in applications such as power turbines, hydraulics, automation, aircraft, satellites, nuclear reactors, and many others. These transducers have low hysteresis and excellent repeatability.
The LVDT converts a position or linear displacement from a mechanical reference (zero or null position) into a proportional electrical signal containing phase (for direction) and amplitude (for distance) information. The LVDT operation does not require an electrical contact between the moving part (probe or core assembly) and the coil assembly, but instead relies on electromagnetic coupling.
== Operation ==
The linear variable differential transformer has three solenoidal coils placed end-to-end around a tube. The center coil is the primary, and the two outer coils are the top and bottom secondaries. A cylindrical ferromagnetic core, attached to the object whose position is to be measured, slides along the axis of the tube. An alternating current drives the primary and causes a voltage to be induced in each secondary proportional to the length of the core linking to the secondary. The frequency is usually in the range 1 to 10 kHz.
As the core moves, the primary's linkage to the two secondary coils changes and causes the induced voltages to change. The coils are connected so that the output voltage is the difference (hence "differential") between the top secondary voltage and the bottom secondary voltage. When the core is in its central position, equidistant between the two secondaries, equal voltages are induced in the two secondary coils, but the two signals cancel, so the output voltage is theoretically zero. In practice minor variations in the way in which the primary is coupled to each secondary means that a small voltage is output when the core is central.
This small residual voltage is due to phase shift and is often called quadrature error. It is a nuisance in closed loop control systems as it can result in oscillation about the null point, and may also be unacceptable in simple measurement applications. It is a consequence of using synchronous demodulation, with direct subtraction of the secondary voltages at AC. Modern systems, particularly those involving safety, require fault detection of the LVDT, and the normal method is to demodulate each secondary separately, using precision half wave or full wave rectifiers, based on op-amps, and compute the difference by subtracting the DC signals. Because, for constant excitation voltage, the sum of the two secondary voltages is almost constant throughout the operating stroke of the LVDT, its value remains within a small window and can be monitored such that any internal failures of the LVDT will cause the sum voltage to deviate from its limits and be rapidly detected, causing a fault to be indicated. There is no quadrature error with this scheme, and the position-dependent difference voltage passes smoothly through zero at the null point.
Where digital processing in the form of a microprocessor or FPGA is available in the system, it is customary for the processing device to carry out the fault detection, and possibly ratiometric processing to improve accuracy, by dividing the difference in secondary voltages by the sum of the secondary voltages, to make the measurement independent of the exact amplitude of the excitation signal. If sufficient digital processing capacity is available, it is becoming commonplace to use this to generate the sinusoidal excitation via a DAC and possibly also perform the secondary demodulation via a multiplexed ADC.
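A minimal sketch of the separate-demodulation scheme described above, assuming the two secondary voltages have already been rectified to DC values; the sum-voltage window and example readings are illustrative assumptions:

```python
def lvdt_position(v_top, v_bottom, sum_min=4.5, sum_max=5.5):
    """Ratiometric LVDT processing with a simple sum-voltage fault check.

    v_top and v_bottom are the separately demodulated (DC) secondary voltages.
    Returns a signed, dimensionless position; multiply by the sensor's
    calibration factor to obtain displacement.
    """
    total = v_top + v_bottom
    if not (sum_min <= total <= sum_max):
        raise RuntimeError("LVDT fault: secondary sum voltage outside its window")
    return (v_top - v_bottom) / total

print(lvdt_position(3.0, 2.0))   # core displaced toward the top secondary -> +0.2
print(lvdt_position(2.5, 2.5))   # core at the null position -> 0.0
```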
When the core is displaced toward the top, the voltage in the top secondary coil increases as the voltage in the bottom decreases. The resulting output voltage increases from zero. This voltage is in phase with the primary voltage. When the core moves in the other direction, the output voltage also increases from zero, but its phase is opposite to that of the primary. The phase of the output voltage determines the direction of the displacement (up or down) and the amplitude indicates the amount of displacement. A synchronous detector can determine a signed output voltage that relates to the displacement.
The LVDT is designed with long slender coils to make the output voltage essentially linear over displacement up to several inches (several hundred millimetres) long.
The LVDT can be used as an absolute position sensor. Even if the power is switched off, on restarting it, the LVDT shows the same measurement, and no positional information is lost. Its biggest advantages are repeatability and reproducibility once it is properly configured. Also, apart from the uni-axial linear motion of the core, any other movements such as the rotation of the core around the axis will not affect its measurements.
Because the sliding core does not touch the inside of the tube, it can move without friction, making the LVDT a highly reliable device. The absence of any sliding or rotating contacts allows the LVDT to be completely sealed against the environment.
LVDTs are commonly used for position feedback in servomechanisms, and for automated measurement in machine tools and many other industrial and scientific applications.
== See also ==
Dot convention
Linear encoder
Rotary encoder
== References ==
Baumeister, Theodore; Marks, Lionel S., eds. (1967), Standard Handbook for Mechanical Engineers (Seventh ed.), McGraw-Hill, LCCN 16-12915
== External links ==
How LVDTs Work: an interactive explanation
Phasing Explanation
LVDT models and applications
Analog Devices AD598 datasheet: A LVDT Signal Conditioner | Wikipedia/Linear_variable_differential_transformer |
Various types of electrical transformer are made for different purposes. Despite their design differences, the various types employ the same basic principle as discovered in 1831 by Michael Faraday, and share several key functional parts.
== Power transformer ==
=== Laminated core ===
This is the most common type of transformer, widely used in electric power transmission and appliances to convert mains voltage to low voltage to power electronic devices. They are available in power ratings ranging from mW to MW. The insulated laminations minimize eddy current losses in the iron core.
Small appliance and electronic transformers may use a split bobbin, giving a high level of insulation between the windings. The rectangular cores are made up of stampings, often in E-I shape pairs, but other shapes are sometimes used. Shields between primary and secondary may be fitted to reduce EMI (electromagnetic interference), or a screen winding is occasionally used.
Small appliance and electronics transformers may have a thermal cut-out built into the winding, to shut off power at high temperatures and prevent further overheating.
=== Toroidal ===
Donut-shaped toroidal transformers save space compared to E-I cores, and may reduce external magnetic field. These use a ring shaped core, copper windings wrapped around this ring (and thus threaded through the ring during winding), and tape for insulation.
Toroidal transformers have a lower external magnetic field compared to rectangular transformers, and can be smaller for a given power rating. However, they cost more to make, as winding requires more complex and slower equipment.
They can be mounted by a bolt through the center, using washers and rubber pads or by potting in resin. Care must be taken that the bolt does not form part of a short-circuit turn.
=== Autotransformer ===
An autotransformer consists of only one winding that is tapped at some point along the winding. Voltage is applied across a portion of the winding, and a higher (or lower) voltage is produced across another portion of the same winding. The equivalent power rating of the autotransformer is lower than the actual load power rating. It is calculated by: load VA × (|Vin – Vout|)/Vin. For example, an autotransformer that adapts a 1000 VA load rated at 120 volts to a 240 volt supply has an equivalent rating of at least: 1,000 VA × (240 V – 120 V) / 240 V = 500 VA. However, the actual rating (shown on the nameplate) must be at least 1000 VA.
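A minimal sketch of the equivalent-rating formula given above, reproducing the worked 240 V to 120 V example:

```python
def autotransformer_equivalent_va(load_va, v_in, v_out):
    """Equivalent (transformed) rating: load VA * |Vin - Vout| / Vin."""
    return load_va * abs(v_in - v_out) / v_in

# The 1000 VA load adapted from a 240 V supply to 120 V, as in the text
print(autotransformer_equivalent_va(1000, 240, 120))   # 500.0 VA equivalent rating
```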
For voltage ratios that don't exceed about 3:1, an autotransformer is cheaper, lighter, smaller, and more efficient than an isolating (two-winding) transformer of the same rating. Large three-phase autotransformers are used in electric power distribution systems, for example, to interconnect 220 kV and 33 kV sub-transmission networks or other high voltage levels.
=== Variable autotransformer ===
By exposing part of the winding coils of an autotransformer, and making the secondary connection through a sliding carbon brush, an autotransformer with a near-continuously variable turns ratio can be obtained, allowing for wide voltage adjustment in very small increments.
=== Induction regulator ===
The induction regulator is similar in design to a wound-rotor induction motor but it is essentially a transformer whose output voltage is varied by rotating its secondary relative to the primary—i.e., rotating the angular position of the rotor. It can be seen as a power transformer exploiting rotating magnetic fields. The major advantage of the induction regulator is that unlike variacs, they are practical for transformers over 5 kVA. Hence, such regulators find widespread use in high-voltage laboratories.
=== Polyphase transformer ===
For polyphase systems, multiple single-phase transformers can be used, or all phases can be connected to a single polyphase transformer. For a three phase transformer, the three primary windings are connected together and the three secondary windings are connected together. Examples of connections are wye-delta, delta-wye, delta-delta, and wye-wye. A vector group indicates the configuration of the windings and the phase angle difference between them. If a winding is connected to earth (grounded), the earth connection point is usually the center point of a wye winding. If the secondary is a delta winding, the ground may be connected to a center tap on one winding (high leg delta) or one phase may be grounded (corner grounded delta). A special purpose polyphase transformer is the zigzag transformer. There are many possible configurations that may involve more or fewer than six windings and various tap connections.
=== Grounding transformer ===
Grounding or earthing transformers let three wire (delta) polyphase system supplies accommodate phase to neutral loads by providing a return path for current to a neutral. Grounding transformers most commonly incorporate a single winding transformer with a zigzag winding configuration but may also be created with a wye-delta isolated winding transformer connection.
=== Phase-shifting transformer ===
This is a specialized type of transformer which can be configured to adjust the phase relationship between input and output. This allows power flow in an electric grid to be controlled, e.g. to steer power flows away from a shorter (but overloaded) link to a longer path with excess capacity.
=== Variable-frequency transformer ===
A variable-frequency transformer is a specialized three-phase power transformer which allows the phase relationship between the input and output windings to be continuously adjusted by rotating one half. They are used to interconnect electrical grids with the same nominal frequency but without synchronous phase coordination.
=== Leakage or stray field transformer ===
A leakage transformer, also called a stray-field transformer, has a significantly higher leakage inductance than other transformers, sometimes increased by a magnetic bypass or shunt in its core between primary and secondary, which is sometimes adjustable with a set screw. This provides a transformer with an inherent current limitation due to the loose coupling between its primary and the secondary windings. The adjustable short-circuit inductance acts as a current limiting parameter.
The output and input currents are kept low enough to preclude thermal overload under any load conditions — even if the secondary is shorted.
==== Uses ====
Leakage transformers are used for arc welding and high voltage discharge lamps (neon lights and cold cathode fluorescent lamps, which are series connected up to 7.5 kV AC). It acts both as a voltage transformer and as a magnetic ballast.
Other applications are short-circuit-proof extra-low voltage transformers for toys or doorbell installations.
=== Resonant transformer ===
A resonant transformer is a transformer in which one or both windings has a capacitor across it and functions as a tuned circuit. Used at radio frequencies, resonant transformers can function as high Q factor bandpass filters. The transformer windings have either air or ferrite cores and the bandwidth can be adjusted by varying the coupling (mutual inductance). One common form is the IF (intermediate frequency) transformer, used in superheterodyne radio receivers. They are also used in radio transmitters.
Resonant transformers are also used in electronic ballasts for gas discharge lamps, and high voltage power supplies. They are also used in some types of switching power supplies. Here the short-circuit inductance value is an important parameter that determines the resonance frequency of the resonant transformer. Often only the secondary winding has a resonant capacitor (or stray capacitance) and acts as a series resonant tank circuit. When the short-circuit inductance of the secondary side of the transformer is Lsc and the resonant capacitor (or stray capacitance) of the secondary side is Cr, the resonance frequency ωs is as follows:
{\displaystyle \omega _{s}={\frac {1}{\sqrt {L_{sc}C_{r}}}}={\frac {1}{\sqrt {(1-k^{2})L_{s}C_{r}}}}}
The transformer is driven by a pulse or square wave for efficiency, generated by an electronic oscillator circuit. Each pulse serves to drive resonant sinusoidal oscillations in the tuned winding, and due to resonance a high voltage can be developed across the secondary.
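A minimal numerical sketch of the resonance formula above; the inductance and capacitance values are arbitrary illustrative choices:

```python
import math

def resonant_frequency_hz(l_sc_henry, c_r_farad):
    """Series-resonance frequency f_s = 1 / (2*pi*sqrt(Lsc * Cr)) of the secondary."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_sc_henry * c_r_farad))

# Illustrative values: 10 mH short-circuit inductance, 100 nF resonant capacitance
print(f"{resonant_frequency_hz(10e-3, 100e-9):.0f} Hz")   # ~5033 Hz
```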
Applications:
Intermediate frequency (IF) transformer in superheterodyne radio receiver
Tank transformers in radio transmitters
Tesla coil
Power inverter
Oudin coil (or Oudin resonator; named after its inventor Paul Oudin)
D'Arsonval apparatus
Ignition coil or induction coil used in the ignition system of a petrol engine
Electrical breakdown and insulation testing of high voltage equipment and cables. In the latter case, the transformer's secondary is resonated with the cable's capacitance.
==== Constant voltage transformer ====
By arranging particular magnetic properties of a transformer core, and installing a ferro-resonant tank circuit (a capacitor and an additional winding), a transformer can be arranged to automatically keep the secondary winding voltage relatively constant for varying primary supply without additional circuitry or manual adjustment. Ferro-resonant transformers run hotter than standard power transformers, because regulating action depends on core saturation, which reduces efficiency. The output waveform is heavily distorted unless careful measures are taken to prevent this. Saturating transformers provide a simple rugged method to stabilize an AC power supply.
=== Ferrite core ===
Ferrite core power transformers are widely used in switched-mode power supplies (SMPSs). The powder core enables high-frequency operation, and hence much smaller size-to-power ratio than laminated-iron transformers.
Ferrite transformers are not used as power transformers at mains frequency since laminated iron cores cost less than an equivalent ferrite core.
==== Planar transformer ====
Manufacturers either use flat copper sheets or etch spiral patterns on a printed circuit board to form the "windings" of a planar transformer, replacing the turns of wire used to make other types. Some planar transformers are commercially sold as discrete components, other planar transformers are etched directly into the main printed circuit board and only need a ferrite core to be attached over the PCB. A planar transformer can be thinner than other transformers, which is useful for low-profile applications or when several printed circuit boards are stacked. Almost all planar transformers use a ferrite planar core.
=== Liquid-cooled transformer ===
Large transformers used in power distribution or electrical substations have their core and coils immersed in oil, which cools and insulates. Oil circulates through ducts in the coil and around the coil and core assembly, moved by convection. The oil is cooled by the outside of the tank in small ratings, and by an air-cooled radiator in larger ratings. Where a higher rating is required, or where the transformer is in a building or underground, oil pumps circulate the oil, fans may force air over the radiators, or an oil-to-water heat exchanger may also be used.
Transformer oil is flammable, so oil-filled transformers inside a building are installed in vaults to prevent spread of fire and smoke from a burning transformer. Some transformers were built to use fire-resistant PCBs, but because these compounds persist in the environment and have adverse effects on organisms, their use has been discontinued in most areas; for example, after 1979 in South Africa. Substitute fire-resistant liquids such as silicone oils are now used instead.
=== Cast resin transformer ===
Cast-resin power transformers encase the windings in epoxy resin. These transformers simplify installation since they are dry, without cooling oil, and so require no fire-proof vault for indoor installations. The epoxy protects the windings from dust and corrosive atmospheres. However, because the molds for casting the coils are only available in fixed sizes, the design of the transformers is less flexible, which may make them more costly if customized features (voltage, turns ratio, taps) are required.
=== Isolating transformer ===
An isolation transformer links two circuits magnetically, but provides no metallic conductive path between the circuits. An example application would be in the power supply for medical equipment, when it is necessary to prevent any leakage from the AC power system into devices connected to a patient. Special purpose isolation transformers may include shielding to prevent coupling of electromagnetic noise between circuits, or may have reinforced insulation to withstand thousands of volts of potential difference between primary and secondary circuits.
=== Solid-state transformer ===
A solid-state transformer is actually a power converter that performs the same function as a conventional transformer, sometimes with added functionality. Most contain a smaller high-frequency transformer. It can consist of an AC-to-AC converter, or a rectifier powering an inverter.
== Instrument transformer ==
Instrument transformers are typically used to operate instruments from high voltage lines or high current circuits, safely isolating measurement and control circuitry from the high voltages or currents. The primary winding of the transformer is connected to the high voltage or high current circuit, and the meter or relay is connected to the secondary circuit. Instrument transformers may also be used as an isolation transformer so that secondary quantities may be used without affecting the primary circuitry.
Terminal identifications (either alphanumeric such as H1, X1, Y1, etc. or a colored spot or dot impressed in the case) indicate one end of each winding, indicating the same instantaneous polarity and phase between windings. This applies to both types of instrument transformers. Correct identification of terminals and wiring is essential for proper operation of metering and protective relay instrumentation.
=== Current transformer ===
A current transformer (CT) is a series connected measurement device designed to provide a current in its secondary coil proportional to the current flowing in its primary. Current transformers are commonly used in metering and protective relays in the electrical power industry.
Current transformers are often constructed by passing a single primary turn (either an insulated cable or an uninsulated bus bar) through a well-insulated toroidal core wrapped with many turns of wire. The CT is typically described by its current ratio from primary to secondary. For example, a 1000:1 CT provides an output current of 1 ampere when 1000 amperes flow through the primary winding. Standard secondary current ratings are 5 amperes or 1 ampere, compatible with standard measuring instruments. The secondary winding can be single ratio or have several tap points to provide a range of ratios. Care must be taken to make sure the secondary winding is not disconnected from its low-impedance load while current flows in the primary, as this may produce a dangerously high voltage across the open secondary and may permanently affect the accuracy of the transformer.
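As a rough numeric illustration of the ratio arithmetic above, the following Python sketch converts a primary current into the ideal secondary current for a given CT ratio; the 1000:5 ratio and 600 A figure are assumed purely for illustration.

# Minimal sketch of current-transformer ratio arithmetic (illustrative values only).
def ct_secondary_current(primary_amps, ratio_primary, ratio_secondary):
    """Ideal secondary current for a CT described as ratio_primary:ratio_secondary."""
    return primary_amps * ratio_secondary / ratio_primary

# A 1000:5 CT carrying 600 A in its primary ideally drives 3 A through the meter circuit.
print(ct_secondary_current(600.0, 1000.0, 5.0))   # -> 3.0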
Specially constructed wideband CTs are also used, usually with an oscilloscope, to measure high frequency waveforms or pulsed currents within pulsed power systems. One type provides a voltage output that is proportional to the measured current. Another, called a Rogowski coil, requires an external integrator in order to provide a proportional output.
A current clamp uses a current transformer with a split core that can be easily wrapped around a conductor in a circuit. This is a common method used in portable current measuring instruments but permanent installations use more economical types of current transformer.
=== Voltage transformer or potential transformer ===
Voltage transformers (VT), also called potential transformers (PT), are a parallel connected type of instrument transformer, used for metering and protection in high-voltage circuits, or for phasor phase-shift isolation. They are designed to present negligible load to the supply being measured and to have an accurate voltage ratio to enable accurate metering. A potential transformer may have several secondary windings on the same core as a primary winding, for use in different metering or protection circuits. The primary may be connected phase to ground or phase to phase. The secondary is usually grounded on one terminal.
There are three primary types of voltage transformers (VT): electromagnetic, capacitor, and optical. The electromagnetic voltage transformer is a wire-wound transformer. The capacitor voltage transformer uses a capacitance potential divider and is used at higher voltages because it costs less than an electromagnetic VT. Potential transformers make the measurement of high voltages practical. An optical voltage transformer exploits the electrical properties of optical materials; it is not strictly a transformer, but a sensor similar to a Hall effect sensor.
=== Combined instrument transformer ===
A combined instrument transformer encloses a current transformer and a voltage transformer in the same transformer. There are two main combined current and voltage transformer designs: oil-paper insulated and SF6 insulated. One advantage of applying this solution is reduced substation footprint, due to reduced number of transformers in a bay, supporting structures and connections as well as lower costs for civil works, transportation and installation.
== Pulse transformer ==
A pulse transformer is a transformer that is optimised for transmitting rectangular electrical pulses (that is, pulses with fast rise and fall times and a relatively constant amplitude). Small versions called signal types are used in digital logic and telecommunications circuits such as in Ethernet, often for matching logic drivers to transmission lines. These are also called Ethernet transformer modules.
Medium-sized power versions are used in power-control circuits such as camera flash controllers. Larger power versions are used in the electrical power distribution industry to interface low-voltage control circuitry to the high-voltage gates of power semiconductors. Special high voltage pulse transformers are also used to generate high power pulses for radar, particle accelerators, or other high energy pulsed power applications.
To minimize distortion of the pulse shape, a pulse transformer needs to have low values of leakage inductance and distributed capacitance, and a high open-circuit inductance. In power-type pulse transformers, a low coupling capacitance (between the primary and secondary) is important to protect the circuitry on the primary side from high-powered transients created by the load. For the same reason, high insulation resistance and high breakdown voltage are required. A good transient response is necessary to maintain the rectangular pulse shape at the secondary, because a pulse with slow edges would create switching losses in the power semiconductors.
The product of the peak pulse voltage and the duration of the pulse (or more accurately, the voltage-time integral) is often used to characterise pulse transformers. Generally speaking, the larger this product, the larger and more expensive the transformer.
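As a hedged sketch of how the voltage-time product can be estimated in practice, the Python fragment below integrates a sampled pulse with the trapezoidal rule; the 500 V, 2 microsecond waveform is invented for illustration.

# Trapezoidal approximation of the voltage-time product (volt-seconds) of a sampled pulse.
def volt_time_integral(times_s, volts):
    total = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        total += 0.5 * (volts[i] + volts[i - 1]) * dt
    return total

# A roughly rectangular 500 V pulse lasting about 2 microseconds -> roughly 1e-3 volt-seconds.
t = [0.0, 0.1e-6, 1.9e-6, 2.0e-6]
v = [0.0, 500.0, 500.0, 0.0]
print(volt_time_integral(t, v))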
Pulse transformers by definition have a duty cycle of less than 1⁄2; whatever energy is stored in the coil during the pulse must be "dumped" out before the pulse is fired again.
== RF transformer ==
There are several types of transformer used in radio frequency (RF) work, distinguished by how their windings are connected, and by the type of cores (if any) the coil turns are wound onto.
Laminated steel used for power transformer cores is very inefficient at RF, wasting a lot of RF power as heat, so transformers for use at radio frequencies tend to use magnetic materials such as powdered iron (for mediumwave and lower shortwave frequencies) or ferrite (for upper shortwave) for their winding cores.
The core material a coil is wrapped around can increase its inductance dramatically – hundreds to thousands of times more than “air” – thereby raising the transformer's Q. The cores of such transformers tend to help performance the most at the lower end of the frequency band the transformer was designed for.
Old RF transformers sometimes included an extra, third coil (called a tickler winding) to inject feedback into an earlier (detector) stage in antique regenerative radio receivers.
=== Air-core transformer ===
So-called “air-core” transformers actually have no core at all – they are wound onto non-magnetic forms or frames, or merely held in shape by the stiffness of the coiled wire. These are used for very high frequency and upper shortwave work.
The lack of a magnetically reactive core means very low inductance per turn, requiring many turns of wire on the transformer coil. Current in the primary winding excites an opposing current in the secondary and induces a secondary voltage proportional to the mutual inductance. At VHF, such transformers may be nothing more than a few turns of wire soldered onto a printed circuit board.
=== Ferrite-core transformer ===
Ferrite core transformers are widely used in RF transformers, especially for current balancing (see below) and impedance matching for TV and radio antennas. Because of the enormous improvement in inductance that ferrite produces, many ferrite cored transformers work well with only one or two turns.
Ferrite is an intensely magnetically reactive ceramic material made from iron oxide (rust) mixed with small fractions of other metals or their oxides, such as magnesium, zinc, and nickel. Different mixtures respond best at different frequencies.
Because they are ceramics, ferrites are (almost) non-conductive, so they respond only to the magnetic fields created by nearby currents, and not to the electric fields created by the accompanying voltages.
=== Choke transformer ===
For radio frequency use, "choke" transformers are sometimes made from windings of transmission line wired in parallel. Sometimes the windings are coaxial cable, sometimes bifilar (paired parallel wire); either is wound around a ferrite, powdered iron, or "air" core. This style of transformer gives an extremely wide bandwidth but only a limited number of impedance ratios (such as 1:1, 1:4, or 1:9) can be achieved with this technique.
Choke transformers are sometimes called transmission-line transformers (although see below for a different transformer type with the same name), or Guanella transformers, or current baluns, or line isolators. Although called a "transmission line" transformer, it is distinct from the transformers made from segments of transmission line.
The name "transmission-line" is used because actual coaxial line is sometimes used, and when paired wires are used, the builder is expected to take special care with the wire spacing, to ensure that the transmission line impedance of the coax or paired wires lies near the geometric mean of the input and output impedances.
The name "choke" is used because the equal and opposite (anti-parallel, balanced) currents in the coax or paired wires cancel each others' magnetic fields, allowing them to pass through unhindered, but magnetic field of the unbalanced flow inhibits the unbalanced current, "choking" it off. Similar reasoning applies to the name "line isolator".
It is called a "current balun" or "current transformer" because the transformed flow produces balanced currents, rather than balanced voltages typical of other transformer types.
=== Line section transformer ===
At radio frequencies and microwave frequencies, a quarter-wave impedance transformer can provide impedance matching between circuits over a limited range of frequencies, using only a section of transmission line no more than a 1⁄4 wave long. The line may be coaxial cable, waveguide, stripline, or microstrip. For upper VHF and UHF frequencies, where coil self resonance interferes with proper operation, it is usually the only feasible method for transforming line impedances.
Single frequency transformers are made using sections of transmission line, often called a "matching section" or a "matching stub". Like the choke transformer above, it is also called a "transmission line transformer" even though the two are very different in form and operation.
Unless it is terminated in its characteristic impedance, any transmission line will produce standing waves of impedance along its length, repeating exactly every full wavelength, and covering its full range of absolute values over only a quarter wave. One may exploit this behavior to transform currents and voltages by connecting sections of transmission line with mismatched impedances to deliberately create a standing wave on a line, and then cutting and reconnecting the line at the position where the desired impedance is reached – never requiring more than a 1⁄4 wave of mismatched line.
This type of transformer is very efficient (very little loss) but severely limited in the frequency span it will operate on: Whereas the choke transformer, above, is very broadbanded, a line section transformer is very narrowbanded.
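The geometric-mean rule mentioned above can be made concrete with a short calculation. The sketch below uses the standard quarter-wave relation Z0 = sqrt(Z_in × Z_load); the 50 ohm and 112.5 ohm figures are illustrative assumptions.

# Characteristic impedance needed for a quarter-wave matching section (illustrative values).
import math

def quarter_wave_z0(z_in_ohms, z_load_ohms):
    return math.sqrt(z_in_ohms * z_load_ohms)

# Matching a 50 ohm feed to a 112.5 ohm load calls for a 75 ohm quarter-wave section.
print(quarter_wave_z0(50.0, 112.5))   # -> 75.0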
=== Balun ===
"Balun" is a generic name for any transformer configured specifically to connect between balanced (non-grounded) and unbalanced (grounded) circuits. They can be made using any transformer type, but the actual balance achieved depends on the type; for example, "choke" baluns produce balanced current and autotransformer-type baluns produce balanced voltages. Baluns can also be made from configurations of transmission line, using bifilar or coaxial cable similar to transmission line transformers in construction and operation.
In addition to interfacing between balanced and unbalanced loads by producing balanced current or balanced voltage (or both), baluns can in addition separately transform (match) impedance between the loads.
== IF transformer ==
Ferrite-core transformers are widely used in intermediate frequency (IF) stages in superheterodyne radio receivers. They are mostly tuned transformers, containing a threaded ferrite slug that is screwed in or out to adjust IF tuning. The transformers are usually canned (shielded) for stability and to reduce interference.
== Audio transformer ==
Audio transformers are those specifically designed for use in audio circuits to carry audio signals. They can be used to block radio frequency interference or the DC component of an audio signal, to split or combine audio signals, or to provide impedance matching between high impedance and low impedance circuits, such as between a high impedance tube (valve) amplifier output and a low impedance loudspeaker, or between a high impedance instrument output and the low impedance input of a mixing console. Audio transformers that operate with loudspeaker voltages and current are larger than those that operate at microphone or line level, which carry much less power. Bridge transformers connect 2-wire and 4-wire communication circuits.
Being magnetic devices, audio transformers are susceptible to external magnetic fields such as those generated by AC current-carrying conductors. "Hum" is a term commonly used to describe unwanted signals originating from the "mains" power supply (typically 50 or 60 Hz). Audio transformers used for low-level signals, such as those from microphones, often include magnetic shielding to protect against extraneous magnetically coupled signals.
Audio transformers were originally designed to connect different telephone systems to one another while keeping their respective power supplies isolated, and are still commonly used to interconnect professional audio systems or system components, to eliminate buzz and hum. Such transformers typically have a 1:1 ratio between the primary and the secondary. These can also be used for splitting signals, balancing unbalanced signals, or feeding a balanced signal to unbalanced equipment. Transformers are also used in DI boxes to convert high-impedance instrument signals (e.g., bass guitar) to low impedance signals to enable them to connect to a microphone input on the mixing console.
A particularly critical component is the output transformer of a valve amplifier. Valve circuits for quality reproduction have long been produced with no other (inter-stage) audio transformers, but an output transformer is needed to couple the relatively high impedance (up to a few hundred ohms depending upon configuration) of the output valve(s) to the low impedance of a loudspeaker. (The valves can deliver a low current at a high voltage; the speakers require high current at low voltage.) Most solid-state power amplifiers need no output transformer at all.
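The matching job done by an output transformer can be sketched with the ideal-transformer relation that reflected impedance scales with the square of the turns ratio; the 500 ohm and 8 ohm values below are assumptions, not figures from any particular amplifier.

# Turns ratio needed to reflect a loudspeaker impedance up to a valve's load impedance.
import math

def turns_ratio(z_primary_ohms, z_secondary_ohms):
    return math.sqrt(z_primary_ohms / z_secondary_ohms)

# Coupling a 500 ohm anode-circuit load to an 8 ohm loudspeaker needs roughly a 7.9:1 ratio.
print(round(turns_ratio(500.0, 8.0), 1))   # -> 7.9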
Audio transformers affect the sound quality because they are non-linear. They add harmonic distortion to the original signal, especially odd-order harmonics, with an emphasis on third-order harmonics. When the incoming signal amplitude is very low there is not enough level to energize the magnetic core (see coercivity and magnetic hysteresis). When the incoming signal amplitude is very high the transformer saturates and adds harmonics from soft clipping. Another non-linearity comes from limited frequency response. For good low-frequency response a relatively large magnetic core is required; high power handling increases the required core size. Good high-frequency response requires carefully designed and implemented windings without excessive leakage inductance or stray capacitance. All this makes for an expensive component.
Early transistor audio power amplifiers often had output transformers, but they were eliminated as advances in semiconductors allowed the design of amplifiers with sufficiently low output impedance to drive a loudspeaker directly.
=== Loudspeaker transformer ===
In the same way that transformers create high voltage power transmission circuits that minimize transmission losses, loudspeaker transformers can power many individual loudspeakers from a single audio circuit operated at higher than normal loudspeaker voltages. This application is common in public address applications. Such circuits are commonly referred to as constant-voltage speaker systems. Such systems are also known by the nominal voltage of the loudspeaker line, such as 25-, 70- and 100-volt speaker systems (the voltage corresponding to the power rating of a speaker or amplifier). A transformer steps up the output of the system's amplifier to the distribution voltage. At the distant loudspeaker locations, a step-down transformer matches the speaker to the rated voltage of the line, so the speaker produces rated nominal output when the line is at nominal voltage. Loudspeaker transformers commonly have multiple primary taps to adjust the volume at each speaker in steps.
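A common way to reason about constant-voltage lines is through the impedance each tap presents, using Z = V^2 / P. The sketch below applies that relation; the 70.7 V line and 10 W tap are illustrative assumptions.

# Impedance presented to a constant-voltage speaker line by a transformer tap of given power.
def tap_impedance(line_volts, tap_watts):
    return line_volts ** 2 / tap_watts

# On a 70.7 V line, a 10 W tap presents roughly 500 ohms to the line.
print(round(tap_impedance(70.7, 10.0), 1))   # -> ~499.8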
=== Output transformer ===
Valve (tube) amplifiers almost always use an output transformer to match the high load impedance requirement of the valves (several kilohms) to a low impedance speaker.
=== Small-signal transformer ===
Moving coil phonograph cartridges produce a very small voltage. A transformer may be used to convert the voltage to the range of the more common moving-magnet cartridges.
Microphones may also be matched to their load with a small transformer that is shielded with mu-metal to minimise noise pickup.
=== Interstage and coupling transformer ===
In a push–pull amplifier, an inverted signal is required and can be obtained from a transformer with a center-tapped winding, used to drive two active devices in opposite phase. These phase splitting transformers are not much used today.
== Other types ==
=== Transactor ===
A transactor is a combination of a transformer and a reactor. A transactor has an iron core with an air-gap, which limits the coupling between windings.
=== Hedgehog ===
Hedgehog transformers are occasionally encountered in homemade 1920s radios. They are homemade audio interstage coupling transformers.
Enameled copper wire is wound round the central half of the length of a bundle of insulated iron wire (e.g., florists' wire), to make the windings. The ends of the iron wires are then bent around the electrical winding to complete the magnetic circuit, and the whole is wrapped with tape or string to hold it together.
=== Variometer and variocoupler ===
A variometer is a type of continuously variable air-core RF inductor with two windings. One common form consisted of a coil wound on a short hollow cylindrical form, with a second smaller coil inside, mounted on a shaft so its magnetic axis can be rotated with respect to the outer coil. The two coils are connected in series. When the two coils are collinear, with their magnetic fields pointed in the same direction, the two magnetic fields add, and the inductance is maximum. If the inner coil is rotated so its axis is at an angle to the outer coil, the magnetic fields do not add and the inductance is less. If the inner coil is rotated so it is collinear with the outer coil but their magnetic fields point in opposite directions, the fields cancel each other out and the inductance is very small or zero. The advantage of the variometer is that inductance can be adjusted continuously, over a wide range. Variometers were widely used in 1920s radio receivers. One of their main uses today is as antenna matching coils to match longwave radio transmitters to their antennas.
The vario-coupler was a device with similar construction, but the two coils were not connected but attached to separate circuits. So it functioned as an air-core RF transformer with variable coupling. The inner coil could be rotated from 0° to 90° angle with the outer, reducing the mutual inductance from maximum to near zero.
The pancake coil variometer was another common construction used in both 1920s receivers and transmitters. It consists of two flat spiral coils suspended vertically facing each other, hinged at one side so one could swing away from the other to an angle of 90° to reduce the coupling. The flat spiral design served to reduce parasitic capacitance and losses at radio frequencies.
Pancake or "honeycomb" coil vario-couplers were used in the 1920s in the common Armstrong or "tickler" regenerative radio receivers. One coil was connected to the detector tube's grid circuit. The other coil, the "tickler" coil was connected to the tube's plate (output) circuit. It fed back some of the signal from the plate circuit into the input again, and this positive feedback increased the tube's gain and selectivity.
=== Rotary transformer ===
A rotary (rotatory) transformer is a specialized transformer that couples electrical signals between two parts that rotate in relation to each other—as an alternative to slip rings, which are prone to wear and contact noise. They are commonly used in helical scan magnetic tape applications.
=== Variable differential transformer ===
A variable differential transformer is a rugged non-contact position sensor. It has two oppositely-phased primaries which nominally produce zero output in the secondary, but any movement of the core changes the coupling to produce a signal.
=== Resolver and synchro ===
The two-phase resolver and related three-phase synchro are rotary position sensors which work over a full 360°. The primary is rotated within two or three secondaries at different angles, and the amplitudes of the secondary signals can be decoded into an angle. Unlike variable differential transformers, the coils, and not just the core, move relative to each other, so slip rings are required to connect the primary.
Resolvers produce in-phase and quadrature components which are useful for computation. Synchros produce three-phase signals which can be connected to other synchros to rotate them in a generator/motor configuration.
=== Piezoelectric transformer ===
Two piezoelectric transducers can be mechanically coupled or integrated in one piece of material, creating a piezoelectric transformer.
=== Flyback ===
A flyback transformer is a high-voltage, high-frequency transformer used in plasma balls and with cathode-ray tubes (CRTs). It provides the high (often several kV) anode DC voltage required for operation of CRTs. Variations in anode voltage supplied by the flyback can result in distortions in the image displayed by the CRT. CRT flybacks may contain multiple secondary windings to provide several other, lower voltages. Its output is often pulsed because it is often used with a voltage multiplier, which may be integrated with the flyback.
== See also ==
Buck–boost transformer
Center tap
Magnetic amplifier
Motor-generator
Saturable reactor
Tap changer
Three-phase electric power
Three-phase
Transformer
== References == | Wikipedia/Transformer_types |
In an electric power system, automatic generation control (AGC) is a system for adjusting the power output of multiple generators at different power plants, in response to changes in the load. Since a power grid requires that generation and load closely balance moment by moment, frequent adjustments to the output of generators are necessary. The balance can be judged by measuring the system frequency; if it is increasing, more power is being generated than used, which causes all the machines in the system to accelerate. If the system frequency is decreasing, more load is on the system than the instantaneous generation can provide, which causes all generators to slow down.
== History ==
Before the use of automatic generation control, one generating unit in a system would be designated as the regulating unit and would be manually adjusted to control the balance between generation and load to maintain system frequency at the desired value. The remaining units would be controlled with speed droop to share the load in proportion to their ratings. With automatic systems, many units in a system can participate in regulation, reducing wear on a single unit's controls and improving overall system efficiency, stability, and economy.
Where the grid has tie interconnections to adjacent control areas, automatic generation control helps maintain the power interchanges over the tie lines at the scheduled levels. With computer-based control systems and multiple inputs, an automatic generation control system can take into account such matters as the most economical units to adjust, the coordination of thermal, hydroelectric, and other generation types, and even constraints related to the stability of the system and capacity of interconnections to other power grids.
== Types ==
=== Turbine-governor control ===
Turbine generators in a power system have stored kinetic energy due to their large rotating masses. All the kinetic energy stored in a power system in such rotating masses is a part of the grid inertia. When system load increases, grid inertia is initially used to supply the load. This, however, leads to a decrease in the stored kinetic energy of the turbine generators: because the mechanical power of these turbines is tied to the delivered electrical power, the turbine generators slow down, and in synchronous generators this decrease in angular velocity corresponds directly to a decrease in frequency.
The purpose of the turbine-governor control (TGC) is to maintain the desired system frequency by adjusting the mechanical power output of the turbine. These controllers have become automated and at steady state, the frequency-power relation for turbine-governor control is,
{\displaystyle \Delta p_{m}=\Delta p_{ref}-1/R\times \Delta f}
where,
{\displaystyle \Delta p_{m}} is the change in turbine mechanical power output,
{\displaystyle \Delta p_{ref}} is the change in a reference power setting,
{\displaystyle R=-\Delta f/\Delta p_{m}=-slope} is the regulation constant which quantifies the sensitivity of the generator to a change in frequency, and
{\displaystyle \Delta f} is the change in frequency.
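As a numeric illustration of the droop relation above, the sketch below evaluates the steady-state change in mechanical power in per-unit terms; the 5% regulation constant and the frequency deviation are assumed values, not data for a specific machine.

# Steady-state turbine-governor response: delta_p_m = delta_p_ref - (1/R) * delta_f
def governor_response(delta_p_ref, delta_f, regulation_r):
    return delta_p_ref - delta_f / regulation_r

# With no change in the reference setting, a 0.01 per-unit frequency dip on a unit with
# R = 0.05 (5% droop) calls for a 0.2 per-unit rise in mechanical power output.
print(governor_response(0.0, -0.01, 0.05))   # -> 0.2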
For steam turbines, steam turbine governing adjusts the mechanical output of the turbine by increasing or decreasing the amount of steam entering the turbine via a throttle valve.
=== Load-frequency control ===
Load-frequency control (LFC) is employed to allow an area to first meet its own load demands, then to assist in returning the steady-state frequency of the system, Δf, to zero. Load-frequency control operates with a response time of a few seconds to keep system frequency stable.
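One common textbook formulation of load-frequency control (not detailed in this article) drives generation from an area control error that combines the tie-line flow deviation with a frequency-bias term. The sketch below assumes that formulation; the bias value and deviations are invented.

# Area control error (ACE) in the frequency-bias form: ACE = dP_tie + B * df.
def area_control_error(tie_flow_deviation_mw, freq_deviation_hz, bias_mw_per_hz):
    return tie_flow_deviation_mw + bias_mw_per_hz * freq_deviation_hz

# Exporting 20 MW more than scheduled while frequency is 0.02 Hz low gives an ACE of about +8 MW,
# i.e. the area should back its generation down slightly.
print(area_control_error(20.0, -0.02, 600.0))   # -> 8.0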
=== Economic dispatch ===
The goal of economic dispatch is to minimize total operating costs in an area by determining how the real power output of each generating unit will meet a given load. Generating units have different costs to produce a unit of electrical energy, and incur different costs for the losses in transmitting energy to the load. An economic dispatch algorithm will run every few minutes to select the combination of generating unit power setpoints that minimizes overall cost, subject to the constraints of transmission limitation or security of the system against failures. Further constraints may be imposed by the water supply of hydroelectric generation, or by the availability of sun and wind power.
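A greatly simplified way to picture dispatch is a merit-order loading of the cheapest units first, as sketched below; real economic dispatch also equalizes incremental costs and respects transmission and security constraints, and the unit data here is invented.

# Merit-order dispatch sketch: load cheapest units first until the demanded load is met.
def merit_order_dispatch(units, load_mw):
    """units: list of (cost_per_mwh, capacity_mw); returns setpoints in the input order."""
    setpoints = [0.0] * len(units)
    remaining = load_mw
    for idx, (_, capacity) in sorted(enumerate(units), key=lambda item: item[1][0]):
        take = min(capacity, remaining)
        setpoints[idx] = take
        remaining -= take
    return setpoints

units = [(35.0, 200.0), (20.0, 150.0), (50.0, 300.0)]   # ($/MWh, MW), illustrative
print(merit_order_dispatch(units, 400.0))               # -> [200.0, 150.0, 50.0]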
== See also ==
Optimal Power Flow
== References == | Wikipedia/Automatic_generation_control |
A buck–boost transformer is a type of transformer used to make adjustments to the voltage applied to alternating current equipment. Buck–boost connections are used in several places such as uninterruptible power supply (UPS) units for computers and in the tanning bed industry.
Buck–boost transformers can be used to power low voltage circuits including control, lighting circuits, or applications that require 12, 16, 24, 32 or 48 volts, consistent with the design's secondaries. The transformer is connected as an isolating transformer and the nameplate kVA rating is the transformer’s capacity.
== Application ==
Buck-boost transformers may be used for electrical equipment where the amount of buck or boost is fixed. For example, a fixed boost would be used when connecting equipment rated for 230 V AC to a 208 V power source.
Units are rated in volt-amperes (most commonly kilovolt-amperes, kVA; more rarely, amperes) and are rated for a percentage of voltage drop or rise. For example, a buck–boost transformer rated at 10% boost will raise a supplied voltage of 208 V AC to about 229 V AC. A rating of 10% buck will yield a result of about 209 V AC if the actual incoming supplied voltage is 230 V AC.
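The percentage figures above can be reproduced with simple arithmetic if one assumes the boost connection multiplies the incoming voltage by (1 + percentage) and the reverse buck connection divides by the same factor; real units should be wired per the manufacturer's connection tables.

# Buck and boost output voltages under the assumption described above.
def boosted_voltage(v_in, percent):
    return v_in * (1 + percent / 100.0)

def bucked_voltage(v_in, percent):
    return v_in / (1 + percent / 100.0)

print(round(boosted_voltage(208.0, 10.0), 1))   # -> 228.8, i.e. about 229 V
print(round(bucked_voltage(230.0, 10.0), 1))    # -> 209.1, i.e. about 209 V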
== Frequency ==
All transformers operate only with alternating current. Transformers change only voltage and current, not frequency. Equipment that uses synchronous motors will operate at a different speed if operated at other than the design frequency. Some equipment is marked on its nameplate to run at either 50 Hz or 60 Hz, and would need only the voltage adjusted with a buck–boost transformer to produce the rated output voltage.
== Consumer and business applications ==
Transformers may come semi-wired, where the installer completes the last internal connections to have the unit perform the amount of buck or boost needed. Units may have multiple taps on both the primary and secondary coils to achieve this flexibility. They may be designed for hard-wired installations (no plugs) or with plug and receptacle to allow the same transformer to be used in several different applications. The same transformer can be rewired to raise or lower voltage by 5%, 10% or 15%. The primary may have wiring combinations for dual voltage usage: example for either 120 V AC or 240 V AC applications, depending on the final wiring done by the electrician.
Not all equipment requires voltage correction. These transformers are used when electrical equipment has a voltage requirement that is slightly out of tolerance with the incoming power supply. This is most common when using 240 V equipment in a business with 208 V service or vice versa.
Equipment is typically labeled with its voltage rating, and may advertise the amount of tolerance it will accept before degraded performance or damage can be expected. A unit that requires 230 V AC with a tolerance of 5% will not require a buck–boost transformer if the branch circuit (under load) is between 219 V AC and 241 V AC. Measurement should be made while the circuit is loaded, as the voltage can drop several volts compared to the open measurement. The transformer must be rated to carry the full load current or it may be damaged.
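The tolerance check described above amounts to testing whether the loaded circuit voltage falls inside the equipment's rated band, as in this small sketch; the sample readings are invented.

# Decide whether a measured (loaded) voltage is within the equipment's rated tolerance band.
def within_tolerance(measured_v, rated_v, tolerance_percent):
    margin = rated_v * tolerance_percent / 100.0
    return rated_v - margin <= measured_v <= rated_v + margin

print(within_tolerance(212.0, 230.0, 5.0))   # -> False: correction is needed
print(within_tolerance(224.0, 230.0, 5.0))   # -> True: no buck-boost transformer required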
== See also ==
Tap changer
Buck–boost converter
Transformer types
== References == | Wikipedia/Buck–boost_transformer |
A flyback transformer (FBT), also called a line output transformer (LOPT), is a special type of electrical transformer. It was initially designed to generate high-voltage sawtooth signals at a relatively high frequency. In modern applications, it is used extensively in switched-mode power supplies for both low (3 V) and high voltage (over 10 kV) supplies.
== History ==
The flyback transformer circuit was invented as a means of controlling the horizontal movement of the electron beam in a cathode-ray tube (CRT). Unlike conventional transformers, a flyback transformer is not fed with a signal of the same waveshape as the intended output current. A convenient side effect of such a transformer is the considerable energy that is available in its magnetic circuit. This can be exploited using extra windings to provide power to operate other parts of the equipment. In particular, very high voltages are easily obtained using relatively few turns of windings which, after rectification, can provide a very high accelerating voltage for a CRT. Many more recent applications of such a transformer dispense with the need to produce high voltages and use the device as a relatively efficient means of producing a wide range of lower voltages using a transformer that is much smaller than a conventional mains transformer.
== Operation and usage ==
The primary winding of the flyback transformer is driven by a switch from a DC supply (usually a transistor). When the switch is closed, the primary inductance causes the current to build up in a ramp. An integral diode connected in series with the secondary winding prevents the formation of a secondary current that would eventually oppose the primary current ramp.
When the switch is opened, the current in the primary falls to zero. The energy stored in the magnetic core is released to the secondary as the magnetic field in the core collapses. The voltage in the output winding rises very quickly (usually less than a microsecond) until the load conditions limit it. Once the voltage reaches such a level as to allow a secondary current, the charge flow is like a descending ramp.
The cycle can then be repeated. If the secondary current is allowed to drop completely to zero (no energy stored in the core), then it is said that the transformer works in discontinuous mode. When the secondary current is always non-zero (some energy is always stored in the core), then this is continuous mode. This terminology is used especially in power supply transformers.
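In the ideal discontinuous-mode case, all of the energy stored in the primary inductance each cycle, E = 1/2 L I_peak^2, is delivered to the secondary every switching period, so the average power is that energy times the switching frequency. The sketch below assumes that lossless case with invented component values.

# Ideal discontinuous-mode flyback power transfer: P = 0.5 * L * I_peak**2 * f_switch.
def flyback_power(primary_inductance_h, peak_current_a, switching_freq_hz):
    energy_per_cycle_j = 0.5 * primary_inductance_h * peak_current_a ** 2
    return energy_per_cycle_j * switching_freq_hz

# A 200 microhenry primary charged to 2 A peak and switched at 50 kHz transfers about 20 W.
print(flyback_power(200e-6, 2.0, 50e3))   # -> 20.0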
The low voltage output winding mirrors the sawtooth of the primary current and, e.g. for television purposes, has fewer turns than the primary, thus providing a higher current. This is a ramped and pulsed waveform that repeats at the horizontal (line) frequency of the display. The flyback (the vertical portion of the sawtooth wave) can be a potential problem for the flyback transformer if the energy has nowhere to go: the faster a magnetic field collapses, the greater the induced voltage, which, if not controlled, can flash over the transformer terminals. The high frequency used permits the use of a much smaller transformer. In television sets, this high frequency is about 15 kilohertz (15.625 kHz for PAL, 15.734 kHz for NTSC), and vibrations from the transformer core caused by magnetostriction can often be heard as a high-pitched whine. In CRT-based computer displays, the frequency can vary over a wide range, from about 30 kHz to 150 kHz.
The transformer can be equipped with extra windings whose sole purpose is to induce a relatively large voltage pulse when the magnetic field collapses as the input switch is turned off. There is considerable energy stored in the magnetic field, and coupling it out via extra windings helps it to collapse quickly, and avoids the voltage flash over that might otherwise occur. The pulse train coming from the flyback transformer windings is converted to direct current by a simple half-wave rectifier. There is no point in using a full wave design as there are no corresponding pulses of opposite polarity. One turn of a winding often produces pulses of several volts. In older television designs, the transformer produced the required high voltage for the CRT accelerating voltage directly with the output rectified by a simple rectifier. In more modern designs, the rectifier is replaced by a voltage multiplier. Color television sets must also use a regulator to control the high voltage. The earliest sets used a shunt vacuum tube regulator, but the introduction of solid-state sets employed a simpler voltage-dependent resistor. The rectified voltage is then used to supply the final anode of the cathode-ray tube.
There are often auxiliary windings that produce lower voltages for driving other parts of the television circuitry. The voltage used to bias the varactor diodes in modern tuners is often derived from the flyback transformer ("Line OutPut Transformer" LOPT). In tube sets, a one or two-turn filament winding is located on the opposite side of the core as the HV secondary, used to drive the HV rectifier tube's heater.
== Practical considerations ==
In modern displays, the LOPT, voltage multiplier, and rectifier are often integrated into a single package on the main circuit board. There is usually a thickly insulated wire from the LOPT to the anode terminal (covered by a rubber cap) on the side of the picture tube.
One advantage of operating the transformer at the flyback frequency is that it can be much smaller and lighter than a comparable transformer operating at mains (line) frequency. Another advantage is that it provides a failsafe mechanism — should the horizontal deflection circuitry fail, the flyback transformer will cease operating and shut down the rest of the display, preventing the screen burn-in that would otherwise result from a stationary electron beam.
== Construction ==
The primary is wound first around a ferrite rod, and then the secondary is wound around the primary. This arrangement minimizes the leakage inductance of the primary. Finally, a ferrite frame is wrapped around the primary/secondary assembly, closing the magnetic field lines. Between the rod and the frame is an air gap, which increases the reluctance. The secondary is wound layer by layer with enameled wire, and Mylar film between the layers. In this way, parts of the wire with a higher voltage difference have more dielectric material between them.
== Applications ==
The flyback transformer operates CRT-display devices such as television sets and CRT computer monitors. The voltage and frequency can each range over a wide scale, depending on the device. For example, a large color TV CRT may require 20 to 50 kV with a horizontal scan rate of 15.734 kHz for NTSC devices and 15.625 kHz for PAL devices. Unlike a power (or "mains") transformer, which uses an alternating current of 50 or 60 hertz, a flyback transformer typically operates with switched currents at much higher frequencies in the range of 15 kHz to 50 kHz.
== See also ==
Flyback converter
Flyback diode
== Notes ==
== References ==
== External links ==
U.S. patent 3,665,288 - "Television sweep transformer" - Theodore J. Godawski | Wikipedia/Flyback_transformer |
An isolation transformer is a transformer used to transfer electrical power from a source of alternating current (AC) power to some equipment or device while isolating the powered device from the power source, usually for safety reasons or to reduce transients and harmonics. Isolation transformers provide galvanic isolation; no conductive path is present between source and load. This isolation is used to protect against electric shock, to suppress electrical noise in sensitive devices, or to transfer power between two circuits which must not be connected. A transformer sold for isolation is often built with special insulation between primary and secondary, and is specified to withstand a high voltage between windings.
Isolation transformers block transmission of the DC component in signals from one circuit to the other, but allow AC components in signals to pass. Transformers that have a ratio of 1 to 1 between the primary and secondary windings are often used to protect secondary circuits and individuals from electrical shocks between energized conductors and earth ground.
Suitably designed isolation transformers block interference caused by ground loops. Isolation transformers with electrostatic shields are used for power supplies for sensitive equipment such as computers, medical devices, or laboratory instruments.
Some specifications require that isolation transformers be a part of the lightning protection on the AC circuits.
== Terminology ==
Sometimes the term is used to emphasize that a device is not an autotransformer whose primary and secondary circuits are connected. Power transformers with specified insulation between primary and secondary are not usually described only as "isolation transformers" unless this is their primary function. Only transformers whose primary purpose is to isolate circuits are routinely described as isolation transformers.
== Operation ==
Isolation transformers are designed with attention to capacitive coupling between the two windings. The capacitance between primary and secondary windings would also couple AC current from the primary to the secondary. A grounded Faraday shield between the primary and the secondary greatly reduces the coupling of common-mode noise. This may be another winding or a metal strip surrounding a winding.
Differential noise can magnetically couple from the primary to the secondary of an isolation transformer, and must be filtered out if a problem occurs.
== Applications ==
=== Pulse transformers ===
Some small transformers are used for isolation in pulse circuits.
=== Electronics testing ===
In electronics testing and servicing, an isolation transformer is a 1:1 (under load) power transformer used for safety. Without it, exposed live metal in a device under test is at a hazardous voltage relative to grounded objects such as a heating radiator or oscilloscope ground lead (a particular hazard with some old vacuum-tube equipment with live chassis). With the transformer, as there is no conductive connection between transformer secondary and primary, only a small leakage current will flow if the exposed live metal is connected to earth.
Even if an isolation transformer is used, hazardous voltages may still be present between components of the isolated device. Thus it is still possible for an operator to be exposed to lethal voltages by touching multiple elements in the circuit. An isolation transformer provides maximum protection when the device is ungrounded. Connecting it to test equipment, for example, an oscilloscope or a benchtop multimeter, may ground the circuit.
Electrical isolation is considered to be particularly important on medical equipment, and special standards apply. Often the system must additionally be designed so that fault conditions do not interrupt power, but generate a warning.
=== Supply of equipment at elevated potentials ===
Isolation transformers are also used for the power supply of devices not at ground potential. An example is the Austin transformer for the power supply of air-traffic obstacle warning lamps on radio antenna masts. Without the isolation transformer, the lighting circuits on the mast would conduct radio-frequency energy to ground through the power supply.
== See also ==
Balun
Power quality
Transformer types
Zigzag transformer
== References ==
== External links ==
Transformer Isolation | Wikipedia/Isolation_transformer |
A rotary (rotatory) transformer is a specialized transformer used to couple electrical signals between two parts that rotate in relation to each other. They may be either cylindrical or 'pancake' shaped.
Slip rings can be used for the same purpose, but are subject to friction, wear, intermittent contact, and limitations on the rotational speed that can be accommodated without damage. Wear can be eliminated by using a pool of liquid mercury or liquid metal alloy instead of a solid ring contact, but the toxicity and slow corrosion of mercury are problematic, and very high rotational speeds are again difficult to achieve. A rotary transformer has none of these limitations.
Rotary transformers are constructed by winding the primary and secondary windings into separate halves of a cup core; these concentric halves face each other, with each half mounted to one of the rotating parts. Magnetic flux provides the coupling from one half of the cup core to the other across an air gap, providing the mutual inductance that couples energy from the transformer's primary to its secondary.
In brushless synchros, typical rotary transformers (in pairs) provide longer life than slip rings. These rotary transformers have a cylindrical, rather than a disc-shaped, air gap between windings. The rotor winding is a spool-shaped ferromagnetic core, with the winding placed like thread on a spool. The flanges are the pole pieces. The stator winding is a ferromagnetic cylinder with the winding inside, and end poles that are discs with holes, like washers.
== Uses ==
Rotary transformers are most commonly used in videocassette recorders, as well as other tape drives that use rotary heads to implement helical scan, such as those used for tape backup. Signals must be coupled from the electronics of the VCR or other tape drive to the fast-moving tape heads carried on the rotating head drum; a rotary transformer is ideal for this purpose. Most VCR designs require more than one signal to be coupled to the head drum. In this case, the cup core has more than one concentric winding, isolated by individual raised portions of the core. The transformer for the head drum shown to the right couples six individual channels.
Another use is to transmit the signals from rotary torque sensors installed on electric motors, to allow electronic control of motor speed and torque using feedback.
Because they are transformers, rotary transformers can only pass AC, not DC, power and signals. The supporting electronics, including the tape heads or torque sensors, must be designed to accommodate this.
== See also ==
Metadyne
Rotary converter
Variable-frequency transformer
== References == | Wikipedia/Rotary_transformer |
A variable-frequency transformer (VFT) is used to transmit electricity between two (asynchronous or synchronous) alternating current frequency domains. The VFT is a relatively recent development. Most asynchronous grid inter-ties use high-voltage direct current converters, while synchronous grid inter-ties are connected by lines and "ordinary" transformers, but without the ability to control power flow between the systems, or with phase-shifting transformers with some flow control.
It can be thought of as a very high power synchro, or a rotary converter acting as a frequency changer, which is more efficient than a motor–generator of the same rating.
== Construction and operation ==
A variable-frequency transformer is a doubly fed electric machine resembling a vertical shaft hydroelectric generator with a three-phase wound rotor, connected by slip rings to one external power circuit. The stator is connected to the other. With no applied torque, the shaft rotates due to the difference in frequency between the networks connected to the rotor and stator. A direct-current torque motor is mounted on the same shaft; changing the direction of torque applied to the shaft changes the direction of power flow.
The variable-frequency transformer behaves as a continuously adjustable phase-shifting transformer. It allows control of the power flow between two networks. Unlike power electronics solutions such as back-to-back HVDC, the variable frequency transformer does not demand harmonic filters and reactive power compensation. Limitations of the concept are the current-carrying capacity of the slip rings for the rotor winding.
== Projects ==
Five small variable-frequency transformers with a total power rating of 25 MVA were in use at Neuhof Substation, Bad Sachsa, Germany, for coupling the power grids of former East and West Germany between 1985 and 1990.
Langlois Substation in Québec, Canada (45°17′13.76″N 74°0′56.07″W) installed a 100 MW variable-frequency transformer in 2004 to connect the asynchronous grids of Québec and the northeastern United States. This was the first large-scale commercial variable-frequency transformer; it is located electrically near sixteen hydro generators at Les Cèdres, Quebec and thirty-six more at Beauharnois, Quebec. The operating experience since April 2004 has demonstrated the VFT's inherent compatibility with the nearby generators.
AEP Texas installed a 100 MW VFT substation in Laredo, Texas, United States (27°34′13.64″N 99°30′34.98″W) in early 2007. It connects the power systems of ERCOT (in the United States) to CFE (in Mexico). (See The Laredo VFT Project.)
Smaller VFTs are used in large land-based wind turbines, so that the turbine rotation speed can vary while connected to an electric power distribution grid.
== Linden VFT ==
General Electric installed a 3 × 100 MW VFT substation in Linden, New Jersey, in the United States in 2009. It connects the power systems of PJM & New York Independent System Operator (NYISO). This installation is in parallel with three existing phase-shifting transformers to regulate synchronous power flow.
== Economics of energy trading ==
VFTs provide the technical feasibility to flow power in both directions between two grids, permitting power exchanges that were previously impossible. Energy in a grid with lower costs can be transmitted to a grid with higher costs (higher demand), with energy trading. Power capacity is sold by providers. Transmission scheduling rights (TSRs) are auctioned by the transmission line owners.
Financial transmission rights (FTRs) are a financial instrument used to balance energy congestion and demand costs.
== See also ==
Induction regulator
Quadrature booster
== References ==
== External links ==
Power Transmission Without the Power Electronics | Wikipedia/Variable-frequency_transformer |
An amorphous metal transformer (AMT) is a type of energy efficient transformer found on electric grids. The magnetic core of this transformer is made with a ferromagnetic amorphous metal.
The typical material (Metglas) is an alloy of iron with boron, silicon, and phosphorus in the form of thin (e.g. 25 μm) foils rapidly cooled from melt. These materials have high magnetic susceptibility, very low coercivity and high electrical resistance.
The high resistance and thin foils lead to low losses by eddy currents when subjected to alternating magnetic fields. On the downside amorphous alloys have a lower saturation induction and often a higher magnetostriction compared to conventional crystalline iron-silicon electrical steel.
== Core loss and copper loss ==
In a transformer the no-load loss is dominated by the core loss. With an amorphous core, this can be 70–80% lower than with traditional crystalline materials.
The loss under heavy load is dominated by the resistance of the copper windings and thus called copper loss. Here the lower saturation magnetization of amorphous cores tends to result in a lower efficiency at full load. Using more copper and core material it is possible to compensate for this, so high-efficiency AMTs can be more efficient at both low and high load, though at a larger size. The more expensive amorphous core material, the more difficult handling and the need for more copper windings make an AMT more expensive than a traditional transformer.
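The trade-off described above can be made concrete with the usual two-part loss model: a fixed no-load (core) loss plus a copper loss that grows with the square of the load fraction. The sketch below uses that model with invented loss figures.

# Transformer efficiency from a fixed no-load loss and a load-dependent copper loss.
def transformer_efficiency(rated_kva, load_fraction, power_factor,
                           no_load_loss_kw, full_load_copper_loss_kw):
    output_kw = rated_kva * load_fraction * power_factor
    losses_kw = no_load_loss_kw + full_load_copper_loss_kw * load_fraction ** 2
    return output_kw / (output_kw + losses_kw)

# A lightly loaded distribution transformer: the low no-load loss of an amorphous core
# matters most at small load fractions.
print(round(transformer_efficiency(500.0, 0.25, 0.9, 0.2, 4.0), 4))   # -> about 0.996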
== Applications ==
The main application of AMTs are the grid distribution transformers rated at about 50–1000 kVA. These transformers typically run 24 hours a day and at a low load factor (average load divided by nominal load). The no load loss of these transformers makes up a significant part of the loss of the whole distribution net. Amorphous iron is also used in specialized electric motors that operate at high frequencies of perhaps 350 Hz or more.
== Advantages and disadvantages ==
More efficient transformers lead to a reduction of generation requirement and, when using electric power generated from fossil fuels, less CO2 emissions. This technology has been widely adopted by large developing countries such as China and India, where labour cost is low; AMTs are in fact more labour-intensive than conventional distribution transformers, which explains their very low adoption in the comparable (by size) European market. These two countries can potentially save 25–30 TWh of electricity annually, eliminate 6–8 GW of generation investment, and reduce CO2 emissions by 20–30 million tons by fully utilizing this technology.
== Notes and references ==
== External links ==
Amorphous Metal Transformer Information Website
Amorphous Metals in Electric-Power Distribution Applications
Amorphous Ribbon for Transformers | Wikipedia/Amorphous_metal_transformer |
An Austin ring transformer is a special type of isolation transformer with low capacitance between the primary and secondary windings and high isolation.
== Etymology ==
It is named after its inventor, Arthur O. Austin.
== Background ==
AM radio stations that broadcast in the medium frequency (MF) and low frequency (LF) bands typically use a type of antenna called a base-fed mast radiator. This is a tall radio mast in which the steel mast structure itself is energized and serves as the antenna. The mast is mounted on a ceramic insulator to isolate it from the ground and the feedline from the transmitter is bolted to it. Typically the mast will have a radio frequency AC potential of several thousand volts on it with respect to ground during operation.
Aviation regulations require that radio towers have aircraft warning lights along their length, so the tower will be visible to aircraft at night. The high voltage on the tower poses a problem with powering the lights. The power cable that runs down the tower and connects to the utility line is at the high voltage of the mast. Without protective equipment the RF current from the mast would flow down the cable to the power line ground, short-circuiting the mast. To prevent this, a protective isolator device is installed in the lighting power cable at the base of the mast which blocks the radio frequency power while allowing the 50/60 hertz AC mains power for the lights through.
== Use and mechanism ==
The Austin transformer is a specialized type of isolation transformer made specifically for this use, in which the primary and secondary windings of the transformer are separated by an air gap, wide enough so the high voltage on the antenna cannot jump across. It consists of a ring-shaped toroidal iron core with the primary winding wrapped around it, mounted on a bracket from the mast's concrete base, connected to the lighting power source. The secondary winding which provides power to the mast lights is a ring-shaped coil which circles the toroidal core through the center, like two links in a chain, with an air gap between the two. The magnetic field created by the primary winding induces current in the secondary winding without the necessity of a direct connection between them. The wide gap of several centimeters between the coils also ensures that there is minimal interwinding capacitance, to prevent RF voltage being induced in the supply wiring by capacitive coupling. A spark gap is often located nearby with a gap smaller than the gap between the rings, to prevent damage to the transformer and transmitting equipment in the case of a lightning strike.
== References ==
== External links ==
"Picture of an Austin transformer at a broadcast transmitter site". Retrieved 2020-04-12. | Wikipedia/Austin_transformer |
A grounding transformer or earthing transformer is a type of auxiliary transformer used in three-phase electric power systems to provide a ground path to either an ungrounded wye or a delta-connected system. Grounding transformers are part of an earthing system of the network. They let three-phase (delta connected) systems accommodate phase-to-neutral loads by providing a return path for current to a neutral.
Grounding transformers are typically used to:
Provide a relatively low-impedance path to ground, thereby maintaining the system neutral at or near ground potential.
Limit the magnitude of transient over voltages when restriking ground faults occur.
Provide a source of ground fault current during line-to-ground faults.
Permit the connection of phase-to-neutral loads when desired.
Grounding transformers most commonly incorporate a single winding transformer with a zigzag winding configuration, but may also be created with a delta-wye transformer (a rarer arrangement). Neutral grounding transformers are very common on generators in power plants and wind farms. Neutral grounding transformers are sometimes applied on high-voltage (sub-transmission) systems, such as at 33 kV, where the circuit would otherwise not have a ground; for example, if a system is fed by a delta-connected transformer. The grounding point of the transformer may be connected through a resistor or arc suppression coil to limit the fault current on the system in the event of a line-to-ground fault.
== References == | Wikipedia/Grounding_transformer |
Planar transformers are high frequency transformers used in isolated switchmode power supplies operating at high frequency. As opposed to conventional "wire-wound-on-a-bobbin" transformers, planar transformers usually contain winding turns made of thin copper sheets riveted together at the ends of turns in the case of high current windings, or windings etched on a PCB in a spiral form. As the current conductors are thin sheets of copper, the operating frequency is not limited by skin effect. As such, high power converters built with planar transformers can be designed to operate at relatively high switching frequencies, often 100 kHz or above. This reduces the size of required magnetic components and capacitors, thereby increasing power density.
== References == | Wikipedia/Planar_transformer |
In a typical power distribution grid, transformer losses contribute about 40-50% of the total transmission and distribution loss. Energy efficient transformers are therefore an important means to reduce transmission and distribution loss. With the improvement of electrical steel (silicon steel) properties, the losses of a transformer in 2010 can be half those of a similar transformer in the 1970s. With new magnetic materials, it is possible to achieve even higher efficiency. The amorphous metal transformer is a modern example.
== References ==
== External links ==
World's largest Amorphous Metal Power Transformer: 99.31% Efficiency
Amorphous Metals in Electric-Power Distribution Applications
Australian Mandatory Efficiency Requirements for Distribution Transformers | Wikipedia/Energy_efficient_transformer |
Instrument transformers are high accuracy class electrical devices used to isolate or transform voltage or current levels. The most common usage of instrument transformers is to operate instruments or metering from high voltage or high current circuits, safely isolating secondary control circuitry from the high voltages or currents. The primary winding of the transformer is connected to the high voltage or high current circuit, and the meter or relay is connected to the secondary circuit.
Instrument transformers may also be used as an isolation transformer so that secondary quantities may be used in phase shifting without affecting other primary connected devices.
== Current transformer ==
Current transformers (CT) are a series-connected type of instrument transformer. They are designed to present negligible load to the supply being measured and have an accurate current ratio and phase relationship to enable accurate secondary connected metering.
Current transformers are often constructed by passing a single primary turn (either an insulated cable or an uninsulated bus bar) through a well-insulated toroidal core wrapped with many turns of wire. This affords easy implementation on high voltage bushings of grid transformers and other devices by installing the secondary turn core inside high-voltage bushing insulators and using the pass-through conductor as a single turn primary.
A current clamp uses a current transformer with a split core that can be easily wrapped around a conductor in a circuit. This is a common method used in portable current measuring instruments, but permanent installations use more economical types of current transformer.
Specially constructed wideband CTs are also used, usually with an oscilloscope, to measure high frequency waveforms or pulsed currents within pulsed power systems. One type provides a voltage output, developed across an internal resistance, that is proportional to the measured current; another, called a Rogowski coil, requires an external integrator in order to provide a proportional output.
=== Ratio ===
The CT is typically described by its current ratio from primary to secondary. A 1000:5 CT will provide an output current of 5 amperes when 1000 amperes are flowing through its primary winding. Standard secondary current ratings are 5 amperes or 1 ampere, compatible with standard measuring instruments. Stepping the current down in this way permits metering while protecting both the equipment and the operator.
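As a minimal sketch of how the ratio is applied (the 1000:5 ratio matches the example above; the meter reading and function name are assumed for illustration), the primary current is recovered by multiplying the secondary reading by the ratio:

```python
# Minimal sketch: recovering the primary current from a CT secondary reading.
# The 1000:5 ratio matches the example above; the meter reading is assumed.

def primary_current(secondary_amps: float, ratio_primary: float, ratio_secondary: float) -> float:
    """Scale a secondary-side reading back to the primary current."""
    return secondary_amps * (ratio_primary / ratio_secondary)

# A 1000:5 CT whose secondary meter reads 3.2 A implies 640 A in the primary.
print(primary_current(3.2, 1000, 5))  # 640.0
```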
=== Burden and accuracy ===
Burden and accuracy are usually stated as a combined parameter due to being dependent on each other.
Metering style CTs are designed with smaller cores and VA capacities. This causes metering CTs to saturate at lower secondary voltages, protecting sensitive connected metering devices from damage by large fault currents in the event of a primary electrical fault. A CT with a rating of 0.3B0.6 would indicate that with up to 0.6 ohms of secondary burden the secondary current will be within a 0.3 percent error parallelogram on an accuracy diagram incorporating both phase angle and ratio errors.
Relaying CTs used for protective circuits are designed with larger cores and higher VA capacities to ensure secondary measuring devices have true representations with massive grid fault currents on primary circuits. A CT with a rating of 2.5L400 would indicate it can produce a secondary voltage of up to 400 volts at a secondary current of 100 amperes (20 times its rated 5-ampere rating) and still remain within 2.5 percent of the true ratio.
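A rough sketch of what those figures imply for the connected burden (the 400 V and 20 × 5 A values are taken from the example above; everything else is illustrative, and this is simply Ohm's law, not a full accuracy calculation):

```python
# Sketch: largest burden impedance a relaying CT can drive while staying
# within its voltage limit, using the 400 V / 100 A figures quoted above.

def max_burden_ohms(voltage_limit: float, max_secondary_amps: float) -> float:
    """Ohm's law: the largest burden that keeps the secondary voltage within the limit."""
    return voltage_limit / max_secondary_amps

print(max_burden_ohms(400.0, 20 * 5.0))  # 4.0 ohms
```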
Care must be taken that the secondary winding of a CT is not disconnected from its low-impedance load while current flows in the primary, as this may produce a dangerously high voltage across the open secondary (especially in a relaying type CT) and could permanently affect the accuracy of the transformer.
=== Multi-ratio CT ===
The secondary winding can be a single ratio or have several tap points to provide a range of ratios.
== Voltage transformer ==
== References == | Wikipedia/Instrument_transformer |
A solid-state transformer (SST), power electronic transformer (PET), or electronic power transformer is an AC-to-AC converter, a type of electric power converter that replaces a conventional transformer used in AC electric power distribution. It is more complex than a conventional transformer operating at utility frequency, but can be smaller and more efficient than conventional transformers because it operates at higher frequencies. Solid-state transformers are an emerging technology as of 2025.
Solid-state transformers can actively regulate voltage and current. Some can convert single-phase power to three-phase power and vice versa. Variations can input or output DC power to reduce the number of conversions, for greater end-to-end efficiency. As a complex electronic circuit, it must be designed to withstand lightning and other surges.
The main types are the true AC-to-AC converter (with no DC stages) and the AC-to-DC-to-DC-to-AC converter (in which an active rectifier supplies power to a DC-to-DC converter, which supplies power to a power inverter). A solid-state transformer usually contains a transformer, inside the AC-to-AC converter or DC-to-DC converter, which provides electrical isolation and carries the full power. Because this internal transformer is driven by the converter stages at a high switching frequency, it requires far smaller coils to step voltages up or down, so it can be much smaller than a utility-frequency transformer.
A modular solid-state transformer consists of several high-frequency transformers and is similar to a multi-level converter.
== References ==
"Are Solid-State Transformers Ready for Prime Time?".
"Solid State Transformer For Power Distribution Applications" (PDF). | Wikipedia/Solid-state_transformer |
Various types of electrical transformer are made for different purposes. Despite their design differences, the various types employ the same basic principle as discovered in 1831 by Michael Faraday, and share several key functional parts.
== Power transformer ==
=== Laminated core ===
This is the most common type of transformer, widely used in electric power transmission and appliances to convert mains voltage to low voltage to power electronic devices. They are available in power ratings ranging from mW to MW. The insulated laminations minimize eddy current losses in the iron core.
Small appliance and electronic transformers may use a split bobbin, giving a high level of insulation between the windings. The rectangular cores are made up of stampings, often in E-I shape pairs, but other shapes are sometimes used. Shields between primary and secondary may be fitted to reduce EMI (electromagnetic interference), or a screen winding is occasionally used.
Small appliance and electronics transformers may have a thermal cut-out built into the winding, to shut-off power at high temperatures to prevent further overheating.
=== Toroidal ===
Donut-shaped toroidal transformers save space compared to E-I cores, and may reduce external magnetic field. These use a ring shaped core, copper windings wrapped around this ring (and thus threaded through the ring during winding), and tape for insulation.
Toroidal transformers have a lower external magnetic field compared to rectangular transformers, and can be smaller for a given power rating. However, they cost more to make, as winding requires more complex and slower equipment.
They can be mounted by a bolt through the center, using washers and rubber pads or by potting in resin. Care must be taken that the bolt does not form part of a short-circuit turn.
=== Autotransformer ===
An autotransformer consists of only one winding that is tapped at some point along the winding. Voltage is applied across a terminal of the winding, and a higher (or lower) voltage is produced across another portion of the same winding. The equivalent power rating of the autotransformer is lower than the actual load power rating. It is calculated by: load VA × (|Vin – Vout|)/Vin. For example, an autotransformer that adapts a 1000 VA load rated at 120 volts to a 240 volt supply has an equivalent rating of at least: 1,000 VA × (240 V – 120 V) / 240 V = 500 VA. However, the actual rating (shown on the nameplate) must be at least 1000 VA.
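A minimal sketch of that calculation (the function name is illustrative; the numbers are those of the example above):

```python
# Sketch of the equivalent-rating formula quoted above:
#   equivalent VA = load VA * |Vin - Vout| / Vin

def autotransformer_equivalent_va(load_va: float, v_in: float, v_out: float) -> float:
    """Equivalent (transformed) rating of an autotransformer."""
    return load_va * abs(v_in - v_out) / v_in

# 1000 VA load at 120 V fed from a 240 V supply, as in the text.
print(autotransformer_equivalent_va(1000, 240, 120))  # 500.0 VA
```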
For voltage ratios that don't exceed about 3:1, an autotransformer is cheaper, lighter, smaller, and more efficient than an isolating (two-winding) transformer of the same rating. Large three-phase autotransformers are used in electric power distribution systems, for example, to interconnect 220 kV and 33 kV sub-transmission networks or other high voltage levels.
=== Variable autotransformer ===
By exposing part of the winding coils of an autotransformer, and making the secondary connection through a sliding carbon brush, an autotransformer with a near-continuously variable turns ratio can be obtained, allowing for wide voltage adjustment in very small increments.
=== Induction regulator ===
The induction regulator is similar in design to a wound-rotor induction motor, but it is essentially a transformer whose output voltage is varied by rotating its secondary relative to the primary—i.e., rotating the angular position of the rotor. It can be seen as a power transformer exploiting rotating magnetic fields. The major advantage of the induction regulator is that, unlike variacs, it is practical for ratings above 5 kVA. Hence, such regulators find widespread use in high-voltage laboratories.
=== Polyphase transformer ===
For polyphase systems, multiple single-phase transformers can be used, or all phases can be connected to a single polyphase transformer. For a three phase transformer, the three primary windings are connected together and the three secondary windings are connected together. Examples of connections are wye-delta, delta-wye, delta-delta, and wye-wye. A vector group indicates the configuration of the windings and the phase angle difference between them. If a winding is connected to earth (grounded), the earth connection point is usually the center point of a wye winding. If the secondary is a delta winding, the ground may be connected to a center tap on one winding (high leg delta) or one phase may be grounded (corner grounded delta). A special purpose polyphase transformer is the zigzag transformer. There are many possible configurations that may involve more or fewer than six windings and various tap connections.
=== Grounding transformer ===
Grounding or earthing transformers let three wire (delta) polyphase system supplies accommodate phase to neutral loads by providing a return path for current to a neutral. Grounding transformers most commonly incorporate a single winding transformer with a zigzag winding configuration but may also be created with a wye-delta isolated winding transformer connection.
=== Phase-shifting transformer ===
This is a specialized type of transformer which can be configured to adjust the phase relationship between input and output. This allows power flow in an electric grid to be controlled, e.g. to steer power flows away from a shorter (but overloaded) link to a longer path with excess capacity.
=== Variable-frequency transformer ===
A variable-frequency transformer is a specialized three-phase power transformer which allows the phase relationship between the input and output windings to be continuously adjusted by rotating one half. They are used to interconnect electrical grids with the same nominal frequency but without synchronous phase coordination.
=== Leakage or stray field transformer ===
A leakage transformer, also called a stray-field transformer, has a significantly higher leakage inductance than other transformers, sometimes increased by a magnetic bypass or shunt in its core between primary and secondary, which is sometimes adjustable with a set screw. This provides a transformer with an inherent current limitation due to the loose coupling between its primary and the secondary windings. The adjustable short-circuit inductance acts as a current limiting parameter.
The output and input currents are kept low enough to preclude thermal overload under any load conditions — even if the secondary is shorted.
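A rough sketch of how the short-circuit (leakage) inductance caps the output current (the voltage, frequency, and inductance values below are assumed for illustration, and winding resistance is ignored, so this is only the reactance-limited estimate):

```python
# Sketch: the leakage (short-circuit) inductance limits the current that can
# flow even into a shorted secondary. Values are illustrative; winding
# resistance is neglected.
import math

def short_circuit_current(v_rms: float, freq_hz: float, l_sc_henry: float) -> float:
    """I_sc = V / (2*pi*f*L_sc), the reactance-limited current."""
    return v_rms / (2.0 * math.pi * freq_hz * l_sc_henry)

# e.g. a 230 V, 50 Hz winding with 0.5 H of short-circuit inductance
print(round(short_circuit_current(230.0, 50.0, 0.5), 2))  # about 1.46 A
```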
==== Uses ====
Leakage transformers are used for arc welding and high voltage discharge lamps (neon lights and cold cathode fluorescent lamps, which are series connected up to 7.5 kV AC). It acts both as a voltage transformer and as a magnetic ballast.
Other applications are short-circuit-proof extra-low voltage transformers for toys or doorbell installations.
=== Resonant transformer ===
A resonant transformer is a transformer in which one or both windings has a capacitor across it and functions as a tuned circuit. Used at radio frequencies, resonant transformers can function as high Q factor bandpass filters. The transformer windings have either air or ferrite cores and the bandwidth can be adjusted by varying the coupling (mutual inductance). One common form is the IF (intermediate frequency) transformer, used in superheterodyne radio receivers. They are also used in radio transmitters.
Resonant transformers are also used in electronic ballasts for gas discharge lamps, and high voltage power supplies. They are also used in some types of switching power supplies. Here the short-circuit inductance value is an important parameter that determines the resonance frequency of the resonant transformer. Often only the secondary winding has a resonant capacitor (or stray capacitance) and acts as a series resonant tank circuit. When the short-circuit inductance of the secondary side of the transformer is Lsc and the resonant capacitor (or stray capacitance) of the secondary side is Cr, the resonance frequency ωs is
{\displaystyle \omega _{s}={\frac {1}{\sqrt {L_{sc}C_{r}}}}={\frac {1}{\sqrt {(1-k^{2})L_{s}C_{r}}}}}
The transformer is driven by a pulse or square wave for efficiency, generated by an electronic oscillator circuit. Each pulse serves to drive resonant sinusoidal oscillations in the tuned winding, and due to resonance a high voltage can be developed across the secondary.
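As a sketch of how the formula above is applied (the inductance and capacitance values are assumed, not from any particular design):

```python
# Sketch: resonance frequency of the secondary-side tank circuit, using the
# formula above. The component values are assumed examples.
import math

def resonance_frequency_hz(l_sc_henry: float, c_r_farad: float) -> float:
    """f_s = omega_s / (2*pi) = 1 / (2*pi*sqrt(L_sc * C_r))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_sc_henry * c_r_farad))

# e.g. 50 uH short-circuit inductance and 10 nF resonant capacitance
print(round(resonance_frequency_hz(50e-6, 10e-9)))  # about 225079 Hz (~225 kHz)
```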
Applications:
Intermediate frequency (IF) transformer in superheterodyne radio receiver
Tank transformers in radio transmitters
Tesla coil
Power inverter
Oudin coil (or Oudin resonator; named after its inventor Paul Oudin)
D'Arsonval apparatus
Ignition coil or induction coil used in the ignition system of a petrol engine
Electrical breakdown and insulation testing of high voltage equipment and cables. In the latter case, the transformer's secondary is resonated with the cable's capacitance.
==== Constant voltage transformer ====
By arranging particular magnetic properties of a transformer core, and installing a ferro-resonant tank circuit (a capacitor and an additional winding), a transformer can be arranged to automatically keep the secondary winding voltage relatively constant for varying primary supply without additional circuitry or manual adjustment. Ferro-resonant transformers run hotter than standard power transformers, because regulating action depends on core saturation, which reduces efficiency. The output waveform is heavily distorted unless careful measures are taken to prevent this. Saturating transformers provide a simple rugged method to stabilize an AC power supply.
=== Ferrite core ===
Ferrite core power transformers are widely used in switched-mode power supplies (SMPSs). The ferrite core enables high-frequency operation, and hence a much smaller size-to-power ratio than laminated-iron transformers.
Ferrite transformers are not used as power transformers at mains frequency since laminated iron cores cost less than an equivalent ferrite core.
==== Planar transformer ====
Manufacturers either use flat copper sheets or etch spiral patterns on a printed circuit board to form the "windings" of a planar transformer, replacing the turns of wire used to make other types. Some planar transformers are commercially sold as discrete components, other planar transformers are etched directly into the main printed circuit board and only need a ferrite core to be attached over the PCB. A planar transformer can be thinner than other transformers, which is useful for low-profile applications or when several printed circuit boards are stacked. Almost all planar transformers use a ferrite planar core.
=== Liquid-cooled transformer ===
Large transformers used in power distribution or electrical substations have their core and coils immersed in oil, which cools and insulates. Oil circulates through ducts in the coil and around the coil and core assembly, moved by convection. The oil is cooled by the outside of the tank in small ratings, and by an air-cooled radiator in larger ratings. Where a higher rating is required, or where the transformer is in a building or underground, oil pumps circulate the oil, fans may force air over the radiators, or an oil-to-water heat exchanger may also be used.
Transformer oil is flammable, so oil-filled transformers inside a building are installed in vaults to prevent spread of fire and smoke from a burning transformer. Some transformers were built to use fire-resistant polychlorinated biphenyls (PCBs), but because these compounds persist in the environment and have adverse effects on organisms, their use has been discontinued in most areas; for example, after 1979 in South Africa. Substitute fire-resistant liquids such as silicone oils are now used instead.
=== Cast resin transformer ===
Cast-resin power transformers encase the windings in epoxy resin. These transformers simplify installation since they are dry, without cooling oil, and so require no fire-proof vault for indoor installations. The epoxy protects the windings from dust and corrosive atmospheres. However, because the molds for casting the coils are only available in fixed sizes, the design of the transformers is less flexible, which may make them more costly if customized features (voltage, turns ratio, taps) are required.
=== Isolating transformer ===
An isolation transformer links two circuits magnetically, but provides no metallic conductive path between the circuits. An example application would be in the power supply for medical equipment, when it is necessary to prevent any leakage from the AC power system into devices connected to a patient. Special purpose isolation transformers may include shielding to prevent coupling of electromagnetic noise between circuits, or may have reinforced insulation to withstand thousands of volts of potential difference between primary and secondary circuits.
=== Solid-state transformer ===
A solid-state transformer is actually a power converter that performs the same function as a conventional transformer, sometimes with added functionality. Most contain a smaller high-frequency transformer. It can consist of an AC-to-AC converter, or a rectifier powering an inverter.
== Instrument transformer ==
Instrument transformers are typically used to operate instruments from high voltage lines or high current circuits, safely isolating measurement and control circuitry from the high voltages or currents. The primary winding of the transformer is connected to the high voltage or high current circuit, and the meter or relay is connected to the secondary circuit. Instrument transformers may also be used as an isolation transformer so that secondary quantities may be used without affecting the primary circuitry.
Terminal identifications (either alphanumeric such as H1, X1, Y1, etc. or a colored spot or dot impressed in the case) mark one end of each winding, indicating the same instantaneous polarity and phase between windings. This applies to both types of instrument transformers. Correct identification of terminals and wiring is essential for proper operation of metering and protective relay instrumentation.
=== Current transformer ===
A current transformer (CT) is a series connected measurement device designed to provide a current in its secondary coil proportional to the current flowing in its primary. Current transformers are commonly used in metering and protective relays in the electrical power industry.
Current transformers are often constructed by passing a single primary turn (either an insulated cable or an uninsulated bus bar) through a well-insulated toroidal core wrapped with many turns of wire. The CT is typically described by its current ratio from primary to secondary. For example, a 1000:1 CT provides an output current of 1 ampere when 1000 amperes flow through the primary winding. Standard secondary current ratings are 5 amperes or 1 ampere, compatible with standard measuring instruments. The secondary winding can be single ratio or have several tap points to provide a range of ratios. Care must be taken to make sure the secondary winding is not disconnected from its low-impedance load while current flows in the primary, as this may produce a dangerously high voltage across the open secondary and may permanently affect the accuracy of the transformer.
Specially constructed wideband CTs are also used, usually with an oscilloscope, to measure high frequency waveforms or pulsed currents within pulsed power systems. One type provides a voltage output that is proportional to the measured current. Another, called a Rogowski coil, requires an external integrator in order to provide a proportional output.
A current clamp uses a current transformer with a split core that can be easily wrapped around a conductor in a circuit. This is a common method used in portable current measuring instruments but permanent installations use more economical types of current transformer.
=== Voltage transformer or potential transformer ===
Voltage transformers (VT), also called potential transformers (PT), are a parallel connected type of instrument transformer, used for metering and protection in high-voltage circuits or phasor phase shift isolation. They are designed to present negligible load to the supply being measured and to have an accurate voltage ratio to enable accurate metering. A potential transformer may have several secondary windings on the same core as a primary winding, for use in different metering or protection circuits. The primary may be connected phase to ground or phase to phase. The secondary is usually grounded on one terminal.
There are three primary types of voltage transformers (VT): electromagnetic, capacitor, and optical. The electromagnetic voltage transformer is a wire-wound transformer. The capacitor voltage transformer uses a capacitance potential divider and is used at higher voltages because of its lower cost compared with an electromagnetic VT. Potential transformers make the measurement of high voltages practical. An optical voltage transformer exploits the electrical properties of optical materials; it is not strictly a transformer, but a sensor similar to a Hall effect sensor.
=== Combined instrument transformer ===
A combined instrument transformer encloses a current transformer and a voltage transformer in the same transformer. There are two main combined current and voltage transformer designs: oil-paper insulated and SF6 insulated. One advantage of applying this solution is reduced substation footprint, due to reduced number of transformers in a bay, supporting structures and connections as well as lower costs for civil works, transportation and installation.
== Pulse transformer ==
A pulse transformer is a transformer that is optimised for transmitting rectangular electrical pulses (that is, pulses with fast rise and fall times and a relatively constant amplitude). Small versions called signal types are used in digital logic and telecommunications circuits such as in Ethernet, often for matching logic drivers to transmission lines. These are also called Ethernet transformer modules.
Medium-sized power versions are used in power-control circuits such as camera flash controllers. Larger power versions are used in the electrical power distribution industry to interface low-voltage control circuitry to the high-voltage gates of power semiconductors. Special high voltage pulse transformers are also used to generate high power pulses for radar, particle accelerators, or other high energy pulsed power applications.
To minimize distortion of the pulse shape, a pulse transformer needs to have low values of leakage inductance and distributed capacitance, and a high open-circuit inductance. In power-type pulse transformers, a low coupling capacitance (between the primary and secondary) is important to protect the circuitry on the primary side from high-powered transients created by the load. For the same reason, high insulation resistance and high breakdown voltage are required. A good transient response is necessary to maintain the rectangular pulse shape at the secondary, because a pulse with slow edges would create switching losses in the power semiconductors.
The product of the peak pulse voltage and the duration of the pulse (or more accurately, the voltage-time integral) is often used to characterise pulse transformers. Generally speaking, the larger this product, the larger and more expensive the transformer.
Pulse transformers by definition have a duty cycle of less than 1⁄2; whatever energy is stored in the coil during the pulse must be "dumped" out before the pulse is fired again.
== RF transformer ==
There are several types of transformer used in radio frequency (RF) work, distinguished by how their windings are connected, and by the type of cores (if any) the coil turns are wound onto.
Laminated steel used for power transformer cores is very inefficient at RF, wasting a lot of RF power as heat, so transformers for use at radio frequencies tend to use magnetic ceramics for winding cores, such as powdered iron (for mediumwave and lower shortwave frequencies) or ferrite (for upper shortwave).
The core material a coil is wrapped around can increase its inductance dramatically – hundreds to thousands of times more than “air” – thereby raising the transformer's Q. The cores of such transformers tend to help performance the most at the lower end of the frequency band the transformer was designed for.
Old RF transformers sometimes included an extra, third coil (called a tickler winding) to inject feedback into an earlier (detector) stage in antique regenerative radio receivers.
=== Air-core transformer ===
So-called “air-core” transformers actually have no core at all – they are wound onto non-magnetic forms or frames, or merely held in shape by the stiffness of the coiled wire. These are used for very high frequency and upper shortwave work.
The lack of a magnetically reactive core means very low inductance per turn, requiring many turns of wire on the transformer coil. Current in the primary winding induces a voltage in the secondary that is proportional to their mutual inductance. At VHF, such transformers may be nothing more than a few turns of wire soldered onto a printed circuit board.
=== Ferrite-core transformer ===
Ferrite core transformers are widely used in RF transformers, especially for current balancing (see below) and impedance matching for TV and radio antennas. Because of the enormous improvement in inductance that ferrite produces, many ferrite cored transformers work well with only one or two turns.
Ferrite is an intensely magnetically reactive ceramic material made from iron oxide (rust) mixed with small fractions of other metals or their oxides, such as magnesium, zinc, and nickel. Different mixtures respond best at different frequencies.
Because they are ceramics, ferrites are (almost) non-conductive, so they respond only to the magnetic fields created by nearby currents, and not to the electric fields created by the accompanying voltages.
=== Choke transformer ===
For radio frequency use, "choke" transformers are sometimes made from windings of transmission line wired in parallel. Sometimes the windings are coaxial cable, sometimes bifilar (paired parallel wire); either is wound around a ferrite, powdered iron, or "air" core. This style of transformer gives an extremely wide bandwidth but only a limited number of impedance ratios (such as 1:1, 1:4, or 1:9) can be achieved with this technique.
Choke transformers are sometimes called transmission-line transformers (although see below for a different transformer type with the same name), or Guanella transformers, or current baluns, or line isolators. Although called a "transmission line" transformer, it is distinct from the transformers made from segments of transmission line.
The name "transmission-line" is used because actual coaxial line is sometimes used, and when paired wires are used, the builder is expected to take special care with the wire spacing, to ensure that the transmission line impedance of the coax or paired wires lies near the geometric mean of the input and output impedances.
The name "choke" is used because the equal and opposite (anti-parallel, balanced) currents in the coax or paired wires cancel each others' magnetic fields, allowing them to pass through unhindered, but magnetic field of the unbalanced flow inhibits the unbalanced current, "choking" it off. Similar reasoning applies to the name "line isolator".
It is called a "current balun" or "current transformer" because the transformed flow produces balanced currents, rather than balanced voltages typical of other transformer types.
=== Line section transformer ===
At radio frequencies and microwave frequencies, a quarter-wave impedance transformer can provide impedance matching between circuits over a limited range of frequencies, using only a section of transmission line no more than a quarter wave long. The line may be coaxial cable, waveguide, stripline, or microstrip. For upper VHF and UHF frequencies, where coil self resonance interferes with proper operation, it is usually the only feasible method for transforming line impedances.
Single frequency transformers are made using sections of transmission line, often called a "matching section" or a "matching stub". Like the choke transformer above, it is also called a "transmission line transformer" even though the two are very different in form and operation.
Unless it is terminated in its characteristic impedance, any transmission line will produce standing waves of impedance along its length, repeating exactly every full wavelength, and covering its full range of absolute values over only a quarter wave. One may exploit this behavior to transform currents and voltages by connecting sections of transmission line with mismatched impedances to deliberately create a standing wave on a line, and then cut and reconnect to the line at the position where the desired impedance is reached, never requiring more than a quarter wave of mismatched line.
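A minimal sketch of the quarter-wave case, assuming the standard matching relation in which the section's characteristic impedance is the geometric mean of the two impedances being matched (this relation, the frequency, and the cable velocity factor below are assumptions for illustration, not values from the text):

```python
# Sketch: a quarter-wave matching section. Assumes the standard relation
# Z0 = sqrt(Z1 * Z2); the frequency and cable velocity factor are examples.
import math

def quarter_wave_section(z1_ohms: float, z2_ohms: float,
                         freq_hz: float, velocity_factor: float = 0.66):
    """Return (section characteristic impedance, physical length in metres)."""
    z0 = math.sqrt(z1_ohms * z2_ohms)
    length_m = velocity_factor * 299_792_458 / freq_hz / 4.0
    return z0, length_m

# Matching a 50-ohm line to a 112.5-ohm load at 100 MHz with solid-PE coax
z0, length = quarter_wave_section(50.0, 112.5, 100e6)
print(round(z0, 1), round(length, 3))  # 75.0 ohms, about 0.495 m
```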
This type of transformer is very efficient (very little loss) but severely limited in the frequency span it will operate on: whereas the choke transformer, above, is very broadband, a line section transformer is very narrowband.
=== Balun ===
"Balun" is a generic name for any transformer configured specifically to connect between balanced (non-grounded) and unbalanced (grounded) circuits. They can be made using any transformer type, but the actual balance achieved depends on the type; for example, "choke" baluns produce balanced current and autotransformer-type baluns produce balanced voltages. Baluns can also be made from configurations of transmission line, using bifilar or coaxial cable similar to transmission line transformers in construction and operation.
In addition to interfacing between balanced and unbalanced loads by producing balanced current or balanced voltage (or both), baluns can in addition separately transform (match) impedance between the loads.
== IF transformer ==
Ferrite-core transformers are widely used in intermediate frequency (IF) stages in superheterodyne radio receivers. They are mostly tuned transformers, containing a threaded ferrite slug that is screwed in or out to adjust IF tuning. The transformers are usually canned (shielded) for stability and to reduce interference.
== Audio transformer ==
Audio transformers are those specifically designed for use in audio circuits to carry audio signal. They can be used to block radio frequency interference or the DC component of an audio signal, to split or combine audio signals, or to provide impedance matching between high impedance and low impedance circuits, such as between a high impedance tube (valve) amplifier output and a low impedance loudspeaker, or between a high impedance instrument output and the low impedance input of a mixing console. Audio transformers that operate with loudspeaker voltages and current are larger than those that operate at microphone or line level, which carry much less power. Bridge transformers connect 2-wire and 4-wire communication circuits.
Being magnetic devices, audio transformers are susceptible to external magnetic fields such as those generated by AC current-carrying conductors. "Hum" is a term commonly used to describe unwanted signals originating from the "mains" power supply (typically 50 or 60 Hz). Audio transformers used for low-level signals, such as those from microphones, often include magnetic shielding to protect against extraneous magnetically coupled signals.
Audio transformers were originally designed to connect different telephone systems to one another while keeping their respective power supplies isolated, and are still commonly used to interconnect professional audio systems or system components, to eliminate buzz and hum. Such transformers typically have a 1:1 ratio between the primary and the secondary. These can also be used for splitting signals, balancing unbalanced signals, or feeding a balanced signal to unbalanced equipment. Transformers are also used in DI boxes to convert high-impedance instrument signals (e.g., bass guitar) to low impedance signals to enable them to connect to a microphone input on the mixing console.
A particularly critical component is the output transformer of a valve amplifier. Valve circuits for quality reproduction have long been produced with no other (inter-stage) audio transformers, but an output transformer is needed to couple the relatively high impedance (up to a few hundred ohms depending upon configuration) of the output valve(s) to the low impedance of a loudspeaker. (The valves can deliver a low current at a high voltage; the speakers require high current at low voltage.) Most solid-state power amplifiers need no output transformer at all.
Audio transformers affect the sound quality because they are non-linear. They add harmonic distortion to the original signal, especially odd-order harmonics, with an emphasis on third-order harmonics. When the incoming signal amplitude is very low there is not enough level to energize the magnetic core (see coercivity and magnetic hysteresis). When the incoming signal amplitude is very high the transformer saturates and adds harmonics from soft clipping. Another non-linearity comes from limited frequency response. For good low-frequency response a relatively large magnetic core is required; high power handling increases the required core size. Good high-frequency response requires carefully designed and implemented windings without excessive leakage inductance or stray capacitance. All this makes for an expensive component.
Early transistor audio power amplifiers often had output transformers, but they were eliminated as advances in semiconductors allowed the design of amplifiers with sufficiently low output impedance to drive a loudspeaker directly.
=== Loudspeaker transformer ===
In the same way that transformers create high voltage power transmission circuits that minimize transmission losses, loudspeaker transformers can power many individual loudspeakers from a single audio circuit operated at higher than normal loudspeaker voltages. This application is common in public address applications. Such circuits are commonly referred to as constant-voltage speaker systems. Such systems are also known by the nominal voltage of the loudspeaker line, such as 25-, 70- and 100-volt speaker systems (the voltage corresponding to the power rating of a speaker or amplifier). A transformer steps up the output of the system's amplifier to the distribution voltage. At the distant loudspeaker locations, a step-down transformer matches the speaker to the rated voltage of the line, so the speaker produces rated nominal output when the line is at nominal voltage. Loudspeaker transformers commonly have multiple primary taps to adjust the volume at each speaker in steps.
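A sketch of the arithmetic behind the taps (the 70.7 V line voltage and the tap powers are common conventions but are used here only as assumed example values): a tap rated for P watts presents an impedance of roughly V²/P to the line.

```python
# Sketch: impedance presented to a constant-voltage speaker line by a
# transformer tap of a given power rating. Line voltage and tap powers
# are assumed example values.

def tap_impedance_ohms(line_voltage: float, tap_power_watts: float) -> float:
    """Z = V^2 / P for a tap drawing P watts at the nominal line voltage."""
    return line_voltage ** 2 / tap_power_watts

for watts in (2.5, 5, 10):
    print(watts, "W tap ->", round(tap_impedance_ohms(70.7, watts)), "ohms")
# 2.5 W tap -> 1999 ohms, 5 W tap -> 1000 ohms, 10 W tap -> 500 ohms
```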
=== Output transformer ===
Valve (tube) amplifiers almost always use an output transformer to match the high load impedance requirement of the valves (several kilohms) to a low-impedance speaker.
=== Small-signal transformer ===
Moving coil phonograph cartridges produce a very small voltage. A transformer may be used to convert the voltage to the range of the more common moving-magnet cartridges.
Microphones may also be matched to their load with a small transformer that is shielded with mu-metal to minimise noise pickup.
=== Interstage and coupling transformer ===
In a push–pull amplifier, an inverted signal is required and can be obtained from a transformer with a center-tapped winding, used to drive two active devices in opposite phase. These phase splitting transformers are not much used today.
== Other types ==
=== Transactor ===
A transactor is a combination of a transformer and a reactor. A transactor has an iron core with an air-gap, which limits the coupling between windings.
=== Hedgehog ===
Hedgehog transformers are occasionally encountered in homemade 1920s radios. They are homemade audio interstage coupling transformers.
Enameled copper wire is wound round the central half of the length of a bundle of insulated iron wire (e.g., florists' wire), to make the windings. The ends of the iron wires are then bent around the electrical winding to complete the magnetic circuit, and the whole is wrapped with tape or string to hold it together.
=== Variometer and variocoupler ===
A variometer is a type of continuously variable air-core RF inductor with two windings. One common form consisted of a coil wound on a short hollow cylindrical form, with a second smaller coil inside, mounted on a shaft so its magnetic axis can be rotated with respect to the outer coil. The two coils are connected in series. When the two coils are collinear, with their magnetic fields pointed in the same direction, the two magnetic fields add, and the inductance is maximum. If the inner coil is rotated so its axis is at an angle to the outer coil, the magnetic fields do not add and the inductance is less. If the inner coil is rotated so it is collinear with the outer coil but their magnetic fields point in opposite directions, the fields cancel each other out and the inductance is very small or zero. The advantage of the variometer is that inductance can be adjusted continuously, over a wide range. Variometers were widely used in 1920s radio receivers. One of their main uses today is as antenna matching coils to match longwave radio transmitters to their antennas.
The vario-coupler was a device with similar construction, but the two coils were not connected but attached to separate circuits. So it functioned as an air-core RF transformer with variable coupling. The inner coil could be rotated from 0° to 90° angle with the outer, reducing the mutual inductance from maximum to near zero.
The pancake coil variometer was another common construction used in both 1920s receivers and transmitters. It consists of two flat spiral coils suspended vertically facing each other, hinged at one side so one could swing away from the other to an angle of 90° to reduce the coupling. The flat spiral design served to reduce parasitic capacitance and losses at radio frequencies.
Pancake or "honeycomb" coil vario-couplers were used in the 1920s in the common Armstrong or "tickler" regenerative radio receivers. One coil was connected to the detector tube's grid circuit. The other coil, the "tickler" coil was connected to the tube's plate (output) circuit. It fed back some of the signal from the plate circuit into the input again, and this positive feedback increased the tube's gain and selectivity.
=== Rotary transformer ===
A rotary (rotatory) transformer is a specialized transformer that couples electrical signals between two parts that rotate in relation to each other—as an alternative to slip rings, which are prone to wear and contact noise. They are commonly used in helical scan magnetic tape applications.
=== Variable differential transformer ===
A variable differential transformer is a rugged non-contact position sensor. It has two oppositely-phased primaries which nominally produce zero output in the secondary, but any movement of the core changes the coupling to produce a signal.
=== Resolver and synchro ===
The two-phase resolver and related three-phase synchro are rotary position sensors which work over a full 360°. The primary is rotated within two or three secondaries at different angles, and the amplitudes of the secondary signals can be decoded into an angle. Unlike variable differential transformers, the coils, and not just the core, move relative to each other, so slip rings are required to connect the primary.
Resolvers produce in-phase and quadrature components which are useful for computation. Synchros produce three-phase signals which can be connected to other synchros to rotate them in a generator/motor configuration.
=== Piezoelectric transformer ===
Two piezoelectric transducers can be mechanically coupled or integrated in one piece of material, creating a piezoelectric transformer.
=== Flyback ===
A flyback transformer is a high-voltage, high-frequency transformer used in plasma balls and with cathode-ray tubes (CRTs). It provides the high (often several kV) anode DC voltage required for operation of CRTs. Variations in anode voltage supplied by the flyback can result in distortions in the image displayed by the CRT. CRT flybacks may contain multiple secondary windings to provide several other, lower voltages. Its output is often pulsed because it is often used with a voltage multiplier, which may be integrated with the flyback.
== See also ==
Buck–boost transformer
Center tap
Magnetic amplifier
Motor-generator
Saturable reactor
Tap changer
Three-phase electric power
Three-phase
Transformer
== References == | Wikipedia/Pulse_transformer |
A distribution transformer or service transformer provides a final voltage transformation in the electric power distribution system, stepping down the voltage used in the distribution lines to the level used by the customer.
The invention of a practical, efficient transformer made AC power distribution feasible; a system using distribution transformers was demonstrated as early as 1882.
If mounted on a utility pole, they are called pole-mount transformers. When placed either at ground level or underground, distribution transformers are mounted on concrete pads and locked in steel cases, and are thus known as pad-mount transformers.
Distribution transformers typically have ratings less than 200 kVA, although some national standards allow units up to 5000 kVA to be described as distribution transformers. Since distribution transformers are energized 24 hours a day (even when they don't carry any load), reducing iron losses is vital in their design. They usually don't operate at full load, so they are designed to have maximum efficiency at lower loads. To have better efficiency, voltage regulation in these transformers should be kept to a minimum. Hence, they are designed to have small leakage reactance.
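A sketch of why the maximum-efficiency point sits below full load (this uses the standard model in which efficiency peaks where the load-dependent copper loss equals the constant iron loss; the rating, losses, and power factor below are assumed example values, not from the text):

```python
# Sketch: transformer efficiency versus load fraction, using the standard
# model of a constant iron (core) loss plus a copper loss that scales with
# the square of the load. All numbers are assumed examples.
import math

def efficiency(load_fraction: float, rated_kva: float, pf: float,
               iron_loss_kw: float, full_load_copper_loss_kw: float) -> float:
    out = load_fraction * rated_kva * pf
    losses = iron_loss_kw + full_load_copper_loss_kw * load_fraction ** 2
    return out / (out + losses)

# 100 kVA unit, 0.2 kW iron loss, 1.0 kW full-load copper loss, unity pf
best = math.sqrt(0.2 / 1.0)                            # load fraction of peak efficiency
print(round(best, 2))                                  # 0.45
print(round(efficiency(best, 100, 1.0, 0.2, 1.0), 4))  # about 0.9911
print(round(efficiency(1.0, 100, 1.0, 0.2, 1.0), 4))   # about 0.9881
```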
== Types ==
Distribution transformers are classified into different categories based on factors such as:
Mounting location – pole, pad, underground vault
Type of insulation – liquid-immersed or dry-type
Number of phases – single-phase or three-phase
Voltage class
Basic impulse insulation level (BIL).
== Use ==
Distribution transformers are normally located at a service drop, where wires run from a utility pole or underground power lines to a customer's premises. They are often used for the power supply of facilities outside settlements, such as isolated houses, farmyards, or pumping stations at voltages below 30 kV. Another application is the power supply of the overhead wire of railways electrified with AC. In this case, single-phase distribution transformers are used.
The number of customers fed by a single distribution transformer varies depending on the number of customers in an area. Several homes may be fed from a single transformer in urban areas; depending on the mains voltage, rural distribution may require one transformer per customer. A large commercial or industrial complex will have multiple distribution transformers. In urban areas and neighborhoods where primary distribution lines run underground, padmount transformers in locked metal enclosures are mounted on concrete pads. Many large buildings have electric service provided at primary distribution voltage. These buildings have customer-owned transformers in the basement for step-down purposes.
Distribution transformers are also found in wind farm power collection networks, where they step up power from each wind turbine to connect to a substation that may be several miles (kilometers) distant.
== Connections ==
Both pole-mounted and pad-mounted transformers convert the overhead or underground distribution lines' high 'primary' voltage to the lower 'secondary' or 'utilization' voltage inside the building. The primary distribution wires use the three-phase system. Main distribution lines always have three 'hot' wires plus an optional neutral. In the North American system, where single-phase transformers connect to only one phase wire, smaller 'lateral' lines branching off on side roads may include only one or two 'hot' phase wires. (When only one phase wire exists, a neutral will always be provided as a return path.) Primaries provide power at the standard distribution voltages used in the area; these range from as low as 2.3 kV to about 35 kV depending on local distribution practice and standards; 11 kV (50 Hz systems) and 13.8 kV (60 Hz systems) are common, but many other voltages are standard. For example, in the United States the most common voltage is 12.47 kV between phases, giving a line-to-ground (phase-to-neutral) voltage of 7.2 kV, exactly 30 times the 240 V on the split-phase secondary side.
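A sketch of that arithmetic, using only the voltages quoted above:

```python
# Sketch: the 12.47 kV line-to-line voltage gives a 7.2 kV line-to-ground
# voltage, which is 30 times the 240 V split-phase secondary voltage.
import math

line_to_line = 12_470.0
line_to_ground = line_to_line / math.sqrt(3)
print(round(line_to_ground))             # 7200, i.e. about 7.2 kV
print(round(line_to_ground / 240.0, 1))  # about 30.0
```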
=== Primary ===
The high-voltage primary windings are brought out to bushings on the top of the case.
Single-phase transformers, generally used in the North American system, are attached to the overhead distribution wires with two different types of connections:
Wye – A 'wye' or 'phase to neutral' transformer is used on a wye distribution circuit. A single-phase wye transformer usually has only one bushing on top, connected to one of the three primary phases. The other end of the primary winding is connected to the transformer case, which is connected to the neutral wire of the wye system and is also grounded. A wye distribution system is not preferred because the transformers present unbalanced loads on the line, which drive currents through the grounded neutral wire. However, with a delta distribution system, the unbalanced loads can cause variations in the voltages on the 3-phase wires.
Delta – A 'delta' or 'phase to phase' transformer is used on a delta distribution circuit. A single-phase delta transformer has two bushings connected to two of the three primary wires, so the primary winding sees the phase-to-phase voltage; this avoids returning primary current through a neutral that must be solidly grounded to keep its voltage near earth's potential. Since the neutral is also provided to customers, this is a significant safety advantage in a dry area like California, where soil conductivity is low. The main disadvantage is higher cost, e.g., needing at least two insulated 'hot' phase wires even on a branch circuit. Another minor disadvantage is that if only one of the primary phases is disconnected upstream, it will remain live as the transformers try to return current. It could be a hazard to line workers.
Transformers providing three-phase secondary power, used for residential service in the European system, have three primary windings attached to all three primary phase wires. The windings are almost always connected in a 'wye' configuration, with the ends connected and grounded.
The transformer is always connected to the primary distribution lines through protective fuses and disconnect switches. For pole-mounted transformers, this is usually a 'fused cutout.' An electrical fault melts the fuse, and the device drops open to give a visual indication of trouble. Lineworkers can also manually open it while the line is energized using insulated hot sticks. In some cases, completely self-protected transformers are used, which have a circuit breaker built in, so a fused cutout isn't needed.
=== Secondary ===
The low-voltage secondary windings are attached to three or four terminals on the transformer's side.
In North American residences and small businesses, the secondary is often the split-phase 120/240-volt system. The 240 V secondary winding is center-tapped, and the center neutral wire is grounded, making the two end conductors "hot" concerning the center tap. These three wires run down the service drop to the building's electric meter and service panel. Connecting a load between the hot wire and the neutral gives 120 volts, which is used for lighting circuits. Connecting both hot wires gives 240 volts for heavy loads such as air conditioners, ovens, dryers, and electric vehicle charging stations.
In Europe and other countries using its system, the secondary is often the three-phase 400Y/230 system. There are three 230 V secondary windings, each receiving power from a primary winding attached to one of the primary phases. One end of each secondary winding is connected to a 'neutral' wire, which is grounded. The other end of the three secondary windings and the neutral are brought down the service drop to the service panel. 230 V loads are connected between any of the three-phase wires and the neutral. Because the phases are 120 degrees from each other, the voltage between any two phases is sqrt(3) * 230V = 400V, compared to the 2 * 120V = 240V in the North American split phase system. While three-phase power is almost unheard of in individual North American residences, it is common in Europe for heavy loads such as kitchen stoves, air conditioners, and electric vehicle chargers.
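A brief sketch comparing the phase-to-phase voltages of the two secondary systems described above (only the nominal voltages from the text are used):

```python
# Sketch: phase-to-phase voltage available from each secondary system.
import math

european_phase_to_neutral = 230.0
north_american_leg_to_neutral = 120.0

print(round(math.sqrt(3) * european_phase_to_neutral))  # about 398 V, nominally 400 V
print(round(2 * north_american_leg_to_neutral))         # 240 V between the two hot legs
```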
== Construction ==
Distribution transformers consist of a magnetic core made from laminations of sheet silicon steel (transformer steel) stacked and either glued together with resin or banded together with steel straps, with the primary and secondary wire windings wrapped around them. This core construction is designed to reduce core losses and the dissipation of magnetic energy as heat in the core, an economically important cause of power loss in utility grids. Two effects cause core losses: hysteresis loss in the steel and eddy currents. Silicon steel has low hysteresis loss, and the laminated construction prevents eddy currents from flowing in the core, which would otherwise dissipate power in the resistance of the steel. The efficiency of typical distribution transformers is between about 98 and 99 percent. Where large numbers of transformers are made to standard designs, a wound C-shaped core is economical to manufacture. A steel strip is wrapped around a former, pressed into shape, and then cut into two C-shaped halves that are re-assembled around the copper windings.
The primary coils are wound from enamel-coated copper or aluminum wire, and the high-current, low-voltage secondaries are wound using a thick ribbon of aluminum or copper. The windings are insulated with resin-impregnated paper. The entire assembly is baked to cure the resin and then submerged in a powder-coated steel tank, which is then filled with transformer oil (or other insulating liquid), which is inert and non-conductive. The transformer oil cools and insulates the windings and protects them from moisture. The tank is temporarily evacuated during manufacture to remove any remaining moisture that would cause arcing and is sealed against the weather with a gasket at the top.
Formerly, distribution transformers for indoor use would be filled with a polychlorinated biphenyl (PCB) liquid. Because these chemicals persist in the environment and adversely affect animals, they have been banned. Other fire-resistant liquids such as silicones are used where a liquid-filled transformer must be used indoors. Certain vegetable oils have been applied as transformer oil; these have the advantage of a high fire point and are completely biodegradable in the environment.
Pole-mounted transformers often include accessories such as surge arresters or protective fuse links. A self-protected transformer consists of an internal fuse and surge arrester; other transformers have these components mounted separately outside the tank. Pole-mounted transformers may have lugs allowing direct mounting to a pole or may be mounted on cross-arms bolted to the pole. Aerial transformers, larger than around 75 kVA, may be mounted on a platform supported by one or more poles. A three-phase service may use three identical transformers, one per phase.
Transformers designed for below-grade installation can be designed for periodic submersion in water.
Distribution transformers may include an off-load tap changer, which slightly adjusts the ratio between primary and secondary voltage to bring the customer's voltage within the desired range on long or heavily loaded lines.
Pad-mounted transformers have secure locked, bolted, and grounded metal enclosures to discourage unauthorized access to live internal parts. The enclosure may also include fuses, isolating switches, load-break bushings, and other accessories as described in technical standards. Pad-mounted transformers for distribution systems typically range from around 100 to 2000 kVA, although some larger units are also used.
== Placement ==
In the United States, distribution transformers are often installed outdoors on wooden poles.
In Europe, it is most common to place them in buildings. If the feeding lines are overhead, these buildings look like small towers; if all lines running to the transformer are underground, small ground-level buildings are used. In rural areas, distribution transformers are sometimes mounted on poles, and the pole is usually made of concrete or iron because of the weight of the transformer.
== See also ==
Bushing (electrical)
Transformer types
Current transformer
Distribution Transformer Monitor
== References ==
== Bibliography ==
Bakshi, V.B.U.A. (2009). Transformers & Induction Machines. Technical Publications. ISBN 9788184313802. Retrieved 2014-01-14.
De Keulenaer, Hans; Chapman, David; Fassbinder, Stefan; McDermott, Mike (2001). The Scope for Energy Saving in the EU through the Use of Energy-Efficient Electricity Distribution Transformers (PDF). 16th International Conference and Exhibition on Electricity Distribution (CIRED 2001). Institution of Engineering and Technology. doi:10.1049/cp:20010853. Retrieved 10 July 2014.
Harlow, James H. (2012). Electric Power Transformer Engineering, Third Edition, Volume 2. CRC Press. ISBN 978-1439856291.
Pansini, Anthony J. (2005). Guide to Electrical Power Distribution Systems. The Fairmont Press, Inc. ISBN 088173506X. | Wikipedia/Distribution_transformer |
Renewable Energy Payments (REPs) are a competitive alternative to Renewable Energy Credits (RECs).
Although the intent of both methods is the same, to stimulate growth in the alternative and renewable energy sector, REPs have proven to offer benefits to local jobs, businesses and economies while making that growth fundable and lendable by financial institutions.
Renewable Energy Payments are the mechanisms or instruments at the heart of specific state, provincial or national renewable energy policies. REPs are incentives for homeowners, farmers, businesses, etc., to become producers of renewable energy, or to increase their production of renewable energy. As such, they increase our overall production and use of renewable energy, and decrease our consumption and burning of fossil fuels.
In broad strokes, Renewable Energy Payments, sometimes known as a feed-in tariff, place obligations on utility companies to buy electricity from renewable energy sources, often small local companies, for a fixed period of time. The underlying premise is that, with fixed payments, once-volatile renewable energy projects become lendable and attractive for financing, thus stimulating growth and innovation.
Proponents of Renewable Energy Payments argue that this policy has proven to stimulate local economies, innovation and small-business growth because, in its truest form, REPs put everyone, whether small businesses, individuals, or farmers, on an equal footing with large commercial titans of industry.
Representative Jay Inslee of Washington says "We can give homeowners, farmers and communities across America investment security that they can take to the bank. We know from experience in Germany, Spain and dozens of other countries around the world that this policy approach spurs unparalleled and affordable renewable-energy development."
The alternative to Renewable Energy Payments is Renewable Energy Credits, which were likened to the Alaskan 'Bridge to Nowhere' in a recent filing by the Florida Alliance for Renewable Energy (FARE).
== References == | Wikipedia/Renewable_Energy_Payments |
The transformer utilization factor (TUF) of a rectifier circuit is defined as the ratio of the DC power available at the load resistor to the AC rating of the secondary coil of a transformer.
{\displaystyle T.U.F={\frac {P_{odc}}{VA\ rating\ of\ transformer}}}
The VA rating of the transformer can be defined as:
{\displaystyle VA=V_{r.m.s}\cdot I_{r.m.s}\quad {\text{(for the secondary coil)}}}
The transformer utilization factor for a half-wave rectifier is approximately 0.287 (often rounded to 0.3).
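That figure can be reproduced numerically. The following sketch (a minimal illustration, not from the source text; it assumes an ideal diode, a purely resistive load and a sinusoidal secondary voltage) averages over one cycle of the rectified waveform:

import numpy as np

Vm, R = 1.0, 1.0                                              # peak secondary voltage and load resistance
t = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)    # one AC cycle
v_sec = Vm * np.sin(t)                                        # transformer secondary voltage
i_load = np.where(v_sec > 0.0, v_sec / R, 0.0)                # the diode conducts only on the positive half-cycle

# DC output power: product of the average load voltage and the average load current.
P_dc = np.mean(np.clip(v_sec, 0.0, None)) * np.mean(i_load)

# Secondary VA rating: full RMS winding voltage times the RMS of the rectified current.
V_rms = np.sqrt(np.mean(v_sec ** 2))
I_rms = np.sqrt(np.mean(i_load ** 2))

print(round(P_dc / (V_rms * I_rms), 3))                       # ~0.287, i.e. 2*sqrt(2)/pi**2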
== References ==
== External links ==
"Transformer Utilization Factor (TUF)". electricalbaba.com. | Wikipedia/Transformer_utilization_factor |
Variable renewable energy (VRE) or intermittent renewable energy sources (IRES) are renewable energy sources that are not dispatchable due to their fluctuating nature, such as wind power and solar power, as opposed to controllable renewable energy sources, such as dammed hydroelectricity or bioenergy, or relatively constant sources, such as geothermal power.
The use of small amounts of intermittent power has little effect on grid operations. Using larger amounts of intermittent power may require upgrades or even a redesign of the grid infrastructure.
Options to absorb large shares of variable energy into the grid include using storage, improved interconnection between different variable sources to smooth out supply, using dispatchable energy sources such as hydroelectricity and having overcapacity, so that sufficient energy is produced even when weather is less favourable. More connections between the energy sector and the building, transport and industrial sectors may also help.
== Background and terminology ==
The penetration of intermittent renewables in most power grids is low: global electricity generation in 2021 was 7% wind and 4% solar. However, in 2021 Denmark, Luxembourg and Uruguay generated over 40% of their electricity from wind and solar. Characteristics of variable renewables include their unpredictability, variability, and low operating costs. These, along with renewables typically being asynchronous generators, provide a challenge to grid operators, who must make sure supply and demand are matched. Solutions include energy storage, demand response, availability of overcapacity and sector coupling. Smaller isolated grids may be less tolerant to high levels of penetration.
Matching power demand to supply is not a problem specific to intermittent power sources. Existing power grids already contain elements of uncertainty including sudden and large changes in demand and unforeseen power plant failures. Though power grids are already designed to have some capacity in excess of projected peak demand to deal with these problems, significant upgrades may be required to accommodate large amounts of intermittent power.
Several key terms are useful for understanding the issue of intermittent power sources. These terms are not standardized, and variations may be used. Most of these terms also apply to traditional power plants.
Intermittency or variability is the extent to which a power source fluctuates. This has two aspects: a predictable variability, such as the day-night cycle, and an unpredictable part (imperfect local weather forecasting). The term intermittent can be used to refer to the unpredictable part, with variable then referring to the predictable part.
Dispatchability is the ability of a given power source to add output on demand. The concept is distinct from intermittency; dispatchability is one of several ways system operators match supply (generator's output) to system demand (technical loads).
Penetration is the amount of electricity generated from a particular source as a percentage of annual consumption.
Nominal power or nameplate capacity is the theoretical output registered with authorities for classifying the unit. For intermittent power sources, such as wind and solar, nameplate power is the source's output under ideal conditions, such as maximum usable wind or high sun on a clear summer day.
Capacity factor, average capacity factor, or load factor is the ratio of actual electrical generation over a given period of time, usually a year, to the maximum possible generation in that time period. In other words, it is the ratio between how much electricity a plant actually produced and how much it would have produced had it run at its nameplate capacity for the entire period (see the short sketch after this list of terms).
Firm capacity or firm power is "guaranteed by the supplier to be available at all times during a period covered by a commitment".
Capacity credit: the amount of conventional (dispatchable) generation capacity that can potentially be removed from the system while maintaining reliability, usually expressed as a percentage of the nominal power.
Foreseeability or predictability is how accurately the operator can anticipate the generation: for example tidal power varies with the tides but is completely foreseeable because the orbit of the moon can be predicted exactly, and improved weather forecasts can make wind power more predictable.
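As a minimal numerical sketch of the capacity-factor definition above (all figures are hypothetical):

nameplate_mw = 100.0        # hypothetical wind farm nameplate capacity
energy_mwh = 262_800.0      # hypothetical energy actually generated in one year
hours_per_year = 8_760

capacity_factor = energy_mwh / (nameplate_mw * hours_per_year)
print(f"capacity factor = {capacity_factor:.0%}")   # 30%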
== Sources ==
Dammed hydroelectricity, biomass and geothermal are dispatchable as each has a store of potential energy; wind and solar without storage can be decreased (curtailed) but are not dispatchable.
=== Wind power ===
Grid operators use day ahead forecasting to determine which of the available power sources to use the next day, and weather forecasting is used to predict the likely wind power and solar power output available. Although wind power forecasts have been used operationally for decades, as of 2019 the IEA is organizing international collaboration to further improve their accuracy.
Wind-generated power is a variable resource, and the amount of electricity produced at any given point in time by a given plant will depend on wind speeds, air density, and turbine characteristics, among other factors. If wind speed is too low then the wind turbines will not be able to make electricity, and if it is too high the turbines will have to be shut down to avoid damage. While the output from a single turbine can vary greatly and rapidly as local wind speeds vary, as more turbines are connected over larger and larger areas the average power output becomes less variable.
Intermittence: Regions smaller than the synoptic scale (less than about 1,000 km across, roughly the size of an average country) mostly have the same weather and thus roughly the same wind power, unless local conditions favor special winds. Some studies show that wind farms spread over a geographically diverse area will, as a whole, rarely stop producing power altogether. This is rarely the case for smaller areas with uniform geography, such as Ireland, Scotland and Denmark, which have several days per year with little wind power.
Capacity factor: Wind power typically has an annual capacity factor of 25–50%, with offshore wind outperforming onshore wind.
Dispatchability: Because wind power is not by itself dispatchable, wind farms are sometimes built with storage.
Capacity credit: At low levels of penetration, the capacity credit of wind is about the same as the capacity factor. As the concentration of wind power on the grid rises, the capacity credit percentage drops.
Variability: Site dependent. Sea breezes are much more constant than land breezes. Seasonal variability may reduce output by 50%.
Reliability: A wind farm has high technical reliability when the wind blows. That is, the output at any given time will only vary gradually due to falling wind speeds or storms, the latter necessitating shut downs. A typical wind farm is unlikely to have to shut down in less than half an hour at the extreme, whereas an equivalent-sized power station can fail totally instantaneously and without warning. The total shutdown of wind turbines is predictable via weather forecasting. The average availability of a wind turbine is 98%, and when a turbine fails or is shut down for maintenance it only affects a small percentage of the output of a large wind farm.
Predictability: Although wind is variable, it is also predictable in the short term. There is an 80% chance that wind output will change less than 10% in an hour and a 40% chance that it will change 10% or more in 5 hours.
Because wind power is generated by large numbers of small generators, individual failures do not have large impacts on power grids. This feature of wind has been referred to as resiliency.
=== Solar power ===
Intermittency inherently affects solar energy, as the production of renewable electricity from solar sources depends on the amount of sunlight at a given place and time. Solar output varies throughout the day and through the seasons, and is affected by dust, fog, cloud cover, frost or snow. Many of the seasonal factors are fairly predictable, and some solar thermal systems make use of heat storage to produce grid power for a full day.
Variability: In the absence of an energy storage system, solar does not produce power at night, little in bad weather and varies between seasons. In many countries, solar produces most energy in seasons with low wind availability and vice versa.
Capacity factor: Standard photovoltaic solar has an annual average capacity factor of 10–20%, but panels that move to track the sun reach a capacity factor of up to 30%. Thermal solar with parabolic troughs and storage reaches about 56%, and a thermal solar power tower with storage about 73%.
The impact of intermittency of solar-generated electricity will depend on the correlation of generation with demand. For example, solar thermal power plants such as Nevada Solar One are somewhat matched to summer peak loads in areas with significant cooling demands, such as the south-western United States. Thermal energy storage systems like the small Spanish Gemasolar Thermosolar Plant can improve the match between solar supply and local consumption. The improved capacity factor using thermal storage represents a decrease in maximum capacity, and extends the total time the system generates power.
=== Run-of-the-river hydroelectricity ===
In many countries new large dams are no longer being built, because of the environmental impact of reservoirs. Run of the river projects have continued to be built. The absence of a reservoir results in both seasonal and annual variations in electricity generated.
=== Tidal power ===
Tidal power is the most predictable of all the variable renewable energy sources. The tides reverse twice a day, but they are never intermittent; on the contrary, they are completely reliable.
=== Wave power ===
Waves are primarily created by wind, so the power available from waves tends to follow that available from wind, but due to the mass of the water is less variable than wind power. Wind power is proportional to the cube of the wind speed, while wave power is proportional to the square of the wave height.
== Solutions for their integration ==
The displaced dispatchable generation could be coal, natural gas, biomass, nuclear, geothermal or storage hydro. Rather than starting and stopping nuclear or geothermal, it is cheaper to use them as constant base load power. Any power generated in excess of demand can displace heating fuels, be converted to storage or sold to another grid. Biofuels and conventional hydro can be saved for later when intermittents are not generating power. Some forecast that “near-firm” renewable power (solar and/or wind combined with batteries) will be cheaper than existing nuclear by the late 2020s; they therefore argue that base load power will not be needed.
Alternatives to burning coal and natural gas which produce fewer greenhouse gases may eventually make fossil fuels a stranded asset that is left in the ground. Highly integrated grids favor flexibility and performance over cost, resulting in more plants that operate for fewer hours and lower capacity factors.
All sources of electrical power have some degree of variability, as do demand patterns, which routinely drive large swings in the amount of electricity that suppliers feed into the grid. Wherever possible, grid operating procedures are designed to match supply with demand at high levels of reliability, and the tools to influence supply and demand are well developed. The introduction of large amounts of highly variable power generation may require changes to existing procedures and additional investments.
A reliable renewable power supply can be achieved through the use of backup or extra infrastructure and technology, using mixed renewables to produce electricity above the intermittent average, which may be used to meet regular and unanticipated supply demands. Additionally, the storage of energy to cover the shortfalls of intermittency, or for emergencies, can be part of a reliable power supply.
In practice, as the power output from wind varies, partially loaded conventional plants, which are already present to provide response and reserve, adjust their output to compensate. While low penetrations of intermittent power may use existing levels of response and spinning reserve, the larger overall variations at higher penetration levels will require additional reserves or other means of compensation.
=== Operational reserve ===
All managed grids already have existing operational and "spinning" reserve to compensate for existing uncertainties in the power grid. The addition of intermittent resources such as wind does not require 100% "back-up" because operating reserves and balancing requirements are calculated on a system-wide basis, and not dedicated to a specific generating plant.
Some gas, or hydro power plants are partially loaded and then controlled to change as demand changes or to replace rapidly lost generation. The ability to change as demand changes is termed "response". The ability to quickly replace lost generation, typically within timescales of 30 seconds to 30 minutes, is termed "spinning reserve".
Generally thermal plants running as peaking plants will be less efficient than if they were running as base load. Hydroelectric facilities with storage capacity, such as the traditional dam configuration, may be operated as base load or peaking plants.
Grids can contract for grid battery plants, which provide immediately available power for an hour or so, which gives time for other generators to be started up in the event of a failure, and greatly reduces the amount of spinning reserve required.
=== Demand response ===
Demand response is a change in consumption of energy to better align with supply. It can take the form of switching off loads or absorbing additional energy to correct supply/demand imbalances. Incentives have been widely created in the American, British and French systems for the use of these systems, such as favorable rates or capital cost assistance, encouraging consumers with large loads to take them offline whenever there is a shortage of capacity, or conversely to increase load when there is a surplus.
Certain types of load control allow the power company to turn loads off remotely if insufficient power is available. In France large users such as CERN cut power usage as required by the System Operator - EDF under the encouragement of the EJP tariff.
Energy demand management refers to incentives to adjust use of electricity, such as higher rates during peak hours. Real-time variable electricity pricing can encourage users to adjust usage to take advantage of periods when power is cheaply available and avoid periods when it is more scarce and expensive. Some loads such as desalination plants, electric boilers and industrial refrigeration units, are able to store their output (water and heat). Several papers also concluded that Bitcoin mining loads would reduce curtailment, hedge electricity price risk, stabilize the grid, increase the profitability of renewable energy power stations and therefore accelerate transition to sustainable energy. But others argue that Bitcoin mining can never be sustainable.
Instantaneous demand reduction. Most large systems also have a category of loads which instantly disconnect when there is a generation shortage, under some mutually beneficial contract. This can give instant load reductions or increases.
=== Storage ===
At times of low load where non-dispatchable output from wind and solar may be high, grid stability requires lowering the output of various dispatchable generating sources or even increasing controllable loads, possibly by using energy storage to time-shift output to times of higher demand. Such mechanisms can include:
Pumped storage hydropower is the most prevalent existing technology used, and can substantially improve the economics of wind power. The availability of hydropower sites suitable for storage will vary from grid to grid. Typical round trip efficiency is 80%.
Traditional lithium-ion is the most common type used for grid-scale battery storage as of 2020. Rechargeable flow batteries can serve as a large capacity, rapid-response storage medium. Hydrogen can be created through electrolysis and stored for later use.
Flywheel energy storage systems have some advantages over chemical batteries. Along with substantial durability which allows them to be cycled frequently without noticeable life reduction, they also have very fast response and ramp rates. They can go from full discharge to full charge within a few seconds. They can be manufactured using non-toxic and environmentally friendly materials, easily recyclable once the service life is over.
Thermal energy storage stores heat. Stored heat can be used directly for heating needs or converted into electricity. In the context of a CHP plant, a heat store can serve as a functional electricity storage at comparably low cost. Ice storage air conditioning: ice can be stored inter-seasonally and used as a source of air conditioning during periods of high demand. Present systems only need to store ice for a few hours but are well developed.
Storage of electrical energy results in some lost energy because storage and retrieval are not perfectly efficient. Storage also requires capital investment and space for storage facilities.
=== Geographic diversity and complementing technologies ===
The variability of production from a single wind turbine can be high. Combining any additional number of turbines, for example, in a wind farm, results in lower statistical variation, as long as the correlation between the output of each turbine is imperfect, and the correlations are always imperfect due to the distance between each turbine. Similarly, geographically distant wind turbines or wind farms have lower correlations, reducing overall variability. Since wind power is dependent on weather systems, there is a limit to the benefit of this geographic diversity for any power system.
Multiple wind farms spread over a wide geographic area and gridded together produce power more constantly and with less variability than smaller installations. Wind output can be predicted with some degree of confidence using weather forecasts, especially from large numbers of turbines/farms. The ability to predict wind output is expected to increase over time as data is collected, especially from newer facilities.
Electricity produced from solar energy tends to counterbalance the fluctuating supplies generated from wind. Normally it is windiest at night and during cloudy or stormy weather, and there is more sunshine on clear days with less wind. In addition, wind energy often peaks in the winter season, whereas solar energy peaks in the summer; the combination of wind and solar reduces the need for dispatchable backup power.
In some locations, electricity demand may have a high correlation with wind output, particularly in locations where cold temperatures drive electric consumption, as cold air is denser and carries more energy.
The allowable penetration may be increased with further investment in standby generation. For instance, on some days 80% of power could come from intermittent wind, while on the many windless days 80% could instead come from dispatchable sources such as natural gas, biomass and hydro.
Areas with existing high levels of hydroelectric generation may ramp up or down to incorporate substantial amounts of wind. Norway, Brazil, and Manitoba all have high levels of hydroelectric generation, Quebec produces over 90% of its electricity from hydropower, and Hydro-Québec is the largest hydropower producer in the world. The U.S. Pacific Northwest has been identified as another region where wind energy is complemented well by existing hydropower. Storage capacity in hydropower facilities will be limited by size of reservoir, and environmental and other considerations.
=== Connecting grid internationally ===
It is often feasible to export energy to neighboring grids at times of surplus, and import energy when needed. This practice is common in Europe and between the US and Canada. Integration with other grids can lower the effective concentration of variable power: for instance, Denmark's high penetration of VRE, in the context of the German/Dutch/Scandinavian grids with which it has interconnections, is considerably lower as a proportion of the total system. Hydroelectricity that compensates for variability can be used across countries.
The capacity of power transmission infrastructure may have to be substantially upgraded to support export/import plans. Some energy is lost in transmission. The economic value of exporting variable power depends in part on the ability of the exporting grid to provide the importing grid with useful power at useful times for an attractive price.
=== Sector coupling ===
Demand and generation can be better matched when sectors such as mobility, heat and gas are coupled with the power system. The electric vehicle market is for instance expected to become the largest source of storage capacity. This may be a more expensive option appropriate for high penetration of variable renewables, compared to other sources of flexibility. The International Energy Agency says that sector coupling is needed to compensate for the mismatch between seasonal demand and supply.
Electric vehicles can be charged during periods of low demand and high production, and in some places send power back from the vehicle-to-grid.
== Penetration ==
Penetration refers to the proportion of a primary energy (PE) source in an electric power system, expressed as a percentage. There are several methods of calculation, which yield different values. The penetration can be calculated as any of the following (a small numerical sketch follows the list):
the nominal capacity (installed power) of a PE source divided by the peak load within an electric power system; or
the nominal capacity (installed power) of a PE source divided by the total capacity of the electric power system; or
the electrical energy generated by a PE source in a given period, divided by the demand of the electric power system in this period.
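A small sketch (with hypothetical numbers, purely for illustration) shows how the three definitions give different values for the same system:

nameplate_wind_mw = 500.0       # installed wind capacity
peak_load_mw = 1_200.0          # system peak demand
total_capacity_mw = 2_000.0     # all installed generating capacity
wind_energy_gwh = 1_300.0       # wind energy generated over a year
demand_gwh = 7_000.0            # electricity demand over the same year

print(f"{nameplate_wind_mw / peak_load_mw:.0%}")        # capacity vs. peak load: about 42%
print(f"{nameplate_wind_mw / total_capacity_mw:.0%}")   # capacity vs. total capacity: 25%
print(f"{wind_energy_gwh / demand_gwh:.0%}")            # energy vs. demand: about 19%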
The level of penetration of intermittent variable sources is significant for the following reasons:
Power grids with significant amounts of dispatchable pumped storage, hydropower with reservoir or pondage or other peaking power plants such as natural gas-fired power plants are capable of accommodating fluctuations from intermittent power more easily.
Relatively small electric power systems without strong interconnection (such as remote islands) may retain some existing diesel generators, but consume less fuel, for flexibility until cleaner energy sources or storage such as pumped hydro or batteries become cost-effective.
In the early 2020s wind and solar produce 10% of the world's electricity, but supply in the 40-55% penetration range has already been implemented in several systems, with over 65% planned for the UK by 2030.
There is no generally accepted maximum level of penetration, as each system's capacity to compensate for intermittency differs, and the systems themselves will change over time. Discussion of acceptable or unacceptable penetration figures should be treated and used with caution, as the relevance or significance will be highly dependent on local factors, grid structure and management, and existing generation capacity.
For most systems worldwide, existing penetration levels are significantly lower than practical or theoretical maximums.
=== Maximum penetration limits ===
Maximum penetration of combined wind and solar is estimated at 70% to 90% without regional aggregation, demand management or storage; and up to 94% with 12 hours of storage. Economic efficiency and cost considerations are more likely to dominate as critical factors; technical solutions may allow higher penetration levels to be considered in future, particularly if cost considerations are secondary.
=== Economic impacts of variability ===
Estimates of the cost of wind and solar energy may include estimates of the "external" costs of wind and solar variability, or be limited to the cost of production. All electrical plant has costs that are separate from the cost of production, including, for example, the cost of any necessary transmission capacity or reserve capacity in case of loss of generating capacity. Many types of generation, particularly fossil fuel derived, will have cost externalities such as pollution, greenhouse gas emission, and habitat destruction, which are generally not directly accounted for.
The magnitude of the economic impacts is debated and will vary by location, but is expected to rise with higher penetration levels. At low penetration levels, costs such as operating reserve and balancing costs are believed to be insignificant.
Intermittency may introduce additional costs that are distinct from or of a different magnitude than for traditional generation types. These may include:
Transmission capacity: transmission capacity may be more expensive than for nuclear and coal generating capacity due to lower load factors. Transmission capacity will generally be sized to projected peak output, but average capacity for wind will be significantly lower, raising cost per unit of energy actually transmitted. However transmission costs are a low fraction of total energy costs.
Additional operating reserve: if additional wind and solar does not correspond to demand patterns, additional operating reserve may be required compared to other generating types, however this does not result in higher capital costs for additional plants since this is merely existing plants running at low output - spinning reserve. Contrary to statements that all wind must be backed by an equal amount of "back-up capacity", intermittent generators contribute to base capacity "as long as there is some probability of output during peak periods". Back-up capacity is not attributed to individual generators, as back-up or operating reserve "only have meaning at the system level".
Balancing costs: to maintain grid stability, some additional costs may be incurred for balancing of load with demand. Although improvements to grid balancing can be costly, they can lead to long term savings.
In many countries for many types of variable renewable energy, from time to time the government invites companies to tender sealed bids to construct a certain capacity of solar power to connect to certain electricity substations. By accepting the lowest bid the government commits to buy at that price per kWh for a fixed number of years, or up to a certain total amount of power. This provides certainty for investors against highly volatile wholesale electricity prices. However they may still risk exchange rate volatility if they borrowed in foreign currency.
== Examples by country ==
=== Great Britain ===
The operator of the British electricity system has said that it will be capable of operating zero-carbon by 2025, whenever there is enough renewable generation, and may be carbon negative by 2033. The company, National Grid Electricity System Operator, states that new products and services will help reduce the overall cost of operating the system.
=== Germany ===
In countries with a considerable amount of renewable energy, solar energy causes price drops around noon every day. PV production coincides with the higher demand during these hours. Two weeks of 2022 market data from Germany, where renewable energy has a share of over 40%, illustrate this pattern. Prices also drop every night and at weekends due to low demand. In hours without PV and wind power, electricity prices rise. This can lead to demand-side adjustments. While industry is exposed to hourly prices, most private households still pay a fixed tariff. With smart meters, private consumers can also be motivated, for example, to charge an electric car when enough renewable energy is available and prices are low.
Steerable flexibility in electricity production is essential to back up variable energy sources. The German example shows that pumped hydro storage, gas plants and hard coal can ramp up and down quickly. Lignite varies on a daily basis. Nuclear power and biomass can theoretically adjust to a certain extent; however, in this case the incentives still do not seem to be high enough.
== See also ==
Combined cycle hydrogen power plant
Cost of electricity by source
Energy security and renewable technology
Ground source heat pump
List of energy storage power plants
Spark spread: calculating the cost of back up
== References ==
== Further reading ==
Sivaram, Varun (2018). Taming the Sun: Innovation to Harness Solar Energy and Power the Planet. Cambridge, MA: MIT Press. ISBN 978-0-262-03768-6.
== External links ==
Grid Integration of Wind Energy Archived 2012-05-10 at the Wayback Machine | Wikipedia/Variable_renewable_energy |
Steinmetz's equation, sometimes called the power equation, is an empirical equation used to calculate the total power loss (core losses) per unit volume in magnetic materials when subjected to external sinusoidally varying magnetic flux. The equation is named after Charles Steinmetz, a German-American electrical engineer, who proposed a similar equation without the frequency dependency in 1890. The equation is:
{\displaystyle P_{v}=k\cdot f^{a}\cdot B^{b}}
where P_v is the time-average power loss per unit volume in mW per cubic centimeter, f is frequency in kilohertz, and B is the peak magnetic flux density; k, a, and b, called the Steinmetz coefficients, are material parameters generally found empirically from the material's B-H hysteresis curve by curve fitting. In typical magnetic materials, the Steinmetz coefficients all vary with temperature.
The energy loss, called core loss, is due mainly to two effects: magnetic hysteresis and, in conductive materials, eddy currents, which consume energy from the source of the magnetic field, dissipating it as waste heat in the magnetic material. The equation is used mainly to calculate core losses in ferromagnetic magnetic cores used in electric motors, generators, transformers and inductors excited by sinusoidal current. Core losses are an economically important source of inefficiency in alternating current (AC) electric power grids and appliances.
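As a minimal sketch of how the equation is evaluated (the coefficient values below are purely illustrative placeholders; real values must be fitted to a specific material's measured loss data):

def steinmetz_loss(f_khz: float, b_peak: float, k: float, a: float, b: float) -> float:
    """Time-average core-loss density P_v = k * f^a * B^b (units follow the fit)."""
    return k * f_khz ** a * b_peak ** b

# Hypothetical coefficients, as if fitted from a manufacturer's loss curves:
k, a, b = 0.08, 1.4, 2.8
print(steinmetz_loss(100.0, 0.1, k, a, b))   # loss density at 100 kHz, 0.1 T peak flux density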
If only hysteresis is taken into account (à la Steinmetz), the coefficient a will be close to 1 and b will be 2 for nearly all modern magnetic materials. However, due to other nonlinearities, a is usually between 1 and 2, and b is between 2 and 3. The equation is a simplified form that only applies when the magnetic field B has a sinusoidal waveform and does not take into account factors such as DC offset. However, because most electronics expose materials to non-sinusoidal flux waveforms, various improvements to the equation have been made. An improved generalized Steinmetz equation, often referred to as iGSE, can be expressed as
{\displaystyle P={\frac {1}{T}}\int _{0}^{T}k_{i}\left|{\frac {dB}{dt}}\right|^{a}(\Delta B)^{b-a}\,dt}
where ΔB is the peak-to-peak flux density and k_i is defined by
{\displaystyle k_{i}={\frac {k}{(2\pi )^{a-1}\int _{0}^{2\pi }\left|\cos \theta \right|^{a}\,2^{b-a}\,d\theta }}}
where a, b and k are the same parameters used in the original equation. This equation can calculate losses with any flux waveform using only the parameters needed for the original equation, but it ignores the fact that the parameters, and therefore the losses, can vary under DC bias conditions. DC bias cannot be neglected without severely affecting results, but there is still not a practical physically-based model that takes both dynamic and nonlinear effects into account. However, this equation is still widely used because most other models require parameters that are not usually given by manufacturers and that engineers are not likely to take the time and resources to measure.
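The iGSE can be evaluated numerically for an arbitrary periodic flux waveform. The sketch below (illustrative only; it reuses the hypothetical coefficients from the sketch above and assumes a symmetric triangular flux waveform) computes k_i by numerical integration and then averages the loss integrand over one period:

import numpy as np

k, a, b = 0.08, 1.4, 2.8                       # hypothetical Steinmetz coefficients

# k_i from the normalization integral above, evaluated numerically.
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
integral = np.mean(np.abs(np.cos(theta)) ** a * 2.0 ** (b - a)) * 2.0 * np.pi
k_i = k / ((2.0 * np.pi) ** (a - 1.0) * integral)

# Example waveform: a triangle between -B_pk and +B_pk at 100 kHz (time measured in ms).
f_khz, B_pk = 100.0, 0.1
T = 1.0 / f_khz
t = np.linspace(0.0, T, 100_001)
B = B_pk * (2.0 * np.abs(2.0 * t / T - 1.0) - 1.0)
dB_dt = np.gradient(B, t)
delta_B = B.max() - B.min()                    # peak-to-peak flux density

P = np.mean(k_i * np.abs(dB_dt) ** a * delta_B ** (b - a))   # (1/T) * integral over one period
print(P)                                       # loss density for this triangular waveform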
The Steinmetz coefficients for magnetic materials may be available from the manufacturers. However, manufacturers of magnetic materials intended for high-power applications usually provide graphs that plot specific core loss (watts per volume or watts per weight) at a given temperature against peak flux density B_pk, with frequency as a parameter. Families of curves for different temperatures may also be given. These graphs apply to the case where the flux density excursion is ±B_pk. In cases where the magnetizing field has a DC offset or is unidirectional (i.e. ranges between zero and a peak value), core losses can be much lower but are rarely covered by published data.
== See also ==
Core losses
Eddy currents
Legg's equation
Magnetic hysteresis
== References ==
== External links ==
Steinmetz's Equation at ScienceWorld | Wikipedia/Steinmetz's_equation |
The Feynman Lectures on Physics is a physics textbook based on a great number of lectures by Richard Feynman, a Nobel laureate who has sometimes been called "The Great Explainer". The lectures were presented before undergraduate students at the California Institute of Technology (Caltech), during 1961–1964. The book's co-authors are Feynman, Robert B. Leighton, and Matthew Sands.
A 2013 review in Nature described the book as having "simplicity, beauty, unity ... presented with enthusiasm and insight".
== Description ==
The textbook comprises three volumes. The first volume focuses on mechanics, radiation, and heat, including relativistic effects. The second volume covers mainly electromagnetism and matter. The third volume covers quantum mechanics; for example, it shows how the double-slit experiment demonstrates the essential features of quantum mechanics. The book also includes chapters on the relationship between mathematics and physics, and the relationship of physics to other sciences.
In 2013, Caltech in cooperation with The Feynman Lectures Website made the book freely available, on the web site.
== Background ==
By 1960, Richard Feynman’s research and discoveries in physics had resolved a number of troubling inconsistencies in several fundamental theories. In particular, it was his work in quantum electrodynamics for which he was awarded the 1965 Nobel Prize in physics. At the same time that Feynman was at the pinnacle of his fame, the faculty of the California Institute of Technology was concerned about the quality of the introductory courses for undergraduate students. It was thought the courses were burdened by an old-fashioned syllabus and the exciting discoveries of recent years, many of which had occurred at Caltech, were not being taught to the students.
Thus, it was decided to reconfigure the first physics course offered to students at Caltech, with the goal being to generate more excitement in the students. Feynman readily agreed to give the course, though only once. Aware of the fact that this would be a historic event, Caltech recorded each lecture and took photographs of each drawing made on the blackboard by Feynman.
Based on the lectures and the tape recordings, a team of physicists and graduate students put together a manuscript that would become The Feynman Lectures on Physics. Although Feynman's most valuable technical contribution to the field of physics may have been in the field of quantum electrodynamics, the Feynman Lectures were destined to become his most widely-read work.
The Feynman Lectures are considered to be one of the most sophisticated and comprehensive college-level introductions to physics. Feynman himself stated in his original preface that he was “pessimistic” with regard to his success in reaching all of his students. The Feynman lectures were written “to maintain the interest of very enthusiastic and rather smart students coming out of high schools and into Caltech”. Feynman was targeting the lectures to students who, “at the end of two years of our previous course, [were] very discouraged because there were really very few grand, new, modern ideas presented to them”. As a result, some physics students find the lectures more valuable after they have obtained a good grasp of physics by studying more traditional texts, and the books are sometimes seen as more helpful for teachers than for students.
While the two-year course (1961–1963) was still underway, rumors of it spread throughout the physics research and teaching community. In a special preface to the 1989 edition, David Goodstein and Gerry Neugebauer claimed that as time went on, the attendance of registered undergraduate students dropped sharply but was matched by a compensating increase in the number of faculty and graduate students. Co-author Matthew Sands, in his memoir accompanying the 2005 edition, contested this claim. Goodstein and Neugebauer also stated that, “it was [Feynman’s] peers — scientists, physicists, and professors — who would be the main beneficiaries of his magnificent achievement, which was nothing less than to see physics through the fresh and dynamic perspective of Richard Feynman”, and that his "gift was that he was an extraordinary teacher of teachers".
Addison-Wesley published a collection of exercises and problems to accompany The Feynman Lectures on Physics. The problem sets were first used in the 1962–1963 academic year, and were organized by Robert B. Leighton. Some of the problems are sophisticated and difficult enough to require an understanding of advanced topics, such as Kolmogorov's zero–one law. The original set of books and supplements contained a number of errors, some of which rendered problems insoluble. Various errata were issued, which are now available online.
Addison-Wesley also released in CD format all the audio tapes of the lectures, over 103 hours with Richard Feynman, after remastering the sound and clearing the recordings. For the CD release, the order of the lectures was rearranged from that of the original texts. The publisher has released a table showing the correspondence between the books and the CDs.
In March 1964, Feynman appeared once again before the freshman physics class as a lecturer, but the notes for this particular guest lecture were lost for a number of years. They were finally located, restored, and made available as Feynman's Lost Lecture: The Motion of Planets Around the Sun.
In 2005, Michael A. Gottlieb and Ralph Leighton co-authored Feynman's Tips on Physics, which includes four of Feynman's freshman lectures which had not been included in the main text (three on problem solving, one on inertial guidance), a memoir by Matthew Sands about the origins of the Feynman Lectures on Physics, and exercises (with answers) that were assigned to students by Robert B. Leighton and Rochus Vogt in recitation sections of the Feynman Lectures course at Caltech. Also released in 2005 was a "Definitive Edition" of the lectures, which included corrections to the original text.
An account of the history of these famous volumes is given by Sands in his memoir article “Capturing the Wisdom of Feynman", and another article "Memories of Feynman" by the physicist T. A. Welton.
In a September 13, 2013 email to members of the Feynman Lectures online forum, Gottlieb announced the launch of a new website by Caltech and The Feynman Lectures Website which offers "[A] free high-quality online edition" of the lecture text. To provide a device-independent reading experience, the website takes advantage of modern web technologies like HTML5, SVG, and MathJax to present text, figures, and equations in any sizes while maintaining the display quality.
== Contents ==
=== Volume I: Mainly mechanics, radiation, and heat ===
Preface: “When new ideas came in, I would try either to deduce them if they were deducible or to explain that it was a new idea … and which was not supposed to be provable.”
Chapters
=== Volume II: Mainly electromagnetism and matter ===
Chapters
=== Volume III: Quantum mechanics ===
Chapters
== Abbreviated editions ==
Six readily-accessible chapters were later compiled into a book entitled Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. Six more chapters are in the book Six Not So Easy Pieces: Einstein's Relativity, Symmetry and Space-Time.
“Six Easy Pieces grew out of the need to bring to as wide an audience as possible, a substantial yet nontechnical physics primer based on the science of Richard Feynman... General readers are fortunate that Feynman chose to present certain key topics in largely qualitative terms without formal mathematics…”
=== Six Easy Pieces (1994) ===
Chapters:
Atoms in motion
Basic Physics
The relation of physics to other sciences
Conservation of energy
The theory of gravitation
Quantum behavior
=== Six Not-So-Easy Pieces (1998) ===
Chapters:
Vectors
Symmetry in physical laws
The special theory of relativity
Relativistic energy and momentum
Space-time
Curved space
=== The Very Best of The Feynman Lectures (Audio, 2005) ===
Chapters:
The Theory of Gravitation (Vol. I, Chapter 7)
Curved Space (Vol. II, Chapter 42)
Electromagnetism (Vol. II, Chapter 1)
Probability (Vol. I, Chapter 6)
The Relation of Wave and Particle Viewpoints (Vol. III, Chapter 2)
Superconductivity (Vol. III, Chapter 21)
== Publishing information ==
Feynman R, Leighton R, and Sands M. The Feynman Lectures on Physics. Three volumes 1964, 1966. Library of Congress Catalog Card No. 63-20717
ISBN 0-201-02115-3 (1970 paperback three-volume set)
ISBN 0-201-50064-7 (1989 commemorative hardcover three-volume set)
ISBN 0-8053-9045-6 (2006 the definitive edition, 2nd printing, hardcover)
Feynman's Tips On Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics (hardcover) ISBN 0-8053-9063-4
Six Easy Pieces (hardcover book with original Feynman audio on CDs) ISBN 0-201-40896-1
Six Easy Pieces (paperback book) ISBN 0-201-40825-2
Six Not-So-Easy Pieces (paperback book with original Feynman audio on CDs) ISBN 0-201-32841-0
Six Not-So-Easy Pieces (paperback book) ISBN 0-201-32842-9
Exercises for the Feynman Lectures (paperback book) ISBN 2-35648-789-1 (out of print)
Feynman R, Leighton R, and Sands M., The Feynman Lectures Website, September 2013.
"The Feynman Lectures on Physics, Volume I" (online edition)
"The Feynman Lectures on Physics, Volume II" (online edition)
"The Feynman Lectures on Physics, Volume III" (online edition)
== See also ==
Berkeley Physics Course – another contemporaneously developed and influential college-level physics series
The Character of Physical Law – a condensed series of Feynman lectures for scientists and non-scientists
Project Tuva
List of textbooks on classical and quantum mechanics
List of textbooks on electromagnetism
List of textbooks on thermodynamics and statistical mechanics
== References ==
== External links ==
The Feynman Lectures on Physics California Institute of Technology (Caltech) – HTML edition.
The Feynman Lectures on Physics The Feynman Lectures Website – HTML edition and also exercises and other related material. | Wikipedia/Feynman_Lectures_on_Physics |
The Canadian Journal of Physics is a monthly peer-reviewed scientific journal covering research in physics. It was established in 1929 as the Canadian Journal of Research, Section A: Physical Sciences, obtaining its current title in 1951. The journal is published monthly by the NRC Research Press and edited by Robert Mann (University of Waterloo) and Marco Merkli (Memorial University of Newfoundland). The journal is affiliated with the Canadian Association of Physicists.
== Abstracting and indexing ==
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.2.
== References ==
== External links ==
Official website | Wikipedia/Canadian_Journal_of_Physics |
Computer Physics Communications is a peer-reviewed scientific journal published by Elsevier under the North-Holland imprint.
The journal focuses on computational methodology, numerical analysis and hardware and software development in support of physics and physical chemistry.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 7.2.
== Computer Physics Communications Program Library ==
Associated with the journal is the CPC Program Library. This repository houses computer programs which have been described in the journal. Access to the library is bundled with journal subscriptions, although those unaffiliated with a subscribing institution can purchase individual subscriptions.
Originally hosted by the Queen's University Belfast, the service moved in 2016 to the Mendeley Data repository. It contains over 3,000 programs published between 1969 and 2016.
== References ==
== External links ==
Official website | Wikipedia/Computer_Physics_Communications |
The Hoyle–Narlikar theory of gravity is a Machian and conformal theory of gravity proposed by Fred Hoyle and Jayant Narlikar that originally fits into the quasi steady state model of the universe.
The gravitational constant G is arbitrary and is determined by the mean density of matter in the universe. The theory was inspired by the Wheeler–Feynman absorber theory for electrodynamics. When Richard Feynman, as a graduate student, lectured on the Wheeler–Feynman absorber theory in the weekly physics seminar at Princeton, Albert Einstein was in the audience and stated at question time that he was trying to achieve the same thing for gravity. Stephen Hawking showed in 1965 that the theory is incompatible with an expanding universe, because the Wheeler–Feynman advanced solution would diverge. However, at that time the accelerating expansion of the universe was not known, which resolves the divergence issue because of the cosmic event horizon.
The Hoyle–Narlikar theory reduces to Einstein's general relativity in the limit of a smooth fluid model of particle distribution constant in time and space. Hoyle–Narlikar's theory is consistent with some cosmological tests. Unlike the standard cosmological model, the quasi steady state hypothesis implies the universe is eternal. According to Narlikar, multiple mini bangs would occur at the center of quasars, with various creation fields (or C-field) continuously generating matter out of empty space due to local concentration of negative energy that would also prevent violation of conservation laws, in order to keep the mass density constant as the universe expands. The low-temperature cosmic background radiation would not originate from the Big Bang but from metallic dust made from supernovae, radiating the energy of stars. However, the quasi steady-state hypothesis is challenged by observation as it does not fit into WMAP data.
== See also ==
Brans–Dicke theory
Non-standard cosmology
== Notes ==
== Bibliography ==
Hoyle, Fred; Narlikar, Jayant V.; Freeman, W.H. (1974). Action at a distance in physics and cosmology. W. H. Freeman and Company. ISBN 978-0716703464.
Hoyle, Fred; Narlikar, Jayant V. (1996). Lectures on Cosmology and Action at a Distance Electrodynamics. World Scientific. ISBN 978-9810225582.
Hoyle, Fred; Burbidge, Geoffrey; Narlikar, Jayant V. (2000). A Different Approach to Cosmology: From a Static Universe through the Big Bang towards Reality. Cambridge University Press. ISBN 978-0521662239.
Narlikar, Jayant V. (2002). An Introduction to Cosmology (3rd ed.). Cambridge University Press. ISBN 978-0521793766. | Wikipedia/Hoyle–Narlikar_theory_of_gravity |
Nuclear Physics A, Nuclear Physics B, Nuclear Physics B: Proceedings Supplements and discontinued Nuclear Physics are peer-reviewed scientific journals published by Elsevier. The scope of Nuclear Physics A is nuclear and hadronic physics, and that of Nuclear Physics B is high energy physics, quantum field theory, statistical systems, and mathematical physics.
Nuclear Physics was established in 1956 and then split into Nuclear Physics A and Nuclear Physics B in 1967. A supplement series to Nuclear Physics B, called Nuclear Physics B: Proceedings Supplements, was published from 1987 until 2015 and continues as Nuclear and Particle Physics Proceedings.
Nuclear Physics B is part of the SCOAP3 initiative.
== Abstracting and indexing ==
=== Nuclear Physics A ===
Current Contents/Physics, Chemical, & Earth Sciences
=== Nuclear Physics B ===
Current Contents/Physics, Chemical, & Earth Sciences
== References ==
== External links ==
Nuclear Physics
Nuclear Physics A
Nuclear Physics B
Nuclear Physics B: Proceedings Supplements | Wikipedia/Nuclear_Physics_B:_Proceedings_Supplements |
In mathematical analysis, the Dirac delta function (or δ distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
{\displaystyle \delta (x)={\begin{cases}0,&x\neq 0\\\infty ,&x=0\end{cases}}}
such that
{\displaystyle \int _{-\infty }^{\infty }\delta (x)\,dx=1.}
Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.
== Motivation and overview ==
The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one also is able to calculate the motion of the ball, by only considering the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).
To be specific, suppose that a billiard ball is at rest. At time t = 0 it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s⁻¹. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P δ(t); the units of δ(t) are s⁻¹.
To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval Δt = [0, T]. That is,
{\displaystyle F_{\Delta t}(t)={\begin{cases}P/\Delta t&0<t\leq T,\\0&{\text{otherwise}}.\end{cases}}}
Then the momentum at any time t is found by integration:
{\displaystyle p(t)=\int _{0}^{t}F_{\Delta t}(\tau )\,d\tau ={\begin{cases}P&t\geq T\\P\,t/\Delta t&0\leq t\leq T\\0&{\text{otherwise.}}\end{cases}}}
Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0:
{\displaystyle p(t)={\begin{cases}P&t>0\\0&t<0.\end{cases}}}
Here the functions F_Δt are thought of as useful approximations to the idea of instantaneous transfer of momentum.
The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence)
lim
Δ
t
→
0
+
F
Δ
t
{\textstyle \lim _{\Delta t\to 0^{+}}F_{\Delta t}}
is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property
{\displaystyle \int _{-\infty }^{\infty }F_{\Delta t}(t)\,dt=P,}
which holds for all
{\displaystyle \Delta t>0}
, should continue to hold in the limit. So, in the equation
{\textstyle F(t)=P\,\delta (t)=\lim _{\Delta t\to 0}F_{\Delta t}(t)}
, it is understood that the limit is always taken outside the integral.
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
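As a rough numerical illustration of this weak limit (a minimal Python/NumPy sketch; the Gaussian family, the cosine test function and the grid are arbitrary choices, not part of the theory):

```python
import numpy as np

# Gaussians of shrinking variance, integrated against a smooth test function,
# approach the value of that function at the origin.
def gaussian(t, eps):
    return np.exp(-t**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

f = np.cos                               # test function, f(0) = 1
t = np.linspace(-10, 10, 200001)

for eps in (1.0, 0.1, 0.01, 0.001):
    approx = np.trapz(f(t) * gaussian(t, eps), t)
    print(f"variance {eps:>6}: integral = {approx:.6f}  (f(0) = 1)")
```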
The Dirac delta is not truly a function, at least not a usual one with domain and range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.
== History ==
In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics, published in 1930. However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics. He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta.
Mathematicians refer to the same concept as a distribution rather than a function.
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:
{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ d\alpha \,f(\alpha )\ \int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ ,}
which is tantamount to the introduction of the δ-function in the form:
{\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ .}
Later, Augustin Cauchy expressed the theorem using exponentials:
{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp.}
Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as
{\displaystyle {\begin{aligned}f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp\\[4pt]&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\left(\int _{-\infty }^{\infty }e^{ipx}e^{-ip\alpha }\,dp\right)f(\alpha )\,d\alpha =\int _{-\infty }^{\infty }\delta (x-\alpha )f(\alpha )\,d\alpha ,\end{aligned}}}
where the δ-function is expressed as
{\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ip(x-\alpha )}\,dp\ .}
A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows:
The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function.
== Definitions ==
The Dirac delta function
{\displaystyle \delta (x)}
can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,
{\displaystyle \delta (x)\simeq {\begin{cases}+\infty ,&x=0\\0,&x\neq 0\end{cases}}}
and which is also constrained to satisfy the identity
{\displaystyle \int _{-\infty }^{\infty }\delta (x)\,dx=1.}
This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties.
=== As a measure ===
One way to rigorously capture the notion of the Dirac delta function is to define a measure, called Dirac measure, which accepts a subset A of the real line R as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=f(0)}
for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (x)\,dx=f(0)}
holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.
As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function.
{\displaystyle H(x)={\begin{cases}1&{\text{if }}x\geq 0\\0&{\text{if }}x<0.\end{cases}}}
This means that H(x) is the integral of the cumulative indicator function 1(−∞, x] with respect to the measure δ; to wit,
{\displaystyle H(x)=\int _{\mathbf {R} }\mathbf {1} _{(-\infty ,x]}(t)\,\delta (dt)=\delta \!\left((-\infty ,x]\right),}
the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=\int _{-\infty }^{\infty }f(x)\,dH(x).}
All higher moments of δ are zero. In particular, characteristic function and moment generating function are both equal to one.
=== As a distribution ===
In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function φ. Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.
A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
{\displaystyle \delta [\varphi ]=\varphi (0)}
for every test function φ.
For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer MN and a constant CN such that for every test function φ, one has the inequality
{\displaystyle \left|S[\varphi ]\right|\leq C_{N}\sum _{k=0}^{M_{N}}\sup _{x\in [-N,N]}\left|\varphi ^{(k)}(x)\right|}
where sup represents the supremum. With the δ distribution, one has such an inequality (with CN = 1) with MN = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).
The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has
{\displaystyle \delta [\varphi ]=-\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx.}
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
{\displaystyle \int _{-\infty }^{\infty }\varphi (x)\,H'(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,\delta (x)\,dx,}
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have
{\displaystyle -\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,dH(x).}
In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.
=== Generalizations ===
The delta function can be defined in n-dimensional Euclidean space Rn as the measure such that
{\displaystyle \int _{\mathbf {R} ^{n}}f(\mathbf {x} )\,\delta (d\mathbf {x} )=f(\mathbf {0} )}
for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has
{\displaystyle \delta (\mathbf {x} )=\delta (x_{1})\,\delta (x_{2})\cdots \delta (x_{n}).}
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.
The notion of a Dirac measure makes sense on any set. Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by
{\displaystyle \delta _{x_{0}}(A)={\begin{cases}1&{\text{if }}x_{0}\in A\\0&{\text{if }}x_{0}\notin A\end{cases}}}
is the delta measure or unit mass concentrated at x0.
Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution:
{\displaystyle \delta _{x_{0}}[\varphi ]=\varphi (x_{0})}
for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is that in which M is an open set in the Euclidean space Rn.
On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible, however a variety of techniques from abstract analysis are available. For instance, the mapping
{\displaystyle x_{0}\mapsto \delta _{x_{0}}}
is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.
== Properties ==
=== Scaling and symmetry ===
The delta function satisfies the following scaling property for a non-zero scalar α:
{\displaystyle \int _{-\infty }^{\infty }\delta (\alpha x)\,dx=\int _{-\infty }^{\infty }\delta (u)\,{\frac {du}{|\alpha |}}={\frac {1}{|\alpha |}}}
and so
{\displaystyle \delta (\alpha x)={\frac {\delta (x)}{|\alpha |}}.}
Scaling property proof:
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{a}}g(0).}
where a change of variable x′ = ax is used. If a is negative, i.e., a = −|a|, then
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{-\left\vert a\right\vert }}\int \limits _{\infty }^{-\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}g(0).}
Thus,
{\displaystyle \delta (ax)={\frac {1}{\left\vert a\right\vert }}\delta (x)}.
In particular, the delta function is an even distribution (symmetry), in the sense that
{\displaystyle \delta (-x)=\delta (x)}
which is homogeneous of degree −1.
=== Algebraic properties ===
The distributional product of δ with x is equal to zero:
{\displaystyle x\,\delta (x)=0.}
More generally,
{\displaystyle (x-a)^{n}\delta (x-a)=0}
for all positive integers
{\displaystyle n}.
Conversely, if xf(x) = xg(x), where f and g are distributions, then
{\displaystyle f(x)=g(x)+c\delta (x)}
for some constant c.
=== Translation ===
The integral of any function multiplied by the time-delayed Dirac delta
{\displaystyle \delta _{T}(t){=}\delta (t{-}T)}
is
{\displaystyle \int _{-\infty }^{\infty }f(t)\,\delta (t-T)\,dt=f(T).}
This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T.
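A small symbolic sketch of the sifting property (assuming SymPy's DiracDelta; the particular integrand and the shift T = 2 are arbitrary examples):

```python
import sympy as sp

# The delta factor "sifts out" the value of the other factor at t = T (here T = 2).
t = sp.symbols('t', real=True)
print(sp.integrate(sp.sin(t) * sp.DiracDelta(t - 2), (t, -sp.oo, sp.oo)))  # sin(2)
```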
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:
{\displaystyle {\begin{aligned}(f*\delta _{T})(t)\ &{\stackrel {\mathrm {def} }{=}}\ \int _{-\infty }^{\infty }f(\tau )\,\delta (t-T-\tau )\,d\tau \\&=\int _{-\infty }^{\infty }f(\tau )\,\delta (\tau -(t-T))\,d\tau \qquad {\text{since}}~\delta (-x)=\delta (x)~~{\text{by (4)}}\\&=f(t-T).\end{aligned}}}
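The discrete-time analogue of this time-delay identity can be checked directly (a sketch; the sample values and the delay of two samples are arbitrary):

```python
import numpy as np

# Convolving a sampled signal with a unit impulse delayed by k samples
# simply delays the signal by k samples.
f = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0])
k = 2
impulse = np.zeros(len(f))
impulse[k] = 1.0

print(np.convolve(f, impulse)[:len(f)])   # [0. 0. 1. 2. 3. 4. 0.]
```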
The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)
{\displaystyle \int _{-\infty }^{\infty }\delta (\xi -x)\delta (x-\eta )\,dx=\delta (\eta -\xi ).}
=== Composition with a function ===
More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where
{\displaystyle u=g(x)}
), that
{\displaystyle \int _{\mathbb {R} }\delta {\bigl (}g(x){\bigr )}f{\bigl (}g(x){\bigr )}\left|g'(x)\right|dx=\int _{g(\mathbb {R} )}\delta (u)\,f(u)\,du}
provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution
{\displaystyle \delta \circ g}
so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude the g′ = 0 point. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then
{\displaystyle \delta (g(x))={\frac {\delta (x-x_{0})}{|g'(x_{0})|}}.}
It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by
{\displaystyle \delta (g(x))=\sum _{i}{\frac {\delta (x-x_{i})}{|g'(x_{i})|}}}
where the sum extends over all roots of g(x), which are assumed to be simple. Thus, for example
{\displaystyle \delta \left(x^{2}-\alpha ^{2}\right)={\frac {1}{2|\alpha |}}{\Big [}\delta \left(x+\alpha \right)+\delta \left(x-\alpha \right){\Big ]}.}
In the integral form, the generalized scaling property may be written as
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (g(x))\,dx=\sum _{i}{\frac {f(x_{i})}{|g'(x_{i})|}}.}
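The integral form can be tested numerically by replacing δ with a narrow nascent delta (a sketch, assuming g(x) = x² − α² with simple roots ±α; the width, test function and grid are arbitrary choices):

```python
import numpy as np

# Compare ∫ f(x) δ(g(x)) dx with Σ_i f(x_i)/|g'(x_i)| for g(x) = x² − α².
def nascent_delta(u, eps=1e-4):
    return np.exp(-u**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

alpha = 2.0
f = lambda x: x + 3.0
g = lambda x: x**2 - alpha**2

x = np.linspace(-5, 5, 2_000_001)
numeric = np.trapz(f(x) * nascent_delta(g(x)), x)
exact = (f(-alpha) + f(alpha)) / (2 * abs(alpha))
print(numeric, exact)   # both close to 1.5
```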
=== Indefinite integral ===
For a constant
{\displaystyle a\in \mathbb {R} }
and a "well-behaved" arbitrary real-valued function y(x),
{\displaystyle \displaystyle {\int }y(x)\delta (x-a)dx=y(a)H(x-a)+c,}
where H(x) is the Heaviside step function and c is an integration constant.
=== Properties in n dimensions ===
The delta distribution in an n-dimensional space satisfies the following scaling property instead,
{\displaystyle \delta (\alpha {\boldsymbol {x}})=|\alpha |^{-n}\delta ({\boldsymbol {x}})~,}
so that δ is a homogeneous distribution of degree −n.
Under any reflection or rotation ρ, the delta function is invariant,
{\displaystyle \delta (\rho {\boldsymbol {x}})=\delta ({\boldsymbol {x}})~.}
As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function g: Rn → Rn uniquely so that the following holds
{\displaystyle \int _{\mathbb {R} ^{n}}\delta (g({\boldsymbol {x}}))\,f(g({\boldsymbol {x}}))\left|\det g'({\boldsymbol {x}})\right|d{\boldsymbol {x}}=\int _{g(\mathbb {R} ^{n})}\delta ({\boldsymbol {u}})f({\boldsymbol {u}})\,d{\boldsymbol {u}}}
for all compactly supported functions f.
Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g : Rn → R such that the gradient of g is nowhere zero, the following identity holds
{\displaystyle \int _{\mathbb {R} ^{n}}f({\boldsymbol {x}})\,\delta (g({\boldsymbol {x}}))\,d{\boldsymbol {x}}=\int _{g^{-1}(0)}{\frac {f({\boldsymbol {x}})}{|{\boldsymbol {\nabla }}g|}}\,d\sigma ({\boldsymbol {x}})}
where the integral on the right is over g−1(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral.
More generally, if S is a smooth hypersurface of Rn, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:
{\displaystyle \delta _{S}[g]=\int _{S}g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}})}
where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in Rn with smooth boundary S, then δS is equal to the normal derivative of the indicator function of D in the distribution sense,
{\displaystyle -\int _{\mathbb {R} ^{n}}g({\boldsymbol {x}})\,{\frac {\partial 1_{D}({\boldsymbol {x}})}{\partial n}}\,d{\boldsymbol {x}}=\int _{S}\,g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}}),}
where n is the outward normal. For a proof, see e.g. the article on the surface delta function.
In three dimensions, the delta function is represented in spherical coordinates by:
{\displaystyle \delta ({\boldsymbol {r}}-{\boldsymbol {r}}_{0})={\begin{cases}\displaystyle {\frac {1}{r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})\delta (\phi -\phi _{0})&x_{0},y_{0},z_{0}\neq 0\\\displaystyle {\frac {1}{2\pi r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})&x_{0}=y_{0}=0,\ z_{0}\neq 0\\\displaystyle {\frac {1}{4\pi r^{2}}}\delta (r-r_{0})&x_{0}=y_{0}=z_{0}=0\end{cases}}}
== Derivatives ==
The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions φ by
{\displaystyle \delta '[\varphi ]=-\delta [\varphi ']=-\varphi '(0).}
The first equality here is a kind of integration by parts, for if δ were a true function then
{\displaystyle \int _{-\infty }^{\infty }\delta '(x)\varphi (x)\,dx=\delta (x)\varphi (x)|_{-\infty }^{\infty }-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\varphi '(0).}
By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by
{\displaystyle \delta ^{(k)}[\varphi ]=(-1)^{k}\varphi ^{(k)}(0).}
In particular, δ is an infinitely differentiable distribution.
The first derivative of the delta function is the distributional limit of the difference quotients:
{\displaystyle \delta '(x)=\lim _{h\to 0}{\frac {\delta (x+h)-\delta (x)}{h}}.}
More properly, one has
{\displaystyle \delta '=\lim _{h\to 0}{\frac {1}{h}}(\tau _{h}\delta -\delta )}
where τh is the translation operator, defined on functions by τhφ(x) = φ(x + h), and on a distribution S by
{\displaystyle (\tau _{h}S)[\varphi ]=S[\tau _{-h}\varphi ].}
In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.
The derivative of the delta function satisfies a number of basic properties, including:
{\displaystyle {\begin{aligned}\delta '(-x)&=-\delta '(x)\\x\delta '(x)&=-\delta (x)\end{aligned}}}
which can be shown by applying a test function and integrating by parts.
The latter of these properties can also be demonstrated by applying the definition of the distributional derivative, Leibniz's theorem and the linearity of the inner product:
{\displaystyle {\begin{aligned}\langle x\delta ',\varphi \rangle \,&=\,\langle \delta ',x\varphi \rangle \,=\,-\langle \delta ,(x\varphi )'\rangle \,=\,-\langle \delta ,x'\varphi +x\varphi '\rangle \,=\,-\langle \delta ,x'\varphi \rangle -\langle \delta ,x\varphi '\rangle \,=\,-x'(0)\varphi (0)-x(0)\varphi '(0)\\&=\,-x'(0)\langle \delta ,\varphi \rangle -x(0)\langle \delta ,\varphi '\rangle \,=\,-x'(0)\langle \delta ,\varphi \rangle +x(0)\langle \delta ',\varphi \rangle \,=\,\langle x(0)\delta '-x'(0)\delta ,\varphi \rangle \\\Longrightarrow x(t)\delta '(t)&=x(0)\delta '(t)-x'(0)\delta (t)=-x'(0)\delta (t)=-\delta (t)\end{aligned}}}
Furthermore, the convolution of δ′ with a compactly-supported, smooth function f is
{\displaystyle \delta '*f=\delta *f'=f',}
which follows from the properties of the distributional derivative of a convolution.
=== Higher dimensions ===
More generally, on an open set U in the n-dimensional Euclidean space
{\displaystyle \mathbb {R} ^{n}}
, the Dirac delta distribution centered at a point a ∈ U is defined by
{\displaystyle \delta _{a}[\varphi ]=\varphi (a)}
for all
{\displaystyle \varphi \in C_{c}^{\infty }(U)}
, the space of all smooth functions with compact support on U. If
{\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})}
is any multi-index with
{\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n}}
and
{\displaystyle \partial ^{\alpha }}
denotes the associated mixed partial derivative operator, then the α-th derivative ∂αδa of δa is given by
{\displaystyle \left\langle \partial ^{\alpha }\delta _{a},\,\varphi \right\rangle =(-1)^{|\alpha |}\left\langle \delta _{a},\partial ^{\alpha }\varphi \right\rangle =(-1)^{|\alpha |}\partial ^{\alpha }\varphi (x){\Big |}_{x=a}\quad {\text{ for all }}\varphi \in C_{c}^{\infty }(U).}
That is, the α-th derivative of δa is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.
Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients cα such that
{\displaystyle S=\sum _{|\alpha |\leq m}c_{\alpha }\partial ^{\alpha }\delta _{a}.}
== Representations ==
=== Nascent delta function ===
The delta function can be viewed as the limit of a sequence of functions
{\displaystyle \delta (x)=\lim _{\varepsilon \to 0^{+}}\eta _{\varepsilon }(x),}
where ηε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that
{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }\eta _{\varepsilon }(x)f(x)\,dx=f(0)}
for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.
==== Approximations to the identity ====
Typically a nascent delta function ηε can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\eta \left({\frac {x}{\varepsilon }}\right).}
In n dimensions, one uses instead the scaling
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-n}\eta \left({\frac {x}{\varepsilon }}\right).}
Then a simple change of variables shows that ηε also has integral 1. One may show that (5) holds for all continuous compactly supported functions f, and so ηε converges weakly to δ in the sense of measures.
The ηε constructed in this way are known as an approximation to the identity. This terminology is because the space L1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L1(R) whenever f and g are in L1(R). However, there is no identity in L1(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence ηε does approximate such an identity in the sense that
{\displaystyle f*\eta _{\varepsilon }\to f\quad {\text{as }}\varepsilon \to 0.}
This limit holds in the sense of mean convergence (convergence in L1). Further conditions on the ηε, for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
If the initial η = η1 is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing η to be a suitably normalized bump function, for instance
{\displaystyle \eta (x)={\begin{cases}{\frac {1}{I_{n}}}\exp {\Big (}-{\frac {1}{1-|x|^{2}}}{\Big )}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1.\end{cases}}}
(with {\displaystyle I_{n}} ensuring that the total integral is 1).
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking η1 to be a hat function. With this choice of η1, one has
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\max \left(1-\left|{\frac {x}{\varepsilon }}\right|,0\right)}
which are all continuous and compactly supported, although not smooth and so not a mollifier.
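A short numerical sketch of this piecewise linear approximation (the test function, evaluation point and widths are arbitrary choices):

```python
import numpy as np

# The hat-function nascent delta η_ε(x) = ε⁻¹ max(1 − |x/ε|, 0): as ε shrinks,
# ∫ f(x) η_ε(x − a) dx approaches f(a).
def eta(x, eps):
    return np.maximum(1.0 - np.abs(x / eps), 0.0) / eps

f, a = np.sin, 0.7
x = np.linspace(-5, 5, 400001)

for eps in (0.5, 0.1, 0.02):
    print(eps, np.trapz(f(x) * eta(x - a, eps), x), f(a))
```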
==== Probabilistic considerations ====
In the context of probability theory, it is natural to impose the additional condition that the initial η1 in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking η1 to be any probability distribution at all, and letting ηε(x) = η1(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, η has mean 0 and has small higher moments. For instance, if η1 is the uniform distribution on
{\textstyle \left[-{\frac {1}{2}},{\frac {1}{2}}\right]}
, also known as the rectangular function, then:
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x}{\varepsilon }}\right)={\begin{cases}{\frac {1}{\varepsilon }},&-{\frac {\varepsilon }{2}}<x<{\frac {\varepsilon }{2}},\\0,&{\text{otherwise}}.\end{cases}}}
Another example is with the Wigner semicircle distribution
{\displaystyle \eta _{\varepsilon }(x)={\begin{cases}{\frac {2}{\pi \varepsilon ^{2}}}{\sqrt {\varepsilon ^{2}-x^{2}}},&-\varepsilon <x<\varepsilon ,\\0,&{\text{otherwise}}.\end{cases}}}
This is continuous and compactly supported, but not a mollifier because it is not smooth.
==== Semigroups ====
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of ηε with ηδ must satisfy
{\displaystyle \eta _{\varepsilon }*\eta _{\delta }=\eta _{\varepsilon +\delta }}
for all ε, δ > 0. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
{\displaystyle {\begin{cases}{\dfrac {\partial }{\partial t}}\eta (t,x)=A\eta (t,x),\quad t>0\\[5pt]\displaystyle \lim _{t\to 0^{+}}\eta (t,x)=\delta (x)\end{cases}}}
in which the limit is as usual understood in the weak sense. Setting ηε(x) = η(ε, x) gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
===== The heat kernel =====
The heat kernel, defined by
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\sqrt {2\pi \varepsilon }}}\mathrm {e} ^{-{\frac {x^{2}}{2\varepsilon }}}}
represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at time t = 0. This semigroup evolves according to the one-dimensional heat equation:
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}u}{\partial x^{2}}}.}
In probability theory, ηε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion.
In higher-dimensional Euclidean space Rn, the heat kernel is
{\displaystyle \eta _{\varepsilon }={\frac {1}{(2\pi \varepsilon )^{n/2}}}\mathrm {e} ^{-{\frac {x\cdot x}{2\varepsilon }}},}
and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense that ηε → δ in the distribution sense as ε → 0.
===== The Poisson kernel =====
The Poisson kernel
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi }}\mathrm {Im} \left\{{\frac {1}{x-\mathrm {i} \varepsilon }}\right\}={\frac {1}{\pi }}{\frac {\varepsilon }{\varepsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} \xi x-|\varepsilon \xi |}\,d\xi }
is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and to the Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation
{\displaystyle {\frac {\partial u}{\partial t}}=-\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}u(t,x)}
where the operator is rigorously defined as the Fourier multiplier
{\displaystyle {\mathcal {F}}\left[\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}f\right](\xi )=|2\pi \xi |{\mathcal {F}}f(\xi ).}
==== Oscillatory integrals ====
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function
{\displaystyle \varepsilon ^{-1/3}\operatorname {Ai} \left(x\varepsilon ^{-1/3}\right).}
Although it is easy to see, using the Fourier transform, that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in R1+1:
{\displaystyle {\begin{aligned}c^{-2}{\frac {\partial ^{2}u}{\partial t^{2}}}-\Delta u&=0\\u=0,\quad {\frac {\partial u}{\partial t}}=\delta &\qquad {\text{for }}t=0.\end{aligned}}}
The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.
Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\varepsilon }}\right)={\frac {1}{2\pi }}\int _{-{\frac {1}{\varepsilon }}}^{\frac {1}{\varepsilon }}\cos(kx)\,dk}
and the Bessel function
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}J_{\frac {1}{\varepsilon }}\left({\frac {x+1}{\varepsilon }}\right).}
=== Plane wave decomposition ===
One approach to the study of a linear partial differential equation
{\displaystyle L[u]=f,}
where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation
{\displaystyle L[u]=\delta .}
When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form
{\displaystyle L[u]=h}
where h is a plane wave function, meaning that it has the form
{\displaystyle h=h(x\cdot \xi )}
for some vector ξ. Such an equation can be resolved (if the coefficients of L are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose k so that n + k is an even integer, and for a real number s, put
{\displaystyle g(s)=\operatorname {Re} \left[{\frac {-s^{k}\log(-is)}{k!(2\pi i)^{n}}}\right]={\begin{cases}{\frac {|s|^{k}}{4k!(2\pi i)^{n-1}}}&n{\text{ odd}}\\[5pt]-{\frac {|s|^{k}\log |s|}{k!(2\pi i)^{n}}}&n{\text{ even.}}\end{cases}}}
Then δ is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure dω of g(x · ξ) for ξ in the unit sphere Sn−1:
{\displaystyle \delta (x)=\Delta _{x}^{(n+k)/2}\int _{S^{n-1}}g(x\cdot \xi )\,d\omega _{\xi }.}
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function φ,
{\displaystyle \varphi (x)=\int _{\mathbf {R} ^{n}}\varphi (y)\,dy\,\Delta _{x}^{\frac {n+k}{2}}\int _{S^{n-1}}g((x-y)\cdot \xi )\,d\omega _{\xi }.}
The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of φ(x) from its integrals over hyperplanes. For instance, if n is odd and k = 1, then the integral on the right hand side is
{\displaystyle {\begin{aligned}&c_{n}\Delta _{x}^{\frac {n+1}{2}}\iint _{S^{n-1}}\varphi (y)|(y-x)\cdot \xi |\,d\omega _{\xi }\,dy\\[5pt]&\qquad =c_{n}\Delta _{x}^{(n+1)/2}\int _{S^{n-1}}\,d\omega _{\xi }\int _{-\infty }^{\infty }|p|R\varphi (\xi ,p+x\cdot \xi )\,dp\end{aligned}}}
where Rφ(ξ, p) is the Radon transform of φ:
{\displaystyle R\varphi (\xi ,p)=\int _{x\cdot \xi =p}f(x)\,d^{n-1}x.}
An alternative equivalent expression of the plane wave decomposition is:
{\displaystyle \delta (x)={\begin{cases}{\frac {(n-1)!}{(2\pi i)^{n}}}\displaystyle \int _{S^{n-1}}(x\cdot \xi )^{-n}\,d\omega _{\xi }&n{\text{ even}}\\{\frac {1}{2(2\pi i)^{n-1}}}\displaystyle \int _{S^{n-1}}\delta ^{(n-1)}(x\cdot \xi )\,d\omega _{\xi }&n{\text{ odd}}.\end{cases}}}
=== Fourier transform ===
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds
{\displaystyle {\widehat {\delta }}(\xi )=\int _{-\infty }^{\infty }e^{-2\pi ix\xi }\,\delta (x)dx=1.}
Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing
{\displaystyle \langle \cdot ,\cdot \rangle }
of tempered distributions with Schwartz functions. Thus
{\displaystyle {\widehat {\delta }}}
is defined as the unique tempered distribution satisfying
{\displaystyle \langle {\widehat {\delta }},\varphi \rangle =\langle \delta ,{\widehat {\varphi }}\rangle }
for all Schwartz functions φ. And indeed it follows from this that
{\displaystyle {\widehat {\delta }}=1.}
As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S:
{\displaystyle S*\delta =S.}
That is to say that δ is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it characterizes the system completely. See LTI system theory § Impulse response and convolution.
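A discrete-time sketch of this signal-processing statement (the filter taps and inputs are arbitrary; "the system" here is simply convolution with those taps):

```python
import numpy as np

# Feeding a unit impulse into an LTI system returns its impulse response h,
# and the response to any other input is the convolution of that input with h.
h = np.array([0.5, 0.3, 0.2])              # impulse response of the example system
system = lambda x: np.convolve(x, h)       # the example LTI system

impulse = np.array([1.0, 0.0, 0.0, 0.0])
print(system(impulse)[:len(h)])            # [0.5 0.3 0.2], i.e. h itself

x = np.array([1.0, 2.0, 0.0, -1.0])
print(np.allclose(system(x), np.convolve(x, h)))   # True: h characterizes the system
```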
The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed as
{\displaystyle \int _{-\infty }^{\infty }1\cdot e^{2\pi ix\xi }\,d\xi =\delta (x)}
and more rigorously, it follows since
{\displaystyle \langle 1,{\widehat {f}}\rangle =f(0)=\langle \delta ,f\rangle }
for all Schwartz functions f.
In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on R. Formally, one has
{\displaystyle \int _{-\infty }^{\infty }e^{i2\pi \xi _{1}t}\left[e^{i2\pi \xi _{2}t}\right]^{*}\,dt=\int _{-\infty }^{\infty }e^{-i2\pi (\xi _{2}-\xi _{1})t}\,dt=\delta (\xi _{2}-\xi _{1}).}
This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution
{\displaystyle f(t)=e^{i2\pi \xi _{1}t}}
is
{\displaystyle {\widehat {f}}(\xi _{2})=\delta (\xi _{1}-\xi _{2})}
which again follows by imposing self-adjointness of the Fourier transform.
By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be
{\displaystyle \int _{0}^{\infty }\delta (t-a)\,e^{-st}\,dt=e^{-sa}.}
==== Fourier kernels ====
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The n-th partial sum of the Fourier series of a function f of period 2π is defined by convolution (on the interval [−π,π]) with the Dirichlet kernel:
{\displaystyle D_{N}(x)=\sum _{n=-N}^{N}e^{inx}={\frac {\sin \left(\left(N+{\frac {1}{2}}\right)x\right)}{\sin(x/2)}}.}
Thus,
{\displaystyle s_{N}(f)(x)=D_{N}*f(x)=\sum _{n=-N}^{N}a_{n}e^{inx}}
where
{\displaystyle a_{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(y)e^{-iny}\,dy.}
A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval [−π,π] tends to a multiple of the delta function as N → ∞. This is interpreted in the distribution sense, that
{\displaystyle s_{N}(f)(0)=\int _{-\pi }^{\pi }D_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported smooth function f. Thus, formally one has
{\displaystyle \delta (x)={\frac {1}{2\pi }}\sum _{n=-\infty }^{\infty }e^{inx}}
on the interval [−π,π].
Despite this, the result does not hold for all compactly supported continuous functions: that is DN does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel
{\displaystyle F_{N}(x)={\frac {1}{N}}\sum _{n=0}^{N-1}D_{n}(x)={\frac {1}{N}}\left({\frac {\sin {\frac {Nx}{2}}}{\sin {\frac {x}{2}}}}\right)^{2}.}
The Fejér kernels tend to the delta function in the stronger sense that
{\displaystyle \int _{-\pi }^{\pi }F_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported continuous function f. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
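The different behaviour of the two kernels can be seen numerically (a sketch; the test function, grid and values of N are arbitrary, and the quadrature is only approximate):

```python
import numpy as np

# Integrals of the Dirichlet and Fejér kernels against a continuous test
# function on [−π, π]; both columns should approach 2π·f(0) ≈ 6.2832.
def dirichlet(x, N):
    return np.sin((N + 0.5) * x) / np.sin(x / 2)

def fejer(x, N):
    return (np.sin(N * x / 2) / np.sin(x / 2))**2 / N

f = lambda x: np.exp(-x**2)                      # f(0) = 1
x = np.linspace(-np.pi, np.pi, 400000)           # even point count avoids x = 0 exactly

for N in (10, 50, 200):
    print(N, np.trapz(dirichlet(x, N) * f(x), x), np.trapz(fejer(x, N) * f(x), x))
```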
=== Hilbert space theory ===
The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in L2, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L2 and to give a stronger topology on which the delta function defines a bounded linear functional.
==== Sobolev spaces ====
The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f such that
{\displaystyle \|f\|_{H^{1}}^{2}=\int _{-\infty }^{\infty }|{\widehat {f}}(\xi )|^{2}(1+|\xi |^{2})\,d\xi <\infty }
is automatically continuous, and satisfies in particular
{\displaystyle \delta [f]=|f(0)|<C\|f\|_{H^{1}}.}
Thus δ is a bounded linear functional on the Sobolev space H1. Equivalently δ is an element of the continuous dual space H−1 of H1. More generally, in n dimensions, one has δ ∈ H−s(Rn) provided s > n/2.
==== Spaces of holomorphic functions ====
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the complex plane with smooth boundary, then
{\displaystyle f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}},\quad z\in D}
for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δz is represented in this class of holomorphic functions by the Cauchy integral:
{\displaystyle \delta _{z}[f]=f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}}.}
Moreover, let H2(∂D) be the Hardy space consisting of the closure in L2(∂D) of all holomorphic functions in D continuous up to the boundary of D. Then functions in H2(∂D) uniquely extend to holomorphic functions in D, and the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δz is a continuous linear functional on H2(∂D). This is a special case of the situation in several complex variables in which, for smooth domains D, the Szegő kernel plays the role of the Cauchy integral.
Another representation of the delta function in a space of holomorphic functions is on the space
{\displaystyle H(D)\cap L^{2}(D)}
of square-integrable holomorphic functions in an open set
{\displaystyle D\subset \mathbb {C} ^{n}}
. This is a closed subspace of
{\displaystyle L^{2}(D)}
, and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in
{\displaystyle H(D)\cap L^{2}(D)}
at a point
{\displaystyle z}
of
{\displaystyle D}
is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel
{\displaystyle K_{z}(\zeta )}
, the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has
{\displaystyle \delta _{w}[f]=f(w)={\frac {1}{\pi }}\iint _{|z|<1}{\frac {f(z)\,dx\,dy}{(1-{\bar {z}}w)^{2}}}.}
==== Resolutions of the identity ====
Given a complete orthonormal basis set of functions {φn} in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as
{\displaystyle f=\sum _{n=1}^{\infty }\alpha _{n}\varphi _{n}.}
The coefficients {αn} are found as
{\displaystyle \alpha _{n}=\langle \varphi _{n},f\rangle ,}
which may be represented by the notation:
{\displaystyle \alpha _{n}=\varphi _{n}^{\dagger }f,}
a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of f takes the dyadic form:
{\displaystyle f=\sum _{n=1}^{\infty }\varphi _{n}\left(\varphi _{n}^{\dagger }f\right).}
Letting I denote the identity operator on the Hilbert space, the expression
{\displaystyle I=\sum _{n=1}^{\infty }\varphi _{n}\varphi _{n}^{\dagger },}
is called a resolution of the identity. When the Hilbert space is the space L2(D) of square-integrable functions on a domain D, the quantity:
{\displaystyle \varphi _{n}\varphi _{n}^{\dagger },}
is an integral operator, and the expression for f can be rewritten
{\displaystyle f(x)=\sum _{n=1}^{\infty }\int _{D}\,\left(\varphi _{n}(x)\varphi _{n}^{*}(\xi )\right)f(\xi )\,d\xi .}
The right-hand side converges to f in the L2 sense. It need not hold in a pointwise sense, even when f is a continuous function. Nevertheless, it is common to abuse notation and write
{\displaystyle f(x)=\int \,\delta (x-\xi )f(\xi )\,d\xi ,}
resulting in the representation of the delta function:
{\displaystyle \delta (x-\xi )=\sum _{n=1}^{\infty }\varphi _{n}(x)\varphi _{n}^{*}(\xi ).}
With a suitable rigged Hilbert space (Φ, L2(D), Φ*) where Φ ⊂ L2(D) contains all compactly supported smooth functions, this summation may converge in Φ*, depending on the properties of the basis φn. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. the heat kernel), in which case the series converges in the distribution sense.
=== Infinitesimal delta functions ===
Cauchy used an infinitesimal α to write down a unit impulse, infinitely tall and narrow Dirac-type delta function δα satisfying
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
in a number of articles in 1827. Cauchy defined an infinitesimal in Cours d'Analyse (1827) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Non-standard analysis allows one to rigorously treat infinitesimals. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function F one has
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
as anticipated by Fourier and Cauchy.
== Dirac comb ==
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,
{\displaystyle \operatorname {\text{Ш}} (x)=\sum _{n=-\infty }^{\infty }\delta (x-n),}
which is a sequence of point masses at each of the integers.
Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if f is any Schwartz function, then the periodization of f is given by the convolution
{\displaystyle (f*\operatorname {\text{Ш}} )(x)=\sum _{n=-\infty }^{\infty }f(x-n).}
In particular,
{\displaystyle (f*\operatorname {\text{Ш}} )^{\wedge }={\widehat {f}}{\widehat {\operatorname {\text{Ш}} }}={\widehat {f}}\operatorname {\text{Ш}} }
is precisely the Poisson summation formula.
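A quick numerical check of the Poisson summation formula (a sketch using a Gaussian, whose Fourier transform under the convention f̂(ξ) = ∫ f(x) e^(−2πixξ) dx is again a Gaussian; the truncation range is arbitrary):

```python
import numpy as np

# Poisson summation: Σ_n f(n) = Σ_k f̂(k) for f(x) = exp(−π x²/s),
# whose Fourier transform is f̂(ξ) = √s · exp(−π s ξ²).
s = 2.0
f = lambda x: np.exp(-np.pi * x**2 / s)
f_hat = lambda k: np.sqrt(s) * np.exp(-np.pi * s * k**2)

n = np.arange(-50, 51)
print(np.sum(f(n)), np.sum(f_hat(n)))   # the two sums agree
```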
More generally, this formula remains true if f is a tempered distribution of rapid descent or, equivalently, if
{\displaystyle {\widehat {f}}}
is a slowly growing, ordinary function within the space of tempered distributions.
== Sokhotski–Plemelj theorem ==
The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution p.v. 1/x, the Cauchy principal value of the function 1/x, defined by
{\displaystyle \left\langle \operatorname {p.v.} {\frac {1}{x}},\varphi \right\rangle =\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {\varphi (x)}{x}}\,dx.}
Sokhotsky's formula states that
{\displaystyle \lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\operatorname {p.v.} {\frac {1}{x}}\mp i\pi \delta (x),}
Here the limit is understood in the distribution sense: for all compactly supported smooth functions f,
{\displaystyle \int _{-\infty }^{\infty }\lim _{\varepsilon \to 0^{+}}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {f(x)}{x}}\,dx.}
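The formula can be checked numerically. The sketch below (an illustration with an arbitrary rapidly decaying test function, not a derivation) compares the real and imaginary parts of the regularized integral with the principal value and −πf(0):

```python
import numpy as np
from scipy.integrate import quad

# Sketch: integral of f(x)/(x + i eps) dx -> -i pi f(0) + p.v. integral f(x)/x dx.
f = lambda x: np.exp(-(x - 1.0) ** 2)        # smooth, rapidly decaying test function
eps = 1e-4

# 1/(x + i eps) = (x - i eps)/(x^2 + eps^2)
re_part, _ = quad(lambda x: f(x) * x / (x**2 + eps**2), -20, 20, limit=400)
im_part, _ = quad(lambda x: -f(x) * eps / (x**2 + eps**2), -20, 20, limit=400)

# Principal value, written as a regular integral of (f(x) - f(-x))/x
pv, _ = quad(lambda x: (f(x) - f(-x)) / x, 0, 20, limit=200)

print(re_part, pv)                  # real part approaches the principal value
print(im_part, -np.pi * f(0.0))     # imaginary part approaches -pi f(0)
```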
== Relationship to the Kronecker delta ==
The Kronecker delta δij is the quantity defined by
{\displaystyle \delta _{ij}={\begin{cases}1&i=j\\0&i\not =j\end{cases}}}
for all integers i, j. This function then satisfies the following analog of the sifting property: if ai (for i in the set of all integers) is any doubly infinite sequence, then
{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ik}=a_{k}.}
Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property
{\displaystyle \int _{-\infty }^{\infty }f(x)\delta (x-x_{0})\,dx=f(x_{0}).}
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
== Applications ==
=== Probability theory ===
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density function f(x) of a discrete distribution consisting of points x = {x1, ..., xn}, with corresponding probabilities p1, ..., pn, can be written as
{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
As another example, consider a distribution which 6/10 of the time returns a value drawn from a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as
{\displaystyle f(x)=0.6\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}+0.4\,\delta (x-3.5).}
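The delta term corresponds to an atom of the distribution: sampling is most easily done by first choosing the component. A short sketch (illustrative only):

```python
import numpy as np

# Sketch: sampling from the mixed density above.  The 0.4 delta(x - 3.5) term
# is an atom: with probability 0.4 the sample is exactly 3.5.
rng = np.random.default_rng(0)
n = 100_000
is_atom = rng.random(n) < 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

print(np.mean(samples == 3.5))      # about 0.4: weight of the delta component
print(samples[~is_atom].std())      # about 1.0: the continuous (normal) component
```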
The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. If Y = g(X), where g is a continuously differentiable function, then the density of Y can be written as
{\displaystyle f_{Y}(y)=\int _{-\infty }^{+\infty }f_{X}(x)\delta (y-g(x))\,dx.}
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process B(t) is given by
{\displaystyle \ell (x,t)=\int _{0}^{t}\delta (x-B(s))\,ds}
and represents the amount of time that the process spends at the point x in the range of the process. More precisely, in one dimension this integral can be written
{\displaystyle \ell (x,t)=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\varepsilon }}\int _{0}^{t}\mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}(B(s))\,ds}
where {\displaystyle \mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}} is the indicator function of the interval {\displaystyle [x-\varepsilon ,x+\varepsilon ].}
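The regularized formula lends itself to simulation. The following sketch (an approximation with a discretized random walk; step count and ε are arbitrary) estimates the local time of Brownian motion at the origin:

```python
import numpy as np

# Sketch: approximate B(t) by a scaled random walk and replace the delta by
# the indicator of [x - eps, x + eps] divided by 2*eps.
rng = np.random.default_rng(1)
T, steps = 1.0, 200_000
dt = T / steps
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), steps))))

x, eps = 0.0, 0.01
occupation = np.sum(np.abs(B - x) <= eps) * dt   # time spent in [x-eps, x+eps]
local_time = occupation / (2 * eps)

print(local_time)   # estimate of ell(0, 1); improves as eps -> 0 and steps -> inf
```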
=== Quantum mechanics ===
The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space L2 of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {|φn⟩} of wave functions is orthonormal if
{\displaystyle \langle \varphi _{n}\mid \varphi _{m}\rangle =\delta _{nm},}
where δnm is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function |ψ⟩ can be expressed as a linear combination of the {|φn⟩} with complex coefficients:
{\displaystyle \psi =\sum c_{n}\varphi _{n},}
where cn = ⟨φn|ψ⟩. Complete orthonormal systems of wave functions appear naturally in quantum mechanics as the eigenfunctions of the Hamiltonian (of a bound system), whose eigenvalues are the energy levels. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity:
{\displaystyle I=\sum |\varphi _{n}\rangle \langle \varphi _{n}|.}
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space. In this context, the position operator has a complete set of generalized eigenfunctions, labeled by the points y of the real line, given by
{\displaystyle \varphi _{y}(x)=\delta (x-y).}
The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by φy = |y⟩.
Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator P. In that case, there is a set Ω of real numbers (the spectrum) and a collection of distributions φy with y ∈ Ω such that
{\displaystyle P\varphi _{y}=y\varphi _{y}.}
That is, φy are the generalized eigenvectors of P. If they form an "orthonormal basis" in the distribution sense, that is:
{\displaystyle \langle \varphi _{y},\varphi _{y'}\rangle =\delta (y-y'),}
then for any test function ψ,
{\displaystyle \psi (x)=\int _{\Omega }c(y)\varphi _{y}(x)\,dy}
where c(y) = ⟨ψ, φy⟩. That is, there is a resolution of the identity
{\displaystyle I=\int _{\Omega }|\varphi _{y}\rangle \,\langle \varphi _{y}|\,dy}
where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.
=== Structural mechanics ===
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written
{\displaystyle m{\frac {d^{2}\xi }{dt^{2}}}+k\xi =I\delta (t),}
where m is the mass, ξ is the deflection, and k is the spring constant.
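The impulse transfers momentum I to the mass, so for t > 0 the response is ξ(t) = (I/(mω)) sin(ωt) with ω = √(k/m). The sketch below (illustrative values, with the delta replaced by a short rectangular pulse of the same area) checks this numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: impulse response of m xi'' + k xi = I delta(t), approximated by a
# short rectangular pulse of duration tau and area I.
m, k, I = 2.0, 50.0, 3.0
omega = np.sqrt(k / m)
tau = 1e-3                                    # pulse duration (approximates delta)

def rhs(t, y):
    force = I / tau if 0.0 <= t < tau else 0.0
    xi, v = y
    return [v, (force - k * xi) / m]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], max_step=tau / 5, dense_output=True)
t = np.linspace(0.0, 1.0, 5)
print(sol.sol(t)[0])                          # numerical deflection
print(I / (m * omega) * np.sin(omega * t))    # analytic impulse response, close match
```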
As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,
{\displaystyle EI{\frac {d^{4}w}{dx^{4}}}=q(x),}
where EI is the bending stiffness of the beam, w is the deflection, x is the spatial coordinate, and q(x) is the load distribution. If a beam is loaded by a point force F at x = x0, the load distribution is written
{\displaystyle q(x)=F\delta (x-x_{0}).}
As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
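For a concrete case (a sketch, not part of the article's derivation), take a cantilever clamped at x = 0 with a point load F at x = a and free end at x = L. Integrating EI w'''' = Fδ(x − a) with w'''(L) = w''(L) = 0 and w(0) = w'(0) = 0 gives the classical piecewise cubic, which the code below reproduces by quadruple numerical integration of a narrow load:

```python
import numpy as np

# Sketch: cantilever, point load F at x = a; the analytic deflection is
# piecewise cubic, w(L) = F a^2 (3L - a) / (6 EI) at the free end.
EI, L, F, a = 2.0e3, 1.0, 100.0, 0.4

def w(x):
    x = np.asarray(x, dtype=float)
    inside = F * x**2 * (3 * a - x) / (6 * EI)        # 0 <= x <= a
    outside = F * a**2 * (3 * x - a) / (6 * EI)       # a <= x <= L
    return np.where(x <= a, inside, outside)

x = np.linspace(0.0, L, 20001)
q = F * np.exp(-((x - a) / 1e-3) ** 2 / 2) / (1e-3 * np.sqrt(2 * np.pi))  # narrow load

def cumtrapz(y, x):                         # cumulative trapezoid from x[0]
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

c3 = cumtrapz(q, x) - np.trapz(q, x)        # EI*w'''  (zero at x = L)
c2 = cumtrapz(c3, x); c2 -= c2[-1]          # EI*w''   (zero at x = L)
slope = cumtrapz(c2 / EI, x)                # w'       (zero at x = 0)
defl = cumtrapz(slope, x)                   # w        (zero at x = 0)

print(defl[-1], w(L))                       # tip deflection matches the formula
```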
Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F at a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written
{\displaystyle {\begin{aligned}q(x)&=\lim _{d\to 0}{\Big (}F\delta (x)-F\delta (x-d){\Big )}\\[4pt]&=\lim _{d\to 0}\left({\frac {M}{d}}\delta (x)-{\frac {M}{d}}\delta (x-d)\right)\\[4pt]&=M\lim _{d\to 0}{\frac {\delta (x)-\delta (x-d)}{d}}\\[4pt]&=M\delta '(x).\end{aligned}}}
Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
== See also ==
Atom (measure theory)
Degenerate distribution
Laplacian of the indicator
Uncertainty principle
== Notes ==
== References ==
Aratyn, Henrik; Rasinariu, Constantin (2006), A short course in mathematical methods with Maple, World Scientific, ISBN 978-981-256-461-0.
Arfken, G. B.; Weber, H. J. (2000), Mathematical Methods for Physicists (5th ed.), Boston, Massachusetts: Academic Press, ISBN 978-0-12-059825-0.
ATIS (2013), ATIS Telecom Glossary, archived from the original on 2013-03-13
Bracewell, R. N. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill, Bibcode:1986ftia.book.....B.
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), McGraw-Hill.
Córdoba, A. (1988), "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376.
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience.
Davis, Howard Ted; Thomson, Kendall T (2000), Linear algebra and linear operators in engineering with applications in Mathematica, Academic Press, ISBN 978-0-12-206349-7
Dieudonné, Jean (1976), Treatise on analysis. Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR 0530406.
Dieudonné, Jean (1972), Treatise on analysis. Vol. III, Boston, Massachusetts: Academic Press, MR 0350769
Dirac, Paul (1930), The Principles of Quantum Mechanics (1st ed.), Oxford University Press.
Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, Bibcode:2003eoe..book.....D, ISBN 978-0-8247-0940-2.
Duistermaat, Hans; Kolk (2010), Distributions: Theory and applications, Springer.
Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, vol. 153, New York: Springer-Verlag, pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325.
Gannon, Terry (2008), "Vertex operator algebras", Princeton Companion to Mathematics, Princeton University Press, ISBN 978-1400830398.
Gelfand, I. M.; Shilov, G. E. (1966–1968), Generalized functions, vol. 1–5, Academic Press, ISBN 9781483262246.
Hartmann, William M. (1997), Signals, sound, and sensation, Springer, ISBN 978-1-56396-283-7.
Hazewinkel, Michiel (1995). Encyclopaedia of Mathematics (set). Springer Science & Business Media. ISBN 978-1-55608-010-4.
Hazewinkel, Michiel (2011). Encyclopaedia of mathematics. Vol. 10. Springer. ISBN 978-90-481-4896-7. OCLC 751862625.
Hewitt, E; Stromberg, K (1963), Real and abstract analysis, Springer-Verlag.
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 978-3-540-12104-6, MR 0717035.
Isham, C. J. (1995), Lectures on quantum theory: mathematical and structural foundations, Imperial College Press, Bibcode:1995lqtm.book.....I, ISBN 978-81-7764-190-5.
John, Fritz (1955), Plane waves and spherical means applied to partial differential equations, Interscience Publishers, New York-London, MR 0075429. Reprinted, Dover Publications, 2004, ISBN 9780486438047.
Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4757-2698-5, ISBN 978-0-387-94841-6, MR 1476913.
Lange, Rutger-Jan (2012), "Potential theory, path integrals and the Laplacian of the indicator", Journal of High Energy Physics, 2012 (11): 29–30, arXiv:1302.0864, Bibcode:2012JHEP...11..032L, doi:10.1007/JHEP11(2012)032, S2CID 56188533.
Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci., 39 (3): 195–245, doi:10.1007/BF00329867, S2CID 120890300.
Levin, Frank S. (2002), "Coordinate-space wave functions and completeness", An introduction to quantum theory, Cambridge University Press, pp. 109ff, ISBN 978-0-521-59841-5
Li, Y. T.; Wong, R. (2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal., 7 (2): 229–247, arXiv:1303.1943, doi:10.3934/cpaa.2008.7.229, MR 2373214, S2CID 119319140.
de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid.
de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys., 50 (2): 185–216, arXiv:quant-ph/0109154, Bibcode:2002ForPh..50..185D, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S, S2CID 9407651.
McMahon, D. (2005-11-22), "An Introduction to State Space" (PDF), Quantum Mechanics Demystified, A Self-Teaching Guide, Demystified Series, New York: McGraw-Hill, p. 108, ISBN 978-0-07-145546-6, retrieved 2008-03-17.
van der Pol, Balth.; Bremmer, H. (1987), Operational calculus (3rd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0327-6, MR 0904873.
Rudin, Walter (1966). Devine, Peter R. (ed.). Real and complex analysis (3rd ed.). New York: McGraw-Hill (published 1987). ISBN 0-07-100276-6.
Rudin, Walter (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 978-0-07-054236-5.
Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 9781911299486.
Saichev, A I; Woyczyński, Wojbor Andrzej (1997), "Chapter1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences: Distributional and fractal calculus, integral transforms, and wavelets, Birkhäuser, ISBN 978-0-8176-3924-2
Schwartz, L. (1950), Théorie des distributions, vol. 1, Hermann.
Schwartz, L. (1951), Théorie des distributions, vol. 2, Hermann.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 978-0-691-08078-9.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 978-0-8493-8273-4.
Vladimirov, V. S. (1971), Equations of mathematical physics, Marcel Dekker, ISBN 978-0-8247-1713-1.
Weisstein, Eric W. "Delta Function". MathWorld.
Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical Physics, 47 (9): 092301, Bibcode:2006JMP....47i2301Y, doi:10.1063/1.2339017
Yamashita, H. (2007), "Comment on "Pointwise analysis of scalar fields: A nonstandard approach" [J. Math. Phys. 47, 092301 (2006)]", Journal of Mathematical Physics, 48 (8): 084101, Bibcode:2007JMP....48h4101Y, doi:10.1063/1.2771422
== External links ==
Media related to Dirac distribution at Wikimedia Commons
"Delta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
KhanAcademy.org video lesson
The Dirac Delta function, a tutorial on the Dirac delta function.
Video Lectures – Lecture 23, a lecture by Arthur Mattuck.
The Dirac delta measure is a hyperfunction
We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure
Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure. Archived 2008-03-07 at the Wayback Machine | Wikipedia/Dirac_function |
In relativistic physics, the electromagnetic stress–energy tensor is the contribution to the stress–energy tensor due to the electromagnetic field. The stress–energy tensor describes the flow of energy and momentum in spacetime. The electromagnetic stress–energy tensor contains the negative of the classical Maxwell stress tensor that governs the electromagnetic interactions.
== Definition ==
=== ISQ convention ===
The electromagnetic stress–energy tensor in the International System of Quantities (ISQ), which underlies the SI, is
{\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,,}
where {\displaystyle F^{\mu \nu }} is the electromagnetic tensor and where {\displaystyle \eta _{\mu \nu }} is the Minkowski metric tensor of metric signature (− + + +); the Einstein summation convention over repeated indices is used.
Explicitly in matrix form:
{\displaystyle T^{\mu \nu }={\begin{bmatrix}u&{\frac {1}{c}}S_{\text{x}}&{\frac {1}{c}}S_{\text{y}}&{\frac {1}{c}}S_{\text{z}}\\{\frac {1}{c}}S_{\text{x}}&-\sigma _{\text{xx}}&-\sigma _{\text{xy}}&-\sigma _{\text{xz}}\\{\frac {1}{c}}S_{\text{y}}&-\sigma _{\text{yx}}&-\sigma _{\text{yy}}&-\sigma _{\text{yz}}\\{\frac {1}{c}}S_{\text{z}}&-\sigma _{\text{zx}}&-\sigma _{\text{zy}}&-\sigma _{\text{zz}}\end{bmatrix}},}
where
{\displaystyle u={\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)}
is the volumetric energy density,
{\displaystyle \mathbf {S} ={\frac {1}{\mu _{0}}}\mathbf {E} \times \mathbf {B} }
is the Poynting vector,
{\displaystyle \sigma _{ij}=\epsilon _{0}E_{i}E_{j}+{\frac {1}{\mu _{0}}}B_{i}B_{j}-{\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)\delta _{ij}}
is the Maxwell stress tensor, and
{\displaystyle c} is the speed of light. Thus, each component of {\displaystyle T^{\mu \nu }} is dimensionally equivalent to pressure (with SI unit pascal).
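As a numerical sketch (illustrative field values only), the matrix form above can be assembled directly from E and B in SI units; the result is traceless with respect to the (− + + +) metric:

```python
import numpy as np

# Sketch: build the SI matrix T^{mu nu} above from example E and B fields.
eps0, mu0 = 8.8541878128e-12, 4e-7 * np.pi
c = 1.0 / np.sqrt(eps0 * mu0)

E = np.array([1.0e3, 0.0, 0.0])          # V/m (example values)
B = np.array([0.0, 1.0e-5, 0.0])         # T   (example values)

u = 0.5 * (eps0 * E @ E + (B @ B) / mu0)                       # energy density
S = np.cross(E, B) / mu0                                       # Poynting vector
sigma = (eps0 * np.outer(E, E) + np.outer(B, B) / mu0
         - u * np.eye(3))                                      # Maxwell stress tensor

T = np.zeros((4, 4))
T[0, 0] = u
T[0, 1:] = T[1:, 0] = S / c
T[1:, 1:] = -sigma

print(T)                                          # every entry has units of Pa
print(np.trace(np.diag([-1, 1, 1, 1]) @ T))       # trace with (-+++) metric is ~0
```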
=== Gaussian CGS conventions ===
The quantities in the Gaussian system (shown here with a prime) that correspond to the permittivity of free space and permeability of free space are
{\displaystyle \epsilon _{0}'={\frac {1}{4\pi }},\quad \mu _{0}'=4\pi }
then:
{\displaystyle T^{\mu \nu }={\frac {1}{4\pi }}\left[F'^{\mu \alpha }F'^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F'_{\alpha \beta }F'^{\alpha \beta }\right]}
and in explicit matrix form:
{\displaystyle T^{\mu \nu }={\begin{bmatrix}u&{\frac {1}{c}}S_{\text{x}}&{\frac {1}{c}}S_{\text{y}}&{\frac {1}{c}}S_{\text{z}}\\{\frac {1}{c}}S_{\text{x}}&-\sigma _{\text{xx}}&-\sigma _{\text{xy}}&-\sigma _{\text{xz}}\\{\frac {1}{c}}S_{\text{y}}&-\sigma _{\text{yx}}&-\sigma _{\text{yy}}&-\sigma _{\text{yz}}\\{\frac {1}{c}}S_{\text{z}}&-\sigma _{\text{zx}}&-\sigma _{\text{zy}}&-\sigma _{\text{zz}}\end{bmatrix}}}
where the energy density becomes
{\displaystyle u={\frac {1}{8\pi }}\left(\mathbf {E} '^{2}+\mathbf {B} '^{2}\right)}
and the Poynting vector becomes
{\displaystyle \mathbf {S} ={\frac {c}{4\pi }}\mathbf {E} '\times \mathbf {B} '.}
The stress–energy tensor for an electromagnetic field in a dielectric medium is less well understood and is the subject of the Abraham–Minkowski controversy.
The element {\displaystyle T^{\mu \nu }} of the stress–energy tensor represents the flux of the component with index {\displaystyle \mu } of the four-momentum of the electromagnetic field, {\displaystyle P^{\mu }}, through a hyperplane of constant {\displaystyle x^{\nu }}. It represents the contribution of electromagnetism to the source of the gravitational field (curvature of spacetime) in general relativity.
== Algebraic properties ==
The electromagnetic stress–energy tensor has several algebraic properties:
The symmetry of the tensor is as for a general stress–energy tensor in general relativity. The trace of the energy–momentum tensor is a Lorentz scalar; the electromagnetic field (and in particular electromagnetic waves) has no Lorentz-invariant energy scale, so its energy–momentum tensor must have a vanishing trace. This tracelessness eventually relates to the masslessness of the photon.
== Conservation laws ==
The electromagnetic stress–energy tensor allows a compact way of writing the conservation laws of linear momentum and energy in electromagnetism. The divergence of the stress–energy tensor is:
{\displaystyle \partial _{\nu }T^{\mu \nu }+\eta ^{\mu \rho }\,f_{\rho }=0\,}
where {\displaystyle f_{\rho }} is the (4D) Lorentz force per unit volume on matter.
This equation is equivalent to the following 3D conservation laws
{\displaystyle {\begin{aligned}{\frac {\partial u_{\mathrm {em} }}{\partial t}}+\mathbf {\nabla } \cdot \mathbf {S} +\mathbf {J} \cdot \mathbf {E} &=0\\{\frac {\partial \mathbf {p} _{\mathrm {em} }}{\partial t}}-\mathbf {\nabla } \cdot \sigma +\rho \mathbf {E} +\mathbf {J} \times \mathbf {B} &=0\ \Leftrightarrow \ \epsilon _{0}\mu _{0}{\frac {\partial \mathbf {S} }{\partial t}}-\nabla \cdot \mathbf {\sigma } +\mathbf {f} =0\end{aligned}}}
respectively describing the electromagnetic energy density
{\displaystyle u_{\mathrm {em} }={\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)}
and electromagnetic momentum density
{\displaystyle \mathbf {p} _{\mathrm {em} }={\mathbf {S} \over {c^{2}}},}
where {\displaystyle \mathbf {J} } is the electric current density, {\displaystyle \rho } the electric charge density, and {\displaystyle \mathbf {f} } is the Lorentz force density.
== See also ==
Ricci calculus
Covariant formulation of classical electromagnetism
Mathematical descriptions of the electromagnetic field
Maxwell's equations
Maxwell's equations in curved spacetime
General relativity
Einstein field equations
Magnetohydrodynamics
Vector calculus
== References == | Wikipedia/Electromagnetic_stress-energy_tensor |
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects associated with a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.
== Definition ==
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
=== As multidimensional arrays ===
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted Tij , where i and j are indices running from 1 to n, or also by T ij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while Tij and T ij can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors.
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors
{\displaystyle \mathbf {\hat {e}} _{i}} are expressed in terms of the old basis vectors {\displaystyle \mathbf {e} _{j}} as,
{\displaystyle \mathbf {\hat {e}} _{i}=\sum _{j=1}^{n}\mathbf {e} _{j}R_{i}^{j}=\mathbf {e} _{j}R_{i}^{j}.}
Here R ji are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components vi of a column vector v transform with the inverse of the matrix R,
{\displaystyle {\hat {v}}^{i}=\left(R^{-1}\right)_{j}^{i}v^{j},}
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components, wi, of a covector (or row vector), w, transform with the matrix R itself,
{\displaystyle {\hat {w}}_{i}=w_{j}R_{i}^{j}.}
This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
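These two laws can be illustrated numerically. The sketch below (arbitrary example components, not part of the formal development) transforms a vector contravariantly and a covector covariantly and checks that their pairing is unchanged:

```python
import numpy as np

# Sketch: vector components transform with R^{-1}, covector components with R,
# so the pairing w_i v^i is invariant under the change of basis.
rng = np.random.default_rng(0)
R = rng.normal(size=(3, 3))            # change-of-basis matrix (assumed invertible)
v = rng.normal(size=3)                 # contravariant components v^j
w = rng.normal(size=3)                 # covariant components w_j

v_hat = np.linalg.inv(R) @ v           # v_hat^i = (R^{-1})^i_j v^j
w_hat = w @ R                          # w_hat_i = w_j R^j_i

print(w @ v, w_hat @ v_hat)            # the invariant pairing agrees
```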
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array
{\displaystyle T} that transforms under a change of basis matrix {\displaystyle R=\left(R_{i}^{j}\right)} by {\displaystyle {\hat {T}}=R^{-1}TR}. For the individual matrix entries, this transformation law has the form
{\displaystyle {\hat {T}}_{j'}^{i'}=\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}}
so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1).
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
{\displaystyle \mathbf {v} ={\hat {v}}^{i}\,\mathbf {\hat {e}} _{i}=\left(\left(R^{-1}\right)_{j}^{i}{v}^{j}\right)\left(\mathbf {e} _{k}R_{i}^{k}\right)=\left(\left(R^{-1}\right)_{j}^{i}R_{i}^{k}\right){v}^{j}\mathbf {e} _{k}=\delta _{j}^{k}{v}^{j}\mathbf {e} _{k}={v}^{k}\,\mathbf {e} _{k}={v}^{i}\,\mathbf {e} _{i}},
where {\displaystyle \delta _{j}^{k}} is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like {\displaystyle {v}^{i}\,\mathbf {e} _{i}} can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components
{\displaystyle (Tv)^{i}} are given by {\displaystyle (Tv)^{i}=T_{j}^{i}v^{j}}. These components transform contravariantly, since
{\displaystyle \left({\widehat {Tv}}\right)^{i'}={\hat {T}}_{j'}^{i'}{\hat {v}}^{j'}=\left[\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}\right]\left[\left(R^{-1}\right)_{k}^{j'}v^{k}\right]=\left(R^{-1}\right)_{i}^{i'}(Tv)^{i}.}
The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as,
{\displaystyle {\hat {T}}_{j'_{1},\ldots ,j'_{q}}^{i'_{1},\ldots ,i'_{p}}=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}\,T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}\,R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.
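A small numerical sketch (illustrative only) of the (p, q) = (1, 1) case shows the index form of the law reducing to matrix conjugation:

```python
import numpy as np

# Sketch: for a (1, 1)-tensor the transformation law above is T_hat = R^{-1} T R.
rng = np.random.default_rng(1)
n = 4
R = rng.normal(size=(n, n))
Rinv = np.linalg.inv(R)
T = rng.normal(size=(n, n))            # components T^i_j in the old basis

T_hat = np.einsum("ai,ij,jb->ab", Rinv, T, R)     # (R^{-1})^{i'}_i T^i_j R^j_{j'}
print(np.allclose(T_hat, Rinv @ T @ R))           # True: matrix conjugation
```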
This discussion motivates the following formal definition:
Definition. A tensor of type (p, q) is an assignment of a multidimensional array
{\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}[\mathbf {f} ]}
to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis
{\displaystyle \mathbf {f} \mapsto \mathbf {f} \cdot R=\left(\mathbf {e} _{i}R_{1}^{i},\dots ,\mathbf {e} _{i}R_{n}^{i}\right)}
then the multidimensional array obeys the transformation law
{\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}\,T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]\,R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If
{\displaystyle \mathbf {f} =(\mathbf {f} _{1},\dots ,\mathbf {f} _{n})} is an ordered basis, and {\displaystyle R=\left(R_{j}^{i}\right)} is an invertible {\displaystyle n\times n} matrix, then the action is given by
{\displaystyle \mathbf {f} R=\left(\mathbf {f} _{i}R_{1}^{i},\dots ,\mathbf {f} _{i}R_{n}^{i}\right).}
Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let
{\displaystyle \rho } be a representation of GL(n) on W (that is, a group homomorphism {\displaystyle \rho :{\text{GL}}(n)\to {\text{GL}}(W)}). Then a tensor of type {\displaystyle \rho } is an equivariant map {\displaystyle T:F\to W}. Equivariance here means that
{\displaystyle T(FR)=\rho \left(R^{-1}\right)T(F).}
When {\displaystyle \rho } is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups.
=== As multilinear maps ===
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (p, q) tensor T is defined as a multilinear map,
{\displaystyle T:\underbrace {V^{*}\times \dots \times V^{*}} _{p{\text{ copies}}}\times \underbrace {V\times \dots \times V} _{q{\text{ copies}}}\rightarrow \mathbf {R} ,}
where V∗ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers,
{\displaystyle \mathbb {R} }. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing {\displaystyle \mathbb {R} } as the codomain of the multilinear maps.
By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗,
{\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\equiv T\left({\boldsymbol {\varepsilon }}^{i_{1}},\ldots ,{\boldsymbol {\varepsilon }}^{i_{p}},\mathbf {e} _{j_{1}},\ldots ,\mathbf {e} _{j_{q}}\right),}
a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
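A concrete sketch of this evaluation (an illustration with an arbitrary bilinear map, not part of the formal argument): a (0, 2)-tensor given as a multilinear map yields its component array when fed basis vectors, and those components reproduce the map.

```python
import numpy as np

# Sketch: the bilinear map T(u, v) = u . A v is a (0, 2)-tensor.  Evaluating it
# on basis vectors recovers its components, and T(u, v) = T_ij u^i v^j.
rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n))
T = lambda u, v: u @ A @ v                    # the multilinear map itself

basis = np.eye(n)
components = np.array([[T(basis[i], basis[j]) for j in range(n)] for i in range(n)])

u, v = rng.normal(size=n), rng.normal(size=n)
print(np.allclose(T(u, v), np.einsum("ij,i,j->", components, u, v)))   # True
```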
In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.
=== Using tensor products ===
For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property as explained here and here.
A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces,
{\displaystyle T\in \underbrace {V\otimes \dots \otimes V} _{p{\text{ copies}}}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{q{\text{ copies}}}.}
A basis vi of V and basis wj of W naturally induce a basis vi ⊗ wj of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e.
{\displaystyle T=T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\;\mathbf {e} _{i_{1}}\otimes \cdots \otimes \mathbf {e} _{i_{p}}\otimes {\boldsymbol {\varepsilon }}^{j_{1}}\otimes \cdots \otimes {\boldsymbol {\varepsilon }}^{j_{q}}.}
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.
This 1 to 1 correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual:
{\displaystyle U\otimes V\cong \left(U^{**}\right)\otimes \left(V^{**}\right)\cong \left(U^{*}\otimes V^{*}\right)^{*}\cong \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)}
The last line is using the universal property of the tensor product, that there is a 1 to 1 correspondence between maps from
{\displaystyle \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} and {\displaystyle \operatorname {Hom} \left(U^{*}\otimes V^{*};\mathbb {F} \right)}.
Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.
=== Tensors in infinite dimensions ===
This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.
=== Tensor fields ===
In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions,
{\displaystyle {\bar {x}}^{i}\left(x^{1},\ldots ,x^{n}\right),}
defining a coordinate transformation,
{\displaystyle {\hat {T}}_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}\left({\bar {x}}^{1},\ldots ,{\bar {x}}^{n}\right)={\frac {\partial {\bar {x}}^{i'_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial {\bar {x}}^{i'_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial {\bar {x}}^{j'_{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial {\bar {x}}^{j'_{q}}}}T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\left(x^{1},\ldots ,x^{n}\right).}
== History ==
The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898.
Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.
In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product.
From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.
== Examples ==
An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors, shown in the figure (right). The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol
{\displaystyle \varepsilon _{ijk}} nevertheless allows a convenient handling of the cross product in equally oriented three dimensional coordinate systems.
This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.
Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table.
== Properties ==
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows tensors to be defined as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing {\displaystyle \varepsilon _{ijk}} not being a tensor, for the sign change under transformations changing the orientation.
Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors: n (contravariant indices) and dual vectors: m (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers.
The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector, is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. This way the tensor representing the scalar product, taking two vectors and resulting in a scalar has order 2 + 0 = 2, the same as the stress tensor, taking one vector and returning another 1 + 1 = 2. The
{\displaystyle \varepsilon _{ijk}}-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.
The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.
== Notation ==
There are several notational systems that are used to describe tensors and perform calculations involving them.
=== Ricci calculus ===
Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.
=== Einstein summation convention ===
The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
=== Penrose graphical notation ===
Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.
=== Abstract index notation ===
The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.
=== Component-free notation ===
A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.
== Operations ==
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.
=== Tensor product ===
The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,
{\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),}
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,
{\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.}
If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
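On component arrays the tensor product is an outer product, so the order of the result is the sum of the orders of the factors. A short sketch with arbitrary example components:

```python
import numpy as np

# Sketch: the components of S ⊗ T are the pairwise products of components.
rng = np.random.default_rng(3)
S = rng.normal(size=(3, 3))        # an order-2 array of components
T = rng.normal(size=(3,))          # an order-1 array of components

ST = np.tensordot(S, T, axes=0)    # components (S ⊗ T)_{ijk} = S_{ij} T_k
print(ST.shape)                    # (3, 3, 3): order 2 + 1 = 3
print(np.allclose(ST[1, 2, 0], S[1, 2] * T[0]))   # True
```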
=== Contraction ===
Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor
{\displaystyle T_{i}^{j}} can be contracted to a scalar through {\displaystyle T_{i}^{i}}, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.
The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor
{\displaystyle T\in V\otimes V\otimes V^{*}} can be written as a linear combination
{\displaystyle T=v_{1}\otimes w_{1}\otimes \alpha _{1}+v_{2}\otimes w_{2}\otimes \alpha _{2}+\cdots +v_{N}\otimes w_{N}\otimes \alpha _{N}.}
The contraction of T on the first and last slots is then the vector
{\displaystyle \alpha _{1}(v_{1})w_{1}+\alpha _{2}(v_{2})w_{2}+\cdots +\alpha _{N}(v_{N})w_{N}.}
In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor
{\displaystyle T^{ij}} can be contracted to a scalar through {\displaystyle T^{ij}g_{ij}} (yet again assuming the summation convention).
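Both kinds of contraction are easy to express on component arrays. The following sketch (arbitrary example components, with the Minkowski metric chosen purely for illustration) shows the trace of a (1, 1)-tensor and the metric contraction of a (2, 0)-tensor:

```python
import numpy as np

# Sketch: contraction sums a paired index.  For a (1, 1)-tensor it is the trace;
# with a metric g two upper indices can be contracted as well.
rng = np.random.default_rng(4)
T11 = rng.normal(size=(4, 4))                      # components T^i_j
print(np.einsum("ii->", T11), np.trace(T11))       # contraction = trace

g = np.diag([-1.0, 1.0, 1.0, 1.0])                 # an example metric (Minkowski)
T20 = rng.normal(size=(4, 4))                      # components T^{ij}
print(np.einsum("ij,ij->", T20, g))                # the scalar T^{ij} g_{ij}
```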
=== Raising or lowering an index ===
When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with lower index generally shown in the same position of the contracted upper index. This operation is quite graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
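The following short sketch (added for illustration; the metric components and vector are arbitrary assumptions) lowers an index with a metric and raises it again with the inverse metric, recovering the original components.
<syntaxhighlight lang="python">
import numpy as np

# An arbitrary symmetric, nondegenerate metric on a 3-dimensional space.
g = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
g_inv = np.linalg.inv(g)          # components g^{ij} of the inverse metric

v = np.array([1.0, -2.0, 0.5])    # a contravariant vector v^i

v_low = np.einsum('ij,j->i', g, v)           # lowering: v_i = g_{ij} v^j
v_back = np.einsum('ij,j->i', g_inv, v_low)  # raising:  v^i = g^{ij} v_j
assert np.allclose(v_back, v)
</syntaxhighlight>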
== Applications ==
=== Continuum mechanics ===
Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.
If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.
=== Other examples from physics ===
Common applications include:
Electromagnetic tensor (or Faraday tensor) in electromagnetism
Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics
Permittivity and electric susceptibility are tensors in anisotropic media
Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes
Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates
Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments
Quantum mechanics and quantum computing utilize tensor products for combination of quantum states
=== Computer vision and optics ===
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.
The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
{\displaystyle {\frac {P_{i}}{\varepsilon _{0}}}=\sum _{j}\chi _{ij}^{(1)}E_{j}+\sum _{jk}\chi _{ijk}^{(2)}E_{j}E_{k}+\sum _{jk\ell }\chi _{ijk\ell }^{(3)}E_{j}E_{k}E_{\ell }+\cdots .\!}
Here
{\displaystyle \chi ^{(1)}}
is the linear susceptibility,
{\displaystyle \chi ^{(2)}}
gives the Pockels effect and second harmonic generation, and
{\displaystyle \chi ^{(3)}}
gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
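As an illustration of how the susceptibility tensors enter this expansion (a sketch with made-up components, not physical data), each term is a contraction of a higher-order χ with copies of the field:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
chi1 = rng.normal(size=(3, 3))        # chi^(1)_ij   (illustrative values)
chi2 = rng.normal(size=(3, 3, 3))     # chi^(2)_ijk
chi3 = rng.normal(size=(3, 3, 3, 3))  # chi^(3)_ijkl
E = np.array([0.1, 0.0, 0.05])        # electric field components

# P_i / eps0 = chi1_ij E_j + chi2_ijk E_j E_k + chi3_ijkl E_j E_k E_l + ...
P_over_eps0 = (np.einsum('ij,j->i', chi1, E)
               + np.einsum('ijk,j,k->i', chi2, E, E)
               + np.einsum('ijkl,j,k,l->i', chi3, E, E, E))
print(P_over_eps0)
</syntaxhighlight>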
=== Machine learning ===
The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.
== Generalizations ==
=== Tensor products of vector spaces ===
The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.
=== Tensors in infinite dimensions ===
The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds.
=== Tensor densities ===
Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:
{\displaystyle m=\int _{\Omega }\rho \,dx\,dy\,dz,}
where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
{\displaystyle x'=100x,\quad y'=100y,\quad z'=100z.}
The numerical value of the density ρ must then also transform by 100−3 m3/cm3 to compensate, so that the numerical value of the mass in kg is still given by integral of
{\displaystyle \rho \,dx\,dy\,dz}
. Thus
{\displaystyle \rho '=100^{-3}\rho }
(in units of kg⋅cm−3).
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.
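A tiny numerical check of this rescaling (illustrative values only): multiplying the coordinates by 100 and the numerical value of the density by 100−3 leaves the mass unchanged.
<syntaxhighlight lang="python">
import math

rho = 2.5                 # density in kg/m^3, constant for simplicity
side_m = 0.4              # side of a cubic region, in metres
mass_m = rho * side_m**3  # integral of rho dx dy dz over the region

side_cm = 100 * side_m    # the same region measured in centimetres
rho_cm = rho * 100**-3    # rescaled numerical value of the density, in kg/cm^3
mass_cm = rho_cm * side_cm**3

assert math.isclose(mass_m, mass_cm)   # the mass in kilograms is unchanged
</syntaxhighlight>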
A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:
{\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left|\det R\right|^{-w}\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism.
Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of an (x, y) ∈ R2 with the transformation law
{\displaystyle (x,y)\mapsto (x+y\log \left|\det R\right|,y).}
=== Geometric objects ===
The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.
=== Spinors ===
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.
Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
== See also ==
The dictionary definition of tensor at Wiktionary
Array data type, for tensor storage and manipulation
Bitensor
=== Foundational ===
=== Applications ===
== Explanatory notes ==
== References ==
=== Specific ===
=== General ===
This article incorporates material from tensor on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== External links == | Wikipedia/Tensor_transformation_law |
In mathematics, particularly in linear algebra, tensor analysis, and differential geometry, the Levi-Civita symbol or Levi-Civita epsilon represents a collection of numbers defined from the sign of a permutation of the natural numbers 1, 2, ..., n, for some positive integer n. It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol, which refer to its antisymmetric property and definition in terms of permutations.
The standard letters to denote the Levi-Civita symbol are the Greek lower case epsilon ε or ϵ, or less commonly the Latin lower case e. Index notation allows one to display permutations in a way compatible with tensor analysis:
{\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}}
where each index i1, i2, ..., in takes values 1, 2, ..., n. There are nⁿ indexed values of εi1i2...in, which can be arranged into an n-dimensional array. The key defining property of the symbol is total antisymmetry in the indices. When any two indices are interchanged, equal or not, the symbol is negated:
{\displaystyle \varepsilon _{\dots i_{p}\dots i_{q}\dots }=-\varepsilon _{\dots i_{q}\dots i_{p}\dots }.}
If any two indices are equal, the symbol is zero. When all indices are unequal, we have:
{\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}=(-1)^{p}\varepsilon _{1\,2\,\dots n},}
where p (called the parity of the permutation) is the number of pairwise interchanges of indices necessary to unscramble i1, i2, ..., in into the order 1, 2, ..., n, and the factor (−1)p is called the sign, or signature of the permutation. The value ε1 2 ... n must be defined, else the particular values of the symbol for all permutations are indeterminate. Most authors choose ε1 2 ... n = +1, which means the Levi-Civita symbol equals the sign of a permutation when the indices are all unequal. This choice is used throughout this article.
The term "n-dimensional Levi-Civita symbol" refers to the fact that the number of indices on the symbol n matches the dimensionality of the vector space in question, which may be Euclidean or non-Euclidean, for example,
{\displaystyle \mathbb {R} ^{3}}
or Minkowski space. The values of the Levi-Civita symbol are independent of any metric tensor and coordinate system. Also, the specific term "symbol" emphasizes that it is not a tensor because of how it transforms between coordinate systems; however it can be interpreted as a tensor density.
The Levi-Civita symbol allows the determinant of a square matrix, and the cross product of two vectors in three-dimensional Euclidean space, to be expressed in Einstein index notation.
== Definition ==
The Levi-Civita symbol is most often used in three and four dimensions, and to some extent in two dimensions, so these are given here before defining the general case.
=== Two dimensions ===
In two dimensions, the Levi-Civita symbol is defined by:
{\displaystyle \varepsilon _{ij}={\begin{cases}+1&{\text{if }}(i,j)=(1,2)\\-1&{\text{if }}(i,j)=(2,1)\\\;\;\,0&{\text{if }}i=j\end{cases}}}
The values can be arranged into a 2 × 2 antisymmetric matrix:
{\displaystyle {\begin{pmatrix}\varepsilon _{11}&\varepsilon _{12}\\\varepsilon _{21}&\varepsilon _{22}\end{pmatrix}}={\begin{pmatrix}0&1\\-1&0\end{pmatrix}}}
Use of the two-dimensional symbol is common in condensed matter, and in certain specialized high-energy topics like supersymmetry and twistor theory, where it appears in the context of 2-spinors.
=== Three dimensions ===
In three dimensions, the Levi-Civita symbol is defined by:
{\displaystyle \varepsilon _{ijk}={\begin{cases}+1&{\text{if }}(i,j,k){\text{ is }}(1,2,3),(2,3,1),{\text{ or }}(3,1,2),\\-1&{\text{if }}(i,j,k){\text{ is }}(3,2,1),(1,3,2),{\text{ or }}(2,1,3),\\\;\;\,0&{\text{if }}i=j,{\text{ or }}j=k,{\text{ or }}k=i\end{cases}}}
That is, εijk is 1 if (i, j, k) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 if any index is repeated. In three dimensions only, the cyclic permutations of (1, 2, 3) are all even permutations, similarly the anticyclic permutations are all odd permutations. This means in 3d it is sufficient to take cyclic or anticyclic permutations of (1, 2, 3) and easily obtain all the even or odd permutations.
Analogous to 2-dimensional matrices, the values of the 3-dimensional Levi-Civita symbol can be arranged into a 3 × 3 × 3 array:
where i is the depth (blue: i = 1; red: i = 2; green: i = 3), j is the row and k is the column.
Some examples:
{\displaystyle {\begin{aligned}\varepsilon _{\color {BrickRed}{1}\color {Violet}{3}\color {Orange}{2}}=-\varepsilon _{\color {BrickRed}{1}\color {Orange}{2}\color {Violet}{3}}&=-1\\\varepsilon _{\color {Violet}{3}\color {BrickRed}{1}\color {Orange}{2}}=-\varepsilon _{\color {Orange}{2}\color {BrickRed}{1}\color {Violet}{3}}&=-(-\varepsilon _{\color {BrickRed}{1}\color {Orange}{2}\color {Violet}{3}})=1\\\varepsilon _{\color {Orange}{2}\color {Violet}{3}\color {BrickRed}{1}}=-\varepsilon _{\color {BrickRed}{1}\color {Violet}{3}\color {Orange}{2}}&=-(-\varepsilon _{\color {BrickRed}{1}\color {Orange}{2}\color {Violet}{3}})=1\\\varepsilon _{\color {Orange}{2}\color {Violet}{3}\color {Orange}{2}}=-\varepsilon _{\color {Orange}{2}\color {Violet}{3}\color {Orange}{2}}&=0\end{aligned}}}
=== Four dimensions ===
In four dimensions, the Levi-Civita symbol is defined by:
{\displaystyle \varepsilon _{ijkl}={\begin{cases}+1&{\text{if }}(i,j,k,l){\text{ is an even permutation of }}(1,2,3,4)\\-1&{\text{if }}(i,j,k,l){\text{ is an odd permutation of }}(1,2,3,4)\\\;\;\,0&{\text{otherwise}}\end{cases}}}
These values can be arranged into a 4 × 4 × 4 × 4 array, although in 4 dimensions and higher this is difficult to draw.
Some examples:
{\displaystyle {\begin{aligned}\varepsilon _{\color {BrickRed}{1}\color {RedViolet}{4}\color {Violet}{3}\color {Orange}{\color {Orange}{2}}}=-\varepsilon _{\color {BrickRed}{1}\color {Orange}{\color {Orange}{2}}\color {Violet}{3}\color {RedViolet}{4}}&=-1\\\varepsilon _{\color {Orange}{\color {Orange}{2}}\color {BrickRed}{1}\color {Violet}{3}\color {RedViolet}{4}}=-\varepsilon _{\color {BrickRed}{1}\color {Orange}{\color {Orange}{2}}\color {Violet}{3}\color {RedViolet}{4}}&=-1\\\varepsilon _{\color {RedViolet}{4}\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {BrickRed}{1}}=-\varepsilon _{\color {BrickRed}{1}\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {RedViolet}{4}}&=-(-\varepsilon _{\color {BrickRed}{1}\color {Orange}{\color {Orange}{2}}\color {Violet}{3}\color {RedViolet}{4}})=1\\\varepsilon _{\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {RedViolet}{4}\color {Violet}{3}}=-\varepsilon _{\color {Violet}{3}\color {Orange}{\color {Orange}{2}}\color {RedViolet}{4}\color {Violet}{3}}&=0\end{aligned}}}
=== Generalization to n dimensions ===
More generally, in n dimensions, the Levi-Civita symbol is defined by:
{\displaystyle \varepsilon _{a_{1}a_{2}a_{3}\ldots a_{n}}={\begin{cases}+1&{\text{if }}(a_{1},a_{2},a_{3},\ldots ,a_{n}){\text{ is an even permutation of }}(1,2,3,\dots ,n)\\-1&{\text{if }}(a_{1},a_{2},a_{3},\ldots ,a_{n}){\text{ is an odd permutation of }}(1,2,3,\dots ,n)\\\;\;\,0&{\text{otherwise}}\end{cases}}}
Thus, it is the sign of the permutation in the case of a permutation, and zero otherwise.
Using the capital pi notation Π for ordinary multiplication of numbers, an explicit expression for the symbol is:
{\displaystyle {\begin{aligned}\varepsilon _{a_{1}a_{2}a_{3}\ldots a_{n}}&=\prod _{1\leq i<j\leq n}\operatorname {sgn}(a_{j}-a_{i})\\&=\operatorname {sgn}(a_{2}-a_{1})\operatorname {sgn}(a_{3}-a_{1})\dotsm \operatorname {sgn}(a_{n}-a_{1})\operatorname {sgn}(a_{3}-a_{2})\operatorname {sgn}(a_{4}-a_{2})\dotsm \operatorname {sgn}(a_{n}-a_{2})\dotsm \operatorname {sgn}(a_{n}-a_{n-1})\end{aligned}}}
where the signum function (denoted sgn) returns the sign of its argument while discarding the absolute value if nonzero. The formula is valid for all index values, and for any n (when n = 0 or n = 1, this is the empty product). However, computing the formula above naively has a time complexity of O(n2), whereas the sign can be computed from the parity of the permutation from its disjoint cycles in only O(n log(n)) cost.
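The following short sketch (an added illustration, not part of the original text) evaluates the symbol from the parity of the permutation, using a cycle decomposition rather than the product formula above:
<syntaxhighlight lang="python">
def levi_civita(indices):
    """Levi-Civita symbol for a tuple of 1-based indices (illustrative helper)."""
    n = len(indices)
    if sorted(indices) != list(range(1, n + 1)):   # repeated or out-of-range index
        return 0
    perm = [i - 1 for i in indices]
    visited = [False] * n
    sign = 1
    for start in range(n):                         # walk the disjoint cycles
        if visited[start]:
            continue
        length, j = 0, start
        while not visited[j]:
            visited[j] = True
            j = perm[j]
            length += 1
        if length % 2 == 0:                        # an even-length cycle flips the sign
            sign = -sign
    return sign

assert levi_civita((1, 2, 3)) == 1
assert levi_civita((3, 2, 1)) == -1
assert levi_civita((1, 1, 3)) == 0
</syntaxhighlight>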
== Properties ==
A tensor whose components in an orthonormal basis are given by the Levi-Civita symbol (a tensor of covariant rank n) is sometimes called a permutation tensor.
Under the ordinary transformation rules for tensors the Levi-Civita symbol is unchanged under pure rotations, consistent with that it is (by definition) the same in all coordinate systems related by orthogonal transformations. However, the Levi-Civita symbol is a pseudotensor because under an orthogonal transformation of Jacobian determinant −1, for example, a reflection in an odd number of dimensions, it should acquire a minus sign if it were a tensor. As it does not change at all, the Levi-Civita symbol is, by definition, a pseudotensor.
As the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector, not a vector.
Under a general coordinate change, the components of the permutation tensor are multiplied by the Jacobian of the transformation matrix. This implies that in coordinate frames different from the one in which the tensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor. If the frame is orthonormal, the factor will be ±1 depending on whether the orientation of the frame is the same or not.
In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual.
Summation symbols can be eliminated by using Einstein notation, where an index repeated between two or more terms indicates summation over that index. For example,
{\displaystyle \varepsilon _{ijk}\varepsilon ^{imn}\equiv \sum _{i=1,2,3}\varepsilon _{ijk}\varepsilon ^{imn}}
.
In the following examples, Einstein notation is used.
=== Two dimensions ===
In two dimensions, when all i, j, m, n each take the values 1 and 2:
=== Three dimensions ===
==== Index and symbol values ====
In three dimensions, when all i, j, k, m, n each take values 1, 2, and 3:
==== Product ====
The Levi-Civita symbol is related to the Kronecker delta. In three dimensions, the relationship is given by the following equations (vertical lines denote the determinant):
{\displaystyle {\begin{aligned}\varepsilon _{ijk}\varepsilon _{lmn}&={\begin{vmatrix}\delta _{il}&\delta _{im}&\delta _{in}\\\delta _{jl}&\delta _{jm}&\delta _{jn}\\\delta _{kl}&\delta _{km}&\delta _{kn}\\\end{vmatrix}}\\[6pt]&=\delta _{il}\left(\delta _{jm}\delta _{kn}-\delta _{jn}\delta _{km}\right)-\delta _{im}\left(\delta _{jl}\delta _{kn}-\delta _{jn}\delta _{kl}\right)+\delta _{in}\left(\delta _{jl}\delta _{km}-\delta _{jm}\delta _{kl}\right).\end{aligned}}}
A special case of this result occurs when one of the indices is repeated and summed over:
{\displaystyle \sum _{i=1}^{3}\varepsilon _{ijk}\varepsilon _{imn}=\delta _{jm}\delta _{kn}-\delta _{jn}\delta _{km}}
In Einstein notation, the duplication of the i index implies the sum on i. The previous is then denoted εijkεimn = δjmδkn − δjnδkm.
If two indices are repeated (and summed over), this further reduces to:
{\displaystyle \sum _{i=1}^{3}\sum _{j=1}^{3}\varepsilon _{ijk}\varepsilon _{ijn}=2\delta _{kn}}
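These contraction identities are easy to verify numerically; a minimal sketch (illustrative only) builds the three-dimensional symbol as an array and checks both reductions above for all free indices:
<syntaxhighlight lang="python">
import numpy as np

# The 3-dimensional Levi-Civita symbol as a 3 x 3 x 3 array (0-based indices).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1
delta = np.eye(3)

# eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = (np.einsum('jm,kn->jkmn', delta, delta)
       - np.einsum('jn,km->jkmn', delta, delta))
assert np.allclose(lhs, rhs)

# eps_ijk eps_ijn = 2 delta_kn
assert np.allclose(np.einsum('ijk,ijn->kn', eps, eps), 2 * delta)
</syntaxhighlight>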
=== n dimensions ===
==== Index and symbol values ====
In n dimensions, when all i1, ...,in, j1, ..., jn take values 1, 2, ..., n:
where the exclamation mark (!) denotes the factorial, and δα...β... is the generalized Kronecker delta. For any n, the property
{\displaystyle \sum _{i,j,k,\dots =1}^{n}\varepsilon _{ijk\dots }\varepsilon _{ijk\dots }=n!}
follows from the facts that
every permutation is either even or odd,
(+1)2 = (−1)2 = 1, and
the number of permutations of any n-element set is exactly n!.
The particular case of (8) with
{\textstyle k=n-2}
is
{\displaystyle \varepsilon _{i_{1}\dots i_{n-2}jk}\varepsilon ^{i_{1}\dots i_{n-2}lm}=(n-2)!(\delta _{j}^{l}\delta _{k}^{m}-\delta _{j}^{m}\delta _{k}^{l})\,.}
==== Product ====
In general, for n dimensions, one can write the product of two Levi-Civita symbols as:
{\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}\varepsilon _{j_{1}j_{2}\dots j_{n}}={\begin{vmatrix}\delta _{i_{1}j_{1}}&\delta _{i_{1}j_{2}}&\dots &\delta _{i_{1}j_{n}}\\\delta _{i_{2}j_{1}}&\delta _{i_{2}j_{2}}&\dots &\delta _{i_{2}j_{n}}\\\vdots &\vdots &\ddots &\vdots \\\delta _{i_{n}j_{1}}&\delta _{i_{n}j_{2}}&\dots &\delta _{i_{n}j_{n}}\\\end{vmatrix}}.}
Proof: Both sides change signs upon switching two indices, so without loss of generality assume
{\displaystyle i_{1}\leq \cdots \leq i_{n},j_{1}\leq \cdots \leq j_{n}}
. If some
{\displaystyle i_{c}=i_{c+1}}
then the left side is zero, and the right side is also zero since two of its rows are equal. Similarly for
{\displaystyle j_{c}=j_{c+1}}
. Finally, if
{\displaystyle i_{1}<\cdots <i_{n},j_{1}<\cdots <j_{n}}
, then both sides are 1.
=== Proofs ===
For (1), both sides are antisymmetric with respect of ij and mn. We therefore only need to consider the case i ≠ j and m ≠ n. By substitution, we see that the equation holds for ε12ε12, that is, for i = m = 1 and j = n = 2. (Both sides are then one). Since the equation is antisymmetric in ij and mn, any set of values for these can be reduced to the above case (which holds). The equation thus holds for all values of ij and mn.
Using (1), we have for (2)
{\displaystyle \varepsilon _{ij}\varepsilon ^{in}=\delta _{i}{}^{i}\delta _{j}{}^{n}-\delta _{i}{}^{n}\delta _{j}{}^{i}=2\delta _{j}{}^{n}-\delta _{j}{}^{n}=\delta _{j}{}^{n}\,.}
Here we used the Einstein summation convention with i going from 1 to 2. Next, (3) follows similarly from (2).
To establish (5), notice that both sides vanish when i ≠ j. Indeed, if i ≠ j, then one can not choose m and n such that both permutation symbols on the left are nonzero. Then, with i = j fixed, there are only two ways to choose m and n from the remaining two indices. For any such indices, we have
{\displaystyle \varepsilon _{jmn}\varepsilon ^{imn}=\left(\varepsilon ^{imn}\right)^{2}=1}
(no summation), and the result follows.
Then (6) follows since 3! = 6 and for any distinct indices i, j, k taking values 1, 2, 3, we have
{\displaystyle \varepsilon _{ijk}\varepsilon ^{ijk}=1}
(no summation, distinct i, j, k)
== Applications and examples ==
=== Determinants ===
In linear algebra, the determinant of a 3 × 3 square matrix A = [aij] can be written
{\displaystyle \det(\mathbf {A} )=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon _{ijk}a_{1i}a_{2j}a_{3k}}
Similarly the determinant of an n × n matrix A = [aij] can be written as
{\displaystyle \det(\mathbf {A} )=\varepsilon _{i_{1}\dots i_{n}}a_{1i_{1}}\dots a_{ni_{n}},}
where each ir should be summed over 1, ..., n, or equivalently:
{\displaystyle \det(\mathbf {A} )={\frac {1}{n!}}\varepsilon _{i_{1}\dots i_{n}}\varepsilon _{j_{1}\dots j_{n}}a_{i_{1}j_{1}}\dots a_{i_{n}j_{n}},}
where now each ir and each jr should be summed over 1, ..., n. More generally, we have the identity
{\displaystyle \sum _{i_{1},i_{2},\dots }\varepsilon _{i_{1}\dots i_{n}}a_{i_{1}\,j_{1}}\dots a_{i_{n}\,j_{n}}=\det(\mathbf {A} )\varepsilon _{j_{1}\dots j_{n}}}
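The determinant formula can be spelled out directly as a sum over permutations; the following sketch (illustrative only, and far slower than standard methods since it enumerates all n! permutations) implements it and compares the result with a library determinant:
<syntaxhighlight lang="python">
import numpy as np
from itertools import permutations

def det_via_levi_civita(A):
    """det(A) = sum over permutations p of sign(p) * a_{1 p(1)} ... a_{n p(n)}."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):                 # sign from the number of inversions
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = float(sign)
        for row, col in enumerate(perm):
            term *= A[row, col]
        total += term
    return total

A = np.random.rand(4, 4)
assert np.isclose(det_via_levi_civita(A), np.linalg.det(A))
</syntaxhighlight>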
=== Vector cross product ===
==== Cross product (two vectors) ====
Let
(
e
1
,
e
2
,
e
3
)
{\displaystyle (\mathbf {e_{1}} ,\mathbf {e_{2}} ,\mathbf {e_{3}} )}
be a positively oriented orthonormal basis of a vector space. If (a1, a2, a3) and (b1, b2, b3) are the coordinates of the vectors a and b in this basis, then their cross product can be written as a determinant:
{\displaystyle \mathbf {a\times b} ={\begin{vmatrix}\mathbf {e_{1}} &\mathbf {e_{2}} &\mathbf {e_{3}} \\a^{1}&a^{2}&a^{3}\\b^{1}&b^{2}&b^{3}\\\end{vmatrix}}=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon _{ijk}\mathbf {e} _{i}a^{j}b^{k}}
hence also using the Levi-Civita symbol, and more simply:
{\displaystyle (\mathbf {a\times b} )^{i}=\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon _{ijk}a^{j}b^{k}.}
In Einstein notation, the summation symbols may be omitted, and the ith component of their cross product equals
{\displaystyle (\mathbf {a\times b} )^{i}=\varepsilon _{ijk}a^{j}b^{k}.}
The first component is
{\displaystyle (\mathbf {a\times b} )^{1}=a^{2}b^{3}-a^{3}b^{2}\,,}
then by cyclic permutations of 1, 2, 3 the others can be derived immediately, without explicitly calculating them from the above formulae:
{\displaystyle {\begin{aligned}(\mathbf {a\times b} )^{2}&=a^{3}b^{1}-a^{1}b^{3}\,,\\(\mathbf {a\times b} )^{3}&=a^{1}b^{2}-a^{2}b^{1}\,.\end{aligned}}}
==== Triple scalar product (three vectors) ====
From the above expression for the cross product, we have:
{\displaystyle \mathbf {a\times b} =-\mathbf {b\times a} }
.
If c = (c1, c2, c3) is a third vector, then the triple scalar product equals
{\displaystyle \mathbf {a} \cdot (\mathbf {b\times c} )=\varepsilon _{ijk}a^{i}b^{j}c^{k}.}
From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any pair of arguments. For example,
{\displaystyle \mathbf {a} \cdot (\mathbf {b\times c} )=-\mathbf {b} \cdot (\mathbf {a\times c} )}
.
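Both the cross-product and triple-scalar-product expressions above are single contractions with the symbol, which the following sketch (with arbitrary sample vectors, NumPy assumed) checks against the built-in operations:
<syntaxhighlight lang="python">
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 4.0])
c = np.array([0.0, 1.0, -2.0])

# (a x b)^i = eps_ijk a^j b^k
cross = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross, np.cross(a, b))

# a . (b x c) = eps_ijk a^i b^j c^k
triple = np.einsum('ijk,i,j,k->', eps, a, b, c)
assert np.isclose(triple, np.dot(a, np.cross(b, c)))
</syntaxhighlight>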
==== Curl (one vector field) ====
Let F = (F1, F2, F3) be a vector field defined on some open set of
{\displaystyle \mathbb {R} ^{3}}
as a function of position x = (x1, x2, x3) (using Cartesian coordinates). Then the ith component of the curl of F equals
{\displaystyle (\nabla \times \mathbf {F} )^{i}(\mathbf {x} )=\varepsilon _{ijk}{\frac {\partial }{\partial x^{j}}}F^{k}(\mathbf {x} ),}
which follows from the cross product expression above, substituting components of the gradient vector operator (nabla).
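A minimal numerical sketch of this formula (added for illustration; the sample field, evaluation point and step size are arbitrary choices) approximates the partial derivatives by central differences and contracts them with the symbol:
<syntaxhighlight lang="python">
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[2, 1, 0] = eps[0, 2, 1] = eps[1, 0, 2] = -1

def F(x):
    # An arbitrary smooth vector field chosen for illustration.
    return np.array([x[1] * x[2], np.sin(x[0]), x[0] ** 2])

def curl(F, x, h=1e-6):
    # Numerical Jacobian J[j, k] = dF^k/dx^j, then (curl F)^i = eps_ijk J[j, k].
    J = np.zeros((3, 3))
    for j in range(3):
        dx = np.zeros(3)
        dx[j] = h
        J[j] = (F(x + dx) - F(x - dx)) / (2 * h)
    return np.einsum('ijk,jk->i', eps, J)

x0 = np.array([0.3, -1.2, 2.0])
expected = np.array([0.0, x0[1] - 2 * x0[0], np.cos(x0[0]) - x0[2]])   # analytic curl
assert np.allclose(curl(F, x0), expected, atol=1e-5)
</syntaxhighlight>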
== Tensor density ==
In any arbitrary curvilinear coordinate system and even in the absence of a metric on the manifold, the Levi-Civita symbol as defined above may be considered to be a tensor density field in two different ways. It may be regarded as a contravariant tensor density of weight +1 or as a covariant tensor density of weight −1. In n dimensions using the generalized Kronecker delta,
{\displaystyle {\begin{aligned}\varepsilon ^{\mu _{1}\dots \mu _{n}}&=\delta _{\,1\,\dots \,n}^{\mu _{1}\dots \mu _{n}}\,\\\varepsilon _{\nu _{1}\dots \nu _{n}}&=\delta _{\nu _{1}\dots \nu _{n}}^{\,1\,\dots \,n}\,.\end{aligned}}}
Notice that these are numerically identical. In particular, the sign is the same.
== Levi-Civita tensors ==
On a pseudo-Riemannian manifold, one may define a coordinate-invariant covariant tensor field whose coordinate representation agrees with the Levi-Civita symbol wherever the coordinate system is such that the basis of the tangent space is orthonormal with respect to the metric and matches a selected orientation. This tensor should not be confused with the tensor density field mentioned above. The presentation in this section closely follows Carroll 2004.
The covariant Levi-Civita tensor (also known as the Riemannian volume form) in any coordinate system that matches the selected orientation is
{\displaystyle E_{a_{1}\dots a_{n}}={\sqrt {\left|\det[g_{ab}]\right|}}\,\varepsilon _{a_{1}\dots a_{n}}\,,}
where gab is the representation of the metric in that coordinate system. We can similarly consider a contravariant Levi-Civita tensor by raising the indices with the metric as usual,
{\displaystyle E^{a_{1}\dots a_{n}}=E_{b_{1}\dots b_{n}}\prod _{i=1}^{n}g^{a_{i}b_{i}}={\frac {1}{\sqrt {\left|\det[g_{ab}]\right|}}}\,\varepsilon ^{a_{1}\dots a_{n}}\,,}
but notice that if the metric signature contains an odd number of negative eigenvalues q, then the sign of the components of this tensor differ from the standard Levi-Civita symbol:
{\displaystyle E^{a_{1}\dots a_{n}}={\frac {\operatorname {sgn} \left(\det[g_{ab}]\right)}{\sqrt {\left|\det[g_{ab}]\right|}}}\,\varepsilon ^{a_{1}\dots a_{n}},}
where sgn(det[gab]) = (−1)q,
{\displaystyle \varepsilon _{a_{1}\dots a_{n}}}
is the usual Levi-Civita symbol discussed in the rest of this article, and we used the definition of the metric determinant in the derivation. More explicitly, when the tensor and basis orientation are chosen such that
{\textstyle E_{01\dots n}=+{\sqrt {\left|\det[g_{ab}]\right|}}}
, we have that
{\displaystyle E^{01\dots n}={\frac {\operatorname {sgn}(\det[g_{ab}])}{\sqrt {\left|\det[g_{ab}]\right|}}}}
.
From this we can infer the identity,
{\displaystyle E^{\mu _{1}\dots \mu _{p}\alpha _{1}\dots \alpha _{n-p}}E_{\mu _{1}\dots \mu _{p}\beta _{1}\dots \beta _{n-p}}=(-1)^{q}p!\delta _{\beta _{1}\dots \beta _{n-p}}^{\alpha _{1}\dots \alpha _{n-p}}\,,}
where
{\displaystyle \delta _{\beta _{1}\dots \beta _{n-p}}^{\alpha _{1}\dots \alpha _{n-p}}=(n-p)!\delta _{\beta _{1}}^{\lbrack \alpha _{1}}\dots \delta _{\beta _{n-p}}^{\alpha _{n-p}\rbrack }}
is the generalized Kronecker delta.
=== Example: Minkowski space ===
In Minkowski space (the four-dimensional spacetime of special relativity), the covariant Levi-Civita tensor is
{\displaystyle E_{\alpha \beta \gamma \delta }=\pm {\sqrt {\left|\det[g_{\mu \nu }]\right|}}\,\varepsilon _{\alpha \beta \gamma \delta }\,,}
where the sign depends on the orientation of the basis. The contravariant Levi-Civita tensor is
{\displaystyle E^{\alpha \beta \gamma \delta }=g^{\alpha \zeta }g^{\beta \eta }g^{\gamma \theta }g^{\delta \iota }E_{\zeta \eta \theta \iota }\,.}
The following are examples of the general identity above specialized to Minkowski space (with the negative sign arising from the odd number of negatives in the signature of the metric tensor in either sign convention):
{\displaystyle {\begin{aligned}E_{\alpha \beta \gamma \delta }E_{\rho \sigma \mu \nu }&=-g_{\alpha \zeta }g_{\beta \eta }g_{\gamma \theta }g_{\delta \iota }\delta _{\rho \sigma \mu \nu }^{\zeta \eta \theta \iota }\\E^{\alpha \beta \gamma \delta }E^{\rho \sigma \mu \nu }&=-g^{\alpha \zeta }g^{\beta \eta }g^{\gamma \theta }g^{\delta \iota }\delta _{\zeta \eta \theta \iota }^{\rho \sigma \mu \nu }\\E^{\alpha \beta \gamma \delta }E_{\alpha \beta \gamma \delta }&=-24\\E^{\alpha \beta \gamma \delta }E_{\rho \beta \gamma \delta }&=-6\delta _{\rho }^{\alpha }\\E^{\alpha \beta \gamma \delta }E_{\rho \sigma \gamma \delta }&=-2\delta _{\rho \sigma }^{\alpha \beta }\\E^{\alpha \beta \gamma \delta }E_{\rho \sigma \theta \delta }&=-\delta _{\rho \sigma \theta }^{\alpha \beta \gamma }\,.\end{aligned}}}
== See also ==
List of permutation topics
Symmetric tensor
== Notes ==
== References ==
Misner, C.; Thorne, K. S.; Wheeler, J. A. (1973). Gravitation. W. H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0.
Neuenschwander, D. E. (2015). Tensor Calculus for Physics. Johns Hopkins University Press. pp. 11, 29, 95. ISBN 978-1-4214-1565-9.
Carroll, Sean M. (2004), Spacetime and Geometry, Addison-Wesley, ISBN 0-8053-8732-3
== External links ==
This article incorporates material from Levi-Civita permutation symbol on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Weisstein, Eric W. "Permutation Tensor". MathWorld. | Wikipedia/Levi-Civita_tensor |
The magnetic radiation reaction force is a force on an electromagnet when its magnetic moment changes. One can derive an electric radiation reaction force for an accelerating charged particle caused by the particle emitting electromagnetic radiation. Likewise, a magnetic radiation reaction force can be derived for an accelerating magnetic moment emitting electromagnetic radiation.
Similar to the electric radiation reaction force, three conditions must be met in order to derive the following formula for the magnetic radiation reaction force. First, the motion of the magnetic moment must be periodic, an assumption used to derive the force. Second, the magnetic moment travels at non-relativistic velocities (that is, much slower than the speed of light). Finally, the derivation applies only in this regime; the resulting force is proportional to the fifth derivative of the position as a function of time (sometimes somewhat facetiously referred to as the "crackle"). Unlike the Abraham–Lorentz force, the force points in the direction opposite to the "crackle".
== Definition and description ==
Mathematically, the magnetic radiation reaction force is given by, in SI units:
{\displaystyle \mathbf {F} _{\mathrm {rad} }=-{\frac {\mu _{0}q^{2}R^{2}}{24\pi c^{3}}}{\frac {\mathrm {d} ^{3}{\vec {a}}}{\mathrm {d} t^{3}}}}
where:
F is the force,
{\displaystyle {\frac {\mathrm {d} ^{3}{\vec {a}}}{\mathrm {d} t^{3}}}}
is the crackle (the third derivative of acceleration, or the fifth derivative of displacement),
μ0 is the permeability of free space,
c is the speed of light in free space
q is the electric charge of the particle.
R is the radius of the magnetic moment
Note that this formula applies only for non-relativistic velocities.
Physically, a time changing magnetic moment emits radiation similar to the Larmor formula of an accelerating charge. Since momentum is conserved, the magnetic moment is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for radiation force can be derived from the magnetic version of the Larmor formula, as shown below.
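As a rough numerical illustration only (the charge, loop radius, amplitude and frequency below are arbitrary assumptions, and the formula is used with the squared radius, as in the derivation below), the force can be evaluated for a sinusoidally oscillating acceleration:
<syntaxhighlight lang="python">
import math

mu_0 = 4e-7 * math.pi   # vacuum permeability, T·m/A
c = 2.998e8             # speed of light, m/s
q = 1.602e-19           # elementary charge, C
R = 1e-10               # assumed loop radius, m
a0 = 1e20               # assumed acceleration amplitude, m/s^2
omega = 1e15            # assumed angular frequency, rad/s

# a(t) = a0 sin(omega t), so d^3a/dt^3 = -a0 omega^3 cos(omega t).
t = 0.0
third_derivative_of_a = -a0 * omega**3 * math.cos(omega * t)
F_rad = -mu_0 * q**2 * R**2 / (24 * math.pi * c**3) * third_derivative_of_a
print(F_rad)            # in newtons; an extremely small recoil force
</syntaxhighlight>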
== Background ==
In classical electrodynamics, problems are typically divided into two classes:
Problems in which the charge and current sources of fields are specified and the fields are calculated, and
The reverse situation, problems in which the fields are specified and the motion of particles are calculated.
In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold:
Neglect of the "self-fields" usually leads to answers that are accurate enough for many applications, and
Inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy.
The conceptual problems created by self-fields are highlighted in a standard graduate text [Jackson]:
The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. It is one of the triumphs of comparatively recent years (~1948–50) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain.
The magnetic radiation reaction force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating non-relativistic particles with associated magnetic moment emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. See precision tests of QED. The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore general relativity has unsolved self-field problems. String theory is a current attempt to resolve these problems for all forces.
== Derivation ==
We begin with the Larmor formula for radiation of the second derivative of a magnetic moment with respect to time:
{\displaystyle P={\frac {\mu _{0}{\ddot {m}}^{2}}{6\pi c^{3}}}.}
In the case that the magnetic moment is produced by an electric charge moving along a circular path, the moment is
{\displaystyle \mathbf {m} ={\frac {1}{2}}\,q\,\mathbf {r} \times \mathbf {v} ,}
where
r is the position of the charge q relative to the center of the circle and v is the instantaneous velocity of the charge.
The above Larmor formula then becomes:
{\displaystyle P={\frac {\mu _{0}q^{2}r^{2}{\dot {a}}^{2}}{24\pi c^{3}}}.}
If we assume the motion of a charged particle is periodic, then the average work done on the particle by the Abraham–Lorentz force is the negative of the Larmor power integrated over one period from τ1 to τ2:
{\displaystyle \int _{\tau _{1}}^{\tau _{2}}\mathbf {F} _{\mathrm {rad} }\cdot \mathbf {v} dt=\int _{\tau _{1}}^{\tau _{2}}-Pdt=-\int _{\tau _{1}}^{\tau _{2}}{\frac {\mu _{0}q^{2}r^{2}{\dot {a}}^{2}}{24\pi c^{3}}}dt=-\int _{\tau _{1}}^{\tau _{2}}{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d\mathbf {a} }{dt}}\cdot {\frac {d\mathbf {a} }{dt}}dt.}
Notice that we can integrate the above expression by parts. If we assume that there is periodic motion, the boundary term in the integral by parts disappears:
{\displaystyle \int _{\tau _{1}}^{\tau _{2}}\mathbf {F} _{\mathrm {rad} }\cdot \mathbf {v} dt=-{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d\mathbf {a} }{dt}}\cdot \mathbf {a} {\bigg |}_{\tau _{1}}^{\tau _{2}}+\int _{\tau _{1}}^{\tau _{2}}{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d^{2}\mathbf {a} }{dt^{2}}}\cdot \mathbf {a} dt=-0+\int _{\tau _{1}}^{\tau _{2}}{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}\mathbf {\ddot {a}} \cdot \mathbf {a} dt.}
Integrating by parts a second time, we find
{\displaystyle \int _{\tau _{1}}^{\tau _{2}}\mathbf {F} _{\mathrm {rad} }\cdot \mathbf {v} dt=-{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d\mathbf {a} }{dt}}\cdot \mathbf {a} {\bigg |}_{\tau _{1}}^{\tau _{2}}+{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d^{3}\mathbf {v} }{dt^{3}}}\cdot \mathbf {v} {\bigg |}_{\tau _{1}}^{\tau _{2}}-\int _{\tau _{1}}^{\tau _{2}}{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d^{3}\mathbf {a} }{dt^{3}}}\cdot \mathbf {v} dt=-0+0-\int _{\tau _{1}}^{\tau _{2}}{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d^{3}\mathbf {a} }{dt^{3}}}\cdot \mathbf {v} dt.}
Clearly, we can identify
{\displaystyle \mathbf {F} _{\mathrm {rad} }=-{\frac {\mu _{0}q^{2}r^{2}}{24\pi c^{3}}}{\frac {d^{3}\mathbf {a} }{dt^{3}}}.}
== Signals from the future ==
Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart quantum field theory. See the quote from Rohrlich in the introduction concerning "the importance of obeying the validity limits of a physical theory".
For a particle in an external force Fext, we have
{\displaystyle m{\dot {\mathbf {v} }}=\mathbf {F} _{\mathrm {rad} }+\mathbf {F} _{\mathrm {ext} }=mt_{0}{\ddot {\mathbf {v} }}+\mathbf {F} _{\mathrm {ext} },}
where
{\displaystyle t_{0}={\frac {\mu _{0}q^{2}}{6\pi mc}}.}
This equation can be integrated once to obtain
m
v
˙
=
1
t
0
∫
t
∞
exp
(
−
t
′
−
t
t
0
)
F
e
x
t
(
t
′
)
d
t
′
.
{\displaystyle m{\dot {\mathbf {v} }}={1 \over t_{0}}\int _{t}^{\infty }\exp \left(-{t'-t \over t_{0}}\right)\,\mathbf {F} _{\mathrm {ext} }(t')\,dt'.}
The integral extends from the present to infinitely far in the future. Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by the factor
{\displaystyle \exp \left(-{t'-t \over t_{0}}\right)}
which falls off rapidly for times greater than t0 in the future. Therefore, signals from an interval approximately t0 into the future affect the acceleration in the present. For an electron, this time is approximately 10−24 seconds, which is the time it takes for a light wave to travel across the "size" of an electron.
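A short numerical sketch (in arbitrary illustrative units, with the step force and the cutoff of the semi-infinite integral chosen by hand) makes this "pre-acceleration" explicit: the integral is nonzero even before the external force switches on.
<syntaxhighlight lang="python">
import numpy as np

t0 = 1.0    # characteristic time, arbitrary units
F0 = 1.0    # step force F_ext(t') = F0 for t' >= 0, zero before

def m_vdot(t, upper=50.0, n=200000):
    # (1/t0) * integral from t to "infinity" of exp(-(t'-t)/t0) * F_ext(t') dt'
    tp = np.linspace(t, t + upper, n)
    F_ext = np.where(tp >= 0.0, F0, 0.0)
    integrand = np.exp(-(tp - t) / t0) * F_ext
    dt = tp[1] - tp[0]
    return np.sum(integrand) * dt / t0

print(m_vdot(-0.5))   # ≈ F0 * exp(-0.5/t0): nonzero before the force turns on
print(m_vdot(2.0))    # ≈ F0 once the force is fully on
</syntaxhighlight>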
== See also ==
Max Abraham
Hendrik Lorentz
Cyclotron radiation
Electromagnetic mass
Radiation resistance
Radiation damping
Synchrotron radiation
Wheeler–Feynman absorber theory
== References ==
== Further reading ==
Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 0-13-805326-X. See sections 11.2.2 and 11.2.3
Jackson, John D. (1998). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X.\
Jose A. Heras, The Radiation Force of an Electron Reexamined, 2003, http://www.joseheras.com/jheras_papers/JAH-PAPER_16.pdf.
Donald H. Menzel, Fundamental Formulas of Physics, 1960, Dover Publications Inc., ISBN 0-486-60595-7, vol. 1, page 345.
== External links ==
MathPages - Does A Uniformly Accelerating Charge Radiate?
Feynman: The Development of the Space-Time View of Quantum Electrodynamics
Heras: The Radiation Reaction Force of an Electron Reexamined | Wikipedia/Magnetic_radiation_reaction_force |
Clusters in physics refer to small, polyatomic particles. Any particle made of between 3 × 10⁰ and 3 × 10⁷ atoms (that is, from a few atoms up to tens of millions) is considered a cluster.
The term can also refer to the organization of protons and neutrons within an atomic nucleus, e.g. the alpha particle (also known as "α-cluster"), consisting of two protons and two neutrons (as in a helium nucleus).
== Overview ==
Although the first reports of cluster species date back to the 1940s, cluster science emerged as a separate direction of research in the 1980s. One purpose of the research was to study the gradual development of the collective phenomena that characterize a bulk solid. Examples of such phenomena are the color of a body, its electrical conductivity, its ability to absorb or reflect light, and magnetic behavior such as ferromagnetism, ferrimagnetism, or antiferromagnetism. These are typical collective phenomena that only develop in an aggregate of a large number of atoms.
It was found that collective phenomena break down for very small cluster sizes. It turned out, for example, that small clusters of a ferromagnetic material are superparamagnetic rather than ferromagnetic. Paramagnetism is not a collective phenomenon, which means that the ferromagnetism of the macrostate is not conserved on going to the nanostate. The question was then asked, for example: "How many atoms do we need in order to obtain the collective metallic or magnetic properties of a solid?" Soon after the first cluster sources had been developed in 1980, an ever larger community of cluster scientists became involved in such studies.
This development led to the discovery of fullerenes in 1985 and of carbon nanotubes a few years later.
In science, a lot is known about properties of the gas phase; however, comparatively little is known about the condensed phases (the liquid phase and solid phase). The study of clusters attempts to bridge this gap of knowledge by clustering atoms together and studying their characteristics. If enough atoms were clustered together, eventually one would obtain a liquid or solid.
The study of atomic and molecular clusters also benefits the developing field of nanotechnology. If new materials are to be made out of nanoscale particles, such as nanocatalysts and quantum computers, the properties of the nanoscale particles (the clusters) would first need to be understood.
== See also ==
Cluster chemistry
Nanoparticle
Nanocluster
== References ==
== External links ==
Scientific community portal for clusters, fullerenes, nanotubes, nanostructures, and similar small systems. | Wikipedia/Cluster_(physics) |
New Journal of Physics is an online-only, open-access, peer-reviewed scientific journal covering research in all aspects of physics, as well as interdisciplinary topics where physics forms the central theme. The journal was established in 1998 and is a joint publication of the Institute of Physics and the Deutsche Physikalische Gesellschaft. It is published by IOP Publishing. The editor-in-chief is Andreas Buchleitner (Albert Ludwigs University). New Journal of Physics is part of the SCOAP3 initiative.
In April 2023, on the occasion of World Quantum Day, IOP Publishing launched a special collection of its most important articles published in the field of quantum research. The articles are drawn from Materials for Quantum Technology, Quantum Science and Technology, New Journal of Physics and Reports on Progress in Physics.
== Abstracting and indexing ==
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.8.
== References ==
== External links ==
Official website | Wikipedia/New_Journal_of_Physics |
In chemistry, a hypervalent molecule (the phenomenon is sometimes colloquially known as expanded octet) is a molecule that contains one or more main group elements apparently bearing more than eight electrons in their valence shells. Phosphorus pentachloride (PCl5), sulfur hexafluoride (SF6), chlorine trifluoride (ClF3), the chlorite (ClO−2) ion in chlorous acid and the triiodide (I−3) ion are examples of hypervalent molecules.
== Definitions and nomenclature ==
Hypervalent molecules were first formally defined by Jeremy I. Musher in 1969 as molecules having central atoms of group 15–18 in any valence other than the lowest (i.e. 3, 2, 1, 0 for Groups 15, 16, 17, 18 respectively, based on the octet rule).
Several specific classes of hypervalent molecules exist:
Hypervalent iodine compounds are useful reagents in organic chemistry (e.g. Dess–Martin periodinane)
Tetra-, penta- and hexavalent phosphorus, silicon, and sulfur compounds (e.g. PCl5, PF5, SF6, sulfuranes and persulfuranes)
Noble gas compounds (ex. xenon tetrafluoride, XeF4)
Halogen polyfluorides (ex. chlorine pentafluoride, ClF5)
=== N-X-L notation ===
N-X-L nomenclature, introduced collaboratively by the research groups of Martin, Arduengo, and Kochi in 1980, is often used to classify hypervalent compounds of main group elements, where:
N represents the number of valence electrons
X is the chemical symbol of the central atom
L the number of ligands to the central atom
Examples of N-X-L nomenclature include:
XeF2, 10-Xe-2
PCl5, 10-P-5
SF6, 12-S-6
IF7, 14-I-7
== History and controversy ==
The debate over the nature and classification of hypervalent molecules goes back to Gilbert N. Lewis and Irving Langmuir and the debate over the nature of the chemical bond in the 1920s. Lewis maintained the importance of the two-center two-electron (2c–2e) bond in describing hypervalence, thus using expanded octets to account for such molecules. Using the language of orbital hybridization, the bonds of molecules like PF5 and SF6 were said to be constructed from sp3dn orbitals on the central atom. Langmuir, on the other hand, upheld the dominance of the octet rule and preferred the use of ionic bonds to account for hypervalence without violating the rule (e.g. "SF2+4 2F−" for SF6).
In the late 1920s and 1930s, Sugden argued for the existence of a two-center one-electron (2c–1e) bond and thus rationalized bonding in hypervalent molecules without the need for expanded octets or ionic bond character; this was poorly accepted at the time. In the 1940s and 1950s, Rundle and Pimentel popularized the idea of the three-center four-electron bond, which is essentially the same concept which Sugden attempted to advance decades earlier; the three-center four-electron bond can be alternatively viewed as consisting of two collinear two-center one-electron bonds, with the remaining two nonbonding electrons localized to the ligands.
The attempt to actually prepare hypervalent organic molecules began with Hermann Staudinger and Georg Wittig in the first half of the twentieth century, who sought to challenge the extant valence theory and successfully prepare nitrogen and phosphorus-centered hypervalent molecules. The theoretical basis for hypervalency was not delineated until J.I. Musher's work in 1969.
In 1990, Magnusson published a seminal work definitively excluding the significance of d-orbital hybridization in the bonding of hypervalent compounds of second-row elements. This had long been a point of contention and confusion in describing these molecules using molecular orbital theory. Part of the confusion here originates from the fact that one must include d-functions in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result), and the contribution of the d-function to the molecular wavefunction is large. These facts were historically interpreted to mean that d-orbitals must be involved in bonding. However, Magnusson concludes in his work that d-orbital involvement is not implicated in hypervalency.
Nevertheless, a 2013 study showed that although the Pimentel ionic model best accounts for the bonding of hypervalent species, the energetic contribution of an expanded octet structure is also not null. In this modern valence bond theory study of the bonding of xenon difluoride, it was found that ionic structures account for about 81% of the overall wavefunction, of which 70% arises from ionic structures employing only the p orbital on xenon while 11% arises from ionic structures employing an {\displaystyle \mathrm {sd} _{z^{2}}} hybrid on xenon. The contribution of a formally hypervalent structure employing an orbital of sp3d hybridization on xenon accounts for 11% of the wavefunction, with a diradical contribution making up the remaining 8%. The 11% sp3d contribution results in a net stabilization of the molecule by 7.2 kcal (30 kJ) mol−1, a minor but significant fraction of the total bond energy (64 kcal (270 kJ) mol−1). Other studies have similarly found minor but non-negligible energetic contributions from expanded octet structures in SF6 (17%) and XeF6 (14%).
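A quick arithmetic check of the quoted wavefunction weights and the kcal-to-kJ conversions can be sketched as follows; the only assumption beyond the values quoted above is the standard conversion factor 1 kcal = 4.184 kJ.

```python
# Sanity-check the quoted valence bond weights for XeF2 and the kcal/kJ conversions.
weights = {
    "ionic, p-only on Xe": 70,
    "ionic, sd_z2 hybrid on Xe": 11,
    "hypervalent sp3d structure": 11,
    "diradical": 8,
}
assert sum(weights.values()) == 100  # the quoted contributions account for the full wavefunction

KCAL_TO_KJ = 4.184
print(7.2 * KCAL_TO_KJ)   # ~30 kJ/mol net stabilization from the sp3d structure
print(64 * KCAL_TO_KJ)    # ~268 kJ/mol, i.e. roughly the quoted 270 kJ/mol total bond energy
```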
Despite the lack of chemical realism, the IUPAC recommends the drawing of expanded octet structures for functional groups like sulfones and phosphoranes, in order to avoid the drawing of a large number of formal charges or partial single bonds.
== Hypervalent hydrides ==
A special type of hypervalent molecules is hypervalent hydrides. Most known hypervalent molecules contain substituents more electronegative than their central atoms. Hypervalent hydrides are of special interest because hydrogen is usually less electronegative than the central atom. A number of computational studies have been performed on chalcogen hydrides and pnictogen hydrides. Recently, a new computational study has shown that most hypervalent halogen hydrides XHn can exist. It is suggested that IH3 and IH5 are stable enough to be observable or, possibly, even isolable.
== Criticism ==
Both the term and concept of hypervalency still fall under criticism. In 1984, in response to this general controversy, Paul von Ragué Schleyer proposed the replacement of 'hypervalency' with use of the term hypercoordination because this term does not imply any mode of chemical bonding and the question could thus be avoided altogether.
The concept itself has been criticized by Ronald Gillespie who, based on an analysis of electron localization functions, wrote in 2002 that "as there is no fundamental difference between the bonds in hypervalent and non-hypervalent (Lewis octet) molecules there is no reason to continue to use the term hypervalent."
For hypercoordinated molecules with electronegative ligands such as PF5, it has been demonstrated that the ligands can pull away enough electron density from the central atom so that its net content is again 8 electrons or fewer. Consistent with this alternative view is the finding that hypercoordinated molecules based on fluorine ligands, for example PF5, do not have hydride counterparts: the corresponding phosphorane (PH5), for instance, is unknown.
The ionic model holds up well in thermochemical calculations. It predicts favorable exothermic formation of PF4+F− from phosphorus trifluoride PF3 and fluorine F2, whereas a similar reaction forming PH4+H− is not favorable.
== Alternative definition ==
Durrant has proposed an alternative definition of hypervalency, based on the analysis of atomic charge maps obtained from atoms in molecules theory. This approach defines a parameter called the valence electron equivalent, γ, as “the formal shared electron count at a given atom, obtained by any combination of valid ionic and covalent resonance forms that reproduces the observed charge distribution”. For any particular atom X, if the value of γ(X) is greater than 8, that atom is hypervalent. Using this alternative definition, many species such as PCl5, SO42−, and XeF4, that are hypervalent by Musher's definition, are reclassified as hypercoordinate but not hypervalent, due to strongly ionic bonding that draws electrons away from the central atom. On the other hand, some compounds that are normally written with ionic bonds in order to conform to the octet rule, such as ozone O3, nitrous oxide NNO, and trimethylamine N-oxide (CH3)3NO, are found to be genuinely hypervalent. Examples of γ calculations for phosphate PO43− (γ(P) = 2.6, non-hypervalent) and orthonitrate NO43− (γ(N) = 8.5, hypervalent) are shown below.
== Bonding in hypervalent molecules ==
Early considerations of the geometry of hypervalent molecules returned familiar arrangements that were well explained by the VSEPR model for atomic bonding. Accordingly, AB5 and AB6 type molecules would possess a trigonal bipyramidal and octahedral geometry, respectively. However, in order to account for the observed bond angles, bond lengths and apparent violation of the Lewis octet rule, several alternative models have been proposed.
In the 1950s an expanded valence shell treatment of hypervalent bonding was adduced to explain the molecular architecture, where the central atom of penta- and hexacoordinated molecules would utilize d AOs in addition to s and p AOs. However, advances in the study of ab initio calculations have revealed that the contribution of d-orbitals to hypervalent bonding is too small to describe the bonding properties, and this description is now regarded as much less important. It was shown that in the case of hexacoordinated SF6, d-orbitals are not involved in S-F bond formation, but charge transfer between the sulfur and fluorine atoms and the apposite resonance structures were able to account for the hypervalency (See below).
Additional modifications to the octet rule have been attempted to involve ionic characteristics in hypervalent bonding. As one of these modifications, in 1951, the concept of the 3-center 4-electron (3c-4e) bond, which described hypervalent bonding with a qualitative molecular orbital, was proposed. The 3c-4e bond is described as three molecular orbitals given by the combination of a p atomic orbital on the central atom and an atomic orbital from each of the two ligands on opposite sides of the central atom. Only one of the two pairs of electrons is occupying a molecular orbital that involves bonding to the central atom, the second pair being non-bonding and occupying a molecular orbital composed of only atomic orbitals from the two ligands. This model in which the octet rule is preserved was also advocated by Musher.
=== Molecular orbital theory ===
A complete description of hypervalent molecules arises from consideration of molecular orbital theory through quantum mechanical methods. In an LCAO treatment of, for example, sulfur hexafluoride, taking as a basis set the one sulfur 3s-orbital, the three sulfur 3p-orbitals, and six octahedral geometry symmetry-adapted linear combinations (SALCs) of fluorine orbitals, a total of ten molecular orbitals are obtained (four fully occupied bonding MOs of the lowest energy, two fully occupied intermediate-energy non-bonding MOs and four vacant antibonding MOs with the highest energy), providing room for all 12 valence electrons. This is a stable configuration only for SX6 molecules containing electronegative ligand atoms like fluorine, which explains why SH6 is not a stable molecule. In this bonding model, the two non-bonding MOs (1eg) are localized equally on all six fluorine atoms.
=== Valence bond theory ===
For hypervalent compounds in which the ligands are more electronegative than the central, hypervalent atom, resonance structures can be drawn with no more than four covalent electron pair bonds and completed with ionic bonds to obey the octet rule. For example, in phosphorus pentafluoride (PF5), 5 resonance structures can be generated each with four covalent bonds and one ionic bond with greater weight in the structures placing ionic character in the axial bonds, thus satisfying the octet rule and explaining both the observed trigonal bipyramidal molecular geometry and the fact that the axial bond length (158 pm) is longer than the equatorial (154 pm).
For a hexacoordinate molecule such as sulfur hexafluoride, each of the six bonds is the same length. The rationalization described above can be applied to generate 15 resonance structures each with four covalent bonds and two ionic bonds, such that the ionic character is distributed equally across each of the sulfur-fluorine bonds.
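The resonance-structure counts quoted above follow from simple combinatorics: each structure corresponds to a choice of which bonds are drawn as ionic (one of five for PF5, two of six for SF6). A minimal sketch of this counting:

```python
from math import comb

# PF5: each resonance structure keeps four covalent bonds and makes one of the five P-F bonds ionic
pf5_structures = comb(5, 1)   # 5

# SF6: each structure keeps four covalent bonds and makes two of the six S-F bonds ionic
sf6_structures = comb(6, 2)   # 15

print(pf5_structures, sf6_structures)
```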
Spin-coupled valence bond theory has been applied to diazomethane, and the resulting orbital analysis was interpreted in terms of a chemical structure in which the central nitrogen has five covalent bonds.
This led the authors to the interesting conclusion that "Contrary to what we were all taught as undergraduates, the nitrogen atom does indeed form five covalent linkages and the availability or otherwise of d-orbitals has nothing to do with this state of affairs."
== Structure, reactivity, and kinetics ==
=== Structure ===
==== Hexacoordinated phosphorus ====
Hexacoordinate phosphorus molecules involving nitrogen, oxygen, or sulfur ligands provide examples of Lewis acid-Lewis base hexacoordination. For the two similar complexes shown below, the length of the C–P bond increases with decreasing length of the N–P bond; the strength of the C–P bond decreases with increasing strength of the N–P Lewis acid–Lewis base interaction.
==== Pentacoordinated silicon ====
This trend is also generally true of pentacoordinated main-group elements with one or more lone-pair-containing ligand, including the oxygen-pentacoordinated silicon examples shown below.
The Si-halogen bonds range from close to the expected van der Waals value in A (a weak bond) almost to the expected covalent single bond value in C (a strong bond).
=== Reactivity ===
==== Silicon ====
Corriu and coworkers performed early work characterizing reactions thought to proceed through a hypervalent transition state. Measurements of the reaction rates of hydrolysis of tetravalent chlorosilanes incubated with catalytic amounts of water returned a rate that is first order in chlorosilane and second order in water. This indicated that two water molecules interacted with the silane during hydrolysis and from this a binucleophilic reaction mechanism was proposed. Corriu and coworkers then measured the rates of hydrolysis in the presence of nucleophilic catalyst HMPT, DMSO or DMF. It was shown that the rate of hydrolysis was again first order in chlorosilane, first order in catalyst and now first order in water. Appropriately, the rates of hydrolysis also exhibited a dependence on the magnitude of charge on the oxygen of the nucleophile.
Taken together this led the group to propose a reaction mechanism in which there is a pre-rate determining nucleophilic attack of the tetracoordinated silane by the nucleophile (or water) in which a hypervalent pentacoordinated silane is formed. This is followed by a nucleophilic attack of the intermediate by water in a rate determining step leading to hexacoordinated species that quickly decomposes giving the hydroxysilane.
Silane hydrolysis was further investigated by Holmes and coworkers, in which tetracoordinated Mes2SiF2 (Mes = mesityl) and pentacoordinated Mes2SiF3− were reacted with two equivalents of water. Following twenty-four hours, almost no hydrolysis of the tetracoordinated silane was observed, while the pentacoordinated silane was completely hydrolyzed after fifteen minutes. Additionally, X-ray diffraction data collected for the tetraethylammonium salts of the fluorosilanes showed the formation of a hydrogen bisilonate lattice supporting a hexacoordinated intermediate from which HF2− is quickly displaced, leading to the hydroxylated product. This reaction and crystallographic data support the mechanism proposed by Corriu et al.
The apparent increased reactivity of hypervalent molecules, contrasted with tetravalent analogues, has also been observed for Grignard reactions. The Corriu group measured Grignard reaction half-times by NMR for related 18-crown-6 potassium salts of a variety of tetra- and pentacoordinated fluorosilanes in the presence of catalytic amounts of nucleophile.
Though the half-reaction method is imprecise, the order-of-magnitude differences in reaction rates allowed for a proposed reaction scheme wherein a pre-rate-determining attack of the tetravalent silane by the nucleophile results in an equilibrium between the neutral tetracoordinated species and the anionic pentavalent compound. This is followed by nucleophilic coordination by two Grignard reagents as normally seen, forming a hexacoordinated transition state and yielding the expected product.
The mechanistic implications of this are extended to a hexacoordinated silicon species that is thought to be active as a transition state in some reactions. The reaction of allyl- or crotyl-trifluorosilanes with aldehydes and ketones only proceeds with fluoride activation to give a pentacoordinated silicon. This intermediate then acts as a Lewis acid to coordinate with the carbonyl oxygen atom. The further weakening of the silicon–carbon bond as the silicon becomes hexacoordinate helps drive this reaction.
==== Phosphorus ====
Similar reactivity has also been observed for other hypervalent structures such as the miscellany of phosphorus compounds, for which hexacoordinated transition states have been proposed.
Hydrolysis of phosphoranes and oxyphosphoranes has been studied and shown to be second order in water. Bel'skii et al. have proposed a pre-rate-determining nucleophilic attack by water resulting in an equilibrium between the penta- and hexacoordinated phosphorus species, which is followed by a proton transfer involving the second water molecule in a rate-determining ring-opening step, leading to the hydroxylated product.
Alcoholysis of pentacoordinated phosphorus compounds, such as trimethoxyphospholene with benzyl alcohol, has also been postulated to occur through a similar octahedral transition state, as in hydrolysis, but without ring opening.
It can be understood from these experiments that the increased reactivity observed for hypervalent molecules, contrasted with analogous nonhypervalent compounds, can be attributed to the congruence of these species to the hypercoordinated activated states normally formed during the course of the reaction.
=== Ab initio calculations ===
The enhanced reactivity at pentacoordinated silicon is not fully understood. Corriu and coworkers suggested that greater electropositive character at the pentavalent silicon atom may be responsible for its increased reactivity. Preliminary ab initio calculations supported this hypothesis to some degree, but used a small basis set.
A software program for ab initio calculations, Gaussian 86, was used by Dieters and coworkers to compare tetracoordinated silicon and phosphorus to their pentacoordinate analogues. This ab initio approach was used as a supplement to determine why reactivity improves in nucleophilic reactions with pentacoordinated compounds. For silicon, the 6-31+G* basis set was used because of the anionic character of the pentacoordinated species; for phosphorus, the 6-31G* basis set was used.
Pentacoordinated compounds should theoretically be less electrophilic than tetracoordinated analogues due to steric hindrance and greater electron density from the ligands, yet experimentally show greater reactivity with nucleophiles than their tetracoordinated analogues. Advanced ab initio calculations were performed on series of tetracoordinated and pentacoordinated species to further understand this reactivity phenomenon. Each series varied by degree of fluorination. Bond lengths and charge densities are shown as functions of how many hydride ligands are on the central atoms. For every new hydride, there is one less fluoride.
For silicon and phosphorus, bond lengths, charge densities, and Mulliken bond overlap populations were calculated for tetra- and pentacoordinated species by this ab initio approach. Addition of a fluoride ion to tetracoordinated silicon shows an overall average increase of 0.1 electron charge, which is considered insignificant. In general, bond lengths in trigonal bipyramidal pentacoordinate species are longer than those in tetracoordinate analogues. Si-F bonds and Si-H bonds both increase in length upon pentacoordination, and related effects are seen in phosphorus species, but to a lesser degree. The reason for the greater magnitude of the bond-length change for silicon species over phosphorus species is the increased effective nuclear charge at phosphorus. Therefore, silicon is concluded to be more loosely bound to its ligands.
In addition, Dieters and coworkers showed an inverse correlation between bond length and bond overlap for all series. Pentacoordinated species are concluded to be more reactive because of their looser bonds as trigonal-bipyramidal structures.
By calculating the energies for the addition and removal of a fluoride ion in various silicon and phosphorus species, several trends were found. In particular, the tetracoordinated species have much higher energy requirements for ligand removal than do pentacoordinated species. Further, silicon species have lower energy requirements for ligand removal than do phosphorus species, which is an indication of weaker bonds in silicon.
== See also ==
Charge-shift bond
== References ==
== External links ==
Media related to Hypervalent molecules at Wikimedia Commons
In computational chemistry, a water model is used to simulate and thermodynamically calculate water clusters, liquid water, and aqueous solutions with explicit solvent, often using molecular dynamics or Monte Carlo methods. The models describe intermolecular forces between water molecules and are determined from quantum mechanics, molecular mechanics, experimental results, and combinations thereof. To imitate the specific nature of the intermolecular forces, many types of models have been developed. In general, these can be classified by the following three characteristics: (i) the number of interaction points or sites, (ii) whether the model is rigid or flexible, and (iii) whether the model includes polarization effects.
An alternative to the explicit water models is to use an implicit solvation model, also termed a continuum model. Examples of this type of model include the COSMO solvation model, the polarizable continuum model (PCM) and hybrid solvation models.
== Simple water models ==
The rigid models are considered the simplest water models and rely on non-bonded interactions. In these models, bonding interactions are implicitly treated by holonomic constraints. The electrostatic interaction is modeled using Coulomb's law, and the dispersion and repulsion forces using the Lennard-Jones potential. The potential for models such as TIP3P (transferable intermolecular potential with 3 points) and TIP4P is represented by
{\displaystyle E_{ab}=\sum _{i}^{{\text{on }}a}\sum _{j}^{{\text{on }}b}{\frac {k_{C}q_{i}q_{j}}{r_{ij}}}+{\frac {A}{r_{\text{OO}}^{12}}}-{\frac {B}{r_{\text{OO}}^{6}}},}
where kC, the electrostatic constant, has a value of 332.1 Å·kcal/(mol·e²) in the units commonly used in molecular modeling; qi and qj are the partial charges relative to the charge of the electron; rij is the distance between two atoms or charged sites; and A and B are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms.
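As an illustration of how this potential is evaluated in practice, the sketch below computes the pair interaction energy between two rigid 3-site water molecules from the point charges and a single O–O Lennard-Jones term, as in the formula above. The numerical parameters (qO ≈ −0.834 e, qH ≈ +0.417 e, and the A and B values) and the coordinates are assumptions chosen for the example, loosely resembling TIP3P, not authoritative values.

```python
import numpy as np

# Assumed TIP3P-like parameters (illustrative only)
K_C = 332.1          # kcal·Å/(mol·e^2), electrostatic constant in molecular-modeling units
A = 582.0e3          # kcal·Å^12/mol, Lennard-Jones repulsion (O–O)
B = 595.0            # kcal·Å^6/mol, Lennard-Jones dispersion (O–O)
CHARGES = np.array([-0.834, 0.417, 0.417])   # e, for the [O, H, H] sites

def pair_energy(water_a, water_b):
    """Interaction energy (kcal/mol) between two 3-site waters, each a (3,3) array of [O,H,H] coords in Å."""
    e_coulomb = 0.0
    for qi, ri in zip(CHARGES, water_a):
        for qj, rj in zip(CHARGES, water_b):
            e_coulomb += K_C * qi * qj / np.linalg.norm(ri - rj)
    r_oo = np.linalg.norm(water_a[0] - water_b[0])   # only the O–O pair carries the LJ term
    e_lj = A / r_oo**12 - B / r_oo**6
    return e_coulomb + e_lj

# Two arbitrarily placed water molecules (made-up coordinates for the example)
w1 = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
w2 = np.array([[2.8, 0.0, 0.0], [3.76, 0.0, 0.0], [2.56, 0.93, 0.0]])
print(pair_energy(w1, w2))
```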
The figure below shows the general shape of the 3- to 6-site water models. The exact geometric parameters (the OH distance and the HOH angle) vary depending on the model.
== 2-site ==
A 2-site model of water based on the familiar three-site SPC model (see below) has been shown to predict the dielectric properties of water using site-renormalized molecular fluid theory.
== 3-site ==
Three-site models have three interaction points corresponding to the three atoms of the water molecule. Each site has a point charge, and the site corresponding to the oxygen atom also has the Lennard-Jones parameters. Since 3-site models achieve a high computational efficiency, these are widely used for many applications of molecular dynamics simulations. Most of the models use a rigid geometry matching that of actual water molecules. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°.
The table below lists the parameters for some 3-site models.
The SPC/E model adds an average polarization correction to the potential energy function:
{\displaystyle E_{\text{pol}}={\frac {1}{2}}\sum _{i}{\frac {(\mu -\mu ^{0})^{2}}{\alpha _{i}}},}
where μ is the electric dipole moment of the effectively polarized water molecule (2.35 D for the SPC/E model), μ0 is the dipole moment of an isolated water molecule (1.85 D from experiment), and αi is an isotropic polarizability constant, with a value of 1.608×10−40 F·m2. Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model.
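The quoted +1.25 kcal/mol correction can be reproduced directly from the constants given above. The sketch below assumes one polarization term per molecule and the standard unit conversions 1 D = 3.33564×10⁻³⁰ C·m and 1 kcal = 4184 J:

```python
AVOGADRO = 6.02214076e23   # 1/mol
DEBYE = 3.33564e-30        # C·m per debye

mu = 2.35 * DEBYE          # effective dipole of SPC/E water
mu0 = 1.85 * DEBYE         # dipole of an isolated water molecule
alpha = 1.608e-40          # F·m^2, isotropic polarizability constant

# Average polarization correction per molecule, then per mole
e_pol_joule = (mu - mu0)**2 / (2 * alpha)            # J per molecule
e_pol_kcal_mol = e_pol_joule * AVOGADRO / 4184.0     # ~1.25 kcal/mol (~5.2 kJ/mol)
print(e_pol_kcal_mol)
```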
The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original. The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms too, in addition to the one on oxygen. The charges are not modified. The three-site TIP3P model performs better in calculating specific heats.
=== Flexible SPC water model ===
The flexible simple point-charge water model (or flexible SPC water model) is a re-parametrization of the three-site SPC water model. The SPC model is rigid, whilst the flexible SPC model is flexible. In the model of Toukan and Rahman, the O–H stretching is made anharmonic, and thus the dynamical behavior is well described. This is one of the most accurate three-center water models without taking into account the polarization. In molecular dynamics simulations it gives the correct density and dielectric permittivity of water.
Flexible SPC is implemented in the programs MDynaMix and Abalone.
=== Other models ===
Ferguson (flexible SPC)
CVFF (flexible)
MG (flexible and dissociative)
KKY potential (flexible model).
BLXL (smear charged potential).
== 4-site ==
The four-site models have four interaction points by adding one dummy atom near the oxygen along the bisector of the HOH angle of the three-site models (labeled M in the figure). The dummy atom only has a negative charge. This model improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal–Fowler model published in 1933, which may also be the earliest water model. However, the BF model doesn't reproduce well the bulk properties of water, such as density and heat of vaporization, and is thus of historical interest only. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough.
The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; TIP4P/2005, a general parameterization for simulating the entire phase diagram of condensed water; and TIP4PQ/2005, a similar model but designed to accurately describe the properties of solid and liquid water when quantum effects are included in the simulation.
Most of the four-site water models use an OH distance and HOH angle which match those of the free water molecule. One exception is the OPC model, in which no geometry constraints are imposed other than the fundamental C2v molecular symmetry of the water molecule. Instead, the point charges and their positions are optimized to best describe the electrostatics of the water molecule. OPC reproduces a comprehensive set of bulk properties more accurately than several of the commonly used rigid n-site water models. The OPC model is implemented within the AMBER force field.
Others:
q-TIP4P/F (flexible)
TIP4P/2005f (flexible)
== 5-site ==
The 5-site models place the negative charge on dummy atoms (labelled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of these types was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximal density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums.
Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by the switching function S(r):
{\displaystyle S(r_{ij})={\begin{cases}0&{\text{if }}r_{ij}\leq R_{\text{L}},\\{\frac {(r_{ij}-R_{L})^{2}(3R_{\text{U}}-R_{\text{L}}-2r_{ij})}{(R_{\text{U}}-R_{\text{L}})^{2}}}&{\text{if }}R_{\text{L}}\leq r_{ij}\leq R_{\text{U}},\\1&{\text{if }}R_{\text{U}}\leq r_{ij}.\end{cases}}}
Thus, the RL and RU parameters only apply to BNS and ST2.
== 6-site ==
Originally designed to study water/ice systems, a 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden. Because the original parameterization had a very high melting temperature when employed under periodic electrostatic conditions (Ewald summation), a modified version, reoptimized using the Ewald method for estimating the Coulomb interaction, was published later.
== Other ==
The effect of the explicit solvent model on solute behavior in biomolecular simulations has also been extensively studied. It was shown that explicit water models affected the specific solvation and dynamics of unfolded peptides, while the conformational behavior and flexibility of folded peptides remained intact.
MB model. A more abstract model resembling the Mercedes-Benz logo that reproduces some features of water in two-dimensional systems. It is not used as such for simulations of "real" (i.e., three-dimensional) systems, but it is useful for qualitative studies and for educational purposes.
Coarse-grained models. One- and two-site models of water have also been developed. In coarse-grain models, each site can represent several water molecules.
Many-body models. Water models built using training-set configurations solved quantum mechanically, which then use machine learning protocols to extract potential-energy surfaces. These potential-energy surfaces are fed into MD simulations for an unprecedented degree of accuracy in computing physical properties of condensed phase systems.
Another classification of many-body models is on the basis of the expansion of the underlying electrostatics, e.g., the SCME (Single Center Multipole Expansion) model.
== Computational cost ==
The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O–O interaction, or 3 × 3 + 1). For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1).
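The site-pair counting described above can be made explicit. The sketch below assumes, as in the paragraph, that every charged site interacts with every charged site and that one extra O–O distance is needed whenever the Lennard-Jones site (oxygen) carries no charge of its own:

```python
def distances_per_pair(charged_sites, lj_on_uncharged_oxygen=True):
    """Number of interatomic distances needed for one pair of water molecules."""
    n = charged_sites * charged_sites          # every charged site against every charged site
    if lj_on_uncharged_oxygen:
        n += 1                                 # one extra O–O distance for the Lennard-Jones term
    return n

# 3-site: the oxygen is both charged and the LJ site, so no extra O–O distance is needed
print(distances_per_pair(3, lj_on_uncharged_oxygen=False))  # 9
print(distances_per_pair(3))                                 # 10 (4-site: 2 H + 1 M charged)
print(distances_per_pair(4))                                 # 17 (5-site: 2 H + 2 L charged)
print(distances_per_pair(5))                                 # 26 (6-site: 2 H + M + 2 L charged)
```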
When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step).
== See also ==
Water (properties)
Water (data page)
Water dimer
Force field (chemistry)
Comparison of force field implementations
Molecular mechanics
Molecular modelling
Comparison of software for molecular mechanics modeling
Solvent models
== References ==
In physics and chemistry, an equation of state is a thermodynamic equation relating state variables, which describe the state of matter under a given set of physical conditions, such as pressure, volume, temperature, or internal energy. Most modern equations of state are formulated in the Helmholtz free energy. Equations of state are useful in describing the properties of pure substances and mixtures in liquids, gases, and solid states as well as the state of matter in the interior of stars. Though there are many equations of state, none accurately predicts properties of substances under all conditions. The quest for a universal equation of state has spanned three centuries.
== Overview ==
At present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. A well-known example of an equation of state is the ideal gas law, which correlates densities of gases and liquids to temperatures and pressures and is roughly accurate for weakly polar gases at low pressures and moderate temperatures. This equation becomes increasingly inaccurate at higher pressures and lower temperatures, and fails to predict condensation from a gas to a liquid.
The general form of an equation of state may be written as
{\displaystyle f(p,V,T)=0}
where {\displaystyle p} is the pressure, {\displaystyle V} the volume, and {\displaystyle T} the temperature of the system. Other state variables may also be used in that form. The form is directly related to the Gibbs phase rule, that is, the number of independent variables depends on the number of substances and phases in the system.
An equation used to model this relationship is called an equation of state. In most cases this model will comprise some empirical parameters that are usually adjusted to measurement data. Equations of state can also describe solids, including the transition of solids from one crystalline state to another. Equations of state are also used for the modeling of the state of matter in the interior of stars, including neutron stars, dense matter (quark–gluon plasmas) and radiation fields. A related concept is the perfect fluid equation of state used in cosmology.
Equations of state are applied in many fields such as process engineering and petroleum industry as well as pharmaceutical industry.
Any consistent set of units may be used, although SI units are preferred. Absolute temperature refers to the use of the Kelvin (K), with zero being absolute zero.
{\displaystyle n}, number of moles of a substance
{\displaystyle V_{m}} = {\displaystyle {\frac {V}{n}}}, molar volume, the volume of 1 mole of gas or liquid
{\displaystyle R}, ideal gas constant ≈ 8.3144621 J/(mol·K)
{\displaystyle p_{c}}, pressure at the critical point
{\displaystyle V_{c}}, molar volume at the critical point
{\displaystyle T_{c}}, absolute temperature at the critical point
== Historical background ==
Equations of state essentially begin three centuries ago with the history of the ideal gas law:
{\displaystyle pV=nRT}
Boyle's law was one of the earliest formulations of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end of the tube. Then the volume of gas was measured as additional mercury was added to the tube. The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure. In mathematical form, this can be stated as:
{\displaystyle pV=\mathrm {constant} .}
The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law. However, Mariotte's work was not published until 1676.
In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80-kelvin interval. This is known today as Charles's law. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments, indicating a linear relationship between volume and temperature:
{\displaystyle {\frac {V_{1}}{T_{1}}}={\frac {V_{2}}{T_{2}}}.}
Dalton's law (1801) of partial pressure states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone.
Mathematically, this can be represented for {\displaystyle n} species as:
{\displaystyle p_{\text{total}}=p_{1}+p_{2}+\cdots +p_{n}=\sum _{i=1}^{n}p_{i}.}
In 1834, Émile Clapeyron combined Boyle's law and Charles' law into the first statement of the ideal gas law. Initially, the law was formulated as pVm = R(TC + 267) (with temperature expressed in degrees Celsius), where R is the gas constant. However, later work revealed that the number should actually be closer to 273.2, and then the Celsius scale was defined with
{\displaystyle 0~^{\circ }\mathrm {C} =273.15~\mathrm {K} }, giving:
{\displaystyle pV_{m}=R\left(T_{C}+273.15\ {}^{\circ }{\text{C}}\right).}
In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. His new formula revolutionized the study of equations of state, and was the starting point of cubic equations of state, which most famously continued via the Redlich–Kwong equation of state and the Soave modification of Redlich-Kwong.
The van der Waals equation of state can be written as
{\displaystyle \left(P+a{\frac {1}{V_{m}^{2}}}\right)(V_{m}-b)=RT}
where {\displaystyle a} is a parameter describing the attractive energy between particles and {\displaystyle b} is a parameter describing the volume of the particles.
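As a concrete illustration, the van der Waals pressure can be evaluated directly by solving the equation above for P. The sketch below uses parameter values for CO2 (a ≈ 0.364 J·m³/mol², b ≈ 4.27×10⁻⁵ m³/mol) that are quoted here only as illustrative assumptions:

```python
R = 8.314462618  # J/(mol·K), ideal gas constant

def vdw_pressure(V_m, T, a, b):
    """Pressure (Pa) from the van der Waals equation, given molar volume (m^3/mol) and temperature (K)."""
    return R * T / (V_m - b) - a / V_m**2

# Illustrative parameters for CO2 (assumed values for the example)
a_co2 = 0.364      # J·m^3/mol^2
b_co2 = 4.27e-5    # m^3/mol

V_m, T = 1.0e-3, 300.0                       # 1 L/mol at 300 K
print(vdw_pressure(V_m, T, a_co2, b_co2))    # somewhat below the ideal-gas value R*T/V_m
print(R * T / V_m)                           # ideal-gas comparison
```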
== Ideal gas law ==
=== Classical ideal gas law ===
The classical ideal gas law may be written
{\displaystyle pV=nRT.}
In the form shown above, the equation of state is thus
{\displaystyle f(p,V,T)=pV-nRT=0.}
If the calorically perfect gas approximation is used, then the ideal gas law may also be expressed as follows
{\displaystyle p=\rho (\gamma -1)e}
where {\displaystyle \rho } is the mass density of the gas, {\displaystyle \gamma =C_{p}/C_{v}} is the (constant) adiabatic index (ratio of specific heats), {\displaystyle e=C_{v}T} is the internal energy per unit mass (the "specific internal energy"), {\displaystyle C_{v}} is the specific heat capacity at constant volume, and {\displaystyle C_{p}} is the specific heat capacity at constant pressure.
=== Quantum ideal gas law ===
Since for atomic and molecular gases, the classical ideal gas law is well suited in most cases, let us describe the equation of state for elementary particles with mass {\displaystyle m} and spin {\displaystyle s} that takes into account quantum effects. In the following, the upper sign will always correspond to Fermi–Dirac statistics and the lower sign to Bose–Einstein statistics. The equation of state of such gases with {\displaystyle N} particles occupying a volume {\displaystyle V} with temperature {\displaystyle T} and pressure {\displaystyle p} is given by
{\displaystyle p={\frac {(2s+1){\sqrt {2m^{3}k_{\text{B}}^{5}T^{5}}}}{3\pi ^{2}\hbar ^{3}}}\int _{0}^{\infty }{\frac {z^{3/2}\,\mathrm {d} z}{e^{z-\mu /(k_{\text{B}}T)}\pm 1}}}
where {\displaystyle k_{\text{B}}} is the Boltzmann constant and {\displaystyle \mu (T,N/V)} the chemical potential is given by the following implicit function
{\displaystyle {\frac {N}{V}}={\frac {(2s+1)(mk_{\text{B}}T)^{3/2}}{{\sqrt {2}}\pi ^{2}\hbar ^{3}}}\int _{0}^{\infty }{\frac {z^{1/2}\,\mathrm {d} z}{e^{z-\mu /(k_{\text{B}}T)}\pm 1}}.}
In the limiting case where {\displaystyle e^{\mu /(k_{\text{B}}T)}\ll 1}, this equation of state will reduce to that of the classical ideal gas. It can be shown that the above equation of state in the limit {\displaystyle e^{\mu /(k_{\text{B}}T)}\ll 1} reduces to
{\displaystyle pV=Nk_{\text{B}}T\left[1\pm {\frac {\pi ^{3/2}}{2(2s+1)}}{\frac {N\hbar ^{3}}{V(mk_{\text{B}}T)^{3/2}}}+\cdots \right]}
With a fixed number density {\displaystyle N/V}, decreasing the temperature causes, in a Fermi gas, an increase in the pressure above its classical value, implying an effective repulsion between particles (this is an apparent repulsion due to quantum exchange effects, not to actual interactions between particles, since in an ideal gas interaction forces are neglected), and, in a Bose gas, a decrease in pressure below its classical value, implying an effective attraction. The quantum nature of this equation lies in its dependence on s and ħ.
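The leading quantum correction in the expansion above is easy to evaluate numerically. The sketch below computes the bracketed correction factor for an assumed dilute gas of spin-1/2 particles (the mass of helium-3 and the density are used purely as example inputs) and shows that it is greater than 1 for fermions and less than 1 for bosons:

```python
import math

HBAR = 1.054571817e-34   # J·s
KB = 1.380649e-23        # J/K

def quantum_correction(n_density, mass, T, spin, fermions=True):
    """First-order correction factor to pV = N*k_B*T from the quantum ideal gas expansion."""
    thermal = n_density * HBAR**3 / (mass * KB * T) ** 1.5
    term = math.pi ** 1.5 / (2 * (2 * spin + 1)) * thermal
    return 1 + term if fermions else 1 - term

# Example: a dilute gas with the helium-3 mass at 4 K (assumed numbers, for illustration only)
n = 1.0e27        # particles per m^3
m = 5.008e-27     # kg
print(quantum_correction(n, m, 4.0, spin=0.5, fermions=True))   # > 1: effective repulsion
print(quantum_correction(n, m, 4.0, spin=0.5, fermions=False))  # < 1: effective attraction
```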
== Cubic equations of state ==
Cubic equations of state are called such because they can be rewritten as a cubic function of {\displaystyle V_{m}}. Cubic equations of state originated from the van der Waals equation of state. Hence, all cubic equations of state can be considered 'modified van der Waals equations of state'. There is a very large number of such cubic equations of state. For process engineering, cubic equations of state are today still highly relevant, e.g. the Peng–Robinson equation of state or the Soave–Redlich–Kwong equation of state.
== Virial equations of state ==
=== Virial equation of state ===
{\displaystyle {\frac {pV_{m}}{RT}}=A+{\frac {B}{V_{m}}}+{\frac {C}{V_{m}^{2}}}+{\frac {D}{V_{m}^{3}}}+\cdots }
Although usually not the most convenient equation of state, the virial equation is important because it can be derived directly from statistical mechanics. This equation is also called the Kamerlingh Onnes equation. If appropriate assumptions are made about the mathematical form of intermolecular forces, theoretical expressions can be developed for each of the coefficients. A is the first virial coefficient, which has a constant value of 1 and makes the statement that when volume is large, all fluids behave like ideal gases. The second virial coefficient B corresponds to interactions between pairs of molecules, C to triplets, and so on. Accuracy can be increased indefinitely by considering higher order terms. The coefficients B, C, D, etc. are functions of temperature only.
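A truncated form of the expansion is straightforward to evaluate once temperature-dependent coefficients are supplied. The sketch below computes the compressibility factor Z = pV_m/(RT) from the first few terms; the B and C values used are placeholders, not data for any particular fluid:

```python
def compressibility_factor(V_m, B=0.0, C=0.0, D=0.0):
    """Z = p*V_m/(R*T) from the virial expansion truncated after the fourth term (A = 1)."""
    return 1.0 + B / V_m + C / V_m**2 + D / V_m**3

# Placeholder coefficients at some fixed temperature (illustrative only)
B, C = -1.5e-4, 9.0e-9              # m^3/mol and (m^3/mol)^2
for V_m in (1.0e-3, 1.0e-2, 1.0):   # increasing molar volume, i.e. decreasing density
    print(V_m, compressibility_factor(V_m, B, C))
# As V_m grows large, Z -> 1: the fluid behaves like an ideal gas.
```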
=== The BWR equation of state ===
{\displaystyle {\begin{aligned}p=\rho RT&+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}+\left(bRT-a-{\frac {d}{T}}\right)\rho ^{3}\\[2pt]&+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}+{\frac {c\rho ^{3}}{T^{2}}}\left(1+\gamma \rho ^{2}\right)\exp \left(-\gamma \rho ^{2}\right)\end{aligned}}}
where {\displaystyle p} is the pressure and {\displaystyle \rho } is the molar density.
Values of the various parameters can be found in reference materials. The BWR equation of state has also frequently been used for the modelling of the Lennard-Jones fluid. There are several extensions and modifications of the classical BWR equation of state available.
The Benedict–Webb–Rubin–Starling equation of state is a modified BWR equation of state and can be written as
{\displaystyle {\begin{aligned}p=\rho RT&+\left(B_{0}RT-A_{0}-{\frac {C_{0}}{T^{2}}}+{\frac {D_{0}}{T^{3}}}-{\frac {E_{0}}{T^{4}}}\right)\rho ^{2}\\[2pt]&+\left(bRT-a-{\frac {d}{T}}+{\frac {c}{T^{2}}}\right)\rho ^{3}+\alpha \left(a+{\frac {d}{T}}\right)\rho ^{6}\end{aligned}}}
Note that in this virial equation, the fourth and fifth virial terms are zero. The second virial coefficient is monotonically decreasing as temperature is lowered. The third virial coefficient is monotonically increasing as temperature is lowered.
The Lee–Kesler equation of state is based on the corresponding states principle, and is a modification of the BWR equation of state.
{\displaystyle p={\frac {RT}{V}}\left(1+{\frac {B}{V_{r}}}+{\frac {C}{V_{r}^{2}}}+{\frac {D}{V_{r}^{5}}}+{\frac {c_{4}}{T_{r}^{3}V_{r}^{2}}}\left(\beta +{\frac {\gamma }{V_{r}^{2}}}\right)\exp \left(-{\frac {\gamma }{V_{r}^{2}}}\right)\right)}
== Physically based equations of state ==
There is a large number of physically based equations of state available today. Most of those are formulated in the Helmholtz free energy as a function of temperature, density (and for mixtures additionally the composition). The Helmholtz energy is formulated as a sum of multiple terms modelling different types of molecular interaction or molecular structures, e.g. the formation of chains or dipolar interactions. Hence, physically based equations of state model the effect of molecular size, attraction and shape as well as hydrogen bonding and polar interactions of fluids. In general, physically based equations of state give more accurate results than traditional cubic equations of state, especially for systems containing liquids or solids. Most physically based equations of state are built on a monomer term describing the Lennard-Jones fluid or the Mie fluid.
=== Perturbation theory-based models ===
Perturbation theory is frequently used for modelling dispersive interactions in an equation of state. There is a large number of perturbation theory based equations of state available today, e.g. for the classical Lennard-Jones fluid. The two most important theories used for these types of equations of state are the Barker-Henderson perturbation theory and the Weeks–Chandler–Andersen perturbation theory.
=== Statistical associating fluid theory (SAFT) ===
An important contribution for physically based equations of state is the statistical associating fluid theory (SAFT) that contributes the Helmholtz energy that describes the association (a.k.a. hydrogen bonding) in fluids, which can also be applied for modelling chain formation (in the limit of infinite association strength). The SAFT equation of state was developed using statistical mechanical methods (in particular the perturbation theory of Wertheim) to describe the interactions between molecules in a system. The idea of a SAFT equation of state was first proposed by Chapman et al. in 1988 and 1989. Many different versions of the SAFT models have been proposed, but all use the same chain and association terms derived by Chapman et al.
== Multiparameter equations of state ==
Multiparameter equations of state are empirical equations of state that can be used to represent pure fluids with high accuracy. Multiparameter equations of state are empirical correlations of experimental data and are usually formulated in the Helmholtz free energy. The functional form of these models is in most parts not physically motivated. They can be usually applied in both liquid and gaseous states. Empirical multiparameter equations of state represent the Helmholtz energy of the fluid as the sum of ideal gas and residual terms. Both terms are explicit in temperature and density:
{\displaystyle {\frac {a(T,\rho )}{RT}}={\frac {a^{\mathrm {ideal\,gas} }(\tau ,\delta )+a^{\textrm {residual}}(\tau ,\delta )}{RT}}}
with
{\displaystyle \tau ={\frac {T_{r}}{T}},\delta ={\frac {\rho }{\rho _{r}}}}
The reduced density {\displaystyle \rho _{r}} and reduced temperature {\displaystyle T_{r}} are in most cases the critical values for the pure fluid. Because integration of the multiparameter equations of state is not required and thermodynamic properties can be determined using classical thermodynamic relations, there are few restrictions on the functional form of the ideal or residual terms. Typical multiparameter equations of state use upwards of 50 fluid-specific parameters, but are able to represent the fluid's properties with high accuracy. Multiparameter equations of state are available currently for about 50 of the most common industrial fluids including refrigerants. The IAPWS95 reference equation of state for water is also a multiparameter equation of state. Mixture models for multiparameter equations of state exist as well. Yet, multiparameter equations of state applied to mixtures are known to exhibit artifacts at times.
One example of such an equation of state is the form proposed by Span and Wagner.
{\displaystyle {\begin{aligned}a^{\mathrm {residual} }={}&\sum _{i=1}^{8}\sum _{j=-8}^{12}n_{i,j}\delta ^{i}\tau ^{j/8}+\sum _{i=1}^{5}\sum _{j=-8}^{24}n_{i,j}\delta ^{i}\tau ^{j/8}\exp \left(-\delta \right)\\&+\sum _{i=1}^{5}\sum _{j=16}^{56}n_{i,j}\delta ^{i}\tau ^{j/8}\exp \left(-\delta ^{2}\right)+\sum _{i=2}^{4}\sum _{j=24}^{38}n_{i,j}\delta ^{i}\tau ^{j/2}\exp \left(-\delta ^{3}\right)\end{aligned}}}
This is a somewhat simpler form that is intended to be used more in technical applications. Equations of state that require a higher accuracy use a more complicated form with more terms.
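To make the structure of such correlations concrete, the sketch below evaluates a simplified residual Helmholtz term of the polynomial-plus-exponential form shown above from a small table of (delta exponent, tau exponent, coefficient, damping power) entries. The coefficients are placeholders for illustration, not a fitted equation of state for any real fluid:

```python
import math

# Placeholder terms; a real correlation of this type carries on the order of 50 fitted entries
TERMS = [
    (1, 1.0,   0.12,   0),   # plain polynomial term: n * delta^i * tau^t
    (2, 2.0,  -0.034,  1),   # term damped by exp(-delta)
    (3, 3.0,   0.0051, 2),   # term damped by exp(-delta^2)
    (2, 12.0,  0.0007, 3),   # term damped by exp(-delta^3)
]

def residual_helmholtz(delta, tau):
    """Dimensionless residual Helmholtz energy for the placeholder terms above."""
    total = 0.0
    for i, t, n, p in TERMS:
        damping = math.exp(-delta**p) if p > 0 else 1.0
        total += n * delta**i * tau**t * damping
    return total

# delta = rho/rho_r and tau = T_r/T are the reduced variables defined above
print(residual_helmholtz(delta=1.2, tau=0.9))
```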
== List of further equations of state ==
=== Stiffened equation of state ===
When considering water under very high pressures, in situations such as underwater nuclear explosions, sonic shock lithotripsy, and sonoluminescence, the stiffened equation of state is often used:
{\displaystyle p=\rho (\gamma -1)e-\gamma p^{0}\,}
where {\displaystyle e} is the internal energy per unit mass, {\displaystyle \gamma } is an empirically determined constant typically taken to be about 6.1, and {\displaystyle p^{0}} is another constant, representing the molecular attraction between water molecules. The magnitude of the correction is about 2 gigapascals (20,000 atmospheres).
The equation is stated in this form because the speed of sound in water is given by
{\displaystyle c^{2}=\gamma \left(p+p^{0}\right)/\rho }.
Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa).
This equation mispredicts the specific heat capacity of water but few simple alternatives are available for severely nonisentropic processes such as strong shocks.
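A minimal numerical illustration of this behavior is sketched below. It assumes γ = 6.1, a p⁰ chosen so that the correction term γp⁰ is about 2 GPa, and ordinary liquid-water density; these are assumptions for the example, not a vetted calibration. The sketch evaluates the stiffened-gas sound speed and shows that doubling the external pressure from 1 to 2 atm barely changes the effective pressure, consistent with water's near incompressibility:

```python
GAMMA = 6.1              # assumed stiffened-gas exponent for water
P0 = 2.0e9 / GAMMA       # Pa; chosen so that the correction term gamma*p0 is about 2 GPa
RHO = 1000.0             # kg/m^3, liquid water

def sound_speed(p, rho=RHO):
    """Speed of sound (m/s) from the stiffened equation of state."""
    return (GAMMA * (p + P0) / rho) ** 0.5

atm = 101325.0
print(sound_speed(1 * atm))              # ~1.4 km/s, close to the measured speed of sound in water
# Doubling the external pressure barely changes (p + p0), so the density response is tiny:
print((2 * atm + P0) / (1 * atm + P0))   # ratio ~1.0003, i.e. effectively incompressible
```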
=== Morse oscillator equation of state ===
An equation of state for the Morse oscillator has been derived, and it has the following form:
{\displaystyle p=\Gamma _{1}\nu +\Gamma _{2}\nu ^{2}}
where {\displaystyle \Gamma _{1}} is the first-order virial parameter, which depends on the temperature; {\displaystyle \Gamma _{2}} is the second-order virial parameter of the Morse oscillator, which depends on the parameters of the Morse oscillator in addition to the absolute temperature; and {\displaystyle \nu } is the fractional volume of the system.
=== Ultrarelativistic equation of state ===
An ultrarelativistic fluid has equation of state
{\displaystyle p=\rho _{m}c_{s}^{2}}
where {\displaystyle p} is the pressure, {\displaystyle \rho _{m}} is the mass density, and {\displaystyle c_{s}} is the speed of sound.
=== Ideal Bose equation of state ===
The equation of state for an ideal Bose gas is
{\displaystyle pV_{m}=RT~{\frac {\operatorname {Li} _{\alpha +1}(z)}{\zeta (\alpha )}}\left({\frac {T}{T_{c}}}\right)^{\alpha }}
where α is an exponent specific to the system (e.g. in the absence of a potential field, α = 3/2), z is exp(μ/kBT) where μ is the chemical potential, Li is the polylogarithm, ζ is the Riemann zeta function, and Tc is the critical temperature at which a Bose–Einstein condensate begins to form.
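Because the polylogarithm and the Riemann zeta function are the only special functions involved, the right-hand side is easy to evaluate numerically. The sketch below implements Li_s(z) by direct series summation (adequate for 0 < z ≤ 1 and s > 1) and computes the dimensionless factor multiplying RT, taking α = 3/2 as in the field-free case mentioned above:

```python
def polylog(s, z, terms=100000):
    """Li_s(z) by direct series summation; adequate for 0 < z <= 1 and s > 1."""
    return sum(z**k / k**s for k in range(1, terms + 1))

def bose_factor(z, T_over_Tc, alpha=1.5):
    """Dimensionless factor Li_{alpha+1}(z)/zeta(alpha) * (T/Tc)^alpha from the ideal Bose gas EOS."""
    zeta_alpha = polylog(alpha, 1.0)          # zeta(alpha) = Li_alpha(1)
    return polylog(alpha + 1, z) / zeta_alpha * T_over_Tc**alpha

# For z -> 1 at T = Tc the factor approaches zeta(5/2)/zeta(3/2), roughly 0.51
print(bose_factor(z=1.0, T_over_Tc=1.0))
print(bose_factor(z=0.5, T_over_Tc=2.0))
```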
=== Jones–Wilkins–Lee equation of state for explosives (JWL equation) ===
The equation of state from Jones–Wilkins–Lee is used to describe the detonation products of explosives.
{\displaystyle p=A\left(1-{\frac {\omega }{R_{1}V}}\right)\exp(-R_{1}V)+B\left(1-{\frac {\omega }{R_{2}V}}\right)\exp \left(-R_{2}V\right)+{\frac {\omega e_{0}}{V}}}
The ratio {\displaystyle V=\rho _{e}/\rho } is defined using {\displaystyle \rho _{e}}, the density of the explosive (solid part), and {\displaystyle \rho }, the density of the detonation products. The parameters {\displaystyle A}, {\displaystyle B}, {\displaystyle R_{1}}, {\displaystyle R_{2}} and {\displaystyle \omega } are given by several references. In addition, the initial density (solid part) {\displaystyle \rho _{0}}, speed of detonation {\displaystyle V_{D}}, Chapman–Jouguet pressure {\displaystyle P_{CJ}} and the chemical energy per unit volume of the explosive {\displaystyle e_{0}} are given in such references. These parameters are obtained by fitting the JWL-EOS to experimental results. Typical parameters for some explosives are listed in the table below.
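The JWL pressure is a direct function of the expansion ratio V once a parameter set is chosen. The sketch below evaluates the expression above with a parameter set loosely resembling published TNT values (A ≈ 371 GPa, B ≈ 3.2 GPa, R1 ≈ 4.15, R2 ≈ 0.95, ω ≈ 0.30, e0 ≈ 7 GPa); these numbers are assumptions for illustration only, not a vetted calibration:

```python
import math

def jwl_pressure(V, A, B, R1, R2, omega, e0):
    """JWL pressure (same units as A, B, e0) as a function of the expansion ratio V = rho_e / rho."""
    term1 = A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
    term2 = B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
    return term1 + term2 + omega * e0 / V

# Assumed TNT-like parameters, all pressures/energies in GPa (illustrative only)
params = dict(A=371.2, B=3.21, R1=4.15, R2=0.95, omega=0.30, e0=7.0)
for V in (1.0, 2.0, 5.0):
    print(V, jwl_pressure(V, **params))   # pressure falls rapidly as the products expand
```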
=== Others ===
Tait equation for water and other liquids. Several equations are referred to as the Tait equation.
Murnaghan equation of state
Birch–Murnaghan equation of state
Stacey–Brennan–Irvine equation of state
Modified Rydberg equation of state
Adapted polynomial equation of state
Johnson–Holmquist equation of state
Mie–Grüneisen equation of state
Anton-Schmidt equation of state
State-transition equation
== See also ==
Gas laws
Departure function
Table of thermodynamic equations
Real gas
Cluster expansion
Polytrope
== References ==
== External links ==
In chemistry, a salt bridge is a combination of two non-covalent interactions: hydrogen bonding and ionic bonding (Figure 1). Ion pairing is one of the most important noncovalent forces in chemistry, in biological systems, in different materials and in many applications such as ion pair chromatography. It is one of the most commonly observed contributions to the stability of the entropically unfavorable folded conformation of proteins. Although non-covalent interactions are known to be relatively weak interactions, small stabilizing interactions can add up to make an important contribution to the overall stability of a conformer. Not only are salt bridges found in proteins, but they can also be found in supramolecular chemistry. The thermodynamics of each are explored through experimental procedures to access the free energy contribution of the salt bridge to the overall free energy of the state.
== Salt bridges in chemical bonding ==
In water, formation of salt bridges or ion pairs is mostly driven by entropy, usually accompanied by unfavorable ΔH contributions on account of desolvation of the interacting ions upon association. Hydrogen bonds contribute to the stability of ion pairs with, e.g., protonated ammonium ions, and with anions formed by deprotonation, as in the case of carboxylate, phosphate, etc.; the association constants then depend on the pH. Entropic driving forces for ion pairing (in the absence of significant H-bonding contributions) are also found in methanol as solvent. In nonpolar solvents, contact ion pairs with very high association constants are formed; in the gas phase the association energies of e.g. alkali halides reach up to 200 kJ/mol. The Bjerrum or the Fuoss equation describes ion pair association as a function of the ion charges zA and zB and the dielectric constant ε of the medium; a corresponding plot of the stability ΔG vs. zAzB shows, for over 200 ion pairs, the expected linear correlation for a large variety of ions.
Inorganic as well as organic ions display, at moderate ionic strength I, similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of e.g. a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol. The stabilities of the alkali-ion pairs as a function of the anion charge z can be described by a more detailed equation.
== Salt bridges found in proteins ==
The salt bridge most often arises from the anionic carboxylate (RCOO−) of either aspartic acid or glutamic acid and the cationic ammonium (RNH3+) from lysine or the guanidinium (RNHC(NH2)2+) of arginine (Figure 2). Although these are the most common, other residues with ionizable side chains such as histidine, tyrosine, and serine can also participate, depending on outside factors perturbing their pKa's. The distance between the residues participating in the salt bridge is also cited as being important. The N-O distance required is less than 4 Å (400 pm). Amino acids greater than this distance apart do not qualify as forming a salt bridge. Due to the numerous ionizable side chains of amino acids found throughout a protein, the pH at which a protein is placed is crucial to its stability.
== Salt bridges found in protein - ligand complexes ==
Salt bridges also can form between a protein and small molecule ligands. Over 1100 unique protein-ligand complexes from the Protein Databank were found to form salt bridges with their protein targets, indicating that salt bridges are frequent in drug-protein interaction. These contain structures from different enzyme classes, including hydrolase, transferases, kinases, reductase, oxidoreductase, lyases, and G protein-coupled receptors (GPCRs).
== Methods for quantifying salt bridge stability in proteins ==
The contribution of a salt bridge to the overall stability to the folded state of a protein can be assessed through thermodynamic data gathered from mutagenesis studies and nuclear magnetic resonance techniques. Using a mutated pseudo-wild-type protein specifically mutated to prevent precipitation at high pH, the salt bridge’s contribution to the overall free energy of the folded protein state can be determined by performing a point-mutation, altering and, consequently, breaking the salt bridge. For example, a salt bridge was identified to exist in the T4 lysozyme between aspartic acid (Asp) at residue 70 and a histidine (His) at residue 31 (Figure 3). Site-directed mutagenesis with asparagine (Asn) (Figure 4) was done obtaining three new mutants: Asp70Asn His31 (Mutant 1), Asp70 His31Asn (Mutant 2), and Asp70Asn His31Asn (Double Mutant).
Once the mutants have been established, two methods can be employed to calculate the free energy associated with a salt bridge. One method involves the observation of the melting temperature of the wild-type protein versus that of the three mutants. The denaturation can be monitored through a change in circular dichroism. A reduction in melting temperature indicates a reduction in stability. This is quantified through a method described by Becktel and Schellman, where the free energy difference between the two is calculated through ΔTΔS. There are some issues with this calculation, and it can only be used with very accurate data. In the T4 lysozyme example, ΔS of the pseudo-wild-type had previously been reported at pH 5.5, so the midpoint temperature difference of 11 °C at this pH multiplied by the reported ΔS of 360 cal/(mol·K) (1.5 kJ/(mol·K)) yields a free energy change of about −4 kcal/mol (−17 kJ/mol). This value corresponds to the amount of free energy contributed to the stability of the protein by the salt bridge.
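The arithmetic in this example can be reproduced directly. The sketch below multiplies the 11 °C midpoint shift by the reported ΔS and converts units (1 kcal = 4.184 kJ), under the simplifying assumption that ΔS is constant over this temperature range:

```python
delta_Tm = 11.0     # K, midpoint (melting) temperature difference, pseudo-wild-type vs mutant
delta_S = 360.0     # cal/(mol*K), reported unfolding entropy at pH 5.5

delta_G_cal = delta_Tm * delta_S          # cal/mol
delta_G_kcal = delta_G_cal / 1000.0       # ~3.96 kcal/mol, i.e. about 4 kcal/mol
delta_G_kj = delta_G_kcal * 4.184         # ~17 kJ/mol
print(delta_G_kcal, delta_G_kj)
```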
The second method utilizes nuclear magnetic resonance spectroscopy to calculate the free energy of the salt bridge. A titration is performed while recording the chemical shift corresponding to the protons of the carbon adjacent to the carboxylate or ammonium group. The midpoint of the titration curve corresponds to the pKa, the pH at which the ratio of protonated to deprotonated molecules is 1:1. Continuing with the T4 lysozyme example, a titration curve is obtained through observation of a shift in the C2 proton of histidine 31 (Figure 5). Figure 5 shows the shift in the titration curve between the wild type and the mutant in which Asp70 is replaced by Asn. The salt bridge is formed between the deprotonated Asp70 and the protonated His31. This interaction causes the shift seen in His31's pKa. In the unfolded wild-type protein, where the salt bridge is absent, His31 is reported to have a pKa of 6.8 in H2O buffers of moderate ionic strength. Figure 5 shows a wild-type pKa of 9.05. This difference in pKa is attributed to His31's interaction with Asp70. To maintain the salt bridge, His31 will attempt to keep its proton as long as possible. When the salt bridge is disrupted, as in the mutant D70N, the pKa shifts back to a value of 6.9, much closer to that of His31 in the unfolded state.
The difference in pKa can be quantified to reflect the salt bridge’s contribution to free energy. Using Gibbs free energy: ΔG = −RT ln(Keq), where R is the universal gas constant, T is the temperature in kelvins, and Keq is the equilibrium constant of a reaction in equilibrium. The deprotonation of His31 is an acid equilibrium reaction with a special Keq known as the acid dissociation constant, Ka: His31-H+ ⇌ His31 + H+. The pKa is then related to Ka by the following: pKa = −log(Ka). Calculation of the free energy difference of the mutant and wild-type can now be done using the free energy equation, the definition of pKa, the observed pKa values, and the relationship between natural logarithms and logarithms. In the T4 lysozyme example, this approach yielded a calculated contribution of about 3 kcal/mol to the overall free energy. A similar approach can be taken with the other participant in the salt bridge, such as Asp70 in the T4 lysozyme example, by monitoring its shift in pKa after mutation of His31.
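The calculation described above reduces to ΔΔG = RT ln(10)·ΔpKa; a minimal numerical check (room temperature assumed) using the pKa values quoted above for His31 reproduces the roughly 3 kcal/mol figure.

import math

R = 1.987e-3           # kcal/(mol*K), gas constant
T = 298.0              # K, assumed room temperature

pKa_wild_type = 9.05   # folded wild type, quoted above
pKa_mutant = 6.9       # D70N mutant, quoted above

ddG = R * T * math.log(10) * (pKa_wild_type - pKa_mutant)
print(f"~{ddG:.1f} kcal/mol")  # ~2.9 kcal/mol, i.e. about 3 kcal/mol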
A word of caution when choosing the appropriate experiment involves the location of the salt bridge within the protein. The environment plays a large role in the interaction. At high ionic strengths, the salt bridge can be completely masked, since an electrostatic interaction is involved. The His31–Asp70 salt bridge in T4 lysozyme was buried within the protein. Entropy plays a larger role in surface salt bridges, where residues that normally have the ability to move are constrained by their electrostatic interaction and hydrogen bonding. This has been shown to decrease entropy enough to nearly erase the contribution of the interaction. Surface salt bridges can be studied similarly to buried salt bridges, employing double mutant cycles and NMR titrations. Although cases exist where buried salt bridges contribute to stability, exceptions do exist, and buried salt bridges can display a destabilizing effect. Likewise, surface salt bridges can, under certain conditions, display a stabilizing effect. The stabilizing or destabilizing effect must be assessed on a case-by-case basis, and few blanket statements can be made.
== Supramolecular chemistry ==
Supramolecular chemistry is a field concerned with non-covalent interactions between macromolecules. Salt bridges have been used by chemists within this field in both diverse and creative ways, including sensing of anions, the synthesis of molecular capsules and double helical polymers.
=== Anion complexation ===
Major contributions of supramolecular chemistry have been devoted to the recognition and sensing of anions. Ion pairing is the most important driving force for anion complexation, but selectivity, e.g. within the halide series, has been achieved, mostly through hydrogen-bonding contributions.
=== Molecular capsules ===
Molecular capsules are chemical scaffolds designed to capture and hold a guest molecule (see molecular encapsulation). Szumna and coworkers developed a novel molecular capsule with a chiral interior. This capsule is made of two halves, like a plastic Easter egg (Figure 6). Salt bridge interactions between the two halves cause them to self-assemble in solution (Figure 7). The assembled capsules are stable even when heated to 60 °C.
=== Double helical polymers ===
Yashima and coworkers have used salt bridges to construct several polymers that adopt a double-helix conformation much like DNA. In one example, they incorporated platinum to create a double helical metallopolymer. Starting from their monomer and platinum(II) biphenyl (Figure 8), the metallopolymer self-assembles through a series of ligand exchange reactions. The two halves of the monomer are anchored together through the salt bridge between the deprotonated carboxylate and the protonated nitrogens.
== References == | Wikipedia/Salt_bridge_(protein_and_supramolecular) |
An intramolecular force (from Latin intra- 'within') is any force that binds together the atoms making up a molecule. Intramolecular forces are stronger than the intermolecular forces that govern the interactions between molecules.
== Types ==
The classical model identifies three main types of chemical bonds (ionic, covalent, and metallic), distinguished by the degree of charge separation between participating atoms. The characteristics of the bond formed can be predicted from the properties of the constituent atoms, namely electronegativity. The bond types differ in the magnitude of their bond enthalpies, a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character is directly proportional to the difference in electronegativity of the bonded atoms.
=== Ionic bond ===
An ionic bond can be approximated as the complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. Electrons in an ionic bond tend to be found mostly around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9 (a greater difference in electronegativity results in a stronger bond); this is often described as one atom donating electrons to the other. This type of bond is generally formed between a metal and a nonmetal, such as sodium and chlorine in NaCl. Sodium gives an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
=== Covalent bond ===
In a true covalent bond, the electrons are uniformly shared between the two atoms of the bond; there is little or no charge separation. Covalent bonds are generally formed between two nonmetals. There are several types of covalent bonds: in polar covalent bonds, electrons are more likely to be found around one of the two atoms, whereas in nonpolar covalent bonds, electrons are evenly shared. Homonuclear diatomic molecules are purely covalent. The polarity of a covalent bond is determined by the electronegativities of each atom and thus a polar covalent bond has a dipole moment pointing from the partial positive end to the partial negative end. Polar covalent bonds represent an intermediate type in which the electrons are neither completely transferred from one atom to another nor evenly shared.
=== Metallic bond ===
Metallic bonds generally form within a pure metal or metal alloy. Metallic electrons are generally delocalized; the result is a large number of free electrons around positive nuclei, sometimes called an electron sea.
== Bond formation ==
Bonds are formed by atoms so that they are able to achieve a lower energy state. Free atoms will have more energy than a bonded atom. This is because some energy is released during bond formation, allowing the entire system to achieve a lower energy state. The bond length, or the minimum separating distance between two atoms participating in bond formation, is determined by their repulsive and attractive forces along the internuclear direction. As the two atoms get closer and closer, the positively charged nuclei repel, creating a force that attempts to push the atoms apart. As the two atoms get further apart, attractive forces work to pull them back together. Thus an equilibrium bond length is achieved and is a good measure of bond stability.
== Biochemistry ==
Intramolecular forces are extremely important in the field of biochemistry, where they come into play at the most basic levels of biological structures. Intramolecular forces such as disulfide bonds give proteins and DNA their structure. Proteins derive their structure from the intramolecular forces that shape them and hold them together. The main source of structure in these molecules is the interaction between the amino acid residues that form the foundation of proteins. The interactions between residues of the same protein form the secondary structure of the protein, allowing for the formation of beta sheets and alpha helices, which are important structures for proteins and, in the case of alpha helices, for DNA.
== See also ==
Chemical bond
Intermolecular force
== References == | Wikipedia/Intramolecular_force |
An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction
or repulsion which act between atoms and other types of neighbouring particles (e.g. atoms or ions). Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics.
The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling.
Attractive intermolecular forces are categorized into the following types:
Hydrogen bonding
Ion–dipole forces and ion–induced dipole force
Cation–π, σ–π and π–π bonding
Van der Waals forces – Keesom force, Debye force, and London dispersion force
Cation–cation bonding
Salt bridge (protein and supramolecular)
Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity, pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential.
In the broadest sense, it can be understood as such interactions between any particles (molecules, atoms, ions and molecular ions) in which the formation of chemical (that is, ionic, covalent or metallic) bonds does not occur. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme or between a molecule and a catalyst, but several such weak interactions with the required spatial configuration of the active center of the enzyme lead to significant restructuring that changes the energy state of the molecules or substrate, which ultimately leads to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme; the importance of these interactions is therefore especially great in biochemistry and molecular biology, and is the basis of enzymology.)
== Hydrogen bonding ==
A hydrogen bond refers to the attraction between a hydrogen atom that is covalently bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine, and another highly electronegative atom. The hydrogen bond is often described as a strong electrostatic interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals interaction, produces interatomic distances shorter than the sum of the van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in the hydrogen bond is termed the acceptor molecule. The number of active pairs is determined by the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has.
Though both are not depicted in the diagram, water molecules have four active bonds. The oxygen atom’s two lone pairs interact with a hydrogen each, forming two additional hydrogen bonds, and the second hydrogen atom also interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
== Salt bridge ==
The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge.
It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contacts determined only by the van der Waals radii of the ions.
Inorganic as well as organic ions display in water, at moderate ionic strength I, similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of e.g. a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
== Dipole–dipole and similar interactions ==
Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3).
{\displaystyle {\overset {\color {Red}\delta +}{{\ce {H}}}}-{\overset {\color {Red}\delta -}{{\ce {Cl}}}}\cdots {\overset {\color {Red}\delta +}{{\ce {H}}}}-{\overset {\color {Red}\delta -}{{\ce {Cl}}}}}
Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole.
The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces".
=== Ion–dipole and ion–induced dipole forces ===
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding.
An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water, which gives rise to the hydration enthalpy. The polar water molecules surround the ions in water, and the energy released during this process is known as the hydration enthalpy. This interaction is of immense importance in explaining the stability of various ions (like Cu2+) in water.
An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule.
== Van der Waals forces ==
The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies.
=== Keesom force (permanent dipole – permanent dipole) ===
The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent.
They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle averaged interaction is given by the following equation:
{\displaystyle {\frac {-d_{1}^{2}d_{2}^{2}}{24\pi ^{2}\varepsilon _{0}^{2}\varepsilon _{r}^{2}k_{\text{B}}Tr^{6}}}=V,}
where d = electric dipole moment, ε0 = permittivity of free space, εr = dielectric constant of the surrounding material, T = temperature, kB = Boltzmann constant, and r = distance between molecules.
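As a rough numerical illustration of the expression above, the following sketch evaluates it for two water-like dipoles (about 1.85 D each) an assumed 0.4 nm apart in vacuum at room temperature; all input values are assumptions chosen only for illustration.

import math

# Keesom (angle-averaged permanent dipole - permanent dipole) energy:
#   V = -d1^2 d2^2 / (24 pi^2 eps0^2 epsr^2 kB T r^6)
eps0 = 8.854e-12        # F/m, vacuum permittivity
kB = 1.381e-23          # J/K, Boltzmann constant
debye = 3.336e-30       # C*m per debye

d1 = d2 = 1.85 * debye  # water-like dipole moments (assumed)
eps_r = 1.0             # vacuum
T = 298.0               # K
r = 0.4e-9              # m, assumed separation

V = -(d1**2 * d2**2) / (24 * math.pi**2 * eps0**2 * eps_r**2 * kB * T * r**6)
print(f"{V:.2e} J per pair  (~{V * 6.022e23 / 1000:.1f} kJ/mol)")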
=== Debye force (permanent dipoles–induced dipoles) ===
The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions.
The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and an induced multipole (induced by the former di/multipole) on another. This interaction is called the Debye force, named after Peter J. W. Debye.
One example of an induction interaction between permanent dipole and induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle averaged interaction is given by the following equation:
{\displaystyle {\frac {-d_{1}^{2}\alpha _{2}}{16\pi ^{2}\varepsilon _{0}^{2}\varepsilon _{r}^{2}r^{6}}}=V,}
where α2 = polarizability.
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force.
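A corresponding sketch for the induction term above, using HCl's dipole moment (about 1.08 D) and argon's polarizability volume (about 1.64 Å³) as assumed inputs; the conversion α = 4πε0·α′ from polarizability volume to SI polarizability is an assumption about the units intended in the formula.

import math

# Debye (permanent dipole - induced dipole) energy:
#   V = -d1^2 alpha2 / (16 pi^2 eps0^2 epsr^2 r^6)
eps0 = 8.854e-12              # F/m
debye = 3.336e-30             # C*m per debye

d1 = 1.08 * debye             # HCl dipole moment (assumed)
alpha_volume = 1.64e-30       # m^3, Ar polarizability volume (assumed)
alpha2 = 4 * math.pi * eps0 * alpha_volume  # SI polarizability, C*m^2/V
eps_r = 1.0
r = 0.4e-9                    # m, assumed separation

V = -(d1**2 * alpha2) / (16 * math.pi**2 * eps0**2 * eps_r**2 * r**6)
print(f"{V:.2e} J per pair  (~{V * 6.022e23 / 1000:.3f} kJ/mol)")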
=== London dispersion force (fluctuating dipole–induced dipole interaction) ===
The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom–atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range.
== Relative strength of forces ==
This comparison is approximate. The actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that, for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But it is not so for large dynamic systems such as enzyme molecules interacting with substrate molecules. Here the numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which these bonds cause some covalent bonds to be broken while others are formed, in this way enabling the thousands of enzymatic reactions so important for living organisms.
== Effect on the behavior of gases ==
Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor).
In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature.
When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.
== Quantum mechanical theories ==
Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals forces and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful quantum chemical methods for visualizing such intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system; London dispersion forces play a large role in this index.
Concerning electron density topology, methods based on the electron density gradient have emerged recently, notably with the development of the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology.
== See also ==
== References == | Wikipedia/Debye_force |
In chemistry, the term substrate is highly context-dependent. Broadly speaking, it can refer either to a chemical species being observed in a chemical reaction, or to a surface on which other chemical reactions or microscopy are performed.
In the former sense, a reagent is added to the substrate to generate a product through a chemical reaction. The term is used in a similar sense in synthetic and organic chemistry, where the substrate is the chemical of interest that is being modified. In biochemistry, an enzyme substrate is the material upon which an enzyme acts. When referring to Le Chatelier's principle, the substrate is the reagent whose concentration is changed.
In the latter sense, it may refer to a surface on which other chemical reactions are performed or play a supporting role in a variety of spectroscopic and microscopic techniques, as discussed in the first few subsections below.
== Microscopy ==
In three of the most common nano-scale microscopy techniques, atomic force microscopy (AFM), scanning tunneling microscopy (STM), and transmission electron microscopy (TEM), a substrate is required for sample mounting. Substrates are often thin and relatively free of chemical features or defects. Typically silver, gold, or silicon wafers are used because of their ease of manufacturing and lack of interference in the microscopy data. Samples are deposited onto the substrate in fine layers, where the substrate acts as a solid support of reliable thickness and malleability. Smoothness of the substrate is especially important for these types of microscopy because they are sensitive to very small changes in sample height.
Various other substrates are used in specific cases to accommodate a wide variety of samples. Thermally-insulating substrates are required for AFM of graphite flakes for instance, and conductive substrates are required for TEM. In some contexts, the word substrate can be used to refer to the sample itself, rather than the solid support on which it is placed.
== Spectroscopy ==
Various spectroscopic techniques also require samples to be mounted on substrates, such as powder diffraction. This type of diffraction, which involves directing high-powered X-rays at powder samples to deduce crystal structures, is often performed with an amorphous substrate such that it does not interfere with the resulting data collection. Silicon substrates are also commonly used because of their cost-effective nature and relatively little data interference in X-ray collection.
Single-crystal substrates are useful in powder diffraction because their diffraction pattern is distinguishable from that of the sample of interest, as the two can be differentiated by phase.
== Atomic layer deposition ==
In atomic layer deposition, the substrate acts as an initial surface on which reagents can combine to precisely build up chemical structures. A wide variety of substrates are used depending on the reaction of interest, but they frequently bind the reagents with some affinity to allow sticking to the substrate.
The substrate is exposed to different reagents sequentially and washed in between to remove excess. A substrate is critical in this technique because the first layer needs a place to bind to such that it is not lost when exposed to the second or third set of reagents.
== Biochemistry ==
In biochemistry, the substrate is a molecule upon which an enzyme acts. Enzymes catalyze chemical reactions involving the substrate(s). In the case of a single substrate, the substrate bonds with the enzyme active site, and an enzyme-substrate complex is formed. The substrate is transformed into one or more products, which are then released from the active site. The active site is then free to accept another substrate molecule. In the case of more than one substrate, these may bind in a particular order to the active site, before reacting together to produce products. A substrate is called 'chromogenic' if it gives rise to a coloured product when acted on by an enzyme. In histological enzyme localization studies, the colored product of enzyme action can be viewed under a microscope, in thin sections of biological tissues. Similarly, a substrate is called 'fluorogenic' if it gives rise to a fluorescent product when acted on by an enzyme.
For example, curd formation (rennet coagulation) is a reaction that occurs upon adding the enzyme rennin to milk. In this reaction, the substrate is a milk protein (e.g., casein) and the enzyme is rennin. The products are two polypeptides that have been formed by the cleavage of the larger peptide substrate. Another example is the chemical decomposition of hydrogen peroxide carried out by the enzyme catalase. As enzymes are catalysts, they are not changed by the reactions they carry out. The substrate(s), however, is/are converted to product(s). Here, hydrogen peroxide is converted to water and oxygen gas.
E + S ⇌ ES → EP ⇌ E + P
Where E is enzyme, S is substrate, and P is product
While the first (binding) and third (unbinding) steps are, in general, reversible, the middle step may be irreversible (as in the rennin and catalase reactions just mentioned) or reversible (e.g. many reactions in the glycolysis metabolic pathway).
By increasing the substrate concentration, the rate of reaction will increase due to the likelihood that the number of enzyme-substrate complexes will increase; this occurs until the enzyme concentration becomes the limiting factor.
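This saturation behaviour is commonly described by Michaelis–Menten kinetics (not named explicitly above, but the standard model for a single-substrate enzyme); a minimal sketch with assumed parameter values:

# Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S]).
# The rate rises with substrate concentration and then plateaus once the
# enzyme (through Vmax) becomes the limiting factor. Parameter values are assumed.
Vmax = 1.0  # maximal rate (arbitrary units), proportional to enzyme concentration
Km = 0.5    # substrate concentration at half-maximal rate (mM, assumed)

def rate(substrate_mM: float) -> float:
    return Vmax * substrate_mM / (Km + substrate_mM)

for S in (0.1, 0.5, 2.0, 10.0, 100.0):
    print(f"[S] = {S:6.1f} mM  ->  v = {rate(S):.3f}")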
=== Substrate promiscuity ===
Although enzymes are typically highly specific, some are able to perform catalysis on more than one substrate, a property termed enzyme promiscuity. An enzyme may have many native substrates and broad specificity (e.g. oxidation by cytochrome p450s) or it may have a single native substrate with a set of similar non-native substrates that it can catalyse at some lower rate. The substrates that a given enzyme may react with in vitro, in a laboratory setting, may not necessarily reflect the physiological, endogenous substrates of the enzyme's reactions in vivo. That is to say that enzymes do not necessarily perform all the reactions in the body that may be possible in the laboratory. For example, while fatty acid amide hydrolase (FAAH) can hydrolyze the endocannabinoids 2-arachidonoylglycerol (2-AG) and anandamide at comparable rates in vitro, genetic or pharmacological disruption of FAAH elevates anandamide but not 2-AG, suggesting that 2-AG is not an endogenous, in vivo substrate for FAAH. In another example, the N-acyl taurines (NATs) are observed to increase dramatically in FAAH-disrupted animals, but are actually poor in vitro FAAH substrates.
=== Sensitivity ===
Sensitive substrates, also known as sensitive index substrates, are drugs that demonstrate an increase in AUC of ≥5-fold with strong index inhibitors of a given metabolic pathway in clinical drug-drug interaction (DDI) studies.
Moderate sensitive substrates are drugs that demonstrate an increase in AUC of ≥2 to <5-fold with strong index inhibitors of a given metabolic pathway in clinical DDI studies.
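The AUC fold-change thresholds above lend themselves to a simple classification rule; a minimal sketch in which the function name and return strings are illustrative only, not taken from any regulatory guideline:

# Classify a substrate by the fold-increase in AUC observed with a strong
# index inhibitor, using the >=5-fold and 2- to <5-fold thresholds above.
def classify_substrate(auc_fold_increase: float) -> str:
    if auc_fold_increase >= 5:
        return "sensitive substrate"
    if auc_fold_increase >= 2:
        return "moderate sensitive substrate"
    return "neither"

print(classify_substrate(7.2))  # sensitive substrate
print(classify_substrate(3.0))  # moderate sensitive substrate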
==== Interaction between substrates ====
Metabolism by the same cytochrome P450 isozyme can result in several clinically significant drug-drug interactions.
== See also ==
Limiting reagent
Reaction progress kinetic analysis
Solvent
== References == | Wikipedia/Substrate_(chemistry) |
Garland Science was a publishing group that specialized in developing textbooks in a wide range of life sciences subjects, including cell and molecular biology, immunology, protein chemistry, genetics, and bioinformatics. It was a subsidiary of the Taylor & Francis Group.
== History ==
The firm was founded as Garland Publishing in 1969 by Gavin Borden (1939–1991). Initially it published "18th-century literary criticism". By the late 1970s it was mainly publishing academic reference books along with facsimile and reprint editions for niche markets.
Notable book series published by Garland Publishing included the Garland Reference Library of the Humanities (1975–), the Garland Reference Library of Social Science (1983–), and Garland Medieval Bibliographies (1989–). The Garland Encyclopedia of World Music (10 volumes), originally published by Garland Publishing, is now published by Routledge, another imprint of the Taylor & Francis Group.
In 1984 the firm published a new edition of James Joyce's Ulysses, under the title of Ulysses: A Critical and Synoptic Edition. Edited by Hans Walter Gabler, it was intended to correct "almost 5,000 omissions, transpositions and other errors in the original text" as published in 1922.
In 1983 the firm began publishing scientific textbooks. In 1997 the firm was acquired by Taylor & Francis and published under the name of Garland Science Publishing or Garland Science.
One Garland Science success was the textbook Molecular Biology of the Cell (authors include Bruce Alberts and Peter Walter; James D. Watson was a previous author), which has been lauded as "the most influential cell biology textbook of its time". Other notable textbooks published by Garland Science included The Biology of Cancer (by Robert Weinberg), Immunobiology (authors including Charles Janeway and Kenneth Murphy), Molecular Biology of the Cell: The Problems Book (by John Wilson and Tim Hunt), Essential Cell Biology (Bruce Alberts et al.), The Immune System (Peter Parham), Molecular Driving Forces (Ken A. Dill & Sarina Bromberg), and Physical Biology of the Cell (Rob Phillips, Jane Kondev & Julie Theriot).
As of 2018, the Garland Science website had been shut down and their major textbooks have been sold to W. W. Norton & Company.
== References ==
== External links ==
Official website | Wikipedia/Garland_Science |
After the explanation of van der Waals forces by Fritz London, several scientists soon realised that his definition could be extended from the interaction of two molecules with induced dipoles to macro-scale objects by summing all of the forces between the molecules in each of the bodies involved. The theory is named after H. C. Hamaker, who derived the interaction between two spheres, a sphere and a wall, and presented a general discussion in a heavily cited 1937 paper.
The interaction of two bodies is then treated as the pairwise interaction of a set of N molecules at positions Ri (i = 1, 2, ..., N). The distance between the molecules i and j is then:
{\displaystyle R_{ij}=|R_{i}-R_{j}|}
The interaction energy of the system is taken to be:
{\displaystyle V_{\mathrm {int} }^{1,2,\ldots ,N}={\frac {1}{2}}\sum _{i=1}^{N}\sum _{j=1(\neq i)}^{N}V_{\mathrm {int} }^{ij}(R_{ij})}
where {\displaystyle V_{\mathrm {int} }^{ij}} is the interaction of molecules i and j in the absence of the influence of other molecules.
The theory is, however, only an approximation, because it assumes that the pairwise interactions can be treated independently; the theory must also be adjusted to take quantum perturbation theory into account.
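A minimal numerical sketch of the pairwise-summation idea: each molecule pair is assigned a London-type −C/r⁶ interaction, and the cross-body part of the double sum above is evaluated directly. The coefficient and coordinates are arbitrary placeholders, and retardation and many-body corrections are ignored.

import itertools
import math

# Pairwise (Hamaker-style) summation of -C/r^6 interactions between the
# molecules of two bodies; C and the positions are illustrative values only.
C = 1.0e-77  # J*m^6, assumed London coefficient

def pair_energy(ri, rj):
    r = math.dist(ri, rj)
    return -C / r**6

# Two small "bodies", each represented by a few molecules (coordinates in metres)
body1 = [(0.0, 0.0, k * 3e-10) for k in range(3)]
body2 = [(1.0e-9, 0.0, k * 3e-10) for k in range(3)]

# Only cross-body pairs contribute to the body-body interaction energy
V = sum(pair_energy(ri, rj) for ri, rj in itertools.product(body1, body2))
print(f"total body-body interaction energy ~ {V:.2e} J")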
== References == | Wikipedia/Hamaker_theory |
In chemistry, hydration energy (also hydration enthalpy) is the amount of energy released when one mole of ions undergoes solvation. Hydration energy is one component in the quantitative analysis of solvation; it is the particular case in which the solvent is water. The value of hydration energies is one of the most challenging aspects of structural prediction. Upon dissolving a salt in water, the cations and anions interact with the negative and positive dipoles of the water, respectively. The trade-off between these interactions and those within the crystalline solid comprises the hydration energy.
== Examples ==
If the hydration energy is greater than the lattice energy, then the enthalpy of solution is negative (heat is released), otherwise it is positive (heat is absorbed).
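A minimal sketch of the sign logic stated above, treating the lattice and hydration energies as positive magnitudes in kJ/mol; the numbers are placeholders, not measured values for any particular salt.

# Enthalpy of solution estimated from lattice and hydration energy magnitudes.
# If hydration releases more energy than is needed to break up the lattice,
# dissolution is exothermic (negative enthalpy of solution); otherwise endothermic.
def enthalpy_of_solution(lattice_energy: float, hydration_energy: float) -> float:
    return lattice_energy - hydration_energy  # kJ/mol; negative = heat released

for name, lattice, hydration in [("salt A", 700.0, 780.0),
                                 ("salt B", 700.0, 650.0)]:
    dH = enthalpy_of_solution(lattice, hydration)
    kind = "heat released" if dH < 0 else "heat absorbed"
    print(f"{name}: dH_solution = {dH:+.0f} kJ/mol ({kind})")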
The hydration energy should not be confused with solvation energy, which is the change in Gibbs free energy (not enthalpy) as solute in the gaseous state is dissolved. If the solvation energy is positive, then the solvation process is endergonic; otherwise, it is exergonic.
For instance, water warms when treated with CaCl2 (anhydrous calcium chloride) as a consequence of the large heat of hydration. However, the hexahydrate, CaCl2·6H2O cools the water upon dissolution. The latter happens because the hydration energy does not completely overcome the lattice energy, and the remainder has to be taken from the water in order to compensate the energy loss.
The hydration energies of the gaseous Li+, Na+, and Cs+ are respectively 520, 405, and 265 kJ/mol.
== See also ==
Enthalpy of solution
Heat of dilution
Hydrate
Hydrational fluid
Ionization energy
== References == | Wikipedia/Hydration_energy |
In differential geometry, the Ricci curvature tensor, named after Gregorio Ricci-Curbastro, is a geometric object which is determined by a choice of Riemannian or pseudo-Riemannian metric on a manifold. It can be considered, broadly, as a measure of the degree to which the geometry of a given metric tensor differs locally from that of ordinary Euclidean space or pseudo-Euclidean space.
The Ricci tensor can be characterized by measurement of how a shape is deformed as one moves along geodesics in the space. In general relativity, which involves the pseudo-Riemannian setting, this is reflected by the presence of the Ricci tensor in the Raychaudhuri equation. Partly for this reason, the Einstein field equations propose that spacetime can be described by a pseudo-Riemannian metric, with a strikingly simple relationship between the Ricci tensor and the matter content of the universe.
Like the metric tensor, the Ricci tensor assigns to each tangent space of the manifold a symmetric bilinear form (Besse 1987, p. 43). Broadly, one could analogize the role of the Ricci curvature in Riemannian geometry to that of the Laplacian in the analysis of functions; in this analogy, the Riemann curvature tensor, of which the Ricci curvature is a natural by-product, would correspond to the full matrix of second derivatives of a function. However, there are other ways to draw the same analogy.
For three-dimensional manifolds, the Ricci tensor contains all of the information which in higher dimensions is encoded by the more complicated Riemann curvature tensor. In part, this simplicity allows for the application of many geometric and analytic tools, which led to the solution of the Poincaré conjecture through the work of Richard S. Hamilton and Grigori Perelman.
In differential geometry, the determination of lower bounds on the Ricci tensor on a Riemannian manifold would allow one to extract global geometric and topological information by comparison (cf. comparison theorem) with the geometry of a constant curvature space form. This is since lower bounds on the Ricci tensor can be successfully used in studying the length functional in Riemannian geometry, as first shown in 1941 via Myers's theorem.
One common source of the Ricci tensor is that it arises whenever one commutes the covariant derivative with the tensor Laplacian. This, for instance, explains its presence in the Bochner formula, which is used ubiquitously in Riemannian geometry. For example, this formula explains why the gradient estimates due to Shing-Tung Yau (and their developments such as the Cheng-Yau and Li-Yau inequalities) nearly always depend on a lower bound for the Ricci curvature.
In 2007, John Lott, Karl-Theodor Sturm, and Cedric Villani demonstrated decisively that lower bounds on Ricci curvature can be understood entirely in terms of the metric space structure of a Riemannian manifold, together with its volume form. This established a deep link between Ricci curvature and Wasserstein geometry and optimal transport, which is presently the subject of much research.
== Definition ==
Suppose that {\displaystyle \left(M,g\right)} is an {\displaystyle n}-dimensional Riemannian or pseudo-Riemannian manifold, equipped with its Levi-Civita connection {\displaystyle \nabla }. The Riemann curvature of {\displaystyle M} is a map which takes smooth vector fields {\displaystyle X}, {\displaystyle Y}, and {\displaystyle Z}, and returns the vector field
{\displaystyle R(X,Y)Z:=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z}
on vector fields {\displaystyle X,Y,Z}. Since {\displaystyle R} is a tensor field, for each point {\displaystyle p\in M}, it gives rise to a (multilinear) map:
{\displaystyle \operatorname {R} _{p}:T_{p}M\times T_{p}M\times T_{p}M\to T_{p}M.}
Define for each point {\displaystyle p\in M} the map {\displaystyle \operatorname {Ric} _{p}:T_{p}M\times T_{p}M\to \mathbb {R} } by
{\displaystyle \operatorname {Ric} _{p}(Y,Z):=\operatorname {tr} {\big (}X\mapsto \operatorname {R} _{p}(X,Y)Z{\big )}.}
That is, having fixed {\displaystyle Y} and {\displaystyle Z}, then for any orthonormal basis {\displaystyle v_{1},\ldots ,v_{n}} of the vector space {\displaystyle T_{p}M}, one has
{\displaystyle \operatorname {Ric} _{p}(Y,Z)=\sum _{i=1}^{n}\langle \operatorname {R} _{p}(v_{i},Y)Z,v_{i}\rangle .}
It is a standard exercise of (multi)linear algebra to verify that this definition does not depend on the choice of the basis {\displaystyle v_{1},\ldots ,v_{n}}.
In abstract index notation,
{\displaystyle \mathrm {Ric} _{ab}=\mathrm {R} ^{c}{}_{bca}=\mathrm {R} ^{c}{}_{acb}.}
Sign conventions. Note that some sources define {\displaystyle R(X,Y)Z} to be what would here be called {\displaystyle -R(X,Y)Z;} they would then define {\displaystyle \operatorname {Ric} _{p}} as {\displaystyle -\operatorname {tr} (X\mapsto \operatorname {R} _{p}(X,Y)Z).} Although sign conventions differ about the Riemann tensor, they do not differ about the Ricci tensor.
=== Definition via local coordinates on a smooth manifold ===
Let {\displaystyle \left(M,g\right)} be a smooth Riemannian or pseudo-Riemannian {\displaystyle n}-manifold. Given a smooth chart {\displaystyle \left(U,\varphi \right)} one then has functions {\displaystyle g_{ij}:\varphi (U)\rightarrow \mathbb {R} } and {\displaystyle g^{ij}:\varphi (U)\rightarrow \mathbb {R} } for each {\displaystyle i,j=1,\ldots ,n} which satisfy
{\displaystyle \sum _{k=1}^{n}g^{ik}(x)g_{kj}(x)=\delta _{j}^{i}={\begin{cases}1&i=j\\0&i\neq j\end{cases}}}
for all {\displaystyle x\in \varphi (U)}. The latter shows that, expressed as matrices, {\displaystyle g^{ij}(x)=(g^{-1})_{ij}(x)}.
The functions {\displaystyle g_{ij}} are defined by evaluating {\displaystyle g} on coordinate vector fields, while the functions {\displaystyle g^{ij}} are defined so that, as a matrix-valued function, they provide an inverse to the matrix-valued function {\displaystyle x\mapsto g_{ij}(x)}.
Now define, for each {\displaystyle a}, {\displaystyle b}, {\displaystyle c}, {\displaystyle i}, and {\displaystyle j} between 1 and {\displaystyle n}, the functions
{\displaystyle {\begin{aligned}\Gamma _{ab}^{c}&:={\frac {1}{2}}\sum _{d=1}^{n}\left({\frac {\partial g_{bd}}{\partial x^{a}}}+{\frac {\partial g_{ad}}{\partial x^{b}}}-{\frac {\partial g_{ab}}{\partial x^{d}}}\right)g^{cd}\\R_{ij}&:=\sum _{a=1}^{n}{\frac {\partial \Gamma _{ij}^{a}}{\partial x^{a}}}-\sum _{a=1}^{n}{\frac {\partial \Gamma _{ai}^{a}}{\partial x^{j}}}+\sum _{a=1}^{n}\sum _{b=1}^{n}\left(\Gamma _{ab}^{a}\Gamma _{ij}^{b}-\Gamma _{ib}^{a}\Gamma _{aj}^{b}\right)\end{aligned}}}
as maps {\displaystyle \varphi (U)\rightarrow \mathbb {R} }.
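As an illustration of these coordinate formulas, the following sketch (assuming the SymPy library; the unit 2-sphere is used as a test case, with metric g = dθ² + sin²θ dφ²) computes the Christoffel symbols and the functions R_ij directly from a metric:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # metric of the unit 2-sphere
g_inv = g.inv()
n = 2

# Christoffel symbols Gamma^c_{ab} from the coordinate formula above
Gamma = [[[sum(sp.Rational(1, 2) * g_inv[c, d]
               * (sp.diff(g[b, d], x[a]) + sp.diff(g[a, d], x[b]) - sp.diff(g[a, b], x[d]))
               for d in range(n))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

def Gam(a, b, c):
    """Christoffel symbol Gamma^c_{ab}."""
    return Gamma[a][b][c]

# Ricci components R_ij from the coordinate formula above
Ric = sp.zeros(n, n)
for i in range(n):
    for j in range(n):
        Ric[i, j] = sp.simplify(
            sum(sp.diff(Gam(i, j, a), x[a]) for a in range(n))
            - sum(sp.diff(Gam(a, i, a), x[j]) for a in range(n))
            + sum(Gam(a, b, a) * Gam(i, j, b) - Gam(i, b, a) * Gam(a, j, b)
                  for a in range(n) for b in range(n)))

print(Ric)   # expected: Matrix([[1, 0], [0, sin(theta)**2]]), i.e. Ric = g on the unit sphere
```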
Now let {\displaystyle \left(U,\varphi \right)} and {\displaystyle \left(V,\psi \right)} be two smooth charts with {\displaystyle U\cap V\neq \emptyset }. Let {\displaystyle R_{ij}:\varphi (U)\rightarrow \mathbb {R} } be the functions computed as above via the chart {\displaystyle \left(U,\varphi \right)} and let {\displaystyle r_{ij}:\psi (V)\rightarrow \mathbb {R} } be the functions computed as above via the chart {\displaystyle \left(V,\psi \right)}.
Then one can check by a calculation with the chain rule and the product rule that
{\displaystyle R_{ij}(x)=\sum _{k,l=1}^{n}r_{kl}\left(\psi \circ \varphi ^{-1}(x)\right)D_{i}{\Big |}_{x}\left(\psi \circ \varphi ^{-1}\right)^{k}D_{j}{\Big |}_{x}\left(\psi \circ \varphi ^{-1}\right)^{l},}
where {\displaystyle D_{i}} is the first derivative along the {\displaystyle i}th direction of {\displaystyle \mathbb {R} ^{n}}.
This shows that the following definition does not depend on the choice of {\displaystyle \left(U,\varphi \right)}.
For any {\displaystyle p\in U}, define a bilinear map {\displaystyle \operatorname {Ric} _{p}:T_{p}M\times T_{p}M\rightarrow \mathbb {R} } by
{\displaystyle (X,Y)\in T_{p}M\times T_{p}M\mapsto \operatorname {Ric} _{p}(X,Y)=\sum _{i,j=1}^{n}R_{ij}(\varphi (p))X^{i}(p)Y^{j}(p),}
where {\displaystyle X^{1},\ldots ,X^{n}} and {\displaystyle Y^{1},\ldots ,Y^{n}} are the components of the tangent vectors at {\displaystyle p} in {\displaystyle X} and {\displaystyle Y} relative to the coordinate vector fields of {\displaystyle \left(U,\varphi \right)}.
It is common to abbreviate the above formal presentation in the following style:
The final line includes the demonstration that the bilinear map Ric is well-defined,
which is much easier to write out with the informal notation.
=== Comparison of the definitions ===
The two above definitions are identical. The formulas defining {\displaystyle \Gamma _{ij}^{k}} and {\displaystyle R_{ij}} in the coordinate approach have an exact parallel in the formulas defining the Levi-Civita connection, and the Riemann curvature via the Levi-Civita connection. Arguably, the definitions directly using local coordinates are preferable, since the "crucial property" of the Riemann tensor mentioned above requires {\displaystyle M} to be Hausdorff in order to hold. By contrast, the local coordinate approach only requires a smooth atlas. It is also somewhat easier to connect the "invariance" philosophy underlying the local approach with the methods of constructing more exotic geometric objects, such as spinor fields.
The complicated formula defining {\displaystyle R_{ij}} in the introductory section is the same as that in the following section. The only difference is that terms have been grouped so that it is easy to see that {\displaystyle R_{ij}=R_{ji}.}
== Properties ==
As can be seen from the symmetries of the Riemann curvature tensor, the Ricci tensor of a Riemannian manifold is symmetric, in the sense that
{\displaystyle \operatorname {Ric} (X,Y)=\operatorname {Ric} (Y,X)}
for all {\displaystyle X,Y\in T_{p}M.} It thus follows linear-algebraically that the Ricci tensor is completely determined by knowing the quantity {\displaystyle \operatorname {Ric} (X,X)} for all vectors {\displaystyle X} of unit length. This function on the set of unit tangent vectors is often also called the Ricci curvature, since knowing it is equivalent to knowing the Ricci curvature tensor.
The Ricci curvature is determined by the sectional curvatures of a Riemannian manifold, but generally contains less information. Indeed, if {\displaystyle \xi } is a vector of unit length on a Riemannian {\displaystyle n}-manifold, then {\displaystyle \operatorname {Ric} (\xi ,\xi )} is precisely {\displaystyle (n-1)} times the average value of the sectional curvature, taken over all the 2-planes containing {\displaystyle \xi }. There is an {\displaystyle (n-2)}-dimensional family of such 2-planes, and so only in dimensions 2 and 3 does the Ricci tensor determine the full curvature tensor. A notable exception is when the manifold is given a priori as a hypersurface of Euclidean space. The second fundamental form, which determines the full curvature via the Gauss–Codazzi equation, is itself determined by the Ricci tensor, and the principal directions of the hypersurface are also the eigendirections of the Ricci tensor. The tensor was introduced by Ricci for this reason.
As can be seen from the second Bianchi identity, one has
{\displaystyle \operatorname {div} \operatorname {Ric} ={\frac {1}{2}}dR,}
where {\displaystyle R} is the scalar curvature, defined in local coordinates as {\displaystyle g^{ij}R_{ij}.} This is often called the contracted second Bianchi identity.
== Direct geometric meaning ==
Near any point {\displaystyle p} in a Riemannian manifold {\displaystyle \left(M,g\right)}, one can define preferred local coordinates, called geodesic normal coordinates. These are adapted to the metric so that geodesics through {\displaystyle p} correspond to straight lines through the origin, in such a manner that the geodesic distance from {\displaystyle p} corresponds to the Euclidean distance from the origin. In these coordinates, the metric tensor is well-approximated by the Euclidean metric, in the precise sense that
{\displaystyle g_{ij}=\delta _{ij}+O\left(|x|^{2}\right).}
In fact, by taking the Taylor expansion of the metric applied to a Jacobi field along a radial geodesic in the normal coordinate system, one has
{\displaystyle g_{ij}=\delta _{ij}-{\frac {1}{3}}R_{ikjl}x^{k}x^{l}+O\left(|x|^{3}\right).}
In these coordinates, the metric volume element then has the following expansion at p:
{\displaystyle d\mu _{g}=\left[1-{\frac {1}{6}}R_{jk}x^{j}x^{k}+O\left(|x|^{3}\right)\right]d\mu _{\text{Euclidean}},}
which follows by expanding the square root of the determinant of the metric.
Thus, if the Ricci curvature {\displaystyle \operatorname {Ric} (\xi ,\xi )} is positive in the direction of a vector {\displaystyle \xi }, the conical region in {\displaystyle M} swept out by a tightly focused family of geodesic segments of length {\displaystyle \varepsilon } emanating from {\displaystyle p}, with initial velocity inside a small cone about {\displaystyle \xi }, will have smaller volume than the corresponding conical region in Euclidean space, at least provided that {\displaystyle \varepsilon } is sufficiently small. Similarly, if the Ricci curvature is negative in the direction of a given vector {\displaystyle \xi }, such a conical region in the manifold will instead have larger volume than it would in Euclidean space.
The Ricci curvature is essentially an average of curvatures in the planes including {\displaystyle \xi }. Thus if a cone emitted with an initially circular (or spherical) cross-section becomes distorted into an ellipse (ellipsoid), it is possible for the volume distortion to vanish if the distortions along the principal axes counteract one another. The Ricci curvature would then vanish along {\displaystyle \xi }. In physical applications, the presence of a nonvanishing sectional curvature does not necessarily indicate the presence of any mass locally; if an initially circular cross-section of a cone of worldlines later becomes elliptical, without changing its volume, then this is due to tidal effects from a mass at some other location.
== Applications ==
Ricci curvature plays an important role in general relativity, where it is
the key term in the Einstein field equations.
Ricci curvature also appears in the Ricci flow equation, first
introduced by Richard S. Hamilton in 1982, where certain
one-parameter families of Riemannian metrics are singled out as solutions of a
geometrically-defined partial differential equation.
In harmonic local coordinates the Ricci tensor can be expressed as (Chow & Knopf 2004, Lemma 3.32)
{\displaystyle R_{ij}=-{\frac {1}{2}}\Delta \left(g_{ij}\right)+{\text{lower-order terms}},}
where {\displaystyle g_{ij}} are the components of the metric tensor and {\displaystyle \Delta } is the Laplace–Beltrami operator.
This fact motivates the introduction of the Ricci flow equation
as a natural extension of the heat equation for the metric.
Since heat tends to spread through
a solid until the body reaches an equilibrium state of constant temperature, if
one is given a manifold, the Ricci flow may be hoped to produce an 'equilibrium'
Riemannian metric which is Einstein or of constant curvature.
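A minimal worked example of the flow equation (plain Python; the round n-sphere is chosen because its Ricci tensor is known in closed form, Ric = (n − 1) times the unit-sphere metric):

```python
# Ricci flow, dg/dt = -2 Ric(g), for the round n-sphere of radius r:
# with g = r^2 g_{S^n} one has Ric = (n - 1) g_{S^n}, so r(t)^2 = r0^2 - 2(n - 1) t
# and the sphere collapses at the extinction time t = r0^2 / (2(n - 1)).
n, r0 = 3, 2.0
dt, t = 1.0e-4, 0.0
r_sq = r0 ** 2

while r_sq > 0.0:
    r_sq -= 2.0 * (n - 1) * dt   # explicit Euler step of d(r^2)/dt = -2(n - 1)
    t += dt

print(t)                          # approximately 1.0
print(r0 ** 2 / (2 * (n - 1)))    # exact extinction time: 1.0
```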
However, such a clean "convergence" picture cannot be achieved since many manifolds
cannot support such metrics. A detailed study of the nature of solutions of the
Ricci flow, due principally to Hamilton and Grigori Perelman, shows that the
types of "singularities" that occur along a Ricci flow, corresponding to the
failure of convergence, encodes deep information about 3-dimensional topology.
The culmination of this work was a proof of the geometrization conjecture
first proposed by William Thurston in the 1970s, which can be thought of as
a classification of compact 3-manifolds.
On a Kähler manifold, the Ricci curvature determines the first Chern class
of the manifold (mod torsion). However, the Ricci curvature has no analogous
topological interpretation on a generic Riemannian manifold.
== Global geometry and topology ==
Here is a short list of global results concerning manifolds with positive Ricci curvature; see also classical theorems of Riemannian geometry. Briefly, positive Ricci curvature of a Riemannian manifold has strong topological consequences, while (for dimension at least 3), negative Ricci curvature has no topological implications. (The Ricci curvature is said to be positive if the Ricci curvature function {\displaystyle \operatorname {Ric} (\xi ,\xi )} is positive on the set of non-zero tangent vectors {\displaystyle \xi }.) Some results are also known for pseudo-Riemannian manifolds.
Myers' theorem (1941) states that if the Ricci curvature is bounded from below on a complete Riemannian n-manifold by {\displaystyle (n-1)k>0}, then the manifold has diameter {\displaystyle \leq \pi /{\sqrt {k}}}. By a covering-space argument, it follows that any compact manifold of positive Ricci curvature must have finite fundamental group. Cheng (1975) showed that, in this setting, equality in the diameter inequality occurs if and only if the manifold is isometric to a sphere of constant curvature {\displaystyle k}.
The Bishop–Gromov inequality states that if a complete {\displaystyle n}-dimensional Riemannian manifold has non-negative Ricci curvature, then the volume of a geodesic ball is less than or equal to the volume of a geodesic ball of the same radius in Euclidean {\displaystyle n}-space. Moreover, if {\displaystyle v_{p}(R)} denotes the volume of the ball with center {\displaystyle p} and radius {\displaystyle R} in the manifold and {\displaystyle V(R)=c_{n}R^{n}} denotes the volume of the ball of radius {\displaystyle R} in Euclidean {\displaystyle n}-space, then the function {\displaystyle v_{p}(R)/V(R)} is nonincreasing. This can be generalized to any lower bound on the Ricci curvature (not just nonnegativity), and is the key point in the proof of Gromov's compactness theorem.
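A quick numerical check of this monotonicity (Python with NumPy; the unit 2-sphere is used as a test case, where the geodesic ball of radius R has area 2π(1 − cos R) while the Euclidean disc has area πR²):

```python
import numpy as np

# Unit 2-sphere: Ric = g, so the non-negative Ricci curvature hypothesis holds.
R = np.linspace(1e-3, np.pi, 1000)
ratio = 2 * np.pi * (1 - np.cos(R)) / (np.pi * R**2)   # v_p(R) / V(R)

print(np.all(np.diff(ratio) <= 0))   # True: the Bishop–Gromov ratio is nonincreasing
print(ratio[0], ratio[-1])           # starts near 1, ends at 4/pi^2 (about 0.405)
```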
The Cheeger–Gromoll splitting theorem states that if a complete Riemannian manifold {\displaystyle \left(M,g\right)} with {\displaystyle \operatorname {Ric} \geq 0} contains a line, meaning a geodesic {\displaystyle \gamma :\mathbb {R} \to M} such that {\displaystyle d(\gamma (u),\gamma (v))=\left|u-v\right|} for all {\displaystyle u,v\in \mathbb {R} }, then it is isometric to a product space {\displaystyle \mathbb {R} \times L}. Consequently, a complete manifold of positive Ricci curvature can have at most one topological end. The theorem is also true under some additional hypotheses for complete Lorentzian manifolds (of metric signature {\displaystyle \left(+--\ldots \right)}) with non-negative Ricci tensor (Galloway 2000).
Hamilton's first convergence theorem for Ricci flow has, as a corollary, that the only compact 3-manifolds which have Riemannian metrics of positive Ricci curvature are the quotients of the 3-sphere by discrete subgroups of SO(4) which act properly discontinuously. He later extended this to allow for nonnegative Ricci curvature. In particular, the only simply-connected possibility is the 3-sphere itself.
These results, particularly Myers' and Hamilton's, show that positive Ricci curvature has strong topological consequences. By contrast, excluding the case of surfaces, negative Ricci curvature is now known to have no topological implications; Lohkamp (1994) has shown that any manifold of dimension greater than two admits a complete Riemannian metric of negative Ricci curvature. In the case of two-dimensional manifolds, negativity of the Ricci curvature is synonymous with negativity of the Gaussian curvature, which has very clear topological implications. There are very few two-dimensional manifolds which fail to admit Riemannian metrics of negative Gaussian curvature.
== Behavior under conformal rescaling ==
If the metric {\displaystyle g} is changed by multiplying it by a conformal factor {\displaystyle e^{2f}}, the Ricci tensor of the new, conformally-related metric {\displaystyle {\tilde {g}}=e^{2f}g} is given (Besse 1987, p. 59) by
{\displaystyle {\widetilde {\operatorname {Ric} }}=\operatorname {Ric} +(2-n)\left[\nabla df-df\otimes df\right]+\left[\Delta f-(n-2)\|df\|^{2}\right]g,}
where {\displaystyle \Delta =*d*d} is the (positive spectrum) Hodge Laplacian, i.e., the opposite of the usual trace of the Hessian.
In particular, given a point {\displaystyle p} in a Riemannian manifold, it is always possible to find metrics conformal to the given metric {\displaystyle g} for which the Ricci tensor vanishes at {\displaystyle p}. Note, however, that this is only a pointwise assertion; it is usually impossible to make the Ricci curvature vanish identically on the entire manifold by a conformal rescaling.
For two-dimensional manifolds, the above formula shows that if {\displaystyle f} is a harmonic function, then the conformal scaling {\displaystyle g\mapsto e^{2f}g} does not change the Ricci tensor (although it still changes its trace with respect to the metric unless {\displaystyle f=0}).
== Trace-free Ricci tensor ==
In Riemannian geometry and pseudo-Riemannian geometry, the trace-free Ricci tensor (also called traceless Ricci tensor) of a Riemannian or pseudo-Riemannian {\displaystyle n}-manifold {\displaystyle \left(M,g\right)} is the tensor defined by
{\displaystyle Z=\operatorname {Ric} -{\frac {1}{n}}Rg,}
where {\displaystyle \operatorname {Ric} } and {\displaystyle R} denote the Ricci curvature and scalar curvature of {\displaystyle g}. The name of this object reflects the fact that its trace automatically vanishes:
{\displaystyle \operatorname {tr} _{g}Z\equiv g^{ab}Z_{ab}=0.}
However, it is quite an important tensor since it reflects an "orthogonal decomposition" of the Ricci tensor.
=== The orthogonal decomposition of the Ricci tensor ===
The following, not so trivial, property is
{\displaystyle \operatorname {Ric} =Z+{\frac {1}{n}}Rg.}
It is less immediately obvious that the two terms on the right hand side are orthogonal to each other:
{\displaystyle \left\langle Z,{\frac {1}{n}}Rg\right\rangle _{g}\equiv g^{ab}\left(R_{ab}-{\frac {1}{n}}Rg_{ab}\right)=0.}
An identity which is intimately connected with this (but which could be proved directly) is that
{\displaystyle \left|\operatorname {Ric} \right|_{g}^{2}=|Z|_{g}^{2}+{\frac {1}{n}}R^{2}.}
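This identity can be checked numerically; the sketch below (Python with NumPy, taking the metric to be the identity matrix so that traces and norms reduce to ordinary matrix operations) uses a random symmetric matrix in place of the Ricci tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Ric = (A + A.T) / 2                    # a symmetric bilinear form; metric taken to be the identity

R = np.trace(Ric)                      # scalar curvature g^{ij} R_{ij} with g = identity
Z = Ric - (R / n) * np.eye(n)          # trace-free part

lhs = np.sum(Ric**2)                   # |Ric|^2 (Frobenius norm squared)
rhs = np.sum(Z**2) + R**2 / n          # |Z|^2 + R^2 / n
print(np.isclose(lhs, rhs))            # True
```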
=== The trace-free Ricci tensor and Einstein metrics ===
By taking a divergence, and using the contracted Bianchi identity, one sees that {\displaystyle Z=0} implies {\textstyle {\frac {1}{2}}dR-{\frac {1}{n}}dR=0}. So, provided that n ≥ 3 and {\displaystyle M} is connected, the vanishing of {\displaystyle Z} implies that the scalar curvature is constant. One can then see that the following are equivalent:
{\displaystyle Z=0}
{\displaystyle \operatorname {Ric} =\lambda g} for some number {\displaystyle \lambda }
{\displaystyle \operatorname {Ric} ={\frac {1}{n}}Rg}
In the Riemannian setting, the above orthogonal decomposition shows that {\displaystyle R^{2}=n|\operatorname {Ric} |^{2}} is also equivalent to these conditions.
In the pseudo-Riemannian setting, by contrast, the condition {\displaystyle |Z|_{g}^{2}=0} does not necessarily imply {\displaystyle Z=0,} so the most that one can say is that these conditions imply {\displaystyle R^{2}=n\left|\operatorname {Ric} \right|_{g}^{2}.}
In particular, the vanishing of the trace-free Ricci tensor characterizes Einstein manifolds, as defined by the condition {\displaystyle \operatorname {Ric} =\lambda g} for a number {\displaystyle \lambda .} In general relativity, this equation states that {\displaystyle \left(M,g\right)} is a solution of Einstein's vacuum field equations with cosmological constant.
== Kähler manifolds ==
On a Kähler manifold {\displaystyle X}, the Ricci curvature determines the curvature form of the canonical line bundle (Moroianu 2007, Chapter 12). The canonical line bundle is the top exterior power of the bundle of holomorphic Kähler differentials:
{\displaystyle \kappa ={\textstyle \bigwedge }^{n}~\Omega _{X}.}
The Levi-Civita connection corresponding to the metric on {\displaystyle X} gives rise to a connection on {\displaystyle \kappa }. The curvature of this connection is the 2-form defined by
{\displaystyle \rho (X,Y)\;{\stackrel {\text{def}}{=}}\;\operatorname {Ric} (JX,Y)}
where {\displaystyle J} is the complex structure map on the tangent bundle determined by the structure of the Kähler manifold. The Ricci form is a closed 2-form. Its cohomology class is, up to a real constant factor, the first Chern class of the canonical bundle, and is therefore a topological invariant of {\displaystyle X} (for compact {\displaystyle X}) in the sense that it depends only on the topology of {\displaystyle X} and the homotopy class of the complex structure.
Conversely, the Ricci form determines the Ricci tensor by
{\displaystyle \operatorname {Ric} (X,Y)=\rho (X,JY).}
In local holomorphic coordinates {\displaystyle z^{\alpha }}, the Ricci form is given by
{\displaystyle \rho =-i\partial {\overline {\partial }}\log \det \left(g_{\alpha {\overline {\beta }}}\right)}
where ∂ is the Dolbeault operator and
{\displaystyle g_{\alpha {\overline {\beta }}}=g\left({\frac {\partial }{\partial z^{\alpha }}},{\frac {\partial }{\partial {\overline {z}}^{\beta }}}\right).}
If the Ricci tensor vanishes, then the canonical bundle is flat, so the structure group can be locally reduced to a subgroup of the special linear group {\displaystyle SL(n;\mathbb {C} )}. However, Kähler manifolds already possess holonomy in {\displaystyle U(n)}, and so the (restricted) holonomy of a Ricci-flat Kähler manifold is contained in {\displaystyle SU(n)}. Conversely, if the (restricted) holonomy of a 2{\displaystyle n}-dimensional Riemannian manifold is contained in {\displaystyle SU(n)}, then the manifold is a Ricci-flat Kähler manifold (Kobayashi & Nomizu 1996, IX, §4).
== Generalization to affine connections ==
The Ricci tensor can also be generalized to arbitrary affine connections, where it is an invariant that plays an especially important role in the study of projective geometry (geometry associated to unparameterized geodesics) (Nomizu & Sasaki 1994). If {\displaystyle \nabla } denotes an affine connection, then the curvature tensor {\displaystyle R} is the (1,3)-tensor defined by
{\displaystyle R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z}
for any vector fields {\displaystyle X,Y,Z}. The Ricci tensor is defined to be the trace:
{\displaystyle \operatorname {ric} (X,Y)=\operatorname {tr} {\big (}Z\mapsto R(Z,X)Y{\big )}.}
In this more general situation, the Ricci tensor is symmetric if and only if there
exists locally a parallel volume form for the connection.
== Discrete Ricci curvature ==
Notions of Ricci curvature on discrete manifolds have been defined on graphs and
networks, where they quantify local divergence properties of edges. Ollivier's
Ricci curvature is defined using optimal transport theory.
A different (and earlier) notion, Forman's Ricci curvature, is based on
topological arguments.
== See also ==
== Footnotes ==
== References ==
Besse, A.L. (1987), Einstein manifolds, Springer, ISBN 978-3-540-15279-8.
Chow, Bennet & Knopf, Dan (2004), The Ricci Flow: an introduction, American Mathematical Society, ISBN 0-8218-3515-7.
Eisenhart, L.P. (1949), Riemannian geometry, Princeton Univ. Press.
Forman (2003), "Bochner's Method for Cell Complexes and Combinatorial Ricci Curvature", Discrete & Computational Geometry, 29 (3): 323–374. doi:10.1007/s00454-002-0743-x. ISSN 1432-0444
Galloway, Gregory (2000), "Maximum Principles for Null Hypersurfaces and Null Splitting Theorems", Annales de l'Institut Henri Poincaré A, 1 (3): 543–567, arXiv:math/9909158, Bibcode:2000AnHP....1..543G, doi:10.1007/s000230050006, S2CID 9619157.
Kobayashi, S.; Nomizu, K. (1963), Foundations of Differential Geometry, Volume 1, Interscience.
Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, Vol. 2, Wiley-Interscience, ISBN 978-0-471-15732-8.
Lohkamp, Joachim (1994), "Metrics of negative Ricci curvature", Annals of Mathematics, Second Series, 140 (3), Annals of Mathematics: 655–683, doi:10.2307/2118620, ISSN 0003-486X, JSTOR 2118620, MR 1307899.
Moroianu, Andrei (2007), Lectures on Kähler geometry, London Mathematical Society Student Texts, vol. 69, Cambridge University Press, arXiv:math/0402223, doi:10.1017/CBO9780511618666, ISBN 978-0-521-68897-0, MR 2325093, S2CID 209824092
Nomizu, Katsumi; Sasaki, Takeshi (1994), Affine differential geometry, Cambridge University Press, ISBN 978-0-521-44177-3.
Ollivier, Yann (2009), "Ricci curvature of Markov chains on metric spaces", Journal of Functional Analysis 256 (3): 810–864. doi:10.1016/j.jfa.2008.11.001. ISSN 0022-1236
Ricci, G. (1903–1904), "Direzioni e invarianti principali in una varietà qualunque", Atti R. Inst. Veneto, 63 (2): 1233–1239.
L.A. Sidorov (2001) [1994], "Ricci tensor", Encyclopedia of Mathematics, EMS Press
L.A. Sidorov (2001) [1994], "Ricci curvature", Encyclopedia of Mathematics, EMS Press
Najman, Laurent and Romon, Pascal (2017): Modern approaches to discrete curvature, Springer (Cham), Lecture notes in mathematics
== External links ==
Z. Shen, C. Sormani "The Topology of Open Manifolds with Nonnegative Ricci Curvature" (a survey)
G. Wei, "Manifolds with A Lower Ricci Curvature Bound" (a survey) | Wikipedia/Ricci_curvature_tensor |
The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems.
== Covariant objects ==
=== Preliminary four-vectors ===
Lorentz tensors of the following kinds may be used in this article to describe bodies or particles:
four-displacement:
{\displaystyle x^{\alpha }=(ct,\mathbf {x} )=(ct,x,y,z)\,.}
Four-velocity:
{\displaystyle u^{\alpha }=\gamma (c,\mathbf {u} ),}
where γ(u) is the Lorentz factor at the 3-velocity u.
Four-momentum:
{\displaystyle p^{\alpha }=(E/c,\mathbf {p} )=m_{0}u^{\alpha }}
where {\displaystyle \mathbf {p} } is 3-momentum, {\displaystyle E} is the total energy, and {\displaystyle m_{0}} is rest mass.
Four-gradient:
{\displaystyle \partial ^{\nu }={\frac {\partial }{\partial x_{\nu }}}=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},-\mathbf {\nabla } \right)\,,}
The d'Alembertian operator is denoted {\displaystyle {\partial }^{2}},
{\displaystyle \partial ^{2}={\frac {1}{c^{2}}}{\partial ^{2} \over \partial t^{2}}-\nabla ^{2}.}
The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is (+ − − −), corresponding to the Minkowski metric tensor:
{\displaystyle \eta ^{\mu \nu }={\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}}
=== Electromagnetic tensor ===
The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor whose entries combine the electric and magnetic field components:
{\displaystyle F_{\alpha \beta }={\begin{pmatrix}0&E_{x}/c&E_{y}/c&E_{z}/c\\-E_{x}/c&0&-B_{z}&B_{y}\\-E_{y}/c&B_{z}&0&-B_{x}\\-E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}}
and the result of raising its indices is
{\displaystyle F^{\mu \nu }\mathrel {\stackrel {\mathrm {def} }{=}} \eta ^{\mu \alpha }\,F_{\alpha \beta }\,\eta ^{\beta \nu }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}\,,}
where E is the electric field, B the magnetic field, and c the speed of light.
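A small numerical sketch (Python with NumPy; the field values are illustrative, not drawn from any particular problem) builds F_{αβ} from given E and B and raises its indices with the Minkowski metric, reproducing the sign pattern shown above:

```python
import numpy as np

c = 299_792_458.0                 # speed of light (m/s)
E = np.array([1.0, 0.0, 0.0])     # illustrative E-field (V/m)
B = np.array([0.0, 0.0, 1e-8])    # illustrative B-field (T)

# Covariant field-strength tensor F_{alpha beta} in the (+ - - -) convention
F_lo = np.array([
    [0.0,      E[0]/c,  E[1]/c,  E[2]/c],
    [-E[0]/c,  0.0,     -B[2],   B[1]],
    [-E[1]/c,  B[2],    0.0,     -B[0]],
    [-E[2]/c,  -B[1],   B[0],    0.0],
])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

# Raise both indices: F^{mu nu} = eta^{mu alpha} F_{alpha beta} eta^{beta nu}
F_hi = eta @ F_lo @ eta

print(np.allclose(F_hi[0, 1:], -F_lo[0, 1:]))   # True: electric entries flip sign
print(np.allclose(F_hi[1:, 1:], F_lo[1:, 1:]))  # True: magnetic block is unchanged
```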
=== Four-current ===
The four-current is the contravariant four-vector which combines electric charge density ρ and electric current density j:
{\displaystyle J^{\alpha }=(c\rho ,\mathbf {j} )\,.}
=== Four-potential ===
The electromagnetic four-potential is a covariant four-vector containing the electric potential (also called the scalar potential) ϕ and magnetic vector potential (or vector potential) A, as follows:
{\displaystyle A^{\alpha }=\left(\phi /c,\mathbf {A} \right)\,.}
The differential of the electromagnetic potential is
{\displaystyle F_{\alpha \beta }=\partial _{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha }\,.}
In the language of differential forms, which provides the generalisation to curved spacetimes, these are the components of a 1-form {\displaystyle A=A_{\alpha }dx^{\alpha }} and a 2-form {\textstyle F=dA={\frac {1}{2}}F_{\alpha \beta }dx^{\alpha }\wedge dx^{\beta }} respectively. Here, {\displaystyle d} is the exterior derivative and {\displaystyle \wedge } the wedge product.
=== Electromagnetic stress–energy tensor ===
The electromagnetic stress–energy tensor can be interpreted as the flux density of the momentum four-vector, and is a contravariant symmetric tensor that is the contribution of the electromagnetic fields to the overall stress–energy tensor:
{\displaystyle T^{\alpha \beta }={\begin{pmatrix}\varepsilon _{0}E^{2}/2+B^{2}/2\mu _{0}&S_{x}/c&S_{y}/c&S_{z}/c\\S_{x}/c&-\sigma _{xx}&-\sigma _{xy}&-\sigma _{xz}\\S_{y}/c&-\sigma _{yx}&-\sigma _{yy}&-\sigma _{yz}\\S_{z}/c&-\sigma _{zx}&-\sigma _{zy}&-\sigma _{zz}\end{pmatrix}}\,,}
where {\displaystyle \varepsilon _{0}} is the electric permittivity of vacuum, μ0 is the magnetic permeability of vacuum, the Poynting vector is
{\displaystyle \mathbf {S} ={\frac {1}{\mu _{0}}}\mathbf {E} \times \mathbf {B} }
and the Maxwell stress tensor is given by
{\displaystyle \sigma _{ij}=\varepsilon _{0}E_{i}E_{j}+{\frac {1}{\mu _{0}}}B_{i}B_{j}-\left({\frac {1}{2}}\varepsilon _{0}E^{2}+{\frac {1}{2\mu _{0}}}B^{2}\right)\delta _{ij}\,.}
The electromagnetic field tensor F constructs the electromagnetic stress–energy tensor T by the equation:
{\displaystyle T^{\alpha \beta }={\frac {1}{\mu _{0}}}\left(\eta ^{\alpha \nu }F_{\nu \gamma }F^{\beta \gamma }-{\frac {1}{4}}\eta ^{\alpha \beta }F_{\gamma \nu }F^{\gamma \nu }\right)}
where η is the Minkowski metric tensor (with signature (+ − − −)). Notice that we use the fact that
{\displaystyle \varepsilon _{0}\mu _{0}c^{2}=1\,,}
which is predicted by Maxwell's equations.
Another covariant expression for the electromagnetic stress–energy tensor, which may be simpler since it does not involve covariant and contravariant indices, is
{\displaystyle T=-{\frac {1}{\mu _{0}}}(F*\eta *F'-{\frac {1}{4}}trace(F*\eta *F'*\eta ))}
where F′ is the transposed electromagnetic tensor (equivalently, −F) and the asterisk denotes matrix multiplication.
== Maxwell's equations in vacuum ==
In vacuum (or for the microscopic equations, not including macroscopic material descriptions), Maxwell's equations can be written as two tensor equations.
The two inhomogeneous Maxwell's equations, Gauss's law and Ampère's law (with Maxwell's correction) combine into (with (+ − − −) metric):
{\displaystyle \partial _{\beta }F^{\alpha \beta }=\mu _{0}J^{\alpha }}
The homogeneous equations – Faraday's law of induction and Gauss's law for magnetism combine to form
{\displaystyle \partial ^{\sigma }F^{\mu \nu }+\partial ^{\mu }F^{\nu \sigma }+\partial ^{\nu }F^{\sigma \mu }=0,}
which may be written using Levi-Civita duality as
{\displaystyle \varepsilon ^{\alpha \beta \gamma \delta }\partial _{\beta }F_{\gamma \delta }=0,}
where Fαβ is the electromagnetic tensor, Jα is the four-current, εαβγδ is the Levi-Civita symbol, and the indices behave according to the Einstein summation convention.
Each of these tensor equations corresponds to four scalar equations, one for each value of β.
Using the antisymmetric tensor notation and comma notation for the partial derivative (see Ricci calculus), the second equation can also be written more compactly as:
{\displaystyle F_{[\alpha \beta ,\gamma ]}=0.}
In the absence of sources, Maxwell's equations reduce to:
{\displaystyle \partial ^{\nu }\partial _{\nu }F^{\alpha \beta }\mathrel {\stackrel {\text{def}}{=}} \partial ^{2}F^{\alpha \beta }\mathrel {\stackrel {\text{def}}{=}} {1 \over c^{2}}{\partial ^{2}F^{\alpha \beta } \over {\partial t}^{2}}-\nabla ^{2}F^{\alpha \beta }=0\,,}
which is an electromagnetic wave equation in the field strength tensor.
=== Maxwell's equations in the Lorenz gauge ===
The Lorenz gauge condition is a Lorentz-invariant gauge condition. (This can be contrasted with other gauge conditions such as the Coulomb gauge, which if it holds in one inertial frame will generally not hold in any other.) It is expressed in terms of the four-potential as follows:
{\displaystyle \partial _{\alpha }A^{\alpha }=\partial ^{\alpha }A_{\alpha }=0\,.}
In the Lorenz gauge, the microscopic Maxwell's equations can be written as:
{\displaystyle {\partial }^{2}A^{\sigma }=\mu _{0}\,J^{\sigma }\,.}
== Lorentz force ==
=== Charged particle ===
Electromagnetic (EM) fields affect the motion of electrically charged matter: due to the Lorentz force. In this way, EM fields can be detected (with applications in particle physics, and natural occurrences such as in aurorae). In relativistic form, the Lorentz force uses the field strength tensor as follows.
Expressed in terms of coordinate time t, it is:
{\displaystyle {dp_{\alpha } \over {dt}}=q\,F_{\alpha \beta }\,{\frac {dx^{\beta }}{dt}},}
where pα is the four-momentum, q is the charge, and xβ is the position.
Expressed in frame-independent form, we have the four-force
{\displaystyle {\frac {dp_{\alpha }}{d\tau }}=q\,F_{\alpha \beta }\,u^{\beta },}
where uβ is the four-velocity, and τ is the particle's proper time, which is related to coordinate time by dt = γdτ.
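A short numerical sketch (Python with NumPy; charge, fields and velocity are illustrative values) evaluates the four-force qF_{αβ}u^β and compares its spatial part with the familiar three-dimensional Lorentz force γq(E + v × B):

```python
import numpy as np

c = 299_792_458.0
q = 1.602176634e-19                # elementary charge (C)
E = np.array([0.0, 1.0e4, 0.0])    # illustrative fields
B = np.array([0.0, 0.0, 0.5])
v = np.array([1.0e7, 0.0, 0.0])    # particle 3-velocity (m/s)

gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)
u_hi = gamma * np.array([c, *v])   # four-velocity u^beta

F_lo = np.array([
    [0.0,      E[0]/c,  E[1]/c,  E[2]/c],
    [-E[0]/c,  0.0,     -B[2],   B[1]],
    [-E[1]/c,  B[2],    0.0,     -B[0]],
    [-E[2]/c,  -B[1],   B[0],    0.0],
])

# Four-force with lower index: dp_alpha/dtau = q F_{alpha beta} u^beta
f_lo = q * F_lo @ u_hi
# Its spatial part reproduces gamma * q (E + v x B), up to the sign from the lowered index
print(np.allclose(-f_lo[1:], gamma * q * (E + np.cross(v, B))))   # True
```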
=== Charge continuum ===
The density of force due to electromagnetism, whose spatial part is the Lorentz force, is given by
{\displaystyle f_{\alpha }=F_{\alpha \beta }J^{\beta }.}
and is related to the electromagnetic stress–energy tensor by
{\displaystyle f^{\alpha }=-{T^{\alpha \beta }}_{,\beta }\equiv -{\frac {\partial T^{\alpha \beta }}{\partial x^{\beta }}}.}
== Conservation laws ==
=== Electric charge ===
The continuity equation:
{\displaystyle {J^{\beta }}_{,\beta }\mathrel {\overset {\text{def}}{\mathop {=} }} \partial _{\beta }J^{\beta }=\partial _{\beta }\partial _{\alpha }F^{\alpha \beta }/\mu _{0}=0}
expresses charge conservation.
=== Electromagnetic energy–momentum ===
Using the Maxwell equations, one can see that the electromagnetic stress–energy tensor (defined above) satisfies the following differential equation, relating it to the electromagnetic tensor and the current four-vector
{\displaystyle {T^{\alpha \beta }}_{,\beta }+F^{\alpha \beta }J_{\beta }=0}
or
{\displaystyle \eta _{\alpha \nu }{T^{\nu \beta }}_{,\beta }+F_{\alpha \beta }J^{\beta }=0,}
which expresses the conservation of linear momentum and energy by electromagnetic interactions.
== Covariant objects in matter ==
=== Free and bound four-currents ===
In order to solve the equations of electromagnetism given here, it is necessary to add information about how to calculate the electric current, Jν. Frequently, it is convenient to separate the current into two parts, the free current and the bound current, which are modeled by different equations;
{\displaystyle J^{\nu }={J^{\nu }}_{\text{free}}+{J^{\nu }}_{\text{bound}}\,,}
where
{\displaystyle {\begin{aligned}{J^{\nu }}_{\text{free}}={\begin{pmatrix}c\rho _{\text{free}},&\mathbf {J} _{\text{free}}\end{pmatrix}}&={\begin{pmatrix}c\nabla \cdot \mathbf {D} ,&-{\frac {\partial \mathbf {D} }{\partial t}}+\nabla \times \mathbf {H} \end{pmatrix}}\,,\\{J^{\nu }}_{\text{bound}}={\begin{pmatrix}c\rho _{\text{bound}},&\mathbf {J} _{\text{bound}}\end{pmatrix}}&={\begin{pmatrix}-c\nabla \cdot \mathbf {P} ,&{\frac {\partial \mathbf {P} }{\partial t}}+\nabla \times \mathbf {M} \end{pmatrix}}\,.\end{aligned}}}
Maxwell's macroscopic equations have been used, in addition to the definitions of the electric displacement D and the magnetic intensity H:
{\displaystyle {\begin{aligned}\mathbf {D} &=\varepsilon _{0}\mathbf {E} +\mathbf {P} ,\\\mathbf {H} &={\frac {1}{\mu _{0}}}\mathbf {B} -\mathbf {M} \,.\end{aligned}}}
where M is the magnetization and P the electric polarization.
=== Magnetization–polarization tensor ===
The bound current is derived from the P and M fields which form an antisymmetric contravariant magnetization-polarization tensor
{\displaystyle {\mathcal {M}}^{\mu \nu }={\begin{pmatrix}0&P_{x}c&P_{y}c&P_{z}c\\-P_{x}c&0&-M_{z}&M_{y}\\-P_{y}c&M_{z}&0&-M_{x}\\-P_{z}c&-M_{y}&M_{x}&0\end{pmatrix}},}
which determines the bound current
{\displaystyle {J^{\nu }}_{\text{bound}}=\partial _{\mu }{\mathcal {M}}^{\mu \nu }\,.}
=== Electric displacement tensor ===
If this is combined with Fμν we get the antisymmetric contravariant electromagnetic displacement tensor which combines the D and H fields as follows:
{\displaystyle {\mathcal {D}}^{\mu \nu }={\begin{pmatrix}0&-D_{x}c&-D_{y}c&-D_{z}c\\D_{x}c&0&-H_{z}&H_{y}\\D_{y}c&H_{z}&0&-H_{x}\\D_{z}c&-H_{y}&H_{x}&0\end{pmatrix}}.}
The three field tensors are related by:
{\displaystyle {\mathcal {D}}^{\mu \nu }={\frac {1}{\mu _{0}}}F^{\mu \nu }-{\mathcal {M}}^{\mu \nu }}
which is equivalent to the definitions of the D and H fields given above.
== Maxwell's equations in matter ==
The result is that Ampère's law,
{\displaystyle \mathbf {\nabla } \times \mathbf {H} -{\frac {\partial \mathbf {D} }{\partial t}}=\mathbf {J} _{\text{free}},}
and Gauss's law,
{\displaystyle \mathbf {\nabla } \cdot \mathbf {D} =\rho _{\text{free}},}
combine into one equation:
{\displaystyle \partial _{\mu }{\mathcal {D}}^{\mu \nu }={J^{\nu }}_{\text{free}}.}
The bound current and free current as defined above are automatically and separately conserved
{\displaystyle {\begin{aligned}\partial _{\nu }{J^{\nu }}_{\text{bound}}&=0\,\\\partial _{\nu }{J^{\nu }}_{\text{free}}&=0\,.\end{aligned}}}
=== Constitutive equations ===
==== Vacuum ====
In vacuum, the constitutive relations between the field tensor and displacement tensor are:
{\displaystyle \mu _{0}{\mathcal {D}}^{\mu \nu }=\eta ^{\mu \alpha }F_{\alpha \beta }\eta ^{\beta \nu }\,.}
Antisymmetry reduces these 16 equations to just six independent equations. Because it is usual to define Fμν by
{\displaystyle F^{\mu \nu }=\eta ^{\mu \alpha }F_{\alpha \beta }\eta ^{\beta \nu },}
the constitutive equations may, in vacuum, be combined with the Gauss–Ampère law to get:
{\displaystyle \partial _{\beta }F^{\alpha \beta }=\mu _{0}J^{\alpha }.}
The electromagnetic stress–energy tensor in terms of the displacement is:
{\displaystyle T_{\alpha }{}^{\pi }=F_{\alpha \beta }{\mathcal {D}}^{\pi \beta }-{\frac {1}{4}}\delta _{\alpha }^{\pi }F_{\mu \nu }{\mathcal {D}}^{\mu \nu },}
where δαπ is the Kronecker delta. When the upper index is lowered with η, it becomes symmetric and is part of the source of the gravitational field.
==== Linear, nondispersive matter ====
Thus we have reduced the problem of modeling the current, Jν, to two (hopefully) easier problems — modeling the free current, Jνfree, and modeling the magnetization and polarization, {\displaystyle {\mathcal {M}}^{\mu \nu }}. For example, in the simplest materials at low frequencies, one has
{\displaystyle {\begin{aligned}\mathbf {J} _{\text{free}}&=\sigma \mathbf {E} \,\\\mathbf {P} &=\varepsilon _{0}\chi _{e}\mathbf {E} \,\\\mathbf {M} &=\chi _{m}\mathbf {H} \,\end{aligned}}}
where one is in the instantaneously comoving inertial frame of the material, σ is its electrical conductivity, χe is its electric susceptibility, and χm is its magnetic susceptibility.
The constitutive relations between the {\displaystyle {\mathcal {D}}} and F tensors, proposed by Minkowski for linear materials (that is, E is proportional to D and B proportional to H), are:
{\displaystyle {\begin{aligned}{\mathcal {D}}^{\mu \nu }u_{\nu }&=c^{2}\varepsilon F^{\mu \nu }u_{\nu }\\{\star {\mathcal {D}}^{\mu \nu }}u_{\nu }&={\frac {1}{\mu }}{\star F^{\mu \nu }}u_{\nu }\end{aligned}}}
where u is the four-velocity of the material, ε and μ are respectively the proper permittivity and permeability of the material (i.e. in the rest frame of the material), and {\displaystyle \star } denotes the Hodge star operator.
== Lagrangian for classical electrodynamics ==
=== Vacuum ===
The Lagrangian density for classical electrodynamics is composed by two components: a field component and a source component:
{\displaystyle {\mathcal {L}}\,=\,{\mathcal {L}}_{\text{field}}+{\mathcal {L}}_{\text{int}}=-{\frac {1}{4\mu _{0}}}F^{\alpha \beta }F_{\alpha \beta }-A_{\alpha }J^{\alpha }\,.}
In the interaction term, the four-current should be understood as an abbreviation of many terms expressing the electric currents of other charged fields in terms of their variables; the four-current is not itself a fundamental field.
The Lagrange equations for the electromagnetic Lagrangian density {\displaystyle {\mathcal {L}}{\mathord {\left(A_{\alpha },\partial _{\beta }A_{\alpha }\right)}}} can be stated as follows:
{\displaystyle \partial _{\beta }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\beta }A_{\alpha })}}\right]-{\frac {\partial {\mathcal {L}}}{\partial A_{\alpha }}}=0\,.}
Noting
{\displaystyle {\begin{aligned}F^{\lambda \sigma }&=F_{\mu \nu }\eta ^{\mu \lambda }\eta ^{\nu \sigma },\\F_{\mu \nu }&=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\,\\{\partial \left(\partial _{\mu }A_{\nu }\right) \over \partial \left(\partial _{\rho }A_{\sigma }\right)}&=\delta _{\mu }^{\rho }\delta _{\nu }^{\sigma }\end{aligned}}}
the expression inside the square bracket is
{\displaystyle {\begin{aligned}{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\beta }A_{\alpha })}}&=-\ {\frac {1}{4\mu _{0}}}\ {\frac {\partial \left(F_{\mu \nu }\eta ^{\mu \lambda }\eta ^{\nu \sigma }F_{\lambda \sigma }\right)}{\partial \left(\partial _{\beta }A_{\alpha }\right)}}\\&=-\ {\frac {1}{4\mu _{0}}}\ \eta ^{\mu \lambda }\eta ^{\nu \sigma }\left(F_{\lambda \sigma }\left(\delta _{\mu }^{\beta }\delta _{\nu }^{\alpha }-\delta _{\nu }^{\beta }\delta _{\mu }^{\alpha }\right)+F_{\mu \nu }\left(\delta _{\lambda }^{\beta }\delta _{\sigma }^{\alpha }-\delta _{\sigma }^{\beta }\delta _{\lambda }^{\alpha }\right)\right)\\&=-\ {\frac {F^{\beta \alpha }}{\mu _{0}}}\,.\end{aligned}}}
The second term is
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial A_{\alpha }}}=-J^{\alpha }\,.}
Therefore, the electromagnetic field's equations of motion are
{\displaystyle {\frac {\partial F^{\beta \alpha }}{\partial x^{\beta }}}=\mu _{0}J^{\alpha }\,.}
which is the Gauss–Ampère equation above.
=== Matter ===
Separating the free currents from the bound currents, another way to write the Lagrangian density is as follows:
{\displaystyle {\mathcal {L}}\,=\,-{\frac {1}{4\mu _{0}}}F^{\alpha \beta }F_{\alpha \beta }-A_{\alpha }J_{\text{free}}^{\alpha }+{\frac {1}{2}}F_{\alpha \beta }{\mathcal {M}}^{\alpha \beta }\,.}
Using the Lagrange equations, the equations of motion for {\displaystyle {\mathcal {D}}^{\mu \nu }} can be derived.
The equivalent expression in vector notation is:
{\displaystyle {\mathcal {L}}\,=\,{\frac {1}{2}}\left(\varepsilon _{0}E^{2}-{\frac {1}{\mu _{0}}}B^{2}\right)-\phi \,\rho _{\text{free}}+\mathbf {A} \cdot \mathbf {J} _{\text{free}}+\mathbf {E} \cdot \mathbf {P} +\mathbf {B} \cdot \mathbf {M} \,.}
== See also ==
Covariant classical field theory
Electromagnetic tensor
Electromagnetic wave equation
Liénard–Wiechert potential for a charge in arbitrary motion
Moving magnet and conductor problem
Inhomogeneous electromagnetic wave equation
Proca action
Quantum electrodynamics
Relativistic electromagnetism
Stueckelberg action
Wheeler–Feynman absorber theory
== Notes ==
== References ==
== Further reading ==
The Feynman Lectures on Physics Vol. II Ch. 25: Electrodynamics in Relativistic Notation
Einstein, A. (1961). Relativity: The Special and General Theory. New York: Crown. ISBN 0-517-02961-8.
Misner, Charles; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.
Landau, L. D.; Lifshitz, E. M. (1975). Classical Theory of Fields (Fourth Revised English ed.). Oxford: Pergamon. ISBN 0-08-018176-7.
R. P. Feynman; F. B. Moringo; W. G. Wagner (1995). Feynman Lectures on Gravitation. Addison-Wesley. ISBN 0-201-62734-5. | Wikipedia/Formulation_of_Maxwell's_equations_in_special_relativity |
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0}
for an arbitrary complex number {\displaystyle \alpha }, which represents the order of the Bessel function. Although {\displaystyle \alpha } and {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of {\displaystyle \alpha }.
The most important cases are when {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates.
== Applications ==
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of product of two normally distributed random variables
Analysis of the surface waves generated by microtremors, in geophysics and seismology.
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter).
== Definitions ==
Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of {\displaystyle \alpha } when {\displaystyle \alpha } is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn.
=== Bessel functions of the first kind: Jα ===
Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by
{\displaystyle x^{\alpha }}
times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation:
{\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },}
where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by
{\displaystyle 2}
in
{\displaystyle x/2}
; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to
{\displaystyle x^{-{1}/{2}}}
(see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.)
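As an illustrative numerical check (not part of the standard exposition), the truncated power series can be compared against a library implementation. The Python sketch below assumes NumPy/SciPy are available; `j_series` is a hypothetical helper name.

```python
import math
from scipy.special import jv  # reference implementation of J_alpha

def j_series(alpha, x, terms=30):
    """Truncated Maclaurin/Frobenius series for J_alpha(x)."""
    return sum(
        (-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1)) * (x / 2) ** (2 * m + alpha)
        for m in range(terms)
    )

# For moderate x the truncated series agrees with SciPy to many digits.
print(j_series(0.5, 2.0), jv(0.5, 2.0))
```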
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers):
{\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).}
This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below.
==== Bessel's integrals ====
Another definition of the Bessel function, for integer values of n, is possible using an integral representation:
{\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),}
which is also called the Hansen–Bessel formula.
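As a quick sanity check of this integral representation, one can evaluate it by numerical quadrature and compare with a library value of Jn(x); the sketch below (Python, SciPy assumed) is illustrative only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def j_hansen(n, x):
    """J_n(x) from (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) d(tau)."""
    val, _ = quad(lambda tau: np.cos(n * tau - x * np.sin(tau)), 0.0, np.pi)
    return val / np.pi

print(j_hansen(3, 5.0), jv(3, 5.0))  # the two values should agree closely
```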
This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0:
{\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.}
==== Relation to hypergeometric series ====
The Bessel functions can be expressed in terms of the generalized hypergeometric series as
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).}
This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function.
==== Relation to Laguerre polynomials ====
In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as
{\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.}
=== Bessel functions of the second kind: Yα ===
The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann.
For non-integer α, Yα(x) is related to Jα(x) by
{\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.}
In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n:
{\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).}
If n is a nonnegative integer, we have the series
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}}
where ψ(z) is the digamma function, the logarithmic derivative of the gamma function.
There is also a corresponding integral formula (for Re(x) > 0):
{\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.}
In the case where n = 0 (with γ being Euler's constant):
{\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .}
Yα(x) is necessary as the second linearly independent solution of Bessel's equation when α is an integer. But Yα(x) has more meaning than that: it can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below.
When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid:
{\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).}
Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α.
The Bessel function of the second kind, when α is an integer, is an example of the second kind of solution in Fuchs's theorem.
=== Hankel functions: H(1)α, H(2)α ===
Another important formulation of the two linearly independent solutions to Bessel's equation are the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}}
where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel.
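The defining combinations can be verified directly with standard library routines; the following sketch (SciPy assumed) checks H(1)α = Jα + iYα and H(2)α = Jα − iYα at a sample point.

```python
import numpy as np
from scipy.special import jv, yv, hankel1, hankel2

alpha, x = 1.5, 4.0
print(np.isclose(jv(alpha, x) + 1j * yv(alpha, x), hankel1(alpha, x)))  # True
print(np.isclose(jv(alpha, x) - 1j * yv(alpha, x), hankel2(alpha, x)))  # True
```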
These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form e^{i f(x)}. For real x > 0 where Jα(x), Yα(x) are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e^{±ix} and Jα(x), Yα(x) for cos(x), sin(x), as explicitly shown in the asymptotic expansion.
The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
Using the previous relationships, they can be expressed as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}}
If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not:
{\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}}
In particular, if α = m + 1/2 with m a nonnegative integer, the above relations imply directly that
{\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}}
These are useful in developing the spherical Bessel functions (see below).
The Hankel functions admit the following integral representations for Re(x) > 0:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}}
where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis.
=== Modified Bessel functions: Iα, Kα ===
The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as
{\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}}
when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor.
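The relation Iα(x) = i^−α Jα(ix) can be checked numerically, since SciPy's jv accepts complex arguments; this is an illustrative sketch, not part of the original text.

```python
from scipy.special import jv, iv

alpha, x = 0.75, 2.0
lhs = iv(alpha, x)                           # modified Bessel function I_alpha(x)
rhs = (1j ** (-alpha)) * jv(alpha, 1j * x)   # i^(-alpha) * J_alpha(i x)
print(lhs, rhs)  # rhs should be real up to rounding and equal to lhs
```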
Kα can be expressed in terms of Hankel functions:
{\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}}
Using these two formulae, the result for Jα2(z) + Yα2(z), commonly known as Nicholson's integral or Nicholson's formula, can be obtained:
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,}
given that the condition Re(x) > 0 is met. It can also be shown that
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,}
only when |Re(α)| < 1/2 and Re(x) ≥ 0 but not when x = 0.
We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ π/2):
{\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}}
Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation:
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.}
Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and 1/2Γ(|α|)(2/x)|α| otherwise.
Two integral formulas for the modified Bessel functions are (for Re(x) > 0):
{\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}}
Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0):
{\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.}
This can be proven by showing equality to the above integral definition for K0, integrating along a closed curve in the first quadrant of the complex plane.
Modified Bessel functions of the second kind may be represented with Bassett's integral
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.}
Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals
{\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}}
The modified Bessel function {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an exponential-scale mixture of normal distributions.
The modified Bessel function of the second kind has also been called by the following names (now rare):
Basset function after Alfred Barnard Basset
Modified Bessel function of the third kind
Modified Hankel function
Macdonald function after Hector Munro Macdonald
=== Spherical Bessel functions: jn, yn ===
When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.}
The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}}
yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions.
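The half-integer-order relation above is easy to confirm numerically; the sketch below (SciPy assumed) compares the dedicated spherical routines with the ordinary Bessel functions of order n + 1/2.

```python
import numpy as np
from scipy.special import jv, yv, spherical_jn, spherical_yn

n, x = 2, 3.0
factor = np.sqrt(np.pi / (2 * x))
print(spherical_jn(n, x), factor * jv(n + 0.5, x))  # should match
print(spherical_yn(n, x), factor * yv(n + 0.5, x))  # should match
```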
From the relations to the ordinary Bessel functions it is directly seen that:
{\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}}
The spherical Bessel functions can also be written as (Rayleigh's formulas)
{\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}}
The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are:
{\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}}.\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}}
and
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}}
The first few non-zero roots of the first few spherical Bessel functions are:
==== Generating function ====
The spherical Bessel functions have the generating functions
{\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}}
==== Finite series expansions ====
In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression:
{\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\frac {\pi }{2x}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}}
==== Differential relations ====
In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ...
{\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}}
=== Spherical Hankel functions: h(1)n, h(2)n ===
There are also spherical analogues of the Hankel functions:
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}}
There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n:
{\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},}
and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = sin x/x and y0(x) = −cos x/x, and so on.
The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field.
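The finite-sum closed form given above for h(1)n can be checked against jn + iyn; the sketch below (SciPy assumed, `h1_closed` a hypothetical helper) is illustrative.

```python
import numpy as np
from math import factorial
from scipy.special import spherical_jn, spherical_yn

def h1_closed(n, x):
    """h_n^(1)(x) = (-i)^(n+1) e^(ix)/x * sum_m i^m/(m!(2x)^m) * (n+m)!/(n-m)!."""
    s = sum((1j ** m) / (factorial(m) * (2 * x) ** m) * factorial(n + m) / factorial(n - m)
            for m in range(n + 1))
    return (-1j) ** (n + 1) * np.exp(1j * x) / x * s

n, x = 3, 2.5
print(h1_closed(n, x))
print(spherical_jn(n, x) + 1j * spherical_yn(n, x))  # should agree
```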
=== Riccati–Bessel functions: Sn, Cn, ξn, ζn ===
Riccati–Bessel functions only slightly differ from spherical Bessel functions:
{\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}}
They satisfy the differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.}
For example, this kind of differential equation appears in quantum mechanics when solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation, and the Riccati–Bessel solutions, also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See, e.g., Du (2004) for recent developments and references.
Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn.
== Asymptotic forms ==
The Bessel functions have the following asymptotic forms. For small arguments 0 < z ≪ √(α + 1), one obtains, when α is not a negative integer:
{\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.}
When α is a negative integer, we have
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.}
For the Bessel function of the second kind we have three cases:
{\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}}
where γ is the Euler–Mascheroni constant (0.5772...).
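The small-argument form for Jα can be illustrated numerically; in the sketch below (SciPy assumed) the leading-order expression is compared with the exact value at a small argument.

```python
from scipy.special import jv, gamma

alpha, z = 1.5, 1e-3
leading = (z / 2) ** alpha / gamma(alpha + 1)  # leading small-argument behaviour
print(jv(alpha, z), leading)  # relative difference is O(z^2)
```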
For large real arguments z ≫ |α2 − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1:
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}}
(For α = 1/2, the last terms in these formulas drop out completely; see the spherical Bessel functions above.)
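For large real arguments the leading terms above already give a good approximation; a brief numerical comparison (SciPy assumed) is sketched below.

```python
import numpy as np
from scipy.special import jv, yv

alpha, z = 2.0, 50.0
amp = np.sqrt(2 / (np.pi * z))
phase = z - alpha * np.pi / 2 - np.pi / 4
print(jv(alpha, z), amp * np.cos(phase))  # agree up to a correction of order 1/z
print(yv(alpha, z), amp * np.sin(phase))
```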
The asymptotic forms for the Hankel functions are:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}}
These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z).
It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part):
{\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}}
For the modified Bessel functions, Hankel developed asymptotic expansions as well:
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}}
There is also the asymptotic form (for large real z)
{\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}}
When α = 1/2, all the terms except the first vanish, and we have
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}}
For small arguments 0 < |z| ≪ √(α + 1), we have
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}}
== Properties ==
For integer order α = n, Jn is often defined via a Laurent series for a generating function:
{\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}}
an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.)
Infinite series of Bessel functions in the form {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where {\displaystyle \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+}} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]}. More generally, the Sung series and the alternating Sung series are written as:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}}
{\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}}
A series expansion using Bessel functions (Kapteyn series) is
{\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).}
Another important relation for integer orders is the Jacobi–Anger expansion:
{\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }}
and
{\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )}
which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal.
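A truncated Jacobi–Anger sum converges rapidly for moderate z; the sketch below (SciPy assumed) compares such a truncation with the plane-wave factor e^{iz cos φ}.

```python
import numpy as np
from scipy.special import jv

z, phi, N = 3.0, 0.7, 30
lhs = np.exp(1j * z * np.cos(phi))
rhs = sum((1j ** n) * jv(n, z) * np.exp(1j * n * phi) for n in range(-N, N + 1))
print(lhs, rhs)  # the truncated sum matches to near machine precision
```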
More generally, a series
{\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)}
is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form
{\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz}
where Ok is Neumann's polynomial.
Selected functions admit the special representation
{\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)}
with
{\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz}
due to the orthogonality relation
{\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}}
More generally, if f has a branch-point near the origin of such a nature that
{\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)}
then
{\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}}
or
{\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)}
where {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f.
Another way to define the Bessel functions is through the Poisson representation formula and the Mehler–Sonine formula:
{\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}}
where ν > −1/2 and z ∈ C.
This formula is useful especially when working with Fourier transforms.
Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that:
{\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}}
where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m.
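The orthogonality relation can be checked by quadrature once the zeros uα,m are known; the sketch below (SciPy assumed) uses integer order, for which scipy.special.jn_zeros provides the zeros.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

alpha = 1
u = jn_zeros(alpha, 3)  # first three positive zeros of J_1

def inner(m, n):
    """Integral of x * J_alpha(x*u[m]) * J_alpha(x*u[n]) over [0, 1]."""
    val, _ = quad(lambda x: x * jv(alpha, x * u[m]) * jv(alpha, x * u[n]), 0.0, 1.0)
    return val

print(inner(0, 1))                                # ~0: distinct zeros are orthogonal
print(inner(0, 0), 0.5 * jv(alpha + 1, u[0])**2)  # diagonal term matches the formula
```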
An analogous relationship for the spherical Bessel functions follows immediately:
{\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}}
If one defines a boxcar function of x that depends on a small parameter ε as:
{\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)}
(where rect is the rectangle function) then the Hankel transform of it (of any given order α > −1/2), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)}
which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)}
A change of variables then yields the closure equation:
{\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)}
for α > −1/2. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is:
{\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)}
for α > −1.
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions:
{\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}}
where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular,
{\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}}
and
{\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},}
for α > −1.
For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let
{\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots }
be all its positive zeros, then
{\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)}
(There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.)
=== Recurrence relations ===
The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations
{\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)}
and
{\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),}
where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that
{\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}}
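Both recurrence identities above are convenient to test numerically; the sketch below (SciPy assumed) checks them for Jα using scipy.special.jvp for the derivative.

```python
from scipy.special import jv, jvp

alpha, x = 2.5, 3.0
print(2 * alpha / x * jv(alpha, x), jv(alpha - 1, x) + jv(alpha + 1, x))  # three-term recurrence
print(2 * jvp(alpha, x), jv(alpha - 1, x) - jv(alpha + 1, x))             # derivative identity
```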
Using the previous relations, one can arrive at similar relations for the spherical Bessel functions:
{\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}}
and
{\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }}
Modified Bessel functions follow similar relations:
{\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}}
and
{\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta }
and
{\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).}
The recurrence relation reads
{\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}}
where Cα denotes Iα or eαiπKα. These recurrence relations are useful for discrete diffusion problems.
=== Transcendence ===
In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative J'ν(x)/Jν(x) are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that
{\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)}
is transcendental under the same assumptions.
=== Sums with Bessel functions ===
The product of two Bessel functions admits the following sum:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).}
From these equalities it follows that
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}}
and as a consequence
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.}
These sums can be extended to include a term multiplier that is a polynomial function of the index. For example,
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.}
== Multiplication theorem ==
The Bessel functions obey a multiplication theorem
{\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),}
where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are
{\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)}
and
{\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).}
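The first identity (for Jν) converges quickly when |λ2 − 1| is small, so a short truncation suffices for a numerical check; the sketch below (SciPy assumed) is illustrative.

```python
from math import factorial
from scipy.special import jv

lam, nu, z, N = 0.9, 1.0, 2.0, 20
lhs = lam ** (-nu) * jv(nu, lam * z)
rhs = sum(((1 - lam**2) * z / 2) ** n / factorial(n) * jv(nu + n, z) for n in range(N))
print(lhs, rhs)  # should agree closely
```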
== Zeros of the Bessel function ==
=== Bourget's hypothesis ===
Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929.
=== Transcendence ===
Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). It is also known that all roots of the higher derivatives
{\displaystyle J_{\nu }^{(n)}(x)}
for n ≤ 18 are transcendental, except for the special values
{\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0}
and
{\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0}
.
=== Numerical approaches ===
For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004).
=== Numerical values ===
The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively.
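These values can be reproduced with standard routines, for example (SciPy assumed):

```python
from scipy.special import jn_zeros

print(jn_zeros(0, 3))  # approximately [2.40482556, 5.52007811, 8.65372791]
```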
== History ==
=== Waves and elasticity problems ===
The first appearance of a Bessel function occurs in the work of Daniel Bernoulli in 1732, in his analysis of a vibrating string, a problem that had been tackled before by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function that is now identified as J0(x). Bernoulli also developed a method to find the zeros of the function.
In 1736, Leonhard Euler found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also introduced a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions In(x).
In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions for vibrating strings.
Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for
{\displaystyle J_{\pm 1/3}(x)}
. Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated to
{\displaystyle J_{n}(x)}
, for integer n.
Towards the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents of the Bessel functions. Parseval, for example, found an integral representation of J0(x) using cosine.
At the beginning of the 1800s, Joseph Fourier used
{\displaystyle J_{0}(x)}
to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions).
=== Astronomical problems ===
In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of the work of Fourier, which was published later.
In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions.
== See also ==
== Notes ==
== References ==
== External links ==
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation
x
2
d
2
y
d
x
2
+
x
d
y
d
x
+
(
x
2
−
α
2
)
y
=
0
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0}
for an arbitrary complex number
α
{\displaystyle \alpha }
, which represents the order of the Bessel function. Although
α
{\displaystyle \alpha }
and
−
α
{\displaystyle -\alpha }
produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of
α
{\displaystyle \alpha }
.
The most important cases are when
α
{\displaystyle \alpha }
is an integer or half-integer. Bessel functions for integer
α
{\displaystyle \alpha }
are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer
α
{\displaystyle \alpha }
are obtained when solving the Helmholtz equation in spherical coordinates.
== Applications ==
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of product of two normally distributed random variables
Analyzing of the surface waves generated by microtremors, in geophysics and seismology.
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter).
== Definitions ==
Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections.The subscript n is typically used in place of
α
{\displaystyle \alpha }
when
α
{\displaystyle \alpha }
is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn.
=== Bessel functions of the first kind: Jα ===
Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by
x
α
{\displaystyle x^{\alpha }}
times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation:
J
α
(
x
)
=
∑
m
=
0
∞
(
−
1
)
m
m
!
Γ
(
m
+
α
+
1
)
(
x
2
)
2
m
+
α
,
{\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },}
where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by 2 in x/2; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to x^{−1/2}
(see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.)
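The ascending series converges quickly for moderate arguments and can be evaluated directly. Below is a minimal sketch in Python; the availability of SciPy (used only to cross-check) and the truncation order are assumptions, not part of the definition.

```python
import math
from scipy.special import jv  # assumes SciPy is available, only for a reference value

def J_series(alpha, x, terms=40):
    """Truncate the ascending series sum_m (-1)^m / (m! Gamma(m+alpha+1)) (x/2)^(2m+alpha)."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1)) \
                 * (x / 2) ** (2 * m + alpha)
    return total

x = 3.7
for alpha in (0, 1, 2.5):
    print(alpha, J_series(alpha, x), jv(alpha, x))  # the two columns should agree closely
```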
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers):
{\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).}
This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below.
==== Bessel's integrals ====
Another definition of the Bessel function, for integer values of n, is possible using an integral representation:
{\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),}
which is also called the Hansen–Bessel formula.
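A straightforward numerical check of this integral representation is sketched below; it assumes SciPy is available for the quadrature and for the reference value.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jn  # assumes SciPy; jn is the integer-order Bessel function

def J_integral(n, x):
    """Hansen-Bessel form: J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) dtau."""
    val, _ = quad(lambda tau: np.cos(n * tau - x * np.sin(tau)), 0.0, np.pi)
    return val / np.pi

print(J_integral(2, 5.0), jn(2, 5.0))  # should match to quadrature accuracy
```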
This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0:
{\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.}
==== Relation to hypergeometric series ====
The Bessel functions can be expressed in terms of the generalized hypergeometric series as
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).}
This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function.
==== Relation to Laguerre polynomials ====
In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as
{\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.}
=== Bessel functions of the second kind: Yα ===
The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann.
For non-integer α, Yα(x) is related to Jα(x) by
{\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.}
In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n:
{\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).}
If n is a nonnegative integer, we have the series
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}}
where ψ(z) is the digamma function, the logarithmic derivative of the gamma function.
There is also a corresponding integral formula (for Re(x) > 0):
{\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.}
In the case where n = 0 (with γ being Euler's constant):
{\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .}
Yα(x) is necessary as the second linearly independent solution of the Bessel's equation when α is an integer. But Yα(x) has more meaning than that. It can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below.
When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid:
{\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).}
Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α.
The Bessel functions of the second kind when α is an integer is an example of the second kind of solution in Fuchs's theorem.
=== Hankel functions: H(1)α, H(2)α ===
Another important formulation of the two linearly independent solutions to Bessel's equation are the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}}
where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel.
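A minimal numerical illustration of these definitions, assuming SciPy is available (its `hankel1`/`hankel2` routines are used only as references):

```python
import numpy as np
from scipy.special import jv, yv, hankel1, hankel2  # assumes SciPy is available

alpha, x = 1.5, 4.0
# For real x > 0, J and Y are the real and imaginary parts of H^(1).
print(np.isclose(hankel1(alpha, x), jv(alpha, x) + 1j * yv(alpha, x)))  # True
print(np.isclose(hankel2(alpha, x), jv(alpha, x) - 1j * yv(alpha, x)))  # True
```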
These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form e^{i f(x)}. For real x > 0, where J_α(x) and Y_α(x) are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e^{±ix} and J_α(x), Y_α(x) for cos(x), sin(x), as explicitly shown in the asymptotic expansion.
The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
Using the previous relationships, they can be expressed as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}}
If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not:
{\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}}
In particular, if α = m + 1/2 with m a nonnegative integer, the above relations imply directly that
{\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}}
These are useful in developing the spherical Bessel functions (see below).
The Hankel functions admit the following integral representations for Re(x) > 0:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}}
where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis.
=== Modified Bessel functions: Iα, Kα ===
The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as
{\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}}
when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor.
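The sign-free series and the relation to J_α(ix) can be checked directly against a library implementation; the following sketch assumes SciPy is available and uses an arbitrary truncation order.

```python
import math
from scipy.special import iv, jv  # assumes SciPy is available

def I_series(alpha, x, terms=40):
    """Same ascending series as J_alpha but without the alternating (-1)^m factor."""
    return sum((x / 2) ** (2 * m + alpha) / (math.factorial(m) * math.gamma(m + alpha + 1))
               for m in range(terms))

alpha, x = 0.5, 2.0
print(I_series(alpha, x), iv(alpha, x))                     # should agree closely
# I_alpha(x) = i^(-alpha) J_alpha(ix), using principal branches throughout:
print((1j ** (-alpha) * jv(alpha, 1j * x)).real, iv(alpha, x))
```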
K_α can be expressed in terms of Hankel functions:
{\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}}
Using these two formulae, the result for J_α²(z) + Y_α²(z), commonly known as Nicholson's integral or Nicholson's formula, can be obtained as follows:
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,}
given that the condition Re(x) > 0 is met. It can also be shown that
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,}
only when |Re(α)| < 1/2 and Re(x) ≥ 0 but not when x = 0.
We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ π/2):
{\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}}
Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation:
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.}
Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and 1/2Γ(|α|)(2/x)|α| otherwise.
Two integral formulas for the modified Bessel functions are (for Re(x) > 0):
{\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}}
Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0):
{\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.}
It can be proven by showing equality to the above integral definition for K0. This is done by integrating over a closed curve in the first quadrant of the complex plane.
Modified Bessel functions of the second kind may be represented with Bassett's integral
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.}
Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals
{\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}}
The modified Bessel function {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an exponential-scale mixture of normal distributions.
The modified Bessel function of the second kind has also been called by the following names (now rare):
Basset function after Alfred Barnard Basset
Modified Bessel function of the third kind
Modified Hankel function
Macdonald function after Hector Munro Macdonald
=== Spherical Bessel functions: jn, yn ===
When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.}
The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}}
yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions.
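These half-integer-order relations are easy to verify numerically; the sketch below assumes SciPy is available (its `spherical_jn`/`spherical_yn` routines are used as references).

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, jv, yv  # assumes SciPy

n, x = 2, 3.0
lhs_j = spherical_jn(n, x)
rhs_j = np.sqrt(np.pi / (2 * x)) * jv(n + 0.5, x)
lhs_y = spherical_yn(n, x)
rhs_y = np.sqrt(np.pi / (2 * x)) * yv(n + 0.5, x)
print(np.isclose(lhs_j, rhs_j), np.isclose(lhs_y, rhs_y))  # True True
```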
From the relations to the ordinary Bessel functions it is directly seen that:
{\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}}
The spherical Bessel functions can also be written as (Rayleigh's formulas)
{\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}}
The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are:
{\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}},\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}}
and
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}}
The first few non-zero roots of the first few spherical Bessel functions are:
==== Generating function ====
The spherical Bessel functions have the generating functions
{\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}}
==== Finite series expansions ====
In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression:
{\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}}
==== Differential relations ====
In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ...
{\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}}
=== Spherical Hankel functions: h(1)n, h(2)n ===
There are also spherical analogues of the Hankel functions:
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}}
There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n:
{\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},}
and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = sin x/x and y0(x) = −cos x/x, and so on.
The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field.
=== Riccati–Bessel functions: Sn, Cn, ξn, ζn ===
Riccati–Bessel functions only slightly differ from spherical Bessel functions:
{\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}}
They satisfy the differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.}
For example, this kind of differential equation appears in quantum mechanics while solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation, and the Riccati–Bessel solutions, also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See e.g., Du (2004) for recent developments and references.
Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn.
== Asymptotic forms ==
The Bessel functions have the following asymptotic forms. For small arguments {\displaystyle 0<z\ll {\sqrt {\alpha +1}}}, one obtains, when α is not a negative integer:
{\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.}
When α is a negative integer, we have
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.}
For the Bessel function of the second kind we have three cases:
{\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}}
where γ is the Euler–Mascheroni constant (0.5772...).
For large real arguments z ≫ |α2 − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1:
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}}
(For α = 1/2, the last terms in these formulas drop out completely; see the spherical Bessel functions above.)
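The quality of the leading large-argument term can be checked numerically; a minimal sketch assuming SciPy is available:

```python
import numpy as np
from scipy.special import jv  # assumes SciPy is available

alpha = 1.0
for z in (10.0, 50.0, 200.0):
    approx = np.sqrt(2 / (np.pi * z)) * np.cos(z - alpha * np.pi / 2 - np.pi / 4)
    print(z, jv(alpha, z), approx)  # agreement improves as z grows
```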
The asymptotic forms for the Hankel functions are:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}}
These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z).
It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part):
{\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}}
For the modified Bessel functions, Hankel developed asymptotic expansions as well:
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}}
There is also the asymptotic form (for large real z)
{\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}}
When α = 1/2, all the terms except the first vanish, and we have
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}}
For small arguments {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}}, we have
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}}
== Properties ==
For integer order α = n, Jn is often defined via a Laurent series for a generating function:
{\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}}
an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.)
Infinite series of Bessel functions in the form {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where {\displaystyle \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+}} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]}. More generally, the Sung series and the alternating Sung series are written as:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}}
{\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}}
A series expansion using Bessel functions (Kapteyn series) is
{\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).}
Another important relation for integer orders is the Jacobi–Anger expansion:
{\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }}
and
{\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )}
which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal.
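Because the Jacobi–Anger expansion identifies Jn(z) with the Fourier coefficients of e^{iz sin φ}, the coefficients can be recovered with an FFT; a minimal sketch assuming NumPy and SciPy are available (the grid size is an arbitrary choice):

```python
import numpy as np
from scipy.special import jv  # assumes SciPy is available

z, N = 2.5, 64
phi = 2 * np.pi * np.arange(N) / N
# Fourier coefficients of exp(i z sin(phi)) are J_n(z) (Jacobi-Anger, sine form).
coeffs = np.fft.fft(np.exp(1j * z * np.sin(phi))) / N
for n in range(4):
    print(n, coeffs[n].real, jv(n, z))  # should agree to near machine precision
```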
More generally, a series
{\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)}
is called the Neumann expansion of f. The coefficients for ν = 0 have the explicit form
{\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz}
where Ok is Neumann's polynomial.
Selected functions admit the special representation
{\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)}
with
{\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz}
due to the orthogonality relation
{\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}}
More generally, if f has a branch-point near the origin of such a nature that
{\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)}
then
{\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}}
or
{\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)}
where {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f.
Another way to define the Bessel functions is the Poisson representation formula and the Mehler–Sonine formula:
{\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}}
where ν > −1/2 and z ∈ C.
This formula is useful especially when working with Fourier transforms.
Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that:
{\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}}
where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m.
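A direct numerical check of this orthogonality relation, assuming SciPy is available (`jn_zeros` supplies the zeros for integer order, here α = 0):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros  # assumes SciPy is available

alpha = 0
u = jn_zeros(alpha, 3)          # first three positive zeros of J_0
for m in range(3):
    for n in range(3):
        val, _ = quad(lambda x: x * jv(alpha, x * u[m]) * jv(alpha, x * u[n]), 0.0, 1.0)
        print(m, n, round(val, 6))
# Off-diagonal entries are ~0; diagonal entries equal J_{alpha+1}(u_m)^2 / 2.
```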
An analogous relationship for the spherical Bessel functions follows immediately:
{\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}}
If one defines a boxcar function of x that depends on a small parameter ε as:
{\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)}
(where rect is the rectangle function) then the Hankel transform of it (of any given order α > −1/2), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)}
which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)}
A change of variables then yields the closure equation:
{\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)}
for α > −1/2. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is:
{\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)}
for α > −1.
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions:
{\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}}
where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular,
{\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}}
and
{\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},}
for α > −1.
For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let
{\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots }
be all its positive zeros, then
{\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)}
(There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.)
=== Recurrence relations ===
The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations
{\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)}
and
{\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),}
where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that
{\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}}
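A minimal sketch of the three-term recurrence in use, assuming SciPy is available for the seed values and the reference:

```python
from scipy.special import jv  # assumes SciPy is available

x = 7.0
# Upward recurrence Z_{a+1}(x) = (2a/x) Z_a(x) - Z_{a-1}(x), seeded with J_0 and J_1.
j_prev, j_curr = jv(0, x), jv(1, x)
for a in range(1, 6):
    j_next = (2 * a / x) * j_curr - j_prev
    print(a + 1, j_next, jv(a + 1, x))
    j_prev, j_curr = j_curr, j_next
# Note: upward recurrence becomes numerically unstable once the order exceeds x,
# which is why libraries typically recur downward when computing J_n.
```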
Using the previous relations one can arrive at similar relations for the spherical Bessel functions:
{\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}}
and
{\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }}
Modified Bessel functions follow similar relations:
{\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}}
and
{\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta }
and
{\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).}
The recurrence relation reads
{\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}}
where Cα denotes Iα or eαiπKα. These recurrence relations are useful for discrete diffusion problems.
=== Transcendence ===
In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative J'ν(x)/Jν(x) are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that
{\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)} is transcendental under the same assumptions.
=== Sums with Bessel functions ===
The product of two Bessel functions admits the following sum:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).}
From these equalities it follows that
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}}
and as a consequence
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.}
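These sum rules are easy to confirm numerically with a truncated sum; the sketch below assumes SciPy is available and checks Σν Jν(x)Jν+n(x) = δn,0 for a few values of n (the truncation order is an arbitrary choice, the terms decay fast once |ν| exceeds x):

```python
import numpy as np
from scipy.special import jv  # assumes SciPy is available

x, M = 5.0, 60
nu = np.arange(-M, M + 1)
J = jv(nu, x)
for n in range(3):
    print(n, np.sum(J * jv(nu + n, x)))  # approx 1 for n = 0, approx 0 otherwise
```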
These sums can be extended to include a term multiplier that is a polynomial function of the index. For example,
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.}
== Multiplication theorem ==
The Bessel functions obey a multiplication theorem
{\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),}
where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are
{\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)}
and
{\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).}
== Zeros of the Bessel function ==
=== Bourget's hypothesis ===
Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929.
=== Transcendence ===
Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). It is also known that all roots of the higher derivatives
{\displaystyle J_{\nu }^{(n)}(x)} for n ≤ 18 are transcendental, except for the special values {\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0} and {\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0}.
=== Numerical approaches ===
For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004).
=== Numerical values ===
The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively.
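Assuming SciPy is available, these values can be reproduced with `jn_zeros`:

```python
from scipy.special import jn_zeros  # assumes SciPy is available

print(jn_zeros(0, 3))  # approx [2.40482556, 5.52007811, 8.65372791]
```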
== History ==
=== Waves and elasticity problems ===
A Bessel function first appears in the work of Daniel Bernoulli in 1732, while he was working on the analysis of a vibrating string, a problem that had been tackled before by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function that is now identified as J_0(x). Bernoulli also developed a method to find the zeros of the function.
Leonhard Euler, in 1736, found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also introduced a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions I_n(x).
In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings.
Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for J_{±1/3}(x). Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with J_n(x), for integer n.
Around the end of the 18th century and the beginning of the 19th, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents to the Bessel functions. Parseval, for example, found an integral representation of J_0(x) using the cosine.
At the beginning of the 1800s, Joseph Fourier used J_0(x) to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions).
=== Astronomical problems ===
In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of the work of Fourier, which was published later.
In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions.
== See also ==
== Notes ==
== References ==
== External links == | Wikipedia/Spherical_Bessel_function |
"On Physical Lines of Force" is a four-part paper written by James Clerk Maxwell, published in 1861. In it, Maxwell derived the equations of electromagnetism in conjunction with a "sea" of "molecular vortices" which he used to model Faraday's lines of force. Maxwell had studied and commented on the field of electricity and magnetism as early as 1855/56 when "On Faraday's Lines of Force" was read to the Cambridge Philosophical Society. Maxwell made an analogy between the density of this medium and the magnetic permeability, as well as an analogy between the transverse elasticity and the dielectric constant, and using the results of a prior experiment by Wilhelm Eduard Weber and Rudolf Kohlrausch performed in 1856, he established a connection between the speed of light and the speed of propagation of waves in this medium.
The paper ushered in a new era of classical electrodynamics and catalyzed further progress in the mathematical field of vector calculus. Because of this, it is considered one of the most historically significant publications in physics and science in general, comparable with Einstein's Annus Mirabilis papers and Newton's Principia Mathematica.
== Motivations ==
In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch performed an experiment with a Leyden jar and established the ratio of electric charge as measured statically to the same electric charge as measured electrodynamically. Maxwell used this ratio in Isaac Newton's equation for the speed of sound, as applied using the density and transverse elasticity of his sea of molecular vortices. He obtained a value which was very close to the speed of light, as recently measured directly by Hippolyte Fizeau. Maxwell then wrote
"we can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena"
It was also in this 1861 paper that Maxwell first introduced the displacement current term which is now included in Ampère's circuital law. But it wasn't until his next paper in 1865, "A Dynamical Theory of the Electromagnetic Field" that Maxwell used this displacement current term to derive the electromagnetic wave equation.
{\displaystyle \mathbf {\nabla } \times \mathbf {B} =\mu _{0}\mathbf {J} +\underbrace {\mu _{0}\epsilon _{0}{\frac {\partial }{\partial t}}\mathbf {E} } _{\mathrm {Maxwell's\ term} }}
== Impact ==
The four modern Maxwell's equations, as laid down in a publication by Oliver Heaviside in 1884, had all appeared in Maxwell's 1861 paper. Heaviside, however, presented these equations in modern vector format using the nabla operator (∇) devised by William Rowan Hamilton in 1837.
Of Maxwell's work, Albert Einstein wrote:
"Imagine [Maxwell's] feelings when the differential equations he had formulated proved to him that electromagnetic fields spread in the form of polarised waves, and at the speed of light! To few men in the world has such an experience been vouchsafed... it took physicists some decades to grasp the full significance of Maxwell's discovery, so bold was the leap that his genius forced upon the conceptions of his fellow-workers."
Other physicists were equally impressed with Maxwell's work, such as Richard Feynman who commented:
"From a long view of the history of the world—seen from, say, ten thousand years from now—there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electromagnetism. The American Civil War will pale into provincial insignificance in comparison with this important scientific event of the same decade."
Charles Coulston Gillispie states that the paper introduced the word "field" to the world of physics, but Faraday first coined the term in 1849.
== See also ==
A Treatise on Electricity and Magnetism
Flux tube
== References ==
== Further reading ==
James C. Maxwell (1861). "On Physical Lines of Force" . Philosophical Magazine.
Basil Mahon (2003). The Man who Changed Everything: The Life of James Clerk Maxwell. Wiley. ISBN 978-0-470-86088-5.
James Clerk Maxwell (1878). "Ether" . Encyclopædia Britannica Ninth Edition. Vol. 8. pp. 568–572. | Wikipedia/On_Physical_Lines_of_Force |
The lumped-element model (also called lumped-parameter model, or lumped-component model) is a simplified representation of a physical system or circuit that assumes all components are concentrated at a single point and their behavior can be described by idealized mathematical models. The lumped-element model simplifies the system or circuit behavior description into a topology. It is useful in electrical systems (including electronics), mechanical multibody systems, heat transfer, acoustics, etc. This is in contrast to distributed parameter systems or models in which the behaviour is distributed spatially and cannot be considered as localized into discrete entities.
The simplification reduces the state space of the system to a finite dimension, and the partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into ordinary differential equations (ODEs) with a finite number of parameters.
== Electrical systems ==
=== Lumped-matter discipline ===
The lumped-matter discipline is a set of imposed assumptions in electrical engineering that provides the foundation for lumped-circuit abstraction used in network analysis. The self-imposed constraints are:
The change of the magnetic flux in time outside a conductor is zero.
{\displaystyle {\frac {\partial \Phi _{B}}{\partial t}}=0}
The change of the charge in time inside conducting elements is zero.
{\displaystyle {\frac {\partial q}{\partial t}}=0}
Signal timescales of interest are much larger than propagation delay of electromagnetic waves across the lumped element.
The first two assumptions result in Kirchhoff's circuit laws when applied to Maxwell's equations and are only applicable when the circuit is in steady state. The third assumption is the basis of the lumped-element model used in network analysis. Less severe assumptions result in the distributed-element model, while still not requiring the direct application of the full Maxwell equations.
=== Lumped-element model ===
The lumped-element model of electronic circuits makes the simplifying assumption that the attributes of the circuit (resistance, capacitance, inductance, and gain) are concentrated into idealized electrical components (resistors, capacitors, inductors, and so on) joined by a network of perfectly conducting wires.
The lumped-element model is valid whenever {\displaystyle L_{c}\ll \lambda }, where {\displaystyle L_{c}} denotes the circuit's characteristic length and {\displaystyle \lambda } denotes the circuit's operating wavelength. Otherwise, when the circuit length is on the order of a wavelength, we must consider more general models, such as the distributed-element model (including transmission lines), whose dynamic behaviour is described by Maxwell's equations. Another way of viewing the validity of the lumped-element model is to note that this model ignores the finite time it takes signals to propagate around a circuit. Whenever this propagation time is not significant to the application the lumped-element model can be used. This is the case when the propagation time is much less than the period of the signal involved. However, with increasing propagation time there will be an increasing error between the assumed and actual phase of the signal which in turn results in an error in the assumed amplitude of the signal. The exact point at which the lumped-element model can no longer be used depends to a certain extent on how accurately the signal needs to be known in a given application.
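As an illustration of this criterion, the following sketch (with assumed example values, not taken from any particular standard) compares a circuit's characteristic length with the operating wavelength:

```python
# Illustrative sketch (values are assumptions): check whether a circuit of
# characteristic length L_c can be treated with the lumped-element model at a
# given operating frequency, i.e. whether L_c << lambda.

c = 3.0e8            # signal propagation speed, m/s (free-space value assumed)
frequency = 100e6    # operating frequency, Hz (assumed 100 MHz)
L_c = 0.05           # circuit's characteristic length, m (assumed 5 cm)

wavelength = c / frequency
print(f"wavelength = {wavelength:.2f} m, L_c/lambda = {L_c / wavelength:.3f}")

# A common rule of thumb treats the model as acceptable when L_c is below roughly
# a tenth of the wavelength; the exact threshold depends on the accuracy required.
if L_c < wavelength / 10:
    print("lumped-element model is a reasonable approximation")
else:
    print("consider a distributed-element model (e.g. transmission lines)")
```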
Real-world components exhibit non-ideal characteristics which are, in reality, distributed elements but are often represented to a first-order approximation by lumped elements. To account for leakage in capacitors, for example, we can model the non-ideal capacitor as having a large lumped resistor connected in parallel, even though the leakage is, in reality, distributed throughout the dielectric. Similarly, a wire-wound resistor has significant inductance as well as resistance distributed along its length, but we can model this as a lumped inductor in series with the ideal resistor.
== Thermal systems ==
A lumped-capacitance model, also called lumped system analysis, reduces a thermal system to a number of discrete “lumps” and assumes that the temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex differential heat equations. It was developed as a mathematical analog of electrical capacitance, although it also includes thermal analogs of electrical resistance as well.
The lumped-capacitance model is a common approximation in transient conduction, which may be used whenever heat conduction within an object is much faster than heat transfer across the boundary of the object. The method of approximation then suitably reduces one aspect of the transient conduction system (spatial temperature variation within the object) to a more mathematically tractable form (that is, it is assumed that the temperature within the object is completely uniform in space, although this spatially uniform temperature value changes over time). The rising uniform temperature within the object or part of a system, can then be treated like a capacitative reservoir which absorbs heat until it reaches a steady thermal state in time (after which temperature does not change within it).
Early-discovered examples of lumped-capacitance systems which exhibit mathematically simple behavior due to such physical simplifications are systems which conform to Newton's law of cooling. This law simply states that the temperature of a hot (or cold) object progresses toward the temperature of its environment in a simple exponential fashion. Objects follow this law strictly only if the rate of heat conduction within them is much larger than the heat flow into or out of them. In such cases it makes sense to talk of a single "object temperature" at any given time (since there is no spatial temperature variation within the object) and also the uniform temperatures within the object allow its total thermal energy excess or deficit to vary proportionally to its surface temperature, thus setting up the Newton's law of cooling requirement that the rate of temperature decrease is proportional to the difference between the object and the environment. This in turn leads to simple exponential heating or cooling behavior (details below).
=== Method ===
To determine the number of lumps, the Biot number (Bi), a dimensionless parameter of the system, is used. Bi is defined as the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary with a uniform bath of different temperature. When the thermal resistance to heat transferred into the object is larger than the resistance to heat being diffused completely within the object, the Biot number is less than 1. In this case, particularly for Biot numbers which are even smaller, the approximation of spatially uniform temperature within the object can begin to be used, since it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
If the Biot number is less than 0.1 for a solid object, then the entire material will be nearly the same temperature, with the dominant temperature difference being at the surface. It may be regarded as being "thermally thin". The Biot number must generally be less than 0.1 for usefully accurate approximation and heat transfer analysis. The mathematical solution to the lumped-system approximation gives Newton's law of cooling.
A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body.
The single capacitance approach can be expanded to involve many resistive and capacitive elements, with Bi < 0.1 for each lump. As the Biot number is calculated based upon a characteristic length of the system, the system can often be broken into a sufficient number of sections, or lumps, so that the Biot number is acceptably small.
Some characteristic lengths of thermal systems are:
Plate: thickness
Fin: thickness/2
Long cylinder: diameter/4
Sphere: diameter/6
For arbitrary shapes, it may be useful to consider the characteristic length to be volume / surface area.
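The following sketch (with assumed material values) applies this procedure: it computes the Biot number for a small sphere using the characteristic length from the list above and tests the Bi < 0.1 criterion described earlier:

```python
# Illustrative sketch (material values are assumptions): compute the Biot number
# Bi = h * L_c / k and test the usual Bi < 0.1 criterion for lumped analysis.

h = 25.0        # convective heat transfer coefficient, W/(m^2 K) (assumed)
k = 205.0       # thermal conductivity of the solid, W/(m K) (roughly aluminium)
D = 0.02        # sphere diameter, m (assumed)

L_c = D / 6.0   # characteristic length for a sphere (from the list above)
Bi = h * L_c / k
print(f"Bi = {Bi:.4f}")

if Bi < 0.1:
    print("thermally thin: lumped-capacitance (single-lump) analysis applies")
else:
    print("thermally thick: use transient-conduction equations or more lumps")
```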
==== Thermal purely resistive circuits ====
A useful concept used in heat transfer applications once the condition of steady state heat conduction has been reached, is the representation of thermal transfer by what is known as thermal circuits. A thermal circuit is the representation of the resistance to heat flow in each element of a circuit, as though it were an electrical resistor. The heat transferred is analogous to the electric current and the thermal resistance is analogous to the electrical resistor. The values of the thermal resistance for the different modes of heat transfer are then calculated as the denominators of the developed equations. The thermal resistances of the different modes of heat transfer are used in analyzing combined modes of heat transfer. The lack of "capacitative" elements in the following purely resistive example, means that no section of the circuit is absorbing energy or changing in distribution of temperature. This is equivalent to demanding that a state of steady state heat conduction (or transfer, as in radiation) has already been established.
The equations describing the three heat transfer modes and their thermal resistances in steady state conditions, as discussed previously, are summarized in the table below:
In cases where there is heat transfer through different media (for example, through a composite material), the equivalent resistance is the sum of the resistances of the components that make up the composite. Likewise, in cases where there are different heat transfer modes, the total resistance is the sum of the resistances of the different modes. Using the thermal circuit concept, the amount of heat transferred through any medium is the quotient of the temperature change and the total thermal resistance of the medium.
As an example, consider a composite wall of cross-sectional area {\displaystyle A}. The composite is made of an {\displaystyle L_{1}} long cement plaster with a thermal coefficient {\displaystyle k_{1}} and {\displaystyle L_{2}} long paper faced fiber glass, with thermal coefficient {\displaystyle k_{2}}. The left surface of the wall is at {\displaystyle T_{i}} and exposed to air with a convective coefficient of {\displaystyle h_{i}}. The right surface of the wall is at {\displaystyle T_{o}} and exposed to air with convective coefficient {\displaystyle h_{o}}.
Using the thermal resistance concept, heat flow through the composite is as follows:
{\displaystyle {\dot {Q}}={\frac {T_{i}-T_{o}}{R_{i}+R_{1}+R_{2}+R_{o}}}={\frac {T_{i}-T_{1}}{R_{i}}}={\frac {T_{i}-T_{2}}{R_{i}+R_{1}}}={\frac {T_{i}-T_{3}}{R_{i}+R_{1}+R_{2}}}={\frac {T_{1}-T_{2}}{R_{1}}}={\frac {T_{3}-T_{o}}{R_{0}}}}
where {\displaystyle R_{i}={\frac {1}{h_{i}A}}}, {\displaystyle R_{o}={\frac {1}{h_{o}A}}}, {\displaystyle R_{1}={\frac {L_{1}}{k_{1}A}}}, and {\displaystyle R_{2}={\frac {L_{2}}{k_{2}A}}}.
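As a numerical illustration (all property values below are assumptions chosen only for the example), the series thermal resistances defined above can be evaluated directly:

```python
# Illustrative sketch with assumed values: heat flow through the composite wall
# using the series thermal resistances defined above.

A = 1.0                  # wall cross-sectional area, m^2 (assumed)
L1, k1 = 0.02, 0.72      # cement plaster: thickness (m), conductivity W/(m K) (assumed)
L2, k2 = 0.10, 0.04      # fibreglass: thickness (m), conductivity W/(m K) (assumed)
h_i, h_o = 10.0, 25.0    # inside/outside convective coefficients, W/(m^2 K) (assumed)
T_i, T_o = 293.0, 263.0  # inside/outside air temperatures, K (assumed)

R_i = 1.0 / (h_i * A)
R_o = 1.0 / (h_o * A)
R_1 = L1 / (k1 * A)
R_2 = L2 / (k2 * A)

R_total = R_i + R_1 + R_2 + R_o
Q_dot = (T_i - T_o) / R_total
print(f"total resistance = {R_total:.3f} K/W, heat flow = {Q_dot:.1f} W")
```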
==== Newton's law of cooling ====
Newton's law of cooling is an empirical relationship attributed to English physicist Sir Isaac Newton (1642–1727). This law stated in non-mathematical form is the following:
The rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings.
Or, using symbols:
{\displaystyle {\text{Rate of cooling}}\sim \Delta T}
An object at a different temperature from its surroundings will ultimately come to a common temperature with its surroundings. A relatively hot object cools as it warms its surroundings; a cool object is warmed by its surroundings. When considering how quickly (or slowly) something cools, we speak of its rate of cooling – how many degrees' change in temperature per unit of time.
The rate of cooling of an object depends on how much hotter the object is than its surroundings. The temperature change per minute of a hot apple pie will be more if the pie is put in a cold freezer than if it is placed on the kitchen table. When the pie cools in the freezer, the temperature difference between it and its surroundings is greater. On a cold day, a warm home will leak heat to the outside at a greater rate when there is a large difference between the inside and outside temperatures. Keeping the inside of a home at high temperature on a cold day is thus more costly than keeping it at a lower temperature. If the temperature difference is kept small, the rate of cooling will be correspondingly low.
As Newton's law of cooling states, the rate of cooling of an object – whether by conduction, convection, or radiation – is approximately proportional to the temperature difference ΔT. Frozen food will warm up faster in a warm room than in a cold room. Note that the rate of cooling experienced on a cold day can be increased by the added convection effect of the wind. This is referred to as wind chill. For example, a wind chill of -20 °C means that heat is being lost at the same rate as if the temperature were -20 °C without wind.
==== Applicable situations ====
This law describes many situations in which an object has a large thermal capacity and large conductivity, and is suddenly immersed in a uniform bath which conducts heat relatively poorly. It is an example of a thermal circuit with one resistive and one capacitative element. For the law to be correct, the temperatures at all points inside the body must be approximately the same at each time point, including the temperature at its surface. Thus, the temperature difference between the body and surroundings does not depend on which part of the body is chosen, since all parts of the body have effectively the same temperature. In these situations, the material of the body does not act to "insulate" other parts of the body from heat flow, and all of the significant insulation (or "thermal resistance") controlling the rate of heat flow in the situation resides in the area of contact between the body and its surroundings. Across this boundary, the temperature-value jumps in a discontinuous fashion.
In such situations, heat can be transferred from the exterior to the interior of a body, across the insulating boundary, by convection, conduction, or diffusion, so long as the boundary serves as a relatively poor conductor with regard to the object's interior. The presence of a physical insulator is not required, so long as the process which serves to pass heat across the boundary is "slow" in comparison to the conductive transfer of heat inside the body (or inside the region of interest—the "lump" described above).
In such a situation, the object acts as the "capacitative" circuit element, and the resistance of the thermal contact at the boundary acts as the (single) thermal resistor. In electrical circuits, such a combination would charge or discharge toward the input voltage, according to a simple exponential law in time. In the thermal circuit, this configuration results in the same behavior in temperature: an exponential approach of the object temperature to the bath temperature.
==== Mathematical statement ====
Newton's law is mathematically stated by the simple first-order differential equation:
{\displaystyle {\frac {dQ}{dt}}=-h\cdot A(T(t)-T_{\text{env}})=-h\cdot A\Delta T(t)}
where
Q is thermal energy in joules
h is the heat transfer coefficient between the surface and the fluid
A is the surface area through which the heat is transferred
T is the temperature of the object's surface and interior (since these are the same in this approximation)
Tenv is the temperature of the environment
ΔT(t) = T(t) − Tenv is the time-dependent temperature difference between the object and its environment
Putting heat transfers into this form is sometimes not a very good approximation, depending on ratios of heat conductances in the system. If the differences are not large, an accurate formulation of heat transfers in the system may require analysis of heat flow based on the (transient) heat transfer equation in nonhomogeneous or poorly conductive media.
==== Solution in terms of object heat capacity ====
If the entire body is treated as a lumped-capacitance heat reservoir, with a total heat content that is proportional to its total heat capacity {\displaystyle C} and to {\displaystyle T}, the temperature of the body, so that {\displaystyle Q=CT}, then the temperature of the body is expected to decay exponentially with time.
From the definition of heat capacity {\displaystyle C} comes the relation {\displaystyle C=dQ/dT}. Differentiating this equation with regard to time gives the identity (valid so long as temperatures in the object are uniform at any given time): {\displaystyle dQ/dt=C(dT/dt)}. This expression may be used to replace {\displaystyle dQ/dt} in the first equation which begins this section, above. Then, if {\displaystyle T(t)} is the temperature of such a body at time {\displaystyle t}, and {\displaystyle T_{\text{env}}} is the temperature of the environment around the body:
{\displaystyle {\frac {dT(t)}{dt}}=-r(T(t)-T_{\text{env}})=-r\Delta T(t)}
where {\displaystyle r=hA/C} is a positive constant characteristic of the system, which must be in units of {\displaystyle s^{-1}}, and is therefore sometimes expressed in terms of a characteristic time constant {\displaystyle t_{0}} given by:
{\displaystyle t_{0}=1/r=-\Delta T(t)/(dT(t)/dt)}.
Thus, in thermal systems, {\displaystyle t_{0}=C/hA}. (The total heat capacity {\displaystyle C} of a system may be further represented by its mass-specific heat capacity {\displaystyle c_{p}} multiplied by its mass {\displaystyle m}, so that the time constant {\displaystyle t_{0}} is also given by {\displaystyle mc_{p}/hA}.)
The solution of this differential equation, by standard methods of integration and substitution of boundary conditions, gives:
{\displaystyle T(t)=T_{\mathrm {env} }+(T(0)-T_{\mathrm {env} })\ e^{-rt}.}
If {\displaystyle \Delta T(t)} is defined as {\displaystyle T(t)-T_{\mathrm {env} }}, where {\displaystyle \Delta T(0)} is the initial temperature difference at time 0,
then the Newtonian solution is written as:
{\displaystyle \Delta T(t)=\Delta T(0)\ e^{-rt}=\Delta T(0)\ e^{-t/t_{0}}.}
This same solution is almost immediately apparent if the initial differential equation is written in terms of {\displaystyle \Delta T(t)}, as the single function to be solved for:
{\displaystyle {\frac {dT(t)}{dt}}={\frac {d\Delta T(t)}{dt}}=-{\frac {1}{t_{0}}}\Delta T(t)}
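A short sketch of this solution in practice (all property values are assumptions chosen for illustration) computes the time constant {\displaystyle t_{0}=mc_{p}/hA} and evaluates the exponential decay at a few times:

```python
# Illustrative sketch with assumed values: exponential cooling of a lumped body,
# using the time constant t_0 = C/(h A) = m c_p/(h A) and the solution above.
import math

m = 0.5          # mass of the body, kg (assumed)
c_p = 900.0      # specific heat capacity, J/(kg K) (assumed, roughly aluminium)
h = 15.0         # heat transfer coefficient, W/(m^2 K) (assumed)
A = 0.03         # surface area, m^2 (assumed)
T0, T_env = 400.0, 300.0   # initial and environment temperatures, K (assumed)

t_0 = m * c_p / (h * A)    # characteristic time constant, s

for t in (0.0, t_0, 3.0 * t_0):
    T = T_env + (T0 - T_env) * math.exp(-t / t_0)
    print(f"t = {t:8.1f} s  ->  T = {T:6.1f} K")
```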
=== Applications ===
This mode of analysis has been applied to forensic sciences to analyze the time of death of humans. Also, it can be applied to HVAC (heating, ventilating and air-conditioning, which can be referred to as "building climate control"), to ensure more nearly instantaneous effects of a change in comfort level setting.
== Mechanical systems ==
The simplifying assumptions in this domain are:
all objects are rigid bodies;
all interactions between rigid bodies take place via kinematic pairs (joints), springs and dampers.
== Acoustics ==
In this context, the lumped-component model extends the distributed concepts of acoustic theory subject to approximation. In the acoustical lumped-component model, certain physical components with acoustical properties may be approximated as behaving similarly to standard electronic components or simple combinations of components.
A rigid-walled cavity containing air (or similar compressible fluid) may be approximated as a capacitor whose value is proportional to the volume of the cavity. The validity of this approximation relies on the shortest wavelength of interest being significantly (much) larger than the longest dimension of the cavity.
A reflex port may be approximated as an inductor whose value is proportional to the effective length of the port divided by its cross-sectional area. The effective length is the actual length plus an end correction. This approximation relies on the shortest wavelength of interest being significantly larger than the longest dimension of the port.
Certain types of damping material can be approximated as a resistor. The value depends on the properties and dimensions of the material. The approximation relies on the wavelengths being long enough and on the properties of the material itself.
A loudspeaker drive unit (typically a woofer or subwoofer drive unit) may be approximated as a series connection of a zero-impedance voltage source, a resistor, a capacitor and an inductor. The values depend on the specifications of the unit and the wavelength of interest.
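As an illustrative sketch of this kind of modelling (the explicit formulas C = V/(ρc²) for the cavity, M = ρL_eff/S for the port, and the resulting Helmholtz resonance go beyond the proportionalities stated above and should be read as assumptions of the sketch), the lumped element values for a sealed cavity and a port can be computed directly:

```python
# Illustrative sketch (formulas and values are assumptions beyond the text above):
# lumped acoustic element values for a rigid cavity (compliance, the "capacitor")
# and a port (acoustic mass, the "inductor"), valid only when the wavelengths of
# interest are much longer than the dimensions of either element.
import math

rho = 1.2        # air density, kg/m^3 (assumed)
c = 343.0        # speed of sound, m/s (assumed)

V = 0.02         # cavity volume, m^3 (assumed, 20 litres)
L_eff = 0.12     # effective port length including end correction, m (assumed)
S = 0.005        # port cross-sectional area, m^2 (assumed)

C_a = V / (rho * c * c)      # acoustic compliance, proportional to cavity volume
M_a = rho * L_eff / S        # acoustic mass, proportional to length / area

f_b = 1.0 / (2.0 * math.pi * math.sqrt(M_a * C_a))   # Helmholtz (bass-reflex) resonance
print(f"C_a = {C_a:.3e} m^3/Pa, M_a = {M_a:.1f} kg/m^4, resonance ~ {f_b:.1f} Hz")
```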
== Heat transfer for buildings ==
A simplifying assumption in this domain is that all heat transfer mechanisms are linear, implying that radiation and convection are linearised for each problem.
Several publications can be found that describe how to generate lumped-element models of buildings. In most cases, the building is considered a single thermal zone and in this case, turning multi-layered walls into lumped elements can be one of the most complicated tasks in the creation of the model. The dominant-layer method is one simple and reasonably accurate method. In this method, one of the layers is selected as the dominant layer in the whole construction, this layer is chosen considering the most relevant frequencies of the problem.
Lumped-element models of buildings have also been used to evaluate the efficiency of domestic energy systems, by running many simulations under different future weather scenarios.
== Fluid systems ==
Fluid systems can be described by means of lumped-element cardiovascular models by using voltage to represent pressure and current to represent flow; identical equations from the electrical circuit representation are valid after substituting these two variables. Such applications can, for example, study the response of the human cardiovascular system to ventricular assist device implantation.
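A minimal sketch of such a model (a hypothetical two-element Windkessel with assumed parameter values, not taken from the publications referred to above) treats the arterial system as a single resistor–capacitor lump, with pressure playing the role of voltage and flow the role of current, and integrates it with an explicit Euler step:

```python
# Illustrative sketch (hypothetical two-element Windkessel, assumed parameters):
# C dP/dt = Q_in - P/R, the lumped RC analogue of the arterial system.
import math

R = 1.0e8        # peripheral resistance, Pa*s/m^3 (assumed)
C = 1.0e-8       # arterial compliance, m^3/Pa (assumed)
dt = 1.0e-3      # time step, s
P = 1.0e4        # initial arterial pressure, Pa (assumed)

def q_in(t):
    """Assumed inflow from the heart: a crude half-sine pulse during systole."""
    t_cycle = t % 0.8                                   # 0.8 s heart period (assumed)
    return 4e-4 * math.sin(math.pi * t_cycle / 0.3) if t_cycle < 0.3 else 0.0

t = 0.0
for _ in range(int(2.0 / dt)):                          # simulate 2 seconds
    dP_dt = (q_in(t) - P / R) / C                       # C dP/dt = Q_in - P/R
    P += dP_dt * dt
    t += dt
print(f"arterial pressure after 2 s: {P:.0f} Pa")
```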
== See also ==
System isomorphism
Model order reduction
== References ==
== External links ==
Advanced modelling and simulation techniques for magnetic components
IMTEK Mathematica Supplement (IMS), the Open Source IMTEK Mathematica Supplement (IMS) for lumped modelling | Wikipedia/Lumped_element_model |
Ferromagnetism is a property of certain materials (such as iron) that results in a significant, observable magnetic permeability, and in many cases, a significant magnetic coercivity, allowing the material to form a permanent magnet. Ferromagnetic materials are noticeably attracted to a magnet, which is a consequence of their substantial magnetic permeability.
Magnetic permeability describes the induced magnetization of a material due to the presence of an external magnetic field. For example, this temporary magnetization inside a steel plate accounts for the plate's attraction to a magnet. Whether or not that steel plate then acquires permanent magnetization depends on both the strength of the applied field and on the coercivity of that particular piece of steel (which varies with the steel's chemical composition and any heat treatment it may have undergone).
In physics, multiple types of material magnetism have been distinguished. Ferromagnetism (along with the similar effect ferrimagnetism) is the strongest type and is responsible for the common phenomenon of everyday magnetism. A common example of a permanent magnet is a refrigerator magnet. Substances respond weakly to magnetic fields by three other types of magnetism—paramagnetism, diamagnetism, and antiferromagnetism—but the forces are usually so weak that they can be detected only by lab instruments.
Permanent magnets (materials that can be magnetized by an external magnetic field and remain magnetized after the external field is removed) are either ferromagnetic or ferrimagnetic, as are the materials that are strongly attracted to them. Relatively few materials are ferromagnetic; the common ones are the metals iron, cobalt, nickel and most of their alloys, and certain rare-earth metals.
Ferromagnetism is widely used in industrial applications and modern technology, in electromagnetic and electromechanical devices such as electromagnets, electric motors, generators, transformers, magnetic storage (including tape recorders and hard disks), and nondestructive testing of ferrous materials.
Ferromagnetic materials can be divided into magnetically "soft" materials (like annealed iron) having low coercivity, which do not tend to stay magnetized, and magnetically "hard" materials having high coercivity, which do. Permanent magnets are made from hard ferromagnetic materials (such as alnico) and ferrimagnetic materials (such as ferrite) that are subjected to special processing in a strong magnetic field during manufacturing to align their internal microcrystalline structure, making them difficult to demagnetize. To demagnetize a saturated magnet, a magnetic field must be applied. The threshold at which demagnetization occurs depends on the coercivity of the material. The overall strength of a magnet is measured by its magnetic moment or, alternatively, its total magnetic flux. The local strength of magnetism in a material is measured by its magnetization.
== Terms ==
Historically, the term ferromagnetism was used for any material that could exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field; that is, any material that could become a magnet. This definition is still in common use.
In a landmark paper in 1948, Louis Néel showed that two levels of magnetic alignment result in this behavior. One is ferromagnetism in the strict sense, where all the magnetic moments are aligned. The other is ferrimagnetism, where some magnetic moments point in the opposite direction but have a smaller contribution, so spontaneous magnetization is present.
In the special case where the opposing moments balance completely, the alignment is known as antiferromagnetism; antiferromagnets do not have a spontaneous magnetization.
== Materials ==
Ferromagnetism is an unusual property that occurs in only a few substances. The common ones are the transition metals iron, nickel, and cobalt, as well as their alloys and alloys of rare-earth metals. It is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure. Ferromagnetism results from these materials having many unpaired electrons in their d-block (in the case of iron and its relatives) or f-block (in the case of the rare-earth metals), a result of Hund's rule of maximum multiplicity. There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys, named after Fritz Heusler. Conversely, there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point.
A relatively new class of exceptionally strong ferromagnetic materials are the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals.
The table lists a selection of ferromagnetic and ferrimagnetic compounds, along with their Curie temperature (TC), above which they cease to exhibit spontaneous magnetization.
=== Unusual materials ===
Most ferromagnetic materials are metals, since the conducting electrons are often responsible for mediating the ferromagnetic interactions. It is therefore a challenge to develop ferromagnetic insulators, especially multiferroic materials, which are both ferromagnetic and ferroelectric.
A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature, but which undergoes a structural transition into a tetragonal state with ferromagnetic order when cooled below its TC = 125 K. In its ferromagnetic state, PuP's easy axis is in the ⟨100⟩ direction.
In NpFe2 the easy axis is ⟨111⟩. Above TC ≈ 500 K, NpFe2 is also paramagnetic and cubic. Cooling below the Curie temperature produces a rhombohedral distortion wherein the rhombohedral angle changes from 60° (cubic phase) to 60.53°. An alternate description of this distortion is to consider the length c along the unique trigonal axis (after the distortion has begun) and a as the distance in the plane perpendicular to c. In the cubic phase this reduces to c/a = 1.00. Below the Curie temperature, the lattice acquires a distortion
{\displaystyle {\frac {c}{a}}-1=-(120\pm 5)\times 10^{-4},}
which is the largest strain in any actinide compound. NpNi2 undergoes a similar lattice distortion below TC = 32 K, with a strain of (43 ± 5) × 10−4. NpCo2 is a ferrimagnet below 15 K.
In 2009, a team of MIT physicists demonstrated that a lithium gas cooled to less than one kelvin can exhibit ferromagnetism. The team cooled fermionic lithium-6 to less than 150 nK (150 billionths of one kelvin) using infrared laser cooling. This demonstration is the first time that ferromagnetism has been demonstrated in a gas.
In rare circumstances, ferromagnetism can be observed in compounds consisting of only s-block and p-block elements, such as rubidium sesquioxide.
In 2018, a team of University of Minnesota physicists demonstrated that body-centered tetragonal ruthenium exhibits ferromagnetism at room temperature.
=== Electrically induced ferromagnetism ===
Recent research has shown evidence that ferromagnetism can be induced in some materials by an electric current or voltage. Antiferromagnetic LaMnO3 and SrCoO have been switched to be ferromagnetic by a current. In July 2020, scientists reported inducing ferromagnetism in the abundant diamagnetic material iron pyrite ("fool's gold") by an applied voltage. In these experiments, the ferromagnetism was limited to a thin surface layer.
== Explanation ==
The Bohr–Van Leeuwen theorem, discovered in the 1910s, showed that classical physics theories are unable to account for any form of material magnetism, including ferromagnetism; the explanation rather depends on the quantum mechanical description of atoms. Each of an atom's electrons has a magnetic moment according to its spin state, as described by quantum mechanics. The Pauli exclusion principle, also a consequence of quantum mechanics, restricts the occupancy of electrons' spin states in atomic orbitals, generally causing the magnetic moments from an atom's electrons to largely or completely cancel. An atom will have a net magnetic moment when that cancellation is incomplete.
=== Origin of atomic magnetism ===
One of the fundamental properties of an electron (besides that it carries charge) is that it has a magnetic dipole moment, i.e., it behaves like a tiny magnet, producing a magnetic field. This dipole moment comes from a more fundamental property of the electron: its quantum mechanical spin. Due to its quantum nature, the spin of the electron can be in one of only two states, with the magnetic field either pointing "up" or "down" (for any choice of up and down). Electron spin in atoms is the main source of ferromagnetism, although there is also a contribution from the orbital angular momentum of the electron about the nucleus. When these magnetic dipoles in a piece of matter are aligned (point in the same direction), their individually tiny magnetic fields add together to create a much larger macroscopic field.
However, materials made of atoms with filled electron shells have a total dipole moment of zero: because the electrons all exist in pairs with opposite spin, every electron's magnetic moment is cancelled by the opposite moment of the second electron in the pair. Only atoms with partially filled shells (i.e., unpaired spins) can have a net magnetic moment, so ferromagnetism occurs only in materials with partially filled shells. Because of Hund's rules, the first few electrons in an otherwise unoccupied shell tend to have the same spin, thereby increasing the total dipole moment.
These unpaired dipoles (often called simply "spins", even though they also generally include orbital angular momentum) tend to align in parallel to an external magnetic field – leading to a macroscopic effect called paramagnetism. In ferromagnetism, however, the magnetic interaction between neighboring atoms' magnetic dipoles is strong enough that they align with each other regardless of any applied field, resulting in the spontaneous magnetization of so-called domains. This results in the large observed magnetic permeability of ferromagnetics, and the ability of magnetically hard materials to form permanent magnets.
=== Exchange interaction ===
When two nearby atoms have unpaired electrons, whether the electron spins are parallel or antiparallel affects whether the electrons can share the same orbit as a result of the quantum mechanical effect called the exchange interaction. This in turn affects the electron location and the Coulomb (electrostatic) interaction and thus the energy difference between these states.
The exchange interaction is related to the Pauli exclusion principle, which says that two electrons with the same spin cannot also be in the same spatial state (orbital). This is a consequence of the spin–statistics theorem and that electrons are fermions. Therefore, under certain conditions, when the orbitals of the unpaired outer valence electrons from adjacent atoms overlap, the distributions of their electric charge in space are farther apart when the electrons have parallel spins than when they have opposite spins. This reduces the electrostatic energy of the electrons when their spins are parallel compared to their energy when the spins are antiparallel, so the parallel-spin state is more stable. This difference in energy is called the exchange energy. In simple terms, the outer electrons of adjacent atoms, which repel each other, can move further apart by aligning their spins in parallel, so the spins of these electrons tend to line up.
This energy difference can be orders of magnitude larger than the energy differences associated with the magnetic dipole–dipole interaction due to dipole orientation, which tends to align the dipoles antiparallel. In certain doped semiconductor oxides, RKKY interactions have been shown to bring about periodic longer-range magnetic interactions, a phenomenon of significance in the study of spintronic materials.
The materials in which the exchange interaction is much stronger than the competing dipole–dipole interaction are frequently called magnetic materials. For instance, in iron (Fe) the exchange force is about 1,000 times stronger than the dipole interaction. Therefore, below the Curie temperature, virtually all of the dipoles in a ferromagnetic material will be aligned. In addition to ferromagnetism, the exchange interaction is also responsible for the other types of spontaneous ordering of atomic magnetic moments occurring in magnetic solids: antiferromagnetism and ferrimagnetism. There are different exchange interaction mechanisms which create the magnetism in different ferromagnetic, ferrimagnetic, and antiferromagnetic substances—these mechanisms include direct exchange, RKKY exchange, double exchange, and superexchange.
=== Magnetic anisotropy ===
Although the exchange interaction keeps spins aligned, it does not align them in a particular direction. Without magnetic anisotropy, the spins in a magnet randomly change direction in response to thermal fluctuations, and the magnet is superparamagnetic. There are several kinds of magnetic anisotropy, the most common of which is magnetocrystalline anisotropy. This is a dependence of the energy on the direction of magnetization relative to the crystallographic lattice. Another common source of anisotropy, inverse magnetostriction, is induced by internal strains. Single-domain magnets also can have a shape anisotropy due to the magnetostatic effects of the particle shape. As the temperature of a magnet increases, the anisotropy tends to decrease, and there is often a blocking temperature at which a transition to superparamagnetism occurs.
=== Magnetic domains ===
The spontaneous alignment of magnetic dipoles in ferromagnetic materials would seem to suggest that every piece of ferromagnetic material should have a strong magnetic field, since all the spins are aligned; yet iron and other ferromagnets are often found in an "unmagnetized" state. This is because a bulk piece of ferromagnetic material is divided into tiny regions called magnetic domains (also known as Weiss domains). Within each domain, the spins are aligned, but if the bulk material is in its lowest energy configuration (i.e. "unmagnetized"), the spins of separate domains point in different directions and their magnetic fields cancel out, so the bulk material has no net large-scale magnetic field.
Ferromagnetic materials spontaneously divide into magnetic domains because the exchange interaction is a short-range force, so over long distances of many atoms, the tendency of the magnetic dipoles to reduce their energy by orienting in opposite directions wins out. If all the dipoles in a piece of ferromagnetic material are aligned parallel, it creates a large magnetic field extending into the space around it. This contains a lot of magnetostatic energy. The material can reduce this energy by splitting into many domains pointing in different directions, so the magnetic field is confined to small local fields in the material, reducing the volume of the field. The domains are separated by thin domain walls a number of molecules thick, in which the direction of magnetization of the dipoles rotates smoothly from one domain's direction to the other.
=== Magnetized materials ===
Thus, a piece of iron in its lowest energy state ("unmagnetized") generally has little or no net magnetic field. However, the magnetic domains in a material are not fixed in place; they are simply regions where the spins of the electrons have aligned spontaneously due to their magnetic fields, and thus can be altered by an external magnetic field. If a strong-enough external magnetic field is applied to the material, the domain walls will move via a process in which the spins of the electrons in atoms near the wall in one domain turn under the influence of the external field to face in the same direction as the electrons in the other domain, thus reorienting the domains so more of the dipoles are aligned with the external field. The domains will remain aligned when the external field is removed, and sum to create a magnetic field of their own extending into the space around the material, thus creating a "permanent" magnet. The domains do not go back to their original minimum energy configuration when the field is removed because the domain walls tend to become 'pinned' or 'snagged' on defects in the crystal lattice, preserving their parallel orientation. This is shown by the Barkhausen effect: as the magnetizing field is changed, the material's magnetization changes in thousands of tiny discontinuous jumps as domain walls suddenly "snap" past defects.
This magnetization as a function of an external field is described by a hysteresis curve. Although this state of aligned domains found in a piece of magnetized ferromagnetic material is not a minimal-energy configuration, it is metastable, and can persist for long periods, as shown by samples of magnetite from the sea floor which have maintained their magnetization for millions of years.
Heating and then cooling (annealing) a magnetized material, subjecting it to vibration by hammering it, or applying a rapidly oscillating magnetic field from a degaussing coil tends to release the domain walls from their pinned state, and the domain boundaries tend to move back to a lower energy configuration with less external magnetic field, thus demagnetizing the material.
Commercial magnets are made of "hard" ferromagnetic or ferrimagnetic materials with very large magnetic anisotropy such as alnico and ferrites, which have a very strong tendency for the magnetization to be pointed along one axis of the crystal, the "easy axis". During manufacture the materials are subjected to various metallurgical processes in a powerful magnetic field, which aligns the crystal grains so their "easy" axes of magnetization all point in the same direction. Thus, the magnetization, and the resulting magnetic field, is "built in" to the crystal structure of the material, making it very difficult to demagnetize.
=== Curie temperature ===
As the temperature of a material increases, thermal motion, or entropy, competes with the ferromagnetic tendency for dipoles to align. When the temperature rises beyond a certain point, called the Curie temperature, there is a second-order phase transition and the system can no longer maintain a spontaneous magnetization, so its ability to be magnetized or attracted to a magnet disappears, although it still responds paramagnetically to an external field. Below that temperature, there is a spontaneous symmetry breaking and magnetic moments become aligned with their neighbors. The Curie temperature itself is a critical point, where the magnetic susceptibility is theoretically infinite and, although there is no net magnetization, domain-like spin correlations fluctuate at all length scales.
The study of ferromagnetic phase transitions, especially via the simplified Ising spin model, had an important impact on the development of statistical physics. There, it was first clearly shown that mean field theory approaches failed to predict the correct behavior at the critical point (which was found to fall under a universality class that includes many other systems, such as liquid-gas transitions), and had to be replaced by renormalization group theory.
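For readers who want to see this transition directly, the following is a minimal Metropolis Monte Carlo sketch of the two-dimensional Ising model mentioned above (lattice size, temperatures, and sweep count are arbitrary choices for illustration); below the critical temperature a large spontaneous magnetization appears, while above it the magnetization is small:

```python
# Illustrative sketch: Metropolis Monte Carlo for the 2D Ising model (J = k_B = 1).
# The square-lattice critical temperature is about 2.269 in these units.
import random
import math

def simulate(L=16, T=1.5, sweeps=400):
    """Return |magnetization per spin| after Metropolis sweeps at temperature T."""
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum of the four nearest neighbours with periodic boundary conditions.
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb          # energy change if this spin flips
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

print("T = 1.5 (ferromagnetic phase):", round(simulate(T=1.5), 2))
print("T = 3.5 (paramagnetic phase): ", round(simulate(T=3.5), 2))
```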
== See also ==
Ferromagnetic material properties
Hysteresis – Dependence of the state of a system on its history
Orbital magnetization
Stoner criterion
Thermo-magnetic motor – Magnet motor
Neodymium magnet – Strongest type of permanent magnet from an alloy of neodymium, iron and boron
== References ==
== External links ==
Media related to Ferromagnetism at Wikimedia Commons
Electromagnetism – ch. 11, from an online textbook
Sandeman, Karl (January 2008). "Ferromagnetic Materials". DoITPoMS. Dept. of Materials Sci. and Metallurgy, Univ. of Cambridge. Retrieved 2019-06-22. Detailed nonmathematical description of ferromagnetic materials with illustrations
Magnetism: Models and Mechanisms in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich 2013, ISBN 978-3-89336-884-6 | Wikipedia/Ferromagnetic_materials |
A magnetic circuit is made up of one or more closed loop paths containing a magnetic flux. The flux is usually generated by permanent magnets or electromagnets and confined to the path by magnetic cores consisting of ferromagnetic materials like iron, although there may be air gaps or other materials in the path. Magnetic circuits are employed to efficiently channel magnetic fields in many devices such as electric motors, generators, transformers, relays, lifting electromagnets, SQUIDs, galvanometers, and magnetic recording heads.
The relation between magnetic flux, magnetomotive force, and magnetic reluctance in an unsaturated magnetic circuit can be described by Hopkinson's law, which bears a superficial resemblance to Ohm's law in electrical circuits, resulting in a one-to-one correspondence between properties of a magnetic circuit and an analogous electric circuit. Using this concept the magnetic fields of complex devices such as transformers can be quickly solved using the methods and techniques developed for electrical circuits.
Some examples of magnetic circuits are:
horseshoe magnet with iron keeper (low-reluctance circuit)
horseshoe magnet with no keeper (high-reluctance circuit)
electric motor (variable-reluctance circuit)
some types of pickup cartridge (variable-reluctance circuits)
== Magnetomotive force (MMF) ==
Similar to the way that electromotive force (EMF) drives a current of electrical charge in electrical circuits, magnetomotive force (MMF) 'drives' magnetic flux through magnetic circuits. The term 'magnetomotive force', though, is a misnomer since it is not a force nor is anything moving. It is perhaps better to call it simply MMF. In analogy to the definition of EMF, the magnetomotive force
{\displaystyle {\mathcal {F}}} around a closed loop is defined as:
{\displaystyle {\mathcal {F}}=\oint \mathbf {H} \cdot \mathrm {d} \mathbf {l} .}
The MMF represents the potential that a hypothetical magnetic charge would gain by completing the loop. The magnetic flux that is driven is not a current of magnetic charge; it merely has the same relationship to MMF that electric current has to EMF. (See microscopic origins of reluctance below for a further description.)
The unit of magnetomotive force is the ampere-turn (At), represented by a steady, direct electric current of one ampere flowing in a single-turn loop of electrically conducting material in a vacuum. The gilbert (Gb), established by the IEC in 1930, is the CGS unit of magnetomotive force and is a slightly smaller unit than the ampere-turn. The unit is named after William Gilbert (1544–1603), an English physician and natural philosopher.
{\displaystyle {\begin{aligned}1\;{\text{Gb}}&={\frac {10}{4\pi }}\;{\text{At}}\\[2pt]&\approx 0.795775\;{\text{At}}\end{aligned}}}
The magnetomotive force can often be quickly calculated using Ampère's law. For example, the magnetomotive force
{\displaystyle {\mathcal {F}}} of a long coil is:
{\displaystyle {\mathcal {F}}=NI}
where N is the number of turns and I is the current in the coil. In practice this equation is used for the MMF of real inductors with N being the winding number of the inducting coil.
== Magnetic flux ==
An applied MMF 'drives' magnetic flux through the magnetic components of the system. The magnetic flux through a magnetic component is proportional to the number of magnetic field lines that pass through the cross sectional area of that component. This is the net number, i.e. the number passing through in one direction, minus the number passing through in the other direction. The direction of the magnetic field vector B is by definition from the south to the north pole of a magnet inside the magnet; outside the magnet, the field lines run from north to south.
The flux through an element of area perpendicular to the direction of magnetic field is given by the product of the magnetic field and the area element. More generally, magnetic flux Φ is defined by a scalar product of the magnetic field and the area element vector. Quantitatively, the magnetic flux through a surface S is defined as the integral of the magnetic field over the area of the surface
{\displaystyle \Phi _{m}=\iint _{S}\mathbf {B} \cdot \mathrm {d} \mathbf {S} .}
For a magnetic component the area S used to calculate the magnetic flux Φ is usually chosen to be the cross-sectional area of the component.
The SI unit of magnetic flux is the weber (in derived units: volt-seconds), and the unit of magnetic flux density (or "magnetic induction", B) is the weber per square meter, or tesla.
== Circuit models ==
The most common way of representing a magnetic circuit is the resistance–reluctance model, which draws an analogy between electrical and magnetic circuits. This model is good for systems that contain only magnetic components, but for modelling a system that contains both electrical and magnetic parts it has serious drawbacks. It does not properly model power and energy flow between the electrical and magnetic domains. This is because electrical resistance will dissipate energy whereas magnetic reluctance stores it and returns it later. An alternative model that correctly models energy flow is the gyrator–capacitor model.
== Resistance–reluctance model ==
The resistance–reluctance model for magnetic circuits is a lumped-element model that makes electrical resistance analogous to magnetic reluctance.
=== Hopkinson's law ===
In electrical circuits, Ohm's law is an empirical relation between the EMF
{\displaystyle {\mathcal {E}}} applied across an element and the current {\displaystyle I} it generates through that element. It is written as:
{\displaystyle {\mathcal {E}}=IR.}
where R is the electrical resistance of that material. There is a counterpart to Ohm's law used in magnetic circuits. This law is often called Hopkinson's law, after John Hopkinson, but was actually formulated earlier by Henry Augustus Rowland in 1873. It states that
{\displaystyle {\mathcal {F}}=\Phi {\mathcal {R}}.}
where {\displaystyle {\mathcal {F}}} is the magnetomotive force (MMF) across a magnetic element, {\displaystyle \Phi } is the magnetic flux through the magnetic element, and {\displaystyle {\mathcal {R}}} is the magnetic reluctance of that element. (It will be shown later that this relationship is due to the empirical relationship between the H-field and the magnetic field B, B = μH, where μ is the permeability of the material). Like Ohm's law, Hopkinson's law can be interpreted either as an empirical equation that works for some materials, or it may serve as a definition of reluctance.
Hopkinson's law is not a correct analogy with Ohm's law in terms of modelling power and energy flow. In particular, there is no power dissipation associated with a magnetic reluctance in the same way as there is a dissipation in an electrical resistance. The magnetic resistance that is a true analogy of electrical resistance in this respect is defined as the ratio of magnetomotive force and the rate of change of magnetic flux. Here rate of change of magnetic flux is standing in for electric current and the Ohm's law analogy becomes,
{\displaystyle {\mathcal {F}}={\frac {d\Phi }{dt}}R_{\mathrm {m} },}
where {\displaystyle R_{\mathrm {m} }} is the magnetic resistance. This relationship is part of an electrical-magnetic analogy called the gyrator-capacitor model and is intended to overcome the drawbacks of the reluctance model. The gyrator-capacitor model is, in turn, part of a wider group of compatible analogies used to model systems across multiple energy domains.
=== Reluctance ===
Magnetic reluctance, or magnetic resistance, is analogous to resistance in an electrical circuit (although it does not dissipate magnetic energy). In likeness to the way an electric field causes an electric current to follow the path of least resistance, a magnetic field causes magnetic flux to follow the path of least magnetic reluctance. It is a scalar, extensive quantity, akin to electrical resistance.
The total reluctance is equal to the ratio of the MMF in a passive magnetic circuit and the magnetic flux in this circuit. In an AC field, the reluctance is the ratio of the amplitude values for a sinusoidal MMF and magnetic flux. (see phasors)
The definition can be expressed as:
{\displaystyle {\mathcal {R}}={\frac {\mathcal {F}}{\Phi }},}
where {\displaystyle {\mathcal {R}}} is the reluctance in ampere-turns per weber (a unit that is equivalent to turns per henry).
Magnetic flux always forms a closed loop, as described by Maxwell's equations, but the path of the loop depends on the reluctance of the surrounding materials. It is concentrated around the path of least reluctance. Air and vacuum have high reluctance, while easily magnetized materials such as soft iron have low reluctance. The concentration of flux in low-reluctance materials forms strong temporary poles and causes mechanical forces that tend to move the materials towards regions of higher flux, so the force is always attractive (a pull).
The inverse of reluctance is called permeance.
{\displaystyle {\mathcal {P}}={\frac {1}{\mathcal {R}}}.}
Its SI derived unit is the henry (the same as the unit of inductance, although the two concepts are distinct).
=== Permeability and conductivity ===
The reluctance of a magnetically uniform magnetic circuit element can be calculated as:
{\displaystyle {\mathcal {R}}={\frac {l}{\mu A}}.}
where
l is the length of the element,
{\displaystyle \mu =\mu _{r}\mu _{0}} is the permeability of the material ({\displaystyle \mu _{\mathrm {r} }} is the relative permeability of the material (dimensionless), and {\displaystyle \mu _{0}} is the permeability of free space), and
A is the cross-sectional area of the circuit.
This is similar to the equation for electrical resistance in materials, with permeability being analogous to conductivity; the reciprocal of the permeability is known as magnetic reluctivity and is analogous to resistivity. Longer, thinner geometries with low permeabilities lead to higher reluctance. Low reluctance, like low resistance in electric circuits, is generally preferred.
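A small worked sketch (all dimensions and material values are assumptions chosen for illustration) combines this reluctance formula with Hopkinson's law to estimate the flux in a simple series magnetic circuit consisting of an iron core and an air gap:

```python
# Illustrative sketch with assumed values: Hopkinson's law applied to a simple
# magnetic circuit made of an iron core in series with a small air gap.
import math

mu_0 = 4e-7 * math.pi     # permeability of free space, H/m
mu_r_iron = 4000.0        # relative permeability of the core (assumed)

N, I = 500, 2.0           # coil turns and current in amperes (assumed)
A = 4e-4                  # cross-sectional area of the flux path, m^2 (assumed)
l_core = 0.30             # mean length of the iron path, m (assumed)
l_gap = 0.001             # air-gap length, m (assumed)

R_core = l_core / (mu_r_iron * mu_0 * A)   # reluctance of the iron, At/Wb
R_gap = l_gap / (mu_0 * A)                 # reluctance of the air gap, At/Wb

F = N * I                                  # magnetomotive force, ampere-turns
flux = F / (R_core + R_gap)                # Hopkinson's law: F = Phi * R_total
B = flux / A                               # flux density in the path, T
print(f"R_core = {R_core:.3e} At/Wb, R_gap = {R_gap:.3e} At/Wb")
print(f"flux = {flux:.3e} Wb, B = {B:.2f} T")
```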
=== Summary of analogy ===
The following table summarizes the mathematical analogy between electrical circuit theory and magnetic circuit theory. This is mathematical analogy and not a physical one. Objects in the same row have the same mathematical role; the physics of the two theories are very different. For example, current is the flow of electrical charge, while magnetic flux is not the flow of any quantity.
=== Limitations of the analogy ===
The resistance–reluctance model has limitations. Electric and magnetic circuits are only superficially similar because of the similarity between Hopkinson's law and Ohm's law. Magnetic circuits have significant differences that need to be taken into account in their construction:
Electric currents represent the flow of particles (electrons) and carry power, part or all of which is dissipated as heat in resistances. Magnetic fields don't represent a "flow" of anything, and no power is dissipated in reluctances.
The current in typical electric circuits is confined to the circuit, with very little "leakage". In typical magnetic circuits not all of the magnetic field is confined to the magnetic circuit because magnetic permeability also exists outside materials (see vacuum permeability). Thus, there may be significant "leakage flux" in the space outside the magnetic cores, which must be taken into account but is often difficult to calculate.
Most importantly, magnetic circuits are nonlinear; the reluctance in a magnetic circuit is not constant, as resistance is, but varies depending on the magnetic field. At high magnetic fluxes the ferromagnetic materials used for the cores of magnetic circuits saturate, limiting further increase of the magnetic flux through them, so above this level the reluctance increases rapidly. In addition, ferromagnetic materials suffer from hysteresis so the flux in them depends not just on the instantaneous MMF but also on the history of MMF. After the source of the magnetic flux is turned off, remanent magnetism is left in ferromagnetic materials, creating flux with no MMF.
=== Circuit laws ===
Magnetic circuits obey other laws that are similar to electrical circuit laws. For example, the total reluctance
{\displaystyle {\mathcal {R}}_{\mathrm {T} }} of reluctances {\displaystyle {\mathcal {R}}_{1},\ {\mathcal {R}}_{2},\ \ldots } in series is:
{\displaystyle {\mathcal {R}}_{\mathrm {T} }={\mathcal {R}}_{1}+{\mathcal {R}}_{2}+\dotsm }
This also follows from Ampère's law and is analogous to Kirchhoff's voltage law for adding resistances in series. Also, the sum of magnetic fluxes
{\displaystyle \Phi _{1},\ \Phi _{2},\ \ldots } into any node is always zero:
{\displaystyle \Phi _{1}+\Phi _{2}+\dotsm =0.}
This follows from Gauss's law and is analogous to Kirchhoff's current law for analyzing electrical circuits.
Together, the three laws above form a complete system for analysing magnetic circuits, in a manner similar to electric circuits. Comparing the two types of circuits shows that:
The equivalent to resistance R is the reluctance {\displaystyle {\mathcal {R}}_{\mathrm {m} }}
The equivalent to current I is the magnetic flux Φ
The equivalent to voltage V is the magnetomotive force F
Magnetic circuits can be solved for the flux in each branch by application of the magnetic equivalent of Kirchhoff's voltage law (KVL) for pure source/resistance circuits. Specifically, whereas KVL states that the voltage excitation applied to a loop is equal to the sum of the voltage drops (resistance times current) around the loop, the magnetic analogue states that the magnetomotive force (achieved from ampere-turn excitation) is equal to the sum of MMF drops (product of flux and reluctance) across the rest of the loop. (If there are multiple loops, the current in each branch can be solved through a matrix equation—much as a matrix solution for mesh circuit branch currents is obtained in loop analysis—after which the individual branch currents are obtained by adding and/or subtracting the constituent loop currents as indicated by the adopted sign convention and loop orientations.) Per Ampère's law, the excitation is the product of the current and the number of complete loops made and is measured in ampere-turns. Stated more generally:
{\displaystyle F=NI=\oint \mathbf {H} \cdot \mathrm {d} \mathbf {l} .}
By Stokes's theorem, the closed line integral of H·dl around a contour is equal to the open surface integral of curl H·dA across the surface bounded by the closed contour. Since, from Maxwell's equations, curl H = J, the closed line integral of H·dl evaluates to the total current passing through the surface. This is equal to the excitation, NI, which also measures current passing through the surface, thereby verifying that the net current flow through a surface is zero ampere-turns in a closed system that conserves energy.
More complex magnetic systems, where the flux is not confined to a simple loop, must be analysed from first principles by using Maxwell's equations.
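As a small illustration of the circuit laws above, the following sketch (all dimensions, turns counts and currents are hypothetical, not taken from the article) solves a simple series magnetic circuit consisting of a core section and an air gap, using the reluctance analogue of Ohm's law, F = Φ · R_T.

```python
from math import pi

MU_0 = 4 * pi * 1e-7  # vacuum permeability, H/m

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of a uniform magnetic path: R = l / (mu_0 * mu_r * A)."""
    return length_m / (MU_0 * mu_r * area_m2)

# Hypothetical series circuit: ferrite core section plus a small air gap.
core = reluctance(length_m=0.10, area_m2=1e-4, mu_r=2000)   # core path
gap  = reluctance(length_m=0.001, area_m2=1e-4, mu_r=1.0)   # air gap

total_reluctance = core + gap          # series reluctances add, like resistors
mmf = 200 * 0.5                        # F = N * I: 200 turns carrying 0.5 A

flux = mmf / total_reluctance          # magnetic analogue of Ohm's law
print(f"Total reluctance: {total_reluctance:.3e} A-turns/Wb")
print(f"Flux: {flux:.3e} Wb")
```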
== Applications ==
Air gaps can be created in the cores of certain transformers to reduce the effects of saturation. This increases the reluctance of the magnetic circuit, and enables it to store more energy before core saturation. This effect is used in the flyback transformers of cathode-ray tube video displays and in some types of switch-mode power supply.
Variation of reluctance is the principle behind the reluctance motor (or the variable reluctance generator) and the Alexanderson alternator.
Multimedia loudspeakers are typically shielded magnetically, in order to reduce magnetic interference caused to televisions and other CRTs. The speaker magnet is covered with a material such as soft iron to minimize the stray magnetic field.
Reluctance can also be applied to variable reluctance (magnetic) pickups.
== See also ==
Magnetic capacitance
Magnetic complex reluctance
Tokamak
== References ==
== External links ==
Magnetic–Electric Analogs by Dennis L. Feucht, Innovatia Laboratories (PDF) Archived July 17, 2012, at the Wayback Machine
Interactive Java Tutorial on Magnetic Shunts National High Magnetic Field Laboratory | Wikipedia/Resistance–reluctance_model |
The lumped-element model (also called lumped-parameter model, or lumped-component model) is a simplified representation of a physical system or circuit that assumes all components are concentrated at a single point and their behavior can be described by idealized mathematical models. The lumped-element model simplifies the system or circuit behavior description into a topology. It is useful in electrical systems (including electronics), mechanical multibody systems, heat transfer, acoustics, etc. This is in contrast to distributed parameter systems or models in which the behaviour is distributed spatially and cannot be considered as localized into discrete entities.
The simplification reduces the state space of the system to a finite dimension, and the partial differential equations (PDEs) of the continuous (infinite-dimensional) time and space model of the physical system into ordinary differential equations (ODEs) with a finite number of parameters.
== Electrical systems ==
=== Lumped-matter discipline ===
The lumped-matter discipline is a set of imposed assumptions in electrical engineering that provides the foundation for lumped-circuit abstraction used in network analysis. The self-imposed constraints are:
The change of the magnetic flux in time outside a conductor is zero.
{\displaystyle {\frac {\partial \Phi _{B}}{\partial t}}=0}
The change of the charge in time inside conducting elements is zero.
{\displaystyle {\frac {\partial q}{\partial t}}=0}
Signal timescales of interest are much larger than propagation delay of electromagnetic waves across the lumped element.
The first two assumptions result in Kirchhoff's circuit laws when applied to Maxwell's equations and are only applicable when the circuit is in steady state. The third assumption is the basis of the lumped-element model used in network analysis. Less severe assumptions result in the distributed-element model, while still not requiring the direct application of the full Maxwell equations.
=== Lumped-element model ===
The lumped-element model of electronic circuits makes the simplifying assumption that the attributes of the circuit, resistance, capacitance, inductance, and gain, are concentrated into idealized electrical components; resistors, capacitors, and inductors, etc. joined by a network of perfectly conducting wires.
The lumped-element model is valid whenever {\displaystyle L_{c}\ll \lambda }, where {\displaystyle L_{c}} denotes the circuit's characteristic length, and {\displaystyle \lambda } denotes the circuit's operating wavelength. Otherwise, when the circuit length is on the order of a wavelength, we must consider more general models, such as the distributed-element model (including transmission lines), whose dynamic behaviour is described by Maxwell's equations. Another way of viewing the validity of the lumped-element model is to note that this model ignores the finite time it takes signals to propagate around a circuit. Whenever this propagation time is not significant to the application the lumped-element model can be used. This is the case when the propagation time is much less than the period of the signal involved. However, with increasing propagation time there will be an increasing error between the assumed and actual phase of the signal which in turn results in an error in the assumed amplitude of the signal. The exact point at which the lumped-element model can no longer be used depends to a certain extent on how accurately the signal needs to be known in a given application.
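A rough numerical illustration of this validity criterion follows. The 0.1-wavelength margin used as the meaning of "much smaller" is an arbitrary engineering rule of thumb, not a value fixed by the theory, and the example dimensions and frequencies are hypothetical.

```python
SPEED_OF_LIGHT = 3.0e8  # m/s, free-space value used as a rough estimate

def lumped_model_ok(characteristic_length_m, frequency_hz, margin=0.1):
    """Return True when L_c is much smaller than the operating wavelength.

    'Much smaller' is taken here as L_c < margin * wavelength.
    """
    wavelength = SPEED_OF_LIGHT / frequency_hz
    return characteristic_length_m < margin * wavelength

# A 10 cm circuit board at audio and at microwave frequencies:
print(lumped_model_ok(0.10, 10e3))    # 10 kHz -> True, lumped model applies
print(lumped_model_ok(0.10, 3e9))     # 3 GHz  -> False, use a distributed model
```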
Real-world components exhibit non-ideal characteristics which are, in reality, distributed elements but are often represented to a first-order approximation by lumped elements. To account for leakage in capacitors, for example, we can model the non-ideal capacitor as having a large lumped resistor connected in parallel, even though the leakage is, in reality, distributed throughout the dielectric. Similarly, a wire-wound resistor has significant inductance as well as resistance distributed along its length, but we can model this as a lumped inductor in series with the ideal resistor. A sketch of both approximations follows.
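A minimal sketch of these two first-order approximations, computing the complex impedance of a leaky capacitor (ideal capacitor in parallel with a lumped leakage resistor) and of a wire-wound resistor (ideal resistor in series with a lumped parasitic inductor). All component values are hypothetical.

```python
def leaky_capacitor_impedance(c_farads, r_leak_ohms, w):
    """Ideal capacitor in parallel with a lumped leakage resistance."""
    z_c = 1 / (1j * w * c_farads)
    return (z_c * r_leak_ohms) / (z_c + r_leak_ohms)

def wirewound_resistor_impedance(r_ohms, l_henries, w):
    """Ideal resistor in series with a lumped parasitic inductance."""
    return r_ohms + 1j * w * l_henries

w = 2 * 3.141592653589793 * 50.0   # angular frequency at 50 Hz
print(leaky_capacitor_impedance(10e-6, 1e6, w))
print(wirewound_resistor_impedance(100.0, 1e-3, w))
```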
== Thermal systems ==
A lumped-capacitance model, also called lumped system analysis, reduces a thermal system to a number of discrete “lumps” and assumes that the temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex differential heat equations. It was developed as a mathematical analog of electrical capacitance, although it also includes thermal analogs of electrical resistance as well.
The lumped-capacitance model is a common approximation in transient conduction, which may be used whenever heat conduction within an object is much faster than heat transfer across the boundary of the object. The method of approximation then suitably reduces one aspect of the transient conduction system (spatial temperature variation within the object) to a more mathematically tractable form (that is, it is assumed that the temperature within the object is completely uniform in space, although this spatially uniform temperature value changes over time). The rising uniform temperature within the object or part of a system, can then be treated like a capacitative reservoir which absorbs heat until it reaches a steady thermal state in time (after which temperature does not change within it).
An early-discovered example of a lumped-capacitance system which exhibits mathematically simple behavior due to such physical simplifications, are systems which conform to Newton's law of cooling. This law simply states that the temperature of a hot (or cold) object progresses toward the temperature of its environment in a simple exponential fashion. Objects follow this law strictly only if the rate of heat conduction within them is much larger than the heat flow into or out of them. In such cases it makes sense to talk of a single "object temperature" at any given time (since there is no spatial temperature variation within the object) and also the uniform temperatures within the object allow its total thermal energy excess or deficit to vary proportionally to its surface temperature, thus setting up the Newton's law of cooling requirement that the rate of temperature decrease is proportional to difference between the object and the environment. This in turn leads to simple exponential heating or cooling behavior (details below).
=== Method ===
To determine the number of lumps, the Biot number (Bi), a dimensionless parameter of the system, is used. Bi is defined as the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary with a uniform bath of different temperature. When the thermal resistance to heat transferred into the object is larger than the resistance to heat being diffused completely within the object, the Biot number is less than 1. In this case, particularly for Biot numbers which are even smaller, the approximation of spatially uniform temperature within the object can begin to be used, since it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
If the Biot number is less than 0.1 for a solid object, then the entire material will be nearly the same temperature, with the dominant temperature difference being at the surface. It may be regarded as being "thermally thin". The Biot number must generally be less than 0.1 for usefully accurate approximation and heat transfer analysis. The mathematical solution to the lumped-system approximation gives Newton's law of cooling.
A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body.
The single capacitance approach can be expanded to involve many resistive and capacitive elements, with Bi < 0.1 for each lump. As the Biot number is calculated based upon a characteristic length of the system, the system can often be broken into a sufficient number of sections, or lumps, so that the Biot number is acceptably small.
Some characteristic lengths of thermal systems are:
Plate: thickness
Fin: thickness/2
Long cylinder: diameter/4
Sphere: diameter/6
For arbitrary shapes, it may be useful to consider the characteristic length to be volume / surface area.
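The sketch below computes the Biot number from a characteristic length of the kind listed above and applies the Bi < 0.1 rule of thumb; the material and convection values are hypothetical.

```python
def biot_number(h_conv, k_material, characteristic_length):
    """Bi = h * L_c / k : convective resistance vs. internal conductive resistance."""
    return h_conv * characteristic_length / k_material

def lumped_analysis_valid(h_conv, k_material, characteristic_length):
    """Apply the usual Bi < 0.1 criterion for 'thermally thin' objects."""
    return biot_number(h_conv, k_material, characteristic_length) < 0.1

# Hypothetical example: a 1 cm diameter copper sphere in still air.
L_c = 0.01 / 6                 # sphere: diameter / 6
print(biot_number(h_conv=10.0, k_material=400.0, characteristic_length=L_c))
print(lumped_analysis_valid(10.0, 400.0, L_c))   # True: a single lump suffices
```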
==== Thermal purely resistive circuits ====
A useful concept used in heat transfer applications once the condition of steady state heat conduction has been reached, is the representation of thermal transfer by what is known as thermal circuits. A thermal circuit is the representation of the resistance to heat flow in each element of a circuit, as though it were an electrical resistor. The heat transferred is analogous to the electric current and the thermal resistance is analogous to the electrical resistor. The values of the thermal resistance for the different modes of heat transfer are then calculated as the denominators of the developed equations. The thermal resistances of the different modes of heat transfer are used in analyzing combined modes of heat transfer. The lack of "capacitative" elements in the following purely resistive example, means that no section of the circuit is absorbing energy or changing in distribution of temperature. This is equivalent to demanding that a state of steady state heat conduction (or transfer, as in radiation) has already been established.
The equations describing the three heat transfer modes and their thermal resistances in steady state conditions, as discussed previously, are summarized in the table below:
In cases where there is heat transfer through different media (for example, through a composite material), the equivalent resistance is the sum of the resistances of the components that make up the composite. Likewise, in cases where there are different heat transfer modes, the total resistance is the sum of the resistances of the different modes. Using the thermal circuit concept, the amount of heat transferred through any medium is the quotient of the temperature change and the total thermal resistance of the medium.
As an example, consider a composite wall of cross-sectional area {\displaystyle A}. The composite is made of an {\displaystyle L_{1}} long cement plaster with a thermal coefficient {\displaystyle k_{1}} and {\displaystyle L_{2}} long paper faced fiber glass, with thermal coefficient {\displaystyle k_{2}}. The left surface of the wall is at {\displaystyle T_{i}} and exposed to air with a convective coefficient of {\displaystyle h_{i}}. The right surface of the wall is at {\displaystyle T_{o}} and exposed to air with convective coefficient {\displaystyle h_{o}}.
Using the thermal resistance concept, heat flow through the composite is as follows:
{\displaystyle {\dot {Q}}={\frac {T_{i}-T_{o}}{R_{i}+R_{1}+R_{2}+R_{o}}}={\frac {T_{i}-T_{1}}{R_{i}}}={\frac {T_{i}-T_{2}}{R_{i}+R_{1}}}={\frac {T_{i}-T_{3}}{R_{i}+R_{1}+R_{2}}}={\frac {T_{1}-T_{2}}{R_{1}}}={\frac {T_{3}-T_{o}}{R_{0}}}}
where {\displaystyle R_{i}={\frac {1}{h_{i}A}}}, {\displaystyle R_{o}={\frac {1}{h_{o}A}}}, {\displaystyle R_{1}={\frac {L_{1}}{k_{1}A}}}, and {\displaystyle R_{2}={\frac {L_{2}}{k_{2}A}}}
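A numerical version of this composite-wall example, with hypothetical dimensions, coefficients and temperatures, summing the convective and conductive resistances in series and applying the heat-flow relation above:

```python
def thermal_resistances(h_i, h_o, L1, k1, L2, k2, area):
    """Convective and conductive resistances of the two-layer wall."""
    R_i = 1 / (h_i * area)
    R_1 = L1 / (k1 * area)
    R_2 = L2 / (k2 * area)
    R_o = 1 / (h_o * area)
    return R_i, R_1, R_2, R_o

# Hypothetical values: 10 m^2 wall, 2 cm plaster, 10 cm fibreglass.
R_i, R_1, R_2, R_o = thermal_resistances(
    h_i=8.0, h_o=25.0, L1=0.02, k1=0.7, L2=0.10, k2=0.04, area=10.0)

T_i, T_o = 20.0, -5.0                       # inside / outside air temperatures, C
Q_dot = (T_i - T_o) / (R_i + R_1 + R_2 + R_o)
print(f"Heat flow through the wall: {Q_dot:.1f} W")

# Interface temperatures follow from the same current-like heat flow:
T_1 = T_i - Q_dot * R_i                     # inner wall surface temperature
print(f"Inner surface temperature: {T_1:.1f} C")
```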
==== Newton's law of cooling ====
Newton's law of cooling is an empirical relationship attributed to English physicist Sir Isaac Newton (1642–1727). This law stated in non-mathematical form is the following:
The rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings.
Or, using symbols:
{\displaystyle {\text{Rate of cooling}}\sim \Delta T}
An object at a different temperature from its surroundings will ultimately come to a common temperature with its surroundings. A relatively hot object cools as it warms its surroundings; a cool object is warmed by its surroundings. When considering how quickly (or slowly) something cools, we speak of its rate of cooling – how many degrees' change in temperature per unit of time.
The rate of cooling of an object depends on how much hotter the object is than its surroundings. The temperature change per minute of a hot apple pie will be more if the pie is put in a cold freezer than if it is placed on the kitchen table. When the pie cools in the freezer, the temperature difference between it and its surroundings is greater. On a cold day, a warm home will leak heat to the outside at a greater rate when there is a large difference between the inside and outside temperatures. Keeping the inside of a home at high temperature on a cold day is thus more costly than keeping it at a lower temperature. If the temperature difference is kept small, the rate of cooling will be correspondingly low.
As Newton's law of cooling states, the rate of cooling of an object – whether by conduction, convection, or radiation – is approximately proportional to the temperature difference ΔT. Frozen food will warm up faster in a warm room than in a cold room. Note that the rate of cooling experienced on a cold day can be increased by the added convection effect of the wind. This is referred to as wind chill. For example, a wind chill of -20 °C means that heat is being lost at the same rate as if the temperature were -20 °C without wind.
==== Applicable situations ====
This law describes many situations in which an object has a large thermal capacity and large conductivity, and is suddenly immersed in a uniform bath which conducts heat relatively poorly. It is an example of a thermal circuit with one resistive and one capacitative element. For the law to be correct, the temperatures at all points inside the body must be approximately the same at each time point, including the temperature at its surface. Thus, the temperature difference between the body and surroundings does not depend on which part of the body is chosen, since all parts of the body have effectively the same temperature. In these situations, the material of the body does not act to "insulate" other parts of the body from heat flow, and all of the significant insulation (or "thermal resistance") controlling the rate of heat flow in the situation resides in the area of contact between the body and its surroundings. Across this boundary, the temperature-value jumps in a discontinuous fashion.
In such situations, heat can be transferred from the exterior to the interior of a body, across the insulating boundary, by convection, conduction, or diffusion, so long as the boundary serves as a relatively poor conductor with regard to the object's interior. The presence of a physical insulator is not required, so long as the process which serves to pass heat across the boundary is "slow" in comparison to the conductive transfer of heat inside the body (or inside the region of interest—the "lump" described above).
In such a situation, the object acts as the "capacitative" circuit element, and the resistance of the thermal contact at the boundary acts as the (single) thermal resistor. In electrical circuits, such a combination would charge or discharge toward the input voltage, according to a simple exponential law in time. In the thermal circuit, this configuration results in the same behavior in temperature: an exponential approach of the object temperature to the bath temperature.
==== Mathematical statement ====
Newton's law is mathematically stated by the simple first-order differential equation:
{\displaystyle {\frac {dQ}{dt}}=-h\cdot A(T(t)-T_{\text{env}})=-h\cdot A\Delta T(t)}
where
Q is thermal energy in joules
h is the heat transfer coefficient between the surface and the fluid
A is the surface area through which the heat is being transferred
T is the temperature of the object's surface and interior (since these are the same in this approximation)
Tenv is the temperature of the environment
ΔT(t) = T(t) − Tenv is the time-dependent thermal gradient between environment and object
Putting heat transfers into this form is sometimes not a very good approximation, depending on ratios of heat conductances in the system. If the differences are not large, an accurate formulation of heat transfers in the system may require analysis of heat flow based on the (transient) heat transfer equation in nonhomogeneous or poorly conductive media.
==== Solution in terms of object heat capacity ====
If the entire body is treated as a lumped-capacitance heat reservoir, with total heat content proportional to its total heat capacity {\displaystyle C} and to {\displaystyle T}, the temperature of the body, then {\displaystyle Q=CT}. The system is expected to experience exponential decay with time in the temperature of the body.
From the definition of heat capacity {\displaystyle C} comes the relation {\displaystyle C=dQ/dT}. Differentiating this equation with regard to time gives the identity (valid so long as temperatures in the object are uniform at any given time): {\displaystyle dQ/dt=C(dT/dt)}. This expression may be used to replace {\displaystyle dQ/dt} in the first equation which begins this section, above. Then, if {\displaystyle T(t)} is the temperature of such a body at time {\displaystyle t}, and {\displaystyle T_{\text{env}}} is the temperature of the environment around the body:
{\displaystyle {\frac {dT(t)}{dt}}=-r(T(t)-T_{\text{env}})=-r\Delta T(t)}
where {\displaystyle r=hA/C} is a positive constant characteristic of the system, which must be in units of {\displaystyle s^{-1}}, and is therefore sometimes expressed in terms of a characteristic time constant {\displaystyle t_{0}} given by: {\displaystyle t_{0}=1/r=-\Delta T(t)/(dT(t)/dt)}. Thus, in thermal systems, {\displaystyle t_{0}=C/hA}. (The total heat capacity {\displaystyle C} of a system may be further represented by its mass-specific heat capacity {\displaystyle c_{p}} multiplied by its mass {\displaystyle m}, so that the time constant {\displaystyle t_{0}} is also given by {\displaystyle mc_{p}/hA}).
The solution of this differential equation, by standard methods of integration and substitution of boundary conditions, gives:
{\displaystyle T(t)=T_{\mathrm {env} }+(T(0)-T_{\mathrm {env} })\ e^{-rt}.}
If {\displaystyle \Delta T(t)} is defined as {\displaystyle T(t)-T_{\mathrm {env} }}, where {\displaystyle \Delta T(0)} is the initial temperature difference at time 0,
then the Newtonian solution is written as:
{\displaystyle \Delta T(t)=\Delta T(0)\ e^{-rt}=\Delta T(0)\ e^{-t/t_{0}}.}
This same solution is almost immediately apparent if the initial differential equation is written in terms of {\displaystyle \Delta T(t)}, as the single function to be solved for.
{\displaystyle {\frac {dT(t)}{dt}}={\frac {d\Delta T(t)}{dt}}=-{\frac {1}{t_{0}}}\Delta T(t)}
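A short sketch of this exponential solution, comparing the closed-form result with a simple forward-Euler integration of the differential equation. All parameter values are hypothetical.

```python
import math

def cooling_closed_form(T0, T_env, t0, t):
    """T(t) = T_env + (T(0) - T_env) * exp(-t / t0)."""
    return T_env + (T0 - T_env) * math.exp(-t / t0)

def cooling_euler(T0, T_env, t0, t, steps=10000):
    """Forward-Euler integration of dT/dt = -(T - T_env) / t0."""
    T, dt = T0, t / steps
    for _ in range(steps):
        T += -(T - T_env) / t0 * dt
    return T

# Hypothetical cup of coffee: 90 C in a 20 C room, time constant 15 minutes.
print(cooling_closed_form(90.0, 20.0, t0=900.0, t=1800.0))  # after 30 minutes
print(cooling_euler(90.0, 20.0, t0=900.0, t=1800.0))        # nearly the same value
```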
=== Applications ===
This mode of analysis has been applied to forensic sciences to analyze the time of death of humans. Also, it can be applied to HVAC (heating, ventilating and air-conditioning, which can be referred to as "building climate control"), to ensure more nearly instantaneous effects of a change in comfort level setting.
== Mechanical systems ==
The simplifying assumptions in this domain are:
all objects are rigid bodies;
all interactions between rigid bodies take place via kinematic pairs (joints), springs and dampers.
== Acoustics ==
In this context, the lumped-component model extends the distributed concepts of acoustic theory subject to approximation. In the acoustical lumped-component model, certain physical components with acoustical properties may be approximated as behaving similarly to standard electronic components or simple combinations of components.
A rigid-walled cavity containing air (or similar compressible fluid) may be approximated as a capacitor whose value is proportional to the volume of the cavity. The validity of this approximation relies on the shortest wavelength of interest being significantly (much) larger than the longest dimension of the cavity.
A reflex port may be approximated as an inductor whose value is proportional to the effective length of the port divided by its cross-sectional area. The effective length is the actual length plus an end correction. This approximation relies on the shortest wavelength of interest being significantly larger than the longest dimension of the port.
Certain types of damping material can be approximated as a resistor. The value depends on the properties and dimensions of the material. The approximation relies on the wavelengths being long enough and on the properties of the material itself.
A loudspeaker drive unit (typically a woofer or subwoofer drive unit) may be approximated as a series connection of a zero-impedance voltage source, a resistor, a capacitor and an inductor. The values depend on the specifications of the unit and the wavelength of interest.
== Heat transfer for buildings ==
A simplifying assumption in this domain is that all heat transfer mechanisms are linear, implying that radiation and convection are linearised for each problem.
Several publications can be found that describe how to generate lumped-element models of buildings. In most cases, the building is considered a single thermal zone and in this case, turning multi-layered walls into lumped elements can be one of the most complicated tasks in the creation of the model. The dominant-layer method is one simple and reasonably accurate method. In this method, one of the layers is selected as the dominant layer in the whole construction, this layer is chosen considering the most relevant frequencies of the problem.
Lumped-element models of buildings have also been used to evaluate the efficiency of domestic energy systems, by running many simulations under different future weather scenarios.
== Fluid systems ==
Fluid systems can be described by means of lumped-element cardiovascular models by using voltage to represent pressure and current to represent flow; identical equations from the electrical circuit representation are valid after substituting these two variables. Such applications can, for example, study the response of the human cardiovascular system to ventricular assist device implantation.
== See also ==
System isomorphism
Model order reduction
== References ==
== External links ==
Advanced modelling and simulation techniques for magnetic components
IMTEK Mathematica Supplement (IMS), the Open Source IMTEK Mathematica Supplement (IMS) for lumped modelling | Wikipedia/Lumped-element_model |
In general relativity, an electrovacuum solution (electrovacuum) is an exact solution of the Einstein field equation in which the only nongravitational mass–energy present is the field energy of an electromagnetic field, which must satisfy the (curved-spacetime) source-free Maxwell equations appropriate to the given geometry. For this reason, electrovacuums are sometimes called (source-free) Einstein–Maxwell solutions.
== Definition ==
In general relativity, the geometric setting for physical phenomena is a Lorentzian manifold, which is interpreted as a curved spacetime, and which is specified by defining a metric tensor {\displaystyle g_{ab}} (or by defining a frame field). The Riemann curvature tensor {\displaystyle R_{abcd}} of this manifold and associated quantities such as the Einstein tensor {\displaystyle G^{ab}}, are well-defined. In general relativity, they can be interpreted as geometric manifestations (curvature and forces) of the gravitational field.
We also need to specify an electromagnetic field by defining an electromagnetic field tensor {\displaystyle F_{ab}} on our Lorentzian manifold. To be classified as an electrovacuum solution, these two tensors are required to satisfy the following two conditions:
The electromagnetic field tensor must satisfy the source-free curved spacetime Maxwell field equations
{\displaystyle \,F_{ab;c}+F_{bc;a}+F_{ca;b}=0}
and
{\displaystyle {F^{jb}}_{;j}=0}
The Einstein tensor must match the electromagnetic stress–energy tensor,
{\displaystyle G^{ab}=2\,\left(F^{a}{}_{j}F^{bj}-{\frac {1}{4}}g^{ab}\,F^{mn}\,F_{mn}\right)}.
The first Maxwell equation is satisfied automatically if we define the field tensor in terms of an electromagnetic potential vector {\displaystyle {\vec {A}}}. In terms of the dual covector (or potential one-form) and the electromagnetic two-form, we can do this by setting {\displaystyle F=dA}. Then we need only ensure that the divergences vanish (i.e. that the second Maxwell equation is satisfied for a source-free field) and that the electromagnetic stress–energy matches the Einstein tensor.
== Invariants ==
The electromagnetic field tensor is antisymmetric, with only two algebraically independent scalar invariants,
{\displaystyle I=\star (F\wedge \star F)=F_{ab}\,F^{ab}=-2\,\left(\|{\vec {E}}\|^{2}-\|{\vec {B}}\|^{2}\right)}
{\displaystyle J=\star (F\wedge F)=F_{ab}\,{\star F}^{ab}=-4\,{\vec {E}}\cdot {\vec {B}}}
Here, the star is the Hodge star.
Using these, we can classify the possible electromagnetic fields as follows:
If {\displaystyle I<0} but {\displaystyle J=0}, we have an electrostatic field, which means that some observers will measure a static electric field, and no magnetic field.
If {\displaystyle I>0} but {\displaystyle J=0}, we have a magnetostatic field, which means that some observers will measure a static magnetic field, and no electric field.
If {\displaystyle I=J=0}, the electromagnetic field is said to be null, and we have a null electrovacuum.
Null electrovacuums are associated with electromagnetic radiation. An electromagnetic field which is not null is called non-null, and then we have a non-null electrovacuum.
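This classification can be sketched numerically: given the electric and magnetic field components measured by some observer, compute I = −2(|E|² − |B|²) and J = −4 E·B as quoted above and branch on their signs. The sample field values are hypothetical, and units with c = 1 are assumed so that E and B are directly comparable.

```python
import numpy as np

def classify_em_field(E, B, tol=1e-12):
    """Classify a field from the two scalar invariants (units with c = 1)."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    I = -2.0 * (E @ E - B @ B)     # I = F_ab F^ab
    J = -4.0 * (E @ B)             # J = F_ab *F^ab
    if abs(J) < tol and I < -tol:
        return I, J, "electrostatic (some observer sees only E)"
    if abs(J) < tol and I > tol:
        return I, J, "magnetostatic (some observer sees only B)"
    if abs(I) < tol and abs(J) < tol:
        return I, J, "null (radiative) field"
    return I, J, "generic non-null field"

# Hypothetical examples:
print(classify_em_field([1, 0, 0], [0, 0, 0]))   # electrostatic
print(classify_em_field([1, 0, 0], [0, 1, 0]))   # |E| = |B|, E.B = 0 -> null
```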
== Einstein tensor ==
The components of a tensor computed with respect to a frame field rather than the coordinate basis are often called physical components, because these are the components which can (in principle) be measured by an observer.
In the case of an electrovacuum solution, an adapted frame {\displaystyle {\vec {e}}_{0},\;{\vec {e}}_{1},\;{\vec {e}}_{2},\;{\vec {e}}_{3}} can always be found in which the Einstein tensor has a particularly simple appearance.
Here, the first vector is understood to be a timelike unit vector field; this is everywhere tangent to the world lines of the corresponding family of adapted observers, whose motion is "aligned" with the electromagnetic field. The last three are spacelike unit vector fields.
For a non-null electrovacuum, an adapted frame can be found in which the Einstein tensor takes the form
{\displaystyle G^{{\hat {a}}{\hat {b}}}=8\pi \epsilon \,\left[{\begin{matrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&-1\end{matrix}}\right]}
where {\displaystyle \epsilon } is the energy density of the electromagnetic field, as measured by any adapted observer. From this expression, it is easy to see that the isotropy group of our non-null electrovacuum is generated by boosts in the {\displaystyle {\vec {e}}_{3}} direction and rotations about the {\displaystyle {\vec {e}}_{3}} axis. In other words, the isotropy group of any non-null electrovacuum is a two-dimensional abelian Lie group isomorphic to SO(1,1) x SO(2).
For a null electrovacuum, an adapted frame can be found in which the Einstein tensor takes the form
{\displaystyle G^{{\hat {a}}{\hat {b}}}=8\pi \epsilon \,\left[{\begin{matrix}1&0&0&\pm 1\\0&0&0&0\\0&0&0&0\\\pm 1&0&0&1\end{matrix}}\right]}
From this it is easy to see that the isotropy group of our null electrovacuum includes rotations about the {\displaystyle {\vec {e}}_{3}} axis; two further generators are the two parabolic Lorentz transformations aligned with the {\displaystyle {\vec {e}}_{3}} direction given in the article on the Lorentz group. In other words, the isotropy group of any null electrovacuum is a three-dimensional Lie group isomorphic to E(2), the isometry group of the Euclidean plane.
The fact that these results are exactly the same in curved spacetimes as for electrodynamics in flat Minkowski spacetime is one expression of the equivalence principle.
== Eigenvalues ==
The characteristic polynomial of the Einstein tensor of a non-null electrovacuum must have the form
{\displaystyle \chi (\lambda )=\left(\lambda +8\pi \epsilon \right)^{2}\,\left(\lambda -8\pi \epsilon \right)^{2}}
Using Newton's identities, this condition can be re-expressed in terms of the traces of the powers of the Einstein tensor as
{\displaystyle t_{1}=t_{3}=0,\;t_{4}=t_{2}^{2}/4}
where
{\displaystyle t_{1}={G^{a}}_{a},\;t_{2}={G^{a}}_{b}\,{G^{b}}_{a},\;t_{3}={G^{a}}_{b}\,{G^{b}}_{c}\,{G^{c}}_{a},\;t_{4}={G^{a}}_{b}\,{G^{b}}_{c}\,{G^{c}}_{d}\,{G^{d}}_{a}}
This necessary criterion can be useful for checking that a putative non-null electrovacuum solution is plausible, and is sometimes useful for finding non-null electrovacuum solutions.
The characteristic polynomial of a null electrovacuum vanishes identically, even if the energy density is nonzero. This possibility is a tensor analogue of the well known fact that a null vector always has vanishing length, even if it is not the zero vector. Thus, every null electrovacuum has one quadruple eigenvalue, namely zero.
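A sketch of how the trace criterion might be checked numerically for a putative non-null electrovacuum: given the frame components G^{ab}, lower one index with the Minkowski frame metric to form the mixed tensor, compute the traces of its powers, and test t1 = t3 = 0 and t4 = t2²/4. The adapted-frame matrix quoted above is used as the test case, with a hypothetical energy density; the signature convention (− + + +) and frame ordering (timelike vector first) are assumptions of this sketch.

```python
import numpy as np

ETA = np.diag([-1.0, 1.0, 1.0, 1.0])        # Minkowski frame metric, timelike first

def trace_conditions_hold(G_upper, rtol=1e-9):
    """Check t1 = t3 = 0 and t4 = t2^2/4 for frame components G^{ab}."""
    G_mixed = G_upper @ ETA                 # G^a_b = G^{ac} eta_{cb}
    G2 = G_mixed @ G_mixed
    t1, t2 = np.trace(G_mixed), np.trace(G2)
    t3, t4 = np.trace(G2 @ G_mixed), np.trace(G2 @ G2)
    scale = max(abs(t2), 1.0)
    return (abs(t1) <= rtol * scale
            and abs(t3) <= rtol * scale ** 1.5
            and abs(t4 - t2 ** 2 / 4) <= rtol * scale ** 2)

eps = 0.3                                                       # hypothetical energy density
G_nonnull = 8 * np.pi * eps * np.diag([1.0, 1.0, 1.0, -1.0])    # adapted-frame form above
print(trace_conditions_hold(G_nonnull))                         # True
print(trace_conditions_hold(np.diag([1.0, 2.0, 3.0, 4.0])))     # False: not an electrovacuum
```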
== Rainich conditions ==
In 1925, George Yuri Rainich presented purely mathematical conditions which are both necessary and sufficient for a Lorentzian manifold to admit an interpretation in general relativity as a non-null electrovacuum. These comprise three algebraic conditions and one differential condition. The conditions are sometimes useful for checking that a putative non-null electrovacuum really is what it claims, or even for finding such solutions.
Analogous necessary and sufficient conditions for a null electrovacuum have been found by Charles Torre.
== Test fields ==
Sometimes one can assume that the field energy of any electromagnetic field is so small that its gravitational effects can be neglected. Then, to obtain an approximate electrovacuum solution, we need only solve the Maxwell equations on a given vacuum solution. In this case, the electromagnetic field is often called a test field, in analogy with the term test particle (denoting a small object whose mass is too small to contribute appreciably to the ambient gravitational field).
Here, it is useful to know that any Killing vectors which may be present will (in the case of a vacuum solution) automatically satisfy the curved spacetime Maxwell equations.
Note that this procedure amounts to assuming that the electromagnetic field, but not the gravitational field, is "weak". Sometimes we can go even further; if the gravitational field is also considered "weak", we can independently solve the linearised Einstein field equations and the (flat spacetime) Maxwell equations on a Minkowski vacuum background. Then the (weak) metric tensor gives the approximate geometry; the Minkowski background is unobservable by physical means, but mathematically much simpler to work with, whenever we can get away with such a sleight-of-hand.
== Examples ==
Noteworthy individual non-null electrovacuum solutions include:
Reissner–Nordström electrovacuum (which describes the geometry around a charged spherical mass),
Kerr–Newman electrovacuum (which describes the geometry around a charged, rotating object),
Melvin electrovacuum (a model of a cylindrically symmetric magnetostatic field),
Garfinkle–Melvin electrovacuum (like the preceding, but including a gravitational wave traveling along the axis of symmetry),
Bertotti–Robinson electrovacuum: this is a simple spacetime having a remarkable product structure; it arises from a kind of "blow up" of the horizon of the Reissner–Nordström electrovacuum,
Witten electrovacuums (discovered by Louis Witten).
Noteworthy individual null electrovacuum solutions include:
the monochromatic electromagnetic plane wave, an exact solution which is the general relativistic analogue of the plane waves in classical electromagnetism,
Bell–Szekeres electrovacuum (a colliding plane wave model).
Some well known families of electrovacuums are:
Weyl–Maxwell electrovacuums: this is the family of all static axisymmetric electrovacuum solutions; it includes the Reissner–Nordström electrovacuum,
Ernst–Maxwell electrovacuums: this is the family of all stationary axisymmetric electrovacuum solutions; it includes the Kerr–Newman electrovacuum,
Beck–Maxwell electrovacuums: all nonrotating cylindrically symmetric electrovacuum solutions,
Ehlers–Maxwell electrovacuums: all stationary cylindrically symmetric electrovacuum solutions,
Szekeres electrovacuums: all pairs of colliding plane waves, where each wave may contain both gravitational and electromagnetic radiation; these solutions are null electrovacuums outside the interaction zone, but generally non-null electrovacuums inside the interaction zone, due to the non-linear interaction of the two waves after they collide.
Many pp-wave spacetimes admit an electromagnetic field tensor turning them into exact null electrovacuum solutions.
== See also ==
Classification of electromagnetic fields
Exact solutions in general relativity
Lorentz group
== References ==
Stephani, Hans; Kramer, Dietrich; MacCallum, Malcolm; Hoenselaers, Cornelius; Herlt, Eduard (2003). Exact Solutions of Einstein's Field Equations. Cambridge: Cambridge University Press. ISBN 0-521-46136-7. See section 5.4 for the Rainich conditions, section 19.4 for the Weyl–Maxwell electrovacuums, section 21.1 for the Ernst-Maxwell electrovacuums, section 24.5 for pp-waves, section 25.5 for Szekeres electrovacuums, etc.
Griffiths, J. B. (1991). Colliding Plane Waves in General Relativity. Oxford: Clarendon Press. ISBN 0-19-853209-1. The definitive resource on colliding plane waves, including the examples mentioned above. | Wikipedia/Electrovacuum_solution |
In electromagnetism and applications, an inhomogeneous electromagnetic wave equation, or nonhomogeneous electromagnetic wave equation, is one of a set of wave equations describing the propagation of electromagnetic waves generated by nonzero source charges and currents. The source terms in the wave equations make the partial differential equations inhomogeneous; if the source terms are zero, the equations reduce to the homogeneous electromagnetic wave equations, which follow from Maxwell's equations.
== Maxwell's equations ==
For reference, Maxwell's equations are summarized below in SI units and Gaussian units. They govern the electric field E and magnetic field B due to a source charge density ρ and current density J:
where ε0 is the vacuum permittivity and μ0 is the vacuum permeability. Throughout, the relation
{\displaystyle \varepsilon _{0}\mu _{0}={\dfrac {1}{c^{2}}}}
is also used.
== SI units ==
=== E and B fields ===
Maxwell's equations can directly give inhomogeneous wave equations for the electric field E and magnetic field B. Substituting Gauss's law for electricity and Ampère's law into the curl of Faraday's law of induction, and using the curl of the curl identity ∇ × (∇ × X) = ∇(∇ ⋅ X) − ∇2X (the last term on the right side is the vector Laplacian, not the Laplacian applied on scalar functions) gives the wave equation for the electric field E:
{\displaystyle {\dfrac {1}{c^{2}}}{\dfrac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =-\left({\dfrac {1}{\varepsilon _{0}}}\nabla \rho +\mu _{0}{\dfrac {\partial \mathbf {J} }{\partial t}}\right)\,.}
Similarly substituting Gauss's law for magnetism into the curl of Ampère's circuital law (with Maxwell's additional time-dependent term), and using the curl of the curl identity, gives the wave equation for the magnetic field B:
{\displaystyle {\dfrac {1}{c^{2}}}{\dfrac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =\mu _{0}\nabla \times \mathbf {J} \,.}
The left hand sides of each equation correspond to wave motion (the D'Alembert operator acting on the fields), while the right hand sides are the wave sources. The equations imply that EM waves are generated if there are gradients in charge density ρ, circulations in current density J, time-varying current density, or any mixture of these.
The above equation for the electric field can be transformed into a homogeneous wave equation with a so-called damping term if we study a problem where Ohm's law in differential form {\displaystyle \mathbf {J_{f}} =\sigma \mathbf {E} } holds (we assume {\displaystyle \mathbf {J_{b}} =0}, that is, we are dealing with homogeneous conductors that have relative permeability and permittivity around 1), and by substituting {\displaystyle {\dfrac {1}{\varepsilon _{0}}}\nabla \rho =\nabla (\nabla \cdot \mathbf {E} )} from the differential form of Gauss's law and {\displaystyle \mathbf {J=J_{b}+J_{f}} =\sigma \mathbf {E} }.
The final homogeneous equation with only the unknown electric field and its partial derivatives is
{\displaystyle {\dfrac {1}{c^{2}}}{\dfrac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} +\nabla (\nabla \cdot \mathbf {E} )+\sigma \mu _{0}{\dfrac {\partial \mathbf {E} }{\partial t}}=0}
The above homogeneous equation has infinitely many solutions for the electric field, so boundary conditions must be specified in order to single out particular solutions.
These forms of the wave equations are not often used in practice, as the source terms are inconveniently complicated. A simpler formulation, more commonly encountered in the literature and used in theory, is the electromagnetic potential formulation, presented next.
=== A and φ potential fields ===
Introducing the electric potential φ (a scalar potential) and the magnetic potential A (a vector potential) defined from the E and B fields by:
{\displaystyle \mathbf {E} =-\nabla \varphi -{\frac {\partial \mathbf {A} }{\partial t}}\,,\quad \mathbf {B} =\nabla \times \mathbf {A} \,.}
The four Maxwell's equations in a vacuum with charge ρ and current J sources reduce to two equations. Gauss's law for electricity is:
{\displaystyle \nabla ^{2}\varphi +{\frac {\partial }{\partial t}}\left(\nabla \cdot \mathbf {A} \right)=-{\frac {1}{\varepsilon _{0}}}\rho \,,}
where {\displaystyle \nabla ^{2}} here is the Laplacian applied on scalar functions, and the Ampère-Maxwell law is:
{\displaystyle \nabla ^{2}\mathbf {A} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}-\nabla \left({\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}+\nabla \cdot \mathbf {A} \right)=-\mu _{0}\mathbf {J} \,}
where {\displaystyle \nabla ^{2}} here is the vector Laplacian applied on vector fields. The source terms are now much simpler, but the wave terms are less obvious. Since the potentials are not unique, but have gauge freedom, these equations can be simplified by gauge fixing. A common choice is the Lorenz gauge condition:
{\displaystyle {\frac {1}{c^{2}}}{\frac {\partial \varphi }{\partial t}}+\nabla \cdot \mathbf {A} =0}
Then the nonhomogeneous wave equations become uncoupled and symmetric in the potentials:
{\displaystyle {\begin{aligned}\nabla ^{2}\varphi -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\varphi }{\partial t^{2}}}&=-{\frac {1}{\varepsilon _{0}}}\rho \,,\\[2.75ex]\nabla ^{2}\mathbf {A} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}&=-\mu _{0}\mathbf {J} \,.\end{aligned}}}
For reference, in cgs units these equations are
{\displaystyle {\begin{aligned}\nabla ^{2}\varphi -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\varphi }{\partial t^{2}}}&=-4\pi \rho \\[2ex]\nabla ^{2}\mathbf {A} -{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {A} }{\partial t^{2}}}&=-{\frac {4\pi }{c}}\mathbf {J} \end{aligned}}}
with the Lorenz gauge condition
{\displaystyle {\frac {1}{c}}{\frac {\partial \varphi }{\partial t}}+\nabla \cdot \mathbf {A} =0\,.}
== Covariant form of the inhomogeneous wave equation ==
The relativistic Maxwell's equations can be written in covariant form as
{\displaystyle {\begin{aligned}\Box A^{\mu }&\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ \partial _{\beta }\partial ^{\beta }A^{\mu }\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ {A^{\mu ,\beta }}_{\beta }=-\mu _{0}J^{\mu }&&{\text{SI}}\\[1.15ex]\Box A^{\mu }&\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ \partial _{\beta }\partial ^{\beta }A^{\mu }\ {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\ {A^{\mu ,\beta }}_{\beta }=-{\tfrac {4\pi }{c}}J^{\mu }&&{\text{cgs}}\end{aligned}}}
where
{\displaystyle \Box =\partial _{\beta }\partial ^{\beta }=\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}}
is the d'Alembert operator,
{\displaystyle J^{\mu }=\left(c\rho ,\mathbf {J} \right)}
is the four-current,
{\displaystyle {\frac {\partial }{\partial x^{a}}}\ {\stackrel {\mathrm {def} }{=}}\ \partial _{a}\ {\stackrel {\mathrm {def} }{=}}\ {}_{,a}\ {\stackrel {\mathrm {def} }{=}}\ (\partial /\partial ct,\nabla )}
is the 4-gradient, and
{\displaystyle {\begin{aligned}A^{\mu }&=(\varphi /c,\mathbf {A} )&&{\text{SI}}\\[1ex]A^{\mu }&=(\varphi ,\mathbf {A} )&&{\text{cgs}}\end{aligned}}}
is the electromagnetic four-potential with the Lorenz gauge condition
{\displaystyle \partial _{\mu }A^{\mu }=0\,.}
== Curved spacetime ==
The electromagnetic wave equation is modified in two ways in curved spacetime: the derivative is replaced with the covariant derivative, and a new term that depends on the curvature appears (SI units).
{\displaystyle -{A^{\alpha ;\beta }}_{\beta }+{R^{\alpha }}_{\beta }A^{\beta }=\mu _{0}J^{\alpha }}
where {\displaystyle {R^{\alpha }}_{\beta }} is the Ricci curvature tensor. Here the semicolon indicates covariant differentiation. To obtain the equation in cgs units, replace the permeability with 4π/c.
The Lorenz gauge condition in curved spacetime is assumed:
{\displaystyle {A^{\mu }}_{;\mu }=0\,.}
== Solutions to the inhomogeneous electromagnetic wave equation ==
In the case that there are no boundaries surrounding the sources, the solutions (cgs units) of the nonhomogeneous wave equations are
{\displaystyle \varphi (\mathbf {r} ,t)=\int {\frac {\delta \left(t'+{\frac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\rho (\mathbf {r} ',t')\,d^{3}\mathbf {r} 'dt'}
and
{\displaystyle \mathbf {A} (\mathbf {r} ,t)=\int {\frac {\delta \left(t'+{\frac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}{\frac {\mathbf {J} (\mathbf {r} ',t')}{c}}\,d^{3}\mathbf {r} 'dt'}
where {\displaystyle \delta \left(t'+{\tfrac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)} is a Dirac delta function.
These solutions are known as the retarded Lorenz gauge potentials. They represent a superposition of spherical light waves traveling outward from the sources of the waves, from the present into the future.
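As a rough numerical illustration (not from the article), the retarded scalar potential can be evaluated by carrying out the time integral against the delta function, which leaves the charge density sampled at the retarded time t' = t − |r − r'|/c, and then summing the remaining spatial integral over a grid of source points. The pulsating charge blob used here is hypothetical.

```python
import numpy as np

C = 1.0  # speed of light in the chosen units

def retarded_potential(r_obs, t, rho, grid_points, cell_volume):
    """Retarded scalar potential (cgs-style form) by direct summation.

    rho(r, t) is a user-supplied charge density; grid_points is an (N, 3)
    array of source sample positions sharing a common cell volume.
    """
    phi = 0.0
    for r_src in grid_points:
        dist = np.linalg.norm(r_obs - r_src)
        if dist == 0.0:
            continue                      # skip a coincident grid point
        t_ret = t - dist / C              # retarded time
        phi += rho(r_src, t_ret) * cell_volume / dist
    return phi

# Hypothetical pulsating blob of charge centred at the origin:
def rho(r, t):
    return np.exp(-np.dot(r, r)) * (1.0 + 0.5 * np.sin(2.0 * t))

xs = np.linspace(-3, 3, 21)
grid = np.array([[x, y, z] for x in xs for y in xs for z in xs])
dv = (xs[1] - xs[0]) ** 3

print(retarded_potential(np.array([10.0, 0.0, 0.0]), t=5.0,
                         rho=rho, grid_points=grid, cell_volume=dv))
```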
There are also advanced solutions (cgs units)
{\displaystyle \varphi (\mathbf {r} ,t)=\int {\frac {\delta \left(t'-{\tfrac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\rho (\mathbf {r} ',t')\,d^{3}\mathbf {r} 'dt'}
and
{\displaystyle \mathbf {A} (\mathbf {r} ,t)=\int {\frac {\delta \left(t'-{\tfrac {1}{c}}{\left|\mathbf {r} -\mathbf {r} '\right|}-t\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}{\mathbf {J} (\mathbf {r} ',t') \over c}\,d^{3}\mathbf {r} 'dt'\,.}
These represent a superposition of spherical waves travelling from the future into the present.
== See also ==
Wave equation
Sinusoidal plane-wave solutions of the electromagnetic wave equation
Larmor formula
Covariant formulation of classical electromagnetism
Maxwell's equations in curved spacetime
Abraham–Lorentz force
Green's function
== References ==
=== Electromagnetics ===
==== Journal articles ====
==== Undergraduate-level textbooks ====
==== Graduate-level textbooks ====
=== Vector Calculus & Further Topics === | Wikipedia/Nonhomogeneous_electromagnetic_wave_equation |
In electrical engineering, power conversion is the process of converting electric energy from one form to another.
A power converter is an electrical device for converting electrical energy between alternating current (AC) and direct current (DC). It can also change the voltage or frequency of the current.
Power converters include simple devices such as transformers, and more complex ones like resonant converters. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation.
Power converters are classified based on the type of power conversion they perform. One way of classifying power conversion systems is based on whether the input and output is alternating or direct current.
== DC power conversion ==
=== DC to DC ===
The following devices can convert DC to DC:
Linear regulator
Voltage regulator
Motor–generator
Rotary converter
Switched-mode power supply
=== DC to AC ===
The following devices can convert DC to AC:
Power inverter
Motor–generator
Rotary converter
Switched-mode power supply
Chopper (electronics)
== AC power conversion ==
=== AC to DC ===
The following devices can convert AC to DC:
Rectifier
Mains power supply unit (PSU)
Motor–generator
Rotary converter
Switched-mode power supply
=== AC to AC ===
The following devices can convert AC to AC:
Transformer or autotransformer
Voltage converter
Voltage regulator
Cycloconverter
Variable-frequency transformer
Motor–generator
Rotary converter
Switched-mode power supply
== Other systems ==
There are also devices and methods to convert between power systems designed for single and three-phase operation.
The standard power voltage and frequency vary from country to country and sometimes within a country. In North America and northern South America, it is usually 120 volts, 60 hertz (Hz), but in Europe, Asia, Africa, and many other parts of the world, it is usually 230 volts, 50 Hz. Aircraft often use 400 Hz power internally, so 50 Hz or 60 Hz to 400 Hz frequency conversion is needed for use in the ground power unit used to power the airplane while it is on the ground. Conversely, internal 400 Hz power may be converted to 50 Hz or 60 Hz for convenience power outlets available to passengers during flight.
Certain specialized circuits can also be considered power converters, such as the flyback transformer subsystem powering a CRT, generating high voltage at approximately 15 kHz.
Consumer electronics usually include an AC adapter (a type of power supply) to convert mains-voltage AC current to low-voltage DC suitable for consumption by microchips. Consumer voltage converters (also known as "travel converters") are used when traveling between countries that use ~120 V versus ~240 V AC mains power. (There are also consumer "adapters" which merely form an electrical connection between two differently shaped AC power plugs and sockets, but these change neither voltage nor frequency.)
== Why use transformers in power converters ==
Transformers are used in power converters to provide electrical isolation and voltage step-down or step-up.
The secondary circuit is floating; when you touch the secondary circuit, you merely drag its potential to your body's potential or the earth's potential, and no current flows through your body. That is why you can use your cellphone safely when it is being charged, even if your cellphone has a metal shell and is connected to the secondary circuit.
Operating at high frequency and supplying low power, power converters have much smaller transformers compared with those of fundamental-frequency, high-power applications.
The current in the primary winding of a transformer helps to set up the mutual flux in accordance with Ampère's law and balances the demagnetizing effect of the load current in the secondary winding.
A flyback converter's transformer works differently, behaving like an inductor. In each cycle, the flyback converter's transformer first gets charged and then releases its energy to the load. Accordingly, the flyback converter's transformer air gap has two functions: it not only determines inductance but also stores energy. For the flyback converter, the transformer gap thus carries out energy transmission through cycles of charging and discharging.
{\displaystyle W_{e}={\frac {1}{2}}BH={\frac {1}{2}}{\frac {B^{2}}{\mu }}}
The core's relative permeability {\displaystyle \mu _{r}} can be greater than 1,000, even greater than 10,000, while the air gap has much lower permeability and accordingly a higher energy density.
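A small sketch (hypothetical dimensions and flux density) of why the air gap dominates the stored energy: the energy density w = B²/(2μ) quoted above is evaluated in the gap and in the core and multiplied by their respective volumes.

```python
from math import pi

MU_0 = 4 * pi * 1e-7          # vacuum permeability, H/m

def stored_energy(B, mu_r, volume_m3):
    """Energy stored in a region of flux density B: w = B^2 / (2 mu), times volume."""
    return B**2 / (2 * MU_0 * mu_r) * volume_m3

# Hypothetical flyback core: 0.2 T flux density, 1 cm^2 cross-section,
# 6 cm magnetic path in ferrite (mu_r ~ 2000) plus a 0.5 mm air gap.
area = 1e-4
E_core = stored_energy(B=0.2, mu_r=2000, volume_m3=area * 0.06)
E_gap  = stored_energy(B=0.2, mu_r=1,    volume_m3=area * 0.0005)

print(f"Energy in core: {E_core*1e6:.1f} uJ")
print(f"Energy in gap:  {E_gap*1e6:.1f} uJ")   # the short gap stores far more energy
```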
== See also ==
Power supply
Cascade converter
Motor-generator
Resonant converter
Rotary converter
== References ==
Abraham I. Pressman (1997). Switching Power Supply Design. McGraw-Hill. ISBN 0-07-052236-7.
Ned Mohan, Tore M. Undeland, William P. Robbins (2002). Power Electronics: Converters, Applications, and Design. Wiley. ISBN 0-471-22693-9.
Fang Lin Luo, Hong Ye, Muhammad H. Rashid (2005). Digital Power Electronics and Applications. Elsevier. ISBN 0-12-088757-6.
Fang Lin Luo, Hong Ye (2004). Advanced DC/DC Converters. CRC Press. ISBN 0-8493-1956-0.
Mingliang Liu (2006). Demystifying Switched-Capacitor Circuits. Elsevier. ISBN 0-7506-7907-7.
== External links ==
A general description of DC-DC converters
U.S. based 50 Hz, 60 Hz, and 400 Hz frequency converter manufacturer Archived 2020-02-21 at the Wayback Machine
GlobTek, Inc. Glossary of electric power supply and power conversion terms | Wikipedia/Electric_power_conversion |
The Model V was among the early electromechanical general purpose computers, designed by George Stibitz and built by Bell Telephone Laboratories, operational in 1946.
Only two machines were built: the first was installed at the National Advisory Committee for Aeronautics (NACA, later NASA), the second (1947) at the US Army's Ballistic Research Laboratory (BRL).
== Construction ==
Design was started in 1944. The tape-controlled (Harvard architecture) machine had two (design allowed for a total of six) processors ("computers") that could operate independently, an early form of multiprocessing.
The Model V weighed about 10 short tons (9.1 t).
== Significance ==
Inspired Richard Hamming to investigate automatic error correction, which led to the invention of Hamming codes
One of the early electromechanical general purpose computers
First American machine and first George Stibitz design to use floating-point arithmetic
Had an early form of multiprocessing
Had a very primitive form of an operating system, albeit in hardware. A separate hardware control unit existed to direct the sequence of computer operations.
== Model VI ==
Built and used internally by Bell Telephone Laboratories, operational in 1949.
Simplified version of the Model V (only one processor, about half the relays) but with several improvements, including one of the earliest uses of microcode.
== Bibliography ==
Research, United States Office of Naval (1953). A survey of automatic digital computers. Models V and VI. Office of Naval Research, Dept. of the Navy. pp. 9–10 (in reader: 15–16).
"The relay computers at Bell Labs : those were the machines, part 2". Datamation. The relay computers at Bell Labs : those were the machines, parts 1 and 2 | 102724647 | Computer History Museum. part 2: pp. 47, 49. May 1967.
Irvine, M. M. (July 2001). "Early digital computers at Bell Telephone Laboratories". IEEE Annals of the History of Computing. 23 (3): 25–27. doi:10.1109/85.948904. ISSN 1058-6180. pdf
Kaisler, Stephen H. (2016). "Chapter Three: Stibitz's Relay Computers". Birthing the Computer: From Relays to Vacuum Tubes. Cambridge Scholars Publishing. pp. 35–37. ISBN 9781443896313.
"Г. – Bell Labs – Model V" [G. – Bell Labs – Model V]. oplib.ru (in Russian). Archived from the original on September 29, 2022. Retrieved 2017-10-11.
== Further reading ==
Alt, Franz L. (1948). "A Bell Telephone Laboratories' computing machine. I". Mathematics of Computation. 3 (21): 1–13. doi:10.1090/S0025-5718-1948-0023118-1. ISSN 0025-5718.
Alt, Franz L. (1948). "A Bell Telephone Laboratories' computing machine. II". Mathematics of Computation. 3 (22): 69–84. doi:10.1090/S0025-5718-1948-0025271-2. ISSN 0025-5718.
Tomash, Erwin (2008). "The Erwin Tomash Library on the History of Computing: An Annotated and Illustrated Catalog". www.cbi.umn.edu. CBI Hosted Publications. Includes a drawing of the Model V (Chapter A, pp. 36–37). Retrieved 2018-05-08.
Andrews, Ernest G. (1949-09-01). "The Bell Computer, Model VI". Proceedings of a Second Symposium on Large-scale Digital Calculating Machinery: 20–31 (58–69).
"Bell Laboratories Digital Computers". Bell Laboratories Record. www.americanradiohistory.com. XXXV (3): 81–84. Mar 1957.
Ceruzzi, Paul E. (1983). "4. Number, Please - Computers at Bell Labs". Reckoners: The Prehistory of the Digital Computer, from Relays to the Stored Program Concept, 1935-1945. Greenwood Publishing Group, Incorporated. pp. 95–99. ISBN 9780313233821.
Bullynck, Maarten (2015). "3. Bell Model V Calculator: Tapes and Controls". Programming men and machines. Changing organisation in the artillery computations at Aberdeen Proving Ground (1916-1946). pp. 9–12.
== References ==
== External links ==
"Control Panel, Bell Telephone Laboratories Model 5 Computer". National Museum of American History. | Wikipedia/Model_V |
Telegraphy is the long-distance transmission of messages where the sender uses symbolic codes, known to the recipient, rather than a physical exchange of an object bearing the message. Thus flag semaphore is a method of telegraphy, whereas pigeon post is not. Ancient signalling systems, although sometimes quite extensive and sophisticated as in China, were generally not capable of transmitting arbitrary text messages. Possible messages were fixed and predetermined, so such systems are thus not true telegraphs.
The earliest true telegraph put into widespread use was the Chappe telegraph, an optical telegraph invented by Claude Chappe in the late 18th century. The system was used extensively in France, and European nations occupied by France, during the Napoleonic era. The electric telegraph started to replace the optical telegraph in the mid-19th century. It was first taken up in Britain in the form of the Cooke and Wheatstone telegraph, initially used mostly as an aid to railway signalling. This was quickly followed by a different system developed in the United States by Samuel Morse. The electric telegraph was slower to develop in France due to the established optical telegraph system, but an electrical telegraph was put into use with a code compatible with the Chappe optical telegraph. The Morse system was adopted as the international standard in 1865, using a modified Morse code developed in Germany in 1848.
The heliograph is a telegraph system using reflected sunlight for signalling. It was mainly used in areas where the electrical telegraph had not been established and generally used the same code. The most extensive heliograph network established was in Arizona and New Mexico during the Apache Wars. The heliograph was standard military equipment as late as World War II. Wireless telegraphy developed in the early 20th century became important for maritime use, and was a competitor to electrical telegraphy using submarine telegraph cables in international communications.
Telegrams became a popular means of sending messages once telegraph prices had fallen sufficiently. Traffic became high enough to spur the development of automated systems—teleprinters and punched tape transmission. These systems led to new telegraph codes, starting with the Baudot code. However, telegrams were never able to compete with the letter post on price, and competition from the telephone, which removed their speed advantage, drove the telegraph into decline from 1920 onwards. The few remaining telegraph applications were largely taken over by alternatives on the internet towards the end of the 20th century.
== Terminology ==
The word telegraph (from Ancient Greek: τῆλε (têle) 'at a distance' and γράφειν (gráphein) 'to write') was coined by the French inventor of the semaphore telegraph, Claude Chappe, who also coined the word semaphore.
A telegraph is a device for transmitting and receiving messages over long distances, i.e., for telegraphy. The word telegraph alone generally refers to an electrical telegraph. Wireless telegraphy is transmission of messages over radio with telegraphic codes.
Contrary to the extensive definition used by Chappe, Morse argued that the term telegraph can strictly be applied only to systems that transmit and record messages at a distance. This is to be distinguished from semaphore, which merely transmits messages. Smoke signals, for instance, are to be considered semaphore, not telegraph. According to Morse, telegraph dates only from 1832 when Pavel Schilling invented one of the earliest electrical telegraphs.
A telegraph message sent by an electrical telegraph operator or telegrapher using Morse code (or a printing telegraph operator using plain text) was known as a telegram. A cablegram was a message sent by a submarine telegraph cable, often shortened to "cable" or "wire". The suffix -gram is derived from ancient Greek: γραμμα (gramma), meaning something written, i.e. telegram means something written at a distance and cablegram means something written via a cable, whereas telegraph implies the process of writing at a distance.
Later, a Telex was a message sent by a Telex network, a switched network of teleprinters similar to a telephone network.
A wirephoto or wire picture was a newspaper picture that was sent from a remote location by a facsimile telegraph. A diplomatic telegram, also known as a diplomatic cable, is a confidential communication between a diplomatic mission and the foreign ministry of its parent country. These continue to be called telegrams or cables regardless of the method used for transmission.
== History ==
=== Early signalling ===
Passing messages by signalling over distance is an ancient practice. One of the oldest examples is the signal towers of the Great Wall of China. By 400 BC, signals could be sent by beacon fires or drum beats, and by 200 BC complex flag signalling had developed. During the Han dynasty (202 BC – 220 AD), signallers mainly used flags and wood fires—via the light of the flames swung high into the air at night, and via dark smoke produced by the addition of wolf dung during the day—to send signals. By the Tang dynasty (618–907) a message could be sent 1,100 kilometres (700 mi) in 24 hours. The Ming dynasty (1368–1644) used artillery as another possible signalling method. While the signalling was complex (for instance, flags of different colours could be used to indicate enemy strength), only predetermined messages could be sent. The Chinese signalling system extended well beyond the Great Wall. Signal towers away from the wall were used to give early warning of an attack. Others were built even further out as part of the protection of trade routes, especially the Silk Road.
Signal fires were widely used in Europe and elsewhere for military purposes. The Roman army made frequent use of them, as did their enemies, and the remains of some of the stations still exist. Few details have been recorded of European/Mediterranean signalling systems and the possible messages. One of the few for which details are known is a system invented by Aeneas Tacticus (4th century BC). Tacticus's system had water filled pots at the two signal stations which were drained in synchronisation. Annotation on a floating scale indicated which message was being sent or received. Signals sent by means of torches indicated when to start and stop draining to keep the synchronisation.
None of the signalling systems discussed above are true telegraphs in the sense of a system that can transmit arbitrary messages over arbitrary distances. Lines of signalling relay stations can send messages to any required distance, but all these systems are limited to one extent or another in the range of messages that they can send. A system like flag semaphore, with an alphabetic code, can certainly send any given message, but the system is designed for short-range communication between two persons. An engine order telegraph, used to send instructions from the bridge of a ship to the engine room, fails to meet both criteria; it has a limited distance and very simple message set. There was only one ancient signalling system described that does meet these criteria. That was a system using the Polybius square to encode an alphabet. Polybius (2nd century BC) suggested using two successive groups of torches to identify the coordinates of the letter of the alphabet being transmitted. The number of torches held up in each group signalled the row and column of the grid square that contained the letter. There is no definite record of the system ever being used, but there are several passages in ancient texts that some think are suggestive. Holzmann and Pehrson, for instance, suggest that Livy is describing its use by Philip V of Macedon in 207 BC during the First Macedonian War. Nothing else that could be described as a true telegraph existed until the 17th century.: 26–29 Possibly the first alphabetic telegraph code in the modern era is due to Franz Kessler, who published his work in 1616. Kessler used a lamp placed inside a barrel with a moveable shutter operated by the signaller. The signals were observed at a distance with the newly invented telescope.: 32–34
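The Polybius scheme lends itself to a short illustration. The sketch below is a hypothetical reconstruction, not a historical implementation: it assumes a 5×5 grid filled with a 25-letter Latin alphabet (I and J sharing a cell), whereas Polybius worked with the Greek alphabet, and the function names are invented for the example.

```python
# Minimal sketch of Polybius-square signalling (illustrative, not historical).
# Each letter maps to a (row, column) pair; a signaller would hold up
# `row` torches in the first group and `column` torches in the second.

ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"  # 25 letters; I and J share a cell

def encode(message: str) -> list[tuple[int, int]]:
    """Return the sequence of (row, column) torch counts for a message."""
    signals = []
    for ch in message.upper().replace("J", "I"):
        if ch not in ALPHABET:
            continue  # skip spaces and punctuation
        idx = ALPHABET.index(ch)
        signals.append((idx // 5 + 1, idx % 5 + 1))  # rows and columns count from 1
    return signals

def decode(signals: list[tuple[int, int]]) -> str:
    """Recover the text from a sequence of (row, column) pairs."""
    return "".join(ALPHABET[(r - 1) * 5 + (c - 1)] for r, c in signals)

if __name__ == "__main__":
    torches = encode("HELP")
    print(torches)          # [(2, 3), (1, 5), (3, 1), (3, 5)]
    print(decode(torches))  # HELP
```

Because any letter can be sent, the scheme meets the "arbitrary message" criterion above, at the cost of several torch displays per letter.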
=== Optical telegraph ===
An optical telegraph is a telegraph consisting of a line of stations in towers or natural high points which signal to each other by means of shutters or paddles. Signalling by means of indicator pointers was called semaphore. Early proposals for an optical telegraph system were made to the Royal Society by Robert Hooke in 1684 and were first implemented on an experimental level by Sir Richard Lovell Edgeworth in 1767. The first successful optical telegraph network was invented by Claude Chappe and operated in France from 1793. The two most extensive systems were Chappe's in France, with branches into neighbouring countries, and the system of Abraham Niclas Edelcrantz in Sweden.: ix–x, 47
During 1790–1795, at the height of the French Revolution, France needed a swift and reliable communication system to thwart the war efforts of its enemies. In 1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. On 2 March 1791, at 11 am, they sent the message "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory) between Brulon and Parce, a distance of 16 kilometres (10 mi). The first means used a combination of black and white panels, clocks, telescopes, and codebooks to send their message.
In 1792, Claude was appointed Ingénieur-Télégraphiste and charged with establishing a line of stations between Paris and Lille, a distance of 230 kilometres (140 mi). It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. A decision to replace the system with an electric telegraph was made in 1846, but it took a decade before it was fully taken out of service. The fall of Sevastopol was reported by Chappe telegraph in 1855.: 92–94
The Prussian system was put into effect in the 1830s. However, such systems were highly dependent on good weather and daylight to work, and even then could accommodate only about two words per minute. The last commercial semaphore link ceased operation in Sweden in 1880. As of 1895, France still operated coastal commercial semaphore telegraph stations for ship-to-shore communication.
=== Electrical telegraph ===
Early ideas for an electric telegraph included a 1753 proposal using electrostatic deflections of pith balls, and proposals for electrochemical telegraphs, signalling by bubbles in acid, by Campillo in 1804 and von Sömmering in 1809. The first experimental system over a substantial distance was by Ronalds in 1816 using an electrostatic generator. Ronalds offered his invention to the British Admiralty, but it was rejected as unnecessary, the existing optical telegraph connecting the Admiralty in London to their main fleet base in Portsmouth being deemed adequate for their purposes. As late as 1844, after the electrical telegraph had come into use, the Admiralty's optical telegraph was still used, although it was accepted that poor weather ruled it out on many days of the year.: 16, 37 France had an extensive optical telegraph system dating from Napoleonic times and was even slower to take up electrical systems.: 217–218
Eventually, electrostatic telegraphs were abandoned in favour of electromagnetic systems. An early experimental system (Schilling, 1832) led to a proposal to establish a telegraph between St Petersburg and Kronstadt, but it was never completed. The first operative electric telegraph (Gauss and Weber, 1833) connected Göttingen Observatory to the Institute of Physics about 1 km away during experimental investigations of the geomagnetic field.
The first commercial telegraph was by Cooke and Wheatstone following their English patent of 10 June 1837. It was demonstrated on the London and Birmingham Railway in July of the same year. In July 1839, a five-needle, five-wire system was installed to provide signalling over a record distance of 21 km on a section of the Great Western Railway between London Paddington station and West Drayton. However, in trying to get railway companies to take up his telegraph more widely for railway signalling, Cooke was rejected several times in favour of the more familiar, but shorter range, steam-powered pneumatic signalling. Even when his telegraph was taken up, it was considered experimental and the company backed out of a plan to finance extending the telegraph line out to Slough. However, this led to a breakthrough for the electric telegraph, as up to this point the Great Western had insisted on exclusive use and refused Cooke permission to open public telegraph offices. Cooke extended the line at his own expense and agreed that the railway could have free use of it in exchange for the right to open it up to the public.: 19–20
Most of the early electrical systems required multiple wires (Ronalds' system was an exception), but the system developed in the United States by Morse and Vail was a single-wire system. This was the system that first used the soon-to-become-ubiquitous Morse code. By 1844, the Morse system connected Baltimore to Washington, and by 1861 the west coast of the continent was connected to the east coast. The Cooke and Wheatstone telegraph, in a series of improvements, also ended up with a one-wire system, but still using their own code and needle displays.
The electric telegraph quickly became a means of more general communication. The Morse system was officially adopted as the standard for continental European telegraphy in 1851 with a revised code, which later became the basis of International Morse Code. However, Great Britain and the British Empire continued to use the Cooke and Wheatstone system, in some places as late as the 1930s. Likewise, the United States continued to use American Morse code internally, requiring translation operators skilled in both codes for international messages.
=== Railway telegraphy ===
Railway signal telegraphy was developed in Britain from the 1840s onward. It was used to manage railway traffic and to prevent accidents as part of the railway signalling system. On 12 June 1837 Cooke and Wheatstone were awarded a patent for an electric telegraph. This was demonstrated between Euston railway station—where Wheatstone was located—and the engine house at Camden Town—where Cooke was stationed, together with Robert Stephenson, the London and Birmingham Railway line's chief engineer. The messages were for the operation of the rope-haulage system for pulling trains up the 1 in 77 bank. The world's first permanent railway telegraph was completed in July 1839 between London Paddington and West Drayton on the Great Western Railway with an electric telegraph using a four-needle system.
The concept of a signalling "block" system was proposed by Cooke in 1842. Railway signal telegraphy did not change in essence from Cooke's initial concept for more than a century. In this system each line of railway was divided into sections or blocks of varying length. Entry to and exit from the block was to be authorised by electric telegraph and signalled by the line-side semaphore signals, so that only a single train could occupy the rails. In Cooke's original system, a single-needle telegraph was adapted to indicate just two messages: "Line Clear" and "Line Blocked". The signaller would adjust his line-side signals accordingly. As first implemented in 1844 each station had as many needles as there were stations on the line, giving a complete picture of the traffic. As lines expanded, a sequence of pairs of single-needle instruments were adopted, one pair for each block in each direction.
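The logic of block working can be captured in a few lines. The sketch below is only an illustration of the principle described above; the class, method names, and messages are invented for the example and are not drawn from any railway rulebook. A train is admitted to a block only while its instrument shows "Line Clear", and the block is released when the far signaller reports the train out.

```python
# Illustrative model of absolute-block working: at most one train per block.
# Names and structure are invented for this example.

LINE_CLEAR = "Line Clear"
LINE_BLOCKED = "Line Blocked"

class Block:
    def __init__(self, name: str):
        self.name = name
        self.indication = LINE_CLEAR  # state of the single-needle instrument

    def request_entry(self) -> bool:
        """Signaller asks to admit a train; granted only if the block is clear."""
        if self.indication == LINE_CLEAR:
            self.indication = LINE_BLOCKED
            return True
        return False

    def report_exit(self) -> None:
        """Signaller at the far end reports the train has left the block."""
        self.indication = LINE_CLEAR

block = Block("Paddington to West Drayton")
print(block.request_entry())  # True:  first train admitted
print(block.request_entry())  # False: a second train must wait
block.report_exit()
print(block.request_entry())  # True:  the block is clear again
```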
=== Wigwag ===
Wigwag is a form of flag signalling using a single flag. Unlike most forms of flag signalling, which are used over relatively short distances, wigwag is designed to maximise the distance covered—up to 32 km (20 mi) in some cases. Wigwag achieved this by using a large flag—a single flag can be held with both hands unlike flag semaphore which has a flag in each hand—and using motions rather than positions as its symbols since motions are more easily seen. It was invented by US Army surgeon Albert J. Myer in the 1850s who later became the first head of the Signal Corps. Wigwag was used extensively during the American Civil War where it filled a gap left by the electrical telegraph. Although the electrical telegraph had been in use for more than a decade, the network did not yet reach everywhere and portable, ruggedized equipment suitable for military use was not immediately available. Permanent or semi-permanent stations were established during the war, some of them towers of enormous height and the system was extensive enough to be described as a communications network.
=== Heliograph ===
A heliograph is a telegraph that transmits messages by flashing sunlight with a mirror, usually using Morse code. The idea for a telegraph of this type was first proposed as a modification of surveying equipment (Gauss, 1821). Various uses of mirrors were made for communication in the following years, mostly for military purposes, but the first device to become widely used was a heliograph with a moveable mirror (Mance, 1869). The system was used by the French during the 1870–71 siege of Paris, with night-time signalling using kerosene lamps as the source of light. An improved version (Begbie, 1870) was used by British military in many colonial wars, including the Anglo-Zulu War (1879). At some point, a morse key was added to the apparatus to give the operator the same degree of control as in the electric telegraph.
Another type of heliograph was the heliostat or heliotrope fitted with a Colomb shutter. The heliostat was essentially a surveying instrument with a fixed mirror and so could not transmit a code by itself. The term heliostat is sometimes used as a synonym for heliograph because of this origin. The Colomb shutter (Bolton and Colomb, 1862) was originally invented to enable the transmission of morse code by signal lamp between Royal Navy ships at sea.
The heliograph was heavily used by Nelson A. Miles in Arizona and New Mexico after he took over command (1886) of the fight against Geronimo and other Apache bands in the Apache Wars. Miles had previously set up the first heliograph line in the US between Fort Keogh and Fort Custer in Montana. He used the heliograph to fill in vast, thinly populated areas that were not covered by the electric telegraph. Twenty-six stations covered an area 320 by 480 km (200 by 300 mi). In a test of the system, a message was relayed 640 km (400 mi) in four hours. Miles' enemies used smoke signals and flashes of sunlight from metal, but lacked a sophisticated telegraph code. The heliograph was ideal for use in the American Southwest due to its clear air and mountainous terrain on which stations could be located. It was found necessary to lengthen the morse dash (which is much shorter in American Morse code than in the modern International Morse code) to aid differentiating from the morse dot.
Use of the heliograph declined from 1915 onwards, but remained in service in Britain and British Commonwealth countries for some time. Australian forces used the heliograph as late as 1942 in the Western Desert Campaign of World War II. Some form of heliograph was used by the mujahideen in the Soviet–Afghan War (1979–1989).
=== Teleprinter ===
A teleprinter is a telegraph machine that can send messages from a typewriter-like keyboard and print incoming messages in readable text with no need for the operators to be trained in the telegraph code used on the line. It developed from various earlier printing telegraphs and resulted in improved transmission speeds. The Morse telegraph (1837) was originally conceived as a system marking indentations on paper tape. A chemical telegraph making blue marks improved the speed of recording (Bain, 1846), but was delayed by a patent challenge from Morse. The first true printing telegraph (that is printing in plain text) used a spinning wheel of types in the manner of a daisy wheel printer (House, 1846, improved by Hughes, 1855). The system was adopted by Western Union.
Early teleprinters used the Baudot code, a five-bit sequential binary code. This was a telegraph code developed for use on the French telegraph using a five-key keyboard (Baudot, 1874). Teleprinters generated the same code from a full alphanumeric keyboard. A feature of the Baudot code, and subsequent telegraph codes, was that, unlike Morse code, every character has a code of the same length, making it more machine-friendly. The Baudot code was used on the earliest ticker tape machines (Calahan, 1867), a system for mass-distributing information on the current prices of publicly listed companies.
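The machine-friendliness of a fixed-length code is easy to demonstrate. The fragment below uses an invented five-bit assignment, not the real Baudot or ITA2 tables (which also use letter/figure shift codes); it only shows that when every character occupies exactly five bits, a receiver can frame the stream mechanically, without the variable-length timing that Morse code requires.

```python
# Sketch of a fixed-length five-bit character code (illustrative assignment,
# not the historical Baudot or ITA2 alphabets).

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 27 symbols fit easily in 2**5 = 32 codes

def encode(text: str) -> str:
    """Return the bit stream: exactly five bits per character."""
    return "".join(format(LETTERS.index(ch), "05b") for ch in text.upper())

def decode(bits: str) -> str:
    """Split the stream into fixed five-bit frames and look each one up."""
    frames = (bits[i:i + 5] for i in range(0, len(bits), 5))
    return "".join(LETTERS[int(frame, 2)] for frame in frames)

stream = encode("BAUDOT")
print(stream)          # 30 bits: six characters, five bits each
print(decode(stream))  # BAUDOT
```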
=== Automated punched-tape transmission ===
In a punched-tape system, the message is first typed onto punched tape using the code of the telegraph system—Morse code for instance. It is then, either immediately or at some later time, run through a transmission machine which sends the message to the telegraph network. Multiple messages can be sequentially recorded on the same run of tape. The advantage of doing this is that messages can be sent at a steady, fast rate making maximum use of the available telegraph lines. The economic advantage of doing this is greatest on long, busy routes where the cost of the extra step of preparing the tape is outweighed by the cost of providing more telegraph lines. The first machine to use punched tape was Bain's teleprinter (Bain, 1843), but the system saw only limited use. Later versions of Bain's system achieved speeds up to 1000 words per minute, far faster than a human operator could achieve.
The first widely used system (Wheatstone, 1858) was first put into service with the British General Post Office in 1867. A novel feature of the Wheatstone system was the use of bipolar encoding. That is, both positive and negative polarity voltages were used. Bipolar encoding has several advantages, one of which is that it permits duplex communication. The Wheatstone tape reader was capable of a speed of 400 words per minute.: 190
=== Oceanic telegraph cables ===
A worldwide communication network meant that telegraph cables would have to be laid across oceans. On land cables could be run uninsulated suspended from poles. Underwater, a good insulator that was both flexible and capable of resisting the ingress of seawater was required. A solution presented itself with gutta-percha, a natural rubber from the Palaquium gutta tree, after William Montgomerie sent samples to London from Singapore in 1843. The new material was tested by Michael Faraday and in 1845 Wheatstone suggested that it should be used on the cable planned between Dover and Calais by John Watkins Brett. The idea was proved viable when the South Eastern Railway company successfully tested a three-kilometre (two-mile) gutta-percha insulated cable with telegraph messages to a ship off the coast of Folkestone. The cable to France was laid in 1850 but was almost immediately severed by a French fishing vessel. It was relaid the next year and connections to Ireland and the Low Countries soon followed.
Getting a cable across the Atlantic Ocean proved much more difficult. The Atlantic Telegraph Company, formed in London in 1856, had several failed attempts. A cable laid in 1858 worked poorly for a few days, sometimes taking all day to send a message despite the use of the highly sensitive mirror galvanometer developed by William Thomson (the future Lord Kelvin) before being destroyed by applying too high a voltage. Its failure and slow speed of transmission prompted Thomson and Oliver Heaviside to find better mathematical descriptions of long transmission lines. The company finally succeeded in 1866 with an improved cable laid by SS Great Eastern, the largest ship of its day, designed by Isambard Kingdom Brunel.
An overland telegraph from Britain to India was first connected in 1866 but was unreliable so a submarine telegraph cable was connected in 1870. Several telegraph companies were combined to form the Eastern Telegraph Company in 1872. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin.
From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable-laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted while it was able to quickly cut Germany's cables worldwide.
=== Facsimile ===
In 1843, Scottish inventor Alexander Bain invented a device that could be considered the first facsimile machine. He called his invention a "recording telegraph". Bain's telegraph was able to transmit images by electrical wires. Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. In 1855, an Italian priest, Giovanni Caselli, also created an electric telegraph that could transmit images. Caselli called his invention "Pantelegraph". Pantelegraph was successfully tested and approved for a telegraph line between Paris and Lyon.
In 1881, the English inventor Shelford Bidwell constructed the scanning phototelegraph, the first telefax machine able to scan any two-dimensional original without requiring manual plotting or drawing. Around 1900, the German physicist Arthur Korn invented the Bildtelegraph, which became widespread in continental Europe, especially after a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, and remained in use until the wider spread of the radiofax. Its main competitors were first the Bélinographe of Édouard Belin and then, from the 1930s, the Hellschreiber, invented in 1929 by the German inventor Rudolf Hell, a pioneer of mechanical image scanning and transmission.
=== Wireless telegraphy ===
The late 1880s through to the 1890s saw the discovery and then development of a newly understood phenomenon into a form of wireless telegraphy, called Hertzian wave wireless telegraphy, radiotelegraphy, or (later) simply "radio". Between 1886 and 1888, Heinrich Rudolf Hertz published the results of his experiments where he was able to transmit electromagnetic waves (radio waves) through the air, proving James Clerk Maxwell's 1873 theory of electromagnetic radiation. Many scientists and inventors experimented with this new phenomenon but the consensus was that these new waves (similar to light) would be just as short range as light, and, therefore, useless for long range communication.
At the end of 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a commercial wireless telegraphy system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Building on the ideas of previous scientists and inventors Marconi re-engineered their apparatus by trial and error attempting to build a radio-based wireless telegraphic system that would function the same as wired telegraphy. He would work on the system through 1895 in his lab and then in field tests making improvements to extend its range. After many breakthroughs, including applying the wired telegraphy concept of grounding the transmitter and receiver, Marconi was able, by early 1896, to transmit radio far beyond the short ranges that had been predicted. Having failed to interest the Italian government, the 22-year-old inventor brought his telegraphy system to Britain in 1896 and met William Preece, a Welshman, who was a major figure in the field and Chief Engineer of the General Post Office. A series of demonstrations for the British government followed—by March 1897, Marconi had transmitted Morse code signals over a distance of about 6 km (3.5 mi) across Salisbury Plain.
On 13 May 1897, Marconi, assisted by George Kemp, a Cardiff Post Office engineer, transmitted the first wireless signals over water to Lavernock (near Penarth in Wales) from Flat Holm. His star rising, he was soon sending signals across the English Channel (1899), from shore to ship (1899) and finally across the Atlantic (1901). A study of these demonstrations of radio, with scientists trying to work out how a phenomenon predicted to have a short range could transmit "over the horizon", led to the discovery of a radio reflecting layer in the Earth's atmosphere in 1902, later called the ionosphere.
Radiotelegraphy proved effective for rescue work in sea disasters by enabling effective communication between ships and from ship to shore. In 1904, Marconi began the first commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907. Notably, Marconi's apparatus was used to help rescue efforts after the sinking of RMS Titanic. Britain's postmaster-general summed up, referring to the Titanic disaster, "Those who have been saved, have been saved through one man, Mr. Marconi...and his marvellous invention."
==== Non-radio wireless telegraphy ====
The successful development of radiotelegraphy was preceded by a 50-year history of ingenious but ultimately unsuccessful experiments by inventors to achieve wireless telegraphy by other means.
===== Ground, water, and air conduction =====
Several wireless electrical signaling schemes based on the (sometimes erroneous) idea that electric currents could be conducted long-range through water, ground, and air were investigated for telegraphy before practical radio systems became available.
The original telegraph lines used two wires between the two stations to form a complete electrical circuit or "loop". In 1837, however, Carl August von Steinheil of Munich, Germany, found that by connecting one leg of the apparatus at each station to metal plates buried in the ground, he could eliminate one wire and use a single wire for telegraphic communication. This led to speculation that it might be possible to eliminate both wires and therefore transmit telegraph signals through the ground without any wires connecting the stations. Other attempts were made to send the electric current through bodies of water, to span rivers, for example. Prominent experimenters along these lines included Samuel F. B. Morse in the United States and James Bowman Lindsay in Great Britain, who in August 1854, was able to demonstrate transmission across a mill dam at a distance of 500 yards (457 metres).
US inventors William Henry Ward (1871) and Mahlon Loomis (1872) developed electrical conduction systems based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. They thought that atmospheric currents, combined with a return path using "Earth currents", would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto-electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile.
In the 1890s the inventor Nikola Tesla worked on an air and ground conduction wireless electric power transmission system, similar to Loomis', which he planned would also carry wireless telegraphy. Tesla's experiments had led him to conclude, incorrectly, that he could use the entire globe of the Earth to conduct electrical energy; his 1901 large-scale application of these ideas, a high-voltage wireless power station now called Wardenclyffe Tower, lost funding and was abandoned after a few years.
Telegraphic communication using earth conductivity was eventually found to be limited to impractically short distances, as was communication conducted through water, or between trenches during World War I.
===== Electrostatic and electromagnetic induction =====
Both electrostatic and electromagnetic induction were used to develop wireless telegraph systems that saw limited commercial application. In the United States, Thomas Edison, in the mid-1880s, patented an electromagnetic induction system he called "grasshopper telegraphy", which allowed telegraphic signals to jump the short distance between a running train and telegraph wires running parallel to the tracks. This system was successful technically but not economically, as there turned out to be little interest by train travelers in the use of an on-board telegraph service. During the Great Blizzard of 1888, this system was used to send and receive wireless messages from trains buried in snowdrifts. The disabled trains were able to maintain communications via their Edison induction wireless telegraph systems, perhaps the first successful use of wireless telegraphy to send distress calls. Edison would also help to patent a ship-to-shore communication system based on electrostatic induction.
The most successful creator of an electromagnetic induction telegraph system was William Preece, chief engineer of Post Office Telegraphs of the General Post Office (GPO) in the United Kingdom. Preece first noticed the effect in 1884 when overhead telegraph wires in Grays Inn Road were accidentally carrying messages sent on buried cables. Tests in Newcastle succeeded in sending a quarter of a mile using parallel rectangles of wire.: 243 In tests across the Bristol Channel in 1892, Preece was able to telegraph across gaps of about 5 kilometres (3.1 miles). However, his induction system required extensive lengths of antenna wires, many kilometers long, at both the sending and receiving ends. The length of those sending and receiving wires needed to be about the same length as the width of the water or land to be spanned. For example, for Preece's station to span the English Channel from Dover, England, to the coast of France would require sending and receiving wires of about 30 miles (48 kilometres) along the two coasts. These facts made the system impractical on ships, boats, and ordinary islands, which are much smaller than Great Britain or Greenland. Also, the relatively short distances that a practical Preece system could span meant that it had few advantages over underwater telegraph cables.
=== Telegram services ===
A telegram service is a company or public entity that delivers telegraphed messages directly to the recipient. Telegram services were not inaugurated until electric telegraphy became available. Earlier optical systems were largely limited to official government and military purposes.
Historically, telegrams were sent between a network of interconnected telegraph offices. A person visiting a local telegraph office paid by the word to have a message telegraphed to another office and delivered to the addressee on a paper form.: 276 Messages (i.e. telegrams) sent by telegraph could be delivered by telegraph messenger faster than mail, and even in the telephone age, the telegram remained popular for social and business correspondence. At their peak in 1929, an estimated 200 million telegrams were sent.: 274
In 1919, the Central Bureau for Registered Addresses was established in the financial district of New York City. The bureau was created to ease the growing problem of messages being delivered to the wrong recipients. To combat this issue, the bureau offered telegraph customers the option to register unique code names for their telegraph addresses. Customers were charged $2.50 per year per code. By 1934, 28,000 codes had been registered.
Telegram services still operate in much of the world (see worldwide use of telegrams by country), but e-mail and text messaging have rendered telegrams obsolete in many countries, and the number of telegrams sent annually has been declining rapidly since the 1980s. Where telegram services still exist, the transmission method between offices is no longer by telegraph, but by telex or IP link.
==== Telegram length ====
As telegrams have been traditionally charged by the word, messages were often abbreviated to pack information into the smallest possible number of words, in what came to be called "telegram style".
The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer. According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters. For German telegrams, the mean length is 11.5 words or 72.4 characters. At the end of the 19th century, the average length of a German telegram was calculated as 14.2 words.
=== Telex ===
Telex (telegraph exchange) was a public switched network of teleprinters. It used rotary-telephone-style pulse dialling for automatic routing through the network. It initially used the Baudot code for messages. Telex development began in Germany in 1926, becoming an operational service in 1933 run by the Reichspost (the German imperial postal service). It had a speed of 50 baud—approximately 66 words per minute. Up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Telex was introduced into Canada in July 1957, and the United States in 1958. A new code, ASCII, was introduced in 1963 by the American Standards Association. ASCII was a seven-bit code and could thus support a larger number of characters than Baudot. In particular, ASCII supported upper and lower case whereas Baudot was upper case only.
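The quoted figure of roughly 66 words per minute follows from the 50 baud line rate under common framing assumptions. The calculation below is a back-of-the-envelope sketch, not taken from the text: it assumes each five-bit character is framed by one start bit and 1.5 stop bits (7.5 signal elements in all) and uses the telegraphers' convention of six characters per word including the space; different framing assumptions give slightly different figures.

```python
# Back-of-the-envelope check of 50 baud ~ 66 words per minute.
# Assumptions (not from the article): 1 start + 5 data + 1.5 stop bits per
# character, and 6 characters per word (telegraph convention).

baud = 50                                      # signal elements per second
elements_per_char = 1 + 5 + 1.5                # start + data + stop
chars_per_second = baud / elements_per_char    # about 6.67
words_per_minute = chars_per_second * 60 / 6   # about 66.7
print(round(words_per_minute, 1))              # 66.7
```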
=== Decline ===
Telegraph use began to permanently decline around 1920.: 248 The decline began with the growth of the use of the telephone.: 253 Ironically, the invention of the telephone grew out of the development of the harmonic telegraph, a device which was supposed to increase the efficiency of telegraph transmission and improve the profits of telegraph companies. Western Union gave up its patent battle with Alexander Graham Bell because it believed the telephone was not a threat to its telegraph business. The Bell Telephone Company was formed in 1877 and had 230 subscribers which grew to 30,000 by 1880. By 1886 there were a quarter of a million phones worldwide,: 276–277 and nearly 2 million by 1900.: 204 The decline was briefly postponed by the rise of special occasion congratulatory telegrams. Traffic continued to grow between 1867 and 1893 despite the introduction of the telephone in this period,: 274 but by 1900 the telegraph was definitely in decline.: 277
There was a brief resurgence in telegraphy during World War I but the decline continued as the world entered the Great Depression years of the 1930s.: 277 After the Second World War new technology improved communication in the telegraph industry. Telegraph lines continued to be an important means of distributing news feeds from news agencies by teleprinter machine until the rise of the internet in the 1990s. For Western Union, one service remained highly profitable—the wire transfer of money. This service kept Western Union in business long after the telegraph had ceased to be important.: 277 In the modern era, the telegraph that began in 1837 has been gradually replaced by digital data transmission based on computer information systems.
== Social implications ==
Optical telegraph lines were installed by governments, often for a military purpose, and reserved for official use only. In many countries, this situation continued after the introduction of the electric telegraph. Starting in Germany and the UK, electric telegraph lines were installed by railway companies. Railway use quickly led to private telegraph companies in the UK and the US offering a telegraph service to the public using telegraph along railway lines. The availability of this new form of communication brought on widespread social and economic changes.
The electric telegraph freed communication from the time constraints of postal mail and revolutionized the global economy and society. By the end of the 19th century, the telegraph was becoming an increasingly common medium of communication for ordinary people. The telegraph isolated the message (information) from the physical movement of objects or the process.
There was some fear of the new technology. According to author Allan J. Kimmel, some people "feared that the telegraph would erode the quality of public discourse through the transmission of irrelevant, context-free information." Henry David Thoreau thought of the Transatlantic cable "...perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough." Kimmel says these fears anticipate many of the characteristics of the modern internet age.
Initially, the telegraph was expensive, but it had an enormous effect on three industries: finance, newspapers, and railways. Telegraphy facilitated the growth of organizations "in the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms". In the US, there were 200 to 300 stock exchanges before the telegraph, but most of these were unnecessary and unprofitable once the telegraph made financial transactions at a distance easy and drove down transaction costs.: 274–75 This immense growth in the business sectors influenced society to embrace the use of telegrams once the cost had fallen.
Worldwide telegraphy changed the gathering of information for news reporting. Journalists were using the telegraph for war reporting as early as 1846 when the Mexican–American War broke out. News agencies were formed, such as the Associated Press, for the purpose of reporting news by telegraph.: 274–75 Messages and information would now travel far and wide, and the telegraph demanded a language "stripped of the local, the regional; and colloquial", to better facilitate a worldwide media language. Media language had to be standardized, which led to the gradual disappearance of different forms of speech and styles of journalism and storytelling.
The spread of the railways created a need for an accurate standard time to replace local standards based on local noon. The means of achieving this synchronisation was the telegraph. This emphasis on precise time has led to major societal changes such as the concept of the time value of money.: 273–74
During the telegraph era there was widespread employment of women in telegraphy. The shortage of men to work as telegraph operators in the American Civil War opened up the opportunity for women of a well-paid skilled job.: 274 In the UK, there was widespread employment of women as telegraph operators even earlier – from the 1850s by all the major companies. The attraction of women for the telegraph companies was that they could pay them less than men. Nevertheless, the jobs were popular with women for the same reason as in the US; most other work available for women was very poorly paid.: 77 : 85
The economic impact of the telegraph was not much studied by economic historians until parallels started to be drawn with the rise of the internet. In fact, the electric telegraph was as important as the invention of printing in this respect. According to economist Ronnie J. Phillips, the reason for this may be that institutional economists paid more attention to advances that required greater capital investment. The investment required to build railways, for instance, is orders of magnitude greater than that for the telegraph.: 269–70
== In popular culture ==
The optical telegraph was quickly forgotten once it went out of service. While it was in operation, it was very familiar to the public across Europe. Examples appear in many paintings of the period. Poems include "Le Telégraphe" by Victor Hugo, and the collection Telegrafen: Optisk kalender för 1858 by Elias Sehlstedt is dedicated to the telegraph. In novels, the telegraph is a major component in Lucien Leuwen by Stendhal, and it features in The Count of Monte Cristo, by Alexandre Dumas.: vii–ix Joseph Chudy's 1796 opera, Der Telegraph oder die Fernschreibmaschine, was written to publicise Chudy's telegraph (a binary code with five lamps) when it became clear that Chappe's design was being taken up.: 42–43
Rudyard Kipling wrote a poem in praise of submarine telegraph cables; "And a new Word runs between: whispering, 'Let us be one!'" Kipling's poem represented a widespread idea in the late nineteenth century that international telegraphy (and new technology in general) would bring peace and mutual understanding to the world. When a submarine telegraph cable first connected America and Britain, the New York Post declared:
It is the harbinger of an age when international difficulties will not have time to ripen into bloody results, and when, in spite of the fatuity and perverseness of rulers, war will be impossible.
=== Newspaper names ===
Numerous newspapers and news outlets in various countries, such as The Daily Telegraph in Britain, The Telegraph in India, De Telegraaf in the Netherlands, and the Jewish Telegraphic Agency in the US, were given names which include the word "telegraph" due to their having received news by means of electric telegraphy. Some of these names are retained even though different means of news acquisition are now used.
== See also ==
Familygram
First transcontinental telegraph
Globotype
Radiogram
Telecommunications
== References ==
== Further reading ==
== External links ==
"Telegraph" . Encyclopædia Britannica (11th ed.). 1911.
Telegraph at the Encyclopædia Britannica
The Porthcurno Telegraph Museum (Archived 27 September 2013 at the Wayback Machine)—The biggest telegraph station in the world, now a museum
Distant Writing—The History of the Telegraph Companies in Britain between 1838 and 1868
Western Union Telegraph Company Records, 1820–1995—Archives Center, National Museum of American History, Smithsonian Institution.
Early telegraphy and fax engineering, still operable in a German computer museum (Archived 20 April 2012 at the Wayback Machine)
"Telegram Falls Silent Stop Era Ends Stop", The New York Times, 6 February 2006
International Facilities of the American Carriers—an overview of the U.S. international cable network in 1950
Elizabeth Bruton: "Communication Technology", in the 1914-1918-online. International Encyclopedia of the First World War | Wikipedia/Telegraphy |
Applied Physics Letters is a weekly peer-reviewed scientific journal that is published by the American Institute of Physics. Its focus is rapid publication and dissemination of new experimental and theoretical papers regarding applications of physics in all disciplines of science, engineering, and modern technology. Additionally, there is an emphasis on fundamental and new developments which lay the groundwork for fields that are rapidly evolving.
The journal was established in 1962. The editor-in-chief is physicist Maria Antonietta Loi of the University of Groningen.
== Abstracting and indexing ==
This journal is indexed in the following databases:
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
Science Citation Index Expanded
According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.5.
== References ==
== External links ==
Official website | Wikipedia/Applied_Physics_Letters |