Designed to Sell is an American HGTV reality television show produced by Pie Town Productions in Los Angeles and Chicago and Edelman Productions in Washington, D.C., and Atlanta. Each 30-minute episode focuses on fixing up a home that is about to go on the market or that has been on the market but has not attracted buyers. The show began airing in 2004 and was canceled in 2011.
The show provides expert real estate and design advice and general contractors, who are given a $2,000 budget for materials with the goal of getting the maximum offer for the house. To add excitement, the renovations generally take place over three to seven days of work before the home's open house, spread out over the course of three or four weeks. The show pays the contractor's fees and the salaries of the carpenters, landscapers, painters, plumbers, and other workers. Most changes are cosmetic, but some require drastic demolition and reconstruction.
== Description ==
Each show follows the same general format:
Homeowner introduction
Recognized problems
Professional real estate agent appraisal
Redesign plan
Demolition, if any
Construction, painting, etc.
Review of changes
Budget breakdown
Home staging
Open house
Result
At the beginning of each episode, the homeowners are introduced and explain why they are selling. The most common reasons are upsizing for growing families, downsizing for empty-nesters, and job transfers. The homeowners may discuss their views on why the property is not selling.
The host and a real estate expert walk through the property while the homeowners watch on closed-circuit television from a neighbor's home. Because the goal is to get the most money for the house in the shortest possible time, they are very direct in their opinion of the home's assets and flaws and recommend three rooms or areas to be re-done. If the property has been neglected or the décor is too eccentric for the most likely home-buyers, their assessments can be brutal.
After the appraisal, the designer reviews the main defects the real estate expert pointed out and describes their plan to fix them. Then the homeowners, the host, the designer, and the laborers get to work, usually explaining to the audience what they are doing. This often includes some demolition and building and/or installation of new features, such as shelves, awnings, etc. While the episode's budget pays for materials, it does not include any carpentry labor costs.
As the date for the open house nears, they often run into unanticipated problems and must work around them or work longer in order to meet the deadline.
After all the changes are complete, the designer reviews the changes with the homeowners and the viewers are shown "before" and "after" views of the improved areas. This is often accompanied by host voiceover.
Afterwards, the host discusses how the $2,000 budget was spent, such as for paint, construction materials, accents, etc. The show normally comes in within $10 of the $2,000 limit.
The homeowners and designer leave before the open house commences. The host stays and asks prospective buyers their opinions of the home, which are almost universally gushing. The host often also asks if the visitors are considering making an offer.
After the open house, the host tells the homeowners the general opinions of the open house viewers and any offers they've already received. The show ends describing how quickly the house was sold. Often the house sells above the asking price. While the asking price is never revealed, the host or homeowners usually disclose how much above the asking price the house sold for.
== Locations ==
Designed to Sell features homes in four real estate markets (and nearby suburbs): Los Angeles, California; Washington, D.C.; Chicago, Illinois; and Atlanta, Georgia. Each location has its own specific host, designer, carpenters, and real estate experts.
The first three seasons of the show took place solely in Los Angeles. Later, more locations were added. Four metro areas were featured regularly, with the Los Angeles team remaining the marquee crew: HGTV regularly ran short segments called "Lisa La Pointers" and "Clive Unleashed" during commercial breaks, and Clive and Lisa were also featured in most ads and promos for the show. Even though the Washington, D.C., Chicago, and Atlanta episodes have their own hosts who talk to the homeowners and help in the transformation of the home, Clive Pearse narrates many of the non-L.A. episodes from off-location. When Clive was unavailable to host in L.A., Michael Johnson or Shane Tallant served as guest host.
=== Los Angeles ===
Host: Clive Pearse
Designer: Lisa LaPorta
Carpenters: Jim Collins (2004–05), Brad Haviland (2004–05), Marcus Hunt (2008), Jason Eslinger, Steve Hanneman, Brooks Utley, Sean Anthony Moran, Deus Xavier Scott and Greg Plitt
Real Estate Experts: Donna Freeman and Shannon Freeman
=== Chicago ===
Host: Michael Johnson
Designer: Monica Pedersen
Carpenters: Robert North, Jeff Alba, Chad Lopez and Lynn Kegan
Real Estate Experts: Kathy Quaid, Brandie Malay and Bethany Souza
Producers: Sarah Patton, Jennifer Bernardi
Production Assistants: Becky McCallum, Melanie Pot, Jerry Goodwin, Leslie Weiner, Robert Dressel
=== Washington, D.C. ===
Host: Shane Tallant
Designer: Taniya Nayak
Carpenters: John Allen (through 2007), Matt Steele (through 2007), Barr Huefner, Lynn Kegan, Simon Ley
Real Estate Experts: Shirley Mattam-Male and Terry Haas
=== Atlanta ===
Host: Rachel Reenstra (through 2010), Chi-Lan Lieu
Designer: John Gidding
Carpenters: David Wint and Chip Wade (now with Gidding on Curb Appeal: The Block)
Real Estate Experts: Heyward Young and Tonya M. Williams
Photography & Seamster: Joshua Mark Thomas
== Designed to Sell: Room by Room ==
In April 2008, Designed to Sell: Room by Room debuted on HGTV's real estate website, FrontDoor.com, with more than 80 video clips and slideshows cut from episode archives. Users can search by room type for home staging ideas and inspiration delivered in short three- to four-minute webisodes and before-and-after slideshows.
== See also ==
Lynn Kegan and Taniya Nayak later starred in Restaurant: Impossible
== References ==
== External links ==
Designed to Sell at IMDb
In mathematical analysis, the Dirac delta function (or δ distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as
{\displaystyle \delta (x)={\begin{cases}0,&x\neq 0\\{\infty },&x=0\end{cases}}}
such that
{\displaystyle \int _{-\infty }^{\infty }\delta (x)dx=1.}
Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions.
The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1. The mathematical rigor of the delta function was disputed until Laurent Schwartz developed the theory of distributions, where it is defined as a linear form acting on functions.
== Motivation and overview ==
The graph of the Dirac delta is usually thought of as following the whole x-axis and the positive y-axis. The Dirac delta is used to model a tall narrow spike function (an impulse), and other similar abstractions such as a point charge, point mass or electron point. For example, to calculate the dynamics of a billiard ball being struck, one can approximate the force of the impact by a Dirac delta. In doing so, one not only simplifies the equations, but one is also able to calculate the motion of the ball by considering only the total impulse of the collision, without a detailed model of all of the elastic energy transfer at subatomic levels (for instance).
To be specific, suppose that a billiard ball is at rest. At time t = 0 it is struck by another ball, imparting it with a momentum P, with units kg⋅m⋅s−1. The exchange of momentum is not actually instantaneous, being mediated by elastic processes at the molecular and subatomic level, but for practical purposes it is convenient to consider that energy transfer as effectively instantaneous. The force therefore is P δ(t); the units of δ(t) are s−1.
To model this situation more rigorously, suppose that the force instead is uniformly distributed over a small time interval Δt = [0, T]. That is,
{\displaystyle F_{\Delta t}(t)={\begin{cases}P/\Delta t&0<t\leq T,\\0&{\text{otherwise}}.\end{cases}}}
Then the momentum at any time t is found by integration:
{\displaystyle p(t)=\int _{0}^{t}F_{\Delta t}(\tau )\,d\tau ={\begin{cases}P&t\geq T\\P\,t/\Delta t&0\leq t\leq T\\0&{\text{otherwise.}}\end{cases}}}
Now, the model situation of an instantaneous transfer of momentum requires taking the limit as Δt → 0, giving a result everywhere except at 0:
{\displaystyle p(t)={\begin{cases}P&t>0\\0&t<0.\end{cases}}}
Here the functions FΔt are thought of as useful approximations to the idea of instantaneous transfer of momentum.
The delta function allows us to construct an idealized limit of these approximations. Unfortunately, the actual limit of the functions (in the sense of pointwise convergence) lim Δt→0+ FΔt is zero everywhere but a single point, where it is infinite. To make proper sense of the Dirac delta, we should instead insist that the property
{\displaystyle \int _{-\infty }^{\infty }F_{\Delta t}(t)\,dt=P,}
which holds for all Δt > 0, should continue to hold in the limit. So, in the equation
{\textstyle F(t)=P\,\delta (t)=\lim _{\Delta t\to 0}F_{\Delta t}(t)}, it is understood that the limit is always taken outside the integral.
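To make the limiting picture concrete, the following short numerical sketch (assuming Python with NumPy; the helper name momentum and the chosen values are purely illustrative, not part of the original discussion) integrates the rectangular force FΔt for shrinking Δt and shows the momentum at a fixed positive time approaching the full impulse P:

import numpy as np

P = 1.0                                   # total impulse, kg·m·s−1 in the billiard-ball example
t = np.linspace(-0.5, 1.0, 3001)

def momentum(t, dt):
    # p(t) = integral of F_Δt from 0 to t for the rectangular force of width dt
    return P * np.clip(t / dt, 0.0, 1.0) * (t > 0)

for dt in (0.5, 0.1, 0.01):
    p = momentum(t, dt)
    print(f"dt={dt:5.2f}  p(0.05) = {p[np.searchsorted(t, 0.05)]:.4f}")
# As dt shrinks, the ramp narrows and p(t) approaches the step: 0 for t < 0 and P for t > 0.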
In applied mathematics, as we have done here, the delta function is often manipulated as a kind of limit (a weak limit) of a sequence of functions, each member of which has a tall spike at the origin: for example, a sequence of Gaussian distributions centered at the origin with variance tending to zero.
The Dirac delta is not truly a function, at least not a usual one with domain and range in real numbers. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0 yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and the integrals of f and g are identical. A rigorous approach to regarding the Dirac delta function as a mathematical object in its own right requires measure theory or the theory of distributions.
== History ==
In physics, the Dirac delta function was popularized by Paul Dirac in his book The Principles of Quantum Mechanics, published in 1930. However, Oliver Heaviside, 35 years before Dirac, described an impulsive function called the Heaviside step function for purposes and with properties analogous to Dirac's work. Even earlier, several mathematicians and physicists used limits of sharply peaked functions in derivations.
An infinitesimal formula for an infinitely tall, unit impulse delta function (infinitesimal version of Cauchy distribution) explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation as did Gustav Kirchhoff somewhat later. Kirchhoff and Hermann von Helmholtz also introduced the unit impulse as a limit of Gaussians, which also corresponded to Lord Kelvin's notion of a point heat source. The Dirac delta function as such was introduced by Paul Dirac in his 1927 paper The Physical Interpretation of the Quantum Dynamics. He called it the "delta function" since he used it as a continuum analogue of the discrete Kronecker delta.
Mathematicians refer to the same concept as a distribution rather than a function.
Joseph Fourier presented what is now called the Fourier integral theorem in his treatise Théorie analytique de la chaleur in the form:
{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ \ d\alpha \,f(\alpha )\ \int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ ,}
which is tantamount to the introduction of the δ-function in the form:
{\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }dp\ \cos(px-p\alpha )\ .}
Later, Augustin Cauchy expressed the theorem using exponentials:
{\displaystyle f(x)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\ e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp.}
Cauchy pointed out that in some circumstances the order of integration is significant in this result (contrast Fubini's theorem).
As justified using the theory of distributions, the Cauchy equation can be rearranged to resemble Fourier's original formulation and expose the δ-function as
{\displaystyle {\begin{aligned}f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ipx}\left(\int _{-\infty }^{\infty }e^{-ip\alpha }f(\alpha )\,d\alpha \right)\,dp\\[4pt]&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\left(\int _{-\infty }^{\infty }e^{ipx}e^{-ip\alpha }\,dp\right)f(\alpha )\,d\alpha =\int _{-\infty }^{\infty }\delta (x-\alpha )f(\alpha )\,d\alpha ,\end{aligned}}}
where the δ-function is expressed as
{\displaystyle \delta (x-\alpha )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ip(x-\alpha )}\,dp\ .}
A rigorous interpretation of the exponential form and the various limitations upon the function f necessary for its application extended over several centuries. The problems with a classical interpretation are explained as follows:
The greatest drawback of the classical Fourier transformation is a rather narrow class of functions (originals) for which it can be effectively computed. Namely, it is necessary that these functions decrease sufficiently rapidly to zero (in the neighborhood of infinity) to ensure the existence of the Fourier integral. For example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed and this removed many obstacles.
Further developments included generalization of the Fourier integral, "beginning with Plancherel's pathbreaking L2-theory (1910), continuing with Wiener's and Bochner's works (around 1930) and culminating with the amalgamation into L. Schwartz's theory of distributions (1945) ...", and leading to the formal development of the Dirac delta function.
== Definitions ==
The Dirac delta function δ(x) can be loosely thought of as a function on the real line which is zero everywhere except at the origin, where it is infinite,
{\displaystyle \delta (x)\simeq {\begin{cases}+\infty ,&x=0\\0,&x\neq 0\end{cases}}}
and which is also constrained to satisfy the identity
{\displaystyle \int _{-\infty }^{\infty }\delta (x)\,dx=1.}
This is merely a heuristic characterization. The Dirac delta is not a function in the traditional sense as no extended real number valued function defined on the real numbers has these properties.
=== As a measure ===
One way to rigorously capture the notion of the Dirac delta function is to define a measure, called Dirac measure, which accepts a subset A of the real line R as an argument, and returns δ(A) = 1 if 0 ∈ A, and δ(A) = 0 otherwise. If the delta function is conceptualized as modeling an idealized point mass at 0, then δ(A) represents the mass contained in the set A. One may then define the integral against δ as the integral of a function against this mass distribution. Formally, the Lebesgue integral provides the necessary analytic device. The Lebesgue integral with respect to the measure δ satisfies
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=f(0)}
for all continuous compactly supported functions f. The measure δ is not absolutely continuous with respect to the Lebesgue measure—in fact, it is a singular measure. Consequently, the delta measure has no Radon–Nikodym derivative (with respect to Lebesgue measure)—no true function for which the property
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (x)\,dx=f(0)}
holds. As a result, the latter notation is a convenient abuse of notation, and not a standard (Riemann or Lebesgue) integral.
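As a toy illustration of the measure-theoretic picture (a minimal sketch, not a standard library API; sets are represented here by membership predicates, and the class name DiracMeasure is made up for illustration), the Dirac measure can be modelled as an object that assigns mass 1 to any set containing the origin and integrates a function by evaluating it there:

class DiracMeasure:
    # Toy model of the Dirac measure concentrated at a point x0.
    def __init__(self, x0=0.0):
        self.x0 = x0

    def __call__(self, A):
        # A is a set given by its membership predicate; the measure of A is 1 iff x0 lies in A.
        return 1.0 if A(self.x0) else 0.0

    def integrate(self, f):
        # Lebesgue integral of f against the measure: all the mass sits at x0.
        return f(self.x0)

delta = DiracMeasure(0.0)
print(delta(lambda x: -1 < x < 1))            # 1.0, since 0 lies in (−1, 1)
print(delta(lambda x: x > 2))                 # 0.0
print(delta.integrate(lambda x: x**2 + 3))    # 3.0, i.e. f(0)
print(delta.integrate(lambda x: 1.0))         # 1.0, the total mass, matching the unit-integral property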
As a probability measure on R, the delta measure is characterized by its cumulative distribution function, which is the unit step function.
{\displaystyle H(x)={\begin{cases}1&{\text{if }}x\geq 0\\0&{\text{if }}x<0.\end{cases}}}
This means that H(x) is the integral of the cumulative indicator function 1(−∞, x] with respect to the measure δ; to wit,
{\displaystyle H(x)=\int _{\mathbf {R} }\mathbf {1} _{(-\infty ,x]}(t)\,\delta (dt)=\delta \!\left((-\infty ,x]\right),}
the latter being the measure of this interval. Thus in particular the integration of the delta function against a continuous function can be properly understood as a Riemann–Stieltjes integral:
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (dx)=\int _{-\infty }^{\infty }f(x)\,dH(x).}
All higher moments of δ are zero. In particular, the characteristic function and the moment generating function are both equal to one.
=== As a distribution ===
In the theory of distributions, a generalized function is considered not a function in itself but only through how it affects other functions when "integrated" against them. In keeping with this philosophy, to define the delta function properly, it is enough to say what the "integral" of the delta function is against a sufficiently "good" test function φ. Test functions are also known as bump functions. If the delta function is already understood as a measure, then the Lebesgue integral of a test function against that measure supplies the necessary integral.
A typical space of test functions consists of all smooth functions on R with compact support that have as many derivatives as required. As a distribution, the Dirac delta is a linear functional on the space of test functions and is defined by
{\displaystyle \delta [\varphi ]=\varphi (0)}
for every test function φ.
For δ to be properly a distribution, it must be continuous in a suitable topology on the space of test functions. In general, for a linear functional S on the space of test functions to define a distribution, it is necessary and sufficient that, for every positive integer N there is an integer MN and a constant CN such that for every test function φ, one has the inequality
{\displaystyle \left|S[\varphi ]\right|\leq C_{N}\sum _{k=0}^{M_{N}}\sup _{x\in [-N,N]}\left|\varphi ^{(k)}(x)\right|}
where sup represents the supremum. With the δ distribution, one has such an inequality (with CN = 1) with MN = 0 for all N. Thus δ is a distribution of order zero. It is, furthermore, a distribution with compact support (the support being {0}).
The delta distribution can also be defined in several equivalent ways. For instance, it is the distributional derivative of the Heaviside step function. This means that for every test function φ, one has
{\displaystyle \delta [\varphi ]=-\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx.}
Intuitively, if integration by parts were permitted, then the latter integral should simplify to
{\displaystyle \int _{-\infty }^{\infty }\varphi (x)\,H'(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,\delta (x)\,dx,}
and indeed, a form of integration by parts is permitted for the Stieltjes integral, and in that case, one does have
{\displaystyle -\int _{-\infty }^{\infty }\varphi '(x)\,H(x)\,dx=\int _{-\infty }^{\infty }\varphi (x)\,dH(x).}
In the context of measure theory, the Dirac measure gives rise to a distribution by integration. Conversely, equation (1) defines a Daniell integral on the space of all compactly supported continuous functions φ which, by the Riesz representation theorem, can be represented as the Lebesgue integral of φ with respect to some Radon measure.
Generally, when the term Dirac delta function is used, it is in the sense of distributions rather than measures, the Dirac measure being among several terms for the corresponding notion in measure theory. Some sources may also use the term Dirac delta distribution.
=== Generalizations ===
The delta function can be defined in n-dimensional Euclidean space Rn as the measure such that
{\displaystyle \int _{\mathbf {R} ^{n}}f(\mathbf {x} )\,\delta (d\mathbf {x} )=f(\mathbf {0} )}
for every compactly supported continuous function f. As a measure, the n-dimensional delta function is the product measure of the 1-dimensional delta functions in each variable separately. Thus, formally, with x = (x1, x2, ..., xn), one has
{\displaystyle \delta (\mathbf {x} )=\delta (x_{1})\,\delta (x_{2})\cdots \delta (x_{n}).}
The delta function can also be defined in the sense of distributions exactly as above in the one-dimensional case. However, despite widespread use in engineering contexts, (2) should be manipulated with care, since the product of distributions can only be defined under quite narrow circumstances.
The notion of a Dirac measure makes sense on any set. Thus if X is a set, x0 ∈ X is a marked point, and Σ is any sigma algebra of subsets of X, then the measure defined on sets A ∈ Σ by
{\displaystyle \delta _{x_{0}}(A)={\begin{cases}1&{\text{if }}x_{0}\in A\\0&{\text{if }}x_{0}\notin A\end{cases}}}
is the delta measure or unit mass concentrated at x0.
Another common generalization of the delta function is to a differentiable manifold where most of its properties as a distribution can also be exploited because of the differentiable structure. The delta function on a manifold M centered at the point x0 ∈ M is defined as the following distribution:
{\displaystyle \delta _{x_{0}}[\varphi ]=\varphi (x_{0})}
for all compactly supported smooth real-valued functions φ on M. A common special case of this construction is a case in which M is an open set in the Euclidean space Rn.
On a locally compact Hausdorff space X, the Dirac delta measure concentrated at a point x is the Radon measure associated with the Daniell integral (3) on compactly supported continuous functions φ. At this level of generality, calculus as such is no longer possible, however a variety of techniques from abstract analysis are available. For instance, the mapping
{\displaystyle x_{0}\mapsto \delta _{x_{0}}}
is a continuous embedding of X into the space of finite Radon measures on X, equipped with its vague topology. Moreover, the convex hull of the image of X under this embedding is dense in the space of probability measures on X.
== Properties ==
=== Scaling and symmetry ===
The delta function satisfies the following scaling property for a non-zero scalar α:
{\displaystyle \int _{-\infty }^{\infty }\delta (\alpha x)\,dx=\int _{-\infty }^{\infty }\delta (u)\,{\frac {du}{|\alpha |}}={\frac {1}{|\alpha |}}}
and so
{\displaystyle \delta (\alpha x)={\frac {\delta (x)}{|\alpha |}}.}
Scaling property proof:
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{a}}g(0).}
where a change of variable x′ = ax is used. If a is negative, i.e., a = −|a|, then
{\displaystyle \int \limits _{-\infty }^{\infty }dx\ g(x)\delta (ax)={\frac {1}{-\left\vert a\right\vert }}\int \limits _{\infty }^{-\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}\int \limits _{-\infty }^{\infty }dx'\ g\left({\frac {x'}{a}}\right)\delta (x')={\frac {1}{\left\vert a\right\vert }}g(0).}
Thus,
{\displaystyle \delta (ax)={\frac {1}{\left\vert a\right\vert }}\delta (x)}.
In particular, the delta function is an even distribution (symmetry), in the sense that
{\displaystyle \delta (-x)=\delta (x)}
which is homogeneous of degree −1.
=== Algebraic properties ===
The distributional product of δ with x is equal to zero:
{\displaystyle x\,\delta (x)=0.}
More generally,
{\displaystyle (x-a)^{n}\delta (x-a)=0}
for all positive integers n.
Conversely, if xf(x) = xg(x), where f and g are distributions, then
{\displaystyle f(x)=g(x)+c\delta (x)}
for some constant c.
=== Translation ===
The integral of any function multiplied by the time-delayed Dirac delta δT(t) = δ(t − T) is
{\displaystyle \int _{-\infty }^{\infty }f(t)\,\delta (t-T)\,dt=f(T).}
This is sometimes referred to as the sifting property or the sampling property. The delta function is said to "sift out" the value of f(t) at t = T.
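The sifting property can be checked numerically by replacing δ with a narrow normalized Gaussian, one of many possible nascent deltas (an illustrative sketch assuming Python with NumPy, not part of the original text):

import numpy as np

def delta_eps(x, eps):
    # Narrow normalized Gaussian standing in for the delta function
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos                    # a smooth test function
T = 0.7                       # the shift; we expect the integral to pick out f(T)
x = np.linspace(-10, 10, 200001)

for eps in (0.5, 0.1, 0.01):
    approx = np.trapz(f(x) * delta_eps(x - T, eps), x)
    print(f"eps={eps:5.2f}  integral ≈ {approx:.6f}   f(T) = {np.cos(T):.6f}")
# As eps shrinks, the integral converges to f(T) = cos(0.7).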
It follows that the effect of convolving a function f(t) with the time-delayed Dirac delta is to time-delay f(t) by the same amount:
{\displaystyle {\begin{aligned}(f*\delta _{T})(t)\ &{\stackrel {\mathrm {def} }{=}}\ \int _{-\infty }^{\infty }f(\tau )\,\delta (t-T-\tau )\,d\tau \\&=\int _{-\infty }^{\infty }f(\tau )\,\delta (\tau -(t-T))\,d\tau \qquad {\text{since}}~\delta (-x)=\delta (x)~~{\text{by (4)}}\\&=f(t-T).\end{aligned}}}
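A discrete analogue makes the time-delay interpretation tangible: convolving a sampled signal with a shifted unit impulse (the Kronecker counterpart of δT) simply shifts the signal. The sketch below assumes Python with NumPy and is illustrative only:

import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0])
T = 2
delta_T = np.zeros(7)
delta_T[T] = 1.0                      # discrete impulse at index T

delayed = np.convolve(f, delta_T)[:len(f)]
print(delayed)                        # [0. 0. 1. 2. 3. 4. 0.], i.e. f shifted right by T samples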
The sifting property holds under the precise condition that f be a tempered distribution (see the discussion of the Fourier transform below). As a special case, for instance, we have the identity (understood in the distribution sense)
{\displaystyle \int _{-\infty }^{\infty }\delta (\xi -x)\delta (x-\eta )\,dx=\delta (\eta -\xi ).}
=== Composition with a function ===
More generally, the delta distribution may be composed with a smooth function g(x) in such a way that the familiar change of variables formula holds (where u = g(x)), that
{\displaystyle \int _{\mathbb {R} }\delta {\bigl (}g(x){\bigr )}f{\bigl (}g(x){\bigr )}\left|g'(x)\right|dx=\int _{g(\mathbb {R} )}\delta (u)\,f(u)\,du}
provided that g is a continuously differentiable function with g′ nowhere zero. That is, there is a unique way to assign meaning to the distribution δ ∘ g so that this identity holds for all compactly supported test functions f. Therefore, the domain must be broken up to exclude the g′ = 0 point. This distribution satisfies δ(g(x)) = 0 if g is nowhere zero, and otherwise if g has a real root at x0, then
{\displaystyle \delta (g(x))={\frac {\delta (x-x_{0})}{|g'(x_{0})|}}.}
It is natural therefore to define the composition δ(g(x)) for continuously differentiable functions g by
{\displaystyle \delta (g(x))=\sum _{i}{\frac {\delta (x-x_{i})}{|g'(x_{i})|}}}
where the sum extends over all roots of g(x), which are assumed to be simple. Thus, for example
{\displaystyle \delta \left(x^{2}-\alpha ^{2}\right)={\frac {1}{2|\alpha |}}{\Big [}\delta \left(x+\alpha \right)+\delta \left(x-\alpha \right){\Big ]}.}
In the integral form, the generalized scaling property may be written as
{\displaystyle \int _{-\infty }^{\infty }f(x)\,\delta (g(x))\,dx=\sum _{i}{\frac {f(x_{i})}{|g'(x_{i})|}}.}
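The generalized scaling property can likewise be verified numerically for a simple case such as g(x) = x² − α², again replacing δ by a narrow Gaussian (an illustrative sketch assuming Python with NumPy; the agreement is limited by the kernel width and grid spacing):

import numpy as np

def delta_eps(x, eps=1e-2):
    # Narrow normalized Gaussian standing in for the delta function
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

alpha = 2.0
f = lambda x: np.exp(-x**2)           # smooth test function
g = lambda x: x**2 - alpha**2         # simple roots at x = ±alpha, with |g'(±alpha)| = 2|alpha|
x = np.linspace(-10, 10, 400001)

lhs = np.trapz(f(x) * delta_eps(g(x)), x)
rhs = (f(alpha) + f(-alpha)) / (2 * abs(alpha))
print(lhs, rhs)                       # the two values agree up to discretization error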
=== Indefinite integral ===
For a constant a ∈ ℝ and a "well-behaved" arbitrary real-valued function y(x),
{\displaystyle \displaystyle {\int }y(x)\delta (x-a)dx=y(a)H(x-a)+c,}
where H(x) is the Heaviside step function and c is an integration constant.
=== Properties in n dimensions ===
The delta distribution in an n-dimensional space satisfies the following scaling property instead,
{\displaystyle \delta (\alpha {\boldsymbol {x}})=|\alpha |^{-n}\delta ({\boldsymbol {x}})~,}
so that δ is a homogeneous distribution of degree −n.
Under any reflection or rotation ρ, the delta function is invariant,
{\displaystyle \delta (\rho {\boldsymbol {x}})=\delta ({\boldsymbol {x}})~.}
As in the one-variable case, it is possible to define the composition of δ with a bi-Lipschitz function g: Rn → Rn uniquely so that the following holds
{\displaystyle \int _{\mathbb {R} ^{n}}\delta (g({\boldsymbol {x}}))\,f(g({\boldsymbol {x}}))\left|\det g'({\boldsymbol {x}})\right|d{\boldsymbol {x}}=\int _{g(\mathbb {R} ^{n})}\delta ({\boldsymbol {u}})f({\boldsymbol {u}})\,d{\boldsymbol {u}}}
for all compactly supported functions f.
Using the coarea formula from geometric measure theory, one can also define the composition of the delta function with a submersion from one Euclidean space to another one of different dimension; the result is a type of current. In the special case of a continuously differentiable function g : Rn → R such that the gradient of g is nowhere zero, the following identity holds
{\displaystyle \int _{\mathbb {R} ^{n}}f({\boldsymbol {x}})\,\delta (g({\boldsymbol {x}}))\,d{\boldsymbol {x}}=\int _{g^{-1}(0)}{\frac {f({\boldsymbol {x}})}{|{\boldsymbol {\nabla }}g|}}\,d\sigma ({\boldsymbol {x}})}
where the integral on the right is over g−1(0), the (n − 1)-dimensional surface defined by g(x) = 0 with respect to the Minkowski content measure. This is known as a simple layer integral.
More generally, if S is a smooth hypersurface of Rn, then we can associate to S the distribution that integrates any compactly supported smooth function g over S:
{\displaystyle \delta _{S}[g]=\int _{S}g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}})}
where σ is the hypersurface measure associated to S. This generalization is associated with the potential theory of simple layer potentials on S. If D is a domain in Rn with smooth boundary S, then δS is equal to the normal derivative of the indicator function of D in the distribution sense,
{\displaystyle -\int _{\mathbb {R} ^{n}}g({\boldsymbol {x}})\,{\frac {\partial 1_{D}({\boldsymbol {x}})}{\partial n}}\,d{\boldsymbol {x}}=\int _{S}\,g({\boldsymbol {s}})\,d\sigma ({\boldsymbol {s}}),}
where n is the outward normal. For a proof, see e.g. the article on the surface delta function.
In three dimensions, the delta function is represented in spherical coordinates by:
{\displaystyle \delta ({\boldsymbol {r}}-{\boldsymbol {r}}_{0})={\begin{cases}\displaystyle {\frac {1}{r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})\delta (\phi -\phi _{0})&x_{0},y_{0},z_{0}\neq 0\\\displaystyle {\frac {1}{2\pi r^{2}\sin \theta }}\delta (r-r_{0})\delta (\theta -\theta _{0})&x_{0}=y_{0}=0,\ z_{0}\neq 0\\\displaystyle {\frac {1}{4\pi r^{2}}}\delta (r-r_{0})&x_{0}=y_{0}=z_{0}=0\end{cases}}}
== Derivatives ==
The derivative of the Dirac delta distribution, denoted δ′ and also called the Dirac delta prime or Dirac delta derivative as described in Laplacian of the indicator, is defined on compactly supported smooth test functions φ by
{\displaystyle \delta '[\varphi ]=-\delta [\varphi ']=-\varphi '(0).}
The first equality here is a kind of integration by parts, for if δ were a true function then
{\displaystyle \int _{-\infty }^{\infty }\delta '(x)\varphi (x)\,dx=\delta (x)\varphi (x)|_{-\infty }^{\infty }-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }\delta (x)\varphi '(x)\,dx=-\varphi '(0).}
By mathematical induction, the k-th derivative of δ is defined similarly as the distribution given on test functions by
{\displaystyle \delta ^{(k)}[\varphi ]=(-1)^{k}\varphi ^{(k)}(0).}
In particular, δ is an infinitely differentiable distribution.
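Numerically, δ′ can be modelled by the derivative of a narrow Gaussian; pairing it with a smooth test function then returns approximately −φ′(0), as the definition above requires. The following sketch (assuming Python with NumPy; the test function is arbitrary) illustrates this:

import numpy as np

eps = 1e-2
x = np.linspace(-1, 1, 400001)
gauss = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
dgauss = -x / eps**2 * gauss          # derivative of the nascent delta, a model for the delta prime

phi = lambda t: np.sin(3 * t) + t**2  # smooth test function with phi'(0) = 3
print(np.trapz(dgauss * phi(x), x))   # ≈ −3.0, that is, −phi'(0)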
The first derivative of the delta function is the distributional limit of the difference quotients:
{\displaystyle \delta '(x)=\lim _{h\to 0}{\frac {\delta (x+h)-\delta (x)}{h}}.}
More properly, one has
{\displaystyle \delta '=\lim _{h\to 0}{\frac {1}{h}}(\tau _{h}\delta -\delta )}
where τh is the translation operator, defined on functions by τhφ(x) = φ(x + h), and on a distribution S by
{\displaystyle (\tau _{h}S)[\varphi ]=S[\tau _{-h}\varphi ].}
In the theory of electromagnetism, the first derivative of the delta function represents a point magnetic dipole situated at the origin. Accordingly, it is referred to as a dipole or the doublet function.
The derivative of the delta function satisfies a number of basic properties, including:
{\displaystyle {\begin{aligned}\delta '(-x)&=-\delta '(x)\\x\delta '(x)&=-\delta (x)\end{aligned}}}
which can be shown by applying a test function and integrating by parts.
The latter of these properties can also be demonstrated by applying the definition of the distributional derivative, Leibniz's theorem, and the linearity of the inner product:
{\displaystyle {\begin{aligned}\langle x\delta ',\varphi \rangle \,&=\,\langle \delta ',x\varphi \rangle \,=\,-\langle \delta ,(x\varphi )'\rangle \,=\,-\langle \delta ,x'\varphi +x\varphi '\rangle \,=\,-\langle \delta ,x'\varphi \rangle -\langle \delta ,x\varphi '\rangle \,=\,-x'(0)\varphi (0)-x(0)\varphi '(0)\\&=\,-x'(0)\langle \delta ,\varphi \rangle -x(0)\langle \delta ,\varphi '\rangle \,=\,-x'(0)\langle \delta ,\varphi \rangle +x(0)\langle \delta ',\varphi \rangle \,=\,\langle x(0)\delta '-x'(0)\delta ,\varphi \rangle \\\Longrightarrow x(t)\delta '(t)&=x(0)\delta '(t)-x'(0)\delta (t)=-x'(0)\delta (t)=-\delta (t)\end{aligned}}}
Furthermore, the convolution of δ′ with a compactly-supported, smooth function f is
{\displaystyle \delta '*f=\delta *f'=f',}
which follows from the properties of the distributional derivative of a convolution.
=== Higher dimensions ===
More generally, on an open set U in the n-dimensional Euclidean space ℝn, the Dirac delta distribution centered at a point a ∈ U is defined by
{\displaystyle \delta _{a}[\varphi ]=\varphi (a)}
for all φ ∈ Cc∞(U), the space of all smooth functions with compact support on U. If α = (α1, …, αn) is any multi-index with |α| = α1 + ⋯ + αn and ∂α denotes the associated mixed partial derivative operator, then the α-th derivative ∂αδa of δa is given by
{\displaystyle \left\langle \partial ^{\alpha }\delta _{a},\,\varphi \right\rangle =(-1)^{|\alpha |}\left\langle \delta _{a},\partial ^{\alpha }\varphi \right\rangle =(-1)^{|\alpha |}\partial ^{\alpha }\varphi (x){\Big |}_{x=a}\quad {\text{ for all }}\varphi \in C_{c}^{\infty }(U).}
That is, the α-th derivative of δa is the distribution whose value on any test function φ is the α-th derivative of φ at a (with the appropriate positive or negative sign).
The first partial derivatives of the delta function are thought of as double layers along the coordinate planes. More generally, the normal derivative of a simple layer supported on a surface is a double layer supported on that surface and represents a laminar magnetic monopole. Higher derivatives of the delta function are known in physics as multipoles.
Higher derivatives enter into mathematics naturally as the building blocks for the complete structure of distributions with point support. If S is any distribution on U supported on the set {a} consisting of a single point, then there is an integer m and coefficients cα such that
{\displaystyle S=\sum _{|\alpha |\leq m}c_{\alpha }\partial ^{\alpha }\delta _{a}.}
== Representations ==
=== Nascent delta function ===
The delta function can be viewed as the limit of a sequence of functions
{\displaystyle \delta (x)=\lim _{\varepsilon \to 0^{+}}\eta _{\varepsilon }(x),}
where ηε(x) is sometimes called a nascent delta function. This limit is meant in a weak sense: either that
{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{-\infty }^{\infty }\eta _{\varepsilon }(x)f(x)\,dx=f(0)}
for all continuous functions f having compact support, or that this limit holds for all smooth functions f with compact support. The difference between these two slightly different modes of weak convergence is often subtle: the former is convergence in the vague topology of measures, and the latter is convergence in the sense of distributions.
==== Approximations to the identity ====
Typically a nascent delta function ηε can be constructed in the following manner. Let η be an absolutely integrable function on R of total integral 1, and define
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\eta \left({\frac {x}{\varepsilon }}\right).}
In n dimensions, one uses instead the scaling
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-n}\eta \left({\frac {x}{\varepsilon }}\right).}
Then a simple change of variables shows that ηε also has integral 1. One may show that (5) holds for all continuous compactly supported functions f, and so ηε converges weakly to δ in the sense of measures.
The ηε constructed in this way are known as an approximation to the identity. This terminology is because the space L1(R) of absolutely integrable functions is closed under the operation of convolution of functions: f ∗ g ∈ L1(R) whenever f and g are in L1(R). However, there is no identity in L1(R) for the convolution product: no element h such that f ∗ h = f for all f. Nevertheless, the sequence ηε does approximate such an identity in the sense that
f
∗
η
ε
→
f
as
ε
→
0.
{\displaystyle f*\eta _{\varepsilon }\to f\quad {\text{as }}\varepsilon \to 0.}
This limit holds in the sense of mean convergence (convergence in L1). Further conditions on the ηε, for instance that it be a mollifier associated to a compactly supported function, are needed to ensure pointwise convergence almost everywhere.
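For instance, convolving a discontinuous signal with a shrinking box kernel illustrates the convergence of f ∗ ηε to f in L1 (a rough numerical sketch assuming Python with NumPy; the renormalization step only compensates for the discrete grid):

import numpy as np

x = np.linspace(-2, 2, 4001)
dx = x[1] - x[0]
f = (x > 0).astype(float)             # unit step, discontinuous at 0

def eta_eps(eps):
    # Rescaled box kernel: eps**(-1) * eta(x/eps) with eta equal to 1 on [-1/2, 1/2]
    k = (np.abs(x) <= eps / 2).astype(float) / eps
    return k / (k.sum() * dx)         # renormalize so the discrete integral is exactly 1

for eps in (0.5, 0.1, 0.02):
    smoothed = np.convolve(f, eta_eps(eps), mode="same") * dx
    err = np.trapz(np.abs(smoothed - f), x)
    print(f"eps={eps:4.2f}  L1 error ≈ {err:.4f}")   # decreases toward 0 as eps shrinks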
If the initial η = η1 is itself smooth and compactly supported then the sequence is called a mollifier. The standard mollifier is obtained by choosing η to be a suitably normalized bump function, for instance
{\displaystyle \eta (x)={\begin{cases}{\frac {1}{I_{n}}}\exp {\Big (}-{\frac {1}{1-|x|^{2}}}{\Big )}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1.\end{cases}}}
(In ensuring that the total integral is 1).
In some situations such as numerical analysis, a piecewise linear approximation to the identity is desirable. This can be obtained by taking η1 to be a hat function. With this choice of η1, one has
{\displaystyle \eta _{\varepsilon }(x)=\varepsilon ^{-1}\max \left(1-\left|{\frac {x}{\varepsilon }}\right|,0\right)}
which are all continuous and compactly supported, although not smooth and so not a mollifier.
==== Probabilistic considerations ====
In the context of probability theory, it is natural to impose the additional condition that the initial η1 in an approximation to the identity should be positive, as such a function then represents a probability distribution. Convolution with a probability distribution is sometimes favorable because it does not result in overshoot or undershoot, as the output is a convex combination of the input values, and thus falls between the maximum and minimum of the input function. Taking η1 to be any probability distribution at all, and letting ηε(x) = η1(x/ε)/ε as above will give rise to an approximation to the identity. In general this converges more rapidly to a delta function if, in addition, η has mean 0 and has small higher moments. For instance, if η1 is the uniform distribution on [−1/2, 1/2], also known as the rectangular function, then:
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x}{\varepsilon }}\right)={\begin{cases}{\frac {1}{\varepsilon }},&-{\frac {\varepsilon }{2}}<x<{\frac {\varepsilon }{2}},\\0,&{\text{otherwise}}.\end{cases}}}
Another example is with the Wigner semicircle distribution
{\displaystyle \eta _{\varepsilon }(x)={\begin{cases}{\frac {2}{\pi \varepsilon ^{2}}}{\sqrt {\varepsilon ^{2}-x^{2}}},&-\varepsilon <x<\varepsilon ,\\0,&{\text{otherwise}}.\end{cases}}}
This is continuous and compactly supported, but not a mollifier because it is not smooth.
==== Semigroups ====
Nascent delta functions often arise as convolution semigroups. This amounts to the further constraint that the convolution of ηε with ηδ must satisfy
{\displaystyle \eta _{\varepsilon }*\eta _{\delta }=\eta _{\varepsilon +\delta }}
for all ε, δ > 0. Convolution semigroups in L1 that form a nascent delta function are always an approximation to the identity in the above sense, however the semigroup condition is quite a strong restriction.
In practice, semigroups approximating the delta function arise as fundamental solutions or Green's functions to physically motivated elliptic or parabolic partial differential equations. In the context of applied mathematics, semigroups arise as the output of a linear time-invariant system. Abstractly, if A is a linear operator acting on functions of x, then a convolution semigroup arises by solving the initial value problem
{\displaystyle {\begin{cases}{\dfrac {\partial }{\partial t}}\eta (t,x)=A\eta (t,x),\quad t>0\\[5pt]\displaystyle \lim _{t\to 0^{+}}\eta (t,x)=\delta (x)\end{cases}}}
in which the limit is as usual understood in the weak sense. Setting ηε(x) = η(ε, x) gives the associated nascent delta function.
Some examples of physically important convolution semigroups arising from such a fundamental solution include the following.
===== The heat kernel =====
The heat kernel, defined by
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\sqrt {2\pi \varepsilon }}}\mathrm {e} ^{-{\frac {x^{2}}{2\varepsilon }}}}
represents the temperature in an infinite wire at time t > 0, if a unit of heat energy is stored at the origin of the wire at time t = 0. This semigroup evolves according to the one-dimensional heat equation:
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {1}{2}}{\frac {\partial ^{2}u}{\partial x^{2}}}.}
In probability theory, ηε(x) is a normal distribution of variance ε and mean 0. It represents the probability density at time t = ε of the position of a particle starting at the origin following a standard Brownian motion. In this context, the semigroup condition is then an expression of the Markov property of Brownian motion.
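The semigroup property ηε ∗ ηδ = ηε+δ is easy to check numerically for the heat kernel, since convolving two centered Gaussians adds their variances (an illustrative sketch assuming Python with NumPy):

import numpy as np

def heat_kernel(x, eps):
    # Gaussian with mean 0 and variance eps: (2*pi*eps)**(-1/2) * exp(-x**2/(2*eps))
    return np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
e1, e2 = 0.3, 0.7

conv = np.convolve(heat_kernel(x, e1), heat_kernel(x, e2), mode="same") * dx
direct = heat_kernel(x, e1 + e2)
print(np.max(np.abs(conv - direct)))  # ≈ 0 up to discretization error: the kernels form a convolution semigroup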
In higher-dimensional Euclidean space Rn, the heat kernel is
{\displaystyle \eta _{\varepsilon }={\frac {1}{(2\pi \varepsilon )^{n/2}}}\mathrm {e} ^{-{\frac {x\cdot x}{2\varepsilon }}},}
and has the same physical interpretation, mutatis mutandis. It also represents a nascent delta function in the sense that ηε → δ in the distribution sense as ε → 0.
===== The Poisson kernel =====
The Poisson kernel
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi }}\mathrm {Im} \left\{{\frac {1}{x-\mathrm {i} \varepsilon }}\right\}={\frac {1}{\pi }}{\frac {\varepsilon }{\varepsilon ^{2}+x^{2}}}={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\mathrm {e} ^{\mathrm {i} \xi x-|\varepsilon \xi |}\,d\xi }
is the fundamental solution of the Laplace equation in the upper half-plane. It represents the electrostatic potential in a semi-infinite plate whose potential along the edge is held fixed at the delta function. The Poisson kernel is also closely related to the Cauchy distribution and to the Epanechnikov and Gaussian kernel functions. This semigroup evolves according to the equation
{\displaystyle {\frac {\partial u}{\partial t}}=-\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}u(t,x)}
where the operator is rigorously defined as the Fourier multiplier
{\displaystyle {\mathcal {F}}\left[\left(-{\frac {\partial ^{2}}{\partial x^{2}}}\right)^{\frac {1}{2}}f\right](\xi )=|2\pi \xi |{\mathcal {F}}f(\xi ).}
==== Oscillatory integrals ====
In areas of physics such as wave propagation and wave mechanics, the equations involved are hyperbolic and so may have more singular solutions. As a result, the nascent delta functions that arise as fundamental solutions of the associated Cauchy problems are generally oscillatory integrals. An example, which comes from a solution of the Euler–Tricomi equation of transonic gas dynamics, is the rescaled Airy function
{\displaystyle \varepsilon ^{-1/3}\operatorname {Ai} \left(x\varepsilon ^{-1/3}\right).}
Although, using the Fourier transform, it is easy to see that this generates a semigroup in some sense, it is not absolutely integrable and so cannot define a semigroup in the above strong sense. Many nascent delta functions constructed as oscillatory integrals only converge in the sense of distributions (an example is the Dirichlet kernel below), rather than in the sense of measures.
Another example is the Cauchy problem for the wave equation in R1+1:
{\displaystyle {\begin{aligned}c^{-2}{\frac {\partial ^{2}u}{\partial t^{2}}}-\Delta u&=0\\u=0,\quad {\frac {\partial u}{\partial t}}=\delta &\qquad {\text{for }}t=0.\end{aligned}}}
The solution u represents the displacement from equilibrium of an infinite elastic string, with an initial disturbance at the origin.
Other approximations to the identity of this kind include the sinc function (used widely in electronics and telecommunications)
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\pi x}}\sin \left({\frac {x}{\varepsilon }}\right)={\frac {1}{2\pi }}\int _{-{\frac {1}{\varepsilon }}}^{\frac {1}{\varepsilon }}\cos(kx)\,dk}
and the Bessel function
{\displaystyle \eta _{\varepsilon }(x)={\frac {1}{\varepsilon }}J_{\frac {1}{\varepsilon }}\left({\frac {x+1}{\varepsilon }}\right).}
=== Plane wave decomposition ===
One approach to the study of a linear partial differential equation
{\displaystyle L[u]=f,}
where L is a differential operator on Rn, is to seek first a fundamental solution, which is a solution of the equation
{\displaystyle L[u]=\delta .}
When L is particularly simple, this problem can often be resolved using the Fourier transform directly (as in the case of the Poisson kernel and heat kernel already mentioned). For more complicated operators, it is sometimes easier first to consider an equation of the form
{\displaystyle L[u]=h}
where h is a plane wave function, meaning that it has the form
{\displaystyle h=h(x\cdot \xi )}
for some vector ξ. Such an equation can be resolved (if the coefficients of L are analytic functions) by the Cauchy–Kovalevskaya theorem or (if the coefficients of L are constant) by quadrature. So, if the delta function can be decomposed into plane waves, then one can in principle solve linear partial differential equations.
Such a decomposition of the delta function into plane waves was part of a general technique first introduced essentially by Johann Radon, and then developed in this form by Fritz John (1955). Choose k so that n + k is an even integer, and for a real number s, put
{\displaystyle g(s)=\operatorname {Re} \left[{\frac {-s^{k}\log(-is)}{k!(2\pi i)^{n}}}\right]={\begin{cases}{\frac {|s|^{k}}{4k!(2\pi i)^{n-1}}}&n{\text{ odd}}\\[5pt]-{\frac {|s|^{k}\log |s|}{k!(2\pi i)^{n}}}&n{\text{ even.}}\end{cases}}}
Then δ is obtained by applying a power of the Laplacian to the integral with respect to the unit sphere measure dω of g(x · ξ) for ξ in the unit sphere Sn−1:
{\displaystyle \delta (x)=\Delta _{x}^{(n+k)/2}\int _{S^{n-1}}g(x\cdot \xi )\,d\omega _{\xi }.}
The Laplacian here is interpreted as a weak derivative, so that this equation is taken to mean that, for any test function φ,
{\displaystyle \varphi (x)=\int _{\mathbf {R} ^{n}}\varphi (y)\,dy\,\Delta _{x}^{\frac {n+k}{2}}\int _{S^{n-1}}g((x-y)\cdot \xi )\,d\omega _{\xi }.}
The result follows from the formula for the Newtonian potential (the fundamental solution of Poisson's equation). This is essentially a form of the inversion formula for the Radon transform because it recovers the value of φ(x) from its integrals over hyperplanes. For instance, if n is odd and k = 1, then the integral on the right hand side is
{\displaystyle {\begin{aligned}&c_{n}\Delta _{x}^{\frac {n+1}{2}}\iint _{S^{n-1}}\varphi (y)|(y-x)\cdot \xi |\,d\omega _{\xi }\,dy\\[5pt]&\qquad =c_{n}\Delta _{x}^{(n+1)/2}\int _{S^{n-1}}\,d\omega _{\xi }\int _{-\infty }^{\infty }|p|R\varphi (\xi ,p+x\cdot \xi )\,dp\end{aligned}}}
where Rφ(ξ, p) is the Radon transform of φ:
{\displaystyle R\varphi (\xi ,p)=\int _{x\cdot \xi =p}\varphi (x)\,d^{n-1}x.}
An alternative equivalent expression of the plane wave decomposition is:
{\displaystyle \delta (x)={\begin{cases}{\frac {(n-1)!}{(2\pi i)^{n}}}\displaystyle \int _{S^{n-1}}(x\cdot \xi )^{-n}\,d\omega _{\xi }&n{\text{ even}}\\{\frac {1}{2(2\pi i)^{n-1}}}\displaystyle \int _{S^{n-1}}\delta ^{(n-1)}(x\cdot \xi )\,d\omega _{\xi }&n{\text{ odd}}.\end{cases}}}
=== Fourier transform ===
The delta function is a tempered distribution, and therefore it has a well-defined Fourier transform. Formally, one finds
{\displaystyle {\widehat {\delta }}(\xi )=\int _{-\infty }^{\infty }e^{-2\pi ix\xi }\,\delta (x)dx=1.}
Properly speaking, the Fourier transform of a distribution is defined by imposing self-adjointness of the Fourier transform under the duality pairing
⟨·,·⟩ of tempered distributions with Schwartz functions. Thus
{\displaystyle {\widehat {\delta }}}
is defined as the unique tempered distribution satisfying
{\displaystyle \langle {\widehat {\delta }},\varphi \rangle =\langle \delta ,{\widehat {\varphi }}\rangle }
for all Schwartz functions φ. And indeed it follows from this that
{\displaystyle {\widehat {\delta }}=1.}
As a result of this identity, the convolution of the delta function with any other tempered distribution S is simply S:
{\displaystyle S*\delta =S.}
That is to say that δ is an identity element for the convolution on tempered distributions, and in fact, the space of compactly supported distributions under convolution is an associative algebra with identity the delta function. This property is fundamental in signal processing, as convolution with a tempered distribution is a linear time-invariant system, and applying the linear time-invariant system measures its impulse response. The impulse response can be computed to any desired degree of accuracy by choosing a suitable approximation for δ, and once it is known, it characterizes the system completely. See LTI system theory § Impulse response and convolution.
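In discrete time the same statement becomes very concrete: feeding a unit impulse into an LTI system returns its impulse response, and convolving with the impulse leaves any signal unchanged. The sketch below assumes Python with NumPy and uses a made-up 3-tap FIR filter purely for illustration:

import numpy as np

h = np.array([0.25, 0.5, 0.25])           # the system's impulse response (an arbitrary example)
impulse = np.zeros(8)
impulse[0] = 1.0                          # discrete delta

print(np.convolve(impulse, h)[:8])        # recovers h, padded with zeros

s = np.array([1.0, -2.0, 3.0, 0.5])
print(np.convolve(s, np.array([1.0])))    # [ 1. -2.  3.  0.5]: the delta acts as the identity for convolution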
The inverse Fourier transform of the tempered distribution f(ξ) = 1 is the delta function. Formally, this is expressed as
{\displaystyle \int _{-\infty }^{\infty }1\cdot e^{2\pi ix\xi }\,d\xi =\delta (x)}
and more rigorously, it follows since
{\displaystyle \langle 1,{\widehat {f}}\rangle =f(0)=\langle \delta ,f\rangle }
for all Schwartz functions f.
In these terms, the delta function provides a suggestive statement of the orthogonality property of the Fourier kernel on R. Formally, one has
{\displaystyle \int _{-\infty }^{\infty }e^{i2\pi \xi _{1}t}\left[e^{i2\pi \xi _{2}t}\right]^{*}\,dt=\int _{-\infty }^{\infty }e^{-i2\pi (\xi _{2}-\xi _{1})t}\,dt=\delta (\xi _{2}-\xi _{1}).}
This is, of course, shorthand for the assertion that the Fourier transform of the tempered distribution
{\displaystyle f(t)=e^{i2\pi \xi _{1}t}}
is
{\displaystyle {\widehat {f}}(\xi _{2})=\delta (\xi _{1}-\xi _{2})}
which again follows by imposing self-adjointness of the Fourier transform.
By analytic continuation of the Fourier transform, the Laplace transform of the delta function is found to be
{\displaystyle \int _{0}^{\infty }\delta (t-a)\,e^{-st}\,dt=e^{-sa}.}
==== Fourier kernels ====
In the study of Fourier series, a major question consists of determining whether and in what sense the Fourier series associated with a periodic function converges to the function. The n-th partial sum of the Fourier series of a function f of period 2π is defined by convolution (on the interval [−π,π]) with the Dirichlet kernel:
{\displaystyle D_{N}(x)=\sum _{n=-N}^{N}e^{inx}={\frac {\sin \left(\left(N+{\frac {1}{2}}\right)x\right)}{\sin(x/2)}}.}
Thus,
{\displaystyle s_{N}(f)(x)=D_{N}*f(x)=\sum _{n=-N}^{N}a_{n}e^{inx}}
where
{\displaystyle a_{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }f(y)e^{-iny}\,dy.}
A fundamental result of elementary Fourier series states that the Dirichlet kernel restricted to the interval [−π,π] tends to a multiple of the delta function as N → ∞. This is interpreted in the distribution sense, that
{\displaystyle s_{N}(f)(0)=\int _{-\pi }^{\pi }D_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported smooth function f. Thus, formally one has
{\displaystyle \delta (x)={\frac {1}{2\pi }}\sum _{n=-\infty }^{\infty }e^{inx}}
on the interval [−π,π].
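The distributional convergence can be observed numerically for a smooth test function; a minimal sketch (NumPy; the test function f(x) = e^{cos x} is an illustrative choice), computing (1/2π) ∫ D_N f → f(0):

import numpy as np

# Check that (1/(2*pi)) * integral of D_N(x) f(x) over [-pi, pi] tends to f(0).
def dirichlet(N, x):
    return np.sin((N + 0.5) * x) / np.sin(x / 2)

f = lambda x: np.exp(np.cos(x))  # smooth 2*pi-periodic test function
# even point count: the grid does not contain x = 0, where the formula is 0/0
x = np.linspace(-np.pi, np.pi, 200_000)

for N in (10, 100, 1000):
    approx = np.trapz(dirichlet(N, x) * f(x), x) / (2 * np.pi)
    print(N, approx)  # approaches f(0) = e = 2.71828...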
Despite this, the result does not hold for all compactly supported continuous functions: that is, DN does not converge weakly in the sense of measures. The lack of convergence of the Fourier series has led to the introduction of a variety of summability methods to produce convergence. The method of Cesàro summation leads to the Fejér kernel
{\displaystyle F_{N}(x)={\frac {1}{N}}\sum _{n=0}^{N-1}D_{n}(x)={\frac {1}{N}}\left({\frac {\sin {\frac {Nx}{2}}}{\sin {\frac {x}{2}}}}\right)^{2}.}
The Fejér kernels tend to the delta function in the stronger sense that
{\displaystyle \int _{-\pi }^{\pi }F_{N}(x)f(x)\,dx\to 2\pi f(0)}
for every compactly supported continuous function f. The implication is that the Fourier series of any continuous function is Cesàro summable to the value of the function at every point.
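The stronger convergence shows up numerically even for a merely continuous, non-smooth test function; a minimal sketch (NumPy; f(x) = |x| + 1 is an illustrative choice, continuous but not differentiable at 0):

import numpy as np

# Fejer kernel acting on a continuous but non-smooth function:
# (1/(2*pi)) * integral F_N(x) f(x) dx -> f(0).
def fejer(N, x):
    return (np.sin(N * x / 2) / np.sin(x / 2)) ** 2 / N

f = lambda x: np.abs(x) + 1.0
# even point count: the grid does not contain x = 0, where the formula is 0/0
x = np.linspace(-np.pi, np.pi, 200_000)

for N in (10, 100, 1000):
    print(N, np.trapz(fejer(N, x) * f(x), x) / (2 * np.pi))  # -> f(0) = 1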
=== Hilbert space theory ===
The Dirac delta distribution is a densely defined unbounded linear functional on the Hilbert space L2 of square-integrable functions. Indeed, smooth compactly supported functions are dense in L2, and the action of the delta distribution on such functions is well-defined. In many applications, it is possible to identify subspaces of L2 and to give a stronger topology on which the delta function defines a bounded linear functional.
==== Sobolev spaces ====
The Sobolev embedding theorem for Sobolev spaces on the real line R implies that any square-integrable function f such that
{\displaystyle \|f\|_{H^{1}}^{2}=\int _{-\infty }^{\infty }|{\widehat {f}}(\xi )|^{2}(1+|\xi |^{2})\,d\xi <\infty }
is automatically continuous, and satisfies in particular
{\displaystyle |\delta [f]|=|f(0)|\leq C\|f\|_{H^{1}}.}
Thus δ is a bounded linear functional on the Sobolev space H1. Equivalently δ is an element of the continuous dual space H−1 of H1. More generally, in n dimensions, one has δ ∈ H−s(Rn) provided s > n/2.
==== Spaces of holomorphic functions ====
In complex analysis, the delta function enters via Cauchy's integral formula, which asserts that if D is a domain in the complex plane with smooth boundary, then
{\displaystyle f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}},\quad z\in D}
for all holomorphic functions f in D that are continuous on the closure of D. As a result, the delta function δz is represented in this class of holomorphic functions by the Cauchy integral:
{\displaystyle \delta _{z}[f]=f(z)={\frac {1}{2\pi i}}\oint _{\partial D}{\frac {f(\zeta )\,d\zeta }{\zeta -z}}.}
Moreover, let H2(∂D) be the Hardy space consisting of the closure in L2(∂D) of all holomorphic functions in D continuous up to the boundary of D. Then functions in H2(∂D) uniquely extend to holomorphic functions in D, and the Cauchy integral formula continues to hold. In particular for z ∈ D, the delta function δz is a continuous linear functional on H2(∂D). This is a special case of the situation in several complex variables in which, for smooth domains D, the Szegő kernel plays the role of the Cauchy integral.
Another representation of the delta function in a space of holomorphic functions is on the space {\displaystyle H(D)\cap L^{2}(D)} of square-integrable holomorphic functions in an open set {\displaystyle D\subset \mathbb {C} ^{n}}. This is a closed subspace of {\displaystyle L^{2}(D)}, and therefore is a Hilbert space. On the other hand, the functional that evaluates a holomorphic function in {\displaystyle H(D)\cap L^{2}(D)} at a point {\displaystyle z} of {\displaystyle D} is a continuous functional, and so by the Riesz representation theorem, is represented by integration against a kernel {\displaystyle K_{z}(\zeta )}, the Bergman kernel. This kernel is the analog of the delta function in this Hilbert space. A Hilbert space having such a kernel is called a reproducing kernel Hilbert space. In the special case of the unit disc, one has
{\displaystyle \delta _{w}[f]=f(w)={\frac {1}{\pi }}\iint _{|z|<1}{\frac {f(z)\,dx\,dy}{(1-{\bar {z}}w)^{2}}}.}
==== Resolutions of the identity ====
Given a complete orthonormal basis set of functions {φn} in a separable Hilbert space, for example, the normalized eigenvectors of a compact self-adjoint operator, any vector f can be expressed as
{\displaystyle f=\sum _{n=1}^{\infty }\alpha _{n}\varphi _{n}.}
The coefficients {αn} are found as
{\displaystyle \alpha _{n}=\langle \varphi _{n},f\rangle ,}
which may be represented by the notation:
{\displaystyle \alpha _{n}=\varphi _{n}^{\dagger }f,}
a form of the bra–ket notation of Dirac. Adopting this notation, the expansion of f takes the dyadic form:
{\displaystyle f=\sum _{n=1}^{\infty }\varphi _{n}\left(\varphi _{n}^{\dagger }f\right).}
Letting I denote the identity operator on the Hilbert space, the expression
{\displaystyle I=\sum _{n=1}^{\infty }\varphi _{n}\varphi _{n}^{\dagger },}
is called a resolution of the identity. When the Hilbert space is the space L2(D) of square-integrable functions on a domain D, the quantity:
{\displaystyle \varphi _{n}\varphi _{n}^{\dagger },}
is an integral operator, and the expression for f can be rewritten
{\displaystyle f(x)=\sum _{n=1}^{\infty }\int _{D}\,\left(\varphi _{n}(x)\varphi _{n}^{*}(\xi )\right)f(\xi )\,d\xi .}
The right-hand side converges to f in the L2 sense. It need not hold in a pointwise sense, even when f is a continuous function. Nevertheless, it is common to abuse notation and write
{\displaystyle f(x)=\int \,\delta (x-\xi )f(\xi )\,d\xi ,}
resulting in the representation of the delta function:
{\displaystyle \delta (x-\xi )=\sum _{n=1}^{\infty }\varphi _{n}(x)\varphi _{n}^{*}(\xi ).}
With a suitable rigged Hilbert space (Φ, L2(D), Φ*) where Φ ⊂ L2(D) contains all compactly supported smooth functions, this summation may converge in Φ*, depending on the properties of the basis φn. In most cases of practical interest, the orthonormal basis comes from an integral or differential operator (e.g. the heat kernel), in which case the series converges in the distribution sense.
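As a concrete instance, the sine functions φ_n(x) = √(2/π) sin(nx), the Dirichlet eigenfunctions of −d²/dx² on (0, π), form such an orthonormal basis. The following sketch (NumPy; the test function is an illustrative choice vanishing at the boundary) reconstructs f from the coefficients α_n = ⟨φ_n, f⟩:

import numpy as np

# Orthonormal basis of L^2(0, pi): phi_n(x) = sqrt(2/pi) * sin(n*x),
# eigenfunctions of -d^2/dx^2 with Dirichlet boundary conditions.
x = np.linspace(0, np.pi, 2001)
f = x * (np.pi - x)  # test function vanishing at the boundary

approx = np.zeros_like(x)
for n in range(1, 200):
    phi = np.sqrt(2 / np.pi) * np.sin(n * x)
    alpha = np.trapz(phi * f, x)  # alpha_n = <phi_n, f>
    approx += alpha * phi         # partial sum of sum_n phi_n (phi_n^dagger f)

print(np.max(np.abs(approx - f)))  # small: the truncated series recovers f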
=== Infinitesimal delta functions ===
Cauchy used an infinitesimal α to write down a unit impulse, an infinitely tall and narrow Dirac-type delta function δα satisfying
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
in a number of articles in 1827. Cauchy defined an infinitesimal in his Cours d'Analyse (1821) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology.
Non-standard analysis allows one to rigorously treat infinitesimals. The article by Yamashita (2007) contains a bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Here the Dirac delta can be given by an actual function, having the property that for every real function F one has
{\textstyle \int F(x)\delta _{\alpha }(x)\,dx=F(0)}
as anticipated by Fourier and Cauchy.
== Dirac comb ==
A so-called uniform "pulse train" of Dirac delta measures, which is known as a Dirac comb, or as the Sha distribution, creates a sampling function, often used in digital signal processing (DSP) and discrete time signal analysis. The Dirac comb is given as the infinite sum, whose limit is understood in the distribution sense,
{\displaystyle \operatorname {\text{Ш}} (x)=\sum _{n=-\infty }^{\infty }\delta (x-n),}
which is a sequence of point masses at each of the integers.
Up to an overall normalizing constant, the Dirac comb is equal to its own Fourier transform. This is significant because if f is any Schwartz function, then the periodization of f is given by the convolution
{\displaystyle (f*\operatorname {\text{Ш}} )(x)=\sum _{n=-\infty }^{\infty }f(x-n).}
In particular,
{\displaystyle (f*\operatorname {\text{Ш}} )^{\wedge }={\widehat {f}}{\widehat {\operatorname {\text{Ш}} }}={\widehat {f}}\operatorname {\text{Ш}} }
is precisely the Poisson summation formula.
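The Poisson summation formula can be verified numerically for a Gaussian, which is its own Fourier transform under the convention used above; a minimal sketch (the evaluation point x0 and the truncation range are illustrative):

import numpy as np

# Poisson summation for f(x) = exp(-pi*x^2), whose Fourier transform
# (convention exp(-2*pi*i*x*xi)) is itself.
f = lambda x: np.exp(-np.pi * x ** 2)
fhat = f  # self-dual under this convention

n = np.arange(-50, 51)
x0 = 0.3
lhs = np.sum(f(x0 - n))                                    # periodization (f * Sha)(x0)
rhs = np.sum(fhat(n) * np.exp(2j * np.pi * n * x0)).real   # its Fourier series
print(lhs, rhs)  # equal to machine precision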
More generally, this formula remains true if f is a tempered distribution of rapid descent or, equivalently, if {\displaystyle {\widehat {f}}} is a slowly growing, ordinary function within the space of tempered distributions.
== Sokhotski–Plemelj theorem ==
The Sokhotski–Plemelj theorem, important in quantum mechanics, relates the delta function to the distribution p.v. 1/x, the Cauchy principal value of the function 1/x, defined by
{\displaystyle \left\langle \operatorname {p.v.} {\frac {1}{x}},\varphi \right\rangle =\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {\varphi (x)}{x}}\,dx.}
Sokhotsky's formula states that
{\displaystyle \lim _{\varepsilon \to 0^{+}}{\frac {1}{x\pm i\varepsilon }}=\operatorname {p.v.} {\frac {1}{x}}\mp i\pi \delta (x),}
Here the limit is understood in the distribution sense: for all compactly supported smooth functions f,
{\displaystyle \int _{-\infty }^{\infty }\lim _{\varepsilon \to 0^{+}}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+\lim _{\varepsilon \to 0^{+}}\int _{|x|>\varepsilon }{\frac {f(x)}{x}}\,dx.}
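A numerical check of the "+" sign case against a Gaussian test function, for which the principal-value term vanishes by symmetry; the small ε and the grid are illustrative choices (the O(ε) correction explains the slight deviation from π):

import numpy as np

# Check integral of f(x)/(x + i*eps) ~ -i*pi*f(0) for even f, small eps.
f = lambda x: np.exp(-x ** 2)
x = np.linspace(-20, 20, 2_000_001)  # fine grid resolving the width-eps peak
eps = 1e-2

lhs = np.trapz(f(x) / (x + 1j * eps), x)
# Real part ~ 0 (the p.v. integral of an even f over an odd kernel);
# imaginary part ~ -pi*f(0) = -3.14159...
print(lhs)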
== Relationship to the Kronecker delta ==
The Kronecker delta δij is the quantity defined by
{\displaystyle \delta _{ij}={\begin{cases}1&i=j\\0&i\not =j\end{cases}}}
for all integers i, j. This function then satisfies the following analog of the sifting property: if ai (for i in the set of all integers) is any doubly infinite sequence, then
{\displaystyle \sum _{i=-\infty }^{\infty }a_{i}\delta _{ik}=a_{k}.}
Similarly, for any real or complex valued continuous function f on R, the Dirac delta satisfies the sifting property
{\displaystyle \int _{-\infty }^{\infty }f(x)\delta (x-x_{0})\,dx=f(x_{0}).}
This exhibits the Kronecker delta function as a discrete analog of the Dirac delta function.
== Applications ==
=== Probability theory ===
In probability theory and statistics, the Dirac delta function is often used to represent a discrete distribution, or a partially discrete, partially continuous distribution, using a probability density function (which is normally used to represent absolutely continuous distributions). For example, the probability density function f(x) of a discrete distribution consisting of points x = {x1, ..., xn}, with corresponding probabilities p1, ..., pn, can be written as
{\displaystyle f(x)=\sum _{i=1}^{n}p_{i}\delta (x-x_{i}).}
As another example, consider a distribution which 6/10 of the time returns a value drawn from a standard normal distribution, and 4/10 of the time returns exactly the value 3.5 (i.e. a partly continuous, partly discrete mixture distribution). The density function of this distribution can be written as
{\displaystyle f(x)=0.6\,{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}+0.4\,\delta (x-3.5).}
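Sampling from such a mixture makes the delta component tangible: a fixed fraction of the draws lands exactly on the atom. A minimal sketch (NumPy; sample size and seed are arbitrary):

import numpy as np

# With probability 0.4 return exactly 3.5 (the delta component),
# otherwise draw from the standard normal (the continuous component).
rng = np.random.default_rng(42)
n = 100_000
is_atom = rng.random(n) < 0.4
samples = np.where(is_atom, 3.5, rng.standard_normal(n))

print(np.mean(samples == 3.5))   # ~0.4: the point mass carried by the delta
print(samples[~is_atom].std())   # ~1.0: the continuous Gaussian part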
The delta function is also used to represent the resulting probability density function of a random variable that is transformed by a continuously differentiable function. If Y = g(X), where g is continuously differentiable, then the density of Y can be written as
{\displaystyle f_{Y}(y)=\int _{-\infty }^{+\infty }f_{X}(x)\delta (y-g(x))\,dx.}
The delta function is also used in a completely different way to represent the local time of a diffusion process (like Brownian motion). The local time of a stochastic process B(t) is given by
{\displaystyle \ell (x,t)=\int _{0}^{t}\delta (x-B(s))\,ds}
and represents the amount of time that the process spends at the point x in the range of the process. More precisely, in one dimension this integral can be written
{\displaystyle \ell (x,t)=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\varepsilon }}\int _{0}^{t}\mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}(B(s))\,ds}
where {\displaystyle \mathbf {1} _{[x-\varepsilon ,x+\varepsilon ]}} is the indicator function of the interval {\displaystyle [x-\varepsilon ,x+\varepsilon ].}
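A Monte Carlo sketch of this regularized definition, simulating one Brownian path on a grid (step count, ε, and the seed are illustrative choices; the printed value is one sample of the random variable ℓ(0, 1)):

import numpy as np

# Approximate the local time of Brownian motion at x = 0 over [0, t]
# using the indicator definition with a small but fixed eps.
rng = np.random.default_rng(1)
t, steps, eps = 1.0, 1_000_000, 1e-2
dt = t / steps
B = np.concatenate([[0.0],
                    np.cumsum(rng.standard_normal(steps) * np.sqrt(dt))])

# (1/(2*eps)) * (time the path spends in [-eps, eps])
ell = np.sum(np.abs(B) <= eps) * dt / (2 * eps)
print(ell)  # one realization of ell(0, 1)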
=== Quantum mechanics ===
The delta function is expedient in quantum mechanics. The wave function of a particle gives the probability amplitude of finding a particle within a given region of space. Wave functions are assumed to be elements of the Hilbert space L2 of square-integrable functions, and the total probability of finding a particle within a given interval is the integral of the magnitude of the wave function squared over the interval. A set {|φn⟩} of wave functions is orthonormal if
{\displaystyle \langle \varphi _{n}\mid \varphi _{m}\rangle =\delta _{nm},}
where δnm is the Kronecker delta. A set of orthonormal wave functions is complete in the space of square-integrable functions if any wave function |ψ⟩ can be expressed as a linear combination of the {|φn⟩} with complex coefficients:
{\displaystyle \psi =\sum c_{n}\varphi _{n},}
where cn = ⟨φn|ψ⟩. Complete orthonormal systems of wave functions appear naturally as the eigenfunctions of the Hamiltonian (of a bound system) in quantum mechanics that measures the energy levels, which are called the eigenvalues. The set of eigenvalues, in this case, is known as the spectrum of the Hamiltonian. In bra–ket notation this equality implies the resolution of the identity:
{\displaystyle I=\sum |\varphi _{n}\rangle \langle \varphi _{n}|.}
Here the eigenvalues are assumed to be discrete, but the set of eigenvalues of an observable can also be continuous. An example is the position operator, Qψ(x) = xψ(x). The spectrum of the position (in one dimension) is the entire real line and is called a continuous spectrum. However, unlike the Hamiltonian, the position operator lacks proper eigenfunctions. The conventional way to overcome this shortcoming is to widen the class of available functions by allowing distributions as well, i.e., to replace the Hilbert space with a rigged Hilbert space. In this context, the position operator has a complete set of generalized eigenfunctions, labeled by the points y of the real line, given by
{\displaystyle \varphi _{y}(x)=\delta (x-y).}
The generalized eigenfunctions of the position operator are called the eigenkets and are denoted by φy = |y⟩.
Similar considerations apply to any other (unbounded) self-adjoint operator with continuous spectrum and no degenerate eigenvalues, such as the momentum operator P. In that case, there is a set Ω of real numbers (the spectrum) and a collection of distributions φy with y ∈ Ω such that
{\displaystyle P\varphi _{y}=y\varphi _{y}.}
That is, φy are the generalized eigenvectors of P. If they form an "orthonormal basis" in the distribution sense, that is:
{\displaystyle \langle \varphi _{y},\varphi _{y'}\rangle =\delta (y-y'),}
then for any test function ψ,
{\displaystyle \psi (x)=\int _{\Omega }c(y)\varphi _{y}(x)\,dy}
where c(y) = ⟨ψ, φy⟩. That is, there is a resolution of the identity
{\displaystyle I=\int _{\Omega }|\varphi _{y}\rangle \,\langle \varphi _{y}|\,dy}
where the operator-valued integral is again understood in the weak sense. If the spectrum of P has both continuous and discrete parts, then the resolution of the identity involves a summation over the discrete spectrum and an integral over the continuous spectrum.
The delta function also has many more specialized applications in quantum mechanics, such as the delta potential models for a single and double potential well.
=== Structural mechanics ===
The delta function can be used in structural mechanics to describe transient loads or point loads acting on structures. The governing equation of a simple mass–spring system excited by a sudden force impulse I at time t = 0 can be written
{\displaystyle m{\frac {d^{2}\xi }{dt^{2}}}+k\xi =I\delta (t),}
where m is the mass, ξ is the deflection, and k is the spring constant.
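The impulse transfers momentum I to the mass, so the response equals that of the free oscillator started from rest with initial velocity I/m, namely ξ(t) = (I/(mω)) sin(ωt) with ω = √(k/m). A minimal numerical check (leapfrog integration; the parameter values are illustrative):

import numpy as np

# Impulse I*delta(t) is equivalent to initial conditions xi(0)=0, xi'(0)=I/m.
m, k, I = 2.0, 8.0, 3.0
w = np.sqrt(k / m)

t = np.linspace(0, 10, 1001)
analytic = I / (m * w) * np.sin(w * t)

# Leapfrog (velocity Verlet) integration of m*xi'' + k*xi = 0.
dt = t[1] - t[0]
xi, v = 0.0, I / m
trace = []
for _ in t:
    trace.append(xi)
    v_half = v - 0.5 * dt * (k / m) * xi
    xi += dt * v_half
    v = v_half - 0.5 * dt * (k / m) * xi

print(np.max(np.abs(np.array(trace) - analytic)))  # small discretization error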
As another example, the equation governing the static deflection of a slender beam is, according to Euler–Bernoulli theory,
{\displaystyle EI{\frac {d^{4}w}{dx^{4}}}=q(x),}
where EI is the bending stiffness of the beam, w is the deflection, x is the spatial coordinate, and q(x) is the load distribution. If a beam is loaded by a point force F at x = x0, the load distribution is written
{\displaystyle q(x)=F\delta (x-x_{0}).}
As the integration of the delta function results in the Heaviside step function, it follows that the static deflection of a slender beam subject to multiple point loads is described by a set of piecewise polynomials.
Also, a point moment acting on a beam can be described by delta functions. Consider two opposing point forces F at a distance d apart. They then produce a moment M = Fd acting on the beam. Now, let the distance d approach the limit zero, while M is kept constant. The load distribution, assuming a clockwise moment acting at x = 0, is written
{\displaystyle {\begin{aligned}q(x)&=\lim _{d\to 0}{\Big (}F\delta (x)-F\delta (x-d){\Big )}\\[4pt]&=\lim _{d\to 0}\left({\frac {M}{d}}\delta (x)-{\frac {M}{d}}\delta (x-d)\right)\\[4pt]&=M\lim _{d\to 0}{\frac {\delta (x)-\delta (x-d)}{d}}\\[4pt]&=M\delta '(x).\end{aligned}}}
Point moments can thus be represented by the derivative of the delta function. Integration of the beam equation again results in piecewise polynomial deflection.
== See also ==
Atom (measure theory)
Degenerate distribution
Laplacian of the indicator
Uncertainty principle
== Notes ==
== References ==
Aratyn, Henrik; Rasinariu, Constantin (2006), A short course in mathematical methods with Maple, World Scientific, ISBN 978-981-256-461-0.
Arfken, G. B.; Weber, H. J. (2000), Mathematical Methods for Physicists (5th ed.), Boston, Massachusetts: Academic Press, ISBN 978-0-12-059825-0.
ATIS (2013), ATIS Telecom Glossary, archived from the original on 2013-03-13.
Bracewell, R. N. (1986), The Fourier Transform and Its Applications (2nd ed.), McGraw-Hill, Bibcode:1986ftia.book.....B.
Bracewell, R. N. (2000), The Fourier Transform and Its Applications (3rd ed.), McGraw-Hill.
Córdoba, A. (1988), "La formule sommatoire de Poisson", Comptes Rendus de l'Académie des Sciences, Série I, 306: 373–376.
Courant, Richard; Hilbert, David (1962), Methods of Mathematical Physics, Volume II, Wiley-Interscience.
Davis, Howard Ted; Thomson, Kendall T (2000), Linear algebra and linear operators in engineering with applications in Mathematica, Academic Press, ISBN 978-0-12-206349-7
Dieudonné, Jean (1976), Treatise on analysis. Vol. II, New York: Academic Press [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-215502-4, MR 0530406.
Dieudonné, Jean (1972), Treatise on analysis. Vol. III, Boston, Massachusetts: Academic Press, MR 0350769
Dirac, Paul (1930), The Principles of Quantum Mechanics (1st ed.), Oxford University Press.
Driggers, Ronald G. (2003), Encyclopedia of Optical Engineering, CRC Press, Bibcode:2003eoe..book.....D, ISBN 978-0-8247-0940-2.
Duistermaat, Hans; Kolk (2010), Distributions: Theory and applications, Springer.
Federer, Herbert (1969), Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, vol. 153, New York: Springer-Verlag, pp. xiv+676, ISBN 978-3-540-60656-7, MR 0257325.
Gannon, Terry (2008), "Vertex operator algebras", Princeton Companion to Mathematics, Princeton University Press, ISBN 978-1400830398.
Gelfand, I. M.; Shilov, G. E. (1966–1968), Generalized functions, vol. 1–5, Academic Press, ISBN 9781483262246.
Hartmann, William M. (1997), Signals, sound, and sensation, Springer, ISBN 978-1-56396-283-7.
Hazewinkel, Michiel (1995). Encyclopaedia of Mathematics (set). Springer Science & Business Media. ISBN 978-1-55608-010-4.
Hazewinkel, Michiel (2011). Encyclopaedia of mathematics. Vol. 10. Springer. ISBN 978-90-481-4896-7. OCLC 751862625.
Hewitt, E; Stromberg, K (1963), Real and abstract analysis, Springer-Verlag.
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 978-3-540-12104-6, MR 0717035.
Isham, C. J. (1995), Lectures on quantum theory: mathematical and structural foundations, Imperial College Press, Bibcode:1995lqtm.book.....I, ISBN 978-81-7764-190-5.
John, Fritz (1955), Plane waves and spherical means applied to partial differential equations, Interscience Publishers, New York-London, MR 0075429. Reprinted, Dover Publications, 2004, ISBN 9780486438047.
Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4757-2698-5, ISBN 978-0-387-94841-6, MR 1476913.
Lange, Rutger-Jan (2012), "Potential theory, path integrals and the Laplacian of the indicator", Journal of High Energy Physics, 2012 (11): 29–30, arXiv:1302.0864, Bibcode:2012JHEP...11..032L, doi:10.1007/JHEP11(2012)032, S2CID 56188533.
Laugwitz, D. (1989), "Definite values of infinite sums: aspects of the foundations of infinitesimal analysis around 1820", Arch. Hist. Exact Sci., 39 (3): 195–245, doi:10.1007/BF00329867, S2CID 120890300.
Levin, Frank S. (2002), "Coordinate-space wave functions and completeness", An introduction to quantum theory, Cambridge University Press, pp. 109ff, ISBN 978-0-521-59841-5
Li, Y. T.; Wong, R. (2008), "Integral and series representations of the Dirac delta function", Commun. Pure Appl. Anal., 7 (2): 229–247, arXiv:1303.1943, doi:10.3934/cpaa.2008.7.229, MR 2373214, S2CID 119319140.
de la Madrid Modino, R. (2001). Quantum mechanics in rigged Hilbert space language (PhD thesis). Universidad de Valladolid.
de la Madrid, R.; Bohm, A.; Gadella, M. (2002), "Rigged Hilbert Space Treatment of Continuous Spectrum", Fortschr. Phys., 50 (2): 185–216, arXiv:quant-ph/0109154, Bibcode:2002ForPh..50..185D, doi:10.1002/1521-3978(200203)50:2<185::AID-PROP185>3.0.CO;2-S, S2CID 9407651.
McMahon, D. (2005-11-22), "An Introduction to State Space" (PDF), Quantum Mechanics Demystified, A Self-Teaching Guide, Demystified Series, New York: McGraw-Hill, p. 108, ISBN 978-0-07-145546-6, retrieved 2008-03-17.
van der Pol, Balth.; Bremmer, H. (1987), Operational calculus (3rd ed.), New York: Chelsea Publishing Co., ISBN 978-0-8284-0327-6, MR 0904873.
Rudin, Walter (1966). Devine, Peter R. (ed.). Real and complex analysis (3rd ed.). New York: McGraw-Hill (published 1987). ISBN 0-07-100276-6.
Rudin, Walter (1991), Functional Analysis (2nd ed.), McGraw-Hill, ISBN 978-0-07-054236-5.
Vallée, Olivier; Soares, Manuel (2004), Airy functions and applications to physics, London: Imperial College Press, ISBN 9781911299486.
Saichev, A I; Woyczyński, Wojbor Andrzej (1997), "Chapter1: Basic definitions and operations", Distributions in the Physical and Engineering Sciences: Distributional and fractal calculus, integral transforms, and wavelets, Birkhäuser, ISBN 978-0-8176-3924-2
Schwartz, L. (1950), Théorie des distributions, vol. 1, Hermann.
Schwartz, L. (1951), Théorie des distributions, vol. 2, Hermann.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 978-0-691-08078-9.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 978-0-8493-8273-4.
Vladimirov, V. S. (1971), Equations of mathematical physics, Marcel Dekker, ISBN 978-0-8247-1713-1.
Weisstein, Eric W. "Delta Function". MathWorld.
Yamashita, H. (2006), "Pointwise analysis of scalar fields: A nonstandard approach", Journal of Mathematical Physics, 47 (9): 092301, Bibcode:2006JMP....47i2301Y, doi:10.1063/1.2339017
Yamashita, H. (2007), "Comment on "Pointwise analysis of scalar fields: A nonstandard approach" [J. Math. Phys. 47, 092301 (2006)]", Journal of Mathematical Physics, 48 (8): 084101, Bibcode:2007JMP....48h4101Y, doi:10.1063/1.2771422
== External links ==
Media related to Dirac distribution at Wikimedia Commons
"Delta-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
KhanAcademy.org video lesson
The Dirac Delta function, a tutorial on the Dirac delta function.
Video Lectures – Lecture 23, a lecture by Arthur Mattuck.
The Dirac delta measure is a hyperfunction
We show the existence of a unique solution and analyze a finite element approximation when the source term is a Dirac delta measure
Non-Lebesgue measures on R. Lebesgue-Stieltjes measure, Dirac delta measure. Archived 2008-03-07 at the Wayback Machine
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix, or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.
The Laplacian matrix relates to many functional graph properties. Kirchhoff's theorem can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector (the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian) as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows the construction of low-dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform, which extends the traditional discrete Fourier transform by replacing the standard basis of complex sinusoids with the eigenvectors of the Laplacian matrix of the graph corresponding to the signal.
The Laplacian matrix is easiest to define for a simple graph, but is more common in applications for an edge-weighted graph, i.e., with weights on its edges, which are the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need for normalization, a column/row scaling of the matrix entries, resulting in normalized adjacency and Laplacian matrices.
== Definitions for simple graphs ==
=== Laplacian matrix ===
Given a simple graph {\displaystyle G} with {\displaystyle n} vertices {\displaystyle v_{1},\ldots ,v_{n}}, its Laplacian matrix {\textstyle L_{n\times n}} is defined element-wise as
{\displaystyle L_{i,j}:={\begin{cases}\deg(v_{i})&{\mbox{if}}\ i=j\\-1&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}},\end{cases}}}
or equivalently by the matrix
{\displaystyle L=D-A,}
where D is the degree matrix, and A is the graph's adjacency matrix. Since {\textstyle G} is a simple graph, {\textstyle A} only contains 1s or 0s and its diagonal elements are all 0s.
Here is a simple example of a labelled, undirected graph and its Laplacian matrix.
We observe for the undirected graph that both the adjacency matrix and the Laplacian matrix are symmetric and that the row- and column-sums of the Laplacian matrix are all zeros (which directly implies that the Laplacian matrix is singular).
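These observations are easy to reproduce; a minimal NumPy sketch for a path graph on four vertices (an illustrative example graph):

import numpy as np

# Laplacian L = D - A for a small undirected graph (a path on 4 vertices).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A

print(L.sum(axis=0), L.sum(axis=1))  # all zeros: rows and columns sum to 0
print(np.allclose(L, L.T))           # True: symmetric for an undirected graph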
For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example:
In the directed graph, the adjacency matrix and Laplacian matrix are asymmetric. In its Laplacian matrix, column-sums or row-sums are zero, depending on whether the indegree or outdegree has been used.
=== Laplacian matrix for an undirected graph via the oriented incidence matrix ===
The {\textstyle |v|\times |e|} oriented incidence matrix B with element Bve for the vertex v and the edge e (connecting vertices {\textstyle v_{i}} and {\textstyle v_{j}}, with i ≠ j) is defined by
{\displaystyle B_{ve}=\left\{{\begin{array}{rl}1,&{\text{if }}v=v_{i}\\-1,&{\text{if }}v=v_{j}\\0,&{\text{otherwise}}.\end{array}}\right.}
Even though the edges in this definition are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian {\textstyle |v|\times |v|} matrix L defined as
{\displaystyle L=BB^{\textsf {T}}}
where {\textstyle B^{\textsf {T}}} is the matrix transpose of B.
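The orientation-independence can be checked directly; the following sketch (NumPy, with an arbitrarily oriented edge list for the same illustrative path graph) confirms that BBᵀ equals D − A:

import numpy as np

# Oriented incidence matrix B for a path graph; edge orientations are
# chosen arbitrarily, yet B @ B.T always equals D - A.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
B = np.zeros((n, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e] = 1.0   # arbitrary orientation: +1 at one endpoint...
    B[j, e] = -1.0  # ...and -1 at the other

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

print(np.allclose(B @ B.T, L))  # True, independent of the chosen orientations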
An alternative product {\displaystyle B^{\textsf {T}}B} defines the so-called {\textstyle |e|\times |e|} edge-based Laplacian, as opposed to the original commonly used vertex-based Laplacian matrix L.
=== Symmetric Laplacian for a directed graph ===
The Laplacian matrix of a directed graph is by definition generally non-symmetric, while, e.g., traditional spectral clustering is primarily developed for undirected graphs with symmetric adjacency and Laplacian matrices. A trivial approach to applying techniques requiring the symmetry is to turn the original directed graph into an undirected graph and build the Laplacian matrix for the latter.
In the matrix notation, the adjacency matrix of the undirected graph could, e.g., be defined as a Boolean sum of the adjacency matrix {\displaystyle A} of the original directed graph and its matrix transpose {\displaystyle A^{T}}, where the zero and one entries of {\displaystyle A} are treated as logical, rather than numerical, values, as in the following example:
=== Laplacian matrix normalization ===
A vertex with a large degree, also called a heavy node, results in a large diagonal entry in the Laplacian matrix dominating the matrix properties. Normalization is aimed to make the influence of such vertices more equal to that of other vertices, by dividing the entries of the Laplacian matrix by the vertex degrees. To avoid division by zero, isolated vertices with zero degrees are excluded from the process of the normalization.
==== Symmetrically normalized Laplacian ====
The symmetrically normalized Laplacian matrix is defined as:
{\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=I-(D^{+})^{1/2}A(D^{+})^{1/2},}
where {\displaystyle D^{+}} is the Moore–Penrose inverse of the degree matrix.
The elements of {\textstyle L^{\text{sym}}} are thus given by
{\displaystyle L_{i,j}^{\text{sym}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\sqrt {\deg(v_{i})\deg(v_{j})}}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}
The symmetrically normalized Laplacian matrix is symmetric if and only if the adjacency matrix is symmetric.
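A sketch of the computation (NumPy; the example graph with one isolated vertex is an illustrative choice), using the diagonal pseudoinverse so that zero-degree vertices simply stay zero:

import numpy as np

# Symmetrically normalized Laplacian via the Moore-Penrose inverse of D.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]])  # vertex 3 is isolated
deg = A.sum(axis=1)
# Pseudoinverse square root of the diagonal degree matrix:
# invert only the nonzero degrees, keep zeros for isolated vertices.
safe = np.where(deg > 0, deg, 1)
d_pinv_sqrt = np.diag(np.where(deg > 0, 1.0 / np.sqrt(safe), 0.0))

L = np.diag(deg) - A
L_sym = d_pinv_sqrt @ L @ d_pinv_sqrt
print(np.round(L_sym, 3))  # unit diagonal except the zero row/column of vertex 3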
For a non-symmetric adjacency matrix of a directed graph, either of indegree and outdegree can be used for normalization:
==== Left (random-walk) and right normalized Laplacians ====
The left (random-walk) normalized Laplacian matrix is defined as:
{\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A,}
where {\displaystyle D^{+}} is the Moore–Penrose inverse.
The elements of {\textstyle L^{\text{rw}}} are given by
{\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if }}i=j{\mbox{ and }}\deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}
Similarly, the right normalized Laplacian matrix is defined as {\displaystyle LD^{+}=I-AD^{+}}.
The left or right normalized Laplacian matrix is not symmetric if the adjacency matrix is symmetric, except for the trivial case of all isolated vertices. For example,
The example also demonstrates that if {\displaystyle G} has no isolated vertices, then {\displaystyle D^{+}A} is right stochastic and hence is the matrix of a random walk, so that the left normalized Laplacian {\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A} has each row summing to zero. Thus we sometimes alternatively call {\displaystyle L^{\text{rw}}} the random-walk normalized Laplacian. In the less commonly used right normalized Laplacian {\displaystyle LD^{+}=I-AD^{+}} each column sums to zero since {\displaystyle AD^{+}} is left stochastic.
For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization:
The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic {\displaystyle D_{\text{out}}^{+}A}, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic {\displaystyle AD_{\text{in}}^{+}}.
== Definitions for graphs with weighted edges ==
Graphs with weighted edges, which are common in applications, are conveniently defined by their adjacency matrices, where the values of the entries are numeric and no longer limited to zeros and ones. In spectral clustering and graph-based signal processing, where graph vertices represent data points, the edge weights can be computed, e.g., as inversely proportional to the distances between pairs of data points, leading to all weights being non-negative with larger values informally corresponding to more similar pairs of data points. Using correlation and anti-correlation between the data points naturally leads to both positive and negative weights. Most definitions for simple graphs are trivially extended to the standard case of non-negative weights, while negative weights require more attention, especially in normalization.
=== Laplacian matrix ===
The Laplacian matrix is defined by
{\displaystyle L=D-A,}
where D is the degree matrix and A is the adjacency matrix of the graph.
For directed graphs, either the indegree or outdegree might be used, depending on the application, as in the following example:
Graph self-loops, manifesting themselves by non-zero entries on the main diagonal of the adjacency matrix, are allowed but do not affect the graph Laplacian values.
=== Symmetric Laplacian via the incidence matrix ===
For graphs with weighted edges one can define a weighted incidence matrix B and use it to construct the corresponding symmetric Laplacian as
{\displaystyle L=BB^{\textsf {T}}}. An alternative cleaner approach, described here, is to separate the weights from the connectivity: continue using the incidence matrix as for regular graphs and introduce a matrix just holding the values of the weights. A spring system is an example of this model used in mechanics to describe a system of springs of given stiffnesses and unit length, where the values of the stiffnesses play the role of the weights of the graph edges.
We thus reuse the definition of the weightless {\textstyle |v|\times |e|} incidence matrix B with element Bve for the vertex v and the edge e (connecting vertices {\textstyle v_{i}} and {\textstyle v_{j}}, with i > j) defined by
{\displaystyle B_{ve}=\left\{{\begin{array}{rl}1,&{\text{if }}v=v_{i}\\-1,&{\text{if }}v=v_{j}\\0,&{\text{otherwise}}.\end{array}}\right.}
We now also define a diagonal {\textstyle |e|\times |e|} matrix W containing the edge weights. Even though the edges in the definition of B are technically directed, their directions can be arbitrary, still resulting in the same symmetric Laplacian {\textstyle |v|\times |v|} matrix L defined as
{\displaystyle L=BWB^{\textsf {T}}}
where {\textstyle B^{\textsf {T}}} is the matrix transpose of B.
The construction is illustrated in the following example, where every edge {\textstyle e_{i}} is assigned the weight value i, with {\textstyle i=1,2,3,4.}
=== Symmetric Laplacian for a directed graph ===
Just like for simple graphs, the Laplacian matrix of a directed weighted graph is by definition generally non-symmetric. The symmetry can be enforced by turning the original directed graph into an undirected graph first before constructing the Laplacian. The adjacency matrix of the undirected graph could, e.g., be defined as a sum of the adjacency matrix
{\displaystyle A} of the original directed graph and its matrix transpose {\displaystyle A^{T}} as in the following example:
where the zero and one entries of {\displaystyle A} are treated as numerical, rather than logical (as for simple graphs), values; this explains the difference in the results: for simple graphs, the symmetrized graph still needs to be simple, with its symmetrized adjacency matrix having only logical, not numerical, values, e.g., the logical sum is 1 ∨ 1 = 1, while the numeric sum is 1 + 1 = 2.
Alternatively, the symmetric Laplacian matrix can be calculated from the two Laplacians using the indegree and outdegree, as in the following example:
The sum of the out-degree Laplacian transposed and the in-degree Laplacian equals the symmetric Laplacian matrix.
=== Laplacian matrix normalization ===
The goal of normalization is, like for simple graphs, to make the diagonal entries of the Laplacian matrix all unit, also scaling off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges but with large weights just as well as due to a large number of connected edges with unit weights.
Graph self-loops, i.e., non-zero entries on the main diagonal of the adjacency matrix, do not affect the graph Laplacian values, but may need to be counted for calculation of the normalization factors.
==== Symmetrically normalized Laplacian ====
The symmetrically normalized Laplacian is defined as
{\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=I-(D^{+})^{1/2}A(D^{+})^{1/2},}
where L is the unnormalized Laplacian, A is the adjacency matrix, D is the degree matrix, and {\displaystyle D^{+}} is the Moore–Penrose inverse. Since the degree matrix D is diagonal, its reciprocal square root {\textstyle (D^{+})^{1/2}} is just the diagonal matrix whose diagonal entries are the reciprocals of the square roots of the diagonal entries of D. If all the edge weights are nonnegative then all the degree values are automatically also nonnegative and so every degree value has a unique positive square root. To avoid the division by zero, vertices with zero degrees are excluded from the process of the normalization, as in the following example:
The symmetrically normalized Laplacian is a symmetric matrix if and only if the adjacency matrix A is symmetric and the diagonal entries of D are nonnegative, in which case we can use the term the symmetric normalized Laplacian.
The symmetric normalized Laplacian matrix can be also written as
{\displaystyle L^{\text{sym}}:=(D^{+})^{1/2}L(D^{+})^{1/2}=(D^{+})^{1/2}BWB^{\textsf {T}}(D^{+})^{1/2}=SS^{T}}
using the weightless {\textstyle |v|\times |e|} incidence matrix B and the diagonal {\textstyle |e|\times |e|} matrix W containing the edge weights, and defining the new {\textstyle |v|\times |e|} weighted incidence matrix {\textstyle S=(D^{+})^{1/2}BW^{{1}/{2}}} whose rows are indexed by the vertices and whose columns are indexed by the edges of G, such that each column corresponding to an edge e = {u, v} has an entry {\textstyle {\frac {1}{\sqrt {d_{u}}}}} in the row corresponding to u, an entry {\textstyle -{\frac {1}{\sqrt {d_{v}}}}} in the row corresponding to v, and has 0 entries elsewhere.
==== Random walk normalized Laplacian ====
The random walk normalized Laplacian is defined as
{\displaystyle L^{\text{rw}}:=D^{+}L=I-D^{+}A}
where D is the degree matrix. Since the degree matrix D is diagonal, its inverse {\textstyle D^{+}} is simply defined as a diagonal matrix, having diagonal entries which are the reciprocals of the corresponding diagonal entries of D. For the isolated vertices (those with degree 0), a common choice is to set the corresponding element {\textstyle L_{i,i}^{\text{rw}}} to 0. The matrix elements of {\textstyle L^{\text{rw}}} are given by
{\displaystyle L_{i,j}^{\text{rw}}:={\begin{cases}1&{\mbox{if}}\ i=j\ {\mbox{and}}\ \deg(v_{i})\neq 0\\-{\frac {1}{\deg(v_{i})}}&{\mbox{if}}\ i\neq j\ {\mbox{and}}\ v_{i}{\mbox{ is adjacent to }}v_{j}\\0&{\mbox{otherwise}}.\end{cases}}}
The name of the random-walk normalized Laplacian comes from the fact that this matrix is {\textstyle L^{\text{rw}}=I-P}, where {\textstyle P=D^{+}A} is simply the transition matrix of a random walker on the graph, assuming non-negative weights. For example, let {\textstyle e_{i}} denote the i-th standard basis vector. Then {\textstyle x=e_{i}P} is a probability vector representing the distribution of a random walker's locations after taking a single step from vertex {\textstyle i}; i.e., {\textstyle x_{j}=\mathbb {P} \left(v_{i}\to v_{j}\right)}. More generally, if the vector {\textstyle x} is a probability distribution of the location of a random walker on the vertices of the graph, then {\textstyle x'=xP^{t}} is the probability distribution of the walker after {\textstyle t} steps.
The random walk normalized Laplacian can also be called the left normalized Laplacian {\displaystyle L^{\text{rw}}:=D^{+}L} since the normalization is performed by multiplying the Laplacian by the normalization matrix {\displaystyle D^{+}} on the left. It has each row summing to zero since {\displaystyle P=D^{+}A} is right stochastic, assuming all the weights are non-negative.
In the less commonly used right normalized Laplacian {\displaystyle LD^{+}=I-AD^{+}} each column sums to zero since {\displaystyle AD^{+}} is left stochastic.
For a non-symmetric adjacency matrix of a directed graph, one also needs to choose indegree or outdegree for normalization:
The left out-degree normalized Laplacian with row-sums all 0 relates to right stochastic {\displaystyle D_{\text{out}}^{+}A}, while the right in-degree normalized Laplacian with column-sums all 0 contains left stochastic {\displaystyle AD_{\text{in}}^{+}}.
==== Negative weights ====
Negative weights present several challenges for normalization:
The presence of negative weights may naturally result in zero row- and/or column-sums for non-isolated vertices. A vertex with a large row-sum of positive weights and equally negatively large row-sum of negative weights, together summing up to zero, could be considered a heavy node and both large values scaled, while the diagonal entry remains zero, like for an isolated vertex.
Negative weights may also give negative row- and/or column-sums, so that the corresponding diagonal entry in the non-normalized Laplacian matrix would be negative and a positive square root needed for the symmetric normalization would not exist.
Arguments can be made to take the absolute value of the row- and/or column-sums for the purpose of normalization, thus treating a possible value -1 as a legitimate unit entry of the main diagonal of the normalized Laplacian matrix.
== Properties ==
For an (undirected) graph G and its Laplacian matrix L with eigenvalues {\textstyle \lambda _{0}\leq \lambda _{1}\leq \cdots \leq \lambda _{n-1}}:
L is symmetric.
L is positive-semidefinite (that is {\textstyle \lambda _{i}\geq 0} for all {\textstyle i}). This can be seen from the fact that the Laplacian is symmetric and diagonally dominant.
L is an M-matrix (its off-diagonal entries are nonpositive, yet the real parts of its eigenvalues are nonnegative).
Every row sum and column sum of L is zero. Indeed, in the sum, the degree of the vertex is summed with a "−1" for each neighbor.
In consequence, {\textstyle \lambda _{0}=0}, because the vector {\textstyle \mathbf {v} _{0}=(1,1,\dots ,1)} satisfies {\textstyle L\mathbf {v} _{0}=\mathbf {0} .} This also implies that the Laplacian matrix is singular.
The number of connected components in the graph is the dimension of the nullspace of the Laplacian and the algebraic multiplicity of the 0 eigenvalue.
The smallest non-zero eigenvalue of L is called the spectral gap.
The second smallest eigenvalue of L (could be zero) is the algebraic connectivity (or Fiedler value) of G and approximates the sparsest cut of a graph.
The Laplacian is an operator on the n-dimensional vector space of functions {\textstyle f:V\to \mathbb {R} }, where {\textstyle V} is the vertex set of G, and {\textstyle n=|V|}.
When G is k-regular, the normalized Laplacian is {\textstyle {\mathcal {L}}={\tfrac {1}{k}}L=I-{\tfrac {1}{k}}A}, where A is the adjacency matrix and I is an identity matrix.
For a graph with multiple connected components, L is a block diagonal matrix, where each block is the respective Laplacian matrix for each component, possibly after reordering the vertices (i.e. L is permutation-similar to a block diagonal matrix).
The trace of the Laplacian matrix L is equal to {\textstyle 2m} where {\textstyle m} is the number of edges of the considered graph.
Now consider an eigendecomposition of {\textstyle L}, with unit-norm eigenvectors {\textstyle \mathbf {v} _{i}} and corresponding eigenvalues {\textstyle \lambda _{i}}. Writing {\textstyle L=M^{\textsf {T}}M} with {\textstyle M=B^{\textsf {T}}}, the transpose of the oriented incidence matrix B defined above, we get:
{\displaystyle {\begin{aligned}\lambda _{i}&=\mathbf {v} _{i}^{\textsf {T}}L\mathbf {v} _{i}\\&=\mathbf {v} _{i}^{\textsf {T}}M^{\textsf {T}}M\mathbf {v} _{i}\\&=\left(M\mathbf {v} _{i}\right)^{\textsf {T}}\left(M\mathbf {v} _{i}\right).\\\end{aligned}}}
Because {\textstyle \lambda _{i}} can be written as the inner product of the vector {\textstyle M\mathbf {v} _{i}} with itself, this shows that {\textstyle \lambda _{i}\geq 0} and so the eigenvalues of {\textstyle L} are all non-negative.
All eigenvalues of the normalized symmetric Laplacian satisfy 0 = μ0 ≤ … ≤ μn−1 ≤ 2. These eigenvalues (known as the spectrum of the normalized Laplacian) relate well to other graph invariants for general graphs.
One can check that:
{\displaystyle L^{\text{rw}}=I-D^{-{\frac {1}{2}}}\left(I-L^{\text{sym}}\right)D^{\frac {1}{2}}},
i.e., {\textstyle L^{\text{rw}}} is similar to the normalized Laplacian {\textstyle L^{\text{sym}}}. For this reason, even if {\textstyle L^{\text{rw}}} is in general not symmetric, it has real eigenvalues, exactly the same as the eigenvalues of the normalized symmetric Laplacian {\textstyle L^{\text{sym}}}.
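Several of the properties listed above can be confirmed numerically; a minimal sketch (NumPy; the two-component example graph is an illustrative choice):

import numpy as np

# Graph with two connected components: a triangle {0,1,2} and an edge {3,4}.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
L = np.diag(deg) - A

evals = np.linalg.eigvalsh(L)
print(np.round(evals, 6))              # all >= 0; smallest eigenvalue is 0
print(np.sum(np.isclose(evals, 0)))    # 2 = number of connected components
print(np.isclose(np.trace(L), 2 * 4))  # True: trace = 2m with m = 4 edges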
== Interpretation as the discrete Laplace operator approximating the continuous Laplacian ==
The graph Laplacian matrix can be further viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian operator obtained by the finite difference method.
(See Discrete Poisson equation.) In this interpretation, every graph vertex is treated as a grid point; the local connectivity of the vertex determines the finite difference approximation stencil at this grid point, the grid size is always one for every edge, and there are no constraints on any grid points, which corresponds to the case of the homogeneous Neumann boundary condition, i.e., free boundary. Such an interpretation allows one, e.g., to generalize the Laplacian matrix to the case of graphs with an infinite number of vertices and edges, leading to a Laplacian matrix of an infinite size.
== Generalizations and extensions of the Laplacian matrix ==
=== Generalized Laplacian ===
The generalized Laplacian {\displaystyle Q} is defined as:
{\displaystyle {\begin{cases}Q_{i,j}<0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is adjacent to }}v_{j}\\Q_{i,j}=0&{\mbox{if }}i\neq j{\mbox{ and }}v_{i}{\mbox{ is not adjacent to }}v_{j}\\{\mbox{any number}}&{\mbox{otherwise}}.\end{cases}}}
Notice the ordinary Laplacian is a generalized Laplacian.
=== Admittance matrix of an AC circuit ===
The Laplacian of a graph was first introduced to model electrical networks.
In an alternating current (AC) electrical network, real-valued resistances are replaced by complex-valued impedances.
The weight of edge (i, j) is, by convention, minus the reciprocal of the impedance directly between i and j.
In models of such networks, the entries of the adjacency matrix are complex, but the Kirchhoff matrix remains symmetric, rather than being Hermitian.
Such a matrix is usually called an "admittance matrix", denoted {\displaystyle Y}, rather than a "Laplacian".
This is one of the rare applications that give rise to complex symmetric matrices.
=== Magnetic Laplacian ===
There are other situations in which entries of the adjacency matrix are complex-valued, and the Laplacian does become a Hermitian matrix. The magnetic Laplacian for a directed graph with real weights {\displaystyle w_{ij}} is constructed as the Hadamard product of the real symmetric matrix of the symmetrized Laplacian and the Hermitian phase matrix with the complex entries
{\displaystyle \gamma _{q}(i,j)=e^{i2\pi q(w_{ij}-w_{ji})}}
which encode the edge direction into the phase in the complex plane.
In the context of quantum physics, the magnetic Laplacian can be interpreted as the operator that describes the phenomenology of a free charged particle on a graph, which is subject to the action of a magnetic field, and the parameter {\displaystyle q} is called the electric charge. In the following example {\displaystyle q=1/4}:
=== Deformed Laplacian ===
The deformed Laplacian is commonly defined as
{\displaystyle \Delta (s)=I-sA+s^{2}(D-I)}
where I is the identity matrix, A is the adjacency matrix, D is the degree matrix, and s is a (complex-valued) number. The standard Laplacian is just {\textstyle \Delta (1)} and {\textstyle \Delta (-1)=D+A} is the signless Laplacian.
=== Signless Laplacian ===
The signless Laplacian is defined as
{\displaystyle Q=D+A}
where {\displaystyle D} is the degree matrix, and {\displaystyle A} is the adjacency matrix. Like the signed Laplacian {\displaystyle L}, the signless Laplacian {\displaystyle Q} also is positive semi-definite, as it can be factored as
{\displaystyle Q=RR^{\textsf {T}}}
where {\textstyle R} is the incidence matrix. {\displaystyle Q} has a 0-eigenvector if and only if it has a bipartite connected component (isolated vertices being bipartite connected components). This can be shown as
{\displaystyle \mathbf {x} ^{\textsf {T}}Q\mathbf {x} =\mathbf {x} ^{\textsf {T}}RR^{\textsf {T}}\mathbf {x} =\left(R^{\textsf {T}}\mathbf {x} \right)^{\textsf {T}}\left(R^{\textsf {T}}\mathbf {x} \right),}
so that {\displaystyle Q\mathbf {x} =\mathbf {0} } has a solution with {\displaystyle \mathbf {x} \neq \mathbf {0} } if and only if {\displaystyle R^{\textsf {T}}\mathbf {x} =\mathbf {0} } does, which happens if and only if the graph has a bipartite connected component.
=== Directed multigraphs ===
An analogue of the Laplacian matrix can be defined for directed multigraphs. In this case the Laplacian matrix L is defined as
{\displaystyle L=D-A}
where D is a diagonal matrix with Di,i equal to the outdegree of vertex i and A is a matrix with Ai,j equal to the number of edges from i to j (including loops).
== Open source software implementations ==
SciPy
NetworkX
Julia
== Application software ==
scikit-learn Spectral Clustering
PyGSP: Graph Signal Processing in Python
megaman: Manifold Learning for Millions of Points
smoothG
Laplacian Change Point Detection for Dynamic Graphs (KDD 2020)
LaplacianOpt (A Julia Package for Maximizing Laplacian's Second Eigenvalue of Weighted Graphs)
LigMG (Large Irregular Graph MultiGrid)
Laplacians.jl
== See also ==
Stiffness matrix
Resistance distance
Transition rate matrix
Calculus on finite weighted graphs
Graph Fourier transform
== References ==
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.
== Applications ==
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality).
The transforms are usually invertible.
The exponential functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones. Therefore, the behavior of a linear time-invariant system can be analyzed at each frequency independently.
By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as signal filtering, polynomial multiplication, and multiplying large numbers.
The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis for measuring the wavelengths of light at which a material will absorb in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. And by using a computer, these Fourier calculations are rapidly carried out, so that in a matter of seconds, a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument.
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When a function {\displaystyle s(t)} is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function {\displaystyle S(f)} at frequency {\displaystyle f} represents the amplitude of a frequency component whose initial phase is given by the angle of {\displaystyle S(f)} (polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
Some examples include:
Equalization of audio recordings with a series of bandpass filters;
Digital radio reception without a superheterodyne circuit, as in a modern cell phone or radio scanner;
Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
Cross correlation of similar images for co-alignment;
X-ray crystallography to reconstruct a crystal structure from its diffraction pattern;
Fourier-transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field;
Many other forms of spectroscopy, including infrared and nuclear magnetic resonance spectroscopies;
Generation of sound spectrograms used to analyze sounds;
Passive sonar used to classify targets based on machinery noise.
== Variants of Fourier analysis ==
=== (Continuous) Fourier transform ===
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time ({\displaystyle t}), and the domain of the output (final) function is ordinary frequency, the transform of function {\displaystyle s(t)} at frequency {\displaystyle f} is given by the complex number:
{\displaystyle S(f)=\int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi ft}\,dt.}
Evaluating this quantity for all values of {\displaystyle f} produces the frequency-domain function. Then {\displaystyle s(t)} can be represented as a recombination of complex exponentials of all possible frequencies:
{\displaystyle s(t)=\int _{-\infty }^{\infty }S(f)\cdot e^{i2\pi ft}\,df,}
which is the inverse transform formula. The complex number, {\displaystyle S(f),} conveys both amplitude and phase of frequency {\displaystyle f.}
See Fourier transform for much more information, including:
conventions for amplitude normalization and frequency scaling/units
transform properties
tabulated transforms of specific functions
an extension/generalization for functions of multiple dimensions, such as images.
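As a numerical sanity check of the transform integral above, the following sketch (assuming NumPy; the Riemann-sum approximation and the Gaussian test signal are our choices) uses the pair s(t) = e^(−πt²), whose transform is S(f) = e^(−πf²):
import numpy as np

dt = 1e-3
t = np.arange(-10, 10, dt)          # a window wide enough that s(t) has decayed
s = np.exp(-np.pi * t**2)

def fourier_transform(f):
    # Riemann-sum approximation of S(f) = integral of s(t) exp(-i 2 pi f t) dt
    return np.sum(s * np.exp(-2j * np.pi * f * t)) * dt

for f in [0.0, 0.5, 1.0]:
    print(f, abs(fourier_transform(f)), np.exp(-np.pi * f**2))
# the two columns agree to high accuracy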
=== Fourier series ===
The Fourier transform of a periodic function, {\displaystyle s_{_{P}}(t),} with period {\displaystyle P,} becomes a Dirac comb function, modulated by a sequence of complex coefficients:
{\displaystyle S[k]={\frac {1}{P}}\int _{P}s_{_{P}}(t)\cdot e^{-i2\pi {\frac {k}{P}}t}\,dt,\quad k\in \mathbb {Z} ,}
(where {\displaystyle \int _{P}} is the integral over any interval of length {\displaystyle P}).
The inverse transform, known as Fourier series, is a representation of {\displaystyle s_{_{P}}(t)} in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:
{\displaystyle s_{_{P}}(t)\ \ =\ \ {\mathcal {F}}^{-1}\left\{\sum _{k=-\infty }^{+\infty }S[k]\,\delta \left(f-{\frac {k}{P}}\right)\right\}\ \ =\ \ \sum _{k=-\infty }^{\infty }S[k]\cdot e^{i2\pi {\frac {k}{P}}t}.}
Any {\displaystyle s_{_{P}}(t)} can be expressed as a periodic summation of another function, {\displaystyle s(t)}:
{\displaystyle s_{_{P}}(t)\,\triangleq \,\sum _{m=-\infty }^{\infty }s(t-mP),}
and the coefficients are proportional to samples of {\displaystyle S(f)} at discrete intervals of {\displaystyle {\frac {1}{P}}}:
{\displaystyle S[k]={\frac {1}{P}}\cdot S\left({\frac {k}{P}}\right).}
Note that any {\displaystyle s(t)} whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering {\displaystyle s(t)} (and therefore {\displaystyle S(f)}) from just these samples (i.e. from the Fourier series) is that the non-zero portion of {\displaystyle s(t)} be confined to a known interval of duration {\displaystyle P,} which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
See Fourier series for more information, including the historical development.
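The coefficient integral lends itself to a direct numerical check. A sketch (assuming NumPy; the square-wave test signal and the step size are our choices) comparing against the familiar 1/(πk) magnitudes of a square wave's odd harmonics:
import numpy as np

P, dt = 1.0, 1e-4
t = np.arange(0, P, dt)
s = (t < P / 2).astype(float)       # square wave: 1 on [0, P/2), 0 on [P/2, P)

def coefficient(k):
    # S[k] = (1/P) * integral over one period of s(t) exp(-i 2 pi (k/P) t) dt
    return np.sum(s * np.exp(-2j * np.pi * k * t / P)) * dt / P

for k in range(4):
    expected = 0.5 if k == 0 else (0.0 if k % 2 == 0 else 1 / (np.pi * k))
    print(k, abs(coefficient(k)), expected)
# the numeric column matches the expected one to within the Riemann-sum error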
=== Discrete-time Fourier transform (DTFT) ===
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:
{\displaystyle S_{\tfrac {1}{T}}(f)\ \triangleq \ \underbrace {\sum _{k=-\infty }^{\infty }S\left(f-{\frac {k}{T}}\right)\equiv \overbrace {\sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i2\pi fnT}} ^{\text{Fourier series (DTFT)}}} _{\text{Poisson summation formula}}={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }s[n]\ \delta (t-nT)\right\},\,}
which is known as the DTFT. Thus the DTFT of the {\displaystyle s[n]} sequence is also the Fourier transform of the modulated Dirac comb function.
The Fourier series coefficients (and inverse transform) are defined by:
{\displaystyle s[n]\ \triangleq \ T\int _{\frac {1}{T}}S_{\tfrac {1}{T}}(f)\cdot e^{i2\pi fnT}\,df=T\underbrace {\int _{-\infty }^{\infty }S(f)\cdot e^{i2\pi fnT}\,df} _{\triangleq \,s(nT)}.}
Parameter {\displaystyle T} corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, {\displaystyle s[n],} is proportional to samples of an underlying continuous function, {\displaystyle s(t),} one can observe a periodic summation of the continuous Fourier transform, {\displaystyle S(f).}
Note that any {\displaystyle s(t)} with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover {\displaystyle S(f)} and {\displaystyle s(t)} exactly. A sufficient condition for perfect recovery is that the non-zero portion of {\displaystyle S(f)} be confined to a known frequency interval of width {\displaystyle {\tfrac {1}{T}}.} When that interval is {\displaystyle \left[-{\tfrac {1}{2T}},{\tfrac {1}{2T}}\right],} the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.
Another reason to be interested in {\displaystyle S_{\tfrac {1}{T}}(f)} is that it often provides insight into the amount of aliasing caused by the sampling process.
Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:
normalized frequency units
windowing (finite-length sequences)
transform properties
tabulated transforms of specific functions
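The equality of the two sides of the Poisson summation formula above can be illustrated numerically. A sketch (assuming NumPy; the Gaussian test signal, the sampling interval, and the truncation of both sums are our choices), using s[n] = T·s(nT) as in the text:
import numpy as np

T = 0.5                                   # sampling interval
n = np.arange(-40, 41)
s_n = T * np.exp(-np.pi * (n * T)**2)     # s[n] = T*s(nT) with s(t) = exp(-pi t^2)

def dtft(f):
    # Sum over n of s[n] exp(-i 2 pi f n T), truncated where the Gaussian is negligible
    return np.sum(s_n * np.exp(-2j * np.pi * f * n * T))

def periodic_spectrum(f, terms=10):
    # Sum over k of S(f - k/T), with S(f) = exp(-pi f^2)
    k = np.arange(-terms, terms + 1)
    return np.sum(np.exp(-np.pi * (f - k / T)**2))

for f in [0.0, 0.4, 0.9]:
    print(f, abs(dtft(f)), periodic_spectrum(f))
# the columns agree; with T = 0.5 the shifted copies of S(f) overlap slightly,
# which is precisely the aliasing discussed above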
=== Discrete Fourier transform (DFT) ===
Similar to a Fourier series, the DTFT of a periodic sequence, {\displaystyle s_{_{N}}[n],} with period {\displaystyle N}, becomes a Dirac comb function, modulated by a sequence of complex coefficients (see DTFT § Periodic data):
{\displaystyle S[k]=\sum _{n}s_{_{N}}[n]\cdot e^{-i2\pi {\frac {k}{N}}n},\quad k\in \mathbb {Z} ,}
(where {\displaystyle \sum _{n}} is the sum over any sequence of length {\displaystyle N.})
The {\displaystyle S[k]} sequence is customarily known as the DFT of one cycle of {\displaystyle s_{_{N}}.} It is also {\displaystyle N}-periodic, so it is never necessary to compute more than {\displaystyle N} coefficients. The inverse transform, also known as a discrete Fourier series, is given by:
{\displaystyle s_{_{N}}[n]={\frac {1}{N}}\sum _{k}S[k]\cdot e^{i2\pi {\frac {n}{N}}k},}
where {\displaystyle \sum _{k}} is the sum over any sequence of length {\displaystyle N.}
When {\displaystyle s_{_{N}}[n]} is expressed as a periodic summation of another function:
{\displaystyle s_{_{N}}[n]\,\triangleq \,\sum _{m=-\infty }^{\infty }s[n-mN],}
and {\displaystyle s[n]\,\triangleq \,T\cdot s(nT),}
the coefficients are samples of {\displaystyle S_{\tfrac {1}{T}}(f)} at discrete intervals of {\displaystyle {\tfrac {1}{P}}={\tfrac {1}{NT}}}:
{\displaystyle S[k]=S_{\tfrac {1}{T}}\left({\frac {k}{P}}\right).}
Conversely, when one wants to compute an arbitrary number {\displaystyle (N)} of discrete samples of one cycle of a continuous DTFT, {\displaystyle S_{\tfrac {1}{T}}(f),} it can be done by computing the relatively simple DFT of {\displaystyle s_{_{N}}[n],} as defined above. In most cases, {\displaystyle N} is chosen equal to the length of the non-zero portion of {\displaystyle s[n].} Increasing {\displaystyle N,} known as zero-padding or interpolation, results in more closely spaced samples of one cycle of {\displaystyle S_{\tfrac {1}{T}}(f).} Decreasing {\displaystyle N} causes overlap (adding) in the time-domain (analogous to aliasing), which corresponds to decimation in the frequency domain (see Discrete-time Fourier transform § L=N×I). In most cases of practical interest, the {\displaystyle s[n]} sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
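A brief sketch (assuming NumPy) comparing a direct evaluation of the DFT sum with the FFT, and showing the zero-padding described above:
import numpy as np

N = 8
s = np.random.default_rng(0).standard_normal(N)
n = np.arange(N)

# Direct evaluation of S[k] = sum over n of s[n] exp(-i 2 pi (k/N) n) ...
S_direct = np.array([np.sum(s * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
# ... versus the FFT, which computes the same values in O(N log N):
print(np.allclose(S_direct, np.fft.fft(s)))   # True

# Zero-padding: more closely spaced samples of the same underlying DTFT.
S_padded = np.fft.fft(s, n=4 * N)             # 32 samples of one DTFT cycle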
See Discrete Fourier transform for much more information, including:
transform properties
applications
tabulated transforms of specific functions
=== Summary ===
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, P or N. But these formulas do not require that condition.
== Symmetry properties ==
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:
{\displaystyle {\begin{array}{rccccccccc}{\text{Time domain}}&s&=&s_{_{\text{RE}}}&+&s_{_{\text{RO}}}&+&is_{_{\text{IE}}}&+&\underbrace {i\ s_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\text{Frequency domain}}&S&=&S_{\text{RE}}&+&\overbrace {\,i\ S_{\text{IO}}\,} &+&iS_{\text{IE}}&+&S_{\text{RO}}\end{array}}}
From this, various relationships are apparent, for example:
The transform of a real-valued function {\displaystyle (s_{_{RE}}+s_{_{RO}})} is the conjugate symmetric function {\displaystyle S_{RE}+i\ S_{IO}.}
Conversely, a conjugate symmetric transform implies a real-valued time-domain.
The transform of an imaginary-valued function {\displaystyle (i\ s_{_{IE}}+i\ s_{_{IO}})} is the conjugate antisymmetric function {\displaystyle S_{RO}+i\ S_{IE},} and the converse is true.
The transform of a conjugate symmetric function {\displaystyle (s_{_{RE}}+i\ s_{_{IO}})} is the real-valued function {\displaystyle S_{RE}+S_{RO},} and the converse is true.
The transform of a conjugate antisymmetric function {\displaystyle (s_{_{RO}}+i\ s_{_{IE}})} is the imaginary-valued function {\displaystyle i\ S_{IE}+i\ S_{IO},} and the converse is true.
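The first two relationships are easy to verify with the DFT, where conjugate symmetry takes the form S[N−k] = conj(S[k]). A sketch (assuming NumPy):
import numpy as np

rng = np.random.default_rng(1)
N = 16
reflect = lambda S: np.roll(S[::-1], 1)       # index k -> S[(N - k) mod N]

S = np.fft.fft(rng.standard_normal(N))        # real-valued input
print(np.allclose(S, np.conj(reflect(S))))    # True: conjugate symmetric

S = np.fft.fft(1j * rng.standard_normal(N))   # imaginary-valued input
print(np.allclose(S, -np.conj(reflect(S))))   # True: conjugate antisymmetric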
== History ==
An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).
The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see Deferent and epicycle § Mathematical formalism).
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit,
which has been described as the first formula for the DFT,
and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.
Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.
An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:
Lagrange transformed the roots {\displaystyle x_{1},} {\displaystyle x_{2},} {\displaystyle x_{3}} into the resolvents:
{\displaystyle {\begin{aligned}r_{1}&=x_{1}+x_{2}+x_{3}\\r_{2}&=x_{1}+\zeta x_{2}+\zeta ^{2}x_{3}\\r_{3}&=x_{1}+\zeta ^{2}x_{2}+\zeta x_{3}\end{aligned}}}
where ζ is a cubic root of unity, which is the DFT of order 3.
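A quick numerical confirmation of that remark (assuming NumPy; the root values are arbitrary, and we take ζ = e^(−2πi/3) to match the sign convention of np.fft):
import numpy as np

x = np.array([2.0, -1.0, 5.0])        # arbitrary roots x1, x2, x3
zeta = np.exp(-2j * np.pi / 3)        # a primitive cubic root of unity

r1 = x[0] + x[1] + x[2]
r2 = x[0] + zeta * x[1] + zeta**2 * x[2]
r3 = x[0] + zeta**2 * x[1] + zeta * x[2]
print(np.allclose([r1, r2, r3], np.fft.fft(x)))   # True: the resolvents form a DFT of order 3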
A number of authors, notably Jean le Rond d'Alembert, and Carl Friedrich Gauss used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series. Independently of Fourier, astronomer Friedrich Wilhelm Bessel also introduced Fourier series to solve Kepler's equation. His work was published in 1819, unaware of Fourier's work which remained unpublished until 1822.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbits of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.
== Time–frequency transforms ==
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
== Fourier transforms on arbitrary locally compact abelian topological groups ==
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets, even discrete cosets.
== See also ==
== Notes ==
== References ==
== Further reading ==
== External links ==
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
An Intuitive Explanation of Fourier Theory by Steven Lehar.
Lectures on Image Processing: A collection of 18 lectures in pdf format from Vanderbilt University, by Alan Peters. Lecture 6 is on the 1- and 2-D Fourier Transform; lectures 7–15 make use of it.
Moriarty, Philip; Bowley, Roger (2009). "Σ Summation (and Fourier Analysis)". Sixty Symbols. Brady Haran for the University of Nottingham.
Introduction to Fourier analysis of time series at Medium | Wikipedia/Fourier_theory |
In mathematical heat conduction, the Green's function number is used to uniquely categorize certain fundamental solutions of the heat equation to make existing solutions easier to identify, store, and retrieve.
Numbers have long been used to identify types of boundary conditions. The Green's function number system was proposed by Beck and Litkouhi in 1988 and has seen increasing use since then. The number system has been used to catalog a large collection of Green's functions and related solutions.
Although the examples given below are for the heat equation, this number system applies to any phenomena described by differential equations such as diffusion, acoustics, electromagnetics, fluid dynamics, etc.
== Notation ==
The Green's function number specifies the coordinate system and the type of boundary conditions that a Green's function satisfies. The Green's function number has two parts, a letter designation followed by a number designation. The letter(s) designate the coordinate system, while the numbers designate the type of boundary conditions that are satisfied.
Some of the designations for the Green's function number system are given next. Coordinate system designations include: X, Y, and Z for Cartesian coordinates; R, Z, φ for cylindrical coordinates; and RS, φ, θ for spherical coordinates.
Designations for several boundary conditions are given in Table 1. The zeroth boundary condition is important for identifying the presence of a coordinate boundary where no physical boundary exists, for example, far away in a semi-infinite body or at the center of a cylindrical or spherical body.
== Examples in Cartesian coordinates ==
=== X11 ===
As an example, number X11 denotes the Green's function that satisfies the heat equation in the domain (0 < x < L) for boundary conditions of type 1 (Dirichlet) at both boundaries x = 0 and x = L. Here X denotes the Cartesian coordinate and 11 denotes the type 1 boundary condition at both sides of the body. The boundary value problem for the X11 Green's function is given by
Here {\displaystyle \alpha } is the thermal diffusivity (m²/s) and {\displaystyle \delta } is the Dirac delta function.
This GF is developed elsewhere.
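Since the boundary value problem's equations are not reproduced here, the following sketch only evaluates one standard series form of an X11-type Green's function (Dirichlet boundaries at x = 0 and x = L); the series expression, its truncation, and all parameter values are our assumptions rather than material from this article:
import numpy as np

def gf_x11(x, t, xp, tau, L=1.0, alpha=1e-4, terms=200):
    # G = (2/L) * sum over m of sin(m pi x/L) sin(m pi x'/L) exp(-(m pi/L)^2 alpha (t - tau))
    m = np.arange(1, terms + 1)
    modes = np.sin(m * np.pi * x / L) * np.sin(m * np.pi * xp / L)
    decay = np.exp(-((m * np.pi / L)**2) * alpha * (t - tau))
    return (2.0 / L) * np.sum(modes * decay)

print(gf_x11(0.0, 10.0, 0.3, 0.0))   # 0.0: the GF vanishes on a type 1 boundary
print(gf_x11(0.5, 10.0, 0.3, 0.0))   # nonzero interior value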
=== X20 ===
As another Cartesian example, number X20 denotes the Green's function in the semi-infinite body ({\displaystyle 0<x<\infty }) with a Neumann (type 2) boundary at x = 0. Here X denotes the Cartesian coordinate, 2 denotes the type 2 boundary condition at x = 0, and 0 denotes the zeroth type boundary condition (boundedness) at {\displaystyle x=\infty }. The boundary value problem for the X20 Green's function is given by
This GF is published elsewhere.
=== X10Y20 ===
As a two-dimensional example, number X10Y20 denotes the Green's function in the quarter-infinite body ({\displaystyle 0<x<\infty }, {\displaystyle 0<y<\infty }) with a Dirichlet (type 1) boundary at x = 0 and a Neumann (type 2) boundary at y = 0. The boundary value problem for the X10Y20 Green's function is given by
Applications of related half-space and quarter-space GF are available.
== Examples in cylindrical coordinates ==
=== R03 ===
As an example in the cylindrical coordinate system, number R03 denotes the Green's function that satisfies the heat equation in the solid cylinder (0 < r < a) with a boundary condition of type 3 (Robin) at r = a. Here letter R denotes the cylindrical coordinate system, number 0 denotes the zeroth boundary condition (boundedness) at the center of the cylinder (r = 0), and number 3 denotes the type 3 (Robin) boundary condition at r = a. The boundary value problem for R03 Green's function is given by
Here {\displaystyle k} is thermal conductivity (W/(m K)) and {\displaystyle h} is the heat transfer coefficient (W/(m² K)).
See Carslaw & Jaeger (1959, p. 369), Cole et al. (2011, p. 543) for this GF.
=== R10 ===
As another example, number R10 denotes the Green's function in a large body containing a cylindrical void (a < r < {\displaystyle \infty }) with a type 1 (Dirichlet) boundary condition at r = a. Again letter R denotes the cylindrical coordinate system, number 1 denotes the type 1 boundary at r = a, and number 0 denotes the type zero boundary (boundedness) at large values of r. The boundary value problem for the R10 Green's function is given by
This GF is available elsewhere.
=== R01φ00 ===
As a two-dimensional example, number R01φ00 denotes the Green's function in a solid cylinder with angular dependence, with a type 1 (Dirichlet) boundary condition at r = a. Here letter φ denotes the angular (azimuthal) coordinate, and numbers 00 denote the type zero boundaries for the angle; since no physical boundary exists there, this takes the form of the periodic boundary condition. The boundary value problem for the R01φ00 Green's function is given by
Both a transient and steady form of this GF are available.
== Example in spherical coordinates ==
=== RS02 ===
As an example in the spherical coordinate system, number RS02 denotes the Green's function for a solid sphere (0 < r < b) with a type 2 (Neumann) boundary condition at r = b. Here letters RS denote the radial-spherical coordinate system, number 0 denotes the zeroth boundary condition (boundedness) at r = 0, and number 2 denotes the type 2 boundary at r = b. The boundary value problem for the RS02 Green's function is given by
This GF is available elsewhere.
== See also ==
Fundamental solution
Dirichlet boundary condition
Neumann boundary condition
Robin boundary condition
Heat equation
== References == | Wikipedia/Green's_function_number |
A shock is an abrupt discontinuity in the flow field that occurs when the local flow speed exceeds the local speed of sound, that is, when the flow's Mach number exceeds 1.
== Explanation of phenomena ==
A shock is formed by the coalescence of various small pressure pulses. Sound waves are pressure waves, and disturbances are communicated through the medium at the speed of sound. When an object moves in a flow field, it sends out disturbances which propagate at the speed of sound and adjust the remaining flow field accordingly. However, if the object itself travels faster than sound, the disturbances it creates cannot travel ahead and be communicated to the rest of the flow field, and this results in an abrupt change of properties, which is termed a shock in gas dynamics terminology.
Shocks are characterized by discontinuous changes in flow properties such as velocity, pressure, and temperature. Typically, the shock thickness is a few mean free paths (of the order of 10⁻⁸ m). Shocks are irreversible occurrences in supersonic flows (i.e. the entropy increases).
== Normal shock formulas ==
{\displaystyle \mathbf {T_{02}} =\mathbf {T_{01}} }
{\displaystyle M_{2}=\left({\frac {{\frac {2}{\gamma -1}}+{M_{1}}^{2}}{{\frac {2\gamma }{\gamma -1}}{M_{1}}^{2}-1}}\right)^{0.5}}
{\displaystyle {\frac {p_{2}}{p_{1}}}={\frac {1+\gamma M_{1}^{2}}{1+\gamma M_{2}^{2}}}={\frac {2\gamma }{\gamma +1}}M_{1}^{2}-{\frac {\gamma -1}{\gamma +1}}}
{\displaystyle {\frac {T_{2}}{T_{1}}}={\frac {1+{\frac {\gamma -1}{2}}M_{1}^{2}}{1+{\frac {\gamma -1}{2}}M_{2}^{2}}}={\frac {(1+{\frac {\gamma -1}{2}}M_{1}^{2})({\frac {2\gamma }{\gamma -1}}M_{1}^{2}-1)}{\frac {(\gamma +1)^{2}M_{1}^{2}}{2(\gamma -1)}}}}
{\displaystyle {\frac {a_{2}}{a_{1}}}={\left({\frac {T_{2}}{T_{1}}}\right)}^{0.5}}
{\displaystyle {\frac {\rho _{2}}{\rho _{1}}}={\frac {p_{2}}{p_{1}}}{\frac {T_{1}}{T_{2}}}}
{\displaystyle {\frac {p_{01}}{p_{1}}}=(1+{\frac {\gamma -1}{2}}M_{1}^{2})^{\frac {\gamma }{\gamma -1}}}
{\displaystyle {\frac {p_{02}}{p_{2}}}=(1+{\frac {\gamma -1}{2}}M_{2}^{2})^{\frac {\gamma }{\gamma -1}}}
Here, the index 1 refers to upstream properties and the index 2 to downstream properties. The subscript 0 refers to total or stagnation properties. T is temperature, M is the Mach number, p is pressure, ρ is density, and γ is the ratio of specific heats.
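A small calculator implementing the relations above (assuming NumPy; the function name and the M1 = 2 test case are our choices):
import numpy as np

def normal_shock(M1, gamma=1.4):
    # Downstream Mach number and static ratios across a normal shock (valid for M1 > 1)
    M2 = np.sqrt((2 / (gamma - 1) + M1**2) / (2 * gamma / (gamma - 1) * M1**2 - 1))
    p_ratio = 2 * gamma / (gamma + 1) * M1**2 - (gamma - 1) / (gamma + 1)
    T_ratio = ((1 + (gamma - 1) / 2 * M1**2)
               * (2 * gamma / (gamma - 1) * M1**2 - 1)
               / ((gamma + 1)**2 * M1**2 / (2 * (gamma - 1))))
    return M2, p_ratio, T_ratio, p_ratio / T_ratio

M2, p21, T21, rho21 = normal_shock(2.0)
print(round(M2, 4), round(p21, 4), round(T21, 4), round(rho21, 4))
# 0.5774 4.5 1.6875 2.6667 -- the textbook values for M1 = 2, gamma = 1.4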
== See also ==
Mach number
Sound barrier
Supersonic flow
== References == | Wikipedia/Shock_(fluid_dynamics) |
Nitrate is a polyatomic ion with the chemical formula NO−3. Salts containing this ion are called nitrates. Nitrates are common components of fertilizers and explosives. Almost all inorganic nitrates are soluble in water. An example of an insoluble nitrate is bismuth oxynitrate.
== Chemical structure ==
The nitrate anion is the conjugate base of nitric acid, consisting of one central nitrogen atom surrounded by three identically bonded oxygen atoms in a trigonal planar arrangement. The nitrate ion carries a formal charge of −1. This charge results from a combination of the formal charges, in which each of the three oxygens carries a −2⁄3 charge, whereas the nitrogen carries a +1 charge, all of these adding up to the −1 formal charge of the polyatomic nitrate ion. This arrangement is commonly used as an example of resonance. Like the isoelectronic carbonate ion, the nitrate ion can be represented by three resonance structures:
== Chemical and biochemical properties ==
In the NO−3 anion, the oxidation state of the central nitrogen atom is V (+5). This corresponds to the highest possible oxidation number of nitrogen. Nitrate is a potentially powerful oxidizer as evidenced by its explosive behaviour at high temperature when it is detonated in ammonium nitrate (NH4NO3), or black powder, ignited by the shock wave of a primary explosive. In contrast to red fuming nitric acid (HNO3/N2O4), or concentrated nitric acid (HNO3), nitrate in aqueous solution at neutral or high pH is only a weak oxidizing agent in redox reactions in which the reductant does not produce hydrogen ions (such as mercury going to calomel). However it is still a strong oxidizer when the reductant does produce hydrogen ions, such as in the oxidation of hydrogen itself. Nitrate is stable in the absence of microorganisms or reductants such as organic matter. In fact, nitrogen gas is thermodynamically stable in the presence of 1 atm of oxygen only in very acidic conditions, and otherwise should combine with the oxygen to form nitrate. This is shown by subtracting the two oxidation reactions:
N2 + 6 H2O → 2 NO−3 + 12 H+ + 10 e−
{\displaystyle \qquad E_{0}=1.246-0.0709{\text{ pH }}+{\frac {0.0591}{10}}\log {\frac {(NO_{3}^{-})^{2}}{P_{N_{2}}}}}
2 H2O → O2 + 4 H+ + 4 e−
{\displaystyle \qquad \qquad \qquad E_{0}=1.228-0.0591{\text{ pH }}+{\frac {0.0591}{4}}\log {P_{O_{2}}}}
giving:
2 N2 + 5 O2 + 2 H2O → 4 NO−3 + 4 H+
{\displaystyle \qquad 0=0.018-0.0118{\text{ pH }}+{\frac {0.0591}{10}}\log {\frac {(NO_{3}^{-})^{2}}{P_{N_{2}}}}-{\frac {0.0591}{4}}\log {P_{O_{2}}}}
Dividing by 0.0118 and rearranging gives the equilibrium relation:
{\displaystyle \log {\frac {(NO_{3}^{-})}{P_{N_{2}}^{1/2}P_{O_{2}}^{5/4}}}={\text{ pH }}-1.5}
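A one-line evaluation of this equilibrium relation (assuming NumPy; the roughly atmospheric partial pressures are our illustrative inputs):
import numpy as np

def log_nitrate_activity(pH, P_N2=0.78, P_O2=0.21):
    # log10 of the equilibrium nitrate activity from the relation above
    return pH - 1.5 + 0.5 * np.log10(P_N2) + 1.25 * np.log10(P_O2)

print(log_nitrate_activity(7.0))   # ~4.6: at neutral pH the equilibrium hugely
# favours nitrate, which is why N2 persists only for kinetic, not thermodynamic, reasons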
However, in reality, nitrogen, oxygen, and water do not combine directly to form nitrate. Rather, a reductant such as hydrogen reacts with nitrogen to produce "fixed nitrogen" such as ammonia, which is then oxidized, eventually becoming nitrate. Nitrate does not accumulate to high levels in nature because it reacts with reductants in the process called denitrification (see Nitrogen cycle).
Nitrate is used as a powerful terminal electron acceptor by denitrifying bacteria to deliver the energy they need to thrive. Under anaerobic conditions, nitrate is the strongest electron acceptor used by prokaryote microorganisms (bacteria and archaea) for respiration. The redox couple NO−3/N2 is at the top of the redox scale for anaerobic respiration, just below the oxygen couple (O2/H2O), but above the couples Mn(IV)/Mn(II), Fe(III)/Fe(II), SO2−4/HS−, and CO2/CH4. In natural waters, inevitably contaminated by microorganisms, nitrate is a quite unstable and labile dissolved chemical species because it is metabolised by denitrifying bacteria. Water samples for nitrate/nitrite analyses need to be kept at 4 °C in a refrigerated room and analysed as quickly as possible to limit the loss of nitrate.
In the first step of the denitrification process, dissolved nitrate (NO−3) is catalytically reduced into nitrite (NO−2) by the enzymatic activity of bacteria. In aqueous solution, dissolved nitrite, N(III), is a more powerful oxidizer than nitrate, N(V), because it has to accept fewer electrons and its reduction is less kinetically hindered than that of nitrate.
During the biological denitrification process, further nitrite reduction also gives rise to another powerful oxidizing agent: nitric oxide (NO). NO can fix on myoglobin, accentuating its red coloration. NO is an important biological signaling molecule and intervenes in the vasodilation process. Still, it can also produce free radicals in biological tissues, accelerating their degradation and aging process. The reactive oxygen species (ROS) generated by NO contribute to the oxidative stress, a condition involved in vascular dysfunction and atherogenesis.
== Detection in chemical analysis ==
The nitrate anion is commonly analysed in water by ion chromatography (IC) along with other anions also present in the solution. The main advantage of IC is its ease and the simultaneous analysis of all the anions present in the aqueous sample. Since the emergence of IC instruments in the 1980s, this separation technique, coupled with many detectors, has become commonplace in the chemical analysis laboratory and is the preferred and most widely used method for nitrate and nitrite analyses.
Previously, nitrate determination relied on spectrophotometric and colorimetric measurements after a specific reagent is added to the solution to reveal a characteristic color (often red because it absorbs visible light in the blue). Because of interferences with the brown color of dissolved organic matter (DOM: humic and fulvic acids) often present in soil pore water, artefacts can easily affect the absorbance values. In case of weak interference, a blank measurement with only a naturally brown-colored water sample can be sufficient to subtract the undesired background from the measured sample absorbance. If the DOM brown color is too intense, the water samples must be pretreated, and inorganic nitrogen species must be separated before measurement. Meanwhile, for clear water samples, colorimetric instruments retain the advantage of being less expensive and sometimes portable, making them an affordable option for fast routine controls or field measurements.
Colorimetric methods for the specific detection of nitrate (NO−3) often rely on its conversion to nitrite (NO−2) followed by nitrite-specific tests. The reduction of nitrate to nitrite can be effected by a copper-cadmium alloy, metallic zinc, or hydrazine. The most popular of these assays is the Griess test, whereby nitrite is converted to a deeply red colored azo dye suited for UV–vis spectrophotometry analysis. The method exploits the reactivity of nitrous acid (HNO2) derived from the acidification of nitrite. Nitrous acid selectively reacts with aromatic amines to give diazonium salts, which in turn couple with a second reagent to give the azo dye. The detection limit is 0.02 to 2 μM. Such methods have been highly adapted to biological samples and soil samples.
In the dimethylphenol method, 1 mL of concentrated sulfuric acid (H2SO4) is added to 200 μL of the solution being tested for nitrate. Under strongly acidic conditions, nitrate ions react with 2,6-dimethylphenol, forming a yellow compound, 4-nitro-2,6-dimethylphenol. This occurs through electrophilic aromatic substitution where the intermediate nitronium (+NO2) ions attack the aromatic ring of dimethylphenol. The resulting product (ortho- or para-nitro-dimethylphenol) is analyzed using UV-vis spectrophotometry at 345 nm according to the Lambert-Beer law.
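A minimal sketch of the concluding Lambert-Beer step (the molar absorptivity and the 1 cm path length below are hypothetical placeholders, not values from this article):
def nitrate_concentration(absorbance, epsilon=10000.0, path_cm=1.0):
    # Lambert-Beer law A = epsilon * l * c, solved for the concentration c (mol/L);
    # epsilon here is a made-up molar absorptivity for the 345 nm product
    return absorbance / (epsilon * path_cm)

print(nitrate_concentration(0.35))   # 3.5e-05 mol/L for the assumed epsilon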
Another colorimetric method based on the chromotropic acid (dihydroxynaphthalene-disulfonic acid) was also developed by West and Lyles in 1960 for the direct spectrophotometric determination of nitrate anions.
If formic acid is added to a mixture of brucine (an alkaloid related to strychnine) and potassium nitrate (KNO3), its color instantly turns red. This reaction has been used for the direct colorimetric detection of nitrates.
For direct online chemical analysis using a flow-through system, the water sample is introduced by a peristaltic pump in a flow injection analyzer, and the nitrate or resulting nitrite-containing effluent is then combined with a reagent for its colorimetric detection.
== Occurrence and production ==
Nitrate salts are found naturally on earth in arid environments as large deposits, particularly of nitratine, a major source of sodium nitrate.
Nitrates are produced by a number of species of nitrifying bacteria in the natural environment using ammonia or urea as a source of nitrogen and source of free energy. Nitrate compounds for gunpowder were historically produced, in the absence of mineral nitrate sources, by means of various fermentation processes using urine and dung.
Lightning strikes in earth's nitrogen- and oxygen-rich atmosphere produce a mixture of oxides of nitrogen, which form nitrous ions and nitrate ions, which are washed from the atmosphere by rain or in occult deposition.
Nitrates are produced industrially from nitric acid.
== Uses ==
=== Agriculture ===
Nitrate is a chemical compound that serves as a primary form of nitrogen for many plants. This essential nutrient is used by plants to synthesize proteins, nucleic acids, and other vital organic molecules. The transformation of atmospheric nitrogen into nitrate is facilitated by certain bacteria and lightning in the nitrogen cycle, which exemplifies nature's ability to convert a relatively inert molecule into a form that is crucial for biological productivity.
Nitrates are used as fertilizers in agriculture because of their high solubility and biodegradability. The main nitrate fertilizers are ammonium, sodium, potassium, calcium, and magnesium salts. Several billion kilograms are produced annually for this purpose. The significance of nitrate extends beyond its role as a nutrient since it acts as a signaling molecule in plants, regulating processes such as root growth, flowering, and leaf development.
While nitrate is beneficial for agriculture since it enhances soil fertility and crop yields, its excessive use can lead to nutrient runoff, water pollution, and the proliferation of aquatic dead zones. Therefore, sustainable agricultural practices that balance productivity with environmental stewardship are necessary. Nitrate's importance in ecosystems is evident since it supports the growth and development of plants, contributing to biodiversity and ecological balance.
=== Firearms ===
Nitrates are used as oxidizing agents, most notably in explosives, where the rapid oxidation of carbon compounds liberates large volumes of gases (see gunpowder as an example).
=== Industrial ===
Sodium nitrate is used to remove air bubbles from molten glass and some ceramics. Mixtures of molten salts are used to harden the surface of some metals.
=== Photographic film ===
Nitrate was also used as a film stock through nitrocellulose. Due to its high combustibility, the film making studios swapped to cellulose acetate safety film in 1950.
=== Medicinal and pharmaceutical use ===
In the medical field, nitrate-derived organic esters, such as glyceryl trinitrate, isosorbide dinitrate, and isosorbide mononitrate, are used in the prophylaxis and management of acute coronary syndrome, myocardial infarction, acute pulmonary oedema. This class of drug, to which amyl nitrite also belongs, is known as nitrovasodilators.
== Toxicity and safety ==
The two areas of concerns about the toxicity of nitrate are the following:
nitrate reduced by the microbial activity of nitrate reducing bacteria is the precursor of nitrite in water and in the lower gastrointestinal tract. Nitrite is a precursor to carcinogenic nitrosamines, and;
via the formation of nitrite, nitrate is implicated in methemoglobinemia, a disorder of hemoglobin in red blood cells susceptible to especially affect infants and toddlers.
=== Methemoglobinemia ===
One of the most common causes of methemoglobinemia in infants is the ingestion of nitrates and nitrites through well water or foods.
In fact, nitrates (NO−3), often present at too high a concentration in drinking water, are only the precursor chemical species of nitrites (NO−2), the real culprits of methemoglobinemia. Nitrites produced by the microbial reduction of nitrate (directly in the drinking water, or after ingestion by the infant, in its digestive system) are more powerful oxidizers than nitrates and are the chemical agent actually responsible for the oxidation of Fe2+ into Fe3+ in the tetrapyrrole heme of hemoglobin. Indeed, nitrate anions are too weak oxidizers in aqueous solution to directly, or at least sufficiently rapidly, oxidize Fe2+ into Fe3+, because of kinetic limitations.
Infants younger than 4 months are at greater risk given that they drink more water per body weight, they have a lower NADH-cytochrome b5 reductase activity, and they have a higher level of fetal hemoglobin which converts more easily to methemoglobin. Additionally, infants are at an increased risk after an episode of gastroenteritis due to the production of nitrites by bacteria.
However, other causes than nitrates can also affect infants and pregnant women. Indeed, the blue baby syndrome can also be caused by a number of other factors such as the cyanotic heart disease, a congenital heart defect resulting in low levels of oxygen in the blood, or by gastric upset, such as diarrheal infection, protein intolerance, heavy metal toxicity, etc.
=== Drinking water standards ===
Through the Safe Drinking Water Act, the United States Environmental Protection Agency has set a maximum contaminant level of 10 mg/L or 10 ppm of nitrate in drinking water.
An acceptable daily intake (ADI) for nitrate ions was established in the range of 0–3.7 mg (kg body weight)−1 day−1 by the Joint FAO/WHO Expert Committee on Food Additives (JEFCA).
=== Aquatic toxicity ===
In freshwater or estuarine systems close to land, nitrate can reach concentrations that are lethal to fish. While nitrate is much less toxic than ammonia, levels over 30 ppm of nitrate can inhibit growth, impair the immune system and cause stress in some aquatic species. Nitrate toxicity remains a subject of debate.
In most cases of excess nitrate concentrations in aquatic systems, the primary sources are wastewater discharges, as well as surface runoff from agricultural or landscaped areas that have received excess nitrate fertilizer. The resulting eutrophication and algae blooms result in anoxia and dead zones. As a consequence, as nitrate forms a component of total dissolved solids, they are widely used as an indicator of water quality.
== Human impacts on ecosystems through nitrate deposition ==
Nitrate deposition into ecosystems has markedly increased due to anthropogenic activities, notably from the widespread application of nitrogen-rich fertilizers in agriculture and the emissions from fossil fuel combustion. Annually, about 195 million metric tons of synthetic nitrogen fertilizers are used worldwide, with nitrates constituting a significant portion of this amount. In regions with intensive agriculture, such as parts of the U.S., China, and India, the use of nitrogen fertilizers can exceed 200 kilograms per hectare.
The impact of increased nitrate deposition extends beyond plant communities to affect soil microbial populations. The change in soil chemistry and nutrient dynamics can disrupt the natural processes of nitrogen fixation, nitrification, and denitrification, leading to altered microbial community structures and functions. This disruption can further impact the nutrient cycling and overall ecosystem health.
== Dietary nitrate ==
A source of nitrate in the human diets arises from the consumption of leafy green foods, such as spinach and arugula. NO−3 can be present in beetroot juice. Drinking water represents also a primary nitrate intake source.
Nitrate ingestion rapidly increases the plasma nitrate concentration by a factor of 2 to 3, and this elevated nitrate concentration can be maintained for more than 2 weeks. Increased plasma nitrate enhances the production of nitric oxide, NO. Nitric oxide is a physiological signaling molecule which intervenes in, among other things, regulation of muscle blood flow and mitochondrial respiration.
=== Cured meats ===
Nitrite (NO−2) consumption is primarily determined by the amount of processed meats eaten, and the concentration of nitrates (NO−3) added to these meats (bacon, sausages…) for their curing. Although nitrites are the nitrogen species chiefly used in meat curing, nitrates are used as well and can be transformed into nitrite by microorganisms, or in the digestion process, starting by their dissolution in saliva and their contact with the microbiota of the mouth. Nitrites lead to the formation of carcinogenic nitrosamines. The production of nitrosamines may be inhibited by the use of the antioxidants vitamin C and the alpha-tocopherol form of vitamin E during curing.
Many meat processors claim their meats (e.g. bacon) are "uncured" – a marketing claim with no factual basis: there is no such thing as "uncured" bacon (as that would be, essentially, raw sliced pork belly). "Uncured" meat is in fact cured with nitrites with virtually no distinction in process – the only difference being the USDA labeling requirement between nitrite of vegetable origin (such as from celery) and "synthetic" sodium nitrite. An analogy would be purified "sea salt" vs. sodium chloride – both being exactly the same chemical, with the only essential difference being the origin.
Anti-hypertensive diets, such as the DASH diet, typically contain high levels of nitrates, which are first reduced to nitrite in the saliva, as detected in saliva testing, prior to forming nitric oxide (NO).
== Domestic animal feed ==
Symptoms of nitrate poisoning in domestic animals include increased heart rate and respiration; in advanced cases blood and tissue may turn a blue or brown color. Feed can be tested for nitrate; treatment consists of supplementing or substituting existing supplies with lower nitrate material. Safe levels of nitrate for various types of livestock are as follows:
The values above are on a dry (moisture-free) basis.
== Salts and covalent derivatives ==
Nitrate formation with elements of the periodic table:
== See also ==
Ammonium
Eutrophication
f-ratio in oceanography
Frost diagram
Nitrification
Nitratine
Nitrite, the anion NO−2
Nitrogen oxide
Nitrogen trioxide, the neutral radical NO3
Peroxynitrate, OONO–2
Sodium nitrate
== References ==
== External links ==
ATSDR – Case Studies in Environmental Medicine – Nitrate/Nitrite Toxicity (archive) | Wikipedia/Nitrates |
The Hamburg University of Applied Sciences (German: Hochschule für Angewandte Wissenschaften Hamburg) is a higher education and applied research institution located in Hamburg, Germany. Formerly known as Fachhochschule Hamburg, the Hamburg University of Applied Sciences was founded in 1970. In terms of student enrolment, the HAW is the second-largest university in Hamburg and the fourth-largest applied sciences university in Germany, with a student body of 16,454.
== History ==
Source:
The Hamburg University of Applied Sciences was founded in 1970 as the Fachhochschule Hamburg. Four engineering schools and six vocational schools were brought together with the goal of developing a new form of higher education. The focus was on the application of knowledge, with degree programmes that included placements in industry, laboratory work and practice-related projects.
The Fachhochschule Hamburg initially had 13 departments. Its Business School was founded in 1994.
In 1998, as part of the increased internationalisation within higher education in Germany, the Conference of the Ministers of Education and Cultural Affairs allowed the Fachhochschulen to add 'university of applied sciences' to their names. In 2001, the Fachhochschule Hamburg decided to take this a step further and changed its German name to Hochschule für Angewandte Wissenschaften Hamburg (HAW Hamburg), which more closely reflected the English name, Hamburg University of Applied Sciences.
All of the Bachelor's programmes offered at the university are taught in German, with the exception of Information Engineering, which offers both English and German options.
At the end of 2007, the Faculty of Business and Public Management and the Faculty of Social Work and Nursing were joined to form one faculty. Today the university has four faculties at four different campus locations in Hamburg:
Engineering and Computer Science
Life Sciences
Design, Media and Information
Business and Social Sciences
As of 1 January 2006, the Architecture, Civil Engineering and Geomatics departments joined the building faculties of two other Hamburg universities to become the new HafenCity University.
== Faculties ==
Source:
Business and Social Sciences [Berliner Tor Campus]
Design, Media and Information [Armgartstrasse/Finkenau Campus]
Engineering and Computer Science [Berliner Tor Campus]
Life Sciences [Bergedorf Campus]
== Research ==
Source:
Energy and Sustainability
Biomass
Environmental Analysis and Ecotoxicology
Fuel Cells
Lifetec Process Engineering
Health and Nutrition
Biomedical Systems/Networks in Diagnostics
Evaluation Research in Social, Health and Education Sectors
Food Science
Public Health
Mobility and Transport
New Flight
AERO - Aircraft, Design, and Systems Group
Driver Assistance and Autonomous Systems (FAUST)
Application of Dynamic Systems (ADYS)
IT, Communication and Media
Ambient Intelligence
iNET - Internet Technologies Research Group
Interactive Multimedia Systems
Knowledge Access and Accessibility
Information and the Development of the Internet
MARS - Multi-Agent Research and Simulation
Sound Analysis and Design
Visual Thinking
Games
Diverse Research Activities
Dynamics and Interactions of Fluid and Structures (DISS)
Optical Sensing and Image Processing
Innovation in Medium Sized Companies
Family Relationships/Children at Risk
Integrated Industrial Business Processes
== Partnerships ==
HAW Hamburg has partnerships with various other universities, including the University of Shanghai for Science and Technology (China), the University of Huelva (Spain) and the Institute of Technology Tallaght in Dublin (Ireland). A 'joint college' collaboration between HAW Hamburg and the University of Shanghai for Science and Technology offers Mechanical Engineering and Electrical Engineering degree courses to Chinese students. A third of the courses take place at the University of Shanghai campus.
HAW Hamburg partners with California State University – Long Beach (CSULB) every fall and summer for a short-term study abroad programme. German students host CSULB students in the summer and are hosted in the fall in return.
HAW Hamburg also partners with the Department of Aerospace and Ocean Engineering at Virginia Tech.
== Campuses ==
The HAW has four campuses. The main campus and official address is in St. Georg; the Bergedorf Campus is in Lohbrügge.
== See also ==
Education in Hamburg
== References ==
== External links ==
Hamburg University of Applied Sciences (in German)
Official Campus Locations Archived 2017-12-10 at the Wayback Machine | Wikipedia/Hamburg_University_of_Applied_Sciences |
A conventional fixed-wing aircraft flight control system (AFCS) consists of flight control surfaces, the respective cockpit controls, connecting linkages, and the necessary operating mechanisms to control an aircraft's direction in flight. Aircraft engine controls are also considered flight controls as they change speed.
The fundamentals of aircraft controls are explained in flight dynamics. This article centers on the operating mechanisms of the flight controls. The basic system in use on aircraft first appeared in a readily recognizable form as early as April 1908, on Louis Blériot's Blériot VIII pioneer-era monoplane design.
== Cockpit controls ==
=== Primary controls ===
Generally, the primary cockpit flight controls are arranged as follows:
A control yoke (also known as a control column), centre stick or side-stick (the latter two also colloquially known as a control or joystick), governs the aircraft's roll and pitch by moving the ailerons (or activating wing warping on some very early aircraft designs) when turned or deflected left and right, and moves the elevators when moved backwards or forwards.
Rudder pedals, or the earlier, pre-1919 "rudder bar", control yaw by moving the rudder; the left foot forward will move the rudder left for instance.
Thrust lever or throttle, which controls engine speed or thrust for powered aircraft.
The control yokes also vary greatly among aircraft. There are yokes where roll is controlled by rotating the yoke clockwise/counterclockwise (like steering a car) and pitch is controlled by moving the control column towards or away from the pilot, but in others the pitch is controlled by sliding the yoke into and out of the instrument panel (like most Cessnas, such as the 152 and 172), and in some the roll is controlled by sliding the whole yoke to the left and right (like the Cessna 162). Centre sticks also vary between aircraft. Some are directly connected to the control surfaces using cables, others (fly-by-wire airplanes) have a computer in between which then controls the electrical actuators.
Even when an aircraft uses variant flight control surfaces such as a V-tail ruddervator, flaperons, or elevons, because these various combined-purpose control surfaces control rotation about the same three axes in space, the aircraft's flight control system will still be designed so that the stick or yoke controls pitch and roll conventionally, as will the rudder pedals for yaw. The basic pattern for modern flight controls was pioneered by French aviation figure Robert Esnault-Pelterie, with fellow French aviator Louis Blériot popularizing Esnault-Pelterie's control format initially on Louis' Blériot VIII monoplane in April 1908, and standardizing the format on the July 1909 Channel-crossing Blériot XI. Flight control has been taught in this fashion for many decades, as popularized in ab initio instructional books such as the 1944 work Stick and Rudder.
In some aircraft, the control surfaces are not manipulated with a linkage. In ultralight aircraft and motorized hang gliders, for example, there is no mechanism at all. Instead, the pilot just grabs the lifting surface by hand (using a rigid frame that hangs from its underside) and moves it.
=== Secondary controls ===
In addition to the primary flight controls for roll, pitch, and yaw, there are often secondary controls available to give the pilot finer control over flight or to ease the workload. The most commonly available control is a wheel or other device to control elevator trim, so that the pilot does not have to maintain constant backward or forward pressure to hold a specific pitch attitude (other types of trim, for rudder and ailerons, are common on larger aircraft but may also appear on smaller ones). Many aircraft have wing flaps, controlled by a switch or a mechanical lever or in some cases are fully automatic by computer control, which alter the shape of the wing for improved control at the slower speeds used for take-off and landing. Other secondary flight control systems may include slats, spoilers, air brakes and variable-sweep wings.
== Flight control systems ==
=== Mechanical ===
Mechanical or manually operated flight control systems are the most basic method of controlling an aircraft. They were used in early aircraft and are currently used in small aircraft where the aerodynamic forces are not excessive. Very early aircraft, such as the Wright Flyer I, Blériot XI and Fokker Eindecker used a system of wing warping where no conventionally hinged control surfaces were used on the wing, and sometimes not even for pitch control as on the Wright Flyer I and original versions of the 1909 Etrich Taube, which only had a hinged/pivoting rudder in addition to the warping-operated pitch and roll controls. A manual flight control system uses a collection of mechanical parts such as pushrods, tension cables, pulleys, counterweights, and sometimes chains to transmit the forces applied to the cockpit controls directly to the control surfaces. Turnbuckles are often used to adjust control cable tension. The Cessna Skyhawk is a typical example of an aircraft that uses this type of system. Gust locks are often used on parked aircraft with mechanical systems to protect the control surfaces and linkages from damage from wind. Some aircraft have gust locks fitted as part of the control system.
Increases in control surface area and the higher airspeeds required by faster aircraft resulted in higher aerodynamic loads on the flight control systems, and the forces required to move the surfaces became significantly larger. Consequently, complicated mechanical gearing arrangements were developed to extract maximum mechanical advantage and reduce the forces required from the pilots. This arrangement can be found on bigger or higher-performance propeller aircraft such as the Fokker 50.
Some mechanical flight control systems use servo tabs that provide aerodynamic assistance. Servo tabs are small surfaces hinged to the control surfaces. The flight control mechanisms move these tabs; the aerodynamic forces acting on the tabs in turn move, or assist the movement of, the control surfaces, reducing the mechanical force required. This arrangement was used in early piston-engined transport aircraft and in early jet transports. The Boeing 737 incorporates a system whereby, in the unlikely event of total hydraulic system failure, it automatically and seamlessly reverts to control via servo tabs.
=== Hydro-mechanical ===
The complexity and weight of mechanical flight control systems increase considerably with the size and performance of the aircraft. Hydraulically powered control surfaces help to overcome these limitations. With hydraulic flight control systems, the aircraft's size and performance are limited by economics rather than a pilot's muscular strength. At first, only partially boosted systems were used, in which the pilot could still feel some of the aerodynamic load on the control surfaces (feedback).
A hydro-mechanical flight control system has two parts:
The mechanical circuit, which links the cockpit controls with the hydraulic circuits. Like the mechanical flight control system, it consists of rods, cables, pulleys, and sometimes chains.
The hydraulic circuit, which has hydraulic pumps, reservoirs, filters, pipes, valves and actuators. The actuators are powered by the hydraulic pressure generated by the pumps in the hydraulic circuit. The actuators convert hydraulic pressure into control surface movements. The electro-hydraulic servo valves control the movement of the actuators.
The pilot's movement of a control causes the mechanical circuit to open the matching servo valve in the hydraulic circuit. The hydraulic circuit powers the actuators which then move the control surfaces. As the actuator moves, the servo valve is closed by a mechanical feedback linkage, which stops movement of the control surface at the desired position.
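The follow-up behaviour of such a servo can be sketched numerically. The example below is purely illustrative and not drawn from any aircraft's documentation; the gain, rate limit and time step are assumed values chosen only to show how the mechanical feedback linkage progressively closes the valve as the surface approaches the commanded position.

```python
# Minimal sketch of a hydro-mechanical servo modelled as a proportional
# "follow-up" loop: the feedback linkage compares the commanded and actual
# surface positions, so the valve closes and movement stops at the command.
# All numbers are illustrative assumptions.

def simulate_servo(command_deg, surface_deg=0.0, valve_gain=1.0,
                   max_rate_deg_s=40.0, dt=0.01, steps=500):
    for _ in range(steps):
        error = command_deg - surface_deg                        # linkage compares positions
        valve_opening = max(-1.0, min(1.0, valve_gain * error))  # valve travel saturates
        surface_deg += max_rate_deg_s * valve_opening * dt       # hydraulic flow moves the surface
    return surface_deg

print(f"surface settles near {simulate_servo(10.0):.2f} degrees")  # approaches the 10-degree command
```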
This arrangement was found in the older-designed jet transports and in some high-performance aircraft. Examples include the Antonov An-225 and the Lockheed SR-71.
==== Artificial feel devices ====
With purely mechanical flight control systems, the aerodynamic forces on the control surfaces are transmitted through the mechanisms and are felt directly by the pilot, allowing tactile feedback of airspeed. With hydromechanical flight control systems, the load on the surfaces cannot be felt and there is a risk of overstressing the aircraft through excessive control surface movement. To overcome this problem, artificial feel systems can be used. For example, for the controls of the RAF's Avro Vulcan jet bomber and the RCAF's Avro Canada CF-105 Arrow supersonic interceptor (both 1950s-era designs), the required force feedback was achieved by a spring device. The fulcrum of this device was moved in proportion to the square of the air speed (for the elevators) to give increased resistance at higher speeds. For the controls of the American Vought F-8 Crusader and the LTV A-7 Corsair II warplanes, a 'bob-weight' was used in the pitch axis of the control stick, giving force feedback that was proportional to the airplane's normal acceleration.
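As a rough illustration of the principle (a simplification, not the actual Avro or Vought mechanism), moving the spring fulcrum with the square of airspeed makes the artificial stick force scale approximately with dynamic pressure:

$$ F_{\text{stick}} \;\propto\; q\,\delta \;=\; \tfrac{1}{2}\rho V^{2}\,\delta $$

where \(\rho\) is the air density, \(V\) the airspeed and \(\delta\) the control deflection, so the same stick displacement meets roughly four times the resistance at twice the speed.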
==== Stick shaker ====
A stick shaker is a device that is attached to the control column in some hydraulic aircraft. It shakes the control column when the aircraft is approaching stall conditions. Some aircraft such as the McDonnell Douglas DC-10 are equipped with a back-up electrical power supply that can be activated to enable the stick shaker in case of hydraulic failure.
==== Power-by-wire ====
In most current systems the power is provided to the control actuators by high-pressure hydraulic systems. In fly-by-wire systems the valves, which control these systems, are activated by electrical signals. In power-by-wire systems, electrical actuators are used instead of hydraulic pistons. The power is carried to the actuators by electrical cables. These are lighter than hydraulic pipes, easier to install and maintain, and more reliable. Elements of the F-35 flight control system are power-by-wire. The actuators in such an electro-hydrostatic actuation (EHA) system are self-contained hydraulic devices, small closed-circuit hydraulic systems. The overall trend is towards more-electric or all-electric aircraft; an early example of the approach was the Avro Vulcan. Serious consideration was given to using the approach on the Airbus A380.
=== Fly-by-wire control systems ===
A fly-by-wire (FBW) system replaces manual flight control of an aircraft with an electronic interface. The movements of flight controls are converted to electronic signals transmitted by wires (hence the term fly-by-wire), and flight control computers determine how to move the actuators at each control surface to provide the expected response. Commands from the computers are also input without the pilot's knowledge to stabilize the aircraft and perform other tasks. Electronics for aircraft flight control systems are part of the field known as avionics.
Fly-by-optics, also known as fly-by-light, is a further development using fiber-optic cables.
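As a purely illustrative sketch of the signal path (the rate-command law, gain and limits below are assumptions for the example, not any certified aircraft's control law), a fly-by-wire computer might convert stick deflection into a pitch-rate command and drive the elevator actuator toward it:

```python
# Illustrative fly-by-wire pitch loop (not any real aircraft's control law).
# The pilot's stick position arrives as an electrical signal, the flight control
# computer turns it into a pitch-rate command, and a simple proportional law
# produces the elevator actuator command. All gains and limits are assumptions.

def fbw_pitch_command(stick_deflection, measured_pitch_rate_deg_s,
                      max_rate_deg_s=15.0, gain=2.0, elevator_limit_deg=25.0):
    commanded_rate = stick_deflection * max_rate_deg_s        # stick deflection in [-1, 1]
    rate_error = commanded_rate - measured_pitch_rate_deg_s   # sensor feedback
    elevator_cmd = gain * rate_error                          # proportional control law
    return max(-elevator_limit_deg, min(elevator_limit_deg, elevator_cmd))

# Example: half back-stick while the aircraft is already pitching up at 2 deg/s
print(fbw_pitch_command(0.5, measured_pitch_rate_deg_s=2.0))  # elevator command in degrees
```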
== Research ==
Several technology research and development efforts exist to integrate the functions of flight control systems such as ailerons, elevators, elevons, flaps, and flaperons into the wings themselves, performing the same aerodynamic purpose with lower mass, cost, drag and inertia (for faster, stronger control response), less complexity (mechanically simpler, fewer moving parts or surfaces, less maintenance), and a smaller radar cross-section for stealth. These may be used in many unmanned aerial vehicles (UAVs) and sixth-generation fighter aircraft. Two promising approaches are flexible wings and fluidics.
=== Flexible wings ===
In flexible wings, also known as "morphing aerofoils", much or all of a wing surface can change shape in flight to deflect air flow much like an ornithopter. Adaptive compliant wings are a military and commercial effort. The X-53 Active Aeroelastic Wing was a US Air Force, NASA, and Boeing effort. Notable efforts have also been made by FlexSys, who have conducted flight tests using flexible aerofoils retrofitted to a Gulfstream III aircraft.
=== Active Flow Control ===
In active flow control systems, forces on the vehicle are produced via circulation control: larger and more complex mechanical control surfaces are replaced by smaller, simpler fluidic systems (slots which emit air flows), in which larger flows are deflected intermittently by smaller jets of fluid to change the direction of the vehicle. Used in this way, active flow control promises simplicity and lower mass, cost (up to half as much), inertia and response times. This was demonstrated in the Demon UAV, which flew for the first time in the UK in September 2010.
== See also ==
Dual control (aviation)
Flight envelope protection
Flight with disabled controls
Helicopter flight controls
HOTAS
Kite control systems
List of airliner crashes involving loss of control
Matthew Piers Watt Boulton, inventor of the aileron (1868)
Thrust vectoring
Weight-shift control
Wing warping, an early method for controlling roll
== References ==
=== Notes ===
=== Bibliography ===
USAF & NATO Report RTO-TR-015 AC/323/(HFM-015)/TP-1 (2001).
== External links ==
Airbus A380 cockpit.
Airbus A380 cockpit - a 360-degree Panorama
Touchdown: the Development of Propulsion Controlled Aircraft at NASA-Dryden by Tom Tucker | Wikipedia/Aircraft_flight_control_system |
Fox Business (officially known as Fox Business Network, or FBN) is an American conservative business news channel and website publication owned by the Fox News Media division of Fox Corporation. The channel broadcasts primarily from studios at 1211 Avenue of the Americas in Midtown Manhattan. Launched on October 15, 2007, the network features trading day coverage and a nightly lineup of opinion-based talk shows.
Day-to-day operations are run by Kevin Magee, executive vice president of Fox News; Neil Cavuto was the vice president and managing editor for the network and business news operation overall.
As of February 2015, Fox Business Network is available to approximately 74,224,000 pay television households (63.8% of households with television) in the United States.
== History ==
News Corporation chairman Rupert Murdoch confirmed the launch at his keynote address at the 2007 McGraw-Hill Media Summit on February 8, 2007. Murdoch had publicly stated that if News Corporation's purchase of The Wall Street Journal went through and if it were legally possible, he would have rechristened the channel with a name that has "Journal" in it. However, on July 11, 2007, News Corporation announced that the new channel would be called Fox Business Network (FBN), a name chosen over Fox Business Channel due to the pre-existing (though seldom used) legal abbreviation of "FBC" for the co-owned broadcast network Fox Broadcasting Company.
Before the network launched, few specific facts were made public as to the type of programming approach Fox Business would be taking. However, some details emerged as to how it would differentiate itself from its main competitor, CNBC. At a media summit hosted by BusinessWeek magazine, Rupert Murdoch was quoted as saying CNBC was too "negative towards business". They promised to make Fox Business more "business friendly". In addition, it was expected that Fox Business would not be "poaching" a lot of CNBC's on-air talent in the immediate future, as most key on-air personalities had been locked into a long-term contract. However, that still left open the possibility of the network taking some of CNBC's other staff, including editors, producers and other reporters.
The channel launched on October 15, 2007. The network is placed on channel 43 in the New York City market in the basic-tier pay-TV package, which is home to the NYSE and NASDAQ stock exchanges. It is paired with sister network Fox News Channel, which moved to channel 44 (CNBC is carried on channel 15 on Time Warner Cable's New York City area systems). FBN received carriage on Cablevision channel 106, only available via subscription to its IO Digital Cable package. According to an article in Multichannel News, NBC Universal paid up to "several million dollars" in order to ensure that CNBC and Fox Business would be separated on the dial, and in order to retain CNBC's "premium" channel slot. At the time FBN was carried on Time Warner Cable only on its analog service in New York City (most systems have since switched to digital-only); in other markets, the channel's carriage was limited to premium digital cable packages at extra cost. Verizon's FiOS TV also carries the network on its premier lineup (SD channel 117 and HD channel 617). Dish Network began carrying FBN on channel 206 on February 2, 2009. FBN also received carriage on DirecTV channel 359. As its prominence grew, some providers indeed moved the channel to their basic package, and some have paired Bloomberg Television, CNBC and FBN next to each other as part of 'genre' channel maps.
On November 10, 2015, Fox Business Network, along with The Wall Street Journal, hosted its first Republican presidential primary debate, setting a ratings record for the network with 13.5 million viewers. The debate also delivered 1.4 million concurrent streams, making it the most watched livestreaming primary debate in history and beating out the 2015 Super Bowl by 100,000 streams. Fox Business Network hosted its second Republican primary debate on January 14, 2016, in Charleston, South Carolina with Neil Cavuto and Maria Bartiromo serving as moderators. Both of these primetime debates also included earlier debates featuring presidential candidates who were not ranked as highly in the national polls as well as those based in Iowa or New Hampshire.
On December 14, 2017, 21st Century Fox announced it would sell a majority of its assets to The Walt Disney Company in a transaction valued at over $52 billion. Fox Business Network was not included in the deal and was spun off to the significantly downsized Fox Corporation, along with the Fox Broadcasting Company, Fox News Channel and Fox Sports 1 and 2. The deal was approved by Disney and Fox shareholders on July 27, 2018, and was completed on March 19, 2019.
On September 29, 2019, Fox Business Network unveiled a new slogan, "Investing in You", a new on-air graphics scheme based on one recently adopted by Fox News Channel, and updated digital platforms. The channel also announced the new Friday-night program Barron's Roundtable.
== Programming and on-air staff ==
David Asman, Maria Bartiromo, Cheryl Casone, Dagen McDowell, and Stuart Varney are anchors for Fox Business Network; they also appear on Fox News Channel. Brenda Buttner was also on the FBN roster until her death in 2017.
Other anchors include Peter Barnes, Tom Sullivan, Jenna Lee, Nicole Petallides and Cody Willard. Reporters include Jeff Flock (a CNN "original"), Shibani Joshi (from News 12 Westchester), and Connell McShane (from Bloomberg Television). The network previously had former Hewlett-Packard CEO Carly Fiorina (a 2016 presidential candidate) as a contributor.
Dave Ramsey had a one-hour prime time show, similar in format to his syndicated radio show, until June 2010. Tom Sullivan broadcast his Tom Sullivan Show on the radio, with plans to syndicate the show nationwide with the assistance of Fox News Radio. Adam Shapiro (formerly with Cleveland's WEWS-TV and New York City's WNBC) was added to the Fox Business Network to report from the Washington, D.C. bureau. On October 18, 2007, former CNBC anchor Liz Claman joined the Fox Business Network as co-anchor of the 2-3 p.m. portion of the dayside business news block with David Asman. Her first assignment for FBN was an interview with Warren Buffett.
In April 2008, Brian Sullivan (no relation to Tom) joined FBN, coming over from Bloomberg Television. Sullivan, who reunited with his Bloomberg colleague Connell McShane, anchored the 10 a.m.-12 p.m. portion of the business news block with Dagen McDowell.
On May 12, 2008, Fox Business Network revamped its daytime lineup, which included the debut of two new programs, Countdown to the Closing Bell and Fox Business Bulls & Bears. On April 20, 2009, Money for Breakfast, The Opening Bell on Fox Business (both hosted by Alexis Glick), The Noon Show with Tom Sullivan and Cheryl Casone, Countdown to the Closing Bell, Fox Business Bulls & Bears, and Cavuto all moved to the network's new Studio G set. All six of those shows shared the same set in Studio G, which was unveiled on Money for Breakfast the same day.
In September 2009, Don Imus and FBN reached an agreement to carry his show, Imus in the Morning, on Fox Business. The show began airing on October 5, 2009. Fox had previously been in negotiations with Imus to bring his show to the network. In November 2007 (when Imus was just returning to radio, and Fox Business was just starting), negotiations fell through and Imus instead signed with rural-oriented network RFD-TV.
On December 23, 2009, Alexis Glick left FBN. Announcing that that day's episode of The Opening Bell would be her last, she said "I know this is not the norm, but I don't believe in abrupt departures." The only reason given by Glick for her departure was that she was leaving to "embark on a new venture," but a number of sources have noted that Don Imus' new morning show had a significant effect upon Glick's screen time since he signed with the network.
On November 10, 2010, FBN announced that former CNN anchor Lou Dobbs would join the channel. His program, Lou Dobbs Tonight, moved to FBN in March 2011.
On February 24, 2014, former CNBC host Maria Bartiromo moved to FBN, where she would host Opening Bell with Maria Bartiromo, and become a Fox News contributor.
In April 2015, it was reported that Fox Business would drop the Imus in the Morning simulcast, as Imus was planning to move from New York City to Texas. On May 11, the network officially announced a new daytime lineup that would begin June 1: FBN AM would air from 5-6 a.m. ET, and Bartiromo moved to the 6-9 a.m. ET timeslot formerly held by Imus to host Mornings with Maria. Varney & Company was moved up to 9 a.m. and expanded to three hours; Neil Cavuto would host the new midday program Cavuto: Coast to Coast; Trish Regan (moving from Bloomberg Television) would host the new afternoon program The Intelligence Report; and Melissa Francis moved to co-anchor After the Bell alongside David Asman.
Former UK Independence Party head Nigel Farage was announced as a commentator on January 20, 2017, the day of Donald Trump's presidential inauguration. Farage will provide political analysis for both Fox Business and Fox News.
=== Sports programming ===
Fox Business Network has occasionally served as an overflow channel for Fox Sports telecasts in the event of programming conflicts across Fox, Fox Sports 1, and Fox Sports 2, particularly for college football. For instance, in 2017, a game between Baylor and Oklahoma State aired on Fox Business due to a weather-delayed game on FS1. It was reported in May 2018 that, following a controversial decision in November 2017 to move the first quarter of a Pac-12 football game between Washington and Stanford from FS1 to FS2 (which does not have wide carriage) due to a NASCAR Camping World Truck Series overrun, Fox would prefer to use FBN for future Pac-12 overflow situations, as it has significantly wider distribution than FS2 (and possibly slightly wider than FS1 in terms of total households) and would carry minimal impact to programming.
In 2025, Fox Business will expand its sports coverage for the first time. In March, the network will air the MotoGP Grand Prix of Argentina. In June, the network will air second-round coverage of the LIV Golf tournament from Robert Trent Jones Golf Club in the Washington, D.C. area.
== On-air staff ==
=== Anchors/hosts ===
=== Reporters ===
These reporters are based in New York unless otherwise stated.
=== Contributors ===
Nigel Farage
Jonathan Hoenig
Dave Ramsey
=== Former on-air staff ===
Deirdre Bolton (2014–2020), now with ABC News
Brenda Buttner (2007–2016, deceased)
Neil Cavuto (2007–2024)
Sean Duffy (2023–2024), The Bottom Line; now United States Secretary of Transportation
Melissa Francis (2012–2020), now with Newsmax
Alexis Glick (2007–2009), Money for Breakfast and The Opening Bell on Fox Business; no longer in the television industry
Lou Dobbs (2011–2021, deceased)
Terry Keenan, host of Cashin' In (2002–2009, deceased)
John Layfield, now at WWE
Jenna Lee (2007–2010), Fox Business Morning; no longer in the television industry
Connell McShane (2007–2023), now with NewsNation
Nicole Petallides (2007–2018), now with TD Ameritrade Network
Trish Regan (2015–2020)
Adam Shapiro, now with Yahoo! Finance
John Stossel (2009–2016)
Brian Sullivan (2008–2011), now with CNBC
Tom Sullivan (2007–2017), now hosts a syndicated talk radio program
== Ratings ==
On January 4, 2008, The New York Times and several other media outlets reported that FBN had registered an average of 6,300 viewers, far below Nielsen's 35,000-viewer threshold. The number was so low that neither Nielsen nor FBN was allowed to confirm it. The Times and other media outlets noted that the network was less than four months old and was available in only one-third as many households as CNBC.
In July 2008, Nielsen estimated that FBN averaged 8,000 viewers per daytime hour and 20,000 per prime time hour, compared to 284,000 and 191,000 (respectively) for CNBC. Because FBN's viewership remained low, Nielsen had difficulty estimating viewership, and the estimates are not statistically significant. At the time, FBN was available in approximately 40 million homes to CNBC's over 90 million.
In the fall of 2008, FBN was losing to CNBC in the ratings by more than 10 to 1.
By June 2009, ratings estimates showed FBN with an average of 21,000 viewers between 5 a.m. and 9 p.m., still under the Nielsen threshold and less than 10% of CNBC's 232,000 for the same time span. At this point, FBN was available in about 49 million U.S. homes.
Reports of ratings from the first episode of Imus in the Morning reported an average of 177,000 viewers (and a peak of 202,000 in the 7:00 a.m. hour) in the time slot, mostly over the age of 65; this was a more than tenfold increase compared to the network's previous morning show, Money for Breakfast. The program even beat CNBC's Squawk Box in the time slot.
In 2012, Lou Dobbs Tonight was challenging CNBC's Larry Kudlow, earning 141,000 total viewers on Fox Business Network.
In the first quarter of 2016, FBN experienced the strongest ratings in its history, with daytime programming up 111 percent in total viewers and 130 percent in the key 25-to-54 age demographic compared with a year earlier.
As of August 2017, Fox Business had surpassed CNBC's ratings for nine consecutive months, and Lou Dobbs Tonight was the most-watched program in business news. CNBC announced in 2015 that it would no longer rely on Nielsen ratings to measure its daytime audience, turning to rival Cogent Reports instead.
== Controversies ==
=== COVID-19 pandemic ===
On March 27, 2020, Trish Regan departed the network, amid criticism of a segment on the March 9 episode of Trish Regan Primetime in which she accused Democrats of exploiting the COVID-19 pandemic solely to blame President Donald Trump for it and to launch another round of impeachment hearings.
On December 23, 2020, Mornings with Maria aired an interview with a person who claimed to be Smithfield Foods CEO Dennis Organ, but was actually an animal rights activist from Direct Action Everywhere who warned that the meat packing industry could "effectively [bring] on the next pandemic". Bartiromo issued a correction at the end of the show, admitting that they had been "punked".
=== Smartmatic election fraud claims ===
In November 2020, Fox Business anchors Maria Bartiromo and Lou Dobbs promoted conspiracy theories during their programs, tying the voting machine manufacturer Smartmatic to voter fraud during the 2020 presidential election. This included claims that it had ties to Dominion Voting Systems and the country of Venezuela. In December 2020, Smartmatic requested a retraction of the coverage by Fox Business, Fox News, Newsmax and One America News Network, stating that it was "false and defamatory". To comply with the request, the two anchors' programs, as well as that of Fox News anchor Jeanine Pirro, all aired a pre-recorded interview with Edward Perez, an election technology expert at the Open Source Election Technology Institute, which fact-checked various election fraud claims (including those surrounding Smartmatic).
On February 4, 2021, Smartmatic filed a $2.7 billion lawsuit against Fox News Media for defamation, specifically naming Bartiromo, Dobbs, and Pirro. The next day, Fox Business abruptly canceled Lou Dobbs Tonight.
== Availability ==
=== Outside the United States ===
On April 20, 2009, the Canadian Radio-television and Telecommunications Commission approved Fox Business Network for distribution in Canada; it is currently available through Rogers Cable's 'Ignite TV' service.
As of July 2011, the channel is carried on Sky Italia (a fellow News Corporation company at the time), its first European carriage deal. Fox Business HD was first broadcast in Israel by cable provider Hot in 2015, and it is also carried by Cellcom TV and Partner TV.
In Australia, Sky News Business Channel (subsequently relaunched as Your Money in October 2018) simulcast Fox Business Network during overnight hours since its launch in January 2008, until the channel was closed down in May 2019. The channel was operated by Australian News Channel Pty Ltd, which was partly owned by Sky plc in the United Kingdom (a fellow 21st Century Fox company at the time) until December 2016, when News Corp Australia (a fellow Rupert Murdoch company) acquired the Australian broadcaster in its entirety.
In Central America, the Dominican Republic and the Caribbean, Fox Business is carried by Central American, Dominican and Caribbean TV operators. In Costa Rica, Fox Business is available for streaming on the Fox News International mobile application and the channel is carried by Costa Rican TV operators.
In Mexico, Fox Business is available for streaming on the Fox News International mobile application and the channel is carried by Mexican TV operators.
In Spain, Fox Business is available for streaming on the Fox News International mobile application and the channel is carried by other Spanish TV operators.
In the United Kingdom, Fox Business is available for streaming on the Fox News International mobile application.
=== Dispute with Spectrum ===
In 2018, Fox pulled FBN and FNC from Spectrum cable systems amid a carriage dispute, with both channels removed on May 1, 2018. However, both channels returned to Spectrum on May 10, 2018.
== High definition ==
The high-definition simulcast of Fox Business Network is broadcast in 720p. Programming shown on this feed was originally produced in high-definition, but was cropped to a 4:3 image and pushed to the left side of the screen, with the extra room used for additional content, such as statistics and charts, and a wider ticker with more room; the information sidebar was named "The Fox HD Wing" (competitor channel CNBC HD used the enhanced HD format until October 13, 2014, when it was discontinued altogether).
The sidebar graphic was dropped as a result of the network's switch to a 16:9 letterboxed format on September 17, 2012, ending the enhanced HD format altogether. The enhanced ticker and headlines, which were previously seen in the old sidebar graphic, were moved to the lower-third of the screen. Both the SD and HD feeds now use the same exact 16:9 letterbox format, just like its other Fox-owned sister networks.
== The Fox 50 ==
The Fox 50 is an industrial index of large companies that is used by FBN; it consists primarily of "the largest U.S. companies that make the products you know and use every day."
Anheuser-Busch and Merrill Lynch were included in the original index, but each was acquired by other companies in 2008. They were replaced by Wells Fargo and HP. In March 2011, CBS Corporation, Charles Schwab Corporation, Lowe's, Sprint Nextel, and Yahoo! were removed, and replaced by DuPont, Ford, JPMorgan, Pfizer, and UnitedHealth Group.
This index is not available to purchase in the form of an index fund or ETF. The fund received criticism from some financial bloggers for putting together an index with so many competing brands (such as FedEx and UPS; McDonald's and Yum! Brands; WalMart, Target and Costco; Apple, Dell and Microsoft; and Coca-Cola and PepsiCo).
== Competitors ==
Bloomberg Television
CNBC
== References ==
== External links ==
Official website | Wikipedia/Fox_Business_Network |
Cenovus Energy Inc. (pronounced se-nō-vus) is a Canadian integrated oil and natural gas company headquartered in Calgary, Alberta. Its offices are located at Brookfield Place, having completed a move from the neighbouring Bow in 2019.
== History ==
Cenovus was formed in 2009 when Encana Corporation split into two distinct companies, with Cenovus becoming focused on oil sands assets.
In 2017, Cenovus purchased ConocoPhillips' 50 percent share of their Foster Creek Christina Lake (FCCL) oil sands projects and most of their conventional assets in Alberta and British Columbia, including the Deep Basin. Cenovus completed the acquisition of Husky Energy for C$3.9 billion in stock in January 2021. The combined company is Canada’s third-largest crude oil and natural gas producer and the second-largest Canadian-based refiner and upgrader.
== Operations ==
=== Oil sands ===
Cenovus has four producing projects in the oil sands – Foster Creek, Christina Lake (Alberta), Sunrise (jointly owned with BP Canada and operated by Cenovus) and Tucker. All projects use the drilling method of steam-assisted gravity drainage (SAGD). On May 17, 2017, Foster Creek and Christina Lake became 100 percent owned and operated by Cenovus. In December 2021, Cenovus announced the sale of the Tucker oil sands project to Strathcona Resources. In June 2022, Cenovus announced it would acquire the outstanding 50% interest in the Sunrise oil sands asset and assume full ownership.
=== Conventional oil and gas ===
Cenovus once held conventional oil and natural gas operations across Alberta and Saskatchewan, including the Weyburn oilfield in Saskatchewan, which is the largest CO2 enhanced oil recovery operation in Canada. It is also the site of the largest geological greenhouse gas storage project in the world, with about 30 million tonnes of CO2 safely stored underground and extensively studied by researchers as part of the International Energy Agency Greenhouse Gas Weyburn-Midale CO2 Monitoring and Storage Project.
In May 2017, Cenovus assumed ownership of ConocoPhillips' conventional assets in Alberta and British Columbia. Cenovus’s current conventional assets include the Deep Basin, a liquids-rich natural gas fairway located in northwestern Alberta and northeastern British Columbia, and the Marten Hills heavy oil project. The Deep Basin asset comprises approximately 2.8 million net acres of land and produced more than 125,000 barrels of oil equivalent. Cenovus also holds a significant land position in the Marten Hills region for potential development. In November 2020, Cenovus announced the sale of the Marten Hills assets to Headwater Exploration Inc.
=== Refining ===
Following the acquisition of Husky Energy in January 2021, Cenovus became Canada’s second-largest Canadian-based refiner and upgrader. Cenovus owns the Lima Refinery in Lima, Ohio, the Superior Refinery in Superior, Wisconsin and the Lloydminster refinery in Lloydminster, Alberta and upgrader in Lloydminster, Saskatchewan. Cenovus has 50 percent ownership in two refineries in the United States: the Wood River Refinery and Borger, Texas refinery. Phillips 66 is the co-owner and operator. In August 2022, Cenovus reached an agreement to purchase BP's 50% interest in the BP-Husky Toledo Refinery in Toledo, Ohio. Cenovus has owned the other 50% of the refinery since its combination with Husky Energy in 2021.
=== Transportation ===
Cenovus owns a crude-by-rail loading facility near Edmonton, Alberta – the Bruderheim Energy Terminal. The company was recognized for its rail safety performance in 2016, and for safe transportation of chemical products in 2017.
=== Retail ===
Cenovus owns a group of travel centres under the Husky brand, which were included in its acquisition of Husky Energy. They offer fuels under the Esso brand.
== Technology ==
The primary technology Cenovus uses at its Foster Creek and Christina Lake projects is called steam-assisted gravity drainage (SAGD). Cenovus also applies different associated technologies to enhance the SAGD process, such as electric submersible pumps at Foster Creek and solvent aided process (SAP) at Christina Lake.
In 2011, the company began applying its blowdown boiler technology to improve the efficiency of water use at its oil sands operations. In 2013, Cenovus developed its "SkyStrat" drilling rig that allows an exploratory rig to be flown into remote areas by helicopter piece-by-piece, set up to drill a test well, dismantled and airlifted away. The process requires no roads, meaning little disturbance to the boreal forest. The company received an Environmental Performance award for the SkyStrat program.
=== Potential mitigation of climate impacts ===
Cenovus is a member of Oil Sands Pathways to Net Zero initiative, an alliance of oil sands companies working collectively with the federal and Alberta governments to achieve net zero greenhouse gas (GHG) emissions from the companies oil sands operations by 2050. According to Cenovus's Chief Sustainability Officer, the company is pursuing government support for decarbonization efforts, because "[t]hese are not projects that make revenue. So for a corporation that is owned by shareholders to put 100 per cent of the costs into a project that doesn’t bring any revenue back, that is not something that a corporation can do." However, a report by the Canadian Institute for Climate Choices, a source of independent analysis on climate change issues funded by Environment Canada, recommended investing limited public dollars to capture "a share of growing, transition-opportunity markets" rather than in "assets at elevated risk of being stranded in global low-carbon scenarios" as fossil fuel demand "inevitably decline[s] globally".
== Leadership ==
=== Chairman of the Board ===
Michael Anthony Grandin, 2009–2017
Patrick Darold Daniel, 2017–2020
Keith Allan John MacPhail, 2020–2023
Alexander John Pourbaix, 2023–
== See also ==
Axe Lake Aerodrome
Husky Energy
Esso
== References ==
== External links ==
Official website | Wikipedia/Cenovus_Energy |
A corporate action is an event initiated by a public company that brings or could bring an actual change to the securities (equity or debt) issued by the company. Corporate actions are typically agreed upon by a company's board of directors and authorized by the shareholders. For some events, shareholders or bondholders are permitted to vote on the event. Examples of corporate actions include stock splits, dividends, mergers and acquisitions, rights issues, and spin-offs.
Some corporate actions such as a dividend (for equity securities) or coupon payment (for debt securities) may have a direct financial impact on the shareholders or bondholders; another example is a call (early redemption) of a debt security. Other corporate actions such as a stock split may have an indirect financial impact, as the increased liquidity of shares may cause the price of the stock to decrease. Some corporate actions, such as name changes or ticker symbol changes to better reflect a company's business focus, have no direct financial impact on the shareholders; securities may, however, be listed under a different security identifier (e.g. ISIN, CUSIP, SEDOL). For example, "Apple Computer" changed its name to Apple Inc.
== Overview ==
=== Types ===
There are three types of corporate actions: voluntary, mandatory, and mandatory with choice.
Mandatory corporate action: A mandatory corporate action is an event initiated by the board of directors of the corporation that affects all shareholders, and participation in it is mandatory. An example of a mandatory corporate action is a cash dividend: a shareholder does not need to act to receive the dividend. Other examples of mandatory corporate actions include stock splits, mergers, pre-refunding, return of capital, bonus issues, asset ID changes, and spin-offs. Strictly speaking, the word "mandatory" is not entirely accurate, because the shareholder is not required to do anything; in all the cases cited above, the shareholder is simply a passive beneficiary of the action.
Voluntary corporate action: A voluntary corporate action is an action where the shareholders elect to participate in the action. A response is required for the corporation to process the action. An example of a voluntary corporate action is a tender offer. A corporation may request shareholders to tender their shares at a predetermined price. The shareholder may or may not participate in the tender offer. Shareholders send their responses to the corporation's agents, and the corporation will send the proceeds of the action to the shareholders who elect to participate.
Mandatory with choice corporate action: This corporate action is a mandatory corporate action where shareholders are given a chance to choose among several options. An example is a cash-or-stock dividend option, with one of the options set as the default. Shareholders may or may not submit their elections; if a shareholder does not submit an election, the default option is applied.
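The election handling described above can be pictured with a small sketch. The example below is purely illustrative: the holder names, options and default are hypothetical, and real processing systems (for instance those exchanging ISO 15022 messages) are considerably more involved.

```python
# Illustrative only: applying shareholder elections for a "mandatory with choice"
# action (e.g. a cash-or-stock dividend) where a missing election falls back to
# the issuer's default option. All names and values are hypothetical.

DEFAULT_OPTION = "cash"

# Elections received by the deadline; HOLDER_B responded with no choice recorded.
elections = {"HOLDER_A": "stock", "HOLDER_B": None}

# All holders entitled to the action; HOLDER_C never responded at all.
holders = ["HOLDER_A", "HOLDER_B", "HOLDER_C"]

# Missing or empty elections receive the default option.
applied = {h: (elections.get(h) or DEFAULT_OPTION) for h in holders}

print(applied)  # {'HOLDER_A': 'stock', 'HOLDER_B': 'cash', 'HOLDER_C': 'cash'}
```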
Some market participants use a different method to distinguish the corporate action types. For example, "mandatory corporate action" and "mandatory with choice corporate action" may be used together. DTC uses the terms distributions, redemptions and reorganizations.
=== Purpose ===
The primary reasons companies use corporate actions are:
Return profits to shareholders: Cash dividends are a classic example where a public company declares a dividend to be paid on each outstanding share. Bonus is another case where the shareholder is rewarded. In a stricter sense, the bonus issue should not impact the share price but in reality, in rare cases, it does and results in an overall increase in value.
Influence the share price: If the price of a stock is too high or too low, the liquidity of the stock suffers. Stocks priced too high will not be affordable to all investors, and stocks priced too low may be delisted. Corporate actions such as stock splits or reverse stock splits increase or decrease the number of outstanding shares in order to decrease or increase the stock price respectively. Buybacks are another way of influencing the stock price: a corporation buys back shares from the market to reduce the number of outstanding shares, thereby increasing the price (a brief numerical sketch follows this list).
Corporate restructuring: Corporations restructure in order to increase profitability. Examples include mergers (where two companies that are competitive or complementary join forces) and spin-offs (where a company breaks itself up in order to focus on its core competencies).
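The share-price mechanics mentioned in the list above can be illustrated with hypothetical numbers; the share count, price, split ratio and buyback size below are invented for the example, and any price movement beyond the pure arithmetic is a market reaction rather than part of the corporate action itself.

```python
# Hypothetical figures for illustration only: how a 2-for-1 split and a buyback
# change the share count and the mechanically adjusted per-share price, leaving
# market capitalization unchanged at the moment of the split.

shares_outstanding = 1_000_000
price_per_share = 300.00
market_cap = shares_outstanding * price_per_share

# 2-for-1 stock split: twice as many shares, each worth half as much.
split_ratio = 2
shares_after_split = shares_outstanding * split_ratio
price_after_split = price_per_share / split_ratio
assert shares_after_split * price_after_split == market_cap

# A buyback then retires 50,000 shares, reducing the outstanding count.
buyback_shares = 50_000
shares_after_buyback = shares_after_split - buyback_shares

print(shares_after_split, price_after_split, shares_after_buyback)
# 2000000 150.0 1950000
```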
=== Impact ===
As an owner, the impact of a corporate action is usually measured in terms of changes to the securities and/or cash positions, so corporate actions can be divided into two categories:
Benefits: Actions that result in an increase to the position holder's securities or cash position, without altering the underlying security. Examples include bonus issues, which are mandatory with choice actions/events.
Reorganizations: Actions that reshape or restructure the beneficial owner's underlying securities position, which sometimes also results in a cash payout. Examples include equity restructures, conversions, and subscriptions.
== Notification requirement ==
In order to keep investors and the market informed of corporate actions, they need to be announced. For public companies listed on exchanges, the exchanges themselves handle the announcement, notifying shareholders as well as making information about the corporate action available online. For companies that trade in the over-the-counter (OTC) marketplace, U.S. federal securities regulators task Financial Industry Regulatory Authority (FINRA), a self-regulatory organization, with processing the corporate action announcement.
The event information flow for public companies where shareholders or bondholders can vote usually involves numerous parties. The information is first announced by the company to the exchange. Financial data companies which provide economic and financial data to customers collect such information and disseminate it via their own services to banks, institutional investors, managed service providers, and other market participants. In addition, the central securities depository (CSD) of the respective market collects the data and informs the CSD participants holding the respective share or bond in custody about the upcoming corporate action. The CSD sets a deadline for its participants by which the elections must be returned. The CSD participants then further disseminate the information to its clients (e.g. banks, institutional investors or private clients), which in turn must submit their election by the deadline set by the CSD participant.
== References ==
== External links ==
Securities Market Practice Group List of Corporate Action Events
Corporate Actions Glossary
List of voluntary corporate actions
ISO 15022 MT564 message format for corporate actions data messages
SIX Financial Information Corporate Actions data offering
Assessing the Risk in the Corporate Actions Process: Industry Insight | Wikipedia/Corporate_action
Corporate synergy is a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. Corporate synergy occurs when corporations interact congruently with one another, creating additional value.
Synergies are divided into two groups: operational (revenue enhancement and cost reduction) and financial (decrease in cost of capital, tax benefits). Seeking synergies is a nearly ubiquitous feature and motivation of corporate mergers and acquisitions and is an important negotiating point between the buyer and seller that impacts the final price both parties agree to; see Mergers and acquisitions § Business valuation.
The synergy value should not be confused with the control premium; these metrics should be calculated separately.
Positive synergies arise when the combined corporation will bring about better results than the two independent corporations, as in the saying "the whole is better than the sum of the parts". If the corporations do not do due diligence, negative synergies may arise, in which the corporations would have been better off existing on their own.
== Cost ==
A cost synergy refers to the opportunity of a combined corporate entity to reduce, or eliminate, expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicated within the merged entity. Examples include the headquarters of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale. A study by the global consultancy McKinsey found that companies sometimes go too far in cutting costs and make cost reduction their main goal after merging, and then suffer because the day-to-day activities that bring in revenue are neglected. For example, when Kraft took over Cadbury, it tried to reduce costs by shutting down a factory that employed 400 staff. This created further problems: Cadbury's staff became uncertain about their job security, and the resulting fears changed their attitude to work.
== Advantages ==
=== Managerial synergy ===
An increase in managerial effectiveness, which is required for the success of a corporation, will result in more innovative ideas that improve the performance of the corporation as a whole. Synergies therefore produce more creative ideas, and people are more likely to take risks because the merging of ideas generates more innovative solutions than working alone (Hunt & Osborn, 1991). Synergy thus results in the strength of one corporation complementing the other.
Thus, corporate synergies are able to overcome problems faced by independent firms and are able to reach positions that could take six years if these firms existed independently. Subsidiaries are offered the most advantages.
=== Tax advantages ===
The amount of tax a corporation pays is based on the amount of profit it makes. A profitable corporation could therefore merge with a corporation operating at a loss in order to reduce its tax burden; however, this practice has been discouraged.
=== Increase in size ===
Corporate synergies due to mergers result in a larger firm, which some investors perceive as more attractive. A larger firm also gains a competitive advantage within its industry, as higher market share allows it to be more dominant and to exert greater control over the market.
== Disadvantages with corporate synergy ==
Managerial bias can conflict with the aims of synergies, because executives come to view delivering the advantages that synergies bring as their own job, which distorts their thinking away from the most important aspects. These biases include:
=== Synergy bias ===
Managers consciously or unconsciously underestimate the costs of a synergy and overestimate its benefits in order to justify going ahead with it, whether or not the benefits will actually outweigh the costs. Some executives treat achieving synergies as a measure of their own success and therefore make it their most important priority. A 2012 survey by Bain & Company found that overestimating synergies was the second biggest cause of post-deal disappointment.
=== Parenting bias ===
Managers compel the business units to cooperate in the synergy. It encourages executive managers to intervene greatly, which could lead to more harm than good.
=== Skills bias ===
Managers assume that the know-how required for the synergy exists within the organization, and often this is not the case. This bias goes hand in hand with parenting bias: if managers intervene to make synergies occur, they tend to assume that the corporation has the skills required, thereby overlooking the skills gap. This makes it difficult for a positive synergy to occur and may make the combined corporation a waste of resources, resulting in a negative synergy.
=== Upside bias ===
Executives concentrate on the benefits of the synergy and ignore or overlook its potential drawbacks. "In large part, this upside bias is a natural accompaniment to the synergy bias: if parent managers are inclined to think the best of synergy, they will look for evidence that backs up their position while avoiding evidence to the contrary."
== References == | Wikipedia/Corporate_synergy |
A corporate identity or corporate image is the manner in which a corporation, firm or business enterprise presents itself to the public. The corporate identity is typically visualized by branding and with the use of trademarks, but it can also include things like product design, advertising, public relations etc. Corporate identity is a primary goal of corporate communication, aiming to build and maintain company identity.
In general, this amounts to a corporate title, logo (logotype and/or logogram) and supporting devices commonly assembled within a set of corporate guidelines. These guidelines govern how the identity is applied and usually include approved color palettes, typefaces, page layouts, fonts, and others.
== Integrated marketing communications (IMC) ==
Corporate identity is the set of multi-sensory elements that marketers employ to communicate a visual statement about the brand to consumers. These multi-sensory elements include but are not limited to company name, logo, slogan, buildings, décor, uniforms, company colors and in some cases, even the physical appearance of customer-facing employees. Corporate Identity is either weak or strong; to understand this concept, it is beneficial to consider exactly what constitutes a strong corporate identity.
Consonance, in the context of marketing, is a unified message offered to consumers from all fronts of the organization (Laurie & Mortimer, 2011). In the context of corporate identity, consonance is the alignment of all touch points. For example, Apple has strong brand consonance because at every point at which the consumer interacts with the brand, a consistent message is conveyed. This is seen in Apple TV advertisements, the Apple Store design, the physical presentation of customer facing Apple employees and the actual products, such as the iPhone, iPad and MacBook laptops. Every Apple touch point is communicating a unified message: From the advertising of the brand to the product packaging, the message sent to consumers is 'we are simple, sophisticated, fun and user friendly'. Brand consonance solidifies corporate identity and encourages brand acceptance, on the grounds that when a consumer is exposed to a consistent message multiple times across the entirety of a brand, the message is easier to trust and the existence of the brand is easier to accept. Strong brand consonance is imperative to achieving strong corporate identity.
Strong consonance, and in turn, strong corporate identity can be achieved through the implementation and integration of integrated marketing communications (IMC). IMC is a collective of concepts and communications processes that seek to establish clarity and consistency in the positioning of a brand in the mind of consumers. As espoused by Holm (cited in Laurie & Mortimer, 2011), at its ultimate stage, IMC is implemented at a corporate level and consolidates all aspects of the organization; this initiates brand consonance which in turn inspires strong corporate identity. To appreciate this idea fully, it is important to consider the different levels of IMC integration.
The communication-based model, advanced by Duncan and Moriarty (as cited in Laurie & Mortimer, 2011) contends that there are three levels of IMC integration; Duncan and Moriarty affirm that the lowest level of IMC integration is level one where IMC decisions are made by marketing communication level message sources. These sources include personal sales, advertising, sales promotion, direct marketing, public relations, packaging and events departments. The stake holders concerned at this stage are consumers, local communities, media and interest groups (Duncan and Moriarty, 1998 as cited in Laurie & Mortimer, 2011). At the second stage of IMC integration, Duncan and Moriarty (as cited in Laurie & Mortimer, 2011) establish that level one integration departments still have decision-making power but are now guided by marketing level message sources. At stage two integration the message sources are those departments in which product mix, price mix, marketing communication and distribution mix are settled; appropriately, stakeholders at this stage of integration are distributors, suppliers and competition (Duncan and Moriarty, 1998 as cited in Laurie & Mortimer, 2011). It is at this stage of integration that consumers interact with the organization (Duncan and Moriarty, 1998 as cited in Laurie & Mortimer, 2011). Moving forward, the last stage of Duncan and Moriarty's Communication Based Model (as cited in Laurie and Mortimer, 2011) is stage three where message sources are at the corporate level of the organization; these message sources include administration, manufacturing operations, marketing, finance, human resources and legal departments. The stakeholders at this level of IMC integration are employees, investors, financial community, government and regulators (Duncan and Moriarty, 1998 as cited in Laurie & Mortimer, 2011). At the final stages of IMC integration, IMC decisions are made not only by corporate level departments but also by departments classified in stages one and two. It is the inclusion of all organizational departments by which a horizontal, non linear method of communication with consumers is achieved. By unifying all fronts of the marketing firm, communications are synchronized to achieve consistency, consonance and ultimately strong corporate identity.
== Organizational point of view ==
In a recent monograph on Chinese corporate identity (Routledge, 2006), Peter Peverelli proposes a new definition of corporate identity, based on the general organization theory proposed in his earlier work, in particular Peverelli (2000). This definition regards identity as a result of social interaction:
Corporate identity is the way corporate actors (actors who perceive themselves as acting on behalf of the company) make sense of their company in ongoing social interaction with other actors in a specific context. It includes shared perceptions of reality, ways-to-do-things, etc., and interlocked behavior.
In this process, the corporate actors are as important as those others; corporate identity pertains to the company (the group of corporate actors) as well as to the relevant others.
=== Best practices ===
The following four key brand requirements are critical for a successful corporate identity strategy.
Differentiation. In today's highly competitive market, brands need to have a clear differentiation or reason for being. What they represent needs to stand apart from others in order to be noticed, make an impression, and to ultimately be preferred.
Relevance. Brands need to connect to what people care about in the world. To build demand, they need to understand and fulfill the needs and aspirations of their intended audiences.
Coherence. To assure credibility with their audiences, brands must be coherent in what they say and do. All the messages, all the marketing communication, all the brand experiences, and all of the product delivery need to hang together and add up to something meaningful.
Esteem. A brand that is differentiated, relevant and coherent is one that is valued by both its internal and external audiences. Esteem is the reputation a brand has earned by executing clearly on both its promised and delivered experience.
== Visual identity ==
Corporate visual identity plays a significant role in the way an organization presents itself to both internal and external stakeholders. In general terms, a corporate visual identity expresses the values and ambitions of an organization, its business, and its characteristics. Four functions of corporate visual identity can be distinguished. Three of these are aimed at external stakeholders.
First, a corporate visual identity provides an organization with visibility and "recognizability". For virtually all profit and non-profit organizations, it is of vital importance that people know that the organization exists and remember its name and core business at the right time.
Second, a corporate visual identity symbolizes an organization for external stakeholders, and, hence, contributes to its image and reputation (Schultz, Hatch and Larsen, 2000). Van den Bosch, De Jong and Elving (2005) explored possible relationships between corporate visual identity and reputation, and concluded that corporate visual identity plays a supportive role in corporate reputation.
Third, a corporate visual identity expresses the structure of an organization to its external stakeholders, visualizing its coherence as well as the relationships between divisions or units. Olins (1989) is well known for his "corporate identity structure", which consists of three concepts: monolithic identity, for companies which operate under a single brand; branded identity, in which different brands are developed for parts of the organization or for different product lines; and endorsed identity, in which different brands are (visually) connected to each other. Although these concepts introduced by Olins are often presented as the corporate identity structure, they merely provide an indication of the visual presentation of (parts of) the organization. It is therefore better to describe it as a "corporate visual identity structure".
A fourth, internal function of corporate visual identity relates to employees' identification with the organization as a whole and/or the specific departments they work for (depending on the corporate visual strategy in this respect). Identification appears to be crucial for employees, and corporate visual identity probably plays a symbolic role in creating such identification.
The definition of the corporate visual identity management is:
Corporate visual identity management involves the planned maintenance, assessment and development of a corporate visual identity as well as associated tools and support, anticipating developments both inside and outside the organization, and engaging employees in applying it, with the objective of contributing to employees' identification with and appreciation of the organization as well as recognition and appreciation among external stakeholders.
Special attention is paid to corporate identity in times of organizational change. Once a new corporate identity is implemented, attention to corporate identity related issues generally tends to decrease. However, corporate identity needs to be managed on a structural basis, to be internalized by the employees and to harmonize with future organizational developments.
Efforts to manage the corporate visual identity will result in more consistency and the corporate visual identity management mix should include structural, cultural and strategic aspects. Guidelines, procedures and tools can be summarized as the structural aspects of managing the corporate visual identity.
However, as important as the structural aspects may be, they must be complemented by two other types of aspects. Among the cultural aspects of corporate visual identity management, socialization – i.e., formal and informal learning processes – turned out to influence the consistency of a corporate visual identity. Managers are important as role models, and they can clearly set an example. This implies that they need to be aware of the impact of their behavior, which has an effect on how employees behave. If managers pay attention to the way they convey the identity of their organization, including the use of a corporate visual identity, this will have a positive effect on the attention employees give to the corporate visual identity.
Further, it seems to be important that the organization communicates the strategic aspects of the corporate visual identity. Employees need to have knowledge of the corporate visual identity of their organization – not only the general reasons for using the corporate visual identity, such as its role in enhancing the visibility and "recognizability" of the organization, but also aspects of the story behind the corporate visual identity. The story should explain why the design fits the organization and what the design – in all of its elements – is intended to express.
== Corporate colors ==
Corporate colors (or company colors) are one of the most instantly recognizable elements of a corporate visual identity and promote a strong non-verbal message on the company's behalf. Examples of corporate colors:
Red for Coca-Cola and SMRT
Blue for IBM, nicknamed "Big Blue"
Brown for UPS, "What can Brown do for you"
Blue for Korean Air
Purple and Orange for SBS Transit
== Visual identity history ==
Nearly 7,000 years ago, Transylvanian potters inscribed their personal marks on the earthenware they created. If one potter made better pots than another, naturally, his mark held more value than his competitors'. Religions created some of the most recognized identity marks: the Christian cross, the Judaic Star of David, and the Islamic crescent moon. In addition, Kings and nobles in medieval times had clothing, armor, flags, shields, tableware, entryways, and manuscript bindings that all bore coats of arms and royal seals. The symbols depicted a lord's lineage, aspirations, familial virtues, as well as memoirs to cavalry, infantry, and mercenaries of who they were fighting for on the battlefields.
A trademark became a symbol of individuals' professional qualifications to perform a particular skill by the 15th century. For example, the Rod of Asclepius on a physician's sign signified that the doctor was a well-trained practitioner of the medical arts. Simple graphics such as the caduceus carried so much socio-economic and political weight by the 16th century, that government offices were established throughout Europe to register and protect the growing collection of trademarks used by numerous craft guilds.
The concept of visually trademarking one's business spread widely during the Industrial Revolution. The shift of business in favor of non-agricultural enterprise caused business, and corporate consciousness, to boom. Logo use became a mainstream part of identification, and over time, it held more power than being a simple identifier. Some logos held more value than others, and served more as assets than symbols.
Logos are now the visual identifiers of corporations. They became components of corporate identities by communicating brands and unifying messages. Logos commonly function as a solution to the challenge of distinguishing one brand from another. The evolution of symbols went from a way for a king to seal a letter, to how businesses establish their credibility and sell everything from financial services to hamburgers. Therefore, although the specific terms "corporate image" and "brand identity" didn't enter business or design vocabulary until the 1940s, within twenty years they became key elements to business success.
== Media and corporate identity ==
As technology and mass media have continued to develop at exponential rates, the role of the media in business increases as well. The media has a large effect on the formation of corporate identity by reinforcing a company's image and reputation. Global television networks and the rise of business news have caused the public representation of organizations to critically influence the construction and deconstruction of certain organizational identities more than ever before.
Many companies pro-actively choose to create media attention and use it as a tool for identity construction and strengthening, and also to reinvent their images under the pressure of new technology. The media also has the power to produce and diffuse the meanings a corporation holds, thereby giving stakeholders a part in negotiating the organizational identity.
== See also ==
Brand equity – Marketing term
Brand management – Process in brand marketing
Corporate anniversary
Corporate propaganda – Claims made by a corporation/s, for the purpose of manipulating market opinion
Federal Identity Program – Program of the Government of Canada
Graphic charter – Project document
Marketing – Study and process of exploring, creating, and delivering value to customers
Product management – Organizational role in companies
Product naming – Process of deciding a brand name for a product
== References ==
== Further reading ==
Balmer, J.M.T., & Gray, E.R., (2000). Corporate identity and corporate communications: creating a competitive advantage. Industrial and Commercial Training, 32 (7), pp. 256–262.
Balmer, John M. T. & Greyser, Stephen A. eds. (2003), Revealing the Corporation: Perspectives on identity, image, reputation, corporate branding, and corporate-level marketing, London, Routledge, ISBN 0-415-28421-X.
Birkigt, K., & Stadler, M.M., (1986). Corporate identity. Grundlagen, Funktionen, Fallbeispiele. [Corporate identity. Foundation, functions, case descriptions]. Landsberg am Lech: Verlag Moderne Industrie.
Bromley, D.B., (2001). Relationships between personal and corporate reputation, European Journal of Marketing, 35 (3/4), pp. 316–334.
Brown, Jared & A. Miller, (1998). What Logos Do and How They Do It. pp. 6–7.
Chouliaraki, Lilie & M. Morsing. (2010) Media, Organizations and Identity. p. 95
Dowling, G.R., (1993). Developing your company image into a corporate asset. Long Range Planning, 26 (2), pp. 101–109.
Du Gay, P., (2000). Markets and meanings: re-imagining organizational life. In: M. Schultz, M.J. Hatch, & M.H. Larsen (Eds.), The expressive organization: linking identity, reputation and the corporate brand (pp. 66–74). Oxford: Oxford University Press.
Dutton, J.E., Dukerich, J.M., & Harquail, C.V., (1994). Organizational images and member identification. Administrative Science Quarterly, 39 (2), pp. 239–263.
Fiell, Charlotte; Fiell, Peter (2005). Design of the 20th Century (25th anniversary ed.). Köln: Taschen. p. 181. ISBN 9783822840788. OCLC 809539744.
Kiriakidou, O, & Millward, L.J., (2000). Corporate identity: external reality or internal fit?, Corporate Communications: An International Journal, 5 (1), pp. 49–58.
Olins, W., (1989). Corporate identity: making business strategy visible through design. London: Thames & Hudson.
Paksoy, HB (2001). IDENTITIES: How Governed, Who Pays?
Pratihari, Suvendu K. and Uzma, Shigufta H. (2018), "CSR and corporate branding effect on brand loyalty: a study on Indian banking sector", Journal of Product and Brand Management, Vol. 27 Iss. 1, pp. 57–78, doi:10.1108/JPBM-05-2016-1194
Pratihari, Suvendu K. and Uzma, Shigufta H. (2018), "Corporate Social Identity: An Analysis of the Indian Banking Sector", International Journal of Bank Marketing, Vol. 36, Iss. 6, pp. 1248–1284, doi:10.1108/IJBM-03-2017-0046
Pratihari, Suvendu K. and Uzma, Shigufta H. (2019), "A Survey on Bankers' Perception of Corporate Social Responsibility in India", Social Responsibility Journal, doi:10.1108/SRJ-11-2016-0198
Rowden, Mark, (2000) The Art of Identity: Creating and Managing a successful corporate identity. Gower. ISBN 0-566-08318-3
Rowden, Mark, (2004) Identity: Transforming Performance through Integrated Identity Management. Gower. ISBN 978-0-566-08618-2
Schultz, M., Hatch, M.J., & Larsen, M., (2000). The expressive organization: linking identity, reputation and the corporate brand. Oxford: Oxford University Press.
Stuart, H, (1999). Towards a definitive model of the corporate identity management process, Corporate Communications: An International Journal, 4 (4), pp. 200–207.
Van den Bosch, A.L.M., (2005). Corporate Visual Identity Management: current practices, impact and assessment. Doctoral dissertation, University of Twente, Enschede, The Netherlands.
Van den Bosch, A.L.M., De Jong, M.D.T., & Elving, W.J.L., (2005). How corporate visual identity supports reputation. Corporate Communications: An International Journal, 10 (2), pp. 108–116.
Van Riel, C.B.M., (1995). Principles of corporate communication. London: Prentice Hall.
Veronica Napoles, Corporate identity design. New York, Wiley, 1988. With bibl., index. ISBN 0-471-28947-7
Wheeler, Alina, Designing brand identity. A complete guide to creating, building, and maintaining strong brands, 2nd ed. New York, Wiley, 2006. With bibl., index. ISBN 0-471-74684-3
Wally Olins, The new guide to identity. How to create and sustain change through managing identity. Aldershot, Gower, 1995. With bibl., index. ISBN 0-566-07750-7 (hbk.) or 0-566-07737-X (pbk.) | Wikipedia/Corporate_image |
A control premium is an amount that a buyer is sometimes willing to pay over the current market price of a publicly traded company in order to acquire a controlling share in that company.
If the market perceives that a public company's profit and cash flow are not being maximized, its capital structure is not optimal, or other factors that can be changed are impacting the company's share price, an acquirer may conclude that gaining control would let it unlock additional value, and may therefore be willing to pay above the prevailing market price.
== Overview of concept ==
Transactions involving small blocks of shares in public companies occur regularly and serve to establish the market price per share of company stock. Acquiring a controlling number of shares sometimes requires offering a premium over the current market price per share in order to induce existing shareholders to sell. The offer is made through a tender offer with specific terms, including the price. Higher control premiums are often associated with classified boards.
The size of the premium is the acquirer's decision and is based on its belief that the target company's share price is not optimized. An acquirer would not be making a prudent investment decision if the tender offer made is higher than the future benefit of the acquisition.
== Control premium vs. minority discount ==
The control premium and the minority discount could be considered to be the same dollar amount. Stated as a percentage, this dollar amount would be higher as a percentage of the lower minority marketable value or, conversely, lower as a percentage of the higher control value.
{\displaystyle {\mbox{Minority discount}}=1-\left({\frac {1}{1+{\mbox{Control premium}}}}\right)}
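As an arithmetic illustration of this relationship, the conversion can be computed directly. The following is a minimal sketch in Python; the 30% premium used here is a hypothetical figure, not drawn from any particular transaction:

```python
def minority_discount(control_premium: float) -> float:
    """Convert a control premium (a decimal, e.g. 0.30 for 30%)
    into the equivalent minority discount."""
    return 1 - 1 / (1 + control_premium)

# A hypothetical 30% control premium implies roughly a 23% minority discount.
print(f"{minority_discount(0.30):.2%}")  # 23.08%
```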
== Size of premium ==
In general, the maximum value that an acquirer firm would be willing to pay should equal the sum of the target firm's intrinsic value, the synergies that the acquiring firm can expect to achieve between the two firms, and the opportunity cost of not acquiring the target firm (i.e. the loss to the acquirer if a rival firm acquires the target firm instead). A premium paid, if any, will be specific to the acquirer and the target; actual premiums paid have varied widely. In business practice, control premiums may vary from 20% to 40%. Larger control premiums indicate weaker protection of minority shareholders.
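The upper bound described above is a simple sum, and it can be sketched directly in code. This is an illustrative sketch only; in practice each of the three inputs would come from a full valuation exercise rather than being known directly, and the figures below are hypothetical:

```python
def max_acquisition_price(intrinsic_value: float,
                          synergies: float,
                          opportunity_cost: float) -> float:
    """Rough upper bound on what an acquirer should be willing to pay:
    the target's intrinsic value, plus expected synergies, plus the cost
    of letting a rival acquire the target instead."""
    return intrinsic_value + synergies + opportunity_cost

# Hypothetical inputs (in dollars), for illustration only.
print(max_acquisition_price(5_000_000, 800_000, 200_000))  # 6000000
```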
== Example ==
Company XYZ has an EBITDA of $1,500,000 and its shares are currently trading at an EV/EBITDA multiple of 5x. This results in a valuation of XYZ of $7,500,000 (=$1,500,000 * 5) on an EV basis. A potential buyer may believe that EBITDA can be improved to $2,000,000 by eliminating the CEO, who would become redundant after the transaction. Thus, the buyer could potentially value the target at $10,000,000 since the value expected to be achieved by replacing the CEO is the accretive $500,000 (=$2,000,000–$1,500,000) in EBITDA, which in turn translates to $2,500,000 (=$500,000 * 5 or =$10,000,000–$7,500,000) premium over the pre-transaction value of the target.
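The arithmetic in this example can be reproduced directly. Below is a minimal sketch in Python using the same figures as the example above:

```python
ebitda_current = 1_500_000    # XYZ's current EBITDA
ebitda_improved = 2_000_000   # EBITDA expected after eliminating the CEO role
ev_ebitda_multiple = 5        # EV/EBITDA multiple at which the shares trade

pre_deal_value = ebitda_current * ev_ebitda_multiple     # $7,500,000
post_deal_value = ebitda_improved * ev_ebitda_multiple   # $10,000,000
premium = post_deal_value - pre_deal_value               # $2,500,000

print(premium, premium / pre_deal_value)  # 2500000 0.333... (about 33% over the pre-deal value)
```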
== See also ==
Business valuation
Divestment
Equity value
Enterprise value
Goodwill (accounting)
M&A
Takeover
== References ==
== External links ==
Control Premiums, Minority Discounts & Marketability Discounts
Marketability Discounts and Control Premium Example | Wikipedia/Control_premium |
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.).
In many languages, the term technical physics is also used.
It has been used since 1861 by the German physics teacher J. Frick in his publications.
== Terminology ==
In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, both exist, with the former specializing in nuclear power research (i.e. nuclear engineering) and the latter closer to engineering physics.
In some universities and their institutions, an engineering physics (or applied physics) major is a discipline or specialization within the scope of engineering science, or applied science.
Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called or contain the phrase "physical technologies" or "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering".
== Expertise ==
Unlike traditional engineering disciplines, engineering science or engineering physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science or engineering physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis.
== Degrees ==
In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics.
== Awards ==
There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field.
== See also ==
Applied physics
Engineering
Engineering science and mechanics
Environmental engineering science
Index of engineering science and mechanics articles
Industrial engineering
== Notes and references ==
== External links ==
"Engineering Physics at Xavier"
"The Engineering Physicist Profession"
"Engineering Physicist Professional Profile"
Society of Engineering Science Inc. Archived 2017-08-07 at the Wayback Machine | Wikipedia/Engineering_science |
Building material is material used for construction. Many naturally occurring substances, such as clay, rocks, sand, wood, and even twigs and leaves, have been used to construct buildings and other structures, like bridges. Apart from naturally occurring materials, many man-made products are in use, some more and some less synthetic. The manufacturing of building materials is an established industry in many countries and the use of these materials is typically segmented into specific specialty trades, such as carpentry, insulation, plumbing, and roofing work. They provide the make-up of habitats and structures including homes.
== The total cost of building materials ==
In history, there are trends in building materials from being natural to becoming more human-made and composite; biodegradable to imperishable; indigenous (local) to being transported globally; repairable to disposable; chosen for increased levels of fire-safety, and improved seismic resistance. These trends tend to increase the initial and long-term economic, ecological, energy, and social costs of building materials.
=== Economic costs ===
The initial economic cost of building materials is the purchase price. This is often what governs decision making about which materials to use. Sometimes people take into consideration the energy savings or durability of the materials and see the value of paying a higher initial cost in return for a lower lifetime cost. For example, an asphalt shingle roof costs less than a metal roof to install, but the metal roof will last longer, so the lifetime cost is lower per year. Some materials require more care than others, so maintenance costs specific to some materials may also influence the final decision. Risks when considering the lifetime cost of a material include the building being damaged, such as by fire or wind, or the material not proving as durable as advertised. Such risks, including the temptation to choose cheaper but more combustible materials, should be weighed against the purchase price when estimating lifetime cost. It is said that "if it must be done, it must be done well".
=== Ecological costs ===
Pollution costs can be macro and micro. At the macro level, the extraction industries that building materials rely on, such as mining, petroleum, and logging, produce environmental damage at their source and in the transportation of the raw materials, manufacturing, transportation of the products, retailing, and installation. An example of the micro aspect of pollution is the off-gassing of the building materials in the building, or indoor air pollution. Red List building materials are materials found to be harmful. The carbon footprint, the total set of greenhouse gas emissions produced over the life of the material, is another ecological cost. A life-cycle analysis also includes the reuse, recycling, or disposal of construction waste. Two concepts in building which account for the ecological economics of building materials are green building and sustainable development.
=== Energy costs ===
The initial energy costs include the amount of energy consumed to produce, deliver and install the material. The long-term energy cost is the economic, ecological, and social cost of continuing to produce and deliver energy to the building for its use, maintenance, and eventual removal. The initial embodied energy of a structure is the energy consumed to extract, manufacture, deliver and install the materials. The lifetime embodied energy continues to grow with the use, maintenance, and reuse/recycling/disposal of the building materials themselves, and with how well the materials and design minimize the lifetime energy consumption of the structure.
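One simple way to see how the lifetime figure keeps growing beyond the initial figure is to treat the accounting as a running sum over life-cycle phases. The sketch below is illustrative only; the phase names and energy values are hypothetical placeholders, not measured data for any real material:

```python
# Hypothetical embodied-energy accounting (in MJ), for illustration only.
initial_phases = {"extract": 120, "manufacture": 900, "deliver": 60, "install": 40}
later_phases = {"use_and_maintenance": 300, "disposal_or_recycling": 80}

initial_embodied = sum(initial_phases.values())                     # energy to get the material in place
lifetime_embodied = initial_embodied + sum(later_phases.values())   # keeps growing over the building's life

print(initial_embodied, lifetime_embodied)  # 1120 1500
```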
=== Social costs ===
Social costs include injury to and the health of the people producing and transporting the materials, and potential health problems of the building occupants if there are problems with the building biology. Globalization has had significant impacts on people, both in terms of the jobs, skills, and self-sufficiency that are lost when manufacturing facilities are closed, and the cultural effects in the places where new facilities are opened. Aspects of fair trade and labor rights are social costs of global building material manufacturing.
== Naturally occurring substances ==
Bio-based materials (especially plant-based materials) are used in a variety of building applications, including load-bearing, filling, insulating, and plastering materials. These materials vary in structure depending on the formulation used. Plant fibres can be combined with binders and then used in construction to provide thermal, hydric or structural functions. The behaviour of concrete based on plant fibre is mainly governed by the amount of the fibre constituting the material. Several studies have shown that increasing the amount of these plant particles increases porosity, moisture buffering capacity, and maximum absorbed water content on the one side, while decreasing density, thermal conductivity, and compressive strength on the other.
Plant-based materials are largely derived from renewable resources and mainly use co-products from agriculture or the wood industry. When used as insulation materials, most bio-based materials exhibit (unlike most other insulation materials) hygroscopic behaviour, combining high water vapour permeability and moisture regulation.
=== Brush ===
Brush structures are built entirely from plant parts and were used in various cultures such as Native Americans and pygmy peoples in Africa. These are built mostly with branches, twigs and leaves, and bark, similar to a beaver's lodge. These were variously named wikiups, lean-tos, and so forth.
An extension on the brush building idea is the wattle and daub process in which clay soils or dung, usually cow, are used to fill in and cover a woven brush structure. This gives the structure more thermal mass and strength. Wattle and daub is one of the oldest building techniques. Many older timber frame buildings incorporate wattle and daub as non load bearing walls between the timber frames.
=== Ice and snow ===
Snow, and occasionally ice, were used by the Inuit peoples for igloos, and snow is used to build a shelter called a quinzhee. Ice has also been used for ice hotels as a tourist attraction in northern climates.
=== Mud and clay ===
Clay-based buildings usually come in two distinct types: one in which the walls are made directly from the mud mixture, and the other in which walls are built by stacking air-dried building blocks called mud bricks.
Clay is also combined with straw to create light clay, wattle and daub, and mud plaster.
==== Wet-laid clay walls ====
Wet-laid, or damp, walls are made by using the mud or clay mixture directly without forming blocks and drying them first. The amount of and type of each material in the mixture used leads to different styles of buildings. The deciding factor is usually connected with the quality of the soil being used. Larger amounts of clay are usually employed in building with cob, while low-clay soil is usually associated with sod house or sod roof construction. The other main ingredients include more or less sand/gravel and straw/grasses. Rammed earth is both an old and newer take on creating walls, once made by compacting clay soils between planks by hand; nowadays forms and mechanical pneumatic compressors are used.
Soil, and especially clay, provides good thermal mass; it is very good at keeping temperatures at a constant level. Homes built with earth tend to be naturally cool in the summer heat and warm in cold weather. Clay holds heat or cold, releasing it over a period of time like stone. Earthen walls change temperature slowly, so artificially raising or lowering the temperature can use more resources than in say a wood built house, but the heat/coolness stays longer.
People building with mostly dirt and clay, such as cob, sod, and adobe, created homes that have been built for centuries in western and northern Europe, Asia, as well as the rest of the world, and continue to be built, though on a smaller scale. Some of these buildings have remained habitable for hundreds of years.
==== Structural clay blocks and bricks ====
Mud-bricks, also known by their Spanish name adobe, are an ancient building material, with evidence dating back thousands of years BC. Compressed earth blocks are a more modern type of brick used for building more frequently in industrialized society, since the building blocks can be manufactured off site at a centralized brickworks and transported to multiple building locations. These blocks can also be monetized more easily and sold.
Structural mud bricks are almost always made using clay, often clay soil and a binder are the only ingredients used, but other ingredients can include sand, lime, concrete, stone and other binders. The formed or compressed block is then air dried and can be laid dry or with a mortar or clay slip.
=== Sand ===
Sand is used with cement, and sometimes lime, to make mortar for masonry work and plaster. Sand is also used as a part of the concrete mix. An important low-cost building material in countries with high sand content soils is the Sandcrete block, which is weaker but cheaper than fired clay bricks. Sand-reinforced polyester composites are also used as bricks.
=== Stone or rock ===
Rock structures have existed for as long as history can recall. Rock is the longest-lasting building material available, and is usually readily available. There are many types of rock, with differing attributes that make them better or worse for particular uses. Rock is a very dense material, so it gives a lot of protection; its main drawbacks as a building material are its weight and the difficulty of working it. Its energy density is both an advantage and a disadvantage. Stone is hard to warm without consuming considerable energy but, once warm, its thermal mass means that it can retain heat for useful periods of time.
Dry-stone walls and huts have been built for as long as humans have put one stone on top of another. Eventually, different forms of mortar were used to hold the stones together, cement being the most commonplace now.
The granite-strewn uplands of Dartmoor National Park, United Kingdom, for example, provided ample resources for early settlers. Circular huts were constructed from loose granite rocks throughout the Neolithic and early Bronze Age, and the remains of an estimated 5,000 can still be seen today. Granite continued to be used throughout the Medieval period (see Dartmoor longhouse) and into modern times. Slate is another stone type, commonly used as roofing material in the United Kingdom and other parts of the world where it is found.
Stone buildings can be seen in most major cities, and some civilizations built predominantly with stone, such as the Egyptian and Aztec pyramids and the structures of the Inca civilization.
=== Thatch ===
Thatch is one of the oldest of building materials known. "Thatch" is another word for "grass"; grass is a good insulator and easily harvested. Many African tribes have lived in homes made completely of grasses and sand year-round. In Europe, thatch roofs on homes were once prevalent but the material fell out of favor as industrialization and improved transport increased the availability of other materials. Today, though, the practice is undergoing a revival. In the Netherlands, for instance, many new buildings have thatched roofs with special ridge tiles on top.
=== Wood and timber ===
Wood has been used as a building material for thousands of years in its natural state. Today, engineered wood is becoming very common in industrialized countries.
Wood is a product of trees, and sometimes other fibrous plants, used for construction purposes when cut or pressed into lumber and timber, such as boards, planks and similar materials. It is a generic building material and is used in building just about any type of structure in most climates. Wood can be very flexible under loads, keeping strength while bending, and is incredibly strong when compressed vertically. There are many differing qualities to the different types of wood, even among same tree species. This means specific species are better suited for various uses than others. And growing conditions are important for deciding quality.
"Timber" is the term used for construction purposes except the term "lumber" is used in the United States. Raw wood (a log, trunk, bole) becomes timber when the wood has been "converted" (sawn, hewn, split) in the forms of minimally-processed logs stacked on top of each other, timber frame construction, and light-frame construction. The main problems with timber structures are fire risk and moisture-related problems.
In modern times softwood is used as a lower-value bulk material, whereas hardwood is usually used for finishings and furniture. Historically, timber frame structures were built with oak in western Europe; more recently, Douglas fir has become the most popular wood for most types of structural building.
Many families or communities, in rural areas, have a personal woodlot from which the family or community will grow and harvest trees to build with or sell. These lots are tended to like a garden. This was much more prevalent in pre-industrial times, when laws existed as to the amount of wood one could cut at any one time to ensure there would be a supply of timber for the future, but is still a viable form of agriculture.
== Man-made substances ==
=== Fired bricks and clay blocks ===
Bricks are made in a similar way to mud-bricks, except without a fibrous binder such as straw, and are fired ("burned" in a brick clamp or kiln) after they have air-dried, to permanently harden them. Kiln-fired clay bricks are a ceramic material. Fired bricks can be solid or have hollow cavities to aid in drying and make them lighter and easier to transport. The individual bricks are placed upon each other in courses using mortar, successive courses being used to build up walls, arches, and other architectural elements. Fired brick walls are usually substantially thinner than cob/adobe walls while keeping the same vertical strength. They require more energy to create but are easier to transport and store, and are lighter than stone blocks. Romans extensively used fired brick of a shape and type now called Roman bricks. Building with brick gained much popularity in the mid-18th and 19th centuries, due to lower costs as brick manufacturing increased and to fire-safety concerns in increasingly crowded cities.
The cinder block supplemented or replaced fired bricks in the late 20th century often being used for the inner parts of masonry walls and by themselves.
Structural clay tiles (clay blocks) are clay or terracotta and typically are perforated with holes.
=== Cement composites ===
Cement-bonded composites are made of hydrated cement paste that binds wood, particles, or fibers to make pre-cast building components. Various fibrous materials, including paper, fiberglass, and carbon fiber, have been used as binders.
Wood and natural fibers are composed of various soluble organic compounds like carbohydrates, glycosides and phenolics. These compounds are known to retard cement setting. Therefore, before using a wood in making cement bonded composites, its compatibility with cement is assessed.
Wood-cement compatibility is the ratio of a parameter related to a property of a wood-cement composite to that of a neat cement paste. The compatibility is often expressed as a percentage value. To determine wood-cement compatibility, methods based on different properties are used, such as hydration characteristics, strength, interfacial bond and morphology. Various methods are used by researchers, such as the measurement of the hydration characteristics of a cement-aggregate mix, the comparison of the mechanical properties of cement-aggregate mixes, and the visual assessment of the microstructural properties of wood-cement mixes. It has been found that the hydration test, which measures the change in hydration temperature with time, is the most convenient method. Recently, Karade et al. have reviewed these methods of compatibility assessment and suggested a method based on the 'maturity concept', i.e., taking into consideration both the time and the temperature of the cement hydration reaction. Recent work on the aging of lignocellulosic materials in the cement paste showed hydrolysis of hemicelluloses and lignin that affects the interface between particles or fibers and concrete and causes degradation.
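Since compatibility is defined here as a simple ratio expressed as a percentage, it can be computed directly once the chosen parameter has been measured for both the composite and the neat paste. The sketch below is illustrative only; the parameter (maximum hydration temperature) and the figures are hypothetical, and the actual choice of parameter depends on which assessment method is used:

```python
def compatibility_index(composite_value: float, neat_paste_value: float) -> float:
    """Wood-cement compatibility: the ratio of a measured property of the
    wood-cement composite to the same property of a neat cement paste,
    expressed as a percentage."""
    return 100.0 * composite_value / neat_paste_value

# Hypothetical maximum hydration temperatures (degrees C), for illustration only.
print(compatibility_index(composite_value=52.0, neat_paste_value=65.0))  # 80.0
```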
Bricks were laid in lime mortar from the time of the Romans until supplanted by Portland cement mortar in the early 20th century. Cement blocks also sometimes are filled with grout or covered with a parge coat.
=== Concrete ===
Concrete is a composite building material made from the combination of aggregate and a binder such as cement. The most common form of concrete is Portland cement concrete, which consists of mineral aggregate (generally gravel and sand), portland cement and water.
After mixing, the cement hydrates and eventually hardens into a stone-like material. When used in the generic sense, this is the material referred to by the term "concrete".
For a concrete construction of any size, as concrete has a rather low tensile strength, it is generally strengthened using steel rods or bars (known as rebars). This strengthened concrete is then referred to as reinforced concrete. In order to minimise any air bubbles that would weaken the structure, a vibrator is used to eliminate any air entrained when the liquid concrete mix is poured around the ironwork. Concrete has been the predominant building material in the modern age due to its longevity, formability, and ease of transport. Recent advancements, such as insulating concrete forms, combine the concrete forming with other construction steps (such as the installation of insulation). All materials must be used in the required proportions as described in standards.
=== Fabric ===
The tent is the home of choice among nomadic groups all over the world. Two well-known types include the conical teepee and the circular yurt. The tent has been revived as a major construction technique with the development of tensile architecture and synthetic fabrics. Modern buildings can be made of flexible material such as fabric membranes, and supported by a system of steel cables, rigid or internal, or by air pressure.
=== Foam ===
Recently, synthetic polystyrene or polyurethane foam has been used in combination with structural materials, such as concrete. It is lightweight, easily shaped, and an excellent insulator. Foam is usually used as part of a structural insulated panel, wherein the foam is sandwiched between wood or cement or insulating concrete forms.
=== Glass ===
Glassmaking is considered an art form as well as an industrial process or material.
Clear windows have been used since the invention of glass to cover small openings in a building. Glass panes provided humans with the ability to both let light into rooms while at the same time keeping inclement weather outside.
Glass is generally made from mixtures of sand and silicates, in a very hot fire stove called a kiln, and is very brittle. Additives are often included in the mixture to produce glass with shades of colors or various characteristics (such as bulletproof glass or lightbulbs).
The use of glass in architectural buildings has become very popular in the modern culture. Glass "curtain walls" can be used to cover the entire facade of a building, or it can be used to span over a wide roof structure in a "space frame". These uses though require some sort of frame to hold sections of glass together, as glass by itself is too brittle and would require an overly large kiln to be used to span such large areas by itself.
Glass bricks were invented in the early 20th century.
=== Gypsum concrete ===
Gypsum concrete is a mixture of gypsum plaster and fibreglass rovings. Although plaster and fibres (fibrous plaster) have been used for many years, especially for ceilings, it was not until the early 1990s that the strength and qualities of a walling system, Rapidwall, using a mixture of gypsum plaster and 300 mm-plus fibreglass rovings, were seriously studied. With an abundance of gypsum (naturally occurring, and by-product chemical FGD and phospho gypsums) available worldwide, gypsum concrete-based building products, which are fully recyclable, offer significant environmental benefits.
=== Metal ===
Metal is used as structural framework for larger buildings such as skyscrapers, or as an external surface covering. There are many types of metals used for building. Metal figures quite prominently in prefabricated structures such as the Quonset hut, and can be seen used in most cosmopolitan cities. It requires a great deal of human labor to produce metal, especially in the large amounts needed for the building industries. Corrosion is metal's prime enemy when it comes to longevity.
Steel is a metal alloy whose major component is iron, and is the usual choice for metal structural building materials. It is strong, flexible, and if refined well and/or treated lasts a long time.
The lower density and better corrosion resistance of aluminium alloys and tin sometimes overcome their greater cost.
Copper is a valued building material because of its advantageous properties (see: Copper in architecture). These include corrosion resistance, durability, low thermal movement, light weight, radio frequency shielding, lightning protection, sustainability, recyclability, and a wide range of finishes. Copper is incorporated into roofing, flashing, gutters, downspouts, domes, spires, vaults, wall cladding, building expansion joints, and indoor design elements.
Other metals used include chrome, gold, silver, and titanium. Titanium can be used for structural purposes, but it is much more expensive than steel. Chrome, gold, and silver are used as decoration, because these materials are expensive and lack structural qualities such as tensile strength or hardness.
=== Plastics ===
The term plastics covers a range of synthetic or semi-synthetic organic condensation or polymerization products that can be molded or extruded into objects, films, or fibers. Their name is derived from the fact that in their semi-liquid state they are malleable, or have the property of plasticity. Plastics vary immensely in heat tolerance, hardness, and resiliency. Combined with this adaptability, the general uniformity of composition and lightness of plastics ensures their use in almost all industrial applications today. High-performance plastics such as ETFE have become ideal building materials due to their high abrasion resistance and chemical inertness. Notable buildings that feature ETFE include the Beijing National Aquatics Center and the Eden Project biomes.
Around twenty percent of all plastics and seventy percent of all polyvinyl chloride (PVC) produced in the world each year are used by the construction industry. It is predicted that much more will be produced and used in the future. "In Europe, approximately 20% of all plastics produced are used in the construction sector including different classes of plastics, waste and nanomaterials." There are both direct use (construction materials containing plastics) and indirect use (packaging of construction materials) in different parts of the building processes.
=== Papers and membranes ===
Building papers and membranes are used for many reasons in construction. One of the oldest building papers is red rosin paper, which was known to be in use before 1850 and was used as an underlayment in exterior walls, roofs, and floors and for protecting a jobsite during construction. Tar paper was invented late in the 19th century and was used for similar purposes as rosin paper and for gravel roofs. Tar paper has largely fallen out of use, supplanted by asphalt felt paper. Felt paper has in turn been supplanted in some uses, particularly in roofing by synthetic underlayments and in siding by housewraps.
There are a wide variety of damp proofing and waterproofing membranes used for roofing, basement waterproofing, and geomembranes.
=== Ceramics ===
Fired clay bricks have been used since the time of the Romans. Special tiles are used for roofing, siding, flooring, ceilings, pipes, flue liners, and more.
== Living building materials ==
A relatively new category of building materials, living building materials are materials that are either composed of, or created by, a living organism, or materials that behave in a manner reminiscent of such. Potential use cases include self-healing materials and materials that replicate (reproduce) rather than being manufactured.
== Building products ==
In the market place, the term "building products" often refers to ready-made parts or sections, made from various materials, that are fitted into the architectural hardware and decorative hardware parts of a building. The list of building products excludes the building materials used to construct the building architecture and supporting fixtures, like windows, doors, cabinets, millwork components, etc. Building products, rather, support and make building materials work in a modular fashion.
"Building products" may also refer to items used to put such hardware together, such as caulking, glues, paint, and anything else bought for the purpose of constructing a building.
== Research and development ==
To facilitate and optimize the use of new materials and up-to-date technologies, ongoing research is being undertaken to improve efficiency, productivity and competitiveness in world markets.
Material research and development may be commercial, academical or both, and can be conducted at any scale.
Rapid prototyping allows researchers to develop and test materials quickly, making adjustments and solving issues during the process. Rather than developing materials theoretically and then testing them, only to discover fundamental flaws, rapid prototyping allows for comparatively quick development and testing, shortening the time to market for new materials to a matter of months rather than years.
== Sustainability ==
In 2017, buildings and construction together consumed 36% of the final energy produced globally while being responsible for 39% of global energy-related CO2 emissions. The shares from the construction industry alone were 6% and 11% respectively. Energy consumption during building material production is a dominant contributor to the construction industry's overall share, predominantly due to the use of electricity during production. The embodied energy of relevant building materials in the US is provided in the table below.
== Testing and certification ==
ASTM International
UL (safety organization)
ETL SEMKO — Building Product Testing Laboratory in the USA, part of Intertek, based in London
EU Construction Product Regulation
== See also ==
== References ==
== Further reading ==
Svoboda, Luboš (2018). Stavební hmoty (Building materials), 1000 p.
"Download = Souhlasím". People.fsv.cvut.cz. Archived from the original on 2013-10-16. Retrieved 2018-10-03.
== External links ==
Materiales de Construcción – Bilingual (Spanish/English) Scientific journal published by Consejo Superior de Investigaciones Científicas, Spain.
Informes de la Construcción – Scientific journal published by Consejo Superior de Investigaciones Científicas, Spain. | Wikipedia/Building_materials |
Environmental engineering science (EES) is a multidisciplinary field of engineering science that combines the biological, chemical and physical sciences with the field of engineering. This major traditionally requires the student to take basic engineering classes in fields such as thermodynamics, advanced math, computer modeling and simulation and technical classes in subjects such as statics, mechanics, hydrology, and fluid dynamics. As the student progresses, the upper division elective classes define a specific field of study for the student with a choice in a range of science, technology and engineering related classes.
== Difference with related fields ==
As a recently created program, environmental engineering science has not yet been incorporated into the terminology found among environmentally focused professionals. In the few engineering colleges that offer this major, the curriculum shares more classes in common with environmental engineering than it does with environmental science. Typically, EES students follow a similar course curriculum with environmental engineers until their fields diverge during the last year of college. The majority of the environmental engineering students must take classes designed to connect their knowledge of the environment to modern building materials and construction methods. This is meant to direct the environmental engineer into a field where they will more than likely assist in building treatment facilities, preparing environmental impact assessments or helping to mitigate air pollution from specific point sources.
Meanwhile, the environmental engineering science student will choose a direction for their career. From the range of electives they have to choose from, these students can move into fields such as the design of nuclear storage facilities, bacterial bioreactors or environmental policies. These students combine the practical design background of an engineer with the detailed theory found in many of the biological and physical sciences.
== Description at universities ==
=== Stanford University ===
The Civil and Environmental Engineering department at Stanford University provides the following description for their program in Environmental Engineering and Science:
The Environmental Engineering and Science (EES) program focuses on the chemical and biological processes involved in water quality engineering, water and air pollution, remediation and hazardous substance control, human exposure to pollutants, environmental biotechnology, and environmental protection.
=== UC Berkeley ===
The College of Engineering at UC Berkeley defines Environmental Engineering Science, including the following:
This is a multidisciplinary field requiring an integration of physical, chemical and biological principles with engineering analysis for environmental protection and restoration. The program incorporates courses from many departments on campus to create a discipline that is rigorously based in science and engineering, while addressing a wide variety of environmental issues. Although an environmental engineering option exists within the civil engineering major, the engineering science curriculum provides a more broadly based foundation in the sciences than is possible in civil engineering
=== Massachusetts Institute of Technology ===
At MIT, the major is described in their curriculum, including the following:
The Bachelor of Science in Environmental Engineering Science emphasizes the fundamental physical, chemical, and biological processes necessary for understanding the interactions between man and the environment. Issues considered include the provision of clean and reliable water supplies, flood forecasting and protection, development of renewable and nonrenewable energy sources, causes and implications of climate change, and the impact of human activities on natural cycles
=== University of Florida ===
The College of Engineering at UF defines Environmental Engineering Science as follows:
The broad undergraduate environmental engineering curriculum of EES has earned the department a ranking as a leading undergraduate program. The ABET accredited engineering bachelor's degree is comprehensively based on physical, chemical, and biological principles to solve environmental problems affecting air, land, and water resources. An advising scheme including select faculty, led by the undergraduate coordinator, guides each student through the program.
The program educational objectives of the EES program at the University of Florida are to produce engineering practitioners and graduate students who 3-5 years after graduation:
Continue to learn, develop and apply their knowledge and skills to identify, prevent, and solve environmental problems.
Have careers that benefit society as a result of their educational experiences in science, engineering analysis and design, as well as in their social and cultural studies.
Communicate and work effectively in all work settings including those that are multidisciplinary.
== Lower division coursework ==
Lower division coursework in this field requires the student to take several laboratory-based classes in calculus-based physics, chemistry, biology, programming and analysis. This is intended to give the student background information in order to introduce them to the engineering fields and to prepare them for more technical information in their upper division coursework.
== Upper division coursework ==
The upper division classes in Environmental Engineering Science prepares the student for work in the fields of engineering and science with coursework in subjects including the following:
Fluid mechanics
Mechanics of materials
Thermodynamics
Environmental engineering
Advanced math and statistics
Geology
Physical, organic and atmospheric chemistry
Biochemistry
Microbiology
Ecology
== Electives ==
=== Process engineering ===
On this track, students are introduced to the fundamental reaction mechanisms in the field of chemical and biochemical engineering.
=== Resource engineering ===
For this track, students take classes introducing them to ways to conserve natural resources. This can include classes in water chemistry, sanitation, combustion, air pollution and radioactive waste management.
=== Geoengineering ===
This examines geoengineering in detail.
=== Ecology ===
This prepares the students for using their engineering and scientific knowledge to solve the interactions between plants, animals and the biosphere.
=== Biology ===
This includes further education about microbial, molecular and cell biology. Classes can include cell biology, virology, microbial and plant biology
=== Policy ===
This covers in more detail ways the environment can be protected through political means. This is done by introducing students to qualitative and quantitative tools in classes such as economics, sociology, political science and energy and resources.
== Fields of work ==
The multidisciplinary approach in Environmental Engineering Science gives the student expertise in technical fields related to their own personal interest. While some graduates choose to use this major to go to graduate school, students who choose to work often go into the fields of civil and environmental engineering, biotechnology, and research. However, the less technical math, programming and writing background gives the students opportunities to pursue IT work and technical writing.
== See also ==
Civil engineering
Environmental engineering
Environmental science
Sustainability
Green building
Sustainable engineering
== Notes ==
== References ==
"MIT Course Catalog: Department of Civil and Environmental Engineering." Massachusetts Institute of Technology. <http://web.mit.edu/catalogue/degre.engin.civil.shtml>.
2008-2009 Announcement. Brochure. Berkeley, 2008. Engineering Announcement 2008-2009. University of California, Berkeley. <https://web.archive.org/web/20081203005457/http://coe.berkeley.edu/students/EngAnn08.pdf>.
== External links ==
Environmental Engineering and Science program at Stanford University [1]
What people go on to do in Engineering Science at UC Berkeley [2]
Curriculum at University of Florida [3]
Curriculum at MIT [4]
Curriculum at University of Illinois [5] | Wikipedia/Environmental_Engineering_Science |
A phonograph record (also known as a gramophone record, especially in British English) or a vinyl record (for later varieties only) is an analog sound storage medium in the form of a flat disc with an inscribed, modulated spiral groove. The groove usually starts near the outside edge and ends near the center of the disc. The stored sound information is made audible by playing the record on a phonograph (or "gramophone", "turntable", or "record player").
Records have been produced in different formats with playing times ranging from a few minutes to around 30 minutes per side. For about half a century, the discs were commonly made from shellac, and these records typically ran at a rotational speed of 78 rpm, giving them the nickname "78s" ("seventy-eights"). After the 1940s, "vinyl" records made from polyvinyl chloride (PVC) became standard, replacing the old 78s, and remain so to this day; they have since been produced in various sizes and speeds, most commonly 7-inch discs played at 45 rpm (typically for singles, also called 45s ("forty-fives")) and 12-inch discs played at 33⅓ rpm (known as the LP, "long-playing record", typically for full-length albums) – the latter being the most prevalent format today.
== Overview ==
The phonograph record was the primary medium used for music reproduction throughout the 20th century. It had co-existed with the phonograph cylinder from the late 1880s and had effectively superseded it by around 1912. Records retained the largest market share even when new formats such as the compact cassette were mass-marketed. By the 1980s, digital media, in the form of the compact disc, had gained a larger market share, and the record left the mainstream in 1991. Since the 1990s, records continue to be manufactured and sold on a smaller scale, and during the 1990s and early 2000s were commonly used by disc jockeys (DJs), especially in dance music genres. They were also listened to by a growing number of audiophiles. The phonograph record has made a niche resurgence in the early 21st century, growing increasingly popular throughout the 2010s and 2020s.
Phonograph records are generally described by their diameter in inches (12-inch, 10-inch, 7-inch), the rotational speed in revolutions per minute (rpm) at which they are played (8+1⁄3, 16+2⁄3, 33+1⁄3, 45, 78), and their time capacity, determined by their diameter and speed (LP [long play], 12-inch disc, 33+1⁄3 rpm; EP [extended play], 12-inch disc or 7-inch disc, 33+1⁄3 or 45 rpm; Single, 7-inch or 10-inch disc, 45 or 78 rpm); their reproductive quality, or level of fidelity (high-fidelity, orthophonic, full-range, etc.); and the number of audio channels (mono, stereo, quad, etc.).
The phrase broken record refers to a malfunction when the needle skips/jumps back to the previous groove and plays the same section over and over again indefinitely.
=== Naming ===
The various names have included phonograph record (American English), gramophone record (British English), record, vinyl, LP (originally a trademark of Columbia Records), black disc, album, and more informally platter, wax, or liquorice pizza.
== Early development ==
Manufacture of disc records began in the late 19th century, at first competing with earlier cylinder records. Price, ease of use and storage made the disc record dominant by the 1910s. The standard format of disc records became known to later generations as "78s" after their playback speed in revolutions per minute, although that speed only became standardized in the late 1920s. In the late 1940s new formats pressed in vinyl, the 45 rpm single and 33 rpm long playing "LP", were introduced, gradually overtaking the formerly standard "78s" over the next decade. The late 1950s saw the introduction of stereophonic sound on commercial discs.
=== Predecessors ===
The phonautograph was invented in 1857 by Frenchman Édouard-Léon Scott de Martinville. It could not, however, play back recorded sound, as Scott intended for people to read back the tracings, which he called phonautograms. Prior to this, tuning forks had been used in this way to create direct tracings of the vibrations of sound-producing objects, as by English physicist Thomas Young in 1807.
In 1877, Thomas Edison invented the first phonograph, which etched sound recordings onto phonograph cylinders. Unlike the phonautograph, Edison's phonograph could both record and reproduce sound, via two separate needles, one for each function.
=== The first disc records ===
The first commercially sold disc records were created by Emile Berliner in the 1880s. Berliner improved the quality of recordings, while his manufacturing associate Eldridge R. Johnson, who owned a machine shop in Camden, New Jersey, eventually improved the mechanism of the gramophone with a spring motor and a speed-regulating governor, resulting in sound quality equal to that of Edison's cylinders. Abandoning Berliner's "Gramophone" trademark for legal reasons in the United States, Johnson's and Berliner's separate companies reorganized in 1901 to form the Victor Talking Machine Company in Camden, New Jersey, whose products would come to dominate the market for several decades.
Berliner's Montreal factory, which became the Canadian branch of RCA Victor, still exists. There is a dedicated museum in Montreal for Berliner (Musée des ondes Emile Berliner).
== 78 rpm disc developments ==
=== Early speeds ===
Early disc recordings were produced in a variety of speeds ranging from 60 to 130 rpm, and a variety of sizes. As early as 1894, Emile Berliner's United States Gramophone Company was selling single-sided 7-inch discs with an advertised standard speed of "about 70 rpm".
One standard audio recording handbook describes speed regulators, or governors, as being part of a wave of improvement introduced rapidly after 1897. A picture of a hand-cranked 1898 Berliner Gramophone shows a governor and says that spring drives had replaced hand drives. It notes that:
The speed regulator was furnished with an indicator that showed the speed when the machine was running so that the records, on reproduction, could be revolved at exactly the same speed...The literature does not disclose why 78 rpm was chosen for the phonograph industry, apparently this just happened to be the speed created by one of the early machines and, for no other reason continued to be used.
In 1912, the Gramophone Company set 78 rpm as their recording standard, based on the average of recordings they had been releasing at the time, and started selling players whose governors had a nominal speed of 78 rpm. By 1925, 78 rpm was becoming standardized across the industry. However, the exact speed differed between places with alternating current electricity supply at 60 hertz (cycles per second, Hz) and those at 50 Hz. Where the mains supply was 60 Hz, the actual speed was 78.26 rpm: that of a 60 Hz stroboscope illuminating 92-bar calibration markings. Where it was 50 Hz, it was 77.92 rpm: that of a 50 Hz stroboscope illuminating 77-bar calibration markings.
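The two regional speeds follow directly from stroboscope arithmetic: a neon lamp on AC mains flashes twice per cycle, and the bar pattern appears to stand still when the bars pass the lamp at exactly the flash rate. A minimal illustrative sketch of the calculation (the function name is ours, not from any source):

```python
def strobe_speed_rpm(mains_hz: float, bars: int) -> float:
    """Turntable speed at which a strobe pattern of `bars` marks appears frozen.

    A neon lamp on AC mains flashes twice per cycle, so the pattern holds
    still when bars pass the lamp at 2 * mains_hz per second.
    """
    flashes_per_minute = 2 * mains_hz * 60
    return flashes_per_minute / bars  # one bar advances per flash

print(round(strobe_speed_rpm(60, 92), 2))  # 78.26 rpm in 60 Hz countries
print(round(strobe_speed_rpm(50, 77), 2))  # 77.92 rpm in 50 Hz countries
```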
At least one attempt to lengthen playing time was made in the early 1920s. World Records produced records that played at a constant linear velocity, controlled by Noel Pemberton Billing's patented add-on speed governor.
=== Acoustic recording ===
Early recordings were made entirely acoustically: the sound was collected by a horn and piped to a diaphragm, which vibrated the cutting stylus. Sensitivity and frequency range were poor, and frequency response was irregular, giving acoustic recordings an instantly recognizable tonal quality. A singer almost had to put their face into the recording horn. One way of reducing resonance was to wrap the recording horn with tape.
Even drums, if planned and placed properly, could be effectively recorded and heard on even the earliest jazz and military band recordings. The loudest instruments such as the drums and trumpets were positioned the farthest away from the collecting horn. Lillian Hardin Armstrong, a member of King Oliver's Creole Jazz Band, which recorded at Gennett Records in 1923, remembered that at first Oliver and his young second trumpet, Louis Armstrong, stood next to each other and Oliver's horn could not be heard. "They put Louis about fifteen feet over in the corner, looking all sad."
=== Electrical recording ===
During the first half of the 1920s, engineers at Western Electric, as well as independent inventors such as Orlando Marsh, developed technology for capturing sound with a microphone, amplifying it with vacuum tubes (known as valves in the UK), and then using the amplified signal to drive an electromechanical recording head. Western Electric's innovations resulted in a broader and smoother frequency response, which produced a dramatically fuller, clearer and more natural-sounding recording. Soft or distant sounds that were previously impossible to record could now be captured. Volume was now limited only by the groove spacing on the record and the amplification of the playback device. Victor and Columbia licensed the new electrical system from Western Electric and recorded the first electrical discs during the spring of 1925. The first electrically recorded Victor Red Seal record was Chopin's "Impromptus" and Schubert's "Litanei" performed by pianist Alfred Cortot at Victor's studios in Camden, New Jersey.
A 1926 Wanamaker's ad in The New York Times offers records "by the latest Victor process of electrical recording". It was recognized as a breakthrough; in 1930, a Times music critic stated:
... the time has come for serious musical criticism to take account of performances of great music reproduced by means of the records. To claim that the records have succeeded in exact and complete reproduction of all details of symphonic or operatic performances ... would be extravagant ... [but] the article of today is so far in advance of the old machines as hardly to admit classification under the same name. Electrical recording and reproduction have combined to retain vitality and color in recitals by proxy.
The Orthophonic Victrola had an interior folded exponential horn, a sophisticated design informed by impedance-matching and transmission-line theory, and designed to provide a relatively flat frequency response. Victor's first public demonstration of the Orthophonic Victrola on 6 October 1925, at the Waldorf-Astoria Hotel was front-page news in The New York Times, which reported:
The audience broke into applause ... John Philip Sousa [said]: '[Gentlemen], that is a band. This is the first time I have ever heard music with any soul to it produced by a mechanical talking machine' ... The new instrument is a feat of mathematics and physics. It is not the result of innumerable experiments, but was worked out on paper in advance of being built in the laboratory ... The new machine has a range of from 100 to 5,000 [cycles per second], or five and a half octaves ... The 'phonograph tone' is eliminated by the new recording and reproducing process.
Sales of records plummeted during the early years of the Great Depression of the 1930s, and the entire record industry in America nearly foundered. In 1932, RCA Victor introduced a basic, inexpensive turntable called the Duo Jr., which was designed to be connected to their radio receivers. According to Edward Wallerstein (the general manager of the RCA Victor Division), this device was "instrumental in revitalizing the industry".
=== 78 rpm materials ===
The production of shellac records continued throughout the 78 rpm era, which lasted until 1948 in industrialized nations.
During the Second World War, the United States Armed Forces produced thousands of 12-inch vinyl 78 rpm V-Discs for use by the troops overseas. After the war, the use of vinyl became more practical as new record players with lightweight crystal pickups and precision-ground styli made of sapphire or an exotic osmium alloy proliferated. In late 1945, RCA Victor began offering "De Luxe" transparent red vinylite pressings of some Red Seal classical 78s, at a de luxe price. Later, Decca Records introduced vinyl Deccalite 78s, while other record companies used various vinyl formulations trademarked as Metrolite, Merco Plastic, and Sav-o-flex, but these were mainly used to produce "unbreakable" children's records and special thin vinyl DJ pressings for shipment to radio stations.
=== 78 rpm recording time ===
The playing time of a phonograph record is directly proportional to the available groove length divided by the turntable speed. Total groove length in turn depends on how closely the grooves are spaced, in addition to the record diameter. At the beginning of the 20th century, the early discs played for two minutes, the same as cylinder records. The 12-inch disc, introduced by Victor in 1903, increased the playing time to three and a half minutes. Because the standard 10-inch 78 rpm record could hold about three minutes of sound per side, most popular recordings were limited to that duration. For example, when King Oliver's Creole Jazz Band, including Louis Armstrong on his first recordings, recorded 13 sides at Gennett Records in Richmond, Indiana, in 1923, one side was 2:09 and four sides were 2:52–2:59.
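The proportionality can be made concrete with a back-of-the-envelope calculation. A minimal sketch; the groove pitch and recorded diameters below are illustrative assumptions, not figures from the text:

```python
def playing_time_min(outer_diam_in: float, inner_diam_in: float,
                     grooves_per_inch: float, rpm: float) -> float:
    """Approximate per-side playing time of a disc.

    The spiral makes (radial band width x grooves per inch) turns,
    and the platter completes `rpm` turns per minute.
    """
    band_width_in = (outer_diam_in - inner_diam_in) / 2  # radial width of recorded area
    turns = band_width_in * grooves_per_inch
    return turns / rpm

# Assumed figures for a 10-inch 78: recorded area from ~9.5 in down to
# ~3.8 in diameter, cut at ~96 grooves per inch.
print(round(playing_time_min(9.5, 3.8, 96, 78), 1))  # ≈ 3.5 minutes per side
```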
In January 1938, Milt Gabler started recording for Commodore Records, and to allow for longer continuous performances, he recorded some 12-inch discs. Eddie Condon explained: "Gabler realized that a jam session needs room for development." The first two 12-inch recordings did not take advantage of their capability: "Carnegie Drag" was 3m 15s; "Carnegie Jump", 2m 41s. But at the second session, on 30 April, the two 12-inch recordings were longer: "Embraceable You" was 4m 05s; "Serenade to a Shylock", 4m 32s. Another way to overcome the time limitation was to issue a selection extending to both sides of a single record. Vaudeville stars Gallagher and Shean recorded "Mr. Gallagher and Mr. Shean", written by themselves or, allegedly, by Bryan Foy, as two sides of a 10-inch 78 in 1922 for Victor. Longer musical pieces were released as a set of records. In 1903 The Gramophone Company in England made the first complete recording of an opera, Verdi's Ernani, on 40 single-sided discs.
In 1940, Commodore released Eddie Condon and his Band's recording of "A Good Man Is Hard to Find" in four parts, issued on both sides of two 12-inch 78s. The limited duration of recordings persisted from their advent until the introduction of the LP record in 1948. In popular music, the time limit of 3+1⁄2 minutes on a 10-inch 78 rpm record meant that singers seldom recorded long pieces. One exception is Frank Sinatra's recording of Rodgers and Hammerstein's "Soliloquy", from Carousel, made on 28 May 1946. Because it ran 7m 57s, longer than both sides of a standard 78 rpm 10-inch record, it was released on Columbia's Masterwork label (the classical division) as two sides of a 12-inch record.
In the 78 era, classical-music and spoken-word items generally were released on the longer 12-inch 78s, about 4–5 minutes per side. For example, on 10 June 1924, four months after the 12 February premiere of Rhapsody in Blue, George Gershwin recorded an abridged version of the seventeen-minute work with Paul Whiteman and His Orchestra. It was released on two sides of Victor 55225 and ran for 8m 59s.
=== Record albums ===
"Record albums" were originally booklets containing collections of multiple disc records of related material, the name being related to photograph albums or scrap albums. German record company Odeon pioneered the album in 1909 when it released the Nutcracker Suite by Tchaikovsky on four double-sided discs in a specially designed package. It was not until the LP era that an entire album of material could be included on a single record.
=== 78 rpm releases in the microgroove era ===
In 1968, when the hit movie Thoroughly Modern Millie was inspiring revivals of Jazz Age music, Reprise planned to release a series of 78 rpm singles by artists then on its roster, called the Reprise Speed Series. Only one disc actually saw release: Randy Newman's "I Think It's Going to Rain Today", a track from his self-titled debut album (with "The Beehive State" on the flip side). Reprise did not proceed further with the series, owing to a lack of sales for the single and a lack of general interest in the concept.
In 1978, guitarist and vocalist Leon Redbone released a promotional 78-rpm single featuring two songs ("Alabama Jubilee" and "Please Don't Talk About Me When I'm Gone") from his Champagne Charlie album.
In the same vein of Tin Pan Alley revivals, R. Crumb & His Cheap Suit Serenaders issued a number of 78-rpm singles on their Blue Goose record label. The most familiar of these releases is probably R. Crumb & His Cheap Suit Serenaders' Party Record (1980, issued as a "Red Goose" record on a 12-inch single), with the double-entendre "My Girl's Pussy" on the "A" side and the X-rated "Christopher Columbus" on the "B" side.
In the 1990s Rhino Records issued a series of boxed sets of 78 rpm reissues of early rock and roll hits, intended for owners of vintage jukeboxes. The records were made of vinyl, however, and some of the earlier vintage 78 rpm jukeboxes and record players (those made before the war) were designed with heavy tone arms to play the hard slate-impregnated shellac records of their time. These softer vinyl Rhino 78s would be destroyed by old jukeboxes and record players, but play well on newer 78-capable turntables with modern lightweight tone arms and jewel needles.
As a special release for Record Store Day 2011, Capitol re-released The Beach Boys single "Good Vibrations" in the form of a 10-inch 78-rpm record (b/w "Heroes and Villains"). More recently, The Reverend Peyton's Big Damn Band has released their tribute to blues guitarist Charley Patton, Peyton on Patton, on both 12-inch LP and 10-inch 78s.
== New sizes and materials after WWII ==
CBS Laboratories had long been at work for Columbia Records to develop a phonograph record that would hold at least 20 minutes per side.
Research began in 1939, was suspended during World War II, and then resumed in 1945. Columbia Records unveiled the LP at a press conference in the Waldorf-Astoria on 21 June 1948, in two formats: 10 inches (25 centimetres) in diameter, matching that of 78 rpm singles, and 12 inches (30 centimetres) in diameter.
Unwilling to accept and license Columbia's system, in February 1949, RCA Victor released the first 45 rpm single, 7 inches in diameter with a large center hole. The 45 rpm player included a changing mechanism that allowed multiple discs to be stacked, much as a conventional changer handled 78s. Also like 78s, the short playing time of a single 45 rpm side meant that long works, such as symphonies and operas, had to be released on multiple 45s instead of a single LP, but RCA Victor claimed that the new high-speed changer rendered side breaks so brief as to be inconsequential. Early 45 rpm records were made from either vinyl or polystyrene and had a playing time of eight minutes.
At first the two systems were marketed in competition, in what was called "The War of the Speeds".
=== Speeds ===
==== Shellac era ====
The older 78 rpm format, now pressed in the new materials, continued to be mass-produced alongside the newer formats in decreasing numbers until the summer of 1958 in the U.S., and into the late 1960s in a few countries such as the Philippines and India (both of which issued recordings by the Beatles on 78s). For example, Columbia Records' last reissue of Frank Sinatra songs on 78 rpm records was an album called Young at Heart, issued in November 1954.
==== Microgroove and vinyl era ====
Columbia and RCA Victor each pursued their R&D secretly.
The commercial rivalry between RCA Victor and Columbia Records led to RCA Victor's introduction of what it had intended to be a competing vinyl format, the 7-inch (175 mm) 45 rpm disc, with a much larger center hole. For a two-year period from 1948 to 1950, record companies and consumers faced uncertainty over which of these formats would ultimately prevail in what was known as the "War of the Speeds" (see also Format war). In 1949 Capitol and Decca adopted the new LP format and RCA Victor gave in and issued its first LP in January 1950. The 45 rpm size was gaining in popularity, too, and Columbia issued its first 45s in February 1951. By 1954, 200 million 45s had been sold.
Eventually the 12-inch (300 mm) 33+1⁄3 rpm LP prevailed as the dominant format for musical albums, and 10-inch LPs were no longer issued. The last Columbia Records reissue of any Frank Sinatra songs on a 10-inch LP record was an album called Hall of Fame, CL 2600, issued on 26 October 1956, containing six songs, one each by Tony Bennett, Rosemary Clooney, Johnnie Ray, Frank Sinatra, Doris Day, and Frankie Laine.
The 45 rpm discs also came in a variety known as extended play (EP), which achieved up to 10–15 minutes play at the expense of attenuating (and possibly compressing) the sound to reduce the width required by the groove. EP discs were cheaper to produce and were used in cases where unit sales were likely to be more limited or to reissue LP albums on the smaller format for those people who had only 45 rpm players. LP albums could be purchased one EP at a time, with four items per EP, or in a boxed set with three EPs or twelve items. The large center hole on 45s allows easier handling by jukebox mechanisms. EPs were generally discontinued by the late 1950s in the U.S. as three- and four-speed record players replaced the individual 45 players. One indication of the decline of the 45 rpm EP is that the last Columbia Records reissue of Frank Sinatra songs on 45 rpm EP records, called Frank Sinatra (Columbia B-2641) was issued on 7 December 1959.
The Seeburg Corporation introduced the Seeburg Background Music System in 1959, using a 16+2⁄3 rpm 9-inch record with 2-inch center hole. Each record held 40 minutes of music per side, recorded at 420 grooves per inch.
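Running the turns-per-minute arithmetic in reverse shows how Seeburg fitted 40 minutes onto one 9-inch side. A quick illustrative check (the derived band width is our calculation, not a figure from the text):

```python
rpm = 50 / 3               # 16 2/3 rpm
minutes_per_side = 40
grooves_per_inch = 420

turns = minutes_per_side * rpm             # revolutions needed per side
band_width_in = turns / grooves_per_inch   # radial width those turns occupy
print(round(turns), round(band_width_in, 2))
# ≈ 667 turns occupying a radial band only ~1.59 inches wide, which fits
# comfortably on a 9-inch disc with a 2-inch center hole.
```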
From the mid-1950s through the 1960s, the common U.S. home record player or "stereo" (after the introduction of stereo recording) would typically have had these features: a three- or four-speed player (78, 45, 33+1⁄3, and sometimes 16+2⁄3 rpm); a changer, i.e. a tall spindle that would hold several records and automatically drop a new record onto the previous one when it had finished playing; a combination cartridge with both 78 and microgroove styli and a way to flip between the two; and some kind of adapter for playing the 45s with their larger center hole. The adapter could be a small solid circle that fit onto the bottom of the spindle (meaning only one 45 could be played at a time) or a larger adapter that fit over the entire spindle, permitting a stack of 45s to be played.
RCA Victor 45s were also adapted to the smaller spindle of an LP player with a plastic snap-in insert known as a "45 rpm adapter". These inserts were commissioned by RCA president David Sarnoff and were invented by Thomas Hutchison.
Capacitance Electronic Discs were videodiscs invented by RCA, based on mechanically tracked ultra-microgrooves (9541 grooves/inch) on a 12-inch conductive vinyl disc.
=== High fidelity ===
The term "high fidelity" was coined in the 1920s by some manufacturers of radio receivers and phonographs to differentiate their better-sounding products claimed as providing "perfect" sound reproduction. The term began to be used by some audio engineers and consumers through the 1930s and 1940s. After 1949 a variety of improvements in recording and playback technologies, especially stereo recordings, which became widely available in 1958, gave a boost to the "hi-fi" classification of products, leading to sales of individual components for the home such as amplifiers, loudspeakers, phonographs, and tape players. High Fidelity and Audio were two magazines that hi-fi consumers and engineers could read for reviews of playback equipment and recordings.
=== Stereophonic sound ===
A stereophonic phonograph provides two channels of audio, one left and one right. This is achieved by adding a vertical dimension of needle movement to the traditional horizontal one, so the needle moves not only left and right but also up and down. Since the two dimensions do not have the same sensitivity to vibration, the difference is evened out by rotating the channels 45 degrees from horizontal, so that each channel takes half its information from each direction.
As a result of the 45-degree rotation and some vector addition, one of the new horizontal and vertical directions represents the sum of the two channels and the other represents the difference. Record makers chose the directions so that the traditional horizontal direction encodes the sum. Consequently, an ordinary mono disc is decoded correctly as "no difference between channels", and an ordinary mono player simply plays the sum of a stereophonic record without much loss of information.
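The geometry can be written out as a simple matrix operation. A minimal sketch of 45/45 sum-and-difference encoding; the 1/√2 normalization is a conventional choice assumed here, not specified in the text:

```python
import math

SQRT2 = math.sqrt(2)

def encode(left: float, right: float) -> tuple[float, float]:
    """Rotate L/R by 45 degrees: lateral motion carries the sum, vertical the difference."""
    lateral = (left + right) / SQRT2
    vertical = (left - right) / SQRT2
    return lateral, vertical

def decode(lateral: float, vertical: float) -> tuple[float, float]:
    """The inverse rotation recovers the original channels."""
    return (lateral + vertical) / SQRT2, (lateral - vertical) / SQRT2

# A mono signal (equal channels) cuts purely lateral motion, which is why
# a mono player reads the sum of a stereo record correctly.
print(encode(1.0, 1.0))            # (1.414..., 0.0) -- no vertical component
print(decode(*encode(0.3, -0.7)))  # (0.3, -0.7) up to floating-point rounding
```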
In 1957 the first commercial two-channel stereo records were issued, first by Audio Fidelity, followed by Bel Canto Records on translucent blue vinyl; Bel Canto's first release was a multi-colored-vinyl sampler featuring A Stereo Tour of Los Angeles narrated by Jack Wagner on one side and a collection of tracks from various Bel Canto albums on the other.
=== Noise reduction systems ===
One scheme, aimed at the high-end audiophile market and achieving a noise reduction of about 20 to 25 dB(A), was the Telefunken/Nakamichi High-Com II noise reduction system, adapted to vinyl in 1979. A decoder was commercially available, but only one demo record is known to have been produced in this format.
The availability of encoded discs in such formats ended in the mid-1980s.
Yet another noise reduction system for vinyl records was the UC compander system developed by Zentrum Wissenschaft und Technik (ZWT) of Kombinat Rundfunk und Fernsehen (RFT). The system deliberately limited noise reduction to 10 to 12 dB(A) so that playback remained virtually free of recognizable acoustic artifacts even when records were played without a UC expander. Although undocumented, the system was introduced to the market by several East German record labels from 1983 onward. Over 500 UC-encoded titles were produced without an expander ever becoming available to the public; the only UC expander was built into a turntable manufactured by Phonotechnik Pirna/Zittau.
== Formats ==
=== Types of records ===
The usual diameter of the hole on an EP record is 0.286 inches (7.26 mm).
Sizes of records in the United States and the UK are generally measured in inches, e.g. 7-inch records, which are generally 45 rpm records. LPs were 10-inch records at first, but soon the 12-inch size became by far the most common. Generally, 78s were 10-inch, but 12-inch and 7-inch and even smaller were made—the so-called "little wonders".
=== Standard formats ===
=== Less common formats ===
Flexi discs were thin flexible records that were distributed with magazines and as promotional gifts from the 1960s to the 1980s.
In March 1949, as RCA Victor released the 45, Columbia released several hundred 7-inch, 33+1⁄3 rpm, small-spindle-hole singles. This format was soon dropped as it became clear that the RCA Victor 45 was the single of choice and the Columbia 12-inch LP would be the album of choice.
The first release of the 45 came in seven colors: black 47-xxxx popular series, yellow 47-xxxx juvenile series, green (teal) 48-xxxx country series, deep red 49-xxxx classical series, bright red (cerise) 50-xxxx blues/spiritual series, light blue 51-xxxx international series, dark blue 52-xxxx light classics. Most colors were soon dropped in favor of black because of production problems. However, yellow and deep red were continued until about 1952.
The first 45 rpm record created for sale was "PeeWee the Piccolo" RCA Victor 47-0147 pressed in yellow translucent vinyl at the Sherman Avenue plant, Indianapolis on 7 December 1948, by R. O. Price, plant manager.
In the 1950s and 1960s, "ribs" were created in the Soviet Union as a result of cultural censorship. These black-market records carried banned music, printed onto X-ray films scavenged from hospital bins.
In the 1970s, the government of Bhutan produced now-collectible postage stamps on playable vinyl mini-discs.
=== Recent developments ===
In 2018, an Austrian startup, Rebeat Innovation GmbH, received US$4.8 million in funding to develop "high-definition" vinyl records purported to offer longer playing times, louder volume, and higher fidelity than conventional vinyl LPs. Rebeat Innovation, headed by CEO Günter Loibl, has called the format "HD Vinyl". The HD process works by converting audio to a digital 3D topographic map that is then inscribed onto the vinyl stamper via lasers, resulting in less loss of information. Many critics have expressed skepticism regarding the cost and quality of HD records.
In May 2019, at the Making Vinyl conference in Berlin, Loibl unveiled the software "Perfect Groove" for creating 3D topographic audio data files. The software provides a map for laser-engraving for HD Vinyl stampers. The audio engineering software was created with mastering engineers Scott Hull and Darcy Proper, a four-time Grammy winner. The demonstration offered the first simulations of what HD Vinyl records are likely to sound like, ahead of actual HD vinyl physical record production. Loibl discussed the software "Perfect Groove" at a presentation titled "Vinyl 4.0 The next generation of making records" before offering demonstrations to attendees.
== Structure ==
Increasingly from the early 20th century, and almost exclusively since the 1920s, both sides of the record have been used to carry the grooves. Occasional records have been issued since then with a recording on only one side. In the 1980s Columbia Records briefly issued a series of less expensive one-sided 45 rpm singles.
Since their inception in 1948, vinyl record standards in the United States have followed the guidelines of the Recording Industry Association of America (RIAA).
=== Vinyl quality ===
The composition of vinyl used to press records (a blend of polyvinyl chloride and polyvinyl acetate) has varied considerably over the years. Virgin vinyl is preferred, but during the 1970s energy crisis, as a cost-cutting move, much of the industry began reducing the thickness and quality of vinyl used in mass-market manufacturing. Sound quality suffered, with increased ticks, pops, and other surface noises. RCA Records marketed their lightweight LP as Dynaflex, which, at the time, was considered inferior by many record collectors.
It became commonplace to use recycled vinyl. New or "virgin" heavy/heavyweight (180–220 g) vinyl is commonly used for modern audiophile vinyl releases in all genres. Many collectors prefer to have heavyweight vinyl albums, which have been reported to have better sound than normal vinyl because of their higher tolerance against deformation caused by normal play.
Following the vinyl revival of the 21st century, select manufacturers adopted bioplastic-based records due to concerns over the environmental impact of widespread PVC use.
== Limitations ==
=== Shellac ===
One problem with shellac was that discs tended to be larger, because shellac was limited to 80–100 groove walls per inch before the risk of groove collapse became too high, whereas vinyl could have up to 260 groove walls per inch.
=== Vinyl ===
Although vinyl records are strong and do not break easily, their soft material scratches readily, sometimes ruining the record. Vinyl readily acquires a static charge, attracting dust that is difficult to remove completely. Dust and scratches cause audio clicks and pops. In extreme cases, they can cause the needle to skip over a series of grooves, or worse, to skip backward, creating a "locked groove" that repeats over and over. This is the origin of the phrase "like a broken record" or "like a scratched record", which is often used to describe a person or thing that continually repeats itself.
A further limitation of the gramophone record is that fidelity steadily declines as playback progresses: there is more vinyl per second available for fine reproduction of high frequencies at the large-diameter beginning of the groove than at the smaller diameters close to the end of the side. At the start of a groove on an LP, 510 mm of vinyl per second travel past the stylus, while the end of the groove gives 200–210 mm of vinyl per second – less than half the linear resolution.
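Those figures follow from circumference times revolutions per second. A short check; the recorded radii are typical assumed values for a 12-inch LP, not taken from the text:

```python
import math

def groove_speed_mm_s(radius_mm: float, rpm: float) -> float:
    """Linear velocity of the groove past the stylus: circumference x revs per second."""
    return 2 * math.pi * radius_mm * rpm / 60

LP_RPM = 100 / 3  # 33 1/3 rpm
print(round(groove_speed_mm_s(146, LP_RPM)))  # ≈ 510 mm/s at an assumed 146 mm outer radius
print(round(groove_speed_mm_s(60, LP_RPM)))   # ≈ 209 mm/s at an assumed 60 mm inner radius
```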
There is controversy about the relative quality of CD sound and LP sound when the latter is heard under the best conditions (see Comparison of analog and digital recording). One technical advantage of vinyl over the optical CD is that, if correctly handled and stored, a vinyl record can remain playable for decades and possibly centuries, longer than some versions of the optical CD. For vinyl records to be playable for years to come, they need to be handled with care and stored properly. Guidelines for proper storage include not stacking records on top of each other, avoiding heat or direct sunlight, and keeping them in a temperature-controlled area, which helps prevent warping and scratching. Collectors store their records in a variety of boxes, cubes, shelves and racks.
=== Sound fidelity ===
At the time of the introduction of the compact disc (CD) in 1982, the stereo LP pressed in vinyl continued to suffer from a variety of limitations:
The stereo image was not made up of fully discrete left and right channels; each channel's signal coming out of the cartridge contained a small amount of the signal from the other channel, with more crosstalk at higher frequencies. High-quality disc cutting equipment was capable of making a master disc with 30–40 dB of stereo separation at 1,000 Hz, but playback cartridges performed worse, with about 20 to 30 dB of separation at 1,000 Hz, decreasing as frequency increased, such that at 12 kHz the separation was about 10–15 dB (see the sketch after this list for how these decibel figures translate to amplitude ratios). A common modern view is that stereo isolation must be higher than this to achieve a proper stereo soundstage. However, in the 1950s the BBC determined in a series of tests that only 20–25 dB is required for the impression of full stereo separation.
Thin, closely spaced spiral grooves that allow for increased playing time on a 33+1⁄3 rpm microgroove LP lead to a tinny pre-echo warning of upcoming loud sounds. The cutting stylus unavoidably transfers some of the subsequent groove wall's impulse signal into the previous groove wall. It is discernible by some listeners throughout certain recordings, but a quiet passage followed by a loud sound allows anyone to hear a faint pre-echo of the loud sound occurring 1.8 seconds ahead of time, since the adjacent groove is exactly one revolution away and one revolution at 33+1⁄3 rpm takes 1.8 seconds (60 s ÷ 33+1⁄3).
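As noted above, separation figures in decibels convert to amplitude ratios with the standard 20·log10 rule. A quick illustrative sketch:

```python
def crosstalk_fraction(separation_db: float) -> float:
    """Fraction of one channel's amplitude leaking into the other."""
    return 10 ** (-separation_db / 20)

for db in (10, 20, 30, 40):
    print(f"{db} dB separation -> {crosstalk_fraction(db):.1%} amplitude leakage")
# 30 dB of separation leaves about 3.2% of the opposite channel's amplitude;
# at the 10-15 dB seen near 12 kHz, roughly 18-32% leaks through.
```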
=== LP versus CD ===
Audiophiles have differed over the relative merits of the LP versus the CD since the digital disc was introduced. Digital sampling can theoretically completely reproduce a sound wave within a given range of frequencies if the sampling rate is high enough. Vinyl's drawbacks, however, include surface noise, less resolution due to a lower dynamic range, and greater sensitivity to handling. Modern anti-aliasing filters and oversampling systems used in digital recordings have eliminated perceived problems observed with early CD players.
There is a theory that vinyl records can audibly represent higher frequencies than compact discs, though most of this content is noise and not relevant to human hearing. According to Red Book specifications, the compact disc has a frequency response of 20 Hz up to 22,050 Hz, and most CD players measure flat within a fraction of a decibel from at least 0 Hz to 20 kHz at full output. Due to the distance required between grooves, it is not possible for an LP to reproduce frequencies as low as a CD can. Additionally, turntable rumble and acoustic feedback obscure the low-end limit of vinyl, but the upper end can be, with some cartridges, reasonably flat within a few decibels up to 30 kHz, with gentle roll-off. The carrier signals of the Quad LPs popular in the 1970s were placed at 30 kHz, out of the range of human hearing. The average human auditory system is sensitive to frequencies from 20 Hz to a maximum of around 20,000 Hz; the upper and lower limits vary per person, and high-frequency sensitivity decreases as a person ages, a process called presbycusis.
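The 22,050 Hz figure is simply the Nyquist limit, half of the Red Book's 44.1 kHz sampling rate. A one-line illustrative check:

```python
RED_BOOK_SAMPLE_RATE_HZ = 44_100

nyquist_hz = RED_BOOK_SAMPLE_RATE_HZ / 2  # highest frequency a sampled signal can represent
print(nyquist_hz)  # 22050.0, matching the upper limit cited above
```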
== Preservation ==
As the playing of gramophone records causes gradual degradation of the recording, they are best preserved by transferring them onto other media and playing the records as rarely as possible. They need to be stored on edge, and do best under environmental conditions that most humans would find comfortable. The longevity and optimal performance of vinyl records can be improved through certain accessories and cleaning supplies. Slipmats provide a soft and cushioned surface between the record and the turntable platter, minimizing friction and preventing potential scratches or damage to the vinyl surface.
Where old disc recordings are considered to be of artistic or historic interest, from before the era of tape or where no tape master exists, archivists play back the disc on suitable equipment and record the result, typically onto a digital format, which can be copied and manipulated to remove analog flaws without any further damage to the source recording. For example, Nimbus Records uses a specially built horn record player to transfer 78s. Anyone can do this using a standard record player with a suitable pickup, a phono-preamp (pre-amplifier) and a typical personal computer. However, for accurate transfer, professional archivists carefully choose the correct stylus shape and diameter, tracking weight, equalisation curve and other playback parameters and use high-quality analogue-to-digital converters.
As an alternative to playback with a stylus, a recording can be read optically, processed with software that calculates the velocity that the stylus would be moving in the mapped grooves and converted to a digital recording format. This does no further damage to the disc and generally produces a better sound than normal playback. This technique also has the potential to allow for reconstruction of broken or otherwise damaged discs.
== Popularity and current status ==
Groove recordings, first designed in the final quarter of the 19th century, held a predominant position for nearly a century, withstanding competition from reel-to-reel tape, the 8-track cartridge, and the compact cassette. The widespread popularity of Sony's Walkman was among the factors contributing to vinyl's declining usage in the 1980s.
In 1988, the compact disc surpassed the gramophone record in unit sales. Vinyl records experienced a sudden decline in popularity between 1988 and 1991, when the major label distributors restricted their return policies, which retailers had been relying on to maintain and swap out stocks of relatively unpopular titles. First the distributors began charging retailers more for new products if they returned unsold vinyl, and then they stopped providing any credit at all for returns. Retailers, fearing they would be stuck with anything they ordered, only ordered proven, popular titles that they knew would sell, and devoted more shelf space to CDs and cassettes. Record companies also removed many vinyl titles from production and distribution, further undermining the availability of the format and leading to the closure of pressing plants. This rapid decline in the availability of records accelerated the format's decline in popularity, and is seen by some as a deliberate ploy to make consumers switch to CDs, which, unlike today, were more profitable for the record companies.
The more modern CD format held numerous advantages over the record, such as its portability, digital audio, elimination of background hiss and surface noise, instant switching and searching of tracks, longer playing time, lack of continuous degradation (most analog formats wear out as they are played), programmability (e.g. shuffle, repeat), and the ability to be played on and copied to a personal computer. In spite of their flaws, records continued to have enthusiastic supporters, partly due to a preference for their "warmer" sound and their larger sleeve artwork. Records also remained the format of choice for disc jockeys in dance clubs during the 1990s and 2000s because of their better mixing capabilities.
=== Revival era ===
A niche resurgence of vinyl records began in the late 2000s, mainly among rock fans. The Entertainment Retailers Association in the United Kingdom found in 2011 that consumers were willing to pay on average £16.30 (€19.37, US$25.81) for a single vinyl record, as opposed to £7.82 (€9.30, US$12.38) for a CD and £6.80 (€8.09, US$10.76) for a digital download. The resurgence accelerated throughout the 2010s, and in 2015 reached $416 million revenue in the US, their highest level since 1988. As of 2017, it comprised 14% of all physical album sales. According to the RIAA's midyear report in 2020, phonograph record revenues surpassed those of CDs for the first time since the 1980s.
In 2021, Taylor Swift sold 102,000 copies of her ninth studio album Evermore on vinyl in one week, the largest one-week vinyl sales total since Nielsen started tracking vinyl sales in 1991. The record was previously held by Jack White, who sold 40,000 copies of his second solo release, Lazaretto, on vinyl in its first week of release in 2014.
Approximately 180 million LP records are produced annually at global pressing plants, as of 2021.
=== Present production ===
As of 2017, 48 record pressing facilities exist worldwide. The increased popularity of the record has led to investment in new and modern record-pressing machines. Only two producers of lacquer master discs remain: Apollo Masters in California and MDC in Japan. On 6 February 2020, a fire destroyed the Apollo Masters plant; according to the Apollo Masters website, the company's future is still uncertain. Hand Drawn Pressing opened in 2016 as the world's first fully automated record pressing plant.
== Less common recording formats ==
=== VinylVideo ===
VinylVideo is a format for storing low-resolution black-and-white video on a vinyl record alongside encoded audio.
=== Capacitance Electronic Disc ===
Another example is the Capacitance Electronic Disc, a color video format with picture quality slightly better than VHS.
== See also ==
Album cover
Apollo Masters Corporation fire
Capacitance Electronic Disc
Conservation and restoration of vinyl discs
Electrical transcription
LP record
The New Face of Vinyl: Youth's Digital Devolution (photo documentary)
Phonograph cylinder
Pocket Disc
Record Store Day
Sound recording and reproduction
Unusual types of gramophone records
== References ==
== Further reading ==
== External links ==
How vinyl records work, at the vinyl-place website
Playback equalization for 78 rpm shellacs and early LPs (EQ curves, index of record labels): Audacity Wiki
The manufacturing and production of shellac records. Educational video, 1942.
Reproduction of 78 rpm records including equalization data for different makes of 78s and LPs.
The Secret Society of Lathe Trolls, a site devoted to all aspects of the making of Gramophone records.
How to digitize gramophone records: Audacity Tutorial
Actual list of vinyl pressing plants: vinyl-pressing-plants.com
Dedicated museum for sound history: Musée des ondes Emile Berliner, Montreal, Canada
Smart Vinyl: The First Computerized
Semi-automatic metadata extraction from shellac and vinyl disc
Vinyl Player 2.0 | Wikipedia/Phonograph_record |
A phonograph record (also known as a gramophone record, especially in British English) or a vinyl record (for later varieties only) is an analog sound storage medium in the form of a flat disc with an inscribed, modulated spiral groove. The groove usually starts near the outside edge and ends near the center of the disc. The stored sound information is made audible by playing the record on a phonograph (or "gramophone", "turntable", or "record player").
Records have been produced in different formats with playing times ranging from a few minutes to around 30 minutes per side. For about half a century, the discs were commonly made from shellac and these records typically ran at a rotational speed of 78 rpm, giving it the nickname "78s" ("seventy-eights"). After the 1940s, "vinyl" records made from polyvinyl chloride (PVC) became standard replacing the old 78s and remain so to this day; they have since been produced in various sizes and speeds, most commonly 7-inch discs played at 45 rpm (typically for singles, also called 45s ("forty-fives")), and 12-inch discs played at 33⅓ rpm (known as an LP, "long-playing records", typically for full-length albums) – the latter being the most prevalent format today.
== Overview ==
The phonograph record was the primary medium used for music reproduction throughout the 20th century. It had co-existed with the phonograph cylinder from the late 1880s and had effectively superseded it by around 1912. Records retained the largest market share even when new formats such as the compact cassette were mass-marketed. By the 1980s, digital media, in the form of the compact disc, had gained a larger market share, and the record left the mainstream in 1991. Since the 1990s, records continue to be manufactured and sold on a smaller scale, and during the 1990s and early 2000s were commonly used by disc jockeys (DJs), especially in dance music genres. They were also listened to by a growing number of audiophiles. The phonograph record has made a niche resurgence in the early 21st century, growing increasingly popular throughout the 2010s and 2020s.
Phonograph records are generally described by their diameter in inches (12-inch, 10-inch, 7-inch), the rotational speed in revolutions per minute (rpm) at which they are played (8+1⁄3, 16+2⁄3, 33+1⁄3, 45, 78), and their time capacity, determined by their diameter and speed (LP [long play], 12-inch disc, 33+1⁄3 rpm; EP [extended play], 12-inch disc or 7-inch disc, 33+1⁄3 or 45 rpm; Single, 7-inch or 10-inch disc, 45 or 78 rpm); their reproductive quality, or level of fidelity (high-fidelity, orthophonic, full-range, etc.); and the number of audio channels (mono, stereo, quad, etc.).
The phrase broken record refers to a malfunction when the needle skips/jumps back to the previous groove and plays the same section over and over again indefinitely.
=== Naming ===
The various names have included phonograph record (American English), gramophone record (British English), record, vinyl, LP (originally a trademark of Columbia Records), black disc, album, and more informally platter, wax, or liquorice pizza.
== Early development ==
Manufacture of disc records began in the late 19th century, at first competing with earlier cylinder records. Price, ease of use and storage made the disc record dominant by the 1910s. The standard format of disc records became known to later generations as "78s" after their playback speed in revolutions per minute, although that speed only became standardized in the late 1920s. In the late 1940s new formats pressed in vinyl, the 45 rpm single and 33 rpm long playing "LP", were introduced, gradually overtaking the formerly standard "78s" over the next decade. The late 1950s saw the introduction of stereophonic sound on commercial discs.
=== Predecessors ===
The phonautograph was invented by 1857 by Frenchman Édouard-Léon Scott de Martinville. It could not, however, play back recorded sound, as Scott intended for people to read back the tracings, which he called phonautograms. Prior to this, tuning forks had been used in this way to create direct tracings of the vibrations of sound-producing objects, as by English physicist Thomas Young in 1807.
In 1877, Thomas Edison invented the first phonograph, which etched sound recordings onto phonograph cylinders. Unlike the phonautograph, Edison's phonograph could both record and reproduce sound, via two separate needles, one for each function.
=== The first disc records ===
The first commercially sold disc records were created by Emile Berliner in the 1880s. Emile Berliner improved the quality of recordings while his manufacturing associate Eldridge R. Johnson, who owned a machine shop in Camden, New Jersey, eventually improved the mechanism of the gramophone with a spring motor and a speed regulating governor, resulting in a sound quality equal to Edison's cylinders. Abandoning Berliner's "Gramophone" trademark for legal reasons in the United States, Johnson's and Berliner's separate companies reorganized in 1901 to form the Victor Talking Machine Company in Camden, New Jersey, whose products would come to dominate the market for several decades.
Berliner's Montreal factory, which became the Canadian branch of RCA Victor, still exists. There is a dedicated museum in Montreal for Berliner (Musée des ondes Emile Berliner).
== 78 rpm disc developments ==
=== Early speeds ===
Early disc recordings were produced in a variety of speeds ranging from 60 to 130 rpm, and a variety of sizes. As early as 1894, Emile Berliner's United States Gramophone Company was selling single-sided 7-inch discs with an advertised standard speed of "about 70 rpm".
One standard audio recording handbook describes speed regulators, or governors, as being part of a wave of improvement introduced rapidly after 1897. A picture of a hand-cranked 1898 Berliner Gramophone shows a governor and says that spring drives had replaced hand drives. It notes that:
The speed regulator was furnished with an indicator that showed the speed when the machine was running so that the records, on reproduction, could be revolved at exactly the same speed...The literature does not disclose why 78 rpm was chosen for the phonograph industry, apparently this just happened to be the speed created by one of the early machines and, for no other reason continued to be used.
In 1912, the Gramophone Company set 78 rpm as their recording standard, based on the average of recordings they had been releasing at the time, and started selling players whose governors had a nominal speed of 78 rpm. By 1925, 78 rpm was becoming standardized across the industry. However, the exact speed differed between places with alternating current electricity supply at 60 hertz (cycles per second, Hz) and those at 50 Hz. Where the mains supply was 60 Hz, the actual speed was 78.26 rpm: that of a 60 Hz stroboscope illuminating 92-bar calibration markings. Where it was 50 Hz, it was 77.92 rpm: that of a 50 Hz stroboscope illuminating 77-bar calibration markings.
At least one attempt to lengthen playing time was made in the early 1920s. World Records produced records that played at a constant linear velocity, controlled by Noel Pemberton Billing's patented add-on speed governor.
=== Acoustic recording ===
Early recordings were made entirely acoustically, the sound was collected by a horn and piped to a diaphragm, which vibrated the cutting stylus. Sensitivity and frequency range were poor, and frequency response was irregular, giving acoustic recordings an instantly recognizable tonal quality. A singer almost had to put their face in the recording horn. A way of reducing resonance was to wrap the recording horn with tape.
Even drums, if planned and placed properly, could be effectively recorded and heard on even the earliest jazz and military band recordings. The loudest instruments such as the drums and trumpets were positioned the farthest away from the collecting horn. Lillian Hardin Armstrong, a member of King Oliver's Creole Jazz Band, which recorded at Gennett Records in 1923, remembered that at first Oliver and his young second trumpet, Louis Armstrong, stood next to each other and Oliver's horn could not be heard. "They put Louis about fifteen feet over in the corner, looking all sad."
=== Electrical recording ===
During the first half of the 1920s, engineers at Western Electric, as well as independent inventors such as Orlando Marsh, developed technology for capturing sound with a microphone, amplifying it with vacuum tubes (known as valves in the UK), and then using the amplified signal to drive an electromechanical recording head. Western Electric's innovations resulted in a broader and smoother frequency response, which produced a dramatically fuller, clearer and more natural-sounding recording. Soft or distant sounds that were previously impossible to record could now be captured. Volume was now limited only by the groove spacing on the record and the amplification of the playback device. Victor and Columbia licensed the new electrical system from Western Electric and recorded the first electrical discs during the spring of 1925. The first electrically recorded Victor Red Seal record was Chopin's "Impromptus" and Schubert's "Litanei" performed by pianist Alfred Cortot at Victor's studios in Camden, New Jersey.
A 1926 Wanamaker's ad in The New York Times offers records "by the latest Victor process of electrical recording". It was recognized as a breakthrough; in 1930, a Times music critic stated:
... the time has come for serious musical criticism to take account of performances of great music reproduced by means of the records. To claim that the records have succeeded in exact and complete reproduction of all details of symphonic or operatic performances ... would be extravagant ... [but] the article of today is so far in advance of the old machines as hardly to admit classification under the same name. Electrical recording and reproduction have combined to retain vitality and color in recitals by proxy.
The Orthophonic Victrola had an interior folded exponential horn, a sophisticated design informed by impedance-matching and transmission-line theory, and designed to provide a relatively flat frequency response. Victor's first public demonstration of the Orthophonic Victrola on 6 October 1925, at the Waldorf-Astoria Hotel was front-page news in The New York Times, which reported:
The audience broke into applause ... John Philip Sousa [said]: '[Gentlemen], that is a band. This is the first time I have ever heard music with any soul to it produced by a mechanical talking machine' ... The new instrument is a feat of mathematics and physics. It is not the result of innumerable experiments, but was worked out on paper in advance of being built in the laboratory ... The new machine has a range of from 100 to 5,000 [cycles per second], or five and a half octaves ... The 'phonograph tone' is eliminated by the new recording and reproducing process.
Sales of records plummeted precipitously during the early years of the Great Depression of the 1930s, and the entire record industry in America nearly foundered. In 1932, RCA Victor introduced a basic, inexpensive turntable called the Duo Jr., which was designed to be connected to their radio receivers. According to Edward Wallerstein (the general manager of the RCA Victor Division), this device was "instrumental in revitalizing the industry".
=== 78 rpm materials ===
The production of shellac records continued throughout the 78 rpm era, which lasted until 1948 in industrialized nations.
During the Second World War, the United States Armed Forces produced thousands of 12-inch vinyl 78 rpm V-Discs for use by the troops overseas. After the war, the use of vinyl became more practical as new record players with lightweight crystal pickups and precision-ground styli made of sapphire or an exotic osmium alloy proliferated. In late 1945, RCA Victor began offering "De Luxe" transparent red vinylite pressings of some Red Seal classical 78s, at a de luxe price. Later, Decca Records introduced vinyl Deccalite 78s, while other record companies used various vinyl formulations trademarked as Metrolite, Merco Plastic, and Sav-o-flex, but these were mainly used to produce "unbreakable" children's records and special thin vinyl DJ pressings for shipment to radio stations.
=== 78 rpm recording time ===
The playing time of a phonograph record is directly proportional to the available groove length divided by the turntable speed. Total groove length in turn depends on how closely the grooves are spaced, in addition to the record diameter. At the beginning of the 20th century, the early discs played for two minutes, the same as cylinder records. The 12-inch disc, introduced by Victor in 1903, increased the playing time to three and a half minutes. Because the standard 10-inch 78 rpm record could hold about three minutes of sound per side, most popular recordings were limited to that duration. For example, when King Oliver's Creole Jazz Band, including Louis Armstrong on his first recordings, recorded 13 sides at Gennett Records in Richmond, Indiana, in 1923, one side was 2:09 and four sides were 2:52–2:59.
In January 1938, Milt Gabler started recording for Commodore Records, and to allow for longer continuous performances, he recorded some 12-inch discs. Eddie Condon explained: "Gabler realized that a jam session needs room for development." The first two 12-inch recordings did not take advantage of their capability: "Carnegie Drag" was 3m 15s; "Carnegie Jump", 2m 41s. But at the second session, on 30 April, the two 12-inch recordings were longer: "Embraceable You" was 4m 05s; "Serenade to a Shylock", 4m 32s. Another way to overcome the time limitation was to issue a selection extending to both sides of a single record. Vaudeville stars Gallagher and Shean recorded "Mr. Gallagher and Mr. Shean", written by themselves or, allegedly, by Bryan Foy, as two sides of a 10-inch 78 in 1922 for Victor. Longer musical pieces were released as a set of records. In 1903 The Gramophone Company in England made the first complete recording of an opera, Verdi's Ernani, on 40 single-sided discs.
In 1940, Commodore released Eddie Condon and his Band's recording of "A Good Man Is Hard to Find" in four parts, issued on both sides of two 12-inch 78s. The limited duration of recordings persisted from their advent until the introduction of the LP record in 1948. In popular music, the time limit of 3+1⁄2 minutes on a 10-inch 78 rpm record meant that singers seldom recorded long pieces. One exception is Frank Sinatra's recording of Rodgers and Hammerstein's "Soliloquy", from Carousel, made on 28 May 1946. Because it ran 7m 57s, longer than both sides of a standard 78 rpm 10-inch record, it was released on Columbia's Masterwork label (the classical division) as two sides of a 12-inch record.
In the 78 era, classical-music and spoken-word items were generally released on the longer 12-inch 78s, at about 4–5 minutes per side. For example, on 10 June 1924, four months after the 12 February premiere of Rhapsody in Blue, George Gershwin recorded an abridged version of the seventeen-minute work with Paul Whiteman and His Orchestra. It was released on two sides of Victor 55225 and ran for 8m 59s.
=== Record albums ===
"Record albums" were originally booklets containing collections of multiple disc records of related material, the name being related to photograph albums or scrap albums. German record company Odeon pioneered the album in 1909 when it released the Nutcracker Suite by Tchaikovsky on four double-sided discs in a specially designed package. It was not until the LP era that an entire album of material could be included on a single record.
=== 78 rpm releases in the microgroove era ===
In 1968, when the hit movie Thoroughly Modern Millie was inspiring revivals of Jazz Age music, Reprise planned to release a series of 78-rpm singles by the artists then on its label, called the Reprise Speed Series. Only one disc actually saw release: Randy Newman's "I Think It's Going to Rain Today", a track from his self-titled debut album (with "The Beehive State" on the flipside). Reprise did not proceed further with the series due to a lack of sales for the single and a lack of general interest in the concept.
In 1978, guitarist and vocalist Leon Redbone released a promotional 78-rpm single featuring two songs ("Alabama Jubilee" and "Please Don't Talk About Me When I'm Gone") from his Champagne Charlie album.
In the same vein of Tin Pan Alley revivals, R. Crumb & His Cheap Suit Serenaders issued a number of 78-rpm singles on their Blue Goose record label. The most familiar of these releases is probably the band's Party Record (1980, issued as a "Red Goose" record on a 12-inch single), with the double-entendre "My Girl's Pussy" on the "A" side and the X-rated "Christopher Columbus" on the "B" side.
In the 1990s Rhino Records issued a series of boxed sets of 78-rpm reissues of early rock and roll hits, intended for owners of vintage jukeboxes. The records were made of vinyl, however, and some of the earlier pre-war 78-rpm jukeboxes and record players were designed with heavy tone arms to play the hard slate-impregnated shellac records of their time. These vinyl Rhino 78s were softer and would be destroyed by old jukeboxes and record players, but they play well on newer 78-capable turntables with modern lightweight tone arms and jewel needles.
As a special release for Record Store Day 2011, Capitol re-released The Beach Boys single "Good Vibrations" in the form of a 10-inch 78-rpm record (b/w "Heroes and Villains"). More recently, The Reverend Peyton's Big Damn Band has released their tribute to blues guitarist Charley Patton Peyton on Patton on both 12-inch LP and 10-inch 78s.
== New sizes and materials after WWII ==
CBS Laboratories had long been at work for Columbia Records to develop a phonograph record that would hold at least 20 minutes per side.
Research began in 1939, was suspended during World War II, and then resumed in 1945. Columbia Records unveiled the LP at a press conference in the Waldorf-Astoria on 21 June 1948, in two formats: 10 inches (25 centimetres) in diameter, matching that of 78 rpm singles, and 12 inches (30 centimetres) in diameter.
Unwilling to accept and license Columbia's system, in February 1949, RCA Victor released the first 45 rpm single, 7 inches in diameter with a large center hole. The 45 rpm player included a changing mechanism that allowed multiple discs to be stacked, much as a conventional changer handled 78s. Also like 78s, the short playing time of a single 45 rpm side meant that long works, such as symphonies and operas, had to be released on multiple 45s instead of a single LP, but RCA Victor claimed that the new high-speed changer rendered side breaks so brief as to be inconsequential. Early 45 rpm records were made from either vinyl or polystyrene and had a playing time of eight minutes.
At first the two systems were marketed in competition, in what was called "The War of the Speeds".
=== Speeds ===
==== Shellac era ====
The older 78 rpm format continued to be mass-produced alongside the newer formats using new materials in decreasing numbers until the summer of 1958 in the U.S., and in a few countries, such as the Philippines and India (both countries issued recordings by the Beatles on 78s), into the late 1960s. For example, Columbia Records' last reissue of Frank Sinatra songs on 78 rpm records was an album called Young at Heart, issued in November 1954.
==== Microgroove and vinyl era ====
Columbia and RCA Victor each pursued their R&D secretly.
The commercial rivalry between RCA Victor and Columbia Records led to RCA Victor's introduction of what it had intended to be a competing vinyl format, the 7-inch (175 mm) 45 rpm disc, with a much larger center hole. For a two-year period from 1948 to 1950, record companies and consumers faced uncertainty over which of these formats would ultimately prevail in what was known as the "War of the Speeds" (see also Format war). In 1949 Capitol and Decca adopted the new LP format and RCA Victor gave in and issued its first LP in January 1950. The 45 rpm size was gaining in popularity, too, and Columbia issued its first 45s in February 1951. By 1954, 200 million 45s had been sold.
Eventually the 12-inch (300 mm) 33+1⁄3 rpm LP prevailed as the dominant format for musical albums, and 10-inch LPs were no longer issued. The last Columbia Records reissue of any Frank Sinatra songs on a 10-inch LP record was an album called Hall of Fame, CL 2600, issued on 26 October 1956, containing six songs, one each by Tony Bennett, Rosemary Clooney, Johnnie Ray, Frank Sinatra, Doris Day, and Frankie Laine.
The 45 rpm discs also came in a variety known as extended play (EP), which achieved up to 10–15 minutes play at the expense of attenuating (and possibly compressing) the sound to reduce the width required by the groove. EP discs were cheaper to produce and were used in cases where unit sales were likely to be more limited or to reissue LP albums on the smaller format for those people who had only 45 rpm players. LP albums could be purchased one EP at a time, with four items per EP, or in a boxed set with three EPs or twelve items. The large center hole on 45s allows easier handling by jukebox mechanisms. EPs were generally discontinued by the late 1950s in the U.S. as three- and four-speed record players replaced the individual 45 players. One indication of the decline of the 45 rpm EP is that the last Columbia Records reissue of Frank Sinatra songs on 45 rpm EP records, called Frank Sinatra (Columbia B-2641) was issued on 7 December 1959.
The Seeburg Corporation introduced the Seeburg Background Music System in 1959, using a 16+2⁄3 rpm 9-inch record with 2-inch center hole. Each record held 40 minutes of music per side, recorded at 420 grooves per inch.
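Those figures are mutually consistent: at 16+2⁄3 rpm a 40-minute side makes 666+2⁄3 revolutions, which at 420 grooves per inch occupy under two inches of the disc's radius, comfortably within a 9-inch disc with a 2-inch center hole:

$$40\ \text{min} \times 16\tfrac{2}{3}\ \tfrac{\text{rev}}{\text{min}} = 666\tfrac{2}{3}\ \text{rev}, \qquad \frac{666\tfrac{2}{3}\ \text{rev}}{420\ \text{rev/inch}} \approx 1.59\ \text{inches}.$$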
From the mid-1950s through the 1960s, the common U.S. home record player or "stereo" (after the introduction of stereo recording) would typically have had these features: a three- or four-speed player (78, 45, 33+1⁄3, and sometimes 16+2⁄3 rpm); a changer with a tall spindle that would hold several records and automatically drop a new record on top of the previous one when it had finished playing; a combination cartridge with both 78 and microgroove styli and a way to flip between the two; and some kind of adapter for playing the 45s with their larger center hole. The adapter could be a small solid circle that fit onto the bottom of the spindle (meaning only one 45 could be played at a time) or a larger adapter that fit over the entire spindle, permitting a stack of 45s to be played.
RCA Victor 45s were also adapted to the smaller spindle of an LP player with a plastic snap-in insert known as a "45 rpm adapter". These inserts were commissioned by RCA president David Sarnoff and were invented by Thomas Hutchison.
Capacitance Electronic Discs were videodiscs invented by RCA, based on mechanically tracked ultra-microgrooves (9541 grooves/inch) on a 12-inch conductive vinyl disc.
=== High fidelity ===
The term "high fidelity" was coined in the 1920s by some manufacturers of radio receivers and phonographs to differentiate their better-sounding products, which they claimed provided "perfect" sound reproduction. The term began to be used by some audio engineers and consumers through the 1930s and 1940s. After 1949 a variety of improvements in recording and playback technologies, especially stereo recordings, which became widely available in 1958, gave a boost to the "hi-fi" classification of products, leading to sales of individual components for the home such as amplifiers, loudspeakers, phonographs, and tape players. High Fidelity and Audio were two magazines that hi-fi consumers and engineers could read for reviews of playback equipment and recordings.
=== Stereophonic sound ===
A stereophonic phonograph provides two channels of audio, one left and one right, achieved by adding a vertical dimension of needle movement to the traditional horizontal one. Because the two directions do not have the same sensitivity to vibration, the signal is not cut as one channel per direction; instead, the cutting axes are turned 45 degrees from horizontal, so that each channel takes half its information from each direction.
As a result of the 45-degree turn and some vector addition, of the new horizontal and vertical directions, one represents the sum of the two channels and the other the difference. Record makers chose the directions so that the traditional horizontal motion encodes the sum. An ordinary mono disc is therefore decoded correctly as "no difference between channels", and an ordinary mono player simply plays the sum of a stereophonic record without too much loss of information.
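A minimal numerical sketch of that sum-and-difference (45/45) encoding follows. The 1/√2 scaling is one common normalization choice assumed here, not something specified above.

```python
import numpy as np

def encode_45_45(left, right):
    # Lateral (horizontal) motion carries the sum, vertical motion the
    # difference; a mono pickup reads only the lateral component, i.e. L + R.
    lateral = (left + right) / np.sqrt(2)
    vertical = (left - right) / np.sqrt(2)
    return lateral, vertical

def decode_45_45(lateral, vertical):
    left = (lateral + vertical) / np.sqrt(2)
    right = (lateral - vertical) / np.sqrt(2)
    return left, right

# Round trip: a stereo sample pair survives encoding and decoding.
l, r = decode_45_45(*encode_45_45(np.array([0.5]), np.array([-0.25])))
print(l, r)  # ≈ [0.5] [-0.25]
```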
In 1957 the first commercial stereo two-channel records were issued, first by Audio Fidelity, followed by translucent blue vinyl pressings on Bel Canto Records, the first of which was a multi-colored-vinyl sampler featuring A Stereo Tour of Los Angeles narrated by Jack Wagner on one side and a collection of tracks from various Bel Canto albums on the other.
=== Noise reduction systems ===
One scheme aimed at the high-end audiophile market, achieving a noise reduction of about 20 to 25 dB(A), was the Telefunken/Nakamichi High-Com II noise reduction system, adapted to vinyl in 1979. A decoder was commercially available, but only one demo record is known to have been produced in this format.
The availability of encoded disks in any of these formats stopped in the mid-1980s.
Yet another noise reduction system for vinyl records was the UC compander system developed by Zentrum Wissenschaft und Technik (ZWT) of Kombinat Rundfunk und Fernsehen (RFT). The system deliberately limited its noise reduction to 10 to 12 dB(A) so that playback remained virtually free of recognizable acoustic artifacts even when records were played without a UC expander. Although the system was never publicly documented, several East German record labels introduced it to the market from 1983, and over 500 UC-encoded titles were produced without an expander ever becoming available to the public. The only UC expander was built into a turntable manufactured by Phonotechnik Pirna/Zittau.
== Formats ==
=== Types of records ===
The usual diameter of the center hole on an EP record is 0.286 inches (7.26 mm).
Sizes of records in the United States and the UK are generally measured in inches, e.g. 7-inch records, which are generally 45 rpm records. LPs were 10-inch records at first, but soon the 12-inch size became by far the most common. Generally, 78s were 10-inch, but 12-inch and 7-inch and even smaller were made—the so-called "little wonders".
=== Standard formats ===
Notes:
=== Less common formats ===
Flexi discs were thin flexible records that were distributed with magazines and as promotional gifts from the 1960s to the 1980s.
In March 1949, as RCA Victor released the 45, Columbia released several hundred 7-inch, 33+1⁄3 rpm, small-spindle-hole singles. This format was soon dropped as it became clear that the RCA Victor 45 was the single of choice and the Columbia 12-inch LP would be the album of choice.
The first release of the 45 came in seven colors: black 47-xxxx popular series, yellow 47-xxxx juvenile series, green (teal) 48-xxxx country series, deep red 49-xxxx classical series, bright red (cerise) 50-xxxx blues/spiritual series, light blue 51-xxxx international series, dark blue 52-xxxx light classics. Most colors were soon dropped in favor of black because of production problems. However, yellow and deep red were continued until about 1952.
The first 45 rpm record created for sale was "PeeWee the Piccolo" RCA Victor 47-0147 pressed in yellow translucent vinyl at the Sherman Avenue plant, Indianapolis on 7 December 1948, by R. O. Price, plant manager.
In the 1950s and 1960s, "ribs" were created in the Soviet Union as a result of cultural censorship. These black market records of banned music were cut onto X-ray film scavenged from hospital bins.
In the 1970s, the government of Bhutan produced now-collectible postage stamps on playable vinyl mini-discs.
=== Recent developments ===
In 2018, an Austrian startup, Rebeat Innovation GmbH, received US$4.8 million in funding to develop "high definition" vinyl records that purport to offer longer playing times, louder volumes, and higher fidelity than conventional vinyl LPs. Rebeat Innovation, headed by CEO Günter Loibl, has called the format "HD Vinyl". The HD process works by converting audio to a digital 3D topographic map that is then inscribed onto the vinyl stamper by laser, resulting in less loss of information. Many critics have expressed skepticism regarding the cost and quality of HD records.
In May 2019, at the Making Vinyl conference in Berlin, Loibl unveiled the software "Perfect Groove" for creating 3D topographic audio data files. The software provides a map for laser-engraving for HD Vinyl stampers. The audio engineering software was created with mastering engineers Scott Hull and Darcy Proper, a four-time Grammy winner. The demonstration offered the first simulations of what HD Vinyl records are likely to sound like, ahead of actual HD vinyl physical record production. Loibl discussed the software "Perfect Groove" at a presentation titled "Vinyl 4.0 The next generation of making records" before offering demonstrations to attendees.
== Structure ==
Increasingly from the early 20th century, and almost exclusively since the 1920s, both sides of the record have been used to carry the grooves. Occasional records have been issued since then with a recording on only one side. In the 1980s Columbia Records briefly issued a series of less expensive one-sided 45 rpm singles.
Since the format's inception in 1948, vinyl record standards for the United States have followed the guidelines of the Recording Industry Association of America (RIAA).
=== Vinyl quality ===
The composition of vinyl used to press records (a blend of polyvinyl chloride and polyvinyl acetate) has varied considerably over the years. Virgin vinyl is preferred, but during the 1970s energy crisis, as a cost-cutting move, much of the industry began reducing the thickness and quality of vinyl used in mass-market manufacturing. Sound quality suffered, with increased ticks, pops, and other surface noises. RCA Records marketed their lightweight LP as Dynaflex, which, at the time, was considered inferior by many record collectors.
It became commonplace to use recycled vinyl. New or "virgin" heavy/heavyweight (180–220 g) vinyl is commonly used for modern audiophile vinyl releases in all genres. Many collectors prefer to have heavyweight vinyl albums, which have been reported to have better sound than normal vinyl because of their higher tolerance against deformation caused by normal play.
Following the vinyl revival of the 21st century, select manufacturers adopted bioplastic-based records due to concerns over the environmental impact of widespread PVC use.
== Limitations ==
=== Shellac ===
One problem with shellac was that discs tended to be larger, because the material was limited to 80–100 groove walls per inch before the risk of groove collapse became too high, whereas vinyl could have up to 260 groove walls per inch.
=== Vinyl ===
Although vinyl records are strong and do not break easily, they scratch due to vinyl's soft material properties, sometimes ruining the record. Vinyl readily acquires a static charge, attracting dust that is difficult to remove completely. Dust and scratches cause audio clicks and pops. In extreme cases, they can cause the needle to skip over a series of grooves, or worse, skip backward, creating a "locked groove" that repeats over and over. This is the origin of the phrase "like a broken record" or "like a scratched record", which is often used to describe a person or thing that continually repeats itself.
A further limitation of the gramophone record is that fidelity steadily declines as playback progresses: there is more vinyl per second available for fine reproduction of high frequencies at the large-diameter beginning of the groove than exists at the smaller diameters close to the end of the side. At the start of a groove on an LP, 510 mm of vinyl per second travel past the stylus, while at the end of the groove the figure is 200–210 mm per second, less than half the linear resolution.
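The quoted speeds follow directly from circumference multiplied by rotational rate. A short sketch, with the groove radii assumed as typical values for a 12-inch LP rather than taken from the text above:

```python
import math

def groove_speed_mm_per_s(radius_mm, rpm):
    # Linear velocity of vinyl passing the stylus: circumference x rev/s.
    return 2 * math.pi * radius_mm * rpm / 60

RPM = 100 / 3  # 33 1/3 rpm
# Assumed radii: ~146 mm at the outermost groove, ~60 mm near the lead-out.
print(round(groove_speed_mm_per_s(146, RPM)))  # ~510 mm/s
print(round(groove_speed_mm_per_s(60, RPM)))   # ~209 mm/s
```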
There is controversy about the relative quality of CD sound and LP sound when the latter is heard under the best conditions (see Comparison of analog and digital recording). One technical advantage vinyl holds over the optical CD is that, correctly handled and stored, a vinyl record can remain playable for decades and possibly centuries, longer than some versions of the optical CD. Proper storage means not stacking records on top of each other, avoiding heat and direct sunlight, and keeping them in a temperature-controlled area, all of which helps prevent warping and scratching. Collectors store their records in a variety of boxes, cubes, shelves and racks.
=== Sound fidelity ===
At the time of the introduction of the compact disc (CD) in 1982, the stereo LP pressed in vinyl continued to suffer from a variety of limitations:
The stereo image was not made up of fully discrete left and right channels; each channel's signal coming out of the cartridge contained a small amount of the signal from the other channel, with more crosstalk at higher frequencies. High-quality disc-cutting equipment was capable of making a master disc with 30–40 dB of stereo separation at 1,000 Hz, but playback cartridges performed worse, with about 20 to 30 dB of separation at 1,000 Hz, and separation decreasing as frequency increased, such that at 12 kHz it was about 10–15 dB. A common modern view is that stereo isolation must be higher than this to achieve a proper stereo soundstage. However, in the 1950s the BBC determined in a series of tests that only 20–25 dB is required for the impression of full stereo separation.
Thin, closely spaced spiral grooves that allow for increased playing time on a 33+1⁄3 rpm microgroove LP lead to a tinny pre-echo warning of upcoming loud sounds. The cutting stylus unavoidably transfers some of the subsequent groove wall's impulse signal into the previous groove wall. It is discernible by some listeners throughout certain recordings, but a quiet passage followed by a loud sound allows anyone to hear a faint pre-echo of the loud sound occurring 1.8 seconds ahead of time.
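The 1.8-second figure is simply the duration of one revolution at LP speed, since the louder groove that leaks into its neighbour lies one turn ahead of the stylus:

$$\frac{60\ \text{s/min}}{33\tfrac{1}{3}\ \text{rev/min}} = 1.8\ \text{s per revolution}.$$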
=== LP versus CD ===
Audiophiles have differed over the relative merits of the LP versus the CD since the digital disc was introduced. Digital sampling can theoretically completely reproduce a sound wave within a given range of frequencies if the sampling rate is high enough. Vinyl's drawbacks, however, include surface noise, less resolution due to a lower dynamic range, and greater sensitivity to handling. Modern anti-aliasing filters and oversampling systems used in digital recordings have eliminated perceived problems observed with early CD players.
There is a theory that vinyl records can audibly represent higher frequencies than compact discs, though most of this content is noise and not relevant to human hearing. According to Red Book specifications, the compact disc has a frequency response of 20 Hz up to 22,050 Hz, and most CD players measure flat within a fraction of a decibel from at least 0 Hz to 20 kHz at full output. Due to the spacing required between grooves, an LP cannot reproduce frequencies as low as a CD can. Turntable rumble and acoustic feedback obscure vinyl's low-end limit, but the upper end can be, with some cartridges, reasonably flat within a few decibels to 30 kHz, with gentle roll-off. The carrier signals of the Quad LPs popular in the 1970s were placed at 30 kHz to be out of the range of human hearing. The average human auditory system is sensitive to frequencies from 20 Hz to a maximum of around 20,000 Hz; the upper and lower limits vary per person, and high-frequency sensitivity decreases as a person ages, a process called presbycusis.
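The 22,050 Hz ceiling is the Nyquist limit of the CD's 44.1 kHz sampling rate: a sampled system can represent frequencies only up to half its sampling rate.

$$f_{\max} = \frac{f_s}{2} = \frac{44{,}100\ \text{Hz}}{2} = 22{,}050\ \text{Hz}.$$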
== Preservation ==
As the playing of gramophone records causes gradual degradation of the recording, they are best preserved by transferring them onto other media and playing the records as rarely as possible. They need to be stored on edge, and do best under environmental conditions that most humans would find comfortable. The longevity and optimal performance of vinyl records can be improved through certain accessories and cleaning supplies. Slipmats provide a soft and cushioned surface between the record and the turntable platter, minimizing friction and preventing potential scratches or damage to the vinyl surface.
Where old disc recordings are considered to be of artistic or historic interest, from before the era of tape or where no tape master exists, archivists play back the disc on suitable equipment and record the result, typically onto a digital format, which can be copied and manipulated to remove analog flaws without any further damage to the source recording. For example, Nimbus Records uses a specially built horn record player to transfer 78s. Anyone can do this using a standard record player with a suitable pickup, a phono-preamp (pre-amplifier) and a typical personal computer. However, for accurate transfer, professional archivists carefully choose the correct stylus shape and diameter, tracking weight, equalisation curve and other playback parameters and use high-quality analogue-to-digital converters.
As an alternative to playback with a stylus, a recording can be read optically, processed with software that calculates the velocity at which the stylus would move through the mapped grooves, and converted to a digital recording format. This does no further damage to the disc and generally produces a better sound than normal playback. The technique also has the potential to allow for reconstruction of broken or otherwise damaged discs.
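A compressed sketch of the velocity calculation such software performs, assuming the groove has already been optically mapped to a displacement signal; real systems would also apply RIAA de-emphasis and noise filtering, which are omitted here.

```python
import numpy as np

def groove_to_audio(displacement_mm, sample_rate_hz):
    # Magnetic cartridges are velocity-sensitive, so the audio signal is
    # approximated by differentiating the mapped groove displacement.
    velocity = np.gradient(displacement_mm) * sample_rate_hz
    return velocity / np.max(np.abs(velocity))  # normalize to full scale
```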
== Popularity and current status ==
Groove recordings, first designed in the final quarter of the 19th century, held a predominant position for nearly a century, withstanding competition from reel-to-reel tape, the 8-track cartridge, and the compact cassette. The widespread popularity of Sony's Walkman was among the factors that contributed to vinyl's declining use in the 1980s.
In 1988, the compact disc surpassed the gramophone record in unit sales. Vinyl records experienced a sudden decline in popularity between 1988 and 1991, when the major label distributors restricted their return policies, which retailers had been relying on to maintain and swap out stocks of relatively unpopular titles. First the distributors began charging retailers more for new products if they returned unsold vinyl, and then they stopped providing any credit at all for returns. Retailers, fearing they would be stuck with anything they ordered, only ordered proven, popular titles that they knew would sell, and devoted more shelf space to CDs and cassettes. Record companies also removed many vinyl titles from production and distribution, further undermining the availability of the format and leading to the closure of pressing plants. This rapid decline in the availability of records accelerated the format's fall in popularity, and is seen by some as a deliberate ploy to make consumers switch to CDs, which, unlike today, were then more profitable for the record companies.
The more modern CD format held numerous advantages over the record, such as portability, digital audio, the elimination of background hiss and surface noise, instant switching and searching of tracks, longer playing time, lack of continuous degradation (most analog formats wear out as they are played), programmability (e.g. shuffle, repeat), and the ability to be played on and copied to a personal computer. In spite of their flaws, records continued to have enthusiastic supporters, partly due to a preference for their "warmer" sound and larger sleeve artwork. Records also remained the format of choice for disc jockeys in dance clubs during the 1990s and 2000s because of their better mixing capabilities.
=== Revival era ===
A niche resurgence of vinyl records began in the late 2000s, mainly among rock fans. The Entertainment Retailers Association in the United Kingdom found in 2011 that consumers were willing to pay on average £16.30 (€19.37, US$25.81) for a single vinyl record, as opposed to £7.82 (€9.30, US$12.38) for a CD and £6.80 (€8.09, US$10.76) for a digital download. The resurgence accelerated throughout the 2010s, and in 2015 US vinyl revenues reached $416 million, their highest level since 1988. As of 2017, vinyl comprised 14% of all physical album sales. According to the RIAA's midyear report in 2020, phonograph record revenues surpassed those of CDs for the first time since the 1980s.
In 2021, Taylor Swift sold 102,000 vinyl copies of her ninth studio album, Evermore, in one week, the largest one-week vinyl sales total since Nielsen started tracking vinyl sales in 1991. The record was previously held by Jack White, who sold 40,000 vinyl copies of his second solo release, Lazaretto, in its first week of release in 2014.
As of 2021, approximately 180 million LP records are produced annually at pressing plants worldwide.
=== Present production ===
As of 2017, 48 record pressing facilities exist worldwide. The increased popularity of the record has led to the investment in new and modern record-pressing machines. Only two producers of lacquer master discs remain: Apollo Masters in California, and MDC in Japan. On 6 February 2020, a fire destroyed the Apollo Masters plant. According to the Apollo Masters website, their future is still uncertain. Hand Drawn Pressing opened in 2016 as the world's first fully automated record pressing plant.
== Less common recording formats ==
=== VinylVideo ===
VinylVideo is a format for storing low-resolution black-and-white video on a vinyl record alongside encoded audio.
=== Capacitance Electronic Disc ===
Another example is the Capacitance Electronic Disc, a color video format with picture quality slightly better than VHS.
== See also ==
Album cover
Apollo Masters Corporation fire
Capacitance Electronic Disc
Conservation and restoration of vinyl discs
Electrical transcription
LP record
The New Face of Vinyl: Youth's Digital Devolution (photo documentary)
Phonograph cylinder
Pocket Disc
Record Store Day
Sound recording and reproduction
Unusual types of gramophone records
== References ==
== Further reading ==
== External links ==
How do vinyl records work, at the vinyl-place website
Playback equalization for 78 rpm shellacs and early LPs (EQ curves, index of record labels): Audacity Wiki
The manufacturing and production of shellac records. Educational video, 1942.
Reproduction of 78 rpm records including equalization data for different makes of 78s and LPs.
The Secret Society of Lathe Trolls, a site devoted to all aspects of the making of Gramophone records.
How to digitize gramophone records: Audacity Tutorial
Current list of vinyl pressing plants: vinyl-pressing-plants.com
Dedicated museum for sound history: Musée des ondes Emile Berliner, Montreal, Canada
Smart Vinyl: The First Computerized
Semi-automatic metadata extraction from shellac and vinyl discs
Vinyl Player 2.0 | Wikipedia/Phonograph_records |
The Country Network is an American cable, streaming and broadcast television network that specializes in broadcasting country music videos and exclusive original music-based content; its playlist of videos extends from the 1990s through the present day. The network also airs occasional infomercials and traditional advertising.
The network is headquartered in Haltom City, Texas, with offices in Nashville, Tennessee, and New York.
== History ==
The network first launched on January 7, 2009, as the Artists & Fans Network; the music video that inaugurated the network was the Kid Rock video "All Summer Long". AFN was first carried on satellite through DirecTV on channel 236.
In August 2009, after suffering from financial problems, Southern Venture Capital Group sold all the assets of the company to one of the founders, Warren Hansen, who then changed its name to the American Music Video Network, and rolled out the programming with a new look and feel. On February 15, 2010, the company was renamed The Country Network to represent its focus on country music. Around this time, The Country Network began to transition into a digital multicast network, carried over-the-air on broadcast television stations across the United States as well as the first broadcast network to simulcast to Roku, iPhone, iPad, web, and other OTT outlets.
On May 20, 2013, Zuus Media announced its acquisition of The Country Network. On June 1, 2013, Zuus Media announced the rebranding as Zuus Country. Zuus Country was to be the first of several music video networks of various formats. Only one of these other formats, Zuus Latino, ever made it to air.
In January 2016, the network was purchased by a Texas-based company, TCN Country LLC, with a 43,000-square-foot studio, production and broadcast facility. TCN Country changed the brand back to The Country Network, reviving its original name and logo for the revival of the network.
In 2021, The Country Network, after having previously placed its online Web stream behind a paywall, launched TCN FAST (Free Advertising Supported Television), a free online feed of the channel that is distributed through advertiser-supported over-the-top streaming services.
== Affiliates ==
As of 2013, Zuus Country had television stations in over 41 television markets in 26 states, covering approximately 34 million over-the-air households and 18 million cable subscribers. Zuus Country (at the time still named The Country Network) signed a deal with Sinclair Broadcast Group in August 2010 to be carried on digital subchannels of Sinclair stations in most of its media markets; the network began airing on Sinclair owned and/or operated stations on October 10, 2010. After Sinclair's original drop of several affiliates in late 2015, the network was down to 24 markets (the contract with Sinclair expired in June 2017). When TCN Country LLC purchased the network, it immediately started growing the distribution, and as of January 30, 2017, the network was up to 54 markets, along with the launch of a Roku channel and a slot on smart TVs manufactured by Hitachi and Panasonic.
As of 2019, most of The Country Network's affiliates are low-powered stations controlled by HC2 Holdings or its subsidiary DTV America.
=== Current affiliates ===
== References ==
== External links ==
Official website | Wikipedia/The_Country_Network |
Sound design is the art and practice of creating auditory elements of media. It involves specifying, acquiring and creating audio using production techniques and equipment or software. It is employed in a variety of disciplines including filmmaking, television production, video game development, theatre, sound recording and reproduction, live performance, sound art, post-production, radio, new media and musical instrument development. Sound design commonly involves performing (see e.g. Foley) and editing of previously composed or recorded audio, such as sound effects and dialogue for the purposes of the medium, but it can also involve creating sounds from scratch through synthesizers. A sound designer is one who practices sound design.
== History ==
The use of sound to evoke emotion, reflect mood and underscore actions in plays and dances began in prehistoric times when it was used in religious practices for healing or recreation. In ancient Japan, theatrical events called kagura were performed in Shinto shrines with music and dance.
Plays were performed in medieval times in a form of theatre called Commedia dell'arte, which used music and sound effects to enhance performances. The use of music and sound in the Elizabethan Theatre followed, in which music and sound effects were produced off-stage using devices such as bells, whistles, and horns. Cues would be written in the script for music and sound effects to be played at the appropriate time.
Italian composer Luigi Russolo built mechanical sound-making devices, called "intonarumori", for futurist theatrical and music performances starting around 1913. These devices were meant to simulate natural and man-made sounds, such as trains or bombs. Russolo's treatise, The Art of Noises, is one of the earliest written documents on the use of abstract noise in the theatre. After his death, his intonarumori were used in more conventional theatre performances to create realistic sound effects.
=== Recorded sound ===
Possibly the first use of recorded sound in the theatre was a phonograph playing a baby's cry in a London theatre in 1890. Sixteen years later, Herbert Beerbohm Tree used recordings in his London production of Stephen Phillips' tragedy Nero. The event is marked in the Theatre Magazine (1906) with two photographs: one showing a musician blowing a bugle into a large horn attached to a disc recorder, the other an actor recording the agonizing shrieks and groans of the tortured martyrs. The article states: "these sounds are all realistically reproduced by the gramophone". As cited by Bertolt Brecht, a play about Rasputin written in 1927 by Alexej Tolstoi and directed by Erwin Piscator included a recording of Lenin's voice. Whilst the term "sound designer" was not yet in use, some stage managers specialised as "effects men", creating and performing offstage sound effects using a mix of vocal mimicry, mechanical and electrical contraptions, and gramophone records. A great deal of care and attention was paid to the construction and performance of these effects, both naturalistic and abstract. Over the twentieth century recorded sound effects began to replace live sound effects, though often it was the stage manager's duty to find the sound effects, and an electrician played the recordings during performances.
Between 1980 and 1988, Charlie Richmond, USITT's first Sound Design Commissioner, oversaw efforts of their Sound Design Commission to define the duties, responsibilities, standards and procedures expected of a theatre sound designer in North America. He summarized his conclusions in a document which, although somewhat dated, provides a succinct record of what was then expected. It was subsequently provided to the ADC and David Goodman at the Florida USA local when they both planned to represent sound designers in the 1990s.
=== Digital technology ===
MIDI and digital audio technology have contributed to the evolution of sound production techniques in the 1980s and 1990s. Digital audio workstations (DAW) and a variety of digital signal processing algorithms applied in them allow more complicated soundtracks with more tracks and auditory effects to be realized. Features such as unlimited undo and sample-level editing allows fine control over the soundtracks.
In theatre sound, features of computerized theatre sound design systems have also been recognized as being essential for live show control systems at Walt Disney World and, as a result, Disney utilized systems of that type to control many facilities at their Disney-MGM Studios theme park, which opened in 1989. These features were incorporated into the MIDI Show Control (MSC) specification, an open communications protocol for interacting with diverse devices. The first show to fully utilize the MSC specification was the Magic Kingdom Parade at Walt Disney World's Magic Kingdom in September 1991.
The rise of interest in game audio has also brought more advanced interactive audio tools that are accessible without a background in computer programming. Some of these tools (termed "implementation tools" or "audio engines") feature a workflow similar to that of conventional DAW programs and allow sound production personnel to undertake some of the more creative interactive sound tasks (considered part of sound design for computer applications) that previously would have required a computer programmer. Interactive applications have also given rise to many techniques in "dynamic audio", which loosely means sound that is "parametrically" adjusted during the program's run time. This allows for broader expression in sounds, more similar to that in films, because the sound designer can, for example, create footstep sounds that vary in a believable and non-repeating way while also corresponding to what is seen in the picture. The digital audio workstation cannot directly "communicate" with game engines, because a game's events often occur in an unpredictable order, whereas traditional digital audio workstations and so-called linear media (TV, film, etc.) have everything occur in the same order every time the production is run. In particular, games have also brought in dynamic or adaptive mixing.
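A minimal sketch of the footstep example above: each step draws a random sample variation plus small pitch and volume changes, so repetition is avoided and the sound tracks what is on screen. The sample names and the engine's play() call are hypothetical, standing in for whatever audio engine or middleware is in use.

```python
import random

FOOTSTEP_SAMPLES = {
    "grass": ["step_grass_01.wav", "step_grass_02.wav", "step_grass_03.wav"],
    "stone": ["step_stone_01.wav", "step_stone_02.wav", "step_stone_03.wav"],
}

def play_footstep(engine, surface, speed):
    # Pick a random variation, then nudge pitch and volume per step so
    # no two footsteps sound identical ("dynamic" or parametric audio).
    sample = random.choice(FOOTSTEP_SAMPLES[surface])
    pitch = random.uniform(0.95, 1.05)
    volume = min(1.0, 0.5 + 0.5 * speed)  # running steps play louder
    engine.play(sample, pitch=pitch, volume=volume)  # hypothetical API
```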
The World Wide Web has greatly enhanced the ability of sound designers to acquire source material quickly, easily and cheaply. Nowadays, a designer can preview and download crisper, more "believable" sounds as opposed to toiling through time- and budget-draining "shot-in-the-dark" searches through record stores, libraries and "the grapevine" for (often) inferior recordings. In addition, software innovation has enabled sound designers to take more of a DIY (or "do-it-yourself") approach. From the comfort of their home and at any hour, they can simply use a computer, speakers and headphones rather than renting (or buying) costly equipment or studio space and time for editing and mixing. This provides for faster creation and negotiation with the director.
== Applications ==
=== Film ===
In motion picture production, a Sound Editor/Designer is a member of a film crew responsible for the entirety or some specific parts of a film's soundtrack. In the American film industry, the title Sound Designer is not controlled by any professional organization, unlike titles such as Director or Screenwriter.
The terms sound design and sound designer began to be used in the motion picture industry in 1969, when the title of sound designer was first granted to Walter Murch by Francis Ford Coppola in recognition of Murch's contributions to the film The Rain People. The original meaning of the title, as established by Coppola and Murch, was "an individual ultimately responsible for all aspects of a film's audio track, from the dialogue and sound effects recording to the re-recording (mix) of the final track". The term sound designer has replaced monikers like supervising sound editor or re-recording mixer for the same position: the head designer of the final sound track. Editors and mixers like Murray Spivack (King Kong), George Groves (The Jazz Singer), James G. Stewart (Citizen Kane), and Carl Faulkner (Journey to the Center of the Earth) served in this capacity during Hollywood's studio era, and are generally considered to be sound designers by a different name.
The advantage of calling oneself a sound designer beginning in later decades was two-fold. It strategically allowed for a single person to work as both an editor and mixer on a film without running into issues pertaining to the jurisdictions of editors and mixers, as outlined by their respective unions. Additionally, it was a rhetorical move that legitimised the field of post-production sound at a time when studios were downsizing their sound departments, and when producers were routinely skimping on budgets and salaries for sound editors and mixers. In so doing, it allowed those who called themselves sound designers to compete for contract work and to negotiate higher salaries. The position of Sound Designer therefore emerged in a manner similar to that of Production Designer, which was created in the 1930s when William Cameron Menzies made revolutionary contributions to the craft of art direction in the making of Gone with the Wind.
The audio production team is a principal member of the production staff, with creative output comparable to that of the film editor and director of photography. Several factors have led to the promotion of audio production to this level, when previously it was considered subordinate to other parts of film:
Cinema sound systems became capable of high-fidelity reproduction, particularly after the adoption of Dolby Stereo. Before stereo soundtracks, film sound was of such low fidelity that only the dialogue and occasional sound effects were practical. These sound systems were originally devised as gimmicks to increase theater attendance, but their widespread implementation created a content vacuum that had to be filled by competent professionals. Dolby's immersive Dolby Atmos format, introduced in 2012, provides the sound team with 128 tracks of audio that can be assigned to a 7.1.2 bed that utilizes two overhead channels, leaving 118 tracks for audio objects that can be positioned around the theater independent of the sound bed. Object positions are informed by metadata that places them based on x,y,z coordinates and the number of speakers available in the room. This immersive sound format expands creative opportunities for the use of sound beyond what was achievable with older 5.1 and 7.1 surround sound systems. The greater dynamic range of the new systems, coupled with the ability to produce sounds at the sides, behind, or above the audience, provided the audio post-production team new opportunities for creative expression in film sound.
Some directors were interested in realizing the new potential of the medium. A new generation of filmmakers, the so-called "Easy Riders and Raging Bulls"—Martin Scorsese, Steven Spielberg, George Lucas, and others—were aware of the creative potential of sound and wanted to use it.
Filmmakers were inspired by the popular music of the era. Concept albums of groups such as Pink Floyd and The Beatles suggested new modes of storytelling and creative techniques that could be adapted to motion pictures.
New filmmakers made their early films outside the Hollywood establishment, away from the influence of film labor unions and the then rapidly dissipating studio system.
The contemporary title of sound designer can be compared with the more traditional title of supervising sound editor; many sound designers use both titles interchangeably. The role of supervising sound editor, or sound supervisor, developed in parallel with the role of sound designer. The demand for more sophisticated soundtracks was felt both inside and outside Hollywood, and the supervising sound editor became the head of the large sound department, with a staff of dozens of sound editors, that was required to realize a complete sound job with a fast turnaround.
=== Theatre ===
Sound design, as a distinct discipline, is one of the youngest fields in stagecraft, second only to the use of projection and other multimedia displays, although the ideas and techniques of sound design have been around almost since theatre started. Dan Dugan, working with three stereo tape decks routed to ten loudspeaker zones during the 1968–69 season of American Conservatory Theater (ACT) in San Francisco, was the first person in the USA to be called a sound designer.
A theatre sound designer is responsible for everything the audience hears in the performance space, including music, sound effects, sonic textures, and soundscapes. These elements are created by the sound designer or sourced from other sound professionals, such as a composer in the case of music. Pre-recorded music must be licensed from a legal entity that represents the artist's work; this can be the artist themselves, a publisher, record label, performing rights organization or music licensing company. The theatre sound designer is also in charge of choosing and installing the sound system: speakers, sound desks, interfaces and convertors, playout/cueing software, microphones, radio mics, foldback, cables, computers, and outboard equipment like FX units and dynamics processors.
Modern audio technology has enabled theatre sound designers to produce flexible, complex, and inexpensive designs that can be easily integrated into live performance. The influence of film and television on playwriting is seeing plays being written increasingly with shorter scenes, which is difficult to achieve with scenery but easily conveyed with sound. The development of film sound design is giving writers and directors higher expectations and knowledge of sound design. Consequently, theatre sound design is widespread and accomplished sound designers commonly establish long-term collaborations with directors.
==== Musicals ====
Sound design for musicals often focuses on the design and implementation of a sound reinforcement system that will fulfill the needs of the production. If a sound system is already installed in the performance venue, it is the sound designer's job to tune the system for the best use for a particular production. Sound system tuning employs various methods including equalization, delay, volume, speaker and microphone placement, and in some cases, the addition of new equipment. In conjunction with the director and musical director, if any, the sound reinforcement designer determines the use and placement of microphones for actors and musicians. The sound reinforcement designer ensures that the performance can be heard and understood by everyone in the audience, regardless of the shape, size or acoustics of the venue, and that performers can hear everything needed to enable them to do their jobs. While sound design for a musical largely focuses on the artistic merits of sound reinforcement, many musicals, such as Into the Woods, also require significant sound scores (see Sound Design for Plays). Sound reinforcement design was recognized by the American Theatre Wing's Tony Awards with the Tony Award for Best Sound Design of a Musical until the 2014–15 season; the award was reinstated in the 2017–18 season.
==== Plays ====
Sound design for plays often involves the selection of music and sounds (the sound score) for a production based on intimate familiarity with the play, and the design, installation, calibration and utilization of the sound system that reproduces the sound score. The sound designer for a play and the production's director work together to decide the themes and emotions to be explored. Based on this, the sound designer, in collaboration with the director and possibly the composer, decides upon the sounds that will be used to create the desired moods. In some productions, the sound designer might also be hired to compose music for the play. The sound designer and the director usually work together to "spot" the cues in the play (i.e., decide when and where sound will be used). Some productions might use music only during scene changes, whilst others might use sound effects. Likewise, a scene might be underscored with music, sound effects or abstract sounds that exist somewhere between the two. Some sound designers are accomplished composers, writing and producing music for productions as well as designing sound. Many sound designs for plays also require significant sound reinforcement (see Sound Design for Musicals). Sound design for plays was recognized by the American Theatre Wing's Tony Awards with the Tony Award for Best Sound Design of a Play until the 2014–15 season; the award was reinstated in the 2017–18 season.
==== Professional organizations ====
Theatrical Sound Designers and Composers Association (TSDCA)
The Association for Sound Design and Production is a charity representing theatre sound designers and engineers in the UK.
United Scenic Artists (USA) Local USA 829, which is integrated within IATSE, represents theatrical sound designers in the United States.
Theatrical Sound Designers in English Canada are represented by the Associated Designers of Canada (ADC), and in Québec by l'Association des professionnels des arts du Québec (APASQ).
=== Music ===
In the contemporary music business, especially in the production of rock music, ambient music, progressive rock, and similar genres, the record producer and recording engineer play important roles in the creation of the overall sound (or soundscape) of a recording, and less often, of a live performance. A record producer is responsible for extracting the best performance possible from the musicians and for making both musical and technical decisions about the instrumental timbres, arrangements, etc. On some, particularly more electronic music projects, artists and producers in more conventional genres have sometimes sourced additional help from artists often credited as "sound designers", to contribute specific auditory effects, ambiences etc. to the production. These people are usually more versed in e.g. electronic music composition and synthesizers than the other musicians on board.
In the application of electroacoustic techniques (e.g. binaural sound) and sound synthesis for contemporary music or film music, a sound designer (often also an electronic musician) sometimes refers to an artist who works alongside a composer to realize the more electronic aspects of a musical production. This is because sometimes there exists a difference in interests between composers and electronic musicians or sound designers. The latter specialises in electronic music techniques, such as sequencing and synthesizers, but the former is more experienced in writing music in a variety of genres. Since electronic music itself is quite broad in techniques and often separate from techniques applied in other genres, this kind of collaboration can be seen as natural and beneficial.
Notable examples of (recognized) sound design in music are the contributions of Michael Brook to the U2 album The Joshua Tree, George Massenburg to the Jennifer Warnes album Famous Blue Raincoat, Chris Thomas to the Pink Floyd album The Dark Side of the Moon, and Brian Eno to the Paul Simon album Surprise.
In 1974, Suzanne Ciani started her own production company, Ciani/Musica, Inc., which became the #1 sound design music house in New York.
=== Fashion ===
In fashion shows, the sound designer often works with the artistic director to create an atmosphere fitting the theme of a collection, commercial campaign or event.
=== Computer applications and other applications ===
Sound is widely used in a variety of human–computer interfaces, as well as in computer and video games. Sound production for computer applications carries a few extra requirements, including re-usability, interactivity, and low memory and CPU usage; because most computational resources are usually devoted to graphics, audio production must account for these limits with techniques such as audio compression and voice-allocation systems.
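One such technique is voice allocation: capping the number of simultaneous sounds and "stealing" the least important active voice when the cap is reached. The sketch below uses an assumed priority scheme, not any particular engine's API.

```python
class VoiceAllocator:
    def __init__(self, max_voices=32):
        self.max_voices = max_voices
        self.active = []  # list of (priority, sound) pairs

    def request(self, sound, priority):
        # Enforce the CPU/memory budget by limiting concurrent voices.
        if len(self.active) >= self.max_voices:
            lowest = min(self.active, key=lambda v: v[0])
            if priority <= lowest[0]:
                return False          # refuse the less important new sound
            self.active.remove(lowest)  # steal the least important voice
        self.active.append((priority, sound))
        return True
```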
Sound design for video games requires proficient knowledge of audio recording and editing using a digital audio workstation, and an understanding of game audio integration using audio engine software, audio authoring tools, or middleware to integrate audio into the game engine. Audio middleware is a third-party toolset that sits between the game engine and the audio hardware.
Interactivity with computer sound can involve a variety of playback systems or logic, using tools that allow the production of interactive sound (e.g. Max/MSP, Wwise). Implementation might require software or electrical engineering of the systems that modify sound or process user input. In interactive applications, a sound designer often collaborates with an engineer (e.g. a sound programmer) who is concerned with designing the playback systems and their efficiency.
== Awards ==
Sound designers have been recognized by awards organizations for some time, and new awards have emerged more recently in response to advances in sound design technology and quality. The Motion Picture Sound Editors and the Academy of Motion Picture Arts and Sciences recognize the finest or most aesthetic sound design for a film with the Golden Reel Awards for Sound Editing in the film, broadcast, and game industries, and the Academy Award for Best Sound, respectively. In 2021, the 93rd Academy Awards merged Best Sound Editing and Best Sound Mixing into one general Best Sound category. In 2007, the Tony Award for Best Sound Design was created to honor the best sound design in American theatre on Broadway.
North American theatrical award organizations that recognize sound designers include these:
Dora Mavor Moore Awards
Drama Desk Awards
Helen Hayes Awards
Obie Awards
Joseph Jefferson Awards
Major British award organizations include the Olivier Awards. The Tony Awards retired the awards for Sound Design as of the 2014–2015 season, then reinstated the categories in the 2017–18 season.
== See also ==
Audio engineering
Berberian Sound Studio
Crash box
Director of audiography
List of sound designers
Musique concrète
IEZA Framework – a framework for conceptual game sound design
Video production – in connection with short music films
Sound logo
== References ==
== External links ==
FilmSound.org: A Learning Space dedicated to the Art of Sound Design
Kai's Theater Sound Hand Book
Association of Sound Designers
sounDesign: online publication about Sound Communication | Wikipedia/Sound_designer |
BPI (British Recorded Music Industry) Limited, trading as British Phonographic Industry (BPI), is the British recorded music industry's trade association. It runs the BRIT Awards; is home to the Mercury Prize; co-owns the Official Charts Company with the Entertainment Retailers Association; and awards UK music sales through the BRIT Certified Awards.
== Structure ==
Its membership comprises hundreds of music companies, including the three "major" labels (Sony Music UK, Universal Music UK and Warner Music UK) and over 500 independent record labels and small to medium-sized music businesses.
The BPI council is the management and policy forum of the BPI. It is chaired by the Chair of BPI, and includes the Chief Executive, Chief Operating Officer (COO), General Counsel, Chief Strategy Officer and 12 representatives from the recorded music sector: six from major labels – two each from the three "major" companies – and six from the independent sector, who are selected by voting of all BPI independent label members.
== History ==
BPI has represented the interests of British record companies since being formally incorporated in 1973, when the principal aim was to promote British music and fight copyright infringement.
In 2007, the association's legal name was changed from "British Phonographic Industry Limited (The)" to "BPI (British Recorded Music Industry) Limited".
In September 2008, the BPI became one of the founding members of UK Music, an umbrella organisation representing the interests of all parts of the industry.
In July 2022, YolanDa Brown was appointed chair of BPI, replacing Ged Doherty, who had served in that role for the previous seven years.
In July 2023, Jo Twist was appointed chief executive of BPI, replacing Geoff Taylor, who had served in the role since 2007.
=== Awards ===
BPI founded the annual BRIT Awards for the British music industry in 1977, and, later, the Classic BRIT Awards. The organising company, BRIT Awards Limited, is a fully owned subsidiary of the BPI. Proceeds from both shows go to the BRIT Trust, the charitable arm of the BPI that has distributed almost £30m to charitable causes nationwide since its foundation in 1989. In September 2013, the BPI presented the first ever BRITs Icon Award to Elton John. The BPI also endorsed the launch of the Mercury Prize for the Album of the Year in 1992, and since 2016 has organised the Prize.
The recorded music industry's Certified Awards programme, which attributes Platinum, Gold and Silver status to singles, albums and music videos (Platinum and Gold only) based on their sales performance (see BRIT Certified Awards), has been administered by the BPI since its inception in 1973.
== BRIT Trust ==
The BRIT Trust is the recognised charitable arm of the BPI. It was conceived in 1989 by BPI and a collection of music industry individuals. The BRIT Trust is the only music charity actively supporting all types of music education. Proceeds from the BRIT Awards and The Music Industry Trusts Award (MITS) go to the BRIT Trust, which has donated almost £30m to charitable causes nationwide since its foundation. As of 2024, beneficiaries include The BRIT School, Nordoff and Robbins, East London Arts and Music, Music Support, and Key 4 Life.
== BRIT School ==
Opened in September 1991, the BRIT School is a joint venture between The BRIT Trust and the Department for Education and Skills (DfES). Based at Selhurst in Croydon, the comprehensive school describes itself as the leading performing and creative arts school in the UK and is completely free to attend. It teaches over 1,400 students aged 14 to 19 each year in music, dance, drama, musical theatre, production, media and art and design. Students are from diverse backgrounds and are not required to stick to their own discipline; dancers can learn songwriting, and pianists can learn photography.
In August 2023, the Department for Education approved BPI’s plan to open a new specialist creative school in Bradford, West Yorkshire, inspired by the successful model of the BRIT School in Croydon.
== Certifications ==
The BPI administers the BRIT Certified Platinum, Gold and Silver awards scheme for music releases in the United Kingdom. The level of the award varies depending on the format of the release (albums, singles or music videos) and the level of sales achieved. Although the awards programme was for many years based on the level of shipments by record labels to retailers, since July 2013 certifications have been automatically allocated by the BPI upon the relevant sales thresholds being achieved in accordance with Official Charts Company data.
Since July 2014, streaming media has been included for singles and from June 2015 audio streams were added to album certifications. In July 2018 video streams were included in singles certifications for the first time. Streaming's contributions to chart-eligible sales totals for singles and albums are calculated using the methodology employed by the Official Charts Company for consumption at title level.
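As an illustration, certification is a simple threshold lookup over chart-eligible units. The sketch below is hypothetical code using the BPI's published UK thresholds for albums and singles (video thresholds differ and are omitted); units are assumed to already include streams as converted under the Official Charts Company methodology described above.

THRESHOLDS = {
    "album":  [("Platinum", 300_000), ("Gold", 100_000), ("Silver", 60_000)],
    "single": [("Platinum", 600_000), ("Gold", 400_000), ("Silver", 200_000)],
}

def certification(fmt, units):
    for level, minimum in THRESHOLDS[fmt]:
        if units >= minimum:
            # Multi-Platinum is awarded in whole multiples of the Platinum level.
            if level == "Platinum":
                multiple = units // minimum
                return f"{multiple}x Platinum" if multiple > 1 else "Platinum"
            return level
    return None

print(certification("album", 350_000))   # -> Platinum
print(certification("single", 450_000))  # -> Gold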
In April 2018, a new Breakthrough certification was introduced, pertaining to an artist's first album to reach 30,000 sales. Additionally, the programme was re-branded as BRIT Certified, with public promotion of the programme being assumed by the BRIT Awards' social media outlets and digital properties. Former Chief Executive Geoff Taylor justified the change by stating that it was part of an effort to cross-promote the certifications with "the UK's biggest platform for artistic achievement".
In May 2023, BPI launched an expansion of the BRIT Certified Awards Scheme with BRIT Billion, which celebrates outstanding achievement in recorded music by surpassing the landmark of one billion career UK streams – as calculated by the Official Charts Company. Recipients to date include RAYE, Billie Eilish, Queen, The Rolling Stones, Olivia Rodrigo, Katy Perry, Whitney Houston, Mariah Carey, Wizkid and Coldplay. In Autumn 2023, Ed Sheeran was presented with a special edition Gold BRIT Billion Award, celebrating his achievement as the first British artist to surpass ten billion career UK streams.
== Anti-piracy operations ==
The BPI has developed bespoke in-house software and automated crawling tools that search for members' repertoire across more than 400 known infringing sites and generate URLs, which are sent to Google as DMCA notices for removal within hours of receipt. Additionally, personnel are seconded to the City of London Police Intellectual Property Crime Unit to support anti-piracy operations.
== See also ==
Home Taping Is Killing Music
Official Charts Company
List of music recording certifications
Parental Advisory
== Notes ==
== References == | Wikipedia/British_Phonographic_Industry |
Phonograph cylinders (also referred to as Edison cylinders after their creator Thomas Edison) are the earliest commercial medium for recording and reproducing sound. Commonly known simply as "records" in their heyday (c. 1896–1916), a name which has been passed on to their disc-shaped successor, these hollow cylindrical objects have an audio recording engraved on the outside surface which can be reproduced when they are played on a mechanical cylinder phonograph. The first cylinders were wrapped with tin foil, but an improved version made of wax was created a decade later, after which they were commercialized. In the 1910s, the competing disc record system triumphed in the marketplace to become the dominant commercial audio medium.
== Early development ==
In December 1877, Thomas Edison and his team invented the phonograph using a thin sheet of tin foil wrapped around a hand-cranked, grooved metal cylinder. Tin foil was not a practical recording medium for either commercial or artistic purposes, and the crude hand-cranked phonograph was only marketed as a novelty, to little or no profit. Edison moved on to developing a practical incandescent electric light, and the next improvements to sound recording technology were made by others.
Following seven years of research and experimentation at their Volta Laboratory, Charles Sumner Tainter, Alexander Graham Bell, and Chichester Bell introduced wax as the recording medium, and engraving, rather than indenting, as the recording method. In 1887, their "Graphophone" system was being put to the test of practical use by official reporters of the US Congress, with commercial units later being produced by the Dictaphone Corporation. After this system was demonstrated to Edison's representatives, Edison quickly resumed work on the phonograph. He settled on a thicker all-wax cylinder, the surface of which could be repeatedly shaved down for reuse. Both the Graphophone and Edison's "Perfected Phonograph" were commercialized in 1888. Eventually, a patent-sharing agreement was signed, and the wax-coated cardboard tubes were abandoned in favor of Edison's all-wax cylinders as an interchangeable standard format.
Beginning in 1889, prerecorded wax cylinders were marketed. These have professionally made recordings of songs, instrumental music or humorous monologues in their grooves. At first, the only customers for them were proprietors of nickelodeons (the first jukeboxes), installed in arcades and taverns, but within a few years, private owners of phonographs were increasingly buying them for home use. Unlike later, shorter-playing high-speed cylinders, early cylinder recordings were usually cut at a speed of about 120 rpm and could play for as long as three minutes. They were made of a relatively soft wax formulation and would wear out after being played a few dozen times. The buyer could then use a shaving mechanism to smooth their surfaces so that new recordings could be made on them.
Cylinder machines of the late 1880s and the 1890s were usually sold with recording attachments. The ability to record as well as play back sound was an advantage of cylinder phonographs over the competition from cheaper disc record phonographs, which began to be mass-marketed at the end of the 1890s, as the disc system machines could be used only to play back prerecorded sound.
In the earliest stages of phonograph manufacturing, various incompatible, competing types of cylinder recordings were made. A standard system was decided upon by Edison Records, Columbia Phonograph, and other companies in the late 1880s. The standard cylinders are about 4 inches (10 cm) long and 2¼ inches (5.7 cm) in diameter, and play about two minutes of recorded material.
Originally, all cylinders sold needed to be recorded live on the softer brown wax, which wore out after as few as 20 plays. Later cylinders were reproduced either mechanically or by linking phonographs together with rubber tubes.
Over the years, the type of wax used in cylinders was improved and hardened, so that cylinders could be played with good quality over 100 times. In 1902, Edison Records launched a line of improved, hard wax cylinders marketed as "Edison Gold Moulded Records". The major development of this line of cylinders is that Edison had developed a process that allowed a mold to be made from a master cylinder, which then permitted the production of several hundred cylinders to be made from the mold. The process was labeled "Gold Moulded" because of the gold vapor that was given off by gold electrodes used in the process.
== Commercial packaging ==
The earliest soft wax cylinders were sold wrapped in thick cotton batting. Later, molded hard-wax cylinders were sold in boxes with a cotton lining. Celluloid cylinders were sold in unlined boxes. These protective boxes were normally kept and used to house the cylinders after purchase. Their general appearance allowed bandleader John Philip Sousa to deride their contents as "canned music", an epithet he borrowed from Mark Twain.
== Hard plastic cylinders ==
On March 20, 1900, Thomas B. Lambert was granted a US patent (645,920) that described a process for mass-producing cylinders made from celluloid, an early hard plastic. (Henri Jules Lioret of France was producing celluloid cylinders as early as 1893, but they were individually recorded rather than molded.) That same year, the Lambert Company of Chicago began selling cylinder records made of the material. They would not break if dropped and could be played thousands of times without wearing out. The color was changed to black in 1903, but brown and blue cylinders were also produced; the dyes purportedly reduced surface noise. Unlike wax, the hard, inflexible material could not be shaved and recorded over, but it had the advantage of being nearly permanent. A 1905 Edison phonograph may be seen and heard playing a celluloid cylinder at the Musical Museum, Brentford, England, where the sound quality remains notably good.
This superior technology was licensed by the Indestructible Record Company in 1906 and Columbia Phonograph Company in 1908. The Edison Bell company in Europe had separately licensed the technology and were able to market Edison's titles in both wax (popular series) and celluloid (indestructible series).
In late 1908, Edison had introduced wax cylinders that played for nominally four minutes (instead of the usual two) under the Amberol brand. They were made from a harder (and more fragile) form of wax to withstand the smaller stylus used to play them. The longer playing time was achieved by reducing the groove size and placing the grooves half as far apart. In 1912, the Edison company eventually acquired Lambert's patents to the celluloid technology and almost immediately started production under a variation of their existing Amberol brand as Edison Blue Amberol Records.
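The arithmetic behind the two- and four-minute formats is straightforward: playing time equals the number of groove turns divided by the rotational speed, so halving the groove spacing doubles the duration. The figures in this sketch (about 100 turns per inch over roughly 3.5 inches of recorded surface at 160 rpm for the two-minute standard) are commonly cited approximations, not exact specifications.

def playing_time_min(turns_per_inch, recorded_length_in, rpm):
    # Each turn of the groove corresponds to one revolution of the cylinder.
    revolutions = turns_per_inch * recorded_length_in
    return revolutions / rpm

print(playing_time_min(100, 3.5, 160))  # standard cylinder        -> ~2.2 minutes
print(playing_time_min(200, 3.5, 160))  # Amberol, half the pitch  -> ~4.4 minutes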
Edison designed several phonograph types, both with internal and external horns for playing these improved cylinder records. The internal horn models were called Amberolas. Edison marketed its "Fireside" model phonograph with a gearshift and a 'model K' reproducer with two different styli, which allowed it to play both two-minute and four-minute cylinders.
== Decline ==
Cylinder records continued to compete with the growing disc record market into the 1910s, when discs won the commercial battle. In 1912, Columbia Records, which had been selling both discs and cylinders, dropped the cylinder format, while Edison introduced his Diamond Disc format, played with a diamond stylus. Beginning in 1915, new Edison cylinder issues consisted of acoustic dubbings from Edison disc masters; they therefore had lower audio quality than the disc originals. Although his cylinders continued to be sold in steadily dwindling and eventually minuscule quantities, Edison continued to support the owners of cylinder phonographs by making new titles available in that format until the company ceased manufacturing all records and phonographs in November 1929. Many of the later issued Blue Amberols were dubbed electrically from electrically recorded masters.
== Later applications ==
Cylinder phonograph technology continued to be used for Dictaphone and Ediphone recordings for office use for decades.
In 1947, Dictaphone replaced wax cylinders with their Dictabelt technology, which cut a mechanical groove into a plastic belt instead of into a wax cylinder. This was later replaced by magnetic tape recording. However, cylinders for older style dictating machines continued to be available for some years, and it was not unusual to encounter cylinder dictating machines into the 1950s.
In the late 20th and early 21st century, new recordings have been made on cylinders for the novelty effect of using obsolete technology. Probably the most famous of these are by They Might Be Giants, who in 1996 recorded "I Can Hear You" and three other songs, performed without electricity, on an 1898 Edison wax recording studio phonograph at the Edison National Historic Site in West Orange, New Jersey. This song was released on Factory Showroom in 1996 and re-released on the 2002 compilation Dial-A-Song: 20 Years of They Might Be Giants. The other songs recorded were "James K. Polk", "Maybe I Know", and "The Edison Museum", the last a song about the site of the recording. These recordings were officially released online as MP3 files in 2001.
Small numbers of cylinders have been manufactured in the 21st century out of modern long-lasting materials. Two companies engaged in such enterprise are the Vulcan Cylinder Record Company of Sheffield, England, and the Wizard Cylinder Records Company in Baldwin, New York.
In 2010 the British musical group The Men That Will Not Be Blamed for Nothing released the track "Sewer", from their debut album Now That's What I Call Steampunk! Volume 1, on a wax cylinder in a limited edition of 40, of which only 30 were put on sale. The box set came with instructions on how to make a cylinder player for less than £20. The BBC covered the release on television (BBC Click), on BBC Online, and on Radio 5 Live.
In June 2017 the Cthulhu Breakfast Club podcast released a special limited wax cylinder edition of a show.
In April 2019, the podcast Hello Internet released ten limited edition wax cylinder recordings.
In May 2023, Needlejuice Records released wax cylinder singles for Lemon Demon songs "Touch-Tone Telephone" and "The Oldest Man On MySpace", from albums Spirit Phone and Dinosaurchestra, respectively.
== Preservation of cylinder recordings ==
Because of the nature of the recording medium, playback of many cylinders can cause degradation of the recording. Replay diminishes their fidelity and degrades their recorded signals. Additionally, when exposed to humidity, mold can penetrate a cylinder's surface and cause the recording to have surface noise. Currently, the only professional machines manufactured for the playback of cylinder recordings are the Archéophone player, designed by Henri Chamoux, and the "Endpoint Cylinder and Dictabelt Machine" by Nicholas Bergh. The Archéophone is used by the Edison National Historic Site, Bowling Green State University (Bowling Green, Ohio), the Department of Special Collections at the University of California, Santa Barbara Library, and many other libraries and archives; the Endpoint is used by The New York Public Library for the Performing Arts.
In an attempt to preserve the historic content of the recordings, cylinders can be read with a confocal microscope and converted to a digital audio format. The resulting sound clip in most cases sounds better than stylus playback from the original cylinder. Having an electronic version of the original recordings enables archivists to open access to the recordings to a wider audience. This technique also has the potential to allow for reconstruction of damaged or broken cylinders.
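A schematic Python sketch of the final step of such optical playback, assuming the groove depth has already been sampled along its length from microscope imagery; real systems such as the IRENE project add groove tracking, dewarping and noise restoration, all omitted here.

import numpy as np
from scipy.io import wavfile

def groove_to_audio(depth_profile, samples_per_second, out_path):
    # For a vertically cut ("hill and dale") cylinder, the reproduced signal
    # approximates the stylus's vertical velocity, so differentiate the depth.
    velocity = np.gradient(depth_profile.astype(np.float64))
    # Normalize to the 16-bit range and write a mono WAV file.
    velocity /= max(np.max(np.abs(velocity)), 1e-12)
    pcm = (velocity * 32767).astype(np.int16)
    wavfile.write(out_path, samples_per_second, pcm)

# Usage with synthetic data: a 1 kHz test tone "engraved" as depth variation.
t = np.linspace(0, 1.0, 44_100)
groove_to_audio(np.sin(2 * np.pi * 1000 * t), 44_100, "cylinder.wav")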
== Gallery ==
== See also ==
Archéophone
Audio format
Audio storage
Cylinder Audio Archive
Mapleson Cylinders
Telediphone
Volta Laboratory and Bureau
== References ==
=== General references ===
Fadeyev, Vitaliy; Haber, Carl; Radding, Zachary; Maul, Christian; McBride, John; Golden, Mitchell (May 2004). "Reconstruction of Mechanically Recorded Sound by Image Processing" (PDF). Journal of the Acoustical Society of America. 115 (5): 2494. Bibcode:2004ASAJ..115.2494F. doi:10.1121/1.4782907. S2CID 7371031.
Read, Oliver; Welch, Walter L. (1976). From Tin Foil to Stereo: Evolution of the Phonograph (2nd ed.). Indianapolis, Indiana: Howard W. Sams. ISBN 978-0672212062.
=== Inline citations ===
== Further reading ==
Frow, George L.; Sefl, Albert F. (1978). The Edison Cylinder Phonographs 1877–1929. Sevenoaks, Kent: George F. Frow. ISBN 0-9505462-2-4.
Koenigsberg, Allen (1987). Edison Cylinder Records, 1889–1912, With an illustrated history of the phonograph. Brooklyn, New York: APM Press. ISBN 0-937612-07-3.
Morton, David L. Jr. (2004). Sound Recording – The Life Story of a Technology. Baltimore, Maryland: Johns Hopkins University Press.
Schüller, Dietrich (2004). "Technology for the Future". In A. Seeger; S. Chaudhuri (eds.). Archives for the Future: Global Perspectives on Audiovisual Archives in the 21st Century. Calcutta, India: Seagull Books.
== External links ==
Tinfoil.com – History of phonograph cylinders; listen to many examples dating from 1878 through 1912
UCSB Cylinder Audio Archive, University of California, Santa Barbara: Streaming and downloadable versions of over 10,000 cylinders.
Vulcan Cylinder Record Company
Ethnographic wax cylinders from the British Library | Wikipedia/Phonograph_cylinder |
In digital electronics, analogue electronics and entertainment, the user interface may include media controls, transport controls or player controls to start, change or adjust the playback of video, audio and the like. These controls are commonly depicted as widely known symbols found in a multitude of products, exemplifying what is known as dominant design.
== Symbols ==
Media control symbols are commonly found on both software and physical media players, remote controls, and multimedia keyboards. Their application is described in ISO/IEC 18035.
The main symbols date back to the 1960s, with the Pause symbol having reportedly been invented at Ampex during that decade for use on reel-to-reel audio recorder controls, due to the difficulty of translating the word "pause" into some languages used in foreign markets. The Pause symbol was designed as a combination of the existing square Stop symbol and the caesura, and was intended to evoke the concept of an interruption or "stutter stop". The right-pointing triangle was adopted to indicate the direction of tape movement during playback. This design choice was straightforward: the arrow pointed in the direction the tape advanced. Over time, this symbol became standardized across various media devices, from cassette players to CD players, and eventually digital interfaces.
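The standardized symbols are also encoded as Unicode characters, largely in the Miscellaneous Technical block, which lets software interfaces render them as ordinary text. The short Python listing below prints the standard code points:

MEDIA_SYMBOLS = {
    "play":         "\u25B6",  # BLACK RIGHT-POINTING TRIANGLE
    "pause":        "\u23F8",  # DOUBLE VERTICAL BAR
    "play/pause":   "\u23EF",
    "stop":         "\u23F9",
    "record":       "\u23FA",
    "fast-forward": "\u23E9",
    "rewind":       "\u23EA",
    "next track":   "\u23ED",
    "prev track":   "\u23EE",
}

for name, glyph in MEDIA_SYMBOLS.items():
    print(f"{name:>13}: {glyph}  U+{ord(glyph):04X}")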
== In popular culture ==
=== Consumer products ===
The Play symbol is arguably the most widely used of the media control symbols. In many ways, this symbol has become synonymous with music culture and more broadly the digital download era. As such, there are now a multitude of items such as T-shirts, posters, and tattoos that feature this symbol. Similar cultural references can be observed with the Power symbol which is especially popular among video gamers and technology enthusiasts.
=== Branding ===
Media symbols can be found on an array of advertisements: from live music venues to streaming services.
In 2012, Google rebranded its digital download store to Google Play, using the Play symbol in its logo. The Play symbol has also served as a logo for YouTube since 2017. Television station owners Morgan Murphy Media and TEGNA have begun to incorporate the Play symbol into the logos of their stations to further connect their websites to their over-the-air television presences.
== Use on appliances and other mechanical devices ==
In recent years, there has been a proliferation of electronics that use media control symbols to represent the Run, Stop, and Pause functions. Likewise, user interface programming for these functions has also been influenced by that of media players.
For example, some washers and dryers with an illuminated Play/pause button are programmed so that the button stays lit while the appliance is running. A line of Philips pasta makers features a Play/pause button for controlling the pasta-making process.
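A minimal sketch, assuming a simple three-state controller, of how such an appliance's play/pause behavior might be programmed; the states and LED behavior mirror the washer example above but are illustrative, not any manufacturer's firmware.

from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    PAUSED = auto()

class ApplianceController:
    def __init__(self):
        self.state = State.IDLE
        self.button_led = False  # the illuminated play/pause button

    def press_play_pause(self):
        if self.state in (State.IDLE, State.PAUSED):
            self.state = State.RUNNING
            self.button_led = True   # stays lit while the appliance runs
        elif self.state == State.RUNNING:
            self.state = State.PAUSED
            self.button_led = False

    def press_stop(self):
        self.state = State.IDLE
        self.button_led = False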
== See also ==
List of international common standards
Power symbol
Miscellaneous Technical
== References == | Wikipedia/Media_control_symbols |
A law firm network (law firm association or legal network) is a membership organization consisting of independent law firms. These networks are one type of professional services network, similar to networks found in the accounting profession. The common purpose is to expand the resources available to each member for providing services to their clients. Prominent primary law firm networks include CICERO League of International Lawyers, First Law International, Alliott Group (multidisciplinary), Lex Mundi, World Services Group (multidisciplinary), TerraLex, Meritas, Multilaw, The Network of Trial Law Firms, Inc., the State Capital Group, and Pacific Rim Advisory Council. Some of the largest legal networks span the globe, with more than 10,000 attorneys spread across hundreds of offices worldwide.
Firms within a network can have formal or informal connections with each other, depending on the network's purpose.
== History of law firm networks ==
There are two different reasons for networks developing in the legal profession. The first reason is that law firms needed international connections due to globalization in the 1990s. The second reason is the expansion of many large United States firms to become "national". Smaller firms or firms with a niche practice requested expertise from these networks.
The internationalization of the legal profession began later than that of the accounting profession. Unlike accounting firms that conducted worldwide audits, law firms in each country were able to deal with national client matters. This changed in 1949, when Baker & McKenzie began to expand to non-United States markets to assist U.S. clients that were expanding overseas following WWII.
Internationalization was slow because the legal profession was more restrictive than accounting in allowing foreign firms to enter and practice in other countries. One such restriction in many jurisdictions was a requirement that the names of a firm's partners appear in the name of the firm.
In the late 1980s, U.S. and English firms began establishing branches in the primary commercial centers. This new competition in local markets had the immediate effect of forcing local firms to evaluate alternative ways of providing services to their international clients.
The first international networks, called clubs, generally consisted of ten firms in different countries. The typical format was to hold several meetings a year among managing partners, to compare notes on management-related issues. They were secretive networks because the members feared losing business from other firms. On the other hand, they would advertise to their clients that they had foreign connections and correspondents. Today the clubs are commonly known as "best friends networks". Examples today are Leading Counsel Network and Slaughter & May.
The clubs evolved into networks in the 1980s. The networks were not secretive and published directories, materials, and brochures (Interlaw being one of the first examples). The members met annually. Some focused on specific practices, such as litigation, while others were more generic. Because networks were not thought of as strategic models, the membership selection process was not rigorous. This selection process is reflected today in networks whose members span a wide range of sizes, with small firms in some countries sitting alongside member firms three or four times their size.
Lex Mundi was formed in 1989. It was the first network where each member had to be among the largest and most established firms in a state or country. It was a business that provided members with many alternatives to expand their resources. Lex Mundi is a network organized around strategic objectives, rather than objectives being defined after the network was established. While different from the accounting network, the concept was that of an entity that provided services to members and should also have an established brand. The staff, board, councils, and members collaborated to achieve the objectives.
Other networks, such as TerraLex, Meritas (also known at the time as Commercial Law Affiliates) and International Jurists, soon followed. These networks operated as businesses. Their stated objective was to create a branded alternative to the large United States and English law firms that had expanded into their countries.
In the 1980s, the United States witnessed the development of specialized national networks. One example of such a network focused on insurance litigation was ALFA. The State Capital Law Firm Group also emerged in the 1980s, initially as a national network for firms specializing in government affairs. Membership in this network reportedly required a former governor to be affiliated with the firm. Notably, both ALFA and the State Capital Law Firm Group later expanded internationally, becoming ALFA International and the State Capital Global Legal Network, respectively.
== Management structure ==
The typical network is run on a day-to-day basis by an administrative office called a secretariat or home office.
The head office is commonly located in major commercial centers in Europe or North America and does not practice law. Depending on the nature of the network (extension of the members or an independent business), the person responsible for the network will be an executive director or president/CEO.
The network may be governed by a board of directors. Networks also have representatives who form a management board with the executive director.
== Membership ==
There are three different types of networks. The original large networks – Lex Mundi, ALFA, TerraLex, and the State Capital Group – tended to admit large firms as members, because they were competing against large firms opening offices. Newer networks are more likely to include smaller firms. Specialty law firms may participate in boutique networks based on their field of law.
Law firm networks may offer their members territorial exclusivity. When that occurs, another firm cannot be admitted within the member's exclusive territory.
Network members may together have hundreds of offices. Firms listed in the list of largest U.S. law firms have at most 4,000 attorneys. The largest networks include more than 10,000 attorneys.
Membership of networks may be open to accounting firms, and accounting networks also form alliances with law networks. Some commentators take the position that bringing together a group of lawyers and accountants to create a multidisciplinary association ultimately benefits clients, who often need a wide range of professional advisers when involved in large transactions, for example when incorporating a new business, or in litigation matters, where the litigation support services of accountants can be very valuable to lawyers. Bringing lawyers and accountants together does not create a multidisciplinary practice (MDP), as all firms are separate legal entities. MDPs that involve law firms partnering with non-law firms remain highly regulated or forbidden in most nations and jurisdictions.
== Functions ==
=== Retaining firm independence ===
Becoming part of a network may help firms serve new marketplaces while retaining independence, creating economies of scale, and pool resources.
=== Client retention ===
Networks allow firms to refer their clients to similar-sized members in another jurisdiction, rather than lose clients to a larger international firm. Many firms believe that being part of a network provides clients with reassurance that they will receive similar levels of service from any firm in the network.
=== Practice development ===
Networks allow firms to attract larger clients operating on a multi-jurisdictional basis. Others disagree that law firm networks offer practice development opportunities.
=== Advantages of exclusivity ===
Networks may offer firms exclusivity. This may be a city, state, or country.
=== Branding ===
Membership in a network gives members the right to promote their affiliation in its territory using the network's logo. Use of the brand is encouraged, but not usually required, and would typically be implemented across firm stationery, marketing brochures, and web pages.
=== Open discussion in a non-competitive environment ===
Many networks consist of non-competing firms and therefore provide their members with the opportunity to openly discuss issues affecting their firm.
== See also ==
Bar association
Umbrella organization
Business networking
Professional services networks
Multidisciplinary professional services networks
Accounting networks and associations
== References == | Wikipedia/Law_firm_network |
Risk and strategic consulting refers to the provision of information, analysis and associated services in the field of international politics and economics, with the aim of providing a better understanding of the risks and opportunities facing businesses, governments and other groups.
In contrast to management consulting, which primarily concerns internal organization and performance, risk and strategic consulting aims to provide clients with an improved understanding of the political and economic climate in which they operate. Most such consultancy is focused on those developing countries and emerging markets in which political and business risks may be greater, harder to manage, or harder to assess. Risk and strategic consulting is sometimes carried out alongside other activities such as corporate investigation, forensic accounting, employee screening or vetting, and the provision of security systems, training or procedures. Some of the largest groups in the industry include Kroll Inc. and Control Risks Group, though the size and range of consultancies vary widely, with groups such as Black Cube and Hakluyt & Company providing boutique services.
Risk and strategic consultancy does not generally involve the operational 'risk assessment' carried out by many companies and consultancies, which covers the identification and management of commercial, operational and technical risks within existing operations or known markets. Risk and strategic consultancy also deals with countries and issues similar to those of interest to private military companies, though the two industries are distinct. Risk and strategy consultancies should not be confused with international lobbying or advocacy groups, though there are occasional overlaps. Clients of risk and strategic consulting firms include companies, governments and government agencies, charities and non-government organizations, academic institutions and individuals.
== See also ==
Foreign policy interest group
== References == | Wikipedia/Risk_and_strategic_consulting |
Multidisciplinary professional services networks are organizations formed by law, accounting and other professional services firms to offer clients new multidisciplinary approaches to solving increasingly complex issues. They are a type of professional services network that operates to provide services to its members. They operate in the same way as accounting firm networks and associations and law firm networks: they do not practice a profession such as law or accounting, but provide services to members so that the members can serve clients' needs. Their aim is to provide members doing business internationally with access to experienced, tried and tested, reliable, and responsive professional advisers who know their local jurisdiction intimately as well as the intricacies of cross-border business.
There are 10 multidisciplinary networks. The largest are: Alliott Group, MSI Global Alliance, Morison International, Geneva Group, International Practice Group, WSG - World Services Group and Russell Bedford International. These networks have more than 100 member firms in as many as 90 countries in hundreds of offices. The members employ thousands of professionals.
== History ==
Multidisciplinary networks are not new but found in a number of professions. They became important during the end of the 1990s when the accounting firms began to expand to the legal profession. The history is well documented.
The American Bar Association Commission on Multidisciplinary Practice refers to five multidisciplinary models. They are the cooperative, command and control, ancillary practice, network and multidisciplinary partnership models.
=== Big Six accounting firms – multidisciplinary practices ===
The multidisciplinary issue first arose in the 1940s but was dealt with by the American Bar Association prohibiting lawyers working for accounting firms from representing clients before the IRS. The foundation of multidisciplinary practice began when the Big Six accounting firms reached their natural growth limits. Accounting, auditing and tax services could generate only a finite amount of revenue for the Big Six. Their concept was simple: use the extensive list of clients to market non-traditional accounting services such as legal, recruitment, risk management and technology consulting. The objective was to bring these non-traditional services "in-house" using the time-tested network model.
Having reached their natural limit on growth, the Big Six branched out to become multidisciplinary in legal, technology, and employment services. Since the essential infrastructure was in place, it was thought to be relatively simple to incorporate other services into the existing network. The expansion could easily be financed using revenue from the traditional services. As a network, it was natural to create independent entities in these other professions which could themselves be part of the network. The method and structures varied from firm to firm, but the fundamental premise was the sharing of revenue between lawyers and accountants.
The accounting firms were initially very successful in creating these alternative businesses. Soon a number of Big 6 firms had multi-billion dollar technology consulting businesses. Other services were more difficult to bring in-house. Some, like legal services, demanded a different approach because of ethical considerations. The key factor among such different approaches is an internal control system within a firm.
=== Reaction of the legal profession ===
The initial expansion into legal services focused on the United States, which represented the largest potential market for these services. In Europe and South America the bar rules were not as developed as in the United States, and therefore did not restrict the sharing of revenues. The basis for this expansion was the law firm networks established under the umbrella of the Big Six. The issues were the sharing of profits with accountants and other professionals and possible conflicts of interest.
When the Big 6 began its expansion to the legal profession, it was met with fierce opposition from law firms and bar associations. Lawyers saw that the accounting profession would subsume the legal profession with its vast resources.
Commissions, panels and committees were established by legal and accounting firms to argue their positions. The American Bar Association established committees and taskforces to address the issue, but the problem spread outside of the United States, first to Europe and then to other countries where lawyers were not protected from this new foreign competition. Government agencies were enlisted. For more than five years, the debate escalated.
This movement ended abruptly with the fall of Arthur Andersen as a result of its association with Enron. The Sarbanes–Oxley Act followed, effectively ending this trend of multidisciplinary networks established by the Big Five.
=== Enron and Sarbanes–Oxley ===
There was, however, a fatal flaw in the multidisciplinary network concept of the Big Six. Their raison d'être was to audit public companies, and each additional service provided to an audit client contained an inherent conflict of interest. This conflict was illustrated by the perfect storm created by Enron: the additional services that Arthur Andersen was offering created a conflict with its role as the auditor. Multidisciplinary networks run by the accounting firms were finished, and the final nail in the coffin was the Sarbanes–Oxley Act, which meant that the accounting firms had to divest their consulting practices.
== Multidisciplinary networks today ==
The multidisciplinary network model was not dead but transformed to account for the issues. If the member firms were themselves independent, there was no prohibition on having a multidisciplinary network. This was recognized by the ABA.
Today there are at least eleven networks. The largest are in the legal and accounting professions. A few of the legal and accounting networks include investment banking. The primary networks are focused on tax, employment, intellectual property, insurance and immigration.
== See also ==
Umbrella organization
Business networking
Organizational Studies
Command and Control
Professional services networks
Law firm network
Accounting networks and associations
== References == | Wikipedia/Multidisciplinary_professional_services_networks |
Professional services networks are business networks of independent firms who come together to provide professional services to clients through an organized framework. They are notably found in law and accounting. Any profession that operates in one location, but has clients in multiple locations, may provide potential members for a professional network. This entry focuses on accounting, legal, multidisciplinary and specialty practice networks. According to statistics from 2010, members of these networks employ more than one million professionals and staff and have cumulative annual revenues that exceed $200 billion.
The accounting networks developed first to meet the US Securities and Exchange Commission's requirement for public company audits. They include the well-known accounting networks like PwC, Deloitte, Ernst & Young and KPMG (also known as the Big 4 Audit Firms) as well as more than 30 other accounting networks and associations. They are highly structured entities.
The law firm network developed in the late 1980s. They include legal and law firm based multidisciplinary networks like Lex Mundi, Alliott Group, World Services Group, TerraLex, Meritas, IR Global and the State Capital Group.
There are more than 175 known networks in law, 40 in accounting, and 20 specialty networks. Individual networks have revenues exceeding $20 billion.
== Recognizing a network – the disclaimer ==
Every network from accounting networks like PwC and KPMG to law firm networks like Lex Mundi, Multilaw, and multidisciplinary networks like World Services Group (WSG) uses a "network disclaimer". A network disclaimer states that the network members are independent firms that do not practice jointly and are not responsible for the negligence of each other. It further states that generally the network does not practice a profession or otherwise provide services to clients of the network's members. This independence is the foundation of both network operations and governance.
== Why a network rather than a company ==
A major factor influencing the need for networks is the globalization of the economy. Supply and demand are no longer local but global. The price of commodities is affected by the weather halfway across the world or by demand in developing countries. Production takes place wherever the assets and human resources can most effectively deploy. Professional services providers must be able to reach out globally to represent their clients everywhere in the world. Networks are the practical and cost-effective method to accomplish these objectives. Members of networks have access to other members who understand the local economic, legal and political factors.
From a theoretical point of view, networks are an effective model of enhancing services. The members and the networks are different parts of the resource equation for providing members quality, reliable, local and global services. There is no real limit of what can be accomplished through a network when the network and its membership work in combination with each other. This collaboration is at the heart of the network.
Networks do not practice a profession or provide the services that their members provide to their clients. Networks do not provide accounting or legal services. They operate for the benefit of the members by supporting their operations. The network can combine the resources of the individual members without risking the loss of their personal identities or financial independence.
A network is more than a support organization or collaborative framework in which the members can meet clients' needs. It is an entity that has a common corporate identity or brand. The network name can represent a standard that is required of all its members. The logo and brand are owned by the network, not the members. Membership can create a global corporate identity. The objective of this identity is network participation that will translate into business for the individual independent members.
For a company to internally develop a global and local presence would take decades and billions of dollars. For a company/firms to start a network that develops the same market penetration may take a decade and cost only millions of dollars. However, these costs are allocated among the full membership so the cost per member is low. The cost for future members to gain direct and immediate access to these resources is de minimis.
== The formation of a network ==
Professional service networks are sui generis, and each network is formed for a different reason. Current and potential members are attracted to networks in which they can pursue their own individual objectives. While networks clearly do have things in common, each must be viewed in the context of its uniqueness. A successful network is one that meets the expectations of all of its members.
The objective of a network is to create a framework which can allow the members to expand their services. Within the network they can operate to pursue their interests. These interests can include referrals, joint venturing, access to expertise, developing regional expertise, publishing articles for clients, branding, technical information exchange, market positioning, pro bono services, etc. The scope of these interests is defined not by the members but by the network.
Network organizations are defined by their purpose, structure and process. The purpose of a network is different from that of a company or professional firm in that it is limited to specific activities that will benefit its members and enhance its performance. The network's structure reflects the activities it seeks to promote and the underlying cultures of the members. Accounting, legal, multidisciplinary and specialty networks will each be different. The process is defined by how they are governed and operated.
Networks are created around common specialized assets, joint control, and a collective purpose. The specialized assets reflect the defined activities of the network. To have joint control of the assets, there needs to be collaborations among members. The collaboration necessitates a commonly understood purpose or purposes. A professional services network is neither a mere extension of the members nor only a support organization for independent professional services firms, but is rather an independent organization. It is also a business, and very different from professional associations such as bar, accounting and other associations whose membership is generally open to all qualified professionals.
== Reasons for joining networks ==
When asked why they joined, members usually state that they joined for tangible reasons: to receive referrals from other members, to have reliable firms to which they can refer, to maintain independence, to meet clients' needs, to retain existing clients by being able to provide services in other states or countries, and to obtain new clients in their market who know of the membership. They may also want to exchange knowledge that can reduce risks in their own firm's operations, or gain access to other resources. Network members also minimize possible losses by spreading risks. Membership is a proactive way to profit from change and at the same time to conserve resources. Membership can also enhance the prestige of the member by being associated with prestigious firms that the client is already using. Networks achieve these objectives through different corporate structures in which executives have command and control.
== See also ==
Professional services
Umbrella organization
Business networking
Organizational studies
Command and control
Law firm network
Accounting networks and associations
Big Four accounting firms
Big Three (management consultancies)
Multidisciplinary professional services networks
== References == | Wikipedia/Professional_services_network |
Strategy+Business (stylized as strategy+business) is a business magazine focusing on management and corporate strategy. Headquartered in New York, it is published by member firms of the PricewaterhouseCoopers (PwC) network.
Articles cover industry topics of interest to CEOs and other senior executives, as well as to business academics and researchers. The articles, written in English, are authored by a mix of figures from both the executive suite and academia in addition to journalists and consultants from PwC.
The magazine's founding editor-in-chief, Joel Kurtzman, coined the term thought leadership when he published interviews with influential business figures under the rubric “Thought Leaders.” Interviews with “Thought Leaders” remain a recurring feature on the strategy+business website.
== History ==
Before the separation of Booz & Company (now Strategy&) from Booz Allen Hamilton in 2008, strategy+business had been published by Booz Allen Hamilton as Strategy & Business since its launch in 1995. Full issues of strategy+business appear in print and digital form every quarter, and other original material is published daily on its website.
Joel Kurtzman, formerly editor-in-chief of the Harvard Business Review and a business editor and columnist at The New York Times — together with a group of partners at Booz & Company, which was then part of Booz Allen Hamilton — founded strategy+business in 1995. A collection of Kurtzman's "Thought Leader" columns was published in book form as Thought Leaders: Insights on the Future of Business (Jossey-Bass, 1997). Kurtzman served as editor-in-chief between 1995 and 1999.
Randall Rothenberg succeeded Kurtzman, serving as editor-in-chief between 2000 and 2005. Previously, Rothenberg had been an editor of The New York Times Magazine and had also served as the newspaper's advertising columnist. He redesigned strategy+business, introduced the “Best Business Books” section, and expanded coverage of electronic media. Rothenberg's first major issue, which was published in February 2000, was titled “E-Business: Lessons from Planet Earth,” and contained articles that prophesied the dot-com crash that occurred several months later. During Rothenberg's tenure, the strategy+business staff was formally brought into the Booz Allen Hamilton operation; before that, the magazine had been a standalone, contracted enterprise.
By 2002, the firm's e-commerce business was not doing well, but Booz Allen's partners decided to keep publishing the magazine. Rothenberg and Cesare Mainardi (former chief executive officer at Strategy&) developed what they called the “functional agenda.” They started to build a body of research and practice around six major functions: strategy and leadership; innovation; organizations and people; marketing and sales; mergers and restructuring; and operations. Many of the regular features in strategy+business, including the “Global Innovation 1000” survey of top R&D spenders and the “CEO Succession” report on CEO tenure, date back to this effort.
Art Kleiner succeeded Rothenberg in 2005 and served as editor-in-chief until January 2020. A writer, lecturer, and commentator, Kleiner is the author of The Age of Heretics: A History of the Radical Thinkers Who Reinvented Corporate Management (Currency/Doubleday, 1996; rev’d. ed., 2008, Jossey-Bass) and Who Really Matters: The Core Group Theory of Power, Privilege, and Success (Currency/Doubleday, 2003).
During Kleiner's tenure, the magazine published influential articles on neuroscience and leadership (a 2006 article by David Rock and Jeffrey Schwartz led to the establishment of the field of neuroleadership), women in emerging markets, investment in infrastructure, organizational culture, theories of economic change, and market dislocation. In 2020, financial and economic journalist Daniel Gross was named editor-in-chief.
After a private equity takeover by The Carlyle Group in 2008, Booz Allen Hamilton was split into two entities. Strategy+business became the flagship publication of the commercial firm, Booz & Company (now known as Strategy&). Strategy& is part of PricewaterhouseCoopers, which acquired it (as Booz & Company) on April 3, 2014.
== Readership ==
Strategy+business has a global audience of more than 1,000,000 readers, with a circulation of about 600,000 through its digital editions. The magazine has drawn more than 500,000 web registrants and more than 350,000 readers on social media. According to a 2012 study by Readex Research analyzing s+b’s readership, 34 percent of s+b print readers are C-suite and senior executives, and 39 percent have served on a board of directors. Approximately 80 percent of readers have pursued post-graduate degrees, 92 percent hold professional or managerial positions, and their average household net worth is more than US$1.6 million.
== Contributors ==
The magazine's contributors have included Warren Bennis, Ram Charan, Stewart Brand, Nicholas Carr, Denise Caruso, Glenn Hubbard, Sheena Iyengar, Rosabeth Moss Kanter, Jon Katzenbach, A.G. Lafley, Franco Modigliani, Kenichi Ohmae, C.K. Prahalad (including a posthumous article), Sally Helgesen, Marshall Goldsmith, Sylvia Ann Hewlett, and Peter Senge.
Those interviewed as "thought leaders" include Bob Wright, Vineet Jain, Sir Martin Sorrell, Tom Peters, Joe Kaeser, Jonathan Haidt, Bran Ferren, Frances Hesselbein, Andrew Ng, Geoffrey West, Mark Bertolini, Ellen Langer, Zhang Ruimin, Rita Gunther McGrath, Eric Ries, Douglas Rushkoff, David Kantor, Douglas Conant, Otto Scharmer, Clayton Christensen, Betty Sue Flowers, Rakesh Khurana, Philip Bobbitt, John Chambers, Arie de Geus, Gary Hamel, Charles Handy, Daniel Kahneman, John Kao, Sylvia Nasar, Carlota Perez, Paul Romer, Anne-Marie Slaughter, Shelly Palmer, Vineet Jain, Kenji Yoshino, Ian Bremmer, John Coyle, Sally Blount, Michael Useem, Harbir Singh, Daniel Gross, Linda Hasenfratz, and Meg Wheatley.
The magazine also features a variety of illustrators and photographers, including Guy Billout, Seymour Chwast, Lars Leetaru, Peter Gregoire, and Dan Page.
== Features ==
Regular features include “Thought Leader” interviews; “Recent Research” columns, which are reports on academic studies' implications for corporate action; "Young Profs" interviews and columns, which are interviews with up-and-coming business leaders and academics; and “Books in Brief,” which are reviews of new books on business and management.
Strategy+Business also publishes “Global Innovation 1000,” a report that examines corporate spending on research and development each year, based on research conducted by Strategy&. The magazine's most popular pieces were collected in “15 Years, 50 Classics,” published in 2010. In 2015, s+b celebrated two decades with a special collection of online essays, including an interactive history of management theory titled "20 Questions for Business Leaders."
=== Best Business Books ===
Strategy+business publishes an annual feature called “Best Business Books,” where business books are reviewed systematically. Writers of these essays (who select the books in each category) have included Bethany McLean, Frances Cairncross, Clive Crook, Krisztina "Z" Holly, Walter Kiechel III, Steven Levy, Nell Minow, Howard Rheingold, Kenneth Roman, David Warsh, James Surowiecki, Duff McDonald, J. Bradford DeLong, and Dov Zakheim.
== References ==
== External links ==
Official website | Wikipedia/Strategy+Business |
An aerated lagoon (or aerated pond) is a simple wastewater treatment system consisting of a pond with artificial aeration to promote the biological oxidation of wastewaters.
There are many other aerobic biological processes for treatment of wastewaters, for example activated sludge, trickling filters, rotating biological contactors and biofilters. They all have in common the use of oxygen (or air) and microbial action to reduce the pollutants in wastewaters.
== Types ==
Suspension mixed lagoons, where there is sufficient energy provided by the aeration equipment to keep the sludge in suspension.
Facultative lagoons, where there is insufficient energy provided by the aeration equipment to keep the sludge in suspension and solids settle to the lagoon floor. The biodegradable solids in the settled sludge then degrade as in an anaerobic lagoon.
=== Suspension mixed lagoons ===
Suspension mixed lagoons are flow-through activated sludge systems in which the effluent has the same composition as the mixed liquor in the lagoon. Typically the sludge will have a residence time, or sludge age, of 1 to 5 days. This means that relatively little chemical oxygen demand (COD) is removed, and the effluent is therefore unacceptable for discharge into receiving waters. The objective of the lagoon is therefore to act as a biologically assisted flocculator which converts the soluble biodegradable organics in the influent to a biomass that is able to settle as a sludge. Usually the effluent is then put in a second pond where the sludge can settle. The effluent can then be removed from the top with a low chemical oxygen demand, while the sludge accumulates on the floor and undergoes anaerobic stabilisation.
== Methods of aerating lagoons or basins ==
There are many methods for aerating a lagoon or basin:
Motor-driven submerged or floating jet aerators
Motor-driven floating surface aerators
Motor-driven fixed-in-place surface aerators
Injection of compressed air through submerged diffusers
=== Floating surface aerators ===
Ponds or basins using floating surface aerators achieve 80 to 90% removal of BOD with retention times of 1 to 10 days. The ponds or basins may range in depth from 1.5 to 5.0 meters.
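The relationship between retention time and BOD removal is commonly estimated with a first-order, complete-mix model, Se/S0 = 1/(1 + k·t). The sketch below uses an illustrative rate constant of k = 1.0 per day; real values depend on temperature and wastewater character.

def bod_removal_fraction(k_per_day, retention_days):
    # First-order, complete-mix approximation: Se/S0 = 1 / (1 + k*t).
    return 1 - 1 / (1 + k_per_day * retention_days)

for t in (1, 3, 5, 10):
    print(f"t = {t:>2} d: {bod_removal_fraction(1.0, t):.0%} removal")
# With k = 1.0 per day this spans roughly 50% at one day to ~91% at ten
# days, consistent with the 80 to 90% removal quoted above for typical
# retention times.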
In a surface-aerated system, the aerators provide two functions: they transfer into the basins the air required by the biological oxidation reactions, and they provide the mixing required for dispersing the air and for contacting the reactants (that is, oxygen, wastewater and microbes). Typically, floating high speed surface aerators are rated to deliver the equivalent of 1 to 1.2 kg O2/kWh. However, they do not provide as good mixing as is normally achieved in activated sludge systems, and therefore aerated basins do not achieve the same performance level as activated sludge units.
With low speed surface aerators, the standard oxygen transfer efficiency (SOTE) is higher thanks to better mixing capacity. The mixing capacity of an impeller depends strongly on its diameter, and low speed surface aerators have large-diameter impellers. The SOTE of low speed surface aerators is therefore about 2 to 2.5 kg O2/kWh. For this reason, low speed surface aerators are mostly used in large sewage or industrial treatment plants, where energy savings become significant.
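The practical effect of these transfer ratings can be seen by estimating the continuous aerator power needed to meet a given oxygen demand; the sketch below is a simplified illustration, and the oxygen demand figure is a hypothetical example.

```python
# Estimate continuous aerator power from a daily oxygen demand and the
# transfer ratings quoted above (kg O2 per kWh). The demand is hypothetical.
oxygen_demand_kg_per_day = 500.0

def aerator_power_kw(demand_kg_per_day: float, transfer_kg_per_kwh: float) -> float:
    """Power needed if the aerator runs continuously for 24 h."""
    kwh_per_day = demand_kg_per_day / transfer_kg_per_kwh
    return kwh_per_day / 24.0

for label, rating in [("high-speed aerator (1.1 kg O2/kWh)", 1.1),
                      ("low-speed aerator (2.25 kg O2/kWh)", 2.25)]:
    print(f"{label}: {aerator_power_kw(oxygen_demand_kg_per_day, rating):.1f} kW")
```

At the ratings used here, the low-speed unit needs roughly half the power of the high-speed unit, which is the energy saving described above.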
Biological oxidation processes are sensitive to temperature and, between 0 °C and 40 °C, the rate of biological reactions increases with temperature. Most surface aerated vessels operate at between 4 °C and 32 °C.
=== Submerged diffused aeration ===
Submerged diffused air is essentially a form of a diffuser grid inside a lagoon. There are two main types of submerged diffused aeration systems for lagoon applications: floating lateral and submerged lateral. Both these systems utilize fine or medium bubble diffusers to provide aeration and mixing to the process water. The diffusers can be suspended slightly above the lagoon floor or may rest on the bottom. Flexible airline or weighted air hose supplies air to the diffuser unit from the air lateral (either floating or submerged).
== See also ==
Industrial wastewater treatment
List of waste water treatment technologies
Retention basin
Rotating biological contactor
Sewage treatment
Waste stabilization pond
Water aeration
Water pollution
== References ==
== External links ==
Wastewater Lagoon Systems in Maine
Aerated, Partial Mix Lagoons (Wastewater Technology Fact Sheet by the U.S. Environmental Protection Agency)
Aerated Lagoon Technology (Linvil G. Rich, Professor Emeritus, Department of Environmental Engineering and Science, Clemson University) | Wikipedia/Aerated_lagoon |
Blue Plains Advanced Wastewater Treatment Plant in Washington, D.C., is the largest advanced wastewater treatment plant in the world. The facility is operated by the District of Columbia Water and Sewer Authority (DC Water). The plant opened in 1937 as a primary treatment facility, and advanced treatment capacity was added in the 1970s and 1980s. The effluent that leaves Blue Plains is discharged to the Potomac River and meets some of the most stringent permit limits in the United States.
== Current operations ==
=== Capacity and service area ===
The plant has a treatment capacity of 384 million gallons per day (mgd) or 1.45 billion liters per day, with a peak capacity (partial treatment during large storms) of over 1 billion gallons per day (3.8 billion liters/day). The plant occupies 153 acres (0.62 km2) in the southwest quadrant of Washington, D.C., and discharges to the Potomac River. It serves over 1.6 million customers in Washington, large portions of adjacent Prince George's County and Montgomery County in Maryland, and portions of Fairfax County and Loudoun County in Virginia.
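The metric equivalents quoted above follow directly from the US gallon-to-liter conversion; the short sketch below simply cross-checks them.

```python
# Cross-check of the quoted capacity figures (US gallons to liters).
L_PER_US_GALLON = 3.78541

design_mgd = 384    # million US gallons per day
peak_mgd = 1000     # "over 1 billion gallons per day"

print(f"Design: {design_mgd * 1e6 * L_PER_US_GALLON / 1e9:.2f} billion L/day")  # ~1.45
print(f"Peak: {peak_mgd * 1e6 * L_PER_US_GALLON / 1e9:.2f} billion L/day")      # ~3.79
```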
=== Nutrient pollution control ===
Wastewater treatment plants historically have contributed nutrients such as phosphorus and nitrogen to the waterways in which they discharge. These nutrients deplete oxygen and cause algal blooms in rivers and coastal waters, a process that is detrimental to fish and other aquatic life.
Since the mid-1980s, Blue Plains has reduced its phosphorus discharges to the limit of technology, primarily in support of water quality goals of the Potomac River, but also for the restoration of the Chesapeake Bay. The 1987 Chesapeake Bay Agreement was a first step in reducing nitrogen discharge to waterways that are tributaries of the Chesapeake Bay. Under the agreement, the Bay states and the District of Columbia government committed to voluntarily reduce nitrogen loads by 40 percent from their 1985 levels. Blue Plains was the first plant in the region to achieve that goal. Furthermore, in every year since full-scale implementation of the biological nitrogen removal (BNR) process was completed in 2000, Blue Plains has achieved and exceeded that goal of a 40 percent reduction. In Fiscal Year 2009, the BNR process at Blue Plains reduced the nitrogen load by more than 58 percent. Installation of enhanced nutrient control systems was completed in 2014. The enhanced plant achieves nitrogen effluent levels of 4 mg/L.
=== Operational award ===
In 2010, DC Water received the "Platinum Peak Performance Award" from the National Association of Clean Water Agencies. This award is presented to member agencies for exceptional compliance with their National Pollutant Discharge Elimination System (NPDES) permit limits.
=== Sludge treatment ===
DC Water began operating its thermal hydrolysis system, for improved treatment of sewage sludge, in 2015. As of 2016, it was the largest thermal hydrolysis facility in the world. The system generates a high-quality sludge that is used as a soil amendment (200,000 tons per year). A portion of the sludge is processed in an anaerobic digestion system which generates 10 MW of electricity that is used elsewhere at the treatment plant.
== History ==
The original Blue Plains facility opened in 1937 as a primary treatment facility. It discharged under 100 mgd, serving a population of 650,000. Population increases in the 1950s led to the construction of secondary treatment units in 1959, with an expanded discharge capacity of 240 mgd. In the 1970s a major expansion commenced that led to construction of advanced wastewater treatment components, and by 1983 the capacity was 300 mgd.
=== Service connections for Maryland suburbs ===
The Washington Suburban Sanitary Commission (WSSC) was established in Maryland in 1918 and operated sewer systems in portions of Montgomery and Prince George's Counties. The commission began to install sewer connections from its service area to the Blue Plains plant in the late 1930s and 1940s. WSSC had built its own sewage treatment plant in Bladensburg, Maryland in the 1940s. In the early 1950s WSSC reached agreement with the District of Columbia government to connect the Bladensburg area to Blue Plains, and the Bladensburg plant was closed.
=== Expanded service to Virginia communities ===
As the Virginia suburbs expanded in the 1950s-1960s, additional sewage treatment capacity was needed for that area. Planners in the Washington metropolitan area, led by the Metropolitan Washington Council of Governments, recommended that the areas around the new Dulles International Airport (which opened to the public in 1962) be served by the Blue Plains plant. This decision required the construction of a 43-mile (69 km) interceptor sewer from the Dulles area to Blue Plains. Congress authorized construction of the Potomac Interceptor in 1960. Construction of the main interceptor system took place in 1962. Subsequently there have been several pipe extension and maintenance projects. (Other areas in the northern Virginia suburbs are served by treatment plants operated by Arlington County, the City of Alexandria, Fairfax County, Prince William County and the Upper Occoquan Sewage Authority.)
== References ==
== External links ==
Official website
Blue Plains Advanced Wastewater Treatment Plant at The Living New Deal | Wikipedia/Blue_Plains_Advanced_Wastewater_Treatment_Plant |
Secondary treatment (mostly biological wastewater treatment) is the removal of biodegradable organic matter (in solution or suspension) from sewage or similar kinds of wastewater.: 11 The aim is to achieve a certain degree of effluent quality in a sewage treatment plant suitable for the intended disposal or reuse option. A "primary treatment" step often precedes secondary treatment, whereby physical phase separation is used to remove settleable solids. During secondary treatment, biological processes are used to remove dissolved and suspended organic matter measured as biochemical oxygen demand (BOD). These processes are performed by microorganisms in a managed aerobic or anaerobic process depending on the treatment technology. Bacteria and protozoa consume biodegradable soluble organic contaminants (e.g. sugars, fats, and organic short-chain carbon molecules from human waste, food waste, soaps and detergent) while reproducing to form cells of biological solids. Secondary treatment is widely used in sewage treatment and is also applicable to many agricultural and industrial wastewaters.
Secondary treatment systems are classified as fixed-film or suspended-growth systems, and as aerobic versus anaerobic. Fixed-film or attached growth systems include trickling filters, constructed wetlands, bio-towers, and rotating biological contactors, where the biomass grows on media and the sewage passes over its surface.: 11–13 The fixed-film principle has further developed into moving bed biofilm reactors (MBBR) and Integrated Fixed-Film Activated Sludge (IFAS) processes. Suspended-growth systems include activated sludge, which is an aerobic treatment system, based on the maintenance and recirculation of a complex biomass composed of micro-organisms (bacteria and protozoa) able to absorb and adsorb the organic matter carried in the wastewater. Constructed wetlands are also being used. An example for an anaerobic secondary treatment system is the upflow anaerobic sludge blanket reactor.
Fixed-film systems are more able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended growth systems.: 11–13 Most of the aerobic secondary treatment systems include a secondary clarifier to settle out and separate biological floc or filter material grown in the secondary treatment bioreactor.
== Definitions ==
=== Primary treatment ===
=== Secondary treatment ===
Primary treatment settling removes about half of the solids and a third of the BOD from raw sewage. Secondary treatment is defined as the "removal of biodegradable organic matter (in solution or suspension) and suspended solids. Disinfection is also typically included in the definition of conventional secondary treatment.": 11 Biological nutrient removal is regarded by some sanitary engineers as secondary treatment and by others as tertiary treatment.: 11
After this kind of treatment, the wastewater may be called secondary-treated wastewater.
=== Tertiary treatment ===
== Process types ==
Secondary treatment systems are classified as fixed-film or suspended-growth systems. A great number of secondary treatment processes exist; see the list of wastewater treatment technologies. The main ones are explained below.
=== Fixed film systems ===
==== Filter beds (oxidizing beds) ====
In older plants and those receiving variable loadings, trickling filter beds are used where the settled sewage liquor is spread onto the surface of a bed made up of coke (carbonized coal), limestone chips or specially fabricated plastic media. Such media must have large surface areas to support the biofilms that form. The liquor is typically distributed through perforated spray arms. The distributed liquor trickles through the bed and is collected in drains at the base. These drains also provide a source of air which percolates up through the bed, keeping it aerobic. Biofilms of bacteria, protozoa and fungi form on the media’s surfaces and eat or otherwise reduce the organic content.: 12 The filter removes a small percentage of the suspended organic matter, while the majority of the organic matter supports microorganism reproduction and cell growth from the biological oxidation and nitrification taking place in the filter. With this aerobic oxidation and nitrification, the organic solids are converted into biofilm grazed by insect larvae, snails, and worms which help maintain an optimal thickness. Overloading of beds may increase biofilm thickness leading to anaerobic conditions and possible bioclogging of the filter media and ponding on the surface.
==== Rotating biological contactors ====
==== Constructed wetlands ====
=== Suspended growth systems ===
==== Activated sludge ====
Activated sludge is a common suspended-growth method of secondary treatment. Activated sludge plants encompass a variety of mechanisms and processes using dissolved oxygen to promote growth of biological floc that substantially removes organic material.: 12–13 Biological floc is an ecosystem of living biota subsisting on nutrients from the inflowing primary clarifier effluent. These mostly carbonaceous dissolved solids undergo aeration to be broken down and either biologically oxidized to carbon dioxide or converted to additional biological floc of reproducing micro-organisms. Nitrogenous dissolved solids (amino acids, ammonia, etc.) are similarly converted to biological floc or oxidized by the floc to nitrites, nitrates, and, in some processes, to nitrogen gas through denitrification. While denitrification is encouraged in some treatment processes, it often impairs the settling of the floc, causing poor quality effluent in many suspended aeration plants. Overflow from the activated sludge mixing chamber is sent to a clarifier where the suspended biological floc settles out while the treated water moves into tertiary treatment or disinfection. Settled floc is returned to the mixing basin to continue growing in primary effluent. Like most ecosystems, population changes among activated sludge biota can reduce treatment efficiency. The bacterium Nocardia, which forms a floating brown foam sometimes misidentified as sewage fungus, is the best known of many organisms that can overpopulate the floc and cause process upsets. Elevated concentrations of toxic wastes, including pesticides, industrial metal plating waste, or extreme pH, can kill the biota of an activated sludge reactor ecosystem.
===== Sequencing batch reactors =====
One type of system that combines secondary treatment and settlement is the cyclic activated sludge (CASSBR), or sequencing batch reactor (SBR). Typically, activated sludge is mixed with raw incoming sewage and then aerated. The settled sludge is run off and re-aerated before a proportion is returned to the headworks.
The disadvantage of the CASSBR process is that it requires precise control of timing, mixing and aeration. This precision is typically achieved with computer controls linked to sensors. Such a complex, fragile system is unsuited to places where controls may be unreliable or poorly maintained, or where the power supply may be intermittent.
===== Package plants =====
Extended aeration package plants use separate basins for aeration and settling, and are somewhat larger than SBR plants but with reduced timing sensitivity.
===== Membrane bioreactors =====
Membrane bioreactors (MBR) are activated sludge systems using a membrane liquid-solid phase separation process. The membrane component uses low pressure microfiltration or ultrafiltration membranes and eliminates the need for a secondary clarifier or filtration. The membranes are typically immersed in the aeration tank; however, some applications utilize a separate membrane tank. One of the key benefits of an MBR system is that it effectively overcomes the limitations associated with poor settling of sludge in conventional activated sludge (CAS) processes. The technology permits bioreactor operation with considerably higher mixed liquor suspended solids (MLSS) concentration than CAS systems, which are limited by sludge settling. The process is typically operated at MLSS in the range of 8,000–12,000 mg/L, while CAS are operated in the range of 2,000–3,000 mg/L. The elevated biomass concentration in the MBR process allows for very effective removal of both soluble and particulate biodegradable materials at higher loading rates. Thus increased sludge retention times, usually exceeding 15 days, ensure complete nitrification even in extremely cold weather.
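The footprint advantage of the higher MLSS concentration can be illustrated with a simple inventory calculation: for a fixed mass of mixed-liquor solids, the required reactor volume scales inversely with the solids concentration. The sketch below is a simplified illustration, and the biomass inventory is a hypothetical example value.

```python
# Required reactor volume for a fixed biomass inventory at different MLSS
# concentrations. The inventory value is a hypothetical example.
biomass_kg = 5000.0  # total mixed-liquor suspended solids to be held

def reactor_volume_m3(biomass_kg: float, mlss_mg_per_l: float) -> float:
    mlss_kg_per_m3 = mlss_mg_per_l / 1000.0  # 1 mg/L equals 1 g/m^3
    return biomass_kg / mlss_kg_per_m3

print(f"CAS at 2,500 mg/L: {reactor_volume_m3(biomass_kg, 2500):.0f} m^3")    # 2000 m^3
print(f"MBR at 10,000 mg/L: {reactor_volume_m3(biomass_kg, 10000):.0f} m^3")  # 500 m^3
```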
The cost of building and operating an MBR is often higher than conventional methods of sewage treatment. Membrane filters can be blinded with grease or abraded by suspended grit and lack a clarifier's flexibility to pass peak flows. However, the technology has become increasingly popular for reliably pretreated waste streams and has gained wider acceptance where infiltration and inflow have been controlled, and life-cycle costs have been steadily decreasing. The small footprint of MBR systems, and the high quality effluent produced, make them particularly useful for water reuse applications.
===== Aerobic granulation =====
Aerobic granular sludge can be formed by applying specific process conditions that favour slow-growing organisms such as PAOs (polyphosphate-accumulating organisms) and GAOs (glycogen-accumulating organisms). Another key part of granulation is selective wasting, whereby slow-settling floc-like sludge is discharged as waste sludge and faster-settling biomass is retained. This process has been commercialized as the Nereda process.
==== Surface-aerated lagoons or ponds ====
Aerated lagoons are a low technology suspended-growth method of secondary treatment using motor-driven aerators floating on the water surface to increase atmospheric oxygen transfer to the lagoon and to mix the lagoon contents. The floating surface aerators are typically rated to deliver the amount of air equivalent to 1.8 to 2.7 kg O2/kW·h. Aerated lagoons provide less effective mixing than conventional activated sludge systems and do not achieve the same performance level. The basins may range in depth from 1.5 to 5.0 metres. Surface-aerated basins achieve 80 to 90 percent removal of BOD with retention times of 1 to 10 days. Many small municipal sewage systems in the United States (1 million gal./day or less) use aerated lagoons.
=== Emerging technologies ===
Biological Aerated (or Anoxic) Filter (BAF) or Biofilters combine filtration with biological carbon reduction, nitrification or denitrification. BAF usually includes a reactor filled with filter media. The media is either in suspension or supported by a gravel layer at the foot of the filter. The dual purpose of this media is to support highly active biomass that is attached to it and to filter suspended solids. Carbon reduction and ammonia conversion occur in aerobic mode and are sometimes achieved in a single reactor, while nitrate conversion occurs in anoxic mode. BAF is operated in either upflow or downflow configuration, depending on the design specified by the manufacturer.
Integrated Fixed-Film Activated Sludge
Moving Bed Biofilm Reactors (MBBRs) typically require a smaller footprint than suspended-growth systems.
== Design considerations ==
The United States Environmental Protection Agency (EPA) defined secondary treatment based on the performance observed at late 20th-century bioreactors treating typical United States municipal sewage. Secondary treated sewage is expected to produce effluent with a monthly average of less than 30 mg/L BOD and less than 30 mg/L suspended solids. Weekly averages may be up to 50 percent higher. A sewage treatment plant providing both primary and secondary treatment is expected to remove at least 85 percent of the BOD and suspended solids from domestic sewage. The EPA regulations describe stabilization ponds as providing treatment equivalent to secondary treatment removing 65 percent of the BOD and suspended solids from incoming sewage and discharging approximately 50 percent higher effluent concentrations than modern bioreactors. The regulations also recognize the difficulty of meeting the specified removal percentages from combined sewers, dilute industrial wastewater, or Infiltration/Inflow.
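A minimal sketch of how these benchmarks might be applied to monthly monitoring data is shown below; the function name and the sample measurements are hypothetical, and for brevity the 85 percent removal test is applied to BOD only.

```python
# Simplified check of monthly averages against the secondary treatment
# benchmarks described above. Input values are hypothetical examples.
def meets_secondary_benchmarks(influent_bod: float, effluent_bod: float,
                               effluent_tss: float) -> bool:
    monthly_limit = 30.0  # mg/L for both BOD and suspended solids
    bod_removal = 1.0 - effluent_bod / influent_bod
    return (effluent_bod < monthly_limit
            and effluent_tss < monthly_limit
            and bod_removal >= 0.85)

# 220 mg/L raw BOD treated down to 25 mg/L (about 89% removal):
print(meets_secondary_benchmarks(220.0, 25.0, 28.0))  # True
```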
=== Process upsets ===
Process upsets are temporary decreases in treatment plant performance caused by significant population change within the secondary treatment ecosystem. Conditions likely to create upsets include toxic chemicals and unusually high or low concentrations of organic waste BOD providing food for the bioreactor ecosystem.
Measures creating uniform wastewater loadings tend to reduce the probability of upsets. Fixed-film or attached growth secondary treatment bioreactors are similar to a plug flow reactor model circulating water over surfaces colonized by biofilm, while suspended-growth bioreactors resemble a continuous stirred-tank reactor keeping microorganisms suspended while water is being treated. Secondary treatment bioreactors may be followed by a physical phase separation to remove biological solids from the treated water. Upset duration of fixed film secondary treatment systems may be longer because of the time required to recolonize the treatment surfaces. Suspended growth ecosystems may be restored from a population reservoir. Activated sludge recycle systems provide an integrated reservoir if upset conditions are detected in time for corrective action. Sludge recycle may be temporarily turned off to prevent sludge washout during peak storm flows when dilution keeps BOD concentrations low. Suspended growth activated sludge systems can be operated in a smaller space than fixed-film trickling filter systems that treat the same amount of water; but fixed-film systems are better able to cope with drastic changes in the amount of biological material and can provide higher removal rates for organic material and suspended solids than suspended growth systems.: 11–13
Wastewater flow variations may be reduced by limiting stormwater collection by the sewer system, and by requiring industrial facilities to discharge batch process wastes to the sewer over a time interval rather than immediately after creation. Discharge of appropriate organic industrial wastes may be timed to sustain the secondary treatment ecosystem through periods of low residential waste flow. Sewage treatment systems experiencing holiday waste load fluctuations may provide alternative food to sustain secondary treatment ecosystems through periods of reduced use. Small facilities may prepare a solution of soluble sugars. Others may find compatible agricultural wastes, or offer disposal incentives to septic tank pumpers during low use periods.
=== Toxicity ===
Waste containing biocide concentrations exceeding the secondary treatment ecosystem tolerance level may kill a major fraction of one or more important ecosystem species. BOD reduction normally accomplished by that species temporarily ceases until other species reach a suitable population to utilize that food source, or the original population recovers as biocide concentrations decline.
=== Dilution ===
Waste containing unusually low BOD concentrations may fail to sustain the secondary treatment population required for normal waste concentrations. The reduced population surviving the starvation event may be unable to completely utilize available BOD when waste loads return to normal. Dilution may be caused by addition of large volumes of relatively uncontaminated water such as stormwater runoff into a combined sewer. Smaller sewage treatment plants may experience dilution from cooling water discharges, major plumbing leaks, firefighting, or draining large swimming pools.
A similar problem occurs as BOD concentrations drop when low flow increases waste residence time within the secondary treatment bioreactor. Secondary treatment ecosystems of college communities acclimated to waste loading fluctuations from student work/sleep cycles may have difficulty surviving school vacations. Secondary treatment systems accustomed to routine production cycles of industrial facilities may have difficulty surviving industrial plant shutdown. Populations of species feeding on incoming waste initially decline as concentration of those food sources decrease. Population decline continues as ecosystem predator populations compete for a declining population of lower trophic level organisms.
=== Peak waste load ===
High BOD concentrations initially exceed the ability of the secondary treatment ecosystem to utilize available food. Ecosystem populations of aerobic organisms increase until oxygen transfer limitations of the secondary treatment bioreactor are reached. Secondary treatment ecosystem populations may shift toward species with lower oxygen requirements, but failure of those species to use some food sources may produce higher effluent BOD concentrations. More extreme increases in BOD concentrations may drop oxygen concentrations before the secondary treatment ecosystem population can adjust, and cause an abrupt population decrease among important species. Normal BOD removal efficiency will not be restored until populations of aerobic species recover after oxygen concentrations rise to normal.
=== Temperature ===
Biological oxidation processes are sensitive to temperature and, between 0 °C and 40 °C, the rate of biological reactions increases with temperature. Most surface aerated vessels operate at between 4 °C and 32 °C.
== See also ==
List of wastewater treatment technologies
Sanitation
== References ==
=== Sources ===
Abbett, Robert W. (1956). American Civil Engineering Practice. Vol. II. New York: John Wiley & Sons.
Fair, Gordon Maskew; Geyer, John Charles; Okun, Daniel Alexander (1968). Water and Wastewater Engineering. Vol. 2. New York: John Wiley & Sons.
Hammer, Mark J. (1975). Water and Waste-Water Technology. New York: John Wiley & Sons. ISBN 0-471-34726-4.
King, James J. (1995). The Environmental Dictionary (Third ed.). New York: John Wiley & Sons. ISBN 0-471-11995-4.
Metcalf; Eddy (1972). Wastewater Engineering. New York: McGraw-Hill Book Company.
Reed, Sherwood C.; Middlebrooks, E. Joe; Crites, Ronald W. (1988). Natural Systems for Waste Management and Treatment. New York: McGraw-Hill Book Company. ISBN 0-07-051521-2.
Steel, E.W.; McGhee, Terence J. (1979). Water Supply and Sewerage (Fifth ed.). New York: McGraw-Hill Book Company. ISBN 0-07-060929-2. | Wikipedia/Secondary_treatment |
The Urban Waste Water Treatment Directive 1991 (91/271/EEC) is a European Union directive concerning the "collection, treatment and discharge of urban waste water and the treatment and discharge of waste water from certain industrial sectors". It aims to protect the environment from the adverse effects of waste water discharges from cities and certain industrial sectors. Council Directive 91/271/EEC on Urban Wastewater Treatment was adopted on 21 May 1991 and amended by Commission Directive 98/15/EC.
It prescribes waste water collection and treatment in urban agglomerations with a population equivalent of over 2000, and more advanced treatment in places with a population equivalent greater than 10,000 in "sensitive areas".
== Description ==
The Urban Waste Water Treatment Directive (full title "Council Directive 91/271/EEC of 21 May 1991 concerning urban waste-water treatment") is a European Union directive regarding urban wastewater collection, wastewater treatment and its discharge, as well as the treatment and discharge of "waste water from certain industrial sectors". It was adopted on 21 May 1991. It aims "to protect the environment from the adverse effects of urban waste water discharges and discharges from certain industrial sectors" by mandating waste water collection and treatment in urban agglomerations with a population equivalent of over 2000, and more advanced treatment in places with a population equivalent above 10,000 in sensitive areas.
Member states in the European Union maintain and operate waste-water treatment plants to conform to the Urban Waste Water Treatment Directive, which sets standards for both treatment and disposal of sewage for communities of more than 2,000 person equivalents. Each member state is obliged to enact the requirements of the directive through appropriate local legislation. This directive also links to the Bathing Waters Directive and to the environmental standards set in the Water Framework Directive, which are designed to protect all legitimate end uses of the receiving environment.
Commission Decision 93/481/EEC defines the information that Member States should provide the commission on the state of implementation of the Directive.
Conventional wastewater treatment plants currently service over 90% of the EU population. Continued implementation of the Urban Waste Water Treatment Directive is planned to lower the EU's contribution to global microplastics discharge into the oceans. According to a cost-benefit analysis prepared for the proposed Directive, the investment required to implement quaternary treatment in wastewater treatment plants with a capacity of at least 10,000 person equivalents in the EU is estimated to be around €2.6 billion per year.
=== Sensitive areas ===
The directive defines sensitive areas as "freshwater bodies, estuaries and coastal waters which are eutrophic or which may become eutrophic if protective action is not taken", "surface freshwaters intended for the abstraction of drinking water which contain or are likely to contain more than 50 mg/L of nitrates", and areas where further treatment is necessary to comply with other directives, such as the directives on fish waters, on bathing waters, on shellfish waters, and on the conservation of wild birds and natural habitats.
The directive contains a derogation for areas designated as "less sensitive"; such derogations were approved for areas in Portugal.
== Implementation ==
Member states were required to make waste water treatment facilities available:
by 31 December 1998 for all places with a population equivalent of over 10,000 where the effluent is discharged into a sensitive area;
by 31 December 1998 for all places with a population equivalent of over 15,000 which discharge their effluent into so-called "normal areas", provided that biodegradable waste water produced by food-processing plants discharging directly into water bodies fulfilled certain conditions;
by 31 December 2005 for all places with a population equivalent between 2,000 and 10,000 where effluent is discharged into a sensitive area;
by 31 December 2005 for all places with a population equivalent between 10,000 and 15,000 where the effluent is not discharged into a sensitive area.
In a 2004 Commission report on implementation by the member states, the Commission noted that some member states, in particular France and Spain, had been tardy in providing the required information, and infringement procedures had been initiated. The report mentioned Spain's non-provision of any advanced treatment in the catchment areas of rivers identified as sensitive in their downstream section, such as the Ebro and the Guadalquivir; Italy's implementation in the catchment area of the Po River, the delta and adjacent coastal waters; and the United Kingdom's interpretation and implementation of the directive in regard to the catchment areas of sensitive areas. Most member states planned to achieve conformity with the Directive by 2005 or 2008 at the latest.
In 2020 the Commission published its latest implementation report, covering over 23,600 agglomerations where people (and to a limited extent industry) generate wastewater. As the UWWTD was soon to be revised in light of the goals of the European Green Deal, the report carried out an evaluation of the directive. This was followed by an impact assessment to determine policy options for an update fit for the future UWWTD. Over the last decade, compliance rates have risen, reaching 95% for collection, 88% for secondary (biological) treatment, and 86% for more stringent treatment. The general trend is positive, but full compliance with the directive, which would bring significant reductions in pollutant loads in the member states, has still not been achieved. In the long term, more investment is needed to reach and maintain compliance with the directive. Several towns and cities are still building or renewing infrastructure for the collection of wastewater. To support the member states, the Commission has set up funding and financial initiatives.
== Political significance ==
The Urban Waste Water Directive marked a shift from legislation aimed at end-use standards to stricter legislation aimed at regulating water quality at the source. The directive applied both to domestic waste water and to waste water from industrial sectors, both of which account for much of the pollution. The Directive is an example of the detailed nature of European Union legislation and resulted in "significant costs in many member states".
Nine years after the directive was adopted, considerable variations remained in the provision of sewage treatment in the different member states.
== Planned revision ==
On 13 July 2018, the European Commission published a Consultation on the Evaluation of the Urban Waste Water Treatment Directive ahead of a potential revision. Since its adoption in 1991, there have been new technical advances in treatment techniques, and emerging pollutants that might require removal have been identified. In addition, the EU has since enlarged from 12 to 28 countries, and new experiences and challenges need to be taken into account.
However, the biggest challenge of the revision will be to exploit the potential contribution of the wastewater treatment sector to the circular economy agenda and the fight against climate change. Globally, the wastewater treatment sector accounts for 1% of total energy consumption. Under a business-as-usual scenario, this figure is expected to increase by 60% by 2040 compared to 2014. With the introduction of energy efficiency requirements, the energy consumption of the wastewater treatment sector could be halved using only current technologies. On top of that, there are opportunities to produce enough energy from wastewater to make the whole water sector energy neutral, by using the energy embedded in the sludge to produce biogas through anaerobic digestion. These possibilities have largely been overlooked because of the overriding objective for utilities to meet existing and future needs for wastewater treatment.
In October 2022, the planned revision included stricter goals and policies, with time frames of 2030, 2035, or 2040. The revision would include areas with smaller populations, stricter limits on nitrogen and phosphorus, reduction of micropollutants, a goal of energy neutrality for all purification plants with over 10,000 person equivalents by 2040, tracking of diseases, additional sanctions, and other measures. The proposal has been read by the European Council.
== See also ==
Water supply and sanitation in the European Union
Population equivalent
Sustainable Development Goal 6
Water, energy and food security nexus
Sewage sludge treatment
== Notes and references ==
== External links ==
Text of the directive (as amended)
Original text of the directive and other legislative information | Wikipedia/Urban_Waste_Water_Treatment_Directive |
Sewage treatment is a type of wastewater treatment which aims to remove contaminants from sewage to produce an effluent that is suitable to discharge to the surrounding environment or an intended reuse application, thereby preventing water pollution from raw sewage discharges. Sewage contains wastewater from households and businesses and possibly pre-treated industrial wastewater. A large number of sewage treatment processes are available to choose from. These can range from decentralized systems (including on-site treatment systems) to large centralized systems involving a network of pipes and pump stations (called sewerage) which convey the sewage to a treatment plant. For cities that have a combined sewer, the sewers will also carry urban runoff (stormwater) to the sewage treatment plant. Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes and nutrient removal. Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic biological processes. A so-called quaternary treatment step (sometimes referred to as advanced treatment) can also be added for the removal of organic micropollutants, such as pharmaceuticals. This has been implemented at full scale, for example in Sweden.
A large number of sewage treatment technologies have been developed, mostly using biological treatment processes. Design engineers and decision makers need to take into account technical and economic criteria of each alternative when choosing a suitable technology.: 215 Often, the main criteria for selection are: desired effluent quality, expected construction and operating costs, availability of land, energy requirements and sustainability aspects. In developing countries and in rural areas with low population densities, sewage is often treated by various on-site sanitation systems and not conveyed in sewers. These systems include septic tanks connected to drain fields, on-site sewage systems (OSS), vermifilter systems and many more. On the other hand, advanced and relatively expensive sewage treatment plants may include tertiary treatment with disinfection and possibly even a fourth treatment stage to remove micropollutants.
At the global level, an estimated 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%.
The treatment of sewage is part of the field of sanitation. Sanitation also includes the management of human waste and solid waste as well as stormwater (drainage) management. The term sewage treatment plant is often used interchangeably with the term wastewater treatment plant.
== Terminology ==
The term sewage treatment plant (STP) (or sewage treatment works) is nowadays often replaced with the term wastewater treatment plant (WWTP). Strictly speaking, the latter is a broader term that can also refer to industrial wastewater treatment.
The terms water recycling center and water reclamation plant are also in use as synonyms.
== Purposes and overview ==
The overall aim of treating sewage is to produce an effluent that can be discharged to the environment while causing as little water pollution as possible, or to produce an effluent that can be reused in a useful manner. This is achieved by removing contaminants from the sewage. It is a form of waste management.
With regards to biological treatment of sewage, the treatment objectives can include various degrees of the following: to transform or remove organic matter, nutrients (nitrogen and phosphorus), pathogenic organisms, and specific trace organic constituents (micropollutants).: 548
Some types of sewage treatment produce sewage sludge which can be treated before safe disposal or reuse. Under certain circumstances, the treated sewage sludge might be termed biosolids and can be used as a fertilizer.
== Sewage characteristics ==
== Collection ==
== Types of treatment processes ==
Sewage can be treated close to where the sewage is created, which may be called a decentralized system or even an on-site system (on-site sewage facility, septic tanks, etc.). Alternatively, sewage can be collected and transported by a network of pipes and pump stations to a municipal treatment plant. This is called a centralized system (see also sewerage and pipes and infrastructure).
A large number of sewage treatment technologies have been developed, mostly using biological treatment processes (see list of wastewater treatment technologies). Very broadly, they can be grouped into high tech (high cost) versus low tech (low cost) options, although some technologies might fall into either category. Other grouping classifications are intensive or mechanized systems (more compact, and frequently employing high tech options) versus extensive or natural or nature-based systems (usually using natural treatment processes and occupying larger areas). This classification may sometimes be oversimplified, because a treatment plant may involve a combination of processes, and the interpretation of the concepts of high tech and low tech, intensive and extensive, mechanized and natural processes may vary from place to place.
=== Low tech, extensive or nature-based processes ===
Examples for more low-tech, often less expensive sewage treatment systems are shown below. They often use little or no energy. Some of these systems do not provide a high level of treatment, or only treat part of the sewage (for example only the toilet wastewater), or they only provide pre-treatment, like septic tanks. On the other hand, some systems are capable of providing a good performance, satisfactory for several applications. Many of these systems are based on natural treatment processes, requiring large areas, while others are more compact. In most cases, they are used in rural areas or in small to medium-sized communities.
For example, waste stabilization ponds are a low cost treatment option with practically no energy requirements but they require a lot of land.: 236 Due to their technical simplicity, most of the savings (compared with high tech systems) are in terms of operation and maintenance costs.: 220–243
Examples for systems that can provide full or partial treatment for toilet wastewater only:
Composting toilet (see also dry toilets in general)
Urine-diverting dry toilet
Vermifilter toilet
=== High tech, intensive or mechanized processes ===
Examples for more high-tech, intensive or mechanized, often relatively expensive sewage treatment systems are listed below. Some of them are energy intensive as well. Many of them provide a very high level of treatment. For example, broadly speaking, the activated sludge process achieves a high effluent quality but is relatively expensive and energy intensive.: 239
=== Disposal or treatment options ===
There are other process options which may be classified as disposal options, although they can also be understood as basic treatment options. These include: application of sludge, irrigation, soak pit, leach field, fish pond, floating plant pond, water disposal/groundwater recharge, surface disposal and storage.: 138
The application of sewage to land is both a type of treatment and a type of final disposal.: 189 It leads to groundwater recharge and/or to evapotranspiration. Land application includes slow-rate systems, rapid infiltration, subsurface infiltration, and overland flow. It is done by flooding, furrows, sprinklers and dripping. It is a treatment/disposal system that requires a large amount of land per person.
== Design aspects ==
=== Population equivalent ===
The per person organic matter load is a parameter used in the design of sewage treatment plants. This concept is known as population equivalent (PE). The base value used for PE can vary from one country to another. A commonly used definition worldwide is: 1 PE equates to 60 grams of BOD per person per day, and also to 200 liters of sewage per day. This concept is also used as a comparison parameter to express the strength of industrial wastewater compared to sewage.
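A minimal sketch of a PE calculation under the base values quoted above is given below; the industrial flow and BOD concentration are hypothetical example figures.

```python
# Population equivalent of an industrial discharge, using the base value
# of 60 g BOD per person per day quoted above. Inputs are hypothetical.
BOD_G_PER_PE_PER_DAY = 60.0

def population_equivalent(flow_m3_per_day: float, bod_mg_per_l: float) -> float:
    bod_g_per_day = flow_m3_per_day * bod_mg_per_l  # 1 mg/L equals 1 g/m^3
    return bod_g_per_day / BOD_G_PER_PE_PER_DAY

# Example: a food-processing plant discharging 300 m^3/day at 1,200 mg/L BOD
print(f"{population_equivalent(300.0, 1200.0):,.0f} PE")  # 6,000 PE
```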
=== Process selection ===
When choosing a suitable sewage treatment process, decision makers need to take into account technical and economic criteria.: 215 Therefore, each analysis is site-specific. A life cycle assessment (LCA) can be used, and criteria or weightings are attributed to the various aspects. This makes the final decision subjective to some extent.: 216 A range of publications exists to help with technology selection.: 221
In industrialized countries, the most important parameters in process selection are typically efficiency, reliability, and space requirements. In developing countries, they might be different and the focus might be more on construction and operating costs as well as process simplicity.: 218
Choosing the most suitable treatment process is complicated and requires expert inputs, often in the form of feasibility studies. This is because the main important factors to be considered when evaluating and selecting sewage treatment processes are numerous. They include: process applicability, applicable flow, acceptable flow variation, influent characteristics, inhibiting or refractory compounds, climatic aspects, process kinetics and reactor hydraulics, performance, treatment residuals, sludge processing, environmental constraints, requirements for chemical products, energy and other resources; requirements for personnel, operating and maintenance; ancillary processes, reliability, complexity, compatibility, area availability.: 219
With regards to environmental impacts of sewage treatment plants the following aspects are included in the selection process: Odors, vector attraction, sludge transportation, sanitary risks, air contamination, soil and subsoil contamination, surface water pollution or groundwater contamination, devaluation of nearby areas, inconvenience to the nearby population.: 220
=== Odor control ===
Odors emitted by sewage treatment are typically an indication of an anaerobic or septic condition. Early stages of processing will tend to produce foul-smelling gases, with hydrogen sulfide being most common in generating complaints. Large process plants in urban areas will often treat the odors with carbon reactors, a contact media with bio-slimes, small doses of chlorine, or circulating fluids to biologically capture and metabolize the noxious gases. Other methods of odor control exist, including addition of iron salts, hydrogen peroxide, calcium nitrate, etc. to manage hydrogen sulfide levels.
=== Energy requirements ===
The energy requirements vary with type of treatment process as well as sewage strength. For example, constructed wetlands and stabilization ponds have low energy requirements. In comparison, the activated sludge process has a high energy consumption because it includes an aeration step. Some sewage treatment plants produce biogas from their sewage sludge treatment process by using a process called anaerobic digestion. This process can produce enough energy to meet most of the energy needs of the sewage treatment plant itself.: 1505
For activated sludge treatment plants in the United States, around 30 percent of the annual operating costs is usually required for energy.: 1703 Most of this electricity is used for aeration, pumping systems and equipment for the dewatering and drying of sewage sludge. Advanced sewage treatment plants, e.g. for nutrient removal, require more energy than plants that only achieve primary or secondary treatment.: 1704
Small rural plants using trickling filters may operate with no net energy requirements, the whole process being driven by gravitational flow, including tipping bucket flow distribution and the desludging of settlement tanks to drying beds. This is usually only practical in hilly terrain and in areas where the treatment plant is relatively remote from housing because of the difficulty in managing odors.
=== Co-treatment of industrial effluent ===
In highly regulated developed countries, industrial wastewater usually receives at least pretreatment if not full treatment at the factories themselves to reduce the pollutant load, before discharge to the sewer. The pretreatment has two main aims: firstly, to prevent toxic or inhibitory compounds from entering the biological stage of the sewage treatment plant and reducing its efficiency; and secondly, to prevent toxic compounds from accumulating in the produced sewage sludge, which would reduce its beneficial reuse options. Some industrial wastewater may contain pollutants which cannot be removed by sewage treatment plants. Also, variable flow of industrial waste associated with production cycles may upset the population dynamics of biological treatment units.
=== Design aspects of secondary treatment processes ===
=== Non-sewered areas ===
Urban residents in many parts of the world rely on on-site sanitation systems without sewers, such as septic tanks and pit latrines, and fecal sludge management in these cities is an enormous challenge.
For sewage treatment the use of septic tanks and other on-site sewage facilities (OSSF) is widespread in some rural areas, for example serving up to 20 percent of the homes in the U.S.
== Available process steps ==
Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes. Different types of sewage treatment may utilize some or all of the process steps listed below.
=== Preliminary treatment ===
Preliminary treatment (sometimes called pretreatment) removes coarse materials that can be easily collected from the raw sewage before they damage or clog the pumps and sewage lines of primary treatment clarifiers.
==== Screening ====
The influent sewage passes through a bar screen to remove all large objects like cans, rags, sticks, plastic packets, etc. carried in the sewage stream. This is most commonly done with an automated mechanically raked bar screen in modern plants serving large populations, while in smaller or less modern plants, a manually cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or flow rate. The solids are collected and later disposed of in a landfill, or incinerated. Bar screens or mesh screens of varying sizes may be used to optimize solids removal. If gross solids are not removed, they become entrained in pipes and moving parts of the treatment plant, and can cause substantial damage and inefficiency in the process.: 9
==== Grit removal ====
Grit consists of sand, gravel, rocks, and other heavy materials. Preliminary treatment may include a sand or grit removal channel or chamber, where the velocity of the incoming sewage is reduced to allow the settlement of grit. Grit removal is necessary to (1) reduce formation of deposits in primary sedimentation tanks, aeration tanks, anaerobic digesters, pipes, channels, etc. (2) reduce the frequency of tank cleaning caused by excessive accumulation of grit; and (3) protect moving mechanical equipment from abrasion and accompanying abnormal wear. The removal of grit is essential for equipment with closely machined metal surfaces such as comminutors, fine screens, centrifuges, heat exchangers, and high pressure diaphragm pumps.
Grit chambers come in three types: horizontal grit chambers, aerated grit chambers, and vortex grit chambers. Vortex grit chambers include mechanically induced vortex, hydraulically induced vortex, and multi-tray vortex separators. Given that grit removal systems have traditionally been designed to remove clean inorganic particles greater than 0.210 millimetres (0.0083 in), most of the finer grit passes through the grit removal stage under normal flow conditions. During periods of high flow, deposited grit is resuspended and the quantity of grit reaching the treatment plant increases substantially.
==== Flow equalization ====
Equalization basins can be used to achieve flow equalization. This is especially useful for combined sewer systems which produce peak dry-weather flows or peak wet-weather flows that are much higher than the average flows.: 334 Such basins can improve the performance of the biological treatment processes and the secondary clarifiers.: 334
Disadvantages include the basins' capital cost and space requirements. Basins can also provide a place to temporarily hold, dilute and distribute batch discharges of toxic or high-strength wastewater which might otherwise inhibit biological secondary treatment (such as wastewater from portable toilets or fecal sludge that is brought to the sewage treatment plant in vacuum trucks). Flow equalization basins require variable discharge control, typically include provisions for bypass and cleaning, and may also include aerators and odor control.
==== Fat and grease removal ====
In some larger plants, fat and grease are removed by passing the sewage through a small tank where skimmers collect the fat floating on the surface. Air blowers in the base of the tank may also be used to help recover the fat as a froth. Many plants, however, use primary clarifiers with mechanical surface skimmers for fat and grease removal.
=== Primary treatment ===
Primary treatment is the "removal of a portion of the suspended solids and organic matter from the sewage".: 11 It consists of allowing sewage to pass slowly through a basin where heavy solids can settle to the bottom while oil, grease and lighter solids float to the surface and are skimmed off. These basins are called primary sedimentation tanks or primary clarifiers and typically have a hydraulic retention time (HRT) of 1.5 to 2.5 hours.: 398 The settled and floating materials are removed and the remaining liquid may be discharged or subjected to secondary treatment. Primary settling tanks are usually equipped with mechanically driven scrapers that continually drive the collected sludge towards a hopper in the base of the tank where it is pumped to sludge treatment facilities.: 9–11
Sewage treatment plants that are connected to a combined sewer system sometimes have a bypass arrangement after the primary treatment unit. This means that during very heavy rainfall events, the secondary and tertiary treatment systems can be bypassed to protect them from hydraulic overloading, and the mixture of sewage and storm-water receives primary treatment only.
Primary sedimentation tanks remove about 50–70% of the suspended solids, and 25–40% of the biological oxygen demand (BOD).: 396
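These removal percentages can be turned into a rough estimate of primary effluent quality; the sketch below applies the low and high ends of the quoted ranges to hypothetical raw-sewage concentrations.

```python
# Estimate primary effluent quality from the removal ranges quoted above.
# Raw-sewage concentrations are hypothetical example values.
raw_tss_mg_l = 250.0
raw_bod_mg_l = 220.0

for tss_removal, bod_removal in [(0.50, 0.25), (0.70, 0.40)]:
    tss_out = raw_tss_mg_l * (1.0 - tss_removal)
    bod_out = raw_bod_mg_l * (1.0 - bod_removal)
    print(f"TSS {tss_out:.0f} mg/L, BOD {bod_out:.0f} mg/L "
          f"(at {tss_removal:.0%} TSS / {bod_removal:.0%} BOD removal)")
```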
=== Secondary treatment ===
The main processes involved in secondary sewage treatment are designed to remove as much of the solid material as possible. They use biological processes to digest and remove the remaining soluble material, especially the organic fraction. This can be done with either suspended-growth or biofilm processes. The microorganisms that feed on the organic matter present in the sewage grow and multiply, constituting the biological solids, or biomass. These grow and group together in the form of flocs or biofilms and, in some specific processes, as granules. The biological floc or biofilm and remaining fine solids form a sludge which can be settled and separated. After separation, a liquid remains that is almost free of solids, and with a greatly reduced concentration of pollutants.
Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic processes. The organisms involved in these processes are sensitive to the presence of toxic materials, although these are not expected to be present at high concentrations in typical municipal sewage.
=== Tertiary treatment ===
Advanced sewage treatment generally involves three main stages, called primary, secondary and tertiary treatment, but may also include intermediate stages and final polishing processes. The purpose of tertiary treatment (also called advanced treatment) is to provide a final treatment stage to further improve the effluent quality before it is discharged to the receiving water body or reused. More than one tertiary treatment process may be used at any treatment plant. If disinfection is practiced, it is always the final process. It is also called effluent polishing. Tertiary treatment may include biological nutrient removal (alternatively, this can be classified as secondary treatment), disinfection and partial removal of micropollutants, such as environmental persistent pharmaceutical pollutants.
Tertiary treatment is sometimes defined as anything more than primary and secondary treatment in order to allow discharge into a highly sensitive or fragile ecosystem such as estuaries, low-flow rivers or coral reefs. Treated water is sometimes disinfected chemically or physically (for example, by lagoons and microfiltration) prior to discharge into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, greenway or park. If it is sufficiently clean, it can also be used for groundwater recharge or agricultural purposes.
Sand filtration removes much of the residual suspended matter.: 22–23 Filtration over activated carbon, also called carbon adsorption, removes residual toxins.: 19 Microfiltration or synthetic membranes are used in membrane bioreactors and can also remove pathogens.: 854
Settlement and further biological improvement of treated sewage may be achieved through storage in large human-made ponds or lagoons. These lagoons are highly aerobic, and colonization by native macrophytes, especially reeds, is often encouraged.
=== Disinfection ===
Disinfection of treated sewage aims to kill pathogens (disease-causing microorganisms) prior to disposal. It is increasingly effective after more elements of the foregoing treatment sequence have been completed.: 359 The purpose of disinfection in the treatment of sewage is to substantially reduce the number of pathogens in the water to be discharged back into the environment or to be reused. The target level of reduction of biological contaminants like pathogens is often regulated by the presiding governmental authority. The effectiveness of disinfection depends on the quality of the water being treated (e.g. turbidity, pH, etc.), the type of disinfection being used, the disinfectant dosage (concentration and time), and other environmental variables. Water with high turbidity will be treated less successfully, since solid matter can shield organisms, especially from ultraviolet light or if contact times are low. Generally, short contact times, low doses and high flows all militate against effective disinfection. Common methods of disinfection include ozone, chlorine, ultraviolet light, or sodium hypochlorite.: 16 Monochloramine, which is used for drinking water, is not used in the treatment of sewage because of its persistence.
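Dosage as "concentration and time" is often summarized as the product of disinfectant residual and contact time (a C·t value); the sketch below illustrates the idea with hypothetical numbers, since the actual target depends on the disinfectant, the pathogen and the regulator.

```python
# Illustrative concentration-time (C*t) dose check. All values here are
# hypothetical examples, not regulatory limits.
residual_mg_per_l = 5.0   # assumed chlorine residual in the contact tank
contact_minutes = 30.0    # assumed hydraulic contact time at design flow
required_ct = 100.0       # assumed target C*t for the pathogen of concern

ct = residual_mg_per_l * contact_minutes  # mg*min/L
print(f"C*t = {ct:.0f} mg*min/L -> "
      f"{'adequate' if ct >= required_ct else 'insufficient'}")
```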
Chlorination remains the most common form of treated sewage disinfection in many countries due to its low cost and long-term history of effectiveness. One disadvantage is that chlorination of residual organic material can generate chlorinated-organic compounds that may be carcinogenic or harmful to the environment. Residual chlorine or chloramines may also be capable of chlorinating organic material in the natural aquatic environment. Further, because residual chlorine is toxic to aquatic species, the treated effluent must also be chemically dechlorinated, adding to the complexity and cost of treatment.
Ultraviolet (UV) light can be used instead of chlorine, iodine, or other chemicals. Because no chemicals are used, the treated water has no adverse effect on organisms that later consume it, as may be the case with other methods. UV radiation causes damage to the genetic structure of bacteria, viruses, and other pathogens, making them incapable of reproduction. The key disadvantages of UV disinfection are the need for frequent lamp maintenance and replacement and the need for a highly treated effluent to ensure that the target microorganisms are not shielded from the UV radiation (i.e., any solids present in the treated effluent may protect microorganisms from the UV light). In many countries, UV light is becoming the most common means of disinfection because of the concerns about the impacts of chlorine in chlorinating residual organics in the treated sewage and in chlorinating organics in the receiving water.
As with UV treatment, heat sterilization also does not add chemicals to the water being treated. However, unlike UV, heat can penetrate liquids that are not transparent. Heat disinfection can also penetrate solid materials within wastewater, sterilizing their contents. Thermal effluent decontamination systems provide low resource, low maintenance effluent decontamination once installed.
Ozone (O3) is generated by passing oxygen (O2) through a high voltage potential resulting in a third oxygen atom becoming attached and forming O3. Ozone is very unstable and reactive and oxidizes most organic material it comes in contact with, thereby destroying many pathogenic microorganisms. Ozone is considered to be safer than chlorine because, unlike chlorine which has to be stored on site (highly poisonous in the event of an accidental release), ozone is generated on-site as needed from the oxygen in the ambient air. Ozonation also produces fewer disinfection by-products than chlorination. A disadvantage of ozone disinfection is the high cost of the ozone generation equipment and the requirements for special operators. Ozone sewage treatment requires the use of an ozone generator, which decontaminates the water as ozone bubbles percolate through the tank.
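In simplified form, the net ozone-forming reaction in such a generator is:
3 O2 → 2 O3
The reaction is endothermic, which is one reason ozone cannot practically be stored and must instead be generated on site with a continuous input of electrical energy.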
Membranes can also be effective disinfectants, because they act as barriers, avoiding the passage of the microorganisms. As a result, the final effluent may be devoid of pathogenic organisms, depending on the type of membrane used. This principle is applied in membrane bioreactors.
=== Biological nutrient removal ===
Sewage may contain high levels of the nutrients nitrogen and phosphorus. Typical values for nutrient loads per person and nutrient concentrations in raw sewage in developing countries have been published as follows: 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L).: 57 The typical ranges for these values are: 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L).: 57
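The per-person loads and the concentrations quoted above are linked through the per-person wastewater flow. As an illustrative consistency check (assuming a contribution of roughly 180 liters of sewage per person per day, a figure chosen here only for illustration):
concentration (mg/L) ≈ load (g/person/d) × 1000 / flow (L/person/d)
so a total nitrogen load of 8 g/person/d divided by 180 L/person/d gives about 44 mg/L, consistent with the 45 mg/L quoted above. Lower per-person water use therefore produces more concentrated sewage for the same load.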
Excessive release to the environment can lead to nutrient pollution, which can manifest itself in eutrophication. This process can lead to algal blooms, that is, a rapid growth and subsequent decay of the algae population. In addition to causing deoxygenation, some algal species produce toxins that contaminate drinking water supplies.
Ammonia nitrogen, in the form of free ammonia (NH3), is toxic to fish. Ammonia nitrogen, when converted to nitrite and further to nitrate in a water body, in the process of nitrification, is associated with the consumption of dissolved oxygen. Nitrite and nitrate may also have public health significance if concentrations are high in drinking water, because of a disease called methemoglobinemia.: 42
Phosphorus removal is important as phosphorus is a limiting nutrient for algae growth in many fresh water systems. Therefore, an excess of phosphorus can lead to eutrophication. It is also particularly important for water reuse systems where high phosphorus concentrations may lead to fouling of downstream equipment such as reverse osmosis.
A range of treatment processes are available to remove nitrogen and phosphorus. Biological nutrient removal (BNR) is regarded by some as a type of secondary treatment process, and by others as a tertiary (or advanced) treatment process.
==== Nitrogen removal ====
Nitrogen is removed through the biological oxidation of nitrogen from ammonia to nitrate (nitrification), followed by denitrification, the reduction of nitrate to nitrogen gas. Nitrogen gas is released to the atmosphere and thus removed from the water.
Nitrification itself is a two-step aerobic process, each step facilitated by a different type of bacteria. The oxidation of ammonia (NH4+) to nitrite (NO2−) is most often facilitated by bacteria such as Nitrosomonas spp. (nitroso refers to the formation of a nitroso functional group). Nitrite oxidation to nitrate (NO3−), though traditionally believed to be facilitated by Nitrobacter spp. (nitro referring to the formation of a nitro functional group), is now known to be facilitated in the environment predominantly by Nitrospira spp.
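In simplified stoichiometric form, and ignoring the small fraction of nitrogen incorporated into new cell mass, the two nitrification steps can be written as:
NH4+ + 1.5 O2 → NO2− + H2O + 2 H+ (ammonia oxidation)
NO2− + 0.5 O2 → NO3− (nitrite oxidation)
Overall, roughly 4.6 g of oxygen are consumed and, because of the acid produced, roughly 7.1 g of alkalinity (expressed as CaCO3) are destroyed per gram of ammonia nitrogen oxidized, which is why nitrifying plants need ample aeration capacity and sufficient buffering.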
Denitrification requires anoxic conditions to encourage the appropriate biological communities to form. Anoxic conditions refer to a situation where oxygen is absent but nitrate is present. Denitrification is facilitated by a wide diversity of bacteria. The activated sludge process, sand filters, waste stabilization ponds, constructed wetlands and other processes can all be used to reduce nitrogen.: 17–18 Since denitrification is the reduction of nitrate to dinitrogen (molecular nitrogen) gas, an electron donor is needed. This can be, depending on the wastewater, organic matter (from the sewage itself), sulfide, or an added donor like methanol. The sludge in the anoxic tanks (denitrification tanks) must be mixed well (mixture of recirculated mixed liquor, return activated sludge, and raw influent) e.g. by using submersible mixers in order to achieve the desired denitrification.
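When methanol is the added electron donor, the overall denitrification reaction is commonly written, ignoring cell synthesis, as:
6 NO3− + 5 CH3OH → 3 N2 + 5 CO2 + 7 H2O + 6 OH−
The hydroxide released means that denitrification recovers part of the alkalinity consumed during nitrification (roughly 3.6 g as CaCO3 per gram of nitrate nitrogen reduced).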
Over time, different treatment configurations for activated sludge processes have evolved to achieve high levels of nitrogen removal. An initial scheme was called the Ludzack–Ettinger Process. It could not achieve a high level of denitrification.: 616 The Modified Ludzack–Ettinger Process (MLE) came later and was an improvement on the original concept. It recycles mixed liquor from the discharge end of the aeration tank to the head of the anoxic tank. This provides nitrate for the facultative bacteria.: 616
There are other process configurations, such as variations of the Bardenpho process.: 160 They might differ in the placement of anoxic tanks, e.g. before and after the aeration tanks.
==== Phosphorus removal ====
Studies of United States sewage in the late 1960s estimated mean per capita contributions of phosphorus of 500 grams (18 oz) in urine and feces, 1,000 grams (35 oz) in synthetic detergents, and lesser variable amounts used as corrosion and scale control chemicals in water supplies. Source control via alternative detergent formulations has subsequently reduced the largest contribution, but the phosphorus content of urine and feces has naturally remained unchanged.
Phosphorus can be removed biologically in a process called enhanced biological phosphorus removal. In this process, specific bacteria, called polyphosphate-accumulating organisms (PAOs), are selectively enriched and accumulate large quantities of phosphorus within their cells (up to 20 percent of their mass).: 148–155
Phosphorus removal can also be achieved by chemical precipitation, usually with salts of iron (e.g. ferric chloride) or aluminum (e.g. alum), or lime.: 18 This may lead to higher sludge production as hydroxides precipitate, and the added chemicals can be expensive. Chemical phosphorus removal requires a significantly smaller equipment footprint than biological removal, is easier to operate and is often more reliable than biological phosphorus removal. Another method for phosphorus removal is to use granular laterite or zeolite.
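For iron salts, the idealized precipitation reaction, ignoring the competing formation of ferric hydroxide, is:
Fe3+ + PO4(3−) → FePO4 (solid)
This corresponds to a stoichiometric dose of one mole of iron per mole of phosphorus (about 1.8 g Fe per g P). In practice a larger dose, often around 1.5 to 2.5 times the stoichiometric amount, is applied because part of the added iron precipitates as hydroxide instead, and this excess is one source of the additional sludge mentioned above.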
Some systems use both biological phosphorus removal and chemical phosphorus removal. The chemical phosphorus removal in those systems may be used as a backup system, for use when the biological phosphorus removal is not removing enough phosphorus, or may be used continuously. In either case, using both biological and chemical phosphorus removal has the advantage of not increasing sludge production as much as chemical phosphorus removal on its own, with the disadvantage of the increased initial cost associated with installing two different systems.
Once removed, phosphorus, in the form of a phosphate-rich sewage sludge, may be sent to landfill or used as fertilizer in admixture with other digested sewage sludges. In the latter case, the treated sewage sludge is also sometimes referred to as biosolids. An estimated 22% of the world's phosphorus needs could be satisfied by recycling residential wastewater.
=== Fourth treatment stage ===
Micropollutants such as pharmaceuticals, ingredients of household chemicals, chemicals used in small businesses or industries, environmental persistent pharmaceutical pollutants (EPPP) or pesticides may not be eliminated in the commonly used sewage treatment processes (primary, secondary and tertiary treatment) and therefore lead to water pollution. Although concentrations of those substances and their decomposition products are quite low, there is still a chance of harming aquatic organisms. For pharmaceuticals, the following substances have been identified as toxicologically relevant: substances with endocrine disrupting effects, genotoxic substances and substances that enhance the development of bacterial resistance. They mainly belong to the group of EPPP.
Techniques for elimination of micropollutants via a fourth treatment stage during sewage treatment are implemented in Germany, Switzerland, Sweden and the Netherlands, and tests are ongoing in several other countries. In Switzerland, this requirement has been enshrined in law since 2016. In the European Union, a recast of the Urban Waste Water Treatment Directive has applied since 1 January 2025. Because of the large number of amendments made over the years, the directive was rewritten on 27 November 2024 as Directive (EU) 2024/3019, published in the EU Official Journal on 12 December 2024, and entered into force on 1 January 2025. The member states now have 31 months, i.e. until 31 July 2027, to adapt their national legislation to the new directive ("implementation of the directive").
The recast stipulates that, in addition to stricter discharge values for nitrogen and phosphorus, persistent trace substances must at least be partially removed. The target, similar to Switzerland's, is that 80% of six key substances (selected from a list of twelve) must be removed between the inflow to the sewage treatment plant and the discharge into the water body. At least 80% of the investment and operating costs for the fourth treatment stage is to be passed on to the pharmaceutical and cosmetics industries according to the polluter-pays principle, in order to relieve the population financially and provide an incentive for the development of more environmentally friendly products. In addition, the municipal wastewater treatment sector is to be energy neutral by 2045, and emissions of microplastics and PFAS are to be monitored.
The implementation of the directive's requirements is staggered until 2045, depending on the size of the sewage treatment plant and its population equivalents (PE). Sewage treatment plants with over 150,000 PE have priority and should be adapted immediately, as a significant proportion of the pollution comes from them. The adjustments are staggered at national level as follows:
20% of the plants by 31 December 2033,
60% of the plants by 31 December 2039,
100% of the plants by 31 December 2045.
For wastewater treatment plants with 10,000 to 150,000 PE that discharge into coastal waters or other sensitive waters, the adjustments are staggered at national level as follows:
10% of the plants by 31 December 2033,
30% of the plants by 31 December 2036,
60% of the plants by 31 December 2039,
100% of the plants by 31 December 2045.
The latter concerns waters with a low dilution ratio, waters from which drinking water is obtained, coastal waters, and waters used for bathing or mussel farming. Member states will be given the option not to apply the fourth treatment stage in these areas if a risk assessment shows that there is no potential risk from micropollutants to human health and/or the environment.
Such process steps mainly consist of activated carbon filters that adsorb the micropollutants. The combination of advanced oxidation with ozone followed by granular activated carbon (GAC) has been suggested as a cost-effective treatment combination for pharmaceutical residues. For a full reduction of microplastics, the combination of ultrafiltration followed by GAC has been suggested. The use of enzymes such as laccase secreted by fungi is also under investigation. Microbial fuel cells are being investigated for their ability to treat organic matter in sewage.
To reduce pharmaceuticals in water bodies, source control measures are also under investigation, such as innovations in drug development or more responsible handling of drugs. In the US, the National Take Back Initiative is a voluntary program aimed at the general public, encouraging people to return excess or expired drugs rather than flushing them into the sewage system.
=== Sludge treatment and disposal ===
== Environmental impacts ==
Sewage treatment plants can have significant effects on the biotic status of receiving waters and can cause some water pollution, especially if the treatment process used is only basic. For example, for sewage treatment plants without nutrient removal, eutrophication of receiving water bodies can be a problem.
In 2024, the Royal Academy of Engineering released a study into the effects of wastewater on public health in the United Kingdom. The study gained media attention, with comments from the UK's leading health professionals, including Sir Chris Whitty. It outlined 15 recommendations for various UK bodies to dramatically reduce public health risks by improving the water quality of waterways such as rivers and lakes.
After the release of the report, The Guardian newspaper interviewed Whitty, who stated that improving water quality and sewage treatment should be treated as highly important and a "public health priority". He compared it to the eradication of cholera in the country in the 19th century following improvements to the sewage network. The study also identified that sewage concentrations in rivers were high during periods of low flow, as well as during flooding or heavy rainfall. While heavy rainfall had always been associated with sewage overflows into streams and rivers, the British media went as far as to warn parents of the dangers of paddling in shallow rivers during warm weather.
Whitty's comments came after the study revealed that the UK was experiencing growth in the number of people using coastal and inland waters recreationally. This could be connected to a growing interest in activities such as open-water swimming and other water sports. Despite this growth in recreation, poor water quality meant some participants were becoming unwell during events. Most notably, the 2024 Paris Olympics had to delay several swimming events, such as the triathlon, due to high levels of sewage in the River Seine.
== Reuse ==
=== Irrigation ===
Increasingly, people use treated or even untreated sewage for irrigation to produce crops. Cities provide lucrative markets for fresh produce, so they are attractive to farmers. Because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with sewage directly to water their crops. There can be significant health hazards related to using water loaded with pathogens in this way. The World Health Organization developed guidelines for safe use of wastewater in 2006. They advocate a 'multiple-barrier' approach to wastewater use, where farmers are encouraged to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight, applying water carefully so it does not contaminate leaves likely to be eaten raw, cleaning vegetables with disinfectant, and allowing fecal sludge used in farming to dry before being used as manure.
=== Reclaimed water ===
== Global situation ==
Before the 20th century in Europe, sewers usually discharged into a body of water such as a river, lake, or ocean. There was no treatment, so the breakdown of the human waste was left to the ecosystem. This could lead to satisfactory results if the assimilative capacity of the ecosystem was sufficient, which nowadays is often not the case due to increasing population density.: 78
Today, the situation in urban areas of industrialized countries is usually that sewers route their contents to a sewage treatment plant rather than directly to a body of water. In many developing countries, however, the bulk of municipal and industrial wastewater is discharged to rivers and the ocean without any treatment or after preliminary treatment or primary treatment only. Doing so can lead to water pollution. Few reliable figures exist on the share of the wastewater collected in sewers that is being treated worldwide. A global estimate by UNDP and UN-Habitat in 2010 was that 90% of all wastewater generated is released into the environment untreated. A more recent study in 2021 estimated that globally, about 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. As of 2022, without sufficient treatment, more than 80% of all wastewater generated globally is released into the environment. High-income nations treat, on average, 70% of the wastewater they produce, according to UN Water. Only 8% of wastewater produced in low-income nations receives any sort of treatment.
The Joint Monitoring Programme (JMP) for Water Supply and Sanitation by WHO and UNICEF reported in 2021 that 82% of people with sewer connections are connected to sewage treatment plants providing at least secondary treatment.: 55 However, this value varies widely between regions. For example, in Europe, North America, Northern Africa and Western Asia, a total of 31 countries had universal (>99%) wastewater treatment. However, in Albania, Bermuda, North Macedonia and Serbia "less than 50% of sewered wastewater received secondary or better treatment", and in Algeria, Lebanon and Libya less than 20% of sewered wastewater was being treated. The report also found that "globally, 594 million people have sewer connections that don't receive sufficient treatment. Many more are connected to wastewater treatment plants that do not provide effective treatment or comply with effluent requirements.": 55
=== Global targets ===
Sustainable Development Goal 6 has a Target 6.3 which is formulated as follows: "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally." The corresponding Indicator 6.3.1 is the "proportion of wastewater safely treated". It is anticipated that wastewater production will rise by 24% by 2030 and by 51% by 2050.
Data in 2020 showed that there is still too much uncollected household wastewater: Only 66% of all household wastewater flows were collected at treatment facilities in 2020 (this is determined from data from 128 countries).: 17 Based on data from 42 countries in 2015, the report stated that "32 per cent of all wastewater flows generated from point sources received at least some treatment".: 17 For sewage that has indeed been collected at centralized sewage treatment plants, about 79% went on to be safely treated in 2020.: 18
== History ==
The history of sewage treatment had the following developments: It began with land application (sewage farms) in the 1840s in England, followed by chemical treatment and sedimentation of sewage in tanks, then biological treatment in the late 19th century, which led to the development of the activated sludge process starting in 1912.
== Regulations ==
In most countries, sewage collection and treatment are subject to local and national regulations and standards.
== By country ==
=== Overview ===
=== Europe ===
In the European Union, 0.8% of total energy consumption goes to wastewater treatment facilities. The European Union needs to make extra investments of €90 billion in the water and waste sector to meet its 2030 climate and energy goals.
In October 2021, British Members of Parliament voted to continue allowing untreated sewage from combined sewer overflows to be released into waterways.
=== Asia ===
==== India ====
The Delhi Jal Board (DJB) is currently overseeing the construction of the largest sewage treatment plant in India. It is expected to be operational by the end of 2022, with an estimated capacity of 564 MLD. It is intended to remedy the existing situation in which untreated sewage is discharged directly into the river Yamuna.
==== Japan ====
=== Africa ===
==== Libya ====
=== Americas ===
==== United States ====
== See also ==
Decentralized wastewater system
List of largest wastewater treatment plants
List of water supply and sanitation by country
Organisms involved in water purification
Sanitary engineering
Waste disposal
== References ==
== External links ==
Water Environment Federation – Professional association focusing on municipal wastewater treatment | Wikipedia/Fourth_treatment_stage |
A mechanical biological treatment (MBT) system is a type of waste processing facility that combines a sorting facility with a form of biological treatment such as composting or anaerobic digestion. MBT plants are designed to process mixed household waste as well as commercial and industrial wastes.
== Process ==
The terms mechanical biological treatment or mechanical biological pre-treatment relate to a group of solid waste treatment systems. These systems enable the recovery of materials contained within the mixed waste and facilitate the stabilisation of the biodegradable component of the material. Twenty-two facilities in the UK have implemented MBT/BMT treatment processes.
The sorting component of the plants typically resembles a materials recovery facility. This component is either configured to recover the individual elements of the waste or to produce a refuse-derived fuel that can be used for the generation of power.
The components of the mixed waste stream that can be recovered include:
Ferrous metal
Non-ferrous metal
Plastic
Glass
== Terminology ==
MBT is also sometimes termed biological mechanical treatment (BMT); however, this simply refers to the order of processing (i.e., the biological phase of the system precedes the mechanical sorting). MBT should not be confused with mechanical heat treatment (MHT).
== Mechanical sorting ==
The "mechanical" element is usually an automated mechanical sorting stage. This either removes recyclable elements from a mixed waste stream (such as metals, plastics, glass, and paper) or processes them. It typically involves factory style conveyors, industrial magnets, eddy current separators, trommels, shredders, and other tailor made systems, or the sorting is done manually at hand picking stations. The mechanical element has a number of similarities to a materials recovery facility (MRF).
Some systems integrate a wet MRF to separate by density and flotation and to recover and wash the recyclable elements of the waste in a form that can be sent for recycling. MBT can alternatively process the waste to produce a high calorific fuel termed refuse derived fuel (RDF). RDF can be used in cement kilns or thermal combustion power plants and is generally made up from plastics and biodegradable organic waste. Systems which are configured to produce RDF include the Herhof and Ecodeco processes. It is a common misconception that all MBT processes produce RDF; this is not the case, and depends strictly on system configuration and suitable local markets for MBT outputs.
== Biological processing ==
The "biological" element refers to either:
Anaerobic digestion
Composting
Biodrying
Anaerobic digestion harnesses anaerobic microorganisms to break down the biodegradable component of the waste to produce biogas and soil improver. The biogas can be used to generate electricity and heat.
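As a rough, illustrative yardstick (the figures here are order-of-magnitude estimates rather than measured values for any particular plant), complete anaerobic degradation yields about 0.35 m3 of methane, at standard conditions, per kilogram of chemical oxygen demand (COD) destroyed. A digester destroying a hypothetical 1,000 kg of COD per day would therefore generate on the order of 350 m3 of methane per day, which can be burned in a gas engine to produce electricity and heat.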
The biological element can also refer to a composting stage. Here the organic component is broken down by naturally occurring aerobic microorganisms. They break down the waste into carbon dioxide and compost. No green energy is produced by systems employing only composting treatment for the biodegradable waste.
In the case of biodrying, the waste material undergoes a period of rapid heating through the action of aerobic microbes. During this partial composting stage, the heat generated by the microbes results in rapid drying of the waste. These systems are often configured to produce a refuse-derived fuel where a dry, light material is advantageous for later transport and combustion.
Some systems incorporate both anaerobic digestion and composting. This may take the form of a full anaerobic digestion phase followed by the maturation (composting) of the digestate. Alternatively, a partial anaerobic digestion phase can be induced on water that is percolated through the raw waste, dissolving the readily available sugars, with the remaining material being sent to a windrow composting facility.
By processing the biodegradable waste either by anaerobic digestion or by composting, MBT technologies help to reduce the contribution of greenhouse gases to global warming.
Usable wastes for this system:
Municipal solid waste
Commercial and industrial waste
Sewage sludge
Possible products of this system:
Renewable fuel (biogas) leading to renewable power
Recovered recyclable materials such as metals, paper, plastics, glass etc.
Digestate - an organic fertiliser and soil improver
Carbon credits – additional revenues
High calorific fraction refuse derived fuel - renewable fuel content dependent upon biological component
Residual unusable materials prepared for their final safe treatment (e.g., incineration or gasification) and/or landfill
Further advantages:
Small fraction of inert residual waste
Reduction of the volume of waste to be deposited by at least half (density > 1.3 t/m3), so that the lifetime of the landfill is at least twice as long as usual
Utilisation of the leachate in the process
Landfill gas not problematic as biological component of waste has been stabilised
Daily covering of landfill not necessary
== Consideration of applications ==
MBT systems can form an integral part of a region's waste treatment infrastructure. These systems are typically integrated with kerbside collection schemes. If a refuse-derived fuel is produced as a by-product, then a combustion facility is required. This could either be an incineration facility or a gasifier.
Alternatively, MBT solutions can diminish the need for home separation and kerbside collection of recyclable elements of waste. This gives local authorities, municipalities and councils the ability to reduce the use of waste vehicles on the roads and keep recycling rates high.
== Position of environmental groups ==
Friends of the Earth suggests that the best environmental route for residual waste is to firstly maximise removal of remaining recyclable materials from the waste stream (such as metals, plastics and paper). The amount of waste remaining should be composted or anaerobically digested and disposed of to landfill, unless sufficiently clean to be used as compost.
A report by Eunomia undertook a detailed analysis of the climate impacts of different residual waste technologies. It found that an MBT process that extracts both the metals and plastics prior to landfilling is one of the best options for dealing with residual waste, and has a lower impact than either MBT processes producing RDF for incineration or incineration of waste without MBT.
Friends of the Earth does not support MBT plants that produce refuse derived fuel (RDF), and believes MBT processes should occur in small, localised treatment plants.
== See also ==
== References ==
== External links ==
Waste-to-Resources World's largest conference on mechanical biological treatment (MBT) of municipal solid waste (MSW) and material recovery facilities (MRF)
Environment Agency Waste Technology Data Centre An independent UK government review of advanced waste treatment technologies.
Kuehle-Weidemeier et al. (2007) Plants for Mechanical-Biological Waste Treatment Summary of the evaluation of all German MBT plants in the introduction phase 2005–2006. By order of the German EPA (Umweltbundesamt)
Juniper MBT report An independent study of MBT technologies commissioned with the use of UK landfill tax credits.
SEPA MBT Planning Information Sheet Fact Sheet for Scottish Planning Considerations
Compostinfo An independent comprehensive bibliography and review web site focusing on "mixed waste" sources
GTZ (2003) Sector project mechanical-biological waste treatment. Final report
Mechanical-biological waste treatment concept of FABER-AMBRA - Scientific results and videos | Wikipedia/Mechanical_biological_treatment |
Sewage treatment is a type of wastewater treatment which aims to remove contaminants from sewage to produce an effluent that is suitable to discharge to the surrounding environment or an intended reuse application, thereby preventing water pollution from raw sewage discharges. Sewage contains wastewater from households and businesses and possibly pre-treated industrial wastewater. There are a high number of sewage treatment processes to choose from. These can range from decentralized systems (including on-site treatment systems) to large centralized systems involving a network of pipes and pump stations (called sewerage) which convey the sewage to a treatment plant. For cities that have a combined sewer, the sewers will also carry urban runoff (stormwater) to the sewage treatment plant. Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes and nutrient removal. Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic biological processes. A so-called quaternary treatment step (sometimes referred to as advanced treatment) can also be added for the removal of organic micropollutants, such as pharmaceuticals. This has been implemented in full-scale for example in Sweden.
A large number of sewage treatment technologies have been developed, mostly using biological treatment processes. Design engineers and decision makers need to take into account technical and economical criteria of each alternative when choosing a suitable technology.: 215 Often, the main criteria for selection are: desired effluent quality, expected construction and operating costs, availability of land, energy requirements and sustainability aspects. In developing countries and in rural areas with low population densities, sewage is often treated by various on-site sanitation systems and not conveyed in sewers. These systems include septic tanks connected to drain fields, on-site sewage systems (OSS), vermifilter systems and many more. On the other hand, advanced and relatively expensive sewage treatment plants may include tertiary treatment with disinfection and possibly even a fourth treatment stage to remove micropollutants.
At the global level, an estimated 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%.
The treatment of sewage is part of the field of sanitation. Sanitation also includes the management of human waste and solid waste as well as stormwater (drainage) management. The term sewage treatment plant is often used interchangeably with the term wastewater treatment plant.
== Terminology ==
The term sewage treatment plant (STP) (or sewage treatment works) is nowadays often replaced with the term wastewater treatment plant (WWTP). Strictly speaking, the latter is a broader term that can also refer to industrial wastewater treatment.
The terms water recycling center or water reclamation plants are also in use as synonyms.
== Purposes and overview ==
The overall aim of treating sewage is to produce an effluent that can be discharged to the environment while causing as little water pollution as possible, or to produce an effluent that can be reused in a useful manner. This is achieved by removing contaminants from the sewage. It is a form of waste management.
With regards to biological treatment of sewage, the treatment objectives can include various degrees of the following: to transform or remove organic matter, nutrients (nitrogen and phosphorus), pathogenic organisms, and specific trace organic constituents (micropollutants).: 548
Some types of sewage treatment produce sewage sludge which can be treated before safe disposal or reuse. Under certain circumstances, the treated sewage sludge might be termed biosolids and can be used as a fertilizer.
== Sewage characteristics ==
== Collection ==
== Types of treatment processes ==
Sewage can be treated close to where the sewage is created, which may be called a decentralized system or even an on-site system (on-site sewage facility, septic tanks, etc.). Alternatively, sewage can be collected and transported by a network of pipes and pump stations to a municipal treatment plant. This is called a centralized system (see also sewerage and pipes and infrastructure).
A large number of sewage treatment technologies have been developed, mostly using biological treatment processes (see list of wastewater treatment technologies). Very broadly, they can be grouped into high tech (high cost) versus low tech (low cost) options, although some technologies might fall into either category. Other grouping classifications are intensive or mechanized systems (more compact, and frequently employing high tech options) versus extensive or natural or nature-based systems (usually using natural treatment processes and occupying larger areas). This classification may be sometimes oversimplified, because a treatment plant may involve a combination of processes, and the interpretation of the concepts of high tech and low tech, intensive and extensive, mechanized and natural processes may vary from place to place.
=== Low tech, extensive or nature-based processes ===
Examples for more low-tech, often less expensive sewage treatment systems are shown below. They often use little or no energy. Some of these systems do not provide a high level of treatment, or only treat part of the sewage (for example only the toilet wastewater), or they only provide pre-treatment, like septic tanks. On the other hand, some systems are capable of providing a good performance, satisfactory for several applications. Many of these systems are based on natural treatment processes, requiring large areas, while others are more compact. In most cases, they are used in rural areas or in small to medium-sized communities.
For example, waste stabilization ponds are a low cost treatment option with practically no energy requirements but they require a lot of land.: 236 Due to their technical simplicity, most of the savings (compared with high tech systems) are in terms of operation and maintenance costs.: 220–243
Examples for systems that can provide full or partial treatment for toilet wastewater only:
Composting toilet (see also dry toilets in general)
Urine-diverting dry toilet
Vermifilter toilet
=== High tech, intensive or mechanized processes ===
Examples for more high-tech, intensive or mechanized, often relatively expensive sewage treatment systems are listed below. Some of them are energy intensive as well. Many of them provide a very high level of treatment. For example, broadly speaking, the activated sludge process achieves a high effluent quality but is relatively expensive and energy intensive.: 239
=== Disposal or treatment options ===
There are other process options which may be classified as disposal options, although they can also be understood as basic treatment options. These include: Application of sludge, irrigation, soak pit, leach field, fish pond, floating plant pond, water disposal/groundwater recharge, surface disposal and storage.: 138
The application of sewage to land is both a type of treatment and a type of final disposal.: 189 It leads to groundwater recharge and/or to evapotranspiration. Land application includes slow-rate systems, rapid infiltration, subsurface infiltration and overland flow. It is done by flooding, furrows, sprinklers and dripping. It is a treatment/disposal system that requires a large amount of land per person.
== Design aspects ==
=== Population equivalent ===
The per person organic matter load is a parameter used in the design of sewage treatment plants. This concept is known as population equivalent (PE). The base value used for PE can vary from one country to another. A commonly used definition worldwide is: 1 PE equates to 60 grams of BOD per person per day, and it also equals 200 liters of sewage per day. This concept is also used as a comparison parameter to express the strength of industrial wastewater compared to sewage.
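As an illustrative calculation using these base values (the loads and flows below are hypothetical): a small town discharging 30 kg of BOD per day corresponds to 30,000 g/d ÷ 60 g per person per day = 500 PE, and an industry discharging the same BOD load would likewise be rated at 500 PE even if its daily wastewater volume is far smaller than the corresponding 500 × 200 L/d = 100 m3 of sewage. This is what makes PE useful for comparing the strength of industrial wastewater with ordinary sewage.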
=== Process selection ===
When choosing a suitable sewage treatment process, decision makers need to take into account technical and economical criteria.: 215 Therefore, each analysis is site-specific. A life cycle assessment (LCA) can be used, and criteria or weightings are attributed to the various aspects. This makes the final decision subjective to some extent.: 216 A range of publications exist to help with technology selection.: 221
In industrialized countries, the most important parameters in process selection are typically efficiency, reliability, and space requirements. In developing countries, they might be different and the focus might be more on construction and operating costs as well as process simplicity.: 218
Choosing the most suitable treatment process is complicated and requires expert inputs, often in the form of feasibility studies. This is because the main important factors to be considered when evaluating and selecting sewage treatment processes are numerous. They include: process applicability, applicable flow, acceptable flow variation, influent characteristics, inhibiting or refractory compounds, climatic aspects, process kinetics and reactor hydraulics, performance, treatment residuals, sludge processing, environmental constraints, requirements for chemical products, energy and other resources; requirements for personnel, operating and maintenance; ancillary processes, reliability, complexity, compatibility, area availability.: 219
With regards to environmental impacts of sewage treatment plants the following aspects are included in the selection process: Odors, vector attraction, sludge transportation, sanitary risks, air contamination, soil and subsoil contamination, surface water pollution or groundwater contamination, devaluation of nearby areas, inconvenience to the nearby population.: 220
=== Odor control ===
Odors emitted by sewage treatment are typically an indication of an anaerobic or septic condition. Early stages of processing will tend to produce foul-smelling gases, with hydrogen sulfide being most common in generating complaints. Large process plants in urban areas will often treat the odors with carbon reactors, a contact media with bio-slimes, small doses of chlorine, or circulating fluids to biologically capture and metabolize the noxious gases. Other methods of odor control exist, including addition of iron salts, hydrogen peroxide, calcium nitrate, etc. to manage hydrogen sulfide levels.
=== Energy requirements ===
The energy requirements vary with type of treatment process as well as sewage strength. For example, constructed wetlands and stabilization ponds have low energy requirements. In comparison, the activated sludge process has a high energy consumption because it includes an aeration step. Some sewage treatment plants produce biogas from their sewage sludge treatment process by using a process called anaerobic digestion. This process can produce enough energy to meet most of the energy needs of the sewage treatment plant itself.: 1505
For activated sludge treatment plants in the United States, around 30 percent of the annual operating costs is usually required for energy.: 1703 Most of this electricity is used for aeration, pumping systems and equipment for the dewatering and drying of sewage sludge. Advanced sewage treatment plants, e.g. for nutrient removal, require more energy than plants that only achieve primary or secondary treatment.: 1704
Small rural plants using trickling filters may operate with no net energy requirements, the whole process being driven by gravitational flow, including tipping bucket flow distribution and the desludging of settlement tanks to drying beds. This is usually only practical in hilly terrain and in areas where the treatment plant is relatively remote from housing because of the difficulty in managing odors.
=== Co-treatment of industrial effluent ===
In highly regulated developed countries, industrial wastewater usually receives at least pretreatment if not full treatment at the factories themselves to reduce the pollutant load, before discharge to the sewer. The pretreatment has two main aims: firstly, to prevent toxic or inhibitory compounds from entering the biological stage of the sewage treatment plant and reducing its efficiency; and secondly, to prevent toxic compounds from accumulating in the produced sewage sludge, which would reduce its beneficial reuse options. Some industrial wastewater may contain pollutants which cannot be removed by sewage treatment plants. Also, variable flow of industrial waste associated with production cycles may upset the population dynamics of biological treatment units.
=== Design aspects of secondary treatment processes ===
=== Non-sewered areas ===
Urban residents in many parts of the world rely on on-site sanitation systems without sewers, such as septic tanks and pit latrines, and fecal sludge management in these cities is an enormous challenge.
For sewage treatment the use of septic tanks and other on-site sewage facilities (OSSF) is widespread in some rural areas, for example serving up to 20 percent of the homes in the U.S.
== Available process steps ==
Sewage treatment often involves two main stages, called primary and secondary treatment, while advanced treatment also incorporates a tertiary treatment stage with polishing processes. Different types of sewage treatment may utilize some or all of the process steps listed below.
=== Preliminary treatment ===
Preliminary treatment (sometimes called pretreatment) removes coarse materials that can be easily collected from the raw sewage before they damage or clog the pumps and sewage lines of primary treatment clarifiers.
==== Screening ====
The influent in sewage water passes through a bar screen to remove all large objects like cans, rags, sticks, plastic packets, etc. carried in the sewage stream. This is most commonly done with an automated mechanically raked bar screen in modern plants serving large populations, while in smaller or less modern plants, a manually cleaned screen may be used. The raking action of a mechanical bar screen is typically paced according to the accumulation on the bar screens and/or flow rate. The solids are collected and later disposed in a landfill, or incinerated. Bar screens or mesh screens of varying sizes may be used to optimize solids removal. If gross solids are not removed, they become entrained in pipes and moving parts of the treatment plant, and can cause substantial damage and inefficiency in the process.: 9
==== Grit removal ====
Grit consists of sand, gravel, rocks, and other heavy materials. Preliminary treatment may include a sand or grit removal channel or chamber, where the velocity of the incoming sewage is reduced to allow the settlement of grit. Grit removal is necessary to (1) reduce formation of deposits in primary sedimentation tanks, aeration tanks, anaerobic digesters, pipes, channels, etc. (2) reduce the frequency of tank cleaning caused by excessive accumulation of grit; and (3) protect moving mechanical equipment from abrasion and accompanying abnormal wear. The removal of grit is essential for equipment with closely machined metal surfaces such as comminutors, fine screens, centrifuges, heat exchangers, and high pressure diaphragm pumps.
Grit chambers come in three types: horizontal grit chambers, aerated grit chambers, and vortex grit chambers. Vortex grit chambers include mechanically induced vortex, hydraulically induced vortex, and multi-tray vortex separators. Given that grit removal systems have traditionally been designed to remove clean inorganic particles greater than 0.210 millimetres (0.0083 in), most of the finer grit passes through the grit removal stage under normal flow conditions. During periods of high flow, deposited grit is resuspended and the quantity of grit reaching the treatment plant increases substantially.
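A rough idea of why this particle size is used as a design cut-off can be obtained from an idealized settling calculation (the values below are illustrative assumptions, not design figures). For a spherical sand grain of 0.21 mm diameter with a density of about 2,650 kg/m3 settling in water at 20 °C, Stokes' law,
v = g · (ρp − ρw) · d² / (18 · μ)
gives a settling velocity of roughly 0.04 m/s; because the particle Reynolds number at this size is already above the range where Stokes' law strictly applies, actual settling velocities are somewhat lower, on the order of 0.02–0.03 m/s. Grit chambers are dimensioned so that particles settling at about this velocity reach the bottom before the flow carries them through, while the lighter organic solids remain in suspension and pass on to primary treatment.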
==== Flow equalization ====
Equalization basins can be used to achieve flow equalization. This is especially useful for combined sewer systems which produce peak dry-weather flows or peak wet-weather flows that are much higher than the average flows.: 334 Such basins can improve the performance of the biological treatment processes and the secondary clarifiers.: 334
Disadvantages include the basins' capital cost and space requirements. Basins can also provide a place to temporarily hold, dilute and distribute batch discharges of toxic or high-strength wastewater which might otherwise inhibit biological secondary treatment (such as wastewater from portable toilets or fecal sludge that is brought to the sewage treatment plant in vacuum trucks). Flow equalization basins require variable discharge control, typically include provisions for bypass and cleaning, and may also include aerators and odor control.
==== Fat and grease removal ====
In some larger plants, fat and grease are removed by passing the sewage through a small tank where skimmers collect the fat floating on the surface. Air blowers in the base of the tank may also be used to help recover the fat as a froth. Many plants, however, use primary clarifiers with mechanical surface skimmers for fat and grease removal.
=== Primary treatment ===
Primary treatment is the "removal of a portion of the suspended solids and organic matter from the sewage".: 11 It consists of allowing sewage to pass slowly through a basin where heavy solids can settle to the bottom while oil, grease and lighter solids float to the surface and are skimmed off. These basins are called primary sedimentation tanks or primary clarifiers and typically have a hydraulic retention time (HRT) of 1.5 to 2.5 hours.: 398 The settled and floating materials are removed and the remaining liquid may be discharged or subjected to secondary treatment. Primary settling tanks are usually equipped with mechanically driven scrapers that continually drive the collected sludge towards a hopper in the base of the tank where it is pumped to sludge treatment facilities.: 9–11
Sewage treatment plants that are connected to a combined sewer system sometimes have a bypass arrangement after the primary treatment unit. This means that during very heavy rainfall events, the secondary and tertiary treatment systems can be bypassed to protect them from hydraulic overloading, and the mixture of sewage and storm-water receives primary treatment only.
Primary sedimentation tanks remove about 50–70% of the suspended solids, and 25–40% of the biological oxygen demand (BOD).: 396
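These two figures allow a rough, illustrative sizing and mass-balance check (the flow and concentration below are hypothetical examples, not design recommendations). For a plant receiving 10,000 m3 per day of sewage with 300 mg/L of BOD, a primary clarifier providing a hydraulic retention time of 2 hours would need a volume of about (10,000 m3/d ÷ 24 h/d) × 2 h ≈ 830 m3. At 30% BOD removal, the settled sewage leaving primary treatment would still contain about 300 × (1 − 0.30) = 210 mg/L of BOD, or roughly 2,100 kg of BOD per day, which is the load that the secondary treatment stage must then be designed to handle.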
=== Secondary treatment ===
The main processes involved in secondary sewage treatment are designed to remove as much of the solid material as possible. They use biological processes to digest and remove the remaining soluble material, especially the organic fraction. This can be done with either suspended-growth or biofilm processes. The microorganisms that feed on the organic matter present in the sewage grow and multiply, constituting the biological solids, or biomass. These grow and group together in the form of flocs or biofilms and, in some specific processes, as granules. The biological floc or biofilm and remaining fine solids form a sludge which can be settled and separated. After separation, a liquid remains that is almost free of solids, and with a greatly reduced concentration of pollutants.
Secondary treatment can reduce organic matter (measured as biological oxygen demand) from sewage, using aerobic or anaerobic processes. The organisms involved in these processes are sensitive to the presence of toxic materials, although these are not expected to be present at high concentrations in typical municipal sewage.
=== Tertiary treatment ===
Advanced sewage treatment generally involves three main stages, called primary, secondary and tertiary treatment, but may also include intermediate stages and final polishing processes. The purpose of tertiary treatment (also called advanced treatment) is to provide a final treatment stage to further improve the effluent quality before it is discharged to the receiving water body or reused. More than one tertiary treatment process may be used at any treatment plant. If disinfection is practiced, it is always the final process. It is also called effluent polishing. Tertiary treatment may include biological nutrient removal (alternatively, this can be classified as secondary treatment), disinfection and partial removal of micropollutants, such as environmental persistent pharmaceutical pollutants.
Tertiary treatment is sometimes defined as anything more than primary and secondary treatment in order to allow discharge into a highly sensitive or fragile ecosystem such as estuaries, low-flow rivers or coral reefs. Treated water is sometimes disinfected chemically or physically (for example, by lagoons and microfiltration) prior to discharge into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, greenway or park. If it is sufficiently clean, it can also be used for groundwater recharge or agricultural purposes.
Sand filtration removes much of the residual suspended matter.: 22–23 Filtration over activated carbon, also called carbon adsorption, removes residual toxins.: 19 Microfiltration or synthetic membranes are used in membrane bioreactors and can also remove pathogens.: 854
Settlement and further biological improvement of treated sewage may be achieved through storage in large human-made ponds or lagoons. These lagoons are highly aerobic, and colonization by native macrophytes, especially reeds, is often encouraged.
=== Disinfection ===
Disinfection of treated sewage aims to kill pathogens (disease-causing microorganisms) prior to disposal. It is increasingly effective after more elements of the foregoing treatment sequence have been completed.: 359 The purpose of disinfection in the treatment of sewage is to substantially reduce the number of pathogens in the water to be discharged back into the environment or to be reused. The target level of reduction of biological contaminants like pathogens is often regulated by the presiding governmental authority. The effectiveness of disinfection depends on the quality of the water being treated (e.g. turbidity, pH, etc.), the type of disinfection being used, the disinfectant dosage (concentration and time), and other environmental variables. Water with high turbidity will be treated less successfully, since solid matter can shield organisms, especially from ultraviolet light or if contact times are low. Generally, short contact times, low doses and high flows all militate against effective disinfection. Common methods of disinfection include ozone, chlorine, ultraviolet light, or sodium hypochlorite.: 16 Monochloramine, which is used for drinking water, is not used in the treatment of sewage because of its persistence.
Chlorination remains the most common form of treated sewage disinfection in many countries due to its low cost and long-term history of effectiveness. One disadvantage is that chlorination of residual organic material can generate chlorinated-organic compounds that may be carcinogenic or harmful to the environment. Residual chlorine or chloramines may also be capable of chlorinating organic material in the natural aquatic environment. Further, because residual chlorine is toxic to aquatic species, the treated effluent must also be chemically dechlorinated, adding to the complexity and cost of treatment.
Ultraviolet (UV) light can be used instead of chlorine, iodine, or other chemicals. Because no chemicals are used, the treated water has no adverse effect on organisms that later consume it, as may be the case with other methods. UV radiation causes damage to the genetic structure of bacteria, viruses, and other pathogens, making them incapable of reproduction. The key disadvantages of UV disinfection are the need for frequent lamp maintenance and replacement and the need for a highly treated effluent to ensure that the target microorganisms are not shielded from the UV radiation (i.e., any solids present in the treated effluent may protect microorganisms from the UV light). In many countries, UV light is becoming the most common means of disinfection because of the concerns about the impacts of chlorine in chlorinating residual organics in the treated sewage and in chlorinating organics in the receiving water.
As with UV treatment, heat sterilization also does not add chemicals to the water being treated. However, unlike UV, heat can penetrate liquids that are not transparent. Heat disinfection can also penetrate solid materials within wastewater, sterilizing their contents. Thermal effluent decontamination systems provide low resource, low maintenance effluent decontamination once installed.
Ozone (O3) is generated by passing oxygen (O2) through a high voltage potential resulting in a third oxygen atom becoming attached and forming O3. Ozone is very unstable and reactive and oxidizes most organic material it comes in contact with, thereby destroying many pathogenic microorganisms. Ozone is considered to be safer than chlorine because, unlike chlorine which has to be stored on site (highly poisonous in the event of an accidental release), ozone is generated on-site as needed from the oxygen in the ambient air. Ozonation also produces fewer disinfection by-products than chlorination. A disadvantage of ozone disinfection is the high cost of the ozone generation equipment and the requirements for special operators. Ozone sewage treatment requires the use of an ozone generator, which decontaminates the water as ozone bubbles percolate through the tank.
Membranes can also be effective disinfectants, because they act as barriers, avoiding the passage of the microorganisms. As a result, the final effluent may be devoid of pathogenic organisms, depending on the type of membrane used. This principle is applied in membrane bioreactors.
=== Biological nutrient removal ===
Sewage may contain high levels of the nutrients nitrogen and phosphorus. Typical values for nutrient loads per person and nutrient concentrations in raw sewage in developing countries have been published as follows: 8 g/person/d for total nitrogen (45 mg/L), 4.5 g/person/d for ammonia-N (25 mg/L) and 1.0 g/person/d for total phosphorus (7 mg/L).: 57 The typical ranges for these values are: 6–10 g/person/d for total nitrogen (35–60 mg/L), 3.5–6 g/person/d for ammonia-N (20–35 mg/L) and 0.7–2.5 g/person/d for total phosphorus (4–15 mg/L).: 57
Excessive release to the environment can lead to nutrient pollution, which can manifest itself in eutrophication. Eutrophication can in turn lead to algal blooms: a rapid growth, and later decay, of the algal population. In addition to causing deoxygenation, some algal species produce toxins that contaminate drinking water supplies.
Ammonia nitrogen, in the form of free ammonia (NH3), is toxic to fish. Ammonia nitrogen, when converted to nitrite and further to nitrate in a water body in the process of nitrification, is associated with the consumption of dissolved oxygen. Nitrite and nitrate may also have public health significance if concentrations are high in drinking water, because of a disease called methemoglobinemia.: 42
Phosphorus removal is important as phosphorus is a limiting nutrient for algae growth in many fresh water systems. Therefore, an excess of phosphorus can lead to eutrophication. It is also particularly important for water reuse systems where high phosphorus concentrations may lead to fouling of downstream equipment such as reverse osmosis.
A range of treatment processes are available to remove nitrogen and phosphorus. Biological nutrient removal (BNR) is regarded by some as a type of secondary treatment process, and by others as a tertiary (or advanced) treatment process.
==== Nitrogen removal ====
Nitrogen is removed through the biological oxidation of nitrogen from ammonia to nitrate (nitrification), followed by denitrification, the reduction of nitrate to nitrogen gas. Nitrogen gas is released to the atmosphere and thus removed from the water.
Nitrification itself is a two-step aerobic process, each step facilitated by a different type of bacteria. The oxidation of ammonium (NH4+) to nitrite (NO2−) is most often facilitated by bacteria such as Nitrosomonas spp. (nitroso refers to the formation of a nitroso functional group). Nitrite oxidation to nitrate (NO3−), though traditionally believed to be facilitated by Nitrobacter spp. (nitro referring to the formation of a nitro functional group), is now known to be facilitated in the environment predominantly by Nitrospira spp.
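The two oxidation steps can be summarized by the following stoichiometry (the conventional simplified, biomass-free textbook form, not a formula quoted from the source):

$$\mathrm{NH_4^+} + \tfrac{3}{2}\,\mathrm{O_2} \rightarrow \mathrm{NO_2^-} + \mathrm{H_2O} + 2\,\mathrm{H^+}$$
$$\mathrm{NO_2^-} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{NO_3^-}$$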
Denitrification requires anoxic conditions to encourage the appropriate biological communities to form. Anoxic conditions refer to a situation where oxygen is absent but nitrate is present. Denitrification is facilitated by a wide diversity of bacteria. The activated sludge process, sand filters, waste stabilization ponds, constructed wetlands and other processes can all be used to reduce nitrogen.: 17–18 Since denitrification is the reduction of nitrate to dinitrogen (molecular nitrogen) gas, an electron donor is needed. This can be, depending on the wastewater, organic matter (from the sewage itself), sulfide, or an added donor like methanol. The sludge in the anoxic tanks (denitrification tanks) must be mixed well (a mixture of recirculated mixed liquor, return activated sludge, and raw influent), e.g. by using submersible mixers, in order to achieve the desired denitrification.
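With methanol as the added electron donor mentioned above, the overall reaction is conventionally written as follows (the standard simplified stoichiometry, ignoring biomass synthesis):

$$6\,\mathrm{NO_3^-} + 5\,\mathrm{CH_3OH} \rightarrow 3\,\mathrm{N_2} + 5\,\mathrm{CO_2} + 7\,\mathrm{H_2O} + 6\,\mathrm{OH^-}$$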
Over time, different treatment configurations for activated sludge processes have evolved to achieve high levels of nitrogen removal. An initial scheme was called the Ludzack–Ettinger Process. It could not achieve a high level of denitrification.: 616 The Modified Ludzack–Ettinger Process (MLE) came later and was an improvement on the original concept. It recycles mixed liquor from the discharge end of the aeration tank to the head of the anoxic tank. This provides nitrate for the facultative bacteria.: 616
There are other process configurations, such as variations of the Bardenpho process.: 160 They might differ in the placement of anoxic tanks, e.g. before and after the aeration tanks.
==== Phosphorus removal ====
Studies of United States sewage in the late 1960s estimated mean per capita phosphorus contributions of 500 grams (18 oz) in urine and feces, 1,000 grams (35 oz) in synthetic detergents, and lesser variable amounts used as corrosion and scale control chemicals in water supplies. Source control via alternative detergent formulations has subsequently reduced the largest contribution, but the phosphorus content of urine and feces has naturally remained unchanged.
Phosphorus can be removed biologically in a process called enhanced biological phosphorus removal. In this process, specific bacteria, called polyphosphate-accumulating organisms (PAOs), are selectively enriched and accumulate large quantities of phosphorus within their cells (up to 20 percent of their mass).: 148–155
Phosphorus removal can also be achieved by chemical precipitation, usually with salts of iron (e.g. ferric chloride) or aluminum (e.g. alum), or lime.: 18 This may lead to a higher sludge production as hydroxides precipitate and the added chemicals can be expensive. Chemical phosphorus removal requires significantly smaller equipment footprint than biological removal, is easier to operate and is often more reliable than biological phosphorus removal. Another method for phosphorus removal is to use granular laterite or zeolite.
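The precipitation with ferric chloride mentioned above is often idealized as a single reaction (a simplification that ignores competing hydroxide formation and actual dosing ratios):

$$\mathrm{FeCl_3} + \mathrm{PO_4^{3-}} \rightarrow \mathrm{FePO_4}\!\downarrow + 3\,\mathrm{Cl^-}$$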
Some systems use both biological phosphorus removal and chemical phosphorus removal. The chemical phosphorus removal in those systems may be used as a backup system, for use when the biological phosphorus removal is not removing enough phosphorus, or may be used continuously. In either case, using both biological and chemical phosphorus removal has the advantage of not increasing sludge production as much as chemical phosphorus removal on its own, with the disadvantage of the increased initial cost associated with installing two different systems.
Once removed, phosphorus, in the form of a phosphate-rich sewage sludge, may be sent to landfill or used as fertilizer in admixture with other digested sewage sludges. In the latter case, the treated sewage sludge is also sometimes referred to as biosolids. An estimated 22% of the world's phosphorus needs could be satisfied by recycling residential wastewater.
=== Fourth treatment stage ===
Micropollutants such as pharmaceuticals, ingredients of household chemicals, chemicals used in small businesses or industries, environmental persistent pharmaceutical pollutants (EPPP) or pesticides may not be eliminated in the commonly used sewage treatment processes (primary, secondary and tertiary treatment) and can therefore lead to water pollution. Although concentrations of those substances and their decomposition products are quite low, there is still a chance of harming aquatic organisms. For pharmaceuticals, the following substances have been identified as toxicologically relevant: substances with endocrine disrupting effects, genotoxic substances and substances that enhance the development of bacterial resistance. They mainly belong to the group of EPPP.
Techniques for elimination of micropollutants via a fourth treatment stage during sewage treatment are implemented in Germany, Switzerland, Sweden and the Netherlands, and tests are ongoing in several other countries. In Switzerland it has been enshrined in law since 2016. Since 1 January 2025, there has been a recast of the Urban Waste Water Treatment Directive in the European Union. Due to the large number of amendments that have now been made, the directive was rewritten on 27 November 2024 as Directive (EU) 2024/3019, published in the EU Official Journal on 12 December 2024, and entered into force on 1 January 2025. The member states now have 31 months, i.e. until 31 July 2027, to adapt their national legislation to the new directive ("implementation of the directive").
The amendment stipulates that, in addition to stricter discharge values for nitrogen and phosphorus, persistent trace substances must at least be partially separated. The target, similar to Switzerland's, is that 80% of 6 key substances out of 12 must be removed between the inflow to the sewage treatment plant and the discharge into the water body. At least 80% of the investment and operating costs for the fourth treatment stage will be passed on to the pharmaceutical and cosmetics industry according to the polluter pays principle, in order to relieve the population financially and provide an incentive for the development of more environmentally friendly products. In addition, the municipal wastewater treatment sector is to be energy neutral by 2045, and the emission of microplastics and PFAS is to be monitored.
The implementation of the framework guidelines is staggered until 2045, depending on the size of the sewage treatment plant and its population equivalents (PE). Sewage treatment plants with over 150,000 PE have priority and should be adapted immediately, as a significant proportion of the pollution comes from them. The adjustments are staggered at national level in:
20% of the plants by 31 December 2033,
60% of the plants by 31 December 2039,
100% of the plants by 31 December 2045.
Wastewater treatment plants with 10,000 to 150,000 PE that discharge into coastal waters or sensitive waters are staggered at national level in:
10% of the plants by 31 December 2033,
30% of the plants by 31 December 2036,
60% of the plants by 31 December 2039,
100% of the plants by 31 December 2045.
The latter concerns waters with a low dilution ratio, waters from which drinking water is obtained and those that are coastal waters, or those used as bathing waters or used for mussel farming. Member States will be given the option not to apply fourth treatment in these areas if a risk assessment shows that there is no potential risk from micropollutants to human health and/or the environment.
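The staggered quotas above lend themselves to a compact tabular encoding. A minimal sketch, with category names invented for illustration (they are not official terminology):

```python
# Illustrative encoding of the staggered national quotas summarized above
# for Directive (EU) 2024/3019: (deadline year, share of plants due).

SCHEDULES = {
    "over_150k_pe": [(2033, 0.20), (2039, 0.60), (2045, 1.00)],
    "10k_to_150k_pe_sensitive": [(2033, 0.10), (2036, 0.30), (2039, 0.60), (2045, 1.00)],
}

def required_share(category: str, year: int) -> float:
    """Return the share of plants that must comply by 31 December of `year`."""
    share = 0.0
    for deadline, quota in SCHEDULES[category]:
        if year >= deadline:
            share = quota
    return share

print(required_share("over_150k_pe", 2040))              # 0.6
print(required_share("10k_to_150k_pe_sensitive", 2036))  # 0.3
```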
Such process steps mainly consist of activated carbon filters that adsorb the micropollutants. The combination of advanced oxidation with ozone followed by granular activated carbon (GAC) has been suggested as a cost-effective treatment combination for pharmaceutical residues. For a full reduction of microplastics, the combination of ultrafiltration followed by GAC has been suggested. The use of enzymes such as laccase, secreted by fungi, is also under investigation. Microbial fuel cells are being investigated for their ability to treat organic matter in sewage.
To reduce pharmaceuticals in water bodies, source control measures are also under investigation, such as innovations in drug development or more responsible handling of drugs. In the US, the National Take Back Initiative is a voluntary program that encourages the general public to return excess or expired drugs rather than flushing them into the sewage system.
=== Sludge treatment and disposal ===
== Environmental impacts ==
Sewage treatment plants can have significant effects on the biotic status of receiving waters and can cause some water pollution, especially if the treatment process used is only basic. For example, for sewage treatment plants without nutrient removal, eutrophication of receiving water bodies can be a problem.
In 2024, the Royal Academy of Engineering released a study into the effects of wastewater on public health in the United Kingdom. The study gained media attention, with comments from the UK's leading health professionals, including Sir Chris Whitty. It outlined 15 recommendations for various UK bodies to dramatically reduce public health risks by improving the water quality in the country's waterways, such as rivers and lakes.
After the release of the report, The Guardian newspaper interviewed Whitty, who stated that improving water quality and sewage treatment should be given a high level of importance as a "public health priority". He compared it to the eradication of cholera in the country in the 19th century following improvements to the sewage network. The study also found that sewage reached high concentrations in rivers during periods of low flow, as well as during flooding or heavy rainfall. While heavy rainfall had always been associated with sewage overflows into streams and rivers, the British media went as far as to warn parents of the dangers of paddling in shallow rivers during warm weather.
Whitty's comments came after the study revealed that the UK was experiencing a growth in the number of people that were using coastal and inland waters recreationally. This could be connected to a growing interest in activities such as open water swimming or other water sports. Despite this growth in recreation, poor water quality meant some were becoming unwell during events. Most notably, the 2024 Paris Olympics had to delay numerous swimming-focused events like the triathlon due to high levels of sewage in the River Seine.
== Reuse ==
=== Irrigation ===
Increasingly, people use treated or even untreated sewage for irrigation to produce crops. Cities provide lucrative markets for fresh produce, so they are attractive to farmers. Because agriculture has to compete for increasingly scarce water resources with industry and municipal users, there is often no alternative for farmers but to use water polluted with sewage directly to water their crops. There can be significant health hazards related to using water loaded with pathogens in this way. The World Health Organization developed guidelines for safe use of wastewater in 2006. They advocate a 'multiple-barrier' approach to wastewater use, where farmers are encouraged to adopt various risk-reducing behaviors. These include ceasing irrigation a few days before harvesting to allow pathogens to die off in the sunlight, applying water carefully so it does not contaminate leaves likely to be eaten raw, and cleaning vegetables with disinfectant, or allowing fecal sludge used in farming to dry before being used as manure.
=== Reclaimed water ===
== Global situation ==
Before the 20th century in Europe, sewers usually discharged into a body of water such as a river, lake, or ocean. There was no treatment, so the breakdown of the human waste was left to the ecosystem. This could lead to satisfactory results if the assimilative capacity of the ecosystem was sufficient, which nowadays is often not the case due to increasing population density.: 78
Today, the situation in urban areas of industrialized countries is usually that sewers route their contents to a sewage treatment plant rather than directly to a body of water. In many developing countries, however, the bulk of municipal and industrial wastewater is discharged to rivers and the ocean without any treatment or after preliminary treatment or primary treatment only. Doing so can lead to water pollution. Few reliable figures exist on the share of the wastewater collected in sewers that is being treated worldwide. A global estimate by UNDP and UN-Habitat in 2010 was that 90% of all wastewater generated is released into the environment untreated. A more recent study in 2021 estimated that globally, about 52% of sewage is treated. However, sewage treatment rates are highly unequal for different countries around the world. For example, while high-income countries treat approximately 74% of their sewage, developing countries treat an average of just 4.2%. As of 2022, without sufficient treatment, more than 80% of all wastewater generated globally is released into the environment. High-income nations treat, on average, 70% of the wastewater they produce, according to UN Water. Only 8% of wastewater produced in low-income nations receives any sort of treatment.
The Joint Monitoring Programme (JMP) for Water Supply and Sanitation by WHO and UNICEF reported in 2021 that 82% of people with sewer connections are connected to sewage treatment plants providing at least secondary treatment.: 55 However, this value varies widely between regions. For example, in Europe, North America, Northern Africa and Western Asia, a total of 31 countries had universal (>99%) wastewater treatment. However, in Albania, Bermuda, North Macedonia and Serbia "less than 50% of sewered wastewater received secondary or better treatment", and in Algeria, Lebanon and Libya less than 20% of sewered wastewater was being treated. The report also found that "globally, 594 million people have sewer connections that don't receive sufficient treatment. Many more are connected to wastewater treatment plants that do not provide effective treatment or comply with effluent requirements".: 55
=== Global targets ===
Sustainable Development Goal 6 has a Target 6.3, which is formulated as follows: "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally." The corresponding Indicator 6.3.1 is the "proportion of wastewater safely treated". It is anticipated that wastewater production will rise by 24% by 2030 and by 51% by 2050.
Data in 2020 showed that there is still too much uncollected household wastewater: only 66% of all household wastewater flows were collected at treatment facilities in 2020 (determined from data from 128 countries).: 17 Based on data from 42 countries in 2015, the report stated that "32 per cent of all wastewater flows generated from point sources received at least some treatment".: 17 For sewage that was indeed collected at centralized sewage treatment plants, about 79% went on to be safely treated in 2020.: 18
== History ==
The history of sewage treatment had the following developments: It began with land application (sewage farms) in the 1840s in England, followed by chemical treatment and sedimentation of sewage in tanks, then biological treatment in the late 19th century, which led to the development of the activated sludge process starting in 1912.
== Regulations ==
In most countries, sewage collection and treatment are subject to local and national regulations and standards.
== By country ==
=== Overview ===
=== Europe ===
In the European Union, 0.8% of total energy consumption goes to wastewater treatment facilities. The European Union needs to make extra investments of €90 billion in the water and waste sector to meet its 2030 climate and energy goals.
In October 2021, British Members of Parliament voted to continue allowing untreated sewage from combined sewer overflows to be released into waterways.
=== Asia ===
==== India ====
The Delhi Jal Board (DJB) is constructing the largest sewage treatment plant in India, expected to be operational by the end of 2022 with an estimated capacity of 564 MLD. It is intended to remedy the existing situation, in which untreated sewage is discharged directly into the Yamuna river.
==== Japan ====
=== Africa ===
==== Libya ====
=== Americas ===
==== United States ====
== See also ==
Decentralized wastewater system
List of largest wastewater treatment plants
List of water supply and sanitation by country
Organisms involved in water purification
Sanitary engineering
Waste disposal
== References ==
== External links ==
Water Environment Federation – Professional association focusing on municipal wastewater treatment | Wikipedia/Sewage_treatment_plant |
The water resources of Palestine are de facto fully controlled by Israel, and the division of groundwater is subject to provisions in the Oslo II Accord.
Generally, the water quality is considerably worse in the Gaza strip when compared to the West Bank. About a third to half of the delivered water in the Palestinian territories is lost in the distribution network. The lasting blockade of the Gaza Strip and the Gaza War (2008–2009) have caused severe damage to the infrastructure in the Gaza Strip.
Concerning wastewater, the existing treatment plants do not have the capacity to treat all of the produced wastewater, causing severe water pollution. The development of the sector highly depends on external financing.
== Overview ==
The region of Israel/Palestine is "water-stressed", like many other countries in the region, and macroanalysts consider working out how to share water resources the "single most important problem" for Middle Eastern peoples. One third of all water consumed in Israel was by the 1990s drawn from groundwater that in turn came from the rains over the West Bank, and the struggle over this resource has been described as a zero-sum game. According to Human Rights Watch Israel's confiscation of water violates the Hague Regulations of 1907, which prohibit an occupying power from expropriating the resources of occupied territory for its own benefit.
In the wake of 1967, Israel abrogated Palestinian water rights in the West Bank, and with Military Order 92 of August of that year invested all power over water management in the military authority, though under international law Palestinians were entitled to a share. Both of Israel's own aquifers originate in West Bank territory, and its northern cities would run dry without them. According to John Cooley, West Bank Palestinian farmers' wells, which under Ottoman, British, Jordanian and Egyptian law were a private resource owned by villages, were a key element behind Israel's post-1967 strategy to keep the area. In order to protect "Jewish water supplies" from what was considered "encroachment", many existing wells were blocked or sealed, Palestinians were forbidden to drill new wells without military authorization, which was almost impossible to obtain, and restrictive quotas on Palestinian water use were imposed. As of 2010, 527 known springs in the West Bank furnished Palestinians with half of their domestic consumption. The historic wells supplying Palestinian villages have often been expropriated for the exclusive use of settlements: thus the major well servicing al-Eizariya was taken over by Ma'ale Adumim in the 1980s, while most of the village's land was stripped away, leaving the villagers with 2,979 of their original 11,179 dunams.
Most of the Israeli water carrier Mekorot's drillings in the West Bank are located in the Jordan Valley, where by 2008 Palestinians ended up drawing 44% less water than they accessed before the Interim Agreement of 1995. Under those Oslo Accords Israel obtained 80% of the West Bank's waters, with the remaining 20% Palestinian, a percentage which, however, did not concede the Palestinians any "ownership right". Of their agreed allocation for 2011 of 138.5 MCM, Palestinians managed to extract only 87 MCM, given the difficulties in obtaining Israeli permits, and the shortfall caused by the drying up of half of Palestinian wells has to be partially offset by buying water from Israel, with the net effect that per capita Palestinian water use has declined 20%. The World Health Organization's recommended minimum consumption is 100 litres of water per person per day.
Model Palestinian new town urban developments, like the city of Rawabi, have been severely hampered by restrictions on their access to water.
In 2023, Israeli attacks on Palestinian water supplies in both the Gaza Strip and the West Bank amounted to roughly 25% of the 350 water conflicts which occurred globally that year. On average, seven such attacks, by either settlers or the army, took place each month that year, resulting in contaminated or destroyed water wells, pumps and irrigation systems.
== History ==
Since the 1948 Arab–Israeli War, the issue of the development of the area's water resources has been a critical issue in regional conflict and negotiations, initially involving Syria, Jordan and Israel. After the Six-Day War, when Israel occupied the Palestinian territories, water use and sanitation have been closely linked to developments in the Israeli–Palestinian conflict. The water and land resources in the West Bank in particular are considered to constitute the major obstacle to the resolution of conflict in the area. Palestinians claim a legal right to own, or to use, three water sources in the area: the groundwater reservoir of the Mountain Aquifer, the Gaza Strip Coastal Aquifer and the Jordan River, amounting to 700 MCM/Y, over 50% of the natural water resources between the Mediterranean Sea and the Jordan River.
In 1995, the Palestinian Water Authority (PWA) was established by a presidential decree. One year later, its functions, objectives and responsibilities were defined through a by-law, giving the PWA the mandate to manage water resources and execute the water policy.
During the Gaza war, the water system in the Gaza Strip was severely damaged, with half of its boreholes and desalination plants, and four of the six wastewater treatment plants, damaged or destroyed by May 2024.
== Water resources ==
=== Division in the Oslo II Accord ===
The 1995 Oslo II Accord allows the Palestinians in the West Bank the use of up to 118 million cubic meters (mcm) of water per year. 80 mcm was supposed to come from newly drilled wells. However, the PWA was able to drill new wells for only 30 mcm, at the expense of the existing springs and wells. In the Oslo II Accord, the Israelis are allotted four times the Palestinian portion, or 80% of the joint-aquifer resources. However, 94% (340 mcm) of the Western Aquifer was allotted to the Israelis for use within Israel. The allowed quantities have not been adapted after the end of the supposed five-year interim period. The parties established the Joint Water Committee to carry out the provisions of Article 40 of Annex III.
According to a World Bank report, Israel extracted 80% more water from the West Bank than agreed in the Oslo Accord, while Palestinian abstractions were within the agreed range. Contrary to expectations under Oslo II, the water actually extracted by Palestinians in the West Bank has dropped between 1999 and 2007. Due to the Israeli over-extraction, aquifer levels are near "the point where irreversible damage is done to the aquifer". Israeli wells in the West Bank have dried up local Palestinian wells and springs.
=== Water from the Jordan River basin ===
The Upper Jordan River flows south into the Sea of Galilee, which provides the largest freshwater storage capacity along the Jordan River. The lake (also known as Lake Tiberias) drains into the Lower Jordan River, which winds further south through the Jordan Valley to its terminus in the Dead Sea. The Palestinians are denied any access to this water. About a quarter of the 420 million m3 Israel pumps from the Sea of Galilee goes to the local communities in Israel and to Jordan; the rest is diverted to Israel through the National Water Carrier (NWC) before it can reach the West Bank. Virtually all water from the Yarmouk River, north of the West Bank, is diverted by Israel, Syria and Jordan. The water of the Tirza Stream, the largest stream in the central Jordan Valley, fed by rainwater, is diverted by Israel to the Tirza Reservoir and used by settlements in the area for irrigation of crops and for raising fish.
=== Other surface water ===
In Gaza, the only source of surface water has been the Wadi Gaza. There are claims that Israel diverts part of its water for agricultural purposes within Israel prior to its arrival to Gaza.
=== Groundwater ===
In the West Bank, the main groundwater resource is the Mountain Aquifer, which consists of three aquifers, described below. Before the Israeli occupation of the West Bank, Israel drew 60% of the water extracted from aquifers straddling the border between it and the West Bank. It now takes 80%, which overall means that 40% of Israel's water comes from West Bank aquifers.
The Western Aquifer, in Israel called the "Yarkon-Taninim Aquifer", is the largest one, with an annual safe yield of 362 million cubic metres (mcm) based on an average annual estimate (of which 40 mcm are brackish). Eighty percent of the recharge area of this basin is located within the West Bank, whereas 80% of the storage area is located within Israeli borders. Israelis exploit the aquifers of this basin by means of 300 deep groundwater wells to the west of the Green Line, as well as by deep wells within the West Bank boundary. Palestinians who have access to pre-existing wells and springs may draw on them but, as opposed to Israeli settlements, are forbidden to drill new wells.
The North-Eastern Aquifer, in Israel called the "Gilboa-Bet She'an Aquifer" or "Shechem-Gilboa Aquifer", has an annual safe yield of 145 mcm (of which 70 mcm are brackish). Almost 100% of its water comes from precipitation falling within the West Bank area, but it then flows underground in a northerly direction into the Bisan (Bet She'an) and Jezreel valleys.
The Eastern Aquifer, entirely within the West Bank, has an annual safe yield of 172 mcm (of which 70–80 mcm are brackish). This aquifer is mainly drained by springs.
According to Hiniker, at an average sustainable rate, the amount of renewable shared freshwater available throughout the entire Jordan Valley is roughly 2700 mcm per year, which is composed of 1400 million cubic metres of groundwater and 1300 million cubic metres of surface water. However, only a fraction of this can be used by Palestinians in the West Bank. Israel has denied Palestinians access to the entire Lower Jordan River since 1967. After the start of Israel's military occupation in 1967, Israel declared the West Bank land adjacent to the Jordan River a closed military zone, to which only Israeli settler farmers have been permitted access.
In 1982, the West Bank water infrastructure controlled by the Israeli army was handed over to the Israeli national water company Mekorot. As of 2009, Mekorot operates some 42 wells in the West Bank, mainly in the Jordan Valley region, which mostly supply the Israeli settlements. The amount of water Mekorot can sell to the Palestinians is subject to approval of the Israeli authorities.
Drilling of wells into the mountain aquifer by the Palestinians is restricted. Most of its water thus flows underground towards the slopes of the hills and into Israeli territory. According to different estimates, between 80 and 85% of groundwater in the West Bank is used either by Israeli settlers or flows into Israel.
The Coastal Aquifer is the only groundwater source in the Gaza strip. It runs beneath the coast of Israel, with Gaza downstream at the end of the basin. Since the water flows underground mainly from east to west, Palestinian extractions from the aquifer have no effect on the Israeli side. Israel, on the contrary, has installed a cordon of numerous deep wells along the Gaza border and in this way extracts much of the groundwater before it can reach Gaza. Israel sells a limited part of the water to the Palestinians in Gaza. While Israel transports water from the north of its territory to the south, the Palestinians are not allowed to move water from the West Bank to Gaza. This is a reason why this aquifer is heavily over-exploited, resulting in seawater intrusion. The aquifer is polluted by salt as well as by nitrate from wastewater infiltration and fertilizers. Only 5–10% of the aquifer yields water of drinking quality. By 2000, the water from the Coastal Aquifer in the Gaza region was considered no longer drinkable due to high salinity from the seawater intrusion and high nitrate pollution from agricultural activity. In 2013, an analysis of nine municipal groundwater wells reported total dissolved solids (TDS) ranging from 680.4 mg/L to 3106.6 mg/L, averaging 1996.5 mg/L and thus exceeding the 1000 mg/L WHO acceptable level, mainly due to high chloride and sodium.
Pursuant to Oslo II (Annex III, Article 40.7), Israel committed itself to sell 5 mcm/year to the Gaza Strip. In 2015, Israel doubled the amount to 10 mcm/year. Gaza also imports water, or produces drinking water by means of desalination plants.
=== Desalination of brackish groundwater ===
In Gaza, desalinated brackish groundwater has become an important source of drinking water. Over 20,000 consumers in over 50% of the Gaza households have installed domestic ‘reverse osmosis’ (RO) units to desalinate water for drinking purposes. The water quality is high, though the water lacks basic minerals. As of January 2014, there were 18 neighborhood desalination plants in the Gaza strip, providing safe drinking water for free to 95,000 people who come to fill their canisters at the plants. 13 of these plants are operated by UNICEF.
In 2009, approximately 100 industrial desalination plants were operational. Due to the Israeli blockade of the Gaza Strip, the import of spare parts – essential to operate the desalination plants of industry, communities and households – as well as necessary chemicals, is problematic.
=== Desalinated seawater ===
As of 2007, there was one seawater desalination plant in Deir al-Balah in the Gaza Strip, built in 1997–99 with funding by the Austrian government. It has a capacity of 600 cubic metres (21,000 cu ft) per day and it is owned and operated by the Coastal Municipalities Water Utility. At least initially, the operating costs were subsidized by the Austrian government. The desalinated water is distributed to 13 water kiosks.
For over 20 years, a major desalination plant for Gaza has been discussed. The Palestinian Water Authority has approved a $500 million facility. The New York Times reported in 2013 that Israel supported this and had begun to offer Palestinians desalination training. In 2012 the French government committed a 10 million-euro grant for the plant. Arab countries, coordinated by the Islamic Development Bank, committed to provide half of the necessary funds, matching an expected European financial commitment. The European Investment Bank provides technical assistance.
Another major problem is that desalination is very energy-intensive, while the import of fuel to produce the necessary electricity is restricted by Israel and Egypt. Furthermore, revenues from drinking water tariffs are insufficient to cover the operating costs of the envisaged plant at the current tariff level.
=== Rainwater collection ===
In the West Bank, collection of rainwater is a very limited resource in addition to tanker truck water for Palestinians who lack connection to the water grid, notably in rural areas. However, Israeli authorities control even the collection of small quantities of rainwater. According to the 2009 report Troubled Waters by Amnesty International, some 180,000–200,000 Palestinians living in rural communities have no access to running water and the Israeli army often prevents them from even collecting rainwater. The Israeli army frequently destroys small rainwater harvesting cisterns built by Palestinian communities who have no access to running water, or prevents their construction.
=== Water reuse ===
In view of the limited availability of water resources, water reuse is seen as an important source. In the West Bank, Israel collects wastewater in two facilities in the Jordan Valley. Not only wastewater from Israelis in Jerusalem and settlements is collected, but also from Palestinians. All recycled water is used for irrigation in settlements in the Jordan Valley and northern Dead Sea area.
== Water use ==
=== Palestinians ===
As of 2007, the estimated average per capita supply in the West Bank had increased to about 98 liters per capita per day (98 lpcd). The estimated household use was 50 lpcd, with many households consuming as little as 20 lpcd, even if connected to the network. Due to the settlement of areas in the West Bank and the resultant fragmentation, movement of water from water-rich areas to Palestinian communities with water shortages is inhibited. Therefore, there are huge differences in water use across the eastern and southern West Bank. While the daily consumption in the Jericho district was 161 liters in 2009, and in Jericho city as much as 225 liters, it was less than 100 liters in other areas. In the central Jordan Valley it was about 60 liters. Inhabitants of a-Nu’ima, east of Jericho, had only 24 liters. Residents of villages that are cut off from the water supply have to buy water from water-tanker operators. All of the eastern West Bank, except the Israeli settlements and Jericho, is designated as a closed military area or as an area that for other reasons has access restrictions for Palestinians. In 2012, 90% of the small Palestinian communities living there had less than 60 lpcd. Over half of them, mostly Bedouin or herding communities, often cut off from their traditional wells, had less than 30 liters per person per day.
As of 2009, the Palestinian Water Authority (PWA) or municipalities provided about 70 lpcd in Gaza, but could not reach all households.
For 2012, the Palestinian Central Bureau of Statistics (PCBS) provided the following figures (domestic use):
* MCM=million cubic meters per year
** lpcd=liter per capita per day
1) excl. East Jerusalem
2) including commercial and industrial uses; hence, the actual supply and consumption rates per capita are less than the indicated numbers; 93.9 MCM=105.6 lpcd and 67.9 MCM=76.4 lpcd (for given population over 365 days)
In 2012, about 44% of the groundwater was for use in agriculture. Industrial use was only 3% in 2005.
The household use is less than the supply, which includes industrial, commercial and public consumption as well as losses. In the Gaza strip, for example, the estimated average per capita supply in 2005 was 152 lpcd, but due to high network losses, the actual water use was only 60% of it, or about 91 liters. The minimum quantity for domestic use recommended by the WHO is 100 lpcd.
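The unit conversions used throughout this section (MCM per year to lpcd, and supply to actual use after losses) can be sketched as follows; the service population is back-calculated from the "93.9 MCM = 105.6 lpcd" footnote above and is an inferred figure, not one stated in the source:

```python
# Sketch of the unit conversions used in this section.

def lpcd(mcm_per_year: float, population: int) -> float:
    # 1 MCM = 1e9 liters; lpcd = liters per capita per day
    return mcm_per_year * 1e9 / (population * 365)

population = round(93.9e9 / (105.6 * 365))  # ~2.44 million (inferred, not sourced)
print(f"{lpcd(93.9, population):.1f} lpcd")  # ~105.6

# Gaza 2005 figures from the text: 152 lpcd supplied, ~40% lost in the network
print(f"{152 * 0.60:.0f} lpcd actually used")  # ~91
```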
=== Israeli settlers ===
In 2008, the settlements in the Jordan Valley and northern Dead Sea area were allocated 44.8 million m3 (MCM) of water, 97.5 percent of which (43.7 MCM) was for agricultural use. Seventy percent of it was provided by Mekorot. According to Israeli figures, the household use of settlers in the Jordan Valley was 487 liters per capita per day (lpcd), and in the northern Dead Sea area even 727 lpcd. That is three to four times the household use of 165 liters in Israel. As the settlers in the eastern West Bank use nearly all of the water for agriculture, they in fact export water from the Palestinian Territories.
In 2009, settlers in Pnei Hever, Hebron District, consumed 194 liters per day; those in Efrat, east of Bethlehem, 217 liters.
=== Water use of Israelis versus Palestinians ===
According to the Palestinian Water Authority, the average Israeli consumption of water is 300 liters per person per day, more than 4 times the Palestinian use of 72 liters per day. Some Palestinian village communities live on even less water than the average Palestinian consumption, in some cases no more than 20 liters per person per day. According to the World Bank, water extractions per capita for West Bank Palestinians are about one quarter of those for Israelis, and have declined over the last decade. In 1999, Palestinians in the West Bank used only 190 lpcd from the West Bank resources, while the settlers used 870 lpcd and Israelis 1,000 lpcd. Israeli settlers in the West Bank thus used about 4.5 times the amount of water available to the Palestinians.
In 2008, the settlers in the Niran settlement, north of Jericho, used more than 5 times the amount of the nearby Palestinian village al-A’uja. The Argaman settlement, in the central Jordan Valley, used more than 5 times the amount of the adjacent Palestinian village a-Zubeidat. The household use in the Ro’i settlement, in the northern Jordan Valley, was per head 21 times that of the adjacent Bedouin community al-Hadidya, which is not connected to the regular water supply.
In 2009, the settlers in Efrat consumed, with 217 liters, three times the amount of the per capita use of 71 liters in the nearby Palestinian Bethlehem Governorate.
While many Palestinians living in rural communities have no access to running water, Israeli settlers who export their products have irrigated farms, lush gardens and swimming pools. The 450,000 settlers use as much or even more water than all 2.3 million Palestinians together. Many Palestinians have to buy water from Israel, of often dubious quality, delivered with tanker trucks at very high prices. Water tankers are forced to take long detours to avoid Israeli military checkpoints and roads which are out of bounds to Palestinians, resulting in steep increases in the price of water.
== Infrastructure ==
=== Connection to the water grid ===
According to the Joint Monitoring Program (JMP) of the World Health Organization (WHO) and UNICEF, about 90% of the Palestinians in the Territories had access to an improved water source.
A survey carried out by the Palestinian Central Bureau of Statistics (PCBS) found that the number of households in the Palestinian territories connected to the water network was 91.8% in 2011. In the West Bank, 89.4% of the households were connected while the connection share in the Gaza Strip was 96.3%.
According to a 2004 study by Karen Assaf, there are low service levels especially in small villages and refugee camps. The gap between urban and rural areas concerning water supply house connections may be due to the fact that available water resources are not accessible to the Palestinian actors in many cases. In 42% of the localities, water supply was uninterrupted; 19% received it at least partially. Furthermore, about 40% of all served localities suffer from water shortages.
The Euro-Mediterranean Water Information System (EMWIS) states that continuity of supply in the Palestinian territories is 62.8%.
=== Water cisterns ===
Due to unreliable water delivery, virtually every Palestinian house has at least one water cistern to store water, and most have several. In the West Bank, the management of water resources is subject to debate due to military regulations.
=== Drinking water quality ===
Data of a survey carried out in 2011 revealed that 47.2% of the households in the Palestinian Territories consider the water quality as good. The share is significantly higher in the West Bank (70.9%) than in the Gaza Strip (5.3%). Compared to an earlier study, the results indicate that the percentage of households which consider the water quality as good decreased from 67.5% in 1999.
A 2013 water analysis study conducted in the central region of Gaza revealed that 74% of the water samples from distribution points had varying degrees of microbiological contamination. Specifically, 26% of the samples had a moderate contamination level, between 11 and 100 colonies per 100 ml, and 13% had high levels of contamination, exceeding 100 total coliform colonies per 100 ml.
=== Water losses and sewage problems ===
In 2012, the losses of water in the network were estimated at some 28% in the West Bank and as much as half of the supplied amount in Gaza. In the West Bank, construction and maintenance of water and sewage infrastructure are problematic. The Palestinian areas are enclaves in the Israeli-controlled Area C. Therefore, all projects are subject to approval by the Joint Water Committee and the Israeli army. In Gaza, the infrastructure is subject to periodic large-scale destruction by Israeli attacks, such as in the 2004 Raid on Beit Hanoun or the 2008/2009 Operation Cast Lead. The groundwater in Gaza is highly contaminated by leaked sewage.
The high water loss rates are ascribed to illegal connections, worn-out pipe systems in the networks, and utility dysfunction. Especially in the Gaza Strip, high losses are caused by illegal connections. Illegal use of water is often the result of water shortages and insufficient supply. Furthermore, the water supply utilities suffer from grave deficiencies causing high leakage rates and low water pressure in the system, ascribed both to institutional weakness and to the restrictions posed by the occupation on the development of the water and sanitation sectors, including the Gaza blockade.
=== Effects of the Gaza war and the Gaza blockade ===
Following the 2008–2009 Israel–Gaza conflict, the World Bank reported severe damages to the water and sanitation infrastructure in the Gaza Strip. Almost all sewage and water pumps were out of operation due to a lack of electricity and fuel. Spare parts and other maintenance supplies were in urgent need to be replenished. This situation resulted in a serious shortage of water and sewage overflows in urban areas, posing a threat to public health.
The Israeli blockade of the Gaza Strip impedes the provision of spare parts and thus contributes to exacerbate the problem. Several aid agencies and the top United Nations humanitarian official in the Palestinian territories therefore demanded the immediate opening of crossings. According to the United Nations, about 60% of the population in the Gaza Strip did not have access to continuous water supply in 2009.
== Effects of the Gaza war ==
Before the Gaza war, 26 percent of diseases observed in Gaza were water-related. On the eve of the war (6 October 2023), Gaza had 5 wastewater treatment plants and 65 sewage pumping stations. Only one of Gaza's three desalination plants was operational, given the ongoing Israeli blockade of fuel and electricity.
In November, Israeli airstrikes partially destroyed infrastructure providing energy for the Gaza Central wastewater treatment plant, affecting 1 million people. The airstrikes led to a 95% reduction of the water resources available to the population of the Strip, with the consequence that Gazans were limited to 3 litres per day, 12 litres under the UN emergency limit. By late April 2024, 63% of all water and sanitation infrastructure in Gaza had been significantly damaged, with the Rafah Governorate an exception at 6%. Following Israel's Rafah offensive, damage to the governorate's water-related infrastructure rose 24.5%.
== Wastewater treatment ==
About 90% of the Palestinians in the Territories had access to improved sanitation in 2008. Cesspits were used by 39% of households, while access to the sewer network increased to 55% in 2011, up from 39% in 1999.
According to a study from 2001, of the 110,000 m3 of wastewater produced per day in the Gaza Strip, 68,000 m3 was treated, and 20% of the treated wastewater was reused. The World Bank reported in 2009 that the three existing wastewater treatment plants work discontinuously. Damaged sewage infrastructure often cannot be repaired due to the ongoing Israeli blockade, which leads to delays in repairs and a lack of the electricity and fuel that would be necessary to operate the wastewater treatment facilities. The United Nations estimated that since January 2008, 50,000 to 80,000 cubic meters of untreated and partially treated wastewater have been discharged into the Mediterranean Sea per day, threatening the environment of the region.
In the West Bank, only 13,000 out of 85,000 m3 of wastewater were treated in five municipal wastewater treatment plants in Hebron, Jenin, Ramallah, Tulkarem and Al-Bireh. The Al Bireh plant was constructed in 2000 with funding by the German aid agency KfW. According to the World Bank report, the other four plants perform poorly concerning efficiency and quality.
== Responsibility for water supply and sanitation ==
=== Relevant laws ===
The current sector legislation was established after the 1995 Oslo Accords, with a 1996 by-law on the Palestinian Water Authority (PWA), a 1998 Water Resources Management Strategy and the Water Law of 2002. The Water Law clarifies the responsibilities of the PWA and establishes a National Water Council (NWC) with the task of setting national water policies. It also establishes "national water utilities".
=== Policy and regulation ===
General water sector policies are set by the Palestinian cabinet of ministries and the National Water Council (NWC). The council has the authority to suspend or dismantle the services of the board of directors of the regional water and wastewater services providers. The members of the council include the main Palestinian ministries. The Palestinian Water Authority (PWA) acts as regulatory authority, responsible for the legislation, monitoring and human resources development in the sector. The PWA is also in charge of water resources management. It has the mandate to carry out regular inspections and to keep a register of all water related data and information. The authority shares responsibility for irrigation with the Ministry of Agriculture (MoA) and for environmental protection with the Environment Quality Authority (EQA).
==== The Joint Water Committee ====
As part of the 1995 Interim Agreement, a Joint Water Committee (JWC) has been established between Israel and the Palestinian territories. The JWC was expected to implement the regulations of article 40 of the agreement which concern water and sanitation. The committee is composed of an equal number of participants from the two parties and all decisions need consensus, which means that each side has a veto. The JWC is not independent from Israel and the PA; instead, decisions can be passed to a higher political level. Jägerskog reports several delays concerning the implementation of Palestinian project proposals within the committee, partly due to missing Palestinian funding, time-consuming approval procedures, and hydrological and political reasons.
=== Service provision ===
The Water Law No. 3 provided the legal basis for the establishment of "national water utilities". The PWA's goal is to establish four regional utilities, one in Gaza and three in the West Bank (North, Center and South). In reality, however, as of 2011 only the regional utility for Gaza had been established.
West Bank. Water services in the West Bank continue to be provided by municipalities, two multi-municipal utilities and village councils. The largest and oldest multi-municipal utility in the West Bank is the Jerusalem Water Undertaking (JWU) in the Ramallah and Al-Bireh area. JWU, founded in 1966 when the West Bank was still part of Jordan, serves the two cities as well as 10 smaller towns, more than 43 villages and 5 refugee camps. A second much smaller multi-municipal utility is the Water Supply and Sewerage Authority (WSSA) that serves Bethlehem and the neighboring towns Beit Jala and Beit Sahour. In other cities such as Tulkarem, Qalqilya, Nablus, Jenin, Jericho and Hebron as well as in small towns, municipalities provide water and - if existing - sewer services. Both utilities and municipalities depend to a varying extent on bulk water supply by the Israeli water company Mekorot, which delivers about 80% of the water used by JWU. In rural areas, water is provided by Village Council water departments. In the North-Eastern Jenin area a Joint Service Council (JSC) formed by six villages provides water.
Gaza strip. In all 25 municipalities in the Gaza strip, water provision is the responsibility of the Coastal Municipalities Water Utility (CMWU). However, the utility is still in the process of being set up and of assuming its legal tasks. The intended procedure is that the municipalities receive technical assistance from the CMWU and gradually transfer their staff and assets to it. According to the World Bank, this model led to some improvements, like faster repair of leakage and economies of scale. However, the plan is far from being fully implemented. The model experienced serious problems which are mainly caused by the unstable political conditions in the Gaza strip since 2008, including differences between municipalities governed by Hamas and Fatah, so that some municipalities refused to transfer their assets and staff to the CMWU.
=== Non-governmental organizations and universities ===
Non-governmental organisations (NGOs) are very active in the field of water and wastewater treatment and reuse. One NGO network is the Palestinian Environmental NGOs Network (PENGON) that was initiated after the 2000 al-Aqsa Intifada. It has more than 20 members, including NGOs, universities and research centers.
=== Private sector participation ===
Two management contracts were awarded for Gaza in 1996 and for the Bethlehem area in 1999. In 2002, soon after the outbreak of the Second Intifada, the Bethlehem contract was terminated and the Gaza contract expired.
In Gaza, a four-year management contract was awarded to a joint venture of Lyonnaise des Eaux (now Suez) and Khatib and Alami in 1996. The contract was entirely funded by a US$25 million World Bank credit. According to a 1998 World Bank paper, water quality improved after the contract became active. Furthermore, water losses fell and water consumption and revenues rose. However, actual responsibility for service provision remained with municipalities. When the contract ended in 2000, it was renewed twice for one year, until 2002. The World Bank reports that from 1996 to 2002, 16,000 illegal connections were identified and more than 1,900 km of pipes were checked for leakage. Moreover, 22,000 connections were replaced, more than 20 km of pipes were repaired and more than 30,000 water meters were replaced. The amount of non-revenue water (NRW) decreased to about 30%. After the end of the contract, the Coastal Municipalities Water Utility (CMWU) was established to manage water and sanitation in the Gaza strip.
Another management contract was awarded in 1999, covering the water supply of about 600,000 people in the governorates of Bethlehem and Hebron, with a focus on the former. The contract was awarded to a joint venture of the French Vivendi and the Lebanese-Palestinian company Khatib and Alami. Among other things, it included the improvement of infrastructure and billing procedures. The contract was financed with a credit of US$21 million, while the European Investment Bank (EIB) provided US$35.7 million. Mainly due to the continuing hostilities and the premature cancellation of EIB support, the World Bank rates the total outcome of the project as unsatisfactory. According to the World Bank, non-revenue water was reduced from about 50% to 24% in Hebron and to only 10% in Bethlehem by 2004. Illegal connections were eliminated in Hebron and more than halved in Bethlehem.
== Efficiency ==
Almost half (44%) of the produced water is non-revenue water (NRW), water which is not billed due to leakage or water theft. The share varies widely, from 25% in Ramallah to 65% in Jericho. In the Gaza Strip, NRW is estimated at about 45%, of which 40 percentage points are caused by physical losses and 5 by unregistered connections and meter losses. For comparison, leakage from Israeli municipal pipes amounts to about 10% of water usage.
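A minimal sketch of the NRW definition used above, with illustrative volumes:

```python
# Non-revenue water (NRW): the share of produced water that is never billed.

def nrw_share(produced_m3: float, billed_m3: float) -> float:
    return (produced_m3 - billed_m3) / produced_m3

# A utility producing 100 units and billing 56 shows the 44% average NRW
# reported above; the volumes themselves are made up for illustration.
print(f"{nrw_share(100.0, 56.0):.0%}")  # 44%
```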
== Financial aspects ==
=== Tariffs and cost recovery ===
A water-pricing policy is under preparation. Currently, increasing block tariffs are applied in the Palestinian territories. There is no price differentiation according to purpose (residential, commercial, industrial). The average cost of water supply is $22 per month ($25 in the West Bank and $10 in Gaza). Karen Assaf reported an average tariff of US$1.20 (5 NIS) per m³ in 2004. In areas where piped water is not available, water is purchased from water tankers at prices five to six times higher than for piped water. The long-term objective of recovering water production costs, or at least operation and maintenance costs, has still not been reached.
The following table gives an overview of the distribution of households in the Palestinian Territories by the cost of monthly consumed water in 2003.
Bill collection rates average 50% in the West Bank and only 20% in Gaza.
=== Investment and financing ===
The PWA issues periodic reports including information about projects and donor contributions. In the West Bank, the total investment cost of water projects from 1996 to 2002 amounted to about US$500 million, out of which 150 million had already been spent on completed projects. The costs of ongoing projects were US$300 million, and the remaining US$50 million were committed to future projects. Out of the total cost of US$500 million, 200 million were invested in the water supply sector and 130 million in the wastewater sector. The remaining financial resources were spent on water conservation (80 million), institutional and capacity building (30 million), storm water, water resources and irrigation systems.
At the same time, the total investment costs of water projects in the Gaza Strip were about US$230 million, out of which most was spent on ongoing projects (US$170 million), while the remaining US$60 million went to completed projects. About 90% of these investments were financed by grants and 10% by loans from the European Investment Bank (EIB) and the World Bank. US$100 million was invested in the water sector and 40 million in the wastewater sector.
It is estimated that a future investment of about US$1.1 billion for the West Bank and US$0.8 billion for the Gaza Strip is needed for the planning period from 2003 to 2015.
== External cooperation ==
About 15 bilateral and multilateral donor agencies support the Palestinian water sector. In 2006, the PWA complained that coordination between the PWA and donors was "still not successful" and that some donors and NGOs were "bypassing" the PWA. Donor coordination mechanisms in the sector include the Emergency Water, Sanitation and Hygiene group (EWASH), comprising UN agencies and NGOs, as well as the Emergency Water Operations Center (EWOC) led by USAID. Both were established to coordinate the reconstruction after the 2002 Israeli incursions into the West Bank.
=== European Union ===
The European Investment Bank (EIB) provided loan funding for refurbishing water reservoirs and was expected to fund the construction of the south regional wastewater treatment plant and a section of the north–south municipal water carrier in Gaza. Within the framework of the Facility for Euro-Mediterranean Investment and Partnership (FEMIP), the EIB financed operations with more than 137 million Euro in the West Bank and Gaza between 1995 and 2010. 10% of the funds were allocated to the water and environment sector.
=== France ===
The French development agency, Agence française de développement (AFD), supports several projects in the Palestinian territories. For example, AFD finances the connection of densely populated areas of Rafah to the sewage system, the construction of water pipes and reservoirs in Hebron, and the construction of a water distribution network in six villages in the district of Jenin.
=== Germany ===
German development cooperation has been engaged in the water and sanitation sector in the Palestinian Territories since 1994. It consists of financial cooperation through KfW and technical cooperation through GIZ, both working on behalf of the German Ministry for Economic Cooperation and Development.
KfW is engaged in Nablus, Tulkarem, Salfit, Ramallah/Al-Bireh, Jenin and Gaza City. The water supply activities focus on the reduction of non-revenue water so that the available water resources can be used more efficiently. One successful example of water loss reduction is the first phase of the KfW-supported program in Nablus: the frequency of supply for 8,000 inhabitants in the Rafidia neighbourhood was increased from every 4 days to every 2–3 days. This was achieved by reducing distribution losses from 40% to 30%. Sanitation activities include the construction of sewer networks and wastewater treatment. The town of Al-Bireh had the only functioning wastewater treatment plant in the West Bank in 2009. The plant, which was funded by KfW, was commissioned in 2000 and operates satisfactorily despite the challenging environment of the West Bank. The construction of wastewater treatment plants in Gaza City, western Nablus, Salfit and the Tulkarem region, however, was substantially delayed as of 2009. Until 2008, new financial cooperation commitments were granted in the form of projects that identified specific investments at an early stage. This approach changed in 2008 with the approval of a new KfW-supported water and sanitation program for the West Bank and Gaza, which is open for proposals from small and medium-sized towns that comply with certain selection criteria. The program's main focus is on water loss reduction.
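The arithmetic behind water loss reduction is straightforward and is sketched below; the loss percentages are the ones quoted for the Rafidia programme, while the network input volume is an arbitrary round number used only for illustration.

```python
# For the same volume put into the network, lowering distribution losses
# raises the share that actually reaches customers.

network_input = 100_000  # m3 per month fed into the network (assumed)

delivered_before = network_input * (1 - 0.40)  # 40% losses -> 60,000 m3
delivered_after  = network_input * (1 - 0.30)  # 30% losses -> 70,000 m3

gain = delivered_after / delivered_before - 1
print(f"{gain:.0%} more water delivered")  # -> about 17% more
```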
Outcomes of technical cooperation include improved performance of the Jerusalem Water Undertaking, the utility serving Ramallah, as a result of capacity building and training. Employees of the Al-Bireh municipality were trained in operating the town's wastewater treatment plant. Wells have been drilled or rehabilitated in the Nablus and Ramallah area, supplying 120,000 people with drinking water. GTZ also supported the creation of the National Water Council in 2006. Furthermore, at least 6,000 schoolchildren have been taught water conservation measures.
=== Sweden ===
The Swedish International Development Agency (SIDA) has participated in the development of feasibility and design studies for the north regional wastewater treatment plant and associated sewerage collection systems in Gaza.
=== United States Agency for International Development (USAID) ===
USAID is a leading development agency within the sector in the Palestinian territories. Its work includes the repair and rehabilitation of small-scale water and sanitation facilities, rehabilitation of water and sewage networks, and replacement of water pumps. In addition, USAID helps communities without access to piped water through water supply via tankers. In rural areas, the agency provides water collection cisterns to poor families. USAID also helps connect households to water supply and install rainwater drainage pipes.
On its official web page, USAID announced that it would provide more than 60 km of water pipes in order to supply ten additional villages in the southern Nablus area with potable water. By 2009, USAID had improved water supply for more than 19,500 households, while about 30,000 households gained improved sanitation and connections to sewage networks.
An example of USAID's work in the Palestinian territories is the Emergency Water and Sanitation and Other Infrastructure Program. Between 2008 and 2013, USAID financed the second phase of the program, which is meant to address the urgent need for adequate water and sanitation systems, e.g. by providing emergency relief and rehabilitating existing systems.
Another program funded by USAID from 2008 to 2013 is the Infrastructure Needs Program. It focuses not only on water but also finances other infrastructure that is critical for economic growth. With regard to water, several achievements were accomplished in 2010: for example, a water transmission line, water distribution systems, reservoirs, and steel water pipes were built.
=== World Bank ===
Under the Second Gaza Water and Sanitation Project, active from 2005 to 2010, the World Bank provided US$20 million. One objective of the project was to develop a sustainable institutional structure for the water and sanitation sector. This was to be achieved by supporting the establishment of a Coastal Water Utility owned by the local governments and through increased private sector participation. In addition, the project sought to strengthen the regulatory and institutional capacity of the PWA. The second objective was the improvement of water and sanitation services through rehabilitation, upgrading and expansion of the existing facilities.
In January 2008, another US$5 million for the project was approved by the bank. The additional funding contributes to financing the institutional strengthening of the Coastal Municipal Water Utility, which has suffered from a very difficult security situation, and covers operation and maintenance costs of the water and sanitation facilities in the Gaza Strip for one additional year. One aim is to reduce non-revenue water from 45% to 35%, accompanied by an increase in revenues and customer satisfaction. The funding also provides for water meters, chemicals for water treatment and disinfection, and the rehabilitation of water production wells.
In addition, the bank provides US$12 million for the North Gaza Emergency Sewage Treatment (NGEST) Project, which seeks to mitigate the health and environmental risks arising from the Beit Lahia Wastewater Treatment Plant. Effluent from the treatment plant is discharged into a lake, putting the surrounding communities at risk. The objective of the project is to provide a long-term solution to wastewater treatment in the northern Gaza governorate. To achieve this, the lake is being drained and new infiltration basins are being built in another location, to which the effluent of the lake will be transferred. A new wastewater treatment plant with improved quality standards will be built, covering the whole northern governorate.
In 2011, the World Bank approved three water and sanitation projects in the West Bank and Gaza. The Water Sector Capacity Building Project is supposed to support the Palestinian Water Authority by providing e.g. advisory support, technical assistance and staff training. The objective is to strengthen the PWA's capacity of monitoring, planning and regulating water sector development in the Palestinian territories.
Furthermore, the Water Supply and Sanitation Improvements for West Bethlehem Villages Project aims at the preparation of a feasibility study and a project concept for wastewater management and reuse in selected rural communities. Other components shall strengthen the capacity of the Water and Wastewater Department and increase the reliability of an existing water supply system.
Finally, the bank approved the third additional financing of the Second Gaza Emergency Water Project. Besides the capacity improvement of both the PWA and the coastal municipalities' water utility, the project shall ensure the management, operation and delivery of wastewater and water services.
== See also ==
Israeli expropriation of Palestinian springs in the West Bank
Israeli–Palestinian Joint Water Committee
Water politics in the Jordan River basin
Water politics in the Middle East
Water Rights in Israel-Palestine
Water, Sanitation and Hygiene Monitoring Program
Water supply and sanitation in Israel
== Notes ==
=== Citations ===
=== Sources ===
Troubled Waters – Palestinians Denied Fair Access to Water. Amnesty International, October 2009.
Dispossession and Exploitation: Israel's Policy in the Jordan Valley and Northern Dead Sea. B'Tselem, May 2011.
Assessment of Restrictions on Palestinian Water Sector Development, Report No. 47657-GZ. World Bank, 20 April 2009.
Water for Life: Water, Sanitation and Hygiene Monitoring Program (WaSH MP) 2007/2008. Palestinian Hydrology Group (PHG).
The Truth Behind the Palestinian Water Libels, by Prof. Haim Gvirtzman. Begin–Sadat Center for Strategic Studies, 24 February 2014.
== Further reading ==
Alkasseh, Jaber (2018). "Contribution of Major Factors Affecting Non-Revenue Water to Water Supply Network in Gaza Strip, Palestine". Israa University Journal of Applied Science. 2: 1–16. doi:10.52865/SNMP9081.
Fergusson, James (2023). In Search of the River Jordan: A Story of Palestine, Israel and the Struggle for Water. New Haven, Conn.: Yale University Press. ISBN 978-0-300-26270-4. LCCN 2023930267.
Nofal, Issam; Barakat, Tahseen (2001). "Desertification in the West Bank and Gaza Strip: Present Status and Future Aspirations". In Pasternak, Dov; Schlissel, Arnold (eds.). Combating Desertification with Plants. Boston, MA: Springer US. doi:10.1007/978-1-4615-1327-8. ISBN 978-1-4613-5499-4.
== External links ==
Palestinian Water Authority
The Palestinian Academic Society for the Study of International Affairs (PASSIA)
Palestinian Central Bureau of Statistics
Palestinian Economic Council for Development and Reconstruction (PECDAR)
Euro-Mediterranean Information System on know-how in the Water sector (EMWIS)
Palestinian Hydrology Group
Water For One People Only: Discriminatory Access and ‘Water-Apartheid’ in the OPT. Al-Haq, 9 April 2013
The Water and Sanitation Hygiene Monitoring Program Water For Life Campaign
The Israeli 'watergate' scandal: The facts about Palestinian water. Amira Hass, Haaretz, 16 February 2014 | Wikipedia/Wastewater_treatment_in_Palestine |
Industrial wastewater treatment describes the processes used for treating wastewater that is produced by industries as an undesirable by-product. After treatment, the treated industrial wastewater (or effluent) may be reused or released to a sanitary sewer or to a surface water in the environment. Some industrial facilities generate wastewater that can be treated in sewage treatment plants. Most industrial facilities, such as petroleum refineries and chemical and petrochemical plants, have their own specialized facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. This applies to industries that generate wastewater with high concentrations of organic matter (e.g. oil and grease), toxic pollutants (e.g. heavy metals, volatile organic compounds) or nutrients such as ammonia. Some industries install a pre-treatment system to remove some pollutants (e.g., toxic compounds), and then discharge the partially treated wastewater to the municipal sewer system.
Most industries produce some wastewater. Recent trends have been to minimize such production or to recycle treated wastewater within the production process. Some industries have been successful at redesigning their manufacturing processes to reduce or eliminate pollutants. Sources of industrial wastewater include battery manufacturing, chemical manufacturing, electric power plants, food industry, iron and steel industry, metal working, mines and quarries, nuclear industry, oil and gas extraction, petroleum refining and petrochemicals, pharmaceutical manufacturing, pulp and paper industry, smelters, textile mills, industrial oil contamination, water treatment and wood preserving. Treatment processes include brine treatment, solids removal (e.g. chemical precipitation, filtration), oils and grease removal, removal of biodegradable organics, removal of other organics, removal of acids and alkalis, and removal of toxic materials.
== Types ==
Industrial facilities may generate the following industrial wastewater flows:
Manufacturing process wastestreams, which can include conventional pollutants (i.e. controllable with secondary treatment systems), toxic pollutants (e.g. solvents, heavy metals), and other harmful compounds such as nutrients
Non-process wastestreams: boiler blowdown and cooling water, which produce thermal pollution and other pollutants
Industrial site drainage, generated by manufacturing facilities, service industries, and energy and mining sites
Wastestreams from the energy and mining sectors: acid mine drainage, produced water from oil and gas extraction, radionuclides
Wastestreams that are by-products of treatment or cooling processes: backwashing (water treatment), brine.
== Contaminants ==
== Industrial sectors ==
The specific pollutants generated and the resultant effluent concentrations can vary widely among the industrial sectors.
=== Battery manufacturing ===
Battery manufacturers specialize in fabricating small devices for electronics and portable equipment (e.g., power tools), or larger, high-powered units for cars, trucks and other motorized vehicles. Pollutants generated at manufacturing plants include cadmium, chromium, cobalt, copper, cyanide, iron, lead, manganese, mercury, nickel, silver, zinc, and oil and grease.
=== Centralized waste treatment ===
A centralized waste treatment (CWT) facility processes liquid or solid industrial wastes generated by off-site manufacturing facilities. A manufacturer may send its wastes to a CWT plant, rather than perform treatment on site, due to constraints such as limited land availability, difficulty in designing and operating an on-site system, or limitations imposed by environmental regulations and permits. A manufacturer may determine that using a CWT is more cost-effective than treating the waste itself; this is often the case where the manufacturer is a small business.
CWT plants often receive wastes from a wide variety of manufacturers, including chemical plants, metal fabrication and finishing; and used oil and petroleum products from various manufacturing sectors. The wastes may be classified as hazardous, have high pollutant concentrations or otherwise be difficult to treat. In 2000 the U.S. Environmental Protection Agency published wastewater regulations for CWT facilities in the US.
=== Chemical manufacturing ===
==== Organic chemicals manufacturing ====
The specific pollutants discharged by organic chemical manufacturers vary widely from plant to plant, depending on the types of products manufactured, such as bulk organic chemicals, resins, pesticides, plastics, or synthetic fibers. Some of the organic compounds that may be discharged are benzene, chloroform, naphthalene, phenols, toluene and vinyl chloride. Biochemical oxygen demand (BOD), which is a gross measurement of a range of organic pollutants, may be used to gauge the effectiveness of a biological wastewater treatment system, and is used as a regulatory parameter in some discharge permits. Metal pollutant discharges may include chromium, copper, lead, nickel and zinc.
==== Inorganic chemicals manufacturing ====
The inorganic chemicals sector covers a wide variety of products and processes, although an individual plant may produce a narrow range of products and pollutants. Products include aluminum compounds; calcium carbide and calcium chloride; hydrofluoric acid; potassium compounds; borax; chrome and fluorine-based compounds; cadmium and zinc-based compounds. The pollutants discharged vary by product sector and individual plant, and may include arsenic, chlorine, cyanide, fluoride; and heavy metals such as chromium, copper, iron, lead, mercury, nickel and zinc.
=== Electric power plants ===
Fossil-fuel power stations, particularly coal-fired plants, are a major source of industrial wastewater. Many of these plants discharge wastewater with significant levels of metals such as lead, mercury, cadmium and chromium, as well as arsenic, selenium and nitrogen compounds (nitrates and nitrites). Wastewater streams include flue-gas desulfurization, fly ash, bottom ash and flue gas mercury control. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream.
Ash ponds, a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation, biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. Technological advancements in ion-exchange membranes and electrodialysis systems have enabled high-efficiency treatment of flue-gas desulfurization wastewater to meet recent EPA discharge limits. The treatment approach is similar for other highly scaling industrial wastewaters.
=== Food industry ===
Wastewater generated from agricultural and food processing operations has distinctive characteristics that set it apart from common municipal wastewater managed by public or private sewage treatment plants throughout the world: it is biodegradable and non-toxic, but has high Biological Oxygen Demand (BOD) and suspended solids (SS). The constituents of food and agriculture wastewater are often complex to predict, due to the differences in BOD and pH in effluents from vegetable, fruit, and meat products and due to the seasonal nature of food processing and post-harvesting.
Processing of food from raw materials requires large volumes of high grade water. Vegetable washing generates water with high loads of particulate matter and some dissolved organic matter. It may also contain surfactants and pesticides.
Aquaculture facilities (fish farms) often discharge large amounts of nitrogen and phosphorus, as well as suspended solids. Some facilities use drugs and pesticides, which may be present in the wastewater.
Dairy processing plants generate conventional pollutants (BOD, SS).
Animal slaughter and processing produces organic waste from body fluids, such as blood, and gut contents. Pollutants generated include BOD, SS, coliform bacteria, oil and grease, organic nitrogen and ammonia.
Processing food for sale produces wastes generated from cooking which are often rich in plant organic material and may also contain salt, flavourings, colouring material and acids or alkali. Large quantities of fats, oil and grease ("FOG") may also be present, which in sufficient concentrations can clog sewer lines. Some municipalities require restaurants and food processing businesses to use grease interceptors and regulate the disposal of FOG in the sewer system.
Food processing activities such as plant cleaning, material conveying, bottling, and product washing create wastewater. Many food processing facilities require on-site treatment before operational wastewater can be land applied or discharged to a waterway or a sewer system. High suspended solids levels of organic particles increase BOD and can result in significant sewer surcharge fees. Sedimentation, wedge wire screening, or rotating belt filtration (microscreening) are commonly used methods to reduce suspended organic solids loading prior to discharge.
=== Glass manufacturing ===
Glass manufacturing wastes vary with the type of glass manufactured, which includes fiberglass, plate glass, rolled glass, and glass containers, among others. The wastewater discharged by glass plants may include ammonia, BOD, chemical oxygen demand (COD), fluoride, lead, oil, phenol, and/or phosphorus. The discharges may also be highly acidic (low pH) or alkaline (high pH).
=== Iron and steel industry ===
The production of iron from its ores involves powerful reduction reactions in blast furnaces. Cooling waters are inevitably contaminated with products especially ammonia and cyanide. Production of coke from coal in coking plants also requires water cooling and the use of water in by-products separation. Contamination of waste streams includes gasification products such as benzene, naphthalene, anthracene, cyanide, ammonia, phenols, cresols together with a range of more complex organic compounds known collectively as polycyclic aromatic hydrocarbons (PAH).
The conversion of iron or steel into sheet, wire or rods requires hot and cold mechanical transformation stages frequently employing water as a lubricant and coolant. Contaminants include hydraulic oils, tallow and particulate solids. Final treatment of iron and steel products before onward sale into manufacturing includes pickling in strong mineral acid to remove rust and prepare the surface for tin or chromium plating or for other surface treatments such as galvanisation or painting. The two acids commonly used are hydrochloric acid and sulfuric acid. Wastewaters include acidic rinse waters together with waste acid. Although many plants operate acid recovery plants (particularly those using hydrochloric acid), where the mineral acid is boiled away from the iron salts, there remains a large volume of highly acidic ferrous sulfate or ferrous chloride to be disposed of. Many steel industry wastewaters are contaminated by hydraulic oil, also known as soluble oil.
=== Metal working ===
Many industries perform work on metal feedstocks (e.g. sheet metal, ingots) as they fabricate their final products. The industries include automobile, truck and aircraft manufacturing; tools and hardware manufacturing; electronic equipment and office machines; ships and boats; appliances and other household products; and stationary industrial equipment (e.g. compressors, pumps, boilers). Typical processes conducted at these plants include grinding, machining, coating and painting, chemical etching and milling, solvent degreasing, electroplating and anodizing. Wastewater generated from these industries may contain heavy metals (common heavy metal pollutants from these industries include cadmium, chromium, copper, lead, nickel, silver and zinc), cyanide and various chemical solvents, oil, and grease.
=== Mines and quarries ===
The principal waste-waters associated with mines and quarries are slurries of rock particles in water. These arise from rainfall washing exposed surfaces and haul roads and also from rock washing and grading processes. Volumes of water can be very high, especially rainfall related arisings on large sites. Some specialized separation operations, such as coal washing to separate coal from native rock using density gradients, can produce wastewater contaminated by fine particulate haematite and surfactants. Oils and hydraulic oils are also common contaminants.
Wastewater from metal mines and ore recovery plants is inevitably contaminated by the minerals present in the native rock formations. Following crushing and extraction of the desirable materials, undesirable materials may enter the wastewater stream. For metal mines, this can include unwanted metals such as zinc and other materials such as arsenic. Extraction of high value metals such as gold and silver may generate slimes containing very fine particles where physical removal of contaminants becomes particularly difficult.
Additionally, the geologic formations that harbour economically valuable metals such as copper and gold very often consist of sulphide-type ores. The processing entails grinding the rock into fine particles and then extracting the desired metal(s), with the leftover rock being known as tailings. These tailings contain a combination of not only undesirable leftover metals, but also sulphide components which eventually form sulphuric acid upon the exposure to air and water that inevitably occurs when the tailings are disposed of in large impoundments. The resulting acid mine drainage, which is often rich in heavy metals (because acids dissolve metals), is one of the many environmental impacts of mining.
=== Nuclear industry ===
The waste production from the nuclear and radio-chemicals industry is dealt with as Radioactive waste.
Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium of S. spinosus, suggesting that it may be appropriate for use in treating nuclear wastewater.
=== Oil and gas extraction ===
Oil and gas well operations generate produced water, which may contain oils, toxic metals (e.g. arsenic, cadmium, chromium, mercury, lead), salts, organic chemicals and solids. Some produced water contains traces of naturally occurring radioactive material. Offshore oil and gas platforms also generate deck drainage, domestic waste and sanitary waste. During the drilling process, well sites typically discharge drill cuttings and drilling mud (drilling fluid).
=== Petroleum refining and petrochemicals ===
Pollutants discharged at petroleum refineries and petrochemical plants include conventional pollutants (BOD, oil and grease, suspended solids), ammonia, chromium, phenols and sulfides.
=== Pharmaceutical manufacturing ===
Pharmaceutical plants typically generate a variety of process wastewaters, including solvents, spent acid and caustic solutions, water from chemical reactions, product wash water, condensed steam, blowdown from air pollution control scrubbers, and equipment washwater. Non-process wastewaters typically include cooling water and site runoff. Pollutants generated by the industry include acetone, ammonia, benzene, BOD, chloroform, cyanide, ethanol, ethyl acetate, isopropanol, methylene chloride, methanol, phenol and toluene. Treatment technologies used include advanced biological treatment (e.g. activated sludge with nitrification), multimedia filtration, cyanide destruction (e.g. hydrolysis), steam stripping and wastewater recycling.
=== Pulp and paper industry ===
Effluent from the pulp and paper industry is generally high in suspended solids and BOD. Plants that bleach wood pulp for paper making may generate chloroform, dioxins (including 2,3,7,8-TCDD), furans, phenols, and chemical oxygen demand (COD). Stand-alone paper mills using imported pulp may only require simple primary treatment, such as sedimentation or dissolved air flotation. Increased BOD or COD loadings, as well as organic pollutants, may require biological treatment such as activated sludge or upflow anaerobic sludge blanket reactors. For mills with high inorganic loadings like salt, tertiary treatments may be required, either general membrane treatments like ultrafiltration or reverse osmosis or treatments to remove specific contaminants, such as nutrients.
=== Smelters ===
The pollutants discharged by nonferrous smelters vary with the base metal ore. Bauxite smelters generate phenols but typically use settling basins and evaporation to manage these wastes, with no need to routinely discharge wastewater. Aluminum smelters typically discharge fluoride, benzo(a)pyrene, antimony and nickel, as well as aluminum. Copper smelters typically generate cadmium, lead, zinc, arsenic and nickel, in addition to copper, in their wastewater. Lead smelters discharge lead and zinc. Nickel and cobalt smelters discharge ammonia and copper in addition to the base metals. Zinc smelters discharge arsenic, cadmium, copper, lead, selenium and zinc.
Typical treatment processes used in the industry are chemical precipitation, sedimentation and filtration.
=== Textile mills ===
Textile mills, including carpet manufacturers, generate wastewater from a wide variety of processes, including cleaning and finishing, yarn manufacturing and fabric finishing (such as bleaching, dyeing, resin treatment, waterproofing and flameproofing). Pollutants generated by textile mills include BOD, SS, oil and grease, sulfide, phenols and chromium. Insecticide residues in fleeces are a particular problem in treating waters generated in wool processing. Animal fats may be present in the wastewater, which, if not contaminated, can be recovered for the production of tallow or for further rendering.
Textile dyeing plants generate wastewater that contains synthetic (e.g., reactive dyes, acid dyes, basic dyes, disperse dyes, vat dyes, sulphur dyes, mordant dyes, direct dyes, ingrain dyes, solvent dyes, pigment dyes) and natural dyestuffs, gum thickener (guar) and various wetting agents, pH buffers and dye retardants or accelerators. Following treatment with polymer-based flocculants and settling agents, typical monitoring parameters include BOD, COD, color (ADMI), sulfide, oil and grease, phenol, TSS and heavy metals (chromium, zinc, lead, copper).
=== Industrial oil contamination ===
Industrial applications where oil enters the wastewater stream may include vehicle wash bays, workshops, fuel storage depots, transport hubs and power generation. Often the wastewater is discharged into local sewer or trade waste systems and must meet local environmental specifications. Typical contaminants can include solvents, detergents, grit, lubricants and hydrocarbons.
=== Water treatment ===
Many industries have a need to treat water to obtain very high quality water for their processes. This might include pure chemical synthesis or boiler feed water. Also, some water treatment processes produce organic and mineral sludges from filtration and sedimentation which require treatment. Ion exchange using natural or synthetic resins removes calcium, magnesium and carbonate ions from water, typically replacing them with sodium, chloride, hydroxyl and/or other ions. Regeneration of ion-exchange columns with strong acids and alkalis produces a wastewater rich in hardness ions which are readily precipitated out, especially when in admixture with other wastewater constituents.
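The exchange and regeneration steps for a strong-acid cation resin can be written schematically as below; this uses generic textbook notation, with R standing for the resin's fixed exchange site, and is an illustration rather than a description of any particular plant.

```latex
% Generic strong-acid cation exchange, R = fixed exchange site on the resin.
% Service (hardness ions taken up from the process water):
\[
\mathrm{Ca^{2+} + 2\,HR \;\longrightarrow\; CaR_2 + 2\,H^{+}}
\]
% Regeneration with strong acid, which releases the hardness ions into the
% spent regenerant -- the hardness-rich wastewater described above:
\[
\mathrm{CaR_2 + 2\,H^{+} \;\longrightarrow\; 2\,HR + Ca^{2+}}
\]
```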
=== Wood preserving ===
Wood preserving plants generate conventional and toxic pollutants, including arsenic, COD, copper, chromium, abnormally high or low pH, phenols, suspended solids, oil and grease.
== Treatment methods ==
The various types of contamination of wastewater require a variety of strategies to remove the contamination. Most industrial facilities, such as petroleum refineries and chemical and petrochemical plants, have onsite facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. Constructed wetlands are being used in an increasing number of cases as they provide high-quality, productive on-site treatment. Other industrial processes that produce large volumes of wastewater, such as paper and pulp production, have created environmental concern, leading to the development of processes to recycle water within plants before it has to be cleaned and disposed of.
An industrial wastewater treatment plant may include one or more of the following rather than the conventional treatment sequence of sewage treatment plants:
An API oil-water separator, for removing separate phase oil from wastewater.
A clarifier, for removing solids from wastewater.
A roughing filter, to reduce the biochemical oxygen demand of wastewater.
A carbon filtration plant, to remove toxic dissolved organic compounds from wastewater.
An advanced electrodialysis reversal (EDR) system with ion-exchange membranes.
=== Brine treatment ===
Brine treatment involves removing dissolved salt ions from the waste stream. Although similarities to seawater or brackish water desalination exist, industrial brine treatment may contain unique combinations of dissolved ions, such as hardness ions or other metals, necessitating specific processes and equipment.
Brine treatment systems are typically optimized to either reduce the volume of the final discharge for more economic disposal (as disposal costs are often based on volume) or maximize the recovery of fresh water or salts. Brine treatment systems may also be optimized to reduce electricity consumption, chemical usage, or physical footprint.
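An idealized steady-state mass balance makes the volume-versus-salinity trade-off explicit; the sketch below assumes complete salt rejection (all dissolved solids report to the concentrate), and the feed salinity, flow and recovery are hypothetical round numbers.

```python
# Idealized mass balance for a brine concentration step: what leaves as
# fresh water reduces the concentrate volume but raises its salinity.

feed_volume = 100.0     # m3/h of brine entering the system (assumed)
feed_tds = 50.0         # g/L total dissolved solids in the feed (assumed)
recovery = 0.80         # fraction of the feed recovered as fresh water

concentrate_volume = feed_volume * (1 - recovery)               # 20 m3/h
concentrate_tds = feed_tds * feed_volume / concentrate_volume   # 250 g/L

print(concentrate_volume, concentrate_tds)
# Raising recovery shrinks the discharge volume but drives up its salinity,
# which is the trade-off brine treatment systems are optimized around.
```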
Brine treatment is commonly encountered when treating cooling tower blowdown, produced water from steam-assisted gravity drainage (SAGD), produced water from natural gas extraction such as coal seam gas, frac flowback water, acid mine or acid rock drainage, reverse osmosis reject, chlor-alkali wastewater, pulp and paper mill effluent, and waste streams from food and beverage processing.
Brine treatment technologies may include: membrane filtration processes, such as reverse osmosis; ion-exchange processes such as electrodialysis or weak acid cation exchange; or evaporation processes, such as brine concentrators and crystallizers employing mechanical vapour recompression and steam. Due to ever more stringent discharge standards, advanced oxidation processes have increasingly been used for the treatment of brine; notable examples such as Fenton's oxidation and ozonation have been employed for the degradation of recalcitrant compounds in brine from industrial plants.
Reverse osmosis may not be viable for brine treatment, due to the potential for fouling caused by hardness salts or organic contaminants, or damage to the reverse osmosis membranes from hydrocarbons.
Evaporation processes are the most widespread for brine treatment as they enable the highest degree of concentration, as high as solid salt. They also produce the highest purity effluent, even distillate-quality. Evaporation processes are also more tolerant of organics, hydrocarbons, or hardness salts. However, energy consumption is high and corrosion may be an issue as the prime mover is concentrated salt water. As a result, evaporation systems typically employ titanium or duplex stainless steel materials.
==== Brine management ====
Brine management examines the broader context of brine treatment and may include consideration of government policy and regulations, corporate sustainability, environmental impact, recycling, handling and transport, containment, centralized compared to on-site treatment, avoidance and reduction, technologies, and economics. Brine management shares some issues with leachate management and more general waste management. In recent years, brine management has received greater attention due to the global push for zero liquid discharge (ZLD) and minimal liquid discharge (MLD). In ZLD/MLD approaches, a closed water cycle is used to minimize water discharges from a system so that water can be reused. This concept has been gaining traction due to increased water discharges and recent advances in membrane technology. There have also been greater efforts to recover materials from brines, especially from mining, geothermal wastewater or desalination brines. Various studies demonstrate the viability of extracting valuable materials such as sodium bicarbonate, sodium chloride and metals like rubidium, cesium and lithium. The concept of ZLD/MLD thus encompasses the downstream management of wastewater brines, both to reduce discharges and to derive valuable products from them.
=== Solids removal ===
Most solids can be removed using simple sedimentation techniques, with the solids recovered as slurry or sludge. Very fine solids and solids with densities close to the density of water pose special problems; in such cases filtration or ultrafiltration may be required, and flocculation may be used, employing alum salts or the addition of polyelectrolytes. Wastewater from industrial food processing often requires on-site treatment before it can be discharged, to prevent or reduce sewer surcharge fees. The type of industry and specific operational practices determine what types of wastewater are generated and what type of treatment is required. Reducing solids such as waste product, organic materials, and sand is often a goal of industrial wastewater treatment. Some common ways to reduce solids include primary sedimentation (clarification), dissolved air flotation (DAF), belt filtration (microscreening), and drum screening.
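Why fine or near-neutral-density solids settle poorly follows directly from Stokes' law for the terminal settling velocity of a small sphere, v = g(ρp − ρw)d²/(18μ); the particle sizes and densities in the sketch below are assumed, illustrative values.

```python
# Stokes'-law terminal settling velocity of a small particle in water.
# As particle diameter shrinks, or particle density approaches that of
# water, the settling velocity collapses and plain sedimentation fails.

g = 9.81          # m/s2, gravitational acceleration
mu = 1.0e-3       # Pa.s, dynamic viscosity of water near 20 C
rho_w = 998.0     # kg/m3, density of water

def settling_velocity(d_m: float, rho_p: float) -> float:
    """Terminal velocity (m/s) of a sphere of diameter d_m and density rho_p."""
    return g * (rho_p - rho_w) * d_m**2 / (18 * mu)

print(settling_velocity(100e-6, 2650))  # 100 um sand grain: ~9e-3 m/s
print(settling_velocity(10e-6, 1050))   # 10 um organic floc: ~3e-6 m/s
```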
=== Oils and grease removal ===
The effective removal of oils and grease depends on the characteristics of the oil in terms of its suspension state and droplet size, which will in turn affect the choice of separator technology. Oil in industrial wastewater may be free light oil; heavy oil, which tends to sink; or emulsified oil, often referred to as soluble oil. Emulsified or soluble oils typically require "cracking" to free the oil from its emulsion. In most cases this is achieved by lowering the pH of the water matrix.
Most separator technologies will have an optimum range of oil droplet sizes that can be effectively treated. Each separator technology will have its own performance curve outlining optimum performance based on oil droplet size. The most common separators are gravity tanks or pits, API oil-water separators or plate packs, chemical treatment via dissolved air flotation, centrifuges, media filters and hydrocyclones.
Analyzing the oily water to determine droplet size can be performed with a video particle analyser.
==== API oil-water separators ====
==== Hydrocyclone ====
Hydrocyclone separators operate on the principle that wastewater entering the cyclone chamber is spun under extreme centrifugal forces, more than 1,000 times the force of gravity. This force causes the water and oil droplets (or solid particles) to separate. The separated material is discharged from one end of the cyclone, while treated water leaves through the opposite end for further treatment, filtration or discharge. Hydrocyclones can be used in a variety of contexts, from solid-liquid separation to oil-water separation.
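The "more than 1,000 times gravity" figure follows from simple kinematics: the centripetal acceleration of fluid swirling at tangential velocity v at radius r is v²/r. The velocity and radius in the sketch below are assumed order-of-magnitude values, not specifications of any device.

```python
# Relative centrifugal force in a swirling flow: a = v**2 / r, expressed in
# multiples of gravitational acceleration g.

g = 9.81  # m/s2

def g_force(tangential_velocity: float, radius: float) -> float:
    """Centripetal acceleration in units of g."""
    return tangential_velocity**2 / radius / g

print(g_force(10.0, 0.01))  # 10 m/s at 1 cm radius -> roughly 1,000 g
```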
=== Removal of biodegradable organics ===
Biodegradable organic material of plant or animal origin can usually be treated using extended conventional sewage treatment processes such as activated sludge or trickling filters. Problems can arise if the wastewater is excessively diluted with washing water or is highly concentrated, such as undiluted blood or milk. The presence of cleaning agents, disinfectants, pesticides, or antibiotics can have detrimental impacts on treatment processes.
==== Activated sludge process ====
==== Trickling filter process ====
A trickling filter consists of a bed of rocks, gravel, slag, peat moss, or plastic media over which wastewater flows downward and contacts a layer (or film) of microbial slime covering the bed media. Aerobic conditions are maintained by forced air flowing through the bed or by natural convection of air. The process involves adsorption of organic compounds in the wastewater by the microbial slime layer, diffusion of air into the slime layer to provide the oxygen required for the biochemical oxidation of the organic compounds. The end products include carbon dioxide gas, water and other products of the oxidation. As the slime layer thickens, it becomes difficult for the air to penetrate the layer and an inner anaerobic layer is formed.
=== Removal of other organics ===
Synthetic organic materials including solvents, paints, pharmaceuticals, pesticides, products from coke production and so forth can be very difficult to treat. Treatment methods are often specific to the material being treated. Methods include advanced oxidation processing, distillation, adsorption, ozonation, vitrification, incineration, chemical immobilisation or landfill disposal. Some materials such as some detergents may be capable of biological degradation and in such cases, a modified form of wastewater treatment can be used.
=== Removal of acids and alkalis ===
Acids and alkalis can usually be neutralised under controlled conditions. Neutralisation frequently produces a precipitate that will require treatment as a solid residue that may also be toxic. In some cases, gases may be evolved requiring treatment for the gas stream. Some other forms of treatment are usually required following neutralisation.
Waste streams rich in hardness ions as from de-ionisation processes can readily lose the hardness ions in a buildup of precipitated calcium and magnesium salts. This precipitation process can cause severe furring of pipes and can, in extreme cases, cause the blockage of disposal pipes. A 1-metre diameter industrial marine discharge pipe serving a major chemicals complex was blocked by such salts in the 1970s. Treatment is by concentration of de-ionisation waste waters and disposal to landfill or by careful pH management of the released wastewater.
=== Removal of toxic materials ===
Toxic materials, including many organic materials, metals (such as zinc, silver, cadmium and thallium), acids, alkalis, and non-metallic elements (such as arsenic or selenium), are generally resistant to biological processes unless very dilute. Metals can often be precipitated out by changing the pH or by treatment with other chemicals. Many, however, are resistant to treatment or mitigation and may require concentration followed by landfilling or recycling. Dissolved organics can be oxidized within the wastewater by advanced oxidation processes.
==== Smart capsules ====
Molecular encapsulation is a technology that has the potential to provide a system for the recyclable removal of lead and other ions from polluted sources. Nano-, micro- and milli-capsules, with sizes in the ranges 10 nm–1 μm, 1 μm–1 mm and >1 mm, respectively, are particles that have an active reagent (core) surrounded by a carrier (shell). Three types of capsule are under investigation: alginate-based capsules, carbon nanotubes, and polymer swelling capsules. These capsules provide a possible means for the remediation of contaminated water.
=== Removal of thermal pollution ===
To remove heat from wastewater generated by power plants or manufacturing plants, and thus to reduce thermal pollution, the following technologies are used:
cooling ponds, engineered bodies of water designed for cooling by evaporation, convection, and radiation
cooling towers, which transfer waste heat to the atmosphere through evaporation or heat transfer
cogeneration, a process where waste heat is recycled for domestic or industrial heating purposes.
=== Other disposal methods ===
Some facilities such as oil and gas wells may be permitted to pump their wastewater underground through injection wells. However, wastewater injection has been linked to induced seismicity.
== Costs and trade waste charges ==
Economies of scale may favor discharging industrial wastewater (with pre-treatment or without treatment) to the sewer for treatment at a large municipal sewage treatment plant; trade waste charges are typically applied in that case. Alternatively, it may be more economical to fully treat industrial wastewater on the site where it is generated and discharge the treated effluent to a suitable surface water body. Because trade waste charges are usually based on the concentrations of pollutants measured in the discharge, pre-treating wastewaters to reduce those concentrations effectively lowers the fees collected by municipal sewage treatment plants.
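The trade-off can be framed as a simple cost comparison; every figure in the sketch below (volumes, loads, fee rates and on-site costs) is a hypothetical assumption used only to show the structure of the calculation, not an actual tariff.

```python
# Hypothetical comparison of (a) discharging to the municipal sewer with
# volume- and load-based trade waste charges versus (b) treating on site.

volume = 500.0          # m3/day of wastewater (assumed)
bod_load = 400.0        # kg/day of BOD sent to the sewer (assumed)

sewer_volume_fee = 0.60   # US$/m3 trade waste charge (assumed)
sewer_bod_fee = 0.40      # US$/kg BOD surcharge (assumed)

onsite_capex_per_day = 250.0   # annualized capital cost, US$/day (assumed)
onsite_opex = 0.45             # US$/m3 operating cost (assumed)

sewer_cost = volume * sewer_volume_fee + bod_load * sewer_bod_fee
onsite_cost = onsite_capex_per_day + volume * onsite_opex

print(sewer_cost, onsite_cost)  # 460.0 vs 475.0 US$/day under these assumptions
# Pre-treating to cut the BOD load, or economies of scale at the municipal
# plant, can tip the balance either way.
```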
Industrial wastewater plants may also reduce raw water costs by converting selected wastewaters to reclaimed water used for different purposes.
== Society and culture ==
=== Global goals ===
The international community has defined the treatment of industrial wastewater as an important part of sustainable development by including it in Sustainable Development Goal 6. Target 6.3 of this goal is to "By 2030, improve water quality by reducing pollution, eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally". One of the indicators for this target is the "proportion of domestic and industrial wastewater flows safely treated".
== See also ==
Best management practice for water pollution (BMP)
List of waste water treatment technologies
Purified water (for industrial use)
Water purification (for drinking water)
== References ==
== Further reading ==
== External links ==
Water Environment Federation - Professional society
Industrial Wastewater Treatment Technology Database - EPA | Wikipedia/Industrial_wastewater |
In animal husbandry, a concentrated animal feeding operation (CAFO), as defined by the United States Department of Agriculture (USDA), is an intensive animal feeding operation (AFO) in which over 1,000 animal units are confined for over 45 days a year. An animal unit is the equivalent of 1,000 pounds of "live" animal weight. A thousand animal units equates to 700 dairy cows, 1,000 meat cows, 2,500 pigs weighing more than 55 pounds (25 kg), 10,000 pigs weighing under 55 pounds, 10,000 sheep, 55,000 turkeys, 125,000 chickens, or 82,000 egg laying hens or pullets.
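The animal-unit arithmetic in the definition above can be made explicit: since 1,000 animal units correspond to 700 dairy cows, 1,000 meat cows, 2,500 pigs over 55 pounds, and so on, each species carries a per-head factor of 1,000 divided by its threshold. The example herd in the sketch below is invented.

```python
# Converting head counts to animal units (AU) using the equivalences quoted
# in the definition above (1,000 AU = 700 dairy cows = 1,000 meat cows = ...).

AU_PER_HEAD = {
    "dairy_cow": 1000 / 700,
    "meat_cow": 1000 / 1000,
    "pig_over_55lb": 1000 / 2500,
    "sheep": 1000 / 10000,
    "laying_hen": 1000 / 82000,
}

def total_animal_units(herd: dict) -> float:
    return sum(count * AU_PER_HEAD[species] for species, count in herd.items())

herd = {"dairy_cow": 350, "pig_over_55lb": 1200}   # hypothetical operation
print(round(total_animal_units(herd)))  # -> 980 AU, just under the 1,000-AU line
```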
CAFOs are governed by regulations that restrict how much waste can be distributed and the quality of the waste materials. As of 2012 there were around 212,000 AFOs in the United States, 19,496 of which were CAFOs.
Livestock production has become increasingly dominated by CAFOs in the United States and other parts of the world. Most poultry was raised in CAFOs starting in the 1950s, and most cattle and pigs by the 1970s and 1980s. By the mid-2000s CAFOs dominated livestock and poultry production in the United States, and the scope of their market share is steadily increasing. In 1966, it took 1 million farms to house 57 million pigs; by 2001, it took only 80,000 farms to house the same number.
== Definition ==
There are roughly 212,000 AFOs in the United States, of which 19,496 met the narrower criteria for CAFOs in 2016. The Environmental Protection Agency (EPA) has delineated three categories of CAFOs, ordered in terms of capacity: large, medium and small. The relevant animal unit for each category varies depending on species and capacity. For instance, large CAFOs house 1,000 or more cattle, medium CAFOs have 300–999 cattle, and small CAFOs have fewer than 300 cattle.
The table below provides some examples of the size thresholds for CAFOs:
The categorization of CAFOs affects whether a facility is subject to regulation under the Clean Water Act (CWA). EPA's 2008 rule specifies that "large CAFOs are automatically subject to EPA regulation; medium CAFOs must also meet one of two 'method of discharge' criteria to be defined as a CAFO (or may be designated as such); and small CAFOs can only be made subject to EPA regulations on a case-by-case basis." A small CAFO will also be designated a CAFO for purposes of the CWA if it discharges pollutants into waterways of the United States through a man-made conveyance such as a road, ditch or pipe. Alternatively, a small CAFO may be designated an ordinary animal feeding operation (AFO) once its animal waste management system is certified at the site.
Since it first coined the term, the EPA has changed the definition (and applicable regulations) for CAFOs on several occasions. Private groups and individuals use the term CAFO colloquially to mean many types of both regulated and unregulated facilities, both inside and outside the U.S. The definition used in everyday speech may thus vary considerably from the statutory definition in the CWA. CAFOs are commonly characterized as having large numbers of animals crowded into a confined space, a situation that results in the concentration of manure in a small area.
== Key issues ==
=== Environmental impact ===
The EPA has focused on regulating CAFOs because they generate millions of tons of manure every year. When improperly managed, the manure can pose substantial risks to the environment and public health. In order to manage their waste, CAFO operators have developed agricultural wastewater treatment plans. The most common type of facility used in these plans, the anaerobic lagoon, has significantly contributed to environmental and health problems attributed to the CAFO.
==== Water quality ====
The large amounts of animal waste from CAFOs present a risk to water quality and aquatic ecosystems. States with high concentrations of CAFOs experience on average 20 to 30 serious water quality problems per year as a result of manure management issues. Animal waste includes a number of potentially harmful pollutants. Pollutants associated with CAFO waste principally include:
nitrogen and phosphorus, collectively known as nutrient pollution;
organic matter;
solids, including the manure itself and other elements mixed with it such as spilled feed, bedding and litter materials, hair, feathers and animal corpses;
pathogens (disease-causing organisms such as bacteria and viruses);
salts;
trace elements such as arsenic;
odorous/volatile compounds such as carbon dioxide, methane, hydrogen sulfide, and ammonia;
antibiotics;
pesticides and hormones.
The two main contributors to water pollution caused by CAFOs are soluble nitrogen compounds and phosphorus. The eutrophication of water bodies from such waste is harmful to wildlife and water quality in aquatic systems like streams, lakes, and oceans.
Groundwater and surface water are closely linked, so polluting one often affects the other. Surface water may be polluted by CAFO waste through the runoff of nutrients, organics, and pathogens from fields and storage. Waste can be transmitted to groundwater through the leaching of pollutants. Some facility designs, such as lagoons, can reduce the risk of groundwater contamination, but the microbial pathogens from animal waste may still pollute surface and groundwater, harming wildlife and human health.
A CAFO was responsible for one of the biggest environmental spills in U.S. history. In 1995, a 120,000-square-foot (11,000 m2) lagoon ruptured in North Carolina, a state that contains many of the United States' industrial hog operations, which disproportionately impact Black, Hispanic and American Indian residents. The spill released 25.8 million US gallons (98,000 m3) of effluent into the New River and killed 10 million local fish. The spill also contributed to an outbreak of Pfiesteria piscicida, which caused health problems in nearby humans, including skin irritation and short-term cognitive problems.
==== Air quality ====
CAFOs reduce ambient air quality. They release several gases harmful to humans: ammonia, hydrogen sulfide, methane, and particulate matter. Larger CAFOs release more gas, mostly through the decomposition of large stores of animal manure. CAFOs also emit strains of antibiotic-resistant bacteria into the surrounding air, particularly downwind. Levels of antibiotics measured downwind from swine CAFOs were three times higher than those measured upwind. The source is not known with certainty, but animal feed is suspected. Globally, ruminant livestock are responsible for about 115 Tg/a of the 330 Tg/a (35%) of anthropogenic methane emissions released per year. Livestock operations are responsible for about 18% of greenhouse gas emissions globally and over 7% of greenhouse gas emissions in the U.S. Methane is the second most concentrated greenhouse gas contributing to global climate change, with livestock contributing nearly 30% of anthropogenic methane emissions. Only 17% of livestock-related emissions come from manure, whereas most come from enteric fermentation, or gases produced during digestion. 76% of bacteria grown within a swine CAFO are Staphylococcus aureus, followed by group A streptococci and fecal coliforms.
The Intergovernmental Panel on Climate Change (IPCC) acknowledges the large effect of livestock on methane emissions, antibiotic resistance, and climate change. To reduce emissions, it recommends removing sources of stress and changing how animals are fed, including sources of feed grain, amount of forage, and amount of digestible nutrients. The Humane Society of the United States (HSUS) argues for reducing the use of non-therapeutic antibiotics, especially those that are widely used in human medicine, on the advice of over 350 organizations including the American Medical Association. If no change is made and methane emissions continue increasing in direct proportion to the number of livestock, global methane production is predicted to increase by 60% by 2030. Greenhouse gases and climate change worsen air quality, contributing to illnesses such as respiratory disorders, lung tissue damage, and allergies. Reducing the increase of greenhouse gas emissions from livestock could rapidly curb global warming. In addition, people near CAFOs often complain of the smell, which comes from a complex mixture of ammonia, hydrogen sulfide, carbon dioxide, and volatile and semi-volatile organic compounds.
Waste disposal also degrades air quality. Some CAFOs use "spray fields", pumping the waste of thousands of animals into a machine that sprays it onto an open field. The spray can be carried by wind onto nearby homes, depositing pathogens, heavy metals, and antibiotic-resistant bacteria into the air of poor or minority communities. It often contains respiratory and eye irritants including hydrogen sulfide and ammonia.
=== Economic impact ===
==== Increased role in the market ====
The economic role of CAFOs has expanded significantly in the U.S. in the past few decades, and there is clear evidence that CAFOs have come to dominate animal production industries. The rise in large-scale animal agriculture began in the 1930s with the modern mechanization of swine slaughterhouse operations.
The growth of corporate contracting has also contributed to a transition from a system of many small-scale farms to one of relatively few large industrial-scale farms. This has dramatically changed the animal agricultural sector in the United States. According to the National Agricultural Statistics Service, "In the 1930s, there were close to 7 million farms in the United States and as of the 2002 census, just over 2 million farms remain." From 1969 to 2002, the number of family farms dropped by 39%, yet the percentage of family farms has remained high. As of 2004, 98% of all U.S. farms were family-owned and -operated. Most meat and dairy products are now produced on large farms with single-species buildings or open-air pens.
Due to their increased efficiency, CAFOs provide a source of low-cost animal products: meat, milk and eggs. CAFOs may also stimulate local economies through increased employment and use of local materials in their production. The development of modern animal agriculture has increased the efficiency of raising meat and dairy products. Improvements in animal breeding, mechanical innovations, and the introduction of specially formulated feeds (as well as animal pharmaceuticals) have contributed to the decrease in the cost of animal products to consumers. The development of new technologies has also helped CAFO owners reduce production costs and increase business profits while consuming fewer resources. The growth of CAFOs has corresponded with an increase in the consumption of animal products in the United States. According to author Christopher L. Delgado, "milk production has doubled, meat production has tripled, and egg production has increased fourfold since 1960" in the United States.
Along with the noted benefits, there are also criticisms regarding CAFOs' impact on the economy. Many farmers in the United States find that it is difficult to earn a high income due to the low market prices of animal products. Such market factors often lead to low profit margins for production methods and a competitive disadvantage against CAFOs. Alternative animal production methods, like "free range" or "family farming" operations are losing their ability to compete, though they present few of the environmental and health risks associated with CAFOs.
==== Negative production externalities ====
The price of meat does not reflect the negative ecological impacts that result from industrial agricultural systems. The negative production externalities (when market prices inappropriately reflect or hide the societal harms incurred in the creation of a product) of CAFOs include damaging effects on the environment caused by, among other things, ever-increasing amounts of often poorly managed waste. The costs of damage caused to the atmosphere (in the form of GHGs), water, soil, fisheries, and recreational areas, estimated at hundreds of billions of dollars, are typically not incurred by corporations that feature the use of CAFOs in their business models. Additionally, human antimicrobial resistance arising from antibiotic use in industrial animal agriculture represents a serious risk to societal wellbeing. Corporations that rely on CAFOs through contract farming have an unfair economic advantage because the costs of managing animal waste are shifted to contract farmers and, when spills occur, to the surrounding areas. Property values near CAFOs may fall considerably due to the detrimental impacts that CAFOs can have on air, water, and land in nearby areas. For instance, researchers have found a statistically significant relationship between declines in property values and proximity to CAFOs.
==== Other economic criticisms ====
Critics of CAFOs also maintain that CAFOs benefit from industrial and agricultural tax breaks and subsidies, and from the "vertical integration of giant agribusiness firms". The U.S. Department of Agriculture (USDA), for instance, spent an average of $16 billion annually on commodity-based subsidies from FY 1996 to FY 2002. Lax enforcement of anti-competitive practices may be helping create a market monopoly. Critics also contend that CAFOs cut costs by overusing antibiotics.
=== Public health concerns ===
The direct discharge of manure from CAFOs and the accompanying pollutants (including nutrients, antibiotics, pathogens, and arsenic) is a serious public health risk. The contamination of groundwater with pathogenic organisms from CAFOs can threaten drinking water safety, and contamination of drinking water with pathogens can cause outbreaks of infectious disease. The EPA estimates that 53% of the United States population drinks groundwater.
Contamination of water by CAFOs causes various health problems. Accidental ingestion of contaminated water can result in diarrhea or other gastrointestinal illnesses. Dermal exposure can result in irritation and infection of the skin, eyes, or ears. High levels of nitrate in drinking water are associated with increased risk of hyperthyroidism, insulin-dependent diabetes, and central nervous system malformations.
Antibiotic contamination also threatens human health. To maximize animal production, CAFOs use ever more antibiotics, which in turn increases bacterial resistance. This resistance makes bacterial infections harder to treat. Contaminated surface water and groundwater are particularly concerning, as they can spread antibiotic-resistant bacteria. Antibiotic resistance can also arise through DNA mutation, transformation, and conjugation driven by the various antibiotics and pharmaceutical drugs found in drinking water.
Antibiotics are used heavily in CAFOs to both treat and prevent illness in individual animals as well as groups. Animals in CAFOs are closer together, so pathogens spread easily. Even if their stock are not sick, CAFOs put low doses of antibiotics into feed "to reduce the chance for infection and to eliminate the need for animals to expend energy fighting off bacteria, with the assumption that saved energy will be translated into growth". This is a non-therapeutic use of antibiotics. Such antibiotic use is thought to allow animals to grow faster and bigger, increasing the CAFO's output. Regardless, the World Health Organization has recommended that the non-therapeutic use of antibiotics in animal husbandry be reevaluated, as such antibiotic overuse breeds antibiotic-resistant bacteria. When bacteria in or around animals are exposed to antibiotics, natural selection favours the spread of mutations with greater resistance. Use of antibiotics by CAFOs thus increases antimicrobial resistance. This threatens public health because resistant bacteria generated by CAFOs can be spread to the surrounding environment and communities via waste water discharge or the aerosolization of particles.
Air pollution caused by CAFOs can cause asthma, headaches, respiratory problems, eye irritation, nausea, weakness, and chest tightness. These effects are felt by farm workers and nearby residents, including children. The risks to nearby residents were highlighted in a study evaluating health outcomes of more than 100,000 individuals living in regions with high densities of CAFOs, which found a higher prevalence of pneumonia and unspecified infectious diseases in those with high exposures compared to controls. Furthermore, a Dutch cross-sectional study of 2,308 adults found that decreases in residents' lung function were correlated with increases in particle emissions from nearby farms. Workers face several respiratory risks of their own. Although "in many big CAFOs, it takes only a few workers to run a facility housing thousands of animals," long exposure and close contact with animals put CAFO employees at increased risk. This includes the risk of contracting diseases like novel H1N1 flu, which erupted globally in the spring of 2009, or MRSA, a strain of antibiotic-resistant bacteria. For instance, livestock-associated MRSA has been found in the nasal passages of CAFO workers, on the walls of the facilities they work in, and in the animals they tend. In addition, individuals working in CAFOs are at risk for chronic airway inflammatory diseases secondary to dust exposure, with studies suggesting the possible benefit of empiric inhaler treatment. Studies conducted by the University of Iowa show that the asthma rate among children of CAFO operators is higher than among children from other farms.
=== Negative effects on minority populations ===
Low-income and minority populations suffer disproportionately from proximity to CAFOs and the pollution and waste they produce. These populations suffer the most because they lack the political clout to oppose the construction of CAFOs and are often not economically able to move elsewhere.
In the southern United States, the "Black Belt", a roughly crescent-shaped region of dark, fertile soil well suited to cotton farming, has seen the long-lasting effects of slavery. During and after the Civil War, the area was populated mostly by Black people working as sharecroppers and tenant farmers. Due to ongoing discrimination in land sales and lending, many African American farmers were systematically deprived of farmland. Today, communities in the Black Belt experience poverty, poor housing, unemployment, and poor health care, and have little political power when it comes to the building of CAFOs. Black and brown people living near CAFOs often lack the resources to leave compromised areas and are further trapped by plummeting property values and poor quality of life. In addition to these financial barriers, CAFOs are shielded by "right-to-farm" laws that protect them from suits by residents of the communities in which they operate.
Not only are surrounding communities negatively affected by CAFOs, but the workers themselves experience harm on the job. In a North Carolina study of twenty-one Latino chicken catchers working for a poultry-processing plant, the work was found to be high-intensity labor with a high potential for injury and illness, including trauma, respiratory illness, drug use, and musculoskeletal injuries. Workers were also found to have received little training about the job or its safety risks. In the United States, agricultural workers are engaged in one of the most hazardous occupations in the country.
CAFO workers have historically been African American, but there has been a surge of Hispanic, often undocumented, workers. Between 1980 and 2000, there was a clear shift toward an ethnically and racially diverse workforce, led by Hispanic workforce growth.[7] CAFO owners often preferentially hire Hispanic workers, viewing them as low-skilled laborers willing to work longer hours and do more intensive work. This has contributed to increased ICE raids on meat processing plants.
=== Animal health and welfare concerns ===
CAFO practices have raised concerns over animal welfare from an ethics standpoint. Some view such conditions as neglectful of basic animal welfare. According to David Nibert, professor of sociology at Wittenberg University, more than 10 billion animals are housed in "horrific conditions" in more than 20,000 CAFOs across the U.S. alone, where they "spend their last 100–120 days crammed together by the thousands standing in their own excrement, with little or no shelter from the elements." Many people believe that harm to animals before their slaughter should be addressed through public policy. Laws regarding animal welfare in CAFOs have already been passed in the United States. For instance, in 2002, the state of Florida passed an amendment to the state's constitution banning the confinement of pregnant pigs in gestation crates. By comparison, the use of battery cages for egg-laying hens has been banned in the European Union since 2012.
Whereas some people are concerned with animal welfare as an end in itself, others are concerned about animal welfare because of the effect of living conditions on consumer safety. Animals in CAFOs have lives that do not resemble those of animals found in the wild. Although CAFOs help secure a reliable supply of animal products, the quality of the goods produced is debated, with many arguing that the food produced is unnatural. For instance, confining animals into small areas requires the use of large quantities of antibiotics to prevent the spread of disease. There are debates over whether the use of antibiotics in meat production is harmful to humans.
Since 1960, average milk production per cow has risen from 5 kilograms per day (11 lb) to 30 kilograms per day (66 lb) in 2008, as noted by Dale Bauman and Jude Capper in the Efficiency of Dairy Production and its Carbon Footprint. The article points out that the carbon footprint of producing a gallon of milk in 2007 was 37% of what it was in 1944.
== Regulation under the Clean Water Act ==
=== Basic structure of CAFO regulations under the CWA ===
The command-and-control permitting structure of the Clean Water Act (CWA) provides the basis for nearly all regulation of CAFOs in the United States. Generally speaking, the CWA prohibits the discharge of pollution to the "waters of the United States" from any "point source", unless the discharge is authorized by a National Pollutant Discharge Elimination System (NPDES) permit issued by the EPA (or a state delegated by the EPA). CAFOs are explicitly listed as a point source in the CWA. Unauthorized discharges made from CAFOs (and other point sources) violate the CWA, even if the discharges are "unplanned or accidental." CAFOs that do not apply for NPDES permits "operate at their own risk because any discharge from an unpermitted CAFO (other than agricultural stormwater) is a violation of the CWA subject to enforcement action, including third party citizen suits."
The benefit of an NPDES permit is that it provides some level of certainty to CAFO owners and operators. "Compliance with the permit is deemed compliance with the CWA... and thus acts as a shield against EPA or State CWA enforcement or against citizen suits under... the CWA." In addition, the "upset and bypass" provisions of the permit can give permitted CAFO owners a legal defense when "emergencies or natural disasters cause discharges beyond their reasonable control."
Under the CWA, the EPA specifies the maximum allowable amounts of pollution that can be discharged by facilities within an industrial category (like CAFOs). These general "effluent limitations guidelines" (ELG) then dictate the terms of the specific effluent limitations found in individual NPDES permits. The limits are based on the performance of specific technologies, but the EPA does not generally require the industry to use these technologies. Rather, the industry may use "any effective alternatives to meet the pollutant limits."
The EPA places minimum ELG requirements into each permit issued for CAFOs. The requirements can include both numeric discharge limits (the amount of a pollutant that can be released into waters of the United States) and other requirements related to ELGs (such as management practices, including technology standards).
=== History of regulations ===
The major CAFO regulatory developments occurred in the 1970s and in the 2000s. The EPA first promulgated ELGs for CAFOs in 1976. The 2003 rule issued by the EPA updated and modified the applicable ELGs for CAFOs, among other things. In 2005, the court decision in Waterkeeper Alliance v. EPA (see below) struck down parts of the 2003 rule. The EPA responded by issuing a revised rule in 2008.
A complete history of EPA's CAFO rulemaking activities is provided on the CAFO Rule History page.
==== Background laws ====
The Federal Water Pollution Control Act of 1948 was one of the first major efforts of the U.S. federal government to establish a comprehensive program for mitigating pollution in public waterways. The writers of the act aimed to improve water quality for the circulation of aquatic life, industrial use, and recreation. Since 1948, the Act has been amended many times to expand programming, procedures, and standards.
President Richard Nixon's executive order, Reorganization Plan No. 3, created the EPA in 1970. The creation of the EPA was an effort to create a more comprehensive approach to pollution management. As noted in the order, a single polluter may simultaneously degrade a local environment's air, water, and land. President Nixon noted that a single government entity should be monitoring and mitigating pollution and considering all effects. As relevant to CAFO regulation, the EPA became the main federal authority on CAFO pollution monitoring and mitigation.
Congress passed the CWA in 1972 when it reworked the Federal Water Pollution Control Amendments. The CWA specifically defined CAFOs as point source polluters and required operations managers and/or owners to obtain NPDES permits in order to legally discharge wastewater from their facilities.
==== Initial regulations (1970s) ====
The EPA began regulating water pollution discharges from CAFOs following passage of the 1972 CWA. ELGs for feedlot operations were promulgated in 1974, placing emphasis on best available technology in the industry at the time. In 1976 EPA began requiring all CAFOs to be first defined as AFOs. From that point, if the specific AFO met the appropriate criteria, it would then be classified as a CAFO and subject to appropriate regulation. That same year, EPA defined livestock and poultry CAFO facilities and established a specialized permitting program. NPDES permit procedures for CAFOs were also promulgated in 1976.
Prior to 1976, size had been the main criterion defining a CAFO. However, after the 1976 regulations came into effect, the EPA stipulated some exceptions. Operations identified as particularly harmful to federal waterways could be classified as CAFOs even if their size fell below the AFO thresholds. Additionally, some CAFOs were not required to apply for wastewater discharge permits if they met either of two major operational exemptions. The first applied to operations that discharge wastewater only during a 25-year, 24-hour storm event (that is, a 24-hour rainfall of a magnitude that occurs, on average, once every 25 years or more). The second applied to operations that apply animal waste onto agricultural land.
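As a rough illustration of the first exemption, the comparison is simply a 24-hour rainfall total against the local 25-year, 24-hour design-storm depth. The Python sketch below is hypothetical: the design depth is a placeholder (real values come from local precipitation-frequency data), and the actual regulatory determination involves more than this single comparison.

# Hedged sketch of the 25-year, 24-hour storm comparison described above.
# The design-storm depth is a hypothetical placeholder for one locality.
DESIGN_STORM_25YR_24HR_INCHES = 6.0

def exceeds_design_storm(rainfall_depth_inches):
    """Return True if a 24-hour rainfall total meets or exceeds the design storm."""
    return rainfall_depth_inches >= DESIGN_STORM_25YR_24HR_INCHES

print(exceeds_design_storm(7.0))  # True for this hypothetical 7-inch event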
==== Developments in the 1990s ====
In 1989, the Natural Resources Defense Council and Public Citizen filed a lawsuit against the EPA (and the Administrator of the EPA, William Reilly). The plaintiffs claimed the EPA had not complied with the CWA with respect to CAFOs. The lawsuit, Natural Resources Defense Council v. Reilly (D.D.C. 1991), resulted in a court order mandating that the EPA update its regulations. The agency did so in what would become the 2003 Final Rule.
In 1995, the EPA released a "Guide Manual on NPDES Regulations for Concentrated Animal Feeding Operations" to provide more clarity to the public on NPDES regulation, after its report "Feedlots Case Studies of Selected States" revealed public uncertainty regarding CAFO regulatory terminology and criteria. Although the document is not a rule, it offered insight and furthered public understanding of previous rules.
In his 1998 Clean Water Action Plan, President Bill Clinton directed the USDA and the EPA to join forces to develop a framework for future actions to improve national water quality standards for public health. The two federal agencies' specific responsibility was to improve the management of animal waste runoff from agricultural activities. In 1998, the USDA and the EPA hosted eleven public meetings across the country to discuss animal feeding operations (AFOs).
On March 9, 1999, the agencies released the framework titled the Unified National Strategy for Animal Feeding Operations. In the framework, the agencies recommended six major activities to be included in operations' Comprehensive Nutrient Management Plans (CNMPs):
feed management
manure handling and storage
land application of manure
land management
record keeping
activities that utilize manure.
The framework also outlined two types of related programs. First, "voluntary programs" were designed to assist AFO operators with addressing public health and water quality problems. The framework outlines three types of voluntary programs available: "locally led conservation," "environmental education," and "financial and technical assistance." The framework explained that those that participate in voluntary programs are not required to have a comprehensive nutrient management plan (CNMP). The second type of program outlined by the framework was regulatory, which includes command-and-control regulation with NPDES permitting.
=== EPA final rule (2003) ===
EPA's 2003 rule updated decades-old policies to reflect new technology advancements and increase the expected pollution mitigation from CAFOs. The EPA was also responding to a 1991 court order based on the district court's decision in Natural Resources Defense Council v. Reilly. The final rule took effect on April 14, 2003, and responded to public comments received following the issuance of the proposed rule in 2000. The EPA allowed authorized NPDES states until February 2005 to update their programs and develop technical standards.
The 2003 rule established "non-numerical best management practices" (BMPs) for CAFOs that apply both to the "production areas" (e.g. the animal confinement area and the manure storage area) and, for the first time ever, to the "land application area" (land to which manure and other animal waste is applied as fertilizer). The standards for BMPs in the 2003 rule vary depending on the regulated area of the CAFO:
Production Area: Discharges from a production area must meet a performance standard that requires CAFOs to "maintain waste containment structures that generally prohibit discharges except in the event of overflows or runoff resulting from a 25-year, 24-hour rainfall event." New sources are required to meet a standard of "no discharge" except in the event of a 100-year, 24-hour rainfall event.
Land Application Area: The BMPs for land application areas include different requirements, such as vegetative buffer strips and setback limits from water bodies.
The 2003 rule also requires CAFOs to submit an annual performance report to the EPA and to develop and implement a comprehensive nutrient management plan (NMP) for handling animal waste. Lastly, in an attempt to broaden the scope of regulated facilities, the 2003 rule expanded the number of CAFOs required to apply for NPDES permits by making it mandatory for all CAFOs (not just those who actually discharge pollutants into waters of the United States). Many of the provisions of the rule were affected by the Second Circuit's decision issued in Waterkeeper Alliance v. EPA.
==== Waterkeeper Alliance v. EPA (2nd Cir. 2005) ====
Environmental and farm industry groups challenged the 2003 final rule in court, and the Second Circuit Court of Appeals issued a decision in the consolidated case Waterkeeper Alliance, Inc. v. EPA, 399 F.3d 486 (2nd Cir. 2005). The Second Circuit's decision reflected a "partial victory" for both environmentalists and industry, as all parties were "unsatisfied to at least some extent" with the court's decision. The court's decision addressed four main issues with the 2003 final rule promulgated by the EPA:
Agricultural Stormwater Discharges: The EPA's authority to regulate CAFO waste that results in agricultural stormwater discharge was one of the "most controversial" aspects of the 2003 rule. The issue centered on the scope of the Clean Water Act (CWA), which provides for the regulation only of "point sources." The term was defined by the CWA to expressly include CAFOs but exclude "agricultural stormwater." The EPA was thus forced to interpret the statutory definition to "identify the conditions under which discharges from the land application area of [waste from] a CAFO are point source discharges that are subject to NPDES permitting requirements, and those which are agricultural stormwater discharges and thus are not point source discharges." In the face of widely divergent views of environmentalists and industry groups, the EPA in the 2003 rule determined that any runoff resulting from manure applied in accordance with agronomic rates would be exempt from the CWA permitting requirements (as "agricultural stormwater"). However, when such agronomic rates are not used, the EPA concluded that the resulting runoff from a land application is not "agricultural stormwater" and is therefore subject to the CWA (as a discharge from a point source, i.e. the CAFO). The Second Circuit upheld the EPA's definition as a "reasonable" interpretation of the statutory language in the CWA.
Duty to Apply for an NPDES Permit: The 2003 EPA rule imposed a duty on all CAFOs to apply for an NPDES permit (or demonstrate that they had no potential to discharge). The rationale for this requirement was the EPA's "presumption that most CAFOs have a potential to discharge pollutants into waters of the United States" and therefore must affirmatively comply with the requirements of the Clean Water Act. The Second Circuit sided with the farm industry plaintiffs on this point and ruled that this portion of the 2003 rule exceeded the EPA's authority. The court held that the EPA can require NPDES permits only where there is an actual discharge by a CAFO, not just a potential to discharge. The EPA later estimated that 25 percent fewer CAFOs would seek permits as a result of the Second Circuit's decision on this issue.
Nutrient Management Plans (NMPs): The fight in court over the portion of the 2003 rule on NMPs was a proxy for a larger battle over public participation by environmental groups in the implementation of the CWA. The 2003 rule required all permitted CAFOs that "land apply" animal waste to develop an NMP that satisfied certain minimum requirements (e.g. ensuring proper storage of manure and process wastewater). A copy of the NMP was to be kept on-site at the facility, available for viewing by the EPA or other permitting authority. The environmental plaintiffs argued that this portion of the rule violated the CWA and the Administrative Procedure Act by failing to make the NMP part of the NPDES permit itself (which would make the NMP subject to both public comments and enforcement in court by private citizens). The court sided with the environmental plaintiffs and vacated this portion of the rule.
Effluent Limitation Guidelines (ELGs) for CAFOs: The 2003 rule issued New Source Performance Standards (NSPS) for new sources of swine, poultry, and veal operations. The CWA requires that NSPS be based on what is called the "best available demonstrated control technology." The EPA's 2003 rule required that these new sources meet a "no discharge" standard, except in the case of a 100-year, 24-hour rainfall event (or a less restrictive measure for new CAFOs that voluntarily use new technologies and management practices). The Second Circuit ruled that the EPA did not provide an adequate basis (either in the statute or in evidence) for this portion of the rule. The Second Circuit also required the EPA to go back and provide additional justification for the requirements in the 2003 rule dealing with the "best control technology for conventional pollutants" (BCT) standards for reducing fecal coliform pathogen. Lastly, the court ordered the EPA to provide additional analysis on whether the more stringent "water quality-based effluent permit limitations" (WQBELs) should be required in certain instances for CAFO discharges from land application areas, a policy that the EPA had rejected in the 2003 rule.
=== EPA final rule (2008) ===
The EPA published revised regulations that address the Second Circuit court's decision in Waterkeeper Alliance, Inc. v. EPA on November 20, 2008 (effective December 22, 2008). The 2008 final rule revised and amended the 2003 final rule.
The 2008 rule addresses each point of the court's decision in Waterkeeper Alliance v. EPA. Specifically, the EPA adopted the following measures:
The EPA replaced the "duty to apply" standard with one that requires NPDES permit coverage for any CAFO that "discharges or proposes to discharge." The 2008 rule specifies that "a CAFO proposes to discharge if it is designed, constructed, operated, or maintained such that a discharge will occur." On May 28, 2010, the EPA issued guidance "designed to assist permitting authorities in implementing the [CAFO regulations] by specifying the kinds of operations and factual circumstances that EPA anticipates may trigger the duty to apply for permits." On March 15, 2011, the Fifth Circuit Court of Appeals in National Pork Producers Council v. EPA again struck down the EPA's rule on this issue, holding that the "propose to discharge" standard exceeds the EPA's authority under the CWA. After the Fifth Circuit's ruling, a CAFO cannot be required to apply for an NPDES permit unless it actually discharges into a water of the United States.
The EPA modified the requirements related to the nutrient management plans (NMP). In keeping with the court's decision in Waterkeeper Alliance v. EPA, the EPA instituted a requirement that the permitting authority (either the EPA or the State) incorporate the enforceable "terms of the NMP" into the actual permit. The "terms of the NMP" include the "information, protocols, best management practices (BMPs) and other conditions in the NMP necessary to meet the NMP requirements of the 2003 rule." The EPA must make the NMPs in the applications filed by CAFOs publicly available.
The EPA reiterated that in order to take advantage of the "agricultural stormwater" exception (upheld by the court in Waterkeeper Alliance v. EPA) an unpermitted CAFO must still implement "site-specific nutrient management practices that ensure appropriate agricultural utilization of the nutrients as specified previously under the 2003 rule." The unpermitted facility must keep documentation of such practices and make it available to the permitting authority in the case of a precipitation-related discharge.
The EPA addressed the Second Circuit's ruling on the effluent limitation guidelines (ELGs) for CAFOs. The agency deleted the provision allowing new sources of CAFOs to meet a 100-year, 24-hour precipitation-event standard, replacing it with a "no discharge" standard through the establishment of best management practices. The EPA also clarified and defended its previous positions on (1) the availability of water quality-based effluent limitations (WQBELs) and (2) the appropriateness of the best control technology (BCT) standards for fecal coliform. First, the 2008 rule "explicitly recognizes" that the permitting authority may impose WQBELs on all production area discharges and all land application discharges (other than those that meet the "agricultural stormwater" exemption) if the technology-based effluent limitations are deemed insufficient to meet the water quality standards of a particular body of water. In particular, the EPA noted that a case-by-case review should be adopted in cases where CAFOs discharge to the waters of the United States through a direct hydrologic connection to groundwater. Second, the EPA announced that it would not be promulgating more stringent standards for fecal coliform than in the 2003 rule because it reached the conclusion there is "no available, achievable, and cost reasonable technology on which to base such limitations."
The 2008 final rule also specifies two approaches that a CAFO may use to identify the "annual maximum rates of application of manure, litter, and process wastewater by field and crop for each year of permit coverage." The linear approach expresses the rate in terms of the "amount of nitrogen and phosphorus from manure, litter, and process wastewater allowed to be applied." The narrative rate approach expresses the amount in terms of a "narrative rate prescribing how to calculate the amount of manure, litter, and process wastewater allowed to be applied." The EPA believes that the narrative approach gives CAFO operators the most flexibility. Normally, CAFO operators are subject to the terms of their permit for a period of five years. Under the narrative approach, CAFO operators can use "real time" data to determine the rates of application. As a result, CAFO operators can more easily "change their crop rotation, form and source of manure, litter, and process wastewater, as well as the timing and method of application" without having to seek a revision to the terms of their NPDES permits.
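To illustrate what a rate expressed under the linear approach can imply in practice, an allowable manure application can be back-calculated from a nutrient limit and the nutrient content of the manure. The Python sketch below uses hypothetical numbers, not values from the rule, and ignores the phosphorus limit and the many site-specific factors a real nutrient management plan must consider.

# Hedged sketch: converting a hypothetical allowable nitrogen rate into an
# allowable manure application rate. All values are illustrative placeholders.
allowed_nitrogen_lb_per_acre = 150.0   # hypothetical agronomic nitrogen limit
manure_nitrogen_lb_per_ton = 10.0      # hypothetical plant-available nitrogen in manure

allowed_manure_tons_per_acre = allowed_nitrogen_lb_per_acre / manure_nitrogen_lb_per_ton
print(allowed_manure_tons_per_acre)    # 15.0 tons of manure per acre in this example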
=== Government assistance for compliance ===
The EPA points to several tools available to assist CAFO operators in meeting their obligations under the CWA. First, the EPA awards federal grants to provide technical assistance to livestock operators for preventing discharges of water pollution (and reducing air pollution). The EPA claims that CAFOs can obtain an NMP for free under these grants. Recently, the annual amount of the grant totaled $8 million. Second, a Manure Management Planner (MMP) software program has been developed by Purdue University in conjunction with funding by a federal grant. The MMP is tailored to each state's technical standards (including Phosphorus Indexes and other assessment tools). The MMP program provides free assistance to both permitting authorities and CAFO operators and can be found at the Purdue University website. Lastly, the EPA notes that the USDA offers a "range of support services," including a long-term program that aims to assist CAFOs with NMPs.
=== Debate over EPA policy ===
Environmentalists argue that the standards under the CWA are not strong enough. Researchers have identified regions in the country that have weak enforcement of regulations and, therefore, are popular locations for CAFO developers looking to reduce cost and expand operations without strict government oversight. Even when laws are enforced, there is the risk of environmental accidents. The massive 1995 manure spill in North Carolina highlights the reality that contamination can happen even when it is not done maliciously. The question of whether such a spill could have been avoided is a contributing factor in the debate for policy reform.
Environmental groups have criticized the EPA's regulation of CAFOs on several specific grounds, including the following.
Size threshold for "CAFO": Environmentalists favor reducing the size limits required to qualify as a CAFO; this would broaden the scope of the EPA's regulations on CAFOs to include more industry farming operations (currently classified as AFOs).
Duty to apply: Environmentalists strongly criticized the portion of the court's ruling in Waterkeeper Alliance that struck down the EPA's 2003 requirement that all CAFOs apply for an NPDES permit. The EPA's revised permitting policy is now overly reactive, environmentalists maintain, because it "allow[s] CAFO operators to decide whether their situation poses enough risk of getting caught having a discharge to warrant the investment of time and resources in obtaining a permit." It is argued that CAFOs have very little incentive to seek an NPDES permit under the new rule.
Requirement for co-permitting entities that exercise "substantial operational control" over CAFOs: Environmental groups unsuccessfully petitioned the EPA to require "co-permitting of both the farmer who raises the livestock and the large companies that actually own the animals and contract with farmers." This modification to EPA regulations would have made the corporations legally responsible for the waste produced on the farms with which they contract.
Zero discharge requirement to groundwater when a direct hydrologic connection exists to surface water: The EPA omitted a provision in its 2003 rule that would have held CAFOs to a zero discharge limit from the CAFO's production area to "ground water that has a direct hydrologic connection to surface water." Environmentalists criticized the EPA's decision to omit this provision on the grounds that groundwater is often a drinking source in the rural areas where most CAFOs are located.
Specific performance standards: Environmentalists urged the EPA to phase out the use of lagoons (holding animal waste in pond-like structures) and sprayfields (spraying waste onto crops). Environmentalists argued that these techniques for dealing with animal waste were outmoded and present an "unacceptable risk to public health and the environment" due to their ability to pollute both surface water and groundwater following "weather events, human error, and system failures." Environmentalists suggested that whenever manure is land applied, it should be injected into the soil rather than sprayed.
Lack of regulation of air pollution: The revisions to the EPA's rules under the CWA did not address air pollutants. Environmentalists maintain that the air pollutants from CAFOs—which include ammonia, hydrogen sulfide, methane, volatile organic compounds, and particulate matter—should be subject to EPA regulation.
Conversely, industry groups criticize the EPA's rules as overly stringent. Industry groups vocally opposed the requirement in the 2008 rule (since struck down by the Fifth Circuit) that required CAFOs to seek a permit if they "propose to discharge" into waters of the United States. Generally speaking, the farm industry disputes the presumption that CAFOs do discharge pollutants and it therefore objects to the pressure that the EPA places on CAFOs to voluntarily seek an NPDES permit. As a starting point, farm industry groups "emphasize that most farmers are diligent stewards of the environment, since they depend on natural resources of the land, water, and air for their livelihoods and they, too, directly experience adverse impacts on water and air quality." Some of the agricultural industry groups continue to maintain that the EPA should have no authority to regulate any of the runoff from land application areas because they believe this constitutes a nonpoint source that is outside the scope of the CWA. According to this viewpoint, voluntary programs adequately address any problems with excess manure.
== States' role and authority ==
The role of the federal government in environmental issues is generally to set national guidelines, while state governments address specific issues. Under this framework of federal goals, the responsibility to prevent, reduce, and eliminate pollution rests with the states.
The management of water and air standards follows this authoritative structure. States that have been authorized by the EPA to directly issue permits under NPDES (also known as "NPDES states") have received jurisdiction over CAFOs. As a result of this delegation of authority from the EPA, CAFO permitting procedures and standards may vary from state to state.
Specifically for water pollution, the federal government establishes federal standards for wastewater discharge, and authorized states develop their own wastewater policies to comply with them. More specifically, what a state allows an individual CAFO to discharge must be as strict as or stricter than the federal standard. This protection covers all waterways, whether or not the water body can safely sustain aquatic life or support public recreation. Higher standards are upheld for some pristine, publicly owned waterways, such as those in parks, in order to maintain their condition for preservation and recreation. Exceptions allowing lower water quality standards are in place for certain waterways if doing so is deemed economically significant. These policy patterns are significant when considering the role of state governments in CAFO permitting.
=== State versus federal permit issuance ===
Federal law requires CAFOs to obtain NPDES permits before wastewater may be discharged from the facility. The state agency responsible for approving permits for CAFOs in a given state is dependent on the authorization of that state. The permitting process is divided into two main methods based on a state's authorization status. As of 2018, EPA has authorized 47 states to issue NPDES permits. Although they have their own state-specific permitting standards, permitting requirements in authorized states must be at least as stringent as the federal standards.: 13 In the remaining states and territories, an EPA regional office issues NPDES permits.
=== Permitting process ===
A state's authority and the state's environmental regulatory framework will determine the permit process and the state offices involved. Below are two examples of states' permitting organization.
==== Authorized state case study: Arizona ====
Arizona issues permits through a general permitting process. CAFOs must obtain both a general Arizona Pollutant Discharge Elimination System (AZPDES) Permit and a general Aquifer Protection Permit. The Arizona state agency tasked with managing permitting is the Arizona Department of Environmental Quality (ADEQ).
For the Aquifer Protection Permit, CAFOs are automatically permitted if they comply with the state's BMPs outlined in the relevant state rule, listed on the ADEQ's website. Their compliance is evaluated through onsite inspections by the agency's CAFO Inspection Program. If a facility is found to be unlawfully discharging, the agency may issue warnings and, if necessary, file suit against the facility. For the AZPDES permit, CAFOs are required to submit a Notice of Intent to the ADEQ. In addition, they must complete and submit a Nutrient Management Plan (NMP) for the state's annual report.
Even in an authorized state, the EPA maintains oversight of state permitting programs. This would be most likely to happen in the event that a complaint is filed with the EPA by a third party. For instance, in 2008, Illinois Citizens for Clean Air & Water filed a complaint with the EPA arguing that the state was not properly implementing its CAFO permitting program. The EPA responded with an "informal" investigation. In a report released in 2010, the agency sided with the environmental organization and provided a list of recommendations and required action for the state to meet.
==== Unauthorized state case study: Massachusetts ====
In unauthorized states, the EPA has the authority to issue NPDES permits. In these states, such as Massachusetts, CAFOs communicate and file required documentation through an EPA regional office. In Massachusetts, the EPA issues a general permit for the entire state. The state's Department of Agricultural Resources (MDAR) has an agreement with the EPA for the implementation of CAFO rules. MDAR's major responsibility is educational: the agency assists operators in determining whether their facility qualifies as a CAFO. Specifically, it conducts onsite evaluations of facilities, provides advice on best practices, and provides information and technical assistance.
If a state has additional state specific rules for water quality standards, the state government maintains the authority for permitting. For instance, New Mexico, also an unauthorized state, requires CAFOs and AFOs to obtain a Groundwater Permit if the facilities discharge waste in a manner that might affect local groundwater. The EPA is not involved in the issuing of this state permit. Massachusetts, however, does not have additional state permit requirements.
== Zoning ordinances ==
State planning laws and local zoning ordinances represent the main policy tools for regulating land use. Many states have passed legislation that specifically exempts CAFOs (and other agricultural entities) from zoning regulations. The promulgation of so-called "right to farm" statutes has provided, in some instances, a shield from liability for CAFOs (and other potential nuisances in agriculture). More specifically, the right-to-farm statutes seek to "limit the circumstances under which agricultural operations can be deemed nuisances."
The history of these agricultural exemptions dates back to the 1950s. Right-to-farm statutes expanded in the 1970s, when state legislatures became increasingly sensitive to the loss of rural farmland to urban expansion. The statutes were enacted at a time when CAFOs and "modern confinement operations did not factor into legislators' perceptions of the beneficiaries of [the] generosity" of such statutes. Forty-three states now have some sort of statutory protection for farmers from nuisance suits. Some of these states (such as Iowa, Oklahoma, Wyoming, Tennessee, and Kansas) also provide specific protection to animal feeding operations (AFOs) and CAFOs. Right-to-farm statutes vary in form. Some states, for instance, require that an agricultural operation be located "within an acknowledged and approved agricultural district" in order to receive protection; other states do not.
Opponents of CAFOs have challenged right-to-farm statutes in court, and the constitutionality of such statutes is not entirely clear. The Iowa Supreme Court, for instance, in 1998 struck down a right-to-farm statute as a "taking" (in violation of the 5th and 14th Amendments of the U.S. Constitution) because the statute stripped neighboring landowners of property rights without compensation.
As of February 2023, 85 Iowa counties (a majority of the state's counties) had passed a "Construction Evaluation Resolution". Pursuant to Iowa Code section 459, only counties that have adopted such a resolution can submit to the Iowa Department of Natural Resources a recommendation to approve or disapprove a construction permit application for a proposed confinement feeding operation received by the board of supervisors between February 1, 2023, and January 31, 2024.
== Regulation under the Clean Air Act ==
CAFOs are potentially subject to regulation under the Clean Air Act (CAA), but the emissions from CAFOs generally do not exceed established statutory thresholds. In addition, the EPA's regulations do not provide a clear methodology for measuring emissions from CAFOs, which has "vexed both regulators and the industry." Negotiations between the EPA and the agricultural industry did, however, result in an Air Compliance Agreement in January 2005. According to the agreement, certain animal feeding operations (AFOs) received a covenant not to sue from the EPA in exchange for payment of a civil penalty for past violations of the CAA and an agreement to allow their facilities to be monitored for a study on air pollution emissions in the agricultural sector. Results and analysis of the EPA's study are scheduled to be released later in 2011.
Environmental groups have formally proposed to tighten EPA regulation of air pollution from CAFOs. A coalition of environmental groups petitioned the EPA on April 6, 2011, to designate ammonia as a "criteria pollutant" and establish National Ambient Air Quality Standards (NAAQS) for ammonia from CAFOs. The petition alleges that "CAFOs are leading contributors to the nation's ammonia inventory; by one EPA estimate livestock account for approximately 80 percent of total emissions. CAFOs also emit a disproportionately large share of the ammonia in certain states and communities." If the EPA adopts the petition, CAFOs and other sources of ammonia would be subject to the permitting requirements of the CAA.
== See also ==
Animal feeding operation
Battery cage (chicken egg production)
Golden Triangle of Meat-packing
Intensive animal farming
Intensive pig farming
== Notes ==
== References == | Wikipedia/Concentrated_animal_feeding_operation |
An aerobic treatment system (ATS), often called an aerobic septic system, is a small scale sewage treatment system similar to a septic tank system, but which uses an aerobic process for digestion rather than just the anaerobic process used in septic systems. These systems are commonly found in rural areas where public sewers are not available, and may be used for a single residence or for a small group of homes.
Unlike the traditional septic system, the aerobic treatment system produces a high quality secondary effluent, which can be sterilized and used for surface irrigation. This allows much greater flexibility in the placement of the leach field, as well as cutting the required size of the leach field by as much as half.
== Process ==
The ATS process generally consists of the following phases:
Pre-treatment stage to remove large solids and other undesirable substances.
Aeration stage, where aerobic bacteria digest biological wastes.
Settling stage allows undigested solids to settle. This forms a sludge that must be periodically removed from the system.
Disinfecting stage, where chlorine or a similar disinfectant is mixed with the water to produce an antiseptic output. Another option is UV disinfection, where the water is exposed to UV light inside a UV disinfection unit.
The disinfecting stage is optional and is used where a sterile effluent is required, such as in cases where the effluent is distributed above ground. The disinfectant typically used is calcium hypochlorite in tablet form, specially made for waste treatment systems; these tablets are intended to break down quickly in sunlight, because stabilized forms of chlorine persist after the effluent is dispersed and can kill plants in the leach field.
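As a rough sense of disinfectant demand, the daily mass of chlorine required scales with the wastewater flow and the target dose. The Python sketch below uses hypothetical flow and dose values; actual dosing depends on the tablet product, effluent quality, and local requirements.

# Hedged sketch: daily chlorine mass for a target dose (hypothetical values).
flow_liters_per_day = 1500.0      # hypothetical household wastewater flow
target_dose_mg_per_liter = 5.0    # hypothetical chlorine dose

chlorine_grams_per_day = flow_liters_per_day * target_dose_mg_per_liter / 1000.0
print(chlorine_grams_per_day)     # 7.5 grams of chlorine per day in this example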
Since the ATS contains a living ecosystem of microbes to digest the waste products in the water, excessive amounts of items such as bleach or antibiotics can damage the ATS environment and reduce treatment effectiveness. Non-digestible items should also be avoided, as they will build up in the system and require more frequent sludge removal.
== Types of aerobic treatment systems ==
Small scale aerobic systems generally use one of two designs, fixed-film systems, or continuous flow, suspended growth aerobic systems (CFSGAS). The pre-treatment and effluent handling are similar for both types of systems, and the difference lies in the aeration stage.
=== Fixed film systems ===
Fixed film systems use a porous medium which provides a bed to support the biomass film that digests the waste material in the wastewater. Designs for fixed film systems vary widely, but fall into two basic categories (though some systems may combine both methods). The first is a system where the medium is moved relative to the wastewater, alternately immersing the film and exposing it to air, while the second uses a stationary medium, and varies the wastewater flow so the film is alternately submerged and exposed to air. In both cases, the biomass must be exposed to both wastewater and air for the aerobic digestion to occur. The film itself may be made of any suitable porous material, such as formed plastic or peat moss. Simple systems use stationary media, and rely on intermittent, gravity driven wastewater flow to provide periodic exposure to air and wastewater. A common moving media system is the rotating biological contactor (RBC), which uses disks rotating slowly on a horizontal shaft. Nearly 40 percent of the disks are submerged at any given time, and the shaft rotates at a rate of one or two revolutions per minute.
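Using the figures above for a rotating biological contactor (roughly 40 percent of each disk submerged, one to two revolutions per minute), the time a point on a disk spends submerged during each revolution follows directly; the Python sketch below simply restates that arithmetic.

# Arithmetic sketch based on the RBC figures quoted above.
submerged_fraction = 0.40

for rpm in (1.0, 2.0):
    seconds_per_revolution = 60.0 / rpm
    submerged_seconds = submerged_fraction * seconds_per_revolution
    print(rpm, submerged_seconds)  # 1 rpm -> 24 s submerged; 2 rpm -> 12 s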
=== Continuous flow, suspended growth aerobic systems ===
CFSGAS systems, as the name implies, are designed to handle continuous flow, and do not provide a bed for a bacterial film, relying rather on bacteria suspended in the wastewater. The suspension and aeration are typically provided by an air pump, which pumps air through the aeration chamber, providing a constant stirring of the wastewater in addition to the oxygenation. A medium to promote fixed film bacterial growth may be added to some systems designed to handle higher than normal levels of biomass in the wastewater.
=== Retrofit or portable aerobic systems ===
Another increasingly common use of aerobic treatment is for the remediation of failing or failed anaerobic septic systems, by retrofitting an existing system with an aerobic feature. This class of product, known as aerobic remediation, is designed to remediate biologically failed and failing anaerobic distribution systems by significantly reducing the biochemical oxygen demand (BOD5) and total suspended solids (TSS) of the effluent. The reduction of the BOD5 and TSS reverses the developed bio-mat. Further, effluent with high dissolved oxygen and aerobic bacteria flow to the distribution component and digest the bio-mat.
=== Composting toilets ===
Composting toilets are designed to treat only toilet waste, rather than general residential waste water, and are typically used with water-free toilets rather than the flush toilets associated with the above types of aerobic treatment systems. These systems treat the waste as a moist solid, rather than in liquid suspension, and therefore separate urine from feces during treatment to maintain the correct moisture content in the system. An example of a composting toilet is the clivus multrum (Latin for 'inclined chamber'), which consists of an inclined chamber that separates urine and feces and a fan to provide positive ventilation and prevent odors from escaping through the toilet. Within the chamber, the urine and feces are independently broken down not only by aerobic bacteria, but also by fungi, arthropods, and earthworms. Treatment times are very long, with a minimum time between removals of solid waste of a year; during treatment the volume of the solid waste is decreased by 90 percent, with most being converted into water vapor and carbon dioxide. Pathogens are eliminated from the waste by the long durations in inhospitable conditions in the treatment chamber.
== Comparison to traditional septic systems ==
The aeration stage and the disinfecting stage are the primary differences from a traditional septic system; in fact, an aerobic treatment system can be used as a secondary treatment for septic tank effluent. These stages increase the initial cost of the aerobic system, and also the maintenance requirements, over those of the passive septic system. Unlike many other biofilters, aerobic treatment systems require a constant supply of electricity to drive the air pump, increasing overall system costs. The disinfectant tablets must be periodically replaced, as must the electrical components (air compressor) and mechanical components (air diffusers). On the positive side, an aerobic system produces a higher-quality effluent than a septic tank, so the leach field can be smaller than that of a conventional septic system, and the output can be discharged in areas too environmentally sensitive for septic system output. Some aerobic systems recycle the effluent through a sprinkler system, using it to water the lawn where regulations allow.
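The ongoing electricity cost of continuous aeration can be estimated from the air pump's power draw. The Python sketch below uses a hypothetical wattage and electricity price; real pumps and tariffs vary widely.

# Hedged sketch: annual electricity use and cost of a continuously running
# ATS air pump. The wattage and price are hypothetical placeholders.
pump_watts = 80.0        # hypothetical air pump power draw
price_per_kwh = 0.15     # hypothetical electricity price in USD

kwh_per_year = pump_watts / 1000.0 * 24 * 365
annual_cost = kwh_per_year * price_per_kwh
print(round(kwh_per_year, 1), round(annual_cost, 2))  # about 700.8 kWh and 105.12 USD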
=== Effluent quality ===
Since the effluent from an ATS is often discharged onto the surface of the leach field, its quality is very important. A typical ATS will, when operating correctly, produce an effluent with less than 30 mg/L BOD5, 25 mg/L TSS, and 10,000 cfu/mL fecal coliform bacteria. This is clean enough that it cannot support a biomat or "slime" layer the way septic tank effluent can.
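A minimal Python sketch of comparing a sample against the typical figures quoted above (30 mg/L BOD5, 25 mg/L TSS, 10,000 cfu/mL fecal coliform) is shown below; the dictionary keys and the function are illustrative conveniences, not terms from any regulation.

# Hedged sketch: checking an effluent sample against the typical ATS figures above.
TYPICAL_ATS_LIMITS = {
    "bod5_mg_per_l": 30.0,
    "tss_mg_per_l": 25.0,
    "fecal_coliform_cfu_per_ml": 10000.0,
}

def within_typical_ats_quality(sample):
    """Return True if every measured parameter is at or below the typical value."""
    return all(sample[key] <= limit for key, limit in TYPICAL_ATS_LIMITS.items())

sample = {"bod5_mg_per_l": 22.0, "tss_mg_per_l": 18.0, "fecal_coliform_cfu_per_ml": 4000.0}
print(within_typical_ats_quality(sample))  # True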
ATS effluent is relatively odorless; a properly operating system will produce effluent that smells musty, but not like sewage. Aerobic treatment is so effective at reducing odors that it is the preferred method for reducing odor from manure produced by farms.
== See also ==
List of waste-water treatment technologies
== References ==
== External links ==
Aerobic Treatment Units at Northern Arizona University | Wikipedia/Aerobic_treatment_system |
Wastewater treatment is a process which removes and eliminates contaminants from wastewater, converting it into an effluent that can be returned to the water cycle with acceptable environmental impact, or reused; the latter is called water reclamation. The treatment process takes place in a wastewater treatment plant. There are several kinds of wastewater, each treated at the appropriate type of plant. Domestic wastewater, also called municipal wastewater or sewage, is treated at a sewage treatment plant. Industrial wastewater is treated at a separate industrial wastewater treatment plant, or in a sewage treatment plant, in the latter case usually after pre-treatment. Further types of wastewater treatment plants include agricultural wastewater treatment plants and leachate treatment plants.
Common processes in wastewater treatment include phase separation, such as sedimentation; biological and chemical processes, such as oxidation; and polishing. The main by-product from wastewater treatment plants is a sludge that is usually treated in the same or another wastewater treatment plant.: Ch.14 Biogas can be another by-product if anaerobic treatment is used. Treated wastewater can be reused as reclaimed water. The main purpose of wastewater treatment is to allow the treated wastewater to be disposed of or reused safely. However, before treatment, the options for disposal or reuse must be considered so that the correct treatment process is applied.
The term "wastewater treatment" is often used to mean "sewage treatment".
== Types of treatment plants ==
Wastewater treatment plants may be distinguished by the type of wastewater to be treated. There are numerous processes that can be used to treat wastewater depending on the type and extent of contamination. The treatment steps include physical, chemical and biological treatment processes.
Types of wastewater treatment plants include:
Sewage treatment plants
Industrial wastewater treatment plants
Agricultural wastewater treatment plants
Leachate treatment plants
=== Sewage treatment plants ===
=== Industrial wastewater treatment plants ===
=== Agricultural wastewater treatment plants ===
=== Leachate treatment plants ===
Leachate treatment plants are used to treat leachate from landfills. Treatment options include: biological treatment, mechanical treatment by ultrafiltration, treatment with active carbon filters, electrochemical treatment including electrocoagulation by various proprietary technologies and reverse osmosis membrane filtration using disc tube module technology.
== Unit processes ==
The unit processes involved in wastewater treatment include physical processes such as settlement or flotation and biological processes such as oxidation or anaerobic treatment. Some wastewaters require specialized treatment methods. At the simplest level, treatment of most wastewaters is carried out through separation of solids from liquids, usually by sedimentation. By progressively converting dissolved material into solids, usually a biological floc or biofilm, which is then settled out or separated, an effluent stream of increasing purity is produced.
=== Phase separation ===
Phase separation transfers impurities into a non-aqueous phase. Phase separation may occur at intermediate points in a treatment sequence to remove solids generated during oxidation or polishing. Grease and oil may be recovered for fuel or saponification. Separated solids often require dewatering as sludge in the wastewater treatment plant. Disposal options for dried solids vary with the type and concentration of impurities removed from water.
==== Sedimentation ====
Solids such as stones, grit, and sand may be removed from wastewater by gravity when density differences are sufficient to overcome dispersion by turbulence. This is typically achieved using a grit channel designed to produce an optimum flow rate that allows grit to settle and other less-dense solids to be carried forward to the next treatment stage. Gravity separation of solids is the primary treatment of sewage, where the unit process is called "primary settling tanks" or "primary sedimentation tanks". It is also widely used for the treatment of other types of wastewater. Solids that are denser than water will accumulate at the bottom of quiescent settling basins. More complex clarifiers also have skimmers to simultaneously remove floating grease such as soap scum and solids such as feathers, wood chips, or condoms. Containers like the API oil-water separator are specifically designed to separate non-polar liquids.
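For small particles settling in quiescent water, the behaviour described above can be approximated by Stokes' law, v = g d² (ρs − ρw) / (18 μ). The following is a minimal sketch of that relation; it holds for laminar flow around the particle only, the particle values are illustrative assumptions, and real grit channels are sized empirically rather than from this formula.

```python
def stokes_settling_velocity(diameter_m: float,
                             particle_density: float,
                             water_density: float = 1000.0,
                             viscosity: float = 1.0e-3) -> float:
    """Terminal settling velocity (m/s) of a small sphere, per Stokes' law.

    Valid only at low particle Reynolds number (laminar conditions);
    coarse grit settles faster than this regime allows.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    return g * diameter_m**2 * (particle_density - water_density) / (18 * viscosity)

# Illustrative 0.1 mm sand grain (density ~2650 kg/m3):
print(stokes_settling_velocity(1e-4, 2650.0))  # ~0.009 m/s
```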
=== Biological and chemical processes ===
==== Oxidation ====
Oxidation reduces the biochemical oxygen demand of wastewater, and may reduce the toxicity of some impurities. Secondary treatment converts organic compounds into carbon dioxide, water, and biosolids through oxidation and reduction reactions. Chemical oxidation is widely used for disinfection.
===== Biochemical oxidation (secondary treatment) =====
===== Chemical oxidation =====
Advanced oxidation processes are used to remove some persistent organic pollutants and concentrations remaining after biochemical oxidation. Disinfection by chemical oxidation kills bacteria and microbial pathogens by adding oxidants such as ozone, chlorine or hypochlorite to wastewater. These oxidants then break down complex compounds in the organic pollutants into simple compounds such as water, carbon dioxide, and salts.
==== Anaerobic treatment ====
Anaerobic wastewater treatment processes (for example, the upflow anaerobic sludge blanket (UASB) and expanded granular sludge bed (EGSB) reactors) are also widely applied in the treatment of industrial wastewaters and biological sludge.
=== Polishing ===
Polishing refers to treatments made in further advanced treatment steps after the above methods (also called "fourth stage" treatment). These treatments may also be used independently for some industrial wastewater. Chemical reduction or pH adjustment minimizes the chemical reactivity of wastewater following chemical oxidation. Carbon filtering removes remaining contaminants and impurities by chemical adsorption onto activated carbon. Filtration through sand (silica) or fabric filters is the most common method used in municipal wastewater treatment.
== See also ==
List of largest wastewater treatment plants
List of wastewater treatment technologies
Water treatment
== References ==
== External links ==
Media related to Wastewater treatment at Wikimedia Commons | Wikipedia/Wastewater_treatment_plant |
An aquatic ecosystem is an ecosystem found in and around a body of water, in contrast to land-based terrestrial ecosystems. Aquatic ecosystems contain communities of organisms—aquatic life—that are dependent on each other and on their environment. The two main types of aquatic ecosystems are marine ecosystems and freshwater ecosystems. Freshwater ecosystems may be lentic (slow moving water, including pools, ponds, and lakes); lotic (faster moving water, for example streams and rivers); and wetlands (areas where the soil is saturated or inundated for at least part of the time).
== Types ==
=== Marine ecosystems ===
==== Marine coastal ecosystem ====
==== Marine surface ecosystem ====
=== Freshwater ecosystems ===
==== Lentic ecosystem (lakes) ====
==== Lotic ecosystem (rivers) ====
==== Wetlands ====
== Functions ==
Aquatic ecosystems perform many important environmental functions. For example, they recycle nutrients, purify water, attenuate floods, recharge ground water and provide habitats for wildlife. The biota of an aquatic ecosystem contribute to its self-purification, most notably microorganisms (including bacteria, protists and aquatic fungi), phytoplankton, higher plants, invertebrates, and fish. These organisms are actively involved in multiple self-purification processes, including organic matter destruction and water filtration. Reliable self-maintenance is crucial to aquatic ecosystems, which also provide habitats for the species that reside in them.
In addition to environmental functions, aquatic ecosystems are also used for human recreation, and are very important to the tourism industry, especially in coastal regions. They are also used for religious purposes, such as the worshipping of the Jordan River by Christians, and educational purposes, such as the usage of lakes for ecological study.
== Biotic characteristics (living components) ==
The biotic characteristics are mainly determined by the organisms that occur. For example, wetland plants may produce dense canopies that cover large areas of sediment, or snails and geese may graze the vegetation, leaving large mud flats. Aquatic environments have relatively low oxygen levels, forcing adaptation by the organisms found there. For example, many wetland plants must produce aerenchyma to carry oxygen to their roots. Other biotic characteristics are more subtle and difficult to measure, such as the relative importance of competition, mutualism or predation. There are a growing number of cases where grazing by coastal herbivores, including snails, geese and mammals, appears to be a dominant biotic factor.
=== Autotrophic organisms ===
Autotrophic organisms are producers that generate organic compounds from inorganic material. Algae use solar energy to generate biomass from carbon dioxide and are possibly the most important autotrophic organisms in aquatic environments. The shallower the water, the greater the biomass contribution from rooted and floating vascular plants. These two sources combine to produce the extraordinary production of estuaries and wetlands, as this autotrophic biomass is converted into fish, birds, amphibians and other aquatic species.
Chemosynthetic bacteria are found in benthic marine ecosystems. These organisms are able to feed on hydrogen sulfide in water that comes from volcanic vents. Great concentrations of animals that feed on these bacteria are found around volcanic vents. For example, there are giant tube worms (Riftia pachyptila) 1.5 m in length and clams (Calyptogena magnifica) 30 cm long.
=== Heterotrophic organisms ===
Heterotrophic organisms consume autotrophic organisms and use the organic compounds in their bodies as energy sources and as raw materials to create their own biomass.
Euryhaline organisms are salt tolerant and can survive in marine ecosystems, while stenohaline or salt intolerant species can only live in freshwater environments.
== Abiotic characteristics (non-living components) ==
An ecosystem is composed of biotic communities that are structured by biological interactions and abiotic environmental factors. Some of the important abiotic environmental factors of aquatic ecosystems include substrate type, water depth, nutrient levels, temperature, salinity, and flow. It is often difficult to determine the relative importance of these factors without rather large experiments. There may be complicated feedback loops. For example, sediment may determine the presence of aquatic plants, but aquatic plants may also trap sediment, and add to the sediment through peat.
The amount of dissolved oxygen in a water body is frequently the key substance in determining the extent and kinds of organic life in the water body. Fish need dissolved oxygen to survive, although their tolerance to low oxygen varies among species; in extreme cases of low oxygen, some fish even resort to air gulping. Plants often have to produce aerenchyma, while the shape and size of leaves may also be altered. Conversely, oxygen is fatal to many kinds of anaerobic bacteria.
Nutrient levels are important in controlling the abundance of many species of algae. The relative abundance of nitrogen and phosphorus can in effect determine which species of algae come to dominate. Algae are a very important source of food for aquatic life, but at the same time, if they become over-abundant, they can cause declines in fish when they decay. Similar over-abundance of algae in coastal environments such as the Gulf of Mexico produces, upon decay, a hypoxic region of water known as a dead zone.
The salinity of the water body is also a determining factor in the kinds of species found in the water body. Organisms in marine ecosystems tolerate salinity, while many freshwater organisms are intolerant of salt. The degree of salinity in an estuary or delta is an important control upon the type of wetland (fresh, intermediate, or brackish), and the associated animal species. Dams built upstream may reduce spring flooding, and reduce sediment accretion, and may therefore lead to saltwater intrusion in coastal wetlands.
Freshwater used for irrigation purposes often absorbs levels of salt that are harmful to freshwater organisms.
== Threats ==
The health of an aquatic ecosystem is degraded when the ecosystem's ability to absorb a stress has been exceeded. A stress on an aquatic ecosystem can be a result of physical, chemical or biological alterations to the environment. Physical alterations include changes in water temperature, water flow and light availability. Chemical alterations include changes in the loading rates of biostimulatory nutrients, oxygen-consuming materials, and toxins. Biological alterations include over-harvesting of commercial species and the introduction of exotic species. Human populations can impose excessive stresses on aquatic ecosystems. Climate change driven by anthropogenic activities can harm aquatic ecosystems by disrupting current distribution patterns of plants and animals. It has negatively impacted deep sea biodiversity, coastal fish diversity, crustaceans, coral reefs, and other biotic components of these ecosystems. Human-made aquatic ecosystems, such as ditches, aquaculture ponds, and irrigation channels, may also cause harm to naturally occurring ecosystems by trading off biodiversity with their intended purposes. For instance, ditches are primarily used for drainage, but their presence also negatively affects biodiversity.
There are many examples of excessive stresses with negative consequences. The environmental history of the Great Lakes of North America illustrates this problem, particularly how multiple stresses, such as water pollution, over-harvesting and invasive species can combine. The Norfolk Broadlands in England illustrate similar decline with pollution and invasive species. Lake Pontchartrain along the Gulf of Mexico illustrates the negative effects of different stresses including levee construction, logging of swamps, invasive species and salt water intrusion.
== See also ==
Aquatic plant – Plant that has adapted to living in an aquatic environment
Hydrobiology – Science of life and life processes in water
Hydrosphere – Total amount of water on a planet
Limnology – Science of inland aquatic ecosystems
Ocean
Stephen Alfred Forbes – American naturalist, one of the founders of aquatic ecosystem science
Stream metabolism
== References == | Wikipedia/Aquatic_ecosystems |
Sewage sludge treatment describes the processes used to manage and dispose of sewage sludge produced during sewage treatment. Sludge treatment is focused on reducing sludge weight and volume to reduce transportation and disposal costs, and on reducing potential health risks of disposal options. Water removal is the primary means of weight and volume reduction, while pathogen destruction is frequently accomplished through heating during thermophilic digestion, composting, or incineration. The choice of a sludge treatment method depends on the volume of sludge generated, and comparison of treatment costs required for available disposal options. Air-drying and composting may be attractive to rural communities, while limited land availability may make aerobic digestion and mechanical dewatering preferable for cities, and economies of scale may encourage energy recovery alternatives in metropolitan areas.
Sludge is mostly water with some amounts of solid material removed from liquid sewage. Primary sludge includes settleable solids removed during primary treatment in primary clarifiers. Secondary sludge is separated in secondary clarifiers downstream of secondary treatment bioreactors or of processes using inorganic oxidizing agents. In intensive sewage treatment processes, the sludge produced needs to be removed from the liquid line on a continuous basis, because the tanks in the liquid line have insufficient volume to store sludge. This keeps the treatment processes compact and in balance (production of sludge approximately equal to removal of sludge). The sludge removed from the liquid line goes to the sludge treatment line. Aerobic processes (such as the activated sludge process) tend to produce more sludge than anaerobic processes. By contrast, in extensive (natural) treatment processes, such as ponds and constructed wetlands, the sludge produced remains accumulated in the treatment units (the liquid line) and is only removed after several years of operation.
Sludge treatment options depend on the amount of solids generated and other site-specific conditions. Composting is most often applied to small-scale plants, with aerobic digestion for mid-sized operations and anaerobic digestion for larger-scale operations. The sludge is sometimes passed through a so-called pre-thickener which de-waters it. Types of pre-thickeners include centrifugal sludge thickeners, rotary drum sludge thickeners and belt filter presses. Dewatered sludge may be incinerated or transported offsite for disposal in a landfill or use as an agricultural soil amendment.
Energy may be recovered from sludge through methane gas production during anaerobic digestion or through incineration of dried sludge, but energy yield is often insufficient to evaporate sludge water content or to power blowers, pumps, or centrifuges required for dewatering. Coarse primary solids and secondary sewage sludge may include toxic chemicals removed from liquid sewage by sorption onto solid particles in clarifier sludge. Reducing sludge volume may increase the concentration of some of these toxic chemicals in the sludge.
== Terminology ==
=== Biosolids ===
"Biosolids" is a term often used in wastewater engineering publications and public relations efforts by local water authorities when they want to put the focus on reuse of sewage sludge, after the sludge has undergone suitable treatment processes. In fact, biosolids are defined as organic wastewater solids that can be reused after stabilization processes such as anaerobic digestion and composting. The term "biosolids" was introduced by the Water Environment Federation in the U.S. in 1998. However, some people argue that the term is a euphemism to hide the fact that sewage sludge may also contain substances that could be harmful to the environment when the treated sludge is applied to land, for example environmental persistent pharmaceutical pollutants and heavy metal compounds.
== Treatment processes ==
The sludges accumulated in a wastewater treatment process must be treated and disposed of in a safe and effective manner. In many large plants the raw sludges are reduced in volume by a process of digestion.
=== Thickening ===
Thickening is often the first step in a sludge treatment process. Sludge from primary or secondary clarifiers may be stirred (often after addition of clarifying agents) to form larger, more rapidly settling aggregates. Primary sludge may be thickened to about 8 or 10 percent solids, while secondary sludge may be thickened to about 4 percent solids. Thickeners often resemble a clarifier with the addition of a stirring mechanism. Thickened sludge with less than ten percent solids may receive additional sludge treatment while liquid thickener overflow is returned to the sewage treatment process.
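The practical effect of thickening follows from a dry-solids mass balance: the mass of solids is conserved, so the sludge volume shrinks in proportion to the increase in solids concentration. Below is a minimal sketch of this arithmetic, assuming the sludge density stays close to that of water; the function name and example figures are illustrative, chosen within the ranges quoted above.

```python
def thickened_volume(initial_volume_m3: float,
                     initial_solids_pct: float,
                     final_solids_pct: float) -> float:
    """Sludge volume after thickening, from a dry-solids mass balance.

    Assumes sludge density remains close to that of water, so the
    product of volume and solids fraction is conserved.
    """
    return initial_volume_m3 * initial_solids_pct / final_solids_pct

# Example: 100 m3 of secondary sludge at 1% solids, thickened to
# 4% solids, shrinks to a quarter of its original volume.
print(thickened_volume(100.0, 1.0, 4.0))  # -> 25.0
```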
=== Dewatering ===
Water content of sludge may be reduced by centrifugation, filtration, and/or evaporation to reduce transportation costs of disposal, or to improve suitability for composting. Centrifugation may be a preliminary step to reduce sludge volume for subsequent filtration or evaporation. Filtration may occur through underdrains in a sand drying bed or as a separate mechanical process in a belt filter press. Filtrate and centrate are typically returned to the sewage treatment process. After dewatering, sludge may be handled as a solid containing 50 to 75 percent water; sludges with higher moisture content are usually handled as liquids.
=== Digestion ===
Many sludges are treated using a variety of digestion techniques, the purpose of which is to reduce the amount of organic matter and the number of disease-causing microorganisms present in the solids. The most common treatment options include anaerobic digestion, aerobic digestion, and composting. Sludge digestion offers significant cost advantages by reducing sludge quantity by nearly 50% and providing biogas as a valuable energy source.
The process is often optimized to generate methane gas, which can be used as a fuel to provide energy to power the plant or for sale.
==== Anaerobic digestion ====
Anaerobic digestion is a bacterial process that is carried out in the absence of oxygen. The process can either be thermophilic digestion, in which sludge is fermented in tanks at a temperature of 55 °C, or mesophilic, at a temperature of around 36 °C. Though allowing shorter retention time (and thus smaller tanks), thermophilic digestion is more expensive in terms of energy consumption for heating the sludge.
Mesophilic anaerobic digestion (MAD) is also a common method for treating sludge produced at sewage treatment plants. The sludge is fed into large tanks and held for a minimum of 12 days to allow the digestion process to perform the four stages necessary to digest the sludge. These are hydrolysis, acidogenesis, acetogenesis, and methanogenesis. In this process the complex proteins and sugars are broken down to form more simple compounds such as water, carbon dioxide, and methane.
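As a back-of-the-envelope illustration, the minimum retention time translates directly into digester size: the working volume of a continuously fed tank is the daily sludge flow multiplied by the hydraulic retention time. A minimal sketch follows; only the 12-day minimum comes from the text above, while the flow figure is a made-up example.

```python
def digester_volume_m3(flow_m3_per_day: float, retention_days: float) -> float:
    """Working volume needed to hold sludge for the full retention time."""
    return flow_m3_per_day * retention_days

# Hypothetical plant feeding 80 m3 of thickened sludge per day,
# held for the 12-day minimum quoted for mesophilic digestion:
print(digester_volume_m3(80.0, 12.0))  # -> 960.0
```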
Anaerobic digestion generates biogas with a high proportion of methane that may be used to both heat the tank and run engines or microturbines for other on-site processes. Methane generation is a key advantage of the anaerobic process. Its key disadvantage is the long time required for the process (up to 30 days) and the high capital cost. Many larger sites utilize the biogas for combined heat and power, using the cooling water from the generators to maintain the temperature of the digestion plant at the required 35 ± 3 °C. Sufficient energy can be generated in this way to produce more electricity than the machines require.
Hong Kong's Sludge Treatment Facility ("T-PARK") provides electricity for its own operation, and even to the public power grid, by making use of the heat generated during sludge incineration.
==== Aerobic digestion ====
Aerobic digestion is a bacterial process occurring in the presence of oxygen, resembling a continuation of the activated sludge process. Under aerobic conditions, bacteria rapidly consume organic matter and convert it into carbon dioxide. Once organic matter runs short, the bacteria die and are used as food by other bacteria; this stage of the process is known as endogenous respiration, and it is here that solids reduction occurs. Because aerobic digestion occurs much faster than anaerobic digestion, its capital costs are lower. However, the operating costs are characteristically much greater because of the energy used by the blowers, pumps and motors needed to add oxygen to the process. Recent technological advances include non-electric aerated filter systems that use natural air currents for aeration instead of electrically operated machinery.
Aerobic digestion can also be achieved using diffuser systems or jet aerators to oxidize the sludge. Fine bubble diffusers are typically the more cost-efficient diffusion method; however, plugging is a common problem because sediment settles into the smaller air holes. Coarse bubble diffusers are more commonly used in activated sludge tanks or in the flocculation stages. A key consideration in selecting a diffuser type is ensuring it will produce the required oxygen transfer rate.
=== Sidestream treatment technologies ===
Sludge treatment technologies that are used for thickening or dewatering of sludge have two products: the thickened or dewatered sludge, and a liquid fraction which is called sludge treatment liquids, sludge dewatering streams, liquors, centrate (if it stems from a centrifuge), filtrate (if it stems from a belt filter press) or similar. This liquid requires further treatment as it is high in nitrogen and phosphorus, particularly if the sludge has been anaerobically digested. The treatment can take place in the sewage treatment plant itself (by recycling the liquid to the start of the treatment process) or as a separate process.
==== Phosphorus recovery ====
One method for treating sludge dewatering streams is by using a process that is also used for phosphorus recovery. Another benefit for sewage treatment plant operators of treating sludge dewatering streams for phosphorus recovery is that it reduces the formation of obstructive struvite scale in pipes, pumps and valves. Such obstructions can be a maintenance headache particularly for biological nutrient removal plants where the phosphorus content in the sewage sludge is elevated. For example, the Canadian company Ostara Nutrient Recovery Technologies is marketing a process based on controlled chemical precipitation of phosphorus in a fluidized bed reactor that recovers struvite in the form of crystalline pellets from sludge dewatering streams. The resulting crystalline product is sold to the agriculture, turf and ornamental plants sectors as fertilizer under the registered trade name "Crystal Green".
=== Composting ===
Composting is an aerobic process of mixing sewage sludge with agricultural byproduct sources of carbon such as sawdust, straw or wood chips. In the presence of oxygen, bacteria digesting both the sewage sludge and the plant material generate heat to kill disease-causing microorganisms and parasites. Maintenance of aerobic conditions with 10 to 15 percent oxygen requires bulking agents allowing air to circulate through the fine sludge solids. Stiff materials like corn cobs, nut shells, shredded tree-pruning waste, or bark from lumber or paper mills better separate sludge for ventilation than softer leaves and lawn clippings. Light, biologically inert bulking agents like shredded tires may be used to provide structure where small, soft plant materials are the major source of carbon.
Uniform distribution of pathogen-killing temperatures may be aided by placing an insulating blanket of previously composted sludge over aerated composting piles. Initial moisture content of the composting mixture should be about 50 percent; but temperatures may be inadequate for pathogen reduction where wet sludge or precipitation raises compost moisture content above 60 percent. Composting mixtures may be piled on concrete pads with built-in air ducts to be covered by a layer of unmixed bulking agents. Odors may be minimized by using an aerating blower drawing vacuum through the composting pile via the underlying ducts and exhausting through a filtering pile of previously composted sludge to be replaced when moisture content reaches 70 percent. Liquid accumulating in the underdrain ducting may be returned to the sewage treatment plant; and composting pads may be roofed to provide better moisture content control.
After a composting interval sufficient for pathogen reduction, composted piles may be screened to recover undigested bulking agents for re-use; and composted solids passing through the screen may be used as a soil amendment material with similar benefits to peat. The optimum initial carbon-to-nitrogen ratio of a composting mixture is between 26:1 and 30:1; but the composting ratio of agricultural byproducts may be determined by the amount required to dilute concentrations of toxic chemicals in the sludge to acceptable levels for the intended compost use. Although toxicity is low in most agricultural byproducts, suburban grass clippings may have residual herbicide levels detrimental to some agricultural uses; and freshly composted wood byproducts may contain phytotoxins inhibiting germination of seedlings until detoxified by soil fungi.
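The carbon-to-nitrogen ratio of a blend is simply total carbon divided by total nitrogen across the ingredients, which is how a nitrogen-rich sludge is brought into the 26:1 to 30:1 optimum by adding a carbon-rich bulking agent. A minimal sketch follows; the ingredient compositions and masses are illustrative placeholders, not measured values.

```python
def blend_c_to_n(ingredients):
    """C:N ratio of a compost blend.

    ingredients: list of (dry_mass_kg, carbon_fraction, nitrogen_fraction).
    """
    carbon = sum(m * c for m, c, _ in ingredients)
    nitrogen = sum(m * n for m, _, n in ingredients)
    return carbon / nitrogen

# Illustrative values only: dewatered sludge solids are nitrogen-rich
# (low C:N), sawdust is carbon-rich (high C:N); blending moves the mix
# toward the 26:1 to 30:1 optimum quoted above.
mix = [
    (1000.0, 0.35, 0.05),    # sludge solids: C:N about 7:1
    (2350.0, 0.48, 0.0012),  # sawdust: C:N about 400:1
]
print(round(blend_c_to_n(mix), 1))  # -> 28.0, inside the quoted optimum
```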
=== Incineration ===
Incineration is used to a much lesser degree, because of air emissions concerns and the supplemental fuel (typically natural gas or fuel oil) required to burn the low calorific value sludge and vaporize residual water. On a dry solids basis, the fuel value of sludge varies from about 9,500 British thermal units per pound (5,300 cal/g) of undigested sewage sludge to 2,500 British thermal units per pound (1,400 cal/g) of digested primary sludge. Stepped multiple hearth incinerators with high residence time and fluidized bed incinerators are the most common systems used to combust wastewater sludge. Co-firing in municipal waste-to-energy plants is occasionally done; this option is less expensive assuming the facilities already exist for solid waste and no auxiliary fuel is needed. Incineration tends to maximize heavy metal concentrations in the remaining solid ash requiring disposal; but the option of returning wet scrubber effluent to the sewage treatment process may reduce air emissions, by increasing concentrations of dissolved salts in sewage treatment plant effluent.
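The two heating values quoted above can be cross-checked with the standard unit conversion, 1 BTU ≈ 252.0 cal and 1 lb ≈ 453.6 g. A minimal sketch of the arithmetic:

```python
BTU_TO_CAL = 251.996  # calories per British thermal unit
LB_TO_G = 453.592     # grams per pound

def btu_per_lb_to_cal_per_g(btu_per_lb: float) -> float:
    """Convert a heating value from BTU/lb to cal/g."""
    return btu_per_lb * BTU_TO_CAL / LB_TO_G

print(round(btu_per_lb_to_cal_per_g(9500)))  # -> 5278, i.e. about 5,300 cal/g
print(round(btu_per_lb_to_cal_per_g(2500)))  # -> 1389, i.e. about 1,400 cal/g
```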
=== Drying beds ===
Simple sludge drying beds are used in many countries, particularly in developing countries, as they are a cheap and simple method to dry sewage sludge. Drainage water must be captured; drying beds are sometimes covered but usually left uncovered. Mechanical devices to turn over the sludge in the initial stages of the drying process are also available on the market.
Drying beds are typically composed of four layers of gravel and sand. The first layer is coarse gravel 15 to 20 centimeters thick, followed by fine gravel 10 centimeters thick. The third layer is sand, between 10 and 15 centimeters thick, which serves as the filter between the sludge and the gravel. The sludge dries on top while water percolates down through the layers and is collected by a drainage pipe beneath them.
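The layered profile described above can be captured as a simple table of layers and thickness ranges; the sketch below uses only the figures from the text to total the depth of the filter media, and the data structure itself is purely illustrative.

```python
# (layer, min thickness in cm, max thickness in cm), listed top to bottom
DRYING_BED_LAYERS = [
    ("sand (filter layer)", 10, 15),
    ("fine gravel", 10, 10),
    ("coarse gravel (at the drain)", 15, 20),
]

min_depth = sum(lo for _, lo, _ in DRYING_BED_LAYERS)
max_depth = sum(hi for _, _, hi in DRYING_BED_LAYERS)
print(f"filter media depth: {min_depth}-{max_depth} cm")  # -> 35-45 cm
```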
=== Emerging technologies ===
Phosphorus recovery from sewage sludge or from sludge dewatering streams is receiving increased attention particularly in Sweden, Germany and Canada, as phosphorus is a limited resource (a concept also known as "peak phosphorus") and is needed as fertilizer to feed a growing world population. Phosphorus recovery methods from wastewater or sludge can be categorized by the origin of the used matter (wastewater, sludge liquor, digested or non-digested sludge, ash) or by the type of recovery processes (precipitation, wet-chemical extraction and precipitation, thermal treatment). Research on phosphorus recovery methods from sewage sludge has been carried out in Sweden and Germany since around 2003, but the technologies currently under development are not yet cost effective, given the current price of phosphorus on the world market.
The Omni Processor, which was under development as of 2015, treats sewage sludge and can generate a surplus of electrical energy if the input materials have the right level of dryness.
Thermal depolymerization produces light hydrocarbons from sludge heated to 250 °C and compressed to 40 MPa.
Thermal hydrolysis is a two-stage process combining high-pressure boiling of sludge, followed by a rapid decompression. This combined action sterilizes the sludge and makes it more biodegradable, which improves digestion performance. Sterilization destroys pathogens in the sludge resulting in it exceeding the stringent requirements for land application (agriculture). Thermal hydrolysis systems are operating at sewage treatment plants in Europe, China and North America, and can generate electricity as well as high quality sludge.
The use of a green approach, such as phytoremediation, has recently been proposed as a valuable tool for remediating sewage sludge contaminated by trace elements and persistent organic pollutants.
== Disposal or use as fertilizer ==
When a liquid sludge is produced, further treatment may be required to make it suitable for final disposal. Sludges are typically thickened and/or dewatered to reduce the volumes transported off-site for disposal. Processes for reducing water content include lagooning in drying beds to produce a cake that can be applied to land or incinerated; pressing, where sludge is mechanically filtered, often through cloth screens to produce a firm cake; and centrifugation where the sludge is thickened by centrifugally separating the solid and liquid. Sludges can be disposed of by liquid injection to land or by disposal in a landfill.
There is no process which completely eliminates the need to dispose of treated sewage sludge.
Much sludge originating from commercial or industrial areas is contaminated with toxic materials that are released into the sewers from industrial or commercial processes or from domestic sources. Elevated concentrations of such materials may make the sludge unsuitable for agricultural use and it may then have to be incinerated or disposed of to landfill.
Despite the apparent unsuitability of at least some sewage sludge, application to farmland remains a commonly used option.
=== Examples ===
==== Edmonton, Alberta, Canada ====
The Edmonton Composting Facility, in Edmonton, Alberta, Canada, is the largest sewage sludge composting site in North America.
==== New York City, U.S. ====
Sewage sludge can be superheated and converted into pelletized granules that are high in nitrogen and other organic materials. In New York City, for example, several sewage treatment plants have dewatering facilities that use large centrifuges along with the addition of chemicals such as polymers to further remove liquid from the sludge. The product which is left is called "cake", and is picked up by companies which turn it into fertilizer pellets. This product, also called biosolids, is then sold to local farmers and turf farms as a soil amendment or fertilizer, reducing the amount of space required to dispose of sludge in landfills.
==== Southern California, U.S. ====
In the very large metropolitan areas of southern California, inland communities return sewage sludge to the sewer system of communities at lower elevations, to be reprocessed at a few very large treatment plants on the Pacific coast. This reduces the required size of interceptor sewers and allows local recycling of treated wastewater, while retaining the economy of a single sludge processing facility. It is also an example of how sewage sludge can help address an energy crisis.
== See also ==
Fecal sludge management
== References ==
=== Sources ===
== External links ==
Media related to Sewage sludge treatment at Wikimedia Commons | Wikipedia/Sewage_sludge_treatment |
Industrialisation (UK) or industrialization (US) is the period of social and economic change that transforms a human group from an agrarian society into an industrial society. This involves an extensive reorganisation of an economy for the purpose of manufacturing. Industrialisation is associated with an increase in polluting industries heavily dependent on fossil fuels. With the increasing focus on sustainable development and green industrial policy practices, industrialisation increasingly includes technological leapfrogging, with direct investment in more advanced, cleaner technologies.
The reorganisation of the economy has many unintended consequences, both economic and social. As industrial workers' incomes rise, markets for consumer goods and services of all kinds tend to expand and provide a further stimulus to industrial investment and economic growth. Moreover, family structures tend to shift, as extended families no longer live together in one household or location.
== Background ==
The first transformation from an agricultural to an industrial economy is known as the Industrial Revolution and took place from the mid-18th to early 19th century. It began in Great Britain, spreading to Belgium, Switzerland, Germany, and France and eventually to other areas in Europe and North America. Characteristics of this early industrialisation were technological progress, a shift from rural work to industrial labour, and financial investments in new industrial structures. Later commentators have called this the First Industrial Revolution.
The "Second Industrial Revolution" labels the later changes that came about in the mid-19th century after the refinement of the steam engine, the invention of the internal combustion engine, the harnessing of electricity and the construction of canals, railways, and electric-power lines. The invention of the assembly line gave this phase a boost. Coal mines, steelworks, and textile factories replaced homes as the place of work.
By the end of the 20th century, East Asia had become one of the most recently industrialised regions of the world.
There is considerable literature on the factors facilitating industrial modernisation and enterprise development.
== Social consequences ==
The Industrial Revolution was accompanied by significant changes in the social structure, the main change being a transition from farm work to factory-related activities. This gave rise to the concept of social class, i.e., hierarchical social status defined by an individual's economic power. It changed the family system, as most people moved into cities and living apart from one's extended family became more common. The movement into denser urban areas from less dense agricultural areas consequently increased the transmission of diseases. The place of women in society shifted from primary caregivers to breadwinners, reducing the number of children per household. Furthermore, industrialisation contributed to increased cases of child labour, and thereafter to the development of education systems.
=== Urbanisation ===
As the Industrial Revolution was a shift from the agrarian society, people migrated from villages in search of jobs to places where factories were established. This shifting of rural people led to urbanisation and an increase in the population of towns. The concentration of labour in factories has increased urbanisation and the size of settlements, to serve and house the factory workers.
=== Exploitation ===
=== Changes in family structure ===
Family structure changes with industrialisation. Sociologist Talcott Parsons noted that in pre-industrial societies there is an extended family structure spanning many generations, with families probably remaining in the same location for generations. In industrialised societies the nuclear family, consisting of only parents and their growing children, predominates. Families and children reaching adulthood are more mobile and tend to relocate to where jobs exist, and extended family bonds become more tenuous. One of the most important criticisms of industrialisation is that it kept children away from home for many hours and used them as cheap labour in factories.
== Industrialisation in East Asia ==
Between the early 1960s and 1990s, the Four Asian Tigers underwent rapid industrialisation and maintained exceptionally high growth rates.
== Current situation ==
As of 2018, the international development community (the World Bank, the Organisation for Economic Co-operation and Development (OECD), and many United Nations departments such as the FAO, WHO, ILO and UNESCO) endorses development policies like water purification or primary education, and co-operation amongst third world communities. Some members of the economic communities do not consider contemporary industrialisation policies to be adequate for the global south (Third World countries) or beneficial in the longer term, with the perception that they may only create inefficient local industries unable to compete in the free-trade-dominated political order which industrialisation has fostered. Environmentalism and green politics may represent more visceral reactions to industrial growth. Nevertheless, repeated examples in history of apparently successful industrialisation (Britain, the Soviet Union, South Korea, China, etc.) may make conventional industrialisation seem like an attractive or even natural path forward, especially as populations grow, consumerist expectations rise and agricultural opportunities diminish.
The relationships among economic growth, employment, and poverty reduction are complex, and higher productivity can sometimes lead to static or even lower employment (see jobless recovery).
There are differences across sectors, whereby manufacturing is less able than the tertiary sector to accommodate both increased productivity and employment opportunities; more than 40% of the world's employees are "working poor", whose incomes fail to keep themselves and their families above the $2-a-day poverty line. There is also a phenomenon of deindustrialisation, as in the former USSR countries' transition to market economies, and the agriculture sector is often the key sector in absorbing the resultant unemployment.
== See also ==
== References ==
== Further reading ==
Ahmady, Kameel (2021). Traces of Exploitation in the World of Childhood (A Comprehensive Research on Forms, Causes and Consequences of Child Labour in Iran). Denmark: Avaye Buf. ISBN 9788793926646.
Chandler Jr., Alfred D. (1993). The Visible Hand: The Management Revolution in American Business. Belknap Press of Harvard University Press. ISBN 978-0674940529.
Hewitt, T.; Johnson, H.; Wield, D., eds. (1992). Industrialisation and Development. Oxford: Oxford University Press.
Hobsbawm, Eric (1962): The Age of Revolution. Abacus.
Kemp, Tom (1993) Historical Patterns of Industrialisation, Longman: London. ISBN 0-582-09547-6
Kiely, R. (1998). Industrialisation and Development: A Comparative Analysis. London: UCL Press.
Landes, David. S. (1969). The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge, New York: Press Syndicate of the University of Cambridge. ISBN 0-521-09418-6.
Pomeranz, Ken (2001). The Great Divergence: China, Europe and the Making of the Modern World Economy (Princeton Economic History of the Western World). Princeton University Press, New Ed edition.
Tilly, Richard H.: Industrialization as an Historical Process, European History Online, Main: Institute of European History, 2010, retrieved: 29 February 2011.
== External links == | Wikipedia/Industrialization |
Architectural design values make up an important part of what influences architects and designers when they make their design decisions. However, architects and designers are not always influenced by the same values and intentions. Values and intentions differ between architectural movements. They also differ between schools of architecture and schools of design, as well as among individual architects and designers.
The differences in values and intentions are directly linked to the pluralism in design outcomes that exists within architecture and design. They are also a major contributing factor in how an architect or designer operates in relation to clients.
Different design values tend to have a considerable history and can be found in numerous design movements. The influence that each design value has had on design movements and individual designers has varied throughout history.
== Aesthetic design values ==
The expansion of architectural and industrial design ideas and vocabularies which took place during the last century has created a diverse aesthetic reality within these two domains. This pluralistic and diverse aesthetic reality has typically been created within different architectural and industrial design movements such as Modernism, Postmodernism, Deconstructivism, Post-structuralism, Neoclassicism, New Expressionism, Supermodernism, etc. All of these aesthetic realities represent a number of divergent aesthetic values, in addition to differences in general values and theories found within these movements. Some of the stylistic distinctions found in these diverse aesthetic realities reflect profound differences in design values and thinking, but not all of them do, as some distinctions build on similar thinking and values.
These aesthetic values and their diverse aesthetic expressions are to some degree a reflection of the development that has taken place in the art community. In addition, more general changes have taken place in Western societies, due to technological development, new economic realities, political changes etc. However, these diverse aesthetic expressions are also a reflection of individual architects and industrial designers’ personal expression, based on designers’ tendency to experiment with form, materials, and ornament to create new aesthetic styles and aesthetic vocabulary. Changes in aesthetic styles and expressions have been, and still are, both synchronic and diachronic, as different aesthetic styles are produced and promoted simultaneously.
A number of values which cannot be classified as aesthetic design values have influenced the development of the aesthetic reality, as well as contributed to the pluralistic aesthetic reality which characterises contemporary architecture and industrial design.
The Aesthetic Design Values category contains seven values.
=== Artistic aspects and self-expression ===
This design value is characterised by a belief that individual self-expression (one's inner spiritual self, creative imagination, inner resources and intuition) should be utilised and/or be the basis used when designing. These sentiments are closely linked to a number of artistic values found in movements like Expressionism and the Avant-garde. Thus, this design value is closely related to abstract forms and expression, personal creative liberty, elitism, and being ahead of the rest of society.
=== The spirit of the time design value ===
This design value is based on the conception that every age has a certain spirit, or set of shared attitudes, that should be utilised when designing. The Spirit of the Times denotes the intellectual and cultural climate of a particular era, which can be linked to an experience of a certain worldview, sense of taste, and collective consciousness and unconsciousness. Thus the "form expression" that can be found, to some extent, in the "air" of a given time and generation should generate an aesthetic style that expresses the uniqueness of that time.
=== The structural, functional and material honesty design value ===
Structural honesty is linked to the notion that a structure shall display its "true" purpose and not be merely decorative. Functional honesty is linked to the idea that a building or product form shall be shaped on the basis of its intended function, often known as "form follows function". Material honesty implies that materials should be used and selected on the basis of their properties, and that the characteristics of a material should influence the form it is used for. Thus, a material must not be used as a substitute for another material, as this subverts the material's "true" properties and "cheats" the spectator.
=== The simplicity and minimalism design value ===
This design value is based on the idea that simple forms, i.e. aesthetics without considerable ornament, simple geometry, smooth surfaces etc., represent forms which are both truer to "real" art and reflect "folk" wisdom. This design value implies that the more cultivated a person becomes, the more decoration disappears. In addition, it is linked to the notion that simple forms will free people from everyday clutter, thus contributing to tranquillity and restfulness.
=== Nature and organic design value ===
This design value is based on the idea that nature (i.e. all sorts of living organisms, numerical laws etc.) can provide inspiration, functional clues and aesthetic forms that architects and industrial designers should use as a basis for designs. Designs based on this value tend to be characterized by free-flowing curves, asymmetrical lines and expressive forms. This design value can be summed up in “form follows flow” or “of the hill” as opposed to “on the hill”.
=== The classic, traditional and vernacular aesthetics design value ===
This value is based on a belief that a building and product should be designed from timeless principles that transcend particular designers, cultures and climates. Implicit in this design value is the notion that if these forms are used, the public will appreciate a structure's timeless beauty and understand immediately how to use a given building or product. This design value is also linked to regional differences i.e. varying climate etc. and folklore cultures, which creates distinctive aesthetical expressions.
=== The regionalism design value ===
This design value is based on the belief that buildings, and to some degree products, should be designed in accordance with the particular characteristics of a specific place. In addition, it is linked to the aim of achieving visual harmony between a building and its surroundings, as well as achieving continuity in a given area. In other words, it strives to create a connection between past and present forms of building. Finally, this value is also often related to preserving and creating regional and national identity.
== Social design values ==
Many architects and industrial designers have a strong motivation to serve the public good and the needs of the user population. Moreover, social awareness and social values within architecture and design reflect, to some degree, the emphasis these values are given in society at large.
Social values can have an aesthetical impact, but these aspects will not be explored as the main aesthetical impact found in design has been covered in the previous sections. Social design values are at times in conflict with other design values. This type of conflict can manifest itself between different design movements, but it can also be the cause of conflicts within a given design movement. It can be argued that conflicts between social values and other design values often represent the continuing debate between Rationalism and Romanticism commonly found within architecture and industrial design.
The Social Design Values category consists of four design values.
=== The social change design value ===
This design value can be described as a commitment to change society for the better through architecture and industrial design. This design value is closely connected and associated with political movements and subsequent building programs. Architects and industrial designers that are committed to the design value of social change often see their work as a tool for transforming the built environment and those who live in it.
=== The consultation and participation design value ===
This design value is based on a belief that it is beneficial to involve stakeholders in the design process. This value is connected to a belief that user involvement leads to:
Meeting social needs and making effective use of resources.
Influence in the design process, as well as awareness of its consequences.
Providing relevant and up-to-date information for designers.
=== The crime prevention design value ===
This design value is based on the belief that the built environment can be manipulated to reduce crime levels, which is attempted through three main strategies:
Defensible space.
Crime prevention through environmental design.
Situational crime prevention.
=== The 'Third world' design value ===
This is based on an eagerness to help developing countries through architecture and design (i.e. a response to the needs of the poor and destitute within the Third World). This design value implies that social and economic circumstances found in the Third World necessitate the development of special solutions, which are distinct from what the same architects and industrial designers would recommend for the developed world.
== Environmental design values ==
The 20th century has been marked by the re-emergence of environmental values within Western societies. Concern for the environment is not new and can be found to a varying degree throughout history, and it is rooted in a number of perspectives including the aim of managing the ecosystems for sustained resource yields (sustainable development), and the idea that everything in nature has an intrinsic value (nature protection and preservation). Generally behind these types of thinking are the concepts of stewardship and that the present generation owes duties to generations not yet born.
Environmental problems and challenges found in the 19th and 20th centuries led to a development where environmental values became important in some sections of Western societies. It is therefore not surprising that these values can also be found among individual architects and industrial designers. The focus on environmental design has been marked by the rediscovery and further development of many "ancient" skills and techniques. In addition, new technology that addresses environmental concerns is an important characteristic of the environmental approach found among architects and industrial designers. These rather different approaches to environmental building and product technology can be illustrated by the development of environmental high-tech architecture on the one hand, and the more "traditional" environmental movement within ecologically based architecture on the other.
Environmental technology, along with new environmental values, have affected development in cities across the world. Many cities have started to formulate and introduce "eco-regulations concerning renewable resources, energy consumption, sick buildings, smart buildings, recycled materials, and sustainability". This may not be surprising, as about 50% of all energy consumption in Europe and 60% in the US is building-related. However, environmental concerns are not restricted to energy consumption; environmental concerns take on a number of perspectives generally, which are reflected in the focus found among architects and industrial designers.
The environmental design values category consists of three design values.
=== Green and sustainability ===
This value is based on a belief that a sustainable and/or environmentally friendly building design is beneficial to users, society and future generations. Key concepts within this design value are: energy conservation, resource management, recycling, cradle-to-cradle, toxic free materials etc.
=== Re-use and modification ===
This is based on a belief that existing buildings, and to some degree products, can be continuously used through updates. Within this value there are two separate schools of thought with regard to aesthetics: one camp favours new elements that are subordinated to an overall aesthetic, and the other advocates aesthetic contrast, dichotomy and even dissonance between the old and the new.
=== Health ===
This design value is based on the belief that the built environment can contribute to ensuring a healthy living environment. Built into this design value are principles such as: buildings should be freestanding, and sites should be laid out to maximize the amount of sunlight reaching individual structures. Similarly, there is an emphasis on health-based construction and the reduction of toxic emissions through selection of appropriate materials.
== Traditional design values ==
Within both architecture and industrial design there is a long tradition of both drawing inspiration from and re-using design elements of existing buildings and products. This is the case even if many architects and industrial designers argue that they primarily use their creativity to create new and novel design solutions. Some architects and industrial designers have openly let themselves be inspired by existing building and product traditions, and have even used this inspiration as the main basis for their design solutions.
This design tradition has a considerable history, as indicated by many of the labels associated with it, such as Classicism, Vernacular, Restoration and Preservation. In addition, as indicated in the previous section "Classic, Traditional and Vernacular aesthetics", an important element of this tradition is to re-use and be inspired by existing aesthetic elements and styles. However, the traditional approach also involves other considerations, such as function and the preservation of existing building traditions, as well as of individual buildings and products.
The Traditional Design Values category consists of three distinct values.
=== The tradition based design value ===
This relies on a belief that traditional “designs” are the preferred typology and template for buildings and products, because they “create” timeless and “functional” designs. Within this design value there are three main strategies:
Critical traditionalist/regionalist i.e. interpreting the traditional typologies and templates and applying them in an abstracted modern vocabulary.
Revivalists i.e. adhering to the most literal traditional form.
Contextualists, who use historical forms when the surroundings "demand" it.
=== The design value of restoration and preservation ===
This is based on a commitment to preserve the best of buildings and products for future generations. This design value tends to represent restoring a building or product to its initial design and is usually rooted in three perspectives. These are:
An archaeological perspective (i.e. preserving buildings and products of historical interest).
An artistic perspective (i.e. a desire to preserve something of beauty).
A social perspective (i.e. a desire to hold on to the familiar and reassuring).
=== The vernacular design value ===
This value is based on a belief that a simple life and its design, closely linked to nature, are superior to those of modernity. The vernacular design value includes key concepts such as:
Reinvigorating tradition (i.e. evoking the vernacular).
Reinventing tradition (i.e. the search for new paradigms).
Extending tradition (i.e. using the vernacular in a modified manner).
Reinterpreting tradition (i.e. the use of contemporary idioms).
== Gender-based design values ==
These design values are closely linked to the feminist movement and theories developed within the 19th and 20th centuries. Design values based on gender are related to three tenets found in architecture and industrial design, which are:
Gender differences related to critique and reconstruction of architectural practice and history.
The struggle for equal access to training, jobs and recognition in architecture and design.
The focus on gender based theories for the built environment, the architectural discourse, and cultural value systems.
Designers who adhere to gender-based design values typically focus on creating buildings free of the barriers that children, parents and the elderly experience in much of the built environment. This also implies a focus on aesthetics deemed more 'feminine' than the 'masculine' aesthetics often created by male designers.
== The economic design value ==
Many architects and industrial designers often dread the financial and business side of architecture and industrial design practice, as their focus is often geared towards achieving successful design quality rather than achieving successful economic expectations.
This is the basis for a design value that can be characterised as 'voluntarism' or a 'charrette ethos'. This value is commonly found among practising architects and designers. The 'volunteer' value is founded in the belief that good architecture and design require commitment beyond the prearranged time, the accountant's budget, and normal working hours. Implicit in the 'volunteer' value are elements of the following claims:
The best design work comes from offices or individual designers who are willing to put in overtime (sometimes unpaid) for the sake of the design outcome.
Good architecture and design are rarely possible within the fees offered by clients.
Architects and designers should care enough about buildings or products to uphold high design standards regardless of the payment offered.
The 'volunteer' design value can be seen as a reaction to and a rejection of the client's influence and control over the design project.
== The novel design value ==
It is common within contemporary architecture and industrial design to find an emphasis on creating novel design solutions. This emphasis is often accompanied by an equally common lack of emphasis on studying the appropriateness of already existing design solutions.
The novel design value has historical roots dating back to early design movements such as Modernism, with its emphasis on “starting from zero”. The celebration of original and novel design solutions is considered by many designers and design scholars to be one of the main aspects of architecture and design. This design value is often manifested through designers' working methods. Some architects and designers, with their emphasis on the “big idea”, tend to cling to major design ideas and themes even when these face insurmountable challenges. However, the emphasis on design novelty is also associated with progress and with new design solutions that, without this emphasis, would never see the light of day.
The design value of novelty is not universally accepted within either architecture or design. This is indicated by the debate in architecture over whether buildings should harmonize with the surroundings in which they are situated. A related debate concerns whether architecture should be based on traditional typologies and design styles, i.e. classically and vernacular-based architecture, or whether it should be an expression of its time. The same issues arise within the industrial design domain, where it has been debated whether retro design should be accepted as good design.
== Mathematical and scientific design values ==
A movement to base architectural design on scientific and mathematical understanding started with the early work of Christopher Alexander in the 1960s, notably Notes on the Synthesis of Form. Other contributors joined in, especially in investigations of form on the urban scale, which resulted in important developments such as Bill Hillier's space syntax and Michael Batty's work on spatial analysis. In architecture, the four-volume work The Nature of Order by Alexander summarizes his most recent results. An alternative architectural theory based on scientific laws, such as A Theory of Architecture, now competes with the purely aesthetic theories most common in architectural academia. This entire body of work can be seen as balancing and often questioning design movements that rely primarily upon aesthetics and novelty. At the same time, the scientific results underpinning this approach in fact validate traditional and vernacular traditions in a way that purely historical appreciation cannot.
Social and environmental issues are given a new explanation, drawing upon biological phenomena and the interactivity of groups and individuals with their built environment. The concept of biophilia developed by E. O. Wilson plays a major role in explaining the human need for intimate contact with natural forms and living beings. This insight into the connection between human beings and the biological environment provides a new understanding of the need for ecological design. An extension of the biophilic phenomenon into artificial environments suggests a corresponding need for built structures that embody the same precepts as biological structures. These mathematical qualities include fractal forms, scaling, multiple symmetries, etc. Applications and extensions of Wilson's original idea have been carried out by Stephen R. Kellert in The Biophilia Hypothesis, and by Nikos Salingaros and others in the book Biophilic Design.
== Further reading ==
Bartlett School of Planning, University College London. A bibliography of design value for the Commission for Architecture and the Built Environment.
Holm, Ivar (2006). Ideas and Beliefs in Architecture and Industrial Design: How Attitudes, Orientations, and Underlying Assumptions Shape the Built Environment. Oslo School of Architecture and Design. ISBN 82-547-0174-1.
Kellert, Stephen R.; Heerwagen, Judith; Mador, Martin, eds. (2008). Biophilic Design: The Theory, Science and Practice of Bringing Buildings to Life. New York: John Wiley. ISBN 978-0-470-16334-4.
Lera, S. G. (1980). Designers' Values and the Evaluation of Designs. PhD thesis, Department of Design Research. London: Royal College of Art.
Thompson, I. H. (2000). Ecology, Community and Delight: Sources of Values in Landscape Architecture. London: E & FN Spon. ISBN 0-419-25150-2.
== References == | Wikipedia/Architectural_design_values |
A Master of Design (MDes, M.Des. or M.Design) is a postgraduate academic master's degree in the field of design awarded by several academic institutions around the world. The degree level has different equivalencies: some MDes degrees are equivalent to a Master of Fine Arts, and others to a Master of Arts or Master of Science postgraduate degree in other disciplines. It often follows a Bachelor of Design degree and requires around two years of study and research in design.
== Awarding institutions ==
University of Alberta, Edmonton, Alberta, Canada awards two-year research-oriented Master of Design degrees in Industrial Design (ID) and Visual Communication Design (VCD).
Bezalel Academy of Arts and Design, Jerusalem, Israel, awards a M.Des. degree in Industrial Design in a two-year program.
University of California, Berkeley, United States, awards a 17-month studio based Master of Design degree in emerging technologies and design.
California College of the Arts (CCA), San Francisco, United States, awards a three-semester MDes degree in Interaction Design.
Carnegie Mellon University, Pittsburgh, United States, awards two-year MDes degrees in Communication Planning and Information Design (offered jointly with the School of English), as well as interaction design, both through the School of Design. Both are terminal degrees in a two-year program.
Central Institute of Technology, Kokrajhar, Assam, India, awards Masters of Design specializing in Multimedia and Communication Design (MCD) in a two-year programme.
Concordia University, Montreal, Quebec, Canada, awards a Master of Design focusing on multi-disciplinarity and sustainability over a two-year program.
University of Cincinnati, Cincinnati, United States, awards a M.Des. terminal degree in a two-year program.
Centre for Design Cranfield University, Cranfield, United Kingdom, awards an MDes in Innovation and Creativity in Industry and an MDes in Design and Innovation for Sustainability, both through the Centre for Competitive Creative Design (C4D).
Coventry University, West Midlands, United Kingdom, awards an MDes after a fourth year of study following the three-year Product Design BA course.
Columbus College of Art & Design awards a two-year Master of Design degree in Integrative Design. The program is positioned as applicable to many fields, including business, engineering, healthcare, and education.
Dhirubhai Ambani Institute of Information and Communication Technology (DAIICT), Gandhinagar, India, awards an M.Des. Communication design degree in a two-year program.
University of Dundee, Dundee, Scotland awards an MDes over one year of full-time study.
Edinburgh Napier University, Edinburgh, United Kingdom, awards an MDes Design with pathways in digital arts, graphic design, interaction design, interdisciplinary design, interior architecture, product design, sustainability, and urbanism. The course is taken full-time over 3 trimesters (12 months).
Emily Carr University of Art + Design, Vancouver, British Columbia, Canada, awards a Master of Design degree that is a full-time, research-oriented, two-year program in interdisciplinary design.
Glasgow School of Art, Glasgow, United Kingdom, awards Master of Design degrees in design innovation (with service design, environmental design or citizenship), communication design (graphics, illustration, photography), fashion and textiles, interior design, digital culture, and sound for moving image. Courses are taken full-time during 3 trimesters over 12 or 24 months.
University of Gloucestershire awards a one year (full time) or two year (part time) Master of Design degree.
Harvard University, Harvard Graduate School of Design, Cambridge, Massachusetts, United States, awards a Master in Design Studies in a two-year program.
Heriot-Watt University, Edinburgh, United Kingdom, awards an MDes Games Design and Development over one year of full-time study.
Holon Institute of Technology, Holon, Israel, awards a M.Design degree in "Integrated Design" in a two-year program.
The Hong Kong Polytechnic University, Hong Kong, awards Master of Design degree in Design Practices, Design Strategies, Interaction Design, International Design and Business Management and Urban Environments Design. These are 1-year programs taken full-time during 3 trimesters.
The Oslo School of Architecture and Design, Oslo, Norway, awards a Master of Design degree in Service Design, Interaction Design, or Industrial Design. These are 2-year programs taken full-time during 4 semesters.
Indian Institute of Science, Center for Product Design and Manufacturing, Bangalore, India, awards a Master of Design in Product Design and Engineering in a two-year program.
Indian Institute of Information Technology Design and Manufacturing Kancheepuram, offers a M.Des course in Electronics, Communication and Mechanical system design.
Indian Institute of Technology Guwahati, India, awards a M.Des degree in a two-year program.
Indian Institute of Technology Kanpur, India, awards a M.Des degree in a two-year program.
Industrial Design Centre of the Indian Institute of Technology Bombay, Mumbai, Maharashtra, India, awards a Master of Design degree in Industrial Design, Visual Communication, Animation, Interaction Design, and Mobility and Vehicle Design. All programs require two years of study.
Indian Institute of Technology Hyderabad, India, awards a M.Des degree in a two-year program.
Indian Institute of Information Technology, Design and Manufacturing, Jabalpur, India, awards an M.Des degree in Product Design, Interaction Design and Visual Communication, and also a PhD in Design.
Department of Design of the Indian Institute of Technology Delhi, Delhi, India, offers post-graduate M.Des programs; admission is through the Common Entrance Examination for Design (CEED).
Department of Design (DoD), Shiv Nadar University, Delhi (NCR), India, awards a M.Des. degree (Choice-based specialization available in Strategic Product Design, Visual Communication, User Experience Design and Information Design) in a two-year program.
IIT Institute of Design at the Illinois Institute of Technology, Chicago, United States, awards a Master of Design degree and a joint degree Master of Design/Master of Business Administration (with IIT Stuart School of Business).
Izmir University of Economics, Izmir, Turkey awards a Master of Design degree in Design Studies. The two-year program is offered in English.
Massachusetts College of Art and Design offers a two-year Master of Design degree in Innovation.
University of Nairobi, Kenya, awards a two-year research-based Master of Design degree through the School of the Arts and Design.
National Institute of Design (NID) offers a 2.5-year M.Des course in Product Design, Furniture Design, Graphic Design, Animation Film Design, Film & Video Communication Design, Exhibition Design, Textile Design, Toy & Game Design, Photography Design, Apparel Design, Transportation Design, Lifestyle Accessory Design, New Media Design, Information Design, Interaction Design, Retail Experience Design, Universal Design, and Digital Game Design from its three campuses at Ahmedabad (main campus), Bangalore (R&D campus) and Gandhinagar (PG campus), India.
National Institute of Fashion Technology, New Delhi, India, offers master's programmes in M.Des (Master of Design), M.F.M. (Master of Fashion Management) and M.FTech (Master of Fashion Technology).
NSCAD University, Halifax, Nova Scotia, Canada, awards a Master of Design following three semesters or one calendar year of study, requiring graduates to propose and complete a final research project.
Ontario College of Art & Design University, Toronto, Ontario, Canada, awards MDes degrees in Strategic Foresight and Innovation; Digital Futures; Inclusive Design; Interdisciplinary Master's in Art, Media and Design; and Design for Health, as two-year programs.
PUC-Rio, Rio de Janeiro, Brazil, awards a Master of Design degree in a two-year program.
Ravensbourne College of Design and Communication, London, United Kingdom, awards a Master of Design degree specializing in either Design management, Service Design, or Luxury Brand Management, in a one-year program (full-time) or two years (part-time).
Rhode Island School of Design, Providence, RI, offers a 2+ year program focused on Adaptive Reuse Architecture.
The Robert Gordon University, Aberdeen, Scotland, awards a Master of Design degree in Contextualised Practice in a one-year program (full-time) or two years (part-time).
Royal Academy of Art, The Hague, Netherlands, awards a Master of Design degree specializing in Type design and Interior Architecture, in a one-year program.
RMIT University, Melbourne, Australia, awards a Master of Design degree specializing in Communication Design, in a one and a half year program.
Sandberg Instituut, Amsterdam, The Netherlands, awards a Master of Design degree in a two-year program.
School of the Art Institute of Chicago, Chicago, United States, awards an MDes in Designed Objects and an MDes in Fashion, Body, and Garment, both in two-year programs.
Universidad Iberoamericana, Tijuana, Mexico awards a Master in Strategic Digital Design in a two-year program.
University of Illinois at Chicago, Chicago, United States, awards a Master of Design in Graphic Design and Industrial Design, both in two-year programs.
Universidad de Palermo, Buenos Aires, Argentina awards a Master in Design Management in a two-year program (part-time).
University of Michigan, Stamps School of Art & Design, Ann Arbor, United States, awards a MDes degree in Integrative Design in a two-year program.
Shenkar College, Israel, offers a master's degree program in design.
School of Design Studies – UPES Dehradun, Dehradun, India, offers M.Des degrees in Industrial Design, Product Design, Interior Design, and Transportation Design.
University of Washington, Seattle, United States, awards a Master of Design degree in a two-year program.
York University, Toronto, Ontario, Canada, awards an M.Des. terminal degree in a two-year program.
== Undergraduate studies ==
Some European institutions award an undergraduate MDes degree. Like all European master's degrees, this usually requires a four-year program with a research project or dissertation.
Coventry University, Coventry, UK, awards MDes in various Industrial Design courses including Transport and Automotive design, in four-year programs.
University of Leeds, Leeds, UK, awards a Master of Design degree specializing in Product Design, in a 4-year undergraduate program. Students are awarded a BDes after 3 years and can continue to a 4th year, after which they receive an MDes.
De Montfort University, Leicester, UK, awards a Master of Design degree specialising in Design Products, in a 4-year undergraduate program. It also offers a part-time study option that can be completed in 7 years. Students are awarded an MDes after successful completion of the programme.
== References == | Wikipedia/Master_of_Design |
In architecture, functionalism is the principle that buildings should be designed based solely on their purpose and function. An international functionalist architecture movement emerged in the wake of World War I, as part of the wave of Modernism. Its ideas were largely inspired by a desire to build a new and better world for the people, as broadly and strongly expressed by the social and political movements of Europe after the extremely devastating world war. In this respect, functionalist architecture is often linked with the ideas of socialism and modern humanism.
A novel addition in this new wave of architecture was the idea that buildings and houses should not only be designed around the purpose of functionality: architecture should also be used as a means to physically create a better world and a better life for people in the broadest sense. This new functionalist architecture had the strongest impact in Czechoslovakia, Germany, Poland, the USSR and the Netherlands, and from the 1930s also in Scandinavia and Finland.
This principle is a matter of confusion and controversy within the profession, particularly in regard to modern architecture, as it is less self-evident than it first appears.
== History of functionalism ==
The theoretical articulation of functionalism in buildings can be traced back to the Vitruvian triad, where utilitas (variously translated as 'commodity', 'convenience', 'utility') stands alongside firmitas (firmness) and venustas (beauty) as one of three classic goals of architecture. Functionalist views were typical of some Gothic Revival architects. In particular, Augustus Welby Pugin wrote that "there should be no features about a building which are not necessary for convenience, construction, or propriety" and "all ornament should consist of enrichment of the essential construction of the building".
In 1896, Chicago architect Louis Sullivan coined the phrase Form follows function. However, this aphorism does not relate to a contemporary understanding of the term 'function' as utility or the satisfaction of user needs; it was instead based in metaphysics, as the expression of organic essence and could be paraphrased as meaning 'destiny'.
In the mid-1930s, functionalism began to be discussed as an aesthetic approach rather than a matter of design integrity (use). The idea of functionalism was conflated with a lack of ornamentation, which is a different matter. It became a pejorative term associated with the baldest and most brutal ways to cover space, like cheap commercial buildings and sheds, then finally used, for example in academic criticism of Buckminster Fuller's geodesic domes, simply as a synonym for 'gauche'.
For 70 years the influential American architect Philip Johnson held that the profession has no functional responsibility whatsoever, and this remains one of many views today. The position of postmodern architect Peter Eisenman rests on a theoretical basis hostile to the user and is even more extreme: "I don't do function."
== Modernism ==
Popular notions of modern architecture are heavily influenced by the work of the Franco-Swiss architect Le Corbusier and the German architect Mies van der Rohe. Both were functionalists at least to the extent that their buildings were radical simplifications of previous styles. In 1923, Mies van der Rohe was working in Weimar Germany, and had begun his career of producing radically simplified, lovingly detailed structures that achieved Sullivan's goal of inherent architectural beauty. Le Corbusier famously said "a house is a machine for living in"; his 1923 book Vers une architecture was, and still is, very influential, and his early built work such as the Villa Savoye in Poissy, France, is thought of as prototypically functionalist.
== In Europe ==
=== Czechoslovakia ===
The former Czechoslovakia was an early adopter of the functionalist style, with notable examples such as Villa Tugendhat in Brno, designed by Mies van der Rohe in 1928, Villa Müller in Prague, designed by Adolf Loos in 1930, and the majority of the city of Zlín, developed by the Bata shoe company as a factory town in the 1920s and designed by Le Corbusier's student František Lydie Gahura.
Numerous villas, apartment buildings and interiors, factories, office blocks and department stores can be found in the functionalist style throughout the country, which industrialised rapidly in the early 20th century while embracing the Bauhaus-style architecture that was emerging concurrently in Germany. Large urban extensions to Brno in particular contain numerous apartment buildings in the functionalist style, while the domestic interiors of Adolf Loos in Plzeň are also notable for their application of functionalist principles.
=== Nordic "funkis" ===
In Scandinavia and Finland, the international movement and ideas of modernist architecture became widely known among architects at the 1930 Stockholm Exhibition, under the guidance of its director, the Swedish architect Gunnar Asplund. Enthusiastic architects collected their ideas and inspirations in the manifesto acceptera, and in the years thereafter a functionalist architecture emerged throughout Scandinavia. The genre involves some peculiar features unique to Scandinavia and is often referred to as "funkis", to distinguish it from functionalism in general. Some of the common features are flat roofing, stuccoed walls, architectural glazing and well-lit rooms, an industrial expression and nautical-inspired details, including round windows. The global stock market crash and economic meltdown of 1929 created the need to use affordable materials, such as brick and concrete, and to build quickly and efficiently. These needs became another signature of the Nordic version of functionalist architecture, particularly in buildings from the 1930s, and carried over into modernist architecture when industrial serial production became much more prevalent after World War II.
Like most architectural styles, Nordic funkis was international in its scope, and several architects designed Nordic funkis buildings throughout the region. Some of the most active architects working internationally in this style include Edvard Heiberg, Arne Jacobsen and Alvar Aalto. Nordic funkis features prominently in Scandinavian urban architecture, as the need for urban housing and new institutions for the growing welfare states exploded after World War II. Funkis had its heyday in the 1930s and 1940s, but functionalist architecture continued to be built long into the 1960s. These later structures, however, tend to be categorized as modernism in a Nordic context.
==== Denmark ====
Vilhelm Lauritzen, Arne Jacobsen and C.F. Møller were among the most active and influential Danish architects of the new functionalist ideas, and Arne Jacobsen, Poul Kjærholm, Kaare Klint, and others extended the new approach to design in general, most notably furniture, which evolved into Danish modern. Some Danish designers and artists who did not work as architects are sometimes also included in the Danish functionalist movement, such as Finn Juhl, Louis Poulsen and Poul Henningsen. In Denmark, bricks were largely preferred over reinforced concrete as a construction material, and this included funkis buildings. Apart from institutions and apartment blocks, more than 100,000 single-family funkis houses were built in the years 1925–1945. However, the truly dedicated funkis design was often approached with caution. Many residential buildings included only some signature funkis elements, such as round windows, corner windows or architectural glazing, to signal modernity while not provoking conservative traditionalists too much. This restrained approach to funkis design created the Danish version of the bungalow.
Fine examples of Danish functionalist architecture are the now listed Kastrup Airport 1939 terminal by Vilhelm Lauritzen, Aarhus University (by C. F. Møller et al.) and Aarhus City Hall (by Arne Jacobsen et al.), all including furniture and lamps specially designed for these buildings in the functionalist spirit. The largest functionalist complex in the Nordic countries is the 30,000-sq. m. residential compound of Hostrups Have in Copenhagen.
==== Finland ====
Some of the most prolific and notable architects in Finland working in the funkis style include Alvar Aalto and Erik Bryggman, who were both engaged from the very start in the 1930s. The Turku region pioneered this new style, and the journal Arkkitehti mediated and discussed functionalism in a Finnish context. Many of the first buildings in the funkis style were industrial structures, institutions and offices, but the style spread to other kinds of structures, such as residential buildings, individual housing and churches. The functionalist design also spread to interiors and furniture, as exemplified by the iconic Paimio Sanatorium, designed in 1929 and built in 1933.
Aalto introduced standardised, precast concrete elements as early as the late 1920s, when he designed residential buildings in Turku. This technique became a cornerstone of later developments in modernist architecture after World War II, especially in the 1950s and 1960s. He also introduced serially produced wooden housing.
=== Poland ===
Interbellum avant-garde Polish architects of the years 1918–1939 made a notable impact on the legacy of European modern architecture and functionalism. Many Polish architects were fascinated by Le Corbusier, such as his Polish students and coworkers Jerzy Sołtan and Aleksander Kujawski (both co-authors of the Unité d'habitation in Marseille) and his coworkers Helena Syrkus (Le Corbusier's companion on board the S.S. Patris, the ocean liner journeying from Marseille to Athens in 1933 during CIAM IV), Roman Piotrowski and Maciej Nowicki. Le Corbusier said about Poles (When the Cathedrals Were White, Paris 1937): "Academism has sent down roots everywhere. Nevertheless, the Dutch are relatively free of bias. The Czechs believe in 'modern' and the Polish also." Other Polish architects, such as Stanisław Brukalski, met Gerrit Rietveld and were inspired by him and his neoplasticism. Only a few years after the construction of the Rietveld Schröder House, Brukalski built his own house in Warsaw in 1929, supposedly inspired by the Schröder House he had visited. This Polish example of the modern house was awarded a bronze medal at the Paris world expo in 1937. Just before the Second World War, it was fashionable in Poland to build large districts of luxury houses in neighbourhoods full of greenery for wealthy Poles, for example the Saska Kępa district in Warsaw or the Kamienna Góra district in the seaport of Gdynia. The most characteristic features of Polish functionalist architecture of 1918–1939 were portholes, roof terraces and marble interiors.
Probably the most outstanding work of Polish functionalist architecture is the entire city of Gdynia, a modern Polish seaport established in 1926.
=== Russia ===
In Russia and the former Soviet Union, functionalism was known as Constructivist architecture, and was the dominant style for major building projects between 1918 and 1932. The 1932 competition for the Palace of the Soviets and the winning entry by Boris Iofan marked the start of the eclectic historicism of Stalinist architecture and the end of constructivist dominance in the Soviet Union.
== Examples ==
Notable representations of functionalist architecture include:
Aarhus University, Denmark
ADGB Trade Union School, Germany
Administratívna budova spojov, Bratislava, Slovakia
Obchodný a obytný dom Luxor, Bratislava, Slovakia
Villa Tugendhat, Brno, Czech Republic
Kavárna Era, Brno, Czech Republic
Kolonie Nový dům, Brno, Czech Republic
Veletržní palác, Prague, Czech Republic
Villa Müller, Prague, Czech Republic
Zlín city, Czech Republic
Tomas Bata Memorial, Zlín, Czech Republic
Booth House, Bridge Street, Sydney, Australia
Bullfighting Arena, Póvoa de Varzim, Portugal
Glass Palace, Helsinki, Finland
Hotel Hollywood, Sydney, Australia
Knarraros lighthouse, Stokkseyri, Iceland
Pärnu Rannahotell, Pärnu, Estonia
Pärnu Rannakohvik, Estonia
Södra Ängby, Stockholm, Sweden
Stanislas Brukalski's villa, Warsaw, Poland
Modernist Center of Gdynia, Poland
Villa Savoye, Poissy, France
=== Södra Ängby, Sweden ===
The residential area of Södra Ängby in western Stockholm, Sweden, blended a functionalist or international style with garden city ideals. Encompassing more than 500 buildings, it remains the largest coherent functionalist villa area in Sweden and possibly the world, still well preserved more than half a century after its construction in 1933–40 and protected as a national cultural heritage.
=== Zlín, Czech Republic ===
Zlín is a city in the Czech Republic which was completely reconstructed in the 1930s on the principles of functionalism. At that time the city was the headquarters of the Bata Shoes company, and Tomáš Baťa initiated a complex reconstruction of the city inspired by functionalism and the garden city movement.
Zlín's distinctive architecture was guided by principles that were strictly observed during the whole of its inter-war development. Its central theme was the derivation of all architectural elements from the factory buildings. The central position of industrial production in the life of all Zlín inhabitants was to be highlighted. Hence the same building materials (red bricks, glass, reinforced concrete) were used for the construction of all public (and most private) edifices. The common structural element of Zlín architecture is a square bay of 20x20 feet (6.15x6.15 m). Although modified by several variations, this high modernist style leads to a high degree of uniformity among the buildings, while highlighting the central and unique idea of an industrial garden city. Architectural and urban functionalism was to serve the demands of a modern city. The simplicity of the buildings, which also translated into their functional adaptability, was to prescribe (and also react to) the needs of everyday life.
The urban plan of Zlín was the creation of František Lydie Gahura, a student at Le Corbusier's atelier in Paris. Architectural highlights of the city are e.g. the Villa of Tomáš Baťa, Baťa's Hospital, Tomas Bata Memorial, The Grand Cinema or Baťa's Skyscraper.
=== Khrushchyovka ===
Khrushchyovka (Russian: хрущёвка, IPA: [xrʊˈɕːɵfkə]) is the unofficial name for a type of low-cost, concrete-paneled or brick three- to five-storied apartment building developed in the Soviet Union during the early 1960s, when its namesake Nikita Khrushchev directed the Soviet government. The apartment buildings also went by the name "Khrushchoba" (Хрущёв + трущоба, Khrushchev-slum).
== Functionalism in landscape architecture ==
The development of functionalism in landscape architecture paralleled its development in building architecture. At the residential scale, designers like Christopher Tunnard, James Rose, and Garrett Eckbo advocated a design philosophy based on the creation of spaces for outdoor living and the integration of house and garden. At a larger scale, the German landscape architect and planner Leberecht Migge advocated the use of edible gardens in social housing projects as a way to counteract hunger and increase self-sufficiency of families. At a still larger scale, the Congrès International d'Architecture Moderne advocated for urban design strategies based on human proportions and in support of four functions of human settlement: housing, work, play, and transport.
== See also ==
Modernist architecture; streamline moderne
Enrique Yáñez
== Literature ==
== References ==
== External links ==
Fostinum: Czech and Slovak Functionalist Architecture | Wikipedia/Functionalism_(architecture) |
An industrial design right is an intellectual property right that protects the visual design of objects that are not purely utilitarian. An industrial design consists of the creation of a shape, configuration or composition of pattern or color, or combination of pattern and color in three-dimensional form containing aesthetic value. An industrial design can be a two- or three-dimensional pattern used to produce a product, industrial commodity or handicraft.
Under the Hague Agreement Concerning the International Deposit of Industrial Designs, a WIPO-administered treaty, a procedure for an international registration exists. To qualify for registration, the national laws of most member states of WIPO require the design to be novel. An applicant can file for a single international deposit with WIPO or with the national office in a country party to the treaty. The design will then be protected in as many member countries of the treaty as desired. Design rights started in the United Kingdom in 1787 with the Designing and Printing of Linen Act and have expanded from there.
Registering an industrial design right is a process related to the granting of a patent.
== Law making ==
=== Kenya ===
According to the Industrial Property Act 2001, an industrial design is defined as "any composition of lines or colours or any three-dimensional form whether or not associated with lines or colours, provided that such composition or form gives a special appearance to a product of industry or handicraft and can serve as pattern for a product of industry or handicraft".
An industrial design is registrable if it is new. An industrial design is deemed to be new if it has not been disclosed to the public, anywhere in the world, by publication in tangible form or, in Kenya, by use or in any other way, prior to the filing date or, where applicable, the priority date of the application for registration. However, a disclosure of the industrial design is not taken into consideration if it occurred not earlier than twelve months before the filing date or, where applicable, the priority date of the application, and if it was by reason or in consequence of acts committed by the applicant or his predecessor in title, or of an evident abuse committed by a third party in relation to the applicant or his predecessor in title.
=== India ===
India's Design Act, 2000 was enacted to consolidate and amend the law relating to protection of design and to comply with the articles 25 and 26 of Trade-Related Aspects of Intellectual Property Rights TRIPS agreement. The new act, (earlier Patent and Design Act, 1911 was repealed by this act) now defines "design" to mean only the features of shape, configuration, pattern, ornament, or composition of lines or colours applied to any article, whether in two- or three-dimensional, or in both forms, by any industrial process or means, whether manual or mechanical or chemical, separate or combined, which in the finished article appeal to and are judged solely by the eye; but does not include any mode or principle of construction.
=== Indonesia ===
In Indonesia, protection of the right to industrial design is granted for 10 (ten) years commencing from the filing date, and there is no renewal or annuity after the given period.
Industrial designs that are granted protection:
1. The right to industrial design shall be granted for an industrial design that is novel/new.
2. An industrial design shall be deemed new if, on the filing date, it is not the same as any previous disclosure.
3. The previous disclosure referred to in point 2 is one which, before:
a. the filing date, or
b. the priority date, if the application is filed with a priority right,
has been announced or used in Indonesia or outside Indonesia.
An industrial design shall not be deemed to have been announced if, within a period of at most 6 (six) months before the filing date, such industrial design:
a. has been displayed in a national or international exhibition in Indonesia or overseas that is official or deemed to be official; or
b. has been used in Indonesia by the designer in an experiment for the purposes of education, research or development.
=== Canada ===
Canadian law affords ten years of protection to industrial designs that are registered; there is no protection for unregistered designs. The Industrial Design Act defines "design" or "industrial design" to mean "features of shape, configuration, pattern or ornament and any combination of those features that, in a finished article, appeal to and are judged solely by the eye." The design must also be original: in 2012, the Patent Appeal Board rejected a design for a trash can, and gave guidance as to what the Act requires:
The degree of originality required to register an original design is greater than that laid down by Canadian copyright legislation, but less than that required to register a patent.
The articles being compared should not be examined side by side, but separately, so that imperfect recollection comes into play.
One is to look at the design as a whole.
Any change must be substantial. It must not be trivial or infinitesimal.
During the existence of an exclusive right, no person can "make, import for the purpose of trade or business, or sell, rent, or offer or expose for sale or rent, any article in respect of which the design is registered." The rule also applies to kits, and substantial differences are assessed with reference to previously published designs.
Registering an industrial design in Canada may be appropriate for a variety of articles such as consumer products, vehicles, sports equipment, packaging, etc., having an original aesthetic appearance, and may even be used to protect new technologies such as electronic icons. Industrial designs can also serve to complement other forms of intellectual property rights such as patents and trade-marks.
The Canadian courts see infrequent litigation concerning industrial designs — the first case in almost two decades took place in 2012 between Bodum and Trudeau Corporation concerning visual features of double wall drinking glasses.
It is possible for a registered design to also receive protection under Canadian copyright or trademark law:
a "useful article" (ie, one with a utilitarian function) will receive copyright protection where it is reproduced in a quantity of fifty or less, but that limitation does not apply with respect to:
a graphic or photographic representation that is applied to the face of an article
a trade-mark or a representation thereof or a label
material that has a woven or knitted pattern or that is suitable for piece goods or surface coverings or for making wearing apparel
a representation of a real or fictitious being, event or place that is applied to an article as a feature of shape, configuration, pattern or ornament
where a registered design has become publicly identifiable with the product, it may be eligible for registration as a "distinguishing guise" under trademark law, but such registration cannot be used to limit the development of any art or industry
=== European Union ===
Registered and unregistered European Union designs are available which provide a unitary right covering the European Union. Protection for a registered EU design is for up to 25 years, subject to the payment of renewal fees every five years. The unregistered EU design lasts for three years after a design is made available to the public and infringement only occurs if the protected design has been copied.
=== United Kingdom ===
Legislation passed in Britain between 1787 and 1839 protected designs for textiles. The Copyright of Design Act passed in 1842 allowed other material designs, such as those for metal and earthenware objects, to be registered with a diamond mark to indicate the date of registration.
In addition to the design protection available under community designs, UK law provides its own national registered design right (under the Registered Designs Act 1949, later amended by the Copyright, Designs and Patents Act 1988) and an unregistered design right. The unregistered right, which exists automatically if the requirements are met, can last for up to 15 years. The registered design right can last up to 25 years, subject to the payment of maintenance fees. The topography of semiconductor circuits is also covered by integrated circuit layout design protection, a form of protection which lasts 10 years.
=== Japan ===
Article 1 of the Japanese Design Law states: "This law was designed to protect and utilize designs and to encourage creation of designs in order to contribute to industrial development". The protection period in Japan is 20 years from the day of registration.
=== United States ===
U.S. design patents last fifteen years from the date of grant if filed on or after May 13, 2015 (fourteen years if filed before May 13, 2015) and cover the ornamental aspects of utilitarian objects. Objects that lack a use beyond that conferred by their appearance or the information they convey may be covered by copyright, a form of intellectual property of much longer duration that exists as soon as a qualifying work is created. In some circumstances, rights may also be acquired in trade dress, but trade dress protection is akin to trademark rights and requires that the design have source significance or "secondary meaning"; it is useful only to prevent source misrepresentations.
=== Australia ===
In Australia, design registration lasts for 5 years, with an option to extend once for an additional 5 years. For the design to be registered, a formalities examination is needed. If infringement action is to be taken, the design needs to be certified, which involves a substantive examination. This process ensures that the design is new and distinctive and eligible for protection under Australian designs law.
== Duration of design rights ==
Depending on the jurisdiction, registered design rights have a duration of between 15 and 50 years. Members of the WIPO Hague system have to publish their maximum term of protection for design rights. These terms are presented in the table below. Some of the jurisdictions below are unions or collaborative offices for design registration, such as the African Intellectual Property Organization, the European Union and the Benelux.
== Industrial design applications ==
Between 1883 and the early 1950s, the offices of Japan and the United States of America averaged a similar number of industrial design applications, rarely exceeding 10,000. The office of Japan received the highest number of applications per year from the 1950s through to the late 1990s, reaching approximately 50,000 annual filings at its peak. The office of China, which received 640 applications when it first began receiving applications in 1985, has seen an unprecedented rate of growth, peaking at 805,710 applications filed in 2021. The office of the Republic of Korea surpassed the office of Japan in 2004 and has remained in second position ever since. In 2012, the office of the US moved ahead of Japan to become the third largest globally. The EUIPO began receiving applications in 2003 and moved up to fourth position in 2019. Among these top five offices, the EUIPO is the only one to have a multiple design system. Applications filed at the European Union IP Office contained 109,132 designs in 2022.
In 2022, about 1.1 million industrial design applications were filed worldwide. Asia accounted for 70.3% of all designs in applications filed worldwide in 2022. Asia was followed by Europe (22.4%) and North America (4.4%).
== Bibliography ==
Brian W. Gray & Effie Bouzalas, editors, Industrial Design Rights: An International Perspective (Kluwer Law International: The Hague, 2001) ISBN 90-411-9684-6
== See also ==
Design patent (US patent law)
Geschmacksmuster (German design law)
Industrial design rights in the European Union
Open-design movement
Utility model
Design Law Treaty
Hague Agreement Concerning the International Deposit of Industrial Designs
== References ==
== External links ==
Information about industrial design rights on the UK Patent Office web site
International Designs on the WIPO web site
Hague System for the International Registration of Industrial Designs on the WIPO web site | Wikipedia/Industrial_design_rights |
Form follows function is a principle of design associated with late 19th- and early 20th-century architecture and industrial design in general, which states that the appearance and structure of a building or object (architectural form) should primarily relate to its intended function or purpose.
== Origins of the phrase ==
The architect Louis Sullivan coined the maxim, which encapsulates Viollet-le-Duc's theory that "a rationally designed structure may not necessarily be beautiful but no building can be beautiful that does not have a rationally designed structure". Sullivan also credited his friend and mentor John H. Edelmann, who theorized the concept of "suppressed function", as an inspiration for this maxim.
The maxim is often incorrectly attributed to the sculptor Horatio Greenough (1805–1852), whose thinking mostly predates the later functionalist approach to architecture. Greenough's writings were for a long time largely forgotten, and were rediscovered only in the 1930s. In 1947, a selection of his essays was published as Form and Function: Remarks on Art by Horatio Greenough. The earliest formulation of the idea, as "in architecture only that shall show that has a definite function", belongs not to an architect but to the monk Carlo Lodoli (1690–1761), who uttered the phrase inspired by positivist thinking (Lodoli's words were published by his student, Francesco Algarotti, in 1757).
Sullivan was Greenough's much younger compatriot and admired rationalist thinkers such as Thoreau, Emerson, Whitman, and Melville, as well as Greenough himself. In 1896, Sullivan coined the phrase in an article titled The Tall Office Building Artistically Considered, though he later attributed the core idea to the ancient Roman architect, engineer, and author Marcus Vitruvius Pollio, who first asserted in his book De architectura that a structure must exhibit the three qualities of firmness, commodity, and delight — that is, it must be solid, useful, and beautiful. Sullivan actually wrote that "form ever follows function", but the simpler and less emphatic phrase is more widely remembered. For Sullivan, this was distilled wisdom, an aesthetic credo, the single "rule that shall permit of no exception". The full quote is:
Whether it be the sweeping eagle in his flight, or the open apple-blossom, the toiling work-horse, the blithe swan, the branching oak, the winding stream at its base, the drifting clouds, over all the coursing sun, form ever follows function, and this is the law. Where function does not change, form does not change. The granite rocks, the ever-brooding hills, remain for ages; the lightning lives, comes into shape, and dies, in a twinkling.
It is the pervading law of all things organic and inorganic, of all things physical and metaphysical, of all things human and all things superhuman, of all true manifestations of the head, of the heart, of the soul, that the life is recognizable in its expression, that form ever follows function. This is the law.
Sullivan developed the shape of the tall steel skyscraper in late 19th-century Chicago at a moment in which technology, taste and economic forces converged and made it necessary to break with established styles. If the shape of the building was not going to be chosen out of the old pattern book, something had to determine form, and according to Sullivan it was going to be the purpose of the building. Thus, "form follows function", as opposed to "form follows precedent". Sullivan's assistant, Frank Lloyd Wright, adopted and professed the same principle in a slightly different form.
== Debate on the functionality of ornamentation ==
In 1910, the Austrian architect Adolf Loos gave a lecture titled "Ornament and Crime" in reaction to the elaborate ornament used by the Vienna Secession architects. Modernists adopted Loos's moralistic argument as well as Sullivan's maxim. Loos had worked as a carpenter in the USA. He celebrated efficient plumbing and industrial artifacts like corn silos and steel water towers as examples of functional design.
== Application in different fields ==
"Form follows function" is closely associated with utilitarian design, a concept of products designed exclusively for utility ("function") instead of "contemplating pleasure".
=== Architecture ===
The phrase "form (ever) follows function" became a battle cry of Modernist architects after the 1930s. The credo was taken to imply that decorative elements, which architects call "ornament", were superfluous in modern buildings. The phrase can best be implemented in design by asking the question, "Does it work?" Design in architecture utilizing this mantra follows the functionality and purpose of the building. For example, a family home would be designed around familial and social interactions and life. It would be purposeful, without functionless flare. A building's beauty comes from the function it serves rather than from its visual design. One aim of the Modernists after World War II was to elevate the living conditions of the masses. Many people around the world were living in less than ideal conditions, worsened by war. The Modernists sought to bring these people into more livable, humane spaces that, while not conventionally beautiful, were extremely functional. As a result, architecture utilizing "form follows function" became a sign of hope and progress.
Despite coining the term, Louis Sullivan himself neither thought nor designed along such lines at the peak of his career. Indeed, while his buildings could be spare and crisp in their principal masses, he often punctuated their plain surfaces with eruptions of lush Art Nouveau and Celtic Revival decorations, usually cast in iron or terracotta, and ranging from organic forms like vines and ivy to more geometric designs and interlace inspired by his Irish design heritage. Probably the most famous example is the writhing green ironwork that covers the entrance canopies of the Carson, Pirie, Scott and Company Building on South State Street in Chicago. These ornaments, often executed by talented young draftsmen in Sullivan's employ, would eventually become Sullivan's trademark; to students of architecture, they are his instantly recognizable signature.
=== Automobile designing ===
If the design of an automobile conforms to its function—for instance, the Fiat Multipla's shape, which is partly due to the desire to sit six people in two rows—then its form is said to follow its function.
=== Product design ===
One episode in the history of the inherent conflict between functional design and the demands of the marketplace took place in 1935, after the introduction of the streamlined Chrysler Airflow, when the American auto industry temporarily halted attempts to introduce optimal aerodynamic forms into mass manufacture. Some car-makers thought aerodynamic efficiency would result in a single optimal auto-body shape, a "teardrop" shape, which would not be good for unit sales. General Motors adopted two different positions on streamlining, one meant for its internal engineering community, the other meant for its customers. Like the annual model year change, so-called aerodynamic styling is often meaningless in terms of technical performance. Subsequently, the drag coefficient has become both a marketing tool and a means of improving the saleability of a car by reducing its fuel consumption slightly and increasing its top speed markedly.
The American industrial designers of the 1930s and 1940s like Raymond Loewy, Norman Bel Geddes and Henry Dreyfuss grappled with the inherent contradictions of "form follows function" as they redesigned blenders and locomotives and duplicating machines for mass-market consumption. Loewy formulated his "MAYA" (Most Advanced Yet Acceptable) principle to express that product designs are bound by functional constraints of math and materials and logic, but their acceptance is constrained by social expectations. His advice was that for very new technologies, they should be made as familiar as possible, but for familiar technologies, they should be made surprising.
Victor Papanek (1923–1998) was one influential twentieth-century designer and design philosopher who taught and wrote as a proponent of "form follows function".
By honestly applying "form follows function", industrial designers had the potential to put their clients out of business. Some simple single-purpose objects like screwdrivers and pencils and teapots might be reducible to a single optimal form, precluding product differentiation. Some objects made too durable would prevent sales of replacements (see Planned obsolescence). From the standpoint of functionality, some products are simply unnecessary.
An alternative approach referred to as "form leads function", or "function follows form", starts with vague, abstract, or underspecified designs. These designs, sometimes generated using tools like text-to-image models, can serve as triggers for generating novel ideas for product design.
=== Software engineering ===
It has been argued that the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any. This does not mean that process is irrelevant, but that processes compatible with an artifact's requirements lead to roughly similar results.
The principle can also be applied to enterprise application architectures of modern business, where "function" encompasses the business processes which should be assisted by the enterprise architecture, or "form". If the architecture were to dictate how the business operates, then the business is likely to suffer from inflexibility and the inability to adapt to change. Service-oriented architecture enables an enterprise architect to rearrange the "form" of the architecture to meet the functional requirements of a business by adopting standards-based communication protocols which enable interoperability. This stands in conflict with Conway's law, which states from a social point of view that "form follows organization".
Furthermore, domain-driven design postulates that structure (software architecture, design pattern, implementation) should emerge from constraints of the modeled domain (functional requirement).
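As a concrete sketch of structure emerging from domain constraints, the Python fragment below lets one domain rule (an account may never be overdrawn) dictate the shape of the code. It is a minimal illustration, not drawn from any particular domain-driven design text; the BankAccount type, its methods and the no-overdraft rule are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class BankAccount:
    balance: int = 0  # balance in the smallest currency unit, e.g. cents

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

    def withdraw(self, amount: int) -> None:
        # The domain constraint (no overdrafts) shapes the structure:
        # withdrawal is a guarded operation, not a bare field update.
        if amount <= 0:
            raise ValueError("withdrawal must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = BankAccount()
account.deposit(100)
account.withdraw(40)
assert account.balance == 60
```

The guard clauses exist because the modeled domain demands them; in this sense the implementation's form follows the functional requirement.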
While "form" and "function" may be more or less explicit and invariant concepts to the many engineering doctrines, metaprogramming and the functional programming paradigm lend themselves very well to explore, blur and invert the essence of those two concepts.
The agile software development movement espouses techniques such as "test-driven development", in which the engineer begins with a minimum unit of user-oriented functionality, creates an automated test for it, implements the functionality, and then iterates, repeating the process. The result of, and argument for, this discipline is that the structure or "form" emerges from actual function and, because it does so organically, makes the project more adaptable in the long term, as well as of higher quality because of the functional base of automated tests.
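The cycle just described can be sketched in a few lines of Python using the standard library's unittest module. This is a minimal illustration only: the slugify function, its behaviour and the test names are invented for the example, and in actual test-driven development the tests would be written first and seen to fail before the function body exists.

```python
import unittest

def slugify(title: str) -> str:
    # Step 2 of the cycle: the simplest implementation that makes
    # the tests below pass.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Step 1 of the cycle: tests written first, each covering one
    # minimal unit of user-oriented functionality.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

Here the "form" of the code, a single pure function with a small test-defined contract, emerges from the functionality the tests demand rather than from an up-front structural plan.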
== See also ==
Truth to materials
Aesthetics
Design science (methodology)
Separation of content and presentation
User-centered design
== References ==
Notes
Bibliography
Gelernter, Mark (1995). Sources of Architectural Form: A Critical History of Western Design Theory. Manchester University Press. ISBN 978-0-7190-4129-7. Retrieved 2024-02-12.
Heskett, J. (2005). Design: A Very Short Introduction. Very Short Introductions. OUP Oxford. ISBN 978-0-19-160661-8. Retrieved 2025-02-01.
Strauss, Inbal (2021). Form Unfollows Function: Subversions of Functionality (PhD Fine Art thesis). University of Oxford.
== External links ==
"E. H. Gombrich’s adoption of the formula form follows function: A case of mistaken identity?" by Jan Michl
"How form functions: On esthetics and Gestalt theory" by Roy Behrens
"The Tall Office Building Artistically Considered" by Louis H. Sullivan in 1896. | Wikipedia/Form_follows_function |
Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems that use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use.
Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science.
Electrical engineers typically hold a degree in electrical engineering, electronic engineering, or electrical and electronic engineering. Practicing engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the National Society of Professional Engineers (NSPE), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE).
Electrical engineers work in a very wide range of industries and the skills required are likewise variable. These range from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software.
== History ==
Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist, and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term "electricity". He also designed the versorium: a device that detects the presence of statically charged objects. In 1762 Swedish professor Johan Wilcke invented a device later named electrophorus that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery.
=== 19th century ===
In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in A Treatise on Electricity and Magnetism.
In 1782, Georges-Louis Le Sage developed and presented in Berlin probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. It was an electrostatic telegraph that moved gold leaf through electrical conduction.
In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804, he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative, though it was greatly influenced by and based upon two discoveries made in Europe in 1800: Alessandro Volta's electric battery for generating an electric current and William Nicholson and Anthony Carlisle's electrolysis of water.
Electrical telegraphy may be considered the first example of electrical engineering, which became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers), where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land lines, submarine cables, and, from about 1890, wireless telegraphy.
Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries, the definitions were immediately recognized in relevant legislation.
During these years, the study of electricity was largely considered to be a subfield of physics, since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at the Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts.
In about 1885, Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at the University of Missouri established its electrical engineering department in 1886. Afterwards, universities and institutes of technology around the world gradually started to offer electrical engineering programs to their students.
During these decades the use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network that provided 110 volts—direct current (DC)—to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley Jr. Practical AC motor designs including induction motors were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents between a George Westinghouse backed AC system and a Thomas Edison backed DC power system, with AC being adopted as the overall standard.
=== Early 20th century ===
During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship of different forms of electromagnetic radiation including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter, and detected them by using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted the wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of 2,100 miles (3,400 km).
Millimetre wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.
In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode.
In 1920, Albert Hull developed the magnetron which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.
In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives.
In 1948, Claude Shannon published "A Mathematical Theory of Communication" which mathematically describes the passage of information with uncertainty (electrical noise).
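Two results usually associated with that work can be stated compactly; these are standard textbook formulations rather than quotations from Shannon's text. For a source emitting symbols with probabilities $p_i$, the entropy is

$$H = -\sum_i p_i \log_2 p_i \quad \text{(bits per symbol)},$$

and for a channel of bandwidth $B$ with signal-to-noise ratio $S/N$, the capacity is

$$C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{(bits per second)}.$$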
=== Solid-state electronics ===
The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. They then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices.
The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959.
The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. It revolutionized the electronics industry, becoming the most widely used electronic device in the world.
The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass-production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture and thinking.
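Stated loosely as a formula, and taken as an empirical observation rather than a physical law, Moore's law with the two-year doubling period quoted above gives the transistor count $N$ on a chip $t$ years after a baseline count $N_0$ as approximately

$$N(t) \approx N_0 \cdot 2^{\,t/2}.$$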
The Apollo program which culminated in landing astronauts on the Moon with Apollo 11 in 1969 was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).
The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution.
== Subfields ==
One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today, electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes, certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right.
=== Power and energy ===
Power & Energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors, high voltage engineering, and power electronics. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators together with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems.
=== Telecommunications ===
Telecommunications engineering focuses on the transmission of information across a communication channel such as a coax cable, optical fiber or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system and these two factors must be balanced carefully by the engineer.
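A minimal sketch of amplitude modulation in Python with NumPy is shown below; the sample rate, frequencies, and modulation index are arbitrary illustrative values rather than parameters of any real system.

```python
import numpy as np

fs = 48_000                       # sample rate, Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of samples

f_message = 500                   # baseband (information) frequency, Hz
f_carrier = 10_000                # carrier frequency, Hz
m = 0.5                           # modulation index

message = np.cos(2 * np.pi * f_message * t)
carrier = np.cos(2 * np.pi * f_carrier * t)

# Amplitude modulation: the message varies the envelope of the carrier,
# shifting the information up to the carrier frequency for transmission.
am_signal = (1 + m * message) * carrier
```

Frequency modulation would instead vary the instantaneous frequency of the carrier in proportion to the message amplitude.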
Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static.
=== Control engineering ===
Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, electronics control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation.
Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback.
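A minimal sketch of such a loop as a proportional controller is given below; the plant model, gain, and time step are invented for illustration and do not describe a real vehicle.

```python
# Proportional feedback: the control action is proportional to the error
# between the desired speed (setpoint) and the measured speed.
setpoint = 100.0      # desired speed, km/h
speed = 80.0          # measured speed, km/h
kp = 0.5              # proportional gain (illustrative)
dt = 0.1              # time step, s

for _ in range(300):
    error = setpoint - speed
    throttle = kp * error
    # Toy plant: acceleration from throttle minus drag proportional to speed.
    acceleration = throttle - 0.05 * speed
    speed += acceleration * dt

print(round(speed, 1))  # settles where throttle balances drag, below the setpoint
```

A purely proportional controller leaves a steady-state error; adding integral action, as in a PID controller, removes it.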
Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones and others used in a variety of industries.
=== Electronics ===
Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. Another example is the pneumatic signal conditioner.
Prior to the Second World War, the subject was commonly known as radio engineering and basically was restricted to aspects of communications and radar, commercial radio, and early television. Later, in post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering.
Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number—often millions—of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today.
=== Microelectronics and nanoelectronics ===
Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or sometimes for use on their own as a general electronic component. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors etc.) can be created at a microscopic level.
Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below 100 nm processing having been standard since around 2002.
Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide) to obtain the desired transport of electronic charge and control of current. The field of microelectronics involves a significant amount of chemistry and material science and requires the electronic engineer working in the field to have a very good working knowledge of the effects of quantum mechanics.
=== Signal processing ===
Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digitally sampled signals.
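A minimal digital-filtering sketch in Python with NumPy appears below: a moving-average low-pass filter smoothing a noisy tone. The signal, noise level, and window length are illustrative choices.

```python
import numpy as np

fs = 1_000                                     # sample rate, Hz (illustrative)
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)              # 5 Hz tone
noisy = clean + 0.3 * np.random.randn(t.size)  # additive noise

# Moving-average FIR filter: each output sample is the mean of the
# surrounding N input samples, attenuating high-frequency noise.
N = 25
kernel = np.ones(N) / N
smoothed = np.convolve(noisy, kernel, mode="same")
```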
Signal processing is a mathematically intensive area that forms the core of digital signal processing. It is rapidly expanding, with new applications in every field of electrical engineering, such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering, as many existing analog systems are replaced with their digital counterparts. Analog signal processing remains important in the design of many control systems.
DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image processing, video processing, audio processing, and speech processing.
=== Instrumentation ===
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier-Seebeck effect to measure the temperature difference between two points.
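To first order, and as a simplification (practical thermocouples are read against calibration tables with reference-junction compensation), the open-circuit voltage of a thermocouple is proportional to the temperature difference between its junctions:

$$V \approx S\,(T_{\mathrm{hot}} - T_{\mathrm{cold}}),$$

where $S$ is the Seebeck coefficient of the wire pair.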
Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control.
=== Computers ===
Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering.
=== Photonics and optics ===
Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers and novel materials (e.g. metamaterials).
== Related disciplines ==
Mechatronics is an engineering discipline that deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various subsystems of aircraft and automobiles.
Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems.
The term mechatronics is typically used to refer to macroscopic systems but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high definition printing. In the future it is hoped the devices will help build tiny implantable medical devices and improve optical communication.
In aerospace engineering and robotics, recent examples include electric propulsion and ion propulsion.
== Education ==
Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering.
At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered.
Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia. In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree.
== Professional practice ==
In most countries, a bachelor's degree in engineering represents the first step towards professional certification and the degree program itself is certified by a professional body. After completing a certified degree program the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified the engineer is designated the title of Professional Engineer (in the United States, Canada and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
The advantages of licensure vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law.
Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field and a habit of continued learning are therefore essential to maintaining proficiency. An MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer.
In Australia, Canada, and the United States, electrical engineers make up around 0.25% of the labor force.
== Tools and work ==
From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunications systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery.
Fundamental to the discipline are the sciences of physics and mathematics as these help to obtain both a qualitative and quantitative description of how such systems will work. Today most engineering work involves the use of computers and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others.
Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunications systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering.
A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high-frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, the information technology field has its own test sets, often specific to a particular data format, and the same is true of television broadcasting.
For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets and determining project schedules. Many senior engineers manage a team of technicians or other engineers and for this reason project management skills are important. Most engineering projects involve some form of documentation and strong written communication skills are therefore very important.
The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine lab environment of a fabrication plant, on board a Naval ship, the offices of a consulting firm or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals including scientists, electricians, computer programmers, and other engineers.
Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. For instance, large particle accelerators such as CERN need electrical engineers to deal with many aspects of the project including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets.
== See also ==
== Notes ==
== References ==
Bibliography
Abramson, Albert (1955). Electronic Motion Pictures: A History of the Television Camera. University of California Press.
Åström, K.J.; Murray, R.M. (2021). Feedback Systems: An Introduction for Scientists and Engineers, Second Edition. Princeton University Press. p. 108. ISBN 978-0-691-21347-7.
Bayoumi, Magdy A.; Swartzlander, Earl E. Jr. (31 October 1994). VLSI Signal Processing Technology. Springer. ISBN 978-0-7923-9490-7.
Bhushan, Bharat (1997). Micro/Nanotribology and Its Applications. Springer. ISBN 978-0-7923-4386-8.
Bissell, Chris (25 July 1996). Control Engineering, 2nd Edition. CRC Press. ISBN 978-0-412-57710-9.
Chandrasekhar, Thomas (1 December 2006). Analog Communication (Jntu). Tata McGraw-Hill Education. ISBN 978-0-07-064770-1.
Chaturvedi, Pradeep (1997). Sustainable Energy Supply in Asia: Proceedings of the International Conference, Asia Energy Vision 2020, Organised by the Indian Member Committee, World Energy Council Under the Institution of Engineers (India), During November 15–17, 1996 at New Delhi. Concept Publishing Company. ISBN 978-81-7022-631-4.
Dodds, Christopher; Kumar, Chandra; Veering, Bernadette (March 2014). Oxford Textbook of Anaesthesia for the Elderly Patient. Oxford University Press. ISBN 978-0-19-960499-9.
Fairman, Frederick Walker (11 June 1998). Linear Control Theory: The State Space Approach. John Wiley & Sons. ISBN 978-0-471-97489-5.
Fredlund, D. G.; Rahardjo, H.; Fredlund, M. D. (30 July 2012). Unsaturated Soil Mechanics in Engineering Practice. Wiley. ISBN 978-1-118-28050-8.
Grant, Malcolm Alister; Bixley, Paul F (1 April 2011). Geothermal Reservoir Engineering. Academic Press. ISBN 978-0-12-383881-0.
Grigsby, Leonard L. (16 May 2012). Electric Power Generation, Transmission, and Distribution, Third Edition. CRC Press. ISBN 978-1-4398-5628-4.
Heertje, Arnold; Perlman, Mark (1990). Evolving technology and market structure: studies in Schumpeterian economics. University of Michigan Press. ISBN 978-0-472-10192-4.
Huurdeman, Anton A. (31 July 2003). The Worldwide History of Telecommunications. John Wiley & Sons. ISBN 978-0-471-20505-0.
Iga, Kenichi; Kokubun, Yasuo (12 December 2010). Encyclopedic Handbook of Integrated Optics. CRC Press. ISBN 978-1-4200-2781-5.
Jalote, Pankaj (31 January 2006). An Integrated Approach to Software Engineering. Springer. ISBN 978-0-387-28132-2.
Khanna, Vinod Kumar (1 January 2009). Digital Signal Processing. S. Chand. ISBN 978-81-219-3095-6.
Lambourne, Robert J. A. (1 June 2010). Relativity, Gravitation and Cosmology. Cambridge University Press. ISBN 978-0-521-13138-4.
Leitgeb, Norbert (6 May 2010). Safety of Electromedical Devices: Law – Risks – Opportunities. Springer. ISBN 978-3-211-99683-6.
Leondes, Cornelius T. (8 August 2000). Energy and Power Systems. CRC Press. ISBN 978-90-5699-677-2.
Mahalik, Nitaigour Premchand (2003). Mechatronics: Principles, Concepts and Applications. Tata McGraw-Hill Education. ISBN 978-0-07-048374-3.
Maluf, Nadim; Williams, Kirt (1 January 2004). Introduction to Microelectromechanical Systems Engineering. Artech House. ISBN 978-1-58053-591-5.
Manolakis, Dimitris G.; Ingle, Vinay K. (21 November 2011). Applied Digital Signal Processing: Theory and Practice. Cambridge University Press. ISBN 978-1-139-49573-8.
Martini, L., "BSCCO-2233 multilayered conductors", in Superconducting Materials for High Energy Colliders, pp. 173–181, World Scientific, 2001 ISBN 981-02-4319-7.
Martinsen, Orjan G.; Grimnes, Sverre (29 August 2011). Bioimpedance and Bioelectricity Basics. Academic Press. ISBN 978-0-08-056880-5.
McDavid, Richard A.; Echaore-McDavid, Susan (1 January 2009). Career Opportunities in Engineering. Infobase Publishing. ISBN 978-1-4381-1070-7.
Merhari, Lhadi (3 March 2009). Hybrid Nanocomposites for Nanotechnology: Electronic, Optical, Magnetic and Biomedical Applications. Springer. ISBN 978-0-387-30428-1.
Mook, William Moyer (2008). The Mechanical Response of Common Nanoscale Contact Geometries. ISBN 978-0-549-46812-7.
Naidu, S. M.; Kamaraju, V. (2009). High Voltage Engineering. Tata McGraw-Hill Education. ISBN 978-0-07-066928-4.
Obaidat, Mohammad S.; Denko, Mieso; Woungang, Isaac (9 June 2011). Pervasive Computing and Networking. John Wiley & Sons. ISBN 978-1-119-97043-9.
Rosenberg, Chaim M. (2008). America at the Fair: Chicago's 1893 World's Columbian Exposition. Arcadia Publishing. ISBN 978-0-7385-2521-1.
Schmidt, Rüdiger, "The LHC accelerator and its challenges", in Kramer M.; Soler, F.J.P. (eds), Large Hadron Collider Phenomenology, pp. 217–250, CRC Press, 2004 ISBN 0-7503-0986-5.
Severs, Jeffrey; Leise, Christopher (24 February 2011). Pynchon's Against the Day: A Corrupted Pilgrim's Guide. Lexington Books. ISBN 978-1-61149-065-7.
Shetty, Devdas; Kolk, Richard (14 September 2010). Mechatronics System Design, SI Version. Cengage Learning. ISBN 978-1-133-16949-9.
Smith, Brian W. (January 2007). Communication Structures. Thomas Telford. ISBN 978-0-7277-3400-6.
Sullivan, Dennis M. (24 January 2012). Quantum Mechanics for Electrical Engineers. John Wiley & Sons. ISBN 978-0-470-87409-7.
Taylor, Allan (2008). Energy Industry. Infobase Publishing. ISBN 978-1-4381-1069-1.
Thompson, Marc (12 June 2006). Intuitive Analog Circuit Design. Newnes. ISBN 978-0-08-047875-3.
Tobin, Paul (1 January 2007). PSpice for Digital Communications Engineering. Morgan & Claypool Publishers. ISBN 978-1-59829-162-9.
Tunbridge, Paul (1992). Lord Kelvin, His Influence on Electrical Measurements and Units. IET. ISBN 978-0-86341-237-0.
Tuzlukov, Vyacheslav (12 December 2010). Signal Processing Noise. CRC Press. ISBN 978-1-4200-4111-8.
Walker, Denise (2007). Metals and Non-metals. Evans Brothers. ISBN 978-0-237-53003-7.
Wildes, Karl L.; Lindgren, Nilo A. (1 January 1985). A Century of Electrical Engineering and Computer Science at MIT, 1882–1982. MIT Press. p. 19. ISBN 978-0-262-23119-0.
Zhang, Yan; Hu, Honglin; Luo, Jijun (27 June 2007). Distributed Antenna Systems: Open Architecture for Future Wireless Communications. CRC Press. ISBN 978-1-4200-4289-4.
== Further reading ==
Adhami, Reza; Meenen, Peter M.; Hite, Denis (2007). Fundamental Concepts in Electrical and Computer Engineering with Practical Design Problems. Universal-Publishers. ISBN 978-1-58112-971-7.
Bober, William; Stevens, Andrew (27 August 2012). Numerical and Analytical Methods with MATLAB for Electrical Engineers. CRC Press. ISBN 978-1-4398-5429-7.
Bobrow, Leonard S. (1996). Fundamentals of Electrical Engineering. Oxford University Press. ISBN 978-0-19-510509-4.
Chen, Wai Kai (16 November 2004). The Electrical Engineering Handbook. Academic Press. ISBN 978-0-08-047748-0.
Ciuprina, G.; Ioan, D. (30 May 2007). Scientific Computing in Electrical Engineering. Springer. ISBN 978-3-540-71980-9.
Faria, J. A. Brandao (15 September 2008). Electromagnetic Foundations of Electrical Engineering. John Wiley & Sons. ISBN 978-0-470-69748-1.
Jones, Lincoln D. (July 2004). Electrical Engineering: Problems and Solutions. Dearborn Trade Publishing. ISBN 978-1-4195-2131-7.
Karalis, Edward (18 September 2003). 350 Solved Electrical Engineering Problems. Dearborn Trade Publishing. ISBN 978-0-7931-8511-5.
Krawczyk, Andrzej; Wiak, S. (1 January 2002). Electromagnetic Fields in Electrical Engineering. IOS Press. ISBN 978-1-58603-232-6.
Laplante, Phillip A. (31 December 1999). Comprehensive Dictionary of Electrical Engineering. Springer. ISBN 978-3-540-64835-2.
Leon-Garcia, Alberto (2008). Probability, Statistics, and Random Processes for Electrical Engineering. Prentice Hall. ISBN 978-0-13-147122-1.
Malaric, Roman (2011). Instrumentation and Measurement in Electrical Engineering. Universal-Publishers. ISBN 978-1-61233-500-1.
Sahay, Kuldeep; Pathak, Shivendra (1 January 2006). Basic Concepts of Electrical Engineering. New Age International. ISBN 978-81-224-1836-1.
Srinivas, Kn (1 January 2007). Basic Electrical Engineering. I. K. International Pvt Ltd. ISBN 978-81-89866-34-1.
== External links ==
International Electrotechnical Commission (IEC)
MIT OpenCourseWare Archived 26 January 2008 at the Wayback Machine in-depth look at Electrical Engineering – online courses with video lectures.
IEEE Global History Network A wiki-based site with many resources about the history of IEEE, its members, their professions and electrical and informational technologies and sciences. | Wikipedia/electrical_engineering |
Transatlantic telegraph cables were undersea cables running under the Atlantic Ocean for telegraph communications. Telegraphy is a largely obsolete form of communication, and the cables have long since been decommissioned, but telephone and data are still carried on other transatlantic telecommunications cables.
The Atlantic Telegraph Company led by Cyrus West Field constructed the first transatlantic telegraph cable. The project began in 1854 with the first cable laid from Valentia Island off the west coast of Ireland to Bay of Bulls, Trinity Bay, Newfoundland. The first communications occurred on August 16, 1858, but the line speed was poor. The first official telegram to pass between two continents that day was a letter of congratulations from Queen Victoria of the United Kingdom to President of the United States James Buchanan. Signal quality declined rapidly, slowing transmission to an almost unusable speed. The cable was destroyed after three weeks when Wildman Whitehouse applied excessive voltage to it while trying to achieve faster operation. It has been argued that the cable's faulty manufacture, storage and handling would have caused its premature failure in any case. Its short life undermined public and investor confidence and delayed efforts to restore a connection.
The second cable was laid in 1865 with improved material. It was laid from the ship SS Great Eastern, built by John Scott Russell and Isambard Kingdom Brunel and skippered by Sir James Anderson. More than halfway across, the cable broke, and after many rescue attempts, it was abandoned. In July 1866 a third cable was laid from The Anglo-American Cable house on the Telegraph Field, Foilhommerum. On July 13, Great Eastern steamed westward to Heart's Content, Newfoundland, and on July 27 the successful connection was put into service. The 1865 cable was also retrieved and spliced, so two cables were in service. These cables proved more durable. Line speed was very good, and the slogan "Two weeks to two minutes" was coined to emphasize the great improvement over ship-borne dispatches. The cables altered the personal, commercial and political relations between people across the Atlantic. Since 1866, there has been a permanent cable connection between the continents.
In the 1870s, duplex and quadruplex transmission and receiving systems were set up that could relay multiple messages over the cable. Before the first transatlantic cable, communications between Europe and the Americas had occurred only by ship and could be delayed for weeks by severe winter storms. By contrast, the transatlantic cable made possible a message and response on the same day.
== Early history ==
In the 1840s and 1850s several people proposed or advocated construction of a telegraph cable across the Atlantic, including Edward Thornton and Alonzo Jackman.
As early as 1840 Samuel F. B. Morse proclaimed his faith in the idea of a submarine line across the Atlantic Ocean. By 1850 a cable was run between England and France. That year, Bishop John T. Mullock, head of the Catholic Church in Newfoundland, proposed a telegraph line through the forest from St. John's to Cape Ray and cables across the Gulf of St. Lawrence from Cape Ray to Nova Scotia across the Cabot Strait.
Around the same time, a similar plan occurred to Frederic Newton Gisborne, a telegraph engineer in Nova Scotia. In the spring of 1851 he procured a grant from the Newfoundland legislature and, having formed a company, began building the landline.
== A plan takes shape ==
In 1854, businessman and financier Cyrus West Field invited Gisborne to his house to discuss the project. From his visitor, Field considered the idea that the cable to Newfoundland might be extended across the Atlantic Ocean.
Field was ignorant of submarine cables and the deep sea. He consulted Morse and Lieutenant Matthew Maury, an authority on oceanography. The charts Maury constructed from soundings in the logs of multiple ships indicated that there was a feasible route across the Atlantic. It seemed so ideal for cable laying that Maury named it Telegraph Plateau. Maury's charts also indicated that a route directly to the US was too rugged to be tenable and considerably longer. Field adopted Gisborne's scheme as a preliminary step to the bigger undertaking and promoted the New York, Newfoundland and London Telegraph Company to establish a telegraph line between America and Europe.
The first step was to finish the line between St. John's and Nova Scotia, which was undertaken by Gisborne and Field's brother, Matthew. In 1855 an attempt was made to lay a cable across the Cabot Strait in the Gulf of Saint Lawrence. It was laid out from a barque in tow of a steamer. When half the cable was laid, a gale rose, and the line was cut to keep the barque from sinking. In 1856 a steamboat was fitted out for the purpose, and the link from Cape Ray, Newfoundland to Aspy Bay, Nova Scotia was successfully laid. The project's final cost exceeded $1 million, and the transatlantic segment would cost much more.
In 1855, Field crossed the Atlantic, the first of 56 crossings in the course of the project, to consult with John Watkins Brett, the greatest authority on submarine cables at the time. Brett's Submarine Telegraph Company laid the first ocean cable in 1850 across the English Channel, and his English and Irish Magnetic Telegraph Company had laid a cable to Ireland in 1853, the deepest cable to that date. Further reasons for the trip were that all the commercial manufacturers of submarine cable were in Britain, and Field had failed to raise significant funds for the project in New York.
Field pushed the project ahead with tremendous energy and speed. Even before forming a company to carry it out, he ordered 2,500 nautical miles (4,600 km; 2,900 mi) of cable from the Gutta Percha Company. The Atlantic Telegraph Company was formed in October 1856, with Brett as president and Field as vice president. Charles Tilston Bright, who already worked for Brett, was made chief engineer, and Wildman Whitehouse, a medical doctor self-educated in electrical engineering, was appointed chief electrician. Field provided a quarter of the capital himself. After the remaining shares were sold, largely to existing investors in Brett's company, an unpaid board of directors was formed, which included William Thomson (the future Lord Kelvin), a respected scientist. Thomson also acted as a scientific advisor. Morse, a shareholder in the Nova Scotia project and acting as the electrical advisor, was also on the board.
== First transatlantic cable ==
The cable consisted of 7 copper wires, each weighing 26 kg/km (107 pounds per nautical mile), covered with three coats of gutta-percha (as suggested by Jonathan Nash Hearder), weighing 64 kg/km (261 pounds per nautical mile), and wound with tarred hemp, over which a sheath of 18 strands, each of 7 iron wires, was laid in a close helix. It weighed nearly 550 kg/km (1.1 tons per nautical mile), was relatively flexible, and could withstand tension of several tens of kilonewtons (several tons).
The cable from the Gutta Percha Company was armoured separately by wire-rope manufacturers, the standard practice at the time. In the rush to proceed, only four months were allowed for the cable's completion. As no wire-rope maker had the capacity to make so much cable in such a short period, the task was shared by two English firms: Glass, Elliot & Co. of Greenwich and R.S. Newall and Company of Birkenhead. Late in manufacturing, it was discovered that the two batches had been made with strands twisted in opposite directions. This meant that they could not be directly spliced wire-to-wire, as the iron wire on both cables would unwind when it was put under tension during laying. The problem was solved by splicing through an improvised wooden bracket to hold the wires in place, but the mistake created negative publicity for the project.
The British government gave Field a subsidy of £1,400 a year (£170,000 today) and loaned ships for cable laying and support. Field also solicited aid from the U.S. government, and a bill authorizing a subsidy was submitted in Congress. It passed the Senate by only a single vote, due to opposition from protectionist senators. It passed in the House of Representatives despite similar resistance and was signed by President Franklin Pierce.
The first attempt, in 1857, was a failure. The cable-laying vessels were the converted warships HMS Agamemnon and USS Niagara, borrowed from their respective governments. Both were needed as neither could hold 2,500 nautical miles of cable alone. The cable was started at the white strand near Ballycarbery Castle in County Kerry, on the southwest coast of Ireland, on August 5, 1857. It broke on the first day, but was grappled and repaired. It broke again over Telegraph Plateau, nearly 3,200 m (10,500 ft) deep, and the operation was abandoned for the year. Three hundred miles (480 km) of cable were lost, but the remaining 1,800 miles (2,900 km) were sufficient to complete the task. During this period, Morse clashed with Field, was removed from the board, and took no further part in the enterprise.
The problems with breakage were due largely to difficulty controlling the cable tensions with the braking mechanism as the cable was payed out. A new mechanism was designed and successfully tested in the Bay of Biscay with Agamemnon in May 1858. On 10 June, Agamemnon and Niagara set sail to try again. Ten days out they encountered a severe storm, and the enterprise was nearly brought to a premature end. The ships were top-heavy with cable, which could not all fit in the holds, and the ships struggled to stay upright. Ten sailors were hurt, and Thomson's electrical cabin was flooded. The vessels arrived at the middle of the Atlantic on June 25 and spliced cable from the two ships together. Agamemnon payed out eastwards towards Valentia Island, and Niagara westward towards Newfoundland. The cable broke after less than 3 nautical miles (5.6 km; 3.5 mi), again after about 54 nautical miles (100 km; 62 mi), and for a third time when about 200 nautical miles (370 km; 230 mi) had been run out of each vessel.
The expedition returned to Queenstown, County Cork, Ireland. Some directors were in favour of abandoning the project and selling off the cable, but Field persuaded them to keep going. The ships set out again on 17 July, and the middle splice was finished on 29 July 1858. The cable ran easily this time. Niagara arrived in Trinity Bay, Newfoundland on 4 August, and the next morning the shore end was landed. Agamemnon arrived at Valentia Island on 5 August; the shore end was landed at Knightstown and laid to the nearby cable house.
== First contact ==
Test messages were sent from Newfoundland beginning 10 August 1858. The first was successfully read at Valentia on 12 August and in Newfoundland on 13 August. Further test and configuration messages followed until 16 August, when the first official message was sent via the cable:
Directors of Atlantic Telegraph Company, Great Britain, to Directors in America:—Europe and America are united by telegraph. Glory to God in the highest; on earth peace, good will towards men.
Next was the text of a congratulatory telegram from Queen Victoria to President James Buchanan at his summer residence in the Bedford Springs Hotel in Pennsylvania, expressing hope that the cable would prove "an additional link between the nations whose friendship is founded on their common interest and reciprocal esteem". The President responded: "It is a triumph more glorious, because far more useful to mankind, than was ever won by conqueror on the field of battle. May the Atlantic telegraph, under the blessing of Heaven, prove to be a bond of perpetual peace and friendship between the kindred nations, and an instrument destined by Divine Providence to diffuse religion, civilization, liberty, and law throughout the world."
The messages were hard to decipher; Queen Victoria's message of 98 words took 16 hours to send. Nonetheless, they engendered an outburst of enthusiasm. The next morning a grand salute of 100 guns resounded in New York City, streets were hung with flags, bells of the churches were rung, and at night the city was illuminated. On 1 September there was a parade, followed by an evening torchlight procession and a fireworks display that caused a fire in the Town Hall. Bright was knighted for his part, the first such honour to the telegraph industry.
== Failure of the first cable ==
Operation of the 1858 cable was plagued by conflict between two of the project's senior members – Thomson and Whitehouse. Whitehouse was a medical doctor by training, but had taken an enthusiastic interest in the new electrical technology and given up his medical practice to follow a new career. He had no formal training in physics; all his knowledge was gained through practical experience. The two clashed even before the project began, when Whitehouse disputed Thomson's law of squares when the latter presented it to a British Association meeting in 1855. Thomson's law predicted that transmission speed on the cable would be very slow due to an effect called retardation. To test the theory, Bright gave Whitehouse overnight access to the Magnetic Telegraph Company's long underground lines. Whitehouse joined several lines together to a distance similar to the transatlantic route and declared that there would be no problem. Morse was also present at this test and supported Whitehouse. Thomson believed that Whitehouse's measurements were flawed and that underground and underwater cables were not fully comparable. Thomson believed that a larger cable was needed to mitigate the retardation problem. In mid-1857, on his own initiative, he examined samples of copper core of allegedly identical specification and found variations in resistance up to a factor of two. But cable manufacture was already underway, and Whitehouse supported use of a thinner cable, so Field went with the cheaper option.
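Thomson's law of squares, mentioned above, can be put in modern terms (a paraphrase rather than Thomson's own notation): the cable behaves as a distributed resistance and capacitance, so the retardation time constant grows with the square of the cable length $l$ and the attainable signalling rate falls in proportion:

$$\tau \propto R\,C\,l^{2}, \qquad \text{signalling rate} \propto \frac{1}{l^{2}},$$

where $R$ and $C$ are the resistance and capacitance per unit length. This is why Thomson pressed for a thicker, lower-resistance conductor.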
Another point of contention was the itinerary for deployment. Thomson favoured starting mid-Atlantic and the two ships heading in opposite directions, which would halve the time required. Whitehouse wanted both ships to travel together from Ireland so that progress could be reported back to the base in Valentia through the cable. Whitehouse overruled Thomson's suggestion on the 1857 voyage, but Bright convinced the directors to approve a mid-ocean start on the subsequent 1858 voyage. Whitehouse, as chief electrician, was supposed to be on board the cable-laying vessel, but repeatedly found excuses for the 1857 attempt, the trials in the Bay of Biscay, and the two attempts in 1858. In 1857, Thomson was sent in his place, and in 1858 Field diplomatically assigned the two to different ships to avoid conflict—but as Whitehouse continued to evade the voyage, Thomson went alone.
=== Thomson's mirror galvanometer ===
After his experience on the 1857 voyage, Thomson realised that a better method of detecting the telegraph signal was required. While waiting for the next voyage, he developed his mirror galvanometer, an extremely sensitive instrument, much better than any until then. He requested £2,000 from the board to build several, but was given only £500 for a prototype and permission to try it on the next voyage. It was extremely good at detecting the positive and negative edges of telegraph pulses that represented a Morse "dash" and "dot" respectively (the standard system on submarine cables—as, unlike overland telegraphy, both pulses were of the same length). Thomson believed that he could use the instrument with the low voltages from regular telegraph equipment even over the vast length of the Atlantic cable. He successfully tested it on 2,700 miles (4,300 km) of cable in underwater storage at Plymouth.
The mirror galvanometer proved yet another point of contention. Whitehouse wanted to work the cable with a very different scheme, driving it with a massive high-voltage induction coil producing several thousand volts, so that enough current would be available to drive the standard electromechanical printing telegraphs used on inland lines. Thomson's instrument had to be read by eye and was not capable of printing. Nine years later, he invented the syphon recorder for the second transatlantic attempt in 1866. The decision to start mid-Atlantic, combined with Whitehouse dropping out of another voyage, left Thomson on board Agamemnon sailing towards Ireland, with a free hand to use his equipment without Whitehouse's interference. Although Thomson had the status of a mere advisor to engineer C. W. de Sauty, it was not long before all electrical decisions were deferred to him. Whitehouse, staying behind in Valentia, remained out of contact until the ship reached Ireland and landed the cable.
Around this time, the board started having doubts over Whitehouse's generally negative attitude. Not only did he repeatedly clash with Thomson, but he was also critical of Field, and his repeated refusals to carry out his primary duty as chief electrician on board ship made a very bad impression. With the removal of Morse, Whitehouse had lost his only ally on the board, but at this time no action was taken.
=== Cable is damaged and Whitehouse dismissed ===
When Agamemnon reached Valentia on 5 August, Thomson handed over to Whitehouse, and the project was declared a success to the press. Thomson received clear signals throughout the voyage using the mirror galvanometer, but Whitehouse immediately connected his own equipment. The effects of the cable's poor handling and design, and Whitehouse's repeated attempts to drive up to 2,000 volts through the cable, compromised the cable's insulation. Whitehouse attempted to hide the poor performance and was vague in his communications. The expected inaugural message from Queen Victoria had been widely publicised, and when it was not forthcoming, the press speculated that there were problems. Whitehouse announced that five or six weeks would be required for "adjustments". The Queen's message had been received in Newfoundland, but Whitehouse was unable to read the confirmation copy sent back the other way. Finally, on 17 August, he announced receipt. What he did not announce was that the message had been received on the mirror galvanometer when he finally gave up trying with his own equipment. Whitehouse had the message reentered into his printing telegraph locally so he could send on the printed tape and pretend that it had been received that way.
In September 1858, after several days of progressive deterioration of the insulation, the cable failed altogether. The reaction to the news was tremendous. Some writers even hinted that the line was a mere hoax; others pronounced it a stock-exchange speculation. Whitehouse was recalled for the board's investigation, and Thomson took over in Valentia, tasked with reconstructing the events that Whitehouse had obfuscated. Whitehouse was held responsible for the failure and dismissed. The cable might have failed eventually anyway, but Whitehouse certainly brought it about much sooner. The cable was particularly vulnerable in the first hundred miles from Ireland, consisting of the old 1857 cable that was spliced into the new lay and known to be poorly manufactured. Samples showed that in places the conductor was badly off-centre and could easily break through the insulation due to mechanical strains during laying. Tests were conducted on samples of cable submerged in seawater. When perfectly insulated, there was no problem applying thousands of volts. However, a sample with a pinprick hole "lit up like a lantern" when tested, and a large hole was burned in the insulation.
Although the cable was never put in service for public use and never worked well, there was time for a few messages to be passed that went beyond testing. The collision between the Cunard Line ships Europa and Arabia was reported on 17 August. The British Government used the cable to countermand an order for two regiments in Canada to embark for England, saving £50,000. A total of 732 messages were passed before the cable failed.
== Preparing a new attempt ==
Field was undaunted by the failure. He was eager to renew the work, but the public had lost confidence in the scheme, and his efforts to revive the company were futile. It was not until 1864 that, with the assistance of Thomas Brassey and John Pender, he succeeded in raising the necessary capital. The Glass, Elliot, and Gutta-Percha Companies were united to form the Telegraph Construction and Maintenance Company (Telcon, later part of BICC), which undertook to manufacture and lay the new cable. C. F. Varley replaced Whitehouse as chief electrician.
In the meantime, long cables had been submerged in the Mediterranean and the Red Sea. With this experience, an improved cable was designed. The core consisted of seven twisted strands of very pure copper weighing 300 pounds per nautical mile (73 kg/km), coated with Chatterton's compound, then covered with four layers of gutta-percha, alternating with four thin layers of the compound cementing the whole, and bringing the weight of the insulator to 400 lb/nmi (98 kg/km). This core was covered with hemp saturated in a preservative solution, and on the hemp were helically wound eighteen single strands of high tensile steel wire produced by Webster & Horsfall Ltd of Hay Mills Birmingham, each covered with fine strands of manila yarn steeped in the preservative. The weight of the new cable was 35.75 long hundredweight (4000 lb) per nautical mile (980 kg/km), or nearly twice the weight of the old. The Haymills site successfully manufactured 26,000 nautical miles (48,000 km) of wire (1,600 tons), made by 250 workers over eleven months.
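The imperial-to-metric figures quoted above can be cross-checked with a short calculation. The sketch below is only a consistency check using standard conversion factors (1 lb ≈ 0.4536 kg, 1 nautical mile = 1.852 km, 1 long hundredweight = 112 lb); none of the numbers come from the original specifications beyond those already quoted.

```python
# Cross-check of the 1865/66 cable linear densities quoted above.
LB_TO_KG = 0.45359237    # kilograms per pound
NMI_TO_KM = 1.852        # kilometres per nautical mile
CWT_TO_LB = 112          # pounds per long hundredweight

def kg_per_km(lb_per_nmi):
    """Convert a linear density from lb per nautical mile to kg per km."""
    return lb_per_nmi * LB_TO_KG / NMI_TO_KM

print(round(kg_per_km(300)))          # copper core: 73 kg/km
print(round(kg_per_km(400)))          # insulated core: 98 kg/km
total_lb = 35.75 * CWT_TO_LB          # complete cable per nautical mile
print(round(total_lb))                # ~4004 lb, quoted as "4000 lb"
print(round(kg_per_km(total_lb)))     # ~981 kg/km, quoted as "980 kg/km"
```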
== Great Eastern and the second cable ==
The new cable was laid by the ship SS Great Eastern captained by Sir James Anderson. Her immense hull was fitted with three iron tanks for the reception of 2,300 nautical miles (4,300 km) of cable, and her decks furnished with the paying-out gear. At noon on 15 July 1865, Great Eastern left the Nore for Foilhommerum Bay, Valentia Island, where the shore end was laid by Caroline. This attempt failed on 2 August when, after 1,062 nautical miles (1,967 km) had been payed out, the cable snapped near the stern of the ship, and the end was lost.
Great Eastern steamed back to England, where Field issued another prospectus and formed the Anglo-American Telegraph Company, to lay a new cable and complete the broken one. On 13 July 1866, Great Eastern started paying out once more. Despite problems with the weather on the evening of Friday, 27 July, the expedition reached the port of Heart's Content, Newfoundland in a thick fog. Daniel Gooch, chief engineer of the Telegraph Construction and Maintenance Company, who had been aboard the Great Eastern, sent a message to the Secretary of State for Foreign Affairs, Lord Stanley, saying "Perfect communication established between England and America; God grant it will be a lasting source of benefit to our country." The next morning at 9 a.m. a message from England cited these words from the leader in The Times: "It is a great work, a glory to our age and nation, and the men who have achieved it deserve to be honoured among the benefactors of their race." The shore end was landed at Heart's Content Cable Station during the day by Medway. Congratulations poured in, and friendly telegrams were again exchanged between Queen Victoria and the United States.
In August 1866, several ships, including Great Eastern, put to sea again in order to grapple the lost cable of 1865. Their goal was to find the end of the lost cable, splice it to new cable, and complete the run to Newfoundland. They were determined to find it, and their search was based solely upon positions recorded "principally by Captain Moriarty, R. N.", who placed the end of the lost cable at longitude 38° 50' W.
There were some who thought it hopeless to try, declaring that to locate a cable 2.5 mi (4.0 km) down would be like looking for a small needle in a large haystack. However, Robert Halpin, first officer of Great Eastern, navigated HMS Terrible and grappling ship Albany to the correct location. Albany moved slowly here and there, "fishing" for the lost cable with a five-pronged grappling hook at the end of a stout rope. Suddenly, on 10 August, Albany "caught" the cable and brought it to the surface. It seemed to be an unrealistically easy success. During the night, the cable slipped from the buoy to which it had been secured, and the process had to start all over again. This happened several more times, with the cable slipping after being secured in a frustrating battle against rough seas. One time, a sailor even was flung across the deck when the grapnel rope snapped and recoiled around him. Great Eastern and another grappling ship, Medway, arrived to join the search on 12 August. It was not until over a fortnight later, in early September 1866, that the cable was finally retrieved so that it could be worked on; it took 26 hours to get it safely on board Great Eastern. The cable was carried to the electrician's room, where it was determined that the cable was connected. All on the ship cheered or wept as rockets were sent up into the sky to light the sea. The recovered cable was then spliced to a fresh cable in her hold and payed out to Heart's Content, Newfoundland, where she arrived on Saturday, 7 September. There were now two working telegraph lines.
== Repairing the cable ==
Broken cables required an elaborate repair procedure. The approximate distance to the break was determined by measuring the resistance of the broken cable. The repair ship navigated to the location. The cable was hooked with a grapple and brought on board to test for electrical continuity. Buoys were deployed to mark the ends of good cable, and a splice was made between the two ends.
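The resistance measurement mentioned above can be illustrated with a toy calculation. Assuming a conductor of known, uniform resistance per kilometre and a clean fault to earth, the distance to the break is roughly the measured resistance divided by the per-kilometre resistance. The figures and helper function below are hypothetical and only sketch the idea; real fault location relied on more careful bridge measurements.

```python
def distance_to_fault_km(measured_ohms, ohms_per_km):
    """Rough distance to a break, assuming a uniform conductor and a dead
    earth fault so the measured resistance is entirely cable resistance."""
    return measured_ohms / ohms_per_km

# Hypothetical figures: a core of about 2.2 ohms/km measured at 1,540 ohms
# from the shore station would place the break roughly 700 km out.
print(round(distance_to_fault_km(1540, 2.2)))  # -> 700
```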
== Communication speeds ==
Initially messages were sent by an operator using Morse code. The reception was very bad on the 1858 cable, and it took two minutes to transmit just one character (a single letter or a single number), a rate of about 0.1 words per minute. This was despite the use of the highly sensitive mirror galvanometer. The inaugural message from Queen Victoria took 67 minutes to transmit to Newfoundland, but it took 16 hours for the confirmation copy to be transmitted back to Whitehouse in Valentia.
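These figures are mutually consistent, as a quick calculation shows. The five-characters-per-word convention used below is the usual telegraphy assumption and is not stated in the source.

```python
MINUTES_PER_CHARACTER = 2      # reception rate quoted for the 1858 cable
CHARACTERS_PER_WORD = 5        # conventional telegraph word length (assumption)

words_per_minute = 1 / (MINUTES_PER_CHARACTER * CHARACTERS_PER_WORD)
print(words_per_minute)        # 0.1 words per minute

message_words = 98             # Queen Victoria's inaugural message
hours = message_words / words_per_minute / 60
print(round(hours, 1))         # ~16.3 hours, matching the 16-hour confirmation copy
```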
For the 1866 cable, the methods of cable manufacture, as well as of sending messages, had been vastly improved. The 1866 cable could transmit eight words a minute, 80 times faster than the 1858 cable. In later decades Oliver Heaviside and Mihajlo Idvorski Pupin showed that the bandwidth of a cable is limited by an imbalance between capacitive and inductive reactance, which causes severe dispersion and hence signal distortion; see the telegrapher's equations. This can be countered by wrapping the conductor in iron tape or by adding loading coils. It was not until the 20th century that message transmission speeds over transatlantic cables would reach even 120 words per minute. London became the world centre of telecommunications. Eventually, no fewer than eleven cables radiated from Porthcurno Cable Station near Land's End and formed, with their Commonwealth links, a "live" girdle around the world: the All Red Line.
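For reference, the telegrapher's equations mentioned above describe a line by its series resistance R and inductance L and its shunt capacitance C and conductance G, each per unit length; in standard form:

```latex
\frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t} - R\,I,
\qquad
\frac{\partial I}{\partial x} = -C\,\frac{\partial V}{\partial t} - G\,V
```

Loading, whether iron tape wound round the conductor or discrete loading coils, raises L so that the line approaches Heaviside's distortionless condition R/L = G/C, reducing the dispersion that limited the early cables.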
== Later cables ==
Additional cables were laid between Foilhommerum and Heart's Content in 1873, 1874, 1880, and 1894. By the end of the 19th century, British-, French-, German-, and American-owned cables linked Europe and North America in a sophisticated web of telegraphic communications.
The original cables were not fitted with repeaters, which could have mitigated the retardation problem and greatly sped up operation. Repeaters amplify the signal periodically along the line. On telegraph lines this is done with relays, but there was no practical way to power them in a submarine cable. The first transatlantic cable with repeaters was TAT-1 in 1956. This was a telephone cable and used a different technology for its repeaters.
== Impact ==
A 2018 study in the American Economic Review found that the transatlantic telegraph substantially increased trade over the Atlantic and reduced prices. The study estimates that efficiency gains due to the establishment of the telegraph connection amounted to 8 percent of export value.
== See also ==
1929 Grand Banks earthquake
Commercial Cable Company
Transatlantic telephone cable
Western Union Telegraph Expedition – overland alternative via Russia
== Notes ==
== References ==
Bright, Charles Tilston, Submarine Telegraphs, London: Crosby Lockwood, 1898 OCLC 776529627.
Burns, Russell W., Communications: An International History of the Formative Years, IET, 2004 ISBN 0863413277.
Clarke, Arthur C. Voice Across the Sea (1958) and How the World was One (1992); the two books include some of the same material.
Gordon, John Steele. A Thread across the Ocean: The Heroic Story of the Transatlantic Cable. New York: Walker & Co, 2002. ISBN 978-0-8027-1364-3.
Clayton, Howard, Atlantic Bridgehead: The Story of Transatlantic Communications, Garnstone Press, 1968 OCLC 609237003.
Cookson, Gillian, The Cable, Tempus Publishing, 2006 ISBN 0752439030.
Cowan, Mary Morton, Cyrus Field's Big Dream: The Daring Effort to Lay the First Transatlantic Telegraph Cable, Boyds Mills Press, 2018 ISBN 1684371422.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 9780471205050.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David and Charles, 1973 OCLC 655205099.
Lindley, David, Degrees Kelvin: A Tale of Genius, Invention, and Tragedy, Joseph Henry Press, 2004 ISBN 0309167825.
Rozwadowski, Helen M. Fathoming the Ocean: The Discovery and Exploration of the Deep Sea, Harvard University Press, 2009 ISBN 0674042948.
== Further reading ==
Fleming, John Ambrose (1911). "Telegraph" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 26 (11th ed.). Cambridge University Press. pp. 513–541.
Hearn, Chester G., Circuits in the Sea: The Men, the Ships, and the Atlantic Cable, Westport, Connecticut: Prager, 2004 ISBN 0275982319.
Mueller, Simone M. "From cabling the Atlantic to wiring the world: A review essay on the 150th anniversary of the Atlantic telegraph cable of 1866." Technology and Culture (2016): 507–526. online.
Müller, Simone. "The Transatlantic Telegraphs and the 'Class of 1866' – the Formative Years of Transnational Networks in Telegraphic Space, 1858–1884/89." Historical Social Research/Historische Sozialforschung (2010): 237–259. online
Murray, Donald (June 1902). "How Cables Unite the World". The World's Work: A History of Our Time. II: 2298–2309. Retrieved 9 July 2009.
Standage, Tom. The Victorian Internet (1998). ISBN 0-7538-0703-3. The story of the men and women who were the earliest pioneers of the on-line frontier, and the global network they created – a network that was, in effect, the Victorian Internet.
== External links ==
The Atlantic Cable by Bern Dibner (1959) – Complete free electronic version of The Atlantic Cable by Bern Dibner (1959), hosted by the Smithsonian Institution Libraries
History of the Atlantic Cable & Undersea Communications – Comprehensive history of submarine telegraphy with much original material, including photographs of cable manufacturers samples
PBS, American Experience: The Great Transatlantic Cable
The History Channel: Modern Marvels: Transatlantic Cable: 2500 Miles of Copper
A collection of articles on the history of telegraphy
Cabot Strait Telegraph Cable 1856 between Newfoundland and Nova Scotia
American Heritage: The Cable Under the Sea
Alan Hall – First Transatlantic Cable and First message sent to USA 1856 Memorial
Travelogue around the world's communications cables by Neal Stephenson
IEEE History Centre: County Kerry Transatlantic Cable Stations, 1866
IEEE History Centre: Landing of the Transatlantic Cable, 1866
Cyrus Field, "Laying Of The Atlantic Cable" (1866)
The Great Eastern – Robert Dudley Lithographs 1865–66 | Wikipedia/Transatlantic_telegraph_cable |
Wireless telegraphy or radiotelegraphy is the transmission of text messages by radio waves, analogous to electrical telegraphy using cables. Before about 1910, the term wireless telegraphy was also used for other experimental technologies for transmitting telegraph signals without wires. In radiotelegraphy, information is transmitted by pulses of radio waves of two different lengths called "dots" and "dashes", which spell out text messages, usually in Morse code. In a manual system, the sending operator taps on a switch called a telegraph key which turns the transmitter on and off, producing the pulses of radio waves. At the receiver the pulses are audible in the receiver's speaker as beeps, which are translated back to text by an operator who knows Morse code.
Radiotelegraphy was the first means of radio communication. The first practical radio transmitters and receivers invented in 1894–1895 by Guglielmo Marconi used radiotelegraphy. It continued to be the only type of radio transmission during the first few decades of radio, called the "wireless telegraphy era" up until World War I, when the development of amplitude modulation (AM) radiotelephony allowed sound (audio) to be transmitted by radio. Beginning about 1908, powerful transoceanic radiotelegraphy stations transmitted commercial telegram traffic between countries at rates up to 200 words per minute.
Radiotelegraphy was used for long-distance person-to-person commercial, diplomatic, and military text communication throughout the first half of the 20th century. It became a strategically important capability during the two world wars since a nation without long-distance radiotelegraph stations could be isolated from the rest of the world by an enemy cutting its submarine telegraph cables. Radiotelegraphy remains popular in amateur radio. It is also taught by the military for use in emergency communications. However, commercial radiotelegraphy is obsolete.
== Principles ==
Wireless telegraphy or radiotelegraphy, commonly called CW (continuous wave), ICW (interrupted continuous wave) transmission, or on-off keying, and designated by the International Telecommunication Union as emission type A1A or A2A, is a radio communication method. It was transmitted by several different modulation methods during its history. The primitive spark-gap transmitters used until 1920 transmitted damped waves, which had very wide bandwidth and tended to interfere with other transmissions. This type of emission was banned by 1934, except for some legacy use on ships. The vacuum tube (valve) transmitters which came into use after 1920 transmitted code by pulses of unmodulated sinusoidal carrier wave called continuous wave (CW), which is still used today. To receive CW transmissions, the receiver requires a circuit called a beat frequency oscillator (BFO). The third type of modulation, frequency-shift keying (FSK) was used mainly by radioteletype networks (RTTY). Morse code radiotelegraphy was gradually replaced by radioteletype in most high volume applications by World War II.
In manual radiotelegraphy the sending operator manipulates a switch called a telegraph key, which turns the radio transmitter on and off, producing pulses of unmodulated carrier wave of different lengths called "dots" and "dashes", which encode characters of text in Morse code. At the receiving location, Morse code is audible in the receiver's earphone or speaker as a sequence of buzzes or beeps, which is translated back to text by an operator who knows Morse code. In automatic radiotelegraphy, teleprinters at both ends use a code such as the International Telegraph Alphabet No. 2 and produce typed text.
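As a concrete illustration of the on-off keying just described, the sketch below maps a short text to dots and dashes and then to key-down times using the conventional timing of one unit per dot and three per dash. The tiny character table is an illustrative subset, not a complete Morse implementation.

```python
# Minimal sketch: text -> Morse symbols -> key-down durations (in dot units).
MORSE = {
    "A": ".-", "E": ".", "I": "..", "N": "-.", "O": "---",
    "R": ".-.", "S": "...", "T": "-",
}

def to_morse(text):
    """Encode text as Morse, letters separated by spaces (unknown characters skipped)."""
    return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

def key_down_units(symbols):
    """Total key-down time for one letter, counting 1 unit per dot and 3 per dash."""
    return sum(1 if s == "." else 3 for s in symbols)

encoded = to_morse("SOS")
print(encoded)                                                  # ... --- ...
print([key_down_units(letter) for letter in encoded.split()])   # [3, 9, 3]
```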
Radiotelegraphy is obsolete in commercial radio communication, and its last civilian use, requiring maritime shipping radio operators to use Morse code for emergency communications, ended in 1999 when the International Maritime Organization switched to the satellite-based GMDSS system. However it is still used by amateur radio operators, and military services require signalmen to be trained in Morse code for emergency communication. A CW coastal station, KSM, still exists in California, run primarily as a museum by volunteers, and occasional contacts with ships are made. In a minor legacy use, VHF omnidirectional range (VOR) and NDB radio beacons in the aviation radio navigation service still transmit their one to three letter identifiers in Morse code.
Radiotelegraphy is popular amongst radio amateurs worldwide, who commonly refer to it as continuous wave, or just CW. A 2021 analysis of over 700 million communications logged by the Club Log online service, and a similar review of data logged by the American Radio Relay League, both show that wireless telegraphy is the second most popular mode of amateur radio communication, accounting for nearly 20% of contacts. This makes it more popular than voice communication, but not as popular as the FT8 digital mode, which accounted for 60% of amateur radio contacts made in 2021. Since 2003, knowledge of Morse code and wireless telegraphy has no longer been required to obtain an amateur radio licence in many countries; it is, however, still required in some countries to obtain a licence of a different class. As of 2021, licence Class A in Belarus and Estonia, the General class in Monaco, and Class 1 in Ukraine require Morse proficiency to access the full amateur radio spectrum, including the high frequency (HF) bands. Further, the CEPT Class 1 licence in Ireland and Class 1 in Russia, both of which require proficiency in wireless telegraphy, offer additional privileges: a shorter and more desirable call sign in both countries, and the right to use a higher transmit power in Russia.
== History ==
Efforts to find a way to transmit telegraph signals without wires grew out of the success of electric telegraph networks, the first instant telecommunication systems. Developed beginning in the 1830s, a telegraph line was a person-to-person text message system consisting of multiple telegraph offices linked by an overhead wire supported on telegraph poles. To send a message, an operator at one office would tap on a switch called a telegraph key, creating pulses of electric current which spelled out a message in Morse code. When the key was pressed, it would connect a battery to the telegraph line, sending current down the wire. At the receiving office, the current pulses would operate a telegraph sounder, a device that would make a "click" sound when it received each pulse of current. The operator at the receiving station who knew Morse code would translate the clicking sounds to text and write down the message. The ground was used as the return path for current in the telegraph circuit, to avoid having to use a second overhead wire.
By the 1860s, the telegraph was the standard way to send most urgent commercial, diplomatic and military messages, and industrial nations had built continent-wide telegraph networks, with submarine telegraph cables allowing telegraph messages to bridge oceans. However installing and maintaining a telegraph line linking distant stations was very expensive, and wires could not reach some locations such as ships at sea. Inventors realized if a way could be found to send electrical impulses of Morse code between separate points without a connecting wire, it could revolutionize communications.
The successful solution to this problem was the discovery of radio waves in 1887, and the development of practical radiotelegraphy transmitters and receivers by about 1899.
Over several years starting in 1894, the Italian inventor Guglielmo Marconi worked on adapting the newly discovered phenomenon of radio waves to communication, turning what was essentially a laboratory experiment up to that point into a useful communication system, building the first radiotelegraphy system using them. Preece and the General Post Office (GPO) in Britain at first supported and gave financial backing to Marconi's experiments conducted on Salisbury Plain from 1896. Preece had become convinced of the idea through his experiments with wireless induction. However, the backing was withdrawn when Marconi formed the Wireless Telegraph & Signal Company. GPO lawyers determined that the system was a telegraph under the meaning of the Telegraph Act and thus fell under the Post Office monopoly. This did not seem to hold back Marconi. After Marconi sent wireless telegraphic signals across the Atlantic Ocean in 1901, the system began being used for regular communication including ship-to-shore and ship-to-ship communication.
With this development, wireless telegraphy came to mean radiotelegraphy, Morse code transmitted by radio waves. The first radio transmitters, primitive spark gap transmitters used until World War I, could not transmit voice (audio signals). Instead, the operator would send the text message on a telegraph key, which turned the transmitter on and off, producing short ("dot") and long ("dash") pulses of radio waves, groups of which comprised the letters and other symbols of the Morse code. At the receiver, the signals could be heard as musical "beeps" in the earphones by the receiving operator, who would translate the code back into text. By 1910, communication by what had been called "Hertzian waves" was being universally referred to as "radio", and the term wireless telegraphy has been largely replaced by the more modern term "radiotelegraphy".
== Methods ==
The primitive spark-gap transmitters used until 1920 transmitted by a modulation method called damped wave. As long as the telegraph key was pressed, the transmitter would produce a string of transient pulses of radio waves which repeated at an audio rate, usually between 50 and several thousand hertz. In a receiver's earphone, this sounded like a musical tone, rasp or buzz. Thus the Morse code "dots" and "dashes" sounded like beeps. Damped wave had a large frequency bandwidth, meaning that the radio signal was not a single frequency but occupied a wide band of frequencies. Damped wave transmitters had a limited range and interfered with the transmissions of other transmitters on adjacent frequencies.
After 1905 new types of radiotelegraph transmitters were invented which transmitted code using a new modulation method: continuous wave (CW) (designated by the International Telecommunication Union as emission type A1A). As long as the telegraph key was pressed, the transmitter produced a continuous sinusoidal wave of constant amplitude. Since all the radio wave's energy was concentrated at a single frequency, CW transmitters could transmit further with a given power, and also caused virtually no interference to transmissions on adjacent frequencies. The first transmitters able to produce continuous wave were the arc converter (Poulsen arc) transmitter, invented by Danish engineer Valdemar Poulsen in 1903, and the Alexanderson alternator, invented 1906–1912 by Reginald Fessenden and Ernst Alexanderson. These slowly replaced the spark transmitters in high power radiotelegraphy stations.
However, the radio receivers used for damped wave could not receive continuous wave. Because the CW signal produced while the key was pressed was just an unmodulated carrier wave, it made no sound in a receiver's earphones. To receive a CW signal, some way had to be found to make the Morse code carrier wave pulses audible in a receiver.
This problem was solved by Reginald Fessenden in 1901. In his "heterodyne" receiver, the incoming radiotelegraph signal is mixed in the receiver's detector crystal or vacuum tube with a constant sine wave generated by an electronic oscillator in the receiver called a beat frequency oscillator (BFO). The frequency of the oscillator $f_{\text{BFO}}$ is offset from the radio transmitter's frequency $f_{\text{IN}}$. In the detector the two frequencies subtract, and a beat frequency (heterodyne) at the difference between the two frequencies is produced: $f_{\text{BEAT}} = |f_{\text{IN}} - f_{\text{BFO}}|$. If the BFO frequency is near enough to the radio station's frequency, the beat frequency is in the audio frequency range and can be heard in the receiver's earphones. During the "dots" and "dashes" of the signal, the beat tone is produced, while between them there is no carrier so no tone is produced. Thus the Morse code is audible as musical "beeps" in the earphones.
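A small numerical sketch of the heterodyne idea: offsetting the BFO slightly from the incoming carrier produces a difference frequency in the audible range. The specific frequencies below are arbitrary illustrative values.

```python
def beat_frequency(f_in_hz, f_bfo_hz):
    """Audible beat produced when the incoming carrier is mixed with the BFO."""
    return abs(f_in_hz - f_bfo_hz)

f_in = 500_000      # hypothetical 500 kHz CW transmission
f_bfo = 500_800     # receiver BFO offset by 800 Hz

tone = beat_frequency(f_in, f_bfo)
print(tone)                       # 800 Hz, a comfortable Morse sidetone
print(20 <= tone <= 20_000)       # True: within the audio range
```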
The BFO was rare until the invention in 1913 of the first practical electronic oscillator, the vacuum tube feedback oscillator by Edwin Armstrong. After this time BFOs were a standard part of radiotelegraphy receivers. Each time the radio was tuned to a different station frequency, the BFO frequency had to be changed as well, so the BFO oscillator had to be tunable. In later superheterodyne receivers from the 1930s on, the BFO signal was mixed with the constant intermediate frequency (IF) produced by the superheterodyne's frequency converter. Therefore, the BFO could operate at a fixed frequency.
Continuous-wave vacuum tube transmitters replaced the other types of transmitter once power tubes became cheaply available after World War I. CW became the standard method of transmitting radiotelegraphy by the 1920s; damped-wave spark transmitters were banned in the 1930s, and CW continues to be used today. Even today most communications receivers produced for use in shortwave communication stations have BFOs.
== Industry ==
The International Radiotelegraph Union was unofficially established at the first International Radiotelegraph Convention in 1906, and was merged into the International Telecommunication Union in 1932. When the United States entered World War I, private radiotelegraphy stations were prohibited, which put an end to several pioneers' work in this field. By the 1920s, there was a worldwide network of commercial and government radiotelegraphic stations, plus extensive use of radiotelegraphy by ships for both commercial purposes and passenger messages. The transmission of sound (radiotelephony) began to displace radiotelegraphy by the 1920s for many applications, making possible radio broadcasting. Wireless telegraphy continued to be used for private person-to-person business, governmental, and military communication, such as telegrams and diplomatic communications, and evolved into radioteletype networks. The ultimate implementation of wireless telegraphy was telex, using radio signals, which was developed in the 1930s and was for many years the only reliable form of communication between many distant countries. The most advanced standard, CCITT R.44, automated both routing and encoding of messages by short wave transmissions.
Today, due to more modern text transmission methods, Morse code radiotelegraphy for commercial use has become obsolete. On shipboard, the computer and satellite-linked GMDSS system have largely replaced Morse as a means of communication.
== Regulation ==
Continuous wave (CW) radiotelegraphy is regulated by the International Telecommunication Union (ITU) as emission type A1A.
The US Federal Communications Commission issues a lifetime commercial Radiotelegraph Operator License. This requires passing a simple written test on regulations, a more complex written exam on technology, and demonstrating Morse reception at 20 words per minute plain language and 16 wpm code groups. (Credit is given for amateur extra class licenses earned under the old 20 wpm requirement.)
== Gallery ==
== See also ==
AT&T Corporation, originally the American Telephone and Telegraph Company
Electrical telegraph
Imperial Wireless Chain
Radioteletype
== References and notes ==
=== General ===
American Institute of Electrical Engineers. (1908). "Wireless Telephony – By R. A. Fessenden (Illustrated.)", Transactions of the American Institute of Electrical Engineers. New York: American Institute of Electrical Engineers.
=== Citations ===
== Further reading ==
Sarkar, T. K.; Mailloux, Robert; Oliner, Arthur A.; Salazar-Palma, M.; Sengupta, Dipak L. (2006-01-30). History of Wireless. Wiley. ISBN 978-0-471-78301-5.
Aitken, Hugo G. J. (1976). Syntony and spark: the origins of radio. Science, culture and society. New York London Sydney Toronto: J. Wiley and sons. ISBN 0471018163.
Sivowitch, Elliot N. (December 1970). "A technological survey of broadcasting's "pre-history," 1876–1920". Journal of Broadcasting. 15 (1): 1–20. doi:10.1080/08838157009363620. ISSN 0021-938X.
"Wireless telegraphy". The New International Encyclopædia. Dodd, Mead. 1922. p. 637.
Chisholm, Hugh (1911). The Encyclopædia Britannica: Submarine Mines-Tom-tom. At the University Press.
Stanley, Rupert (1919). Textbook on wireless telegraphy. Longmans, Green.
Miessner, Benjamin Franklin (1916). Radiodynamics, the wireless control of torpedoes and other mechanisms. University of California Libraries. New York, D. Van Nostrand company.
Thompson, Silvanus P. (Silvanus Phillips) (1915). Elementary lessons in electricity and magnetism. University of Michigan. New York: Macmillan.
Ashley, Charles Grinnell; Hayward, Charles Brian (1912). Wireless Telegraphy and Wireless Telephony: An Understandable Presentation of the Science of Wireless Transmission of Intelligence. American School of Correspondence.
Massie, Walter Wentworth; Underhill, Charles Reginald (1908). Wireless Telegraphy and Telephony Popularly Explained. D. Van Nostrand Company.
"Developments in wireless telegraphy". International Marine Engineering. Simmons-Boardman Publishing Company. 1911.
Bottone, Selimo Romeo (1910). Wireless telegraphy and Hertzian waves. London, New York, Whittaker & co.
Murray, James Erskine (1907). A handbook of wireless telegraphy. University of Wisconsin - Madison. New York, D. Van Nostrand company; [etc.]
Twining, Harry La Verne (1909). Wireless Telegraphy and High Frequency Electricity: A Manual Containing Detailed Information for the Construction of Transformers, Wireless Telegraph and High Frequency Apparatus, with Chapters on Their Theory and Operation.
Poincaré, Lucien (28 February 2005) [1909]. "Chapter VII: A Chapter in the History of Science: Wireless telegraphy". The New Physics and Its Evolution. New York.
Fleming, John Ambrose (1908). The principles of electric wave telegraphy. University of California. London, New York and Bombay, Longmans, Green, and Co.
Simmons, Harold H. (1909). "Wireless telegraphy". Outlines of electrical engineering. University of Michigan. London; New York : Cassell and Co.
Domenico Mazzotto (1906). Wireless Telegraphy and Telephony. University of Michigan. Whittaker & Co.
Collins, A. Frederick (Archie Frederick) (1905). Wireless telegraphy; its history, theory and practice. University of Michigan. New York, McGraw publishing company.
Charles Henry Sewall (1903). Wireless Telegraphy: Its Origins, Development, Inventions, and Apparatus. University of California. D. Van Nostrand Co.
Trevert, Edward (1904). The A B C of Wireless Telegraphy: A Plain Treatise on Hertzian Wave Signaling; Embracing Theory, Methods of Operation, and how to Build Various Pieces of the Apparatus Employed. Bubier publishing Company.
John Joseph Fahie (1900). A History of Wireless Telegraphy, 1838-1899: Including Some Bare-wire Proposals for Subaqueous Telegraphs. University of Michigan. Dodd, Mead & co.
"Telegraphing across space, Electric wave method". The Electrical Engineer. Biggs & Company. 1898.
"Radio telephony". Transactions of the American Institute of Electrical Engineers. American Institute of Electrical Engineers. 1919. p. 306.
== External links ==
John Joseph Fahie, A History of Wireless Telegraphy, 1838–1899: including some bare-wire proposals for subaqueous telegraphs:
1899 (first edition)
1901 (second edition)
Alfred Thomas Story, The Story of Wireless Telegraphy (1904)
Sparks Telegraph Key Review
Cyril M. Jansky, Principles of Radiotelegraphy (1919)
Principles of Radiotelegraphy (1919) | Wikipedia/Wireless_telegraphy |
Processor design is a subfield of computer science and computer engineering (fabrication) that deals with creating a processor, a key component of computer hardware.
The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. For microprocessor design, the design is then fabricated using one of the various semiconductor device fabrication processes, resulting in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB).
The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values, and control program flow.
Processor designs are often tested and validated on one or several FPGAs before sending the design of the processor to a foundry for semiconductor fabrication.
== Details ==
=== Basics ===
CPU design is divided into multiple components. Information is transferred through datapaths (such as ALUs and pipelines). These datapaths are controlled through logic by control units. Memory components such as register files and caches retain information or state. Clock circuitry maintains internal rhythms and timing through clock drivers, PLLs, and clock distribution networks. Pad transceiver circuitry allows signals to be received and sent off-chip, and a logic gate cell library is used to implement the logic. Logic gates are the foundation of processor design, as they are used to implement most of the processor's components.
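As a toy illustration of how datapath components are built up from logic gates, the following behavioural sketch composes a ripple-carry adder from Boolean primitives. It is purely illustrative; real designs express such logic in a hardware description language such as VHDL or Verilog and synthesise it to the cell library mentioned above.

```python
# Toy behavioural model: a ripple-carry adder built from gate-level operations.
def full_adder(a, b, cin):
    """One-bit full adder expressed with XOR/AND/OR gates."""
    s = a ^ b ^ cin                          # sum bit
    cout = (a & b) | (a & cin) | (b & cin)   # carry out (majority of the inputs)
    return s, cout

def ripple_add(x, y, width=8):
    """Add two unsigned integers bit by bit, least significant bit first."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(ripple_add(200, 100))   # (44, 1): 300 wraps modulo 256, with carry out set
```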
CPUs designed for high-performance markets might require custom (optimized or application specific (see below)) designs for each of these items to achieve frequency, power-dissipation, and chip-area goals whereas CPUs designed for lower performance markets might lessen the implementation burden by acquiring some of these items by purchasing them as intellectual property. Control logic implementation techniques (logic synthesis using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, finite-state machines, microprogramming (common from 1965 to 1985), and Programmable logic arrays (common in the 1980s, no longer common).
=== Implementation logic ===
Device types used to implement the logic include:
Individual vacuum tubes, individual transistors and semiconductor diodes, and transistor-transistor logic small-scale integration logic chips – no longer used for CPUs
Programmable array logic and programmable logic devices – no longer used for CPUs
Emitter-coupled logic (ECL) gate arrays – no longer common
CMOS gate arrays – no longer used for CPUs
CMOS mass-produced ICs – the vast majority of CPUs by volume
CMOS ASICs – only for a minority of special applications due to expense
Field-programmable gate arrays (FPGA) – common for soft microprocessors, and more or less required for reconfigurable computing
A CPU design project generally has these major tasks:
Programmer-visible instruction set architecture, which can be implemented by a variety of microarchitectures
Architectural study and performance modeling in ANSI C/C++ or SystemC
High-level synthesis (HLS) or register transfer level (RTL, e.g. logic) implementation
RTL verification
Circuit design of speed critical components (caches, registers, ALUs)
Logic synthesis or logic-gate-level design
Timing analysis to confirm that all logic and circuits will run at the specified operating frequency
Physical design including floorplanning, place and route of logic gates
Checking that RTL, gate-level, transistor-level and physical-level representations are equivalent
Checks for signal integrity, chip manufacturability
Re-designing a CPU core to a smaller die area helps to shrink everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one very-large-scale integration chip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost.
As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU.
Key CPU architectural innovations include index register, cache, virtual memory, instruction pipelining, superscalar, CISC, RISC, virtual machine, emulators, microprogram, and stack.
=== Microarchitectural concepts ===
=== Research topics ===
A variety of new CPU design ideas have been proposed,
including reconfigurable logic, clockless CPUs, computational RAM, and optical computing.
=== Performance analysis and benchmarking ===
Benchmarking is a way of testing CPU speed. Examples include SPECint and SPECfp, developed by the Standard Performance Evaluation Corporation, and ConsumerMark, developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).
Some of the commonly used metrics include:
Instructions per second - Most consumers pick a computer architecture (normally Intel IA32 architecture) to be able to run a large base of pre-existing pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see Megahertz Myth).
FLOPS - The number of floating point operations per second is often important in selecting computers for scientific computations.
Performance per watt - System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
Some system designers building parallel computers pick CPUs based on the speed per dollar.
System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has a deterministic response, as in a digital signal processor (DSP).
Computer programmers who program directly in assembly language want a CPU to support a full featured instruction set.
Low power - For systems with limited power sources (e.g. solar, batteries, human power).
Small size or low weight - for portable embedded systems, systems for spacecraft.
Environmental impact - Minimizing the environmental impact of computers during manufacturing and recycling as well as during use: reducing waste and hazardous materials (see Green computing).
There may be tradeoffs in optimizing some of these metrics. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa.
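The tradeoffs among these metrics can be made concrete with a small comparison. Every number below is invented purely for illustration and does not describe any real processor.

```python
# Hypothetical comparison of two CPU designs on the metrics listed above.
cpus = {
    "fast_core":      {"instr_per_sec": 4.0e9, "watts": 95, "price_usd": 400},
    "efficient_core": {"instr_per_sec": 1.5e9, "watts": 10, "price_usd": 60},
}

for name, m in cpus.items():
    per_watt = m["instr_per_sec"] / m["watts"]
    per_dollar = m["instr_per_sec"] / m["price_usd"]
    print(f"{name}: {per_watt:.2e} instr/s per W, {per_dollar:.2e} instr/s per $")

# The fast core wins on raw throughput, while the efficient core wins on both
# performance per watt and performance per dollar - exactly the kind of
# tradeoff described above.
```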
== Markets ==
There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets.
=== General-purpose computing ===
As of 2010, in the general-purpose computing market, that is, desktop, laptop, and server computers commonly used in businesses and homes, the Intel IA-32 and the 64-bit version x86-64 architecture dominate the market, with its rivals PowerPC and SPARC maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops.
Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, along with the disadvantages of being relatively costly and having high power consumption.
==== High-end processor economics ====
In 1984, most high-performance CPUs required four to five years to develop.
=== Scientific computing ===
Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs.
=== Embedded design ===
As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in volumes of many billions of units per year, though mostly at much lower price points than general-purpose processors.
These single-function devices differ from the more familiar general-purpose CPUs in several ways:
Low cost is of high importance.
It is important to maintain a low power dissipation as embedded devices often have a limited battery life and it is often impractical to include cooling fans.
To reduce system cost, peripherals are integrated with the processor on the same silicon chip.
Keeping peripherals on-chip also reduces power consumption as external GPIO ports typically require buffering so that they can source or sink the relatively high current loads that are required to maintain a strong signal outside of the chip.
Many embedded applications have a limited amount of physical space for circuitry; keeping peripherals on-chip will reduce the space required for the circuit board.
The program and data memories are often integrated on the same chip. When the only allowed program memory is ROM, the device is known as a microcontroller.
For many embedded applications, interrupt latency will be more critical than in some general-purpose processors.
==== Embedded processor economics ====
The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year. The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.4730 square millimeters of silicon.
As of 2009, more CPUs are produced using the ARM architecture family instruction sets than any other 32-bit instruction set.
The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time.
The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 human years of work time.
The 8-bit AVR architecture and first AVR microcontroller was conceived and designed by two students at the Norwegian Institute of Technology.
The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people.
==== Research and educational CPU design ====
The 32-bit Berkeley RISC I and RISC II processors were mostly designed by a series of students as part of a four quarter sequence of graduate courses.
This design became the basis of the commercial SPARC processor design.
For about a decade, every student taking the 6.004 class at MIT was part of a team—each team had one semester to design and build a simple 8 bit CPU out of 7400 series integrated circuits.
One team of 4 students designed and built a simple 32 bit CPU during that semester.
Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU on an FPGA in a single 15-week semester.
The MultiTitan CPU was designed with 2.5 man years of effort, which was considered "relatively little design effort" at the time.
24 people contributed to the 3.5 year MultiTitan research project, which included designing and building a prototype CPU.
==== Soft microprocessor cores ====
For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market.
== See also ==
Amdahl's law
Central processing unit
Comparison of instruction set architectures
Complex instruction set computer
CPU cache
Electronic design automation
Heterogeneous computing
High-level synthesis
History of general-purpose CPUs
Integrated circuit design
Microarchitecture
Microprocessor
Minimal instruction set computer
Moore's law
Reduced instruction set computer
System on a chip
Network on a chip
Process design kit – a set of documents created or accumulated for a semiconductor device production process
Uncore
== References ==
=== General references ===
Hwang, Enoch (2006). Digital Logic and Microprocessor Design with VHDL. Thomson. ISBN 0-534-46593-5.
Processor Design: An Introduction | Wikipedia/Chip_design |
The Doctor of Engineering (DEng or EngD) or Doctor of Engineering Sciences is a research doctorate in engineering and applied science. An EngD is a terminal degree similar to a PhD in engineering but applicable more in industry rather than in academia. The degree is usually aimed toward working professionals.
The DEng/EngD, along with the PhD, represents the highest academic qualification in engineering, and the successful completion of either is generally required to gain employment as a full-time, tenure-track university professor or postdoctoral researcher in the field. However, due to its nature, a DEng/EngD graduate might be more suitable for a Professor of Practice position. Individuals can use the academic title doctor, which is often represented via the English honorific "Dr".
DEng/EngD candidates submit a significant project, typically referred to as a thesis or praxis, consisting of a body of applied and practical methods/products with the main goal of solving complex industrial problems. Candidates must defend this work before a panel of expert examiners called a thesis or dissertation committee.
== International equivalent qualifications ==
Countries following the German/US model of education usually have similar requirements for awarding PhD (Eng) and Doctor of Engineering degrees. The common degree abbreviations in the US are DEng/EngD and DEngSc/EngScD, whereas in Germany it is more commonly known as Dr.-Ing. The common degree abbreviation in the Netherlands is Professional Doctorate in Engineering (PDEng), which is equivalent to the EngD (as of 1 September 2022, the PDEng title in the Netherlands has been renamed to EngD).
"Dr.techn.", an abbreviation of "Doctor technicae" (German: "Doktor der technischen Wissenschaften") which also interchangeably translates to Doctor of Engineering Sciences, Doctor of Science, Doctor of Technical Sciences, or Doctor of Technology, is a doctoral title granted in Austria by universities of technology, such as TU Wien.
== History ==
To be admitted as a doctoral student, one must usually hold a Master's degree in engineering or a related science subject and pass a comprehensive entrance exam. The student must complete the required coursework, take examinable taught courses, perform independent research under the supervision of a qualified doctoral advisor, and pass the thesis defense. The degree requires a high level of expertise in the theoretical aspects of relevant scientific principles and experience with the details of applying theory to realistic problems. The DEng takes three to six years (full-time) to complete, has compulsory taught components and coursework/projects, and is granted in recognition of high achievement in scholarship and an ability to apply engineering fundamentals to the solution of complex technical problems.
A Doctor of Engineering degree awarded by universities in East Asia is equivalent to a PhD degree. To be admitted as a doctoral student, one must hold a master's degree in the same or a related subject and pass a comprehensive entrance exam. The student must complete the necessary coursework, perform independent research under the supervision of a qualified doctoral advisor, and pass the thesis defense. It usually takes more than three years for a student with an M.S. degree to complete their doctoral study. However, there are a few areas of study (such as Materials Science, Polymer Technology, and Biomedical Engineering) where either a Doctor of Science or a Doctor of Engineering can be awarded, depending upon the graduate school which houses the department.
In Germany the doctoral degree in engineering is called Doktoringenieur (Doktor der Ingenieurwissenschaften, Dr.-Ing.) and is usually earned after four to six years of research and completing a dissertation. A researcher pursuing a doctorate needs to hold a master's degree or the Diplom-Ingenieur degree (Dipl.-Ing.).
In France the degree of "Doctor-Engineer" (docteur-ingénieur) was formerly an applied science research degree. It was discontinued after 1984, and engineers wishing to go further as researchers now have to seek a PhD.
== British Higher Doctorate ==
In the United Kingdom, the D.Eng. degree was traditionally awarded as a higher doctorate on the basis of a significant contribution to some field of engineering over the course of a career. However, since 1992 some British universities have introduced the Engineering Doctorate, abbreviated as "EngD", which is instead a research doctorate and regarded in the UK as equivalent to a PhD.
== Modern British Engineering Doctorate ==
The Engineering Doctorate scheme is a British postgraduate education programme promoted by the UK's Engineering and Physical Sciences Research Council (EPSRC). The programme is undertaken over a period of four years. Students conduct PhD-equivalent research and undertake taught business and technical courses whilst working closely with an industrial sponsor. Successful candidates are awarded the degree of Doctor of Engineering (EngD) and are addressed as doctor.
In the UK, a qualification at a similar level to the doctorate is the NVQ level 8 or QCF level 8. However, a doctoral degree typically incorporates a research project that must offer an original contribution to knowledge within an academic subject area, an element which NVQs lack.
The Engineering Doctorate (EngD) scheme was established by the EPSRC in 1992 following the recommendations of the 1990 Engineering Doctorate Report, produced by a working group chaired by Professor John Parnaby. The scheme was launched with five centres: at Warwick, UMIST and Manchester universities, and a Welsh consortium led by University College Swansea. After a 1997 review, a further tranche of five centres was established, and further centres were added in 2001 and 2006 following calls by EPSRC in particular areas of identified national need.
In a 2006 stakeholder survey of the scheme conducted on behalf of EPSRC it was found that the quality of output of research engineers was perceived to match or exceed that of a PhD. However, the majority of respondents disagreed with claims that EngDs were recruited to higher-paid posts than PhDs or that EngDs were more desirable to employers than PhDs. Observations were made that the EngD was not widely known, and that universities may offer EngD degrees that were not necessarily of the format promoted by the EPSRC.
A March 2007 "Review of the EPSRC Engineering Doctorate Centres" noted that since 1992, some 1230 research engineers had been enrolled, sponsored by over 510 different companies (28 had sponsored at least six REs), at 22 centres based at 14 universities (some jointly run by several collaborating universities). The panel remained convinced of the value and performance of the EngD scheme, and made six key recommendations including clearer brand definition, academic study of the longer-term impacts of the scheme, promotion of the scheme to potential new sponsors, business sectors and REs, work with the Engineering Council UK to develop a career path for REs to Chartered Engineer status, creation of a virtual "EngD Academy", and increased resources for the scheme.
Work on establishing an Association of Engineering Doctorates began in 2010.
== Relationship between DEng/EngD and PhD ==
In some countries, the Doctor of Engineering and the PhD in Engineering are equivalent degrees. Both doctorates are research doctorates representing the highest academic qualification in engineering. As such, both EngD and PhD programs require students to develop original research leading to a dissertation defense. Furthermore, both doctorates enable holders to become faculty members at academic institutions. The EngD and PhD in Engineering are terminal degrees, allowing the recipient to obtain a tenure-track position.
In other cases, the distinction is one of orientation and intended outcomes. The Doctor of Engineering degree is designed for practitioners who wish to apply the knowledge they gain in a business or technical environment. Unlike a Doctor of Philosophy (PhD) program, wherein research leads to foundational work published in scholarly journals, the EngD demands that research be applied to solving a real-world problem using the latest engineering concepts and tools. The program culminates in the production of a thesis, dissertation, or praxis for use by practicing engineers to address a common concern or challenge. Research toward the EngD is "applied" rather than basic.
The PhD is highly focused on developing theoretical knowledge, while the EngD emphasizes applied research. Upon completion, graduates of PhD programs generally move into full-time faculty positions in academia, while EngD graduates return to industry as applied researchers or executives. If working full-time in industry, graduates of both EngD and PhD programs often become adjunct professors in undergraduate and graduate degree programs.
== List of Universities or Research Centres ==
=== Malaysia ===
The following universities in Malaysia offer Doctor of Engineering degrees:
University of Malaya
Universiti Sains Malaysia
University of Putra Malaysia
National University of Malaysia
University of Technology Malaysia
Universiti Tunku Abdul Rahman
=== United States ===
The following universities, all of which also have ABET-accredited undergraduate programs, offer Doctor of Engineering degrees:
Colorado State University
Columbia University
George Washington University
Johns Hopkins University
Lamar University
Morgan State University
Old Dominion University
Pennsylvania State University
Purdue University
Rensselaer Polytechnic Institute
Southern Methodist University
Texas A&M
University of California, Berkeley
University of Dayton
University of Michigan–Ann Arbor
University of Michigan–Dearborn
This listing is incomplete. ABET accreditation does not apply to doctoral programs, so a number of schools with regionally accredited doctoral programs are not on this list.
=== United Kingdom ===
In 2009, Engineering Doctorate schemes were offered by 45 UK universities, either singly or in partnership with other universities as industrial doctorate centres. Students on the scheme are encouraged to describe themselves as 'research engineers' rather than 'research students', and as of 2009 the minimum funding level was £1,500 higher than the minimum funding level for PhD students. Advocates of the scheme note that EngD students share some courses with MBA students.
The following EPSRC-funded centres have offered EngDs:
Advanced Forming and Manufacture (University of Strathclyde)
Biopharmaceutical Process Development (Newcastle University)
Bioprocess Engineering Leadership (University College London)
Centre for Doctoral Training in Non-Destructive Evaluation (Imperial College London, Bristol, Nottingham, Strathclyde, Warwick)
Centre for Doctoral Training in Sustainable Materials and Manufacturing (University of Warwick, University of Exeter, Cranfield University)
Centre for Digital Entertainment (University of Bath, Bournemouth University)
COATED: Centre Of Advanced Training for Engineering Doctorates (Swansea University)
Doctoral Training Partnership (DTP) in Structural Metallic Systems for Gas Turbine Applications (Universities of Cambridge, Swansea and Birmingham)
Efficient Fossil Energy Technologies (The Universities of Nottingham, University of Birmingham, and Loughborough University)
Engineering Doctoral Centre in High Value, Low Environmental Impact Manufacturing (University of Warwick)
Formulation Engineering (University of Birmingham)
Industrial Doctorate Centre in Composites Manufacture (University of Bristol, University of Nottingham, University of Manchester, Cranfield University)
Industrial Doctoral Centre for Offshore Renewable Energy (IDCORE) (Universities of Edinburgh, Strathclyde and Exeter, with the Scottish Association for Marine Sciences and HR Wallingford)
Innovative and Collaborative Construction Engineering (Loughborough University)
Large-scale Complex IT Systems (Universities of Leeds, Oxford, St Andrews and York)
Manufacturing Technology Engineering Doctorate Centre (MTEDC - The Universities of Nottingham, University of Birmingham, and Loughborough University)
Machining Science (University of Sheffield)
MATTER: Manufacturing Advances Through Training Engineering Researchers (Swansea University)
Micro & Nano-Materials and Technologies (University of Surrey)
Molecular Modelling and Materials Science (University College London)
Nuclear Engineering (Imperial College London, University of Manchester)
Optics and Photonics Technologies (Heriot-Watt (lead), Glasgow, St Andrews, Strathclyde and the Scottish University Physics Alliance)
STREAM - IDC for the Water Sector (Cranfield University, Imperial College London, University of Exeter, University of Sheffield, Newcastle University)
Sustainability for Engineering and Energy Systems (University of Surrey)
Systems (University of Bristol and University of Bath)
Systems Approaches to Biomedical Science (University of Oxford)
Technologies for Sustainable Built Environments (University of Reading)
Transport and the Environment (University of Southampton)
Urban Sustainability and Resilience (University College London)
Virtual Environments, Imaging and Visualisation (University College London)
Renewable Energy Marine Structures (REMS) (Cranfield University, University of Oxford and University of Strathclyde)
== See also ==
British degree abbreviations
Doctor of Business Administration
Doctor of Education
Doctor of Philosophy
Doctor of Science
Doctor of Technology, an academic PhD-level degree
Doktoringenieur, the equivalent engineering doctorate in Germany
Engineer's degree
Engineering education
Master of Engineering
== References ==
== External links ==
IDCORE's website
EPSRC's website
Promotional pamphlet by EPSRC (pdf)
Article about the EngD Archived 2018-08-07 at the Wayback Machine
Association of EngDs UK
Automation describes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines. Automation has been achieved by various means including mechanical, hydraulic, pneumatic, electrical, electronic devices, and computers, usually in combination. Complicated systems, such as modern factories, airplanes, and ships typically use combinations of all of these techniques. The benefit of automation includes labor savings, reducing waste, savings in electricity costs, savings in material costs, and improvements to quality, accuracy, and precision.
Automation includes the use of various equipment and control systems such as machinery, processes in factories, boilers, and heat-treating ovens, switching on telephone networks, steering, stabilization of ships, aircraft and other applications and vehicles with reduced human intervention. Examples range from a household thermostat controlling a boiler to a large industrial control system with tens of thousands of input measurements and output control signals. Automation has also found a home in the banking industry. In terms of control complexity, it can range from simple on-off control to multi-variable high-level algorithms.
In the simplest type of an automatic control loop, a controller compares a measured value of a process with a desired set value and processes the resulting error signal to change some input to the process, in such a way that the process stays at its set point despite disturbances. This closed-loop control is an application of negative feedback to a system. The mathematical basis of control theory was begun in the 18th century and advanced rapidly in the 20th. The term automation, inspired by the earlier word automatic (coming from automaton), was not widely used before 1947, when Ford established an automation department. It was during this time that industry was rapidly adopting feedback controllers, which were introduced in the 1930s.
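This loop can be made concrete with a short simulation. The Python sketch below is a minimal illustration (all names and constants are invented for the example): a proportional controller repeatedly measures a temperature, computes the error against a setpoint, and adjusts heater power accordingly, so the process holds near its set value despite constant heat loss.

```python
# Minimal closed-loop (negative feedback) control: a proportional
# controller drives a simple thermal process toward its setpoint.
SETPOINT = 70.0   # desired temperature
GAIN = 5.0        # controller gain: how strongly the error is corrected
AMBIENT = 20.0    # the disturbance: heat constantly leaks to the environment
LOSS = 0.1        # heat-loss coefficient per time step

temperature = AMBIENT
for _ in range(60):
    error = SETPOINT - temperature           # compare measurement to setpoint
    heater_power = max(0.0, GAIN * error)    # correction derived from the error
    # process dynamics: heating minus losses to the environment
    temperature += 0.1 * heater_power - LOSS * (temperature - AMBIENT)

print(f"settles near {temperature:.1f}")
```

Note that purely proportional control settles slightly below the setpoint; this steady-state offset is what the integral term of the PID controller, described later in this article, removes.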
The World Bank's World Development Report of 2019 shows evidence that the new industries and jobs in the technology sector outweigh the economic effects of workers being displaced by automation. Job losses and downward mobility blamed on automation have been cited as one of many factors in the resurgence of nationalist, protectionist and populist politics in the US, UK and France, among other countries since the 2010s.
== History ==
=== Early history ===
It was a preoccupation of the Greeks and Arabs (in the period between about 300 BC and about 1200 AD) to keep accurate track of time. In Ptolemaic Egypt, about 270 BC, Ctesibius described a float regulator for a water clock, a device not unlike the ball and cock in a modern flush toilet. This was the earliest feedback-controlled mechanism. The appearance of the mechanical clock in the 14th century made the water clock and its feedback control system obsolete.
The Persian Banū Mūsā brothers, in their Book of Ingenious Devices (850 AD), described a number of automatic controls. Two-step level controls for fluids, a form of discontinuous variable structure controls, were developed by the Banu Musa brothers. They also described a feedback controller. The design of feedback control systems up through the Industrial Revolution was by trial-and-error, together with a great deal of engineering intuition. It was not until the mid-19th century that the stability of feedback control systems was analyzed using mathematics, the formal language of automatic control theory.
The centrifugal governor was invented by Christiaan Huygens in the seventeenth century, and used to adjust the gap between millstones.
=== Industrial Revolution in Western Europe ===
The introduction of prime movers, or self-driven machines, advanced grain mills, furnaces, boilers, and the steam engine, and created a new requirement for automatic control systems including temperature regulators (invented in 1624; see Cornelius Drebbel), pressure regulators (1681), float regulators (1700), and speed control devices. Another control mechanism was used to tent the sails of windmills; it was patented by Edmund Lee in 1745. Also in 1745, Jacques de Vaucanson invented the first automated loom. Around 1800, Joseph Marie Jacquard created a punch-card system to program looms.
In 1771 Richard Arkwright invented the first fully automated spinning mill driven by water power, known at the time as the water frame. An automatic flour mill was developed by Oliver Evans in 1785, making it the first completely automated industrial process.
A centrifugal governor was used by Mr. Bunce of England in 1784 as part of a model steam crane. The centrifugal governor was adopted by James Watt for use on a steam engine in 1788 after Watt's partner Boulton saw one at a flour mill Boulton & Watt were building. The governor could not actually hold a set speed; the engine would assume a new constant speed in response to load changes. The governor was able to handle smaller variations such as those caused by fluctuating heat load to the boiler. Also, there was a tendency for oscillation whenever there was a speed change. As a consequence, engines equipped with this governor were not suitable for operations requiring constant speed, such as cotton spinning.
Several improvements to the governor, plus improvements to valve cut-off timing on the steam engine, made the engine suitable for most industrial uses before the end of the 19th century. Advances in the steam engine stayed well ahead of science, both thermodynamics and control theory. The governor received relatively little scientific attention until James Clerk Maxwell published a paper that established the beginning of a theoretical basis for understanding control theory.
=== 20th century ===
Relay logic was introduced with factory electrification, which underwent rapid adoption from 1900 through the 1920s. Central electric power stations were also undergoing rapid growth, and the operation of new high-pressure boilers, steam turbines, and electrical substations created a large demand for instruments and controls. Central control rooms became common in the 1920s, but as late as the early 1930s, most process controls were on-off. Operators typically monitored charts drawn by recorders that plotted data from instruments. To make corrections, operators manually opened or closed valves or turned switches on or off. Control rooms also used color-coded lights to send signals to workers in the plant to manually make certain changes.
The development of the electronic amplifier during the 1920s, which was important for long-distance telephony, required a higher signal-to-noise ratio, which was solved by negative feedback noise cancellation. This and other telephony applications contributed to control theory. In the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic controls, which found military applications during the Second World War in fire control systems and aircraft navigation systems.
Controllers, which were able to make calculated changes in response to deviations from a set point rather than on-off control, began being introduced in the 1930s. Controllers allowed manufacturing to continue showing productivity gains to offset the declining influence of factory electrification.
Factory productivity was greatly increased by electrification in the 1920s. U.S. manufacturing productivity growth fell from 5.2%/yr 1919–29 to 2.76%/yr 1929–41. Alexander Field notes that spending on non-medical instruments increased significantly from 1929 to 1933 and remained strong thereafter.
The First and Second World Wars saw major advancements in the field of mass communication and signal processing. Other key advances in automatic controls include differential equations, stability theory and system theory (1938), frequency domain analysis (1940), ship control (1950), and stochastic analysis (1941).
Starting in 1958, various systems based on solid-state digital logic modules for hard-wired programmed logic controllers (the predecessors of programmable logic controllers [PLC]) emerged to replace electro-mechanical relay logic in industrial control systems for process control and automation, including early Telefunken/AEG Logistat, Siemens Simatic, Philips/Mullard/Valvo Norbit, BBC Sigmatronic, ACEC Logacec, Akkord Estacord, Krone Mibakron, Bistat, Datapac, Norlog, SSR, or Procontic systems.
In 1959 Texaco's Port Arthur Refinery became the first chemical plant to use digital control.
Conversion of factories to digital control began to spread rapidly in the 1970s as the price of computer hardware fell.
=== Significant applications ===
The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor.
The logic performed by telephone switching relays was the inspiration for the digital computer.
The first commercially successful glass bottle-blowing machine was an automatic model introduced in 1905. The machine, operated by a two-man crew working 12-hour shifts, could produce 17,280 bottles in 24 hours, compared to 2,880 bottles made by a crew of six men and boys working in a shop for a day. The cost of making bottles by machine was 10 to 12 cents per gross compared to $1.80 per gross by the manual glassblowers and helpers.
Sectional electric drives were developed using control theory. Sectional electric drives are used on different sections of a machine where a precise differential must be maintained between the sections. In steel rolling, the metal elongates as it passes through pairs of rollers, which must run at successively faster speeds. In paper making, the sheet shrinks as it passes around steam-heated drying cylinders arranged in groups, which must run at successively slower speeds. The first application of a sectional electric drive was on a paper machine in 1919. One of the most important developments in the steel industry during the 20th century was continuous wide strip rolling, developed by Armco in 1928.
Before automation, many chemicals were made in batches. In 1930, with the widespread use of instruments and the emerging use of controllers, the founder of Dow Chemical Co. was advocating continuous production.
Self-acting machine tools that displaced hand dexterity so they could be operated by boys and unskilled laborers were developed by James Nasmyth in the 1840s. Machine tools were automated with Numerical control (NC) using punched paper tape in the 1950s. This soon evolved into computerized numerical control (CNC).
Today extensive automation is practiced in practically every type of manufacturing and assembly process. Some of the larger processes include electrical power generation, oil refining, chemicals, steel mills, plastics, cement plants, fertilizer plants, pulp and paper mills, automobile and truck assembly, aircraft production, glass manufacturing, natural gas separation plants, food and beverage processing, canning and bottling and manufacture of various kinds of parts. Robots are especially useful in hazardous applications like automobile spray painting. Robots are also used to assemble electronic circuit boards. Automotive welding is done with robots and automatic welders are used in applications like pipelines.
=== Space/computer age ===
With the advent of the space age in 1957, controls design, particularly in the United States, turned away from the frequency-domain techniques of classical control theory and back to the differential equation techniques of the late 19th century, which were couched in the time domain. During the 1940s and 1950s, German mathematician Irmgard Flügge-Lotz developed the theory of discontinuous automatic control, which became widely used in hysteresis control systems such as navigation systems, fire-control systems, and electronics. Through Flügge-Lotz and others, the modern era saw time-domain design for nonlinear systems (1961), navigation (1960), optimal control and estimation theory (1962), nonlinear control theory (1969), digital control and filtering theory (1974), and the personal computer (1983).
== Advantages, disadvantages, and limitations ==
Perhaps the most cited advantage of automation in industry is that it is associated with faster production and cheaper labor costs. Another benefit is that it can replace hard, physical, or monotonous work. Additionally, tasks that take place in hazardous environments or that are otherwise beyond human capabilities can be done by machines, as machines can operate even under extreme temperatures or in atmospheres that are radioactive or toxic. They can also be maintained with simple quality checks. However, at present, not all tasks can be automated, and some tasks are more expensive to automate than others. Initial costs of installing the machinery in factory settings are high, and failure to maintain a system could result in the loss of the product itself.
Moreover, some studies suggest that industrial automation could impose ill effects beyond operational concerns, including worker displacement due to systemic loss of employment and compounded environmental damage; however, these findings are contested, and the effects could potentially be mitigated.
The main advantages of automation are:
Increased throughput or productivity
Improved quality
Increased predictability
Improved robustness (consistency) of processes or product
Increased consistency of output
Reduced direct human labor costs and expenses
Reduced cycle time
Increased accuracy
Relieving humans of monotonously repetitive work
New work in the development, deployment, maintenance, and operation of automated processes, often structured as "jobs"
Increased human freedom to do other things
Automation primarily describes machines replacing human action, but it is also loosely associated with mechanization, machines replacing human labor. Coupled with mechanization, extending human capabilities in terms of size, strength, speed, endurance, visual range & acuity, hearing frequency & precision, electromagnetic sensing & effecting, etc., advantages include:
Relieving humans of dangerous work stresses and occupational injuries (e.g., fewer strained backs from lifting heavy objects)
Removing humans from dangerous environments (e.g. fire, space, volcanoes, nuclear facilities, underwater, etc.)
The main disadvantages of automation are:
High initial cost
Faster production without human intervention can mean faster unchecked production of defects where automated processes are defective.
Scaled-up capacities can mean scaled-up problems when systems fail — releasing dangerous toxins, forces, energies, etc., at scaled-up rates.
Human adaptiveness is often poorly understood by automation initiators. It is often difficult to anticipate every contingency and develop fully preplanned automated responses for every situation. The discoveries inherent in automating processes can require unanticipated iterations to resolve, causing unanticipated costs and delays.
People anticipating employment income may be seriously disrupted by others deploying automation where no similar income is readily available.
=== Paradox of automation ===
The paradox of automation says that the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical. Lisanne Bainbridge, a cognitive psychologist, identified these issues notably in her widely cited paper "Ironies of Automation." If an automated system has an error, it will multiply that error until it is fixed or shut down. This is where human operators come in. A fatal example of this was Air France Flight 447, where a failure of automation put the pilots into a manual situation they were not prepared for.
=== Limitations ===
Current technology is unable to automate all the desired tasks.
Many operations using automation have large amounts of invested capital and produce high volumes of products, making malfunctions extremely costly and potentially hazardous. Therefore, some personnel are needed to ensure that the entire system functions properly and that safety and product quality are maintained.
As a process becomes increasingly automated, there is less and less labor to be saved or quality improvement to be gained. This is an example of both diminishing returns and the logistic function.
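For illustration, the logistic pattern can be written in its standard textbook form (a generic illustration, not a formula from the automation literature): if B(t) is the cumulative benefit obtained from automating a process by time t, then

B(t) = \frac{L}{1 + e^{-k(t - t_0)}}

where L is the total benefit available, k the rate at which automation proceeds, and t_0 the inflection point. Early automation yields rapidly growing gains, while later efforts approach the ceiling L with diminishing returns.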
As more and more processes become automated, there are fewer remaining non-automated processes. This is an example of the exhaustion of opportunities. New technological paradigms may, however, set new limits that surpass the previous limits.
==== Current limitations ====
Many roles for humans in industrial processes presently lie beyond the scope of automation. Human-level pattern recognition, language comprehension, and language production ability are well beyond the capabilities of modern mechanical and computer systems (but see Watson computer). Tasks requiring subjective assessment or synthesis of complex sensory data, such as scents and sounds, as well as high-level tasks such as strategic planning, currently require human expertise. In many cases, the use of humans is more cost-effective than mechanical approaches even where the automation of industrial tasks is possible. Therefore, algorithmic management as the digital rationalization of human labor instead of its substitution has emerged as an alternative technological strategy. Overcoming these obstacles is a theorized path to post-scarcity economics.
=== Societal impact and unemployment ===
Increased automation often causes workers to feel anxious about losing their jobs as technology renders their skills or experience unnecessary. Early in the Industrial Revolution, when inventions like the steam engine were making some job categories expendable, workers forcefully resisted these changes. Luddites, for instance, were English textile workers who protested the introduction of weaving machines by destroying them. More recently, some residents of Chandler, Arizona, have slashed the tires of and thrown rocks at self-driving cars, in protest over the cars' perceived threat to human safety and job prospects.
The relative anxiety about automation reflected in opinion polls seems to correlate closely with the strength of organized labor in that region or nation. For example, while a study by the Pew Research Center indicated that 72% of Americans are worried about increasing automation in the workplace, 80% of Swedes see automation and artificial intelligence (AI) as a good thing, due to the country's still-powerful unions and a more robust national safety net.
According to one estimate, 47% of all current jobs in the US have the potential to be fully automated by 2033. Furthermore, wages and educational attainment appear to be strongly negatively correlated with an occupation's risk of being automated. Erik Brynjolfsson and Andrew McAfee argue that "there's never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value. However, there's never been a worse time to be a worker with only 'ordinary' skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate." Others, however, argue that highly skilled professional jobs, such as those of lawyers, doctors, engineers, and journalists, are also at risk of automation.
According to a 2020 study in the Journal of Political Economy, automation has robust negative effects on employment and wages: "One more robot per thousand workers reduces the employment-to-population ratio by 0.2 percentage points and wages by 0.42%." A 2025 study in the American Economic Journal found that the introduction of industrial robots between 1993 and 2014 reduced the employment of men and women by 3.7 and 1.6 percentage points, respectively.
Research by Carl Benedikt Frey and Michael Osborne of the Oxford Martin School argued that employees engaged in "tasks following well-defined procedures that can easily be performed by sophisticated algorithms" are at risk of displacement, and that 47% of jobs in the US were at risk. The study, released as a working paper in 2013 and published in 2017, predicted that automation would put low-paid physical occupations most at risk, based on a survey of colleagues' opinions. However, according to a study published in McKinsey Quarterly in 2015, the impact of computerization in most cases is not the replacement of employees but the automation of portions of the tasks they perform. The methodology of the McKinsey study has been heavily criticized for being opaque and relying on subjective assessments. The methodology of Frey and Osborne has likewise been criticized as lacking evidence, historical awareness, or credible methodology. Additionally, the Organisation for Economic Co-operation and Development (OECD) found that across the 21 OECD countries, 9% of jobs are automatable.
Based on a formula by Gilles Saint-Paul, an economist at Toulouse 1 University, the demand for unskilled human capital declines at a slower rate than the demand for skilled human capital increases. In the long run and for society as a whole, automation has led to cheaper products, lower average work hours, and new industries forming (i.e., robotics industries, computer industries, design industries). These new industries provide many high-salary, skill-based jobs to the economy. By 2030, between 3 and 14 percent of the global workforce is projected to be forced to switch job categories due to automation eliminating jobs in entire sectors. While the number of jobs lost to automation is often offset by jobs gained from technological advances, the jobs lost are not the same as those created, which has contributed to increasing unemployment in the lower-middle class. This occurs largely in the US and developed countries where technological advances contribute to higher demand for highly skilled labor while demand for middle-wage labor continues to fall. Economists call this trend "income polarization": unskilled labor wages are driven down while skilled labor wages are driven up, and the trend is predicted to continue in developed economies.
== Lights-out manufacturing ==
Lights-out manufacturing is a production system with no human workers, intended to eliminate labor costs. It grew in popularity in the U.S. after General Motors in 1982 implemented a "hands-off" manufacturing strategy intended to "replace risk-averse bureaucracy with automation and robots". However, the factory never reached full "lights out" status.
The expansion of lights out manufacturing requires:
Reliability of equipment
Long-term mechanic capabilities
Planned preventive maintenance
Commitment from the staff
== Health and environment ==
The environmental costs of automation differ depending on the technology, product, or engine automated. Some automated engines consume more energy and resources than those they replace, while others consume less. Hazardous operations, such as oil refining, the manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation.
The automation of vehicles could prove to have a substantial impact on the environment, although the nature of this impact could be beneficial or harmful depending on several factors. Because automated vehicles are much less likely to get into accidents compared to human-driven vehicles, some precautions built into current models (such as anti-lock brakes or laminated glass) would not be required for self-driving versions. Removal of these safety features reduces the weight of the vehicle, and coupled with more precise acceleration and braking, as well as fuel-efficient route mapping, can increase fuel economy and reduce emissions. Despite this, some researchers theorize that an increase in the production of self-driving cars could lead to a boom in vehicle ownership and usage, which could potentially negate any environmental benefits of self-driving cars if they are used more frequently.
Automation of homes and home appliances is also thought to impact the environment. A study of energy consumption of automated homes in Finland showed that smart homes could reduce energy consumption by monitoring levels of consumption in different areas of the home and adjusting consumption to reduce energy leaks (e.g. automatically reducing consumption during the nighttime when activity is low). This study, along with others, indicated that the smart home's ability to monitor and adjust consumption levels would reduce unnecessary energy usage. However, some research suggests that smart homes might not be as efficient as non-automated homes. A more recent study has indicated that, while monitoring and adjusting consumption levels do decrease unnecessary energy use, this process requires monitoring systems that also consume an amount of energy. The energy required to run these systems sometimes negates their benefits, resulting in little to no ecological benefit.
== Convertibility and turnaround time ==
Another major shift in automation is the increased demand for flexibility and convertibility in manufacturing processes. Manufacturers are increasingly demanding the ability to easily switch from manufacturing Product A to manufacturing Product B without having to completely rebuild the production lines. Flexibility and distributed processes have led to the introduction of Automated Guided Vehicles with Natural Features Navigation.
Digital electronics helped too. Former analog-based instrumentation was replaced by digital equivalents which can be more accurate and flexible, and offer greater scope for more sophisticated configuration, parametrization, and operation. This was accompanied by the fieldbus revolution which provided a networked (i.e. a single cable) means of communicating between control systems and field-level instrumentation, eliminating hard-wiring.
Discrete manufacturing plants adopted these technologies quickly. The more conservative process industries, with their longer plant life cycles, have been slower to adopt, and analog-based measurement and control still dominate there. The growing use of Industrial Ethernet on the factory floor is pushing these trends still further, enabling manufacturing plants to be integrated more tightly within the enterprise, via the internet if necessary. Global competition has also increased demand for Reconfigurable Manufacturing Systems.
== Automation tools ==
Engineers can now have numerical control over automated devices. The result has been a rapidly expanding range of applications and human activities. Computer-aided technologies (or CAx) now serve as the basis for mathematical and organizational tools used to create complex systems. Notable examples of CAx include computer-aided design (CAD software) and computer-aided manufacturing (CAM software). The improved design, analysis, and manufacture of products enabled by CAx has been beneficial for industry.
Information technology, together with industrial machinery and processes, can assist in the design, implementation, and monitoring of control systems. One example of an industrial control system is a programmable logic controller (PLC). PLCs are specialized hardened computers which are frequently used to synchronize the flow of inputs from (physical) sensors and events with the flow of outputs to actuators and events.
Human-machine interfaces (HMI) or computer human interfaces (CHI), formerly known as man-machine interfaces, are usually employed to communicate with PLCs and other computers. Service personnel who monitor and control through HMIs can be called by different names. In the industrial process and manufacturing environments, they are called operators or something similar. In boiler houses and central utility departments, they are called stationary engineers.
Different types of automation tools exist:
ANN – Artificial neural network
DCS – Distributed control system
HMI – Human machine interface
RPA – Robotic process automation
SCADA – Supervisory control and data acquisition
PLC – Programmable logic controller
Instrumentation
Motion control
Robotics
Host simulation software (HSS) is a commonly used tool for testing equipment software. HSS is used to test equipment performance against factory automation standards (timeouts, response time, processing time).
== Cognitive automation ==
Cognitive automation, as a subset of AI, is an emerging genus of automation enabled by cognitive computing. Its primary concern is the automation of clerical tasks and workflows that consist of structuring unstructured data. Cognitive automation relies on multiple disciplines: natural language processing, real-time computing, machine learning algorithms, big data analytics, and evidence-based learning.
According to Deloitte, cognitive automation enables the replication of human tasks and judgment "at rapid speeds and considerable scale." Such tasks include:
Document redaction
Data extraction and document synthesis / reporting
Contract management
Natural language search
Customer, employee, and stakeholder onboarding
Manual activities and verifications
Follow-up and email communications
== Recent and emerging applications ==
=== CAD AI ===
Artificially intelligent computer-aided design (CAD) can use text-to-3D, image-to-3D, and video-to-3D techniques to automate 3D modeling. AI CAD libraries could also be developed using linked open data of schematics and diagrams. AI CAD assistants are used as tools to help streamline workflow.
=== Automated power production ===
Technologies like solar panels, wind turbines, and other renewable energy sources—together with smart grids, micro-grids, battery storage—can automate power production.
=== Agricultural production ===
Many agricultural operations are automated with machinery and equipment that improve diagnosis, decision-making, and/or the performance of tasks. Agricultural automation can relieve the drudgery of agricultural work, improve the timeliness and precision of agricultural operations, raise productivity and resource-use efficiency, build resilience, and improve food quality and safety. Increased productivity can free up labour, allowing agricultural households to spend more time elsewhere.
The technological evolution in agriculture has resulted in progressive shifts to digital equipment and robotics. Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking. With digital automation technologies, it also becomes possible to automate the diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies.
Motorized mechanization has generally increased in recent years. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades.
Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely mostly in Northern Europe, and likely almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers is likewise scarce.
=== Retail ===
Many supermarkets and even smaller stores are rapidly introducing self-checkout systems reducing the need for employing checkout workers. In the U.S., the retail industry employs 15.9 million people as of 2017 (around 1 in 9 Americans in the workforce). Globally, an estimated 192 million workers could be affected by automation according to research by Eurasia Group.
Online shopping could be considered a form of automated retail, as the payment and checkout are handled through an automated online transaction processing system, with the share of online retail jumping from 5.1% in 2011 to 8.3% in 2016. Already, two-thirds of books, music, and films are purchased online. In addition, automation and online shopping could reduce demand for shopping malls and retail property, which in the United States is currently estimated to account for 31% of all commercial property, or around 7 billion square feet (650 million square metres). Amazon has gained much of the growth in recent years for online shopping, accounting for half of the growth in online retail in 2016. Other forms of automation can also be an integral part of online shopping, for example the deployment of automated warehouse robotics such as that applied by Amazon using Kiva Systems.
=== Food and drink ===
The food retail industry has started to apply automation to the ordering process; McDonald's has introduced touch screen ordering and payment systems in many of its restaurants, reducing the need for as many cashier employees. The University of Texas at Austin has introduced fully automated cafe retail locations. Some cafes and restaurants have utilized mobile and tablet "apps" to make the ordering process more efficient by customers ordering and paying on their device. Some restaurants have automated food delivery to tables of customers using a conveyor belt system. The use of robots is sometimes employed to replace waiting staff.
=== Construction ===
Automation in construction is the combination of methods, processes, and systems that allow for greater machine autonomy in construction activities. Construction automation may have multiple goals, including but not limited to, reducing jobsite injuries, decreasing activity completion times, and assisting with quality control and quality assurance.
=== Mining ===
Automated mining involves the removal of human labor from the mining process. The mining industry is currently in transition towards automation. It can still require a large amount of human capital, particularly in the developing world, where labor costs are low and there is therefore less incentive to increase efficiency through automation.
=== Video surveillance ===
The Defense Advanced Research Projects Agency (DARPA) started the research and development of automated visual surveillance and monitoring (VSAM) program, between 1997 and 1999, and airborne video surveillance (AVS) programs, from 1998 to 2002. Currently, there is a major effort underway in the vision community to develop a fully-automated tracking surveillance system. Automated video surveillance monitors people and vehicles in real-time within a busy environment. Existing automated surveillance systems are based on the environment they are primarily designed to observe, i.e., indoor, outdoor or airborne, the number of sensors that the automated system can handle and the mobility of sensors, i.e., stationary camera vs. mobile camera. The purpose of a surveillance system is to record properties and trajectories of objects in a given area, generate warnings or notify the designated authorities in case of occurrence of particular events.
=== Highway systems ===
As demands for safety and mobility have grown and technological possibilities have multiplied, interest in automation has grown. Seeking to accelerate the development and introduction of fully automated vehicles and highways, the U.S. Congress authorized more than $650 million over six years for intelligent transport systems (ITS) and demonstration projects in the 1991 Intermodal Surface Transportation Efficiency Act (ISTEA). Congress legislated in ISTEA that "[t]he Secretary of Transportation shall develop an automated highway and vehicle prototype from which future fully automated intelligent vehicle-highway systems can be developed. Such development shall include research in human factors to ensure the success of the man-machine relationship. The goal of this program is to have the first fully automated highway roadway or an automated test track in operation by 1997. This system shall accommodate the installation of equipment in new and existing motor vehicles." Full automation is commonly defined as requiring no control or very limited control by the driver; such automation would be accomplished through a combination of sensor, computer, and communications systems in vehicles and along the roadway. Fully automated driving would, in theory, allow closer vehicle spacing and higher speeds, which could enhance traffic capacity in places where additional road building is physically impossible, politically unacceptable, or prohibitively expensive. Automated controls also might enhance road safety by reducing the opportunity for driver error, which causes a large share of motor vehicle crashes. Other potential benefits include improved air quality (as a result of more-efficient traffic flows), increased fuel economy, and spin-off technologies generated during research and development related to automated highway systems.
=== Waste management ===
Automated waste collection trucks reduce the number of workers needed and ease the labor required to provide the service.
=== Business process ===
Business process automation (BPA) is the technology-enabled automation of complex business processes. It can help to streamline a business for simplicity, achieve digital transformation, increase service quality, improve service delivery or contain costs. BPA consists of integrating applications, restructuring labor resources and using software applications throughout the organization. Robotic process automation (RPA; or RPAAI for self-guided RPA 2.0) is an emerging field within BPA and uses AI. BPAs can be implemented in a number of business areas including marketing, sales and workflow.
=== Home ===
Home automation (also called domotics) designates an emerging practice of increased automation of household appliances and features in residential dwellings, particularly through electronic means that allow for things that would have been impracticable, overly expensive, or simply impossible in past decades. The rising use of home automation reflects people's increasing dependency on such solutions, and the added comfort and convenience they bring is considerable.
=== Laboratory ===
Automation is essential for many scientific and clinical applications, and it has therefore been employed extensively in laboratories. Fully automated laboratories have been in operation since as early as 1980. However, automation has not become widespread in laboratories due to its high cost. This may change with the ability to integrate low-cost devices with standard laboratory equipment. Autosamplers are common devices used in laboratory automation.
=== Logistics automation ===
Logistics automation is the application of computer software or automated machinery to improve the efficiency of logistics operations. Typically this refers to operations within a warehouse or distribution center, with broader tasks undertaken by supply chain engineering systems and enterprise resource planning systems.
=== Industrial automation ===
Industrial automation deals primarily with the automation of manufacturing, quality control, and material handling processes. General-purpose controllers for industrial processes include programmable logic controllers, stand-alone I/O modules, and computers. Industrial automation replaces human action and manual command-response activities with mechanized equipment and logical programming commands. One trend is the increased use of machine vision to provide automatic inspection and robot guidance functions; another is a continuing increase in the use of robots. Industrial automation has become a basic requirement across modern industry.
==== Industrial Automation and Industry 4.0 ====
The rise of industrial automation is directly tied to the "Fourth Industrial Revolution", better known now as Industry 4.0. Originating in Germany, Industry 4.0 encompasses numerous devices, concepts, and machines, as well as the advancement of the industrial internet of things (IIoT). An Internet of Things has been described as "a seamless integration of diverse physical objects in the Internet through a virtual representation." These revolutionary advancements have drawn attention to the world of automation in an entirely new light and shown ways for it to grow to increase productivity and efficiency in machinery and manufacturing facilities. Industry 4.0 works with the IIoT and with software and hardware to connect machinery in ways that (through communication technologies) add enhancements and improve manufacturing processes. It makes smarter, safer, and more advanced manufacturing possible, opening up a manufacturing platform that is more reliable, consistent, and efficient than before. The implementation of systems such as SCADA (supervisory control and data acquisition) software is one example of industrial automation today. Industry 4.0 covers many areas of manufacturing and will continue to expand as time goes on.
==== Industrial robotics ====
Industrial robotics is a sub-branch of industrial automation that aids in various manufacturing processes, including machining, welding, painting, assembly, and material handling, to name a few. Industrial robots use various mechanical, electrical, and software systems to achieve precision, accuracy, and speed that far exceed any human performance. The birth of industrial robots came shortly after World War II, as the U.S. saw the need for a quicker way to produce industrial and consumer goods. Servos, digital logic, and solid-state electronics allowed engineers to build better and faster systems, and over time these systems were improved and revised to the point where a single robot is capable of running 24 hours a day with little or no maintenance. In 1997 there were 700,000 industrial robots in use; by 2017 the number had risen to 1.8 million. In recent years, AI has also been combined with robotics to create automatic labeling solutions, using robotic arms as automatic label applicators and AI to learn and detect the products to be labelled.
==== Programmable Logic Controllers ====
Industrial automation incorporates programmable logic controllers (PLCs) in the manufacturing process. PLCs use a processing system which allows for variation of controls of inputs and outputs using simple programming. PLCs make use of programmable memory, storing instructions and functions such as logic, sequencing, timing, and counting. Using a logic-based language, a PLC can receive a variety of inputs and return a variety of logical outputs, the input devices being sensors and the output devices being motors, valves, etc. PLCs are similar to computers; however, while computers are optimized for calculations, PLCs are optimized for control tasks and use in industrial environments. They are built so that only basic logic-based programming knowledge is needed, and so that they tolerate vibrations, high temperatures, humidity, and noise. The greatest advantage PLCs offer is their flexibility: with the same basic controllers, a PLC can operate a range of different control systems. PLCs make it unnecessary to rewire a system to change the control system. This flexibility leads to a cost-effective system for complex and varied control systems.
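A PLC's behaviour can be pictured as a repeating scan cycle: read all inputs, evaluate the programmed logic, then write all outputs. The following Python sketch is a deliberately simplified illustration of that cycle; the tank, sensor, and interlock names are invented for the example and do not correspond to any real controller or vendor API.

```python
# Simplified PLC-style scan cycle: read inputs, evaluate logic, write outputs.
# A real PLC repeats this loop continuously, typically every few milliseconds.

def read_inputs():
    # In a real PLC these values would come from wired sensor terminals.
    return {"start_button": True, "tank_high": False, "tank_low": True}

def write_outputs(outputs):
    # In a real PLC these would energize relays, motor starters, or valves.
    print(outputs)

latched_run = False  # internal state, like a ladder-logic seal-in rung

for _ in range(3):  # a real controller loops forever
    inputs = read_inputs()

    # Ladder-style logic: the start button latches the pump on; a
    # high-level sensor acts as an interlock that forces it off.
    latched_run = (inputs["start_button"] or latched_run) and not inputs["tank_high"]

    write_outputs({"pump_motor": latched_run,
                   "low_level_lamp": inputs["tank_low"]})
```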
PLCs can range from small "building brick" devices with tens of I/O in a housing integral with the processor, to large rack-mounted modular devices with a count of thousands of I/O, and which are often networked to other PLC and SCADA systems.
They can be designed for multiple arrangements of digital and analog inputs and outputs (I/O), extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.
It was from the automotive industry in the United States that the PLC was born. Before the PLC, control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. Since these could number in the hundreds or even thousands, the process for updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics.
When digital computers became available, being general-purpose programmable devices, they were soon applied to control sequential and combinatorial logic in industrial processes. However, these early computers required specialist programmers and stringent operating environmental control for temperature, cleanliness, and power quality. To meet these challenges, the PLC was developed with several key attributes. It would tolerate the shop-floor environment, it would support discrete (bit-form) input and output in an easily extensible manner, it would not require years of training to use, and it would permit its operation to be monitored. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, and performance could be traded off for reliability.
==== Agent-assisted automation ====
Agent-assisted automation refers to automation used by call center agents to handle customer inquiries. The key benefits of agent-assisted automation are compliance and error-proofing. Agents are sometimes not fully trained, or they forget or ignore key steps in the process. The use of automation ensures that what is supposed to happen on the call actually does, every time. There are two basic types: desktop automation and automated voice solutions.
== Control ==
=== Open-loop and closed-loop ===
=== Discrete control (on/off) ===
One of the simplest types of control is on-off control. An example is a thermostat used on household appliances, which either opens or closes an electrical contact. (Thermostats were originally developed as true feedback-control mechanisms rather than the on-off devices common in household appliances.)
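As an illustration of this behavior, the following Python sketch models an on-off thermostat with a small hysteresis band; the setpoint and band width are assumed values chosen for the example.

# On-off (bang-bang) thermostat with hysteresis; all numbers are illustrative.
SETPOINT, BAND = 20.0, 0.5  # degrees C

def thermostat(temperature, heater_on):
    if temperature < SETPOINT - BAND:
        return True   # close the contact: start heating
    if temperature > SETPOINT + BAND:
        return False  # open the contact: stop heating
    return heater_on  # inside the band: keep the previous state

heater = False
for t in [19.0, 19.4, 20.2, 20.6, 20.1]:
    heater = thermostat(t, heater)
    print(t, heater)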
In sequence control, a programmed sequence of discrete operations is performed, often based on system logic that involves system states. An elevator control system is an example of sequence control.
=== PID controller ===
A proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism (controller) widely used in industrial control systems.
In a PID loop, the controller continuously calculates an error value {\displaystyle e(t)} as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D), which give their name to the controller type.
The theoretical understanding and application date from the 1920s, and they are implemented in nearly all analog control systems; originally in mechanical controllers, and then using discrete electronics and latterly in industrial process computers.
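A minimal discrete-time sketch of the PID law in Python might read as follows; the gains and time step are illustrative assumptions, and a practical controller would add refinements such as integral anti-windup.

# Discrete PID controller sketch; Kp, Ki, Kd and dt are illustrative values.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement            # e(t)
        self.integral += error * self.dt          # accumulate the integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
print(pid.update(setpoint=100.0, measurement=90.0))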
=== Sequential control and logical sequence or system state control ===
Sequential control may be either to a fixed sequence or to a logical one that will perform different actions depending on various system states. An example of an adjustable but otherwise fixed sequence is a timer on a lawn sprinkler.
States refer to the various conditions that can occur in a use or sequence scenario of the system. An example is an elevator, which uses logic based on the system state to perform certain actions in response to its state and operator input. For example, if the operator presses the floor n button, the system will respond depending on whether the elevator is stopped or moving, going up or down, or if the door is open or closed, and other conditions.
Early development of sequential control was relay logic, by which electrical relays engage electrical contacts which either start or interrupt power to a device. Relays were first used in telegraph networks before being developed for controlling other devices, such as when starting and stopping industrial-sized electric motors or opening and closing solenoid valves. Using relays for control purposes allowed event-driven control, where actions could be triggered out of sequence, in response to external events. These were more flexible in their response than the rigid single-sequence cam timers. More complicated examples involved maintaining safe sequences for devices such as swing bridge controls, where a lock bolt needed to be disengaged before the bridge could be moved, and the lock bolt could not be released until the safety gates had already been closed.
The total number of relays and cam timers can number into the hundreds or even thousands in some factories. Early programming techniques and languages were needed to make such systems manageable, one of the first being ladder logic, where diagrams of the interconnected relays resembled the rungs of a ladder. Special computers called programmable logic controllers were later designed to replace these collections of hardware with a single, more easily re-programmed unit.
In a typical hard-wired motor start and stop circuit (called a control circuit), a motor is started by pushing a "Start" or "Run" button that activates a pair of electrical relays. The "lock-in" relay locks in contacts that keep the control circuit energized when the push-button is released. (The start button is a normally open contact and the stop button is a normally closed contact.) Another relay energizes a switch that powers the device that throws the motor starter switch (three sets of contacts for three-phase industrial power) in the main power circuit. Large motors use high voltage and experience high in-rush current, making speed important in making and breaking contact; manual switching would therefore be dangerous for personnel and property. The "lock-in" contacts in the start circuit and the main power contacts for the motor are held engaged by their respective electromagnets until a "stop" or "off" button is pressed, which de-energizes the lock-in relay.
Commonly interlocks are added to a control circuit. Suppose that the motor in the example is powering machinery that has a critical need for lubrication. In this case, an interlock could be added to ensure that the oil pump is running before the motor starts. Timers, limit switches, and electric eyes are other common elements in control circuits.
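In ladder-logic terms the lock-in behavior is a seal-in rung, and an interlock is one more series condition. The Python fragment below is a rough analogy of such a rung evaluated once per scan; the contact names and states are invented for the example.

# Seal-in (lock-in) motor control with an oil-pump interlock; illustrative only.
def motor_rung(start_pb, stop_pb, oil_pump_running, motor_was_on):
    # stop_pb is normally closed: True means "not pressed".
    # The motor contact in parallel with Start seals the rung in.
    return (start_pb or motor_was_on) and stop_pb and oil_pump_running

motor = False
for start, stop_nc, pump in [(True, True, True),    # start pressed: motor runs
                             (False, True, True),   # released: seal-in holds it
                             (False, False, True)]: # stop pressed: motor drops out
    motor = motor_rung(start, stop_nc, pump, motor)
    print(motor)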
Solenoid valves are widely used on compressed air or hydraulic fluid for powering actuators on mechanical components. While motors are used to supply continuous rotary motion, actuators are typically a better choice for intermittently creating a limited range of movement for a mechanical component, such as moving various mechanical arms, opening or closing valves, raising heavy press-rolls, applying pressure to presses.
=== Computer control ===
Computers can perform both sequential control and feedback control, and typically a single computer will do both in an industrial application. Programmable logic controllers (PLCs) are a type of special-purpose microprocessor that replaced many hardware components such as timers and drum sequencers used in relay logic–type systems. General-purpose process control computers have increasingly replaced stand-alone controllers, with a single computer able to perform the operations of hundreds of controllers. Process control computers can process data from a network of PLCs, instruments, and controllers to implement typical (such as PID) control of many individual variables or, in some cases, to implement complex control algorithms using multiple inputs and mathematical manipulations. They can also analyze data and create real-time graphical displays for operators and run reports for operators, engineers, and management.
Control of an automated teller machine (ATM) is an example of an interactive process in which a computer will perform a logic-derived response to a user selection based on information retrieved from a networked database. The ATM process has similarities with other online transaction processes. The different logical responses are called scenarios. Such processes are typically designed with the aid of use cases and flowcharts, which guide the writing of the software code. The earliest feedback control mechanism was the water clock invented by Greek engineer Ctesibius (285–222 BC).
== See also ==
== References ==
=== Citations ===
=== Sources ===
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from In Brief to The State of Food and Agriculture 2022 – Leveraging automation in agriculture for transforming agrifood systems, FAO, FAO.
== Further reading == | Wikipedia/Industrial_automation |
IEEE Transactions on Nuclear Science is a peer-reviewed scientific journal published monthly by the IEEE. Sponsored by IEEE Nuclear and Plasma Sciences Society, the journal covers the theory, technology, and application areas related to nuclear science and engineering. Its editor-in-chief is Zane Bell (Oak Ridge National Laboratory).
The journal was founded in 1954 under the name Transactions of the Institute of Radio Engineers Professional Group on Nuclear Science and was retitled to IRE Transactions on Nuclear Science the following year. Its title was changed to its current name in 1963.
According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.8.
== References ==
== External links ==
Official website | Wikipedia/IEEE_Transactions_on_Nuclear_Science |
Semiconductor device modeling creates models for the behavior of semiconductor devices based on fundamental physics, such as the doping profiles of the devices. It may also include the creation of compact models (such as the well known SPICE transistor models), which try to capture the electrical behavior of such devices but do not generally derive them from the underlying physics. Normally it starts from the output of a semiconductor process simulation.
== Introduction ==
The figure to the right provides a simplified conceptual view of "the big picture". This figure shows two inverter stages and the resulting input-output voltage-time plot of the circuit. From the digital systems point of view the key parameters of interest are: timing delays, switching power, leakage current and cross-coupling (crosstalk) with other blocks. The voltage levels and transition speed are also of concern.
The figure also shows schematically the importance of I_on versus I_off, which in turn is related to drive-current (and mobility) for the "on" device and several leakage paths for the "off" devices. Not shown explicitly in the figure are the capacitances—both intrinsic and parasitic—that affect dynamic performance.
The power scaling which is now a major driving force in the industry is reflected in the simplified equation shown in the figure—critical parameters are capacitance, power supply and clocking frequency. Key parameters that relate device behavior to system performance include the threshold voltage, driving current and subthreshold characteristics.
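Although the figure is not reproduced here, the simplified power relation referred to is commonly quoted in the form (with an activity factor {\displaystyle \alpha } included)

{\displaystyle P_{dynamic}\approx \alpha \,C\,V_{DD}^{2}\,f}

where C is the switched capacitance, {\displaystyle V_{DD}} the supply voltage, and f the clocking frequency.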
It is the confluence of system performance issues with the underlying technology and device design variables that results in the ongoing scaling laws that we now codify as Moore's law.
== Device modeling ==
The physics and modeling of devices in integrated circuits is dominated by MOS and bipolar transistor modeling. However, other devices are important, such as memory devices, that have rather different modeling requirements. There are of course also issues of reliability engineering—for example, electro-static discharge (ESD) protection circuits and devices—where substrate and parasitic devices are of pivotal importance. These effects and modeling are not considered by most device modeling programs; the interested reader is referred to several excellent monographs in the area of ESD and I/O modeling.
== Physics driven vs. compact models ==
Physics driven device modeling is intended to be accurate, but it is not fast enough for higher level tools, including circuit simulators such as SPICE. Therefore, circuit simulators normally use more empirical models (often called compact models) that do not directly model the underlying physics. For example, inversion-layer mobility modeling, that is, the modeling of mobility and its dependence on physical parameters and on ambient and operating conditions, is an important topic both for TCAD (technology computer aided design) physical models and for circuit-level compact models. However, it is not accurately modeled from first principles, and so one resorts to fitting experimental data. For mobility modeling at the physical level the electrical variables are the various scattering mechanisms, carrier densities, and local potentials and fields, including their technology and ambient dependencies.
By contrast, at the circuit-level, models parameterize the effects in terms of terminal voltages and empirical scattering parameters. The two representations can be compared, but it is unclear in many cases how the experimental data is to be interpreted in terms of more microscopic behavior.
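As one concrete example of the physical-level side of this comparison, a commonly used empirical fit for the field dependence of mobility is the Caughey–Thomas form

{\displaystyle \mu (E)={\frac {\mu _{0}}{\left[1+\left(\mu _{0}E/v_{sat}\right)^{\beta }\right]^{1/\beta }}}}

where {\displaystyle \mu _{0}} is the low-field mobility, {\displaystyle v_{sat}} the saturation velocity, and {\displaystyle \beta } a fitting exponent. A compact model, by contrast, absorbs such behavior into parameters fitted directly against terminal measurements.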
== History ==
The evolution of technology computer-aided design (TCAD)—the synergistic combination of process, device and circuit simulation and modeling tools—finds its roots in bipolar technology, starting in the late 1960s, and the challenges of junction-isolated, double- and triple-diffused transistors. These devices and technology were the basis of the first integrated circuits; nonetheless, many of the scaling issues and underlying physical effects are integral to IC design, even after four decades of IC development. With these early generations of IC, process variability and parametric yield were an issue—a theme that will reemerge as a controlling factor in future IC technology as well.
Process control issues—both for the intrinsic devices and all the associated parasitics—presented formidable challenges and mandated the development of a range of advanced physical models for process and device simulation. Starting in the late 1960s and into the 1970s, the modeling approaches exploited were dominantly one- and two-dimensional simulators. While TCAD in these early generations showed exciting promise in addressing the physics-oriented challenges of bipolar technology, the superior scalability and power consumption of MOS technology revolutionized the IC industry. By the mid-1980s, CMOS became the dominant driver for integrated electronics. Nonetheless, these early TCAD developments set the stage for their growth and broad deployment as an essential toolset that has leveraged technology development through the VLSI and ULSI eras which are now the mainstream.
IC development for more than a quarter-century has been dominated by the MOS technology. In the 1970s and 1980s NMOS was favored owing to speed and area advantages, coupled with technology limitations and concerns related to isolation, parasitic effects and process complexity. During that era of NMOS-dominated LSI and the emergence of VLSI, the fundamental scaling laws of MOS technology were codified and broadly applied. It was also during this period that TCAD reached maturity in terms of realizing robust process modeling (primarily one-dimensional) which then became an integral technology design tool, used universally across the industry. At the same time device simulation, dominantly two-dimensional owing to the nature of MOS devices, became the work-horse of technologists in the design and scaling of devices. The transition from NMOS to CMOS technology resulted in the necessity of tightly coupled and fully 2D simulators for process and device simulations. This third generation of TCAD tools became critical to address the full complexity of twin-well CMOS technology (see Figure 3a), including issues of design rules and parasitic effects such as latchup. An abbreviated perspective of this period, through the mid-1980s, is given in the references, as is a view of how TCAD tools were used in the design process.
== See also ==
Compact Model Coalition
Diode modelling
Technology CAD
Transistor models
== References ==
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3 A survey of the field of electronic design automation. This summary was derived (with permission) from Vol II, Chapter 25, Device Modeling—from physics to electrical parameter extraction, by Robert W. Dutton, Chang-Hoon Choi and Edwin C. Kan.
R.W. Dutton and A.J. Strojwas, Perspectives on technology and technology-driven CAD , IEEE Trans. CAD-ICAS, vol. 19, no. 12, pp. 1544–1560, December, 2000. | Wikipedia/Semiconductor_device_modeling |
The function block diagram (FBD) is a graphical language for programmable logic controller design that can describe the function between input variables and output variables. A function is described as a set of elementary blocks. Input and output variables are connected to blocks by connection lines.
== Design ==
Inputs and outputs of the blocks are wired together with connection lines or links. Single lines may be used to connect two logical points of the diagram:
An input variable and an input of a block
An output of a block and an input of another block
An output of a block and an output variable
The connection is oriented, meaning that the line carries associated data from the left end to the right end. The left and right ends of the connection line must be of the same type.
A multiple right connection, also called divergence, can be used to broadcast information from its left end to each of its right ends. All ends of the connection must be of the same type.
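Although FBD itself is graphical, its wiring semantics can be sketched in ordinary code. In the hypothetical Python fragment below, blocks are functions and connection lines carry values from left to right; the block and variable names are invented for the example.

# Blocks are functions; connection lines pass a value from an output (left)
# to an input (right). Names and values are illustrative.
def AND(a, b): return a and b
def NOT(a): return not a

sensor_a, sensor_b = True, False
link = NOT(sensor_b)         # output of NOT feeds an input of AND
alarm = AND(sensor_a, link)  # output of AND feeds the output variable
print(alarm)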
== Language ==
Function Block Diagram is one of five languages for logic or control configuration supported by standard IEC 61131-3 for a control system such as a programmable logic controller (PLC) or a Distributed Control System (DCS). The other supported languages are ladder logic, sequential function chart, structured text, and instruction list.
== References ==
== External links ==
Runpower PLC | Wikipedia/Function_block_diagram |
A process plant shutdown system is a functional safety countermeasure crucial in any hazardous process plant such as oil and gas production plants and oil refineries. The concept also applies to non-process facilities such as nuclear plants. These systems are used to protect people, assets, and the environment when process conditions get out of the safe design envelope the equipment was designed for.
As the name suggests, these systems are not intended for controlling the process itself but rather for protection. Process control is performed by means of an independent process control system (PCS) and should not be relied upon to execute critical safety actions.
Although functionally separate, process control and shutdown systems are usually interfaced under one system, called an integrated control and safety system (ICSS). Shutdown systems typically use equipment that is SIL 2 certified as a minimum, whereas control systems can start with SIL 1. SIL applies to both hardware and software requirements, such as cards, processor redundancy, and voting functions.
== Types ==
There are two main types of safety shutdown systems in process plants:
Process safety system (PSS) or process shutdown system (PSD).
Safety shutdown system (SSS) or emergency shutdown (ESD), which usually entails activation of an emergency depressurization (EDP) or emergency blowdown system.
=== Process shutdown (PSD) ===
An automatic PSD typically isolates the system by closing shutdown isolation valves, thus bringing it to a safe state before the process parameters, such as level, temperature or pressure, exit the system's safe design envelope. Its inputs are critical process signals from devices such as pressure and temperature transmitters, which must be separate from those used for process control. This separation provides redundancy and reliability.
=== Emergency shutdown (ESD) ===
These systems may also be redefined in terms of ESD/EDP levels as:
ESD level 1: In charge of general plant area shutdown, will also activate ESD level 2 if necessary. This level can only be activated from the main control room.
ESD level 2: This level shuts down and isolates individual ESD zones and may activate EDP if necessary.
ESD level 3: provides fluid containment by closing shutdown isolation valves or emergency shutdown valves (ESDVs).
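The hierarchy above can be pictured as a cascade in which a higher level triggers the levels below it where required. The following Python sketch is a schematic illustration only; the zone names and trigger rules are invented and do not reflect any particular plant's cause-and-effect logic.

# Schematic ESD level cascade; names and rules are illustrative assumptions.
def esd_level_3(zone):
    print(f"ESD-3: closing shutdown valves (ESDVs) in {zone}")

def esd_level_2(zone, depressurize=False):
    print(f"ESD-2: shutting down and isolating {zone}")
    esd_level_3(zone)
    if depressurize:
        print(f"EDP: opening blowdown valves in {zone}")

def esd_level_1(zones):
    print("ESD-1: general plant area shutdown (from main control room)")
    for zone in zones:
        esd_level_2(zone, depressurize=True)

esd_level_1(["separation train", "compression train"])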
The safety shutdown system shall shut down the facilities to a safe state in case of an emergency situation, thus protecting personnel, the environment and the asset. The safety shutdown system shall manage all inputs and outputs relative to emergency shutdown (ESD) functions (environment and personnel protection). Inputs include for example manual activation and signals from the fire and gas system (FGS). Apart from the actuation of shutdown valves and blowdown valves, outputs include isolation of electrical sources, power shutdown, activation of fire pumps, etc. ESD is usually activated when a loss of containment and/or a fire is detected, although it may be activated at any time the plant operators feel it is necessary to preserve life, assets and the environment.
==== Fire and gas system (FGS) ====
The main objectives of the fire and gas system are to:
Detect at an early stage the presence of flammable gas using gas detectors.
Detect at an early stage hazardous liquid spills.
Detect incipient fire and the presence of fire using fire detectors.
Provide automatic and/or facilities for manual activation of the fire protection system as required.
Transmit input to the ESD system for it to initiate appropriate automatic actions.
==== Emergency depressurization (EDP) ====
Emergency depressurization, or blowdown, is an important system for safeguarding process plant in the event of an emergency. Equipment such as pressure vessels exposed to fire could undergo catastrophic failure leading to an uncontrolled loss of containment. Depressurization reduces potential failure by removing inventory from the plant thereby decreasing the internal mechanical stresses and extending the plant’s integrity at elevated temperatures. Its function is distinct from that of pressure relief valves, which are passive devices opening if pressure reaches a value above the process safety trip, but still below the design pressure of the equipment. Relief valves complement the PSD.
A process plant is typically divided into isolatable sections by emergency shutdown valves (ESDVs). Each section may be designated as belonging to a fire zone that is depressurized by a dedicated blowdown valve (BDV) or set of BDVs. During ESD conditions, the depressurization of only specific isolatable sections is undertaken. However, during more widespread emergency circumstances, the whole facility may be depressurized.
In a typical depressurization system, the goal is to reduce the pressure in the plant to less than 50% of the design pressure or to 7 barg, whichever is lower, within 15 minutes.
Disposal of blowdown fluids is generally to flare systems or, if safe to do so, non-fired blowdown drums. Blowdown may be strategically delayed by fire zone to shave peak flow and allow the flare to deal with the incoming gas. This is generally referred to as a staggered blowdown.
A depressurization system comprises an actuated valve and a restriction orifice. The BDV valve is normally held in the closed position but opens on demand or on failure of the actuator. A restriction orifice (RO) downstream of the BDV is sized to achieve the desired blowdown rate. A locked-open valve may be located downstream of the orifice. The valve, in the closed position, allows the functionality of the BDV to be tested without depressurizing that section of the plant.
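As a rough numerical illustration of the 15-minute criterion, if flow through the restriction orifice is choked so that mass flow is proportional to upstream pressure, the vessel pressure decays approximately exponentially. The Python sketch below uses that idealized isothermal assumption with invented numbers; it is not a design calculation.

# Idealized blowdown check; isothermal, choked-flow (exponential) decay assumed.
# All numbers are illustrative, not design values.
import math

p0_barg, p_design_barg = 60.0, 80.0
target = min(0.5 * p_design_barg, 7.0)  # 50% of design or 7 barg, whichever lower
tau_min = 4.0                           # assumed pressure-decay time constant, minutes

t_required = tau_min * math.log(p0_barg / target)  # time to reach the target
print(f"target = {target} barg, time needed = {t_required:.1f} min,"
      f" meets 15 min criterion: {t_required <= 15.0}")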
== See also ==
Safety integrity level
== Notes == | Wikipedia/Plant_process_and_emergency_shutdown_systems |
Sequential function chart (SFC) is a visual programming language used for programmable logic controllers (PLCs). It is one of the five languages defined by IEC 61131-3 standard. The SFC standard is defined as Preparation of function charts for control systems, and was based on GRAFCET (itself based on binary Petri nets).
It can be used to program processes that can be split into steps.
Main components of SFC are:
Steps with associated actions;
Transitions with associated logic conditions;
Directed links between steps and transitions.
Steps in an SFC diagram can be active or inactive. Actions are only executed for active steps. A step can be active for one of two motives:
It is an initial step as specified by the programmer.
It was activated during a scan cycle and not deactivated since.
Steps are activated when all steps above them are active and the connecting transition is superable (i.e. its associated condition is true). When a transition is passed, all steps above it are deactivated at once, and afterwards all steps below it are activated at once.
Actions associated with steps can be of several types, the most relevant ones being Continuous (N), Set (S), and Reset (R). Apart from the obvious meaning of Set and Reset, an N action ensures that its target variable is set to 1 as long as the step is active. An SFC rule states that if two steps have an N action on the same target, the variable must never be reset to 0. It is also possible to insert LD (Ladder Diagram) actions inside an SFC program (and this is the standard way, for instance, to work on integer variables).
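These execution rules can be sketched in a few lines of Python. The fragment below is a hypothetical illustration, not an IEC 61131-3 runtime: it steps a two-step chart whose single transition condition is a boolean input, with an N action on the second step.

# Tiny SFC interpreter sketch: two steps, one transition; illustrative only.
active = {"step1"}  # step1 is the initial step
outputs = {"motor": False}

def scan(sensor_ready):
    # The transition below step1 fires when its condition is true.
    if "step1" in active and sensor_ready:
        active.discard("step1")  # deactivate the step above...
        active.add("step2")      # ...then activate the step below
    # N action on step2: motor is 1 exactly while step2 is active.
    outputs["motor"] = "step2" in active

for ready in [False, False, True]:
    scan(ready)
    print(active, outputs)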
SFC is an inherently parallel programming language in that multiple control flows — Program Organization Units (POUs) in the standard's parlance — can be active at once.
Non-standard extensions to the language include macroactions: i.e. actions inside a program unit that influence the state of another program unit. The most relevant such macroaction is "forcing", in which a POU can decide the active steps of another POU.
== See also ==
DRAKON-chart
UML activity diagram
Continuous Function Chart
== References ==
== External links ==
SFC/GRAFCET free stencils for Microsoft Visio
Rockwell Automation, Allen-Bradley. Sequential Function Charts
CODESYS | Wikipedia/Sequential_function_chart |
A cam timer or drum sequencer is an electromechanical system for controlling a sequence of events automatically. It resembles a music box with movable pins, controlling electrical switches instead of musical notes.
== Description ==
An electric motor drives a shaft arranged with a series of cams or a drum studded with pegs along its surface. Associated with each cam is one or more switches. The motor rotates at a fixed speed, and the camshaft is driven through a speed-reducing gearbox at a convenient slow speed. Indentations or protrusions on the cams operate the switches at different times. Complex sequences of opening and closing switches can be made by the arrangement of the cams and switches. The switches then operate different elements of the controlled system - for example, motors, valves, etc.
A programmer may change or rearrange (reprogram) peg or cam positions. Much like the pegs in a music box cylinder activate the notes, the pegs in a drum sequencer run across switches as the drum spins, activating machine processes. The placement of a peg along the length of the cylinder determines which switch it will activate, and its position around the circumference of the drum determines at what point in the drum's rotation it will activate that switch. The drum performs repetitive switching operations by controlling the timing and sequence of switches.
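The peg layout can be pictured as a boolean matrix whose rows are angular positions of the drum and whose columns are switches. The following Python sketch steps such a drum one position per tick; the peg pattern and switch names are invented for the example.

# Drum sequencer as a boolean matrix; a True cell is a peg that closes a switch.
drum = [
    [True,  False, False],  # position 0: switch A on
    [True,  True,  False],  # position 1: switches A and B on
    [False, False, True],   # position 2: switch C on
]
switch_names = ["A", "B", "C"]

position = 0
for tick in range(6):  # the motor advances the drum one step per tick
    pegs = drum[position]
    closed = [n for n, p in zip(switch_names, pegs) if p]
    print(f"tick {tick}: position {position}, closed switches: {closed}")
    position = (position + 1) % len(drum)  # the drum wraps around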
Most cam timers use a miniature mains synchronous motor to rotate the mechanism at an accurate constant speed. Occasionally, more complex timers with two motors are seen.
A drum sequencer is a reprogrammable electromechanical timing device that activates electric switches in repetitive sequences. These sequencers were primarily used in industrial applications to enable automated manufacturing processes.
== Uses ==
Industrial machines use cam timers and drum sequencers to control repetitive sequencing operations; the cam followers often operated hydraulic valves. In industry, cam timers were superseded by programmable logic controllers (PLCs), which offer improved flexibility and more complex control logic. In consumer products like washing machines, they were replaced with ASICs or microcontrollers.
The most common use for cam timers is in automatic washing machines, which drive the washing sequence according to a pre-programmed pattern. They are gradually being superseded by microprocessor-controlled systems, which have greater versatility and thus can more efficiently respond to various feedback.
Another example is their usage in electromechanical pinball machines, where the cam timer is also known as a 'score motor'.
== Methods used to increase control ==
The most basic cam timer rotates continually, which is inconvenient when waiting for events that occur at variable times.
With washing machine cam timers, it is necessary to wait a variable amount of time (for example, waiting for a tank of water to heat up to a preset temperature). To achieve this, the cam motor is subjected to control by one of its switches. The timer sequence switches the cam motor off, and the motor is started again by the signal from the thermostat when the required temperature is reached.
Usually, washing machine thermostats have fewer fixed temperature detection points than the number of wash temperatures used. For intermediate temperatures, the cam mechanism uses the stop-and-wait method to heat to the nearest temperature below the one desired, then relies on the fixed timing of the heating element alone to raise the water to the desired temperature.
Some cam timers also have a fast-forward mode, where applying power to a point on the controller causes rapid advance of the mechanism. This is often seen on washing machine controllers. Rapid advance can be achieved by a change of gearing, which may be triggered by various means.
Using feedback, external time delay, and other sensory circuits, it is possible to build an electromechanical state machine using a cam timer. These are common in washing machines, where the cam timer runs in phases, but also stops and waits for external signals such as a fill level sensor, or a water heating temperature sensor.
== Replacement with electronic controllers ==
While still fairly popular, cam timers are mechanical and hence subject to wear and reliability problems. Their reliability record remains good, but there is always some failure rate with mechanical switch contacts.
Electronic controllers have largely replaced cam timers in most applications, primarily to reduce costs and also to maximize product features.
Cam timers don't offer the flexibility that CPU-based controllers provide. In addition to offering more wash program variations, a CPU-based washing machine controller can respond to malfunctions, automatically initiate test cycles (reducing manufacturing costs), and provide fault codes in the field (reducing repair costs). It also provides feedback on real-world failure rates and causes. All of these reduce manufacturing and business costs.
== See also ==
Clock
Drum machine (electronic musical instrument)
Timer
Player Piano (with a looped tape)
Category:Mechanical musical instruments – contains automatic playing musical instruments using pinned cylinders, etc.
Pinball
== References ==
== External links ==
Drum Sequencer | Wikipedia/Drum_sequencer_(controller) |
In science, computing, and engineering, a black box is a system which can be viewed in terms of its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is "opaque" (black). The term can be used to refer to many inner workings, such as those of a transistor, an engine, an algorithm, the human brain, or an institution or government.
To analyze an open system with a typical "black box approach", only the behavior of the stimulus/response will be accounted for, to infer the (unknown) box. The usual representation of this "black box system" is a data flow diagram centered in the box.
The opposite of a black box is a system where the inner components or logic are available for inspection, which is most commonly referred to as a white box (sometimes also known as a "clear box" or a "glass box").
== History ==
The modern meaning of the term "black box" seems to have entered the English language around 1945. In electronic circuit theory the process of network synthesis from transfer functions, which led to electronic circuits being regarded as "black boxes" characterized by their response to signals applied to their ports, can be traced to Wilhelm Cauer who published his ideas in their most developed form in 1941. Although Cauer did not himself use the term, others who followed him certainly did describe the method as black-box analysis. Vitold Belevitch puts the concept of black-boxes even earlier, attributing the explicit use of two-port networks as black boxes to Franz Breisig in 1921 and argues that 2-terminal components were implicitly treated as black-boxes before that.
In cybernetics, a full treatment was given by Ross Ashby in 1956. A black box was described by Norbert Wiener in 1961 as an unknown system that was to be identified using the techniques of system identification. He saw the first step in self-organization as being able to copy the output behavior of a black box. Many other engineers, scientists and epistemologists, such as Mario Bunge, used and perfected the black box theory in the 1960s.
== Systems theory ==
In systems theory, the black box is an abstraction representing a class of concrete open system which can be viewed solely in terms of its stimuli inputs and output reactions:
The constitution and structure of the box are altogether irrelevant to the approach under consideration, which is purely external or phenomenological. In other words, only the behavior of the system will be accounted for.
The understanding of a black box is based on the "explanatory principle", the hypothesis of a causal relation between the input and the output. This principle states that input and output are distinct, that the system has observable (and relatable) inputs and outputs and that the system is black to the observer (non-openable).
=== Recording of observed states ===
An observer makes observations over time. All observations of inputs and outputs of a black box can be written in a table, in which, at each of a sequence of times, the states of the box's various parts, input and output, are recorded. Thus, using an example from Ashby, examining a box that has fallen from a flying saucer might lead to this protocol:
Thus, every system, fundamentally, is investigated by the collection of a long protocol, drawn out in time, showing the sequence of input and output states. From this there follows the fundamental deduction that all knowledge obtainable from a Black Box (of given input and output) is such as can be obtained by re-coding the protocol (the observation table); all that, and nothing more.
If the observer also controls input, the investigation turns into an experiment (illustration), and hypotheses about cause and effect can be tested directly.
When the experimenter is also motivated to control the box, there is active feedback in the box/observer relation, promoting what in control theory is called a feed forward architecture.
=== Modeling ===
The modeling process is the construction of a predictive mathematical model, using existing historic data (observation table).
=== Testing the black box model ===
A developed black box model is a validated model when black-box testing methods ensure, based solely on observable elements, that it reproduces the behavior of the system being modeled.
With back testing, out-of-time data is always used when testing the black box model. Data has to be written down before it is pulled for black box inputs.
== Other theories ==
Black box theories are those theories defined only in terms of their function. The term can be applied in any field where some inquiry is made into the relations between aspects of the appearance of a system (exterior of the black box), with no attempt made to explain why those relations should exist (interior of the black box). In this context, Newton's theory of gravitation can be described as a black box theory.
Specifically, the inquiry is focused upon a system that has no immediately apparent characteristics and therefore has only factors for consideration held within itself hidden from immediate observation. The observer is assumed ignorant in the first instance as the majority of available data is held in an inner situation away from facile investigations. The black box element of the definition is shown as being characterised by a system where observable elements enter a perhaps imaginary box with a set of different outputs emerging which are also observable.
=== Adoption in humanities ===
In humanities disciplines such as philosophy of mind and behaviorism, one of the uses of black box theory is to describe and understand psychological factors in fields such as marketing when applied to an analysis of consumer behaviour.
=== Black box theory ===
Black Box theory is even wider in application than professional studies:
The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box.
(...) This simple rule proved very effective and is an illustration of how the Black Box principle in cybernetics can be used to control situations that, if gone into deeply, may seem very complex. A further example of the Black Box principle is the treatment of mental patients. The human brain is certainly a Black Box, and while a great deal of neurological research is going on to understand the mechanism of the brain, progress in treatment is also being made by observing patients' responses to stimuli.
== Applications ==
=== Computing and mathematics ===
In computer programming and software engineering, black box testing is used to check that the output of a program is as expected, given certain inputs. The term "black box" is used because the actual program being executed is not examined.
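For instance, a black box test exercises a program only through its inputs and outputs, as in this illustrative Python fragment, where the function under test is an arbitrary stand-in.

# Black-box test: only inputs and outputs are examined, never the internals.
def sort_numbers(values):  # stand-in for any program under test
    return sorted(values)

# Expected behavior is stated purely in terms of input/output pairs.
assert sort_numbers([3, 1, 2]) == [1, 2, 3]
assert sort_numbers([]) == []
print("black box behaves as specified for the tested inputs")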
In computing in general, a black box program is one where the user cannot see the inner workings (perhaps because it is a closed source program) or one which has no side effects and the function of which need not be examined, a routine suitable for re-use.
Also in computing, a black box refers to a piece of equipment provided by a vendor for the purpose of using that vendor's product. It is often the case that the vendor maintains and supports this equipment, and the company receiving the black box typically is hands-off.
In mathematical modeling, the term refers to a limiting case.
=== Science and technology ===
In neural networking or heuristic algorithms (computer terms generally used to describe "learning" computers or "AI simulations"), a black box is used to describe the constantly changing section of the program environment which cannot easily be tested by the programmers. This is also called a white box in the context that the program code can be seen, but the code is so complex that it is functionally equivalent to a black box.
In physics, a black box is a system whose internal structure is unknown, or need not be considered for a particular purpose.
In cryptography, black boxes are used to capture the notion of knowledge obtained by an algorithm through the execution of a cryptographic protocol such as a zero-knowledge proof protocol: if the output of an algorithm interacting with the protocol matches that of a simulator given only some inputs, the algorithm need only know those inputs.
=== Other applications ===
In philosophy and psychology, the school of behaviorism sees the human mind as a black box; see other theories.
== See also ==
== References == | Wikipedia/Black_box_(systems) |
Advanced Design System (ADS) is an electronic design automation software system produced by PathWave Design, a division of Keysight Technologies. It provides an integrated design environment to designers of RF electronic products such as mobile phones, pagers, wireless networks, satellite communications, radar systems, and high-speed data links.
Keysight ADS supports every step of the design process — schematic capture, layout, design rule checking, frequency-domain and time-domain circuit simulation, and electromagnetic field simulation — allowing the engineer to fully characterize and optimize an RF design without changing tools.
Keysight has donated copies of the ADS software to the electrical engineering departments at many universities.
== See also ==
Momentum (electromagnetic simulator) — 3D Planar EM simulator element of ADS platforms
FEM Element — Arbitrary 3D geometry EM simulator element of ADS platforms
== Notes ==
The deprecated Tektronix ADS is another, unrelated, electronic design automation system composed of TekSpice and QuickIC.
== External links ==
Official website
Agilent ADS tutorial and forum EM Talk
ADS Basics Playlist - Keysight Technologies
30-Second Demos of user-inspired innovations in Advanced Design System (ADS 2014) | Wikipedia/Advanced_Design_System |
Electronic circuit design comprises the analysis and synthesis of electronic circuits.
== Methods ==
To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Linear circuits, that is, circuits wherein the outputs are linearly dependent on the inputs, can be analyzed by hand using complex analysis. Simple nonlinear circuits can also be analyzed in this way. Specialized software has been created to analyze circuits that are either too complicated or too nonlinear to analyze by hand.
Circuit simulation software allows engineers to design circuits more efficiently, reducing the time cost and risk of error involved in building circuit prototypes. Some of these make use of hardware description languages such as VHDL or Verilog.
=== Network simulation software ===
More complex circuits are analyzed with circuit simulation software such as SPICE and EMTP.
==== Linearization around operating point ====
When faced with a new circuit, the software first tries to find a steady state solution wherein all the nodes conform to Kirchhoff's current law and the voltages across, and the currents through, each element of the circuit conform to the voltage/current equations governing that element.
Once the steady state solution is found, the software can analyze the response to perturbations using piecewise approximation, harmonic balance or other methods.
==== Piece-wise linear approximation ====
Software such as the PLECS interface to Simulink uses piecewise linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
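The idea can be illustrated with a single piecewise-linear diode in series with a resistor and a source. The Python sketch below, a toy far simpler than PLECS, tries each linear configuration of the diode and keeps the one consistent with its own assumption; all component values are invented.

# Piecewise-linear diode in series with R and a source V; illustrative values.
V, R = 5.0, 1000.0   # source volts, series ohms
VON, RON = 0.7, 10.0 # PWL diode: threshold voltage and on-resistance

# Configuration 1: diode assumed ON  -> linear circuit V = i*(R + RON) + VON
i_on = (V - VON) / (R + RON)
# Configuration 2: diode assumed OFF -> no current flows
i_off = 0.0

# Keep the configuration that is consistent with its own assumption.
current = i_on if i_on > 0 else i_off
print(f"diode {'on' if current > 0 else 'off'}, current = {current*1000:.3f} mA")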
=== Synthesis ===
Simple circuits may be designed by connecting a number of elements or functional blocks such as integrated circuits.
More complex digital circuits are typically designed with the aid of computer software. Logic circuits (and sometimes mixed-mode circuits) are often described in hardware description languages such as VHDL or Verilog, then synthesized using a logic synthesis engine.
== See also ==
Circuit design
Integrated circuit design
== References == | Wikipedia/Electronic_circuit_design |
The Exner equation describes conservation of mass between sediment in the bed of a channel and sediment that is being transported.
It states that bed elevation increases (the bed aggrades) proportionally to the amount of sediment that drops out of transport, and conversely decreases (the bed degrades) proportionally to the amount of sediment that becomes entrained by the flow.
It was developed by the Austrian meteorologist and sedimentologist Felix Maria Exner, from whom it derives its name.
It is typically applied to sediment in a fluvial system such as a river.
The Exner equation states that the change in bed elevation, {\displaystyle \eta }, over time, {\displaystyle t}, is equal to one over the grain packing density, {\displaystyle \varepsilon _{o}}, times the negative divergence of sediment flux, {\displaystyle \mathbf {q_{s}} }:
{\displaystyle {\frac {\partial \eta }{\partial t}}=-{\frac {1}{\varepsilon _{o}}}\nabla \cdot \mathbf {q_{s}} }
Note that {\displaystyle \varepsilon _{o}} can also be expressed as {\displaystyle (1-\lambda _{p})}, where {\displaystyle \lambda _{p}} equals the bed porosity.
Good values of {\displaystyle \varepsilon _{o}} for natural systems range from 0.45 to 0.75. A typical value for spherical grains is 0.64, as given by random close packing. An upper bound for close-packed spherical grains is 0.74048 (see sphere packing for more details); this degree of packing is extremely improbable in natural systems, making random close packing the more realistic upper bound on grain packing density.
Often, for reasons of computational convenience and/or lack of data, the Exner equation is used in its one-dimensional form. This is generally done with respect to the downstream direction {\displaystyle x}, as one is typically interested in the downstream distribution of erosion and deposition through a river reach:
{\displaystyle {\frac {\partial \eta }{\partial t}}=-{\frac {1}{\varepsilon _{o}}}{\frac {\partial {q_{s}}}{\partial x}}}
where {\displaystyle q_{s}} is the scalar sediment flux in the downstream direction.
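The one-dimensional form lends itself to a simple explicit finite-difference scheme. The Python sketch below updates bed elevation from a prescribed downstream sediment-flux profile; all numbers are illustrative, not field data.

# Explicit 1D Exner update; all values are illustrative assumptions.
eps_o = 0.64  # grain packing density (random close packing)
dx, dt = 10.0, 1.0  # space step (m), time step (s)
qs = [1.0e-4, 0.8e-4, 0.9e-4, 1.1e-4]  # sediment flux (m^2/s) at nodes
eta = [0.0, 0.0, 0.0]                   # bed elevation between flux nodes

for j in range(len(eta)):
    dqs_dx = (qs[j + 1] - qs[j]) / dx
    eta[j] += -dt / eps_o * dqs_dx  # bed aggrades where flux decreases downstream
print(eta)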
== References == | Wikipedia/Exner_equation |
A runoff model or rainfall-runoff model describes how rainfall is converted into runoff in a drainage basin (catchment area or watershed). More precisely, it produces a surface runoff hydrograph in response to a rainfall event, which is represented by and input as a hyetograph.
Rainfall-runoff models need to be calibrated before they can be used.
A well known runoff model is the linear reservoir, but in practice it has limited applicability.
The runoff model with a non-linear reservoir is more universally applicable, but still it holds only for catchments whose surface area is limited by the condition that the rainfall can be considered more or less uniformly distributed over the area. The maximum size of the watershed then depends on the rainfall characteristics of the region. When the study area is too large, it can be divided into sub-catchments and the various runoff hydrographs may be combined using flood routing techniques.
== Linear reservoir ==
The hydrology of a linear reservoir (figure 1) is governed by two equations.
flow equation: {\displaystyle Q=A\cdot S} with units [L/T], where L is length (e.g. mm) and T is time (e.g. h, day)
continuity or water balance equation: {\displaystyle R=Q+{\frac {dS}{dT}}} with units [L/T]
where:
Q is the runoff or discharge
R is the effective rainfall or rainfall excess or recharge
A is the constant reaction factor or response factor with unit [1/T]
S is the water storage with unit [L]
dS is a differential or small increment of S
dT is a differential or small increment of T
Runoff equation
A combination of the two previous equations results in a differential equation, whose solution is:
{\displaystyle Q_{2}=Q_{1}\exp \left(-A(T_{2}-T_{1})\right)+R\left[1-\exp \left(-A(T_{2}-T_{1})\right)\right]}
This is the runoff equation or discharge equation, where Q1 and Q2 are the values of Q at time T1 and T2 respectively while T2−T1 is a small time step during which the recharge can be assumed constant.
Computing the total hydrograph
Provided the value of A is known, the total hydrograph can be obtained using a successive number of time steps and computing, with the runoff equation, the runoff at the end of each time step from the runoff at the end of the previous time step.
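A minimal Python sketch of this stepping procedure, with an assumed reaction factor and recharge series, might read:

# Linear-reservoir hydrograph by repeated use of the runoff equation.
# A (1/day) and the recharge series R (mm/day) are illustrative assumptions.
import math

A, dt = 0.3, 1.0                       # reaction factor, time step in days
recharge = [5.0, 12.0, 3.0, 0.0, 0.0]  # effective rainfall per time step
Q = 0.0                                # initial discharge, mm/day

for R in recharge:
    decay = math.exp(-A * dt)
    Q = Q * decay + R * (1.0 - decay)  # runoff equation, R constant over dt
    print(round(Q, 2))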
Unit hydrograph
The discharge may also be expressed as: Q = − dS/dT. Substituting herein the expression of Q in equation (1) gives the differential equation dS/dT = − A·S, of which the solution is: S = exp(− A·t). Replacing herein S by Q/A according to equation (1), it is obtained that: Q = A exp(− A·t). This is called the instantaneous unit hydrograph (IUH) because the Q herein equals Q2 of the foregoing runoff equation using R = 0, and taking S as unity, which makes Q1 equal to A according to equation (1).
The availability of the foregoing runoff equation eliminates the necessity of calculating the total hydrograph by the summation of partial hydrographs using the IUH as is done with the more complicated convolution method.
Determining the response factor A
When the response factor A can be determined from the characteristics of the watershed (catchment area), the reservoir can be used as a deterministic model or analytical model, see hydrological modelling.
Otherwise, the factor A can be determined from a data record of rainfall and runoff using the method explained below under non-linear reservoir. With this method the reservoir can be used as a black box model.
Conversions
1 mm/day corresponds to 10 m3/day per ha of the watershed
1 L/s per ha corresponds to 8.64 mm/day or 86.4 m3/day per ha
== Non-linear reservoir ==
Contrary to the linear reservoir, the non-linear reservoir has a reaction factor A that is not a constant but a function of S or Q (figure 2, 3).
Normally A increases with Q and S because the higher the water level is the higher the discharge capacity becomes. The factor is therefore called Aq instead of A.
The non-linear reservoir has no usable unit hydrograph.
During periods without rainfall or recharge, i.e. when R = 0, the runoff equation reduces to:
Q2 = Q1 exp { − Aq (T2 − T1) }
or, using a unit time step (T2 − T1 = 1) and solving for Aq:
Aq = − ln (Q2/Q1)
Hence, the reaction or response factor Aq can be determined from runoff or discharge measurements using unit time steps during dry spells, employing a numerical method.
Figure 3 shows the relation between Aq (Alpha) and Q for a small valley (Rogbom) in Sierra Leone.
Figure 4 shows observed and simulated or reconstructed discharge hydrograph of the watercourse at the downstream end of the same valley.
== Recharge ==
The recharge, also called effective rainfall or rainfall excess, can be modeled by a pre-reservoir (figure 6) giving the recharge as overflow. The pre-reservoir has the following elements:
a maximum storage (Sm) with unit length [L]
an actual storage (Sa) with unit [L]
a relative storage: Sr = Sa/Sm
a maximum escape rate (Em) with units length/time [L/T]. It corresponds to the maximum rate of evaporation plus percolation and groundwater recharge, which will not take part in the runoff process (figure 5, 6)
an actual escape rate: Ea = Sr·Em
a storage deficiency: Sd = Sm + Ea − Sa
The recharge during a unit time step (T2−T1=1) can be found from R = Rain − Sd
The actual storage at the end of a unit time step is found as Sa2 = Sa1 + Rain − R − Ea, where Sa1 is the actual storage at the start of the time step.
The Curve Number method (CN method) gives another way to calculate the recharge. The initial abstraction herein compares with Sm − Si, where Si is the initial value of Sa.
== Nash model ==
The Nash model uses a series (cascade) of linear reservoirs in which each reservoir empties into the next until the runoff is obtained. For calibration, the model requires considerable research.
== Software ==
Figures 3 and 4 were made with the RainOff program, designed to analyse rainfall and runoff using the non-linear reservoir model with a pre-reservoir. The program also contains an example of the hydrograph of an agricultural subsurface drainage system for which the value of A can be obtained from the system's characteristics.
Raven is a robust and flexible hydrological modelling framework, designed for application to challenging hydrological problems in academia and practice. This fully object-oriented code provides complete flexibility in spatial discretization, interpolation, process representation, and forcing function generation. Models built with Raven can be as simple as a single watershed lumped model with only a handful of state variables to a full semi-distributed system model with physically-based infiltration, snowmelt, and routing. This flexibility encourages stepwise modelling while enabling investigation into critical research issues regarding discretization, numerical implementation, and ensemble simulation of surface water hydrological models. Raven is open source, covered under the Artistic License 2.0.
The SMART hydrological model includes agricultural subsurface drainage flow, in addition to soil and groundwater reservoirs, to simulate the flow path contributions to streamflow.
Vflo is another software program for modeling runoff. Vflo uses radar rainfall and GIS data to generate physics-based, distributed runoff simulation.
The WEAP (Water Evaluation And Planning) software platform models runoff and percolation from climate and land use data, using a choice of linear and non-linear reservoir models.
The RS MINERVE software platform simulates the formation of free surface run-off flow and its propagation in rivers or channels. The software is based on object-oriented programming and allows hydrologic and hydraulic modeling according to a semi-distributed conceptual scheme with different rainfall-runoff model such as HBV, GR4J, SAC-SMA or SOCONT.
The IHACRES is a catchment-scale rainfall-streamflow modelling methodology. Its purpose is to assist the hydrologist or water resources engineer to characterise the dynamic relationship between basin rainfall and streamflow.
== References == | Wikipedia/Runoff_model_(reservoir) |
The Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes soil erosion processes.
Erosion models play critical roles in soil and water resource conservation and nonpoint source pollution assessments, including: sediment load assessment and inventory, conservation planning and design for sediment control, and for the advancement of scientific understanding. The USLE or one of its derivatives are main models used by United States government agencies to measure water erosion.
The USLE was developed in the U.S., based on soil erosion data collected beginning in the 1930s by the U.S. Department of Agriculture (USDA) Soil Conservation Service (now the USDA Natural Resources Conservation Service). The model has been used for decades for purposes of conservation planning both in the United States where it originated and around the world, and has been used to help implement the United States' multibillion-dollar conservation program. The Revised Universal Soil Loss Equation (RUSLE) and the Modified Universal Soil Loss Equation (MUSLE) continue to be used for similar purposes.
== Overview of erosion models ==
The two primary types of erosion models are process-based models and empirically based models. Process-based (physically based) models mathematically describe the erosion processes of detachment, transport, and deposition and through the solutions of the equations describing those processes provide estimates of soil loss and sediment yields from specified land surface areas. Erosion science is not sufficiently advanced for there to exist completely process-based models which do not include empirical aspects. The primary indicator, perhaps, for differentiating process-based from other types of erosion models is the use of the sediment continuity equation discussed below. Empirical models relate management and environmental factors directly to soil loss and/or sedimentary yields through statistical relationships. Lane et al. provided a detailed discussion regarding the nature of process-based and empirical erosion models, as well as a discussion of what they termed conceptual models, which lie somewhere between the process-based and purely empirical models. Current research effort involving erosion modeling is weighted toward the development of process-based erosion models. On the other hand, the standard model for most erosion assessment and conservation planning is the empirically based USLE, and there continues to be active research and development of USLE-based erosion prediction technology.
== Description of USLE ==
The USLE was developed from erosion plot and rainfall simulator experiments. The USLE is composed of six factors to predict the long-term average annual soil loss (A). The equation includes the rainfall erosivity factor (R), the soil erodibility factor (K), the topographic factors (L and S), and the cropping management factors (C and P). The equation takes the simple product form:
A = R K L S C P
The USLE has another concept of experimental importance, the unit plot concept. The unit plot is defined as the standard plot condition used to determine the soil's erodibility: the LS factor = 1 (slope = 9% and length = 22.1 m (72.6 ft)), the plot is fallow, tillage is up and down the slope, and no conservation practices are applied (CP = 1). In this state:
K = A / R
A simpler method to predict K was presented by Wischmeier et al., which includes the particle size of the soil, organic matter content, soil structure and profile permeability. The soil erodibility factor K can be approximated from a nomograph if this information is known. The LS factors can easily be determined from a slope effect chart by knowing the length and gradient of the slope. The cropping management factor (C) and conservation practices factor (P) are more difficult to obtain and must be determined empirically from plot data. They are expressed as soil loss ratios (soil loss with the given C or P condition divided by soil loss without it).
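Because the USLE is a simple product of the six factors, a soil loss estimate can be computed directly once the factor values have been obtained from the nomograph, charts, or plot data described above. The following is a minimal Python sketch of that product form; the factor values are hypothetical placeholders, not measurements from any particular site.

```python
# Minimal sketch of the USLE product form A = R * K * L * S * C * P.
# The factor values below are illustrative placeholders, not measured data.

def usle_soil_loss(R, K, L, S, C, P):
    """Return the long-term average annual soil loss A (e.g., tons/acre/year
    when R and K are given in the customary US unit system)."""
    return R * K * L * S * C * P

# Example: hypothetical factor values for a cultivated field.
A = usle_soil_loss(R=125.0,  # rainfall erosivity
                   K=0.28,   # soil erodibility
                   L=1.1,    # slope length factor
                   S=1.3,    # slope steepness factor
                   C=0.20,   # cover-management factor
                   P=0.5)    # support practice factor
print(f"Estimated annual soil loss A = {A:.2f}")
```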
Various techniques have emerged over the last few decades to compute the five RUSLE factors. However, determining the P factor has proven to be challenging as there is usually a lack of geospatial information on the specific soil conservation practices in a given region. Thus, to estimate the P factor value in the RUSLE formula, a combination of land use type and slope gradient is often used, where a lower value indicates more effective control of soil erosion.
Creating field boundaries, such as stone walls, hedgerows, earth banks, and lynchets, effectively prevented or reduced soil erosion in pre-industrial agriculture. Recently, a novel P-factor model for Europe has been developed from data retrieved during a statistical survey that recorded the occurrence of stone walls and grass margins in EU countries. While this is one of the first efforts to incorporate cultural landscape features into a soil erosion model on a continental scale, the authors of the study pointed out several limitations, such as the small number of surveyed points and the chosen interpolation technique. It has been demonstrated that landscape archaeology has the potential to fill this gap in the data about soil conservation practices using a GIS-based tool called Historic Landscape Characterisation (HLC). Starting from the assumptions that the construction of field boundaries has always represented an effective method to limit soil erosion and that the efficiency of any conservation measure at mitigating soil erosion increases as the slope increases, a new P factor equation has been developed integrating the HLC within the RUSLE model. A recent study showed that modeling landscape archaeological data in a soil loss estimation equation enables deeper reflection on how historical strategies for soil management might relate to current environmental and climate conditions.
== See also ==
Certified Professional in Erosion and Sediment Control (CPESC)
Erosion control
WEPP (Water Erosion Prediction Project), a physically based erosion simulation model
== References ==
== External links ==
"About the Universal Soil Loss Equation" - USDA
RUSLE2 - Official site - USDA | Wikipedia/Universal_Soil_Loss_Equation |
The United States Environmental Protection Agency (EPA) Storm Water Management Model (SWMM) is a dynamic rainfall–runoff–subsurface runoff simulation model used for single-event to long-term (continuous) simulation of the surface/subsurface hydrology quantity and quality from primarily urban/suburban areas.
It can simulate rainfall-runoff, runoff, evaporation, infiltration and groundwater connections for roofs, streets, grassed areas, rain gardens, ditches and pipes, for example. The hydrology component of SWMM operates on a collection of subcatchment areas divided into impervious and pervious areas with and without depression storage to predict runoff and pollutant loads from precipitation, evaporation and infiltration losses from each subcatchment. In addition, low impact development (LID) and best management practice areas on the subcatchment can be modeled to reduce the impervious and pervious runoff. The routing or hydraulics section of SWMM transports this water and possible associated water quality constituents through a system of closed pipes, open channels, storage/treatment devices, ponds, storages, pumps, orifices, weirs, outlets, outfalls and other regulators.
SWMM tracks the quantity and quality of the flow generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period composed of multiple fixed or variable time steps. Water quality constituents can be simulated from buildup on the subcatchments through washoff to a hydraulic network with optional first-order decay and linked pollutant removal, and best management practice and low-impact development (LID) removal and treatment can be simulated at selected storage nodes. SWMM is one of the hydrology transport models which the EPA and other agencies have applied widely throughout North America and, through consultants and universities, throughout the world. The latest update notes and new features can be found on the EPA website in the download section. Recently added in November 2015 were the EPA SWMM 5.1 Hydrology Manual (Volume I) and in 2016 the EPA SWMM 5.1 Hydraulic Manual (Volume II) and EPA SWMM 5.1 Water Quality (including LID Modules) Volume (III) + Errata.
== Program description ==
The EPA storm water management model (SWMM) is a dynamic rainfall-runoff-routing simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and generate runoff and pollutant loads. The routing portion of SWMM transports this runoff through a system of pipes, channels, storage/treatment devices, pumps, and regulators. SWMM tracks the quantity and quality of runoff generated within each subcatchment, and the flow rate, flow depth, and quality of water in each pipe and channel during a simulation period divided into multiple time steps.
SWMM accounts for various hydrologic processes that produce runoff from urban areas. These include:
time-varying rainfall
evaporation of standing surface water
snow accumulation and melting
rainfall interception from depression storage
infiltration of rainfall into unsaturated soil layers
percolation of infiltrated water into groundwater layers
interflow between groundwater and the drainage system
nonlinear reservoir routing of overland flow
capture and retention of rainfall/runoff with various types of low impact development (LID) practices.
SWMM also contains a flexible set of hydraulic modeling capabilities used to route runoff and external inflows through the drainage system network of pipes, channels, storage/treatment units and diversion structures. These include the ability to:
handle networks of unlimited size
use a wide variety of standard closed and open conduit shapes as well as natural channels
model special elements such as storage/treatment units, flow dividers, pumps, weirs, and orifices
apply external flows and water quality inputs from surface runoff, groundwater interflow, rainfall-dependent infiltration/inflow, dry weather sanitary flow, and user-defined inflows
utilize either kinematic wave or full dynamic wave flow routing methods
model various flow regimes, such as backwater, surcharging, reverse flow, and surface ponding
apply user-defined dynamic control rules to simulate the operation of pumps, orifice openings, and weir crest levels.
Spatial variability in all of these processes is achieved by dividing a study area into a collection of smaller, homogeneous subcatchment areas, each containing its own fraction of pervious and impervious sub-areas. Overland flow can be routed between sub-areas, between subcatchments, or between entry points of a drainage system.
Since its inception, SWMM has been used in thousands of sewer and stormwater studies throughout the world. Typical applications include:
design and sizing of drainage system components for flood control
sizing of detention facilities and their appurtenances for flood control and water quality protection
flood plain mapping of natural channel systems, by modeling the river hydraulics and associated flooding problems using prismatic channels
designing control strategies for minimizing Combined Sewer Overflow (CSO) and Sanitary Sewer Overflow (SSO)
evaluating the impact of inflow and infiltration on sanitary sewer overflows
generating non-point source pollutant loadings for waste load allocation studies
evaluating the effectiveness of BMPs and subcatchment LIDs for reducing wet weather pollutant loadings
rainfall-runoff modeling of urban and rural watersheds
hydraulic and water quality analysis of storm, sanitary, and combined sewer systems
master planning of sewer collection systems and urban watersheds
system evaluations associated with USEPA's regulations including NPDES permits, CMOM, and TMDL
1D and 2D (surface ponding) predictions of flood levels and flooding volume
EPA SWMM is public domain software that may be freely copied and distributed. The SWMM 5 public domain consists of C engine code and Delphi SWMM 5 graphical user interface code. The C code and Delphi code are easily edited and can be recompiled by students and professionals for custom features or extra output features.
== History ==
SWMM was first developed between 1969–1971 and has undergone four major upgrades since those years. The major upgrades were: (1) Version 2 in 1973-1975, (2) Version 3 in 1979-1981, (3) Version 4 in 1985-1988 and (4) Version 5 in 2001-2004. A list of the major changes and post-2004 changes are shown in Table 1. The current SWMM edition, Version 5.2.3, is a complete re-write of the previous Fortran releases in the programming language C, and it can be run under Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10 and also with a recompilation under Unix. The code for SWMM5 is open source and public domain code that can be downloaded from the EPA website.
EPA SWMM 5 provides an integrated graphical environment for editing watershed input data, running hydrologic, hydraulic, real time control and water quality simulations, and viewing the results in a variety of graphical formats. These include color-coded thematic drainage area maps, time series graphs and tables, profile plots, scatter plots and statistical frequency analyses.
The last rewrite of EPA SWMM was produced by the Water Supply and Water Resources Division of the U.S. Environmental Protection Agency's National Risk Management Research Laboratory with assistance from the consulting firm CDM Inc. under a Cooperative Research and Development Agreement (CRADA). The update history of SWMM 5 from the original SWMM 5.0.001 to the current version SWMM 5.2.3 can be found at the EPA website. SWMM 5 was added to FEMA's list of approved models in May 2005, with a note on the FEMA approval page that Version 5.0.005 (May 2005) and later are accepted for NFIP modeling. SWMM 5 is used as the computational engine for many modeling packages (see the SWMM 5 Platform section of this article) and some components of SWMM5 are in other modeling packages (see the SWMM 5 Vendor section of this article).
== SWMM conceptual model ==
SWMM conceptualizes a drainage system as a series of water and material flows between several major environmental compartments. These compartments and the SWMM objects they contain include:
The atmosphere compartment, from which precipitation falls and pollutants are deposited onto the land surface compartment. SWMM uses Rain Gage objects to represent rainfall inputs to the system. The Rain Gage objects can use time series, external text files or NOAA rainfall data files, and can supply precipitation records spanning thousands of years. Using the SWMM-CAT add-on to SWMM5, climate change can now be simulated using modified temperature, evaporation or rainfall.
The Land Surface compartment, which is represented by one or more subcatchment objects. It receives precipitation from the Atmospheric compartment in the form of rain or snow; it sends outflow in the form of infiltration to the groundwater compartment and also as surface runoff and pollutant loadings to the Transport compartment. The low impact development (LID) controls are part of the subcatchments and store, infiltrate or evaporate the runoff.
The groundwater compartment receives infiltration from the Land Surface compartment and transfers a portion of this inflow to the transport compartment. This compartment is modeled using aquifer objects. The connection to the Transport compartment can be either a static boundary or a dynamic depth in the channels. The links in the Transport compartment now also have seepage and evaporation.
The transport compartment contains a network of conveyance elements (channels, pipes, pumps, and regulators) and storage/treatment units that transport water to outfalls or to treatment facilities. Inflows to this compartment can come from surface runoff, groundwater interflow, sanitary dry weather flow, or from user-defined hydrographs. The components of the Transport compartment are modeled with Node and Link objects.
Not all compartments need to appear in a particular SWMM model. For example, one could model just the transport compartment, using pre-defined hydrographs as inputs. If kinematic wave routing is used, then the nodes do not need to contain an outfall.
== Model parameters ==
The simulated model parameters for subcatchments are surface roughness, depression storage, slope, flow path length; for Infiltration: Horton: max/min rates and decay constant; Green-Ampt: hydraulic conductivity, initial moisture deficit and suction head; Curve Number: NRCS (SCS) Curve number; All: time for saturated soil to fully drain; for Conduits: Manning’s roughness; for Water Quality: buildup/washoff function coefficients, first-order decay coefficients, removal equations. A study area can be divided into any number of individual subcatchments, each of which drains to a single point. Study areas can range in size from a small portion of a single lot up to thousands of acres. SWMM uses hourly or more frequent rainfall data as input and can be run for single events or in a continuous fashion for any number of years.
== Hydrology and hydraulics capabilities ==
SWMM 5 accounts for various hydrologic processes that produce surface and subsurface runoff from urban areas. These include:
Time-varying rainfall for an unlimited number of rain gages for both design and continuous hyetographs
evaporation of standing surface water on watersheds and surface ponds
snowfall accumulation, plowing, and melting
rainfall interception from depression storage in both impervious and pervious areas
infiltration of precipitation into unsaturated soil layers
percolation of infiltrated water into groundwater layers
interflow between groundwater and pipes and ditches
nonlinear reservoir routing of watershed overland flow.
Spatial variability in all of these processes is achieved by dividing a study area into a collection of smaller, homogeneous watershed or subcatchment areas, each containing its fraction of pervious and impervious sub-areas. Overland flow can be routed between sub-areas, between subcatchments, or between entry points of a drainage system.
SWMM also contains a flexible set of hydraulic modeling capabilities used to route runoff and external inflows through the drainage system network of pipes, channels, storage/treatment units and diversion structures. These include the ability to:
Simulate drainage networks of unlimited size
use a wide variety of standard closed and open conduit shapes as well as natural or irregular channels
model special elements such as storage/treatment units, outlets, flow dividers, pumps, weirs, and orifices
apply external flows and water quality inputs from surface runoff, groundwater interflow, rainfall-dependent infiltration/inflow, dry weather sanitary flow, and user-defined inflows
utilize either steady, kinematic wave or full dynamic wave flow routing methods
model various flow regimes, such as backwater, surcharging, pressure, reverse flow, and surface ponding
apply user-defined dynamic control rules to simulate the operation of pumps, orifice openings, and weir crest levels
Infiltration is the process of rainfall penetrating the ground surface into the unsaturated soil zone of pervious subcatchment areas. SWMM5 offers four choices for modeling infiltration:
=== Classical infiltration method ===
This method is based on empirical observations showing that infiltration decreases exponentially from an initial maximum rate to some minimum rate over the course of a long rainfall event. Input parameters required by this method include the maximum and minimum infiltration rates, a decay coefficient that describes how fast the rate decreases over time, and the time it takes a fully saturated soil to completely dry (used to compute the recovery of infiltration rate during dry periods).
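The exponential decay described above is commonly written as f(t) = fc + (f0 − fc)·e^(−kt), where f0 and fc are the maximum and minimum infiltration rates and k is the decay coefficient. The short Python sketch below evaluates that standard curve with hypothetical parameter values; it is illustrative only and is not taken from the SWMM source code.

```python
import math

# Sketch of the classical (Horton) infiltration curve described above:
# capacity decays exponentially from an initial maximum f0 to a minimum fc.
# Parameter values are illustrative, not taken from any SWMM dataset.

def horton_capacity(t_hr, f0=3.0, fc=0.5, k=4.0):
    """Infiltration capacity (in/hr) at time t_hr hours into a wet period."""
    return fc + (f0 - fc) * math.exp(-k * t_hr)

for t in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(f"t = {t:4.2f} h  capacity = {horton_capacity(t):.2f} in/hr")
```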
==== Modified Horton Method ====
This is a modified version of the classical Horton Method that uses the cumulative infiltration in excess of the minimum rate as its state variable (instead of time along the Horton curve), providing a more accurate infiltration estimate when low rainfall intensities occur. It uses the same input parameters as does the traditional Horton Method.
=== Green–Ampt method ===
This method for modeling infiltration assumes that a sharp wetting front exists in the soil column, separating soil with some initial moisture content below from saturated soil above. The input parameters required are the initial moisture deficit of the soil, the soil's hydraulic conductivity, and the suction head at the wetting front. The recovery rate of moisture deficit during dry periods is empirically related to the hydraulic conductivity.
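Under continuously ponded conditions, the Green–Ampt assumptions lead to an implicit relation for cumulative infiltration F, namely F = Ks·t + ψ·Δθ·ln(1 + F/(ψ·Δθ)), which can be solved iteratively. The sketch below uses a simple fixed-point iteration with hypothetical soil parameters; it illustrates the general method rather than SWMM's specific implementation.

```python
import math

# Sketch of the Green-Ampt relation for ponded infiltration: cumulative
# infiltration F at time t satisfies F = Ks*t + psi*dtheta*ln(1 + F/(psi*dtheta)).
# Solved here by simple fixed-point iteration; parameters are illustrative.

def green_ampt_cumulative(t_hr, Ks=0.4, psi=6.0, dtheta=0.3, iters=50):
    """Cumulative infiltration (inches) after t_hr hours of ponding.
    Ks: saturated hydraulic conductivity (in/hr), psi: suction head (in),
    dtheta: initial moisture deficit (-)."""
    pd = psi * dtheta
    F = Ks * t_hr if t_hr > 0 else 0.0  # starting guess
    for _ in range(iters):
        F = Ks * t_hr + pd * math.log(1.0 + F / pd)
    return F

for t in (0.5, 1.0, 2.0, 4.0):
    F = green_ampt_cumulative(t)
    print(f"t = {t:3.1f} h  F = {F:.2f} in  avg rate = {F / t:.2f} in/hr")
```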
=== Curve number method ===
This approach is adopted from the NRCS (SCS) curve number method for estimating runoff. It assumes that the total infiltration capacity of a soil can be found from the soil's tabulated curve number. During a rain event this capacity is depleted as a function of cumulative rainfall and remaining capacity. The input parameters for this method are the curve number and the time it takes a fully saturated soil to completely dry (used to compute the recovery of infiltration capacity during dry periods).
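For reference, the standard NRCS curve number relations from which this infiltration option is adapted compute a potential maximum retention S from the tabulated curve number and then a runoff depth once rainfall exceeds the initial abstraction. The sketch below shows those textbook relations in US customary units; the curve number and rainfall depth are examples, and the code is not drawn from SWMM itself.

```python
# Sketch of the NRCS (SCS) curve number runoff relation referenced above.
# S is the potential maximum retention derived from the tabulated CN, and
# runoff Q begins once rainfall P exceeds the initial abstraction Ia = 0.2*S.
# US customary units (inches); the CN value here is only an example.

def scs_runoff(P_in, CN):
    """Direct runoff depth (inches) for storm rainfall P_in and curve number CN."""
    S = 1000.0 / CN - 10.0       # potential maximum retention (in)
    Ia = 0.2 * S                 # initial abstraction (in)
    if P_in <= Ia:
        return 0.0
    return (P_in - Ia) ** 2 / (P_in - Ia + S)

print(scs_runoff(P_in=3.0, CN=80))   # runoff for a 3-inch storm on CN 80 soil/cover
```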
SWMM also allows the infiltration recovery rate to be adjusted by a fixed amount on a monthly basis to account for seasonal variation in such factors as evaporation rates and groundwater levels. This optional monthly soil recovery pattern is specified as part of a project's evaporation data.
In addition to modeling the generation and transport of runoff flows, SWMM can also estimate the production of pollutant loads associated with this runoff. The following processes can be modeled for any number of user-defined water quality constituents:
Dry-weather pollutant buildup over different land uses
pollutant washoff from specific land uses during storm events
direct contribution of wet and dry rainfall deposition
reduction in dry-weather buildup due to street cleaning
reduction in washoff load due to BMPs and LIDs
entry of dry weather sanitary flows and user-specified external inflows at any point in the drainage system
routing of water quality constituents through the drainage system
reduction in constituent concentration through treatment in storage units or by natural processes in pipes and channels.
Rain gages in SWMM5 supply precipitation data for one or more subcatchment areas in a study region. The rainfall data can be either a user-defined time series or come from an external file. Several different popular rainfall file formats currently in use are supported, as well as a standard user-defined format. The principal input properties of rain gages include:
rainfall data type (e.g., intensity, volume, or cumulative volume)
recording time interval (e.g., hourly, 15-minute, etc.)
source of rainfall data (input time series or external file)
name of rainfall data source
The other principal input parameters for the subcatchments include:
assigned rain gage
outlet node or subcatchment and routing fraction
assigned land uses
tributary surface area
percent imperviousness
slope
characteristic width of overland flow
Manning's n for overland flow on both pervious and impervious areas
depression storage in both pervious and impervious areas
percent of impervious area with no depression storage.
infiltration parameters
snowpack
groundwater parameters
LID parameters for each LID Control Used
== Routing options ==
Steady-flow routing represents the simplest type of routing possible (actually no routing) by assuming that within each computational time step flow is uniform and steady. Thus it simply translates inflow hydrographs at the upstream end of the conduit to the downstream end, with no delay or change in shape. The normal flow equation is used to relate flow rate to flow area (or depth).
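The normal flow relation referred to above is usually Manning's equation, which ties discharge to flow area, hydraulic radius, roughness, and slope. A minimal sketch in SI units follows; the channel geometry and roughness are hypothetical example values.

```python
# Sketch of the normal-flow (Manning) relation used by steady-flow routing to
# tie flow rate to flow area/depth. Example values are illustrative only.

def manning_discharge(area_m2, hyd_radius_m, slope, n):
    """Normal flow discharge (m^3/s) from Manning's equation (SI form)."""
    return (1.0 / n) * area_m2 * hyd_radius_m ** (2.0 / 3.0) * slope ** 0.5

# A hypothetical 2 m wide rectangular channel flowing 0.5 m deep.
width, depth = 2.0, 0.5
area = width * depth
wetted_perimeter = width + 2.0 * depth
R = area / wetted_perimeter
print(f"Q = {manning_discharge(area, R, slope=0.002, n=0.015):.3f} m^3/s")
```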
This type of routing cannot account for channel storage, backwater effects, entrance/exit losses, flow reversal or pressurized flow. It can only be used with dendritic conveyance networks, where each node has only a single outflow link (unless the node is a divider in which case two outflow links are required). This form of routing is insensitive to the time step employed and is really only appropriate for preliminary analysis using long-term continuous simulations.
Kinematic wave routing solves the continuity equation along with a simplified form of the momentum equation in each conduit. The latter requires that the slope of the water surface equal the slope of the conduit.
The maximum flow that can be conveyed through a conduit is the full normal flow value. Any flow in excess of this entering the inlet node is either lost from the system or can pond atop the inlet node and be re-introduced into the conduit as capacity becomes available.
Kinematic wave routing allows flow and area to vary both spatially and temporally within a conduit. This can result in attenuated and delayed outflow hydrographs as inflow is routed through the channel. However this form of routing cannot account for backwater effects, entrance/exit losses, flow reversal, or pressurized flow, and is also restricted to dendritic network layouts. It can usually maintain numerical stability with moderately large time steps, on the order of 1 to 5 minutes. If the aforementioned effects are not expected to be significant then this alternative can be an accurate and efficient routing method, especially for long-term simulations.
Dynamic wave routing solves the complete one-dimensional Saint Venant flow equations and therefore produces the most theoretically accurate results. These equations consist of the continuity and momentum equations for conduits and a volume continuity equation at nodes.
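For reference, a standard textbook form of the one-dimensional Saint Venant equations, written in terms of flow area A, discharge Q, water-surface elevation H, and friction slope Sf, is shown below; this is a generic formulation rather than a transcription of the equations as coded in SWMM.

```latex
% 1-D continuity and momentum equations for unsteady open-channel flow
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0
\qquad\qquad
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + g A \frac{\partial H}{\partial x}
  + g A S_{f} = 0
```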
With this form of routing it is possible to represent pressurized flow when a closed conduit becomes full, such that flows can exceed the full normal flow value. Flooding occurs when the water depth at a node exceeds the maximum available depth, and the excess flow is either lost from the system or can pond atop the node and re-enter the drainage system.
Dynamic wave routing can account for channel storage, backwater, entrance/exit losses, flow reversal, and pressurized flow. Because it couples together the solution for both water levels at nodes and flow in conduits it can be applied to any general network layout, even those containing multiple downstream diversions and loops. It is the method of choice for systems subjected to significant backwater effects due to downstream flow restrictions and with flow regulation via weirs and orifices. This generality comes at a price of having to use much smaller time steps, on the order of a minute or less (SWMM can automatically reduce the user-defined maximum time step as needed to maintain numerical stability).
== Integrated hydrology/hydraulics ==
One of the great advances in SWMM 5 was the integration of urban/suburban subsurface flow with the hydraulic computations of the drainage network. This advance is a tremendous improvement over the separate subsurface hydrologic and hydraulic computations of the previous versions of SWMM because it allows the modeler to conceptually model the same interactions that occur physically in the real open channel/shallow aquifer environment. The SWMM 5 numerical engine calculates the surface runoff, subsurface hydrology and assigns the current climate data at either the wet or dry hydrologic time step. The hydraulic calculations for the links, nodes, control rules and boundary conditions of the network are then computed at either a fixed or variable time step within the hydrologic time step by using interpolation routines and the simulated hydrologic starting and ending values. The versions of SWMM 5 greater than SWMM 5.1.007 allow the modeler to simulate climate changes by globally changing the rainfall, temperature, and evaporation using monthly adjustments.
An example of this integration was the consolidation of the different SWMM 4 link types from the Runoff, Transport and Extran blocks into one unified group of closed conduit and open channel link types in SWMM 5, along with a unified collection of node types (Figure 2).
SWMM contains a flexible set of hydraulic modeling capabilities used to route runoff and external inflows through the drainage system network of pipes, channels, storage/treatment units, and diversion structures. These include the ability to do the following:
Handle drainage networks of unlimited size
use a wide variety of standard closed and open conduit shapes as well as natural channels
model special elements, such as storage/treatment units, flow dividers, pumps, weirs, and orifices
apply external flows and water quality inputs from surface runoff, groundwater interflow, rainfall-dependent infiltration/inflow, dry weather sanitary flow, and user-defined inflows
utilize either kinematic wave or full dynamic wave flow routing methods
model various flow regimes, such as backwater, surcharging, reverse flow, and surface ponding
apply user-defined dynamic control rules to simulate the operation of pumps, orifice openings, and weir crest levels
percolation of infiltrated water into groundwater layers
interflow between groundwater and the drainage system
nonlinear reservoir routing of overland flow
runoff reduction via LID controls.
== Low-impact development components ==
The low-impact development (LID) function was new to SWMM 5.0.019/20/21/22 and SWMM 5.1+. It is integrated within the subcatchment and allows further refinement of the overflows, infiltration flow and evaporation in rain barrels, swales, permeable paving, green roofs, rain gardens, bioretention and infiltration trenches. The term low-impact development (Canada/US) is used in Canada and the United States to describe a land planning and engineering design approach to managing stormwater runoff. In recent years many states in the US have adopted LID concepts and standards to enhance their approach to reducing the harmful potential for storm water pollution in new construction projects. LID takes many forms but can generally be thought of as an effort to minimize or prevent concentrated flows of storm water leaving a site. To do this the LID practice suggests that when impervious surfaces (concrete, etc.) are used, they are periodically interrupted by pervious areas which can allow the storm water to infiltrate (soak into the earth).
A variety of sub-processes in each LID can be defined in SWMM5 such as: surface, pavement, soil, storage, drainmat and drain.
Each type of LID has limitations on the type of sub-process allowed by SWMM 5. SWMM 5 has a good reporting feature: an LID summary report can be written to the rpt file and to an external report file, in which the surface depth, soil moisture, storage depth, surface inflow, evaporation, surface infiltration, soil percolation, storage infiltration, surface outflow and the LID continuity error can be seen. There can be multiple LIDs per subcatchment, and complicated LID sub-networks and processes inside the subcatchments of SWMM 5 generally do not cause continuity issues that cannot be solved by a smaller wet hydrology time step. The types of SWMM 5 LID compartments are: storage, underdrain, surface, pavement and soil. A bio-retention cell has storage, underdrain and surface compartments; an infiltration trench LID has storage, underdrain and surface compartments; a porous pavement LID has storage, underdrain and pavement compartments; a rain barrel has only storage and underdrain compartments; and a vegetative swale LID has a single surface compartment. Each type of LID shares different underlying compartment objects in SWMM 5, which are called layers.
The LID layer equations can be solved numerically at each runoff time step to determine how an inflow hydrograph to the LID unit is converted into some combination of runoff hydrograph, sub-surface storage, sub-surface drainage, and infiltration into the surrounding native soil. In addition to street planters and green roofs, the bio-retention model just described can be used to represent rain gardens by eliminating the storage layer, and porous pavement systems by replacing the soil layer with a pavement layer.
The surface layer of the LID receives both direct rainfall and runon from other areas. It loses water through infiltration into the soil layer below it, by evapotranspiration (ET) of any water stored in depression storage and vegetative capture, and by any surface runoff that might occur. The soil layer contains an amended soil mix that can support vegetative growth. It receives infiltration from the surface layer and loses water through ET and by percolation into the storage layer below it. The storage layer consists of coarse crushed stone or gravel. It receives percolation from the soil zone above it and loses water by either infiltration into the underlying natural soil or by outflow through a perforated pipe underdrain system.
New as of July 2013, the EPA's National Stormwater Calculator is a Windows desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States. Estimates are based on local soil conditions, land cover, and historic rainfall records. The Calculator accesses several national databases that provide soil, topography, rainfall, and evaporation information for the chosen site. The user supplies information about the site's land cover and selects the types of low impact development (LID) controls they would like to use on-site. The LID Control features in SWMM 5.1.013 include the following types of green infrastructure:
StreetPlanter: Bioretention cells are depressions that contain vegetation grown in an engineered soil mixture placed above a gravel drainage bed. They provide storage, infiltration and evaporation of both direct rainfall and runoff captured from surrounding areas. Street planters consist of concrete boxes filled with an engineered soil that supports vegetative growth. Beneath the soil is a gravel bed that provides additional storage. The walls of a planter extend 3 to 12 inches above the soil bed to allow for ponding within the unit. The thickness of the soil growing medium ranges from 6 to 24 inches while gravel beds are 6 to 18 inches in depth. The planter's capture ratio is the ratio of its area to the impervious area whose runoff it captures.
Raingarden: Rain gardens are a type of bio-retention cell consisting of just the engineered soil layer with no gravel bed below it. Rain Gardens are shallow depressions filled with an engineered soil mix that supports vegetative growth. They are usually used on individual home lots to capture roof runoff. Typical soil depths range from 6 to 18 inches. The capture ratio is the ratio of the rain garden's area to the impervious area that drains onto it.
GreenRoof: Green roofs are another variation of a bio-retention cell that have a soil layer lying atop a special drainage mat material that conveys excess percolated rainfall off of the roof. Green Roofs (also known as Vegetated Roofs) are bio-retention systems placed on roof surfaces that capture and temporarily store rainwater in a soil growing medium. They consist of a layered system of roofing designed to support plant growth and retain water for plant uptake while preventing ponding on the roof surface. The thickness used for the growing medium typically ranges from 3 to 6 inches.
InfilTrench: infiltration trenches are narrow ditches filled with gravel that intercept runoff from upslope impervious areas. They provide storage volume and additional time for captured runoff to infiltrate the native soil below.
PermPave or permeable pavements: Continuous Permeable Pavement systems are excavated areas filled with gravel and paved over with a porous concrete or asphalt mix. Modular Block systems are similar except that permeable block pavers are used instead. Normally all rainfall will immediately pass through the pavement into the gravel storage layer below it where it can infiltrate at natural rates into the site's native soil. Pavement layers are usually 4 to 6 inches in height while the gravel storage layer is typically 6 to 18 inches high. The Capture Ratio is the percent of the treated area (street or parking lot) that is replaced with permeable pavement.
Cistern: Rain barrels (or cisterns) are containers that collect roof runoff during storm events and can either release or re-use the rainwater during dry periods. Rain harvesting systems collect runoff from rooftops and convey it to a cistern tank where it can be used for non-potable water uses and on-site infiltration. The harvesting system is assumed to consist of a given number of fixed-sized cisterns per 1000 square feet of rooftop area captured. The water from each cistern is withdrawn at a constant rate and is assumed to be consumed or infiltrated entirely on-site.
VegSwale: Vegetative swales are channels or depressed areas with sloping sides covered with grass and other vegetation. They slow down the conveyance of collected runoff and allow it more time to infiltrate the native soil beneath it. Infiltration basins are shallow depressions filled with grass or other natural vegetation that capture runoff from adjoining areas and allow it to infiltrate into the soil.
Wet ponds are frequently used for water quality improvement, groundwater recharge, flood protection, aesthetic improvement or any combination of these. Sometimes they act as a replacement for the natural absorption of a forest or other natural process that was lost when an area is developed. As such, these structures are designed to blend into neighborhoods and are viewed as an amenity.
Dry ponds temporarily store water after a storm, but eventually empty out at a controlled rate to a downstream water body.
Sand filters generally control runoff water quality, providing very limited flow rate control. A typical sand filter system consists of two or three chambers or basins. The first is the sedimentation chamber, which removes floatables and heavy sediments. The second is the filtration chamber, which removes additional pollutants by filtering the runoff through a sand bed. The third is the discharge chamber. An infiltration trench is a type of best management practice (BMP) that is used to manage stormwater runoff, prevent flooding and downstream erosion, and improve water quality in an adjacent river, stream, lake or bay. It is a shallow excavated trench filled with gravel or crushed stone that is designed to infiltrate stormwater through permeable soils into the groundwater aquifer.
A vegetated filter strip is a type of buffer strip: an area of vegetation, generally narrow and long, that slows the rate of runoff, allowing sediments, organic matter, and other pollutants that are being conveyed by the water to be removed by settling out. Filter strips reduce erosion and the accompanying stream pollution, and can be a best management practice.
Other LID like concepts around the world include sustainable drainage system (SUDS). The idea behind SUDS is to try to replicate natural systems that use cost effective solutions with low environmental impact to drain away dirty and surface water run-off through collection, storage, and cleaning before allowing it to be released slowly back into the environment, such as into watercourses.
In addition, the following features can also be simulated using the features of SWMM 5 (storage ponds, seepage, orifices, weirs, and evaporation from natural channels): constructed wetlands, wet ponds, dry ponds, infiltration basins, non-surface sand filters, and vegetated filter strips. A WetPark would be a combination of wet and dry ponds and LID features; a WetPark is also considered a constructed wetland.
== SWMM5 components ==
The SWMM 5.0.001 to 5.1.022 main components are rain gages, watersheds, LID controls or BMP features such as Wet and Dry Ponds, nodes, links, pollutants, landuses, time patterns, curves, time series, controls, transects, aquifers, unit hydrographs, snowmelt and shapes (Table 3). Other related objects are the types of Nodes and the Link Shapes. The purpose of the objects is to simulate the major components of the hydrologic cycle, the hydraulic components of the drainage, sewer or stormwater network, and the buildup/washoff functions that allow the simulation of water quality constituents. A watershed simulation starts with a precipitation time history. SWMM 5 has many types of open and closed pipes and channels: dummy, circular, filled circular, rectangular closed, rectangular open, trapezoidal, triangular, parabolic, power function, rectangular triangle, rectangle round, modified baskethandle, horizontal ellipse, vertical ellipse, arch, eggshaped, horseshoe, gothic, catenary, semielliptical, baskethandle, semicircular, irregular, custom and force main.
The major objects or hydrology and hydraulic components in SWMM 5 are:
GAGE rain gage
SUBCATCH subcatchment
NODE conveyance system node
LINK conveyance system link
POLLUT pollutant
LANDUSE land use category
TIMEPATTERN dry weather flow time pattern
CURVE generic table of values
TSERIES generic time series of values
CONTROL conveyance system control rules
TRANSECT irregular channel cross-section
AQUIFER groundwater aquifer
UNITHYD RDII unit hydrograph
SNOWMELT snowmelt parameter set
SHAPE custom conduit shape
LID LID treatment units
The major overall components are named as follows in the SWMM 5 input file and in the C code of the simulation engine: gage, subcatch, node, link, pollut, landuse, timepattern, curve, tseries, control, transect, aquifer, unithyd, snowmelt, shape and lid. The subsets of possible nodes are: junction, outfall, storage and divider. Storage nodes are either tabular, with a depth/area table, or functional, with a relationship between area and depth. Possible node inflows include: external_inflow, dry_weather_inflow, wet_weather_inflow, groundwater_inflow, rdii_inflow, flow_inflow, concen_inflow, and mass_inflow. The dry weather inflows can include the possible patterns: monthly_pattern, daily_pattern, hourly_pattern, and weekend_pattern.
The SWMM 5 component structure allows the user to choose which major hydrology and hydraulic components are used during the simulation:
Rainfall/runoff with infiltration options: horton, modified horton, green ampt and curve number
RDII
Water Quality
Groundwater
Snowmelt
Flow Routing with Routing Options: Steady State, Kinematic Wave and Dynamic Wave
== SWMM 3 and 4 to 5 converter ==
The SWMM 3 and SWMM 4 converter can convert up to two files from the earlier SWMM 3 and 4 versions at one time to SWMM 5. Typically one would convert a Runoff and Transport file to SWMM 5 or a Runoff and Extran file to SWMM 5. If there is a combination of a SWMM 4 Runoff, Transport and Extran network then it will have to be converted in pieces and the two data sets will have to be copied and pasted together to make one SWMM 5 data set. The x,y coordinate file is only necessary if there are not existing x,y coordinates on the D1 line of the SWMM 4 Extran input data set. The command File=>Define Ini File can be used to define the location of the ini file. The ini file will save the conversion project input data files and directories.
The SWMM 3 and SWMM 3.5 files are fixed format. The SWMM 4 files are free format. The converter will detect which version of SWMM is being used. The converted files can be combined using a text editor to merge the created inp files.
== SWMM-CAT Climate Change AddOn ==
The Storm Water Management Model Climate Adjustment Tool (SWMM-CAT) is an addition to SWMM5 (December 2014). It is a simple-to-use software utility that allows future climate change projections to be incorporated into the Storm Water Management Model (SWMM). SWMM was updated to accept a set of monthly adjustment factors for each of these time series that could represent the impact of future changes in climatic conditions. SWMM-CAT provides a set of location-specific adjustments derived from global climate change models run as part of the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project Phase 3 (CMIP3) archive (Figure 4). Adjustments can be applied on a monthly basis to air temperature, evaporation rates, and precipitation, as well as to the 24-hour design storm at different recurrence intervals. Downscaled results from the CMIP3 archive were generated and converted into changes with respect to historical values by USEPA's CREAT project.
The following steps are used to select a set of adjustments to apply to SWMM5:
1) Enter the latitude and longitude coordinates of the location if available or its 5-digit zip code. SWMM-CAT will display a range of climate change outcomes for the CMIP3 results closest to the location.
2) Select whether to use climate change projections based on either a near-term or far-term projection period. The displayed climate change outcomes will be updated to reflect the selection.
3) Select a climate change outcome to save to SWMM. There are three choices that span the range of outcomes produced by the different global climate models used in the CMIP3 project. The Hot/Dry outcome represents a model whose average temperature change was on the high end and whose average rainfall change was on the lower end of all model projections. The Warm/Wet outcome represents a model whose average temperature change was on the lower end and whose average rainfall change was on the wetter end of the spectrum. The Median outcome is for a model whose temperature and rainfall changes were closest to the median of all models.
4) Click the Save Adjustments to SWMM link to bring up a dialog form that will allow the selection of an existing SWMM project file to save the adjustments to. The form will also allow the selection of which type of adjustments (monthly temperature, evaporation, rainfall, or 24-hour design storm) to save. Conversion of temperature and evaporation units is automatically handled depending on the unit system (US or SI) detected in the SWMM file.
== EPA stormwater calculator based on SWMM5 ==
Other external programs that aid in the generation of data for the EPA SWMM 5 model include: SUSTAIN, BASINS, SSOAP, and the EPA’s National Stormwater Calculator (SWC) which is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The estimates are based on local soil conditions, land cover, and historic rainfall records (Figure 5).
== SWMM platforms ==
The SWMM5 engine is used by a variety of software packages, including many commercial software packages. Some of these software packages include:
EPA-SWMM from EPA
ICM SWMM from Autodesk Water Infrastructure in Autodesk
InfoDrainage, from Autodesk Water Infrastructure in Autodesk
InfoWorks ICM, which includes RDII, water quality, and hydrology components from SWMM5, from Autodesk Water Infrastructure
Autodesk Storm and Sanitary Analysis from Autodesk
PCSWMM
MIKE URBAN
SewerGEMS and CivilStorm from Bentley Systems, Inc.
Fluidit Sewer and Fluidit Storm
Flood Modeller by Jacobs
GeoSWMM by Utilian
Giswater
GISpipe GIS-based EPANET and SWMM integration software.
PySWMM by OpenWaterAnalytics
AquaTwin-Sewer by Aquinuity
Tuflow by Tuflow
InfoSWMM from Autodesk Water Infrastructure in Autodesk
XPSWMM (modified SWMM4 engine) from Autodesk Water Infrastructure in Autodesk
== See also ==
SWAT model – Soil & Water Assessment Tool
Stochastic empirical loading and dilution model – Stormwater quality model
WAFLEX – Model of rivers
Hydrology – Science of the movement, distribution, and quality of water on Earth
Hydraulics – Applied engineering involving liquids
Surface runoff – Flow of excess rainwater not infiltrating in the ground over its surface
Precipitation (meteorology) – Product of the condensation of atmospheric water vapor that falls under gravity
Antecedent moisture – hydrologic term describing the relative wetness condition of a catchment
Evapotranspiration – Natural processes of water movement within the water cycle
EPANET – Water distribution system modeling software
Rainfall – Form of precipitation
Hydrological transport model – Type of mathematical model
Computer simulation – Process of mathematical modelling, performed on a computer
Water pollution – Contamination of water bodies
Water quality – Assessment against standards for use
Surface-water hydrology – Sub-field of hydrology concerned with above-earth water
== References ==
== External links ==
EPA SWMM 5.2 Download
EPA National Stormwater Calculator - SWMM 5 Based
"What Is Stormwater Management and Why Is It Important?". Expert Environmental Consulting -. 2018-01-31. Retrieved 2023-12-11. | Wikipedia/Storm_Water_Management_Model |
A hydrograph is a graph showing the rate of flow (discharge) versus time past a specific point in a river, channel, or conduit carrying flow. The rate of flow is typically expressed in units of cubic meters per second (m³/s) or cubic feet per second (cfs).
Hydrographs often relate changes of precipitation to changes in discharge over time. The term can also refer to a graph showing the volume of water reaching a particular outfall, or location in a sewerage network. Graphs are commonly used in the design of sewerage, more specifically, the design of surface water sewerage systems and combined sewers.
== Terminology ==
Other related terms include:
Approach Segment
the river flow before the storm (antecedent flow).
Rising limb
The rising limb of the hydrograph, also known as concentration curve, reflects a prolonged increase in discharge from a catchment area, typically in response to a rainfall event.
Peak discharge
the highest point on the hydrograph when the rate of discharge is greatest.
Recession (or falling) limb
The recession limb extends from the peak flow rate onward. The end of stormflow (a.k.a. quickflow or direct runoff) and the return to groundwater-derived flow (base flow) is often taken as the point of inflection of the recession limb. The recession limb represents the withdrawal of water from the storage built up in the basin during the earlier phases of the hydrograph.
Lag-1
autocorrelation method to compare streamflow data to itself by shifting or "lagging" the initial discharge dataset by 1 time unit. A Lag-10 would mean the initial data are shifted 10 days and then compared to the unshifted version of the data. Not to be confused with lag time.
Lag time
the time interval from the maximum rainfall to the peak discharge.
Time to peak
time interval from the start of rainfall to the peak discharge.
Time of concentration
the time from the end of the precipitation period to the end of the quick–response runoff in the hydrograph.
== Types ==
Types of hydrographs include:
Stream discharge hydrographs
Stream stage hydrographs
Precipitation hydrographs
Storm hydrographs
Flood hydrographs
Annual hydrographs a.k.a. regimes
Direct Runoff Hydrograph
Effective Runoff Hydrograph
Raster Hydrograph
Lag-1 Hydrograph
Storage opportunities in the drainage network (e.g., lakes, reservoirs, wetlands, channel and bank storage capacity)
== Baseflow separation ==
A stream hydrograph is commonly used in determining the influence of different hydrologic processes on discharge from the subject catchment. Because the timing, magnitude, and duration of groundwater return flow differ so greatly from those of direct runoff, separating and understanding the influence of these distinct processes is key to analyzing and simulating the likely hydrologic effects of various land use, water use, weather, and climate conditions and changes.
However, the process of separating “baseflow” from “direct runoff” is an inexact science. In part this is because these two concepts are not, themselves, entirely distinct and unrelated. Return flow from groundwater increases along with overland flow from saturated or impermeable areas during and after a storm event; moreover, a particular water molecule can easily move through both pathways en route to the watershed outlet. Therefore, separation of a purely “baseflow component” in a hydrograph is a somewhat arbitrary exercise. Nevertheless, various graphical and empirical techniques have been developed to perform these hydrograph separations. The separation of base flow from direct runoff can be an important first step in developing rainfall-runoff models for a watershed of interest—for example, in developing and applying unit hydrographs as described below.
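One widely used empirical technique of this kind is the Lyne–Hollick recursive digital filter, which removes a high-frequency "quickflow" signal from the record and treats the remainder as baseflow. The Python sketch below applies that filter to a hypothetical daily discharge series; it is offered only as an illustration of automated hydrograph separation, and the filter parameter of 0.925 is a commonly cited default rather than a value from this article.

```python
# One common (empirical) baseflow-separation technique is the Lyne-Hollick
# recursive digital filter. This is a generic sketch of that filter, not a
# method prescribed by this article; alpha = 0.925 is a typical filter value.

def lyne_hollick_baseflow(Q, alpha=0.925):
    """Split a streamflow series Q (list of discharges) into baseflow,
    constraining baseflow to lie between 0 and the total flow."""
    quick = 0.0
    baseflow = []
    prev_q = Q[0]
    for q in Q:
        quick = alpha * quick + 0.5 * (1.0 + alpha) * (q - prev_q)
        quick = min(max(quick, 0.0), q)      # keep quickflow physically plausible
        baseflow.append(q - quick)
        prev_q = q
    return baseflow

daily_flow = [5, 5, 30, 80, 60, 40, 25, 15, 10, 8, 7, 6]   # hypothetical m^3/s
print(lyne_hollick_baseflow(daily_flow))
```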
== Unit hydrograph ==
A unit hydrograph (UH) is the hypothetical unit response of a watershed (in terms of runoff volume and timing) to a unit input of rainfall. It can be defined as the direct runoff hydrograph (DRH) resulting from one unit (e.g., one cm or one inch) of effective rainfall occurring uniformly over that watershed at a uniform rate over a unit period of time. As a UH is applicable only to the direct runoff component of a hydrograph (i.e., surface runoff), a separate determination of the baseflow component is required.
A UH is specific to a particular watershed, and specific to a particular length of time corresponding to the duration of the effective rainfall. That is, the UH is specified as being the 1-hour, 6-hour, or 24-hour UH, or any other length of time up to the time of concentration of direct runoff at the watershed outlet. Thus, for a given watershed, there can be many unit hydrographs, each one corresponding to a different duration of effective rainfall.
The UH technique provides a practical and relatively easy-to-apply tool for quantifying the effect of a unit of rainfall on the corresponding runoff from a particular drainage basin. UH theory assumes that a watershed's runoff response is linear, time-invariant, and that the effective rainfall occurs uniformly over the entirety of the watershed. In the real world, none of these assumptions are strictly true. Nevertheless, the application of UH methods typically yields a reasonable approximation of the flood response of natural watersheds. The linear assumptions underlying UH theory allow for the variation in storm intensity over time (i.e., the storm hyetograph) to be simulated by applying the principles of superposition and proportionality to separate storm components to determine the resulting cumulative hydrograph. This allows for a relatively straightforward calculation of the hydrograph response to any arbitrary rain event.
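In practice, superposition and proportionality reduce to a discrete convolution of the effective-rainfall hyetograph with the unit hydrograph ordinates. The sketch below illustrates this with NumPy using hypothetical ordinates and rainfall depths; units and values are examples only.

```python
import numpy as np

# Sketch of the superposition/proportionality idea: the direct-runoff hydrograph
# is the discrete convolution of the effective-rainfall hyetograph with the
# unit-hydrograph ordinates. The ordinates and rainfall below are hypothetical.

uh_ordinates = np.array([0.0, 10.0, 30.0, 20.0, 10.0, 5.0, 0.0])  # cfs per inch of excess rain
excess_rain = np.array([0.5, 1.2, 0.3])                           # inches per time step

direct_runoff = np.convolve(excess_rain, uh_ordinates)            # cfs at each time step
print(direct_runoff)
# Total hydrograph = direct runoff + separately estimated baseflow.
```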
An instantaneous unit hydrograph is a further refinement of the concept; for an IUH, the input rainfall is assumed to all take place at a discrete point in time (obviously, this isn't the case for actual rainstorms). Making this assumption can greatly simplify the analysis involved in constructing a unit hydrograph, and it is necessary for the creation of a geomorphologic instantaneous unit hydrograph.
The creation of a GIUH is possible given nothing more than topologic data for a particular drainage basin. In fact, only the number of streams of a given order, the mean length of streams of a given order, and the mean land area draining directly to streams of a given order are absolutely required (and can be estimated rather than explicitly calculated if necessary). It is therefore possible to calculate a GIUH for a basin without any data about stream height or flow, which may not always be available.
== Subsurface hydrology hydrograph ==
In subsurface hydrology (hydrogeology), a hydrograph is a record of the water level (the observed hydraulic head in wells screened across an aquifer).
Typically, a hydrograph is recorded for monitoring of heads in aquifers during non-test conditions (e.g., to observe the seasonal fluctuations in an aquifer). When an aquifer test is being performed, the resulting observations are typically called drawdown, since they are subtracted from pre-test levels and often only the change in water level is dealt with.
== Raster hydrograph ==
Raster hydrographs are pixel-based plots for visualizing and identifying variations and changes in large multidimensional data sets. Originally developed by Keim (2000), they were first applied in hydrology by Koehler (2004) as a means of highlighting inter-annual (long-term) and intra-annual (e.g., seasonality) changes in streamflow.
The raster hydrographs in the USGS WaterWatch, like those developed by Koehler, depict years on the y-axis and days along the x-axis. Users can choose to plot streamflow (actual values or log values), streamflow percentile, or streamflow class (from 1, for low flow, to 7 for high flow), for Daily, 7-Day, 14-Day, and 28-Day streamflow. For a more comprehensive description of raster hydrographs, see Strandhagen et al. (2006).
== Lag-1 hydrograph ==
A Lag-1 hydrograph is a graph of discharge which can be accomplished without a time axis (Koehler 2022). This technique allows data properties such as Q, dQ/dt, and d2Q/dt2, and trends of increasing, decreasing or no change flow to be readily seen and understood on a single graph. Flow pulse reference lines can easily be added and interpreted. The methodology is based on the time-series serial correlation lag-1 graph and uses the normally unwanted (but still valuable) autocorrelation present within the streamflow data.
The x-axis represents the discharge for a date, Qt, while the y-axis represents the discharge for the next day, Qt+1.
Data preparation and plotting methods are identical to an autocorrelation lag-1 plot, where 1 indicates a 1-day or daily time step: the discharge time series is plotted against a copy of itself shifted by one time step. It is critical that the temporal sequence of the data is maintained. Thinking of the x values as "flow for today" and the y values as "flow for tomorrow" helps visualize the order of the data.
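A minimal sketch of constructing such a plot from a daily discharge series follows; the data are hypothetical, and matplotlib is used only as one convenient plotting library.

```python
import matplotlib.pyplot as plt

# Sketch of constructing a Lag-1 hydrograph as described above: pair each
# day's discharge (x) with the next day's discharge (y), preserving the
# temporal order. The discharge series below is hypothetical.

daily_q = [12, 15, 40, 95, 70, 50, 35, 25, 20, 17, 15, 14]   # m^3/s

q_today = daily_q[:-1]       # Q_t   ("flow for today")
q_tomorrow = daily_q[1:]     # Q_t+1 ("flow for tomorrow")

plt.plot(q_today, q_tomorrow, marker="o")             # connect points in time order
plt.plot([0, max(daily_q)], [0, max(daily_q)], "--")  # 1:1 line (no-change reference)
plt.xlabel("Discharge on day t")
plt.ylabel("Discharge on day t+1")
plt.title("Lag-1 hydrograph (hypothetical data)")
plt.show()
```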
== See also ==
Aquifer test
Hydrogeology
Baseflow
Routing (hydrology)
Runoff model (reservoir)
Stream gauge
Surface water
== References ==
== External links ==
The U.S. Geological Survey (USGS) offers real-time streamflow data for thousands of streams in the United States.
The U.S. Geological Survey (USGS) also offers an online toolkit to create a raster hydrograph for any of its streamflow gaging stations in the United States.
SCS Dimensionless Unit Hydrograph.
SERC activity and Matlab code for calculating and using Unit Hydrograph. | Wikipedia/Hydrograph |
The Bradshaw Model is an idealised geographical model describing how a river's characteristics vary between its upper course and lower course. It indicates that discharge, occupied channel width, channel depth, and average load quantity increase downstream, while load particle size, channel bed roughness, and gradient decrease. These features are represented by triangles; an increase in the width of a triangle represents an increase in the variable. The Bradshaw model describes the characteristics expected of a typical river, but because rivers exist in ever-changing environments, not all rivers conform to the model. The model is therefore often used as an ideal against which natural rivers are compared.
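Because the model is purely qualitative, its content can be summarized as a simple mapping of variables to downstream trends. The sketch below encodes the relationships listed above; the representation itself is an illustrative choice, not part of the model.

```python
# Qualitative downstream trends in the Bradshaw model, encoded as a
# simple mapping (illustrative only; the model itself is non-numeric).
BRADSHAW_TRENDS = {
    "discharge": "increases",
    "occupied channel width": "increases",
    "channel depth": "increases",
    "average load quantity": "increases",
    "load particle size": "decreases",
    "channel bed roughness": "decreases",
    "gradient": "decreases",
}

for variable, trend in BRADSHAW_TRENDS.items():
    print(f"{variable}: {trend} downstream")
```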
== References == | Wikipedia/Bradshaw_model |
The Canadian Pacific Railway (French: Chemin de fer Canadien Pacifique) (reporting marks CP, CPAA, MILW, SOO), also known simply as CPR or Canadian Pacific and formerly as CP Rail (1968–1996), is a Canadian Class I railway incorporated in 1881. The railway is owned by Canadian Pacific Kansas City Limited, known until 2023 as Canadian Pacific Railway Limited, which began operations as legal owner in a corporate restructuring in 2001.
The railway is headquartered in Calgary, Alberta. In 2023, the railway owned approximately 20,100 kilometres (12,500 mi) of track in seven provinces of Canada and into the United States, stretching from Montreal to Vancouver, and as far north as Edmonton. Its rail network also served Minneapolis–St. Paul, Milwaukee, Detroit, Chicago, and Albany, New York, in the United States.
The railway was first built between eastern Canada and British Columbia between 1875 and 1885 (connecting with Ottawa Valley and Georgian Bay area lines built earlier), fulfilling a commitment extended to British Columbia when it entered Confederation in 1871; the CPR was Canada's first transcontinental railway. Primarily a freight railway, the CPR was for decades the only practical means of long-distance passenger transport in most regions of Canada and was instrumental in the colonization and development of Western Canada. The CPR became one of the largest and most powerful companies in Canada, a position it held as late as 1975. The company acquired two American lines in 2009: the Dakota, Minnesota and Eastern Railroad (DM&E) and the Iowa, Chicago and Eastern Railroad (IC&E). The company also owns the Indiana Harbor Belt Railroad, a terminal railroad based in Hammond, Indiana, jointly with Conrail Shared Assets Operations. CPR purchased the Kansas City Southern Railway in December 2021 for US$31 billion. On April 14, 2023, KCS became a wholly owned subsidiary of CPR, and both CPR and its subsidiaries began doing business under the name of its parent company, CPKC.
The CPR is publicly traded on both the Toronto Stock Exchange and the New York Stock Exchange under the ticker CP. Its U.S. headquarters are in Minneapolis. As of March 30, 2023, the largest shareholder of Canadian Pacific stock was TCI Fund Management Limited, a London-based hedge fund that owned 6% of the company.
== History ==
The creation of the Canadian Pacific Railway was undertaken as the National Dream by the Conservative government of John A. Macdonald, together with mining magnate Alexander Tilloch Galt. As a condition for joining the Canadian Confederation, British Columbia had insisted on a transport link to the East, connecting it with the rest of the Confederation. In 1873, Macdonald and other high-ranking politicians, who had taken bribes in what became known as the Pacific Scandal, granted contracts to the Canada Pacific Railway Company (unrelated to the current company) rather than to the Inter-Ocean Railway Company, which was thought to have connections to the Northern Pacific Railway Company in the United States. After the scandal, the Conservatives were removed from power, and Alexander Mackenzie, the new Liberal prime minister, ordered construction of the railway under the supervision of the Department of Public Works.
Enabled by the CPR Act of 1874, work began in 1875 on the Lake Superior to Manitoba section of the CPR. The ceremonial sod-turning at Westfort on June 1, 1875, was prominently reported in the June 10 edition of the Toronto Globe. It noted that a crowd of "upwards of 500 ladies and gentlemen" gathered to celebrate the event on the left bank of the Kaministiquia River in the District of Thunder Bay, about four miles upriver from Fort William. Once completed in 1882 with a last spike at Feist Lake, near Vermilion Bay, Ontario, the line was turned over to the newly minted private Canadian Pacific Railway company. In 1883, the first wheat shipment from Manitoba was transported over this line to the Lakehead (Fort William and Port Arthur) on Lake Superior.
Macdonald would later return as prime minister and adopt a more aggressive construction policy: bonds were floated in London, and tenders were called to complete sections of the railway in British Columbia. American contractor Andrew Onderdonk was selected, and his men began construction on May 15, 1880.
In October 1880, a new consortium signed a contract with the Macdonald government, agreeing to build the railway for $25 million in credit and 25 million acres (100,000 km2) of land. In addition, the government defrayed surveying costs and exempted the railway from property taxes for 20 years.
A beaver was chosen as the railway's logo in honour of Donald Smith, 1st Baron Strathcona and Mount Royal, who had risen from factor to governor of the Hudson's Bay Company over a lengthy career in the beaver fur trade.
=== Building the railway, 1881–1886 ===
Building the railway took over four years. The Canadian Pacific Railway began its westward expansion from Bonfield, Ontario, where the first spike was driven into a sunken railway tie. That was the point where the Canada Central Railway (CCR) extension ended. The CCR started in Brockville and extended to Pembroke. It then followed a westward route along the Ottawa River and continued to Mattawa at the confluence of the Mattawa and Ottawa rivers. It then proceeded to Bonfield.
It was presumed that the railway would travel through the rich "fertile belt" of the North Saskatchewan River Valley and cross the Rocky Mountains via the Yellowhead Pass. However, a more southerly route across the arid Palliser's Triangle in Saskatchewan and via Kicking Horse Pass and down the Field Hill to the Rocky Mountain Trench was chosen.
In 1881, construction progressed at a pace too slow for the railway's officials who, in 1882, hired the renowned railway executive William Cornelius Van Horne to oversee construction. Van Horne stated that he would have 800 km (500 mi) of main line built in 1882. Floods delayed the start of the construction season, but over 672 km (418 mi) of main line, as well as sidings and branch lines, were built that year.
The Thunder Bay branch (west from Fort William) was completed in June 1882 by the Department of Railways and Canals and turned over to the company in May 1883. By the end of 1883, the railway had reached the Rocky Mountains, just 8 km (5.0 mi) east of Kicking Horse Pass. The treacherous 190 km (120 mi) of railway west of Fort William was completed by Purcell & Company, headed by "Canada's wealthiest and greatest railroad contractor," industrialist Hugh Ryan.
Many thousands of navvies worked on the railway; many were European immigrants. An unknown number of Stoney Nakoda also assisted in track laying and construction work in the Kicking Horse Pass region. In British Columbia, government contractors eventually hired 17,000 workers from China, known as "coolies". After 2½ months of hard labour, a worker could net as little as $16 ($485 in 2023, adjusted for inflation). Chinese labourers in British Columbia made only between 75 cents and $1.25 a day, paid in rice mats and not including expenses, leaving barely anything to send home. They did the most dangerous construction jobs, such as working with explosives to clear tunnels through rock. The exact number of Chinese workers who died is unknown, but historians estimate it at between 600 and 800.
By 1883, railway construction was progressing rapidly, but the CPR was in danger of running out of funds. In response, on January 31, 1884, the government passed the Railway Relief Bill, providing a further $22.5 million in loans to the CPR. The bill received royal assent on March 6, 1884.
In March 1885, the North-West Rebellion broke out in the District of Saskatchewan. Van Horne, in Ottawa at the time, suggested to the government that the CPR could transport troops to Qu'Appelle in the District of Assiniboia in 10 days. Some sections of track were incomplete or had not been used before, but the trip to Winnipeg was made in nine days and the rebellion was quickly suppressed. Controversially, the government subsequently reorganized the CPR's debt and provided a further $5 million loan, money the CPR desperately needed. Even after Van Horne's help in moving troops to Qu'Appelle, the government delayed giving its support to the CPR, owing to Macdonald pressuring George Stephen for additional benefits.
On November 7, 1885, the last spike was driven at Craigellachie, British Columbia. Four days earlier, the last spike of the Lake Superior section was driven in just west of Jackfish, Ontario. While the railway was completed four years after the original 1881 deadline, it was completed more than five years ahead of the new date of 1891 that Macdonald gave in 1881.
In Eastern Canada, the CPR had created a network of lines reaching from Quebec City to St. Thomas, Ontario, by 1885 – mainly by buying the Quebec, Montreal, Ottawa & Occidental Railway from the Quebec government and by creating a new railway company, the Ontario and Quebec Railway (O&Q). It also launched a fleet of Great Lakes ships to link its terminals. Through the O&Q, the CPR had effected purchases and long-term leases of several railways, and built a line between Perth, Ontario, and Toronto (completed on May 5, 1884) to connect these acquisitions. The CPR obtained a 999-year lease on the O&Q on January 4, 1884. In 1895, it acquired a minority interest in the Toronto, Hamilton and Buffalo Railway, giving it a link to New York and the Northeast United States.
=== 1886–1900 ===
The last spike in the CPR was driven on November 7, 1885, by one of its directors, Donald Smith.
The first transcontinental passenger train departed from Montreal's Dalhousie Station, at Berri Street and Notre Dame Street, at 8 pm on June 28, 1886, and arrived at Port Moody at noon on July 4. This train consisted of two baggage cars, a mail car, one second-class coach, two immigrant sleepers, two first-class coaches, two sleeping cars and a diner (several dining cars were used throughout the journey, as they were removed from the train during the night, with another one added the next morning).
By that time, however, the CPR had decided to move its western terminus from Port Moody to Granville, which was renamed "Vancouver" later that year. The first official train destined for Vancouver arrived on May 23, 1887, although the line had already been in use for three months. The CPR quickly became profitable, and all loans from the federal government were repaid years ahead of time. In 1888, a branch line was opened between Sudbury and Sault Ste. Marie where the CPR connected with the American railway system and its own steamships. That same year, work was started on a line from London, Ontario, to the Canada–US border at Windsor, Ontario. That line opened on June 12, 1890.
The CPR also leased the New Brunswick Railway in 1891 for 991 years, and built the International Railway of Maine, connecting Montreal with Saint John, New Brunswick, in 1889. The connection with Saint John on the Atlantic coast made the CPR the first truly transcontinental railway company in Canada and permitted trans-Atlantic cargo and passenger services to continue year-round when sea ice in the Gulf of St. Lawrence closed the port of Montreal during the winter months. By 1896, competition with the Great Northern Railway for traffic in southern British Columbia forced the CPR to construct a second line across the province, south of the original line. Van Horne, now president of the CPR, asked for government aid, and the government agreed to provide around $3.6 million to construct a railway from Lethbridge, Alberta, through Crowsnest Pass to the south shore of Kootenay Lake, in exchange for the CPR agreeing to reduce freight rates in perpetuity for key commodities shipped in Western Canada.
The controversial Crowsnest Pass Agreement effectively locked the eastbound rate on grain products and westbound rates on certain "settlers' effects" at the 1897 level. Although temporarily suspended during the First World War, it was not until 1983 that the "Crow Rate" was permanently replaced by the Western Grain Transportation Act, which allowed the gradual increase of grain shipping prices. The Crowsnest Pass line opened on June 18, 1898, and followed a complicated route through the maze of valleys and passes in southern British Columbia, rejoining the original mainline at Hope after crossing the Cascade Mountains via Coquihalla Pass.
The Southern Mainline, generally known as the Kettle Valley Railway in British Columbia, was built in response to the booming mining and smelting economy in southern British Columbia, and the tendency of the local geography to encourage and enable easier access from neighbouring US states than from Vancouver or the rest of Canada, which was viewed to be as much of a threat to national security as it was to the province's control of its own resources. The local passenger service was re-routed to this new southerly line, which connected numerous emergent small cities across the region. Independent railways and subsidiaries that were eventually merged into the CPR in connection with this route were the Shuswap and Okanagan Railway, the Kaslo and Slocan Railway, the Columbia and Kootenay Railway, the Columbia and Western Railway and various others.
==== Settlement of western Canada ====
Under the initial contract with the Canadian government to build the railway, the CPR was granted 100,000 square kilometres (25 million acres). Canadian Pacific then began an intense campaign to bring immigrants to Canada; its agents operated in many overseas locations, where immigrants were often sold a package that included passage on a CP ship, travel on a CP train and land sold by the CP railway. Land was priced at $2.50 an acre and up but required cultivation. To transport immigrants, Canadian Pacific developed a fleet of over a thousand Colonist cars, low-budget sleeper cars designed to transport immigrant families from eastern Canadian seaports to the west.
=== 1901–1914 ===
During the first decade of the 20th century, the CPR continued to build more lines. In 1908, the CPR opened a line connecting Toronto with Sudbury. Several operational improvements were also made to the railway in Western Canada.
On November 3, 1909, the Lethbridge Viaduct over the Oldman River valley at Lethbridge, Alberta, was opened. It is 1,624 metres (5,328 feet) long and, at its maximum, 96 metres (315 feet) high, making it one of the longest railway bridges in Canada. In 1916, the CPR replaced its line through Rogers Pass, which was prone to avalanches (the most serious of which killed 62 men in 1910) with the Connaught Tunnel, an eight-kilometre-long (5-mile) tunnel under Mount Macdonald that was, at the time of its opening, the longest railway tunnel in the Western Hemisphere.
On January 21, 1910, a passenger train derailed on the CPR line at the Spanish River bridge at Nairn, Ontario (near Sudbury), killing at least 43.
On January 3, 1912, the CPR acquired the Dominion Atlantic Railway, a railway that ran in western Nova Scotia. This acquisition gave the CPR a connection to Halifax, a significant port on the Atlantic Ocean. The CPR acquired the Quebec Central Railway on December 14, 1912.
During the late 19th century, the railway undertook an ambitious program of hotel construction, building Glacier House in Glacier National Park, Mount Stephen House at Field, British Columbia, the Château Frontenac in Quebec City and the Banff Springs Hotel. By then, the CPR had competition from three other transcontinental lines, all of them money-losers. In 1919, these lines were consolidated into the government-owned Canadian National Railways.
=== First World War ===
During the First World War, the CPR put the entire resources of the "world's greatest travel system" at the disposal of the British Empire: not only trains and tracks, but also its ships, shops, hotels, telegraphs and, above all, its people. Aiding the war effort meant transporting and billeting troops; building and supplying arms and munitions; and arming, lending and selling ships. Fifty-two CPR ships were pressed into service during the war, carrying more than a million troops and passengers and four million tons of cargo; twenty-seven survived and returned to the CPR. The CPR also helped the war effort with money and jobs, making loans and guarantees to the Allies of some $100 million. As a lasting tribute, the CPR commissioned three statues and 23 memorial tablets to commemorate the efforts of those who fought and those who died in the war. After the war, the federal government created Canadian National Railways (CNR, later CN) out of several bankrupt railways that fell into government hands during and after the conflict; CNR would become the CPR's main competitor in Canada. In 1923, Henry Worth Thornton replaced David Blyth Hanna as the second president of the CNR, and his competition spurred Edward Wentworth Beatty, the first Canadian-born president of the CPR, to action. During this period the railway's land grants were formalized.
=== Great Depression and the Second World War, 1929–1945 ===
The Great Depression, which lasted from 1929 until 1939, hit many companies heavily. While the CPR was affected, it was not affected to the extent of its rival CNR because it, unlike the CNR, was debt-free. The CPR scaled back on some of its passenger and freight services and stopped issuing dividends to its shareholders after 1932. Hard times led to the creation of new political parties such as the Social Credit movement and the Cooperative Commonwealth Federation, as well as popular protest in the form of the On-to-Ottawa Trek.
One highlight of the late 1930s, both for the railway and for Canada, was the visit of King George VI and Queen Elizabeth during their 1939 royal tour of Canada, the first time that the reigning monarch had visited the country. The CPR and the CNR shared the honours of pulling the royal train across the country, with the CPR undertaking the westbound journey from Quebec City to Vancouver. Later that year, the Second World War began. As it had done in World War I, the CPR devoted much of its resources to the war effort. It retooled its Angus Shops in Montreal to produce Valentine tanks and other armoured vehicles, and transported troops and resources across the country. Additionally, 22 of the CPR's ships went to war, 12 of which were sunk.
=== 1946–1978 ===
After the Second World War, the transportation industry in Canada changed. Where railways had previously provided almost universal freight and passenger services, cars, trucks and airplanes started to take traffic away from railways. This naturally helped the CPR's air and trucking operations, and the railway's freight operations continued to thrive hauling resource traffic and bulk commodities. However, passenger trains quickly became unprofitable. During the 1950s, the railway introduced new innovations in passenger service. In 1955, it introduced The Canadian, a new luxury transcontinental train. However, in the 1960s, the company started to pull out of passenger services, ending services on many of its branch lines. It also discontinued its secondary transcontinental train The Dominion in 1966, and in 1970, unsuccessfully applied to discontinue The Canadian. For the next eight years, it continued to apply to discontinue the service, and service on The Canadian declined markedly. On October 29, 1978, CP Rail transferred its passenger services to Via Rail, a new federal Crown corporation that is responsible for managing all intercity passenger service formerly handled by both CP Rail and CN. Via eventually took almost all of its passenger trains, including The Canadian, off CP's lines.
In 1968, as part of a corporate reorganization, each of the major operations, including rail, was organized as a separate subsidiary. The name of the railway was changed to CP Rail, and the parent company changed its name to Canadian Pacific Limited in 1971. Its air, express, telecommunications, hotel and real estate holdings were spun off, and ownership of the companies was transferred to Canadian Pacific Investments. The slogan was "TO THE FOUR CORNERS OF THE WORLD". The company discarded its beaver logo, adopting the new Multimark (which, when mirrored by an adjacent Multimark, creates a diamond appearance on a globe) that was used, with a different colour background, for each of its operations.
=== 1979–2001 ===
==== The 1979 Mississauga train derailment ====
On November 10, 1979, a derailment of a hazardous-materials train in Mississauga, Ontario, led to the evacuation of 200,000 people; there were no fatalities. Mississauga Mayor Hazel McCallion threatened to sue Canadian Pacific for the derailment. Part of the compromise was to accept GO Transit commuter rail service along the Galt Subdivision corridor up to Milton, Ontario; a limited number of trains ran along the Milton line on weekdays only. An extension to Cambridge, Ontario, has been proposed.
In 1984, CP Rail commenced construction of the Mount Macdonald Tunnel to augment the Connaught Tunnel under the Selkirk Mountains; the first revenue train passed through the tunnel in 1988. At 14.7 km (9 mi), it is the longest tunnel in the Americas. During the 1980s, the Soo Line Railroad, in which CP Rail still owned a controlling interest, underwent several changes: it acquired the Minneapolis, Northfield and Southern Railway in 1982, and on February 21, 1985, it obtained a controlling interest in the bankrupt Milwaukee Road, merging it into its system on January 1, 1986. In 1980, Canadian Pacific bought out the controlling interest in the Toronto, Hamilton and Buffalo Railway (TH&B) from Conrail and folded it into the Canadian Pacific System, retiring the TH&B name in 1985. In 1987, most of CPR's trackage in the Great Lakes region, including much of the original Soo Line, was spun off into a new railway, the Wisconsin Central, which was subsequently purchased by CN. Influenced by the Canada–U.S. Free Trade Agreement of 1989, which liberalized trade between the two nations, the CPR's expansion continued during the early 1990s: CP Rail gained full control of the Soo Line in 1990, adding "System" to its name, and bought the Delaware and Hudson Railway in 1991. These two acquisitions gave CP Rail routes to the major American cities of Chicago (via the Soo Line and Milwaukee Road) and New York City (via the D&H).
During the 1990s, both CP Rail and CN attempted unsuccessfully to buy out the eastern assets of the other, so as to permit further rationalization. In 1996, CP Rail moved its head office from Windsor Station in Montreal to Gulf Canada Square in Calgary and changed its name back to Canadian Pacific Railway.
A new subsidiary company, the St. Lawrence and Hudson Railway, was created to operate the money-losing lines in eastern North America, covering Quebec, southern and eastern Ontario, trackage rights to Chicago, Illinois (on Norfolk Southern lines from Detroit), as well as the Delaware and Hudson Railway in the northeastern United States. The new subsidiary, threatened with being sold off but freed to innovate, quickly spun off money-losing track to short lines, instituted scheduled freight service, and produced an unexpected turnaround in profitability. On January 1, 2001, the StL&H was formally amalgamated back into the CP Rail system.
=== 2001 to 2023 ===
In 2001, the CPR's parent company, Canadian Pacific Limited, spun off its five subsidiaries, including the CPR, into independent companies. In September 2007, CPR announced it was acquiring the Dakota, Minnesota and Eastern Railroad from London-based Electra Private Equity. The merger was completed as of October 31, 2008.
Canadian Pacific Railway Ltd. trains resumed regular operations on June 1, 2012, after a nine-day strike by some 4,800 locomotive engineers, conductors and traffic controllers who walked off the job on May 23, stalling Canadian freight traffic and costing the economy an estimated CA$80 million (US$77 million). The strike ended with a government back-to-work bill forcing both sides to come to a binding agreement.
On July 6, 2013, a unit train of crude oil which CP had subcontracted to short-line operator Montreal, Maine and Atlantic Railway derailed in Lac-Mégantic, killing 47. On August 14, 2013, the Quebec government added the CPR, along with lessor World Fuel Services (WFS), to the list of corporate entities from which it seeks reimbursement for the environmental cleanup of the Lac-Mégantic derailment. On July 15, the press reported that CP would appeal the legal order.
On October 12, 2014, it was reported that Canadian Pacific had tried to enter into a merger with American railway CSX, but was unsuccessful.
In 2015–16, Canadian Pacific sought to merge with American railway Norfolk Southern and pressed for a shareholder vote on the proposal. CP ultimately terminated its merger efforts on April 11, 2016.
On February 4, 2019, a loaded grain train ran away from the siding at Partridge, just above the Upper Spiral Tunnel in Kicking Horse Pass. The 112-car grain train, with three locomotives, derailed into the Kicking Horse River just after the Trans-Canada Highway overpass, killing the three crew members on the lead locomotive. The Canadian Pacific Police Service (CPPS) investigated the fatal derailment. It later came to light that, although CP CEO Keith Creel had said the RCMP "retain jurisdiction" over the investigation, the RCMP wrote that "it never had jurisdiction because the crash happened on CP property". On January 26, 2020, the Canadian current affairs program The Fifth Estate broadcast an episode on the derailment, and the next day the Transportation Safety Board of Canada (TSB) called for the RCMP to investigate, as lead investigator Don Crawford said, "There is enough to suspect there's negligence here and it needs to be investigated by the proper authority".
On February 4, 2020, the TSB demoted its lead investigator in the crash probe after his superiors decided these comments were "completely inappropriate". The TSB stated that it "does not share the view of the lead safety investigator". The CPPS say they did a thorough investigation into the actions of the crew, which is now closed and resulted in no charges, while the Alberta Federation of Labour and the Teamsters Canada Rail Conference called for an independent police probe.
On November 20, 2019, it was announced that Canadian Pacific would purchase the Central Maine and Quebec Railway from Fortress Transportation and Infrastructure Investors. The line had a series of owners after being spun off from the Canadian Pacific in 1995. The first operator was the Canadian American Railroad (CDAC), a division of Iron Road Railways; in 2002, the Montreal, Maine and Atlantic (MMA) took over operations after CDAC declared bankruptcy. The Central Maine and Quebec Railway began operations in 2014, after the MMA declared bankruptcy in the wake of the Lac-Mégantic derailment. On the acquisition, CP CEO Keith Creel remarked that it gave CP a true coast-to-coast network across Canada and an increased presence in New England. Canadian Pacific completed the purchase of the Central Maine and Quebec on June 4, 2020.
==== Merger with Kansas City Southern (2021–2023) ====
On March 21, 2021, CP announced that it was planning to purchase the Kansas City Southern Railway (KCS) for US$29 billion. The US Surface Transportation Board (STB) would first have to approve the purchase, which was expected to be completed by the middle of 2022.
However, a competing cash-and-stock offer was made by Canadian National Railway (CN) on April 20, valued at $33.7 billion. On May 13, KCS announced that it planned to accept the merger offer from CN but would give CP until May 21 to come up with a higher bid. On May 21, KCS and CN agreed to a merger. However, CN's merger attempt was blocked by an STB ruling in August that the company could not use a voting trust to assume control of KCS, due to concerns about potentially reduced competition in the railroad industry.
On September 12, KCS accepted a new $31 billion offer from CP. Though CP's offer was lower than the offer made by CN, the STB permitted CP to use a voting trust to take control of KCS. The voting trust allowed CP to become the beneficial owner of KCS in December, but the two railroads operated independently until receiving approval for a merger of operations from the STB. That approval came on March 15, 2023, which permitted the railroads to merge as soon as April 14. On April 14, 2023, KCS officially became a subsidiary of CPR, and CPR with its subsidiaries began conducting business under the name of its parent company, Canadian Pacific Kansas City (CPKC).
== Freight trains ==
Over half of CP's freight traffic is in grain (24% of 2016 freight revenue), intermodal freight (22%), and coal (10%) and the vast majority of its profits are made in western Canada. A major shift in trade from the Atlantic to the Pacific has caused serious drops in CPR's wheat shipments through Thunder Bay. It also ships chemicals and plastics (12% of 2016 revenue), automotive parts and assembled automobiles (6%), potash (6%), sulphur and other fertilizers (5%), forest products (5%), and various other products (11%). The busiest part of its railway network is along its main line between Calgary and Vancouver. Since 1970, coal has become a major commodity hauled by CPR. Coal is shipped in unit trains from coal mines in the mountains, including Sparwood, British Columbia, to terminals at Roberts Bank and North Vancouver, from where it is then shipped to Japan.
Grain is hauled by the CPR from the prairies to ports at Thunder Bay (the former cities of Fort William and Port Arthur), Quebec City and Vancouver, where it is then shipped overseas. The traditional winter export port was Saint John, New Brunswick, when ice closed the St. Lawrence River. Grain has always been a significant commodity hauled by the CPR; between 1905 and 1909, the CPR double-tracked its section of track between Fort William, Ontario (part of present-day Thunder Bay) and Winnipeg to facilitate grain shipments, and for several decades this was the only long stretch of double-track mainline outside of urban areas on the CPR. The Thunder Bay–Winnipeg section has since been reduced to single track, but the CPR still has two long stretches of double-track mainline serving rural areas: a 121-kilometre (75 mi) stretch between Kent, British Columbia, and Vancouver, which follows the Fraser River into the Coast Mountains, and the Winchester Subdivision, a 160-kilometre (100 mi) stretch running through many rural farming communities from Smiths Falls, Ontario, to downtown Montreal. As of 2020, however, CPR was partially dismantling the double track on the Winchester Sub.
== Passenger trains ==
The train was the primary mode of long-distance transport in Canada until the 1960s. Among the many types of people who rode CPR trains were new immigrants heading for the prairies, military troops (especially during the two world wars) and upper class tourists. It also custom-built many of its passenger cars at its CPR Angus Shops to be able to meet the demands of the upper class.
The CPR also had a line of Great Lakes ships integrated into its transcontinental service. From 1884 until 1912, these ships linked Owen Sound on Georgian Bay to Fort William. After a major fire in December 1911 destroyed the grain elevator, operations were relocated to a new, larger port created by the CPR at Port McNicoll, which opened in May 1912. Five ships allowed daily service, including the S.S. Assiniboia and S.S. Keewatin, built in 1907, which remained in use until the end of service. Travellers went by train from Toronto to that Georgian Bay port, then travelled by ship to link with another train at the Lakehead. After World War II, the trains and ships carried automobiles as well as passengers. This service featured what was to become the last boat train in North America: the Steam Boat, a fast, direct connecting train between Toronto and Port McNicoll. The passenger service was discontinued at the end of the 1965 season; one ship, the Assiniboia, carried on in freight service for two more years before being sold. Intended to become a floating restaurant, Assiniboia caught fire during renovations in 1969 and was subsequently scrapped. Keewatin, which had been laid up in 1966 and was scheduled to be scrapped, was purchased by R.J. and Diane Peterson in 1967 and towed to their marina in Douglas, Michigan, to serve as a marine museum. Forty-five years later, Skyline International CEO Gil Blutrich purchased Keewatin and engaged former crewman Eric Conroy to repatriate the ship to Port McNicoll and operate her as a historical attraction, which he did from 2012 through 2019. Keewatin was closed to visitors in 2020 as a result of the COVID-19 pandemic and did not reopen in Port McNicoll. In 2023, Keewatin was donated by Skyline to the Marine Museum of the Great Lakes at Kingston and towed to Hamilton shipyards for restoration before proceeding to Kingston, where it reopened to visitors in 2024.
After the Second World War, passenger traffic declined as automobiles and airplanes became more common, but the CPR continued to innovate in an attempt to keep passenger numbers up. Beginning November 9, 1953, the CPR introduced Budd Rail Diesel Cars (RDCs) on many of its lines. Officially called "Dayliners" by the CPR, they were always referred to as Budd Cars by employees. Greatly reduced travel times and reduced costs resulted, which saved service on many lines for a number of years. The CPR went on to acquire the second largest fleet of RDCs totalling 52 cars. Only the Boston and Maine Railroad had more. This CPR fleet also included the rare model RDC-4 (which consisted of a mail section at one end and a baggage section at the other end with no formal passenger section). On April 24, 1955, the CPR introduced a new luxury transcontinental passenger train, The Canadian. The train provided service between Vancouver and Toronto or Montreal (east of Sudbury; the train was in two sections). The train, which operated on an expedited schedule, was pulled by diesel locomotives, and used new, streamlined, stainless steel rolling stock. This service was initially heavily promoted by the company and many images of the train, especially as it traversed the Canadian Rockies, were captured by CPR's official photographer Nicholas Morant. Featured in numerous advertising promotions worldwide, several such images have gained iconic status.
Starting in the 1960s, however, the railway began to discontinue much of its passenger service, particularly on its branch lines. For example, passenger service ended on its line through southern British Columbia and Crowsnest Pass in January 1964 and on its Quebec Central in April 1967, and the transcontinental train The Dominion was dropped in January 1966. On October 29, 1978, CP Rail transferred its passenger services to Via Rail, a new federal Crown corporation now responsible for intercity passenger services in Canada. Canadian Prime Minister Brian Mulroney presided over major cuts in Via Rail service on January 15, 1990. These cuts ended service by The Canadian over CPR rails, and the train was rerouted onto the former Super Continental route via Canadian National without a change of name. Where both trains had been daily prior to the 1990 cuts, the surviving Canadian was only a three-times-weekly operation. In October 2012, The Canadian was reduced to twice weekly for the six-month off-season period, and as of 2025 it operates three times weekly for only six months a year. In addition to inter-city passenger services, the CPR also provided commuter rail services in Montreal, where CP Rail introduced Canada's first bi-level passenger cars in 1970. On October 1, 1982, the Montreal Urban Community Transit Commission (STCUM) assumed responsibility for the commuter services previously provided by CP Rail; the service continues under the Metropolitan Transportation Agency (AMT).
As of 2025, Canadian Pacific Railway operates two commuter services under contract. GO Transit contracts CPR to operate 10 return trips between Milton and central Toronto in Ontario, and in Montreal, 59 daily commuter trains run on CPR lines from Lucien-L'Allier Station to Candiac, Hudson and Blainville–Saint-Jérôme on behalf of the AMT. CP no longer operates Vancouver's West Coast Express on behalf of TransLink, the regional transit authority; Bombardier Transportation assumed control of train operations on May 5, 2014. And although CP no longer owns the track or operates the commuter trains, it handles dispatching of Metra trains on the Milwaukee District/North and Milwaukee District/West Lines in Chicago, on which CP also provides freight service via trackage rights.
=== Sleeping, Dining and Parlour Car Department ===
Sleeping cars were operated by a separate department of the railway that also ran the dining and parlour cars, aptly named the Sleeping, Dining and Parlour Car Department. The CPR decided from the very beginning that it would operate its own sleeping cars, unlike railways in the United States, which depended upon independent companies that specialized in providing cars and porters, including building the cars themselves. Pullman was long a famous name in this regard; its Pullman porters were legendary. Other early companies included the Wagner Palace Car Company. Larger berths and more comfortable surroundings were built by order of the CPR's general manager, William Van Horne, who was a large man himself. Providing and operating its own cars gave the CPR better control over the quality of service and let it keep all of the revenue received, although dining-car services were never profitable. Railway managers realized, however, that those who could afford to travel great distances expected such facilities, and their favourable opinion would help attract others to Canada and the CPR's trains.
== Express ==
W. C. Van Horne decided from the very beginning that the CPR would retain as much revenue from its various operations as it could. This translated into keeping express, telegraph, sleeping car and other lines of business for themselves, creating separate departments or companies as necessary. This was necessary as the fledgling railway would need all the income it could get, and in addition, he saw some of these ancillary operations such as express and telegraph as being quite profitable. Others such as sleeping and dining cars were kept in order to provide better control over the quality of service being provided to passengers. Hotels were likewise crucial to the CPR's growth by attracting travellers.
The Dominion Express Company was formed independently in 1873, before the CPR itself, although service did not begin until the summer of 1882, at which time it operated over some 500 kilometres (300 mi) of track from Rat Portage (Kenora), Ontario, west to Winnipeg, Manitoba. It was soon absorbed into the CPR and expanded everywhere the CPR went. It was renamed the Canadian Pacific Express Company on September 1, 1926, and the headquarters moved from Winnipeg to Toronto; the company also established the first money-order system in Canada. It was operated as a separate company, with the railway charging it to haul express cars on trains, and was initially highly profitable.
Express operations consisted of separate cars included on existing Canadian Pacific routes, were typically charged on a less-than-carload basis, and transported a wide range of goods, including fresh goods like dairy or flowers, refrigerated goods such as fish, transport of cash and jewellery, livestock with handlers and in some cases goods that took an entire carload, such as automobiles.
The company later expanded into shipping by transport truck. It eventually became unprofitable, possibly due to competition from trucking companies, was purchased in an employee buyout in 1994 and renamed Interlink Systems. The company failed quickly, going into receivership in 1997.
== Special trains ==
=== Silk trains ===
Between the 1890s and 1933, the CPR transported raw silk from Vancouver, where it had been shipped from the Orient, to silk mills in New York and New Jersey. A silk train could carry several million dollars' worth of silk, so the trains had their own armed guards. To avoid train robberies and minimize insurance costs, they travelled quickly and stopped only to change locomotives and crews, which was often done in under five minutes. The silk trains had priority over all other trains; even passenger trains (including the royal train of 1939) would be put into sidings to speed the silk trains' passage. At the end of World War II, the invention of nylon made silk less valuable, and the silk trains died out.
=== Funeral trains ===
Funeral trains would carry the remains of important people, such as prime ministers. As the train would pass, mourners would be at certain spots to show respect. Two of the CPR's funeral trains are particularly well-known. On June 10, 1891, the funeral train of Prime Minister Sir John A. Macdonald ran from Ottawa to Kingston, Ontario. The train consisted of five heavily draped passenger cars and was pulled by 4-4-0 No. 283. On September 14, 1915, the funeral train of former CPR president Sir William Cornelius Van Horne ran from Montreal to Joliet, Illinois, pulled by 4-6-2 No. 2213.
=== Royal trains ===
The CPR ran a number of trains that transported members of the Canadian royal family when they toured the country, taking them through Canada's scenery, forests, and small towns, and enabling people to see and greet them. Their trains were elegantly decorated; some had amenities such as a post office and barber shop. The CPR's most notable royal train ran in 1939, when the CPR and the CNR shared the honour of carrying King George VI and Queen Elizabeth on their coast-to-coast-and-back tour of Canada: one company took the royal couple from Quebec City to Vancouver, and the other took them on the return journey to Halifax. This was the first tour of Canada by its reigning monarch. The steam locomotives used to pull the train included CPR 2850, a Hudson (4-6-4) built by Montreal Locomotive Works in 1938; CNR 6400, a U-4-a Northern (4-8-4); and CNR 6028, a U-1-b Mountain (4-8-2). With the exception of CNR 6028, which was not repainted, the locomotives were specially painted royal blue with silver trim, as was the entire train. The locomotives ran 5,189 km (3,224 mi) across Canada, through 25 changes of crew, without engine failure. The King, something of a railbuff, rode in the cab when possible. After the tour, King George gave the CPR permission to use the term "Royal Hudson" for the CPR locomotives and to display royal crowns on their running boards. This applied only to the semi-streamlined locomotives (2820–2864), not the "standard" Hudsons (2800–2819).
=== Better Farming Train ===
CPR provided the rolling stock for the Better Farming Train which toured rural Saskatchewan between 1914 and 1922 to promote the latest information on agricultural research. It was staffed by the University of Saskatchewan and operating expenses were covered by the Department of Agriculture.
=== School cars ===
Between 1927 and the early 1950s, the CPR ran a school car to reach children who lived in Northern Ontario, far from schools. A teacher would travel in a specially designed car to remote areas and would stay to teach in one area for two to three days, then leave for another area. Each car had a blackboard and a few sets of chairs and desks. They also contained miniature libraries and accommodation for the teacher.
=== Silver Streak ===
Major shooting for the 1976 film Silver Streak, a fictional comedy tale of a murder-ridden train trip from Los Angeles to Chicago, was done on the CPR, mainly in the Alberta area with station footage at Toronto's Union Station. The train set was so lightly disguised as the fictional "AMRoad" that the locomotives and cars still carried their original names and numbers, along with the easily identifiable CP Rail red-striped paint scheme. Most of the cars are still in revenue service on Via Rail Canada; the lead locomotive (CP 4070) and the second unit (CP 4067) were sold to Via Rail and CTCUM respectively.
=== Holiday Train ===
Since 1999, CP has run a Holiday Train along its main line during November and December each year. The Holiday Train celebrates the holiday season and collects donations for community food banks and hunger issues; it also provides publicity for CP and a few of its customers. Each train has a boxcar stage for entertainers who travel along with the train.
The train is a freight train, but it also pulls vintage passenger cars used as lodging and transportation for the crew and entertainers. Aside from a coach car that carries employees and their families from one stop to the next, only entertainers and CP employees are allowed to board the train. All donations collected in a community remain in that community for distribution.
There are two Holiday Trains, covering 150 stops in Canada and the US Northeast and Midwest. Each train is roughly 1,000 feet (300 m) long, with brightly decorated railway cars, including a modified boxcar turned into a travelling stage for performers, all decorated with hundreds of thousands of LED Christmas lights. In 2013, to celebrate the program's 15th year, three signature events were held in Hamilton, Ontario; Calgary, Alberta; and Cottage Grove, Minnesota, to further raise awareness of hunger issues.
The trains feature different entertainers each year; in 2016, one train featured Dallas Smith and the Odds, while the other featured Colin James and Kelly Prescott. After its 20th anniversary tour in 2018, which hosted Terri Clark, Sam Roberts Band, The Trews and Willy Porter, the tour reported to have raised more than CA$15.8 million and collected more than 4.5 million pounds (2,000 t) of food since 1999.
=== Royal Canadian Pacific ===
On June 7, 2000, the CPR inaugurated the Royal Canadian Pacific, a luxury excursion service that operates between the months of June and September. It operates along a 1,050 km (650 mi) route from Calgary, through the Columbia Valley in British Columbia, and returning to Calgary via Crowsnest Pass. The trip takes six days and five nights. The train consists of up to eight luxury passenger cars built between 1916 and 1931 and is powered by first-generation diesel locomotives.
=== Steam train ===
In 1998, the CPR repatriated one of its former passenger steam locomotives that had been on static display in the United States following its sale in January 1964, long after the close of the steam era. CPR Hudson 2816 was re-designated Empress 2816 following a 30-month restoration that cost in excess of $1 million. It was subsequently returned to service to promote public relations, operating across much of the CPR system, including lines in the U.S., and being used for various charitable purposes; 100% of the money raised goes to the nationwide charity Breakfast for Learning, with the CPR bearing all of the expenses associated with operating the train. 2816 is the subject of Rocky Mountain Express, a 2011 IMAX film that follows the locomotive on an eastbound journey beginning in Vancouver and tells the story of the building of the CPR. 2816 was stored indefinitely after 2012, when CEO E. Hunter Harrison discontinued the steam program.
The locomotive was fired up on November 13, 2020, for a steam test and moved around the Ogden campus yard. At the time, CP had plans to utilize the locomotive only for a special Holiday Train at Home broadcast, after which it was put in storage. However, in mid-2021, CEO Keith Creel announced intentions to bring 2816 back to full operational status, for a tour from their Calgary headquarters to Mexico City, if the merger with Kansas City Southern Railway was approved by the Surface Transportation Board in the United States. Work on the needed overhaul began in earnest in late 2021 for a planned date in 2023. On April 24, 2024, No. 2816 began its Final Spike Steam Tour for the Canadian Pacific Kansas City, running from Calgary to Mexico City.
=== Spirit Train ===
In 2008, Canadian Pacific partnered with the 2010 Olympic and Paralympic Winter Games to present a "Spirit Train" tour that featured Olympic-themed events at various stops. Colin James was a headline entertainer. Several stops were met by protesters who argued that the games were slated to take place on stolen indigenous land.
=== CP Canada 150 Train ===
In 2017, CP ran the CP Canada 150 Train from Port Moody to Ottawa to celebrate Canada's 150th year since Confederation. The train stopped in 13 cities along its 3-week summer tour, offering a free block party and concert from Dean Brody, Kelly Prescott and Dallas Arcand. The heritage train drew out thousands to sign the special "Spirit of Tomorrow" car, where children were invited to write their wishes for the future of Canada and send them to Ottawa. Prime Minister Justin Trudeau and daughter Ella-Grace Trudeau also visited the train and rode it from Revelstoke to Calgary.
== Non-railway services ==
Historically, Canadian Pacific operated several non-railway businesses. In 1971, these businesses were split off into the separate company Canadian Pacific Limited, and in 2001, that company was further split into five companies. CP no longer provides any of these services.
=== Canadian Pacific Telegraphs ===
The original charter of the CPR, granted in 1881, provided the right to create an electric telegraph and telephone service, including charging for it. The telephone had barely been invented, but the telegraph was well established as a means of communicating quickly across great distances. Being allowed to sell this service meant the railway could offset the costs of constructing and maintaining a pole line along its tracks across vast distances, a line it needed anyway, chiefly for dispatching trains. It began doing so in 1882 as the separate Telegraph Department. The line would go on to provide a link between the cables under the Atlantic and Pacific oceans when they were completed. Before the CPR line, messages to the west could be sent only via the United States.
Paid for by the word, the telegram was an expensive way to send messages, but telegrams were vital to businesses. An individual receiving a personal telegram was seen as someone important, except when the telegram carried sorrow in the form of a death notice. In cities, messengers on bicycles delivered telegrams and picked up replies; in smaller locations, the local railway station agent handled this on a commission basis. To speed delivery, messages at the local end would first be telephoned. In 1931, the department became the Communications Department in recognition of its expanding services, which included telephone lines, news wire, stock ticker quotations and eventually teleprinters. All were faster than mail and very important to business and the public alike for many decades before mobile phones and computers came along. It was the arrival of these newer technologies, especially cellular telephones, that eventually brought about the demise of these services, even after the 1967 formation of CN-CP Telecommunications, an effort to achieve efficiencies through consolidation rather than competition. Deregulation in the 1980s brought about mergers and the sale of the remaining services and facilities.
=== Canadian Pacific Radio ===
On January 17, 1930, the CPR applied for licences to operate radio stations in 11 cities from coast to coast, for the purpose of organizing its own radio network to compete with the CNR Radio service. The CNR had built a radio network with the aim of promoting itself as well as entertaining its passengers during their travels. The onset of the Great Depression hurt the CPR's financial plan for a rival project, and in April it withdrew its applications for stations in all but Toronto, Montreal and Winnipeg. The CPR did not end up pursuing even these applications; instead it operated a phantom station in Toronto known as "CPRY" (the initials standing for "Canadian Pacific Royal York"), which broadcast from studios at CP's Royal York Hotel and leased time on CFRB and CKGW. A network of affiliates carried the CPR radio network's broadcasts in the first half of the 1930s, but the takeover of CNR Radio by the new Canadian Radio Broadcasting Commission removed CPR's competitive need for a network, and CPR's radio service was discontinued in 1935.
CPR programming included a series of concert broadcasts from Montreal with an orchestra conducted by Douglas Clarke; a series called Concert Orchestra, broadcast from the Royal York Hotel and featuring conductor Rex Battle; and another series of concerts, sponsored by Imperial Oil and featuring conductor Reginald Stewart with a 55-piece orchestra and some of the leading soloists of the day, also performing at the Royal York.
=== Canadian Pacific Steamships ===
Steamships played an important part in the history of CP from the very earliest days. During construction of the line in British Columbia, even before the private CPR took over from the government contractor, ships were used to bring supplies to the construction sites. Similarly, ships brought supplies to the construction work in the isolated country north of Lake Superior in northern Ontario. While this work was going on, there was already regular passenger service to the West: trains operated from Toronto to Owen Sound, where CPR steamships connected to Fort William, from which trains again operated to reach Winnipeg. Before the CPR was completed, the only way to reach the West was through the United States via St. Paul and Winnipeg. This Great Lakes steamship service continued as an alternative route for many years and was always operated by the railway; Canadian Pacific passenger service on the lakes ended in 1965.
In 1883, CPR began purchasing sailing ships as part of a railway supply service on the Great Lakes. Over time, CPR became a railway company with widely organized water transportation auxiliaries including the Great Lakes service, the trans-Pacific service, the Pacific coastal service, the British Columbia lake and river service, the trans-Atlantic service and the Bay of Fundy Ferry service. In the 20th century, the company evolved into an intercontinental railway which operated two transoceanic services which connected Canada with Europe and with Asia. The range of CPR services were aspects of an integrated plan.
Once the railway was completed to British Columbia, the CPR chartered and soon bought its own passenger steamships as a link to the Orient. These sleek steamships were of the latest design and christened with "Empress" names (e.g., RMS Empress of Britain, Empress of Canada, Empress of Australia, and so forth). Travel to and from the Orient, along with cargo, especially imported tea and silk, was an important source of revenue, aided by Royal Mail contracts. This was an important part of the All-Red Route linking the various parts of the British Empire.
The other ocean arm was the Atlantic service to and from the United Kingdom, which began with the acquisition of two existing lines: the Beaver Line, owned by Elder Dempster, and the Allan Line. These two segments became Canadian Pacific Ocean Services (later Canadian Pacific Steamships) and were operated separately from the various lake services in Canada, which were considered a direct part of the railway's operations. These trans-ocean routes made it possible to travel from Britain to Hong Kong using only the CPR's ships, trains and hotels. CP's "Empress" ships became world-famous for their luxury and speed. They had a practical role, too, transporting immigrants from much of Europe to Canada, especially to populate the vast prairies. They also played an important role in both world wars, with many of them lost to enemy action, including Empress of Britain.
A number of rail ferries were also operated over the years, including one between Windsor, Ontario, and Detroit from 1890 until 1915. It began with two paddle-wheelers capable of carrying 16 cars, and passenger cars were carried as well as freight. The service ended in 1915 when the CPR made an agreement with the Michigan Central to use its Detroit River tunnel, opened in 1910. The Pennsylvania-Ontario Transportation Company was formed jointly with the PRR in 1906 to operate a ferry across Lake Erie between Ashtabula, Ohio, and Port Burwell, Ontario, carrying freight cars, mostly of coal, much of it burned in CPR steam locomotives. Only one ferry, Ashtabula, was ever operated; the large vessel eventually sank in a harbour collision at Ashtabula on September 18, 1958, ending the service.
The Canadian Pacific Car and Passenger Transfer Company was formed by other interests in 1888, linking the CPR at Prescott, Ontario, and the NYC at Ogdensburg, New York. Service on this route had actually begun much earlier, in 1854, along with service from Brockville. A bridge built in 1958 ended passenger service; freight continued until Ogdensburg's dock was destroyed by fire on September 25, 1970, ending all service. The CPC&PTC was never owned by the CPR. A Bay of Fundy ferry service carried passengers and freight for many years between Digby, Nova Scotia, and Saint John, New Brunswick. Eventually, after 78 years and with changing times, the scheduled passenger services were all ended, as were ocean cruises. Cargo service continued on both oceans with a changeover to containers. CP was an intermodal pioneer, especially on land, combining road and rail to provide the best service. CP Ships was the final operation, and in the end it too left CP ownership when it was spun off in 2001; CP Ships merged with Hapag-Lloyd in 2005.
==== British Columbia Coast Steamships ====
The Canadian Pacific Railway Coast Service (British Columbia Coast Steamships or BCCS) was established when the CPR acquired the Canadian Pacific Navigation Company (no relation) in 1901, along with its large fleet of ships serving 72 ports along the coast of British Columbia, including on Vancouver Island. Service included the Vancouver–Victoria–Seattle Triangle Route, the Gulf Islands, Powell River, and a Vancouver–Alaska service. BCCS operated a fleet of 14 passenger ships made up of a number of Princess ships, pocket versions of the famous oceangoing Empress ships, along with a freighter, three tugs and five railway car barges. Popular with tourists, the Princess ships were famous in their own right, especially Princess Marguerite (II), which operated from 1949 until 1985 and was the last coastal liner in operation. The most notorious of the Princess ships, however, is Princess Sophia, which sank with no survivors after striking Vanderbilt Reef in Alaska's Lynn Canal, the largest maritime disaster in the history of the Pacific Northwest. These services continued for many years until changing conditions in the late 1950s brought about their decline and eventual demise at the end of the 1974 season. Princess Marguerite was acquired by the province's British Columbia Steamship (1975) Ltd. and continued to operate for a number of years. In 1977, although BCCSS remained the legal name, the operation was rebranded as Coastal Marine Operations (CMO). In 1998 the company was bought by the Washington Marine Group, which renamed it Seaspan Coastal Intermodal Company; it was rebranded again in 2011 as Seaspan Ferries Corporation. Passenger service ended in 1981.
==== British Columbia Lake and River Service ====
The Canadian Pacific Railway Lake and River Service (British Columbia Lake and River Service) developed slowly and in spurts of growth. CP's long history of service in the Kootenays region of southern British Columbia began with the purchase in 1897 of the Columbia and Kootenay Steam Navigation Company, which operated a fleet of steamers and barges on the Arrow Lakes. It was merged into the CPR as the CPR Lake and River Service, which served the Arrow Lakes and Columbia River, Kootenay Lake and the Kootenay River, Lake Okanagan and Skaha Lake, Slocan Lake, Trout Lake, and Shuswap Lake and the Thompson River/Kamloops Lake.
All of these lake operations had one thing in common: the need for shallow draft, which made sternwheelers the ship of choice. Tugs and barges handled railway equipment, including one operation on which the entire train, locomotive and caboose included, went along. These services gradually declined and ended in 1975, except for a freight barge on Slocan Lake; this was the operation on which the entire train travelled, since the barge was the link to an isolated section of track. The tug Iris G and a barge were operated under contract to CP Rail until the last train ran late in December 1988. The sternwheel steamship Moyie on Kootenay Lake was the last CPR passenger boat in BC lake service, having operated from 1898 until 1957. She became a beached historical exhibit, as are the Sicamous and Naramata at Penticton on Lake Okanagan.
=== Canadian Pacific Hotels ===
To promote tourism and passenger ridership, the Canadian Pacific established a series of first-class hotels. These hotels became landmarks famous in their own right and are known collectively as Canada's "grand railway hotels". They include the Algonquin in St. Andrews, the Château Frontenac in Quebec City, the Royal York in Toronto, Minaki Lodge in Minaki, Ontario, the Hotel Vancouver, the Empress Hotel in Victoria, and the Banff Springs Hotel and Chateau Lake Louise in the Canadian Rockies. Several signature hotels were acquired from its competitor Canadian National during the 1980s, including the Jasper Park Lodge. The hotels retain their Canadian Pacific heritage but are no longer operated by the railway. In 1998, Canadian Pacific Hotels acquired Fairmont Hotels, an American company, becoming Fairmont Hotels and Resorts; the combined corporation operated the historic Canadian properties as well as Fairmont's U.S. properties until it merged with Raffles Hotels and Resorts and Swissôtel in 2006.
=== Canadian Pacific Air Lines ===
Canadian Pacific Airlines, also called CP Air, operated from 1942 to 1987 and was the main competitor of government-owned Air Canada. Based at Vancouver International Airport, it served Canadian and international routes until it was purchased by Pacific Western Airlines, which merged the two carriers to create Canadian Airlines.
== Locomotives ==
=== Steam locomotives ===
In the CPR's early years, it made extensive use of American-type 4-4-0 steam locomotives; examples include the Countess of Dufferin and No. 29. Later, considerable use was also made of the 4-6-0 type for passenger service and the 2-8-0 type for freight. Starting in the 20th century, the CPR bought and built hundreds of Ten-Wheeler-type 4-6-0s for passenger and freight service and similar quantities of 2-8-0s and 2-10-2s for freight; 2-10-2s were also used in passenger service on mountain routes. The CPR bought hundreds of 4-6-2 Pacifics between 1906 and 1948, with later versions being true dual-purpose passenger and fast-freight locomotives.
The CPR built hundreds of its own locomotives at its shops in Montreal, first at the "New Shops", as the DeLorimier shops were commonly known, and then at the massive Angus Shops that replaced them in 1904. Some of the CPR's best-known locomotives were the 4-6-4 Hudsons. First built in 1929, they began a new era of modern locomotives with capabilities that changed how transcontinental passenger trains ran, eliminating frequent engine changes en route. The 2800s, as the Hudson type was known, ran from Toronto to Fort William, a distance of 1,305 kilometres (811 mi), while another lengthy engine district ran from Winnipeg to Calgary, 1,339 kilometres (832 mi).
Especially notable were the semi-streamlined H1 class Royal Hudsons, locomotives that were given their name because one of their class hauled the royal train carrying King George VI and Queen Elizabeth on the 1939 royal tour across Canada without change or failure. That locomotive, No. 2850, is preserved in the Exporail exhibit hall of the Canadian Railway Museum in Saint-Constant, Quebec. One of the class, No. 2860, was restored by the British Columbia government and used in excursion service on the British Columbia Railway between 1974 and 1999.
The CPR also rebuilt many of its older 2-8-0s, built around the turn of the century, into 2-8-2s.
In 1929, the CPR received its first 2-10-4 Selkirk locomotives, the largest steam locomotives to run in Canada and the British Empire. Named after the Selkirk Mountains where they served, these locomotives were well suited to steep grades and were regularly used in both passenger and freight service. The CPR would own 37 of them, including number 8000, an experimental high-pressure engine. The last steam locomotives that the CPR received, in 1949, were Selkirks, numbered 5930–5935.
=== Diesel locomotives ===
In 1937, the CPR acquired its first diesel-electric locomotive, a custom-built one-of-a-kind switcher numbered 7000. This locomotive was not successful and was not repeated. Production-model diesels were imported from the American Locomotive Company (Alco), starting with five model S-2 yard switchers in 1943 and followed by further orders. In 1949, operations on lines in Vermont were dieselized with Alco FA1 road locomotives (eight A and four B units), five Alco RS-2 road switchers, three Alco S-2 switchers and three EMD E8 passenger locomotives. In 1948, Montreal Locomotive Works began production of Alco designs.
In 1949, the CPR acquired 13 Baldwin-designed locomotives from the Canadian Locomotive Company for its isolated Esquimalt and Nanaimo Railway, and Vancouver Island was quickly dieselized. Following that successful experiment, the CPR began to dieselize its main network. Dieselization was completed 11 years later, with the last steam locomotive running on 6 November 1960. The CPR's first-generation diesels were mostly built by General Motors Diesel and Montreal Locomotive Works (to American Locomotive Company designs), with some built by the Canadian Locomotive Company to Baldwin and Fairbanks-Morse designs.
In 1984, CP became the first railway in North America to pioneer alternating current (AC) traction diesel-electric locomotives. In 1995, CP turned to GE Transportation for the first production AC-traction locomotives in Canada, and it now has the highest percentage of AC locomotives in service of all North American Class I railways.
On September 16, 2019, Progress Rail rolled out two SD70ACU rebuilds in Canadian Pacific heritage paint schemes: 7010 wears a Tuscan red and grey scheme with script lettering, and 7015 wears a similar scheme with block lettering.
On November 11, 2019, five SD70ACU units with commemorative military themes were unveiled during CPR's Remembrance Day ceremony. These units are numbered 7020–7023 and 6644, the last having been renumbered from 7024 to commemorate the date of D-Day: June 6, 1944.
In 2021, Canadian Pacific repainted two locomotives orange: ES44AC 8757, unveiled for the National Day for Truth and Reconciliation in September 2021, and ES44AC 8781, painted in recognition of shipping partner Hapag-Lloyd.
The fleet includes these types:
==== Final diesel roster ====
==== Retired diesel roster ====
== Corporate structure ==
Canadian Pacific Railway Limited (TSX: CP, NYSE: CP) is a Canadian railway transportation company that operates the Canadian Pacific Railway. It was created in 2001 when the CPR's former parent company, Canadian Pacific Limited, spun off its railway operations. On October 3, 2001, the company's shares began to trade on the New York Stock Exchange and the Toronto Stock Exchange under the symbol "CP". During 2003, the company earned CA$3.5 billion in freight revenue. In October 2008, Canadian Pacific Railway Ltd was named one of "Canada's Top 100 Employers" by Mediacorp Canada Inc. and was featured in Maclean's. Later that month, CPR was named one of Alberta's Top Employers, which was reported in both the Calgary Herald and the Edmonton Journal.
=== Presidents ===
== Major facilities ==
Canadian Pacific owned a number of large yards and repair shops across its system, ranging from intermodal terminals to classification yards. Some examples are listed below.
=== Hump yards ===
Hump yards work by pushing cars over a small hill, from which they roll down a slope and are switched automatically into cuts of cars, ready to be made into outbound trains; a schematic sketch of this sorting step follows the list below. Many of these yards were closed in 2012 and 2013 under Hunter Harrison's company-wide restructuring; only the St. Paul Yard hump remains open.
Calgary, Alberta – 68-hectare (168-acre) Alyth Yard; handles 2,200 cars daily (closed)
Franklin Park, Illinois – Bensenville Yard (closed)
Montreal, Quebec – St. Luc Yard; active since 1950; hump converted to flat switching in the mid-1980s (closed)
St. Paul, Minnesota – Pig's Eye Yard / St. Paul Yard
Toronto, Ontario – Toronto Yard (also known as "Toronto Freight Yard" or "Agincourt Yard") (closed)
Winnipeg, Manitoba – Rugby Yard (also known as "Weston Yard") (closed)
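The sorting step described above can be illustrated in a few lines of code: each car rolling off the hump is routed into the classification track assigned to its destination block, building up cuts of cars for outbound trains. The Python sketch below is a minimal illustration of that classification logic only, not a model of any particular CP yard; the car numbers, destination blocks and track assignments are invented for the example.

```python
from collections import defaultdict

def hump_sort(inbound_cars, track_for_block):
    """Route each car rolling off the hump into the classification
    track assigned to its destination block (illustrative only)."""
    tracks = defaultdict(list)            # classification track -> cut of cars being built
    for car_id, block in inbound_cars:    # cars crest the hump one at a time, in order
        track = track_for_block[block]    # automatic switches select the track for the block
        tracks[track].append(car_id)      # the car rolls down and joins that cut
    return dict(tracks)

# Hypothetical consist and track assignment, invented for the example.
inbound = [("CP 353001", "Winnipeg"), ("CP 210442", "Calgary"),
           ("CP 353002", "Winnipeg"), ("CP 640118", "Vancouver")]
assignment = {"Winnipeg": "track 1", "Calgary": "track 2", "Vancouver": "track 3"}
print(hump_sort(inbound, assignment))
# {'track 1': ['CP 353001', 'CP 353002'], 'track 2': ['CP 210442'], 'track 3': ['CP 640118']}
```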
== Aircraft ==
As of February 2023, Transport Canada lists the following aircraft in its database; they operate under the ICAO airline designator CRR and the radiotelephony call sign RAILCAR.
1 - Cessna Citation Sovereign (Cessna 680)
1 - Bombardier CL-600
== Joint partnership ==
Toronto Terminal Railways – jointly manages Toronto's Union Station with the Canadian National Railway.
== See also ==
== Notes ==
== References ==
== Further reading ==
White, Richard (2011). Railroaded: The Transcontinentals and the Making of Modern America. W. W. Norton & Company. ISBN 978-0-393-06126-0.
== External links ==
Business data for Canadian Pacific Railway:
Official website
CPR, from Sea to Sea: The Scottish Connection Archived 13 March 2005 at the Wayback Machine – Historical essay, illustrated with photographs from the CPR Archives and the McCord Museum's Notman Photographic Archives
Lavalle, Omer; Marshall, Tabitha (4 March 2015). "Canadian Pacific Railway". The Canadian Encyclopedia (online ed.). Historica Canada. Archived from the original on 14 April 2012. Retrieved 12 April 2012.
The Canadian Pacific Railway inception – Digital artifacts, archival and graphic material from the UBC Library Digital Collections
Winchester, Clarence, ed. (1936), "The conquest of Canada", Railway Wonders of the World, pp. 65–74, illustrated account of the construction of the Canadian Pacific Railway
The Canadian National Railway Company (French: Compagnie des chemins de fer nationaux du Canada) (reporting mark CN) is a Canadian Class I freight railway headquartered in Montreal, Quebec, which serves Canada and the Midwestern and Southern United States.
CN is Canada's largest railway, in terms of both revenue and the physical size of its rail network, spanning Canada from the Atlantic coast in Nova Scotia to the Pacific coast in British Columbia across approximately 20,000 route miles (32,000 km) of track. In the late 20th century, CN gained extensive capacity in the United States by taking over such railroads as the Illinois Central.
CN is a public company with 24,671 employees and, as of July 2024, a market cap of approximately US$75 billion. CN was government-owned, as a Canadian Crown corporation, from its founding in 1919 until being privatized in 1995. As of 2019, Bill Gates was the largest single shareholder of CN stock, owning a 14.2% interest through Cascade Investment and his own Bill and Melinda Gates Foundation.
From 1919 to 1978, the railway was known as "Canadian National Railways" (CNR).
== History ==
The Canadian National Railways (CNR) was incorporated on June 6, 1919, comprising several railways that had become bankrupt and fallen into Government of Canada hands, along with some railways already owned by the government. Primarily a freight railway, CN also operated passenger services until 1978, when they were assumed by Via Rail. The only passenger services run by CN after 1978 were several mixed trains (freight and passenger) in Newfoundland, and several commuter trains both on CN's electrified routes and towards the South Shore in the Montreal area (the latter lasted without any public subsidy until 1986). The Newfoundland mixed trains lasted until 1988, while the Montreal commuter trains are now operated by Montreal's EXO.
On November 17, 1995, the Government of Canada privatized CN. Over the next decade, the company expanded significantly into the United States, purchasing Illinois Central Railroad and Wisconsin Central Transportation, among others.
=== Creation of the company, 1918–1923 ===
In the years leading up to 1920, the overbuilding of railway lines in Canada had left many of them in significant financial difficulty.
In response to public concerns, the Government of Canada assumed majority ownership of the near-bankrupt Canadian Northern Railway (CNoR) on September 6, 1918, and appointed a "Board of Management" to oversee the company. At the same time, CNoR was also directed to assume management of Canadian Government Railways (CGR), a system mainly comprising the Intercolonial Railway of Canada (IRC), National Transcontinental Railway (NTR), Prince Edward Island Railway (PEIR), and the Hudson Bay Railway (HBR).
On December 20, 1918, the Government of Canada created the Canadian National Railways (CNR) – a body with no corporate powers – through Order in Council as a means to simplify the funding and operation of the various railway companies. The absorption of the Intercolonial Railway would see CNR adopt that system's slogan, The People's Railway.
Another Canadian railway, the Grand Trunk Pacific Railway (GTPR), encountered financial difficulty on March 7, 1919, when its parent company Grand Trunk Railway (GTR) defaulted on repayment of construction loans to the Government of Canada.
The Canadian National Railway Company then evolved through the following steps:
the "railways, works and undertakings of the Companies comprised in the Canadian Northern System" were vested in the newly incorporated Company in June 1919, with provision for the later inclusion of any of the Government Railways
vesting of the Grand Trunk Pacific Railway System in the Minister of Railways and Canals, acting as Government Receiver, in March 1919
acquisition of the Grand Trunk Railway System in November 1919, implemented in May 1920
GTR management and shareholders opposed to nationalization took legal action, but after several years of arbitration, the GTR was finally absorbed into the CNR on January 30, 1923. Although several smaller independent railways would be added to the CNR in subsequent years as they went bankrupt or it became politically expedient to do so, the system was more or less finalized at that point. However, certain related lawsuits were not resolved until as late as 1936.
Canadian National Railways was born out of both wartime and domestic urgency. Until the rise of the personal automobile and creation of taxpayer-funded all-weather highways, railways were the only viable long-distance land transportation available in Canada. As such, their operation consumed a great deal of public and political attention. Canada was one of many nations to engage in railway nationalization in order to safeguard critical transportation infrastructure during the First World War.
In the early 20th century, many governments were taking a more interventionist role in the economy, foreshadowing the influence of economists like John Maynard Keynes. This political trend, combined with broader geo-political events, made nationalization an appealing choice for Canada. The Winnipeg General Strike of 1919 and allied involvement in the Russian Revolution seemed to validate the continuing process. The need for a viable rail system was paramount in a time of civil unrest and foreign military action.
=== Acquisitions ===
Bessemer & Lake Erie Railroad
The B&LE was acquired with the purchase of Great Lakes Transportation and the DM&IR.
British Columbia Railway
In 2003, BCOL's operations were sold to Canadian National, and the railway itself was leased to CN for 60 years.
Central Vermont Railway
Central Vermont was nationalized in 1918 and grouped with the Grand Trunk Western under the newly created Grand Trunk Corporation in 1971.
Duluth Missabe & Iron Range Railroad
The DM&IR was acquired through the purchase of Great Lakes Transportation, at the same time as the Bessemer & Lake Erie Railroad. In 2011 the DM&IR was merged into CN's Wisconsin Central subsidiary.
Duluth Winnipeg & Pacific Railroad
The DWP was nationalized with CN in 1918 and became part of CN's Grand Trunk Corporation in 1971. In 2011 the DWP was merged into the larger Wisconsin Central subsidiary of CN.
Elgin, Joliet and Eastern Railway
In 2009, CN acquired the Elgin, Joliet and Eastern Railway to help relieve traffic congestion in Chicago and the surrounding area. In 2013 the EJ&E was merged into CN's larger Wisconsin Central subsidiary.
Grand Trunk Western Railroad
The GTW was grouped with the Central Vermont in 1971 with the creation of the Grand Trunk Corporation. In 1991 the GTW was merged with CN under the "North America" consolidation program, and many of the GTW's locomotives and rolling stock were repainted into the new CN scheme.
Illinois Central Railroad
In 1998, IC was purchased by CN, which also acquired the Chicago Central in the deal. A year later, the two railroads were formally amalgamated into the CN system.
Iowa Northern Railway
In 2023, CN agreed to acquire the Iowa Northern Railway, pending approval by the Surface Transportation Board (STB). On January 14, 2025, the STB approved the acquisition of the Iowa Northern Railway, a Class III short line that specializes in transporting grain, ethanol, and other biofuel commodities in the state of Iowa.
Mackenzie Northern Railway
In 2006, CN acquired Mackenzie Northern Railway, previously purchased by RailAmerica. This purchase allowed CN to increase their network footprint and hold the northernmost trackage of the contiguous North American railway network. Since being purchased by CN in 2006, it has been officially known as the Meander River Subdivision.
Newfoundland Railway
On 31 March 1949, CNR acquired the assets of the Newfoundland Railway, which in 1979 were reorganized into Terra Transport. CN officially abandoned its rail network in Newfoundland on 1 October 1988.
Savage Alberta Railway
On December 1, 2006, CN announced that it had purchased Savage Alberta Railway for $25 million and that it had begun operating the railway the same day.
TransX Group of Companies
In 2018, CN acquired the Winnipeg-based TransX Group of Companies. TransX continues to operate independently.
Wisconsin Central Railroad
In January 2001, CN acquired the WC for $800 million.
=== CN's U.S. subsidiaries prior to privatization ===
CN's railway network in the late 1980s consisted of the company's Canadian trackage, along with the following U.S. subsidiary lines: Grand Trunk Western Railroad (GTW) operating in Michigan, Indiana, and Illinois; Duluth, Winnipeg and Pacific Railway (DWP) operating in Minnesota; Central Vermont Railway (CV) operating down the Connecticut River valley from Quebec to Long Island Sound; and the Berlin subdivision to Portland, Maine, known informally as the Grand Trunk Eastern, sold to a short-line operator in 1989.
=== Privatization ===
In 1992, a new management team led by former federal government bureaucrats Paul Tellier and Michael Sabia began preparing CN for privatization by emphasizing increased productivity. This was achieved largely through aggressive cuts to the company's management structure, widescale layoffs in its workforce, and continued abandonment or sale of its branch lines. In 1993 and 1994, the company experimented with a rebranding that saw the names CN, Grand Trunk Western, and Duluth, Winnipeg and Pacific replaced by a collective CN North America moniker. During this period, CPR and CN entered into negotiations regarding a possible merger of the two companies. This was rejected by the Government of Canada, whereupon CPR offered to purchase outright all of CN's lines from Ontario to Nova Scotia, while an unidentified U.S. railroad (rumoured to have been Burlington Northern Railroad) would purchase CN's lines in western Canada. This too was rejected. In 1995, the entire company, including its U.S. subsidiaries, reverted to using CN exclusively.
The CN Commercialization Act was enacted into law on July 13, 1995, and by November 28, 1995, the Government of Canada had completed an initial public offering (IPO) and transferred all of its shares to private investors. The legislation contains two key prohibitions: no individual or corporate shareholder may own more than 15% of CN, and the company's headquarters must remain in Montreal, thus maintaining CN as a Canadian corporation.
=== Contraction and expansion since privatization ===
Following the successful IPO, CN recorded impressive gains in its stock price, largely through aggressive network rationalization and the purchase of newer, more fuel-efficient locomotives. Numerous branch lines were shed in the late 1990s across Canada, resulting in dozens of independent short-line railway companies being established to operate former CN track that had been considered marginal. This rationalization resulted in a core east–west freight railway stretching from Halifax to Chicago and from Toronto to Vancouver and Prince Rupert. The railway also operated trains from Winnipeg to Chicago using trackage rights for part of the route south of Duluth.
In addition to the rationalization in Canada, the company also expanded in a strategic north–south direction in the central United States. In 1998, in an era of mergers in the U.S. rail industry, CN bought the Illinois Central Railroad (IC), which connected the already existing lines from Vancouver, British Columbia, to Halifax, Nova Scotia, with a line running from Chicago, Illinois, to New Orleans, Louisiana. This single purchase of IC transformed CN's entire corporate focus from being an east–west uniting presence within Canada (sometimes to the detriment of logical business models) into a north–south NAFTA railway (in reference to the North American Free Trade Agreement). CN was then feeding Canadian raw material exports into the U.S. heartland and beyond to Mexico through a strategic alliance with Kansas City Southern Railway (KCS).
In 1999, CN and BNSF Railway, the second-largest rail system in the U.S., announced their intent to merge, forming a new corporate entity, North American Railways, to be headquartered in Montreal to conform to the CN Commercialization Act of 1995. The merger announcement by CN's Paul Tellier and BNSF's Robert Krebs was greeted with skepticism by the U.S. government's Surface Transportation Board (STB) and protested by other major North American rail companies, namely CPR and Union Pacific Railroad (UP). Rail customers also denounced the proposed merger, recalling the confusion and poor service in southeastern Texas in 1998 that followed UP's purchase of Southern Pacific Railroad two years earlier. In response to pressure from the rail industry, shippers, and politicians, the STB placed a 15-month moratorium on all rail-industry mergers, effectively scuttling the CN-BNSF plans. Both companies dropped their merger applications and have never refiled.
After the STB moratorium expired, CN purchased Wisconsin Central (WC) in 2001, which allowed the company's rail network to encircle Lake Michigan and Lake Superior, permitting more efficient connections from Chicago to western Canada. The deal also included Canadian WC subsidiary Algoma Central Railway (ACR), giving access to Sault Ste. Marie and Michigan's Upper Peninsula. The purchase of Wisconsin Central also made CN the owner of EWS, the principal freight train operator in the United Kingdom.
On May 13, 2003, the provincial government of British Columbia announced that the provincial Crown corporation BC Rail (BCR) would be sold, with the winning bidder receiving BCR's surface operating assets (locomotives, cars, and service facilities); the provincial government retained ownership of the tracks and right-of-way. On November 25, 2003, it was announced that CN's bid of CA$1 billion had been accepted over those of CPR and several U.S. companies. The transaction closed effective July 15, 2004. Many opponents – including CPR – accused the government and CN of rigging the bidding process, though this has been denied by the government. Documents relating to the case are under court seal, as they are connected to a parallel marijuana grow-op investigation involving two senior government aides also involved in the sale of BC Rail.
Also contested was the economic stimulus package the government gave cities along the BC Rail route. Some saw it as a buy-off to get the municipalities to cooperate with the lease, though the government asserted the package was intended to promote economic development along the corridor. Passenger service along the route had been ended by BC Rail a few years earlier owing to ongoing losses resulting from deteriorating service. The cancelled passenger service has since been replaced by a tourist-oriented service, the Rocky Mountaineer, with fares well over double what the BCR coach fares had been.
CN also announced in October 2003 an agreement to purchase Great Lakes Transportation (GLT), a holding company owned by the Blackstone Group, for US$380 million. GLT was the owner of the Bessemer & Lake Erie Railroad, the Duluth, Missabe and Iron Range Railway (DM&IR), and the Pittsburgh & Conneaut Dock Company. The key motivation for the deal was that, since the Wisconsin Central purchase, CN had been required to use DM&IR trackage rights over a short 18 km (11 mi) "gap" near Duluth, Minnesota, on the route between Chicago and Winnipeg. To purchase this short section, CN was told by GLT that it would have to purchase the entire company. Also included in GLT's portfolio were eight Great Lakes vessels for transporting bulk commodities such as coal and iron ore, as well as various port facilities. Following Surface Transportation Board approval of the transaction, CN completed the purchase of GLT on May 10, 2004.
On December 24, 2008, the STB approved CN's purchase, for $300 million, of the principal lines of the Elgin, Joliet & Eastern Railway Company (EJ&E) (reporting mark EJE) from the U.S. Steel Corporation, originally announced on September 27, 2007. The STB's decision became effective on January 23, 2009, with closure of the transaction shortly thereafter. The EJ&E lines create a bypass around the western side of the heavily congested Chicago-area rail hub, and their conversion to mainline freight use was expected to alleviate substantial bottlenecks for both regional and transcontinental rail traffic subject to lengthy delays entering and exiting Chicago freight yards. The purchase of the lightly used EJ&E corridor was positioned by CN as a boon not only for its own business but for the efficiency of the entire U.S. rail system.
On December 31, 2011, CN completed the merger of DM&IR, DWP, and WC into its Wisconsin Central Ltd. subsidiary.
In March 2021, CN subsidiary WCL reached a deal to sell roughly 1,400 km (900 mi) of non-core rail lines and assets in Michigan, Wisconsin, and Ontario to short-line operator Watco.
In April 2021, CN bid nearly $30 billion for Kansas City Southern (KCS), creating a bidding war with CPR, which had placed a $25 billion bid for the company in March. CN's offer represented a 21% premium over the one made by Canadian Pacific, offering $325 for each share and including $200 in cash. The move by CN was influenced by the projected economic upturn as the world began to emerge from the COVID-19 pandemic, with the combined network reaching from Canada, through the United States, into Mexico, and, through KCS's interest in the Panama Canal Railway, along the Panama Canal. On May 21, CN and KCS agreed to merge, subject to lengthy regulatory approvals. However, on August 31, the US Surface Transportation Board (STB) denied a voting trust between CN and KCS. With that decision, KCS re-engaged with CP on CP's original offer. The merger between Kansas City Southern and the Canadian Pacific Railway was ultimately approved on March 15, 2023, and the two railroads merged on April 14, 2023.
After losing the battle with CP for the purchase of KCS, CN filed a plan in the STB hearings on the CP-KCS merger to acquire the KCS line linking Kansas City with Springfield, Illinois, St. Louis, Missouri, and East St. Louis, Illinois (the former Gateway Western), tie it to its former IC Gilman Subdivision, and thus create a new corridor linking Kansas City and St. Louis with Michigan and Eastern Canada while bypassing Chicago; according to the plan presented by CN, the corridor would divert 80,000 long-haul truck shipments to rail annually. A few months later, CN abandoned its plan to purchase the Springfield Line and instead sought trackage rights on the line, still with the aim of creating the corridor proposed in the original filing. Both the initial purchase plan and the subsequent trackage-rights plan included corridor improvement works valued at more than US$250 million. The STB ultimately rejected the plans submitted by CN to operate on the Springfield Line.
Due to a failure to reach an agreement with the Teamsters Canada Rail Conference, Canadian National's Canadian operations, along with those of CPKC, shut down on August 22, 2024, as the companies locked out their workers.
In December 2023, CN moved to acquire the Iowa Northern Railway (IANR), with the transaction subject to review by the Surface Transportation Board (STB). On January 14, 2025, the STB approved CN's acquisition of the Iowa Northern Railway.
== CN today ==
Since the company operates in two countries, CN maintains some corporate distinction by having its U.S. lines incorporated under the Delaware-domiciled Grand Trunk Corporation for legal purposes; however, the entire company in both Canada and the U.S. operates under CN, as can be seen in its locomotive and rail car repainting programs.
Since the Illinois Central purchase in 1998, CN has been increasingly focused on running a "scheduled freight railroad". This has improved shipper relations and reduced the need to maintain pools of surplus locomotives and freight cars. CN has also undertaken a rationalization of its existing track network by removing double-track sections in some areas and extending passing sidings in others.
CN is also a rail industry leader in the use of radio control (R/C) for switching locomotives in yards, reducing the number of yard workers required. In recent years CN has frequently been cited within North American rail industry circles as the most-improved railroad in terms of productivity and the lowering of its operating ratio, reflecting the fact that the company has become increasingly profitable. Rising demand for ethanol, shuttle trains, and mineral commodities has also increased traffic on CN.
In 2011, the company was added to the Dow Jones Sustainability World Index.
=== Projects ===
In April 2012, a plan was announced to build an 800-kilometre (500 mi) railway running north from Sept-Îles, Quebec; the railway would support mining and other resource extraction in the Labrador Trough.
In September 2012, CN announced the trial of locomotives fuelled by natural gas as a potential alternative to conventional diesel fuel. Two EMD SD40 diesel-electric locomotives fuelled with 90% natural gas and 10% diesel were tested in service between Edmonton and Fort McMurray, Alberta.
=== Controversies ===
==== Accidents ====
In 1986, near Dalehurst, Alberta, a westbound CN freight collided with an eastbound Via Rail train, killing 23 people and injuring 71. The wreck was attributed to multiple factors for which CN was responsible.
In December 1999, the Ultratrain, a petroleum products unit train linking the Ultramar oil refinery at Lévis, Quebec, with a petroleum depot in Montreal, derailed into the path of an oncoming freight train travelling in the opposite direction between Sainte-Madeleine and Saint-Hilaire-Est, south of Montreal. The two crew members on the freight train were killed in the ensuing explosion (the crew's last words were "you guys are derailed, we're hitting you!"). The Ultratrain derailed at a broken rail caused by a defective weld that had not been fixed in time, despite being repeatedly reported by train crews; the report by the Transportation Safety Board of Canada called into question CN's quality assurance program for rail welds as well as the lack of detection equipment for defective wheels. In memory of the dead crewmen, two new stations on the line have been named after them (Davis and Thériault).
On May 14, 2003, a trestle collapsed under the weight of a freight train near McBride, British Columbia, killing both crew members. Both men had been disciplined earlier for refusing to take another train on the same bridge, claiming it was unsafe. It was revealed that as far back as 1999, several bridge components had been reported as rotten, yet no repairs had been ordered by management. Eventually, the disciplinary records of both crewmen were amended posthumously.
Two CN trains collided on August 4, 2007, on the banks of the Fraser River near Prince George, British Columbia. Several cars carrying gasoline, diesel and lumber burst into flames. Water bombers were used to help put out the fires. Some fuel had seeped into the Fraser River.
==== Derailments ====
On May 27, 2002, a CN train derailed at 12:30 p.m. north of Vermontville Highway in Potterville, Michigan. The train was hauling 58 cars; 35 of them derailed and 11 contained hazardous materials. Nine were carrying propane and two carried sulfuric acid. Two of the propane tankers were leaking and a third was suspected of leaking. Each propane car contained 34,000 gallons of propane gas, considered an extreme fire and explosion hazard. An evacuation of Potterville was declared. CN, along with other agencies, worked throughout the week to clean up the area.
A second CN train derailment in Potterville, Michigan, occurred in May 2006, though no evacuation was necessary. The cause of this derailment was found to be a failed wheel bearing on the 82nd car.
At about 9:04 a.m. Central Standard Time on February 9, 2003, northbound CN freight train M33371 derailed 22 of its 108 cars in Tamaroa, Illinois. Four of the derailed cars released methanol, and the methanol from two of these four cars fueled a fire. Other derailed cars contained phosphoric acid, hydrochloric acid, formaldehyde, and vinyl chloride. Two cars containing hydrochloric acid, one car containing formaldehyde, and one car containing vinyl chloride released product but were not involved in the fire. About 850 residents were evacuated from the area within a 3-mile (4.8 km) radius of the derailment, which included the entire village of Tamaroa. The probable cause was the improper placement of bond wire welds on the head of the rail just outside the joint bars, where untempered martensite associated with the welds led to fatigue and subsequent cracking that, because of increased stresses associated with known soft ballast conditions, rapidly progressed to rail failure.
On August 5, 2005, in the Cheakamus River derailment, nine cars of a CN train derailed on a bridge over the Cheakamus River, causing 41,000 litres (11,000 US gal) of caustic soda to spill into the river and killing thousands of fish by caustic burns and asphyxiation. The CBC reported that environmental experts said it would take the river 50 years or more to recover from the toxic pollution. CN faced accusations from local British Columbians over the railway's perceived lack of response to the incident, described as the worst chemical spill in British Columbia's history.
A derailment at Moran, 20 miles (32 km) north of Lillooet, on June 30, 2006, raised more questions about CN's safety practices, and two further derailments near Lytton in August 2006 drew continued criticism. In the first of the Lytton incidents, 20 coal cars of a CPR train using a CN bridge derailed, dumping 12 cars of coal into the Thompson River; in the second, half a dozen grain cars on a CN train derailed and spilled their loads.
On June 19, 2009, a CN freight train derailed at a highway/rail grade crossing in Cherry Valley, Illinois (near Rockford). The train consisted of two locomotives and 114 cars, 19 of which derailed. All of the derailed cars were tank cars carrying denatured fuel ethanol, a flammable liquid, and thirteen were breached or lost product and caught fire. As a result of the fire that erupted after the derailment, a passenger in a car stopped at the crossing was fatally injured, two passengers in the same car received serious injuries, and five occupants of other cars waiting at the highway/rail crossing were injured. Two responding firefighters also sustained minor injuries. The release of ethanol and the resulting fire prompted a mandatory evacuation of about 600 residences within a 0.5-mile (0.80 km) radius of the accident site. Monetary damages were estimated to total $7.9 million. The probable cause of the accident was the washout of the track structure that was discovered about 1 hour before the train's arrival, and CN's failure to notify the train crew of the known washout in time to stop the train. Contributing factors were CN's failure to work with Winnebago County to develop a comprehensive storm water management plan to address previous washouts, CN's failure to issue the flash flood warning to the train crew, and the inadequate design of the train's DOT-111 tank cars.
==== Disputes ====
In March 2004 a strike by the Canadian Auto Workers union showed deep-rooted divisions between organized labour and the company's current management.
Transport Canada has restricted CN to trains not exceeding 80 car lengths because of the multiple derailments on the former BCR line north of Squamish. The former BC Rail had warned Canadian National against running trains of more than 60 cars on this winding and mountainous section, which includes some of the steepest track in North America, but CN had been running trains well in excess of 80 cars.
In October 2013, the James Street bridge between Thunder Bay and Fort William First Nation was subject to an act of arson that caused great structural damage. The bridge was the most direct route between Thunder Bay and the Fort William First Nation reserve and was used by foot, vehicular, and rail traffic. The question of who is responsible for the maintenance and repair of the bridge is the subject of great controversy between the City of Thunder Bay and CN, owing to an agreement dating back to 1906 between the Grand Trunk Pacific Railway Company (later incorporated into CNR along with other railways) and the City of Fort William (later merged with the City of Port Arthur into the City of Thunder Bay). The 1906 agreement states that "The Company will give the Municipal Corporation the perpetual right to cross said bridge for ... vehicle and foot traffic" and that "The Company will maintain the bridge in perpetuity without cost to the Town ...". After the fire, CN repaired the bridge for use by its rail system but did not repair the damage to the vehicle lanes, which rendered them unsafe for vehicle use. CN maintains that the 1906 agreement does not speak to replacement of the bridge, while the position of the City of Thunder Bay is that CN is solely responsible for making the repairs necessary to restore function to the vehicle lanes of the bridge.
At 12:01 a.m. on August 22, 2024, CN shut down its operations and locked out thousands of Teamsters Canada union members. The lockout, however, lasted less than a full day: by the afternoon of August 22, the Canadian government had ordered CN to end the lockout and to go to arbitration with its labour union.
==== Other incidents ====
Controversy arose again in Canadian political circles in 2003 following the company's decision to refer to itself solely by its acronym "CN" rather than "Canadian National", a move some interpreted as an attempt to distance the company from references to "Canada". Canada's Minister of Transport at the time called the move "obscene" after nationalists noted it could be argued the company is no longer Canadian, being primarily owned by American stockholders. The controversy is somewhat tempered by the fact that many large corporations are increasingly referred to by acronyms.
The residents of Wabamun Lake, in Alberta, staged a blockade of CN tracks in August 2005 when they were unsatisfied with the railway's response to a derailment that spilled over 700,000 litres of tarry fuel oil and about 80,000 litres of carcinogenic pole-treatment oil into the lake. Reporters found pre-spill evidence. CN executives admitted CN had failed to provide public safety information to prevent public exposure to carcinogenic, toxic chemicals. The tar-like oil and chemicals killed over 500 large migratory birds as well as animals, fish and other aquatic life.
In the years following CN's 1998 acquisition of the Illinois Central, the company has come under scrutiny for illicit practices that allegedly delay Amtrak trains. In 2012, Amtrak filed a formal complaint against CN with the Surface Transportation Board, stating that the prioritization of freight traffic over passenger traffic was commonplace on Amtrak routes operating on CN lines. The complaint cited over 4,000 delays during fiscal year 2011 on the route between Chicago and Carbondale, totalling over 26 days of net wasted schedule time; it also reported that 99% of delays between Chicago and New Orleans on the City of New Orleans route were caused by CN dispatching issues. In 2018, Amtrak began issuing public report cards grading the impact of freight railroads on passenger train performance; CN received the lowest possible grade of "F" on the first card, issued in March 2018.
==== Offences ====
On June 15, 2017, CN pleaded guilty in the Provincial Court of Alberta to one offence under the Fisheries Act and three offences under the Canadian Environmental Protection Act. It was fined $2.5 million for being non-compliant with a number of requirements under the Storage Tank Systems for Petroleum and Allied Petroleum Products Regulations, which caused an estimated 90 litres of diesel to be released into Edmonton's storm sewer.
On September 15, 2021, CN pleaded guilty in Prince Rupert Provincial Court to a charge of violating the Fisheries Act and was fined $2.5 million for spraying pesticides, found to be deleterious to fish, along its rail line, which runs along the Skeena River and over many tributaries and wetlands in British Columbia.
== Non-rail subsidiaries ==
=== CN Telegraph ===
CN Telegraph originated as the Great North West Telegraph Company in 1880 to connect Ontario and Manitoba, and became a subsidiary of Western Union in 1881. In 1915, facing bankruptcy, the GNWTC was acquired by the Canadian Northern Railway's telegraph company. When Canadian Northern was nationalized in 1918 and amalgamated into Canadian National Railways in 1921, its telegraph arm was renamed the Canadian National Telegraph Company. CN Telegraphs began co-operating with its Canadian Pacific-owned rival, CPR Telegraphs, in the 1930s, sharing telegraph networks and co-founding a teleprinter system in 1957. In 1967 the two services were amalgamated into a joint venture, CNCP Telecommunications, which evolved into a telecommunications company. CN sold its stake in the company to CP in 1984.
=== CNR Radio ===
In 1923 CNR's second president, Sir Henry Thornton, who succeeded David Blyth Hanna (1919–1922), created the CNR Radio Department to provide passengers with entertainment radio reception and give the railway a competitive advantage over its rival, CP. This led to the creation of a network of CNR radio stations across the country, North America's first radio network. As anyone in the vicinity of a station could hear its broadcasts, the network's audience extended far beyond train passengers to the public at large.
Claims of unfair competition from CP as well as pressure on the government to create a public broadcasting system similar to the British Broadcasting Corporation led the government of R. B. Bennett (who had been a corporate lawyer with Canadian Pacific as a client prior to entering politics) to pressure CNR into ending its on-train radio service in 1931 and then withdrawing from the radio business entirely in 1933. CNR's radio assets were sold for $50,000 to a new public broadcaster, the Canadian Radio Broadcasting Commission, which in turn became the Canadian Broadcasting Corporation in 1936.
=== CN Hotels ===
Canadian railways built and operated their own resort hotels, ostensibly to provide rail passengers travelling long distances a place to sleep overnight. These hotels became attractions in and of themselves – a place for a rail passenger to go for a holiday. As each railway company sought to be more attractive than its competitors, they made their hotels more attractive and luxurious.
Canadian National Hotels was the CNR's chain of hotels, a combination of hotels inherited by the CNR when it acquired various railways and structures built by the CNR itself. The chain's principal rival was Canadian Pacific Hotels.
=== Canadian National Steamship Company ===
Canadian National operated a fleet of passenger and cargo vessels on both the West Coast and East Coast of Canada which operated under a branch of the company known as Canadian National Steamships, later CN Marine.
==== West Coast ====
Swan Hunter and Wigham Richardson of Wallsend, England, built Prince George and Prince Rupert for the Grand Trunk Pacific Railway in 1910. In 1930 Cammell Laird of Birkenhead, England, built Prince David, Prince Henry and Prince Robert. Prince Henry was sold in 1937, and Prince George was destroyed by fire in 1945. Prince David and Prince Robert were requisitioned in 1939 as Royal Canadian Navy armed merchant cruisers, converted into landing ships in 1943, and sold in 1948. In 1948 a second Prince George was built by Yarrows Limited, becoming CN's sole remaining Pacific coast passenger liner. She was switched from scheduled routes to pleasure cruises and was the last CN ship to serve the west coast. After a fire in 1975 she was sold in 1976, first to the British Columbia Steamship Company and then to Wong Brother Enterprises, before being sold to Chinese breakers in 1995; she sank in Unimak Pass on her way to China in 1996.
===== Former Canadian Northern Pacific ships =====
SS Canora was built in 1918 for the Canadian Northern Pacific's Patricia Bay to Port Mann route. In 1919 the ship became part of Canadian National.
===== Former Grand Trunk Pacific steamships =====
These ships served the Pacific coast with GTP until Canadian National took possession of them in 1925:
Prince Rupert (1910–56)
Prince George (1910–45) – Caught fire and was destroyed in 1945.
Prince Albert
Prince John
===== CN-built steamships for the West Coast =====
Ships specially built for CN's West Coast service. After the Second World War demand for steamship service dropped, and by the 1950s the ships had been withdrawn. Prince George (II) stayed in service, operating cruises on the West Coast. By 1975 Prince George (II) was retired, ending CN's steamship era on the West Coast.
Prince Henry
Prince David
Prince Robert
Prince Charles
Prince William
Prince George (II) (1948–1975) – Built to replace the first Prince George after the latter caught fire in 1945; she was the last CN ship to serve the west coast.
==== East Coast ====
In 1928–29 Cammell Laird built a set of five ships for CN to carry mail, passengers and freight between eastern Canada and the Caribbean via Bermuda. Each ship was named after the wife of an English or British admiral noted for his actions in the Caribbean and who had been knighted or ennobled; they were therefore nicknamed the Lady-liners or Lady-boats. Lady Nelson, along with Lady Hawkins and Lady Drake, was designed for service to the eastern islands of the British West Indies and had larger passenger capacity but less cargo capacity than Lady Rodney and Lady Somers, which were built for service to the western islands. In the Second World War Lady Somers was requisitioned as an ocean boarding vessel while her four sister ships continued in CN service. The Italian submarine Morosini sank Lady Somers in 1941, and Lady Hawkins and Lady Drake were sunk by German U-boats in 1942. Lady Nelson was torpedoed in 1942 but was refloated and converted to a hospital ship, while Lady Rodney survived the war unscathed. The two surviving Lady Boats, Nelson and Rodney, were sold in 1952 after declining passenger traffic and rising labour costs made them too expensive to run.
==== Cargo ships ====
In 1928 CN took over most of the fleet of Canadian Government Merchant Marine Ltd, giving it a fleet of about 45 cargo ships. When France surrendered to Germany in June 1940 the Canadian Government seized CGT's MV Maurienne and contracted CN to manage her.
=== Aquatrain ===
CN operated a rail barge service between Prince Rupert, British Columbia, and Whittier, Alaska, from 1963 to 2021.
== Corporate governance ==
Robert Pace is the chair of the CNR board. The other board members are Donald J. Carty, V. Maureen Kempston Darkes, Gordon D. Giffin, Edith E. Holiday, Luc Jobin, Denis Losier, Kevin G. Lynch, James E. O'Connor, Robert L. Phillips, and Laura Stein.
=== Heads of the corporation ===
Thornton and Harrison were the only non-Canadians to head CN.
=== 1900s ===
From 1919 to 1995, CN was also the responsibility of the relevant federal cabinet minister as a Crown Corporation:
1919–1936 – Minister of Railways and Canals
1936–present – Minister of Transport
=== 2000s ===
Claude Mongeau was president and CEO from 2010 to 2016, having previously served as CFO for almost a decade. Early in his tenure he drew praise from leadership for working on the tracks for several months alongside the company's railroaders. He was also credited with implementing precision railroading.
However, mainline derailments increased in the middle of his tenure, resulting in his bonus being capped. Operating ratio also declined during his time as CEO. He resigned in 2016 after being diagnosed with throat cancer, and the board appointed Luc Jobin to replace him.
During his tenure, Jobin also joined the board of British American Tobacco in 2017. In 2018, Jobin resigned "as the railway struggles through operational and customer service challenges", as the CBC reported.
== Passenger trains ==
=== Early years ===
When CNR was first created, it inherited a large number of routes from its constituent railways, but eventually pieced them together into one coherent passenger network. For example, on December 3, 1920, CNR inaugurated the Continental Limited, which operated over the lines of four of its predecessors as well as the Temiskaming and Northern Ontario Railway. The 1920s saw growth in passenger travel, and CNR inaugurated several new routes and introduced new services, such as radio, on its trains. However, the growth in passenger travel ended with the Great Depression, which lasted from 1929 to 1939; traffic picked up somewhat during World War II. By the end of World War II, many of CNR's passenger cars were old and worn out. Accidents at Dugald, Manitoba, in 1947 and Canoe River, British Columbia, in 1950, in which extra passenger trains composed of older, wooden equipment collided with transcontinental passenger trains composed of newer, all-steel equipment, demonstrated the dangers inherent in the older cars. In 1953, CNR ordered 359 lightweight passenger cars, allowing it to re-equip its major routes.
On April 24, 1955, the same day that the CPR introduced its transcontinental train The Canadian, CNR introduced its own new transcontinental passenger train, the Super Continental, which used new streamlined rolling stock. However, the Super Continental was never considered as glamorous as the Canadian. For example, it did not include dome cars. Dome cars would be added in the early 1960s with the purchase of six former Milwaukee Road "Super Domes". They were used on the Super Continental in the summer tourist season.
=== New services ===
Rail passenger traffic in Canada declined significantly between World War II and 1960 because of competition from automobiles and airplanes. In the 1960s CN's privately owned rival CPR reduced its passenger services significantly. However, the government-owned CN continued much of its passenger service and marketed new schemes. One, introduced on 5 April 1962, was the "Red, White and Blue" fare structure, which offered deep discounts on off-peak ("red") days and was credited with increasing passenger numbers on some routes by as much as 600%. Another exercise was the rebranding of the express trains in the Ontario–Quebec corridor with the Rapido label.
In 1968, CN introduced a new high-speed train, the United Aircraft Turbo, which was powered by gas turbines instead of diesel engines. It made the trip between Toronto and Montreal in four hours, but was not entirely successful because it was somewhat uneconomical and not always reliable. The trainsets were retired in 1982 and later scrapped at Metrecy, in Laval, Quebec.
On its narrow-gauge lines in Newfoundland, CN also operated a main line passenger train, the Caribou, which ran from St. John's to Port aux Basques. Nicknamed the Newfie Bullet, this train ran until June 1969, when it was replaced by CN Roadcruiser buses. The Roadcruiser service had started in fall 1968 and ran in direct competition with the company's own passenger train; the buses could travel between St. John's and Port aux Basques in 14 hours versus the train's 22 hours. After the demise of the Caribou, the only passenger trains run by CN on the island were the mixed (freight and passenger) trains on the Bonavista, Carbonear and Argentia branch lines, and the only passenger service surviving on the main line was between Bishop's Falls and Corner Brook.
In 1976, CN created an entity called Via-CN as a separate operating unit for its passenger services. Via evolved into a coordinated marketing effort with CP Rail for rail passenger services, and later into a separate Crown corporation responsible for inter-city passenger services in Canada. Via Rail took over CN's passenger services on April 1, 1978.
=== Decline ===
CN continued to fund its commuter rail services in Montreal until 1982, when the Montreal Urban Community Transit Commission (MUCTC) assumed financial responsibility for them; operation was contracted out to CN, which eventually spun off a separate subsidiary, Montrain, for this purpose. When the Montreal–Deux-Montagnes line was completely rebuilt in 1994–1995, the new rolling stock came under the ownership of the MUCTC, until a separate government agency, the Agence métropolitaine de transport (AMT), was set up to consolidate all suburban transit administration around Montreal. Since then, suburban service has resumed to Saint-Hilaire, and a new line to Mascouche opened in December 2014.
In Newfoundland, Terra Transport would continue to operate the mixed trains on the branch lines until 1984. The main line run between Corner Brook and Bishop's Falls made its last run on September 30, 1988. Terra Transport/CN would run the Roadcruiser bus service until March 29, 1996, whereupon the bus service was sold off to DRL Coachlines of Triton, Newfoundland.
=== Expansion and service cuts ===
From the acquisition of the Algoma Central Railway in 2001 until service cancellation in July 2015, CN operated passenger service between Sault Ste. Marie and Hearst, Ontario. The passenger service operated three days per week and provided year-round access to remote tourist camps and resorts.
In January 2014, CN announced it was cutting the service, blaming the Government of Canada for cutting a subsidy necessary to keep it running. Some argued it was an essential service, but it had always been deemed financially uneconomic, and despite an extension of funding in April 2014, the Algoma Central service was suspended as of July 2015.
CN operates the Agawa Canyon Tour Train, an excursion that runs from Sault Ste. Marie, Ontario, north to the Agawa Canyon. The tour train consists of up to 28 passenger cars and 2 dining cars, the majority of which were built for CN by Canadian Car and Foundry in 1953–54; some of these cars had been transferred to the D&RGW Ski Train and were bought back by CN in 2009.
After CN acquired BC Rail in 2004, it started operating a railbus service between Seton Portage and Lillooet, British Columbia, called the Kaoham Shuttle.
CN crews formerly operated commuter trains on behalf of GO Transit in Toronto and the surrounding area. This changed in 2008, when a deal was reached with Bombardier Transportation that replaced all CN crews with Bombardier crews.
== Locomotives ==
=== Steam ===
The CNR acquired its first 4-8-4 Confederation locomotives in 1927. Over the next 20 years, it ordered over 200 for passenger and heavy freight service. The CNR also used several 4-8-2 Mountain locomotives, almost exclusively for passenger service. No. 6060, a streamlined 4-8-2, was the last CN steam locomotive, running in excursion service in the 1970s. CNR also used several 2-8-2 Mikado locomotives.
=== Electric ===
CN inherited from the Canadian Northern Railway several boxcab electrics used through the Mount Royal Tunnel. Those were built between 1914 and 1918 by General Electric in Schenectady, New York. To operate the new Montreal Central Station, which opened in 1943 and was to be kept free of locomotive smoke, they were supplemented by nearly identical locomotives from the National Harbours Board; those engines were built in 1924 by Beyer, Peacock & Company and English Electric. In 1950, three General Electric centre-cab electric locomotives were added to the fleet. In 1952 CN added electric multiple units built by Canadian Car and Foundry.
Electrification was restricted to Montreal and ran from Central Station to Saint-Lambert (south), Turcot (west), Montréal-Nord (east) and Saint-Eustache-sur-le-lac, later renamed Deux-Montagnes (north). As steam locomotives gave way to diesels, engine changeovers were no longer necessary, and the catenary was eventually removed from the western, eastern and southern lines. However, until the end of the original electrification, CN's electric locomotives hauled Via Rail's trains, their diesel locomotives included, to and from Central Station.
The last 2,400 V DC CN electric locomotive ran on June 6, 1995, the very same locomotive that pulled the inaugural train through the Mount Royal Tunnel back in 1918. Later in 1995 the AMT's Electric Multiple Units began operating under 25 kV AC 60 Hz electrification, and in 2014, dual-power locomotives entered service on the Mascouche line.
=== Turbo ===
In May 1966, CN ordered five seven-car UAC TurboTrains for the Montreal–Toronto service. It planned to operate them in tandem, connecting two trains into a larger fourteen-car arrangement with a total capacity of 644 passengers. The Canadian trains were built by Montreal Locomotive Works, with their ST6 engines supplied by UAC's Canadian division (now Pratt & Whitney Canada) in Longueuil, Quebec.
CN and its ad agency wanted to promote the new service as an entirely new form of transit, so they dropped the "train" from the name. In CN's marketing literature the train was referred to simply as the "Turbo", although it retained the full TurboTrain name in CN's own documentation and in communication with UAC. A goal of CN's marketing campaign was to have the train in service for Expo 67, so the Turbo was rushed through its trials. It missed Expo, a disappointment to all involved, but the hectic pace did not let up, and it was cleared for service after only one year of testing.
The Turbo's first demonstration run, in December 1968 with Conductor James Abbey of Toronto in command, included a large press contingent. An hour into the run, the Turbo collided with a truck at a highway crossing near Kingston.
The Turbo's final run was on October 31, 1982.
=== Diesel ===
CNR's first foray into diesel motive power was with self-propelled railcars. In November 1925, Railcar No. 15820 completed a 72-hour journey from Montreal to Vancouver with its 185-horsepower (138 kW) diesel engine in nearly continuous operation for the entire 4,726-kilometre (2,937 mi) trip. Railcars were used on marginally economic routes, where they were cheaper to operate than the steam locomotives used on busier routes.
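For a sense of scale, the figures quoted above pin down the railcar's average pace; a one-line check using only those numbers:

```python
# Quick check of the quoted figures: 4,726 km covered in a nearly
# continuous 72-hour run gives the railcar's average speed.
distance_km = 4726
hours = 72
print(f"{distance_km / hours:.1f} km/h average")  # -> 65.6 km/h average
```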
In 1929, the CNR made its first experiment with mainline diesel electric locomotives, acquiring two 1,330-horsepower (990 kW) engines from Westinghouse, numbered 9000 and 9001. It was the first North American railway to use diesels in mainline service. These early units proved the feasibility of the diesel concept, but were not always reliable. No. 9000 served until 1939, and No. 9001 until 1947. The difficulties of the Great Depression precluded much further progress towards diesel locomotives. The CNR began its conversion to diesel locomotives after World War II, and had fully dieselized by 1960. Most of the CNR's first-generation diesel locomotives were made by General Motors Diesel (GMD) and Montreal Locomotive Works.
For its narrow-gauge lines in Newfoundland CN acquired from GMD the 900 series, Models NF110 (road numbers 900–908) and NF210 (road numbers 909–946). For use on the branch lines, CN purchased the EMD G8 (road numbers 800–805).
For passenger service the CNR acquired GMD FP9 diesels, as well as CLC CPA16-5, ALCO MLW FPA-2 and FPA-4 diesels. These locomotives made up most of the CNR's passenger fleet, although CN also owned some 60 RailLiners (Budd Rail Diesel Cars), some dual-purpose diesel freight locomotives (freight locomotives equipped with passenger train apparatus, such as steam generators) as well as the locomotives for the Turbo trainsets. Via acquired most of CN's passenger fleet when it took over CN passenger service in 1978.
As of 2007, the CN fleet consisted of 1,548 locomotives, most of them products of either General Motors' Electro-Motive Division (EMD) or General Electric/GE Transportation Systems. Some locomotives more than 30 years old remain in service.
Much of the current roster is made up of EMD SD70I, EMD SD75I and GE C44-9W locomotives. More recent acquisitions include the EMD SD70M-2 and GE ES44DC; since 2015 the GE ES44AC and GE ET44AC have been the latest units.
Beginning in the early summer of 2010, CN purchased a small order of GE C40-8s and GE C40-8Ws from Union Pacific and BNSF Railway, respectively, intending to use them as a cheaper power alternative. CN currently has 65 GE ES44ACs on its roster, all of them ordered and delivered between December 2012 and December 2013; they are CN's first AC-powered locomotives. In 2015, CN started ordering more GE units, the ET44AC. On November 17, 2020, CN revealed five heritage units to mark the 25th anniversary of its becoming a publicly traded company. They had been spotted a month earlier but had not yet been formally announced by the company. The locomotives were repainted into the schemes of railroads CN had previously acquired: four GE ET44ACs in IC, EJ&E, WC and BC Rail paint, and an EMD SD70M-2 in GTW paint.
== Major facilities ==
CN owns numerous large yards and repair shops across its system, used for operations ranging from intermodal terminals to classification yards. Examples include:
=== Hump yards ===
Hump yards work by pushing cars over a small hill and releasing them down the slope, where they are switched automatically into cuts of cars ready to join outbound trains; a simplified sorting sketch follows the list below. CN's active humps include:
Vaughan, Ontario: MacMillan Yard
Winnipeg, Manitoba: Symington Yard
Gary, Indiana: Kirk Yard
Memphis, Tennessee: Harrison Yard
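To make the sorting idea concrete, here is a toy model of the classification step. The car numbers, destinations and track layout are invented for illustration and are not CN data:

```python
from collections import defaultdict

def hump_sort(inbound_cars):
    """Group cars into cuts by destination, in the order they roll off the hump."""
    tracks = defaultdict(list)              # one classification track per destination
    for car_id, destination in inbound_cars:
        tracks[destination].append(car_id)  # the switch routes each car to its cut
    return dict(tracks)

# Invented consist for illustration only.
inbound = [("CN 1001", "Winnipeg"), ("CN 1002", "Memphis"),
           ("CN 1003", "Winnipeg"), ("CN 1004", "Gary")]
for destination, cut in hump_sort(inbound).items():
    print(destination, cut)   # each cut is ready to join an outbound train
```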
== See also ==
Canadian Pacific Kansas City
GO Transit
Narrow gauge railways in Canada
Newfoundland T'Railway
Ontario Northland Railway
Rail transport in Canada
== Notes ==
== References ==
== Further reading ==
== External links ==
Official website
Canadian National Railway fonds (RG30/R231) at Library and Archives Canada
CN Images of Canada Gallery Archived November 30, 2012, at the Wayback Machine
Canadian National Railway Historic Photograph Collection
CNR Trucking: Express and Freight Vehicles | Wikipedia/CN_Telegraph |
The Baltimore–Washington telegraph line was the first long-distance telegraph system set up to run overland in the United States.
== Building of line ==
In March 1843, the US Congress appropriated US$30,000 (equivalent to $1,012,393 in 2024) to Samuel Morse to lay a telegraph line between Washington, D.C., and Baltimore, Maryland, along the right-of-way of the Baltimore and Ohio Railroad.
Morse originally decided to lay the wire underground, asking Ezra Cornell to lay the line using a special cable-laying plow that Cornell had developed. Wire began to be laid in Baltimore on October 21, 1843. Cornell's plow was pulled by eight mules, and cut a ditch 2 inches (5.1 cm) wide and 20 inches (51 cm) deep, laid a pipe with the wires, and reburied the pipe, in an integrated operation. However, the project was stopped after about 9.3 miles (15 km) of wire was laid because the line was failing.
Morse learned that Cooke and Wheatstone were using poles for their lines in England and decided to follow their lead. Installation of the lines and poles from Washington to Baltimore began on April 1, 1844, using chestnut poles 23 feet (7 m) high spaced 300 feet (90 m) apart, for a total of about 700 poles. Two 16-gauge copper wires were installed; they were insulated with cotton thread, shellac, and a mixture of "beeswax, resin, linseed oil, and asphalt." A test of the still incomplete line occurred on May 1, 1844, when news of the Whig Party's nomination of Henry Clay for U.S. President was sent from the party's convention in Baltimore to the Capitol Building in Washington.
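The quoted spacing is consistent with the pole count: assuming the roughly 40-mile Washington–Baltimore right-of-way, a quick calculation recovers the "about 700 poles" figure.

```python
# Assuming a roughly 40-mile route (an approximation, not a quoted figure):
# one pole every 300 feet, plus one more to close the run.
route_miles = 40
spacing_ft = 300
poles = route_miles * 5280 // spacing_ft + 1
print(poles)   # -> 705, i.e. "about 700 poles"
```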
== Operations ==
Morse's line was demonstrated on May 24, 1844, from the Old Supreme Court Chamber in the United States Capitol in Washington to the Mount Clare station of the railroad in Baltimore, and commenced with the transmission of Morse's first message (from Washington) to Alfred Vail (in Baltimore), "What hath God wrought", a phrase from the Bible's Book of Numbers. The phrase was suggested by Annie Ellsworth, whose father was a supporter of Morse's and knew Morse was religious.
As U.S. Postmaster General, Cave Johnson was in charge of the line. Morse was made superintendent of the line, and Alfred Vail and Henry Rogers the operators.
The next year, Johnson reported that "the importance of [the line] to the public does not consist of any probable income that can ever be derived from it," which led to the telegraph being turned over to private development.
== See also ==
First transcontinental telegraph
Timeline of North American telegraphy
== References ==
== External links ==
Electronic Technology in the House of Representatives
History of the Telegraph
Contemporary account of the construction of the transcontinental telegraph | Wikipedia/Baltimore-Washington_telegraph_line |
The Foy–Breguet telegraph, also called the French telegraph, was an electrical telegraph of the needle telegraph type developed by Louis-François-Clement Breguet and Alphonse Foy in the 1840s for use in France. The system used two-needle instruments that presented a display using the same code as that on the optical telegraph of Claude Chappe. The Chappe telegraph was extensively used in France by the government, so this arrangement was appealing to them as it meant there was no need to retrain operators.
Most needle telegraph systems moved the needles by means of an electromagnet driven by battery power applied to the line at the sending end. In contrast, the Foy–Breguet telegraph used electromagnets, but they did not directly drive the needle. Instead, they operated the detent of a clockwork mechanism, which released the needle to advance one position at a time.
The Chappe telegraph existed in some other countries, but no country besides France tried to duplicate the Chappe telegraph, or any other optical telegraph, as an electrical telegraph. Generally, each electrical telegraph system had a new code developed specifically to suit it. This was problematic for international communications, and in 1855 France abandoned the Foy–Breguet telegraph in favour of the Morse telegraph to bring them into line with the German–Austrian Telegraph Union. Many central European countries were members of this union and they had adopted the Morse system for better interoperability.
== Development ==
The first attempt to bring the electrical telegraph to France was made by Samuel Morse in 1838. He demonstrated his system to the French Academy of Sciences and made a bid for the contract to install a telegraph along the line of the Paris to Saint-Germain railway. However, the French government decided that it did not want to entrust the construction of telegraph lines to private companies. Private operation of telegraph systems had been illegal in France since 1837 and all telegraph infrastructure was owned and operated by the state; the electrical telegraph could therefore only start in France if the government sponsored it. France had the most extensive optical telegraph system of any country, developed for military purposes by Claude Chappe in the revolutionary and Napoleonic periods. There were strong arguments put forward for the superiority of optical telegraphs over electrical telegraphs. Chief among these was that electrical systems were vulnerable to attack by saboteurs: in an optical system, only the telegraph stations needed to be defended, whereas an electrical system was impossible to defend over its many hundreds of miles of exposed wires.
Alphonse Foy, the chief administrator of the French telegraphs, had a further objection to the Morse system. He believed that his illiterate telegraph operators would not easily be able to learn the Morse code. He did not, however, entirely reject the electrical telegraph. After the Morse system was rejected in 1839, Foy investigated the Cooke–Wheatstone telegraph in use in England. Foy realised that the needle telegraph displays used by the Cooke–Wheatstone system could be adapted to display the symbols of the French optical telegraph. He asked Louis-François-Clement Breguet to design such a system. It was first tested on the Paris Saint-Cloud to Versailles line in 1842.
Funding for an electrical telegraph was approved in 1844. Foy specified that the new telegraph must show the same display as the Chappe telegraph so that there was no need for operator retraining. This required the display to have three moving parts; the Chappe telegraph had a pivoted crossbar (the regulator) with two moveable arms (the indicators), one at each end of the regulator. A design meeting this requirement was submitted by Pierre-Antoine Joseph Dujardin. Implemented as a needle telegraph, the arrangement required three moving needles, which in turn required three signal wires. The wires were a significant part of the cost of installation; the Morse system, for instance, required only one wire.
In May 1845, Foy ran a comparative test between the Dujardin, Breguet, and Cooke-Wheatstone systems on the Paris, Saint Germain to Rouen line. Foy rejected the Dujardin system in favour of the one by Breguet, even though the Dujardin system more fully mimicked the Chappe system than Breguet's. The Breguet design required only two signal wires, but at the expense of having only two moveable needles. These represented the indicators of the Chappe system. The regulator was simply a marking on the face of the instrument, not a moving part—it was permanently in the horizontal position. The disadvantage of doing this is that it drastically reduced the available codespace which in turn impacted the speed a message could be transmitted.
The rejection was perhaps for economic reasons, or perhaps because Breguet was better acquainted with Foy. Breguet had a long history of working with the French telegraph. His grandfather, Abraham-Louis Breguet, a watchmaker, had worked with Chappe on the design of the optical telegraph, and Louis inherited the business. The Chappe system used a large codebook with thousands of predetermined phrases and sentences; 92 codepoints were used to specify the line and page of the codebook (see Telegraph code § Chappe code). There were some early attempts to use a reduced codebook on the Foy–Breguet system, but this was soon dropped in favour of a purely alphabetic code.
=== France compared to other countries ===
Many other European countries installed optical telegraphs. Napoleon extended the Chappe system into conquered territories. Other countries developed their own systems, but none of them were as extensive as that in France. Only the system of Abraham Niclas Edelcrantz in Sweden even came close. Consequently, other nations did not have such a strong desire for backward compatibility as France and were able to move to the electrical telegraph sooner. France was unique in requiring the electrical telegraph to mimic the optical telegraph.
== Operation ==
The display of Foy–Breguet telegraph instruments consists of two needles each pivoted at its centre. One half of each needle is coloured black and the other half white. The black part of the needles is meant to represent the indicators of the Chappe telegraph. The white part of the needles is ignored. A bar is marked on the faceplate of the instrument between the pivot points of the needles. This is meant to represent the regulator of the Chappe telegraph, but in the Foy–Breguet system it is purely decorative – it does not move. Each needle can take on any one of eight positions, moving in steps of 45°, resulting in a codespace of 8×8=64 codepoints.
Unlike other needle telegraphs, the motive force that rotates the needles is not provided by the electric current on the telegraph line. Instead, it is provided by a clockwork mechanism that has to be kept wound. The winding keys can be seen in the image of the instrument hanging on chains either side of the instrument face. There is a separate key and a separate mechanism for each needle. When it is desired to wind the mechanism, the key is attached to a square winder situated directly below each needle. When current is applied to one of the telegraph lines, the detent of the corresponding clockwork mechanism is released by means of the armature of an electromagnet and the needle advances by 45°. When the current is cut off, the detent is again released and the needle advances a further 45°. The current is applied to both the sending and receiving instrument so that the sending operator can view the resulting transmission.
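A minimal sketch of the escapement logic just described may help: every change in line current, whether applied or cut, trips the detent and lets the needle advance one 45° step. The position numbering 0–7 and the helper names are conventions assumed here, not part of the historical apparatus.

```python
class Needle:
    """One indicator needle: 8 positions, 45 degrees apart (numbered 0-7 here)."""
    def __init__(self):
        self.position = 0
        self.current = False      # is line current applied at the moment?

    def set_current(self, on):
        if on != self.current:    # any on/off transition trips the detent...
            self.current = on
            self.position = (self.position + 1) % 8   # ...and the clockwork advances 45 degrees

def advance(needle, steps):
    """Pulse the line so the needle steps forward the requested number of positions."""
    for _ in range(steps):
        needle.set_current(not needle.current)

left, right = Needle(), Needle()
advance(left, 3)                  # set the left indicator to position 3
advance(right, 6)                 # set the right indicator to position 6
print(left.position, right.position, 8 * 8)   # -> 3 6 64 (the 64-codepoint space)
```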
The operator controls the transmission by means of two manipulators. Each of these manipulators has a crank handle which can be set in any one of eight notched positions corresponding to the eight possible positions of one of the needles. As the crank handle is turned through the notches, the battery is alternately connected and disconnected from the line and the local instrument. Thus, current is alternately applied and removed from the mechanism turning the needles.
A drawback of the Foy–Breguet system was that it did not use repeaters over long distances. Other major telegraph systems used relays for this purpose and there were efforts to apply this technology to the French system. This was unsuccessful, which meant that the French system had to employ operators to retransmit messages in some places. The requirement to provide two lines could not be met, or was not economic to meet, on some routes. A single-needle instrument was developed to fill this need. This instrument was mechanically identical to one half of the two-needle version. In fact, it was possible to use one side only of a two-needle instrument with a single line if desired. The coding was the same on the one-needle device except that the positions of the two indicators of each character were sent sequentially instead of in parallel. This reduced the transmission speed to 16–18 wpm.
== Connection to England ==
A submarine telegraph cable was laid from England to France by the Submarine Telegraph Company in 1851. In the UK, the Cooke and Wheatstone telegraph was in use, which used a different code. This meant that at the English end, both a Foy–Breguet operator and a Cooke–Wheatstone operator were required so that messages could be recoded between the two systems. The Foy–Breguet system was faster to send and read (between 24 and 46 wpm) than the Cooke–Wheatstone. A Foy–Breguet operator could instantly see the letter being transmitted from the visual pattern, whereas the Cooke–Wheatstone operator had to count the left and right deviations of the single needle.
== Withdrawal ==
For a decade France maintained a mixture of optical telegraph and electrical telegraph systems on its network. The Foy–Breguet system ensured that operators could easily be transferred from the optical to the electric systems, although many optical operators (semaphorists) declined to become telegraphists when their lines were updated. The semaphorists were largely rural workers on isolated stations, used to taking on the responsibility of carrying out mechanical repairs by themselves; after all, if the equipment broke down, they had no other means of calling for assistance. Telegraphists, by contrast, were located in offices with management and service personnel on hand; they were forbidden from attempting any kind of repair and had a more paperwork-intensive job. Despite its advantages in the French context, the uniqueness of the French system eventually led to its decline.
During the 1850s, as international telegraph traffic grew, having different telegraph systems in different countries became increasingly problematic. Direct connections were not possible and operators had to be employed to recode messages crossing borders. The code that was later to become known as International Morse Code was adopted in several countries. It was first used on Hamburg railways and was devised by Friedrich Clemens Gerke. This code was a heavily modified version of the original American Morse code and was known as the Hamburg code or Gerke code. Gerke's code was adopted in 1851 by the German-Austrian Telegraph Union which represented many central European countries. In 1855, France also adopted the code and replaced the Foy–Breguet telegraph equipment with the Morse system.
== References ==
== Bibliography ==
Aitken, Frédéric; Foulc, Jean-Numa, From Deep Sea to Laboratory 1, John Wiley & Sons, 2019 ISBN 1786303744.
Butrica, Andrew J. (1986). From inspecteur to ingénieur: telegraphy and the genesis of electrical engineering in France, 1845-1881 (Thesis Dissertation). Iowa State University. Retrieved 8 March 2020.
Coe, Lewis, The Telegraph: A History of Morse's Invention and Its Predecessors in the United States, McFarland, 2003 ISBN 0786418087.
Haigh, Kenneth Richardson, Cableships and Submarine Cables, Adlard Coles, 1968 OCLC 497380538.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 0471205052.
Holzmann, Gerard J.; Pehrson, Björn, The Early History of Data Networks, Wiley, 1995 ISBN 0818667826.
Roberts, Steven, Distant Writing: A History of the Telegraph Companies in Britain between 1838 and 1868, ch. 13 "The companies abroad", accessed 4 March 2020.
Shaffner, Taliaferro Preston, The Telegraph Manual, Pudney & Russell, 1859 OCLC 258508686.
Turnbull, Laurence, The Electro-magnetic Telegraph, A. Hart, 1853 OCLC 60717772.
== External links ==
Berghen, Fons Vanden, "Louis Breguet et ses appareils télégraphiques", Les Cahiers de la FNARH, pp. 14–25, no. 111, Fédération Nationale des Associations de personnel de La Poste et d'Orange pour la Recherche Historique, 2009 (in French). Includes many photographs of French telegraph instruments. | Wikipedia/Foy–Breguet_telegraph |
There are two types of radio network currently in use around the world: the one-to-many (simplex communication) broadcast network commonly used for public information and mass-media entertainment, and the two-way radio (duplex communication) type used more commonly for public safety and public services such as police, fire, taxicabs, and delivery services. Cell phones form a third category, able to send and receive simultaneously by using two different frequencies at once. Many of the same components and much of the same basic technology apply to all three.
The two-way type of radio network shares many of the same technologies and components as the broadcast-type radio network but is generally set up with fixed broadcast points (transmitters) with co-located receivers and mobile receivers/transmitters or transceivers. In this way both the fixed and mobile radio units can communicate with each other over broad geographic regions ranging in size from small single cities to entire states/provinces or countries. There are many ways in which multiple fixed transmit/receive sites can be interconnected to achieve the range of coverage required by the jurisdiction or authority implementing the system: conventional wireless links in numerous frequency bands, fibre-optic links, or microwave links. In all of these cases the signals are typically backhauled to a central switch of some type where the radio message is processed and resent (repeated) to all transmitter sites where it is required to be heard.
In contemporary two-way radio systems, a concept called trunking is commonly used to achieve better efficiency of radio spectrum use. It provides a very wide range of coverage, with no switching of channels required by the mobile radio user as it roams throughout the system coverage. Trunking of two-way radio is identical to the concept used for cellular phone systems where each fixed and mobile radio is specifically identified to the system controller and its operation is switched by the controller.
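The channel-assignment idea behind trunking can be sketched in a few lines. The following is a minimal illustration of dynamic channel sharing, not any real trunking protocol; the talkgroup names and the two-channel pool are invented:

```python
class TrunkingController:
    """Central controller that hands out idle voice channels on demand."""
    def __init__(self, channels):
        self.free = list(channels)   # pool of idle voice channels
        self.assigned = {}           # talkgroup -> channel currently granted

    def request(self, talkgroup):
        if talkgroup in self.assigned:        # already talking: keep the grant
            return self.assigned[talkgroup]
        if not self.free:                     # every channel busy: caller waits
            return None
        channel = self.free.pop(0)
        self.assigned[talkgroup] = channel
        return channel

    def release(self, talkgroup):
        channel = self.assigned.pop(talkgroup, None)
        if channel is not None:
            self.free.append(channel)         # channel returns to the shared pool

controller = TrunkingController(channels=[1, 2])
print(controller.request("police-dispatch"))  # -> 1
print(controller.request("fire-ops"))         # -> 2
print(controller.request("ambulance"))        # -> None (all channels busy)
controller.release("police-dispatch")
print(controller.request("taxi-fleet"))       # -> 1 (the freed channel is reused)
```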
== Broadcasting networks ==
The broadcast type of radio network is a network system which distributes radio programming to multiple stations simultaneously, or slightly delayed, for the purpose of extending total coverage beyond the limits of a single broadcast signal. The resulting expanded audience for radio programming or information essentially applies the benefits of mass-production to the broadcasting enterprise. A radio network has two sales departments, one to package and sell programs to radio stations, and one to sell the audience of those programs to advertisers.
Most radio networks also produce much of their programming. Originally, radio networks owned some or all of the stations that broadcast the network's radio format programming. Presently however, there are many networks that do not own any stations and only produce and/or distribute programming. Similarly station ownership does not always indicate network affiliation. A company might own stations in several different markets and purchase programming from a variety of networks.
Radio networks rose rapidly with the growth of regular broadcasting of radio to home listeners in the 1920s. This growth took various paths in different places. In Britain the BBC was developed with public funding, in the form of a broadcast receiver license, and a broadcasting monopoly in its early decades. In contrast, in the United States various competing commercial broadcasting networks arose funded by advertising revenue. In that instance, the same corporation that owned or operated the network often manufactured and marketed the listener's radio.
Major technical challenges to be overcome when distributing programs over long distances are maintaining signal quality and managing the number of switching/relay points in the signal chain. Early on, programs were sent to remote stations (either owned or affiliated) by various methods, including leased telephone lines, pre-recorded gramophone records and audio tape. The world's first all-radio, non-wireline network was claimed to be the Rural Radio Network, a group of six upstate New York FM stations that began operation in June 1948. Terrestrial microwave relay, a technology later introduced to link stations, has been largely supplanted by coaxial cable, fiber, and satellite, which usually offer superior cost-benefit ratios.
Many early radio networks evolved into television networks.
== See also ==
List of radio broadcast networks
Lists of radio stations
== References == | Wikipedia/Radio_network |
Telegram style, telegraph style, telegraphic style, or telegraphese is a clipped way of writing which abbreviates words and packs information into the smallest possible number of words or characters. It originated in the telegraph age when telecommunication consisted only of short messages transmitted by hand over the telegraph wire. The telegraph companies charged for their service by the number of words in a message, with a maximum of 15 characters per word for a plain-language telegram, and 10 per word for one written in code. The style developed to minimize costs but still convey the message clearly and unambiguously.
The related term cablese describes the style of press messages sent uncoded but in a highly condensed style over submarine communications cables. In the U.S. Foreign Service, cablese referred to condensed telegraphic messaging that made heavy use of abbreviations and avoided use of definite or indefinite articles, punctuation, and other words unnecessary for comprehension of the message.
== Antecedents ==
Before the telegraph age military dispatches from overseas were made by letters transported by rapid sailing ships. Clarity and concision were often considered important in such correspondence.
An apocryphal story about the briefest correspondence in history has a writer (variously identified as Victor Hugo or Oscar Wilde) inquiring about the sales of his new book by sending the message "?" to his publisher, and receiving "!" in reply.
== Telegraphic coded expressions ==
Through the history of telegraphy, very many dictionaries of telegraphese, codes or ciphers were developed, each serving to minimise the number of characters or words which needed to be transmitted in order to impart a message; the drivers for this economy were, for telegraph operators, the resource cost and limited bandwidth of the system; and for the consumer, the cost of sending messages.
Examples of telegraphic code-words and their equivalent expressions, taken from The Adams Cable Codex (1894) are:
Note that in the Adams code, the code-words are all actual English words; some telegraph companies charged more for coded messages, or had shorter word-size limits (10-character maximum vs. 15 characters). Compare these to the following examples from the A.B.C. Universal Commercial Electric Telegraphic Code (1901) all of which are English-like, but invented words:
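The example tables themselves have not survived in this copy, so the sketch below uses invented placeholder entries purely to illustrate the economics of code-word substitution: one chargeable word stands in for a whole phrase.

```python
# Hypothetical codebook entries; real codices paired thousands of phrases
# with single chargeable code-words.
codebook = {
    "ABANDON": "Abandon the negotiation immediately",
    "BALEFUL": "Buyer refuses the terms offered",
    "CANDID": "Confirm arrival of the shipment",
}

def encode(phrases):
    """Replace each known phrase with its single code-word."""
    reverse = {phrase: word for word, phrase in codebook.items()}
    return [reverse[phrase] for phrase in phrases]

message = ["Confirm arrival of the shipment", "Buyer refuses the terms offered"]
coded = encode(message)
plain_words = sum(len(phrase.split()) for phrase in message)
print(coded, f"- billed for {len(coded)} words instead of {plain_words}")
# -> ['CANDID', 'BALEFUL'] - billed for 2 words instead of 10
```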
== Comparison to modern text messaging ==
In some ways, telegram style was the precursor to the abbreviated language used in text messaging and short message service (SMS) platforms such as Twitter, which is referred to as SMS language.
For telegrams, space was at a premium, economically speaking, and abbreviations were used out of necessity. This motivation was revived in compressing information into the 160-character limit of a costly SMS before the advent of multi-message capabilities. Length constraints, and the initial handicap of having to enter each letter using multiple keypresses on a numeric keypad, drove the re-adoption of telegraphic style. Continued space limits and high per-message costs meant the practice persisted for some time after the introduction of built-in predictive text assistance. Some who favor predictive entry claim that telegraphing persists despite needing more effort to write (and read); many others, however, find predictive text generation usually wrong, and hence more tedious and vexing to erase and correct than simply turning off auto-text generation and entering their messages "telegraph style".
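The billing arithmetic is easy to make concrete. Assuming the standard GSM 7-bit limits (160 characters for a single message, 153 per part once a message is split), a short helper shows why a terse wording could halve the bill; the sample messages are invented:

```python
def sms_segments(text):
    """Parts billed for a plain GSM 7-bit message: 160 chars fit in one
    SMS; a longer message is split into 153-char parts."""
    if len(text) <= 160:
        return 1
    return -(-len(text) // 153)   # ceiling division

verbose = ("I will be arriving at the central railway station at around seven "
           "o'clock this evening, so please could you arrange to meet me on "
           "the main concourse if that is at all convenient for you")
terse = "arr stn ~7pm pls meet"
print(len(verbose), sms_segments(verbose))  # the long wording needs two billed parts
print(len(terse), sms_segments(terse))      # the telegraphic version fits in one
```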
== Other languages ==
In Japanese, telegrams are printed using the katakana script, one of the few instances in which this script is used for entire sentences. This is a rare context in which one might see the particle を written with the katakana ヲ; particles are not normally among the words rendered in katakana, so the ヲ character is virtually never used elsewhere.
== Telegram length ==
The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer.
According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters.
For German telegrams, the mean length was 11.5 words or 72.4 characters. At the end of the 19th century the average length of a German telegram was calculated as 14.2 words.
== Gallery ==
== See also ==
Headlinese, a similar shorthand in newspaper headlines
SMS language, abbreviated styles used in instant messaging and texting
== Further reading ==
Ross, Nelson E. (1928). How to Write Telegrams Properly. Archived from the original on 2017-03-15 – via The Telegram Office.
Standage, Tom (1998). The Victorian internet : the remarkable story of the telegraph and the nineteenth century's on-line pioneers. Macmillan. ISBN 0-8027-1342-4.
== References == | Wikipedia/Telegraphese |
GN Store Nord A/S is a Danish manufacturer of hearing aids (GN ReSound/GN Hearing), speakerphones, videobars and headsets (Jabra (GN Audio) and SteelSeries). GN Store Nord A/S is listed on NASDAQ OMX Copenhagen (ISIN code DK0010272632).
== History ==
=== The Great Northern Telegraph Company ===
The company was founded as The Great Northern Telegraph Company (Det Store Nordiske Telegrafselskab A/S) in Denmark in June 1869. It was set up as a merger of three recently established telegraph companies initiated by Danish industrial mogul Carl Frederik Tietgen. The aim of the firm was to create a worldwide telegraph company.
The starting point of The Great Northern Telegraph Company (now GN Store Nord) was a concession agreement that C.F. Tietgen made with the Russian Tsar in 1869. The agreement gave The Great Northern Telegraph Company exclusive rights – and obligations – to establish and run a telegraph line in Russia. This represented a great pioneering task for the company in establishing connections from Europe to the Far East. The Russian authorities ran the actual construction work in Russia. They had already set up a telegraph line in parts of Siberia but were looking for a business partner to cover China and Japan before continuing the Russian line all the way east to Vladivostok. Thus, The Great Northern Telegraph Company was given the responsibility to establish and run its own telegraph line in Asia and additionally to assist the Russians with operations, maintenance, technical assistance, and education. The naval officer Edouard Suenson was put in charge of the company's operations in the Far East. On his return to Denmark, in 1877, he was appointed CEO of the parent company in Copenhagen.
In the following years, the telegraph network expanded massively, both in Europe and in Asia. First, Oslo, London and Paris were connected. Later, operations took place along the coast of China, ranging from Hong Kong to Shanghai, and further into Japan, where the first telegraph station opened in Nagasaki in 1871. In addition to the telegraph lines, telegraph stations and offices opened at several locations.
In 1897, negotiations began about a potential connection going from Scotland to the United States through the Faroe Islands, Iceland, and Greenland. In 1906, the cable was established, although without the final connection to the United States, which had to wait for almost 60 years to become a reality. When the transatlantic connection was finally established, however, it represented a remarkable expansion, which significantly facilitated communication between people around the world.
The beginning of the 20th century was characterized by several wars and disputes, which affected the company's operations. World War I and, not least, the Russian Revolution changed the map of Europe, but this only increased the demand for telegraphy. Thus, the company succeeded in prolonging its concession agreement in 1921, with the renewal signed by Lenin.
The 1920s and early 1930s were strong decades for The Great Northern Telegraph Company, which had acquired a reputation as one of the leading international telecommunication companies in the world. The late 1930s, however, presented great challenges as competition from wireless telegraphy became increasingly severe. In addition, World War II caused great damage to telegraph lines around the world, so that in 1945 the company had only two lines left: the England–Faroes–Iceland line and the Sweden–Finland line. Although broken lines were repaired and re-established after the war, the company had to acknowledge that an era was over.
Thus, the new strategy was to focus on a broader segment by investing in various companies across sectors. This strategy was initiated in 1939 with the investment in the battery factory Hellesens. Over the following decades, The Great Northern Telegraph Company balanced between investing in the telecommunications industry and other industries. On the industry side, it invested in companies such as Lauritz Knudsen, which produced electrical goods, and in 1947, the radiotelephone production company Storno (a contraction of Store Nordiske (Great Northern)) was founded. Other acquisitions were Telematic, which produced telephones, Elmi, which produced measuring equipment, and Danavox, which produced hearing aids.
=== GN Store Nord since 1985 ===
In 1985, The Great Northern Telegraph Company changed its name to GN Store Nord (GN = Great Nordic) with the aim of creating a new group identity and organizing its businesses. In this process, all subsidiaries were renamed to include GN: GN Danavox, GN NetTest, GN Automatic, etc. A major change happened in 1991, when GN was awarded an attractive GSM concession by the national Danish telecommunications authorities. In March 1992, GN's new subsidiary Sonofon opened the first private mobile telephone network in Denmark. Although GN was not the only investor in Sonofon, it owned the majority of the shares. With the blossoming of the data communications and telephony industry, and under CEO Jørgen Lindegaard, GN was back on track and enjoyed great success in the late 1990s. In 2000, the company sold Sonofon to the Norwegian telecom operator Telenor for DKK 14.7 billion.
A large amount was invested in the GN subsidiary NetTest, which had evolved from the former Elmi and was considered GN's prospective core business. It was decided to let NetTest acquire the French company Photonetics for a price of DKK 9.1 billion. The optimistic view of the future was also reflected in the share price, which increased fivefold in only one year, from September 1, 1999, to September 1, 2000. The same year, the American Jabra Corporation was acquired.
The focus on and investment in NetTest, however, resulted in a serious downturn, since GN had misjudged the market development for NetTest's products. In 2001, net profit ended at DKK −9.2 billion, followed by a share price decline equivalent to the previous years' gains. A major part of the proceeds from Sonofon was thereby lost within one year, and shareholders raged in the media and at the annual general meeting.
The following years' turbulence led to the company selling most of its subsidiaries and leaving Tietgen's old headquarters, dating from 1893, at Kongens Nytorv in Copenhagen. GN Store Nord's headquarters is now located in Ballerup, north of Copenhagen, Denmark. GN continued to focus on its two core businesses: hearing aids and headsets, produced by GN Hearing and GN Audio, respectively.
On October 2, 2006, GN announced its decision to divest GN Hearing (formerly GN ReSound) and GN Otometrics (a company producing audio measuring equipment) to Swiss competitor Sonova (formerly known as Phonak). The deal, however, was annulled after being blocked by the German Cartel Office. After this, GN announced that it intended to keep the two companies but filed an appeal against the court ruling. The case is still pending.
The blocked deal, however, left GN severely challenged, with two underperforming businesses, a thin product pipeline, a heavy debt position, and a highly adverse macroeconomic environment. Nonetheless, with comprehensive restructuring and management efforts, the company managed to survive. Since then, GN has gradually fought its way back.
In 2009, GN Audio (then GN Netcom) made a decision to globally market all its products under the same brand, Jabra (a company that GN had acquired in 2000). The purpose of consolidating all products under the same brand was to strengthen the company's position as the world's leading supplier of headsets.
Today, GN Audio is a world leader in Unified Communications headsets, and within the last couple of years the company has been first to market with a number of innovative products. In 2014, it launched the world's first sports headset with a built-in heart rate monitor. It has also launched a series of noise-cancelling headsets with a "concentration zone", specially designed to improve employees' ability to concentrate in noisy open-plan offices.
GN Hearing also got back on track. In 2010, the company launched the world's first hearing aid with 2.4 GHz technology – the new wireless technology was groundbreaking compared to the previous inductive technology. In 2014, GN Hearing changed the industry once more with the introduction of the world's first Made for iPhone hearing aid, which, based on the 2.4 GHz technology, enables the streaming of sound directly from an iPhone without any body-worn devices.
In October 2016, GN Audio acquired VXi Corporation, the manufacturer of both the VXi and BlueParrott headset brands.
== Business ==
Today, the GN Group consists of GN Store Nord A/S, GN Hearing A/S, and GN Audio A/S. GN develops and manufactures intelligent hearing, audio, video, and gaming solutions. GN's offerings are marketed by the brands ReSound, Jabra, SteelSeries, Beltone, Interton, Danavox, and FalCom in more than 100 markets globally.
== Directors ==
(Incomplete)
(1873–1908) Edouard Suenson
(1908–1938) Kay Suenson
(1938–1966) Bent Suenson
(1985–2000) Christian Tillisch (GN Netcom)
(1987–1993) Thomas Duer
(1995–2001) Jørgen Lindegaard
(1997–2008) Jesper Mailind (GN ReSound)
(2000–2003) Niels B. Christiansen (GN Netcom)
(2001–2006) Jørn Kildegaard
(2006–2009) Toon Bouten (GN Netcom)
(2008–2010) Mike Van der Wallen (GN ReSound)
(2009–2013) Mogens Elsberg (GN Netcom)
(2010–2014) Lars Viksmoen (GN ReSound)
(2014–2015) Niels Svenningsen (GN Netcom)
(2014–2018) Anders Hedegaard (GN ReSound/GN Hearing)
(2015–2023) René Svendsen-Tune (GN Netcom/GN Audio)
(2018–2019) Jakob Gudbrand (GN ReSound/GN Hearing)
(2019–2023) Gitte Pugholm Aabo (GN ReSound/GN Hearing)
(2023–present) Peter Karlstromer (Group CEO, GN Audio/GN Hearing)
== Further reading ==
Baark, Erik. Lightning Wires: The Telegraph and China's Technological Modernization 1860-1890 (ABC-CLIO/Greenwood. 1997)
Iversen, Martin Jes. "Via Northern: Strategic and organisational upheavals of Great Northern Telegraph Company, 1939–1948 and 1966–1977." Scandinavian Economic History Review 51.1 (2003): 29–45.
Jacobsen, Kurt. "Wasted opportunities? The Great Northern Telegraph Company and the wireless challenge." Business History 52.2 (2010): 231–250.
Holst, Helge. Elektriciteten. Nordisk Forlag, 1911.
Jacobsen, Kurt. Den røde tråd. Det Store Nordiske Telegraf-Selskabs storpolitiske spil efter den russiske revolution. København: Gyldendal, 1997.
GN Store Nord's 125th anniversary publication: From dots and dashes to tele- and data communication, June 1, 1994.
Iversen, Martin Jes. Turn Around – Kampen om GN Store Nord. Lindhardt og Ringhof, 2015. ISBN 978-87-11-33739-4.
== References ==
== External links ==
Official website
GN at NASDAQ OMX
Great Northern Telegraph History
www.stornotime.dk – The story of the Storno radiotelephone factory | Wikipedia/Great_Northern_Telegraph_Company
The Baltimore–Washington telegraph line was the first long-distance telegraph system set up to run overland in the United States.
== Building of line ==
In March 1843, the US Congress appropriated US$30,000 (equivalent to $1,012,393 in 2024) to Samuel Morse to lay a telegraph line between Washington, D.C., and Baltimore, Maryland, along the right-of-way of the Baltimore and Ohio Railroad.
Morse originally decided to lay the wire underground, and asked Ezra Cornell to lay the line using a special cable-laying plow that Cornell had developed. Laying of the wire began in Baltimore on October 21, 1843. Cornell's plow, pulled by eight mules, cut a ditch 2 inches (5.1 cm) wide and 20 inches (51 cm) deep, laid a pipe containing the wires, and reburied the pipe, all in one integrated operation. However, the project was stopped after about 9.3 miles (15 km) of wire had been laid because the line was failing.
Morse learned that Cooke and Wheatstone were using poles for their lines in England and decided to follow their lead. Installation of the lines and poles from Washington to Baltimore began on April 1, 1844, using chestnut poles 23 feet (7 m) high spaced 300 feet (90 m) apart, for a total of about 700 poles. Two 16-gauge copper wires were installed; they were insulated with cotton thread, shellac, and a mixture of "beeswax, resin, linseed oil, and asphalt." A test of the still incomplete line occurred on May 1, 1844, when news of the Whig Party's nomination of Henry Clay for U.S. President was sent from the party's convention in Baltimore to the Capitol Building in Washington.
== Operations ==
Morse's line was demonstrated on May 24, 1844, from the Old Supreme Court Chamber in the United States Capitol in Washington to the Mount Clare station of the railroad in Baltimore, and commenced with the transmission of Morse's first message (from Washington) to Alfred Vail (in Baltimore), "What hath God wrought", a phrase from the Bible's Book of Numbers. The phrase was suggested by Annie Ellsworth, whose father was a supporter of Morse's and knew Morse was religious.
As U.S. Postmaster General, Cave Johnson was in charge of the line. Morse was made superintendent of the line, and Alfred Vail and Henry Rogers the operators.
The next year, Johnson reported that "the importance of [the line] to the public does not consist of any probable income that can ever be derived from it," which led to the line being turned over to private interests for development.
== See also ==
First transcontinental telegraph
Timeline of North American telegraphy
== References ==
== External links ==
Electronic Technology in the House of Representatives
History of the Telegraph
Contemporary account of the construction of the transcontinental telegraph | Wikipedia/Baltimore–Washington_telegraph_line |
The Quadruplex telegraph is a type of electrical telegraph which allows a total of four separate signals to be transmitted and received on a single wire at the same time (two signals in each direction). Quadruplex telegraphy thus implements a form of multiplexing.
The technology was invented by Thomas Edison, who sold the rights to Jay Gould, the owner of the Atlantic and Pacific Telegraph Company, in 1874 for the sum of $30,000 (equivalent to $834,000 in 2024). Edison had previously offered the Quadruplex to Western Union, which turned him down. The refusal proved to be a grave mistake. Jay Gould used the Quadruplex to wage price wars on Western Union and to short its stock. Cornelius Vanderbilt was Western Union's largest shareholder and caught the brunt of Jay Gould's move. Vanderbilt died during the saga, which left his son William in charge. William Vanderbilt, much like his father, was no match for Jay Gould and quickly buckled. To stop the rate war, Western Union bought Atlantic and Pacific (and with it the rights to the Quadruplex) from Jay Gould for $5 million (equivalent to $139,000,000 in 2024).
The problem of sending two signals simultaneously in opposite directions on the same wire had been solved previously by Julius Wilhelm Gintl and improved to commercial viability by J. B. Stearns; Edison added the ability to double the number in each direction.
The method combined a diplex (multiplexing two signals in the same direction), which Edison had previously invented, with a Stearns-style duplex (simultaneous bi-directional communication). In each case, a clever trick is used.
Since telegraphs use a single wire, the current must flow through the signal (sound-producing) relay at both ends (local and remote). In the duplex, the challenge is to prevent the local signal relay from clacking when the local key is pressed, while still clacking when the remote key is pressed. This is achieved by dividing the relay into two solenoid windings and feeding the local key's energizing voltage into the midpoint between them. Thus when the local key is pressed, the current divides equally in two directions. One half goes through a relay coil and then into a matched termination load; the termination load and relay coil are matched to an identical setup at the receiving end, to keep the current in the two solenoid coils as even as possible. The other half of the current is sent down the wire to the remote relay (which often switches the remote signal relay) and its termination load. Since the current flowing into this Y-shaped junction between the solenoids flows in opposite directions in the two local solenoids, the fields sum to no net magnetic field, and the local relay is not activated. At the remote end, the received current flows through both solenoids in the same direction and into the termination load. Since the current flows the same way in both solenoids, the remote signal relay is activated by this local key.
For the diplex, a different trick is used. To send two messages simultaneously, there are two independent local telegraph keys, arranged so that the battery polarity is reversed on one of them. First note the challenge to overcome: the duplex solenoid as described above cannot resolve which way the current is flowing. While the solenoid's magnetic field would be in the opposite direction, the induced ferromagnetism in the iron bar would attract it either way, closing the signal relay regardless of the direction of current flow. The solution is to replace the iron with a permanent magnet, and the relay switch with a double-pole switch. Now the permanent magnet senses the field direction and is pushed or pulled: when its north pole is repelled the switch closes to one pole, and when its south pole is repelled the switch closes to the other. To make this practical, Edison found that additional relays were necessary to provide hysteresis, preventing the switch from being indeterminate or fluttering at the moment of current reversal, and to route the separated signal to the appropriate sound emitter.
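These two tricks can be captured in a toy numerical model. The following sketch is purely illustrative (the function names, current values, and threshold are invented for the example and are not taken from Edison's apparatus): it shows why the split current cancels in the local relay, while the full line current at the far end both operates an ordinary relay and gives a polarized relay the direction information the diplex needs.

```python
# Toy model of the duplex current-split and the diplex polarity read.
# All names and values are illustrative, not Edison's actual circuit.

def local_relay_net_field(key_current):
    """Duplex trick: the local key feeds the midpoint of the two solenoid
    windings, so half the current flows through each winding in opposite
    senses and the magnetic fields cancel; the local relay stays silent."""
    half = key_current / 2.0
    return (+half) + (-half)  # equal and opposite field contributions

def remote_relay(line_current, threshold=0.5):
    """At the remote end the whole line current passes both windings in the
    same sense. An ordinary iron-armature relay responds to current of
    either direction; the polarized (permanent-magnet) relay of the diplex
    additionally senses the direction, separating the two channels."""
    activated = abs(line_current) > threshold    # ordinary relay: any polarity
    polarity = "+" if line_current > 0 else "-"  # polarized relay: direction
    return activated, polarity

print(local_relay_net_field(1.0))  # 0.0 -> the local sounder does not clack
print(remote_relay(+1.0))          # (True, '+') one key's battery polarity
print(remote_relay(-1.0))          # (True, '-') the reversed-battery key
```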
== Innovations ==
While this is conceptually elementary to modern engineers, one has to appreciate that multiplexing was a patent-worthy breakthrough and a huge economic win for telegraphy, since most of the challenge and expense was in the long wires between stations. This sort of polarity-based diplexing is analogous to the modern so-called "Charlieplexing" often used in LED panels: there the diode nature of LEDs allows two different (red or green) LEDs connected to ground to be controlled with the same wire depending on the voltage polarity. Edison and Stearns were dealing with the limited electronic components of the day.
Stearns's innovation was to use a capacitor in the termination load. Without this, only short transmission distances were possible because the impedance mismatch of the reactive long wire would unbalance the currents in the two halves of the local relay, activating it. This was innovative because impedance matching for transmission lines, as opposed to simple ohmic circuits, was not initially appreciated. It was also a significant technological advance, as capacitors were difficult to produce at the time.
Edison's innovations were the use of a polarized permanent-magnet relay (instead of the yet-to-be-invented diode) and the use of some ancillary relay logic to add a useful hysteresis to avoid the indeterminate current-reversal states (avoiding the need for expensive capacitors). The method Edison developed for combining the diplex and the duplex enabled the Quadruplex.
== See also ==
Polar modulation
Phonograph
== References == | Wikipedia/Quadruplex_telegraph |
A telegraph key, clacker, tapper or morse key is a specialized electrical switch used by a trained operator to transmit text messages in Morse code in a telegraphy system. Keys are used in all forms of electrical telegraph systems, including landline (also called wire) telegraphy and radio (also called wireless) telegraphy. An operator uses the telegraph key to send electrical pulses (or in the case of modern CW, unmodulated radio waves) of two different lengths: short pulses, called dots or dits, and longer pulses, called dashes or dahs. These pulses encode the letters and other characters that spell out the message.
== Types ==
The first telegraph key was invented by Alfred Vail, an associate of Samuel Morse. Since then the technology has evolved and improved, resulting in a range of key designs.
=== Straight keys ===
A straight key is the common telegraph key as seen in various movies. It is a simple bar with a knob on top and an electrical contact underneath. When the bar is pressed down against spring tension, it makes a closed electric circuit. Traditionally, American telegraph keys had flat topped knobs and narrow bars (frequently curved), while European telegraph keys had ball shaped knobs and thick bars. This appears to be purely a matter of culture and training, but the users of each are tremendously partisan.
Straight keys have been made in numerous variations for over 150 years and in numerous countries. They are the subject of an avid community of key collectors. The straight keys also had a shorting bar that closed the electrical circuit through the station when the operator was not actively sending messages. The shorting switch for an unused key was needed in telegraph systems wired in the style of North American railroads, in which the signal power was supplied from batteries only in telegraph offices at one or both ends of a line, rather than each station having its own bank of batteries, which was often used in Europe. The shorting bar completed the electrical path to the next station and all following stations, so that their sounders could respond to signals coming down the line, allowing the operator in the next town to receive a message from the central office. Although occasionally included in later keys for reasons of tradition, the shorting bar is unnecessary for radio telegraphy, except as a convenience to produce a steady signal for tuning the transmitter.
The straight key is simple and reliable, but the rapid pumping action needed to send a string of dots (or dits as most operators call them) poses some medically significant drawbacks.
Transmission speeds vary from 5 words (25 characters) per minute, by novice operators, up to about 30 words (150 characters) per minute by skilled operators. In the early days of telegraphy, a number of professional telegraphers developed a repetitive stress injury known as glass arm or telegraphers’ paralysis. "Glass arm" may be reduced or eliminated by increasing the side play of the straight key, by loosening the adjustable trunnion screws. Such problems can be avoided either by using good manual technique, or by only using side-to-side key types.
=== Alternative designs ===
In addition to the basic up-and-down telegraph key, telegraphers have been experimenting with alternate key designs from the beginning of telegraphy. Many are made to move side-to-side instead of up-and-down. Some of the designs, such as sideswipers (or bushwhackers) and semi-automatic keys, operate mechanically.
Beginning in the mid-20th century electronic devices called keyers have been developed, which are operated by special keys of various designs generally categorized as single-paddle keys (also called sideswipers), and double-paddle keys (or "iambic" or "squeeze" keys). The keyer may be either an independent device that attaches to the transmitter in place of a telegraph key, or circuitry incorporated in modern amateurs' radios.
==== Sideswipers ====
The first widely accepted alternative key was the sideswiper or sidewinder, sometimes called a cootie key or bushwhacker. This key uses a side-to-side action with contacts on both the left and right and the arm spring-loaded to return to center; the operator may make a dit or dah by swinging the lever in either direction. A series of dits can be sent by rocking the arm back and forth.
This first new style of key was introduced in part to increase speed of sending, but more importantly to reduce the repetitive strain injury which telegraphers called "glass arm". The side-to-side motion reduces strain, and uses different muscles than the up-and-down motion (called "pounding brass"). Nearly all advanced keys use some form of side-to-side action.
The alternating action produces a distinctive rhythm or swing which noticeably affects the operator's transmission rhythm (known as fist). Although the original sideswiper is now rarely seen or used, when the left and right contacts are electrically separated a sideswiper becomes a modern single-paddle key (see below); likewise, a modern single-lever key becomes an old-style sideswiper when its two contacts are wired together.
==== Semi-automatic key ====
A popular side-to-side key is the semi-automatic key or "bug", sometimes known as a Vibroplex key after an early manufacturer of mechanical, semi-automatic keys. The original bugs were fully mechanical, based on a kind of simple clockwork mechanism, and required no electronic keyer. A skilled operator can achieve sending speeds in excess of 40 words per minute with a bug.
The benefit of the clockwork mechanism is that it reduces the motion required from the telegrapher's hand, which provides greater speed of sending, and it produces uniformly timed dits (dots, or short pulses) and maintains constant rhythm; consistent timing and rhythm are crucial for decoding the signal on the other end of the telegraph line.
The single paddle is held between the knuckle and the thumb of the right hand. When the paddle is pressed to the right (with the thumb), it kicks a horizontal pendulum which then rocks against the contact point, sending a series of short pulses (dits or dots) at a speed which is controlled by the pendulum's length. When the paddle is pressed toward the left (with the knuckle) it makes a continuous contact suitable for sending dahs (dashes); the telegrapher remains responsible for timing the dahs to proportionally match the dits. The clockwork pendulum needs the extra kick that the stronger thumb press provides, which established the standard left-right paddle directions for the dit-dah assignments that persist on the paddles of 21st-century electronic keys. A few semi-automatic keys were made with mirror-image mechanisms for left-handed telegraphers.
== Electronic keyers and paddle keys ==
Like semi-automatic keys, the telegrapher operates an electronic keyer by tapping a paddle key, swinging its lever(s) from side-to-side. When pressed to one side (usually left), the keyer electronics generate a series of dahs; when pressed to the other side (usually right), a series of dits. Keyers work with two different types of keys: Single paddle and double paddle keys.
Like semi-automatic keys, pressing the paddle on one side produces a dit and the other a dah. Single-paddle keys are also called single-lever keys or sideswipers, the same name as the older side-to-side key design they greatly resemble. Double-paddle keys are also called "iambic" or "squeeze" keys. Also like the old semi-automatic keys, the conventional assignment of the paddle directions (for a right-handed telegrapher) is that pressing a paddle with the right thumb (pressing the single paddle rightward, or for a double-paddle key, pressing the left paddle with the thumb, rightwards towards the center) creates a series of dits. Pressing a paddle with the right knuckle (hence swinging a single paddle leftward, or the right paddle on a double-paddle key leftward to the center) creates a series of dahs. Left-handed telegraphers sometimes elect to reverse the electrical contacts, so their left-handed keying is a mirror image of standard right-handed keying.
Single paddle keys are essentially the same as the original sideswiper keys, with the left and right electrical contacts wired separately. Double-paddle keys have one arm for each of the two contacts, each arm held away from the common center by a spring; pressing either of the paddles towards the center makes contact, the same as pressing a single-lever key to one side. For double-paddle keys wired to an "iambic" keyer, squeezing both paddles together makes a double-contact, which causes the keyer to send alternating dits and dahs (or dahs and dits, depending on which lever makes first contact).
Most electronic keyers include dot and dash memory functions, so the operator does not need to use perfect spacing between dits and dahs or vice versa. With dit or dah memory, the operator's keying action can be about one dit ahead of the actual transmission. The electronics in the keyer adjusts the timing so that the output of each letter is machine-perfect. Electronic keyers allow very high speed transmission of code.
Using a keyer in "iambic" mode requires a key with two paddles: One paddle produces dits and the other produces dahs. Pressing both at the same time (a "squeeze") produces an alternating dit-dah-dit-dah ( ▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ) sequence, which starts with a dit if the dit side makes contact first, or a dah ( ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄ ) if the dah side connects first.
An additional advantage of electronic keyers over semiautomatic keys is that code speed is easily changed with electronic keyers, just by turning a knob. With a semiautomatic key, the location of the pendulum weight and the pendulum spring tension and contact must all be repositioned and rebalanced to change the dit speed.
=== Double-lever paddles ===
Keys having two separate levers, one for dits and the other for dahs are called dual or dual-lever paddles. With a dual paddle both contacts may be closed simultaneously, enabling the "iambic" functions of an electronic keyer that is designed to support them: By pressing both paddles (squeezing the levers together) the operator can create a series of alternating dits and dahs, analogous to a sequence of iambs in poetry. For that reason, dual paddles are sometimes called squeeze keys or iambic keys. Typical dual-paddle keys' levers move horizontally, like the earlier single-paddle keys, as opposed to how the original "straight-keys'" arms move up-and-down.
Whether the sequence begins with a dit or a dah is determined by which lever makes contact first: If the dah lever is closed first, then the first element will be a dah, so the string of elements will be similar to a sequence of trochees in poetry, and the method could logically just as well be called "trochaic keying" ( ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ). If the dit lever makes first contact, then the string begins with a dit ( ▄ ▄▄▄ ▄ ▄▄▄ ▄ ▄▄▄ ▄ ).
Insofar as iambic keying is a function of the electronic keyer, it is not technically correct to refer to a dual-paddle key itself as "iambic", although this is commonly done in marketing. A dual-paddle key is required for iambic sending, which also requires an iambic keyer. But any single- or dual-paddle key can be used non-iambically, without squeezing, and some electronic keyers were made without iambic functions.
Iambic keying or squeeze keying reduces the key strokes or hand movements necessary to make some characters, e.g. the letter C, which can be sent by merely squeezing the two paddles together. With a single-paddle or non-iambic keyer, the hand motion would require alternating four times for C (dah-dit-dah-dit ▄▄▄ ▄ ▄▄▄ ▄ ).
The efficiency of iambic keying has recently been questioned in terms of movements per character and timings for high-speed CW, with the author of one analysis concluding that the timing difficulties of correctly operating a keyer iambically at high speed outweigh any small benefits.
Iambic keyers function in one of at least two major modes: Mode A and mode B. There is a third, rarely available mode U.
==== Mode A ====
Mode A is the original iambic mode, in which alternate dots and dashes are produced as long as both paddles are depressed. Mode A is essentially "what you hear is what you get": When the paddles are released, the keying stops with the last dot or dash that was being sent while the paddles were held.
==== Mode B ====
Mode B is the second mode, which originated in a logic error in an early iambic keyer. Over the years iambic mode B has become something of a standard and is the default setting in most keyers.
In mode B, dots and dashes are produced as long as both paddles are depressed. When the paddles are released, the keying continues by sending one more element than has already been heard. I.e., if the paddles were released during a dah then the last element sent will be a following dit; if the paddles were released during a dit then the sequence will end with the following dah.
Users accustomed to one mode may find it difficult to adapt to the other, so most modern keyers allow selection of the desired mode.
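The difference between the two modes can be made concrete with a small state machine. The sketch below is a simplified model, not any manufacturer's firmware; sampling the paddles once per element slot and starting a squeeze from rest with a dit are assumptions of the model (on a real key the first element depends on which paddle makes contact first).

```python
# Simplified iambic keyer model (illustrative, not real keyer firmware).
# Each sample is (dit_paddle, dah_paddle) observed for one element slot.

def iambic(samples, mode="A"):
    sent = []
    last = None        # last element emitted
    squeezing = False  # was the previous element produced by a squeeze?
    for dit, dah in samples:
        if dit and dah:                    # squeeze: alternate elements
            last = "dah" if last == "dit" else "dit"
            sent.append(last)
            squeezing = True
        elif dit or dah:                   # single paddle: repeat that element
            last = "dit" if dit else "dah"
            sent.append(last)
            squeezing = False
        else:                              # paddles released
            if mode == "B" and squeezing:  # mode B: send one extra element
                last = "dah" if last == "dit" else "dit"
                sent.append(last)
            squeezing = False
    return sent

keys = [(1, 1), (1, 1), (0, 0)]            # squeeze for two slots, then release
print(iambic(keys, "A"))  # ['dit', 'dah']        -> the letter A
print(iambic(keys, "B"))  # ['dit', 'dah', 'dit'] -> the letter R
```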
==== Mode U ====
A third electronic keyer mode useful with a dual paddle is the "Ultimatic" mode (mode U), so-called for the brand name of the electronic keyer that introduced it. In the Ultimatic keying mode, the keyer will switch to the opposite element if the second lever is pressed before the first is released (that is, squeezed).
=== Single-lever paddle keys ===
A single-lever paddle key has separate contacts for dits and dahs, but there is no ability to make both contacts simultaneously by squeezing the paddles together for iambic mode.
When a single-paddle key is used with an electronic keyer, continuous dits are created by holding the dit-side paddle ( ▄ ▄ ▄ ▄ ▄ ▄ ▄ ▄ ...); likewise, continuous dahs are created by holding the dah paddle ( ▄▄▄ ▄▄▄ ▄▄▄ ▄▄▄ ▄▄▄ ...).
A single-paddle key can operate any electronic keyer non-iambically, whether or not the keyer offers iambic functions, and regardless of whether it operates iambically in mode A, B, or U.
== Non-telegraphic keys ==
Simple telegraph-like keys were long used to control the flow of electricity in laboratory tests of electrical circuits. Often, these were simple "strap" keys, in which a bend in the key lever provided the key's spring action.
Telegraph-like keys were once used in the study of operant conditioning with pigeons. Starting in the 1940s, initiated by B. F. Skinner at Harvard University, the keys were mounted vertically behind a small circular hole about the height of a pigeon's beak in the front wall of an operant conditioning chamber. Electromechanical recording equipment detected the closing of the switch whenever the pigeon pecked the key. Depending on the psychological questions being investigated, keypecks might have resulted in the presentation of food or other stimuli.
== Operators' fist ==
With straight keys, side-swipers, and, to an extent, bugs, each and every telegrapher has their own unique style or rhythm pattern when transmitting a message. An operator's style is known as their "fist".
Since every fist is unique, other telegraphers can usually identify the individual telegrapher transmitting a particular message. This had a huge significance during the first and second World Wars, since the on-board telegrapher's fist could be used to track individual ships and submarines, and for traffic analysis.
However, with electronic keyers (either single- or double-paddle) this is no longer the case: keyers produce uniformly "perfect" code at a set speed, which is altered at the request of the receiver, usually not the sender. Only inter-character and inter-word spacing remain unique to the operator, producing only a faint semblance of a fist.
== See also ==
Morse code
Prosigns for Morse code
QSK operation (full break-in)
Telegraphist
Words per minute
== Explanatory notes ==
== References ==
== External links ==
Sparks Telegraph Key Review – A pictorial review of telegraphy and telegraph keys with an emphasis on spark (wireless) telegraphy.
The Telegraph Office – A resource for telegraph key collectors and historians
The Keys of N1KPR Archived 2013-03-02 at the Wayback Machine
The Art and Skill of Radio Telegraphy
Development of the Morse Key
Telegraph Keys – An online resource for identification of all types of telegraph instruments | Wikipedia/Telegraph_key |
The Schilling telegraph is a needle telegraph invented by Pavel Schilling in the nineteenth century. It consists of a bank of needle instruments (six as developed for use in Russia) which between them display a binary code representing a letter or numeral. Signals were sent from a piano-like keyboard, and an additional circuit was provided for calling attention at the receiving end by setting off an alarm.
The code was read from the position of paper discs suspended on threads. These had different colours on the two sides. Each disc was turned by electromagnetic action on a magnetised needle.
== Overview ==
Schilling's telegraph is one of a type called needle telegraphs. These are telegraphs that use a coil of wire as an electromagnet to deflect a small magnet shaped like a compass needle. The position of the needle imparts the telegraphed information to the person receiving the message. Schilling's 1832 demonstration telegraph in St. Petersburg used six wires for signalling, one wire for calling, and a common return, making eight wires in all. Each signal wire was connected to one of six needle instruments which together displayed a binary code. The calling wire had the same function as the ringing signal on a telephone, but in this case was connected to a seventh needle.
Schilling's telegraph was developed to the stage where a project was initiated by the government to install it in Russia, but the idea was abandoned after Schilling died. See the Pavel Schilling article for more historical details. Telegraphy in Russia subsequently used more advanced designs.
== Signal needles ==
Each needle was hung above its coil horizontally by a silk thread. A paper disc was attached to the thread coloured white on one side and black on the other. When the coil was energised, either the black side or the white side would turn towards the observer, depending on the polarity of the applied current. In some models, Schilling attached a platinum-plated wire to the thread which descended into a container of mercury. The end of the wire in the mercury was paddle shaped so that the motion of the needle was damped and oscillation suppressed. Two permanently magnetised steel pins were screwed into the wooden base to hold the needle in the neutral position, that is, so that the paper disc was edge on, not displaying a colour. A second needle underneath the coil was used for better linking with these permanent magnets.
== Sending equipment ==
For the sending equipment, the earliest demonstrations of the Schilling telegraph crudely touched the ends of the wires to the poles of the battery manually. These were connected to six separate needle instruments, rather than a bank of them in one assembly. Later, a more sophisticated arrangement was devised – messages were sent from a piano-like keyboard with alternating white and black keys. There were sixteen keys altogether, each white and black pair of keys operating one of the wires. Positive or negative voltage was applied to the wire depending on whether the white or black key was pressed. Changing the colour of key used resulted in the needle swinging in the opposite direction, thus showing the opposite side of the dual-coloured disc. The polarities were arranged such that the colour of the keys corresponded to the colour of the disc displayed on the needle. See Artemenko for a photograph of this equipment.
The switches underneath the keys operated in the following manner. Metal bridges fixed underneath the keys dipped into two reservoirs of mercury. One of these reservoirs was permanently connected to one or other of the battery poles, depending on the colour of the key, and the other was permanently connected to one of the signal wires. The keys for the common wire were connected to the opposite battery polarity from the signal-wire keys of the same colour. A limitation of this system is that only the set of black keys, or the set of white keys, can be used at any one time.
== Calling needle ==
The calling needle was similar to the signal needles with some additional mechanical apparatus. The needle was suspended by metal wire, rather than a silk thread, and had attached to it a horizontal arm. When a call was made the arm turned and pushed over a lever with a lead weight attached which fell under gravity and released the detent of a clockwork alarm. In specifying a separate calling line and mechanism, Schilling was following the arrangements on the electrochemical telegraph of Samuel Thomas von Sömmerring. Schilling had become familiar with Sömmerring's work while a diplomat in Munich and frequently visited Sömmerring to see and assist with his telegraph.
== Coding ==
The Schilling telegraph used binary coding. Each needle either displayed a disc or remained in the neutral position, corresponding to binary "1" or "0" respectively in modern notation. Six needles were needed to generate sufficient codepoints for the Russian alphabet – the modern alphabet has 33 letters, and there were even more in the nineteenth century. The ability to choose between displaying the white discs or the black discs doubled the codespace, but not all codepoints were used. Codepoints that spanned the fewest keys were preferentially used. For the Latin alphabet, used in Western European countries, five needles were sufficient.
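The arithmetic of the codespace is easy to check. In the short sketch below the counting convention (excluding the all-neutral pattern, and doubling by the global white/black choice) is an illustration consistent with the description above, not Schilling's actual code table.

```python
# Codespace of an n-needle binary needle telegraph (illustrative counting).
def codepoints(needles):
    patterns = 2 ** needles - 1   # needle patterns showing at least one disc
    return 2 * patterns           # doubled by the white/black disc choice

print(codepoints(5))  # 62  -> ample for the 26-letter Latin alphabet
print(codepoints(6))  # 126 -> ample for the Russian alphabet's 33+ letters
```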
== Notes ==
== References ==
== Bibliography ==
Artemenko, Roman, "Pavel Schilling - inventor of the electromagnetic telegraph", PC Week, vol. 3, iss. 321, 29 January 2002 (in Russian).
Dawson, Keith, "Electromagnetic telegraphy: early ideas, proposals and apparatus", pp. 113–142 in, Hall, A. Rupert; Smith, Norman (eds), History of Technology, vol. 1, Bloomsbury Publishing, 2016 ISBN 1350017345.
Fahie, John Joseph, A History of Electric Telegraphy, to the Year 1837, London: E. & F.N. Spon, 1884 OCLC 559318239.
Garratt, G.R.M., "The early history of telegraphy", Philips Technical Review, vol. 26, no. 8/9, pp. 268–284, 21 April 1966.
Huurdeman, A.A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 0471205052.
Yarotsky, A.V., "150th anniversary of the electromagnetic telegraph", Telecommunication Journal, vol. 49, no. 10, pp. 709–715, October 1982. | Wikipedia/Schilling_telegraph |
Analogue filters are a basic building block of signal processing much used in electronics. Amongst their many applications are the separation of an audio signal before application to bass, mid-range, and tweeter loudspeakers; the combining and later separation of multiple telephone conversations onto a single channel; the selection of a chosen radio station in a radio receiver and rejection of others.
Passive linear electronic analogue filters are those filters which can be described with linear differential equations (linear); they are composed of capacitors, inductors and, sometimes, resistors (passive) and are designed to operate on continuously varying analogue signals. There are many linear filters which are not analogue in implementation (digital filter), and there are many electronic filters which may not have a passive topology – both of which may have the same transfer function as the filters described in this article. Analogue filters are most often used in wave filtering applications, that is, where it is required to pass particular frequency components and to reject others from analogue (continuous-time) signals.
Analogue filters have played an important part in the development of electronics. Especially in the field of telecommunications, filters have been of crucial importance in a number of technological breakthroughs and have been the source of enormous profits for telecommunications companies. It should come as no surprise, therefore, that the early development of filters was intimately connected with transmission lines. Transmission line theory gave rise to filter theory, which initially took a very similar form, and the main application of filters was for use on telecommunication transmission lines. However, the arrival of network synthesis techniques greatly enhanced the degree of control of the designer.
Today, it is often preferred to carry out filtering in the digital domain where complex algorithms are much easier to implement, but analogue filters do still find applications, especially for low-order simple filtering tasks and are often still the norm at higher frequencies where digital technology is still impractical, or at least, less cost effective. Wherever possible, and especially at low frequencies, analogue filters are now implemented in a filter topology which is active in order to avoid the wound components (i.e. inductors, transformers, etc.) required by passive topology.
It is possible to design linear analogue mechanical filters using mechanical components which filter mechanical vibrations or acoustic waves. While there are few applications for such devices in mechanics per se, they can be used in electronics with the addition of transducers to convert to and from the electrical domain. Indeed, some of the earliest ideas for filters were acoustic resonators because the electronics technology was poorly understood at the time. In principle, the design of such filters can be achieved entirely in terms of the electronic counterparts of mechanical quantities, with kinetic energy, potential energy and heat energy corresponding to the energy in inductors, capacitors and resistors respectively.
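The correspondence referred to here is the standard force–voltage (impedance) analogy; in that convention (its dual, the force–current analogy, swaps the roles) the mappings and the stored energies are

\text{mass } m \leftrightarrow L, \qquad \tfrac{1}{2}mv^{2} \leftrightarrow \tfrac{1}{2}LI^{2} \quad \text{(kinetic energy)}

\text{compliance } 1/k \leftrightarrow C, \qquad \tfrac{1}{2}kx^{2} \leftrightarrow \tfrac{q^{2}}{2C} \quad \text{(potential energy)}

\text{damping } b \leftrightarrow R, \qquad \text{friction loss} \leftrightarrow I^{2}R \quad \text{(heat)}

where k is the spring stiffness and b the damping coefficient.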
== Historical overview ==
There are three main stages in the history of passive analogue filter development:
Simple filters. The frequency dependence of electrical response was known for capacitors and inductors from very early on. The resonance phenomenon was also familiar from an early date and it was possible to produce simple, single-branch filters with these components. Although attempts were made in the 1880s to apply them to telegraphy, these designs proved inadequate for successful frequency-division multiplexing. Network analysis was not yet powerful enough to provide the theory for more complex filters and progress was further hampered by a general failure to understand the frequency domain nature of signals.
Image filters. Image filter theory grew out of transmission line theory and the design proceeded in a similar manner to transmission line analysis. For the first time filters could be produced that had precisely controllable passbands and other parameters. These developments took place in the 1920s and filters produced to these designs were still in widespread use in the 1980s, only declining as the use of analogue telecommunications declined. Their immediate application was the economically important development of frequency division multiplexing for use on intercity and international lines.
Network synthesis filters. The mathematical bases of network synthesis were laid in the 1930s and 1940s. After World War II, network synthesis became the primary tool of filter design. Network synthesis put filter design on a firm mathematical foundation, freeing it from the mathematically sloppy techniques of image design and severing the connection with physical lines. The essence of network synthesis is that it produces a design that will (at least if implemented with ideal components) accurately reproduce the response originally specified in black box terms.
Throughout this article the letters R, L, and C are used with their usual meanings to represent resistance, inductance, and capacitance, respectively. In particular they are used in combinations, such as LC, to mean, for instance, a network consisting only of inductors and capacitors. Z is used for electrical impedance, any 2-terminal combination of RLC elements and in some sections D is used for the rarely seen quantity elastance, which is the inverse of capacitance.
== Resonance ==
Early filters utilised the phenomenon of resonance to filter signals. Although electrical resonance had been investigated by researchers from a very early stage, it was at first not widely understood by electrical engineers. Consequently, the much more familiar concept of acoustic resonance (which, in turn, can be explained in terms of the even more familiar mechanical resonance) found its way into filter design ahead of electrical resonance. Resonance can be used to achieve a filtering effect because the resonant device will respond to frequencies at or near the resonant frequency but will not respond to frequencies far from resonance. Hence frequencies far from resonance are filtered out of the output of the device.
=== Electrical resonance ===
Resonance was noticed early on in experiments with the Leyden jar, invented in 1746. The Leyden jar stores electricity due to its capacitance, and is, in fact, an early form of capacitor. When a Leyden jar is discharged by allowing a spark to jump between the electrodes, the discharge is oscillatory. This was not suspected until 1826, when Felix Savary in France, and later (1842) Joseph Henry in the US, noted that a steel needle placed close to the discharge did not always magnetise in the same direction. They both independently drew the conclusion that there was a transient oscillation dying away with time.
Hermann von Helmholtz in 1847 published his important work on conservation of energy in part of which he used those principles to explain why the oscillation dies away, that it is the resistance of the circuit which dissipates the energy of the oscillation on each successive cycle. Helmholtz also noted that there was evidence of oscillation from the electrolysis experiments of William Hyde Wollaston. Wollaston was attempting to decompose water by electric shock but found that both hydrogen and oxygen were present at both electrodes. In normal electrolysis they would separate, one to each electrode.
Helmholtz explained why the oscillation decayed but he had not explained why it occurred in the first place. This was left to Sir William Thomson (Lord Kelvin) who, in 1853, postulated that there was inductance present in the circuit as well as the capacitance of the jar and the resistance of the load. This established the physical basis for the phenomenon – the energy supplied by the jar was partly dissipated in the load but also partly stored in the magnetic field of the inductor.
So far, the investigation had been on the natural frequency of transient oscillation of a resonant circuit resulting from a sudden stimulus. More important from the point of view of filter theory is the behaviour of a resonant circuit when driven by an external AC signal: there is a sudden peak in the circuit's response when the driving signal frequency is at the resonant frequency of the circuit. James Clerk Maxwell heard of the phenomenon from Sir William Grove in 1868 in connection with experiments on dynamos, and was also aware of the earlier work of Henry Wilde in 1866. Maxwell explained resonance mathematically, with a set of differential equations, in much the same terms that an RLC circuit is described today.
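In modern notation (a standard textbook formulation rather than Maxwell's original variables), the driven series RLC circuit obeys

L\frac{d^{2}q}{dt^{2}} + R\frac{dq}{dt} + \frac{q}{C} = V_{0}\cos\omega t, \qquad \omega_{0} = \frac{1}{\sqrt{LC}},

with the response peaking when the driving frequency \omega approaches the resonant frequency \omega_{0}, where the inductive and capacitive reactances cancel.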
Heinrich Hertz (1887) experimentally demonstrated the resonance phenomena by building two resonant circuits, one of which was driven by a generator and the other was tunable and only coupled to the first electromagnetically (i.e., no circuit connection). Hertz showed that the response of the second circuit was at a maximum when it was in tune with the first. The diagrams produced by Hertz in this paper were the first published plots of an electrical resonant response.
=== Acoustic resonance ===
As mentioned earlier, it was acoustic resonance that inspired filtering applications, the first of these being a telegraph system known as the "harmonic telegraph". Versions are due to Elisha Gray, Alexander Graham Bell (1870s), Ernest Mercadier and others. Its purpose was to simultaneously transmit a number of telegraph messages over the same line and represents an early form of frequency division multiplexing (FDM). FDM requires the sending end to be transmitting at different frequencies for each individual communication channel. This demands individual tuned resonators, as well as filters to separate out the signals at the receiving end. The harmonic telegraph achieved this with electromagnetically driven tuned reeds at the transmitting end which would vibrate similar reeds at the receiving end. Only the reed with the same resonant frequency as the transmitter would vibrate to any appreciable extent at the receiving end.
Incidentally, the harmonic telegraph directly suggested to Bell the idea of the telephone. The reeds can be viewed as transducers converting sound to and from an electrical signal. It is no great leap from this view of the harmonic telegraph to the idea that speech can be converted to and from an electrical signal.
=== Early multiplexing ===
By the 1890s electrical resonance was much more widely understood and had become a normal part of the engineer's toolkit. In 1891 Hutin and Leblanc patented an FDM scheme for telephone circuits using resonant circuit filters. Rival patents were filed in 1892 by Michael Pupin and John Stone Stone with similar ideas, priority eventually being awarded to Pupin. However, no scheme using just simple resonant circuit filters can successfully multiplex (i.e. combine) the wider bandwidth of telephone channels (as opposed to telegraph) without either an unacceptable restriction of speech bandwidth or a channel spacing so wide as to make the benefits of multiplexing uneconomic.
The basic technical reason for this difficulty is that the frequency response of a simple filter approaches a fall of 6 dB/octave far from the point of resonance. This means that if telephone channels are squeezed in side by side into the frequency spectrum, there will be crosstalk from adjacent channels in any given channel. What is required is a much more sophisticated filter that has a flat frequency response in the required passband like a low-Q resonant circuit, but that rapidly falls in response (much faster than 6 dB/octave) at the transition from passband to stopband like a high-Q resonant circuit. Obviously, these are contradictory requirements to be met with a single resonant circuit. The solution to these needs was founded in the theory of transmission lines and consequently the necessary filters did not become available until this theory was fully developed. At this early stage the idea of signal bandwidth, and hence the need for filters to match to it, was not fully understood; indeed, it was as late as 1920 before the concept of bandwidth was fully established. For early radio, the concepts of Q-factor, selectivity and tuning sufficed. This was all to change with the developing theory of transmission lines on which image filters are based, as explained in the next section.
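The 6 dB/octave figure follows from the asymptotic behaviour of any single-pole network. For a first-order low-pass section, for example, with cut-off frequency \omega_{c},

|H(j\omega)| = \frac{1}{\sqrt{1+(\omega/\omega_{c})^{2}}} \approx \frac{\omega_{c}}{\omega} \quad (\omega \gg \omega_{c}),

so each doubling of frequency halves the amplitude, a fall of 20\log_{10}2 \approx 6.02 dB; a lone resonant circuit therefore cannot combine a flat passband with a rapid transition into the stopband.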
At the turn of the century, as telephone lines became available, it became popular to add telegraph onto telephone lines with an earth-return phantom circuit. An LC filter was required to prevent telegraph clicks being heard on the telephone line. From the 1920s onwards, telephone lines, or balanced lines dedicated to the purpose, were used for FDM telegraph at audio frequencies. The first of these systems in the UK was a Siemens and Halske installation between London and Manchester. GEC and AT&T also had FDM systems. Separate pairs were used for the send and receive signals. The Siemens and GEC systems had six channels of telegraph in each direction, the AT&T system had twelve. All of these systems used electronic oscillators to generate a different carrier for each telegraph signal and required a bank of band-pass filters to separate out the multiplexed signal at the receiving end.
== Transmission line theory ==
The earliest model of the transmission line was probably described by Georg Ohm (1827) who established that resistance in a wire is proportional to its length. The Ohm model thus included only resistance. Latimer Clark noted that signals were delayed and elongated along a cable, an undesirable form of distortion now called dispersion but then called retardation, and Michael Faraday (1853) established that this was due to the capacitance present in the transmission line. Lord Kelvin (1854) found the correct mathematical description needed in his work on early transatlantic cables; he arrived at an equation identical to the conduction of a heat pulse along a metal bar. This model incorporates only resistance and capacitance, but that is all that was needed in undersea cables dominated by capacitance effects. Kelvin's model predicts a limit on the telegraph signalling speed of a cable but Kelvin still did not use the concept of bandwidth, the limit was entirely explained in terms of the dispersion of the telegraph symbols. The mathematical model of the transmission line reached its fullest development with Oliver Heaviside. Heaviside (1881) introduced series inductance and shunt conductance into the model making four distributed elements in all. This model is now known as the telegrapher's equation and the distributed-element parameters are called the primary line constants.
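In modern form, Heaviside's model is the pair of coupled equations now called the telegrapher's equations, written in terms of the four primary line constants (series resistance R, series inductance L, shunt conductance G, and shunt capacitance C, each per unit length):

\frac{\partial V}{\partial x} = -\left(R + L\frac{\partial}{\partial t}\right)I, \qquad \frac{\partial I}{\partial x} = -\left(G + C\frac{\partial}{\partial t}\right)V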
From the work of Heaviside (1887) it had become clear that the performance of telegraph lines, and most especially telephone lines, could be improved by the addition of inductance to the line. George Campbell at AT&T implemented this idea (1899) by inserting loading coils at intervals along the line. Campbell found that as well as the desired improvements to the line's characteristics in the passband there was also a definite frequency beyond which signals could not be passed without great attenuation. This was a result of the loading coils and the line capacitance forming a low-pass filter, an effect that is only apparent on lines incorporating lumped components such as the loading coils. This naturally led Campbell (1910) to produce a filter with ladder topology, a glance at the circuit diagram of this filter is enough to see its relationship to a loaded transmission line. The cut-off phenomenon is an undesirable side-effect as far as loaded lines are concerned but for telephone FDM filters it is precisely what is required. For this application, Campbell produced band-pass filters to the same ladder topology by replacing the inductors and capacitors with resonators and anti-resonators respectively. Both the loaded line and FDM were of great benefit economically to AT&T and this led to fast development of filtering from this point onwards.
== Image filters ==
The filters designed by Campbell were named wave filters because of their property of passing some waves and strongly rejecting others. The method by which they were designed was called the image parameter method and filters designed to this method are called image filters. The image method essentially consists of developing the transmission constants of an infinite chain of identical filter sections and then terminating the desired finite number of filter sections in the image impedance. This exactly corresponds to the way the properties of a finite length of transmission line are derived from the theoretical properties of an infinite line, the image impedance corresponding to the characteristic impedance of the line.
From 1920 John Carson, also working for AT&T, began to develop a new way of looking at signals using the operational calculus of Heaviside, which in essence is working in the frequency domain. This gave the AT&T engineers a new insight into the way their filters were working and led Otto Zobel to invent many improved forms. Carson and Zobel steadily demolished many of the old ideas. For instance, the old telegraph engineers thought of the signal as being a single frequency, and this idea persisted into the age of radio, with some still believing that frequency modulation (FM) transmission could be achieved with a smaller bandwidth than the baseband signal right up until the publication of Carson's 1922 paper. Another advance concerned the nature of noise: Carson and Zobel (1923) treated noise as a random process with a continuous bandwidth, an idea that was well ahead of its time, and thus limited the amount of noise that it was possible to remove by filtering to that part of the noise spectrum which fell outside the passband. This, too, was not generally accepted at first, notably being opposed by Edwin Armstrong (who, ironically, actually succeeded in reducing noise with wide-band FM), and was only finally settled with the work of Harry Nyquist, whose thermal noise power formula is well known today.
Several improvements were made to image filters and their theory of operation by Otto Zobel. Zobel coined the term constant k filter (or k-type filter) to distinguish Campbell's filter from later types, notably Zobel's m-derived filter (or m-type filter). The particular problems Zobel was trying to address with these new forms were impedance matching into the end terminations and improved steepness of roll-off. These were achieved at the cost of an increase in filter circuit complexity.
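The property behind Zobel's constant k terminology can be stated compactly. In each section the series impedance Z_{1} and shunt impedance Z_{2} are chosen so that their product is independent of frequency; for the low-pass ladder of series inductors and shunt capacitors (a standard textbook summary, not Zobel's original notation),

Z_{1}Z_{2} = k^{2} = \frac{L}{C}, \qquad f_{c} = \frac{1}{\pi\sqrt{LC}},

where the constant k has the dimensions of resistance and sets the filter's nominal impedance, and f_{c} is the cut-off frequency.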
A more systematic method of producing image filters was introduced by Hendrik Bode (1930), and further developed by several other investigators including Piloty (1937–1939) and Wilhelm Cauer (1934–1937). Rather than enumerate the behaviour (transfer function, attenuation function, delay function and so on) of a specific circuit, instead a requirement for the image impedance itself was developed. The image impedance can be expressed in terms of the open-circuit and short-circuit impedances of the filter as

Z_{i} = \sqrt{Z_{o}Z_{s}}.

Since the image impedance must be real in the passbands and imaginary in the stopbands according to image theory, there is a requirement that the poles and zeroes of Zo and Zs cancel in the passband and correspond in the stopband. The behaviour of the filter can be entirely defined in terms of the positions in the complex plane of these pairs of poles and zeroes. Any circuit which has the requisite poles and zeroes will also have the requisite response. Cauer pursued two related questions arising from this technique: what specification of poles and zeroes are realisable as passive filters; and what realisations are equivalent to each other. The results of this work led Cauer to develop a new approach, now called network synthesis.
This "poles and zeroes" view of filter design was particularly useful where a bank of filters, each operating at different frequencies, are all connected across the same transmission line. The earlier approach was unable to deal properly with this situation, but the poles and zeroes approach could embrace it by specifying a constant impedance for the combined filter. This problem was originally related to FDM telephony but frequently now arises in loudspeaker crossover filters.
== Network synthesis filters ==
The essence of network synthesis is to start with a required filter response and produce a network that delivers that response, or approximates to it within a specified boundary. This is the inverse of network analysis, which starts with a given network and, by applying the various electric circuit theorems, predicts the response of the network. The term was first used with this meaning in the doctoral thesis of Yuk-Wing Lee (1930) and apparently arose out of a conversation with Vannevar Bush. The advantage of network synthesis over previous methods is that it provides a solution which precisely meets the design specification. This is not the case with image filters: a degree of experience is required in their design, since the image filter only meets the design specification in the unrealistic case of being terminated in its own image impedance, to produce which would require the exact circuit being sought. Network synthesis, on the other hand, takes care of the termination impedances simply by incorporating them into the network being designed.
The development of network analysis needed to take place before network synthesis was possible. The theorems of Gustav Kirchhoff and others and the ideas of Charles Steinmetz (phasors) and Arthur Kennelly (complex impedance) laid the groundwork. The concept of a port also played a part in the development of the theory, and proved to be a more useful idea than network terminals. The first milestone on the way to network synthesis was an important paper by Ronald M. Foster (1924), A Reactance Theorem, in which Foster introduces the idea of a driving point impedance, that is, the impedance presented to the generator by the network. The expression for this impedance determines the response of the filter and vice versa, and a realisation of the filter can be obtained by expansion of this expression. It is not possible to realise any arbitrary impedance expression as a network. Foster's reactance theorem stipulates necessary and sufficient conditions for realisability: that the reactance must be algebraically increasing with frequency and the poles and zeroes must alternate.
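To illustrate the kind of function Foster's theorem admits, a lossless driving point impedance can be written in the standard pole–zero form (a textbook statement given here for orientation, not a formula from Foster's paper; the variant shown has a pole at the origin):

{\displaystyle Z(s)=H\,{\frac {(s^{2}+\omega _{1}^{2})(s^{2}+\omega _{3}^{2})\cdots }{s(s^{2}+\omega _{2}^{2})(s^{2}+\omega _{4}^{2})\cdots }},\qquad \omega _{1}<\omega _{2}<\omega _{3}<\cdots }

The interlacing of the critical frequencies is exactly the alternation of poles and zeroes required by the theorem, and it guarantees that the reactance has positive slope everywhere along the frequency axis.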
Wilhelm Cauer expanded on the work of Foster (1926) and was the first to talk of realisation of a one-port impedance with a prescribed frequency function. Foster's work considered only reactances (i.e., only LC-kind circuits). Cauer generalised this to any 2-element kind one-port network, finding there was an isomorphism between them. He also found ladder realisations of the network using Thomas Stieltjes' continued fraction expansion. This work was the basis on which network synthesis was built, although Cauer's work was not at first used much by engineers, partly because of the intervention of World War II, partly for reasons explained in the next section and partly because Cauer presented his results using topologies that required mutually coupled inductors and ideal transformers. Designers tend to avoid the complication of mutual inductances and transformers where possible, although transformer-coupled double-tuned amplifiers are a common way of widening bandwidth without sacrificing selectivity.
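Cauer's ladder development can be sketched symbolically. The routine below is a minimal, assumed illustration (the impedance chosen is arbitrary and the helper is invented for this example, not taken from any library): it performs a Cauer I continued-fraction expansion by alternately removing the pole at infinity of the impedance and of the admittance, yielding the series-inductor and shunt-capacitor values of a ladder.

```python
import sympy as sp

s = sp.symbols('s')

def cauer_ladder(Z, max_elems=8):
    """Cauer I expansion of a reactance function: alternately remove the pole at
    infinity of the impedance (a series inductor) and of the admittance (a shunt
    capacitor).  A rough sketch intended only for well-behaved LC impedances."""
    elements = []
    F = sp.cancel(Z)
    impedance_view = True                    # True: F is an impedance, False: an admittance
    for _ in range(max_elems):
        F = sp.cancel(F)
        if F == 0:
            break
        num, den = sp.fraction(F)
        if sp.degree(num, s) <= sp.degree(den, s):
            F = 1 / F                        # no pole at infinity here; switch view
            impedance_view = not impedance_view
            continue
        quotient, remainder = sp.div(num, den, s)
        value = sp.simplify(quotient / s)    # coefficient of the s term
        elements.append(("series L" if impedance_view else "shunt C", value))
        F = sp.cancel(remainder / den)
        if F == 0:
            break
        F = 1 / F
        impedance_view = not impedance_view
    return elements

Z = (s**2 + 1) * (s**2 + 9) / (s * (s**2 + 4))
print(cauer_ladder(Z))
# -> [('series L', 1), ('shunt C', 1/6), ('series L', 12/5), ('shunt C', 5/18)]  (normalised values)
```

The alternation of series and shunt extractions is the ladder structure Cauer obtained from Stieltjes' continued fraction; the corresponding expansion about the origin gives his second canonical ladder form.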
== Image method versus synthesis ==
Image filters continued to be used by designers long after the superior network synthesis techniques were available. Part of the reason for this may have been simply inertia, but it was largely due to the greater computation required for network synthesis filters, often needing a mathematical iterative process. Image filters, in their simplest form, consist of a chain of repeated, identical sections. The design can be improved simply by adding more sections and the computation required to produce the initial section is on the level of "back of an envelope" designing. In the case of network synthesis filters, on the other hand, the filter is designed as a whole, single entity, and to add more sections (i.e., increase the order) the designer would have no option but to go back to the beginning and start over. The advantages of synthesised designs are real, but they are not overwhelming compared to what a skilled image designer could achieve, and in many cases it was more cost effective to dispense with time-consuming calculations. This is simply not an issue with the modern availability of computing power, but in the 1950s it was non-existent, in the 1960s and 1970s it was available only at cost, and it did not finally become widely available to all designers until the 1980s with the advent of the desktop personal computer. Image filters continued to be designed up to that point and many remained in service into the 21st century.
The computational difficulty of the network synthesis method was addressed by tabulating the component values of a prototype filter and then scaling the frequency and impedance and transforming the bandform to those actually required. This kind of approach, or similar, was already in use with image filters, for instance by Zobel, but the concept of a "reference filter" is due to Sidney Darlington. Darlington (1939) was also the first to tabulate values for network synthesis prototype filters; nevertheless, it had to wait until the 1950s before the Cauer–Darlington elliptic filter first came into use.
Once computational power was readily available, it became possible to easily design filters to minimise any arbitrary parameter, for example time delay or tolerance to component variation. The difficulties of the image method were firmly put in the past, and even the use of prototypes became largely superfluous. Furthermore, the advent of active filters eased the computation difficulty because sections could be isolated and iterative processes were not then generally necessary.
== Realisability and equivalence ==
Realisability (that is, which functions are realisable as real impedance networks) and equivalence (which networks have the same function and are thus interchangeable) are two important questions in network synthesis. Following an analogy with Lagrangian mechanics, Cauer formed the matrix equation,
{\displaystyle \mathbf {[A]} =s^{2}\mathbf {[L]} +s\mathbf {[R]} +\mathbf {[D]} =s\mathbf {[Z]} }
where [Z], [R], [L] and [D] are the n×n matrices of, respectively, impedance, resistance, inductance and elastance of an n-mesh network and s is the complex frequency operator {\displaystyle \scriptstyle s=\sigma +i\omega }. Here [R], [L] and [D] have associated energies corresponding to the kinetic, potential and dissipative heat energies, respectively, in a mechanical system and the already known results from mechanics could be applied here. Cauer determined the driving point impedance by the method of Lagrange multipliers;
{\displaystyle Z_{\mathrm {p} }(s)={\frac {\det \mathbf {[A]} }{s\,a_{11}}}}
where a11 is the complement of the element A11 to which the one-port is to be connected. From stability theory Cauer found that [R], [L] and [D] must all be positive-definite matrices for Zp(s) to be realisable if ideal transformers are not excluded. Realisability is only otherwise restricted by practical limitations on topology. This work is also partly due to Otto Brune (1931), who worked with Cauer in the US prior to Cauer returning to Germany. A well known condition for realisability of a one-port rational impedance due to Cauer (1929) is that it must be a function of s that is analytic in the right halfplane (σ>0), have a positive real part in the right halfplane and take on real values on the real axis. This follows from the Poisson integral representation of these functions. Brune coined the term positive-real for this class of function and proved that it was a necessary and sufficient condition (Cauer had only proved it to be necessary) and they extended the work to LC multiports. A theorem due to Sidney Darlington states that any positive-real function Z(s) can be realised as a lossless two-port terminated in a positive resistor R. No resistors within the network are necessary to realise the specified response.
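Stated compactly, and offered here only as a standard summary of the conditions just described, Z(s) is positive-real when

{\displaystyle Z(s){\text{ analytic for }}\operatorname {Re} (s)>0,\qquad \operatorname {Re} Z(s)\geq 0{\text{ for }}\operatorname {Re} (s)>0,\qquad Z(s){\text{ real for real }}s.}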
As for equivalence, Cauer found that the group of real affine transformations,
{\displaystyle \mathbf {[T]} ^{T}\mathbf {[A]} \mathbf {[T]} }
where,
{\displaystyle \mathbf {[T]} ={\begin{bmatrix}1&0\cdots 0\\T_{21}&T_{22}\cdots T_{2n}\\\cdot &\cdots \\T_{n1}&T_{n2}\cdots T_{nn}\end{bmatrix}}}
leaves Zp(s) invariant; that is, all of the transformed networks are equivalents of the original.
== Approximation ==
The approximation problem in network synthesis is to find functions which will produce realisable networks approximating to a prescribed function of frequency within limits arbitrarily set. The approximation problem is an important issue since the ideal function of frequency required will commonly be unachievable with rational networks. For instance, the ideal prescribed function is often taken to be the unachievable lossless transmission in the passband, infinite attenuation in the stopband and a vertical transition between the two. However, the ideal function can be approximated with a rational function, becoming ever closer to the ideal the higher the order of the polynomial. The first to address this problem was Stephen Butterworth (1930) using his Butterworth polynomials. Independently, Cauer (1931) used Chebyshev polynomials, initially applied to image filters, and not to the now well-known ladder realisation of this filter.
=== Butterworth filter ===
Butterworth filters are an important class of filters due to Stephen Butterworth (1930), now recognised as a special case of Cauer's elliptic filters. Butterworth discovered this filter independently of Cauer's work and, in his version, isolated each section from the next with a valve amplifier. This made calculation of component values easy, since the filter sections could not interact with each other and each section represented one term in the Butterworth polynomials. It gives Butterworth the credit for being both the first to deviate from image parameter theory and the first to design active filters. It was later shown that Butterworth filters could be implemented in ladder topology without the need for amplifiers. Possibly the first to do so was William Bennett (1932), in a patent which presents formulae for component values identical to the modern ones. Bennett, at this stage though, is still discussing the design as an artificial transmission line and so is adopting an image parameter approach despite having produced what would now be considered a network synthesis design. He also does not appear to be aware of the work of Butterworth or the connection between them.
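In modern notation (a standard restatement rather than Butterworth's own formulation), the defining magnitude response of an n-th order Butterworth filter with cut-off frequency ω_c is

{\displaystyle |H(j\omega )|^{2}={\frac {1}{1+(\omega /\omega _{c})^{2n}}},}

which is maximally flat in the sense that the first 2n − 1 derivatives of the magnitude-squared function vanish at ω = 0.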
=== Insertion-loss method ===
The insertion-loss method of designing filters is, in essence, to prescribe a desired function of frequency for the filter as an attenuation of the signal when the filter is inserted between the terminations, relative to the level that would have been received were the terminations connected to each other via an ideal transformer perfectly matching them. Versions of this theory are due to Sidney Darlington, Wilhelm Cauer and others, all working more or less independently, and the method is often taken as synonymous with network synthesis. Butterworth's filter implementation is, in those terms, an insertion-loss filter, but it is a relatively trivial one mathematically since the active amplifiers used by Butterworth ensured that each stage individually worked into a resistive load. Butterworth's filter becomes a non-trivial example when it is implemented entirely with passive components. An even earlier filter that influenced the insertion-loss method was Norton's dual-band filter, where the inputs of two filters are connected in parallel and designed so that the combined input presents a constant resistance. Norton's design method, together with Cauer's canonical LC networks and Darlington's theorem that only LC components were required in the body of the filter, resulted in the insertion-loss method. However, ladder topology proved to be more practical than Cauer's canonical forms.
Darlington's insertion-loss method is a generalisation of the procedure used by Norton. In Norton's filter it can be shown that each filter is equivalent to a separate filter unterminated at the common end. Darlington's method applies to the more straightforward and general case of a 2-port LC network terminated at both ends. The procedure consists of the following steps (a worked sketch for a simple Butterworth example follows the list):
determine the poles of the prescribed insertion-loss function,
from that find the complex transmission function,
from that find the complex reflection coefficients at the terminating resistors,
find the driving point impedance from the short-circuit and open-circuit impedances,
expand the driving point impedance into an LC (usually ladder) network.
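As a rough illustration of these steps, the sketch below carries them out symbolically for the simplest non-trivial case, a second-order Butterworth insertion loss between equal 1 Ω terminations. The chosen function and the resulting ladder values are standard textbook results rather than figures from this article, and the code is only a minimal sketch of the procedure.

```python
import sympy as sp

s, w = sp.symbols('s omega')

# Steps 1-2: prescribed transmission (2nd-order Butterworth power response)
T2 = 1 / (1 + w**4)                                  # |S21(j*omega)|^2

# Step 3: reflection coefficient; analytic continuation omega**2 -> -s**2
S11S11 = sp.cancel((1 - T2).subs(w**4, s**4))        # S11(s)*S11(-s) = s**4/(1 + s**4)
roots = sp.solve(sp.Eq(s**4 + 1, 0), s)
hurwitz = sp.expand(sp.prod(s - r for r in roots if sp.re(r) < 0))   # s**2 + sqrt(2)*s + 1
S11 = s**2 / hurwitz                                 # left-half-plane poles, zeros at the origin

# Step 4: driving point impedance seen from the 1-ohm source termination
Z = sp.cancel((1 + S11) / (1 - S11))

# Step 5: ladder (continued fraction) expansion of Z
num, den = sp.fraction(Z)
L1, remainder = sp.div(num, den, s)                  # series inductor: sqrt(2)*s
print(sp.simplify(L1), sp.cancel(remainder / den))   # sqrt(2)*s and 1/(sqrt(2)*s + 1)
# The remainder is a sqrt(2) F shunt capacitor in parallel with the 1-ohm load.
```

The printed result corresponds to the familiar doubly terminated ladder with a series inductor and a shunt capacitor both of normalised value √2, the standard second-order Butterworth prototype.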
Darlington additionally used a transformation found by Hendrik Bode that predicted the response of a filter using non-ideal components but all with the same Q. Darlington used this transformation in reverse to produce filters with a prescribed insertion-loss with non-ideal components. Such filters have the ideal insertion-loss response plus a flat attenuation across all frequencies.
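The underlying transformation can be stated briefly (a standard formulation offered as an interpretation, not Darlington's or Bode's own notation). If every inductor L acquires a series resistance αL and every capacitor C a shunt conductance αC, so that all components share the same dissipation α (uniform Q), then every branch impedance of the lossless prototype is simply evaluated at s + α and the whole network function satisfies

{\displaystyle H_{\text{lossy}}(s)=H_{\text{lossless}}(s+\alpha ).}

Used in reverse, the lossless prototype is synthesised from the predistorted target H(s − α); built with components of dissipation α it then delivers the intended response, apart from the constant scaling needed to keep the predistorted function realisable, which appears as the flat attenuation mentioned above.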
=== Elliptic filters ===
Elliptic filters are filters produced by the insertion-loss method which use elliptic rational functions in their transfer function as an approximation to the ideal filter response and the result is called a Chebyshev approximation. This is the same Chebyshev approximation technique used by Cauer on image filters but follows the Darlington insertion-loss design method and uses slightly different elliptic functions. Cauer had some contact with Darlington and Bell Labs before WWII (for a time he worked in the US) but during the war they worked independently, in some cases making the same discoveries. Cauer had disclosed the Chebyshev approximation to Bell Labs but had not left them with the proof. Sergei Schelkunoff provided this and a generalisation to all equal ripple problems. Elliptic filters are a general class of filter which incorporate several other important classes as special cases: Cauer filter (equal ripple in passband and stopband), Chebyshev filter (ripple only in passband), reverse Chebyshev filter (ripple only in stopband) and Butterworth filter (no ripple in either band).
Generally, for insertion-loss filters where the transmission zeroes and infinite losses are all on the real axis of the complex frequency plane (which they usually are for minimum component count), the insertion-loss function can be written as;
{\displaystyle {\frac {1}{1+JF^{2}}}}
where F is either an even (resulting in an antimetric filter) or an odd (resulting in a symmetric filter) function of frequency. Zeroes of F correspond to zero loss and the poles of F correspond to transmission zeroes. J sets the passband ripple height and the stopband loss, and these two design requirements can be interchanged. The zeroes and poles of F and J can be set arbitrarily. The nature of F determines the class of the filter;
if F is a Chebyshev approximation the result is a Chebyshev filter,
if F is a maximally flat approximation the result is a passband maximally flat filter,
if 1/F is a Chebyshev approximation the result is a reverse Chebyshev filter,
if 1/F is a maximally flat approximation the result is a stopband maximally flat filter,
A Chebyshev response simultaneously in the passband and stopband is possible, such as Cauer's equal ripple elliptic filter.
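For example (a standard identification given here for illustration, with J written as ε²), choosing F to be the order-n Chebyshev polynomial gives the Chebyshev filter of the list above, with insertion-loss function

{\displaystyle {\frac {1}{1+\varepsilon ^{2}T_{n}^{2}(\omega )}},\qquad T_{n}(\omega )=\cos \left(n\cos ^{-1}\omega \right){\text{ for }}|\omega |\leq 1,}

so the transmission ripples between 1 and 1/(1 + ε²) across the passband and falls monotonically beyond it.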
Darlington relates that he found in the New York City library Carl Jacobi's original paper on elliptic functions, published in Latin in 1829. In this paper Darlington was surprised to find foldout tables of the exact elliptic function transformations needed for Chebyshev approximations of both Cauer's image parameter, and Darlington's insertion-loss filters.
=== Other methods ===
Darlington considers the topology of coupled tuned circuits to involve a separate approximation technique from the insertion-loss method, but one also producing nominally flat passbands and high attenuation stopbands. The most common topology for these is shunt anti-resonators coupled by series capacitors, less commonly by inductors, or, in the case of a two-section filter, by mutual inductance. These are most useful where the design requirement is not too stringent, that is, moderate bandwidth, roll-off and passband ripple.
== Other notable developments and applications ==
=== Mechanical filters ===
Edward Norton, around 1930, designed a mechanical filter for use on phonograph recorders and players. Norton designed the filter in the electrical domain and then used the correspondence of mechanical quantities to electrical quantities to realise the filter using mechanical components. Mass corresponds to inductance, stiffness to elastance and damping to resistance. The filter was designed to have a maximally flat frequency response.
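In the impedance analogy implied here, force corresponds to voltage and velocity to current, so the mechanical and electrical mesh equations have the same form (a standard correspondence, quoted for illustration rather than from Norton's design notes):

{\displaystyle F=m{\frac {dv}{dt}}+R_{\mathrm {m} }v+S\int v\,dt\quad \longleftrightarrow \quad V=L{\frac {di}{dt}}+Ri+{\frac {1}{C}}\int i\,dt,}

with mass m taking the place of inductance L, mechanical resistance R_m of electrical resistance R, and stiffness S of elastance 1/C, exactly as described above.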
In modern designs it is common to use quartz crystal filters, especially for narrowband filtering applications. The signal exists as a mechanical acoustic wave while it is in the crystal and is converted by transducers between the electrical and mechanical domains at the terminals of the crystal.
=== Distributed-element filters ===
Distributed-element filters are composed of lengths of transmission line that are at least a significant fraction of a wavelength long. The earliest non-electrical filters were all of this type. William Herschel (1738–1822), for instance, constructed an apparatus with two tubes of different lengths which attenuated some frequencies but not others. Joseph-Louis Lagrange (1736–1813) studied waves on a string periodically loaded with weights. The device was never studied or used as a filter by either Lagrange or later investigators such as Charles Godfrey. However, Campbell used Godfrey's results by analogy to calculate the number of loading coils needed on his loaded lines, the device that led to his electrical filter development. Lagrange, Godfrey, and Campbell all made simplifying assumptions in their calculations that ignored the distributed nature of their apparatus. Consequently, their models did not show the multiple passbands that are a characteristic of all distributed-element filters. The first electrical filters that were truly designed by distributed-element principles are due to Warren P. Mason starting in 1927.
=== Transversal filters ===
Transversal filters are not usually associated with passive implementations but the concept can be found in a Wiener and Lee patent from 1935 which describes a filter consisting of a cascade of all-pass sections. The outputs of the various sections are summed in the proportions needed to result in the required frequency function. This works by the principle that certain frequencies will be in, or close to, antiphase at different sections and will tend to cancel when added. These are the frequencies rejected by the filter, and filters designed this way can have very sharp cut-offs. This approach did not find any immediate applications, and is not common in passive filters. However, the principle finds many applications as an active delay line implementation for wide band discrete-time filter applications such as television, radar and high-speed data transmission.
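The cancellation principle is easy to demonstrate numerically. The sketch below is only an illustration with assumed tap weights and an assumed delay per section, and it uses ideal delays in place of the patent's all-pass sections:

```python
import numpy as np

# Transversal (tapped delay line) filter: the output is a weighted sum of delayed copies.
taps = np.array([1.0, 1.0, 1.0, 1.0])      # assumed equal weights, 4 sections
T = 1e-3                                   # assumed delay per section: 1 ms

def response(f):
    k = np.arange(len(taps))
    return np.sum(taps * np.exp(-2j * np.pi * f * k * T))

for f in (0, 125, 250, 500):               # frequencies in Hz
    print(f, abs(response(f)))
# At 0 Hz all copies add (gain 4); at 250 Hz and 500 Hz the delayed copies arrive in
# antiphase and the sum collapses towards zero; these are the rejected frequencies.
```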
=== Matched filter ===
The purpose of matched filters is to maximise the signal-to-noise ratio (S/N) at the expense of pulse shape. Pulse shape, unlike many other applications, is unimportant in radar while S/N is the primary limitation on performance. The filters were introduced during WWII (described 1943) by Dwight North and are often eponymously referred to as "North filters".
=== Filters for control systems ===
Control systems need smoothing filters in their feedback loops, with criteria to maximise the speed of movement of a mechanical system towards the prescribed mark while at the same time minimising overshoot and noise-induced motions. A key problem here is the extraction of Gaussian signals from a noisy background. An early paper on this was published during WWII by Norbert Wiener, with the specific application to anti-aircraft fire control analogue computers. Rudy Kalman (Kalman filter) later reformulated this in terms of state-space smoothing and prediction, where it is known as the linear-quadratic-Gaussian control problem. Kalman's work sparked interest in state-space solutions, but according to Darlington this approach can also be found in the work of Heaviside and earlier.
== Modern practice ==
LC filters at low frequencies become awkward; the components, especially the inductors, become expensive, bulky, heavy, and non-ideal. Practical 1 H inductors require many turns on a high-permeability core; that material will have high losses and stability issues (e.g., a large temperature coefficient). For applications such as mains filters, the awkwardness must be tolerated. For low-level, low-frequency applications, RC filters are possible, but they cannot implement filters with complex poles or zeros. If the application can use power, then amplifiers can be used to make RC active filters that can have complex poles and zeros. In the 1950s, Sallen–Key active RC filters were made with vacuum tube amplifiers; these filters replaced the bulky inductors with bulky and hot vacuum tubes. Transistors offered more power-efficient active filter designs. Later, inexpensive operational amplifiers enabled other active RC filter design topologies. Although active filter designs were commonplace at low frequencies, they were impractical at high frequencies where the amplifiers were not ideal; LC (and transmission line) filters were still used at radio frequencies.
Gradually, the low frequency active RC filter was supplanted by the switched-capacitor filter that operated in the discrete time domain rather than the continuous time domain. All of these filter technologies require precision components for high performance filtering, and that often requires that the filters be tuned. Adjustable components are expensive, and the labor to do the tuning can be significant. Tuning the poles and zeros of a 7th-order elliptic filter is not a simple exercise. Integrated circuits have made digital computation inexpensive, so now low frequency filtering is done with digital signal processors. Such digital filters have no problem implementing ultra-precise (and stable) values, so no tuning or adjustment is required. Digital filters are also free of the problems of stray coupling paths and of shielding the individual filter sections from one another. One downside is that digital signal processing may consume much more power than an equivalent LC filter. Inexpensive digital technology has largely supplanted analogue implementations of filters. However, there is still an occasional place for them in the simpler applications such as coupling where sophisticated functions of frequency are not needed. Passive filters are still the technology of choice at microwave frequencies.
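As a concrete illustration of that shift, the 7th-order elliptic filter mentioned above is a few lines of code in a digital design flow. The SciPy sketch below uses assumed figures for the sample rate, ripple, stopband attenuation and band edge, none of which come from the article:

```python
from scipy import signal
import numpy as np

fs = 48_000                                # assumed sample rate, Hz
# 7th-order elliptic low-pass: 0.5 dB passband ripple, 60 dB stopband, 3.4 kHz edge
sos = signal.ellip(7, 0.5, 60, 3400, btype='low', output='sos', fs=fs)

# The coefficients are exact numbers; no trimming or tuning of components is needed.
w, h = signal.sosfreqz(sos, worN=2048, fs=fs)
print("attenuation at 4.5 kHz:", -20 * np.log10(abs(h[np.argmin(abs(w - 4500))])), "dB")
```

The coefficients produced are exact numbers, so the tuning step of the analogue equivalent simply disappears.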
== See also ==
Audio filter
Composite image filter
Digital filter
Electronic filter
Linear filter
Network synthesis filters
== Footnotes ==
== References ==
== Bibliography ==
Belevitch, V, "Summary of the history of circuit theory", Proceedings of the IRE, vol. 50, iss. 5, pp. 848–855, May 1962 doi:10.1109/JRPROC.1962.288301.
Blanchard, J, "The History of Electrical Resonance", Bell System Technical Journal, vol. 23, pp. 415–433, 1944.
Cauer, E; Mathis, W; Pauli, R, "Life and work of Wilhelm Cauer (1900–1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000), Perpignan, June, 2000.
Darlington, S, "A history of network synthesis and filter theory for circuits composed of resistors, inductors, and capacitors", IEEE Transactions on Circuits and Systems, vol. 31, pp. 3–13, 1984 doi:10.1109/TCS.1984.1085415.
Fagen, M D; Millman, S, A History of Engineering and Science in the Bell System: Volume 5: Communications Sciences (1925–1980), AT&T Bell Laboratories, 1984 ISBN 0932764061.
Godfrey, Charles, "On discontinuities connected with the propagation of wave-motion along a periodically loaded string", Philosophical Magazine, ser. 5, vol. 45, no. 275, pp. 356–363, April 1898.
Hunt, Bruce J, The Maxwellians, Cornell University Press, 2005 ISBN 0-8014-8234-8.
Lundheim, L, "On Shannon and Shannon's formula", Telektronikk, vol. 98, no. 1, pp. 20–29, 2002.
Mason, Warren P, "Electrical and mechanical analogies", Bell System Technical Journal, vol. 20, no. 4, pp. 405–414, October 1941.
Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, McGraw-Hill 1964.
== Further reading ==
Fry, T C, "The use of continued fractions in the design of electrical networks", Bulletin of the American Mathematical Society, volume 35, pages 463–498, 1929 (full text available). | Wikipedia/Voice_frequency_telegraphy |
The British and Irish Magnetic Telegraph Company (also called the Magnetic Telegraph Company or the Magnetic) was a provider of telegraph services and infrastructure. It was founded in 1850 by John Brett. The Magnetic became the principal competitor to the largest telegraph company in the United Kingdom, Electric Telegraph Company (the Electric), and became the leading company in Ireland. The two companies dominated the market until the telegraph was nationalised in 1870.
The Magnetic's telegraph system differed from those of other telegraph companies. They favoured underground cables rather than wires suspended on poles. This system was problematic because of the limitations of the insulation materials available at the time, but the Magnetic was constrained by the wayleaves owned by other companies on better routes. They were also unique in not using batteries, which were required on other systems. Instead, the operator generated the necessary power electromagnetically: the coded message was sent by the operator moving handles, which moved coils past a permanent magnet, thus generating telegraph pulses.
The Magnetic laid the first submarine telegraph cable to Ireland and developed an extensive telegraph network there. They had a close connection with the Submarine Telegraph Company and for a while had a monopoly on underwater, and hence, international communication. They also closely cooperated with the London District Telegraph Company who provided a cheap telegram service in London. The Magnetic was amongst the first to employ women as telegraph operators.
== Company history ==
The English and Irish Magnetic Telegraph Company (which was also known as the Magnetic) was established by John Brett in 1850. John Pender also had an interest and Charles Tilston Bright was the chief engineer. The company's initial objective was to connect Britain with Ireland following the success of the Submarine Telegraph Company in connecting England with France with the first ocean cable to be put in service. The British and Irish Magnetic Telegraph Company was formed in 1857 in Liverpool through a merger of the English and Irish Magnetic Telegraph Company and the British Telegraph Company (originally known as the British Electric Telegraph Company).
The main competitor of the Magnetic was the Electric Telegraph Company, later, after a merger, the Electric and International Telegraph Company (the Electric for short) founded by William Fothergill Cooke. By the end of the 1850s, the Electric and Magnetic companies were virtually a cartel in Britain. In 1859, the Magnetic moved its headquarters from Liverpool to Threadneedle Street in London, in recognition that they were no longer a regional company. They shared these premises with the Submarine Telegraph Company.
The company had a close relationship with the Submarine Telegraph Company who laid the first cable to France and many subsequent submarine telegraph cables to Europe. From about 1857, the Magnetic had an agreement with them that all their submarine cables were to be used only with the landlines of the Magnetic. The Magnetic also had control of the first cable to Ireland. This control of international traffic gave them a significant advantage in the domestic market.
Another company with a close relationship was the London District Telegraph Company (the District), formed in 1859. The District provided a cheap telegram service within London only. They shared headquarters and directors with the Magnetic. The Magnetic installed their lines and trained their staff in return for the District passing on traffic for the Magnetic outside London.
The Magnetic founded its own press agency. It promoted the agency by offering lower rates to customers who used it than to customers who wanted connections to rival agencies. In 1870, the Magnetic, along with several other telegraph companies including the Electric, was nationalised under the Telegraph Act 1868 and the company was wound up.
== Telegraph system ==
The telegraph system of the Magnetic was somewhat different from other companies. This was largely because the Electric held the patents for the Cooke and Wheatstone telegraph. The name of the company refers to the fact that their telegraph system did not require batteries. Power for the transmissions was generated electromagnetically. The system, invented by William Thomas Henley and George Foster in 1848, was a needle telegraph and came in double-needle or single-needle versions. The machine was worked by the operator pushing pedal keys. An armature connected to the key moved two coils through the magnetic field of a permanent magnet. This generated a pulse of current which caused a deflection of the corresponding needle at both ends of the line. The needles were magnetised and so arranged that they were held in position by the permanent magnet after deflection. The operator was able to apply a current in the reverse direction so that there were two positions that the needle could be held in. The code consisted of various combinations of successive needle deflections to the left or right.
In later years, the Magnetic used other telegraph systems. After the takeover of the British Telegraph Company, the Magnetic acquired the rights to the needle telegraph instrument of that company's founder, Henry Highton. This instrument was the cheapest of any of the instruments produced at the time, but, like all needle telegraphs, was slower than audible systems due to the operator having to continually look up at the instrument while transcribing the message. Some companies moved to needle instruments with endstops making two different sounds when the needle struck them (an innovation of Cooke and Wheatstone in 1845) to solve this problem. The Magnetic instead used an 1854 invention of Charles Tilston Bright on its busier lines. This was the acoustic telegraph (not to be confused with the acoustic telegraphy method of multiplexing) known as Bright's bells. In this system, two bells placed either side of the operator are rung with a hammer made to strike the bell by a solenoid driven by a relay. They are so arranged that the right and left bells are struck according to whether a positive or negative pulse of current is received on the telegraph line. Such bells make a much louder sound than the clicking of a needle.
The Magnetic found a method of overcoming the problem of dispersion on long submarine telegraph cables. The phenomenon, poorly understood at that time, was called retardation because different parts of a telegraph pulse travel at different speeds on the cable. Part of the pulse appears to be 'retarded', arriving later than the rest at the destination. This 'smearing out' of the pulse interferes with neighbouring pulses, making the transmission unintelligible unless messages are sent at a much slower speed. The Magnetic found that if they generated pulses of opposite polarity to the main pulse and slightly delayed from it, the retarded signal was sufficiently cancelled to make the line usable at normal operator speeds. This system was developed theoretically by William Thomson and demonstrated to work by Fleeming Jenkin.
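A crude numerical illustration of the idea is given below. The cable is stood in for by a single-pole low-pass response, which is a far simpler model than a real distributed cable, and the pulse widths, time constant and compensating amplitude are all arbitrary assumptions; the point is only that a delayed opposite-polarity pulse markedly shortens the received tail.

```python
import numpy as np

dt = 1e-3                                  # time step, s (assumed)
tau = 50e-3                                # assumed cable "retardation" time constant
n = 1000

def cable(x):
    """Very crude dispersive cable stand-in: a first-order low-pass response."""
    y = np.zeros_like(x)
    a = dt / tau
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i - 1] - y[i - 1])
    return y

plain = np.zeros(n)
plain[0:20] = 1.0                          # a 20 ms pulse on its own
compensated = plain.copy()
compensated[20:40] = -0.6                  # followed by a smaller opposite-polarity pulse

for name, x in (("plain", plain), ("compensated", compensated)):
    y = cable(x)
    print(name, "tail amplitude at 100 ms:", round(float(y[int(0.1 / dt)]), 3))
# The compensated signal's tail dies away much faster, so following pulses are less disturbed.
```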
The Magnetic played a part in solving the dispersion problem on the transatlantic telegraph cable of the Atlantic Telegraph Company. Magnetic were strongly connected with this project; Bright promoted it and shares were sold largely to Magnetic shareholders, including Pender. Dispersion on the 1858 Atlantic cable had been so severe that it was almost unusable: it was destroyed by misguided attempts to solve the problem using high voltage. For the 1866 cable, it was planned to use the Magnetic's opposite polarity pulse method, but doubts were expressed over whether it would work over such a great distance. Magnetic connected together various of their British underground cables to provide a total line length of over 2,000 miles (3,200 km) for proof of principle testing. Dispersion was not eliminated from submarine cables until loading coils started to be used on them from 1906 onwards.
== Telegraph network ==
=== First connection to Ireland ===
The company's first objective, in 1852, was to provide the first telegraph service between Great Britain and Ireland by means of a submarine cable between Portpatrick in Scotland and Donaghadee in Ireland. The cable core was gutta-percha insulated copper wire made by the Gutta Percha Company. This was armoured with iron wires by R. S. Newall and Company at their works in Sunderland. Before this could be achieved, two other companies attempted to be the first to make the connection across the Irish Sea.
Despite having the contract to lay the Magnetic company's cable, Newall also secretly constructed another cable at their Gateshead works with the intention of being first to get a telegraph connection to Ireland. This Newall cable was only lightly armoured with an open 'bird-cage' structure of iron wires, there was no cushioning layer between the core and the armour, and the insulation was not properly tested before laying because of the great hurry to get the job done before Magnetic was ready. This cable was laid from Holyhead in Wales to Howth, near Dublin, with William Henry Woodhouse as engineer, and thence to Dublin via underground cable along the railway line. Laying of the submarine cable was completed on 1 June 1852 by the City of Dublin Steam Packet Company's chartered paddle steamer Britannia of 1825, usually used as a cattle ship, with assistance from the Admiralty's HMS Prospero. However, the cable failed a few days later and was never put into service.
In July of the same year, the Electric Telegraph Company of Ireland tried using an insulated cable inside a hemp rope on the Portpatrick to Donaghadee route. This construction proved problematic because it floated (the Submarine Telegraph Company's Dover to Calais cable in 1850 was also lightweight, having no protection at all other than the insulation, but they had taken the precaution of adding periodic lead weights to sink the cable). It was laid from the schooner Reliance, assisted by tugs. The strong currents of the Irish Sea, which is much deeper than the English Channel, dragged the cable into a large bow, and there was consequently insufficient length to land it. The attempt was abandoned.
For their cable, Magnetic were more careful than Newall in testing the insulation of batches of cable. Coils of cable were hung over the side of the dock and left to soak before testing. They used a new type of battery for insulation testing that was capable of being used at sea. Previously, the test batteries had been lined wooden cases with liquid electrolyte (Daniell cells). The new 'sand battery' comprised a moulded gutta-percha case filled with sand saturated with electrolyte, making it virtually unspillable. 144 cells were used in series (around 150 V). Several suspect portions of insulation were removed and repaired, by opening up the iron wire armouring with Spanish windlasses. Newall attempted to lay the Sunderland-made cable, again using the chartered steamer Britannia, in the autumn of 1852. The cable was too taut as she sailed from Portpatrick, resulting in the test instruments being dragged into the sea. Several delays caused by broken iron wires as the cable was laid resulted in the ship drifting off course and running out of cable, and this attempt too was abandoned.
Magnetic were successful with a new cable in 1853 over the same route, with Newall this time using the chartered Newcastle collier William Hutt. This was a six-core cable and heavier than the 1852 cable, weighing seven tons per mile. At over 180 fathoms (330 m) down, it was the deepest cable laid to that date. Repairs to the cable in 1861 required 128 splices. Tests on pieces of retrieved cable found that the copper wire used was very impure, containing less than 50% copper, despite the Gutta Percha Company specifying 85%.
=== Land network ===
The Magnetic's network was centred on northern England, Scotland, and Ireland, with its headquarters in Liverpool. Like most other telegraph companies, it ran its major telegraph trunk lines along railways in its home area. One of their first lines was ten unarmoured wires buried in the space between two railway tracks of the Lancashire and Yorkshire Railway. The Magnetic developed an extensive underground cable network from 1851 onwards. This was in contrast to other companies who used wires suspended between telegraph poles, or, in built-up areas, from rooftop to rooftop. In part, the Magnetic buried cables for better protection from the elements. However, a more pressing reason was that many railway companies had exclusive agreements with the Electric, which shut out the Magnetic. Further, the British Telegraph Company had exclusive rights for overhead lines on public roads, and the United Kingdom Telegraph Company had exclusive rights along canals. The Magnetic had a particular problem in reaching London. Their solution was to run buried cables along major roads. Ten wires were installed in this way along the route London–Birmingham–Manchester–Glasgow–Carlisle.
Wires on poles do not need to be electrically insulated (although they may have a protective coating). This is not so with underground lines. These must be insulated from the ground and from each other. The insulation must also be waterproof. Good insulating materials were not available in the early days of telegraphy, but after William Montgomerie sent samples of gutta-percha to Europe in 1843, the Gutta Percha Company started making gutta-percha insulated electrical cable from 1848 onwards. Gutta-percha is a natural rubber that is thermoplastic, so it is well suited to continuous processes like cable making. Synthetic thermoplastic insulating material was not available until the invention of polyethylene in the 1930s, and it was not used for submarine cables until the 1940s. On cooling, gutta-percha is hard, durable, and waterproof, making it suitable for underground (and later submarine) cables. This was the type of cable chosen by the Magnetic for its underground lines.
In Ireland too, the Magnetic developed an extensive network of underground cables. In 1851, in anticipation of the submarine cable connection being laid to Donaghadee, the Magnetic laid an underground cable to Dublin. Once the submarine link was in place, Dublin could be connected to London via Manchester and Liverpool. In the west of Ireland, by 1855 they had laid cables that stretched down the entire length of the island on the route Portrush–Sligo–Galway–Limerick–Tralee–Cape Clear. The relationship of the Magnetic with Irish railway companies was the exact opposite of that in Britain. The Magnetic obtained exclusive agreements with many railways, including in 1858 with the Midland Great Western Railway. In Ireland, it was the Electric's turn to be forced on to the roads and canals.
In 1856, the Magnetic discovered that the insulation of cables laid in dry soil was deteriorating. This was due to the essential oils in the gutta-percha evaporating, leaving just a porous, woody residue. Bright tried to overcome this by reinjecting the oils, but with limited success. This problem was the main driver for acquiring the unprofitable British Telegraph Company, whose overhead line rights the Magnetic thereby inherited. From this point, the Magnetic avoided laying new underground cables except where it was essential to do so.
== Atlantic cable ==
Brett started the fundraising for the Atlantic Telegraph Company's project to build the transatlantic telegraph cable at the Magnetic's Liverpool headquarters in November 1856. Brett was one of the founders of this company and the Magnetic's shareholders were inclined to invest because they expected that the transatlantic traffic would mean more business for the Magnetic's Irish lines. This was because the landing point for the cable was in Ireland and traffic would therefore have to pass through the Magnetic's lines.
== Social issues ==
The Magnetic was an early advocate of employing women as telegraph operators. They were paid according to the speed with which they could send messages, up to the maximum of ten shillings per week when 10 wpm was achieved. It was a popular job with unmarried women who otherwise had few good options.
== Notes ==
== References ==
== Bibliography ==
Ash, Stewart, "The development of submarine cables", ch. 1 in, Burnett, Douglas R.; Beckman, Robert; Davenport, Tara M., Submarine Cables: The Handbook of Law and Policy, Martinus Nijhoff Publishers, 2014 ISBN 9789004260320.
Barty-King, Hugh, Girdle Round the Earth: The Story of Cable and Wireless and Its Predecessors to Mark the Group's Jubilee, 1929–1979, London: Heinemann, 1979 OCLC 6809756, ISBN 0434049026.
Bowers, Brian, Sir Charles Wheatstone FRS: 1802-1875, Institution of Electrical Engineers, 2001 ISBN 0852961030.
Beauchamp, Ken, History of Telegraphy, Institution of Engineering and Technology, 2001 ISBN 0852967926.
Bright, Charles Tilston, Submarine Telegraphs, London: Crosby Lockwood, 1898 OCLC 776529627.
Bright, Edward Brailsford; Bright, Charles, The Life Story of the Late Sir Charles Tilston Bright, Civil Engineer, Cambridge University Press, 2012 ISBN 1108052886 (first published 1898).
Cookson, Gillian, A Victorian Scientist and Engineer: Fleeming Jenkin and the Birth of Electrical Engineering, Ashgate, 2000 ISBN 0754600793.
Hagen, John B., Radio-Frequency Electronics, Cambridge University Press, 2009 ISBN 052188974X.
Haigh, Kenneth Richardson, Cableships and Submarine Cables, Adlard Coles, 1968 OCLC 497380538.
Hills, Jill, The Struggle for Control of Global Communication, University of Illinois Press, 2002 ISBN 0252027574.
Hunt, Bruce J., The Maxwellians, Cornell University Press, 2005 ISBN 0801482348.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 0471205052.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David and Charles, 1973 OCLC 655205099.
Mercer, David, The Telephone: The Life Story of a Technology, Greenwood Publishing Group, 2006 ISBN 031333207X.
Morse, Samuel, "Examination of the Telegraphic Apparatus and the Processes in Telegraphy", in, Blake, William Phipps (ed), Reports of the United States Commissioners to the Paris Universal Exposition, 1867, vol. 4, US Government Printing Office, 1870 OCLC 752259860.
Newell, E.L., "Loading coils for ocean cables", Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics, vol. 76, iss. 4, pp. 478–482, September 1957.
Roberts, Steven, Distant Writing, distantwriting.co.uk, ch. 5, "Competitors and allies", archived 1 July 2016.
Shaffner, Taliaferro Preston, The Telegraph Manual, Pudney & Russell, 1859.
Shaffner, Taliaferro Preston, "Magneto-electric battery", Shaffner's Telegraph Companion, vol. 2, pp. 162–167, 1855 OCLC 191123856. See also, Catalogue of the Special Loan Collection of Scientific Apparatus at the South Kensington Museum, p. 301, 1876.
Smith, Willoughby, The Rise and Extension of Submarine Telegraphy, London: J.S. Virtue & Co., 1891 OCLC 1079820592.
Wheen, Andrew, Dot-Dash to Dot.Com: How Modern Telecommunications Evolved from the Telegraph to the Internet, Springer, 2011 ISBN 1441967605.
"The progress of the telegraph: part VII", Nature, vol. 12, pp. 110–113, 10 June 1875.
== External links ==
Henley's magneto electric double needle telegraph, 1848–1852 at the Science Museum, London. | Wikipedia/Magnetic_Telegraph_Company |
A heliograph (from Ancient Greek ἥλιος (hḗlios) 'sun' and γράφειν (gráphein) 'to write') is a solar telegraph system that signals by flashes of sunlight (generally using Morse code from the 1840s) reflected by a mirror. The flashes are produced by momentarily pivoting the mirror, or by interrupting the beam with a shutter. The heliograph was a simple but effective instrument for instantaneous optical communication over long distances during the late 19th and early 20th centuries. Its main uses were military, surveying and forest protection work. Heliographs were standard issue in the British and Royal Australian armies until the 1960s, and were used by the Pakistani army as late as 1975.
== Description ==
There were many heliograph types. Most heliographs were variants of the British Army Mance Mark V version (Fig.1). It used a flat round mirror with a small unsilvered spot in the centre. The sender aligned the heliograph to the target by looking at the reflected target in the mirror and moving their head until the target was hidden by the unsilvered spot. Keeping their head still, they then adjusted the aiming rod so its cross wires bisected the target. They then turned up the sighting vane, which covered the cross wires with a diagram of a cross, and aligned the mirror with the tangent and elevation screws, so the small shadow that was the reflection of the unsilvered spot hole was on the cross target. This indicated that the sunbeam was pointing at the target.
The flashes were produced by a keying mechanism that tilted the mirror up a few degrees at the push of a lever at the back of the instrument. If the Sun was in front of the sender, its rays were reflected directly from this mirror to the receiving station. If the Sun was behind the sender, the sighting rod was replaced by a second mirror, to capture the sunlight from the main mirror and reflect it to the receiving station. The U.S. Army's Signal Corps heliograph used a flat square mirror that did not tilt. This type produced flashes by a shutter mounted on a second tripod (Fig 4).
The heliograph had certain advantages. It allowed long-distance communication without a fixed infrastructure, though it could also be linked to make a fixed network extending for hundreds of miles, as in the fort-to-fort network used for the Geronimo military campaign. It was very portable, did not require any power source, and was relatively secure since it was invisible to those not near the axis of operation, and the beam was very narrow, spreading only 50 ft (15 m) per 1 mi (1.6 km) of range. However, anyone in the beam with the correct knowledge could intercept signals without being detected. In the Second Boer War (1899–1902) in South Africa, where both sides used heliographs, tubes were sometimes used to decrease the dispersion of the beam. In some other circumstances, though, a narrow beam made it difficult to stay aligned with a moving target, as when communicating from shore to a moving ship, so the British issued a dispersing lens to broaden the heliograph beam from its natural diameter of 0.5 degrees to 15 degrees.
The range of a heliograph depends on the opacity of the air and the effective collecting area of the mirrors. Heliograph mirrors ranged from 1.5 to 12 in (38 to 305 mm) or more. Stations at higher altitudes benefit from thinner, clearer air, and are required in any event for great ranges, to clear the curvature of the Earth. A good approximation for ranges of 20 to 50 mi (32 to 80 km) is that the flash of a circular mirror is visible to the naked eye at a distance of 10 mi (16 km) for each inch of mirror diameter, and farther still when viewed through a telescope. The world record distance was established by a detachment of U.S. Army signal sergeants by the inter-operation of stations in North America on Mount Ellen, Utah, and Mount Uncompahgre, Colorado, 183 mi (295 km) apart on 17 September 1894, with Army Signal Corps heliographs carrying mirrors only 8 inches (20 cm) on a side.
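Both rules of thumb quoted above are simple arithmetic, as the short script below shows (it merely restates the article's figures; the computed half-degree divergence is close to the angular diameter of the Sun, which is what ultimately sets the natural beam spread of a flat mirror):

```python
import math

# Beam spread: about 50 ft of width per mile of range
spread_ft, range_mi = 50, 1
divergence_deg = math.degrees(math.atan(spread_ft / (range_mi * 5280)))
print(round(divergence_deg, 2), "degrees")        # ~0.54 degrees, roughly the Sun's angular size

# Rule of thumb: naked-eye range of about 10 miles per inch of mirror diameter
for mirror_in in (2, 3, 5):                       # sizes kept within the 20-50 mile validity range
    print(mirror_in, "in mirror ->", 10 * mirror_in, "miles (naked eye)")
```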
== History ==
The German professor Carl Friedrich Gauss (1777–1855) of the University of Göttingen developed and used a predecessor of the heliograph (the heliotrope) in 1821. His device directed a controlled beam of sunlight to a distant station to be used as a marker for geodetic survey work, and was suggested as a means of telegraphic communications. This is the first reliably documented heliographic device, despite much speculation about possible ancient incidents of sun-flash signalling, and the documented existence of other forms of ancient optical telegraphy.
For example, one author in 1919 chose to "hazard the theory" that the signals from the Italian mainland which the Roman emperor Tiberius (42 B.C. to A.D. 37, reigned A.D. 14 to 37) watched for from his imperial retreat on the island of Capri were mirror flashes, but admitted "there are no references in ancient writings to the use of signaling by mirrors", and that the documented means of ancient long-range visual telecommunication was by beacon fires and beacon smoke, not mirrors.
Similarly, the story that a shield was used as a heliograph at the famous Battle of Marathon between the Greeks and Persians in 490 B.C. is a modern myth, originating in the 1800s. The ancient historian Herodotus never mentioned any flash; what he did write was that someone was accused of having arranged to "hold up a shield as a signal". Suspicion grew in the later 1900s that the flash theory was implausible, and the conclusion after testing the theory was that "Nobody flashed a shield at the Battle of Marathon".
In a letter dated 3 June 1778, John Norris, High Sheriff of Buckinghamshire, England, notes: "Did this day heliograph intelligence from Dr [Benjamin] Franklin in Paris to Wycombe". However, there is little evidence that "heliograph" here is other than a misspelling of "holograph". The term "heliograph" for solar telegraphy did not enter the English language until the 1870s—even the word "telegraphy" was not coined until the 1790s.
Henry Christopher Mance (1840–1926), of the British Government's Persian Gulf Telegraph Department, developed the first widely accepted heliograph about 1869, while stationed at Karachi (now in modern Pakistan) in the then Bombay Presidency of British India. Mance was familiar with heliotropes by their use earlier for the mapping project of the Great Trigonometrical Survey of India (done 1802–1871). The Mance Heliograph was operated easily by one man, and since it weighed about 7 lb (3.2 kg), the operator could readily carry the device and its supporting tripod. The British Army tested the heliograph in India at a range of 35 mi (56 km) with favorable results. During the Jowaki Afridi expedition sent by the British-Indian government in 1877, the heliograph was first tested in war.
The simple and effective instrument that Mance invented was to be an important part of military communications for more than 60 years. The usefulness of heliographs was limited to daytimes with strong sunlight, but they were the most powerful type of visual signalling device known. In pre-radio times heliography was often the only means of communication that could span ranges of as much as 100 mi (160 km) with a lightweight portable instrument.
In the United States military, by mid-1878, Colonel Nelson A. Miles had established a line of heliographs connecting the far-flung military outposts of Fort Keogh and Fort Custer, in the northern Montana Territory, a distance of 140 mi (230 km). In 1886, Miles (1839–1925), by then a general, set up a network of 27 heliograph stations in the Arizona and New Mexico territories of the old Southwest during the extended campaign and hunt for the Apache leader Geronimo (1829–1909). In 1890, Major W. J. Volkmar of the U.S. Army demonstrated in the Arizona and New Mexico territories the possibility of communication over a heliograph network aggregating 2,000 mi (3,200 km) in length. The network of communication begun by General Miles in 1886, and continued by Lieutenant W. A. Glassford, was perfected in 1889 at ranges of 85, 88, 95 and 125 mi (137, 142, 153 and 201 km) over a rugged and broken country, which was the stronghold of the Apache, Comanche and other hostile tribes.
By 1887, heliographs in use included not only the British Mance and Begbie heliographs, but also the American Grugan, Garner and Pursell heliographs. The Grugan and Pursell heliographs used shutters, and the others used movable mirrors operated by a finger key. The Mance, Grugan and Pursell heliographs used two tripods, and the others one. The signals could be either momentary flashes or momentary obscurations. In 1888, the U.S. Army Signal Corps reviewed all of these devices, as well as the Finley Helio-Telegraph, and finding none completely suitable, developed its own instrument, the U.S. Army Signal Corps heliograph, a two-tripod, shutter-based machine of 13+7⁄8 lb (6.3 kg) total weight, and ordered 100, at a total cost of $4,205. By 1893, the number of heliographs manufactured for the American Army Signal Corps was 133.
The heyday of the heliograph was probably the Second Boer War in South Africa, where it was much used by both the British and the Boers. The terrain and climate, as well as the nature of the campaign, made heliography a logical choice. For night communications, the British used some large signal lamps, brought inland on railroad cars and equipped with leaf-type shutters for keying a beam of light into dots and dashes. During the early stages of the war, British Army garrisons were besieged at Kimberley, Ladysmith and Mafeking. With land telegraph lines cut, the only contact with the outside world was via light-beam communication: heliograph by day and signal lamps at night.
In 1909, the use of heliography for forestry protection was introduced by the United States Forestry Service in the western States. By 1920, such use was widespread in the US and was beginning in neighboring Canada, and the heliograph was regarded as "next to the telephone, the most useful communication device that is at present available for forest-protection services". D.P. Godwin of the U.S. Forestry Service invented a very portable (4.5 lb [2.0 kg]) heliograph of the single-tripod, shutter plus mirror type for forestry use.
Immediately prior to the outbreak of World War I (1914–1918), the mounted cavalry regiments of the Russian Imperial Army were still being trained in heliograph communications to augment the efficiency of their scouting and reporting roles. Following the two Russian Revolutions of 1917, Bolshevik Red Army units made use of a series of heliograph stations to disseminate intelligence efficiently during the Russian Civil War of 1918–1922. Heliograph stations were still being used a decade later, in 1926, to report on counter-revolutionary Basmachi rebel movements in Central Asia's Turkestan region.
During World War II (1939–1945), South African and Australian forces used the heliograph against German and Italian forces along the southern coast of the Mediterranean Sea, in Libya and western Egypt, alongside the British in the North African desert campaign of 1940, 1941 and 1942.
The heliograph remained standard equipment for military signallers in the Royal Australian and British armies until the 1960s, as it was considered a "low probability of intercept" form of communication. The Canadian Army was the last major military force to retain the heliograph as an issue item. By the time the mirror instruments were retired, they were seldom used for signalling. However, as recently as the 1980s, heliographs were used by Afghan mujahedeen forces during the Soviet–Afghan War (1979–1989). Signal mirrors are still included in survival kits for emergency signaling to search and rescue aircraft.
== Automated heliographs ==
Most heliographs of the 19th and 20th centuries were completely manual. The steps of aligning the heliograph on the target, co-aligning the reflected sunbeam with the heliograph, maintaining the sunbeam alignment as the sun moved, transcribing the message into flashes, modulating the sunbeam into those flashes, detecting the flashes at the receiving end, and transcribing the flashes into the message were all done manually. One notable exception was that many French heliographs used clockwork heliostats to automatically compensate for the sun's motion. By 1884, all active units of the "Mangin apparatus" (a dual-mode French Army military field optical telegraph that could use either lantern or sunlight) were equipped with clockwork heliostats. The Mangin apparatus with heliostat was still in service in 1917. Proposals to automate both the modulation of the sunbeam (by clockwork) and the detection (by electrical selenium photodetectors, or photographic means) date back to at least 1882. In 1961, the United States Air Force was working on a space heliograph to signal between satellites.
In May 2012, "Solar Beacon" robotic mirrors designed at the University of California, Berkeley were mounted on the twin towers of the Golden Gate Bridge at the entrance to San Francisco Bay, and a web site was set up where the public could schedule times for the mirrors to signal with sun-flashes, entering the time and their latitude, longitude and altitude. The solar beacons were later moved to Sather Tower on the Berkeley campus. By June 2012, the public could specify a "custom show" of up to 32 "on" or "off" periods of 4 seconds each, permitting the transmission of a few characters of Morse code. The designer described the Solar Beacon as a "heliostat", not a "heliograph".
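The 32-period scheduling format just described is enough for a few Morse characters. The minimal Python sketch below packs a short message into such a schedule; the timing convention used (one 4-second period per dot, three per dash, one "off" period after each symbol and two more between letters) and the small subset of the Morse table are assumptions for illustration, not the Solar Beacon's documented format.

MORSE = {"E": ".", "T": "-", "A": ".-", "N": "-.", "S": "...", "O": "---"}   # small subset only

def to_slots(message, max_slots=32):
    slots = []
    for letter in message:
        for symbol in MORSE[letter]:
            slots += ["on"] * (1 if symbol == "." else 3)   # dot = 1 period, dash = 3
            slots.append("off")                             # gap after each symbol
        slots += ["off"] * 2                                 # extra gap between letters
    if len(slots) > max_slots:
        raise ValueError("message too long for a 32-period show")
    return slots

print(to_slots("SOS"))   # uses 30 of the 32 available 4-second periods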
The first digitally controlled heliograph was designed and built in 2015. It was a semi-finalist in the Broadcom MASTERS competition.
== See also ==
Heliography, an early photographic process invented by Joseph Nicéphore Niépce around 1822
Heliotrope (instrument)
Operation On-Target, a Scouting program
Signal lamp
== References ==
== Further reading ==
Lewis Coe, Great Days of the Heliograph, Crown Point, 1987 OCLC 16902284
== External links ==
Heliography: Communicating with Mirrors Photographs of British, American and Portuguese heliographs.
The Heliograph A description of the British Mance, Begbie and French LeSeurre heliographs with illustrations (1899)
Eliografo Detailed color photographs of a World War 2 British Mance heliograph (Italian).
"Heliograph" at the National Library of Australia: Trove; 100+ historical heliograph photographs at the Australian War Memorial and elsewhere
Royal Signals Datasheet No. 2. The Heliograph (revised April 2003) Archived 5 September 2012 at the Wayback Machine
CHAPTER IV THE HELIOGRAPH (PAGE 48 OF THE 1905 SIGNALLING HANDBOOK)
Mance Mark V Heliograph Detailed photos of a British Mark V Heliograph and kit, links to patents. Clicking on visible photos reveals high resolution photos.
The Heliograph in the Apache Wars
Signals communication in the South African War 1899–1902
Heliographs at the Museum of RetroTechnology | Wikipedia/Heliograph |
The Teletype Model 33 is an electromechanical teleprinter designed for light-duty office use. It was less rugged and less expensive than earlier Teletype models. The Teletype Corporation introduced the Model 33 as a commercial product in 1963, after it had originally been designed for the United States Navy. The Model 33 was produced in three versions:
Model 33 ASR (Automatic Send and Receive), which has a built-in eight-hole punched tape reader and tape punch;
Model 33 KSR (Keyboard Send and Receive), which lacks the paper tape reader and punch;
Model 33 RO (Receive Only) which has neither a keyboard nor a reader/punch.
The Model 33 was one of the first products to employ the newly standardized ASCII character encoding, which was first published in 1963. A companion model, the Teletype Model 32, used the older, established five-bit Baudot code. Because of its low price and ASCII compatibility, the Model 33 was widely used, and the large number of teleprinters sold strongly influenced several de facto standards that developed during the 1960s.
Teletype Corporation's Model 33 terminal, introduced in 1963, was one of the most popular terminals in the data communications industry until the late 1970s. Over half a million Model 33s were made by 1975, and the 500,000th was plated with gold and placed on special exhibit. Another 100,000 were made in the next 18 months, and serial number 600,000, manufactured during the United States Bicentennial, was painted red, white and blue and shown around the country.
The Model 33 originally cost about $1000 (equivalent to $10,000 today), much less than other teleprinters and computer terminals in the mid-1960s, such as the Friden Flexowriter and the IBM 1050. In 1976, a new Model 33 RO printer cost about $600 (equivalent to $3,000 today).
As Teletype Corporation realized the growing popularity of the Model 33, it began improving its most failure-prone components, gradually upgrading the original design from "light duty" to "standard duty", as promoted in its later advertising. The machines had good durability and faced little competition in their price class until the appearance of Digital Equipment Corporation's DECwriter series of teleprinters.
== Naming conventions ==
While the manufacturer called the Model 33 teleprinter with a tape punch and tape reader a "Model 33 ASR", many computer users used the shorter term "ASR-33". The earliest known source for this equipment naming discrepancy comes from Digital Equipment Corporation (DEC) documentation, where the September 1963 PDP-4 brochure calls the Teletype Model 28 KSR a "KSR-28" in the paragraph titled "Printer-Keyboard and Control Type 65". This naming convention was extended from the Teletype Model 28 to other Teletype equipment in later DEC documentation, consistent with DEC's practice of designating equipment using letters followed by numerals. For example, the DEC PDP-15 price list from April 1970 lists a number of Teletype Corporation teleprinters using this alternative naming convention. This practice was widely adopted as other computer manufacturers published their documentation. For example, Micro Instrumentation and Telemetry Systems marketed the Teletype Model 33 ASR as "Teletype ASR-33".
The trigram "tty" became widely used as an informal abbreviation for "Teletype", often used to designate the main text input and output device on many early computer systems. The abbreviation remains in use by radio amateurs ("ham radio") and in the hearing-impaired community, to refer to text input and output assistive devices.
== Obsolescence ==
Early video terminals, such as the Tektronix 4010, did not become available until 1970, and initially cost around $10,000 (equivalent to $103,000 today). However, the introduction of integrated circuits and semiconductor memory later that decade allowed the price of cathode-ray tube-based terminals to rapidly fall below the price of a Teletype teleprinter.
"Dumb terminals", such as the low-cost ADM-3 (1975) began to undercut the market for Teletype terminals. Such basic video terminals, which could only sequentially display lines of text and scroll them, were often called glass Teletypes ("glass TTYs") analogous to the Teletype printers. More-advanced video terminals, such as the Digital Equipment Corporation VT52 (1975), the ADM-3A (1976), and the VT100 (1978), could communicate much faster than electromechanical printers, and could support use of a full-screen text editor program without generating large amounts of paper printouts. Teletype machines were gradually replaced in new installations by much faster dot-matrix printers and video terminals in the middle-to-late 1970s.
Because of falling sales, Teletype Corporation shut down Model 33 production in 1981.
== Technical information ==
The design objective for the Model 33 was a machine that would fit into a small office space, match with other office equipment of the time and operate up to two hours per day on average. Since this machine was designed for light duty use, adjustments that Teletype made in previous teleprinters by turning screws were made by bending metal bars and levers. Many Model 33 parts were not heat-treated and hardened. The base is die-cast metal, but self-tapping screws were used, along with parts that snapped together without bolting.
Everything is mechanically powered by a single electric motor located at the rear of the mechanism. The motor runs continuously as long as power is on, generating a familiar hum and a slight rattle from its vibration. The noise level increases considerably whenever the printing or paper tape mechanisms are operating; these noises became iconic as the sound of an active newswire or computer terminal. There is a mechanical bell, activated by code 07 (Control-G, also known as BEL), to draw special attention when needed.
The Teletype Model 33, including the stand, stands 34 inches (860 mm) high, 22 inches (560 mm) wide and 18.5 inches (470 mm) deep, not including the paper holder. The machine weighs 75 pounds (34 kg) on the stand, including paper. It requires less than 4 amperes at 115 VAC and 60 Hz. The recommended operating environment is a temperature of 40 to 110 °F (4 to 43 °C), a relative humidity between 2 and 95 percent, and an altitude of 0 to 10,000 feet (0 to 3,048 m). The printing paper is supplied on a roll 8.44 inches (214 mm) wide and 4.5 inches (114 mm) in diameter, and the paper tape is a 1,000-foot (300 m) roll of 1-inch (25 mm) wide tape. Nylon fabric ink ribbons are 0.5 inches (13 mm) wide by 60 yards (55 m) long, with plastic spools and eyelets that trigger automatic reversal of the ribbon feed direction.
The entire Model 33 ASR mechanism requires periodic application of grease and oil in approximately 500 locations.
=== Paper tape options ===
As a cost-saving measure, the optional paper tape mechanisms were dependent on the keyboard and page printer mechanisms. The interface between the paper tape reader and the rest of the terminal is completely mechanical, with power, clock, and eight data bits (which Teletype called "intelligence") all transmitted in parallel through metal levers. Configuration of user-selectable options (such as parity) is done with mechanical clips that depress or release various levers. Sensing of punched holes by the paper tape reader is done by using metal pins which mechanically probe for their presence or absence. The paper tape reader and punch can handle eight-bit data, allowing the devices to be efficiently used to download or upload binary data for computers.
Earlier Teletype machine designs, such as the Model 28 ASR, had allowed the user to operate the keyboard to punch tape while independently transmitting a previously punched tape, or to punch a tape while printing something else. Independent use of the paper tape punch and reader is not possible with the Model 33 ASR.
The tape punch requires oiled paper tape to keep its mechanism lubricated. There is a transparent, removable chad receptacle beneath the tape punch, which requires periodic emptying.
=== Printing ===
The printing mechanism is usually geared to run at a maximum speed of ten characters per second, or 100 words per minute (wpm), but slower speeds were also available: 60 wpm, 66 wpm, 68.2 wpm, and 75 wpm. There are also many type font options. The Teletype Parts Bulletin listed 69 factory-installed type element options for the Model 33 (frequent type element changes in the field were impractical). The type element, called a "typewheel" in Teletype's technical manuals, is cylindrical, with characters arranged in four tiers of 16 characters each, and is thus capable of printing 64 characters. The character to be printed is selected by rotating the typewheel clockwise or anticlockwise and raising or lowering it, then striking it with a padded hammer, which impacts the element against the ink ribbon and paper.
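The four-tier, sixteen-position geometry means that selecting any of the 64 printable characters amounts to choosing one of four vertical positions and one of sixteen rotational positions. The short Python sketch below shows that decomposition as arithmetic; the particular code-to-position assignment is purely illustrative, since the real machine performs the selection with mechanical code bars rather than software.

def typewheel_position(code):
    """Split a 6-bit character code into (tier, rotation) for a 4 x 16 typewheel."""
    assert 0 <= code < 64          # 4 tiers x 16 characters = 64 printable positions
    tier = code // 16              # how far to raise or lower the typewheel
    rotation = code % 16           # how far to rotate it before the hammer strikes
    return tier, rotation

print(typewheel_position(45))      # -> (2, 13): third tier, fourteenth position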
The Model 33 prints on 8.5-inch (220 mm) wide paper, supplied on continuous 5-inch (130 mm) diameter rolls approximately 100 feet (30 m) long and fed by friction rather than a tractor feed. It prints at a fixed pitch of 10 characters per inch and supports 74-character lines, although a 72-character line length is commonly stated.
=== Keyboard ===
The Model 33 keyboard generates the seven-bit ASCII code, also known as CCITT International Telegraphic Alphabet No. 5, with one (even) parity bit and two stop bits, with a symbol rate of 110 baud, but it only supports an upper-case subset of that code; it does not support lower-case letters or the `, {, |, }, and ~ characters.
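Counting the start bit of asynchronous transmission, that framing comes to 11 bits per character (1 start + 7 data + 1 parity + 2 stop), which is why a 110 baud line yields the ten characters per second quoted in the printing section. A minimal Python sketch of the framing:

def frame(char):
    code = ord(char) & 0x7F                      # 7-bit ASCII code
    data = [(code >> i) & 1 for i in range(7)]   # sent least significant bit first
    parity = sum(data) % 2                       # even parity: keep the count of 1s even
    return [0] + data + [parity] + [1, 1]        # start bit, data, parity, two stop bits

bits = frame("A")
print(bits, len(bits), "bits;", 110 // len(bits), "characters per second at 110 baud")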
The keyboard required heavy pressure to operate the keys, on par with a mechanical typewriter and far more than any modern keyboard.
The Model 33 can operate either in half-duplex mode, in which signals from the keyboard are sent to the print mechanism, so that characters are printed as they are typed (local echo), or in full-duplex mode, in which keyboard signals are sent only to the transmission line, and the receiver has to transmit the character back to the Model 33 in order for it to be printed (remote echo). The factory setting is half-duplex, but it can be changed to full-duplex by the user.
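A toy model of that echo behaviour, written in Python, shows what appears on the printer for one typed character under each combination of terminal mode and host behaviour; a host that echoes while the terminal is set to half-duplex prints every character twice, the classic symptom of a mismatched setting.

def printed_output(typed, full_duplex, host_echoes):
    out = ""
    if not full_duplex:
        out += typed          # half-duplex: a local copy goes straight to the printer
    if host_echoes:
        out += typed          # anything echoed back by the far end is also printed
    return out

for full_duplex in (False, True):
    for host_echoes in (False, True):
        print(full_duplex, host_echoes, repr(printed_output("A", full_duplex, host_echoes)))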
=== Answer-back and unattended operation ===
The Teletype Model 33 contains an answer-back mechanism that was generally used in dial-up networks such as the Teletypewriter Exchange Service (TWX). At the beginning of the message, the sending machine can transmit an enquiry character or WRU ("who are you") code, and the recipient machine automatically initiates a response, which is encoded on a rotating drum preprogrammed by breaking off tabs. The answer-back drum in the recipient machine rotates and sends a unique identifying code to the sender, so that the sender can verify connection to the correct recipient. The WRU code can also be sent at the end of the message; a correct response confirms that the connection remained unbroken during the message transmission. To conclude the transmission, the sending machine operator presses the disconnect button.
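A minimal Python sketch of that exchange, assuming the standard ASCII ENQ character as the WRU code; the station identifier programmed on the drum is made up for the example.

ENQ = "\x05"                                  # ASCII enquiry character, used as the WRU code

class Station:
    def __init__(self, answerback):
        self.answerback = answerback          # fixed by breaking tabs off the drum

    def receive(self, char):
        if char == ENQ:                       # WRU triggers the drum automatically
            return self.answerback
        return None

remote = Station("EXAMPLE TTY 123\r\n")       # hypothetical station identifier
print("sender verifies connection to:", remote.receive(ENQ).strip())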
The receiving machine can also be set up so that it does not require operator intervention. Since messages were often sent across multiple time zones to their destination, it was common to send a message to a receiving machine operating in an office that was closed and unstaffed overnight. This also took advantage of lower telecommunication charges for non-urgent messages sent at off-peak times.
The sole electric motor in the machine has to be left running continuously whenever unattended operation is expected, and is designed to withstand many hours of idling. The motor displays a "HOT" warning label, clearly visible once the cover is removed.
=== Communications interface ===
The communications module in the Model 33 is known as a Call Control Unit (CCU) and occupies the space to the right of the keyboard and printer. Various CCU types were available; most of them operated on the telephone network and included the relevant user controls. Variants included rotary dial, DTMF signalling, or a mechanical card dialer. An acoustic coupler sized for the de facto standard telephone handset of the day was also available.
Another CCU type is called "Computer Control Private Line", which operated on a local 20 mA current loop, the de facto standard serial protocol for computer terminals before the rise of RS-232 signaling. "Private Line" CCUs had a blank panel with no user controls or displays, since the terminal can be semi-permanently hard-wired to the computer or other device at the far end of the communications line.
== Related machines ==
The Teletype Model 32 line used the same mechanism and looked identical, except for having a three-row keyboard and, on the ASR version, a five-hole paper tape reader and punch, both appropriate for Baudot code.
Teletype also introduced a more-expensive ASCII Model 35 (ASR-35) for heavy-duty use, whose printer mechanism is based on the older, rugged Model 28. The basic Model 35 is mounted in a light gray console that matched the width of the Model 33, while the Model 35 ASR, with eight-hole mechanical tape punch and reader, is installed in a console about twice as wide.
The tape reader is mounted separately from the printer-punch mechanism on the left side of the console, and behind it is a tray for storing a manual, sheets of paper, or other miscellanea. To the right of the keyboard is a panel that can optionally house a rotary dial or DTMF pushbuttons for dialing a connection to a network via telephone lines.
The printer cover in later units also features soundproofing materials, making the Model 35 somewhat quieter than the Model 33 while printing and punching paper tape. All versions of the Model 35 have a copy holder on the printer cover, making it more convenient for the operator to transcribe written material.
The Teletype Model 35 is mentioned as the terminal used in "Experiment One" in the first RFC, RFC 1. The Model 35 was widely used as a terminal for the minicomputers and IMPs that sent and received text messages over the very early ARPANET, which later evolved into the Internet.
The Model 38 (ASR-38) was constructed similarly to, and has all the typing capabilities of, a Model 33 ASR, plus additional features. A two-color inked ribbon and additional ASCII control codes allowed automatic switching between red and black output while printing. An extended keyboard and type element support uppercase and lowercase printing with some additional special characters. A wider pin-feed platen and typing mechanism allowed printing 132 columns on fan-fold paper, making its output similar to the 132-column page size of the then industry-standard IBM 1403 printers.
More expensive Teletype systems have paper tape readers that used light sensors to detect the presence or absence of punched holes in the tape. These can work at much higher speeds (hundreds of characters per second). More sophisticated punches were also available that could run at somewhat higher speeds; Teletype's DRPE punch can operate at speeds up to 240 characters per second.
== Historical impact ==
ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone and Telegraph's Teletypewriter Exchange Service (TWX) using Teletype Model 33 teleprinters.
The Teletype Model 33 series was influential in the development and interpretation of ASCII code characters. In particular, the Teletype Model 33 machine assignments for codes 17 (Control-Q, DC1, also known as XON) and 19 (Control-S, DC3, also known as XOFF) became de facto standards.
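Those two codes survive today as the XON/XOFF software flow control convention: a receiver that is falling behind sends DC3 to pause the sender and DC1 to resume it. A minimal Python sketch of a sender honouring that convention (the tick-by-tick framing here is an assumption for illustration):

from collections import deque

XON, XOFF = 0x11, 0x13            # DC1 (Control-Q) and DC3 (Control-S)

def transmit(data, control_stream):
    """Each tick: apply any control byte from the receiver, then send one
    queued byte if transmission is not currently paused."""
    queue, sent, paused = deque(data), [], False
    for ctrl in control_stream:
        if ctrl == XOFF:
            paused = True
        elif ctrl == XON:
            paused = False
        if queue and not paused:
            sent.append(queue.popleft())
    return sent

# The receiver pauses the sender for a few ticks, then lets it resume.
print(transmit("HELLO", [None, XOFF, None, None, XON, None, None, None]))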
The programming language BASIC was designed to be written and edited on a low-speed Teletype Model 33. The slow speed of the Teletype Model 33 influenced the user interface of minicomputer operating systems, including UNIX.
A Teletype Model 33 provided Bill Gates' first computing experience.
In 1965, Stanford University psychology professors Patrick Suppes and Richard C. Atkinson, in the pilot program for computer assisted instruction, experimented with using computers to provide arithmetic and spelling drills via Teletypes and acoustic couplers to elementary school students in the Palo Alto Unified School District in California and elsewhere.
In 1971, Ray Tomlinson chose the "@" symbol on his Teletype Model 33 ASR keyboard for use in network email addresses.
The serial ports in Unix-like systems are named /dev/tty..., which is short for "Teletype".
== See also ==
Teletype Model 28
== References ==
== External links ==
Photo of a Model 33 ASR
Keyboard layout for Windows that simulates the ASR33 keyboard
ASR 33 Teletype Information with movies and sound | Wikipedia/Teletype_Model_33 |
The U.S. Military Telegraph Corps was formed in 1861 following the outbreak of the American Civil War. David Strouse, Samuel M. Brown, Richard O'Brian and David H. Bates, all from the Pennsylvania Railroad Company, were sent to Washington, D.C. to serve in the newly created office. In October of that year, Anson Stager was appointed department head. During the war, they were charged with maintaining communications between the federal government in Washington and the commanding officers of the far-flung units of the Union Army.
== Telegraph facilities prior to the Civil War ==
Before the start of the Civil War, there were three major telegraph companies in operation: the American Telegraph Company, the Western Union Telegraph Company, and the Southwestern Telegraph Company. The American Telegraph Company's lines occupied the entire region east of the Hudson River and ran all along the Atlantic coast down to the Gulf of Mexico. Cities were connected from Newfoundland to New Orleans. From this main backbone, the American Telegraph Company's lines branched west to cities like Pittsburgh, Philadelphia, and Cincinnati. At each of these points, the American Telegraph Company's lines met the Western Union lines which occupied much of the remaining northern portion of the U.S. Western Union also extended a line as far west as San Francisco by 1861. In the southern states, the American Telegraph Company's lines met the Southwestern Telegraph Company's lines at Chattanooga, Mobile, and New Orleans. From these cities, the Southwestern Telegraph Company's lines occupied the rest of the South and Southwest, including Texas and Arkansas.
== Formation of Military Telegraph Corps ==
Not long after the Confederate attack on Fort Sumter on April 12, 1861, President Abraham Lincoln ordered seventy-five thousand troops to assemble in Washington, D.C. On April 19, 1861, Harpers Ferry, which lay along the Baltimore & Ohio Railroad, was captured by Confederate troops, and Washington thus lost one of its few railroad and telegraph routes to the North. The only remaining railroad and telegraph lines connecting the Northern states to Washington ran through Maryland, a state whose loyalty to the Union was not trusted.
Due to the dire situation of the railroads and telegraphic communication in Washington, the commercial telegraph lines surrounding the city were seized, and Simon Cameron, the Secretary of War, sent a request to the president of the Pennsylvania Railroad to send Thomas A. Scott to get the railroad telegraph service in Washington under control. Scott made his way to Washington and began filling positions to help him manage the railroads and telegraph lines. He asked Andrew Carnegie, who was superintendent of the Pittsburgh division of the Pennsylvania Railroad, to assist him. Carnegie obliged and drafted men from his railroad division to accompany him to Washington in order to help the government take possession of and operate the railroads around the capital.
Carnegie's first task when he arrived in Washington was to extend the Baltimore & Ohio Railroad from its old depot in Washington across the Potomac River into Virginia. While extending the Baltimore & Ohio Railroad, telegraph lines were built and communication was opened at stations such as Alexandria, Burke's Station, and Fairfax. The first government telegraph line built connected the War Office with the Navy Yard. Carnegie stayed in Washington until November 1861. By the time he left, the military railroad and telegraph operations were running smoothly.
Along with the appointment of Carnegie, Scott made a demand for telegraph operators who excelled at running trains by telegraph. Scott called on four telegraph operators from the Pennsylvania Railroad to report to Washington. These operators were David Strouse (who later became the superintendent of the Military Telegraph Corps), D.H. Bates, Samuel M. Brown, and Richard O'Brien. The four operators arrived in Washington on April 27, 1861. Strouse and Bates were stationed at the War Department; Brown was stationed in the Navy Yard; and O'Brien was stationed at the Baltimore & Ohio Railroad depot, which was for some time army headquarters. Thus, these four men made up the initial United States Military Telegraph Corps, which would ultimately grow to a force of over 1,500 men.
In June 1863, Albert Brown Chandler joined the corps and began work at the War Department as a disbursing clerk, cashier, and telegraph operator. He developed ciphers for transmitting secret communications, and worked with Thomas Eckert and Charles A. Tinker as confidential telegraphers for Lincoln and Secretary of War Edwin Stanton.
== Special status of the corps ==
Although the U.S. Military Telegraph Corps played a prominent role in transmitting messages to and from commanders in the battlefield, it functioned independently from military control. The Corps employed civilian operators out on the battlefield and in the War Department. Only supervisory personnel were granted military commissions from the Quartermaster Department in order to distribute funds and property. All of the orders the telegraph operators received came directly from the Secretary of War. Also, because there was no government telegraph organization before the Civil War, there was no appropriation of funds by Congress to pay for the expenses of erecting poles, running cables, or the salaries of operators. As a result, for the first six months that the U.S. Military Telegraph Corps was in operation, Edward S. Sanford, president of the American Telegraph Company, paid all of these expenses. He was later reimbursed by Congress for his generosity.
== Construction of telegraph lines ==
The Telegraph Construction Corps was charged with the dangerous job of building telegraph lines in the field during battles. Consisting of about one hundred fifty men, the Telegraph Construction Corps set out in wagon trains to construct temporary lines. During a battle, one wagon was stationed at the starting point of the battle to act as a receiving station, while another wagon traveled into the field to be a sending station. Thus, orders could be sent back and forth between general headquarters and the battlefield, and what occurred during the battles could be reported back to the Military Telegraph Office in Washington, D.C.
Initially these telegraph lines were constructed only for temporary use because of the brittle, exposed copper wire that was used, but after insulated wire came into use, permanent lines were built. The Telegraph Construction Corps would load a coil of this wire on a mule's back and lead the animal straight forward to unreel the wire. As the mule moved forward unwinding the wire, two men followed and hung the line on fences and bushes so that it would not be run over until it was propped up with pikes. Because these lines were so vulnerable to Confederate wire tapping and cutting, cavalry patrols guarded the wires when they were being built in an area lacking Union soldiers. Over the course of the war, the Telegraph Construction Corps built a total of 15,389 miles of field, land, and submarine telegraph lines.
== Operators ==
Serving as a U.S. Military Telegraph Corps operator, whether in the field or in the War Office, was a hard and thankless job. The operators who served on the battlefield had the more dangerous job of the two. They faced the constant threat of being captured, shot, or killed by Confederate troops, whether they were establishing communications on the battle front, sending messages to the rear during a retreat, or venturing out to repair a line. Telegraph operators faced a casualty rate of ten percent, similar to that of the infantrymen they served with. Added to these dangers was the strained relationship the operators had with the military commanders they served under. Many of the commanders resented the Military Telegraph Corps operators because they were not members of the military but employees of the Quartermaster Department. As a result, these commanders felt that the operators were not fit to serve with them and ultimately distrusted these men.
Although the job of an operator in the War Office was not as dangerous, it was still a demanding one. The operators had to be quick and intelligent when receiving messages. Important messages were sent using cipher codes. The cipher-operators had the major responsibility of decoding these vital pieces of information and passing them along to higher-ranking officials or President Lincoln, who frequently visited the Military Telegraph office in the War Department building. Along with decoding Union telegrams, the cipher-operators also had to decode Confederate ciphers. By decoding the Confederate cipher codes, plots such as setting fire to major hotels in New York City were averted.
The U.S. Military Telegraph Corps operators served courageously during the Civil War. But, because these men were not members of the military, they did not receive recognition or a pension for their services, even though the supervisory personnel did because of the military commissions they received. As a result, the families of those men killed in action had to depend on charity to continue on. The operators of the U.S. Military Telegraph Corps were not recognized for their service until 1897, when then-President Grover Cleveland approved an act directing the Secretary of War to issue certificates of honorable service to all members (including those who died) of the U.S. Military Telegraph Corps. This certificate of recognition did not include the pension former service-members sought.
== Disbanding ==
Once the Civil War was over, the task of reconstructing the Confederate telegraph lines began. The U.S. government required that all of the major communication lines be repaired and controlled by the U.S. Military Telegraph Corps. Due to the lack of funds, the Confederate telegraph lines were in bad shape when the war ended, and the operators of the U.S. Military Telegraph Corps faced a mountain of work. These men rose to the challenge, and on February 27, 1865 an order by the Quartermaster General transferred Union control of telegraph lines in the South to commercial telegraph companies under the supervision of U.S. Military Telegraph Corps assistant superintendents. Furthermore, this order relinquished control of all lines seized by the government in the North and sold the lines constructed by the U.S. Military Telegraph Corps to private telegraph companies. Once control of the telegraph lines was turned over to the telegraph companies, the operators were discharged one by one. The only office that remained was the original telegraph office in the War Department.
== References ==
Bates, David H. (October 1907). Lincoln in the Telegraph Office: Recollections of the United States Military Telegraph Office during the Civil War. New York: The Century Co. Retrieved 2008-06-26.
== External links ==
The Military Telegraph Service
Signal Corps Telegraph
United States Military Telegraph
U.S. Military Telegraph Corps Cipher Codes | Wikipedia/U.S._Military_Telegraph_Corps |
The first transcontinental telegraph (completed October 24, 1861) was a line that connected the existing telegraph network in the eastern United States to a small network in California, by means of a link between Omaha, Nebraska and Carson City, Nevada, via Salt Lake City. It was a milestone in electrical engineering and in the formation of the United States. It served as the only method of near-instantaneous communication between the east and west coasts during the 1860s. For comparison, in 1841, it took 110 days for news of the death of President William Henry Harrison to reach Los Angeles.
== Background ==
After the development of efficient telegraph systems in the 1830s, their use saw almost explosive growth in the 1840s. Samuel Morse's first experimental line between Washington, D.C., and Baltimore—the Baltimore-Washington telegraph line—was demonstrated on May 24, 1844. By 1850 there were lines covering most of the eastern states, and a separate network of lines was soon constructed in the booming economy of California.
California was admitted to the United States in 1850, the first state on the Pacific coast. Major efforts ensued to integrate California with the other states, including sea routes, the overland mail pioneered by George Chorpenning, the Pony Express, and passenger services such as the Butterfield Overland Mail. Proposals for the subsidy of a telegraph line to California were made in Congress throughout the 1850s, and in 1860 the U.S. Post Office was authorized to spend $40,000 per year to build and maintain an overland line. The year before, the California State Legislature had authorized a similar subsidy of $6,000 per year.
== Construction ==
Construction of the first transcontinental telegraph was the work of Western Union, which Hiram Sibley, Jeptha Wade, and Ezra Cornell had established in 1856 by merging companies operating east of the Mississippi River. A second significant step was the passing of the Telegraph Act by Congress in 1860, which authorized the government to open bids for the construction of a telegraph line between Missouri and California and regulated the service to be provided. Eventually, the only bidder would be Sibley, because all competitors—Theodore Adams, Benjamin Ficklin and John Harmon—withdrew at the last minute. Later they joined Sibley in his effort.
Similar to the first transcontinental railroad, elimination of the gap in telegraph service between Fort Kearny in Nebraska and Fort Churchill in Nevada was planned to be divided between teams advancing the construction in opposite directions. The Pacific Telegraph Company would build west from Nebraska and the Overland Telegraph Company would build east from Nevada's connection to the California system. James Gamble, an experienced telegraph builder in California, was put in charge of the western crew, and Edward Creighton was responsible for the eastern crew. From Salt Lake City, a crew under James Street advanced westward, and W.H. Stebbins's crew built eastward toward Fort Kearny. Creighton's crew erected its first pole on 4 July 1861. When the project was completed in October 1861, they had planted 27,500 poles holding 2,000 miles (3,200 km) of single-strand iron wire over terrain that was not always inviting. California Chief Justice Stephen Field sent one of the first messages from San Francisco to Abraham Lincoln, using the occasion to assure the president of California's allegiance to the Union. The construction coincided with Civil War combat to the southeast. The entire cost of the system was half a million dollars (equivalent to $17.5 million in 2024).
== Operation ==
Difficulties did not stop with the completion of the project. Keeping it in operation faced multiple problems: (a) inclement weather in the form of lightning, strong winds, and heavy snow damaged both the poles and the wire; (b) bison rubbing against the poles from time to time brought down sections of the line, eventually contributing to their demise; (c) the system had to be rerouted through Chicago to avoid Confederate attempts to cut the line in Missouri and disrupt communications among Union forces; (d) Native Americans soon started cutting the line farther west as part of their hostilities with the Army.
Financially, the first transcontinental telegraph was a big success from the beginning. The charge during the first week of operation was US$1 (equivalent to about $35 in 2024) per word, whereas the Telegraph Act of 1860 had specified 30 cents.
The telegraph line immediately made the Pony Express obsolete, which officially ceased operations two days later. The overland telegraph line was operated until 1869, when it was replaced by a multi-line telegraph that had been constructed alongside the route of the first transcontinental railroad.
== See also ==
Telegraph in United States history
Australian Overland Telegraph Line, a north–south Australian telegraph line completed in 1872
== References ==
== Bibliography ==
Peters, Arthur K. (1996). Seven Trails West. Abbeville Press. ISBN 1-55859-782-4.
Jepsen, Thomas (1987). "The Telegraph Comes to Colorado: A New Technology and Its Consequences". Essays and Monographs in Colorado History. 7: 1–25.
== External links and sources ==
Contemporary account of the construction of the transcontinental telegraph
History of the first transcontinental telegraph
Central Pacific Railroad Photographic History Museum: Pacific Telegraph Act of 1860 | Wikipedia/First_transcontinental_telegraph |
The telautograph is an ancestor of the modern fax machine. It transmits electrical signals representing the position of a pen or tracer at the sending station to repeating mechanisms attached to a pen at the receiving station, thus reproducing at the receiving station a drawing, writing, or signature made by the sender. It was the first such device to transmit drawings to a stationary sheet of paper; previous inventions in Europe had used a constantly moving strip of paper to make such transmissions and the pen could not be lifted between words. Surprisingly, at least from a modern perspective, some early telautographs used digital/pulse-based transmission while later more successful devices reverted to analog signaling.
== Invention ==
The telautograph's invention is attributed to the American engineer Elisha Gray, who patented it on July 31, 1888. Gray's patent stated that the telautograph would allow "one to transmit his own handwriting to a distant point over a two-wire circuit." It was the first facsimile machine in which the stylus was controlled by horizontal and vertical bars. The telautograph was first publicly exhibited at the 1893 World's Columbian Exposition held in Chicago.
Gray started experimenting in 1887 with analog transmission of the pen position signals using variable resistances as was done in previous devices, but was dissatisfied with the performance he achieved. He then turned to pulse-based or digital pen position transmission.
Gray's early patents show devices to accomplish the required functions over two line wire circuits with a common ground connection. Pulses were sent over each wire to signal small steps of pen movement. Momentary current interruptions of a baseline direct current signaled pen lifting/lowering and paper feed, and changing polarities were used to encode pen movement direction.
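The step-and-polarity idea described above can be sketched in a few lines of Python: each wire carries a train of pulses, one per small step of pen motion, with the pulse polarity giving the direction. The 40-steps-per-inch resolution matches the figure quoted below for Gray's 1893 system; everything else here is an illustrative assumption rather than Gray's actual signalling detail.

def encode_axis(displacement, step=1 / 40):        # 40 steps per inch
    steps = round(displacement / step)
    polarity = "+" if steps >= 0 else "-"
    return [polarity] * abs(steps)                 # one pulse per step of pen motion

def decode_axis(pulses, step=1 / 40):
    return sum(step if p == "+" else -step for p in pulses)

line = encode_axis(0.5)                            # move half an inch, positive direction
print(len(line), "pulses; reconstructed move:", round(decode_axis(line), 3), "inch")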
While the patent schema's geometry implies vertical and horizontal coordinates, Gray's first practical system (discussed later) had a different coordinate scheme, based on transmitting two radial distances along approximately diagonal directions from two fixed points. Later systems used in the 20th century transmitted the angle of two crank arm joints in a five-bar linkage, comprising two pen motor cranks, two pen linkage bars, and the body of the instrument.
In an 1888 interview in The Manufacturer & Builder (Vol. 24: No. 4: pages 85–86) Gray said:
By my invention you can sit down in your office in Chicago, take a pencil in your hand, write a message to me, and as your pencil moves, a pencil here in my laboratory moves simultaneously, and forms the same letters and words in the same way. What you write in Chicago is instantly reproduced here in fac-simile. You may write in any language, use a code or cipher, no matter, a fac-simile is produced here. If you want to draw a picture it is the same, the picture is reproduced here. The artist of your newspaper can, by this device, telegraph his pictures of a railway wreck or other occurrences just as a reporter telegraphs his description in words.
However, these first devices were crude to the point of uselessness. Some of his subsequent refinements changed the encoding scheme. They also mention use of four wires for increased speed and accuracy, but the additional wires were later abandoned. It's clear from the commentary in these and other patents that Gray needed to increase the speed and accuracy of his pulse-based system, and in fact he patented a large number of increasingly complicated and refined mechanisms to achieve this.
In 1893 Gray's system using the mechanism seen in Pat. US491347 was good enough to exhibit at the Chicago World's Fair and at a Royal Society conversazione in London in 1894. An article in Manufacturer and Builder of this year describes the current and previous versions. Apparently at this stage Gray used 40 steps per inch. It's clear how challenging the technical problem was; a later film of a similar device shows the rapidity with which an operator might move the pen. This type of use would produce perhaps 600-1000 pulses per second on a digital system, a challenge for any electromechanical system connected over earth return telephone/telegraph lines. A more elegant technology was around the corner, and an analog coup was being staged at the turn of the century.
By the end of the 19th century the telautograph was modified by Foster Ritchie, a former Gray assistant. Ritchie called his version the telewriter; it could be used for either copying or speaking over the same telephone connection.
Ritchie had returned to the analog principle and made it work well. He did this by adding an AC signal whenever the pen needed to be lowered, on top of the direct current position signal already on the line wires. The angle of the two pen crank bars was turned into the position signal by two rheostats, driving large D'Arsonval movements at the receiver that moved similar crank bars, in turn moving the receiver pen. Interruption of the direct current advanced the paper.
The AC pen lowering signal was highly important. If Ritchie understood the significance of this technique, he strangely failed to reveal (or protect) this principle in his patents. George S. Tiffany, on behalf of the Gray National Telautograph Company, understood the significance of the AC signal quite well. In the patent he filed shortly after, and presumably in response to Ritchie, he explains that the use of either an AC signal superimposed on the pen current signal or intentional mechanical vibrations added at the receiver can overcome static pen and actuator friction, and allow the pen to follow the transmitter quite perfectly. This principle is in common use today in the form of dither, as applied to proportional pneumatic and hydraulic control valves and regulators. A dither signal can overcome both magnetic hysteresis and static friction and was preferable to mechanical vibration, as later Telautograph designs used it exclusively.
Apparently this technique worked well: even though Tiffany studiously avoided every constructional feature of Ritchie's patent, he used the exact same fundamental technique, and the analog telautograph principle continued to be used for at least the next 35 years, such as in the machines installed in the Frick Art Reference Library around 1935. Tiffany patents after 1901 refined the mechanism but not the principle.
Ritchie marketed his design as the Telewriter in the UK. The claim in this last reference that the phone and Ritchie's telautograph could be used simultaneously over the same line is dubious given the interference to be expected between the AC pen control signal and a phone signal, and statements to the contrary in Ritchie's patents. Contemporary accounts describe the operations separately and not together or even describe the telautograph being disconnected when the telephone was in use.
All available images and descriptions of commercial telautographs after 1901 depict the open-loop analog devices that Ritchie pioneered. While Tiffany did eventually design a servomechanism-controlled telautograph in 1916, it is not clear whether it was ever commercialized.
== Usage ==
The telautograph became very popular for the transmission of signatures over a distance, and in banks and large hospitals to ensure that doctors' orders and patient information were transmitted quickly and accurately.
Telautograph systems were installed in a number of major railroad stations to relay hand-written reports of train movements from the interlocking tower to various parts of the station. The telautograph network in Grand Central Terminal included a public display in the main concourse into the 1960s; a similar setup in Chicago Union Station remained in operation into the 1970s.
A telautograph was used in 1911 to warn workers on the 10th floor about the Triangle Shirtwaist Factory fire that had broken out two floors below.
An example of a telautograph machine writing script can be seen in the 1956 movie Earth vs. the Flying Saucers, as the output device for the mechanical translator. The 1936 movie Sinner Take All shows one being used in an office setting to secretly pass instructions to a secretary.
The Telautograph Corporation changed its name several times. In 1971, it was acquired by Arden/Mayfair. In 1993, Danka Industries purchased the company and renamed it Danka/Omnifax. In 1999, Xerox Corporation purchased the company and called it the Omnifax division, which has since been absorbed into the corporation.
Machines like the telautograph are still in use today. The Allpoint Pen is currently in use and has been used to register tens of thousands of voters in the United States, and the LongPen, an invention conceived of by writer Margaret Atwood, is used by authors to sign their books at a distance.
== References ==
== External links ==
"Telautograph". Engineering and Technology History Wiki. 3 October 2023.
Archive of Xerox Omnifax Division website, the successor to Telautograph Corporation.
Telautograph historical description
"Telautograph" . The New Student's Reference Work . 1914.
=== Patents ===
Patent images in TIFF format
U.S. patent 0,386,814 Art of Telegraphy, issued July 1888 (first telautograph patent)
U.S. patent 0,386,815 Telautograph, issued July 1888
U.S. patent 0,461,470 Telautograph, issued October 1891
U.S. patent 0,461,472 Art of and Apparatus for Telautographic Communication, issued October 1891 (improved speed and accuracy)
U.S. patent 0,491,347 Telautograph, issued February 1893
U.S. patent 0,494,562 Telautograph, issued April 1893 | Wikipedia/Telautograph |
Integrated Services Digital Network (ISDN) is a set of communication standards for simultaneous digital transmission of voice, video, data, and other network services over the digitalised circuits of the public switched telephone network. Work on the standard began in 1980 at Bell Labs and was formally standardized in 1988 in the CCITT "Red Book". By the time the standard was released, newer networking systems with much greater speeds were available, and ISDN saw relatively little uptake in the wider market. One estimate suggests ISDN use peaked at a worldwide total of 25 million subscribers at a time when 1.3 billion analog lines were in use. ISDN has largely been replaced with digital subscriber line (DSL) systems of much higher performance.
Prior to ISDN, the telephone system consisted of digital links like T1/E1 on the long-distance lines between telephone company offices and analog signals on copper telephone wires to the customers, the "last mile". At the time, the network was viewed as a way to transport voice, with some special services available for data using additional equipment like modems or by providing a T1 at the customer's location. What became ISDN started as an effort to digitize the last mile, originally under the name "Public Switched Digital Capacity" (PSDC). This would allow call routing to be completed in an all-digital system, while also offering a separate data line. The Basic Rate Interface, or BRI, is the standard last-mile connection in the ISDN system, offering two 64 kbit/s "bearer" lines and a single 16 kbit/s "data" channel for commands and data.
Although ISDN was successful in a few countries such as Germany, on a global scale the system was largely ignored and garnered the industry nickname "innovation(s) subscribers didn't need." It found a use for a time for small-office digital connection, using the voice lines for data at 64 kbit/s, sometimes "bonded" to 128 kbit/s, but the introduction of 56 kbit/s modems undercut its value in many roles. It also found use in videoconference systems, where the direct end-to-end connection was desirable. The H.320 standard was designed around its 64 kbit/s data rate. The underlying ISDN concepts found wider use as a replacement for the T1/E1 lines it was originally intended to extend, roughly doubling the performance of those lines.
== History ==
=== Digital lines ===
Since its introduction in 1881, the twisted pair copper line has been installed for telephone use worldwide, with well over a billion individual connections installed by the year 2000. Over the first half of the 20th century, the connection of these lines to form calls was increasingly automated, culminating in the crossbar switches that had largely replaced earlier concepts by the 1950s.
As telephone use surged in the post-WWII era, the problem of connecting the massive number of lines became an area of significant study. Bell Labs' seminal work on digital encoding of voice led to the use of 64 kbit/s as a standard for voice lines (or 56 kbit/s in some systems). In 1962, Robert Aaron of Bell introduced the T1 system, which carried 1.544 Mbit/s of data on a pair of twisted pair lines over a distance of about one mile. This was used in the Bell network to carry traffic between local switch offices, with 24 voice lines at 64 kbit/s and a separate 8 kbit/s line for signaling commands like connecting or hanging up a call. This could be extended over long distances using repeaters in the lines. T1 used a very simple encoding scheme, alternate mark inversion (AMI), which reached only a few percent of the theoretical capacity of the line but was appropriate for 1960s electronics.
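The T1 figures above follow from simple arithmetic: 24 channels of 64 kbit/s plus the separate 8 kbit/s signaling channel account for the full 1.544 Mbit/s line rate. A one-line check in Python:

voice_channels, channel_rate_kbit, signalling_kbit = 24, 64, 8
t1_rate_kbit = voice_channels * channel_rate_kbit + signalling_kbit
print(t1_rate_kbit, "kbit/s =", t1_rate_kbit / 1000, "Mbit/s")   # 1544 kbit/s = 1.544 Mbit/s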
By the late 1970s, T1 lines and their faster counterparts, along with all-digital switching systems, had replaced the earlier analog systems for most of the western world, leaving only the customer's equipment and their local end office using analog systems. Digitizing this "last mile" was increasingly seen as the next problem that needed to be solved. However, these connections now represented over 99% of the total telephony network, as the upstream links had increasingly been aggregated into a smaller number of much higher performance systems, especially after the introduction of fiber optic lines. If the system was to become all-digital, a new standard would be needed that was appropriate for the existing customer lines, which might be miles long and of widely varying quality.
=== Standardization ===
Around 1978, Ralph Wyndrum, Barry Bossick and Joe Lechleider of Bell Labs began one such effort to develop a last-mile solution. They studied a number of derivatives of the T1's AMI concept and concluded that a customer-side line could reliably carry about 160 kbit/s of data over a distance of 4 to 5 miles (6.4 to 8.0 km). That would be enough to carry two voice-quality lines at 64 kbit/s as well as a separate 16 kbit/s line for data. At the time, modems normally ran at 300 bit/s; 1200 bit/s would not become common until the early 1980s, and the 2400 bit/s standard would not be completed until 1984. In this market, 16 kbit/s represented a significant advance in performance, in addition to being a separate channel that could coexist with the voice channels.
A key problem was that the customer might only have a single twisted pair line to the location of the handset, so the solution used in T1, with separate upstream and downstream connections, was not universally available. With analog connections, the solution was to use echo cancellation, but at the much higher bandwidth of the new concept this would not be so simple. A debate broke out between teams worldwide about the best solution to this problem; some promoted newer versions of echo cancellation, while others preferred the "ping pong" concept, in which the line would switch direction between send and receive so rapidly that it would not be noticeable to the user. John Cioffi had recently demonstrated that echo cancellation would work at these speeds, and further suggested moving directly to 1.5 Mbit/s performance using this concept. The suggestion was laughed off the table (his boss told him to "sit down and shut up"), but the echo cancellation concept taken up by Joe Lechleider eventually won the debate.
Meanwhile, the debate over the encoding scheme itself was also ongoing. As the new standard was to be international, this was even more contentious as several regional digital standards had emerged in the 1960s and 70s and merging them was not going to be easy. To further confuse issues, in 1984 the Bell System was broken up and the US center for development moved to the American National Standards Institute (ANSI) T1D1.3 committee. Thomas Starr of the newly formed Ameritech led this effort and eventually convinced the ANSI group to select the 2B1Q standard proposed by Peter Adams of British Telecom. This standard used an 80 kHz base frequency and encoded two bits per baud to produce the 160 kbit/s base rate. Ultimately Japan selected a different standard, and Germany selected one with three levels instead of four, but all of these could interchange with the ANSI standard.
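2B1Q ("two binary, one quaternary") maps each pair of bits onto one of four line levels, so the 80 kHz symbol rate carries two bits per symbol for the 160 kbit/s base rate mentioned earlier. A minimal Python sketch; the particular bit-pair-to-level assignment shown is for illustration.

LEVELS = {"10": +3, "11": +1, "01": -1, "00": -3}   # four quaternary line levels

def encode_2b1q(bits):
    assert len(bits) % 2 == 0
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(encode_2b1q("1011000111"))                 # 5 symbols carry 10 bits
print("bit rate =", 80_000 * 2, "bit/s")         # 80 kbaud x 2 bits/symbol = 160 kbit/s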
From an economic perspective, the European Commission sought to liberalize and regulate ISDN across the European Economic Community. The Council of the European Communities adopted Council Recommendation 86/659/EEC in December 1986 for its coordinated introduction within the framework of CEPT. ETSI (the European Telecommunications Standards Institute) was created by CEPT in 1988 and would develop the framework.
=== Rollout ===
With digital-quality voice made possible by ISDN, offering two separate lines and continuous data connectivity, there was an initial global expectation of high customer demand for such systems in both the home and office environments. This expectation was met with varying degrees of success across different regions.
In the United States, many changes in the market meant that the introduction of ISDN met a tepid reception. During the lengthy standardization process, new concepts rendered the system largely superfluous. In the office, multi-line digital switches like the Meridian Norstar took over telephone lines, while local area networks like Ethernet provided performance around 10 Mbit/s, which had become the baseline for inter-computer connections in offices. ISDN offered no real advantages in the voice role and was far from competitive in data. Additionally, modems had continued improving, introducing 9600 bit/s systems in the late 1980s and 14.4 kbit/s in 1991, which significantly eroded ISDN's value proposition for the home customer.
Conversely, in Europe, ISDN found fertile ground for deployment, driven by regulatory support, infrastructural needs, and the absence of comparable high-speed communication technologies at the time. The technology was widely embraced for its ability to digitalize the "last mile" of telecommunications, significantly enhancing the quality and efficiency of voice, data, and video transmission over traditional analog systems.
Meanwhile, Lechleider had proposed using ISDN's echo cancellation and 2B1Q encoding on existing T1 connections so that the distance between repeaters could be doubled to about 2 miles (3.2 km). Another standards war broke out, but in 1991 Lechleider's 1.6 Mbit/s "High-Speed Digital Subscriber Line" eventually won this process as well, after Starr drove it through the ANSI T1E1.4 group. A similar standard emerged in Europe to replace their E1 lines, increasing the sampling range from 80 to 100 kHz to achieve 2.048 Mbit/s. By the mid-1990s, these Primary Rate Interface (PRI) lines had largely replaced T1 and E1 between telephone company offices.
=== Replacement by ADSL ===
Lechleider also believed this higher-speed standard would be much more attractive to customers than ISDN had proven. Unfortunately, at these speeds, the systems suffered from a type of crosstalk known as "NEXT", for "near-end crosstalk". This made longer connections on customer lines difficult. Lechleider noted that NEXT only occurred when similar frequencies were being used, and could be diminished if one of the directions used a different carrier rate, but doing so would reduce the potential bandwidth of that channel. Lechleider suggested that most consumer use would be asymmetric anyway, and that providing a high-speed channel towards the user and a lower speed return would be suitable for many uses.
This work in the early 1990s eventually led to the ADSL concept, which emerged in 1995. An early supporter of the concept was Alcatel, who jumped on ADSL while many other companies were still devoted to ISDN. Krish Prabu stated that "Alcatel will have to invest one billion dollars in ADSL before it makes a profit, but it is worth it." They introduced the first DSL Access Multiplexers (DSLAM), the large multi-modem systems used at the telephony offices, and later introduced customer ADSL modems under the Thomson brand. Alcatel remained the primary vendor of ADSL systems for well over a decade.
ADSL quickly replaced ISDN as the customer-facing solution for last-mile connectivity. ISDN has largely disappeared on the customer side, remaining in use only in niche roles like dedicated teleconferencing systems and similar legacy systems.
== Design ==
Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice, video, and fax, over a single line. Multiple devices can be attached to the line, and used as needed. That means an ISDN line can take care of what were expected to be most people's complete communications needs (apart from broadband Internet access and entertainment television) at a much higher transmission rate, without forcing the purchase of multiple analog phone lines. It also refers to integrated switching and transmission in that telephone switching and carrier wave transmission are integrated rather than separate as in earlier technology.
=== Configurations ===
In ISDN, there are two types of channels, B (for "bearer") and D (for "data"). B channels are used for data (which may include voice), and D channels are intended for signaling and control (but can also be used for data).
There are two main ISDN implementations. Basic Rate Interface (BRI), also called basic rate access (BRA), consists of two B channels, each with a bandwidth of 64 kbit/s, and one D channel with a bandwidth of 16 kbit/s. Together these three channels can be designated as 2B+D. Primary Rate Interface (PRI), also called primary rate access (PRA) in Europe, contains a greater number of B channels and a D channel with a bandwidth of 64 kbit/s. The number of B channels for PRI varies according to the nation: in North America and Japan it is 23B+1D, with an aggregate bit rate of 1.544 Mbit/s (T1); in Europe, India and Australia it is 30B+2D, with an aggregate bit rate of 2.048 Mbit/s (E1). Broadband Integrated Services Digital Network (BISDN) is a further ISDN variant that is able to manage different types of services at the same time; it is primarily used within network backbones and employs ATM.
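The aggregate rates quoted above follow directly from per-channel arithmetic. A minimal illustrative sketch (the 8 kbit/s T1 framing overhead and the 64 kbit/s E1 timing-and-alarm slot are standard figures assumed here, not stated in the paragraph above):

    B = 8 * 8  # kbit/s: 8 bits sampled at 8 kHz gives the 64 kbit/s of one B channel (a DS0)

    bri = 2 * B + 16          # 2B+D: two B channels plus a 16 kbit/s D channel
    pri_t1 = 23 * B + B + 8   # 23B+D plus 8 kbit/s of T1 framing overhead
    pri_e1 = 30 * B + B + B   # 30B+D plus a 64 kbit/s timing-and-alarm slot

    print(bri, pri_t1, pri_e1)  # 144, 1544, 2048 (kbit/s)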
An alternative ISDN configuration bonds the B channels of an ISDN BRI line to provide a total duplex bandwidth of 128 kbit/s. This precludes use of the line for voice calls while the internet connection is in use. The B channels of several BRIs can also be bonded; a typical use is a 384 kbit/s videoconferencing channel, as the sketch below shows.
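Since bonded B channels simply add their 64 kbit/s rates, the number of BRIs needed for a target rate is a small ceiling-division exercise; a minimal sketch (the 384 kbit/s figure is the one given above):

    def bris_needed(target_kbits):
        # B channels required, rounded up, then BRIs at two B channels each
        channels = -(-target_kbits // 64)
        return -(-channels // 2)

    assert bris_needed(128) == 1  # one BRI with both B channels bonded
    assert bris_needed(384) == 3  # six B channels across three BRIs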
Using the bipolar with eight-zero substitution encoding technique, call data is transmitted over the data (B) channels, with the signaling (D) channels used for call setup and management. Once a call is set up, there is a simple 64 kbit/s synchronous bidirectional data channel (actually implemented as two simplex channels, one in each direction) between the end parties, lasting until the call is terminated. There can be as many calls as there are bearer channels, to the same or different end-points. Bearer channels may also be multiplexed into what may be considered single, higher-bandwidth channels via a process called B channel BONDING, via use of Multi-Link PPP "bundling", or by using an H0, H11, or H12 channel on a PRI.
The D channel can also be used for sending and receiving X.25 data packets and for connection to an X.25 packet network; this is specified in X.31. In practice, X.31 was only commercially implemented in the UK, France, Japan and Germany.
=== Reference points ===
A set of reference points is defined in the ISDN standard to refer to certain points between the telco and the end-user equipment.
R – defines the point between non-ISDN terminal equipment, terminal equipment 2 (TE2), and a terminal adapter (TA) which provides translation to and from such a device
S – defines the point between ISDN terminal equipment, terminal equipment 1 (TE1), or a TA and a Network Termination Type 2 (NT2) device
T – defines the point between the NT2 and network termination 1 (NT1) devices.
Most NT-1 devices can perform the functions of the NT2 as well, and so the S and T reference points are generally collapsed into the S/T reference point.
In North America, the NT1 device is considered customer premises equipment (CPE) and must be maintained by the customer; thus, the U interface is provided to the customer. In other locations, the NT1 device is maintained by the telco, and the S/T interface is provided to the customer. In India, service providers supply the U interface, and an NT1 may be provided by the service provider as part of the service offering.
=== Basic Rate Interface ===
The entry-level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s user-data service delivered over a pair of standard telephone copper wires. Including signaling, the 144 kbit/s overall payload rate is divided into two 64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D' channel or data channel). This is sometimes referred to as 2B+D.
The interface specifies the following network interfaces:
The U interface is a two-wire interface between the exchange and a network terminating unit, which is usually the demarcation point in non-North American networks.
The T interface is a serial interface between a computing device and a terminal adapter, which is the digital equivalent of a modem.
The S interface is a four-wire bus that ISDN consumer devices plug into; the S & T reference points are commonly implemented as a single interface labeled 'S/T' on a Network termination 1 (NT1).
The R interface defines the point between a non-ISDN device and a terminal adapter (TA) which provides translation to and from such a device.
BRI-ISDN is very popular in Europe but is much less common in North America. It is also common in Japan — where it is known as INS64.
=== Primary Rate Interface ===
The other ISDN access available is the Primary Rate Interface (PRI), which is carried over T-carrier (T1) with 24 time slots (channels) in North America, and over E-carrier (E1) with 32 channels in most other countries. Each channel provides transmission at a 64 kbit/s data rate.
With the E1 carrier, the available channels are divided into 30 bearer (B) channels, one data (D) channel, and one timing and alarm channel. This scheme is often referred to as 30B+2D.
In North America, PRI service is delivered via T1 carriers with only one data channel, often referred to as 23B+D, and a total data rate of 1544 kbit/s. Non-Facility Associated Signalling (NFAS) allows two or more PRI circuits to be controlled by a single D channel, which is sometimes called 23B+D + n*24B. D-channel backup allows for a second D channel in case the primary fails. NFAS is commonly used on a Digital Signal 3 (DS3/T3).
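The channel counts implied by NFAS follow directly from the "23B+D + n*24B" shorthand above; a minimal sketch (assuming a single shared D channel and no backup D channel):

    def nfas_b_channels(pris):
        # First PRI carries 23B+D; each further PRI contributes 24 B channels
        return 23 + 24 * (pris - 1)

    assert nfas_b_channels(1) == 23
    assert nfas_b_channels(2) == 47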
PRI-ISDN is popular throughout the world, especially for connecting private branch exchanges to the public switched telephone network (PSTN).
Even though many network professionals use the term ISDN to refer to the lower-bandwidth BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are commonplace.
=== Bearer channel ===
The bearer channel (B) is a standard 64 kbit/s voice channel of 8 bits sampled at 8 kHz with G.711 encoding. B-channels can also be used to carry data, since they are nothing more than digital channels.
Each one of these channels is known as a DS0.
Most B channels can carry a 64 kbit/s signal, but some were limited to 56 kbit/s because they traveled over robbed-bit signaling (RBS) trunks. This was commonplace in the 20th century but has since become less so.
=== X.25 ===
X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI line. X.25 over the D channel is used at many point-of-sale (credit card) terminals because it eliminates the modem setup and connects to the central system over the D channel, thereby eliminating the need for modems and making much better use of the central system's telephone lines.
X.25 was also part of an ISDN protocol called "Always On/Dynamic ISDN", or AO/DI. This allowed a user to have a constant multi-link PPP connection to the internet over X.25 on the D channel, and brought up one or two B channels as needed.
=== Frame Relay ===
In theory, Frame Relay can operate over the D channel of BRIs and PRIs, but it is seldom, if ever, used.
== Uses ==
=== Telephone industry ===
ISDN is a core technology in the telephone industry. A telephone network can be thought of as a collection of wires strung between switching systems. The common electrical specification for the signals on these wires is T1 or E1. Between telephone company switches, the signaling is performed via SS7. Normally, a PBX is connected via a T1 with robbed-bit signaling to indicate on-hook or off-hook conditions, and MF and DTMF tones to encode the destination number. ISDN signaling is much faster because messages can be sent far more quickly than numbers can be encoded as long (100 ms per digit) tone sequences. This results in faster call setup times. A greater number of features are also available, and fraud is reduced.
In common use, the term ISDN is often limited to Q.931 and related protocols, a set of signaling protocols for establishing and terminating circuit-switched connections and for providing advanced calling features to the user. Another usage was the deployment of videoconference systems, where a direct end-to-end connection is desirable. ISDN uses the H.320 standard for audio coding and video coding.
ISDN is also used as a smart-network technology intended to add new services to the public switched telephone network (PSTN) by giving users direct access to end-to-end circuit-switched digital services and as a backup or failsafe circuit solution for critical use data circuits.
=== Video conferencing ===
One of ISDN's successful use cases was videoconferencing, where even small improvements in data rates are useful but, more importantly, a direct end-to-end connection offered lower latency and better reliability than the packet-switched networks of the 1990s. The H.320 standard for audio coding and video coding was designed with ISDN in mind, and more specifically its 64 kbit/s basic data rate, including audio codecs such as G.711 (PCM) and G.728 (CELP), and discrete cosine transform (DCT) video codecs such as H.261 and H.263.
=== Broadcast industry ===
ISDN is used heavily by the broadcast industry as a reliable way of switching low-latency, high-quality, long-distance audio circuits. In conjunction with an appropriate codec using MPEG or various manufacturers' proprietary algorithms, an ISDN BRI can be used to send stereo bi-directional audio coded at 128 kbit/s with 20 Hz – 20 kHz audio bandwidth, although commonly the G.722 algorithm is used with a single 64 kbit/s B channel to send much lower latency mono audio at the expense of audio quality. Where very high quality audio is required, multiple ISDN BRIs can be used in parallel to provide a higher-bandwidth circuit-switched connection. BBC Radio 3 commonly makes use of three ISDN BRIs to carry a 320 kbit/s audio stream for live outside broadcasts. ISDN BRI services are used to link remote studios, sports grounds and outside broadcasts into the main broadcast studio. ISDN via satellite is used by field reporters around the world. It is also common to use ISDN for the return audio links to remote satellite broadcast vehicles.
In many countries, such as the UK and Australia, ISDN has displaced the older technology of equalised analogue landlines, with these circuits being phased out by telecommunications providers. Use of IP-based streaming codecs such as Comrex ACCESS and ipDTL is becoming more widespread in the broadcast sector, using broadband internet to connect remote studios.
=== Backup lines ===
Providing backup lines for businesses' inter-office and internet connectivity was a popular use of the technology.
== International deployment ==
A study by Germany's Federal Ministry of Education and Research found the following numbers of ISDN channels per 1,000 inhabitants in 2005:
Norway 401
Denmark 339
Germany 333
Switzerland 331
Japan 240
United Kingdom 160
Finland 160
Sweden 135
Italy 105
France 85
Spain 58
United States 47
=== Australia ===
Telstra provided ISDN services to business customers, offering five service types: ISDN2, ISDN2 Enhanced, ISDN10, ISDN20 and ISDN30. These fell into two groups: the Basic Rate services (ISDN2 and ISDN2 Enhanced) and the Primary Rate services (ISDN10/20/30). Telstra changed the minimum monthly charge for voice and data calls. Telstra announced that new sales of ISDN products would end on 31 January 2018, with the final exit from ISDN service and migration to replacement services completed on 31 May 2022.
=== France ===
Orange offers ISDN services under the product name Numeris (2B+D), available in a professional version (Duo) and a home version (Itoo). ISDN is generally known as RNIS in France and has widespread availability. The introduction of ADSL reduced ISDN use for data transfer and Internet access, although it remained common in more rural and outlying areas and for applications such as business voice and point-of-sale terminals. In 2023, Numeris services entered a phase-out process, to be replaced by VoIP services.
=== Germany ===
In Germany, ISDN was very popular with an installed base of 25 million channels (29% of all subscriber lines in Germany as of 2003 and 20% of all ISDN channels worldwide). Due to the success of ISDN, the number of installed analog lines was decreasing. Deutsche Telekom (DTAG) offered both BRI and PRI. Competing phone companies often offered ISDN only and no analog lines. However, these operators generally offered free hardware that also allows the use of POTS equipment, such as NTBAs ("Network Termination for ISDN Basic rate Access": small devices that bridge the two-wire UK0 line to the four-wire S0 bus) with integrated terminal adapters. Because of the widespread availability of ADSL services, ISDN was primarily used for voice and fax traffic.
Until 2007, ISDN (BRI) and ADSL/VDSL were often bundled on the same line, mainly because the combination of DSL with an analog line had no cost advantage over a combined ISDN-DSL line. This practice became an issue for the operators when vendors of ISDN technology stopped manufacturing it and spare parts became hard to come by. Phone companies then began introducing cheaper xDSL-only products using VoIP for telephony, partly in an effort to cut the cost of operating separate data and voice networks.
Since approximately 2010, most German operators have offered VoIP on top of DSL lines and have ceased offering ISDN lines. New ISDN lines have not been available in Germany since 2018; existing ISDN lines were phased out from 2016 onwards, with existing customers encouraged to move to DSL-based VoIP products.
Deutsche Telekom intended to complete its phase-out by 2018 but did not announce the completed transition until 2020; other providers, such as Vodafone, expected to complete their phase-out by 2022.
=== Greece ===
OTE, the incumbent telecommunications operator, offers ISDN BRI (BRA) services in Greece. Following the launch of ADSL in 2003, the importance of ISDN for data transfer began to decrease and is today limited to niche business applications with point-to-point requirements.
=== India ===
Bharat Sanchar Nigam Limited, Reliance Communications and Bharti Airtel are the largest communication service providers and offer both ISDN BRI and PRI services across the country. Reliance Communications and Bharti Airtel use digital loop carrier (DLC) technology to provide these services. With the introduction of broadband technology, the load on bandwidth is being absorbed by ADSL. ISDN continues to be an important backup network for point-to-point leased line customers such as banks, e-Seva Centers, Life Insurance Corporation of India, and SBI ATMs.
=== Japan ===
On April 19, 1988, Japanese telecommunications company NTT began offering nationwide ISDN services trademarked INS Net 64 and INS Net 1500, the fruition of NTT's independent research and trials, dating from the 1970s, into what it referred to as the INS (Information Network System).
Previously, in April 1985, Japanese digital telephone exchange hardware made by Fujitsu was used to experimentally deploy the world's first I interface ISDN. The I interface, unlike the older and incompatible Y interface, is what modern ISDN services use today.
Since 2000, NTT's ISDN offerings have been known as FLET's ISDN, incorporating the "FLET's" brand that NTT uses for all of its ISP offerings.
In Japan, the number of ISDN subscribers dwindled as alternative technologies such as ADSL, cable Internet access, and fiber to the home gained greater popularity. On November 2, 2010, NTT announced plans to migrate their backend from PSTN to the IP network from around 2020 to around 2025. For this migration, ISDN services will be retired, and fiber optic services are recommended as an alternative.
=== United Kingdom ===
In the United Kingdom, British Telecom (BT) provides ISDN2e (BRI) as well as ISDN30 (PRI). Until April 2006, it also offered services named Home Highway and Business Highway, which were BRI ISDN-based services that offered integrated analogue connectivity as well as ISDN. Later versions of the Highway products also included built-in USB sockets for direct computer access. Home Highway was bought by many home users, usually for Internet connection; although not as fast as ADSL, it was available before ADSL and in places where ADSL did not reach.
In early 2015, BT announced their intention to retire the UK's ISDN infrastructure by 2025.
=== United States and Canada ===
ISDN-BRI never gained popularity as a general use telephone access technology in Canada and the US, and remains a niche product. The service was seen as "a solution in search of a problem", and the extensive array of options and features were difficult for customers to understand and use. ISDN has long been known by derogatory backronyms highlighting these issues, such as It Still Does Nothing, Innovations Subscribers Don't Need, and I Still Don't kNow, or, from the supposed standpoint of telephone companies, I Smell Dollars Now.
Although various minimum bandwidths have been used in definitions of broadband Internet access, ranging from 64 kbit/s up to 1.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States FCC, as of 2008, defined broadband as anything above 768 kbit/s. Once the term "broadband" came to be associated with data rates incoming to the customer at 256 kbit/s or more, and alternatives like ADSL grew in popularity, the consumer market for BRI did not develop. Its only remaining advantage is that, while ADSL has a functional distance limitation that can be eased with ADSL loop extenders, BRI has a greater distance limit and can use repeaters. As such, BRI may be acceptable for customers who are too remote for ADSL. Widespread use of BRI was further stymied by some small North American CLECs, such as CenturyTel, giving up on it and not providing Internet access over it. However, AT&T in most states (especially the former SBC/SWB territory) will still install an ISDN BRI line anywhere a normal analog line can be placed.
ISDN-BRI is currently primarily used in industries with specialized and very specific needs. High-end videoconferencing hardware can bond up to 8 B-channels together (using a BRI circuit for every 2 channels) to provide digital, circuit-switched video connections to almost anywhere in the world. This is very expensive and is being replaced by IP-based conferencing, but where cost is less of a concern than predictable quality, and where a QoS-enabled IP network is not available, BRI remains the preferred choice.
Most modern non-VoIP PBXs use ISDN-PRI circuits. These are connected to the central office switch via T1 lines, replacing older analog two-way and direct inward dialing (DID) trunks. PRI is capable of delivering Calling Line Identification (CLID) in both directions so that the telephone number of an extension, rather than a company's main number, can be sent. It is still commonly used in recording studios and some radio programs, when a voice-over actor or host is in one studio conducting remote work but the director and producer are in a studio at another location. The ISDN protocol delivers channelized, not-over-the-Internet service, powerful call setup and routing features, faster setup and tear-down, superior audio fidelity compared to plain old telephone service (POTS), lower delay and, at higher densities, lower cost.
In 2013, Verizon announced it would no longer take orders for ISDN service in the Northeastern United States, signalling the beginning of the end for the technology in that region.
== See also ==
ISDN User Part
Common ISDN Application Programming Interface
Asynchronous Transfer Mode
B-ISDN
ETSI
List of device bandwidths
== Notes ==
== References ==
== External links ==
Cioffi, John (May 2011). "Lighting up copper". IEEE Communications Magazine. 49 (5): 30–43. doi:10.1109/MCOM.2011.5762795. S2CID 8661205. Archived from the original on 2021-05-01. Retrieved 2020-09-25.
Published recommendations available in English, French and Spanish (list), ITU, archived from the original on 2012-08-27, retrieved 2008-08-19
Fine, ISDN, Harvard, archived from the original on 2010-08-18, retrieved 2003-08-14
B, Ralph, ISDN, archived from the original on 2010-06-11, retrieved 2004-11-04
ISDN, Roblee, archived from the original on 2010-03-07, retrieved 2007-08-02
Earth-return telegraph is the system whereby the return path for the electric current of a telegraph circuit is provided by connection to the earth through an earth electrode. Using earth return saves a great deal of money on installation costs since it halves the amount of wire that is required, with a corresponding saving on the labour required to string it. The benefits of doing this were not immediately noticed by telegraph pioneers, but it rapidly became the norm after the first earth-return telegraph was put into service by Carl August von Steinheil in 1838.
Earth-return telegraph began to have problems towards the end of the 19th century due to the introduction of electric trams. These seriously disturbed earth-return operation, and some circuits were returned to the old metal-conductor return system. At the same time, the rise of telephony, which was even less tolerant of the interference on earth-return systems, started to displace electrical telegraphy altogether, bringing the earth-return technique in telecommunications to an end.
== Description ==
A telegraph line between two telegraph offices, like all electrical circuits, requires two conductors to form a complete circuit. This usually means two distinct metal wires in the circuit, but in the earth-return circuit one of these is replaced by connections to earth (also called ground) to complete the circuit. Connection to earth is made by means of metal plates with a large surface area buried deeply in the ground. These plates could be made of copper or galvanised iron. Other methods include connecting to metal gas or water pipes where these are available, or laying a long wire rope on damp ground. The latter method is not very reliable, but was common in India up to 1868.
Soil is a far poorer conductor than copper wire, but the Earth is such a large body that it effectively forms a conductor with an enormous cross-sectional area and hence high conductance. It is only necessary to ensure that there is good contact with the Earth at the two stations. To do this, the earth plates must be buried deep enough to always be in contact with moist soil. In arid areas this can be problematic, and operators were sometimes instructed to pour water on the earth plates to maintain the connection. The plates must also be large enough to pass sufficient current. For the ground circuit to have a conductance as good as the conductor it replaces, the surface area of the plate is made larger than the cross-sectional area of the conductor by the same factor as the resistivity of the ground exceeds the resistivity of copper, or whatever other metal is being used for the wire.
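To get a feel for this sizing rule, the resistivity ratio can be computed directly; a rough sketch with assumed values (soil resistivity varies enormously with moisture, so both figures below are illustrative only, not taken from the text):

    RHO_COPPER = 1.7e-8  # ohm-metres
    RHO_SOIL = 1.0e2     # ohm-metres, assumed moist soil

    wire_area = 7.9e-6   # m^2, an assumed wire of about 3 mm diameter

    # The same ratio governs the plate-area rule above and the effective
    # cross-section the earth path must present to match the wire:
    factor = RHO_SOIL / RHO_COPPER
    print(f"resistivity factor: {factor:.1e}")          # ~5.9e9
    print(f"cross-section: {wire_area * factor:.0f} m^2")  # ~46,000 m^2

The enormous factor illustrates the point of the paragraph above: the Earth's effectively unbounded cross-section, rather than any practical plate, does most of the conducting, so good moist-soil contact at the plates is the critical requirement.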
== Reason for use ==
The advantage of the earth-return system is that it reduces the amount of metal wire that would otherwise be required, a substantial saving on long telegraph lines that may run for hundreds, or even thousands, of miles. This advantage was not so apparent in early telegraph systems which often required multiple signal wires. All of the circuits in such a system could use the same single return conductor (unbalanced lines), so the cost saving would have been minimal. Examples of multiwire systems included Pavel Schilling's experimental system in 1832, which had six signal wires so that the Cyrillic alphabet could be binary coded, and the Cooke and Wheatstone five-needle telegraph in 1837. The latter did not require a return conductor at all because the five signal wires were always used in pairs with opposite polarity currents until code points for numerals were added.
The expense of multiwire systems rapidly led to single-signal-wire systems becoming the norm for long-distance telegraph. Around the time earth return was introduced, the two most widely used systems were the Morse system of Samuel Morse (from 1844) and the Cooke and Wheatstone one-needle telegraph (from 1843). A few two-signal-wire systems lingered on; the Cooke and Wheatstone two-needle system used on British railways, and the Foy-Breguet telegraph used in France. With the reduction in the number of signal wires, the cost of the return wire was much more significant, leading to earth return becoming the standard.
Sömmerring's telegraph was an electrochemical, rather than an electromagnetic, telegraph and is placed out of chronological order. It is shown here for comparison because it directly inspired Schilling's electromagnetic telegraph, although Schilling used a greatly reduced number of wires.
== History ==
=== Early experiments ===
Excluding experiments using a water return path, the first use of an earth return to complete an electric circuit was by William Watson in 1747. Watson, in a demonstration on Shooter's Hill, London, sent an electric current through 2,800 feet of iron wire, insulated with baked wood, with an earth-return path. Later that year he increased that distance to two miles. One of the first demonstrations of a water-return path was by John Henry Winkler, a professor in Leipzig, who used the River Pleisse in this way in an experiment on 28 July 1746. The first experimenter to test an earth-return circuit with a low-voltage battery rather than a high-voltage friction machine was Basse of Hameln in 1803. These early experiments were not aimed at producing a telegraph but rather were designed to determine the speed of electricity. In the event, the transmission of electrical signals proved to be faster than the experimenters were able to measure – indistinguishable from instantaneous.
Watson's result seems to have been unknown to, or forgotten by, early telegraph experimenters, who used a return conductor to complete the circuit. One early exception was a telegraph invented by Harrison Gray Dyar in 1826 using friction machines. Dyar demonstrated this telegraph around a race course on Long Island, New York, in 1828 using an earth-return circuit. The demonstration was an attempt to get backing for construction of a New York to Philadelphia line, but the project was unsuccessful (and is unlikely to have worked over a long distance); Dyar was quickly forgotten, and earth return had to be reinvented yet again.
=== First earth-return telegraph ===
The first telegraph put into service with an earth return is due to Carl August von Steinheil in 1838. Steinheil's discovery was independent of earlier work and he is often, inaccurately, cited as the inventor of the principle. Steinheil was working on providing a telegraph along the Nuremberg–Fürth railway line, a distance of five miles. Steinheil first attempted, at the suggestion of Carl Friedrich Gauss, to use the two rails of the track as the telegraph conductors. This failed because the rails were not well insulated from earth and there was consequently a conducting path between them. However, this initial failure made Steinheil realise that the earth could be used as a conductor and he then succeeded with only one wire and an earth return.
Steinheil realised that the "galvanic excitation" in the earth was not confined to the direct route between the two ends of the telegraph wire, but extended outwards indefinitely. He speculated that this might mean that telegraphy without any wires at all was possible; he may have been the first to consider wireless telegraphy as a real possibility. He succeeded in transmitting a signal 50 feet by electromagnetic induction, but this distance was not of practical use.
The use of earth-return circuits rapidly became the norm, helped along by Steinheil declining to patent the idea, which he wished to make freely available as a public service. However, Samuel Morse was still unaware of Steinheil's discovery when he installed the first telegraph line in the United States in 1844 using two copper wires. Earth return became so ubiquitous that some telegraph engineers appear not to have realised that early telegraphs had all used return wires. In 1856, a couple of decades after the introduction of earth return, Samuel Statham of the Gutta Percha Company and Wildman Whitehouse tried to patent a return wire and got as far as provisional protection.
=== Problems with electric power ===
The introduction of electric power, especially electric tram lines in the 1880s, seriously disturbed earth-return telegraph lines. The starting and stopping of the trams generated large electromagnetic spikes which overwhelmed code pulses on telegraph lines. That was particularly a problem on lines where high-speed automatic working was in use and, most especially, on submarine telegraph cables. The latter type could be thousands of miles long and the arriving signal was consequently small. On land, repeaters in the line would be used to regenerate the signal, but they were not available for submarine cables until the middle of the 20th century. Sensitive instruments like the syphon recorder were used to detect weak signals on long submarine cables, and they were easily disrupted by trams.
The problem caused by electric trams was so severe in some places that it led to the reintroduction of return conductors. A return conductor following the same path as the main conductor will have the same interference induced in it. Such common-mode interference can be entirely removed if both parts of the circuit are identical (a balanced line). One such case of interference occurred in 1897 in Cape Town, South Africa. The disruption was so great that not only was the buried cable through the city replaced with a balanced line, but a balanced submarine cable was laid for five or six nautical miles out to sea, where it was spliced on to the original cable.
The advent of telephony, which initially used the same earth-return lines used by telegraphy, made it essential to use balanced circuits, because telephone lines were even more susceptible to interference. One of the first to realise that all-metal circuits would solve the severe noise problems encountered on earth-return telephone circuits was John J. Carty, the future chief engineer of the American Telephone and Telegraph Company. Carty began installing metallic returns on lines under his control and reported that the noises immediately disappeared almost entirely.
== See also ==
Single-wire earth return, used for electric power distribution.
== Notes ==
== References ==
== Bibliography ==
Artemenko, Roman, "Pavel Schilling - inventor of the electromagnetic telegraph", PC Week, vol. 3, iss. 321, 29 January 2002 (in Russian).
Brooks, David, "Indian and American telegraphs", Journal of the Society of Telegraph Engineers, vol. 3, pp. 115–125, 1874.
Burns, Russel W., Communications: An International History of the Formative Years, IEE, 2004 ISBN 0863413277.
Calvert, James B., The Electromagnetic Telegraph, retrieved 14 April 2020.
Commissioners of Patents, Patents for Inventions: Abridgements of Specifications Relating to Electricity and Magnetism, Their Generation and Applications, George E. Eyre and William Spottiswoode, 1859. Statham and Whitehouse's claim for a return wire is on page 584.
Darling, Charles R., "Field telephones", The Electrical Review, vol. 77, no. 1,973, pp. 377–379, 17 September 1915.
Fahie, John Joseph, A History of Wireless Telegraphy, 1838–1899, Edinburgh and London: William Blackwood and Sons, 1899 LCCN 01-5391.
Fleming, John Ambrose, The Principles of Electric Wave Telegraphy, London: Longmans, 1910 OCLC 561016618.
Hawks, Ellison, "Pioneers of wireless", Wireless World, vol. 18, nos. 9 & 11, pp. 343–344, 421–422, 3 & 17 March 1926.
Hendrick, Burton J., The Age of Big Business, Cosimo, 2005 ISBN 1596050675.
Hubbard, Geoffrey, Cooke and Wheatstone and the Invention of the Electric Telegraph, Routledge, 2013 ISBN 1135028508.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 9780471205050.
Kahn, Douglas, Earth Sound Earth Signal: Energies and Earth Magnitude in the Arts, University of California Press, 2013 ISBN 0520257804.
King, W. James, "The development of electrical technology in the 19th century: The telegraph and the telephone", pp. 273–332 in, Contributions from the Museum of History and Technology: Papers 19–30, Smithsonian Institution, 1963 OCLC 729945946.
Margalit, Harry, Energy, Cities and Sustainability, Routledge, 2016 ISBN 1317528166.
Prescott, George Bartlett, History, Theory, and Practice of the Electric Telegraph, Boston: Ticknor and Fields, 1866 LCCN 17-10907.
Schwendler, Louis, Instructions for Testing Telegraph Lines and the Technical Arrangements of Offices, vol. 2, London: Trübner & Co., 1878 OCLC 637561329
Shiers, George, The Electric Telegraph: An Historical Anthology, Arno Press, 1977 OCLC 1067753076.
Stachurski, Richard, Longitude by Wire: Finding North America, University of South Carolina, 2009 ISBN 1570038015.
Trotter, A.P., "Disturbance of submarine cable working by electric tramways", Journal of the Institution of Electrical Engineers, vol. 26, iss. 130, pp. 501–514, July 1897.
"Discussion of Mr. Trotter's paper", op. cit., pp. 515–532.
Wheen, Andrew, Dot-Dash to Dot.Com: How Modern Telecommunications Evolved from the Telegraph to the Internet, Springer, 2010 ISBN 1441967605.
The Cooke and Wheatstone telegraph was an early electrical telegraph system dating from the 1830s invented by English inventor William Fothergill Cooke and English scientist Charles Wheatstone. It was a form of needle telegraph, and the first telegraph system to be put into commercial service. The receiver consisted of a number of needles that could be moved by electromagnetic coils to point to letters on a board. This feature was liked by early users who were unwilling to learn codes, and employers who did not want to invest in staff training.
In later systems, the letter board was dispensed with, and the code was read directly from the movement of the needles. This occurred because the number of needles was reduced, leading to more complex codes. The change was motivated by the economic need to reduce the number of telegraph wires used, which was related to the number of needles. The change became more urgent as the insulation of some of the early installations deteriorated, causing some of the original wires to be unusable. Cooke and Wheatstone's most successful system was eventually a one-needle system that continued in service into the 1930s.
Cooke and Wheatstone's telegraph played a part in the apprehension of the murderer John Tawell. Once it was known that Tawell had boarded a train to London, the telegraph was used to signal ahead to the terminus at Paddington and have him arrested there. The novelty of this use of the telegraph in crime-fighting generated a great deal of publicity and led to increased public acceptance and use of the telegraph.
== Inventors ==
The telegraph arose from a collaboration between William Fothergill Cooke and Charles Wheatstone, the latter best known today for the eponymous Wheatstone bridge. Their collaboration was not a happy one because their objectives differed. Cooke was an inventor and entrepreneur who wished to patent and commercially exploit his inventions. Wheatstone, on the other hand, was an academic with no interest in commercial ventures who intended to publish his results and allow others to make free use of them. This difference in outlook eventually led to a bitter dispute between the two men over priority for the invention. Their differences were taken to arbitration, with Marc Isambard Brunel acting for Cooke and John Frederic Daniell acting for Wheatstone. Cooke eventually bought out Wheatstone's interest in exchange for royalties.
Cooke had had some ideas for building a telegraph prior to his partnership with Wheatstone and had consulted scientist Michael Faraday for expert advice. In 1836, Cooke built both an experimental electrometer system and a mechanical telegraph involving a clockwork mechanism with an electromagnetic detent. However, much of the scientific knowledge for the model actually put into practice came from Wheatstone. Cooke's earlier ideas were largely abandoned.
== History ==
In January 1837, Cooke proposed to the directors of the Liverpool and Manchester Railway a design for a 60-code mechanical telegraph. This was too complicated for their purposes; the immediate need was for a simple signal communication between the Liverpool station and a rope-haulage engine house at the top of a steep incline through a long tunnel outside the station. Rope-haulage into main stations was common at this time to avoid noise and pollution, and in this case the gradient was too steep for the locomotive to ascend unaided. All that was required were a few simple signals such as an indication to the engine house to start hauling. Cooke was requested to build a simpler version with fewer codes, which he did by the end of April 1837. However, the railway decided to use instead a pneumatic telegraph equipped with whistles. Soon after this Cooke went into partnership with Wheatstone.
In May 1837 Cooke and Wheatstone patented a telegraph system that used a number of needles on a board that could be moved to point to letters of the alphabet. The patent recommended a five-needle system, but any number of needles could be used depending on the number of characters it was required to code. A four-needle system was installed between Euston and Camden Town in London on a rail line being constructed by Robert Stephenson between London and Birmingham. It was successfully demonstrated on 25 July 1837. This was a similar application to the Liverpool project. The carriages were detached at Camden Town and travelled under gravity into Euston. A system was needed to signal to an engine house at Camden Town to start hauling the carriages back up the incline to the waiting locomotive. As at Liverpool, the electric telegraph was in the end rejected in favour of a pneumatic system with whistles.
Cooke and Wheatstone had their first commercial success with a telegraph installed in 1838 on the Great Western Railway over the 13 miles (21 km) from Paddington station to West Drayton. Indeed, this was the first commercial telegraph in the world. This was a five-needle, six-wire system. The cables were originally installed underground in a steel conduit. However, the cables soon began to fail as a result of deteriorating insulation. As an interim measure, a two-needle system was used with three of the remaining working underground wires, which despite using only two needles had a greater number of codes. Since the new code had to be learned, not just read off the display, this was the first time in telegraph history that skilled telegraph operators were required.
When the line was extended to Slough in 1843, a one-needle, two-wire system was installed. Cooke also changed from running the cables in buried lead pipes to the less expensive and easier-to-maintain system of suspending uninsulated wires on poles from ceramic insulators, a system which he patented and which rapidly became the most common method. This extension was done at Cooke's own expense, as the railway company was unwilling to finance a system it still considered experimental. Up to this point, the Great Western had insisted on exclusive use and refused Cooke permission to open public telegraph offices. Cooke's new agreement gave the railway free use of the system in exchange for Cooke's right to open public offices, establishing a public telegraph service for the first time. A flat rate of one shilling was charged (unlike all later telegraph services, which charged per word), but many people paid this just to see the strange equipment.
From this point on, the use of the electric telegraph started to grow on the new railways being built from London. The London and Blackwall Railway (another rope-hauled application) was equipped with the Cooke and Wheatstone telegraph when it opened in 1840, and many others followed. The distance involved on the Blackwall Railway (four miles) was too far for steam signalling and the engineer, Robert Stephenson, strongly supported the electric solution. In February 1845, an 88-mile line from Nine Elms to Gosport was completed along the London and South Western Railway, far longer than any other line up to that time. The Admiralty paid half the capital cost and £1,500 per annum for a private two-needle telegraph on this line to connect it to its base in Portsmouth, finally replacing the optical telegraph. In September 1845, the financier John Lewis Ricardo and Cooke formed the Electric Telegraph Company. This company bought out the Cooke and Wheatstone patents and solidly established the telegraph business. In 1869 the company was nationalised and became part of the General Post Office. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were still in use at the end of the nineteenth century. Some remained in service in the 1930s.
The Cooke and Wheatstone telegraph was largely confined to the United Kingdom and the British Empire. However, it was also used in Spain for a time. After nationalisation of the telegraph sector in the UK, the Post Office slowly replaced the diverse systems it had inherited, including the Cooke and Wheatstone telegraph, with the Morse telegraph system.
=== Tawell arrest ===
Murder suspect John Tawell was apprehended following the use of a needle telegraph message from Slough to Paddington on 1 January 1845. This is thought to be the first use of the telegraph to catch a murderer. The message was:
A MURDER HAS GUST BEEN COMMITTED AT SALT HILL AND THE SUSPECTED MURDERER WAS SEEN TO TAKE A FIRST CLASS TICKET TO LONDON BY THE TRAIN WHICH LEFT SLOUGH AT 742 PM HE IS IN THE GARB OF A KWAKER WITH A GREAT COAT ON WHICH REACHES NEARLY DOWN TO HIS FEET HE IS IN THE LAST COMPARTMENT OF THE SECOND CLASS COMPARTMENT
The Cooke and Wheatstone system did not support punctuation, lower case, or some letters. Even the two-needle system omitted the letters J, Q, and Z; hence the misspellings of 'just' and 'Quaker'. This caused some difficulty for the receiving operator at Paddington who repeatedly requested a resend after receiving K-W-A which he assumed was a mistake. This continued until a small boy suggested the sending operator be allowed to complete the word, after which it was understood. After arriving, Tawell was followed to a nearby coffee shop by a detective and arrested there. Newspaper coverage of this incident gave a great deal of publicity to the electric telegraph and brought it firmly into public view.
The widely publicised arrest of Tawell was one of two events which brought the telegraph to greater public attention and led to its widespread use beyond railway signalling. The other event was the announcement by telegraph of the birth of Alfred Ernest Albert, second son of Queen Victoria. The news was published in The Times at the unprecedented speed of 40 minutes after the announcement.
=== Railway block working ===
The signalling block system is a train safety system that divides the track into blocks and uses signals to prevent another train entering a block until a train already in the block has left. The system was proposed by Cooke in 1842 in Telegraphic Railways or the Single Way as a safer way of working on single lines. Previously, separation of trains had relied on strict timetabling only, which was unable to allow for unforeseen events. The first use of block working was probably in 1839 when George Stephenson had a Cooke and Wheatstone telegraph installed in the Clay Cross Tunnel of the North Midland Railway. Instruments specific to block working were installed in 1841. Block working became the norm and remains so to the present day, except that modern technology has allowed fixed blocks to be replaced with moving blocks on the busiest railways.
== Operation ==
The Cooke and Wheatstone telegraph consisted of a number of magnetic needles which could be made to turn a short distance either clockwise or anti-clockwise by the magnetic field of an energising winding. The direction of movement was determined by the direction of the current in the telegraph wires. The board was marked with a diamond-shaped grid with a letter at each grid intersection, so arranged that when two needles were energised they would point to a specific letter.
The number of wires required by the Cooke and Wheatstone system is equal to the number of needles used. Cooke and Wheatstone's patent recommends five needles, and this was the number on their early demonstration models. The number of symbols that can be obtained using a code similar to the one the five-needle system used depends on the number of needles available; generalizing, with x needles it is possible to encode f(x) = x(x − 1) symbols.
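This count follows from choosing an ordered pair of distinct needles (the two needles of a pair deflect in opposite directions, so order matters); a minimal sketch:

    def symbols(needles):
        # Ordered pairs of distinct needles: needles * (needles - 1)
        return needles * (needles - 1)

    for n in range(2, 7):
        print(n, symbols(n))  # 2, 6, 12, 20, 30, ...

With five needles this gives 20 symbols, six short of the 26-letter alphabet, which is why six letters had to be omitted, as described below.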
At the sending end there were two rows of buttons, a pair of buttons for each coil in each row. The operator selected one button from each row. This connected two of the coils to the positive and negative ends of the battery respectively. The other ends of the coils were connected to the telegraph wires and thence to one end of the coils at the receiving station. The other ends of the receiving coils, while in receive mode, were all commoned together. Thus the current flowed through the same two coils at both ends and energised the same two needles. With this system the needles were always energised in pairs and always rotated in opposite directions.
=== Five-needle telegraph ===
The five-needle telegraph, with twenty possible needle combinations, was six codes short of being able to encode the complete alphabet. The letters omitted were C, J, Q, U, X and Z. A great selling point of this telegraph was that it was simple to use and required little operator training. There was no code to learn, as the letter being sent was visibly displayed to both the sending and receiving operators.
At some point, the ability to move a single needle independently was added. This required an additional conductor for a common return, possibly by means of an earth return. This dramatically increased the codespace available, but using arbitrary codes would have required more extensive operator training since the display could not be read on sight from the grid as the simple alphabetic codes were. Because of this, the additional functionality was only used to add numerals by pointing a needle to the numeral required marked around the edge of the board. The economic need to reduce the number of wires in the end proved a stronger incentive than simplicity of use and led Cooke and Wheatstone to develop the two-needle and one-needle telegraphs.
=== Two-needle telegraph ===
The two-needle telegraph required three wires, one for each needle and a common return. The coding was somewhat different from the five-needle telegraph and needed to be learned, rather than read from a display. The needles could move to the left or right either one, two, or three times in quick succession, or a single time in both directions in quick succession. Either needle, or both together, could be moved. This gave a total of 24 codes, one of which was taken up by the stop code. Thus, three letters were omitted: J, Q and Z, which were substituted with G, K and S respectively.
Originally, the telegraph was fitted with a bell that rang when another operator wanted attention. This proved so annoying that it was removed. It was found that the clicking of the needle against its endstop was sufficient to draw attention.
=== One-needle telegraph ===
This system was developed to replace the failing multi-wire telegraph on the Paddington to West Drayton line. It required only two wires, but used a more complex code at a slower transmission speed. Whereas the two-needle system needed a three-unit code (that is, up to three movements of the needles to represent each letter), the one-needle system used a four-unit code but had enough codes to encode the entire alphabet. Like the preceding two-needle system, the code units consisted of rapid deflections of the needle to either left or right in quick succession. The needle struck a post when it moved, causing it to ring. Different tones were provided for the left and right movements so that the operator could hear the needle's direction without looking at it.
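Reading the four-unit scheme as variable-length codes of one to four deflections, each to the left or right (this interpretation is an assumption based on the description above), the codespace works out comfortably larger than the alphabet:

    # Codes of 1 to 4 deflections, each deflection one of two directions (L or R):
    codes = sum(2 ** length for length in range(1, 5))
    print(codes)  # 30, enough for all 26 letters

This is consistent with the four-unit code covering the entire alphabet, where the three-unit two-needle scheme could not.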
== Codes ==
The codes were refined and adapted as they were used. By 1867 numerals had been added to the five-needle code. This was achieved through the provision of a sixth wire for common return making it possible to move just a single needle. With the original five wires it was only possible to move the needles in pairs and always in opposite directions since there was no common wire provided. Many more codes are theoretically possible with common return signalling, but not all of them can conveniently be used with a grid indication display. The numerals were worked in by marking them around the edge of the diamond grid. Needles 1 through 5 when energised to the right pointed to numerals 1 through 5 respectively, and to the left numerals 6 through 9 and 0 respectively. Two additional buttons were provided on the telegraph sets to enable the common return to be connected to either the positive or negative terminal of the battery according to the direction it was desired to move the needle.
Also by 1867, codes for Q and Z had been added to the one-needle code, but not, apparently, for J. However, codes for Q, Z, and J are marked on the plates of later needle telegraphs, together with six-unit codes for number shift and letter shift. Numerous compound codes were added for operator controls such as wait and repeat. These compounds are similar to the prosigns found in Morse code, where the two characters are run together without a character gap. The two-needle number shift and letter shift codes are also compounds, which is why they are conventionally written with an overbar.
== Explanatory notes ==
== Citations ==
== General and cited references ==
Beauchamp, Ken, History of Telegraphy, IET, 2001 ISBN 0852967926.
Bowers, Brian, Sir Charles Wheatstone: 1802–1875, IET, 2001 ISBN 0852961030.
Bowler, Peter J.; Morus, Iwan Rhys, Making Modern Science: A Historical Survey, University of Chicago Press, 2010 ISBN 0226068625.
Burns, Russel W., Communications: An International History of the Formative Years, IEE, 2004 ISBN 0863413277.
Cooke, William F., Telegraphic Railways or the Single Way, Simpkin, Marshall & Company, 1842 OCLC 213732219.
Duffy, Michael C., Electric Railways: 1880-1990, IEE, 2003, ISBN 9780852968055.
Guillemin, Amédée, The Applications of Physical Forces, Macmillan and Company, 1877 OCLC 5894380237.
Huurdeman, Anton A., The Worldwide History of Telecommunications, John Wiley & Sons, 2003 ISBN 0471205052.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David and Charles, 1973 OCLC 655205099.
Mercer, David, The Telephone: The Life Story of a Technology, Greenwood Publishing Group, 2006 ISBN 031333207X.
Shaffner, Taliaferro Preston, The Telegraph Manual, Pudney & Russell, 1859.
A needle telegraph is an electrical telegraph that uses indicating needles moved electromagnetically as its means of displaying messages. It is one of the two main types of electromagnetic telegraph, the other being the armature system, as exemplified by the telegraph of Samuel Morse in the United States. Needle telegraphs were widely used in Europe and the British Empire during the nineteenth century.
Needle telegraphs were suggested shortly after Hans Christian Ørsted discovered that electric currents could deflect compass needles in 1820. Pavel Schilling developed a telegraph using needles suspended by threads. This was intended for installation in Russia for government use, but Schilling died in 1837 before it could be implemented. In 1833 Carl Friedrich Gauss and Wilhelm Eduard Weber in Göttingen built a telegraph line that was used for scientific study and communication between university sites. In 1837 Carl August von Steinheil adapted Gauss and Weber's rather cumbersome apparatus for use on various German railways.
In England, William Fothergill Cooke started building telegraphs, initially based on Schilling's design. With Charles Wheatstone, Cooke produced a much improved design. This was taken up by several railway companies. Cooke's Electric Telegraph Company, formed in 1846, provided the first public telegraph service. The needle telegraphs of the Electric Telegraph Company and their rivals were the standard form of telegraphy for the better part of the nineteenth century in the United Kingdom. They continued in use even after the Morse telegraph became the official standard in the UK in 1870, and some were still in use well into the twentieth century.
== Early ideas ==
The history of the needle telegraph began with the landmark discovery, published by Hans Christian Ørsted on 21 April 1820, that an electric current deflected the needle of a nearby compass. Almost immediately, other scholars realised the potential this phenomenon had for building an electric telegraph. The first to suggest this was French mathematician Pierre-Simon Laplace. On 2 October, André-Marie Ampère, acting on Laplace's suggestion, sent a paper on this idea to the Paris Academy of Sciences. Ampère's (theoretical) telegraph had a pair of wires for each letter of the alphabet with a keyboard to control which pair was connected to a battery. At the receiving end, Ampère placed small magnets (needles) under the wires. The effect on the magnet in Ampère's scheme would have been very weak because he did not form the wire into a coil around the needle to multiply the magnetic effect of the current. Johann Schweigger had already invented the galvanometer (in September) using such a multiplier, but Ampère either had not yet got the news, or failed to realise its significance for a telegraph.
Peter Barlow investigated Ampère's idea, but thought it would not work. In 1824 he published his results, saying that the effect on the compass was seriously diminished "with only 200 feet of wire". Barlow, and other eminent academics of the time who agreed with him, were criticised by some writers for retarding the development of the telegraph. A decade passed between Ampère's paper being read and the first electromagnetic telegraphs being built.
== Development ==
=== Schilling telegraph ===
It was not until 1829 that the idea of applying Schweigger style multipliers to telegraph needles was mooted by Gustav Theodor Fechner in Leipzig. Fechner, in other respects following the scheme of Ampère, also suggested a pair of wires for each letter (twenty-four in the German alphabet) laid underground to connect Leipzig with Dresden. Fechner's idea was taken up by William Ritchie of the Royal Institution of Great Britain in 1830. Ritchie used twenty-six pairs of wires run across a lecture room as a demonstration of principle. Meanwhile, Pavel Schilling in Russia constructed a series of telegraphs also using Schweigger multipliers. The exact date that Schilling switched from developing electrochemical telegraphs to needle telegraphs is not known, but Hamel says he showed one in early development to Tsar Alexander I who died in 1825. In 1832, Schilling developed the first needle telegraph (and the first electromagnetic telegraph of any kind) intended for practical use. Tsar Nicholas I initiated a project to connect St. Petersburg with Kronstadt using Schilling's telegraph, but it was cancelled on Schilling's death in 1837.
Schilling's scheme had some drawbacks. Although it used far fewer wires than proposed by Ampère or used by Ritchie, his 1832 demonstration still used eight wires, which made the system expensive to install over very long distances. Schilling's scheme used a bank of six needle instruments which between them displayed a binary code representing a letter of the alphabet. Schilling did devise a code that allowed the letter code to be sent serially to a single needle instrument, but he found that the dignitaries he demonstrated the telegraph to could understand the six-needle version more readily. Transmission speed was very slow on the multi-needle telegraph, perhaps as low as four characters per minute, and even slower on the single-needle version. The reason for this was principally that Schilling had severely overdamped the movement of the needles by slowing them with a platinum paddle in a cup of mercury. Schilling's method of mounting the needle by suspending it by a silk thread over the multiplier also had practical difficulties. The instrument had to be carefully levelled before use and could not be moved or disturbed while in use.
=== Gauss and Weber telegraph ===
In 1833, Carl Friedrich Gauss and Wilhelm Eduard Weber set up an experimental needle telegraph between their laboratory at the University of Göttingen and the university astronomical observatory about a mile and a half away, where they were studying the Earth's magnetic field. The line consisted of a pair of copper wires on posts above rooftop height. The receiving instrument was a converted laboratory instrument whose so-called needle was a large bar magnet weighing a pound. In 1834, they replaced the magnet with an even heavier one, variously reported as 25, 30, and 100 pounds. The magnet moved so minutely that a telescope was required to observe a scale reflected from it by a mirror. The initial purpose of this line was not telegraphic at all; it was used to confirm the correctness or otherwise of the then-recent work of Georg Ohm, that is, to verify Ohm's law. They quickly found other uses, the first of which was the synchronisation of clocks in the two buildings. Within a few months, they developed a telegraph code that allowed them to send arbitrary messages. Signalling speeds were around seven characters per minute. In 1835, they replaced the batteries of their telegraph with a large magneto-electric apparatus which generated telegraph pulses as the operator moved a coil relative to a bar magnet. This machine was made by Carl August von Steinheil. The Gauss and Weber telegraph remained in daily service until 1838.
In 1836, the Leipzig–Dresden railway inquired whether the Gauss and Weber telegraph could be installed on their line. The laboratory instrument was much too cumbersome, and much too slow to be used in this way. Gauss asked Steinheil to develop something more practical for railway use. This he did, producing a compact needle instrument which also emitted sounds while it was receiving messages. The needle struck one of two bells, on the right and left respectively, when it was deflected. The two bells had different tones so that the operator could tell which way the needle had been deflected without constantly watching it.
Steinheil first installed his telegraph along five miles of track covering four stations around Munich. In 1838, he installed another system on the Nuremberg–Fürth railway line. Gauss suggested that he should use the rails as conductors and avoid installing wires entirely. This failed when Steinheil tried it because the rails were not well insulated from the ground, but in the course of this failure he realised that he could use the ground as one of the conductors. This was the first earth-return telegraph put into service anywhere.
== Commercial use ==
=== Cooke and Wheatstone telegraph ===
The most widely used needle system, and the first telegraph of any kind used commercially, was the Cooke and Wheatstone telegraph, employed in Britain and the British Empire in the 19th and early-20th centuries, due to Charles Wheatstone and William Fothergill Cooke. The inspiration to build a telegraph came in March 1836 when Cooke saw one of Schilling's needle instruments demonstrated by Georg Wilhelm Muncke in a lecture in Heidelberg (although he did not realise that the instrument was due to Schilling). Cooke was supposed to be studying anatomy, but immediately abandoned this and returned to England to develop telegraphy. He initially built a three-needle telegraph, but believing that needle telegraphs would always require multiple wires, he moved to mechanical designs. His first effort was a clockwork telegraph alarm, which later went into service with telegraph companies. He then invented a mechanical telegraph based on a musical snuff box. In this device the detent of the clockwork mechanism was released by the armature of an electromagnet. Cooke carried out this work extremely quickly. The needle telegraph was completed within three weeks, and the mechanical telegraph within six weeks of seeing Muncke's demonstration. Cooke attempted to interest the Liverpool and Manchester Railway in his mechanical telegraph for use as railway signalling, but it was rejected in favour of a system using steam whistles. Unsure of how far his telegraph could be made to work, Cooke consulted Michael Faraday and Peter Mark Roget. They put him in touch with eminent scientist Charles Wheatstone and the two then worked in partnership. Wheatstone suggested using a much improved needle instrument and they then developed a five-needle telegraph.
The Cooke and Wheatstone five-needle telegraph was a substantial improvement on the Schilling telegraph. The needle instruments were based on the galvanometer of Macedonio Melloni. They were mounted on a vertical board with the needles centrally pivoted. The needles could be directly observed and Schilling's delicate silk threads were entirely done away with. The system required five wires, a slight reduction on that used by Schilling, partly because the Cooke and Wheatstone system did not require a common wire. Instead of Schilling's binary code, current was sent through one wire to one needle's coil and returned via the coil and wire of another. This scheme was similar to that employed by Samuel Thomas von Sömmerring on his chemical telegraph, but with a much more efficient coding scheme. Sömmerring's code required one wire per character. Even better, the two needles energised were made to point to a letter of the alphabet. This allowed the apparatus to be used by unskilled operators without the need to learn a code – a key selling point to the railway companies the system was aimed at. Another advantage was that it was much faster at 30 characters per minute. It did not use heavy mercury as the damping fluid, but instead used a vane in air, a much better match for ideal damping.
The five-needle telegraph was first put into service with the Great Western Railway in 1838. However, it was soon dropped in favour of two-needle and single-needle systems. The cost of multiple wires proved to be a more important factor than the cost of training operators. In 1846, Cooke formed the Electric Telegraph Company with John Lewis Ricardo, the first company to offer a telegraph service to the public. They continued to sell needle telegraph systems to railway companies for signalling, but they also slowly built a national network for general use by businesses, the press, and the public. Needle telegraphs were officially superseded by the Morse telegraph when the UK telegraph industry was nationalised in 1870, but some continued in use well into the twentieth century.
=== Other systems ===
The Henley-Foster telegraph was a needle telegraph used by the British and Irish Magnetic Telegraph Company, the main rival to the Electric Telegraph Company. It was invented in 1848 by William Thomas Henley and George Foster. It was made in both single-needle and two-needle forms which in operation were similar to the corresponding Cooke and Wheatstone instruments. The unique feature of this telegraph was that it did not require batteries. The telegraph pulses were generated by coils moving through a magnetic field as the operator worked the handles of the machine to send messages. The Henley-Foster instrument was the most sensitive instrument available in the 1850s. It could consequently be operated over greater distances and on poorer-quality lines than other systems.
The Foy-Breguet telegraph was invented by Alphonse Foy and Louis-François-Clement Breguet in 1842, and used in France. The instrument display was arranged to mimic the French optical telegraph system, with the two needles taking on the same positions as the arms of the Chappe semaphore (the optical system widely used in France). This arrangement meant that operators did not need to be retrained when their telegraph lines were upgraded to the electrical telegraph. The Foy-Breguet telegraph is usually described as a needle telegraph, but electrically it is actually a type of armature telegraph. The needles are not moved by a galvanometer arrangement. They are instead moved by a clockwork mechanism that the operator must keep wound up. The detent of the clockwork is released by an electromagnetic armature which operates on the edges of a received telegraph pulse.
According to Stuart M. Hallas, needle telegraphs were in use on the Great Northern Line as late as the 1970s. The telegraph code used on these instruments was Morse code, but instead of the usual dots and dashes, which are pulses of different duration and the same polarity, needle instruments used pulses of the same duration and opposite polarity to represent the two code elements. This arrangement was commonly used on needle telegraphs and submarine telegraph cables in the 19th century after Morse code became the international standard.
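The mapping can be illustrated with a short sketch. It is a hypothetical illustration, not the specification of any particular instrument; in particular, the choice of which deflection stands for a dot is an assumption made here for demonstration.

```python
# Hypothetical sketch: Morse elements sent as equal-length pulses whose
# polarity (needle deflected left or right) distinguishes dot from dash.
# The dot -> left, dash -> right assignment is assumed for illustration.
MORSE = {"S": "...", "O": "---"}  # standard International Morse entries

def to_deflections(text: str) -> str:
    """Return the sequence of needle deflections for `text` ('L', 'R')."""
    letters = []
    for ch in text.upper():
        letters.append("".join("L" if e == "." else "R" for e in MORSE[ch]))
    return " ".join(letters)        # a gap still separates letters

print(to_deflections("SOS"))        # LLL RRR LLL
```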
== Pseudoscience ==
Sympathetic needles were a supposed 17th-century means of instantaneous communication at a distance using magnetised needles. Pointing one needle to a letter of the alphabet was supposed to cause its partner needle to point to the same letter at another location.
== References ==
== Bibliography ==
Bowers, Brian, Sir Charles Wheatstone: 1802–1875, IEE, 2001 ISBN 9780852961032.
Bright, Charles, Submarine Telegraphs, London: Crosby Lockwood, 1898 OCLC 776529627.
Dawson, Keith, "Electromagnetic telegraphy: early ideas, proposals and apparatus", pp. 113–142 in Hall, A. Rupert; Smith, Norman (eds), History of Technology, vol. 1, Bloomsbury Publishing, 2016 ISBN 1350017345.
Fahie, John Joseph, A History of Electric Telegraphy, to the Year 1837, London: E. & F.N. Spon, 1884 OCLC 559318239.
Garratt, G.R.M., "The early history of telegraphy", Philips Technical Review, vol. 26, no. 8/9, pp. 268–284, 21 April 1966.
Hallas, Stuart M., "The single needle telegraph", www.samhallas.co.uk, retrieved and archived 29 September 2019.
Hubbard, Geoffrey, Cooke and Wheatstone: And the Invention of the Electric Telegraph, Routledge, 2013 ISBN 1135028508.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 0471205052.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David and Charles, 1973 OCLC 655205099.
Mercer, David, The Telephone: The Life Story of a Technology, Greenwood Publishing Group, 2006 ISBN 9780313332074.
Phillips, Ronnie J., "Digital technology and institutional change from the gilded age to modern times: The impact of the telegraph and the internet", Journal of Economic Issues, vol. 34, iss. 2, pp. 267–289, June 2000.
Shaffner, Taliaferro Preston, The Telegraph Manual, Pudney & Russell, 1859 OCLC 258508686.
Taylor, William Bower, An Historical Sketch of Henry's Contribution to the Electro-magnetic Telegraph, Washington: Government Printing Office, 1879 OCLC 1046029882.
Yarotsky, A.V., "150th anniversary of the electromagnetic telegraph", Telecommunication Journal, vol. 49, no. 10, pp. 709–715, October 1982.
"The progress of the telegraph: part VII", Nature, vol. 12, pp. 110–113, 10 June 1875.
== External links ==
Media related to Needle telegraphs at Wikimedia Commons | Wikipedia/Needle_telegraph |
The next-generation network (NGN) is a body of key architectural changes in telecommunication core and access networks. The general idea behind the NGN is that one network transports all information and services (voice, data, and all sorts of media such as video) by encapsulating these into IP packets, similar to those used on the Internet. NGNs are commonly built around the Internet Protocol, and therefore the term all IP is also sometimes used to describe the transformation of formerly telephone-centric networks toward NGN.
NGN is a different concept from Future Internet, which is more focused on the evolution of Internet in terms of the variety and interactions of services offered.
== Introduction of NGN ==
According to ITU-T, the definition is:
A next-generation network (NGN) is a packet-based network which can provide services including Telecommunication Services and is able to make use of multiple broadband, quality of service-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users.
From a practical perspective, NGN involves three main architectural changes that need to be looked at separately:
In the core network, NGN implies a consolidation of several (dedicated or overlay) transport networks, each historically built for a different service, into one core transport network (often based on IP and Ethernet). It implies, among other things, the migration of voice from a circuit-switched architecture (PSTN) to VoIP, and also the migration of legacy services such as X.25 and Frame Relay (either commercial migration of the customer to a new service such as IP VPN, or technical migration by emulation of the "legacy service" on the NGN).
In the wired access network, NGN implies the migration from the dual system of legacy voice next to xDSL setup in local exchanges to a converged setup in which the DSLAMs integrate voice ports or VoIP, making it possible to remove the voice switching infrastructure from the exchange.
In the cable access network, NGN convergence implies migration of constant bit rate voice to CableLabs PacketCable standards that provide VoIP and SIP services. Both services ride over DOCSIS as the cable data layer standard.
In an NGN, there is a more defined separation between the transport (connectivity) portion of the network and the services that run on top of that transport. This means that whenever a provider wants to enable a new service, they can do so by defining it directly at the service layer without considering the transport layer – i.e. services are independent of transport details. Increasingly applications, including voice, tend to be independent of the access network (de-layering of network and applications) and will reside more on end-user devices (phone, PC, set-top box).
== Underlying technology components ==
Next-generation networks are based on Internet technologies including Internet Protocol (IP) and Multiprotocol Label Switching (MPLS). At the application level, Session Initiation Protocol (SIP) seems to be taking over from ITU-T H.323.
Initially H.323 was the most popular protocol, though its popularity decreased due to its original poor traversal of network address translation (NAT) and firewalls. For this reason as domestic VoIP services have been developed, SIP has been more widely adopted. However, in voice networks where everything is under the control of the network operator or telco, many of the largest carriers use H.323 as the protocol of choice in their core backbones. With the most recent changes introduced for H.323, it is now possible for H.323 devices to easily and consistently traverse NAT and firewall devices, opening up the possibility that H.323 may again be looked upon more favorably in cases where such devices encumbered its use previously. Nonetheless, most of the telcos are extensively researching and supporting IP Multimedia Subsystem (IMS), which gives SIP a major chance of being the most widely adopted protocol.
For voice applications one of the most important devices in NGN is a Softswitch – a programmable device that controls Voice over IP (VoIP) calls. It enables correct integration of different protocols within NGN. The most important function of the Softswitch is creating the interface to the existing telephone network, PSTN, through Signalling Gateways and Media Gateways. However, the Softswitch as a term may be defined differently by the different equipment manufacturers and have somewhat different functions.
The term Gatekeeper sometimes appears in NGN literature. This was originally a VoIP device, which converted voice and data from their analog or digital switched-circuit form (PSTN, SS7) to the packet-based one (IP) using gateways. It controlled one or more gateways. As soon as this kind of device started using the Media Gateway Control Protocol, the name was changed to Media Gateway Controller (MGC).
A Call Agent is a general name for devices/systems controlling calls.
The IP Multimedia Subsystem (IMS) is a standardised NGN architecture for an Internet media-services capability defined by the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP).
== Implementations ==
In the UK, another popular acronym was introduced by BT (British Telecom) as 21CN (21st Century Networks, sometimes mistakenly quoted as C21N) – this is another loose term for NGN and denotes BT's initiative to deploy and operate NGN switches and networks in the period 2006–2008, the aim being that BT would have only all-IP switches in its network by 2008. The concept was abandoned, however, in favor of maintaining current-generation equipment.
The first company in the UK to roll out a NGN was THUS plc which started deployment back in 1999. THUS' NGN contains 10,600 km of fibre optic cable with more than 190 points of presence throughout the UK. The core optical network uses dense wavelength-division multiplexing (DWDM) technology to provide scalability to many hundreds of gigabits per second of bandwidth, in line with growth demand. On top of this, the THUS backbone network uses MPLS technology to deliver the highest possible performance. IP/MPLS-based services carry voice, video and data traffic across a converged infrastructure, potentially allowing organisations to enjoy lower infrastructure costs, as well as added flexibility and functionality. Traffic can be prioritised with Classes of Service, coupled with Service Level Agreements (SLAs) that underpin quality of service performance guarantees. The THUS NGN accommodates seven Classes of Service, four of which are currently offered on MPLS IP VPN.
In the Netherlands, KPN is developing an NGN in a network transformation program called all-IP. Next-generation networks also extend into the messaging domain, and in Ireland, Openmind Networks has designed, built, and deployed Traffic Control to handle the demands and requirements of all-IP networks.
In Bulgaria, BTC (Bulgarian Telecommunications Company) implemented an NGN as the underlying network of its telco services in a large-scale project in 2004. The inherent flexibility and scalability of the new core network approach resulted in an unprecedented rise in the deployment of classical services such as POTS/ISDN, Centrex, ADSL, and VPN, as well as the implementation of higher bandwidths for metro and long-distance Ethernet/VPN services, cross-national transit, and WebTV/IPTV applications.
In February 2014, Deutsche Telekom revealed that its subsidiary Makedonski Telekom had become the first European incumbent to convert its PSTN infrastructure to an all-IP network. It took just over two years for all 290,000 fixed lines to be migrated onto the new platform. The capital investment, worth 14 million euros, made Macedonia the first country in South-East Europe whose network is fully based on Internet Protocol.
In Canada, startup Wind Mobile owned by Globalive is deploying an all-IP wireless backbone for its mobile phone service.
In mid-2005, China Telecom announced the commercial roll-out of its Next Generation Carrying Network, or CN2, using Internet Protocol Next-Generation Network (IP NGN) architecture. Its IPv6-capable backbone network leverages softswitches (the control layer) and protocols such as DiffServ and MPLS, which boost the performance of the bearer layer. The MPLS-optimized architecture also enables Frame Relay and ATM traffic to be transported over a Layer 2 VPN, which supports both legacy traffic and new IP services over a single IP/MPLS network.
== See also ==
5G, the most common implementation of NGN as of 2023
Computer network
Fixed-Mobile Convergence Alliance (FMCA)
Flat IP
Mobile VoIP
IP Multimedia Subsystem (IMS)
Nanoscale network
Network convergence
Next-generation network services
Telecommunications equipment
== References ==
Migrating TDM Networks to NGN | A US Case Study
== External links ==
ETSI TISPAN website
ECMA TR/91 "Enterprise Communication in Next Generation Corporate Networks (NGCN) involving Public Next Generation Networks (NGN) (Ecma-International, December 2005)" (also ISO/IEC DTR 26905 and ETSI TR 102 478)
ITU-T Focus Group on Next Generation Networks (FGNGN)
ITU-T NGN Management Focus Group
NGN enabled label
NGN Forum | Wikipedia/Next-generation_network |
A hydraulic telegraph (Greek: υδραυλικός τηλέγραφος) refers to two different semaphore systems involving the use of water-based mechanisms as a telegraph. The earliest one was developed in 4th-century BC Greece, while the other was developed in 19th-century AD Britain. The Greek system was deployed in combination with semaphoric fires, while the latter British system was operated purely by hydraulic fluid pressure.
Although both systems employed water in their sending and receiver devices, their transmission media were completely different. The ancient Greek system transmitted its semaphoric information to the receiver visually, which limited its use to line-of-sight distances in good visibility weather conditions only. The 19th-century British system used water-filled pipes to effect changes to the water level in the receiver unit (similar to a transparent water-filled flexible tube used as a level indicator), thus limiting its range to the hydraulic pressure that could be generated at the transmitter's device.
While the Greek device was extremely limited in the codes (and hence the information) it could convey, the British device was never deployed in operation other than for very short-distance demonstrations. Although the British device could be used in any visibility within its range of operation, it could not work in freezing temperatures without additional infrastructure to heat the pipes. This contributed to its impracticality.
== Greek hydraulic semaphore system ==
The ancient Greek design was described in the 4th century BC by Aeneas Tacticus and the 3rd century BC by the historian Polybius.
The system involved identical containers on separate hills that were not connected to each other; each container was filled with water, and a vertical rod floated within it. The rods were inscribed with various predetermined codes at various points along their height.
To send a message, the sending operator would use a torch to signal the receiving operator; once the two were synchronized, they would simultaneously open the spigots at the bottom of their containers. Water would drain out until the water level reached the desired code, at which point the sender would signal with his torch, and the operators would simultaneously close their spigots. Thus the length of time between the sender's torch signals could be correlated with specific predetermined codes and messages.
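As a rough illustration of this procedure, the sketch below simulates the timing scheme; the drain rate, the spacing of the inscribed codes, and the message list (taken from Polybius's examples quoted below) are assumed values used only for demonstration.

```python
# A minimal sketch of the synchronised-draining scheme described above.
# The drain rate, section spacing, and message list are assumed values.
MESSAGES = [                        # predetermined codes inscribed on the rod
    "Cavalry arrived in the country",
    "Heavy infantry",
    "Light-armed infantry",
    "Infantry and cavalry",
    "Ships",
    "Corn",
]

DRAIN_RATE = 1.0                    # rod sections passing the mouth per unit time

def send(message: str) -> float:
    """Sender: open the spigot, then signal 'stop' once the wanted section
    reaches the mouth; return the elapsed draining time."""
    return (MESSAGES.index(message) + 1) / DRAIN_RATE

def receive(elapsed: float) -> str:
    """Receiver: close the spigot after the same elapsed time and read the
    section now level with the mouth of the vessel."""
    return MESSAGES[int(elapsed * DRAIN_RATE) - 1]

elapsed = send("Ships")             # both spigots opened at the torch signal
print(receive(elapsed))             # -> "Ships"
```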
A contemporary description of the ancient telegraphic method was provided by Polybius. In The Histories, Polybius wrote:
Aeneas, the author of the work on strategy, [writing] to find a remedy for the difficulty, advanced matters a little, but his device still fell far short of our requirements, as can be seen from his description of it.
He says that those who are about to [communicate] urgent news to each other by fire signal should procure two earthenware vessels of exactly the same width and depth, the depth being some three cubits and the width one. Then they should have corks made a little narrower than the mouths of the vessels [so that the cork slides through the neck and drops easily into the vessel] and through the middle of each cork should pass a rod graduated in equal section of three finger-breadths, each clearly marked off from the next. In each section should be written the most evident and ordinary events that occur in war, e.g., on the first, "Cavalry arrived in the country," on the second "Heavy infantry," on the third "Light-armed infantry," next "Infantry and cavalry," next "Ships," next "Corn," and so on until we have entered in all the sections the chief contingencies of which, at the present time, there is a reasonable probability in wartime. Next, he tells us to bore holes in both vessels of exactly the same size, so that they allow exactly the same escape.
Then we are to fill the vessels with water and put on the corks with the rods in them and allow the water to flow through the two apertures. When this is done it is evident that, the conditions being precisely similar, in proportion as the water escapes the two corks will sink and the rods will disappear into the vessels. When by experiment it is seen that the rapidity of escape is in both cases the same, the vessels are to be conveyed to the places in which both parties are to look after the signals and deposited there. Now whenever any of the contingencies written on the rods occurs he tells us to raise a torch and to wait until the corresponding party raises another. When both the torches are clearly visible the signaler is to lower his torch and at once allow the water to escape through the aperture. Whenever, as the corks sink, the contingency you wish to communicate reaches the mouth of the vessel he tells the signaler to raise his torch and the receivers of the signal are to stop the aperture at once and to note which of the messages written on the rods is at the mouth of the vessel. This will be the message delivered, if the apparatus works at the same pace in both cases.
Modern experiments show that the data transfer rate can reach about 151 letters per hour.
== British hydraulic semaphore system ==
The British civil engineer Francis Whishaw, who later became a principal in the General Telegraph Company, publicized a hydraulic telegraph in 1838 but was unable to deploy it commercially. By applying pressure at a transmitter device connected to a water-filled pipe which travelled all the way to a similar receiver device, he was able to effect a change in the water level which would then indicate coded information to the receiver's operator.
The system was estimated to cost £200 per mile (1.6 km) and could convey a vocabulary of 12,000 words. The U.K.'s Mechanics Magazine in March 1838 described it as follows:
...a column of water [can] be conveniently employed to transmit information. Mr. Francis Whishaw has conveyed a column of water through sixty yards of pipe in the most convoluted form, and the two ends of the column being on a level, motion is no sooner given to one end than it is communicated through the whole sixty yards to the other end of the column. No perceptible interval elapses between the time of impressing motion on one end of the column and of communicating it to the other.
To each end of a column he attaches a float board with an index, and the depression of any given number of figures on one index, will be immediately followed by a corresponding rise of the float board and index at the other end. It is supposed that this simple longitudinal motion can be made to convey all kinds of information. It appears to us that the amount of information which can be conveyed by the motion in one direction only, of the water, or backward and forwards, must be limited. To make the mere motion backwards and forwards of a float board, indicated on a graduated index, convey a great number of words or letters, is the difficulty to be overcome.
The article concluded speculatively that the "... hydraulic telegraph may supersede the semaphore and the galvanic telegraph".
== See also ==
Byzantine beacon system
Fryctoria
Heliograph
Optical communication
Signal lamp
== References ==
== External links ==
Connected Earth Archived 2007-06-16 at the Wayback Machine | Wikipedia/Hydraulic_telegraph |
The Atlantic Telegraph Company was a company formed on 6 November 1856 to undertake and exploit a commercial telegraph cable across the Atlantic Ocean, the first such telecommunications link.
== History ==
Cyrus Field, American businessman and financier, set his sights on laying the first transatlantic underwater telegraph cable after having been contacted by Frederic Newton Gisborne who attempted to connect St. John's, Newfoundland to New York City, but failed due to lack of funding. After inquiring about the feasibility of a transatlantic underwater cable to Lieutenant Matthew Fontaine Maury of the U.S. Navy, Field formed an agreement with the Englishmen John Watkins Brett and Charles Tilston Bright to create the Atlantic Telegraph Company. It was incorporated in December, 1856 with £350,000 capital, raised principally in London, Liverpool, Manchester, and Glasgow. The board of directors was composed of eighteen members from the United Kingdom, nine from the United States, and three from Canada. The original three projectors were joined by E.O.W. Whitehouse, who oversaw the manufacturing of the cables as chief electrician. Curtis M. Lampson served as vice-chairman for over a decade.
The board recruited the physicist William Thomson (later Lord Kelvin), who had publicly disputed some of Whitehouse's claims. The two had a tense relationship before Whitehouse was dismissed when the first cable failed in 1858. Later that year, another attempt was made to connect North America and Europe. This attempt was completed on August 5, 1858 and was celebrated by an exchange of messages between Queen Victoria of England and President Buchanan of the United States using the new cable line.
When a second cable, under Thomson's supervision, was proposed, the Admiralty lent the hulks of HMS Amethyst and HMS Iris to the company in 1864; both ships were then extensively modified in 1865 for ferrying the Atlantic cable from the works at Enderby's wharf, in East Greenwich, London, to Great Eastern at her Sheerness mooring. A new subsidiary company, the Telegraph Construction and Maintenance Company, under the chairmanship of John Pender, was formed to execute the new venture.
The cable was coiled down into great cylindrical tanks at the wharf before being fed into Great Eastern. Amethyst and Iris transferred the 2,500 miles (4,022 km) of cable to Great Eastern, beginning in February 1865, an operation that took over three months.
On the failure of the expedition to lay the second cable in 1865, a third company was formed to raise the capital for a further attempt, the Anglo-American Telegraph Company. Both the hulks and Great Eastern were put to use again in 1866 and again in 1869.
The next expedition in 1866 was a success, also succeeding in recovering the lost second cable. The service generated revenues of £1,000 in its first day of operation.
The approximate price to send a telegram was $0.0003809 per word per mile (1.6 km).
The Atlantic Telegraph Company operated the only two trans-Atlantic cables without competition until 1869, when a French cable was laid. Shortly after this company was established, an agreement was made to coordinate pricing of telegraph services and share revenues, effectively combining the French and Anglo-American interests into one combine. A second French company, compagnie française du télégraphe de Paris à New-York, was established in 1879.
== Anglo-American Telegraph Company ==
The Anglo-American Telegraph Company was founded after the Atlantic Telegraph Company's failed attempt to lay a second cable in 1865. The new telegraph company took over the assets of the New York, Newfoundland, and London Telegraph Company and later merged with The French Transatlantic Cable Company in 1869. The new company set out to recover the lost cable using the CS Albany and CS Medway, working together with the Atlantic Telegraph Company until the two merged in 1873. It then went on to lay two more cables from Heart's Content, Newfoundland, to Valentia Island: one by CS Robert Lowe in 1873 and another by CS Minia in 1874.
== Archive ==
Secretariat records (two volumes) of the Anglo-American Telegraph Company, 1866-1869, are held by BT Archives.
== References ==
== Further reading ==
Sharlin, H.I. (1979). Lord Kelvin: The Dynamic Victorian. Pennsylvania State University Press. ISBN 0-271-00203-4. pp. 127–147.
Standage, T. (1998). The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's Online Pioneers. Phoenix. ISBN 0-7538-0703-3.
== External links ==
BT Archives official site Archived 2011-02-19 at the Wayback Machine
BT Archives online catalogue | Wikipedia/Atlantic_Telegraph_Company |
Acoustic telegraphy (also known as harmonic telegraphy) was a name for various methods of multiplexing (transmitting more than one) telegraph messages simultaneously over a single telegraph wire by using different audio frequencies or channels for each message. A telegrapher used a conventional Morse key to tap out the message in Morse code. The key pulses were transmitted as pulses of a specific audio frequency. At the receiving end a device tuned to the same frequency resonated to the pulses but not to others on the same wire.
Inventors who worked on the acoustic telegraph included Charles Bourseul, Thomas Edison, Elisha Gray, and Alexander Graham Bell. Their efforts to develop acoustic telegraphy, in order to reduce the cost of telegraph service, led to the invention of the telephone.
Some of Thomas Edison's devices used multiple synchronized tuning forks tuned to selected audio frequencies and which opened and closed electrical circuits at the selected audio frequencies. Acoustic telegraphy was similar in concept to present-day FDMA, or frequency-division multiple access, used with radio frequencies.
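A rough numerical sketch of this idea is shown below; the sample rate, keying interval, and channel frequencies are arbitrary assumed values, and the tuned receiver is idealised as a simple frequency correlation rather than a model of any historical instrument.

```python
# Hypothetical sketch: two on-off keyed audio tones share one "wire" and are
# separated at the receiver by correlating against each channel's frequency.
import numpy as np

FS = 8000                          # sample rate in Hz (assumed)
SYMBOL = 0.1                       # seconds per keying interval (assumed)
FREQS = {"A": 500.0, "B": 800.0}   # one audio frequency per channel (assumed)

def keyed_tone(bits, freq):
    """On-off key a tone: 1 = key down (tone present), 0 = key up."""
    t = np.arange(int(SYMBOL * FS)) / FS
    tone = np.sin(2 * np.pi * freq * t)
    return np.concatenate([tone * b for b in bits])

def channel_energy(signal, freq):
    """Per-interval energy of `signal` at `freq` (a crude tuned resonator)."""
    n = int(SYMBOL * FS)
    t = np.arange(n) / FS
    ref = np.exp(2j * np.pi * freq * t)
    chunks = signal.reshape(-1, n)
    return np.abs(chunks @ ref) / n

# Two telegraphers key different messages at the same time on one wire.
bits_a = [1, 0, 1, 1, 0]
bits_b = [0, 1, 1, 0, 1]
wire = keyed_tone(bits_a, FREQS["A"]) + keyed_tone(bits_b, FREQS["B"])

# Each receiving instrument responds only to its own frequency.
for name, freq in FREQS.items():
    energy = channel_energy(wire, freq)
    decoded = (energy > energy.max() / 2).astype(int).tolist()
    print(name, decoded)           # recovers the original keying per channel
```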
The word acoustic comes from the Greek akoustikos meaning hearing, as with hearing of sound waves in air. Acoustic telegraphy devices were electromechanical and made musical or buzzing or humming sound waves in air for a few feet. But the primary function of these devices was not to generate sound waves, but rather to generate alternating electrical currents at selected audio frequencies in wires which transmitted telegraphic messages electrically over long distances.
== Patents ==
U.S. patent 0,161,739 – Improvement in Transmitters and Receivers for Electric Telegraphs – Alexander Graham Bell, issued April 6, 1875
U.S. patent 0,166,095 – Electrical Telegraph for Transmitting Musical Tones – Elisha Gray, issued July 27, 1875; Reissue # 8559 Jan. 28, 1879
U.S. patent 0,173,618 – Improvement In Electro-Harmonic Telegraphs – Elisha Gray, issued February 15, 1876
U.S. patent 0,182,996 – Acoustic Telegraph – Thomas Edison, issued October 10, 1876
U.S. patent 0,185,507 – Improvement in Electro-Harmonic Multiplex Telegraphs – Thomas Edison, issued December 19, 1876
U.S. patent 0,186,330 – Acoustic Electric Telegraphs – Thomas Edison, issued January 16, 1877
U.S. patent 0,200,993 – Acoustic Telegraphs – Thomas Edison, issued March 5, 1878
U.S. patent 0,203,019 – Circuits for Acoustic or Telephonic Telegraphs – Thomas Edison, issued April 30, 1878
U.S. patent 0,235,142 – Acoustic Telegraph – Thomas Edison, issued December 7, 1880
The five Edison patents were assigned to Western Union Telegraph Company of New York.
== See also ==
Telegraph
Invention of the telephone
== References ==
Notes
Bibliography
Standage, Tom, The Victorian Internet, Berkley Books, New York (Penguin), 1998, ISBN 0-425-17169-8
D. Robertson. The Great Telephone Mystery, IEEE Review, Feb. 2006, Volume: 52, Issue: 2, pp. 44–48, ISSN 0953-5683, INSPEC Accession Number: 8770451, Current Version Published: 2006-02-27.
Brooke Clarke. Telephone Patents | Wikipedia/Acoustic_telegraphy |
The Chinese telegraph code, or Chinese commercial code, is a four-digit character encoding enabling the use of Chinese characters in electrical telegraph messages.
== Encoding and decoding ==
A codebook is provided for encoding and decoding the Chinese telegraph code. It shows one-to-one correspondence between Chinese characters and four-digit numbers from 0000 to 9999. Chinese characters are arranged and numbered in dictionary order according to their radicals and strokes. Each page of the book shows 100 pairs of a Chinese character and a number in a 10×10 table. The most significant two digits of a code matches the page number, the next digit matches the row number, and the least significant digit matches the column number, with 1 being the column on the far right. For example, the code 0022 for the character 中 (zhōng), meaning “center,” is given in page 00, row 2, column 2 of the codebook, and the code 2429 for the character 文 (wén), meaning “script,” is given in page 24, row 2, column 9. The PRC’s Standard Telegraph Codebook (Ministry of Post and Telecommunications 2002) provides codes for approximately 7,000 Chinese characters.
Senders convert their messages written with Chinese characters into a sequence of digits according to the codebook. For instance, the phrase 中文信息 (Zhōngwén xìnxī), meaning "information in Chinese," is rendered into the code as 0022 2429 0207 1873. It is transmitted using Morse code. Receivers decode the Morse code to get a sequence of digits, chop it into groups of four, and then decode them one by one by referring to the book. Because non-digit characters were not used, the Morse codes for digits could be abbreviated; for example, the four consecutive dashes of the digit 1 could be replaced with a single dash.
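The mechanics of the lookup can be shown with a short sketch; the four-entry codebook used here contains only the codes quoted above, standing in for the roughly 7,000 entries of the real codebook.

```python
# A minimal sketch using only the four codes quoted above; the real codebook
# maps roughly 7,000 characters in the same way.
CODEBOOK = {"中": "0022", "文": "2429", "信": "0207", "息": "1873"}
REVERSE = {code: char for char, code in CODEBOOK.items()}

def encode(text):
    """Chinese characters -> space-separated four-digit groups."""
    return " ".join(CODEBOOK[ch] for ch in text)

def decode(digits):
    """Space-separated four-digit groups -> Chinese characters."""
    return "".join(REVERSE[group] for group in digits.split())

def codebook_position(code):
    """Where a code sits in the printed book: (page, row, column)."""
    return int(code[:2]), int(code[2]), int(code[3])

print(encode("中文信息"))                # 0022 2429 0207 1873
print(decode("0022 2429 0207 1873"))     # 中文信息
print(codebook_position("2429"))         # (24, 2, 9) -> page 24, row 2, column 9
```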
The codebook also defines codes for Zhuyin alphabet, Latin alphabet, Cyrillic alphabet, and various symbols including special symbols for months, days in a month, and hours.
Senders may translate their messages into numbers by themselves or pay a small charge to have them translated by a telegrapher. Chinese expert telegraphers used to remember several thousands of codes of the most frequent use.
The Standard Telegraph Codebook gives an alternative three-letter code (AAA, AAB, ...) for Chinese characters. It compresses telegram messages and cuts international fees by 25% compared with the four-digit code.
=== Use ===
Looking up a character given a number is straightforward: page, row, column.
However, looking up a number given a character is more difficult, as it requires analyzing the character. The four-corner method was developed in the 1920s to allow people to more easily look up characters by the shape, and remains in use today as a Chinese input method for computers.
== History ==
The first Chinese telegraph code was invented following the introduction of telegraphy to China in 1871, when the Great Northern Telegraph Company laid a cable between Shanghai and Hong Kong, linking the territory of the Qing dynasty to the international telegraph system.: 36 Septime Auguste Viguier, a French customs officer in Shanghai, published a codebook of Chinese characters in 1872, supplanting an earlier work by Danish astronomer Hans Schjellerup. Schjellerup and Viguier selected a small subset of commonly used characters and assigned each character a four-digit code between 0001 and 9999. As a result, tens of thousands of Chinese characters were simply not included in telegraphy.: 36
Because the earlier code was insufficient and its characters were poorly ordered, Zheng Guanying compiled a new codebook in 1881. It remained in effect until the Ministry of Transportation and Communications printed a new book in 1929. In 1933, a supplement was added to the book.
After the establishment of the People's Republic of China in 1949, the codebook forked into two different versions, due to revisions made in the mainland China and Taiwan independently from each other. The mainland version, the Standard Telegraph Codebook, adopted simplified characters in 1983.
== Application ==
The Chinese telegraph code can be used as a Chinese input method for computers. Few ordinary computer users today master it, as it requires extensive rote memorization. However, the related four-corner method, which allows one to look up characters by shape, is still used.
Both the Hong Kong and Macau Resident Identity Cards display the Chinese telegraph code for the holder’s Chinese name. Business forms provided by the government and corporations in Hong Kong often require filling out telegraph codes for Chinese names. The codes help to input Chinese characters into a computer. When filling up the DS-160 form for the US visa, the Chinese telegraph codes are required if the applicant has a name in Chinese characters.
Chinese telegraph code is used extensively in law enforcement investigations worldwide that involve ethnic Chinese subjects, where variant phonetic spellings of Chinese names can create confusion. Dialectal differences (Mr. Wu in Mandarin becomes Mr. Ng in Cantonese (吳先生), while Mr. Wu in Cantonese would become Mr. Hu in Mandarin (胡先生)) and differing romanization systems (Mr. Xiao in the Hanyu Pinyin system, and Mr. Hsiao in the Wade–Giles system) can create serious problems for investigators, but can be remedied by application of the Chinese telegraph code. For instance, investigators following a subject in Taiwan named Hsiao Ai-Kuo might not know this is the same person known in mainland China as Xiao Aiguo and in Hong Kong as Siu Oi-Kwok until codes are checked for the actual Chinese characters to determine that all match as CTC: 5618/1947/0948 for 萧爱国 (simplified) / 蕭愛國 (traditional).
== See also ==
Code point
Four-corner method, a four-digit structural encoding method designed to aid lookup of telegraph codes
Telegraph code
Wiktionary page of Standard Telegraph Codebook (标准电码本(修订本)), 1983
== Notes ==
== References and bibliography ==
Baark, Erik. 1997. Lightning Wires: The Telegraph and China’s Technological Modernization, 1860–1890. Greenwood Press. ISBN 0-313-30011-9.
Baark, Erik. 1999. “Wires, codes, and people: The Great Northern Telegraph Company in China.” In China and Denmark: Relations Since 1674, edited by Kjeld Erik Brødsgaard and Mads Kirkebæk, Nordic Institute of Asian Studies, pp. 119–152. ISBN 87-87062-71-2.
Immigration Department of Hong Kong. 2006. Card face design of a smart identity card. Hong Kong Special Administrative District Government. Accessed on December 22, 2006.
Jacobsen, Kurt. 1997. “Danish watchmaker created the Chinese Morse system.” Morsum Magnificat, 51, pp. 14–19.
Lín Jìnyì (林 進益 / 林 进益), editor. 1984. 漢字電報コード変換表 Kanji denpō kōdo henkan hyō [Chinese character telegraph code conversion table] (In Japanese). Tokyo: KDD Engineering & Consulting.
Ministry of Post and Telecommunications (中央人民政府郵電部 / 中央人民政府邮电部 Zhōngyāng Rénmín Zhèngfǔ Yóudiànbù), editor. 1952. 標準電碼本 / 标准电码本 Biāozhǔn diànmǎběn [Standard telegraph codebook], 2nd edition (In Chinese). Beijing: Ministry of Post and Telecommunications.
Ministry of Post and Telecommunications (中华人民共和国邮电部 Zhōnghuá Rénmín Gònghéguó Yóudiànbù), editor. 2002. 标准电码本 Biāozhǔn diànmǎběn [Standard telegraph codebook], 修订本 xiūdìngběn [revised edition] (In Chinese). Beijing: 人民邮电出版社 Rénmín Yóudiàn Chūbǎnshè [People’s Post and Telecommunications Publishing]. ISBN 7-115-04219-5.
Reeds, James A. 2004. Chinese telegraph code (CTC). Accessed on December 25, 2006.
Shanghai City Local History Office (上海市地方志办公室 Shànghǎi Shì Dìfāngzhì Bàngōngshì). 2004. 专业志: 上海邮电志 Zhuānyèzhì: Shànghǎi yóudiànzhì [Industrial history: Post and communications history in Shanghai] (In Chinese). Accessed on December 22, 2006.
Stripp, Alan. 2002. Codebreaker in the Far East. Oxford University Press. ISBN 0-19-280386-7.
Tianjin Communications Corporation. 2004. 资费标准: 国内公众电报业务 Zīfèi biāozhǔn: Guónèi gōngzhòng diànbào yèwù [Rate standards: Domestic public telegraph service] (In Chinese). Accessed on December 26, 2006.
Viguier, Septime Auguste (威基謁 / 威基谒 Wēijīyè). 1872. 電報新書 / 电报新书 Diànbào xīnshū [New book for the telegraph] (In Chinese). Published in Shanghai.
Viguier, Septime Auguste (威基謁 / 威基谒 Wēijīyè) and Dé Míngzài (德 明在). 1871. 電信新法 / 电信新法 Diànxìn xīnfǎ [New method for the telegraph] (In Chinese).
Yasuoka Kōichi (安岡 孝一) and Yasuoka Motoko (安岡 素子). 1997. Why is “唡” included in JIS X 0221? (In Japanese). IPSJ SIG Technical Report, 97-CH-35, pp. 49–54.
Yasuoka Kōichi (安岡 孝一) and Yasuoka Motoko (安岡 素子). 2006. 文字符号の歴史: 欧米と日本編 Moji fugō no rekishi: Ōbei to Nippon hen [A history of character codes in Japan, America, and Europe] (In Japanese). Tokyo: 共立出版 Kyōritsu Shuppan ISBN 4-320-12102-3.
== External links ==
Chinese Commercial/Telegraph Code Lookup by NJStar
標準電碼本 (中文商用電碼) Standard telegraph code (Chinese commercial code) (in Chinese)
Unihan database from Unicode Consortium: includes mappings between Unicode and Mainland or Taiwan versions of the telegraph code (kMainlandTelegraph, kTaiwanTelegraph, in Unihan_OtherMappings.txt). | Wikipedia/Chinese_telegraph_code |
Telegraphy is the long-distance transmission of messages where the sender uses symbolic codes, known to the recipient, rather than a physical exchange of an object bearing the message. Thus flag semaphore is a method of telegraphy, whereas pigeon post is not. Ancient signalling systems, although sometimes quite extensive and sophisticated as in China, were generally not capable of transmitting arbitrary text messages. Possible messages were fixed and predetermined, so such systems are not true telegraphs.
The earliest true telegraph put into widespread use was the Chappe telegraph, an optical telegraph invented by Claude Chappe in the late 18th century. The system was used extensively in France, and European nations occupied by France, during the Napoleonic era. The electric telegraph started to replace the optical telegraph in the mid-19th century. It was first taken up in Britain in the form of the Cooke and Wheatstone telegraph, initially used mostly as an aid to railway signalling. This was quickly followed by a different system developed in the United States by Samuel Morse. The electric telegraph was slower to develop in France due to the established optical telegraph system, but an electrical telegraph was put into use with a code compatible with the Chappe optical telegraph. The Morse system was adopted as the international standard in 1865, using a modified Morse code developed in Germany in 1848.
The heliograph is a telegraph system using reflected sunlight for signalling. It was mainly used in areas where the electrical telegraph had not been established and generally used the same code. The most extensive heliograph network established was in Arizona and New Mexico during the Apache Wars. The heliograph was standard military equipment as late as World War II. Wireless telegraphy developed in the early 20th century became important for maritime use, and was a competitor to electrical telegraphy using submarine telegraph cables in international communications.
Telegrams became a popular means of sending messages once telegraph prices had fallen sufficiently. Traffic became high enough to spur the development of automated systems—teleprinters and punched tape transmission. These systems led to new telegraph codes, starting with the Baudot code. However, telegrams were never able to compete with the letter post on price, and competition from the telephone, which removed their speed advantage, drove the telegraph into decline from 1920 onwards. The few remaining telegraph applications were largely taken over by alternatives on the internet towards the end of the 20th century.
== Terminology ==
The word telegraph (from Ancient Greek: τῆλε (têle) 'at a distance' and γράφειν (gráphein) 'to write') was coined by the French inventor of the semaphore telegraph, Claude Chappe, who also coined the word semaphore.
A telegraph is a device for transmitting and receiving messages over long distances, i.e., for telegraphy. The word telegraph alone generally refers to an electrical telegraph. Wireless telegraphy is transmission of messages over radio with telegraphic codes.
Contrary to the extensive definition used by Chappe, Morse argued that the term telegraph can strictly be applied only to systems that transmit and record messages at a distance. This is to be distinguished from semaphore, which merely transmits messages. Smoke signals, for instance, are to be considered semaphore, not telegraph. According to Morse, telegraph dates only from 1832 when Pavel Schilling invented one of the earliest electrical telegraphs.
A telegraph message sent by an electrical telegraph operator or telegrapher using Morse code (or a printing telegraph operator using plain text) was known as a telegram. A cablegram was a message sent by a submarine telegraph cable, often shortened to "cable" or "wire". The suffix -gram is derived from ancient Greek: γραμμα (gramma), meaning something written, i.e. telegram means something written at a distance and cablegram means something written via a cable, whereas telegraph implies the process of writing at a distance.
Later, a Telex was a message sent by a Telex network, a switched network of teleprinters similar to a telephone network.
A wirephoto or wire picture was a newspaper picture that was sent from a remote location by a facsimile telegraph. A diplomatic telegram, also known as a diplomatic cable, is a confidential communication between a diplomatic mission and the foreign ministry of its parent country. These continue to be called telegrams or cables regardless of the method used for transmission.
== History ==
=== Early signalling ===
Passing messages by signalling over distance is an ancient practice. One of the oldest examples is the signal towers of the Great Wall of China. By 400 BC, signals could be sent by beacon fires or drum beats, and by 200 BC complex flag signalling had developed. During the Han dynasty (202 BC – 220 AD), signallers mainly used flags and wood fires—via the light of the flames swung high into the air at night, and via dark smoke produced by the addition of wolf dung during the day—to send signals. By the Tang dynasty (618–907) a message could be sent 1,100 kilometres (700 mi) in 24 hours. The Ming dynasty (1368–1644) used artillery as another possible signalling method. While the signalling was complex (for instance, flags of different colours could be used to indicate enemy strength), only predetermined messages could be sent. The Chinese signalling system extended well beyond the Great Wall. Signal towers away from the wall were used to give early warning of an attack. Others were built even further out as part of the protection of trade routes, especially the Silk Road.
Signal fires were widely used in Europe and elsewhere for military purposes. The Roman army made frequent use of them, as did their enemies, and the remains of some of the stations still exist. Few details have been recorded of European/Mediterranean signalling systems and the possible messages. One of the few for which details are known is a system invented by Aeneas Tacticus (4th century BC). Tacticus's system had water-filled pots at the two signal stations which were drained in synchronisation. Annotation on a floating scale indicated which message was being sent or received. Signals sent by means of torches indicated when to start and stop draining to keep the synchronisation.
None of the signalling systems discussed above are true telegraphs in the sense of a system that can transmit arbitrary messages over arbitrary distances. Lines of signalling relay stations can send messages to any required distance, but all these systems are limited to one extent or another in the range of messages that they can send. A system like flag semaphore, with an alphabetic code, can certainly send any given message, but the system is designed for short-range communication between two persons. An engine order telegraph, used to send instructions from the bridge of a ship to the engine room, fails to meet both criteria; it has a limited distance and very simple message set. There was only one ancient signalling system described that does meet these criteria. That was a system using the Polybius square to encode an alphabet. Polybius (2nd century BC) suggested using two successive groups of torches to identify the coordinates of the letter of the alphabet being transmitted. The number of said torches held up signalled the grid square that contained the letter. There is no definite record of the system ever being used, but there are several passages in ancient texts that some think are suggestive. Holzmann and Pehrson, for instance, suggest that Livy is describing its use by Philip V of Macedon in 207 BC during the First Macedonian War. Nothing else that could be described as a true telegraph existed until the 17th century.: 26–29 Possibly the first alphabetic telegraph code in the modern era is due to Franz Kessler who published his work in 1616. Kessler used a lamp placed inside a barrel with a moveable shutter operated by the signaller. The signals were observed at a distance with the newly invented telescope.: 32–34
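The Polybius square scheme described above can be illustrated with a short sketch; the 5×5 Latin-alphabet layout with I and J sharing a cell is the usual modern convention and is assumed here, whereas Polybius himself described the Greek alphabet.

```python
# Minimal sketch of Polybius-square signalling: each letter is sent as two
# torch counts, a row group then a column group. The 5x5 Latin layout with
# I/J sharing a cell is an assumed modern convention.
SQUARE = [
    "ABCDE",
    "FGHIK",
    "LMNOP",
    "QRSTU",
    "VWXYZ",
]

def to_torches(letter: str):
    """Return (torches in the first group, torches in the second group)."""
    letter = letter.upper().replace("J", "I")
    for row, letters in enumerate(SQUARE, start=1):
        col = letters.find(letter)
        if col != -1:
            return row, col + 1
    raise ValueError(f"not encodable: {letter!r}")

def from_torches(row: int, col: int) -> str:
    """Receiver: read the letter at the signalled grid coordinates."""
    return SQUARE[row - 1][col - 1]

print([to_torches(c) for c in "HELP"])  # [(2, 3), (1, 5), (3, 1), (3, 5)]
print(from_torches(2, 3))               # 'H'
```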
=== Optical telegraph ===
An optical telegraph is a telegraph consisting of a line of stations in towers or natural high points which signal to each other by means of shutters or paddles. Signalling by means of indicator pointers was called semaphore. Early proposals for an optical telegraph system were made to the Royal Society by Robert Hooke in 1684 and were first implemented on an experimental level by Sir Richard Lovell Edgeworth in 1767. The first successful optical telegraph network was invented by Claude Chappe and operated in France from 1793. The two most extensive systems were Chappe's in France, with branches into neighbouring countries, and the system of Abraham Niclas Edelcrantz in Sweden.: ix–x, 47
During 1790–1795, at the height of the French Revolution, France needed a swift and reliable communication system to thwart the war efforts of its enemies. In 1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. On 2 March 1791, at 11 am, they sent the message "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory) between Brulon and Parce, a distance of 16 kilometres (10 mi). This first system used a combination of black and white panels, clocks, telescopes, and codebooks to send the message.
In 1792, Claude was appointed Ingénieur-Télégraphiste and charged with establishing a line of stations between Paris and Lille, a distance of 230 kilometres (140 mi). It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. A decision to replace the system with an electric telegraph was made in 1846, but it took a decade before it was fully taken out of service. The fall of Sevastopol was reported by Chappe telegraph in 1855.: 92–94
The Prussian system was put into effect in the 1830s. However, they were highly dependent on good weather and daylight to work and even then could accommodate only about two words per minute. The last commercial semaphore link ceased operation in Sweden in 1880. As of 1895, France still operated coastal commercial semaphore telegraph stations, for ship-to-shore communication.
=== Electrical telegraph ===
Early ideas for an electric telegraph included a proposal of 1753 to use electrostatic deflections of pith balls, and proposals for electrochemical telegraphs, signalling with bubbles in acid, by Campillo in 1804 and von Sömmering in 1809. The first experimental system over a substantial distance was by Ronalds in 1816 using an electrostatic generator. Ronalds offered his invention to the British Admiralty, but it was rejected as unnecessary, the existing optical telegraph connecting the Admiralty in London to their main fleet base in Portsmouth being deemed adequate for their purposes. As late as 1844, after the electrical telegraph had come into use, the Admiralty's optical telegraph was still used, although it was accepted that poor weather ruled it out on many days of the year.: 16, 37 France had an extensive optical telegraph system dating from Napoleonic times and was even slower to take up electrical systems.: 217–218
Eventually, electrostatic telegraphs were abandoned in favour of electromagnetic systems. An early experimental system (Schilling, 1832) led to a proposal to establish a telegraph between St Petersburg and Kronstadt, but it was never completed. The first operative electric telegraph (Gauss and Weber, 1833) connected Göttingen Observatory to the Institute of Physics about 1 km away during experimental investigations of the geomagnetic field.
The first commercial telegraph was by Cooke and Wheatstone following their English patent of 10 June 1837. It was demonstrated on the London and Birmingham Railway in July of the same year. In July 1839, a five-needle, five-wire system was installed to provide signalling over a record distance of 21 km on a section of the Great Western Railway between London Paddington station and West Drayton. However, in trying to get railway companies to take up his telegraph more widely for railway signalling, Cooke was rejected several times in favour of the more familiar, but shorter range, steam-powered pneumatic signalling. Even when his telegraph was taken up, it was considered experimental and the company backed out of a plan to finance extending the telegraph line out to Slough. However, this led to a breakthrough for the electric telegraph, as up to this point the Great Western had insisted on exclusive use and refused Cooke permission to open public telegraph offices. Cooke extended the line at his own expense and agreed that the railway could have free use of it in exchange for the right to open it up to the public.: 19–20
Most of the early electrical systems required multiple wires (Ronalds' system was an exception), but the system developed in the United States by Morse and Vail was a single-wire system. This was the system that first used the soon-to-become-ubiquitous Morse code. By 1844, the Morse system connected Baltimore to Washington, and by 1861 the west coast of the continent was connected to the east coast. The Cooke and Wheatstone telegraph, in a series of improvements, also ended up with a one-wire system, but still using their own code and needle displays.
The electric telegraph quickly became a means of more general communication. The Morse system was officially adopted as the standard for continental European telegraphy in 1851 with a revised code, which later became the basis of International Morse Code. However, Great Britain and the British Empire continued to use the Cooke and Wheatstone system, in some places as late as the 1930s. Likewise, the United States continued to use American Morse code internally, requiring translation operators skilled in both codes for international messages.
=== Railway telegraphy ===
Railway signal telegraphy was developed in Britain from the 1840s onward. It was used to manage railway traffic and to prevent accidents as part of the railway signalling system. On 12 June 1837 Cooke and Wheatstone were awarded a patent for an electric telegraph. This was demonstrated between Euston railway station—where Wheatstone was located—and the engine house at Camden Town—where Cooke was stationed, together with Robert Stephenson, the London and Birmingham Railway line's chief engineer. The messages were for the operation of the rope-haulage system for pulling trains up the 1 in 77 bank. The world's first permanent railway telegraph was completed in July 1839 between London Paddington and West Drayton on the Great Western Railway with an electric telegraph using a four-needle system.
The concept of a signalling "block" system was proposed by Cooke in 1842. Railway signal telegraphy did not change in essence from Cooke's initial concept for more than a century. In this system each line of railway was divided into sections or blocks of varying length. Entry to and exit from a block was to be authorised by electric telegraph and signalled by line-side semaphore signals, so that only a single train could occupy a block at any one time. In Cooke's original system, a single-needle telegraph was adapted to indicate just two messages: "Line Clear" and "Line Blocked". The signaller would adjust his line-side signals accordingly. As first implemented in 1844, each station had as many needles as there were stations on the line, giving a complete picture of the traffic. As lines expanded, pairs of single-needle instruments were adopted, one pair for each block in each direction.
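The block principle just described behaves like a simple two-state machine per block. The following sketch is purely illustrative: the class and method names (BlockSection, offer_train, train_clears) are assumed for the example and are not taken from any historical instrument; only the two indications and the one-train-per-block rule come from the paragraph above.

```python
# Minimal sketch of two-indication block working, for illustration only.
# Names (BlockSection, offer_train, train_clears) are hypothetical; only the
# "Line Clear" / "Line Blocked" indications and the one-train-per-block rule
# come from the text above.

LINE_CLEAR = "Line Clear"
LINE_BLOCKED = "Line Blocked"


class BlockSection:
    """One block of line; the line-side signal follows the telegraphed indication."""

    def __init__(self, name: str):
        self.name = name
        self.indication = LINE_CLEAR  # no train in the block to start with

    def offer_train(self, train: str) -> bool:
        """A signaller asks to admit a train; allowed only if the block is clear."""
        if self.indication == LINE_BLOCKED:
            return False  # signal stays at danger, train must wait
        self.indication = LINE_BLOCKED  # telegraph ahead: block occupied
        print(f"{train} enters {self.name}: {self.indication}")
        return True

    def train_clears(self, train: str) -> None:
        """The train has left the block; telegraph 'Line Clear' back."""
        self.indication = LINE_CLEAR
        print(f"{train} clears {self.name}: {self.indication}")


if __name__ == "__main__":
    block = BlockSection("Euston-Camden")
    block.offer_train("Train A")              # admitted, block now occupied
    assert not block.offer_train("Train B")   # refused: one train per block
    block.train_clears("Train A")
    assert block.offer_train("Train B")       # now admitted
```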
=== Wigwag ===
Wigwag is a form of flag signalling using a single flag. Unlike most forms of flag signalling, which are used over relatively short distances, wigwag is designed to maximise the distance covered—up to 32 km (20 mi) in some cases. Wigwag achieved this by using a large flag—a single flag can be held with both hands, unlike flag semaphore, which has a flag in each hand—and by using motions rather than positions as its symbols, since motions are more easily seen. It was invented by US Army surgeon Albert J. Myer in the 1850s, who later became the first head of the Signal Corps. Wigwag was used extensively during the American Civil War, where it filled a gap left by the electrical telegraph. Although the electrical telegraph had been in use for more than a decade, the network did not yet reach everywhere and portable, ruggedized equipment suitable for military use was not immediately available. Permanent or semi-permanent stations were established during the war, some of them towers of enormous height, and the system was extensive enough to be described as a communications network.
=== Heliograph ===
A heliograph is a telegraph that transmits messages by flashing sunlight with a mirror, usually using Morse code. The idea for a telegraph of this type was first proposed as a modification of surveying equipment (Gauss, 1821). Various uses of mirrors were made for communication in the following years, mostly for military purposes, but the first device to become widely used was a heliograph with a moveable mirror (Mance, 1869). The system was used by the French during the 1870–71 siege of Paris, with night-time signalling using kerosene lamps as the source of light. An improved version (Begbie, 1870) was used by the British military in many colonial wars, including the Anglo-Zulu War (1879). At some point, a Morse key was added to the apparatus to give the operator the same degree of control as in the electric telegraph.
Another type of heliograph was the heliostat or heliotrope fitted with a Colomb shutter. The heliostat was essentially a surveying instrument with a fixed mirror and so could not transmit a code by itself. The term heliostat is sometimes used as a synonym for heliograph because of this origin. The Colomb shutter (Bolton and Colomb, 1862) was originally invented to enable the transmission of Morse code by signal lamp between Royal Navy ships at sea.
The heliograph was heavily used by Nelson A. Miles in Arizona and New Mexico after he took over command (1886) of the fight against Geronimo and other Apache bands in the Apache Wars. Miles had previously set up the first heliograph line in the US between Fort Keogh and Fort Custer in Montana. He used the heliograph to fill in vast, thinly populated areas that were not covered by the electric telegraph. Twenty-six stations covered an area 320 by 480 km (200 by 300 mi). In a test of the system, a message was relayed 640 km (400 mi) in four hours. Miles' enemies used smoke signals and flashes of sunlight from metal, but lacked a sophisticated telegraph code. The heliograph was ideal for use in the American Southwest due to its clear air and mountainous terrain on which stations could be located. It was found necessary to lengthen the Morse dash (which is much shorter in American Morse code than in modern International Morse code) to aid in differentiating it from the Morse dot.
Use of the heliograph declined from 1915 onwards, but remained in service in Britain and British Commonwealth countries for some time. Australian forces used the heliograph as late as 1942 in the Western Desert Campaign of World War II. Some form of heliograph was used by the mujahideen in the Soviet–Afghan War (1979–1989).
=== Teleprinter ===
A teleprinter is a telegraph machine that can send messages from a typewriter-like keyboard and print incoming messages in readable text, with no need for the operators to be trained in the telegraph code used on the line. It developed from various earlier printing telegraphs and resulted in improved transmission speeds. The Morse telegraph (1837) was originally conceived as a system marking indentations on paper tape. A chemical telegraph making blue marks improved the speed of recording (Bain, 1846), but was delayed by a patent challenge from Morse. The first true printing telegraph (that is, printing in plain text) used a spinning wheel of types in the manner of a daisy wheel printer (House, 1846, improved by Hughes, 1855). The system was adopted by Western Union.
Early teleprinters used the Baudot code, a five-bit sequential binary code. This was a telegraph code developed for use on the French telegraph using a five-key keyboard (Baudot, 1874). Teleprinters generated the same code from a full alphanumeric keyboard. A feature of the Baudot code, and subsequent telegraph codes, was that, unlike Morse code, every character has a code of the same length, making it more machine-friendly. The Baudot code was used on the earliest ticker tape machines (Calahan, 1867), a system for the mass distribution of information on the current prices of publicly listed companies.
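The practical difference between a fixed-length code such as Baudot and variable-length Morse can be shown in a few lines. In the sketch below the five-bit patterns are invented for illustration (they are not the historical Baudot assignments); the point is simply that a receiver can frame characters by counting bits in fives, whereas Morse needs explicit gaps because its code lengths vary.

```python
# Illustrative contrast between a fixed-length five-bit code and variable-length
# Morse. The five-bit patterns below are made up for the example; they are NOT
# the historical Baudot assignments.

FIVE_BIT = {"A": "00001", "B": "00010", "C": "00011", "D": "00100"}
MORSE = {"A": ".-", "B": "-...", "C": "-.-.", "D": "-.."}  # standard International Morse

def encode_fixed(text: str) -> str:
    # Every character is exactly 5 bits: a receiver frames them by counting in fives.
    return "".join(FIVE_BIT[ch] for ch in text)

def decode_fixed(bits: str) -> str:
    lookup = {code: ch for ch, code in FIVE_BIT.items()}
    return "".join(lookup[bits[i:i + 5]] for i in range(0, len(bits), 5))

if __name__ == "__main__":
    msg = "BAD"
    bits = encode_fixed(msg)
    print(bits, "->", decode_fixed(bits))      # 000100000100100 -> BAD
    # Morse needs explicit gaps between letters because code lengths vary:
    print(" ".join(MORSE[ch] for ch in msg))   # -... .- -..
```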
=== Automated punched-tape transmission ===
In a punched-tape system, the message is first typed onto punched tape using the code of the telegraph system—Morse code for instance. It is then, either immediately or at some later time, run through a transmission machine which sends the message to the telegraph network. Multiple messages can be sequentially recorded on the same run of tape. The advantage of this approach is that messages can be sent at a steady, fast rate, making maximum use of the available telegraph lines. The economic benefit is greatest on long, busy routes, where the cost of the extra step of preparing the tape is outweighed by the cost of providing more telegraph lines. The first machine to use punched tape was Bain's teleprinter (Bain, 1843), but the system saw only limited use. Later versions of Bain's system achieved speeds up to 1000 words per minute, far faster than a human operator could achieve.
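The workflow described above, preparing messages offline and then replaying them onto the line at a constant rate, is essentially a store-and-forward queue. The sketch below is a loose model under assumed names (punch, transmit); it is not a description of any particular historical machine.

```python
# Loose illustration of automated punched-tape working: several messages are
# punched onto one tape in advance, then replayed onto the line back-to-back
# at a steady rate. Function names (punch, transmit) are assumed for the sketch.
import time

MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}  # tiny excerpt of the code

def punch(messages: list[str]) -> list[str]:
    """Encode each message as Morse symbols and append it to the 'tape'."""
    tape = []
    for msg in messages:
        tape.extend(MORSE[ch] for ch in msg)
        tape.append("/")  # message separator on the tape
    return tape

def transmit(tape: list[str], symbols_per_second: float = 10.0) -> None:
    """Replay the tape onto the line at a constant rate, regardless of how
    slowly the messages were originally typed."""
    for symbol in tape:
        print(symbol, end=" ", flush=True)
        time.sleep(1.0 / symbols_per_second)
    print()

if __name__ == "__main__":
    tape = punch(["SOS", "TEST"])
    transmit(tape)
```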
The first widely used system (Wheatstone, 1858) was first put into service with the British General Post Office in 1867. A novel feature of the Wheatstone system was the use of bipolar encoding. That is, both positive and negative polarity voltages were used. Bipolar encoding has several advantages, one of which is that it permits duplex communication. The Wheatstone tape reader was capable of a speed of 400 words per minute.: 190
=== Oceanic telegraph cables ===
A worldwide communication network meant that telegraph cables would have to be laid across oceans. On land, cables could be run uninsulated, suspended from poles. Underwater, a good insulator that was both flexible and capable of resisting the ingress of seawater was required. A solution presented itself with gutta-percha, a natural rubber from the Palaquium gutta tree, after William Montgomerie sent samples to London from Singapore in 1843. The new material was tested by Michael Faraday, and in 1845 Wheatstone suggested that it should be used on the cable planned between Dover and Calais by John Watkins Brett. The idea was proved viable when the South Eastern Railway company successfully tested a three-kilometre (two-mile) gutta-percha insulated cable with telegraph messages to a ship off the coast of Folkestone. The cable to France was laid in 1850 but was almost immediately severed by a French fishing vessel. It was relaid the next year, and connections to Ireland and the Low Countries soon followed.
Getting a cable across the Atlantic Ocean proved much more difficult. The Atlantic Telegraph Company, formed in London in 1856, had several failed attempts. A cable laid in 1858 worked poorly for a few days, sometimes taking all day to send a message despite the use of the highly sensitive mirror galvanometer developed by William Thomson (the future Lord Kelvin) before being destroyed by applying too high a voltage. Its failure and slow speed of transmission prompted Thomson and Oliver Heaviside to find better mathematical descriptions of long transmission lines. The company finally succeeded in 1866 with an improved cable laid by SS Great Eastern, the largest ship of its day, designed by Isambard Kingdom Brunel.
An overland telegraph from Britain to India was first connected in 1866 but was unreliable so a submarine telegraph cable was connected in 1870. Several telegraph companies were combined to form the Eastern Telegraph Company in 1872. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin.
From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable-laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted while it was able to quickly cut Germany's cables worldwide.
=== Facsimile ===
In 1843, Scottish inventor Alexander Bain invented a device that could be considered the first facsimile machine. He called his invention a "recording telegraph". Bain's telegraph was able to transmit images over electrical wires. Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. In 1855, an Italian priest, Giovanni Caselli, also created an electric telegraph that could transmit images. Caselli called his invention the "Pantelegraph". The Pantelegraph was successfully tested and approved for a telegraph line between Paris and Lyon.
In 1881, English inventor Shelford Bidwell constructed the scanning phototelegraph, the first telefax machine able to scan any two-dimensional original without requiring manual plotting or drawing. Around 1900, German physicist Arthur Korn invented the Bildtelegraph, which became widespread in continental Europe, especially after a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, and remained in use until the wider distribution of the radiofax. Its main competitors were first the Bélinographe by Édouard Belin and then, from the 1930s, the Hellschreiber, invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission.
=== Wireless telegraphy ===
The late 1880s through to the 1890s saw the discovery and then development of a newly understood phenomenon into a form of wireless telegraphy, called Hertzian wave wireless telegraphy, radiotelegraphy, or (later) simply "radio". Between 1886 and 1888, Heinrich Rudolf Hertz published the results of his experiments in which he was able to transmit electromagnetic waves (radio waves) through the air, proving James Clerk Maxwell's 1873 theory of electromagnetic radiation. Many scientists and inventors experimented with this new phenomenon, but the consensus was that these new waves (similar to light) would be just as short-range as light, and therefore useless for long-range communication.
At the end of 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a commercial wireless telegraphy system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Building on the ideas of previous scientists and inventors, Marconi re-engineered their apparatus by trial and error, attempting to build a radio-based wireless telegraphic system that would function the same as wired telegraphy. He worked on the system through 1895 in his lab and then in field tests, making improvements to extend its range. After many breakthroughs, including applying the wired telegraphy concept of grounding the transmitter and receiver, Marconi was able, by early 1896, to transmit radio far beyond the short ranges that had been predicted. Having failed to interest the Italian government, the 22-year-old inventor brought his telegraphy system to Britain in 1896 and met William Preece, a Welshman, who was a major figure in the field and Chief Engineer of the General Post Office. A series of demonstrations for the British government followed—by March 1897, Marconi had transmitted Morse code signals over a distance of about 6 km (3.5 mi) across Salisbury Plain.
On 13 May 1897, Marconi, assisted by George Kemp, a Cardiff Post Office engineer, transmitted the first wireless signals over water to Lavernock (near Penarth in Wales) from Flat Holm. His star rising, he was soon sending signals across the English Channel (1899), from shore to ship (1899) and finally across the Atlantic (1901). A study of these demonstrations of radio, with scientists trying to work out how a phenomenon predicted to have a short range could transmit "over the horizon", led to the discovery of a radio reflecting layer in the Earth's atmosphere in 1902, later called the ionosphere.
Radiotelegraphy proved effective for rescue work in sea disasters by enabling effective communication between ships and from ship to shore. In 1904, Marconi began the first commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907. Notably, Marconi's apparatus was used to help rescue efforts after the sinking of RMS Titanic. Britain's postmaster-general summed up, referring to the Titanic disaster, "Those who have been saved, have been saved through one man, Mr. Marconi...and his marvellous invention."
==== Non-radio wireless telegraphy ====
The successful development of radiotelegraphy was preceded by a 50-year history of ingenious but ultimately unsuccessful experiments by inventors to achieve wireless telegraphy by other means.
===== Ground, water, and air conduction =====
Several wireless electrical signaling schemes based on the (sometimes erroneous) idea that electric currents could be conducted long-range through water, ground, and air were investigated for telegraphy before practical radio systems became available.
The original telegraph lines used two wires between the two stations to form a complete electrical circuit or "loop". In 1837, however, Carl August von Steinheil of Munich, Germany, found that by connecting one leg of the apparatus at each station to metal plates buried in the ground, he could eliminate one wire and use a single wire for telegraphic communication. This led to speculation that it might be possible to eliminate both wires and therefore transmit telegraph signals through the ground without any wires connecting the stations. Other attempts were made to send the electric current through bodies of water, to span rivers, for example. Prominent experimenters along these lines included Samuel F. B. Morse in the United States and James Bowman Lindsay in Great Britain, who in August 1854, was able to demonstrate transmission across a mill dam at a distance of 500 yards (457 metres).
US inventors William Henry Ward (1871) and Mahlon Loomis (1872) developed electrical conduction systems based on the erroneous belief that there was an electrified atmospheric stratum accessible at low altitude. They thought that this atmospheric current, connected with a return path using "Earth currents", would allow for wireless telegraphy as well as supply power for the telegraph, doing away with artificial batteries. A more practical demonstration of wireless transmission via conduction came in Amos Dolbear's 1879 magneto electric telephone, which used ground conduction to transmit over a distance of a quarter of a mile.
In the 1890s, inventor Nikola Tesla worked on an air and ground conduction wireless electric power transmission system, similar to Loomis's, into which he planned to incorporate wireless telegraphy. Tesla's experiments led him to incorrectly conclude that he could use the entire globe of the Earth to conduct electrical energy, and his 1901 large-scale application of his ideas, a high-voltage wireless power station, now called Wardenclyffe Tower, lost funding and was abandoned after a few years.
Telegraphic communication using earth conductivity was eventually found to be limited to impractically short distances, as was communication conducted through water, or between trenches during World War I.
===== Electrostatic and electromagnetic induction =====
Both electrostatic and electromagnetic induction were used to develop wireless telegraph systems that saw limited commercial application. In the United States, Thomas Edison, in the mid-1880s, patented an electromagnetic induction system he called "grasshopper telegraphy", which allowed telegraphic signals to jump the short distance between a running train and telegraph wires running parallel to the tracks. This system was successful technically but not economically, as there turned out to be little interest among train travelers in using an on-board telegraph service. During the Great Blizzard of 1888, this system was used to send and receive wireless messages from trains buried in snowdrifts. The disabled trains were able to maintain communications via their Edison induction wireless telegraph systems, perhaps the first successful use of wireless telegraphy to send distress calls. Edison would also help to patent a ship-to-shore communication system based on electrostatic induction.
The most successful creator of an electromagnetic induction telegraph system was William Preece, chief engineer of Post Office Telegraphs of the General Post Office (GPO) in the United Kingdom. Preece first noticed the effect in 1884 when overhead telegraph wires in Grays Inn Road were accidentally carrying messages sent on buried cables. Tests in Newcastle succeeded in sending a quarter of a mile using parallel rectangles of wire.: 243 In tests across the Bristol Channel in 1892, Preece was able to telegraph across gaps of about 5 kilometres (3.1 miles). However, his induction system required extensive lengths of antenna wires, many kilometers long, at both the sending and receiving ends. The length of those sending and receiving wires needed to be about the same length as the width of the water or land to be spanned. For example, for Preece's station to span the English Channel from Dover, England, to the coast of France would require sending and receiving wires of about 30 miles (48 kilometres) along the two coasts. These facts made the system impractical on ships, boats, and ordinary islands, which are much smaller than Great Britain or Greenland. Also, the relatively short distances that a practical Preece system could span meant that it had few advantages over underwater telegraph cables.
=== Telegram services ===
A telegram service is a company or public entity that delivers telegraphed messages directly to the recipient. Telegram services were not inaugurated until electric telegraphy became available. Earlier optical systems were largely limited to official government and military purposes.
Historically, telegrams were sent between a network of interconnected telegraph offices. A person visiting a local telegraph office paid by the word to have a message telegraphed to another office and delivered to the addressee on a paper form.: 276 Messages (i.e. telegrams) sent by telegraph could be delivered by telegraph messenger faster than mail, and even in the telephone age, the telegram remained popular for social and business correspondence. At their peak in 1929, an estimated 200 million telegrams were sent.: 274
In 1919, the Central Bureau for Registered Addresses was established in the financial district of New York City. The bureau was created to ease the growing problem of messages being delivered to the wrong recipients. To combat this issue, the bureau offered telegraph customers the option to register unique code names for their telegraph addresses. Customers were charged $2.50 per year per code. By 1934, 28,000 codes had been registered.
Telegram services still operate in much of the world (see worldwide use of telegrams by country), but e-mail and text messaging have rendered telegrams obsolete in many countries, and the number of telegrams sent annually has been declining rapidly since the 1980s. Where telegram services still exist, the transmission method between offices is no longer by telegraph, but by telex or IP link.
==== Telegram length ====
As telegrams have been traditionally charged by the word, messages were often abbreviated to pack information into the smallest possible number of words, in what came to be called "telegram style".
The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer. According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters. For German telegrams, the mean length was 11.5 words or 72.4 characters. At the end of the 19th century, the average length of a German telegram was calculated as 14.2 words.
=== Telex ===
Telex (telegraph exchange) was a public switched network of teleprinters. It used rotary-telephone-style pulse dialling for automatic routing through the network. It initially used the Baudot code for messages. Telex development began in Germany in 1926, becoming an operational service in 1933 run by the Reichspost (the German imperial postal service). It had a speed of 50 baud—approximately 66 words per minute. Up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Telex was introduced into Canada in July 1957, and the United States in 1958. A new code, ASCII, was introduced in 1963 by the American Standards Association. ASCII was a seven-bit code and could thus support a larger number of characters than Baudot. In particular, ASCII supported upper and lower case whereas Baudot was upper case only.
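The quoted figure of roughly 66 words per minute at 50 baud can be checked against the conventional teleprinter framing. The calculation below assumes 7.5 signal units per character (one start unit, five data units, 1.5 stop units) and six characters per word including the space; these are standard telex conventions rather than figures stated in the text, so treat it as a back-of-the-envelope check.

```python
# Back-of-the-envelope check of "50 baud ≈ 66 words per minute".
# Assumptions (conventional, not taken from the text): each character on a
# start-stop teleprinter occupies 7.5 signal units (1 start + 5 data + 1.5 stop),
# and a "word" is counted as 6 characters including the following space.

baud = 50                  # signal units per second
units_per_char = 7.5
chars_per_word = 6

chars_per_second = baud / units_per_char          # ≈ 6.67
words_per_minute = chars_per_second * 60 / chars_per_word
print(round(words_per_minute, 1))                 # ≈ 66.7
```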
=== Decline ===
Telegraph use began to permanently decline around 1920.: 248 The decline began with the growth of the use of the telephone.: 253 Ironically, the invention of the telephone grew out of the development of the harmonic telegraph, a device that was supposed to increase the efficiency of telegraph transmission and improve the profits of telegraph companies. Western Union gave up its patent battle with Alexander Graham Bell because it believed the telephone was not a threat to its telegraph business. The Bell Telephone Company was formed in 1877 and had 230 subscribers, a number that grew to 30,000 by 1880. By 1886 there were a quarter of a million phones worldwide,: 276–277 and nearly 2 million by 1900.: 204 The decline was briefly postponed by the rise of special occasion congratulatory telegrams. Traffic continued to grow between 1867 and 1893 despite the introduction of the telephone in this period,: 274 but by 1900 the telegraph was definitely in decline.: 277
There was a brief resurgence in telegraphy during World War I but the decline continued as the world entered the Great Depression years of the 1930s.: 277 After the Second World War new technology improved communication in the telegraph industry. Telegraph lines continued to be an important means of distributing news feeds from news agencies by teleprinter machine until the rise of the internet in the 1990s. For Western Union, one service remained highly profitable—the wire transfer of money. This service kept Western Union in business long after the telegraph had ceased to be important.: 277 In the modern era, the telegraph that began in 1837 has been gradually replaced by digital data transmission based on computer information systems.
== Social implications ==
Optical telegraph lines were installed by governments, often for a military purpose, and reserved for official use only. In many countries, this situation continued after the introduction of the electric telegraph. Starting in Germany and the UK, electric telegraph lines were installed by railway companies. Railway use quickly led to private telegraph companies in the UK and the US offering a telegraph service to the public using telegraph along railway lines. The availability of this new form of communication brought on widespread social and economic changes.
The electric telegraph freed communication from the time constraints of postal mail and revolutionized the global economy and society. By the end of the 19th century, the telegraph was becoming an increasingly common medium of communication for ordinary people. The telegraph separated the message (information) from the physical movement of objects or processes.
There was some fear of the new technology. According to author Allan J. Kimmel, some people "feared that the telegraph would erode the quality of public discourse through the transmission of irrelevant, context-free information." Henry David Thoreau wrote of the transatlantic cable: "...perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough." Kimmel says these fears anticipate many of the characteristics of the modern internet age.
Initially, the telegraph was expensive, but it had an enormous effect on three industries: finance, newspapers, and railways. Telegraphy facilitated the growth of organizations "in the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms". In the US, there were 200 to 300 stock exchanges before the telegraph, but most of these were unnecessary and unprofitable once the telegraph made financial transactions at a distance easy and drove down transaction costs.: 274–75 This immense growth in the business sectors influenced society to embrace the use of telegrams once the cost had fallen.
Worldwide telegraphy changed the gathering of information for news reporting. Journalists were using the telegraph for war reporting as early as 1846 when the Mexican–American War broke out. News agencies were formed, such as the Associated Press, for the purpose of reporting news by telegraph.: 274–75 Messages and information would now travel far and wide, and the telegraph demanded a language "stripped of the local, the regional; and colloquial", to better facilitate a worldwide media language. Media language had to be standardized, which led to the gradual disappearance of different forms of speech and styles of journalism and storytelling.
The spread of the railways created a need for an accurate standard time to replace local standards based on local noon. The means of achieving this synchronisation was the telegraph. This emphasis on precise time has led to major societal changes such as the concept of the time value of money.: 273–74
During the telegraph era there was widespread employment of women in telegraphy. The shortage of men to work as telegraph operators in the American Civil War opened up the opportunity for women to take a well-paid, skilled job.: 274 In the UK, there was widespread employment of women as telegraph operators even earlier – from the 1850s by all the major companies. The attraction for the telegraph companies was that women could be paid less than men. Nevertheless, the jobs were popular with women for the same reason as in the US; most other work available to women was very poorly paid.: 77 : 85
The economic impact of the telegraph was not much studied by economic historians until parallels started to be drawn with the rise of the internet. In fact, the electric telegraph was as important as the invention of printing in this respect. According to economist Ronnie J. Phillips, the reason for this may be that institutional economists paid more attention to advances that required greater capital investment. The investment required to build railways, for instance, is orders of magnitude greater than that for the telegraph.: 269–70
== In popular culture ==
The optical telegraph was quickly forgotten once it went out of service. While it was in operation, it was very familiar to the public across Europe. Examples appear in many paintings of the period. Poems include "Le Telégraphe" by Victor Hugo, and the collection Telegrafen: Optisk kalender för 1858 by Elias Sehlstedt is dedicated to the telegraph. In novels, the telegraph is a major component in Lucien Leuwen by Stendhal, and it features in The Count of Monte Cristo, by Alexandre Dumas.: vii–ix Joseph Chudy's 1796 opera, Der Telegraph oder die Fernschreibmaschine, was written to publicise Chudy's telegraph (a binary code with five lamps) when it became clear that Chappe's design was being taken up.: 42–43
Rudyard Kipling wrote a poem in praise of submarine telegraph cables; "And a new Word runs between: whispering, 'Let us be one!'" Kipling's poem represented a widespread idea in the late nineteenth century that international telegraphy (and new technology in general) would bring peace and mutual understanding to the world. When a submarine telegraph cable first connected America and Britain, the New York Post declared:
It is the harbinger of an age when international difficulties will not have time to ripen into bloody results, and when, in spite of the fatuity and perverseness of rulers, war will be impossible.
=== Newspaper names ===
Numerous newspapers and news outlets in various countries, such as The Daily Telegraph in Britain, The Telegraph in India, De Telegraaf in the Netherlands, and the Jewish Telegraphic Agency in the US, were given names which include the word "telegraph" due to their having received news by means of electric telegraphy. Some of these names are retained even though different means of news acquisition are now used.
== See also ==
Familygram
First transcontinental telegraph
Globotype
Radiogram
Telecommunications
== References ==
== Further reading ==
== External links ==
"Telegraph" . Encyclopædia Britannica (11th ed.). 1911.
Telegraph at the Encyclopædia Britannica
The Porthcurno Telegraph Museum (Archived 27 September 2013 at the Wayback Machine)—The biggest telegraph station in the world, now a museum
Distant Writing—The History of the Telegraph Companies in Britain between 1838 and 1868
Western Union Telegraph Company Records, 1820–1995—Archives Center, National Museum of American History, Smithsonian Institution.
Early telegraphy and fax engineering, still operable in a German computer museum (Archived 20 April 2012 at the Wayback Machine)
"Telegram Falls Silent Stop Era Ends Stop", The New York Times, 6 February 2006
International Facilities of the American Carriers—an overview of the U.S. international cable network in 1950
Elizabeth Bruton: "Communication Technology", in the 1914-1918-online. International Encyclopedia of the First World War